| link (string, 41–45 chars) | date (string, 9 chars) | paper (dict) | reviews (list, 1–6 items) | version (int64, 1–5) | main (string, 38–42 chars) |
|---|---|---|---|---|---|
https://f1000research.com/articles/5-1006/v1
|
26 May 16
|
{
"type": "Research Note",
"title": "The prevalence and clinical significance of anemia in patients hospitalized with acute heart failure",
"authors": [
"Attila Frigy",
"Zoltán Fogarasi",
"Ildikó Kocsis",
"Lehel Máthé",
"Előd Nagy"
],
"abstract": "Abstract In a cohort of patients hospitalized with acute heart failure (AHF) the prevalence of anemia and the existence of a correlation between anemia and the severity of the clinical picture were assessed. Methods. 50 consecutive patients (34 men, 16 women, mean age 67.5 years) hospitalized with AHF were enrolled. Statistical analysis was performed using a chi-square test for studying univariate correlation between anemia and the presence of diverse parameters reflecting the severity and prognosis of AHF (α=0.05). Results. 21 patients (14 men, 7 women, mean age 69.6 years), representing 42%, had anemia (Hb<12 g/dl) at admission. Comparing patients with and without anemia, there were no significant differences regarding age, gender, presence of atrial fibrillation (p=0.75), diabetes (p=1), ischemic heart disease (p=0.9), ejection fraction <35% (p=1), hypotension at admission (p=0.34), tachycardia >100 b/min at admission (p=0.75), creatinine level >1.5 mg% (p=0.12), and need for a high dose of loop diuretic (>80 mg/day) (p=0.23). Conclusions. Anemia is a frequent finding in patients hospitalized with AHF. The presence of anemia was not correlated with other factors related to AHF severity and prognosis. This fact suggests its independent role in influencing the clinical picture and prognosis.",
"keywords": [
"acute heart failure",
"prognosis",
"anemia"
],
"content": "\n\nAnemia (Hb<12 g/dl or Ht<35%) is relatively frequent in patients with heart failure (HF). In a population of patients with newly diagnosed HF, the prevalence of anemia was 17%1. The presence of anemia is related to the severity of functional class (from 9% in NYHA class I to 79% in class IV)2. In acute heart failure (AHF), anemia, regardless of its etiology, could be an important extracardiac factor of decompensation, making its diagnosis, evaluation, and treatment an important part of management. Also, the presence of anemia proved to be an important prognostic factor during the in-hospital and post-discharge period3.\n\nThe aim of this study was to assess a cohort of patients hospitalized with AHF for (1) the prevalence of anemia and (2) the existence of a correlation between anemia and the severity of the clinical picture.\n\n\nMethods\n\nWe collected data from 50 consecutive patients (34 men, 16 women, mean age 67.5 years) hospitalized with AHF (acute decompensated heart failure in 36 cases). At admission, all the patients signed the general consent form used at our institution, agreeing with anonymous data collection and usage for scientific purposes. Approval of the hospital ethical committee (permit number: 3865/01.03.2016) was obtained for data processing and publication. Exclusion criteria were: recent (<1 month) acute coronary syndrome, and advanced renal disease on hemodialysis. At admission and during the hospital stay, routine (part of usual care) clinical and paraclinical data were recorded in a dedicated database: demographic data, clinical diagnosis, triggering factors of decompensation, signs and symptoms at admission, ECG data, echocardiographic data, laboratory parameters at admission, and treatment data.
Statistical analysis was performed using a chi-square test (MS Excel 2010) for studying univariate correlation between anemia and the presence of diverse parameters reflecting the severity of AHF (α=0.05).\n\n\nResults\n\n21 patients (14 men, 7 women, mean age 69.6 years), representing 42% of the cohort, had anemia (Hb<12 g/dl) at admission. The most common forms were chronic simple anemia (8 patients) and renal anemia (6 patients). We did not find significant differences between the two groups of patients, with and without anemia, with regard to gender (p=1) and age (p=0.57). Also, there were no significant differences regarding the presence of atrial fibrillation (p=0.75), diabetes (p=1), ischemic heart disease (p=0.9), ejection fraction <35% (p=1), hypotension (systolic BP <90 mmHg) at admission (p=0.34), tachycardia >100 b/min at admission (p=0.75), creatinine level >1.5 mg% (p=0.12), and need for a high dose of loop diuretic (>80 mg/day) (p=0.23).\n\nP values represent the results of a chi-square test (α=0.05).\n\n\nDiscussion and conclusions\n\nThere is general agreement that anemia is a good predictor of prognosis in patients with acute and chronic HF. Anemia is associated with increased mortality; however, there are conflicting data on whether this is an independent predictor or reflects the progression of HF and/or is related to the presence of more frequent comorbidities1,4,5. In the setting of AHF, anemia could also serve as a precipitating factor of decompensation.\n\nIn our cohort of patients, the presence of anemia was not correlated with other factors related to AHF severity and prognosis. This fact suggests its independent role in influencing the clinical picture and prognosis.\n\n\nData availability\n\nF1000Research: Dataset 1. Patient data, 10.5256/f1000research.7872.d1229026\n\n\nConsent\n\nWritten informed consent for publication of their clinical details was obtained from the patients.",
"appendix": "Author contributions\n\n\n\nAF and ZF: study design, data collection, data processing and statistical analysis, manuscript preparation; IK: study design, data collection; LM: data processing and statistical analysis; EN: data processing and statistical analysis, manuscript preparation.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nEzekowitz JA, McAlister FA, Armstrong PW: Anemia is common in heart failure and is associated with poor outcomes: insights from a cohort of 12 065 patients with new-onset heart failure. Circulation. 2003; 107(2): 223–5. PubMed Abstract | Publisher Full Text\n\nSilverberg DS, Wexler D, Blum M, et al.: The use of subcutaneous erythropoietin and intravenous iron for the treatment of the anemia of severe, resistant congestive heart failure improves cardiac and renal function and functional cardiac class, and markedly reduces hospitalizations. J Am Coll Cardiol. 2000; 35(7): 1737–44. PubMed Abstract | Publisher Full Text\n\nFelker GM, Gattis WA, Leimberger JD, et al.: Usefulness of anemia as a predictor of death and rehospitalization in patients with decompensated heart failure. Am J Cardiol. 2003; 92(5): 625–8. PubMed Abstract | Publisher Full Text\n\nMentz RJ, Greene SJ, Ambrosy AP, et al.: Clinical profile and prognostic value of anemia at the time of admission and discharge among patients hospitalized for heart failure with reduced ejection fraction: findings from the EVEREST trial. Circ Heart Fail. 2014; 7(3): 401–8. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKosiborod M, Curtis JP, Wang Y, et al.: Anemia and outcomes in patients with heart failure: a study from the National Heart Care Project. Arch Intern Med. 2005; 165(19): 2237–44. 
PubMed Abstract | Publisher Full Text\n\nFrigy A, Fogarasi Z, Kocsis I, et al.: Dataset 1 in: The prevalence and clinical significance of anemia in patients hospitalized with acute heart failure. F1000Research. 2016. Data Source"
}
|
[
{
"id": "14212",
"date": "07 Jun 2016",
"name": "José Machado",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved: The paper is scientifically sound in its current form and only minor, if any, improvements are suggested.\n\nApproved with reservations: A number of small changes, sometimes more significant revisions, are required to address specific details and improve the paper's academic merit.\n\nNot approved: Fundamental flaws in the paper seriously undermine the findings and conclusions.\n\nThis article addresses the prevalence of anemia amongst patients hospitalized with acute heart failure (AHF) and the existence of a correlation between anemia and the severity of the clinical picture. The manuscript is well written, but I have some concerns on certain points. Below are more specific comments by section:\nIntroduction: More information about the purpose of the topic addressed would provide welcome context, i.e. the relevance of the study conducted. A bit more detail about anemia and acute heart failure would also be helpful in order to understand better the relevance of the potential correlation addressed;\n\nMethods: There may be some reservations concerning the data size: a small sample of data was used in order to conduct this study. On the other hand, more information regarding the methods used and how the study was specifically conducted would also be insightful;\n\nDiscussion and conclusions: The discussion and conclusions presented are underdeveloped. Therefore, the results should be discussed in more detail, i.e. the results presented in Table 1. For instance, a more specific discussion could be done regarding the most relevant parameters presented in Table 1, i.e. parameters in patients with and without anemia.\nOverall, I consider this study interesting but more information regarding certain topics seems undoubtedly needed in order to complete and clarify some crucial points addressed throughout this paper.",
"responses": []
},
{
"id": "14766",
"date": "04 Jul 2016",
"name": "Manfred Seeberger",
"expertise": [],
"suggestion": "Not Approved",
"report": "Not Approved\n\nThe authors have assessed the prevalence of anemia in a cohort of 50 patients hospitalized with acute heart failure (AHF), and also assessed the existence of a correlation between anemia and the severity of the clinical picture. They found anemia in 21/50 patients but no correlation of anemia with other factors related to severity and prognosis of AHF. They conclude that this finding is suggestive of an independent role of anemia in influencing the clinical picture and prognosis of AHF.\n\nThe study by Frigy et al may serve as an interesting pilot study for a larger prospective study. However, the current sample size is insufficient for drawing any reliable conclusion on the prevalence of anemia in patients with AHF, and on the influence of anemia on course and outcome of AHF. Given the small sample size, it is not meaningful to perform multiple statistical analyses. And the small sample size should keep the authors from rejecting a possible correlation between anemia and other factors related to severity and prognosis of AHF. And the final conclusion remains unclear to me: why does the lack of statistical correlation between anemia and other factors related to severity and prognosis of AHF suggest an independent role of anemia in influencing prognosis of the disease? 
The authors have not studied prognosis and outcome at all.\n\nThe authors need to define the study question more specifically: what is (are) the outcome(s) they are looking for in the population of patients with acute heart failure? Based on a specific study question and hypothesis, the authors need to perform a sample size calculation. It will be interesting to read the results of that adequately sized study.\n\nThe authors have raised an interesting question. However, they need to define a more specific study hypothesis and calculate the sample size needed for analyzing that hypothesis. The current study design and sample size do not allow for drawing any reliable conclusions.",
"responses": []
},
{
"id": "17358",
"date": "02 Nov 2016",
"name": "Norbert Jost",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nTitle and Abstract:\nMore or less acceptable. I would suggest adding some details about the general life quality of the patients.\nArticle content:\nPlease give details about the general conditions of the patients, including: i) data about other diseases (cardiac and non-cardiac as well); ii) their status when they arrived at the hospital and their status when leaving the hospital; was there post-hospitalization care or not, and if yes, what were the results.\nConclusions:\nToo short. Please supplement with information and comments about some comparative details of other studies in this field.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-1006
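The univariate analysis described in the anemia record above uses a chi-square test on 2×2 tables (anemia vs. each clinical factor, α=0.05). A minimal, stdlib-only Python sketch of that test is shown below; the contingency counts are illustrative stand-ins, not the study's actual data.

```python
import math

def chi2_2x2(table):
    """Pearson chi-square test for a 2x2 contingency table.

    Returns (statistic, p_value) for df = 1, without continuity correction."""
    (a, b), (c, d) = table
    n = a + b + c + d
    rows = [a + b, c + d]   # row totals
    cols = [a + c, b + d]   # column totals
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            observed = table[i][j]
            expected = rows[i] * cols[j] / n  # count expected under independence
            chi2 += (observed - expected) ** 2 / expected
    # For df = 1, the chi-square survival function is erfc(sqrt(x / 2))
    p_value = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p_value

# Illustrative counts only (NOT the study's data): anemia status vs. some binary factor
stat, p = chi2_2x2([[10, 11], [14, 15]])
print(p > 0.05)  # True: a p-value above alpha=0.05 reads as "no significant difference"
```

In practice a library routine such as `scipy.stats.chi2_contingency` would replace the hand-rolled function; the sketch only shows what the reported p-values represent.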
|
https://f1000research.com/articles/6-1516/v1
|
18 Aug 17
|
{
"type": "Research Article",
"title": "Medical support during an Ironman 70.3 triathlon race",
"authors": [
"Hae-Rang Yang",
"Jinwoo Jeong",
"Injoo Kim",
"Ji Eun Kim"
],
"abstract": "Background: The Ironman 70.3 race is also called a half Ironman, and consists of 1.9 km of swimming, 90.1 km of cycling, and 21.1 km of running. The authors provide practical insights that may be useful for medical support in future events by summarizing the process and results of on-scene medical care. Methods: The medical post was established at the transition area between the cycling and running courses, which was close to the finish line, and staffed with the headquarters team comprised of an emergency physician, an EMT, two nurses, and an ambulance with a driver. The other five ambulances were located throughout the course. The medical staff identified participants according to their numbers when providing medical support, and described complaints, treatment provided, and disposition. When treating non-participants, gender and age were recorded instead of numbers. The treatment records were analyzed after the race. Results: The medical team treated a total of 187 participants. One suffered cramps in the calf muscles during the swimming part of the course. Nineteen were treated for injuries suffered during the cycling race. A total of 159 were treated for injuries on the running course. Five casualties, all of which occurred during the cycling race, required transport to hospital. Conclusions: Medical directors preparing medical support during a triathlon event should expect severe injuries in the cycling course. In hot climates, staff as well as runners may suffer from heat injuries, and proper attention should be paid to these risks.",
"keywords": [
"Athletic injuries",
"Emergency Medical Services",
"Sports Medicine"
],
"content": "Introduction\n\nTriathlon is a sporting activity that combines swimming, cycling, and running into a single event. Triathlon events are divided into Sprint, Olympic, Long, and Ironman distances1. The Ironman 70.3 race is also called a half Ironman, and consists of 1.9 km of swimming, 90.1 km of cycling, and 21.1 km of running2.\n\nWhile the number of mass participation sporting events is on the rise, injury data and availability of medical support plans for such events remain underreported3. An understanding of the temporal and spatial characteristics of injuries sustained during triathlon races would facilitate appropriate medical support planning1. However, there have been very few reports regarding medical support for triathlon events, despite the fact that the occurrence of injuries depends, to some extent, on weather conditions4, and there have been no reports to date regarding medical aspects of triathlon events held in northeast Asia.\n\nThe authors were involved in planning and providing medical support for the Ironman 70.3 Busan event held in 2016. Here, we provide practical insights that may be useful for medical support in future events by summarizing the process and results of on-scene medical care.\n\n\nMethods\n\nThe study was conducted during the Ironman 70.3 Busan race held on June 19, 2016, in the Haeundae and Gijang areas of Busan, South Korea. The Ironman 70.3 race involves 1.9 km swimming, 90.1 km cycling, and 21.1 km running. The race began at 06:45 and finished at 15:28 when the last runner crossed the finish line.\n\nThe number of participants in the race was 765, and more than 800 staff and volunteers also took part in the event.\n\nThe medical support group consisted of one board-certified emergency physician as the medical director, six emergency medical technicians (EMT), four nurses, a physical therapist, three volunteers with first responder training, and six ambulances with drivers. 
The medical post was established at the transition area between the cycling and running courses, which was close to the finish line, and staffed with the headquarters team comprised of an emergency physician, an EMT, two nurses, and an ambulance with a driver. The other five ambulances were located throughout the course and a team with at least one EMT or nurse was allocated to each ambulance.\n\nThe emergency physician at the medical post provided on-line medical control through radio communication. Group talking using Long-Term Evolution (LTE)-based radio transceivers was the primary communication method. The medical post used another transceiver to communicate with the organizing committee. When participants required medical attention, patrols reported their location to the organizing committee and the medical post dispatched an ambulance. The medical post also provided care for those who visited the medical tent themselves.\n\nThe participants were required to report their name, gender, and age group in 5-year intervals at the time of registration, and they were assigned numbers. The medical staff identified participants according to their numbers when providing medical support, and described complaints, treatment provided, and disposition. When treating non-participants, gender and age were recorded instead of numbers. The treatment records were analyzed after the race. The temperature, humidity, and wind data measured at Haeundae weather station were downloaded from the official website of the Korean Meteorological Administration (http://www.kma.go.kr/weather/climate/past_cal.jsp). The study was approved by the Institutional Review Board of the Dong-A University Medical Center. The need for informed consent was waived by the Institutional Review Board because of the noninvasiveness and retrospective nature of the study. 
All participants’ personal information was de-identified before analysis.\n\n\nResults\n\nThe swimming part of the race was reduced to 1 km because of rain and poor visibility. The race began at 06:45 with swimming, and the first participant proceeded to the cycling section at 07:04. The swimming section was closed at 08:04. The running race began at 09:25, and the entire race ended at 15:28 when the last runner crossed the finish line. The timeline of event progression and medical support activities is summarized in Table 1.\n\nHQ: medical support headquarters.\n\nT2: Transition point between the cycling and running courses, also close to the running course finish line.\n\nThe temperature and relative humidity data measured at Haeundae weather station are presented in Figure 1, along with progression of the event. The temperature was between 21.5°C and 27.6°C and humidity was between 71% and 97%. There was about 1 mm of precipitation around 06:00.\n\nTemperature (a) and relative humidity (b) measured at Haeundae weather station on the day of the event.\n\nThe medical team treated a total of 187 participants (166 males and 21 females; dataset 1). One suffered cramps in the calf muscles during the swimming part of the course. Nineteen were treated for injuries suffered during the cycling race. A total of 159 were treated for injuries on the running course. Staff, family members, and press personnel were also treated in the medical tent. The chief complaints of the patients are summarized in Table 2.\n\nFive casualties, all of which occurred during the cycling race, required transport to hospital. Four cases involved shoulder injuries and the other case had head trauma with brief loss of consciousness.\n\n\nDiscussion\n\nWhile mass participation sporting events are increasingly held in many parts of the world, there have been few reports of casualty data and medical support plans. 
Medical planners could utilize data from similar events as a useful guide for training and equipping their staff3. Moreover, lessons learned can significantly reduce casualties in subsequent events by enabling preventive measures and improving preparedness4,5.\n\nThe pattern of injuries during a triathlon race largely depends on climate conditions, so at times cold-related injuries and at other times heat-related injuries are predominant4. The weather conditions during the study period were humid and moderately hot, which were markedly different from previously reported studies in Hawaii and Australia1,2,5, and the pattern of injuries revealed in this study would provide information facilitating the prediction of injuries in similar athletic events in Korea.\n\nMost injuries occurring in triathlon events are minor, with blisters and abrasions as the most common types. This was also the case in the present study, considering that previous studies did not count simple myalgia among the reported injuries1,5. However, more serious injuries, such as fractures and heat-related injuries, do occur, and the organizers and medical directors should prepare for the worst case scenarios1,4.\n\nAlthough swimming is considered, potentially, the most lethal part of the event, previous studies have reported lower incidences of injuries in the swimming part of these events1,2. We also found very few problems in the swimming leg of the event, which may have been partly because the swimming distance was reduced to 1 km due to the poor weather conditions. Most serious injuries occurred in the cycling portion, with five cases requiring transport to nearby hospitals2. The cycling course presented challenges to the emergency responders with regard to accessing the crash sites. The injuries were notified via radio communication by the race patrols, and the exact locations were difficult to specify because of the lack of easily identifiable landmarks around the suburban public roads. 
Use of geographic coordinate systems with Global Positioning System (GPS) devices, such as smartphones, may make it easier to locate injured participants in such courses. The running part had the largest number of injury cases, as reported previously1. The preceding cycling race set the stage for dehydration and exhaustion during the run, and most athletes feel that the run is the most difficult part of the race2.\n\nThree participants suffered exhaustion, and recovered with rest and oral rehydration. The incidence of heat injuries showed significant event-to-event differences, even in the same location. Such differences were reported to be caused by temperature in the preceding days, which allowed player acclimatization, in addition to preventive measures. It was reported that intravenous hydration was required for some players in the Beach2Battleship Ironman Triathlon 2014 event, in which the race length was twice that of the event included in the present study4. In the 2006 Melbourne race event, three participants suffered severe heat illness and did not recover in the medical tent, so they had to be transferred to hospital. The temperature at the 2006 Melbourne event was between 21.5°C and 37.0°C, which was much higher than the temperature of 21.5°C – 27.6°C in the present study. No participants suffered heat-related collapse in the 2007 Melbourne race, with the aid of preventive measures, including an earlier start, reduced race length, increased numbers of drink stations, and increased athlete education5.\n\nIn the present study, four of the staff experienced heat-related symptoms, including exhaustion and headache. In many cases of medical support for athletic events, the focus is on participating players and injuries suffered by staff have not been reported5,6. However, support staff are also exposed to a potentially hazardous environment in outdoor events, such as triathlons or marathons. 
The staff members have limited access to aid stations alongside the course prepared for players, because they are usually stationed at duty positions rather than moving along the course. Therefore, preventive measures, such as provision of sufficient water and education previously suggested for players5, should also be carefully prepared for field staff.\n\nIn conclusion, medical directors preparing medical support during a triathlon event should expect severe injuries in the cycling course. In hot climates, staff as well as runners may suffer from heat injuries, and proper attention should be paid to these risks.\n\n\nData availability\n\nDataset 1. List of participants treated by the medical support team in the Ironman 70.3 Busan race held on June 19, 2016. DOI: 10.5256/f1000research.12388.d1741907",
"appendix": "Competing interests\n\n\n\nJinwoo Jeong is Consultant of Training Center for International Disaster Management Education\n\n\nGrant information\n\nThe study was supported by Dong-A University Research Fund assigned to Jinwoo Jeong.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgement\n\nHaeryong Woo and the other members from Training Center for International Disaster Management Education significantly participated in the field work and data collection.\n\n\nReferences\n\nGosling CM, Forbes AB, McGivern J, et al.: A profile of injuries in athletes seeking treatment during a triathlon race series. Am J Sports Med. 2010; 38(5): 1007–14. PubMed Abstract | Publisher Full Text\n\nLaird RH, Johnson D: The medical perspective of the Kona Ironman Triathlon. Sports Med Arthrosc. 2012; 20(4): 239. PubMed Abstract | Publisher Full Text\n\nTan CM, Tan IW, Kok WL, et al.: Medical planning for mass-participation running events: a 3-year review of a half-marathon in Singapore. BMC Public Health. 2014; 14: 1109. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGlendenning D: ON LAND & SEA. Beach2Battleship Triathlon highlights importance of preparedness for medical response team. JEMS. 2015; 40(12): 42–5. PubMed Abstract\n\nGosling CM, Gabbe BJ, McGivern J, et al.: The incidence of heat casualties in sprint triathlon: the tale of two Melbourne race events. J Sci Med Sport. 2008; 11(1): 52–7. PubMed Abstract | Publisher Full Text\n\nKyong YY, Park KN, Choi SP, et al.: Types of patients during a marathon course: two International scale of marathon runnings. J Korean Soc Emerg Med. 2006; 17(4): 322–7. Reference Source\n\nYang HR, Jeong J, Kim I, et al.: Dataset 1 in: Medical support during an Ironman 70.3 triathlon race. F1000Research. 2017. Data Source"
}
|
[
{
"id": "25198",
"date": "29 Aug 2017",
"name": "Woochan Jeon",
"expertise": [
"Emergency medicine"
],
"suggestion": "Approved",
"report": "Approved\n\nAll contents of this article include proper methods, analysis and conclusion.\nHowever, I think that if you had surveyed the medical history of all participants, you could have suggested the expected risk of medical problems by a comparative analysis of two groups (healthy group vs medical support group).\nThis article shows the importance of medical roles, such as the number and location of medical staff, in preparing a sports event.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
},
{
"id": "25172",
"date": "01 Sep 2017",
"name": "Gi Woon Kim",
"expertise": [
"Emergency medicine",
"resuscitation",
"disaster"
],
"suggestion": "Approved",
"report": "Approved\n\nThe article reviewed medical support activities and patient characteristics during a triathlon event. As the authors stated, injury patterns of triathlon races vary according to locations and weather conditions, and reports from eastern Asian countries such as Korea would add significant knowledge for those preparing medical support for similar events.\nHowever, the manuscript needs some modifications for improvement, such as:\nAbstract:\n\nThe acronym 'EMT' should be explained at its first appearance in the abstract.\n\nMethods:\nThe number and locations of the aid stations should be described in the Methods section, as the authors claimed in the Discussion that increased numbers of drink stations play a part in reducing heat-related injuries.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nNot applicable\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-1516
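The triathlon paper above suggests that GPS coordinates from smartphones could help responders locate crash sites on a cycling course that lacks landmarks. A stdlib-only Python sketch of that idea, dispatching the nearest ambulance to a reported coordinate, is shown below; the unit names and coordinates are hypothetical, not the event's actual positions.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points given in degrees."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest_unit(incident, units):
    """Return the (name, (lat, lon)) entry of the unit closest to an incident fix."""
    return min(units.items(), key=lambda kv: haversine_km(*incident, *kv[1]))

# Hypothetical unit positions along a cycling course (illustrative coordinates only)
units = {"unit-1": (35.163, 129.163), "unit-2": (35.244, 129.222)}
name, _ = nearest_unit((35.240, 129.220), units)
print(name)  # unit-2 is closest to the reported GPS fix
```

A coordinate fix plus a distance function like this replaces the ambiguous verbal location reports the paper describes on suburban public roads.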
|
https://f1000research.com/articles/6-1512/v1
|
18 Aug 17
|
{
"type": "Software Tool Article",
"title": "COINSTAC: Decentralizing the future of brain imaging analysis",
"authors": [
"Jing Ming",
"Eric Verner",
"Anand Sarwate",
"Ross Kelly",
"Cory Reed",
"Torran Kahleck",
"Rogers Silva",
"Sandeep Panta",
"Jessica Turner",
"Sergey Plis",
"Vince Calhoun",
"Jing Ming",
"Eric Verner",
"Anand Sarwate",
"Ross Kelly",
"Cory Reed",
"Torran Kahleck",
"Rogers Silva",
"Sandeep Panta",
"Jessica Turner",
"Sergey Plis"
],
"abstract": "In the era of Big Data, sharing neuroimaging data across multiple sites has become increasingly important. However, researchers who want to engage in centralized, large-scale data sharing and analysis must often contend with problems such as high database cost, long data transfer time, extensive manual effort, and privacy issues for sensitive data. To remove these barriers to enable easier data sharing and analysis, we introduced a new, decentralized, privacy-enabled infrastructure model for brain imaging data called COINSTAC in 2016. We have continued development of COINSTAC since this model was first introduced. One of the challenges with such a model is adapting the required algorithms to function within a decentralized framework. In this paper, we report on how we are solving this problem, along with our progress on several fronts, including additional decentralized algorithms implementation, user interface enhancement, decentralized regression statistic calculation, and complete pipeline specifications.",
"keywords": [
"Decentralized algorithm",
"iterative optimization",
"data sharing",
"brain imaging",
"privacy preserving"
],
"content": "Introduction\n\nProliferating neuroimaging data present contemporary neuroscientists with both an exciting opportunity and a cumbersome challenge. The advantages of sharing data are clear. Adding datasets to a study increases sample size, making predictions more certain, and increases diversity, allowing differences between groups to be studied. Although there is indeed an abundance of data, there exist multiple barriers to fully leverage such data. Firstly, a significant amount of existing neuroimaging data has been collected without proper provisions for post hoc data sharing. Secondly, researchers must negotiate data usage agreements (DUAs) to collaborate and build models using multiple sources of data that can be anonymized and shared. Sharing data via a DUA is advantageous in that all the variables collected can be studied. However, these DUAs may require months to complete, and the effort to obtain them could be ultimately fruitless, as researchers only know the utility of the data after they have obtained and explored it. Thirdly, even if neuroimaging data can be shared in an anonymized form, the data require a copious amount of storage, and the algorithms applied to the data require significant centralized computational resources. Fourthly, even anonymized data bears a risk of reidentification, especially for subjects who are rare because of a combination of demographic and clinical data. While centralized sharing efforts are powerful and unquestionably should continue, the community needs a family of approaches to address all the existing challenges, including decentralized models that we describe in this paper. One alternative to centralized data sharing is to perform meta-analyses utilizing existing literature to avoid the burden of negotiating DUAs and storing and processing data (Thompson et al., 2017; Thompson et al., 2014). 
However, meta-analyses suffer from heterogeneity among studies caused by varying preprocessing methods applied to the data and inconsistent variables collected. In addition, meta-analytic results are not as accurate as those obtained from a centralized analysis.\n\nThe Collaborative Informatics and Neuroimaging Suite Toolkit for Anonymous Computation (COINSTAC), proposed by Plis et al. in 2016 (Plis et al., 2016), solves the abovementioned problems by providing a decentralized platform on which researchers can collaboratively build statistical and machine learning models without transmitting their data or sacrificing privacy, thanks to differentially private algorithms. COINSTAC can run both meta-analyses and mega-analyses via “single-shot” and “multi-shot” (iterative) computations, respectively. The COINSTAC software (currently in an early prototype) is freely available, open source, and compatible with all major operating systems (Windows, Mac OS, and Linux). It is an easy-to-install, standalone application with a user-friendly, simple, and intuitive interface. By utilizing Docker containers, COINSTAC can run computations in any programming language (including Python, R, Matlab, FORTRAN, and C++) and is easily extensible. We are also building a development community to help users create their own computations.\n\nThe use of a decentralized analysis framework has many advantages. For example, decentralized analysis can move beyond meta-analysis via iteration, obtaining a solution equivalent to that of the centralized result. In addition, one can move beyond sharing summary measures, which, though plausibly private, can still potentially be reidentified, toward a more formally private solution. Differential privacy has been touted as a solution to the data sharing and reidentification problem. 
Developed by Dwork et al. in 2006, this approach statistically guarantees privacy and allows for sharing aggregated results without the risk of reidentification (Dwork et al., 2006).\n\nIn the past few years, we have developed many algorithms that run in a decentralized and, optionally, differentially private manner. Decentralized computations include ridge regression (Plis et al., 2016), multi-shot regression (Plis et al., 2016), independent vector analysis (IVA) (Wojtalewicz et al., 2017), neural networks (Lewis et al., 2017), decentralized stochastic neighbor embedding (dSNE) (Saha et al., 2017), joint independent component analysis (ICA) (Baker et al., 2015), and two-level differentially private support vector machine (SVM) classification (Sarwate et al., 2014). To facilitate and accelerate algorithm development, we have created COINSTAC-simulator, which allows algorithm developers to prototype and troubleshoot their algorithms before deployment to real consortia in COINSTAC.\n\nFurthermore, we have added both input and output functionality to the COINSTAC user interface. For example, the interface for regression can accept data produced by FreeSurfer, with a menu to select the region of interest (ROI) in the brain that will be used as the dependent variable in the statistical analysis. Following the analysis, COINSTAC produces a statistics table for the output of ridge regression, which calculates the global p-values and t-values in a decentralized fashion for each site in the consortium, measuring goodness of fit.\n\nCOINSTAC also enables decentralized analyses with multiple computation steps. Easy and flexible computation stacking is a built-in feature in our framework. In this paper, we demonstrate an implementation scheme for specifying and managing multiple computations. 
With this framework, we can incorporate local computations, such as common preprocessing brain imaging tasks, into the analysis workflow.\n\nA common nuisance among programmers and especially non-expert users is the assembly of an environment to run a computer program. This is a crucial step that may require upgrading an operating system and downloading and installing the latest release of software, a compiler, or a supporting library. Assembly of the environment may involve permission from IT and a substantial amount of troubleshooting, which may lead to a long delay before analysis can begin. Additionally, inconsistent machine state between computers (including operating systems, libraries, and compilers) can lead to inconsistent results from the same computation.\n\nA popular solution to this problem is utilizing a virtual machine (VM) that contains all the dependencies needed to run a program. Because VMs are resource-intensive, many developers have switched to using containers, which are an efficient, lightweight solution to the problem of heterogeneous development environments. Containers only bundle in the supporting software needed to run the program and do not require running a full VM with its own operating system. This reduces the required amount of memory and number of CPUs.\n\nCOINSTAC encapsulates individual computations inside Docker containers (https://www.docker.com/what-docker), which are run in series in a pipeline. Containers holding computations can be downloaded and run locally, which removes the need to assemble a development environment and thus greatly reduces the time to analyze results. This solution will also allow consortium participants to run coordinated preprocessing operations that must often occur before a statistical analysis, such as FreeSurfer processing or voxel-based morphometry. We have already created a Docker container with a standalone SPM package utilizing the Matlab Compiler Runtime. 
The normalization and coordination of preprocessing operations reduce heterogeneity in the data, creating a solid basis for the main analyses.\n\n\nMethods and use cases\n\nIn our previous paper (Plis et al., 2016), we demonstrated the use of decentralized gradient descent in the optimization of a basic ridge regression model. This decentralized iterative optimization process effectively analyzes a virtually pooled dataset. The resulting model generated in this manner is equivalent to the model generated in centralized repository analysis (i.e., the meta-analysis becomes a mega-analysis).\n\nIn this paper, we apply the decentralized gradient descent methods to other more advanced algorithms in the neuroimaging domain, including t-distributed stochastic neighbor embedding (tSNE), shallow and deep neural networks, joint ICA, and IVA. These methods are already widely used in the neuroimaging domain, but have not previously been extended to work in a decentralized framework. We demonstrate how these methods can be computed within a decentralized framework and report the algorithm performance compared to a centralized analysis.\n\nDecentralized tSNE (dSNE). A common method of visualizing a dataset consisting of multiple high-dimensional data points is embedding the points into a 2- or 3-dimensional space. Such an embedding serves as an intuitive exploratory tool for quick detection of the underlying structure of a dataset. In 2008, van der Maaten and Hinton proposed a method named tSNE to efficiently handle this situation (Maaten & Hinton, 2008). The embeddings produced by tSNE are usually intuitively appealing and interpretable, which makes this method an attractive tool in many domains, including neuroimaging (Panta et al., 2016).\n\nWe propose a method to embed into a 2D plane a decentralized dataset that is spread across multiple locations, such that the data at each location cannot be shared with others. We build the overall embedding by utilizing public, anonymized datasets. 
The method is similar to the landmark-point approaches previously used to improve computational efficiency (De Silva & Tenenbaum, 2004; Silva & Tenenbaum, 2003). However, directly copying this approach does not produce accurate results, so we introduce a dynamic modification that generates an embedding that reflects relationships among points spread across multiple locations.\n\nThe detailed algorithm diagram for decentralized multi-shot tSNE is demonstrated in Figure 1. Xp and Xs represent the high-dimensional site data and shared data, respectively. Yp and Ys represent the low-dimensional mapping site data and shared data, respectively. The master node initializes Ys and subsequently calculates a common gradient ∇Ys(j) based on the site gradient ∇Ysp(j) for each iteration j, updating Ys accordingly. Each local node calculates the pairwise affinities among its own dataset and the shared dataset and then updates Yp by locally calculating ∇Yp(j). With this scheme, Ys stays constant across all sites for every iteration and serves as a reference function. Meanwhile, Ys is influenced by Yp, which allows local embedding information to flow across the sites, resulting in a final map with less overlap.\n\nWe have tested the performance of this algorithm by comparing the decentralized result with that of centralized tSNE using the quality control metric of the ABIDE dataset (Di Martino et al., 2014). The results demonstrate that the centralized and decentralized computations generate an equal number of clusters. Additionally, random splits do not affect the stability of the clusters (Saha et al., 2017). Please see Figure 2 for reference.\n\nWe randomly split the data into ten local and one reference dataset. The centralized results show ten different clusters. 
For three random splits of decentralized computation, we also obtain ten different clusters, and the number of clusters in the embedding is stable regardless of how the data are split among sites.\n\nDecentralized neural networks. Recently, deep learning has gained increasing attention because of its excellent performance in pattern recognition and classification, including in the neuroimaging domain (Plis et al., 2014). To enable both shallow and deep neural network computations within COINSTAC, we developed a feed-forward artificial neural network that is capable of learning from data distributed across many sites in a decentralized manner. We utilize mini-batch gradient descent to average the gradient across sites. For our purposes, each batch contains one sample per site. We then average the resulting gradients from the batch.\n\nFigure 3 shows a flow chart of the decentralized neural network algorithm. As in a stochastic gradient descent (SGD) model, we calculate the error function Qp(Wi) for each site p and the ith weight vector Wi. Qp(Wi) represents the discrepancy between the expected result Yi from the training set and the actual result from forward propagation Y^i(Wi). Each site then sends ∇Qp(Wi) to the master node, which averages the gradient and returns the result to the sites. Each site then updates Wi on the basis of the mini-batch gradient descent equation until all training data are exhausted. With the same initialization of W from the master node, Wi is always identical across all sites, but the change in Wi at each iteration is determined by the data at each site.\n\nWe use a basic neural network known as a multilayer perceptron to demonstrate the decentralized computation process, but this framework can be easily extended to other types of neural networks. 
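The per-iteration gradient averaging described above can be sketched in a few lines of Python. This is a minimal illustration using a plain linear model instead of a multilayer perceptron, and all names are ours, not COINSTAC's:

```python
import numpy as np

def local_gradient(W, X, y):
    """Squared-error gradient on one site's private mini-batch (linear model for brevity)."""
    return X.T @ (X @ W - y) / len(y)

def decentralized_step(W, sites, lr=0.1):
    """One iteration: each site computes a gradient on its own data (raw data never
    leaves the site); the master averages the gradients and broadcasts the update,
    so every site holds an identical W."""
    avg_grad = np.mean([local_gradient(W, X, y) for X, y in sites], axis=0)
    return W - lr * avg_grad

# Two hypothetical sites holding private, noise-free linear data.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
sites = [(X, X @ true_w) for X in (rng.normal(size=(50, 2)) for _ in range(2))]

W = np.zeros(2)
for _ in range(200):
    W = decentralized_step(W, sites)
```

Because the averaged gradient equals the gradient of the pooled objective, the iterates track what a centralized analysis would produce, which is the sense in which the meta-analysis becomes a mega-analysis.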
We tested the performance of this model using real functional magnetic resonance imaging (fMRI) data from smokers (Fagerström Test for Nicotine Dependence dataset) (Heatherton & Kozlowski, 1992) and found that the decentralized model and pooled centralized model yielded similar classification accuracy, which vastly outperformed the accuracy at local, isolated sites (Lewis et al., 2017). Please see Figure 4 for reference.\n\nIn this experiment, we simulated an addiction dataset with two sites. The centralized classifier (red) and decentralized neural network classifier (yellow) perform similarly, and the local-site classifiers (green and aquamarine) perform poorly.\n\nDecentralized joint ICA. When shared signal patterns are anticipated to exist among datasets, joint ICA (jICA) (Calhoun et al., 2006; Calhoun et al., 2001; Sui et al., 2009) presents a solution to combine and identify shared information over multiple datasets. Although originally proposed as a method for multimodal data fusion, jICA can also implement group temporal ICA of fMRI data. In both cases, datasets are concatenated (over modalities in multimodal fusion and over subjects across time in temporal ICA) and then jointly analyzed. The jICA model is particularly attractive for datasets where the number of observations is significantly smaller than the dimensionality of the data, as in temporal ICA of fMRI data (time points < voxels), as concatenation over datasets effectively increases the number of observations. In decentralized jICA (djICA), the datasets are stored at different sites, rendering the traditional centralized approach for concatenation ineffective. To solve this problem, we developed an implicit concatenation procedure based on the assumption that the data from each site will share the same global unmixing matrix.\n\nA diagram of djICA is shown in Figure 5. The global unmixing matrix includes W and bias b. 
Using this unmixing matrix, each site estimates the independent source Zp(j) and tries to maximize the entropy function of a sigmoid transformation of Zp(j) (Yp(j)). Gp(j) and hp(j) are the local gradients for W and b, respectively. The master node sums the two gradients across all sites and updates the global unmixing matrix for the next iteration until convergence or a stopping criterion is met.\n\nThe performance of djICA has been evaluated in studies by Plis et al. (2016) and Baker et al. (2015). The results of the experiments in these two studies convincingly demonstrate that with increased sample size the quality of feature estimation increases for both pooled-data ICA and djICA. Furthermore, we have found that splitting data across sites does not degrade the results given the same global data volume. Please see Figure 6 for reference.\n\nThe experiment is based on synthetic functional MRI data using a generalized autoregressive conditional heteroscedastic model (Engle, 1982; Bollerslev, 1986). The top figure shows that as the global number of subjects increases, the Moreau-Amari index (MAI) decreases for both pooled-data ICA and djICA with different principal component analysis (PCA) operations. Additionally, MAI converges for pooled-data ICA and djICA when the number of subjects increases. The bottom figure shows that the number of splits in the data has no effect on MAI.\n\nDecentralized IVA. When using joint ICA to decompose temporal or multimodal datasets containing a group of subjects, we make a strong assumption that the underlying source maps are identical across subjects. Clearly, it is more desirable for source maps to contain subject-specific features. IVA is an approach that allows corresponding sources from different subjects to be similar rather than identical. 
IVA enables the subject source maps to contain unique information, yet still be linked across different subjects (Kim et al., 2006; Silva et al., 2016).\n\nWe proposed a decentralized IVA (dIVA) method, which allows multiple institutions not only to collaborate on the same IVA problem but also to spread the computational load to multiple sites, improving execution time. We use IVA with a Laplace assumption for the dependence structure of the underlying source groups (Kim et al., 2006; Lee et al., 2008). Figure 7 shows a diagram of dIVA. Specifically, dIVA optimizes the same information measure as IVA by exploiting the structure of the objective function and fitting it into a decentralized computational model. In this model, a master node (or centralized aggregator) sends requests to local sites that contain the data. The sites send only data summaries (Cp, dp) back to the aggregator, which uses them to update a matrix of norms (C) as well as the objective function (cost(j)). The aggregator sends this matrix back to the sites, which use its inverse (C−1) to apply a relative gradient update on their local data. Subsequently, the local gradients are transmitted to the master node and aggregated to calculate a global step size (α). α is then returned to the local sites to update their weights. This process is orchestrated iteratively by the local and master nodes until convergence, and results are stored at local sites.\n\nFigure 7 shows that the optimization function utilized by IVA can be split across sites, allowing the bulk of the computation to be parallelized with the aid of an aggregator that collects summaries from individual sites. We have already evaluated our decentralized approach on synthetic sources, and experimental results show that dIVA provides high accuracy and significantly reduces the runtime of the method compared with a centralized computation (Wojtalewicz et al., 2017). 
Please see Figure 8 for reference.\n\nThe experiment is based on synthetic data using a generalized autoregressive conditional heteroscedastic model and the SimTB functional MRI Simulation Toolbox (Erhardt et al., 2012). The top figure shows how the processing time, number of iterations, and intersymbol interference (ISI) change as the global number of subjects increases. The processing time increases with the number of subjects per site (A). Additionally, feature quality increases, indicated by decreasing ISI (C). The bottom figure shows that the processing time ratio between dIVA and IVA decreases as the global number of subjects increases. When the global number of subjects reaches 512, dIVA requires only one quarter of the processing time of IVA.\n\nWe have improved the UI for COINSTAC by adding features that facilitate the input of brain imaging data, allow users to easily run computations, and keep users informed on the progress of the computation. To begin a collaborative, decentralized computation, a group of users that will participate in the analysis, called a consortium, must be created. This involves naming the consortium, choosing the computation, and defining the dependent and independent variables. The user who completes these steps is called the consortium owner. As shown in an example in Figure 9, the UI accepts FreeSurfer data saved in a comma-separated value (CSV) file as an input. The ROI of the brain computed by FreeSurfer is selected as the dependent variable in a ridge regression computation. Additionally, the regularization parameter (lambda), which limits overfitting in the model, is selected via a numeric field. A standard regression with no regularization is performed if lambda is given a value of zero.\n\nNext, the consortium owner declares the covariates (independent variables) and determines their types. The UI currently allows either Boolean (True/False) or numeric covariates. 
Every user who participates in the consortium must then choose a local data source, such as a FreeSurfer CSV file, and map the columns in the file to the variables declared by the consortium owner. Figure 10 shows how this is accomplished in the UI.\n\nOnce all the participants in the consortium have mapped columns in their local data sources to declared variables, the computation commences. The progress of computations in multiple consortia is displayed on the Home tab of the UI. Figure 11 shows an example of this. In the top computation, a multi-shot ridge regression is on the third iteration out of a maximum of 25 iterations.\n\nRegression analysis generates an equation to describe the statistical relationship between one or more predictor variables and the response variable. Decentralized ridge regression first produces the regression coefficients for all independent variables through an iterative optimization process. However, in most cases, a researcher may want to know not only the coefficient associated with a certain regressor but also the statistical significance of this coefficient and the overall goodness of fit or coefficient of determination (R2) for the global model. In order to generate a standard statistical output accompanying the coefficient, as in many major statistical tools, we developed a decentralized approach to calculate the t-values and goodness of fit for the global model without sharing any original data.\n\nThe decentralized R2 calculation is demonstrated in Figure 12. First, each local node calculates the local average of the dependent variable, Ȳp, and transmits it and the size of the dataset, Np, to the master node. Then, the master node calculates the global Ȳ and returns it to the local nodes. Subsequently, every node calculates the local total sum of squares (SSTp) and sum of squared errors (SSEp) on the basis of Ȳ and sends them to the master node. 
Finally, the master node aggregates SSTp and SSEp across all sites to calculate the global value of R2.\n\nThe decentralized t-value calculation is demonstrated in Figure 13. Each local node calculates the local covariance matrix of Xp and SSEp and transmits them and the data size Np to the master node. The master node then aggregates cov(Xp) to generate the covariance matrix of the global covariates X to allow the subsequent calculation of the t-values. MSE represents the mean squared error of the estimated coefficient W (or β).\n\nAfter generating the t-value for every covariate and intercept, we use the public distributions library on npm (https://www.npmjs.com/package/distributions) to generate the Student’s t-distribution and then calculate the two-tailed p-value for the corresponding t-value.\n\nFigure 14 shows an example statistical output table for ridge regression. The COINSTAC UI displays the result with summarized consortium information at the top. In the output table, we first present the global fitting parameters, followed by the fitting parameters locally calculated at each site. The COINSTAC UI also provides the detailed covariate name for each β.\n\nThis output is generated using simulated FreeSurfer brain volume data. In the simulation, the intercept part (β0) was set to a fixed amount (48466.3 for Right-Cerebellum-Cortex); the age effect (β1) was selected randomly from the range [-300, -100] and the group (isControl) effect (β2) was selected randomly from the range [500, 1000] for each pseudo subject; standard unit Gaussian noise, multiplied by a random factor ranging from 1800 to 2200, was added subsequently.\n\nCOINSTAC is not only designed to apply individual computations, but also to flexibly arrange multiple computations into a pipeline. Both decentralized analyses and local preprocessing steps can be included in a pipeline. 
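The two-round R2 aggregation described above (Figure 12) can be sketched in Python. The function and variable names here are illustrative, not COINSTAC's actual implementation; note that no site ever transmits raw observations, only sums and counts:

```python
import numpy as np

def decentralized_r2(site_data):
    """Global R^2 from per-site summaries only (illustrative names).
    site_data: list of (y, y_hat) arrays held privately at each site."""
    # Round 1: each site sends its local sum of Y and sample size N_p;
    # the master combines them into the exact global mean.
    n_global = sum(len(y) for y, _ in site_data)
    y_bar = sum(y.sum() for y, _ in site_data) / n_global

    # Round 2: each site computes SST_p and SSE_p against the global mean.
    sst = sum(((y - y_bar) ** 2).sum() for y, _ in site_data)
    sse = sum(((y - y_hat) ** 2).sum() for y, y_hat in site_data)

    # Master aggregates: R^2 = 1 - SSE/SST.
    return 1.0 - sse / sst

# Sanity check: splitting the data across two "sites" must reproduce the pooled R^2.
rng = np.random.default_rng(1)
y = rng.normal(size=100)
y_hat = y + 0.1 * rng.normal(size=100)
r2 = decentralized_r2([(y[:60], y_hat[:60]), (y[60:], y_hat[60:])])
```

Because the global mean, SST, and SSE are sums of per-site quantities, the decentralized value is identical to the pooled one, not an approximation.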
The goal of COINSTAC is to provide a shared preprocessing script that is convenient for researchers and minimizes the discrepancies across sites in the data that become inputs to decentralized computations.\n\nCOINSTAC concatenates multiple computations into a pipeline and uses a pipeline manager to control the entire computation flow. Figure 15 shows a pipeline specification scheme with an initial preprocessing step and a subsequent decentralized computation. Consortium owners will be able to select the computation step and output type through connected dropdown menus. After the computation steps have been selected, all users within a consortium will be shown cascading interfaces to upload input data and set hyperparameters for each computation. Additionally, the input of a later computation step can be linked to the output of an earlier computation step.\n\nThe output displayed in the user interface can be selected as well.\n\nOnce a complete pipeline has been formed, all pipeline information is transmitted to the pipeline manager. Figure 16 shows how the pipeline manager interacts with a pipeline and its internal computations. The pipeline manager controls the entire computation flow. It is responsible for piping the input data to the first computation step, caching and transferring intermediate computation output, and storing the final pipeline output. An intermediate controller is added to provide fine-grained control for monitoring the iterative process between local and remote nodes for every computation. The computation schema is defined by a JavaScript object notation (JSON) structure and includes input and output specifications. A Docker container is used to encapsulate an individual computation block.\n\nThe pipeline manager handles the input and output of each pipeline, providing a conduit to other nodes in the network. Each computation has its own schema that describes the names and types of its input and output parameters. 
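To make the pipeline wiring concrete, here is a purely hypothetical sketch of what such a JSON computation schema could contain, expressed as a Python dict for readability. Every field name below is illustrative and does not reproduce the actual COINSTAC specification:

```python
import json

# Hypothetical computation schema (illustrative field names only): a Docker
# image plus typed input/output declarations that a pipeline manager could
# use to validate hyperparameters and wire one step's output to the next.
schema = {
    "meta": {"name": "ridge-regression", "version": "1.0.0"},
    "dockerImage": "example/ridge-regression:latest",
    "input": {
        "covariates": {"type": "csv"},
        "lambda": {"type": "number", "default": 0.0},
    },
    "output": {
        "betas": {"type": "array"},
        "r2": {"type": "number"},
    },
}

# What would be stored in, or transmitted to, the pipeline manager.
serialized = json.dumps(schema, indent=2)
```

Typed input/output declarations of this kind are what let the manager check, before a run starts, that an earlier step's output can legally feed a later step's input.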
Controllers are used to manage specific behavior in each computation in the pipeline. Each computation is encapsulated in a Docker container to improve portability among development environments.\n\n\nDiscussion\n\nIn this paper, we reviewed our progress on the development of decentralized algorithms that can be implemented on the COINSTAC platform. Every algorithm is structured similarly in that the local gradient of the objective function is transmitted to the master node, and the master node either returns a common averaged gradient or a step size (dIVA) to update the local weights. This scheme guarantees that information is shared across all sites on every iteration in the optimization algorithm to achieve a virtually pooled analysis effect (i.e., a mega-analysis). This framework also facilitates differential privacy by allowing for the addition of noise to each local objective function. We continue to develop decentralized algorithms as described below.\n\nDecentralized network gradient descent. SGD has emerged as the de facto approach to handle many optimization problems arising in machine learning, from learning classification/regression models to deep learning (Bottou, 2010; Song et al., 2013). For decentralized settings, SGD can be costly in terms of message complexity. We are currently developing approaches to limit this message complexity to enable a variety of statistical learning methods within COINSTAC. These approaches are guided by theory, but will involve developing task-specific heuristics to tune the algorithm parameters.\n\nNonnegative matrix factorization (NMF). NMF is another popular method for discovering latent features in data such as images, where measurements are all nonnegative (Lee & Seung, 2001). Although there has been significant work on NMF and its variants, the work on decentralized implementations is more limited, and the focus has been on improving parallelism for multicore systems (Potluru et al., 2014). 
Because of the message-passing nature of the COINSTAC architecture, we are developing decentralized and accelerated NMF algorithms that are optimized with gradient descent. Further extensions could allow users to find an NMF to minimize a variety of cost functions beyond squared error.\n\nCanonical correlation analysis (CCA). One challenging task in learning from multimodal or multiview data is to find representations that can handle correlations between the two views (Sui et al., 2012; Thompson, 2005). CCA is one such method. We are currently developing privacy-preserving CCA methods, as well as determining whether decentralized, message-passing approaches will be feasible within the COINSTAC architecture.\n\nIn recent years, the ENIGMA Consortium has conducted collaborative meta-analyses of schizophrenia (van Erp et al., 2016) and bipolar disorder (Hibar et al., 2017), in which subcortical brain volumes and cortical thicknesses were compared between patients and controls, respectively. In these studies, many univariate linear regression models were created in parallel to examine group differences for different regions of the brain. ENIGMA distributes analysis software to many sites and aggregates the results to conduct a meta-analysis. The upcoming version of COINSTAC will facilitate such studies by allowing researchers to specify models that contain combinations of selected dependent and independent variables. Table 1 elaborates on this point by showing an example in which a researcher selects a group of dependent variables (right and left cerebellum cortexes) and a group of independent variables (age and isControl). One model is computed separately for each combination of dependent and independent variables. The advantage of COINSTAC is that dissemination of software and aggregation of results will be handled by our software, eliminating many manual steps. 
In addition, as mentioned earlier, COINSTAC enables us to run multi-shot regression (hence converting a meta-analysis into a mega-analysis). Finally, COINSTAC opens up the possibility of running multivariate analysis (such as SVM (Sarwate et al., 2014) or IVA), as well as incorporating differentially private analyses, which would significantly extend the current ENIGMA approach, while also preserving the powerful decentralized model.\n\n\nSoftware and data availability\n\nCOINSTAC is free and open source and can be downloaded at: https://github.com/MRN-code/coinstac\n\nArchived source code at the time of publication: http://doi.org/10.5281/zenodo.840562 (Reed et al., 2017)\n\nLicense: MIT\n\nThe ABIDE dataset can be accessed at http://fcon_1000.projects.nitrc.org/indi/abide/\n\nThe Fagerström Test for Nicotine Dependence addiction dataset was collected within the Mind Research Network using local fMRI scanners. This dataset is stored in the Collaborative Informatics and Neuroimaging Suite (COINS) https://coins.mrn.org/. This dataset is not a public dataset, but can be requested through COINS after receiving approval from the dataset owner.",
"appendix": "Author contributions\n\n\n\nJM helped design the architecture of COINSTAC, reviewed the decentralized algorithms, developed the statistic output table, wrote the initial draft of paper, and coordinated writing. EV was the overall technical lead, managed the COINSTAC project, and contributed to writing and proofreading the paper. AS helped develop the differentially private algorithms and additional decentralized algorithms. RK provided the pipeline specification graph and was heavily involved in COINSTAC implementation. CR and TK contributed to the detailed COINSTAC implementation. RS helped with the decentralized algorithm review. S.Panta contributed to the brain imaging data preprocessing pipeline. JT provided input on functionality aspects and served as a beta tester for COINSTAC. S.Plis proposed the decentralized data analysis system and led the algorithm development effort. VC led the team and formed the vision. All authors helped edit the manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis work was funded by the National Institutes of Health (grant numbers: P20GM103472/5P20RR021938, R01EB005846, 1R01DA040487), and the National Science Foundation (grant numbers: 1539067 and 1631819).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nReferences\n\nBaker BT, Silva RF, Calhoun VD, et al.: Large scale collaboration with autonomy: Decentralized data ICA. Machine Learning for Signal Processing (MLSP), 2015 IEEE 25th International Workshop on, IEEE. 2015. Publisher Full Text\n\nBollerslev T: Generalized autoregressive conditional heteroskedasticity. J Econom. 1986; 31(3): 307–327. Publisher Full Text\n\nBottou L: Large-scale machine learning with stochastic gradient descent. Proceedings of COMPSTAT'2010. Springer, 2010; 177–186. 
Publisher Full Text\n\nCalhoun VD, Adali T, Giuliani N, et al.: Method for multimodal analysis of independent source differences in schizophrenia: combining gray matter structural and auditory oddball functional data. Hum Brain Mapp. 2006; 27(1): 47–62. PubMed Abstract | Publisher Full Text\n\nCalhoun VD, Adali T, Pearlson GD, et al.: A method for making group inferences from functional MRI data using independent component analysis. Hum Brain Mapp. 2001; 14(3): 140–151. PubMed Abstract | Publisher Full Text\n\nDe Silva V, Tenenbaum JB: Sparse multidimensional scaling using landmark points. Technical report, Stanford University. 2004. Reference Source\n\nDi Martino A, Yan CG, Li Q, et al.: The autism brain imaging data exchange: towards a large-scale evaluation of the intrinsic brain architecture in autism. Mol Psychiatry. 2014; 19(6): 659–67. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDwork C, McSherry F, Nissim K, et al.: Calibrating noise to sensitivity in private data analysis. Theory of Cryptography Conference. TCC, Springer, 2006; 265–284. Publisher Full Text\n\nEngle RF: Autoregressive conditional heteroscedasticity with estimates of the variance of United Kingdom inflation. Econometrica. 1982; 50(4): 987–1007. Publisher Full Text\n\nErhardt EB, Allen EA, Wei Y, et al.: SimTB, a simulation toolbox for fMRI data under a model of spatiotemporal separability. Neuroimage. 2012; 59(4): 4160–4167. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHeatherton TF, Kozlowski L: Nicotine addiction and its assessment. Ear Nose Throat J. 1992; 69(11): 763–767.\n\nHibar DP, Westlye LT, Doan NT, et al.: Cortical abnormalities in bipolar disorder: an MRI analysis of 6503 individuals from the ENIGMA Bipolar Disorder Working Group. Mol Psychiatry. 2017. PubMed Abstract | Publisher Full Text\n\nKim T, Eltoft T, Lee TW: Independent vector analysis: An extension of ICA to multivariate components. 
International Conference on Independent Component Analysis and Signal Separation. Springer; 2006; 165–172. Publisher Full Text\n\nLee DD, Seung HS: Algorithms for non-negative matrix factorization. Advances in neural information processing systems. 2001. Reference Source\n\nLee JH, Lee TW, Jolesz FA, et al.: Independent vector analysis (IVA): multivariate approach for fMRI group study. Neuroimage. 2008; 40(1): 86–109. PubMed Abstract | Publisher Full Text\n\nLewis N, Plis S, Calhoun V: Cooperative learning: Decentralized data neural network. Neural Networks (IJCNN), 2017 International Joint Conference on, IEEE. 2017. Publisher Full Text\n\nMaaten Lvd, Hinton G: Visualizing data using t-SNE. J Mach Learn Res. 2008; 9: 2579–2605. Reference Source\n\nPanta SR, Wang R, Fries J, et al.: A Tool for Interactive Data Visualization: Application to Over 10,000 Brain Imaging and Phantom MRI Data Sets. Front Neuroinform. 2016; 10: 9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPlis SM, Hjelm DR, Salakhutdinov R, et al.: Deep learning for neuroimaging: a validation study. Front Neurosci. 2014; 8: 229. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPlis SM, Sarwate AD, Wood D, et al.: COINSTAC: A Privacy Enabled Model and Prototype for Leveraging and Processing Decentralized Brain Imaging Data. Front Neurosci. 2016; 10: 365. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPotluru V, Diaz-Montes J, Sarwate AD, et al.: CometCloudCare (C3). Distributed Machine Learning Platform-as-a-Service with Privacy Preservation. Neural Information Processing Systems (NIPS). Montreal, Canada. 2014. Reference Source\n\nReed C, Kelly R, tkah, et al.: MRN-Code/coinstac: v2.6.0 Alpha. Zenodo. 2017. Data Source\n\nSaha DK, Calhoun VD, Panta SR, et al.: See without looking: joint visualization of sensitive multi-site datasets. Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence(IJCAI'2017). 2017; 2672–2678. 
Publisher Full Text\n\nSarwate AD, Plis SM, Turner JA, et al.: Sharing privacy-sensitive access to neuroimaging and genetics data: a review and preliminary validation. Front Neuroinform. 2014; 8: 35. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSilva RF, Plis SM, Sui J, et al.: Blind Source Separation for Unimodal and Multimodal Brain Networks: A Unifying Framework for Subspace Modeling. IEEE J Sel Top Signal Process. 2016; 10(7): 1134–1149. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSilva VD, Tenenbaum JB: Global versus local methods in nonlinear dimensionality reduction. Advances in neural information processing systems. 2003. Reference Source\n\nSong S, Chaudhuri K, Sarwate AD: Stochastic gradient descent with differentially private updates. Global Conference on Signal and Information Processing (GlobalSIP), 2013 IEEE, IEEE. 2013. Publisher Full Text\n\nSui J, Adali T, Pearlson GD, et al.: An ICA-based method for the identification of optimal FMRI features and components using combined group-discriminative techniques. Neuroimage. 2009; 46(1): 73–86. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSui J, Adali T, Yu Q, et al.: A review of multivariate methods for multimodal fusion of brain imaging data. J Neurosci Methods. 2012; 204(1): 68–81. PubMed Abstract | Publisher Full Text | Free Full Text\n\nThompson B: Canonical correlation analysis. Encyclopedia of statistics in behavioral science. 2005. Publisher Full Text\n\nThompson PM, Andreassen OA, Arias-Vasquez A, et al.: ENIGMA and the individual: Predicting factors that affect the brain in 35 countries worldwide. Neuroimage. 2017; 145(Pt B): 389–408. PubMed Abstract | Publisher Full Text | Free Full Text\n\nThompson PM, Stein JL, Medland SE, et al.: The ENIGMA Consortium: large-scale collaborative analyses of neuroimaging and genetic data. Brain Imaging Behav. 2014; 8(2): 153–182. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nvan Erp TG, Hibar DP, Rasmussen JM, et al.: Subcortical brain volume abnormalities in 2028 individuals with schizophrenia and 2540 healthy controls via the ENIGMA consortium. Mol Psychiatry. 2016; 21(4): 547–553. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWojtalewicz NP, Silva RF, Calhoun VD, et al.: Decentralized independent vector analysis. Acoustics, Speech and Signal Processing (ICASSP), 2017 IEEE International Conference on, IEEE. 2017. Publisher Full Text"
}
|
[
{
"id": "26634",
"date": "24 Oct 2017",
"name": "Joshua Balsters",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis is a wonderful article describing an increasingly necessary resource. I have very little to add to this manuscript.\n\nThe article mostly focuses on decentralized data analysis, however figure 15 highlights the preprocessing and output stages. It would be useful if the article included some information about the expected input formats. For example, does COINSTAC offer preprocessing tools? At the end of the first paragraph of the introduction the authors critique meta-analyses by suggesting \"heterogeniety among studies caused by varying preprocessing methods applied to the data\". Does COINSTAC offer tools to harmonize preprocessing, and if so what are they? Similarly, it would be good to have a summary figure of the output formats available. Can you visualise brain images online or do you have to download these?\n\nFigure 8b is cropped\nI look forward to seeing more additions and extensions to COINSTAC in the future.\n\nIs the rationale for developing the new software tool clearly explained? Yes\n\nIs the description of the software tool technically sound? Partly\n\nAre sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? Yes\n\nIs sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? 
Yes\n\nAre the conclusions about the tool and its performance adequately supported by the findings presented in the article? Yes",
"responses": []
},
{
"id": "26632",
"date": "25 Oct 2017",
"name": "Jens Foell",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis manuscript discusses the development and use of the COINSTAC system, a framework that is meant to facilitate neuroimaging data sharing by allowing for easy decentralized analysis.\nThe authors provide a thorough and easily understandable introduction into recent advances and challenges for neuroimaging data analysis: as technological barriers preventing the open sharing of large amounts of data have been removed, other obstacles have become apparent, ranging from common standards of analysis to legal issues.\nThe manuscript demonstrates the use of the COINSTAC system with existing neuroimaging data to show that centralized and decentralized neural network classifiers lead to comparable results. The authors argue that one advantage of a decentralized analysis approach is that individual data cannot be easily compiled into a coherent data point, so that privacy is preserved through a process which essentially fragments and distributes individual data components. While this approach makes sense to me, I cannot judge whether this will in fact have an effect on the legal situation regarding the sharing of data between groups; this will likely depend on specifications given by regional jurisdictions or institutional bodies.\nAlgorithms necessary for this decentralized processing are named and explained. 
Additional information includes specification of user interface and processing pipelines.\nOverall, this is a thorough and well-written article about software that is certainly needed to adapt to new challenges and opportunities pertaining to large-scale neuroimaging analyses and that will likely be useful to a large number of researchers.\nMinor comments:\nThe term ‘mega-analysis’ is used without explanation, before being mentioned later on in the text with a quick description. I recommend defining the term at its first use, either in the text or as a footnote.\n\nFor the different software packages mentioned, it should be made clear whether they are freely available or whether they need to be purchased. This could be mentioned in the main text, or the software can be included in the ‘Software and availability’ section at the end of the manuscript. One instance where this was missed is the mention of the Docker software.\n\nIs the rationale for developing the new software tool clearly explained? Yes\n\nIs the description of the software tool technically sound? Yes\n\nAre sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? Yes\n\nIs sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? Yes\n\nAre the conclusions about the tool and its performance adequately supported by the findings presented in the article? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-1512
|
https://f1000research.com/articles/6-229/v1
|
07 Mar 17
|
{
"type": "Research Article",
"title": "A survey of working conditions within biomedical research in the United Kingdom",
"authors": [
"Nick Riddiford"
],
"abstract": "Background: Many recent articles have presented a bleak view of career prospects in biomedical research in the US. Too many PhDs and postdocs are trained for too few research positions, creating a “holding-tank” of experienced senior postdocs who are unable to get a permanent position. Coupled with relatively low salaries and the high levels of pressure to publish in top-tier academic journals, this has created a toxic environment that is perhaps responsible for a recently observed decline in biomedical postdocs in the US, the so-called “postdocalypse”. Methods: In order to address the gulf of information relating to working habits and attitudes of UK-based biomedical researchers, a survey was conducted and analysed to examine discrete profiles for three major career stages: the PhD, the postdoc and the principal investigator. Results: Overall, the data presented here echoes trends observed in the US: Scientists in the UK feel disillusioned with academic research, due to the low chance of getting a permanent position and the long hours required at the bench. Also like the US, large numbers of researchers at each distinct career stage are considering leaving biomedical research altogether. Conclusions: There are several systemic flaws in the academic scientific research machine – for example to continual overproduction of PhDs and the lack of stability in the early-mid stages of a research career - that are slowly being addressed in countries such as the US and Germany. This data suggests that similar flaws also exist in the UK, with a large proportion of respondents concerned about their future in research. To avoid lasting damage to the biomedical research agenda in the UK, addressing such concerns should be a major priority.",
"keywords": [
"Biomedical science",
"working conditions",
"brain-drain",
"postdocalypse"
],
"content": "Introduction\n\nWhile there is no shortage of recent articles lamenting the current state of affairs in the scientific research machine (Alberts et al., 2014; Bourne, 2013; Gould, 2015; Powell, 2015; Sauermann & Roach, 2016), these have largely focussed on the US, and data relating to the UK is scarce. The general consensus from the US is that there is a growing workforce - particularly in the biomedical sciences - competing for a number of permanent research positions that has remained largely static since the 1980s (Schillebeeckx et al., 2013). Considering that the large majority of this workforce comprises PhD and postdoctoral researchers, who work almost exclusively on short-term, grant-funded contracts, competing for such positions often comes at the cost of stability, financial reward and any sense of work/life balance. Additionally, PhD programmes and postdoctoral posts tend to train scientists solely for a career in academic research, and neglect to equip them with a skill-set that would allow a smooth transition into gainful employment. Perhaps in response to these factors, after three decades of steady growth, the number of biomedical postdocs has started to decline in the US (Garrison et al., 2016). Such a “postdocalypse” is bad for the researchers squeezed out of a career in science, and bad for society as a whole.\n\nAnswering the call of several recent articles advocating for change within the system (Benderly, 2015; Bourne, 2013; Gould, 2015; McDowell et al., 2014; Powell, 2015), there have been a number of attempts to quantify factors contributing to such a trend (McDowell & Heggeness, 2017; Powell, 2016; Sauermann & Roach, 2016). However, while such data is highly revealing, there is a general lack of UK-centric data, and almost a complete absence of the strong advocacy groups for young scientists that have been so successful elsewhere (Cain et al., 2014; McDowell et al., 2014). 
Consequently, this article attempts to plug this gap, and provide a data point for UK-based biomedical scientists. Here, I present an in-depth analysis of survey data collected in response to a recent article calling for change within the UK biomedical system (Riddiford, 2016a). The survey was answered by 1,128 scientists as of 6th November 2016, and suggests that trends observed in the US are broadly echoed in the UK.\n\n\nMethods\n\nA ten-question survey was designed to formally evaluate the working habits of biomedical researchers. While the primary intention was to gather information relating to UK-based biomedical scientists, the survey was also open to non-UK-based scientists from a broad range of backgrounds for comparison. The first three questions “what position are you?”, “broadly, what discipline do you work in?” and “what country do you work in?” aimed to serve as a filter to ensure the accurate analysis of UK-based biomedical scientists at different stages of their career. The following three questions “how many countries have you worked in over the past five years?”, “how old are you?” and “how long have you held this level of position?” aimed to construct a demographic census of the respondents, and to enable comparison between specific age groups. The next three questions focussed on the conditions scientists work under, asking “how many hours did you work last week?”, “how many days did you work last week?”, “what’s your annual salary in pounds sterling?”. The final question “how comfortable do you feel about your long-term prospects in research?” gave respondents the opportunity to select multiple responses, and those selecting the answer “not at all – I’m planning on leaving research” were invited to expand on their answer, and detail any factors contributing to this decision. The full list of questions and accompanying answer options are available in Supplementary File 1. 
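The cohort selection used in the analysis (keeping responses with Q2 = "biomedical sciences" and Q3 = "UK", then splitting by the Q1 career stage) amounts to two passes over the response table. A minimal Python sketch follows; the published analysis used a custom Perl script, and the toy records and field names here are illustrative rather than the actual SurveyMonkey export format:

```python
# Toy responses keyed by question number; values mirror the answer options
# described in Methods, but the records themselves are invented.
responses = [
    {"Q1": "PhD", "Q2": "biomedical sciences", "Q3": "UK"},
    {"Q1": "postdoc", "Q2": "biomedical sciences", "Q3": "UK"},
    {"Q1": "postdoc", "Q2": "physics", "Q3": "UK"},
    {"Q1": "principal investigator, permanent contract",
     "Q2": "biomedical sciences", "Q3": "USA"},
]

# Pass 1: restrict to UK-based biomedical researchers.
cohort = [r for r in responses
          if r["Q2"] == "biomedical sciences" and r["Q3"] == "UK"]

# Pass 2: split into the three career-stage profiles, merging the
# permanent and non-permanent PI answer options into one group.
profiles = {"PhD": [], "postdoc": [], "principal investigator": []}
for r in cohort:
    stage = r["Q1"]
    if stage.startswith("principal investigator"):
        stage = "principal investigator"
    profiles[stage].append(r)

print({stage: len(rs) for stage, rs in profiles.items()})
```

Each profile can then be summarised independently, which is how the career-stage figures in Results are reported.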
The survey is still active and is hosted by Survey Monkey (https://www.surveymonkey.co.uk/r/HBP6NXX).\n\nTo capture as many responses as possible, data was collected between 21st March 2016 and 6th November 2016 (Dataset 1; Riddiford, 2017). In this time period, the survey was answered by 1,128 scientists. Initially, data were filtered to select only for responses from UK-based biomedical researchers (Q2 response: “biomedical sciences”; Q3: “UK”) to give a broad overview of working conditions within this cohort. Data were then further filtered to provide a career-stage-specific profile for each of the major tiers of an academic research career; the PhD, the postdoc and the principal investigator (Q1: “PhD”, “postdoc” and “principal investigator, permanent contract” or “principal investigator, non-permanent contract”). Data for each discrete profile was analysed using a custom Perl script (Supplementary File 2) to parse downloaded data and include non-standard question answers (i.e. where respondents opted to specify a non-listed answer, or to elaborate on their selected response) in the analysis.\n\nFor the 299 respondents who provided a written answer to describe in detail the reasons they were planning on leaving research (Q10: “not at all – I’m planning on leaving research”), four statements were selected for each career stage as being broadly representative of the issues addressed by others in the same cohort, and are presented in Box 1–Box 3. The complete unanalysed data set for responses collected within the stated time period can be found in Dataset 1 (Riddiford, 2017; answers compromising the anonymity of respondents [IP address and personal comments] are not included).\n\n\"I am told to be ambitious yet there just aren't enough jobs for us all to be ambitious. Too much is down to chance.\"\n\n\"The system is broken and yet is perpetuated as it is the lucky (and clever) few who make it to the top and tell everyone it will work out if you work hard. 
The simple fact is: For most people it will not.\"\n\n\"The career prospects, a decade of uncertain employment and relatively low pay mean getting out early is a priority for me and many others from my department.\"\n\n\"It's essentially a pyramid scheme and once you realise the stats, you start looking for safer alternatives.\"\n\n\"I'm unwilling to compete against people who will work 12+ hours a day, 7 days a week. The structure of scientific research makes a future in academia look incredibly unappealing.\"\n\n\"I've realised it's a pyramid scheme and I'm never going to get a lectureship so I've decided to leave for more stability. Also, I just can't move again, eventually I want to stay in one place for more than 3 years!\"\n\n\"I am not prepared to uproot my family again for another temporary post, so I am not willing to relocate.\"\n\n\"I think pursuing a permanent position in academia is effectively gambling with my future.\"\n\n\"Teaching standards are plummeting, and research funding is nearly impossible to gain. University education and research is about to collapse. It is not a viable career in the UK, despite our dominance in research.\"\n\n\"If I cannot secure funding in the next two years, I will face losing the job and leaving research.\"\n\n\"When I finally became a PI, I realised that the view I had of academic life was very naive. I can only do research that can be funded. There is not a single day that I do not worry about the project, competition, funding, publications etc.\"\n\n\"I have such a heavy teaching load I can't do research as well.\"\n\n\nResults\n\nOf the 900 biomedical scientists who responded to the survey, 37% reported having worked more than 50 hours in the week preceding the survey (12%, ≥ 60 hours). Perhaps more striking was that 53% reported working more than five days the week before they answered the survey and that 15% worked every day that week (Supplementary File 1). 
Only 16% reported receiving an annual salary in excess of £35,000. Almost all of the respondents were PhD researchers or postdocs, and 98% were employed on short-term contracts.\n\nPhD students. The majority of respondents to the survey were PhD students (54%), representing the youngest and most mobile cohort, with 94% aged 25–29 and 35% having worked in two or more countries over the past five years (Dataset 1 (Riddiford, 2017); Supplementary File 1). On average, they also reported working more hours per week than other cohorts (37% work over 50 hours a week) and the majority worked more than five days in the week before answering the survey (55%, > 5 days; 16%, 7 days; Figure 1 - ‘PhD’). UK-based PhD students are typically funded via a tax-free stipend of between £13,000 and £20,000, which equates to an hourly salary of £6.70 (assuming a 48 hour week earning the average PhD salary of £17,000). PhD students are funded on a short-term basis, and 92% of PhD respondents have been at their current level of position for fewer than four years.\n\nFigure 1. The data is presented for three discrete career stages: the PhD, the postdoc and the PI.\n\nIn response to the question “How comfortable do you feel about your long-term prospects in research?” 5% answered “comfortable”, with the vast majority expressing major concerns about one or more work-related factors. The most common reason for respondents’ lack of comfort in the prospect of a career in research was “it’s too competitive, and there aren’t enough jobs” (63%), followed by “I don’t make enough money” (45%). Surprisingly, only 28% plan on leaving academia (see Box 1 for several respondent-provided statements).\n\nPostdoctoral researchers. The next rung on the academic ladder - and therefore the next discrete cohort analysed - is the postdoctoral research fellowship (“postdoc”), and accordingly this cohort generally comprised older respondents (65% age 30 or older; Figure 1 - ‘Postdoc’). 
Like PhD students, roughly a third reported having worked in two or more countries over the past five years (33%). While postdocs are also employed on a short-term basis, the number of respondents who reported being employed at the same level for four or more years was drastically higher than for PhD students (≥ 4 years: postdoc, 32%; PhD, 11%; ≥ ten years: postdoc, 4.5%; PhD, 0.3%), almost certainly reflecting the growing necessity of pursuing multiple postdocs on the path to becoming a full faculty member (Bourne, 2013).\n\nAlso like PhD students, postdocs work long hours - 79% reported working more than 40 hours a week, and 41% for more than five days a week. Despite their age, experience and work ethic, the average salary for biomedical postdocs in the UK is relatively low, with 75% of postdocs earning between £26,000 and £35,000 (4.5% earn more than £41,000), which constitutes an average hourly salary of approximately £14.00 (assuming a 45 hour week earning the average postdoc salary of £33,000). However, despite only 7% describing themselves as “comfortable” in their long-term prospects for a career in research, only 30% plan on leaving academia (see Box 2 for several representative reasons). The large majority that didn’t feel comfortable in a future in research felt that they were working too hard (33% answered “I can’t keep working this hard”) and competing for too few jobs (66% answered “It’s too competitive, and there aren’t enough jobs”).\n\nPrincipal investigators. The final group comprises those who identified as being a principal investigator (“PI”), and therefore represent an older and more stable cohort than PhD students or postdocs. In total, 63% of respondents in this group were employed on a permanent contract, and only 20% reported working in more than two countries over the last five years. 
In addition, 80% were over 35 years and 48% reported being employed at the same level for four years or more (≥ ten years: 28%; Dataset 1 (Riddiford, 2017)). However, this category was vastly underrepresented in the survey data – only 30 individuals responded in total, and only 8 were aged over 45 years – representing a major caveat in the interpretation of such data. While such low numbers are insufficient to draw any major conclusions, the data collected do provide some insight into the working habits of UK-based biomedical PIs, and particularly of younger individuals (52% employed at this level for ≤ 4 years). In particular, 17% in this cohort reported working over 70 hours in the week preceding the survey, and 25% worked a seven-day week (Figure 1 - ‘PI’).\n\nLike PhD students and postdocs, the average salary from this group was relatively low (£41,000), which is particularly striking when considering the level of experience required to reach such a position. Accordingly, a low salary was cited as a cause for concern by 38% of respondents (Q10: “I don’t make enough money”), while more respondents felt that their work/life balance was unsustainable (46%; “I can’t keep working this hard”). As in the earlier stages of a research career, roughly a third (31%) plan on leaving research for reasons such as those given in Box 3.\n\n\nDiscussion\n\nThe survey data presented here provides a rare and valuable insight into the working conditions of UK-based biomedical researchers. While there has been a recent surge in data collection focussing on the scientific research community - and largely the biomedical sector (McDowell & Heggeness, 2017; Powell, 2016; Sauermann & Roach, 2016) - these tend to be concentrated on the US workforce, and data pertaining specifically to the UK is scarce. 
Therefore, the data presented here is intended to fill this void, and provide a foundation for future discussion relating to biomedical researchers in the UK.\n\nOverall, the data presented here suggests a large fraction of biomedical researchers working in the UK are deeply concerned about their long-term future in research. In each discrete career stage analysed, roughly equal numbers (PhD: 28%; postdoc: 30%; PI: 31%; Dataset 1; (Riddiford, 2017)) plan on leaving research, largely due to the lack of job opportunities, and the degree of competition involved in attaining a permanent position. Such findings are largely consistent with the number of scientists reported to be planning on leaving research in the US (Sauermann & Roach, 2016), and represent a major problem - the “brain-drain” - facing biomedical research (Benderly, 2015; Healy, 1988).\n\nThe data also suggest that biomedical scientists in the UK are working long hours and over weekends for relatively little reward: 53% worked more than five days in the week before they took the survey, and only 16% reported receiving an annual salary of over £35,000. A recent online poll of readers conducted by the journal Nature revealed that almost 40% of the 12,000 respondents worked more than 60 hours a week on average (Powell, 2016), a substantially higher number than that found in this survey (12% across all career stages). One explanation is that while the Nature poll asked readers (from all scientific disciplines) to report their average working week, the survey presented here instead asked respondents to report the number of hours worked in the week immediately preceding the survey, and to estimate an average only if this value was atypical. This approach was adopted to limit over-estimation and to provide a more accurate dataset. 
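Cohort summaries of this kind (the share of respondents above an hours or days threshold) reduce to simple tabulation over the reported values. A minimal sketch with invented numbers follows; the published analysis used the Perl script in Supplementary File 2, and nothing here reproduces the actual dataset:

```python
# Hypothetical hours and days reported by one cohort for the week
# preceding the survey (all values are made up for illustration).
hours_last_week = [38, 52, 61, 45, 70, 40, 55, 48, 63, 35]
days_last_week  = [5, 6, 7, 5, 7, 5, 6, 5, 7, 4]

def share_over(values, threshold):
    """Percentage of respondents reporting strictly more than the threshold."""
    return 100.0 * sum(v > threshold for v in values) / len(values)

print(share_over(hours_last_week, 50))  # share working > 50 hours
print(share_over(days_last_week, 5))    # share working > 5 days
```

The same tabulation, applied per career-stage profile, yields figures directly comparable to the percentages quoted in Results and to the Nature poll numbers discussed above.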
The same Nature poll also reported that almost two thirds of readers have considered leaving research altogether, and that 15% have actually left, again, far higher than the numbers reported here (Powell, 2016). While approximately 30% of UK-based biomedical scientists surveyed here reported their plans to leave research, it is possible that this figure is somewhat inflated. Firstly, as with any survey or poll, the individuals who don’t respond are just as important as those who do. It is likely that there exists a population of biomedical researchers who are satisfied enough with their work/life balance that they chose not to engage with articles addressing such issues, which would tend to dilute more positive views. Secondly, despite approximately 30% of respondents surveyed here stating their intention to leave research, it is probable that some fraction of these will decide to remain, and the number who actually do leave may well be lower.\n\nNonetheless, the almost 300 personal testimonials describing why researchers were planning on leaving are striking. Almost all of these reiterated the same concerns: that continuing in research was not only gambling with their future, but that it was also a bad bet to make in the first place. Many also noted that the hypercompetition (Alberts et al., 2014) involved in attaining a faculty position diluted their bargaining power, and drove up the need to sacrifice any sense of work/life balance. 
For many, this sacrifice is just not a viable option, and rather than facing the prospect of effectively being forced out of a career in scientific research, often at late stages of their careers (Riddiford, 2016b), they are exiting on their own terms.\n\nGiven the febrile political landscape in the UK and elsewhere, it is perhaps more crucial than ever that the biomedical research community in the UK rally together to ensure that pursuing a career in biomedical research does not require one to gamble with one’s future career prospects. In addition, those who make this bet should do so in full knowledge of the employment landscape within academic research.\n\n\nEthics statement\n\nConsidering the absence of identifying information in data published here, and the non-sensitive nature of the survey, no ethical approval was sought for this study. No information presented here can be used to identify survey participants, and in accordance with SurveyMonkey’s data privacy policy (https://www.surveymonkey.com/mp/policy/privacy-policy/), is not accessible to third parties.\n\n\nData availability\n\nDataset 1: Raw data from the survey (anonymity-compromising information has been removed, see Methods). doi, 10.5256/f1000research.11029.d153379 (Riddiford, 2017).",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nAcknowledgements\n\nMany thanks to both Neal Sweeney and Gary McDowell for many fruitful discussions on advocacy and science policy.\n\n\nSupplementary material\n\nSupplementary File 1: The complete survey.\n\nClick here to access the data.\n\nSupplementary File 2: Perl scripts used to analyse days and hours worked.\n\nClick here to access the data.\n\n\nReferences\n\nAlberts B, Kirschner MW, Tilghman S, et al.: Rescuing US biomedical research from its systemic flaws. Proc Natl Acad Sci USA. 2014; 111(16): 5773–5777. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBenderly BL: The case of the disappearing postdocs. Science. 2015. Publisher Full Text\n\nBourne HR: A fair deal for PhD students and postdocs. eLife. 2013; 2: e01139. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCain B, Budke JM, Wood KJ, et al.: How postdocs benefit from building a union. eLife. 2014; 3: e05614. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGarrison HH, Justement LB, Gerbi SA: Biomedical science postdocs: an end to the era of expansion. FASEB J. 2016; 30(1): 41–44. PubMed Abstract | Publisher Full Text\n\nGould J: How to build a better PhD. Nature. 2015; 528(7580): 22–25. PubMed Abstract | Publisher Full Text\n\nHealy B: Innovators for the 21st century: will we face a crisis in biomedical-research brainpower? N Engl J Med. 1988; 319(16): 1058–1064. PubMed Abstract | Publisher Full Text\n\nMcDowell GS, Gunsalus KT, MacKellar DC, et al.: Shaping the Future of Research: a perspective from junior scientists [version 1; referees: 1 approved, 1 approved with reservations]. F1000Res. 2014; 3: 291. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMcDowell GS, Heggeness ML: Snapshot of the US biomedical workforce. Nature. 
2017; 541.\n\nPowell K: Hard work, little reward: Nature readers reveal working hours and research challenges. Nature. 2016. Publisher Full Text\n\nPowell K: The future of the postdoc. Nature. 2015; 520(7546): 144–147. PubMed Abstract | Publisher Full Text\n\nRiddiford N: Dataset 1 in: A survey of working conditions within biomedical research in the United Kingdom. F1000Research. 2017. Data Source\n\nRiddiford N: The hidden costs of a career in scientific research. Nature blogs. 2016b. Reference Source\n\nRiddiford N: Young scientists need to fight for their employment rights. The Guardian. 2016a. Reference Source\n\nSauermann H, Roach M: SCIENTIFIC WORKFORCE. Why pursue the postdoc path? Science. 2016; 352(6286): 663–4. PubMed Abstract | Publisher Full Text\n\nSchillebeeckx M, Maricque B, Lewis C: The missing piece to changing the university culture. Nat Biotechnol. 2013; 31(10): 938–941. PubMed Abstract | Publisher Full Text"
}
|
[
{
"id": "20768",
"date": "20 Mar 2017",
"name": "Jessica K Polka",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis survey represents an important contribution to our understanding of career satisfaction among early career researchers. As you note, many efforts have focused on the US, so this study is especially valuable in light of its focus on the UK. However, the opt-in nature of the survey should be disclosed in the abstract, and several other parts of the manuscript could be productively modified.\nMethods\nPlease explain how the survey was advertised and what target audiences were likely reached. Since you have IP addresses, can you report how many of the responses came from within the UK and how many were from academic institutions? The former would be essential support for the claim that the report is representative of ECRs in the UK. Furthermore, can you compare age and other factors to any known statistics to evaluate how representative your sample is in these dimensions?\nResults\nPlease clarify whether the term “research” is used to mean academic research or research in industry as well. If the latter, did any of the survey respondents identify as industry researchers? For example, at the end of the section on PhD students, you write that 28% plan on leaving academia, yet the question asks about research - a very important distinction. Furthermore, I’m not sure that the fact that “only 28%” are planning on leaving is surprising, since the respondents did not provide information about their available alternatives. 
In the postdoc section, the statement “the large majority that didn’t feel comfortable in a future in research felt that they were working too hard” does not make sense at only a 33% response rate. Throughout, it would be helpful to provide the actual # of responses received, especially when discussing a fraction of a category (for example, X% of postdocs, etc).\n\nDiscussion\nScience can offer non-financial rewards, such as the pleasure of doing research and a relatively high level of respect. Therefore I suggest providing a caveat {indicated} to the sentence: “Working long hours and over weekends for relatively little {financial} reward” Regarding the statement “it is probable that some fraction of [researchers stating their intention to leave] will decide to remain, and the number who actually do leave may well be lower.” Rather than speculate, can you compare this to existing data on attrition rate, for example figure 1.6 from the 2010 Royal Society report “The Scientific Century”1?\n\nFigures\nBox 1-3: The colored bullet points are distracting - does the color code have meaning? Figure 1: This graphic is extremely difficult to read. Please label the pie chart sections directly (or better yet, make it a histogram) and provide axis titles and labels for all of the graphs. This will make the legends unnecessary. The “days per week” visualization would be much better represented by a distribution. PhD (which is ambiguous and should perhaps be PhD student), Postdoc, and PI labels are unnecessarily large. The data on comfort with long term prospects in research are very interesting. I would like to see a graphical representation of this as well.",
"responses": []
},
{
"id": "20903",
"date": "28 Mar 2017",
"name": "Kearney T. W. Gunsalus",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThis paper addresses an important gap in the available data regarding the biomedical workforce in the UK. Although larger studies are needed (both in the scope of the questions asked and with a larger and more representative population of respondents), this survey is a nice example of how members of the community can begin to address the gaps in existing data collection and dissemination efforts.\n\nAbstract Please include the number of respondents analyzed and mention something about how the survey was advertised/the target audience. (The caveats about survey responses necessarily being biased to those who were aware of it and cared enough to take the time to respond are buried fairly far into the discussion, and it would be helpful to make some reference to this a little earlier in the paper.)\nData presentation Please state clearly throughout the text the number of responses analyzed in each category. (How many UK-based biomedical researchers responded to the survey? How many PhD student and postdoctoral respondents were there? 
Etc.)\nIt would be interesting to include a figure showing responses to the final question (how comfortable do you feel about your long-term prospects in research?\"); as respondents had the option to select multiple answers, it would be nice to see the percentage selecting each of the possible answers.\n\nFor clarity, the colorblind, and those who still prefer to print papers in black and white, it would be preferable to directly label the data in figure 1, rather than using the key. Please label the percentage of respondents in each age group; and label graph axes. Though more typical for an infographic than a figure in a paper, I do like the text boxes highlighting the take-home message for each panel.\n\nI also initially found the visual representation the hourly salary data somewhat confusing; it might make more sense to make the \"hourly salary\" its own panel, with bars for minimum wage, PhD, postdoc, and PI average hourly pay (rather than showing the minimum wage three times). The hourly salary numbers could be included in or above each bar.\n\nCould you clarify the way average hourly wages are calculated? I noticed for grad students, you assumed a 48-hour work week, while for postdocs the assumption was a 45-hour work week, and I didn't see a number for PIs. Were the assumptions supposed to represent a \"typical\" respondent, or an average?\n\nFuture directions for the survey I hope that data collection and analysis for this project will continue. I have a few minor suggestions, should the survey be revised.\n\nI found it somewhat confusing that the category for (predoctoral) graduate students appears to be \"PhD researcher.\"\n\nWhile I understand the importance of keeping such a survey short, it might be helpful to collect some additional demographic data, such as gender, relationship/marital status (and if partnered, the partner's salary and discipline), and number of children.\n\nOverall, this work addresses an important knowledge gap. 
I hope data collection will continue and that in future the survey questions can be expanded and the survey itself advertised more broadly.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-229
|
https://f1000research.com/articles/6-1490/v1
|
16 Aug 17
|
{
"type": "Software Tool Article",
"title": "BAT: Bisulfite Analysis Toolkit",
"authors": [
"Helene Kretzmer",
"Christian Otto",
"Steve Hoffmann",
"Helene Kretzmer",
"Christian Otto"
],
"abstract": "Here, we present BAT, a modular bisulfite analysis toolkit, that facilitates the analysis of bisulfite sequencing data. It covers the essential analysis steps of read alignment, quality control, extraction of methylation information, and calling of differentially methylated regions, as well as biologically relevant downstream analyses, such as data integration with gene expression, histone modification data, or transcription factor binding site annotation.",
"keywords": [
"DNA methylation",
"epigenetics",
"bisulfite sequencing",
"WGBS",
"RRBS",
"software",
"DMRs",
"integrative analysis"
],
"content": "Introduction\n\nHigh-throughput DNA methylation sequencing protocols, such as whole-genome bisulfite sequencing (WGBS) and targeted bisulfite sequencing (e.g., RRBS), have made it possible to precisely and accurately measure this major epigenetic modification on a genome-wide scale. The impact of DNA methylation on processes such as cell differentiation, gene expression, chromatin structure, and carcinogenesis has raised substantial interest in analyzing DNA methylation in many sectors of the life sciences. For example, the methylomes of a large number of samples have been sequenced in the context of cancer projects and developmental studies1–5. Researchers investigating obesity and neurodegenerative disorders such as Alzheimer’s or Parkinson’s disease have also begun to focus on DNA methylation6–9.\n\nA number of time-consuming data analysis steps are required in virtually all of these projects, i.e., quality control, read alignment, and methylation rate calculation. However, performing each step by hand is highly error-prone, takes time, and impacts reproducibility. To ensure consistent and reproducible processing, we have developed the Bisulfite Analysis Toolkit BAT. The workflow enables a fast and easy analysis of bisulfite-converted high-throughput sequencing reads. It is specifically designed to facilitate the analysis for biologists and physicians with little bioinformatic knowledge, as well as for bioinformaticians who already work on sequencing data but are not familiar with the characteristics of bisulfite sequencing data.\n\n\nMethods\n\nBAT is a modular toolkit that makes it easy to assemble workflows for the analysis of bisulfite sequencing data. The toolkit includes modules for read alignment (mapping module), methylation level estimation (calling module), sample group analysis (grouping module) and identification of differentially methylated regions (DMR module) (Figure 1). 
Further modules allow the integration of gene expression, histone modification data, or transcription factor binding site annotation. These modules facilitate the functional analysis of the effects of differential methylation.\n\nIt comprises four modules covering (left to right) read alignment, methylation rate calling, basic group analysis, and DMR calling. The modules consist of a collection of scripts that build on one another, but single steps can easily be covered by alternative tools.\n\nEach of the modules can be run on its own, and the minimal system requirements depend on the respective module. The computationally most expensive module is the mapping module. Here, the aligner segemehl10 in its bisulfite mode is used, which requires about 55 GB of physical RAM for the alignment of reads to the human genome hg19.\n\nThe toolkit itself is written in Perl and calls software components mainly written in C and R to ensure swift calculations. All software requirements are listed on our website (www.bioinf.uni-leipzig.de/Software/BAT/install/#requirements). The default parameters for the tools included in the BAT pipeline are optimized to process bisulfite sequencing data for most applications. In order to enhance reproducibility and reduce potential errors, the number of parameters that need to be set by the user has been carefully reduced to a minimum. The toolkit’s modularity, however, makes it flexible and easy to extend or customize to specific needs. To allow for workflow modifications and extensions, standardized formats are used and interfaces to several other tools are provided. Basic steps, e.g., processing from raw reads to a single alignment file from multiple sequencing runs, are split into pre-, main, and post-processing steps to allow for the customized extension of the workflow. 
Error handling is eased by parameter and file checks prior to the analysis, and meaningful error messages allow quick troubleshooting.\n\nDetailed documentation of all modules, including parameter descriptions, recommended additional tools, analysis reports, and the data visualizations produced by the BAT workflow, is available at www.bioinf.uni-leipzig.de/Software/BAT. Moreover, all automatically created visualizations are shown on the webpage. Data and figures displayed there are derived from a small example data set of two groups with four samples each, adopted from Kretzmer et al11. Our webpage provides raw FASTQ files of one sample as well as the methylation rate files of all eight samples along with expression and annotation data. This example data set and shell scripts covering all modules of BAT can be downloaded and adapted together with the toolkit.\n\nFurthermore, BAT is provided as a Docker12 image and can be obtained from https://hub.docker.com/r/christianbioinf/bat/. The Docker image ensures platform-independent usage of our toolkit. All programs that are used by BAT are already installed in the Docker image and dependencies are resolved. Existing hard drives are mounted to avoid time-consuming relocation or upload of the frequently huge data sets.\n\n\nUse cases\n\nTo illustrate a common study design, we assembled a small case-control example dataset, adopted from recently published data11. It is a subset of a paired-end human WGBS dataset, comprising 8 samples (control: S1–S4, case: S5–S8). It comprises the raw reads in FASTQ format of one sample and the already called methylation rates of all 8 samples in VCF format. The following modules can now be used to process and analyze bisulfite sequencing data, including detection of methylation differences between case and control samples. The use case starts with the alignment of the raw sequencing data using the mapping module. 
The single components of BAT and their functionality are described in the following:\n\nThe read alignment step is taken care of by the module BAT_mapping. It includes a bisulfite-sensitive read alignment using segemehl10, a quality filtering step, and the conversion of the alignments to an indexed and compressed BAM file by samtools13. Using BAT_mapping_stat, the quality of the mapping can be assessed by the number and fraction of mapped pairs or reads, the multiplicity of read alignments, and the alignments’ error rates. In case of large experiments where a sample is sequenced multiplexed on multiple lanes or flow cells, the read alignments of each sample can easily be merged using BAT_merging, including the addition of read group information to allow for tracebacks of lane effects if necessary.\n\nFollowing mapping, the methylation information needs to be extracted from the read alignments. Prior to this methylation calling it is, however, recommended to exclude potential biases by clipping alignment overlaps of paired-end reads (e.g. using bamutil’s clipoverlap14) or by excluding incompletely converted or artificially introduced cytosines with the M-bias detection method (e.g. using BSeQC15). Subsequently, the methylation information can be extracted using the module BAT_calling, which returns a VCF-style file that includes detailed information for each cytosine. This initial set of positions can be filtered by coverage using BAT_filtering to exclude unreliable methylation information from either lowly covered or very highly covered positions (e.g. in repetitive regions). Moreover, it is also possible to filter by genomic context (e.g., to restrict to CG context only). Apart from a VCF file, BAT_filtering reports the methylation level at positions passing the filter in bedGraph format for easy inspection in IGV16 or upload to the UCSC genome browser17. 
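To make the calling and filtering steps concrete, here is a minimal Python sketch of the underlying idea: computing a per-cytosine methylation rate and discarding unreliably covered positions. It is not BAT’s own code (the toolkit is written in Perl), and the record layout and thresholds are assumptions chosen for illustration:

```python
# Illustrative sketch only: the idea behind methylation calling and
# coverage filtering (cf. BAT_calling/BAT_filtering), not BAT's own code.
# Assumed record layout: (chrom, pos, context, methylated_reads, total_reads).

def methylation_rate(methylated, total):
    """Methylation level of a cytosine = methylated reads / total reads."""
    return methylated / total if total > 0 else None

def filter_calls(calls, min_cov=10, max_cov=100, contexts=("CG",)):
    """Keep positions in the requested context with reliable coverage.
    Very low coverage yields noisy rates; very high coverage often marks
    repetitive regions. Thresholds are illustrative, not BAT defaults."""
    kept = []
    for chrom, pos, context, meth, total in calls:
        if context in contexts and min_cov <= total <= max_cov:
            kept.append((chrom, pos, context, methylation_rate(meth, total)))
    return kept

calls = [
    ("chr1", 100, "CG", 8, 10),    # kept, rate 0.8
    ("chr1", 200, "CG", 1, 3),     # dropped: coverage too low
    ("chr1", 300, "CG", 90, 500),  # dropped: coverage too high
    ("chr1", 400, "CHH", 2, 20),   # dropped: non-CG context
]
print(filter_calls(calls))  # -> [('chr1', 100, 'CG', 0.8)]
```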
Additionally, the module automatically produces plots showing the distribution of coverages and methylation rates for the complete and the filtered set of positions (Figure 2A), giving the user the opportunity to check and possibly fine tune the filtering parameters.\n\nAnnotation items are ENCODE transcription factor binding sites for GM12878 cell line. A) Distribution of coverage. B) Circos plot showing the genome-wide methylation level of eight samples as heatmap. C) Binned distribution of average methylation rate per CpG for each group. D) Boxplots of genome-wide mean methylation rate per group. E) Hierarchical clustered heatmap of the methylation rates of all samples over all annotation items. F) Boxplots of average methylation rate per annotation item. G) Correlating DMR plot shows methylation and expression of a DMR - gene pair. Note that all figures were produced by BAT itself, but were minorly post-edited to fit the limited space.\n\nThe third module now facilitates the transition from single sample analysis to groups of multiple samples. First, methylation information from individual samples is combined to groups and summarized with BAT_summarize. It reports the mean methylation rate per group and position as well as difference of the group’s mean methylation rate per position. The summary module can be parameterized to only report positions where each group has a minimum number of samples with sufficient coverage. For convenience, all files are exported in both bedGraph and bigWig format for inspection in UCSC genome browser or IGV. Moreover, a circos plot containing a genome-wide methylation rate heatmap for each sample is automatically produced (Figure 2B). Based on the summary files, a number of overview statistics and plots can be generated using BAT_overview. 
This includes a hierarchical clustering of the samples based on their methylation profile, a plot of binned mean methylation rates per group (Figure 2C), boxplots of group-wise mean methylation rates (Figure 2D), a smoothed scatterplot showing the correlation between the groups’ mean methylation rate per position, and a barplot of the distribution of group methylation differences. Subsequently, BAT_annotation can be used to inspect the methylation of the samples in regions of interest or annotations such as transcription factor binding sites (TFBS), CpG islands, shores, or promoter regions. Therefore, a hierarchically clustered heatmap of all samples (Figure 2E), is produced and the per-group and per-sample mean methylation rate is calculated (Figure 2F).\n\nFinally, the fourth module features the identification and analysis of differentially methylated regions (DMRs) between groups (BAT_DMRcalling). It employs the DMR calling tool metilene18 which is based on circular binary segmentation of the group methylation difference signal in conjunction with a two-dimensional non-parametric statistical test. Afterwards, the DMRs reported by metilene can be filtered by several criteria, e.g., length (in nt or number of Cs), significance (i.e., q-value), and minimum mean methylation difference, and then converted to BED/bedGraph format. The BED file contains unique identifiers per DMR and reports regions of hyper/hypo methylation. Additionally, the bedGraph file can be used to display the mean group methylation difference of the DMRs. Moreover, BAT_DMRcalling produces overview statistics of the set of filtered DMRs including a histogram of the length and methylation difference of the filtered DMRs, a correlation plot of the mean methylation rate of DMRs in both groups and a plot of the methylation difference vs. the q-value for each DMR. Last but not least, BAT_correlating allows for integration of the DMRs with expression data. 
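The post-hoc DMR filtering described above (minimum number of CpGs, q-value cutoff, minimum mean methylation difference, and hyper/hypo annotation) can be sketched as follows; this is an illustrative Python example, not code from BAT or metilene, and the field names and identifier scheme are assumptions:

```python
# Illustrative sketch of DMR filtering (cf. BAT_DMRcalling); not code from
# BAT or metilene. Assumed fields: n_cpgs, q_value, and mean_diff (mean
# methylation of group A minus group B).

def filter_dmrs(dmrs, min_cpgs=10, max_q=0.05, min_diff=0.1):
    """Keep DMRs passing length, significance, and effect-size cutoffs;
    assign a unique identifier and a hyper/hypo direction label."""
    kept = []
    for i, d in enumerate(dmrs):
        passes = (d["n_cpgs"] >= min_cpgs
                  and d["q_value"] <= max_q
                  and abs(d["mean_diff"]) >= min_diff)
        if passes:
            direction = "hyper" if d["mean_diff"] > 0 else "hypo"
            kept.append({**d, "id": f"DMR_{i:04d}", "direction": direction})
    return kept

dmrs = [
    {"chrom": "chr1", "start": 1000, "end": 2000,
     "n_cpgs": 25, "q_value": 0.001, "mean_diff": 0.4},   # kept (hyper)
    {"chrom": "chr1", "start": 5000, "end": 5200,
     "n_cpgs": 5, "q_value": 0.001, "mean_diff": 0.5},    # too few CpGs
    {"chrom": "chr2", "start": 100, "end": 900,
     "n_cpgs": 30, "q_value": 0.2, "mean_diff": -0.3},    # not significant
]
kept = filter_dmrs(dmrs)
print([(d["id"], d["direction"]) for d in kept])  # -> [('DMR_0000', 'hyper')]
```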
Given the methylation information, gene expression values, and an association between DMRs and genes, the correlation between both types of data can be examined in order to find correlating DMRs (cDMRs). For each DMR-gene pair, a linear and a non-linear correlation coefficient are calculated and a correlation plot (Figure 2G), showing methylation and expression of each sample, is generated.\n\n\nSummary\n\nBAT has already been successfully applied in the framework of a large cancer genome study, the ICGC MMML-Seq11. The streamlined processing and analysis modules improve and accelerate the analysis by reducing hands-on time and user errors. The modularity of BAT, as well as its input and output formats, makes it easy to extend or customize the default workflows. For instance, tools such as BisSNP19 or BS-Snper20, or alternative DMR calling tools, can easily be integrated.\n\nThe custom visualizations of the methylation data facilitate data mining and allow the user to inspect data quality at each step of the analysis. This increases the chance of an early detection of errors, e.g., in library preparation and data handling. 
Therefore, quality control statistics and graphics are produced continually throughout the entire pipeline.\n\nTaken together, BAT is a collection of modular steps for analyzing bisulfite sequencing data that (i) can easily be run on various platforms due to the virtualization via Docker, (ii) can be combined with or extended by other tools, (iii) automatically generates publication-ready graphics, and (iv) supports data integration, e.g., annotation or gene expression data.\n\n\nSoftware and data availability\n\nSoftware available from: www.bioinf.uni-leipzig.de/Software/BAT/download\n\nSource code available from: https://github.com/helenebioinf/BAT\n\nArchived source code as at time of publication: http://doi.org/10.5281/zenodo.83820021.\n\nLicense: MIT\n\nExample data available from: www.bioinf.uni-leipzig.de/Software/BAT/download/#example_data",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis research was supported by the German BMBF (ICGC MMML-Seq 01KU1002A-J, and ICGC-Data Mining 01KU1505-C and G) the European Union in the framework of the BLUEPRINT Project (HEALTH-F5-2011-282510) and LIFE (Leipzig Research Center for Civilization Diseases), Leipzig University. LIFE is funded by the European Union, by the European Regional Development Fund (ERDF), the European Social Fund (ESF) and by the Free State of Saxony within the excellence initiative. We acknowledge support from the German Research Foundation (DFG) and University of Leipzig within the program of Open Access Publishing.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nWe thank Stephan H. Bernhart for helpful discussion and proof reading. We acknowledge support from the German Research Foundation (DFG) and Universität Leipzig within the program of Open Access Publishing.\n\n\nReferences\n\nInternational Cancer Genome Consortium, Hudson TJ, Anderson W, et al.: International network of cancer genome projects. Nature. 2010; 464(7291): 993–998. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCancer Genome Atlas Research Network, Weinstein JN, Collisson EA, et al.: The Cancer Genome Atlas Pan-Cancer analysis project. Nat Genet. 2013; 45(10): 1113–1120. PubMed Abstract | Publisher Full Text | Free Full Text\n\nENCODE Project Consortium: An integrated encyclopedia of DNA elements in the human genome. Nature. 2012; 489(7414): 57–74. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBernstein BE, Stamatoyannopoulos JA, Costello JF, et al.: The NIH Roadmap Epigenomics Mapping Consortium. Nat Biotechnol. 2010; 28(10): 1045–1048. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMartens JH, Stunnenberg HG: BLUEPRINT: mapping human blood cell epigenomes. Haematologica. 
2013; 98(10): 1487–1489. PubMed Abstract | Publisher Full Text | Free Full Text\n\nvan Dijk SJ, Molloy PL, Varinli H, et al.: Epigenetics and human obesity. Int J Obes (Lond). 2015; 39(1): 85–97. PubMed Abstract | Publisher Full Text\n\nDe Jager PL, Srivastava G, Lunnon K, et al.: Alzheimer’s disease: early alterations in brain DNA methylation at ANK1, BIN1, RHBDF2 and other loci. Nat Neurosci. 2014; 17(9): 1156–1163. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSchumacher A, Petronis A: Epigenetics of complex diseases: from general theory to laboratory experiments. Curr Top Microbiol Immunol. 2006; 310: 81–115. PubMed Abstract | Publisher Full Text\n\nJowaed A, Schmitt I, Kaut O, et al.: Methylation regulates alpha-synuclein expression and is decreased in Parkinson’s disease patients’ brains. J Neurosci. 2010; 30(18): 6355–6359. PubMed Abstract | Publisher Full Text\n\nOtto C, Stadler PF, Hoffmann S: Fast and sensitive mapping of bisulfite-treated sequencing data. Bioinformatics. 2012; 28(13): 1698–1704. PubMed Abstract | Publisher Full Text\n\nKretzmer H, Bernhart SH, Wang W, et al.: DNA methylome analysis in Burkitt and follicular lymphomas identifies differentially methylated regions linked to somatic mutation and transcriptional control. Nat Genet. 2015; 47(11): 1316–1325. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMerkel D: Docker: Lightweight linux containers for consistent development and deployment. Linux J. 2014; 2014(239). Reference Source\n\nLi H, Handsaker B, Wysoker A, et al.: The Sequence Alignment/Map format and SAMtools. Bioinformatics. 2009; 25(16): 2078–2079. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWang KC, Yang YW, Liu B, et al.: A long noncoding RNA maintains active chromatin to coordinate homeotic gene expression. Nature. 2011; 472(7341): 120–124. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLin X, Sun D, Rodriguez B, et al.: BSeQC: quality control of bisulfite sequencing experiments. 
Bioinformatics. 2013; 29(24): 3227–3229. PubMed Abstract | Publisher Full Text | Free Full Text\n\nThorvaldsdóttir H, Robinson JT, Mesirov JP: Integrative Genomics Viewer (IGV): high-performance genomics data visualization and exploration. Brief Bioinform. 2013; 14(2): 178–92. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKarolchik D, Barber GP, Casper J, et al.: The UCSC Genome Browser database: 2014 update. Nucleic Acids Res. 2014; 42(Database issue): D764–70. PubMed Abstract | Publisher Full Text | Free Full Text\n\nJühling F, Kretzmer H, Bernhart SH, et al.: metilene: Fast and sensitive calling of differentially methylated regions from bisulfite sequencing data. Genome Res. 2016; 26(2): 256–62. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLiu Y, Siegmund KD, Laird PW, et al.: Bis-SNP: combined DNA methylation and SNP calling for bisulfite-seq data. Genome Biol. 2012; 13(7): R61. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGao S, Zou D, Mao L, et al.: BS-SNPer: SNP calling in bisulfite-seq data. Bioinformatics. 2015; 31(24): 4006–4008. PubMed Abstract | Publisher Full Text | Free Full Text\n\nhelenebioinf: helenebioinf/BAT: Publication Release. Zenodo. 2017. Data Source"
}
|
[
{
"id": "25076",
"date": "29 Aug 2017",
"name": "Bob Zimmermann",
"expertise": [
"Reviewer Expertise Bioinformatics",
"evolution and development"
],
"suggestion": "Approved",
"report": "Approved\n\nThis article presents a tool aggregate which can be a useful one-stop-shop and/or starting off point for analyzing bisulfite data. The authors detail the package and demonstrate its usefulness in an example analysis. Most of the information needed to decide whether to use this package is contained in the article.\nA notable exception are the \"further modules\" mentioned in the first paragraph of the Methods section. While it is clearly useful to integrate gene expression, histone modification data and transcription factor binding site information to your analysis, the reader cannot get an impression of whether the package does this effectively. It would be useful to include either an expansion on this topic or better yet to add example analysis with these modules as well, space permitting. If the authors are space confined, it would be useful to point to where an example of this can be found on the web, as I was unable to locate it on the project page.\nA critical omission from the introduction is a short technical background on bisulfite sequencing and its analysis. The reader has no basis to understand why a \"VCF-style file that includes detailed information for each cytosine\" (in the Calling subsection) would be useful.\nSome minor issues were:\nthe phrase \"grouping of samples\" in the second sentence of the Methods section does not really clarify anything about the function of the grouping module. 
I would suggest using \"sample group analysis\". \"Due to its modularity, however\" is awkwardly worded and could be better expressed as \"The toolkit's modularity makes it flexible, extensible and customizable for users with specific needs\". The sentence \"Basic steps, e.g. ...\" should say \"are\" instead of \"is\". \"Resembling a common study design,\" in the Use cases section does not express what I believe is the authors' intended meaning, and could be better worded as \"In order to illustrate the results of using our toolkit on a common study design,\"\n\nIs the rationale for developing the new software tool clearly explained? Partly\n\nIs the description of the software tool technically sound? Yes\n\nAre sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? Partly\n\nIs sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? Yes\n\nAre the conclusions about the tool and its performance adequately supported by the findings presented in the article? Yes",
"responses": []
},
{
"id": "25078",
"date": "06 Sep 2017",
"name": "Ishaan Gupta",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nBAT: Bisulphite analysis toolkit is a timely software which provides an end-to-end solution for performing DNA methylation analysis. The toolkit follows \"good software practises\" and has a clearly laid out work flows, efficient code , extensive documentation and has limited dependencies. Further, ability to perform the complete analysis from sequencing data to actual interpretation and integration of data as shown in the example data in the toolkit from the manuscript by the same authors \"DNA methylome analysis in Burkitt and follicular lymphomas identifies differentially methylated regions linked to somatic mutation and transcriptional control\"1 suggests that the method implemented in the toolkit for calling differentially methylated regions (DMRs) is not only much faster than existing solutions but also extracts biologically relevant information about methylation.\nI outline my reasons below :\nIs the rationale for developing the new software tool clearly explained?\n\nThe rationale for developing the software well explained as sequencing data especially bisulphite sequencing data are prone to human errors and increasing number of samples being processed for cohorts tackling complex disease phenotypes warrant for a streamlined reproducible workflows. 
Also, \"ease of use\" is a term often loosely used for many bioinformatics tools and under-appreciated by the community, but the authors have done well here by providing a docker image that obviates any platform dependencies to provide an out-of-the-box solution. Suggestion: As a rationale, it would be great if the authors could add a few lines on their method of calculating DMRs in the introduction to contrast with existing tools; I believe this would enhance the manuscript and further convince readers to use this toolkit.\n\nIs the description of the software tool technically sound?\n\nSoftware documentation is thorough and technically sound. Moreover, Dr. Hoffmann's lab has been quite consistent in releasing regular updates for their previous tools and is responsive to bug reports. Suggestion: It is accurate that segemehl requires 55GB to align the entire human genome, but it would be important to also point out that alignment could be run on individual chromosomes separately and then combined later, which significantly reduces this memory-intensive step.\n\nAre sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others?\n\nThe workflows are well laid out and broken down into individual modules, establishing a replicable software design. Each module can be run individually or together through the perl wrapper, and comes with an appropriate description of flags in the command-line help, allowing a look under the hood of the code. Further, each tool is well documented and the code is commented, making the tool reproducible.\n\nIs sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool?\nThe example provided herewith runs well and one can quickly reproduce the plots from Kretzmer et al. 20151.\n\nIs the rationale for developing the new software tool clearly explained? 
Yes\n\nIs the description of the software tool technically sound? Yes\n\nAre sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? Yes\n\nIs sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? Yes\n\nAre the conclusions about the tool and its performance adequately supported by the findings presented in the article? Yes",
"responses": []
},
{
"id": "25075",
"date": "12 Sep 2017",
"name": "Lars Feuerbach",
"expertise": [
"Reviewer Expertise Epigenetics",
"Cancer genomics",
"Software development"
],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe manuscript “BAT: Bisulfite Analysis Toolkit” presents a software pipeline for the analysis of sequencing-based analysis of bisulfite treated DNA. It introduces the major modules of this pipeline and familiarize the reader with their basic function, compatibilities and output, but is obviously not intended to provide sufficient detail to allow reimplementation of the described modules. Instead, it refers to external resources such as research articles and documentary webpages, which provide most of this information.\n\nThe article excels in providing a researcher who has to choose among several software pipelines for his next methylation project with the necessary information on BAT, without attempting to benchmark it against other approaches.\n\nEspecially, the offer of a dockerized pipeline version and a real example datasets ensures the applicability of the software, while simultaneously proving the claim of improved reproducibility.\n\nAnother prominent claim, namely the compatibility with other modules for instance alternatives to segemehl, is less well documented. Here the article would profit from an extended example in which some of the modules are exchanged by third party alternatives, e.g. in the alignment step or during the grouping.\n\nFinally, the authors describe the utility of their diagnostic diagrams depicted in figure 2 for the detection of quality problems. 
To this end, a supplementary figure/resource with a number of examples of how several quality problems manifest in these diagrams is required, not only to prove this statement, but also to educate less experienced users.\n\nMinor comment: The sentence “However, performing each step by hand is highly error prone, takes time, and impacts reproducibility” in the introduction is formulated unfavorably, as it can be misread as a suggestion that someone would attempt to analyze a WGBS dataset by hand.\n\nIs the rationale for developing the new software tool clearly explained? Yes\n\nIs the description of the software tool technically sound? Yes\n\nAre sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? Partly\n\nIs sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? Yes\n\nAre the conclusions about the tool and its performance adequately supported by the findings presented in the article? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-1490
|
https://f1000research.com/articles/6-1488/v1
|
16 Aug 17
|
{
"type": "Method Article",
"title": "Systematically linking tranSMART, Galaxy and EGA for reusing human translational research data",
"authors": [
"Chao Zhang",
"Jochem Bijlard",
"Christine Staiger",
"Serena Scollen",
"David van Enckevort",
"Youri Hoogstrate",
"Alexander Senf",
"Saskia Hiltemann",
"Susanna Repo",
"Wibo Pipping",
"Mariska Bierkens",
"Stefan Payralbe",
"Bas Stringer",
"Jaap Heringa",
"Andrew Stubbs",
"Luiz Olavo Bonino Da Silva Santos",
"Jeroen Belien",
"Ward Weistra",
"Rita Azevedo",
"Kees van Bochove",
"Gerrit Meijer",
"Jan-Willem Boiten",
"Jordi Rambla",
"Remond Fijneman",
"J. Dylan Spalding",
"Sanne Abeln",
"Jochem Bijlard",
"Christine Staiger",
"Serena Scollen",
"David van Enckevort",
"Youri Hoogstrate",
"Alexander Senf",
"Saskia Hiltemann",
"Susanna Repo",
"Wibo Pipping",
"Mariska Bierkens",
"Stefan Payralbe",
"Bas Stringer",
"Jaap Heringa",
"Andrew Stubbs",
"Luiz Olavo Bonino Da Silva Santos",
"Jeroen Belien",
"Ward Weistra",
"Rita Azevedo",
"Kees van Bochove",
"Gerrit Meijer",
"Jan-Willem Boiten",
"Jordi Rambla",
"Remond Fijneman"
],
"abstract": "The availability of high-throughput molecular profiling techniques has provided more accurate and informative data for regular clinical studies. Nevertheless, complex computational workflows are required to interpret these data. Over the past years, the data volume has been growing explosively, requiring robust human data management to organise and integrate the data efficiently. For this reason, we set up an ELIXIR implementation study, together with the Translational research IT (TraIT) programme, to design a data ecosystem that is able to link raw and interpreted data. In this project, the data from the TraIT Cell Line Use Case (TraIT-CLUC) are used as a test case for this system. Within this ecosystem, we use the European Genome-phenome Archive (EGA) to store raw molecular profiling data; tranSMART to collect interpreted molecular profiling data and clinical data for corresponding samples; and Galaxy to store, run and manage the computational workflows. We can integrate these data by linking their repositories systematically. To showcase our design, we have structured the TraIT-CLUC data, which contain a variety of molecular profiling data types, for storage in both tranSMART and EGA. The metadata provided allows referencing between tranSMART and EGA, fulfilling the cycle of data submission and discovery; we have also designed a data flow from EGA to Galaxy, enabling reanalysis of the raw data in Galaxy. In this way, users can select patient cohorts in tranSMART, trace them back to the raw data and perform (re)analysis in Galaxy. Our conclusion is that the majority of metadata does not necessarily need to be stored (redundantly) in both databases, but that instead FAIR persistent identifiers should be available for well-defined data ontology levels: study, data access committee, physical sample, data sample and raw data file. This approach will pave the way for the stable linkage and reuse of data.",
"keywords": [
"tranSMART",
"EGA",
"Galaxy",
"FAIR",
"reproducibility",
"translational research",
"data management",
"workflows"
],
"content": "Introduction\n\nTranslational research, or translational medicine, sets out to translate novel biological insights into clinical diagnostic tools, medicine, procedures, policies and education1,2. Recent developments in high-throughput profiling techniques like next generation sequencing3, followed by third generation sequencing4 and the earlier techniques like tandem mass spectrometry5 and microarrays6, have revolutionised translational research. Raw data generated by these techniques require extensive computation by bioinformatics workflows7, which transform raw data into interpreted data. The impressive number of observables per sample (e.g. genes, transcripts, exon positions, or peptide fragments) indicates that we need more samples to enhance the statistical power in filtering relevant biological events; moreover, it is still expensive to generate new molecular profiling data for research8. Subsequently, there is an increasing need to be able to reuse patient-derived high-throughput molecular profiling data from existing studies. The clinical and pathological information of such samples should also be stored to allow reanalysis. Additionally, all of these data are privacy sensitive, and hence require careful storage and controlled access. Here, we describe how those needs can be implemented into a well-designed data management ecosystem for archiving, linking and reusing data to facilitate the data-driven translational research on a large scale.\n\nWe consider two potential usage scenarios: 1) the process associated with generating the data; and 2) the process associated with reusing previously generated data. 
Note that the starting points in the two processes are different: in the former, the user starts by storing and computationally processing the raw data from the high-throughput experiments (green lines in Figure 1:A), whereas the latter process naturally starts from exploring, analysing or querying the interpreted data (orange lines in Figure 1:A).\n\nA: The process for data generation (green lines) is different from that for data reuse (orange lines). B–D: Intended scenario of reusing data for translational research: first, the samples of interest can be discovered by exploring the clinical and interpreted data in tranSMART (v16.1); note that it is essential to present enough metadata for effective exploration (B); next, the raw data in EGA can be traced back from the interpreted data in tranSMART (C); finally, workflows can be re-applied to the raw data in Galaxy (D).\n\nMany previous initiatives have focused on the implementation of infrastructures for processing and storing previously generated data9–11, but few focus on the scenario of reusing the data. Several consortia currently provide data infrastructures aimed to enable life science research12–15. Moreover, various initiatives have pushed the idea to make scientific results and data more openly accessible16–19. 
In light of this, a joint effort between ELIXIR and TraIT has been established to set up an implementation study with the aim of designing an ecosystem connecting existing data systems to enable effective reuse of the data. ELIXIR20 is an intergovernmental organisation which builds on existing data resources and services within Europe, enhancing European-wide biological research. Translational research IT (TraIT) was established as a large public-private partnership to develop, implement and maintain a long-lasting IT infrastructure for translational research in the Netherlands. In this work, we describe the setup, results and recommendations of the EGA-TraIT ELIXIR implementation study.\n\nSeveral resources and databases have been dedicated to storing, querying, exploring, processing and analysing human data. In this study, we aim to connect the European Genome-phenome Archive (EGA)21, tranSMART10,22,23 and Galaxy24,25. Currently, tranSMART (v16.1) and Galaxy are deployed by TraIT, while the EGA infrastructure is supported by CRG, EBI and ELIXIR. tranSMART is an open source framework and cloud platform for integrating and exploring molecular and clinical data; therefore tranSMART is a natural starting point for reusing data by making data findable. Galaxy is an open source bioinformatics workflow management system7,25, in which workflows can be run intuitively to analyse raw biomolecular profiling data by users without programming expertise. The European Genome-phenome Archive (EGA) is a long-term data repository for molecular profiling and phenotypic data, where data are stored, managed, referenced and distributed with strict access control. As of June 2017, more than 1160 studies are available at EGA, with over 8000 data access accounts. 
It thus has become a highly used archive for raw human translational research data, helping to improve data accessibility.\n\nThe intended usage scenario of the implementation study is the reproduction and reanalysis of archived data, and can be outlined as follows: a life science researcher is exploring the interpreted and clinical data in tranSMART (Figure 1B) to find a few specific samples of interest; they then can retrieve the identifiers for these samples in EGA, and thus retrieve the raw data from EGA (Figure 1C), and (re)apply computational workflows made available through Galaxy (Figure 1D).\n\nHere we report the full outcome of this implementation study; previously, we described the connection between Galaxy and EGA26. In this paper, we show a proof of concept that demonstrates the feasibility of linking data resources for reusing archived data, with the help of the TraIT Cell Line Use Case (TraIT-CLUC) data. Nevertheless, the dramatic differences in data models between data resources, like EGA and tranSMART (Figure 2), have posed a major challenge for the interoperability of linking data. We finalise this work with a recommendation on how to transform the proof of concept into a mature solution. We show how to bridge the distinct data models of the different data sources by using persistent identifiers (PID), and explain how this befits the FAIR16 use of human data and computational workflows in translational research: findable, accessible, interoperable and reusable.\n\nThe data model of EGA is dramatically different from that of tranSMART (v16.1) due to the deviating purposes and designs of the systems. Furthermore, in both systems, there is an intrinsic flexibility in defining the data model. EGA uses the SRA (sequence read archive) data model for NGS data with the addition of array data from array and genotyping experiments. EGA also exports all sample objects to BioSamples, ensuring each sample has a BioSample ID. 
tranSMART focuses on the clinical information and interpreted biomolecular profiling data. The data model has a patient-centered but flexible structure, which also shows some design choices due to the underlying relational database. Terminology is not the same between tranSMART and EGA - partially due to the SRA data model employed at EGA, such that an experiment describes the library and platform used for sequencing experiments only. In tranSMART, a wider range of experiments can be described. DAC is a data access committee. The sample level, which is lacking in tranSMART v16.1, will be supported from v17.1.\n\n\nResults and discussion\n\nWe designed a data ecosystem in this implementation study connecting part of the TraIT infrastructure with EGA, as shown in Figure 3; in this figure, the blue arrows show the links implemented in this study. Note that we emphasise the process for reusing data here, starting from the interpreted data in tranSMART, linking back to the raw data in EGA that can be imported within Galaxy. Galaxy can subsequently be used to rerun the workflows over the raw data or perform novel analyses.\n\nThe blue arrows in this figure depict the connections implemented as a proof of concept by the current work.\n\nThe TraIT Cell Line Use Case (TraIT-CLUC) raw data, which are non-privacy sensitive, were made public in EGA. Via the EGA help desk, anyone can access them for testing and developing workflows.\n\nWith the TraIT-CLUC data, we showcase an implementation of data model mapping between tranSMART and EGA (Figure 4), which enables the envisioned data reuse process. Users in tranSMART can: trace back all the interpreted data in one study to all the raw data file IDs by EGA study ID, which is in the metadata of the study in tranSMART - (1) in Figure 4.\n\n(1): The study level mapping; if one hovers over the ‘TraIT-Cell-line’ study node, one can see the EGA study identifier. 
(2) and (3): Metadata of node “EGA files” and its parent node (e.g. “RNA expression”) in the tree view contains one EGA dataset ID that those EGA file IDs (i.e. the leaf nodes of “EGA files”) belong to (a dataset in EGA is similar to a series in GEO). (4): After dragging the node “EGA files” in the tree view to ‘Grid View’, raw data files with EGA File IDs are rendered in a few columns in ‘Grid View’, where each row stands for a mapping from the interpreted data to its corresponding raw data files. Each subnode (not leaf node) of node “EGA files” in the tree view corresponds to a column in ‘Grid View’. Therefore, the interpreted data in tranSMART can be traced back to the corresponding raw data archived in EGA, either via the corresponding files or via the entire dataset.\n\n2. trace back all the interpreted data under one specific experiment type to the raw data file IDs by the EGA Dataset ID. The EGA Dataset ID can be found in the metadata of node \"EGA files\" and its parent node (e.g. \"RNA expression\") in the tree view - (2) and (3) in Figure 4.\n\n3. trace back one piece of specific interpreted data under one specific experiment type to the raw data files by EGA file IDs, which are the leaf nodes of the node ‘EGA files’ in the tree view and rendered as columns in ‘Grid View’ - (4) in Figure 4.\n\nOnce users in tranSMART retrieve EGA file IDs, they can directly import the raw data files into a Galaxy instance with the Galaxy tool “EGA download streamer”26. Subsequently, the workflow in Galaxy can be applied to these data for reproduction or new analysis.\n\nDuring the upload of the TraIT-CLUC data, there was extensive communication and feedback between the TraIT and EGA teams. This has resulted in an improved data uploading pipeline. EGA has implemented a FUSE layer, which allows all files received from EGA via the downloader to be stored in an encrypted format on the remote filesystem. 
This also allows processes to natively access these files and decrypt them automatically as they are accessed, removing the need for a separate decryption step and hence the storage of unencrypted files on a remote filesystem, with the associated security concerns. This implementation is now being extended to allow remote file transfer to remote clouds.\n\nIn order to improve the findability of data stored in EGA, a draft API has been implemented which allows objects to be queried and filtered, with the response in JSON format. The objects to return are specified, followed by the object and ID to filter by. For example, the following query returns the datasets associated with study EGAS00001001476: https://test.ega-archive.org/metadata/v2/datasets?queryBy=study&queryId=EGAS00001001476. It is also possible to retrieve the BioSample and EGA IDs of the samples associated with the study using the following query: https://test.ega-archive.org/metadata/v2/samples?queryBy=study&queryId=EGAS00001001476&limit=0.\n\nThe current work has improved the level of FAIRness of the infrastructure in several ways. The findability (F), even though in this case of a controlled access database, has been improved by generating a link back to the raw data. The accessibility (A), in this case with controlled access, has also been improved by allowing data import using EGA identifiers in Galaxy to access the raw data, making it thereby reusable (R). The main challenge in the implementation study is the interoperability (I), i.e., the data model mapping between EGA and tranSMART, which are unsurprisingly different from each other (Figure 2). 
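As a purely illustrative sketch (not part of the original study), the two example calls to the draft EGA metadata API quoted above can be built programmatically. The endpoint path and the queryBy, queryId and limit parameters are taken from the example URLs in the text; the helper name ega_query_url is invented here.

```python
# Hypothetical helper (not EGA-provided code) that reconstructs the example
# query URLs for the draft EGA metadata API described in the text.
from urllib.parse import urlencode

BASE = "https://test.ega-archive.org/metadata/v2"

def ega_query_url(objects, query_by, query_id, **extra):
    """Build a metadata query URL, e.g. all datasets belonging to one study."""
    params = {"queryBy": query_by, "queryId": query_id, **extra}
    return f"{BASE}/{objects}?{urlencode(params)}"

# Datasets associated with study EGAS00001001476:
print(ega_query_url("datasets", "study", "EGAS00001001476"))
# Samples (with BioSample and EGA IDs) for the same study, unpaginated:
print(ega_query_url("samples", "study", "EGAS00001001476", limit=0))
```

The JSON responses could then be fetched with any HTTP client; the sketch only shows how the documented parameters compose.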
Below we outline recommendations to further improve the FAIRness of this ecosystem for privacy sensitive human data.\n\n\nRecommendation to implement a proof of concept\n\nIn this ELIXIR EGA-TraIT implementation study, we showed a proof of concept for linking EGA, tranSMART and Galaxy, effectively providing an ecosystem for translational high-throughput biomolecular profiling data. However, the current implementation of metadata mapping between tranSMART and EGA will become more cumbersome when one item of interpreted data corresponds to multiple raw data files, which leads to multiple columns in the “grid view” of tranSMART. In this situation, to allow the further development of technical links, user-friendly interfaces and better provenance of computational methods, a more structural solution is required. Below we will outline our recommendations, which will ensure interoperability between different elements of these ecosystems, and thus allow the development of user-friendly work processes.\n\nThe ELIXIR implementation study aimed to show a proof of concept for a functioning ecosystem, in which data could be reused by life science researchers. In order to make a user-friendly and more mature ecosystem, some further improvements need to be made:\n\n1. The current implementation of the Galaxy EGA download streamer means that all users of one Galaxy instance have to share one user credential to access EGA data. Currently, Galaxy does not support a password input type. This means that any password will be inadvertently recorded in the Galaxy history, and thereby compromise the security of EGA credentials; the current implementation is an ad hoc solution to this problem. A generic solution in Galaxy should be offered to securely integrate with third-party authentication27; this would also enable secure personal access to non-public databases besides EGA.\n\n2. 
From a user perspective, error messages from the Galaxy EGA download streamer should be easily interpretable. Currently, it is difficult to obtain associated metadata on the EGA file identifiers, making it difficult to implement helpful error messages. For example, it may be unclear to the user why there is no access to a certain file, and who should be approached if access is needed. This could be addressed if metadata on EGA identifiers would be exposed in a more generic, machine readable format, preferably in RDF.\n\n3. Likewise, human readable metadata associated with EGA identifiers, such as the file identifier, should be exposed, so that researchers can find their way to the correct datasets, studies and data access committees covering the files of interest. Currently, if a life science researcher finds an EGA file ID in tranSMART, and does not have EGA access yet, it is very difficult to find out to which EGA dataset or study it belongs.\n\n4. For life science researchers, a more direct reference from tranSMART to suitable computational workflows would be highly desirable. In terms of provenance, a reference to the workflow that produced the data would be sufficient; however, for reusing data by the life science researchers, it would be helpful if a direct link to a workflow on a Galaxy instance were available. This issue has for example been addressed in the myFAIR Analysis project.\n\n5. Many bioinformaticians running production workflows for generating interpreted data do not, in fact, use Galaxy. An important reason for this is that Galaxy does not always give enough control over the data usage and job scheduling to allow computationally expensive workflows to be run efficiently on HPC systems. Moreover, a bioinformatician — who wants to make a Galaxy workflow available as provenance over the dataset and increase reusability of the data — needs to make additional efforts to port the workflow to Galaxy. 
Any steps that make this porting easier will, in the longer term, greatly serve the provenance of interpreted data.\n\nCurrently, data models used to capture clinical cohorts vary strongly between different data resources (Figure 2). However, aligning these data models, or mapping them via metadata, would only partially resolve the problem, for the following reasons:\n\n1) Translational research is a rapidly changing field; study and cohort structures rapidly evolve to reflect the fast advances in data science and high-throughput molecular profiling techniques.\n\n2) Different elements within any such ecosystem can have multifarious purposes and can aim to serve a different market of users.\n\n3) Metadata is essential for good data stewardship16; nevertheless, the purposes of data resources may indicate which metadata is required; moreover, metadata may need to be corrected or updated over time (see for example the fate of the TCGA barcodes).\n\n4) Making huge amounts of (overlapping) metadata a requirement in each data resource will increase the barrier for data submission to any resource.\n\nIn this context, we make a different suggestion that ensures interoperability between these systems without the need to align their full relational structures: globally resolvable and unique persistent identifiers (PID)28 should be generated for well-defined entities in all data resources, and should be used to link the data between data resources (Figure 4). Furthermore, we suggest that the following ontology concepts need to be assigned such persistent identifiers: Study, Data Access Committee (DAC), Physical Sample, Data Sample, and Data File (Figure 5).\n\nWe suggest the following requirements should hold for each of these persistent identifiers:\n\n1. 
A single authority should be responsible for minting the persistent identifier, which also entails a scheme to define what the string looks like, and for standardising the minimally required metadata applied for the identifier within the consortium.\n\n2. Any data resource offering these PIDs should make sure the relations between the PID entities are resolvable by querying their database, for those PIDs included in the resource. For example, if EGA contains a File PID, we should be able to ask for the associated DAC PID.\n\nSuch persistent identifiers would be very similar to the recently introduced ORCID ID for researchers. Several data resources, such as those held by publishers, libraries and funding agencies, are including this in their systems, which obviates the need for a homogeneous relational structure or perfectly overlapping metadata. The linkage of one ORCID ID with multiple DOIs makes the publications and academic activities of one researcher easily traceable, creating a fully workable researcher-centered ecosystem with a wide range of data resources and applications.\n\nThe data model of EGA differs substantially from that of tranSMART; for example, a tranSMART experiment has a different conceptual meaning compared to the EGA ‘experiment’, which is one of the four ‘processing’ objects at EGA (experiment, run, analysis, and array). A few well-defined entities with persistent identifiers (PIDs) are essential to achieving interoperability between the systems. From this implementation study, the Study PID, File PID and DAC PID are thought to be essential for systematic mapping in a stable ecosystem allowing data reuse. Moreover, from a TraIT perspective, stable identifier types that describe the physical sample (Physical Sample PID) and the raw data associated with such a sample (Data Sample PID) are desirable. 
For the first concept, the BioSample definition could be used; for the second, there is a clear need for a well-defined aggregate identifier above the file level that covers all raw output data from a single experiment on a single sample. Ongoing studies aim to define such a level, consistent with GA4GH29 metadata model systems.\n\nNote that it is not necessary for all types of PIDs to be governed by a single authority. Currently, EGA has two types of PIDs listed at identifiers.org: the EGA Study and the EGA Dataset. All EGA samples also have a BioSamples PID, which links to the publicly accessible attributes of the sample. To fully adhere to the above criteria, EGA would need to ensure that the controlled-access attributes are available via an EGA PID, along with EGA PIDs for Experiment, Analysis, Run, and Array. The additional PID types required may also be given out by other authorities; distributed governance of PID types would not diminish their usefulness.\n\nWith our recommendations, this implementation-study-specific data ecosystem will progress further towards the FAIR guiding principles. If the associated metadata of these PIDs are made available as linked data, findability (F) could easily be ensured by metadata-exposing systems such as bioschemas30 or wikidata31; in this way, users could then access the metadata and PIDs in Wikipedia via search engines. A File PID or Data Sample PID should be associated with at least one DAC PID, ensuring that high-throughput biomolecular profiling data can be authorised and accessed (A). The implementation of PIDs in linking metadata specifically achieves interoperability (I) between the different systems. Raw data in EGA can be reused in Galaxy for further analysis in our data ecosystem, and the rich metadata will help users evaluate the reusability (R) of the data. 
The latter will be enhanced if our recommendations help drive community standards in human data management. Thus, we suggest that by defining a few well-defined entities in a rigorous way, we can link existing initiatives, built with different purposes in mind, without the need to align their full data structures.\n\nEGA has traditionally only allowed a limited set of data to be available publicly because of its controlled-access database. These would be the study, DAC, and dataset objects. This study has shown that for EGA to become fully FAIR, EGA needs to allow all other objects with PIDs to be publicly queryable. EGA can ensure security by restricting which attributes of the PIDs are visible publicly, while allowing the PID itself to be public. For example, as each file in EGA has a PID, this PID could be public, while the filename could be under controlled access, allowing the full structure and links between objects at EGA to be accessible. EGA is developing a new API that will allow the relationships between all objects to be determined (linked data) while ensuring controlled-access data is not public. Example queries would be:\n\n’List all files from sample A’\n\n’List all samples used in file B’\n\n’List all files of type C in study D’\n\n’List all samples in dataset E’\n\n’Return the experiments that were performed on sample F by study G’\n\nAdditionally, filters can be applied to restrict results by attributes associated with an object, such as ’Return all BAM files from male samples in study H’. 
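The PID-linked queries above can be sketched with a toy in-memory relation store. This is a minimal sketch only: the identifier strings, relation names and `PidGraph` class are invented for illustration and do not reflect the actual EGA API, which is under development.

```python
from collections import defaultdict

class PidGraph:
    """Hypothetical store of typed links between persistent identifiers."""

    def __init__(self):
        # (pid, relation) -> set of related PIDs
        self._edges = defaultdict(set)

    def link(self, subject, relation, obj):
        """Record a directed, typed link between two PIDs (indexed both ways)."""
        self._edges[(subject, relation)].add(obj)
        self._edges[(obj, "inverse:" + relation)].add(subject)

    def query(self, pid, relation):
        """Resolve all PIDs related to `pid` via `relation`."""
        return sorted(self._edges[(pid, relation)])

graph = PidGraph()
# All identifiers below are made up for illustration.
graph.link("EGAF00000000001", "derived_from_sample", "SAMEA0000001")
graph.link("EGAF00000000002", "derived_from_sample", "SAMEA0000001")
graph.link("EGAF00000000001", "governed_by_dac", "EGAC00000000001")

# 'List all files from sample A'
print(graph.query("SAMEA0000001", "inverse:derived_from_sample"))
# 'Which DAC governs this file?' (requirement 2 above)
print(graph.query("EGAF00000000001", "governed_by_dac"))
```

The point of the sketch is that only the links between well-defined PID entities need to be resolvable; neither system has to expose, or align, its full relational schema.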
EGA should also extend the list of relevant digital objects at identifiers.org30 to cover each object type for which EGA is responsible for generating a PID, ensuring that each of these objects will have a unique uniform resource identifier (URI).\n\n\nConclusions\n\nOur implementation study advances the role of EGA from a data archive towards a data port, where data can more readily be reused; additionally, it has made it possible to link tranSMART, Galaxy and EGA into a full data reuse ecosystem. Interoperability is the centrepiece among all the challenges in linking data, and our recommendations offer one solution to it. In addition, this implementation study allowed us to make several recommendations for future projects to improve the FAIRness of the designed ecosystem.\n\n\nMethods\n\nWe mapped the data model of tranSMART (v16.1) to that of EGA. In Figure 2, “study” is mapped between the two databases; “interpreted data” is mapped to “analysis” or “run” in EGA, which corresponds to one or multiple EGA file IDs (see the section \"Data and software availability\").\n\nTraIT-CLUC data are used in this implementation study for test purposes because they do not have privacy issues. TraIT-CLUC data include results obtained from various high-throughput molecular profiling techniques, such as microarrays, next-generation sequencing and tandem mass spectrometry. Raw data were restructured to be uploaded into EGA; the interpreted data were converted to the tranSMART-ready format to be uploaded into tranSMART (see Data and software availability).\n\nData upload into EGA. Raw TraIT-CLUC data, including FASTQ and BAM files, were uploaded into EGA together with their metadata.\n\nData files were transferred to the EGA archive via FTP after being encrypted locally. Metadata were entered into XML files and uploaded into EGA via its API. The raw TraIT-CLUC data have been structurally published in EGA.\n\nData upload into tranSMART. 
The interpreted TraIT-CLUC tranSMART-ready data were uploaded into tranSMART using transmart-batch32.\n\nA Galaxy tool called ega_download_streamer26 was used, which wraps the EGA download client for use within Galaxy. We configured Galaxy with an EGA account that has access to the TraIT-CLUC data. By providing an EGA file identifier, this tool enables the automatic download of data from EGA into Galaxy.\n\n\nData and software availability\n\nThe raw TraIT-CLUC data structurally published in EGA can be accessed via EGA Study ID EGAS00001001476. These data are public, and therefore anyone can request access to the datasets under EGA Study ID EGAS00001001476 via the EGA help desk (DAC ID: EGAC00001000514). The tranSMART-ready TraIT-CLUC interpreted data can be found at https://trng-b2share.eudat.eu/records/21bdc3128e1541da83dc48c51cd39a5f. Instructions for loading the tranSMART-ready data into tranSMART can be found at http://cluc.trait-platform.org.\n\ntranSMART (v16.1) is used in this implementation study. Information about a demo server of tranSMART showcasing the data model mapping of this work can be found at http://cluc.trait-platform.org.\n\nA Galaxy instance can be deployed either from the source code or from a Docker image. More information can be found at https://galaxyproject.org/. The Galaxy tool “EGA download streamer” can be installed from the main Galaxy tool shed under the name “ega_download_streamer” within the Galaxy instance. The source code can be found at http://dx.doi.org/10.5281/zenodo.16733033.",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis EGA-TraIT implementation study is funded by ELIXIR, the research infrastructure for life-science data. CZ, J.Bijlard, YH, SH, MB, A.Stubbs, JWB, GM, RF and SA are all supported by CTMM-TraIT (grant agreement number 05T-401).\n\nA.Senf and DS are supported by ELIXIR; the research is supported by ELIXIR-EXCELERATE, ELIXIR and European Molecular Biology Laboratory. ELIXIR-EXCELERATE is funded by the European Commission within the Research Infrastructures programme of Horizon 2020 (grant agreement number 676559).\n\nThe author confirms that the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nReferences\n\nRubio DM, Schoenbaum EE, Lee LS, et al.: Defining translational research: implications for training. Acad Med. 2010; 85(3): 470–475. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWoolf SH: The meaning of translational research and why it matters. JAMA. 2008; 299(2): 211–213. PubMed Abstract | Publisher Full Text\n\nSchuster SC: Next-generation sequencing transforms today’s biology. Nat Methods. 2008; 5(1): 16–18. PubMed Abstract | Publisher Full Text\n\nLee H, Gurtowski J, Yoo S, et al.: Third-generation sequencing and the future of genomics. bioRxiv. 2016. Publisher Full Text\n\nHunt DF, Yates JR 3rd, Shabanowitz J, et al.: Protein sequencing by tandem mass spectrometry. Proc Natl Acad Sci U S A. 1986; 83(17): 6233–6237. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTusher VG, Tibshirani R, Chu G: Significance analysis of microarrays applied to the ionizing radiation response. Proc Natl Acad Sci U S A. 2001; 98(9): 5116–5121. PubMed Abstract | Publisher Full Text | Free Full Text\n\nda Silva RF, Filgueira R, Pietri I, et al.: A characterization of workflow management systems for extreme-scale applications. Future Gener Comput Syst. 2017; 75: 228–238. 
Publisher Full Text\n\nvan Nimwegen KJ, van Soest RA, Veltman JA, et al.: Is the $1000 genome as near as we think? A cost analysis of next-generation sequencing. Clin Chem. 2016; 62(11): 1458–1464. PubMed Abstract | Publisher Full Text\n\nGriffith M, Spies NC, Krysiak K, et al.: CIViC is a community knowledgebase for expert crowdsourcing the clinical interpretation of variants in cancer. Nat Genet. 2017; 49(2): 170–174. PubMed Abstract | Publisher Full Text | Free Full Text\n\nScheufele E, Aronzon D, Coopersmith R, et al.: tranSMART: An open source knowledge management and high content data analytics platform. AMIA Jt Summits Transl Sci Proc. 2014; 2014: 96–101. PubMed Abstract | Free Full Text\n\nCerami E, Gao J, Dogrusoz U, et al.: The cBio cancer genomics portal: an open platform for exploring multidimensional cancer genomics data. Cancer Discov. 2012; 2(5): 401–404. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGrossman RL, Heath AP, Ferretti V, et al.: Toward a shared vision for cancer genomic data. N Engl J Med. 2016; 375(12): 1109–1112. PubMed Abstract | Publisher Full Text\n\nKasprzyk A: BioMart: driving a paradigm change in biological data management. Database (Oxford). 2011; bar049. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBourne PE, Bonazzi V, Dunn M, et al.: The NIH Big Data to Knowledge (BD2K) initiative. J Am Med Inform Assoc. 2015; 22(6): 1114. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMargolis R, Derr L, Dunn M, et al.: The National Institutes of Health’s Big Data to Knowledge (BD2K) initiative: capitalizing on biomedical big data. J Am Med Inform Assoc. 2014; 21(6): 957–958. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWilkinson MD, Dumontier M, Aalbersberg IJ, et al.: The FAIR guiding principles for scientific data management and stewardship. Sci Data. 2016; 3: 160018. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nWells TN, Willis P, Burrows JN, et al.: Open data in drug discovery and development: lessons from malaria. Nat Rev Drug Discov. 2016; 15(10): 661–662. PubMed Abstract | Publisher Full Text\n\nLevin N, Leonelli S, Weckowska D, et al.: How do scientists define openness? Exploring the relationship between open science policies and research practice. Bull Sci Technol Soc. 2016; 36(2): 128–141. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMcKiernan EC, Bourne PE, Brown CT, et al.: How open science helps researchers succeed. eLife. 2016; 5: pii: e16800. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCrosswell LC, Thornton JM: ELIXIR: a distributed infrastructure for European biological data. Trends Biotechnol. 2012; 30(5): 241–242. PubMed Abstract | Publisher Full Text\n\nLappalainen I, Almeida-King J, Kumanduri V, et al.: The European genome-phenome archive of human data consented for biomedical research. Nat Genet. 2015; 47(7): 692–695. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHerzinger S, Gu W, Satagopam V, et al.: SmartR: An open-source platform for interactive visual analytics for translational research data. Bioinformatics. 2017; 33(14): 2229–2231. PubMed Abstract | Publisher Full Text\n\nBierkens M, van der Linden W, Weistra W, et al.: Abstract 3166: Querying, viewing and analyzing colorectal cancer translational research studies in tranSMART. Cancer Res. 2016; 76(14 Supplement): 3166. Publisher Full Text\n\nThiel WH: Galaxy workflows for web-based bioinformatics analysis of aptamer high-throughput sequencing data. Mol Ther Nucleic Acids. 2016; 5(8): e345. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAfgan E, Baker D, van den Beek M, et al.: The Galaxy platform for accessible, reproducible and collaborative biomedical analyses: 2016 update. Nucleic Acids Res. 2016; 44(W1): W3–W10. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nHoogstrate Y, Zhang C, Senf A, et al.: Integration of EGA secure data access into Galaxy [version 1; referees: 2 approved]. F1000Res. 2016; 5: pii: ELIXIR-2841. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMénager H: Report for: Integration of EGA secure data access into Galaxy [version 1; referees: 2 approved]. F1000Res. 2017; 5. Publisher Full Text\n\nSun S, Lannom L, Boesch B: Handle system overview. 2003. Reference Source\n\nKnoppers BM: International ethics harmonization and the global alliance for genomics and health. Genome Med. 2014; 6(2): 13. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSeibel PN, Krüger J, Hartmeier S, et al.: XML schemas for common bioinformatic data types and their application in workflow systems. BMC Bioinformatics. 2006; 7: 490. PubMed Abstract | Publisher Full Text | Free Full Text\n\nVrandečić D, Krötzsch M: Wikidata: A free collaborative knowledgebase. Commun ACM. 2014; 57(10): 78–85. Publisher Full Text\n\nthehyve: tranSMART Batch. Zenodo. 2016. Publisher Full Text\n\nyhoogstrate, Hiltemann S: ErasmusMC-Bioinformatics/galaxytools-emc: v1.0 ega_download_streamer. Zenodo. 2016. Publisher Full Text"
}
|
[
{
"id": "25072",
"date": "06 Sep 2017",
"name": "Hervé Ménager",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper’s academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis paper describes the work implemented in the context of an ELIXIR implementation study, which aims at building a proof of concept for an infrastructure that links reference omics data (from the EGA) with a workflow environment (Galaxy) and a data integration platform hosting interpreted data (tranSMART).\nThe authors make a clear case for the interest of their approach, which is to facilitate the discovery and reusability (overall, the FAIRness) of clinical data. A prototype “ecosystem” has been built to evaluate this approach. As the authors mention, this paper builds, among other things, on the work presented in “Integration of EGA secure data access into Galaxy”1, which had also introduced the project. The results of this work are quite encouraging, as the implementation study demonstrates that despite technical issues such as the differences between the data models of different components (EGA and tranSMART), their integration remains possible. The last “Recommendations” section is helpful in understanding the limitations of the current work. Of particular importance in my opinion is recommendation 5 to move to a “mature solution”, which explains the difference of implementation between the initial analysis and the re-analysis workflows by the restriction of Galaxy usage to smaller scales than the “production workflows” used initially. This raises the question of workflow portability between Galaxy and other workflow management systems. 
I personally think that CWL 2 (an initiative I am currently part of) could be used as a standard language to define workflows that can be run both in high-throughput production environments and in graphical workbench systems like Galaxy. From a more general perspective, since most of the recommendations correspond to potential modifications in the “partner systems” (EGA, Galaxy, tranSMART), it would be interesting to know whether they have been communicated to the corresponding communities, and to be able to track the evolution of these requests.\nA last minor point is that I would modify Figure 3 to transform the “Export raw” between EGA and Galaxy into an “Import raw data”, as the data transfer is controlled from Galaxy rather than from EGA.\n\nIs the rationale for developing the new method (or application) clearly explained? Yes\n\nIs the description of the method technically sound? Yes\n\nAre sufficient details provided to allow replication of the method development and its use by others? Yes\n\nIf any results are presented, are all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions about the method and its performance adequately supported by the findings presented in the article? Yes",
"responses": []
},
{
"id": "27264",
"date": "24 Oct 2017",
"name": "Gilbert S. Omenn",
"expertise": [
"Cancer proteomics",
"bioinformatics"
],
"suggestion": "Approved",
"report": "Approved\n\nThis is a quite unusual paper. It presents a demo with TraIT of a cell line use case (CLUC) to combine functions, samples, and datasets from tranSMART (v16.1) and EGA into workflows in Galaxy. I was pleased to see proteomics identified as a key data type in Figure 3.\n\nFigure 4 shows the metadata mapping and the assessment of the FAIR principles.\n\nThe Discussion of the ELIXIR implementation lays out improvements needed. Sometimes we might require a manuscript to report the implemented improvements, with results, but in this complex situation, including recommendations for managing public access versus controlled access across different independent resources, I think indexing at this stage is worthy. The model of ORCID IDs for researchers and their publication DOIs is a useful analogy.\n\nIs the rationale for developing the new method (or application) clearly explained? Yes\n\nIs the description of the method technically sound? Yes\n\nAre sufficient details provided to allow replication of the method development and its use by others? Yes\n\nIf any results are presented, are all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions about the method and its performance adequately supported by the findings presented in the article? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-1488
|
https://f1000research.com/articles/6-631/v1
|
05 May 17
|
{
"type": "Research Article",
"title": "Investigation of chimeric reads using the MinION",
"authors": [
"Ruby White",
"Christophe Pellefigues",
"Franca Ronchese",
"Olivier Lamiable",
"David Eccles"
],
"abstract": "Following a nanopore sequencing run of PCR products of three amplicons shorter than 1 kb, an abundance of reads failed quality control due to template/complement mismatch. A BLAST search demonstrated that some of the failed reads mapped to two different genes -- an unexpected observation, given that PCR was carried out separately for each amplicon. A further investigation was carried out specifically to search for chimeric reads, using separate barcodes for each amplicon and trying two different ligation methods prior to sample loading. Despite the separation of ligation products, chimeric reads formed from different amplicons were still observed in the base-called sequence. The long-read nature of nanopore sequencing presents an effective tool for the discovery and filtering of chimeric reads. We have found that at least 1.7% of reads prepared using the Nanopore LSK002 2D Ligation Kit include post-amplification chimeric elements. This finding has potential implications for other amplicon sequencing technologies, as the process is unlikely to be specific to the sample preparation used for nanopore sequencing.",
"keywords": [
"chimerism",
"MinION",
"interferon",
"amplicon",
"R9.4",
"signal",
"nanopore"
],
"content": "Introduction\n\nHigh-throughput DNA sequencing is a rapidly evolving field with new methods and applications introduced almost weekly1. One of the most recent sequencing technologies available on the market is the MinION sequencing device from Oxford Nanopore Technologies (ONT)2. A brief overview of MinION sequencing technology is discussed in our previous study on mitochondrial genome assembly3.\n\nInstead of exploiting base-pairing as in the sequencing-by-synthesis approach used by Illumina and others, nanopore sequencing uses an electronic sensor to detect DNA via a change in electric current (reviewed in 4). The MinION’s flow cell comprises 2048 wells containing a membrane perforated by nanopores. Ligated to a molecular motor, a single-stranded DNA molecule passes through the pore, altering the recorded current. After the electronic sequencing is carried out, a software basecalling algorithm transforms the current trace into a modelled DNA sequence. The advantages of the MinION are rapid library preparation, portability5,6, long molecule sequencing7, and sequencing of non-model modifications of the DNA strand8. With the recent improvement in the chemistry of the MinION, ONT has overcome the majority of issues associated with low yield and high error rates that have limited the range of its application. The MinION sequencing device has now been successfully applied to sequence genomes of a wide range of sizes, from bacterial and viral genomes9,10, to amplicon sequencing such as bacterial 16S rRNA sequencing11, and more recently a human genome12. The MinION has also been used for cDNA sequencing13, for detecting DNA methylation patterns without chemical treatment8,14, and for direct RNA sequencing with detection of modified 16S rRNA nucleotides15.\n\nUsing the most recent R9.4 flow cells, we have evaluated the MinION technology for the amplicon sequencing of highly similar genes. 
Since we have an interest in interferon response during large parasitic infection16, we sequenced the type I Interferon (IFN) family. Type I IFNs are a family of intronless antiviral response genes comprising, in mice, 14 highly homologous Ifna members, as well as the genes Ifnb, Ifnk and Ifne17. In humans, sequence similarity across the 14 members of the Ifna genes is 70–80%, with a further 35% sequence similarity between Ifna and Ifnb. Type I IFN plays an important role both in innate antiviral immunity and in mounting adaptive T helper cell responses16,18. Based on previous observations, we aimed to identify which type I IFN member(s) were responsible for driving the type I IFN signalling in our infection model.\n\nDue to the high homology within the Ifna family, accurately detecting quantitative expression of the different gene members by Sanger sequencing or next-generation sequencing is difficult. We instead employed nanopore sequencing, which allowed us to acquire full-length reads from each individual sequence amplified by the PCR reaction. We aimed to determine the relative quantities of the various Ifna-family and Ifnb transcripts in Nb-treated mouse ear tissue using the MinION, thereby enabling both differentiation between the various Ifna genes and the potential to perform quantitative analysis.\n\n\nMethods\n\nNippostrongylus brasiliensis was originally sourced from Lindsey Dent of the University of Adelaide, South Australia and has been maintained for 22 years by serial passage at the Malaghan Institute. Female Lewis rats were bred and used for maintenance of the N. brasiliensis life cycle when 4 months of age (and weighing over 150 g), as outlined in Camberis et al.19.\n\nTwo 8-week-old C57BL6/J male mice (Jackson Laboratories, approx 23 g), housed and bred at the Malaghan Institute of Medical Research under specific pathogen-free conditions, respecting the local and New Zealand ethics guidelines, were chosen for the investigation. 
300 dead infective N. brasiliensis L3 larvae (Nb) were injected intradermally in each ear of one mouse in 30 µl PBS after anaesthesia with an intraperitoneal injection of 200 µl ketamine/xylazine. Another mouse was similarly anaesthetised and injected intradermally in each ear with 30 µl PBS. The mice were euthanised in a CO2 chamber 3 h post injection, and ears (approx 27–30 mg in weight) were immediately harvested and stored in RNAlater at 4 degrees for <1 h. RNA extraction of each whole ear (30 mg) was done in 1 ml of Trizol following the product’s guidelines (Thermo Fisher). cDNA was synthesised using the High Capacity RNA-to-cDNA kit (Applied Biosystems), according to the manufacturer’s instructions. Only the cDNA from the N. brasiliensis-treated mouse was used for this investigation. Ifna, Ifnb, and Actb amplicons were generated using specific primers: IfnaF (ATGGCTAGRCTCTGTGCTTTCCT) and IfnaR (AGGGCTCTCCAGAYTTCTGCTCTG)20; IfnbF (CTGGCTTCCATCATGAACAA) and IfnbR (GCAACCACCACTCATTCTGA); and ActbF (AGGGAAATCGTGCGTGACAT) and ActbR (ACGCAGCTCAGTAACAGTCC), which were purchased from Integrated DNA Technologies. PCR amplification was performed using the Phusion High-Fidelity PCR Kit (Thermo Scientific) with 25 ng cDNA (see Figure 1). The cycling conditions were as follows: denaturation at 98 degrees for 30 seconds; 35 cycles of 98 degrees for 10 seconds, 61 degrees for 30 seconds, 72 degrees for 30 seconds; final extension of 72 degrees for 10 minutes. Samples were held at 4 degrees until use for PCR clean-up and gel electrophoresis. PCR products were cleaned using the QIAquick PCR Purification Kit (QIAGEN) and verified by gel electrophoresis.\n\n(A) Amplicons were observed for Ifna and Actb from both PBS-treated (1&2, not sequenced in this investigation) and Nb-treated (3) samples at the expected sizes of 535 bp and 524 bp, respectively. The Ifnb gene from the Nb-treated sample (3) failed to amplify during this first attempt. 
(B) A repeat amplification of Ifnb from the Nb-treated sample was carried out, producing a single band of approximately 600 bp. This was run alongside amplicons of Ifna, Ifnb and Actb from genomic DNA; however, genomic amplicons were not used for subsequent MinION sequencing.\n\nIfna cDNA were amplified by PCR using primers designed across a highly-conserved region of all Ifna coding sequences, which resulted in a mixed PCR product containing all 14 Ifna genes. cDNAs of Ifnb and Actb were amplified separately and used as quantification controls. Altogether, the three pooled amplicons were loaded into a flow cell and sequenced. Among the reads that we obtained, we noticed long chimeric reads comprising two or more sequences from different amplicons. We decided to further examine this phenomenon.\n\nEthics approval for maintenance of the N. brasiliensis life cycle is overseen and approved by the Victoria University of Wellington Animal Ethics Committee. C57BL/6J mice were originally obtained from The Jackson Laboratory, Bar Harbor, Maine, USA, and maintained at the Biomedical Research Unit of the Malaghan Institute of Medical Research by brother × sister mating. Breeding pairs were refreshed regularly to maintain the genetic integrity of the strain. Mice were maintained in specific pathogen-free conditions, and housed and cared for according to the concepts of “A Culture of Care” of the Ministry of Primary Industries, NZ. All mouse experiments were approved by the Victoria University Animal Ethics Committee (permit number 23907) and carried out according to institutional guidelines.\n\nThe ONT Native Barcoding Kit (EXP-NBD002) and 2D Ligation Sequencing Kit (SQK-LSK208) were used to prepare the samples for sequencing, as per the manufacturer’s protocol. Briefly, purified PCR amplicon products were blunt-ended, ligated with barcode sequences, pooled in approximately equimolar amounts, then ligated with flow cell adapters and a hairpin linker. 
In order to explore the effect of ligation method on the degree of chimerism, two different adapter/hairpin ligation reactions were carried out: one using the standard quick (10-minute) ligation, and the other using an overnight ligation at 4° Celsius. No additional adapter-free controls were used; it has been our prior experience that sequencing does not proceed in a callable fashion unless adapter sequences are present. The barcoding scheme used in the library preparation is shown in Figure 2. Samples were quantified after barcoding for overnight ligation (2.14 ng/µl, 2.54 ng/µl and 2.56 ng/µl for Ifna, Ifnb, and Actb respectively) and for quick ligation (2.13 ng/µl, 2.68 ng/µl and 2.45 ng/µl for Ifna, Ifnb, and Actb respectively). These samples were normalised and pooled together to give 26.6ng each in 33.1µl distilled water for ligation. After adapter ligation, the quick ligation method had no detectable nucleic acid, as seen using a fluorescence quantitation with the Quantus fluorometer (Promega), while the overnight ligation quantified at 0.239ng/µl. We decided to pool the samples together anyway, and were pleasantly surprised to discover a substantial proportion of reads from quick-barcoded sequences.\n\nMouse cDNA was extracted and separately amplified for three different amplicons. The amplified product was then separated and barcoded based on the intended ligation process. Barcoded products were pooled and ligated to adapters via the overnight or the quick ligation method, then finally pooled together for sequencing.\n\nReads were initially basecalled during the sequencing runs in January 2017 using Metrichor 2D basecalling, from MinKNOW v1.3.25. An initial analysis of called reads demonstrated substantial disagreement between base calls and the raw signal (e.g. 
hairpin adapter sequences matching multiple times when the signal showed only one present), so reads were re-called in March 2017 using Albacore v0.7.5.\n\n\nResults and discussion\n\nDuring the initial MinION sequencing run to investigate the expression of Ifna-family members in mice (comparing with Ifnb and Actb transcripts), we encountered issues with 2D basecalling through the Metrichor web service, which seemed to be due to failed alignment of component 1D strands. A BLAST search on some of the longest basecalled 1D reads led to the discovery that some reads had multiple mappings to our target Ifna-family members. Further exploration of the data demonstrated a situation in which both Ifna and Actb sequences were present in the same read (see Figure 3). This was an unexpected result; we had carried out separate PCR reactions for each transcript, so were not expecting reads to appear that mapped to different transcripts. Our conclusion was that chimeric ligation of input DNA was occurring at some stage during the sample preparation process, but all we were able to determine at the time was that this chimerism was happening some time after the PCR, but before the sequencing. The present experiment was designed in light of these prior results to more easily quantify the degree of ligation that was happening.\n\nThis read mapped to both beta-actin and interferon alpha, suggesting that a ligation of sequence had occurred, either during sample preparation or in silico.\n\nDespite using a 2D ligation chemistry in the sample preparation, and selecting out hairpin-containing reads using streptavidin beads, the majority of reads could not be called as an aligned 2D sequence: of 329,591 sequenced reads, 299,124 were basecalled by Albacore, and 1005 (0.3%) of these basecalled reads had an aligned 2D sequence (see Supplementary File 1). The reasons behind this basecall failure were not investigated. 
Any called reads that were not called as 2D were processed further as 1D sequence, i.e. the remaining 298,119 (99.7%) of called reads.\n\nCalled 1D reads were mapped to Actb, Ifnb1, an Ifna consensus sequence, additional interferon sequences, the ONT control strand sequence, and known ONT adapter sequences (see Supplementary File 2) using LAST v833 (ref. 21). A total of 261,183 reads (87.6% of called 1D reads) were discovered that mapped to at least one known amplicon and/or barcode sequence.\n\nUsing a process of elimination, a total of 4563 reads (1.7% of amplicon or barcode-mappable 1D reads) were discovered with basecalled sequences that were definitively chimeric (see Supplementary File 5). These reads mapped at least once to either one of the three amplicon sequences, or at least once to one of the six barcode sequences. These were broken into four categories (with some overlap) based on the observed combinations of barcode and amplicon sequences (see Figure 4):\n\n1. Repeated identical amplicons aligned in the same direction\n\n2. At least two distinct amplicons\n\n3. At least two distinct barcodes\n\n4. Disagreement between barcode and amplicon\n\nChimeric read categories are not disjoint: different categories may intersect with each other. Reads that mapped to repeated identical, but reverse-complemented sequences, are not included in these chimeric results, as it was not possible to distinguish at the base sequence level between such a duplicated sequence fragment and a 2D read with hairpin.\n\nThe highest proportion of chimeric reads was associated with repeated identical amplicons, with 3441 reads seen (75% of all definitively chimeric reads). This suggests that an amplicon sequencing procedure will be particularly susceptible to read chimerism, as the same sequence will appear in increased abundance compared to an untargeted sequencing approach. 
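The four-way categorisation above can be sketched as a small classifier. This is a toy re-implementation under assumptions, not the paper's filtering script: each read is represented as an ordered list of (kind, name, strand) mappings, and the expected barcode-to-amplicon pairing used for category 4 is hypothetical.

```python
def classify_chimera(hits):
    """Assign the four (overlapping) chimera categories to one read.

    `hits` is the ordered list of mappings along a basecalled read:
    (kind, name, strand) with kind in {"amplicon", "barcode"}.
    """
    cats = set()
    amp = [h for h in hits if h[0] == "amplicon"]
    bar = [h for h in hits if h[0] == "barcode"]
    # 1. Repeated identical amplicons in the same orientation
    #    (reverse-complement repeats are excluded: at base level they
    #    are indistinguishable from a 2D read with hairpin)
    seen = set()
    for _, name, strand in amp:
        if (name, strand) in seen:
            cats.add("repeated_amplicon")
        seen.add((name, strand))
    # 2. At least two distinct amplicons
    if len({name for _, name, _ in amp}) > 1:
        cats.add("distinct_amplicons")
    # 3. At least two distinct barcodes
    if len({name for _, name, _ in bar}) > 1:
        cats.add("distinct_barcodes")
    # 4. Barcode/amplicon disagreement, under a hypothetical expected
    #    pairing (illustration only)
    expected = {"NB05": "Ifnb1", "NB04": "Ifna2", "NB11": "Actb"}
    amp_names = {name for _, name, _ in amp}
    for _, b, _ in bar:
        if b in expected and amp_names and expected[b] not in amp_names:
            cats.add("barcode_amplicon_disagreement")
    return cats
```

A read carrying two different barcoded amplicons would land in categories 2 and 3 at once, matching the note that the categories overlap.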
The low-temperature overnight ligation had a much higher proportion of repeated amplicons than the quick ligation; for this category, the quick ligation appears to have been better at reducing the occurrence of chimeric reads, despite prior expectations. Overall, however, of the definitively chimeric reads, 2869 included at least one overnight barcode (1.8% of 159,188 amplicon-mapped reads with an overnight barcode), and 1203 included at least one quick barcode (2.6% of 45,850 amplicon-mapped reads with a quick barcode). So while the overnight ligation appears to have somewhat reduced the overall rate of chimeric reads, a substantial proportion of chimeric reads still remains.\n\nIf a cassette of adjacent Ifna genes were transcribed together, it is possible that this cassette could be amplified together as a single sequence. These sequences would appear to be chimeric (and fall into the \"Repeated amplicons\" category), but wouldn’t have any intermediate barcodes. The count similarities for repeated Ifna, Ifnb1 and Actb genes in Table 1 suggest that this is not happening at a significant level.\n\nOnly categories with a count of 5 or more are displayed.\n\nAfter elimination of definitively chimeric reads, 256,620 reads remained that appeared to map uniquely to single sequences (see Figure 5). A small proportion of these sequences (14,223; or 5.5%) had detectable barcode sequences, but did not map to any amplicons (i.e. mappable to an overnight or quick barcode sequence only). It is expected that these unmapped barcoded sequences were unamplified mouse cDNA sequences.\n\n(A) Amplicon counts split by barcode type. (B) Sequence only, quick barcode, and overnight barcode counts for amplicon-mapped sequences.\n\nA difference in read counts was observed between overnight-barcoded sequences and quick-barcoded sequences (77.8% overnight, 22.2% quick), which was consistent with the difference in input amount observed during sample preparation. 
An attempt was made during sample preparation to add in the three different amplicon preparations in equimolar quantities, which was more successful for the Actb preparation (33.6%) than it was for the Ifna and Ifnb preparations (42.7% and 23.7%, respectively).\n\nAn additional categorisation of Ifna family members (see Supplementary File 3) was attempted, but is not presented here as it detracts from the main chimeric read investigation. Intermediate results and a processing script from this categorisation are available in verbose form as Supplementary File 4.\n\nA few of the reads were investigated at the raw signal level to make sure that the electrical trace was in agreement with the base-called signal. A demonstrative signal trace for a non-chimeric 2D read comprising a single barcode-adapted amplicon is shown in Figure 6. Read traces typically began with a high-current (but relatively uniform) open pore state, followed by an intermediate stall signal (also fairly uniform), after which the highly variable sequence trace began. Hairpin adapters could be easily identified in the raw signal as a bridge structure a little over halfway through a 2D sequence.\n\nThe recorded signal for this read starts with a very short open pore blip, followed by a long stall of 2.5s, then an NB05-flanked coding Ifnb1 sequence that took just under 3s to transition through the pore, then an NB05-flanked non-coding Ifnb1 sequence that took 2.5s to transition through the pore. Note: These figures have been annotated with approximate region boundaries based on the order of hits to the base-called sequence.\n\nA number of situations were observed in the basecalled sequence where ligation during sample preparation seems to have occurred, and in some cases this ligation resulted in multiple hairpin adapters being ligated in the same sequence. 
One such occurrence of this is seen in Figure 7, where two overnight-barcoded sequences from two different amplicons (Ifnb1 and Ifna2) were joined together. Because two barcoded amplicons were concatenated, this ligation must have happened after the barcoding step of sample preparation (i.e. during adapter ligation).\n\nThe recorded signal begins with a very short open pore state (0.1s), followed by a long stall (2.5s), then an NB04-flanked Ifna2 non-coding sequence with a transition time of 2.5s. At this stage there appears to be the beginning of a hairpin sequence that is finished by a pore stall. This was followed by a coding Ifnb1 sequence with a transition time of 2s, then a hairpin, then an NB05-flanked non-coding Ifnb1 sequence (2.5s), and finally an NB04-flanked coding Ifna2 sequence (2.5s). Barcodes detected from this read (NB04/NB05) suggest that the chimeric sequence was likely formed during overnight ligation.\n\nThis finding has potential implications for other sequencing technologies, as the ligation process used for sample preparation is unlikely to be specific for nanopore sequencing. The formation of chimeric reads during sample preparation may be one explanation for the index switching phenomenon seen in Illumina-sequenced reads (e.g. see refs. 22–24), and presents a substantial problem for dual-indexed reads where identical indexes are used for different samples. Where dual-indexed reads are not used, ligation of reads with the same index may still be problematic depending on the particular sequencing application.\n\nThere were 8 instances where both an overnight and a quick barcode were observed in the basecalled sequence. In all such cases, there appears to have been a very short pore-protein dissociation between the sequencing of the two sequences (i.e. these were chimeric reads formed from in-silico ligation). The dissociation was only noticeable after inspecting the raw signal: a very short blip in the signal that matched the open pore current (e.g. 
see Figure 8).\n\nThe recorded signal begins with a long open pore period (2.9s), and a short stall (0.1s), followed by NB11-flanked coding and non-coding Actb sequences (transition time of 2.5s for each). There is a very short open-pore blip at around 8s, followed by a short stall (0.1s), then NB06-flanked coding and non-coding Actb sequences (transition time of 2.5s for each).\n\nIt is likely to be the case that similar situations involving fast pore reloading are present in other reads, but not easily detectable from the called sequence because other barcode/amplicon combinations fit the expected base calling pattern. Considering that this situation can happen with non-identical sequences, software that is able to flag the presence of dissociation and/or stall events that are not at the start of the raw signal would be useful, as these features suggest that the base call is not likely to be a correct single sequence.\n\nThe imminent release of ONT’s R9.5 flow cells and 1D2 base calling will exploit this phenomenon of fast sequence loading into pores in order to produce high-accuracy reads derived from a combined template/complement base call (i.e. replacing the current hairpin-based 2D call). This replaces the 2D sample preparation process that we used for this investigation (see 25).\n\n\nConclusions\n\nIt is apparent from our investigation that chimeric reads can exist in the output of sequencing runs, and we recommend that researchers consider this possibility when interpreting their own results. As a result, it is a good idea to include easily-detectable adapters when sequencing DNA. 
These adapters, particularly if present at both ends of a sequence, will help substantially in the identification (and if necessary, filtering) of concatenated sequences that are not native to the sample.\n\nAlthough a non-negligible 1.7% of reads were found to have post-amplification chimeric elements, careful quality control of reads after long-read sequencing should be able to identify and exclude the majority of chimeric reads that are produced during a sequencing run.\n\n\nData availability\n\nRaw read signal and basecalled reads have been uploaded to ENA under accession number PRJEB20601. Additional supplementary scripts used for FASTQ file filtering, mapping, and raw signal investigation are available as part of David Eccles’ bioinformatics script repository (doi: 10.5281/zenodo.556966)26. The following scripts from that repository were used for intermediate discovery and result generation:\n\nmaf_bcsplit.pl Converting MAF format to machine-readable CSV with forward-oriented location information\n\npos_aggregate.pl Merging adjacent MAF matches to the same target sequence in the same orientation\n\nfastx-fetch.pl Retrieving sequences from a FASTQ/FASTA file given a list of identifiers (possibly as a text file)\n\nfastx-length.pl Generating length information and aggregate statistics for a FASTQ/FASTA file\n\nlength_plot.r Generating \"digital electrophoresis\" image and read density plots given a file containing length information\n\nporejuicer.py Extracting raw data and called FASTQ files from FAST5 files\n\nA rough shell command script (including additional dead-end attempts at discovery & analysis) is provided for reproduction and/or extension of these findings to other investigations (see Supplementary File 6).",
"appendix": "Author contributions\n\n\n\nRW: Sample preparation and QC; CP: Mouse injections, RNA extraction; FR: Project oversight; OL: Sample preparation, project design and oversight; DE: DNA sequencing and bioinformatics analysis. All authors contributed towards the preparation of the manuscript.\n\n\nCompeting interests\n\n\n\nThe R9.4 flow cell and sequencing kit (SQK-LSK208) used for this experiment were provided free of charge by ONT as replacements for a purchased kit and flow cell where the phenomena of chimeric reads was initially discovered. ONT provided advice regarding the sample preparation protocols, including the suggestion of a slow overnight ligation step.\n\n\nGrant information\n\nThis work was funded in full by Health Research Council of NZ Independent Research Organization (IRO) funding to the Malaghan Institute of Medical Research (grant number HRC14/1003).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nWe would like to thank the Nanopore Community for providing help and insightful discussion for this investigation.\n\n\nSupplementary material\n\nSupplementary File 1: Base calling summary from Albacore v0.7.5.\n\nClick here to access the data\n\nSupplementary File 2: Reference sequences used for the initial amplicon mapping.\n\nClick here to access the data\n\nSupplementary File 3: Reference sequences used for Ifna paralog mapping.\n\nClick here to access the data\n\nSupplementary File 4: R script and intermediate data files used for Ifna-family gene counting.\n\nClick here to access the data\n\nSupplementary File 5: R script and intermediate data files used for chimeric read filtering.\n\nClick here to access the data\n\nSupplementary File 6: Shell/process script for reproducing the data analysis.\n\nClick here to access the data\n\n\nReferences\n\nLevy SE, Myers RM: Advancements in Next-Generation Sequencing. 
Annu Rev Genomics Hum Genet. 2016; 17(1): 95–115. PubMed Abstract | Publisher Full Text\n\nMikheyev AS, Tin MM: A first look at the Oxford Nanopore MinION sequencer. Mol Ecol Resour. 2014; 14(6): 1097–1102. PubMed Abstract | Publisher Full Text\n\nChandler J, Camberis M, Bouchery T, et al.: Annotated mitochondrial genome with Nanopore R9 signal for Nippostrongylus brasiliensis [version 1; referees: 1 approved, 2 approved with reservations]. F1000Res. 2017; 6: 56. Publisher Full Text\n\nReuter JA, Spacek DV, Snyder MP: High-throughput sequencing technologies. Mol Cell. 2015; 58(4): 586–597. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWalter MC, Zwirglmaier K, Vette P, et al.: MinION as part of a biomedical rapidly deployable laboratory. J Biotechnol. 2016; pii: S0168-1656(16)31640-6. PubMed Abstract | Publisher Full Text\n\nCastro-Wallace SL, Chiu CY, John KK, et al.: Nanopore DNA sequencing and genome assembly on the International Space Station. bioRxiv. 2016. Publisher Full Text\n\nUrban JM, Bliss J, Lawrence CE, et al.: Sequencing ultra-long DNA molecules with the Oxford Nanopore MinION. bioRxiv. 2015. Publisher Full Text\n\nSimpson JT, Workman RE, Zuzarte PC, et al.: Detecting DNA cytosine methylation using nanopore sequencing. Nat Methods. 2017; 14(4): 407–410. PubMed Abstract | Publisher Full Text\n\nDeschamps S, Mudge J, Cameron C, et al.: Characterization, correction and de novo assembly of an Oxford Nanopore genomic dataset from Agrobacterium tumefaciens. Sci Rep. 2016; 6(1): 28625. PubMed Abstract | Publisher Full Text | Free Full Text\n\nQuick J, Grubaugh ND, Pullan ST, et al.: Multiplex PCR method for MinION and Illumina sequencing of Zika and other virus genomes directly from clinical samples. bioRxiv. 2017. Publisher Full Text\n\nBenítez-Páez A, Portune KJ, Sanz Y: Species-level resolution of 16S rRNA gene amplicons sequenced through the MinION™ portable nanopore sequencer. Gigascience. 2016; 5(1): 4. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nJain M, Koren S, Quick J, et al.: Nanopore sequencing and assembly of a human genome with ultra-long reads. bioRxiv. 2017. Publisher Full Text\n\nHargreaves AD, Mulley JF: Assessing the utility of the Oxford Nanopore MinION for snake venom gland cDNA sequencing. PeerJ. 2015; 3: e1441. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRand AC, Jain M, Eizenga JM, et al.: Mapping DNA methylation with high-throughput nanopore sequencing. Nat Methods. 2017; 14(4): 411–413. PubMed Abstract | Publisher Full Text\n\nSmith AM, Jain M, Mulroney L, et al.: Reading canonical and modified nucleotides in 16S ribosomal RNA using nanopore direct RNA sequencing. bioRxiv. 2017. Publisher Full Text\n\nConnor LM, Tang SC, Cognard E, et al.: Th2 responses are primed by skin dendritic cells with distinct transcriptional profiles. J Exp Med. 2017; 214(1): 125–142. PubMed Abstract | Publisher Full Text | Free Full Text\n\nvan Pesch V, Lanaya H, Renauld JC, et al.: Characterization of the murine alpha interferon gene family. J Virol. 2004; 78(15): 8219–8228. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBrinkmann V, Geiger T, Alkan S, et al.: Interferon alpha increases the frequency of interferon gamma-producing human CD4+ T cells. J Exp Med. 1993; 178(5): 1655–1663. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCamberis M, Le Gros G, Urban J Jr: Animal model of Nippostrongylus brasiliensis and Heligmosomoides polygyrus. Curr Protoc Immunol. 2003; Chapter 19: Unit 19.12. PubMed Abstract | Publisher Full Text\n\nDémoulins T, Baron ML, Kettaf N, et al.: Poly (I:C) induced immune response in lymphoid tissues involves three sequential waves of type I IFN expression. Virology. 2009; 386(2): 225–236. PubMed Abstract | Publisher Full Text\n\nFrith MC, Hamada M, Horton P: Parameters for accurate genome alignment. BMC Bioinformatics. 2010; 11(1): 80. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nSinha R, Stanley G, Gulati GS, et al.: Index switching causes “spreading-of-signal” among multiplexed samples in Illumina HiSeq 4000 DNA sequencing. bioRxiv. 2017. Publisher Full Text\n\nHadfield J: Index mis-assignment between samples on HiSeq 4000 and X-Ten. Core-Genomics Blog, 2016. Reference Source\n\nBushnell B: Introducing CrossBlock, a BBTool for removing cross-contamination. SEQanswers discussion thread, 2017. Reference Source\n\nBrown C: GridION X5 - the Sequel. Technology presentation, 2017. Reference Source\n\nEccles D (gringer): gringer/bioinfscripts: Chimeric read update. 2017. Data Source"
}
|
[
{
"id": "22553",
"date": "17 May 2017",
"name": "Keith E. Robison",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe authors have provided a useful report on technical aspects of the emerging Oxford Nanopore MinION DNA sequencing technology. As noted in the paper, the library preparation methods scrutinized here are commonly found in multiple advanced DNA sequencing technologies, and so lessons learned in this work are likely applicable elsewhere.\nIn the methods section, the authors report injected \"dead infectious\" worms, but not how the worms were killed.\nIt would be greatly preferable for all of the input DNA amounts to library preparations to be given in both mass and fmol. While it is common to report masses of DNA, the ligations really are dependent on the availability of DNA ends.\n\nThe point Figure 8 is trying to convey would be greatly enhanced by adding a zoom of the region around 8s in the plot in which the temporal sequence barcodeNB11-open pore--stall-barcode NB06 is seen. Zooms of other transitions should be considered as well.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
},
{
"id": "23252",
"date": "19 Jun 2017",
"name": "Winston Timp",
"expertise": [
"Reviewer Expertise Biophysics",
"Sequencing",
"Epigenetics"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nIn this manuscript, the authors discuss the detection and potential sources of chimeric reads from minION (nanopore) sequencing. Though the manuscript has some interesting analyses and ideas, I have some problems with the results that should be addressed, specifically points 1, 3, 4, 5 and 7:\nThough 2D libraries were prepared - only *0.3%* of the reads were actually 2D, the vast majority were 1D. This is in my view quite surprising - though with 2D libraries I have seen plenty of 1D reads mixed in, this level of 2D/1D for a 2D prep suggests something strange upstream of sequencing is occurring. However, the authors decline to address it, “The reasons behind this basecall failure were not investigated”. I think this must be addressed more carefully to understand the results.\n\nI feel the low % of 2D reads is important because it may play into the source of the chimeras - if the 2D calling is failing due to heterogenous DNA strands - i.e. hybridization of an IFN to an Actb for example, then end polishing and adapting would lead to a 2D read where the two strands don’t match, hence called as 1D.\n\nThe authors suggest that amplicon sequencing is more susceptible to chimeras because “the same sequencing will appear in increased abundance” - I’m not clear on why that makes chimeras more frequent, just that it makes them more likely to be easily detectable.\n\nThe authors discuss “multiple hairpin adapters being ligated in the same sequence”. 
I don’t understand how the authors think this is possible. There are only two free ends of DNA, and if there are hairpins on both, the DNA will not be able to enter the pore. Instead I suggest it could be the proposed “in silico” chimerism the authors later discuss.\n\nPCR chimeras are not unknown in the literature - having been described, for example, in Sanger and 454 here (PMID: 21212162, 20833233). The authors’ assumption that the chimerism is occurring downstream of PCR needs to be demonstrated - Figure 3 suggests that the length of the chimera is not outside the range of either Illumina or Sanger sequencing, so could be easily validated with these technologies.\n\nBut - given the multiple ligation steps of this protocol, it seems likely that the dA-tailing failing some fraction of the time could result in blunt-end ligation and chimeric reads.\n\nHow does enrichment look comparing the overnight to quick ligation for the different categories of chimerism detailed? The only results given are overall chimerism.\n\nThe authors only tried overnight ligation/quick ligation for the last ligation step, but not for the barcode ligation step. I also wonder if a PCR-barcode may have given better results - the multiple ligations may have led to a higher rate of chimeras, as the end-polishing likely had some fraction of blunt-ended amplicons.\n\nAnother possible point was that the authors may not have added enough (relative) adapters - the relatively high concentration of template allowed self-ligations to be more frequent. Adapter dimers are probably easier to eliminate in this case than chimeras.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Partly\n\nAre sufficient details of methods and analysis provided to allow replication by others? 
Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nNot applicable\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": [
{
"c_id": "2810",
"date": "20 Jun 2017",
"name": "David Eccles",
"role": "Author Response",
"response": "Thanks for you comments. We will be incorporating your suggested changes into a revised manuscript. To make sure we've got the right idea about your suggestions, here are my initial impressions: I understand what happened now after talking to Forrest Brennen at the London Calling conference this year. 2D reads were not called because the older hairpin was used. Even though we used a newer 2D kit, the barcode kit had the old hairpin sequence in it (which wasn't detected by the Albacore caller used at the time). This mis-call has either been fixed now, or will be fixed soon -- I'll re-call the reads with the most recent Albacore and if no improvement will talk to ONT about the correct software tweaks to fix it. In any case, plenty of hairpin sequences were detected in the linear/1D base calls. We initially thought that the failure of 2D was due to chimeric reads, because it seemed to be occurring at the 2D alignment step. However, reads that were very obviously not chimeric were still failing the 2D calling. There is a chance that a properly called chimeric read will fail the 2D alignment step, but I expect it will still have a called template + complement sequence. I don't recall seeing many situations where the chimerism had happened on a single strand; it was mostly double-stranded fragments that had joined together. My theory on why amplicon sequencing is more susceptible to chimeras is that it encourages the formation of base pairing structures (e.g. quadruplexes) that bring the ends of similar sequences closer to each other. I don't know how deep we should go into this; it's a hypothesis about why they could be in higher abundance for our specific experiment, but we haven't tested whether or not amplicon sequencing runs have a higher rate of chimeric reads. 
Multiple hairpin adapters do make sense; David Stoddart (ONT sample prep guru) was with me when I was doing some \"napkin drawings\" of the structure that was formed by a 3-hairpin sequence, and helped correct a bit of the structure. We should add that into the paper (I think he kept the drawing, but I can make another one). PCR chimeras may exist in our results, but appear to be of low abundance according to the electrophoresis plot, and I've tried to analyse the results in such a way that PCR chimeras would be excluded. Our experimental design was such that barcoding (and mixing of separate amplicons) happened after the PCR was done, so the results should be at worst an underestimate of chimerism, and PCR chimeras should be observable in the data. Yes; dA-tailing failure seems to be a likely explanation. Due to the physics of chemical reactions, there are going to be some that don't work. However, it is a little bit curious that the overnight ligation produced more chimeric reads. If the chimerism were due solely to dA-tailing failure, then increased abundance for overnight ligation doesn't make sense. Different categories of chimerism are outlined in Table 1, and the four categories are broken down into overnight/quick in figure 5. I've included a full text description of each *read* in the supplementary information, but we felt that it was overly complex to include all those details in a graph. The overnight/quick difference was unexpected (particularly in the direction that it happened). While ONT have discontinued their 2D hairpin kit, we would still be able to carry out a subsequent investigation in the future to look at overnight vs quick ligation during barcoding. We tried to add the recommended amount of adapters to the samples, and the called results suggest that adapter dimers were minimal. Those situations are very easy to pull out at the analysis stage, because there are no amplicon sequences between adapters."
}
]
}
] | 1
|
https://f1000research.com/articles/6-631
|
https://f1000research.com/articles/5-93/v1
|
21 Jan 16
|
{
"type": "Method Article",
"title": "A novel data storage logic in the cloud",
"authors": [
"Bence Mátyás",
"Máté Szarka",
"Gábor Járvás",
"Gábor Kusper",
"István Argay",
"Máté Szarka",
"Gábor Járvás",
"Gábor Kusper",
"István Argay"
],
"abstract": "Databases which store and manage long-term scientific information related to life science are used to store huge amounts of quantitative attributes. Introduction of a new entity attribute requires modification of the existing data tables and the programs that use these data tables. The solution is increasing the virtual data tables while the number of screens remains the same. The main objective of the present study was to introduce a logic called Joker Tao (JT) which provides universal data storage for cloud-based databases. It means all types of input data can be interpreted as an entity and attribute at the same time, in the same data table.",
"keywords": [
"Joker Tao",
"NoSQL",
"Cloud",
"Database",
"Life science",
"Physical data table",
"Virtual data table",
"RDBMS"
],
"content": "Introduction\n\nDatabases which store and manage long-term scientific information related to life science are used to store huge amount of quantitative attributes. This is specially true for medical databases1,2. One major downside of these data is that information on multiple occurrences of an illness in the same individual cannot be connected1,3,4. Modern database management systems fall into two broad classes: Relational Database Management System (RDBMS) and Not Only Structured Query Language (NoSQL)5,6. The primary goal of this paper is to introduce a novel database model which provides an opportunity to store and manage each input data in one (physical) data table while the data storage concept is structured. JT can be defined as a NoSQL engine on an SQL platform that can serve data from different data storage concepts without several conversions.\n\n\nMethods\n\nThe technical environment is Oracle Application Express (Apex) 5.0 cloud-based technology. Workstation: OS (which is indifferent) + internet browser (Chrome). The Joker Tao logic (www.jokertao.com) can be applied in any RDBMS system (e.g. www.taodb.hu). Specification of the physical data table structure was determined with -ID (num) as the identifier of the entity, which identifies the entity between the data tables (not only in the given data table); -ATTRIBUTE (num) is the identifier of the attribute; -SEQUENCE (num) which is used in the case of a vector attribute; and -VALUE (VARCHAR2) which is used for storing values of the attributes. The codes which are stored in the Attribute column are also defined, sooner or later, in the ID column. At that time the attribute becomes an entity. In every case, the subjectivity determines the depth of entity-attribute definition in the physical data table. Firstly, we demonstrate a traditional (relational) data table structure (Table 1).\n\nFollowing this, the presented data table has been modified step by step. 
At the end of these steps, the JT data storage structure is created. The first step is the technical data storage. In Table 2, technical data will be stored which describes exactly what the virtual data table stores in the physical data table.\n\nIn the second step, the identifiers assigned to the attributes are displayed (Table 3).\n\nIn the third step, identifiers assigned to the entities are also displayed (Table 4). These identifiers are assigned to each cell of the entity. These identifiers are determined by the developer. The values of these identifiers can be any natural number that has not already been used in the ID column.\n\nIn the fourth step, the attribute identifiers are also assigned to each cell (Table 5). These identifiers are determined by the developer. The values of these identifiers can be any natural number that has not already been used in the Attribute column.\n\nIn the fifth step, the initial value of the cell is inserted as the Value of the JT structure (Table 6). From this stage, the developer uses identifiers (which were defined in the previous steps) instead of attribute names.\n\nThe final step is to rotate the traditional data table structure 90 degrees. This means each virtual data table is defined in one physical data table. With these steps the developer can design one data table to store each entity, attribute and formula in a database. The above described method can be applied manually. 
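The manual steps above amount to rotating each traditional record into (ID, ATTRIBUTE, SEQUENCE, VALUE) rows. A rough sketch of that rotation (in Python rather than the authors' Java, with hypothetical attribute identifiers):

```python
def to_jt_rows(entity_id, attr_ids, record):
    """Rotate one traditional record into JT physical rows.

    `attr_ids` maps attribute names to the numeric identifiers assigned
    by the developer; SEQUENCE is fixed at 1 for scalar attributes.
    Illustrative only, not the authors' conversion code.
    """
    return [(entity_id, attr_ids[name], 1, str(value))
            for name, value in record.items()]

attr_ids = {"name": 2001, "weight_kg": 2002}   # hypothetical identifiers
rows = to_jt_rows(101, attr_ids, {"name": "sample A", "weight_kg": 0.35})
# each cell of the traditional table becomes one physical JT row
```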
For automatic conversion we created the Java code below7:\n\n\n\n\nResults\n\nThe resulting table structure is called the JT structure (Table 7).\n\nFrom the JT physical data table, the following definitions can be read out:\n\n• A virtual record is the set of physical rows which have the same ID value.\n\n• A virtual data table is the set of the virtual records which have the same value of the belongs-to-virtual-data-table attribute (code 1010).\n\nThesis: In the JT structure, each attribute needs only one index for indexing in the database.\n\nProof using mathematical induction: The statement is obviously true for the case of one record stored in a data table (in contrast to the RDBMS structure, where developers use multiple indexes to index multiple attributes). In this case the data table appears as shown in Figure 1.\n\nIndex = attribute (num) + value (varchar 2)\n\nFrom the entity's point of view, an ID (numerical) index is also used in JT logic-based systems. This ID does not depend (no transitive dependency) on any attribute. Thus, the entities of the virtual data tables meet the criteria of the third normal form (Figure 2).\n\nThe modes of expanding a data table are: inputting a new entity (Figure 3); inputting a new attribute (Figure 4); and inputting a new virtual data table (Figure 5).\n\nThe indexing remains correct in the case of an expansion to n+1 records as well. With JT logic the user is able to use only one physical data table to define every virtual data table in a database. Therefore, since only one index is required to index each attribute, the statement of the thesis is true in every case of the JT logic-based data table, according to the principle of mathematical induction below. Thesis: For n=1,\n\n\n\nsubstituting one into the equation we get:\n\n\n\nthe result of the operation is 1=1; that is, the induction base is true.\n\nUsing proof by induction we can now show that this is true for the following equation:\n\nn = k, where k is an arbitrary but fixed natural number. 
Therefore, we know that the following operation is true:\n\n\n\nFinally, using n=k+1, we can prove our assumption to be true:\n\n\n\nThe above induction proof shows:\n\n\n\nConducting the mathematical operations we obtain the following:\n\n\n\n\n\nConducting the mathematical operations on the other side we obtain the same:\n\n\n\nThus, the induction step is true. Given that both the induction base and the induction step are true, the original statement is therefore true. In the present study, we explained the JT data storage logic. Our other study focused on the query tests: our previous results7 show that beyond 10,000 records the relational model generates slow (more than one second) queries in a cloud-based environment, while JT remains within the one-second time frame.\n\n\nDiscussion and conclusions\n\nUsing the developed database management logic, each attribute needs only one index for indexing in the database. JT allows any data, whether entity, attribute, data connection or formula, to be stored and managed in one physical data table. Thanks to this flexibility, a formula stored in a database can be utilized for problem solving in another field, regardless of the data storage method used in the present environment. In the JT data model, the entity and the attribute are used interchangeably, so users can expand the database with new attributes after or during the development process. With JT logic, a NoSQL engine is provided within SQL database systems for the storage and management of long-term scientific information.",
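The storage logic described above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' Oracle/Java implementation; the entity ID, the attribute codes 1020 and 1030 and the sample values are hypothetical, while the four-column row shape and the virtual-table attribute code 1010 follow the paper:

```python
# Minimal sketch of JT storage: every datum is one physical row of
# (id, attribute, sequence, value). Code 1010 marks membership of an
# entity in a virtual data table, as defined in the Results section.
PHYSICAL = []  # the single physical data table

def insert_cell(entity_id, attribute, value, sequence=0):
    """Store one cell of a virtual data table as a physical row."""
    PHYSICAL.append({"id": entity_id, "attribute": attribute,
                     "sequence": sequence, "value": str(value)})

# One hypothetical record: entity 2001 belongs to virtual table "patients".
insert_cell(2001, 1010, "patients")   # belongs-to-virtual-data-table attribute
insert_cell(2001, 1020, "Smith")      # hypothetical attribute 1020 = name
insert_cell(2001, 1030, 1967)         # hypothetical attribute 1030 = birth year

def virtual_record(entity_id):
    """Virtual record: the set of physical rows sharing one ID value."""
    return {r["attribute"]: r["value"] for r in PHYSICAL if r["id"] == entity_id}

def virtual_table(name):
    """Virtual data table: all virtual records whose attribute-1010 value matches."""
    ids = {r["id"] for r in PHYSICAL
           if r["attribute"] == 1010 and r["value"] == name}
    return [virtual_record(i) for i in sorted(ids)]

print(virtual_record(2001))  # {1010: 'patients', 1020: 'Smith', 1030: '1967'}
```

Because every datum is one physical row, adding a new attribute or a new virtual data table requires no schema change, only new rows, which is the flexibility the Discussion claims.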
"appendix": "Author contributions\n\n\n\nBM, MSZ, GJ, IA conceived the study. MSZ, GJ, IA and GK tested and developed the method. GK developed the mathematical proof. BM prepared the first draft of the manuscript. All authors were involved in the revision of the draft manuscript and have agreed to the final content.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe first version of JT is a Hungarian product which was developed in 2008 (R.number: INNO-1-2008-0015 MFB-00897/2008) thanks to an INNOCSEK European Union application.\n\nI confirm that the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nThe corresponding author is thankful to György Mátyás, the idea owner of the JT framework. The authors are thankful to Call Tec Consulting Ltd., the first company in Hungary with the highest Oracle certification and the first to validate JT.\n\n\nReferences\n\nGoldacre M, Kurina L, Yeates D, et al.: Use of large medical databases to study associations between diseases. QJM. 2000; 93(10): 669–675. PubMed Abstract | Publisher Full Text\n\nKumar R, Sharma Y, Pattnaik PK: Privacy preservation in vertical partitioned medical database in the cloud environments. IEEE, Futuristic Trends on Computational Analysis and Knowledge Management (ABLAZE), International Conference on, Noida, 2015; 236–241. Publisher Full Text\n\nSimon GE, Unützer J, Young BE, et al.: Large medical databases, population-based research, and patient confidentiality. Am J Psychiatry. 2000; 157(11): 1731–1737. PubMed Abstract | Publisher Full Text\n\nDelgado M, Sánchez D, Martín-Bautista MJ, et al.: Mining association rules with improved semantics in medical databases. Artif Intell Med. 2001; 21(1–3): 241–245. PubMed Abstract | Publisher Full Text\n\nLeavitt N: Will NoSQL databases live up to their promise? Computer. 2010; 43(2): 12–14. 
Publisher Full Text\n\nPereira D, Oliveira P, Rodrigues F: Data warehouses in MongoDB vs SQL Server: A comparative analysis of the querie performance. Information Systems and Technologies (CISTI). 10th Iberian Conference on, 2015; 1–7. Publisher Full Text\n\nMátyás B, Mátyás G, Horváth J, et al.: Data storage and management related to soil carbon cycle by a NoSQL engine on a SQL platform - Joker Tao. J Agr Inform. 2015; 6(3): 67–74. Publisher Full Text"
}
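The paper's single-index thesis can be illustrated with a small sketch (hypothetical rows and attribute codes, apart from code 1010, which the paper defines as the belongs-to-virtual-data-table attribute): since every virtual data table shares the one physical table, a single composite index on (attribute, value) serves lookups in all of them.

```python
# Sketch of the single-index thesis: one (attribute, value) -> {ids} index
# answers queries against every virtual data table, because all virtual
# tables live in the same physical table. All data below are hypothetical.
from collections import defaultdict

rows = [  # (id, attribute, sequence, value) physical rows
    (1, 1010, 0, "books"),   (1, 20, 0, "Dracula"),
    (2, 1010, 0, "books"),   (2, 20, 0, "Frankenstein"),
    (3, 1010, 0, "authors"), (3, 30, 0, "Stoker"),
]

index = defaultdict(set)            # the one index: (attribute, value) -> ids
for rid, attr, _seq, val in rows:
    index[(attr, val)].add(rid)

# The same index serves lookups in any virtual table:
print(index[(20, "Dracula")])       # {1}     - entity in virtual table "books"
print(index[(30, "Stoker")])        # {3}     - entity in virtual table "authors"
print(index[(1010, "books")])       # {1, 2}  - all members of table "books"
```

No per-table or per-attribute index objects are ever created, which is the point the induction proof in the Results section formalizes.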
|
[
{
"id": "12375",
"date": "15 Feb 2016",
"name": "Jan Lindström",
"expertise": [],
"suggestion": "Not Approved",
"report": "In this paper the authors introduce a new logic called Joker Tao (JT), which provides universal data storage for cloud-based databases. However, the paper is very poorly written. Firstly, the proposed logic is not presented in enough detail for the reader to understand and validate the method. The authors should research how the relational model is presented and grounded in rigorous relational calculus and algebra. Based on this research, the paper should be rewritten on a rigorous mathematical foundation, with clear examples. Secondly, one table-based example is far from convincing, and the provided Java program is unnecessary. The length of the paper should be greatly increased to contain a detailed description of the JT method and give examples. Lastly, the presentation is so poor that it is not even clear how queries against the resulting JT structure can be executed. To be honest, currently the paper looks more like computer-generated rubbish than a real scientific paper.",
"responses": []
},
{
"id": "12373",
"date": "25 Feb 2016",
"name": "Kavita Sunil Oza",
"expertise": [],
"suggestion": "Approved",
"report": "The work demonstrated in the paper is good and well explained. The algorithmic complexity of the work is not mentioned, but this is not essential: given today's high-speed processors, time complexity may not matter much. Some more references could have been added, but this is not mandatory, as the number of references is sufficient.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-93
|
https://f1000research.com/articles/6-1454/v1
|
14 Aug 17
|
{
"type": "Research Article",
"title": "Phylogeny and biogeography of the carnivorous plant family Droseraceae with representative Drosera species from Northeast India",
"authors": [
"Devendra Kumar Biswal",
"Sureni Yanthan",
"Ruchishree Konhar",
"Manish Debnath",
"Suman Kumaria",
"Pramod Tandon",
"Sureni Yanthan",
"Ruchishree Konhar",
"Manish Debnath",
"Suman Kumaria"
],
"abstract": "Background: Botanical carnivory is spread across four major angiosperm lineages and five orders: Poales, Caryophyllales, Oxalidales, Ericales and Lamiales. The carnivorous plant family Droseraceae is well known for its wide range of representatives in the temperate zone. Taxonomically, it is regarded as one of the most problematic and unresolved carnivorous plant families. In the present study, the phylogenetic position and biogeography of the genus Drosera are revisited by taking two species of the genus (D. burmannii and D. peltata) found in Meghalaya (Northeast India). Methods: The purposes of this study were to investigate the monophyly, reconstruct phylogenetic relationships and ancestral area of the genus Drosera, and to infer its origin and dispersal using molecular markers from the whole ITS (18S, 28S, ITS1, ITS2) region and ribulose bisphosphate carboxylase (rbcL) sequences. Results: The present study recovered most of the findings of previous studies. The basal position of Droseraceae within the non-carnivorous Caryophyllales, indicated in the tree topologies, and the fossil findings strongly support a date of origin for Droseraceae during the Paleocene (55-65 mya). Within the family Droseraceae, the sister relationship between Aldrovanda and Dionaea is supported by our ITS and rbcL dataset. This information can be used for further comparative and experimental studies. Conclusions: Drosera species are well suited as model systems for addressing a wide array of questions concerning evolutionary dynamics and ecological processes governing botanical carnivory.",
"keywords": [
"Botanical carnivory",
"Droseraceae",
"Ancestral area reconstruction",
"Biogeography",
"Taxongap"
],
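The study's phylogenetic analyses rest on nucleotide substitution models (the Methods evaluate GTR, HKY, TN93, T92, K2 and JC). As a minimal, self-contained illustration of the simplest of these, the sketch below computes the Jukes-Cantor (JC) corrected distance d = -(3/4)·ln(1 - 4p/3) from the observed proportion p of differing sites; the sequences are toy fragments, not the study's data:

```python
# Jukes-Cantor distance sketch: corrects the observed proportion of
# differing sites for unseen multiple substitutions. Toy sequences only.
import math

def p_distance(seq1, seq2):
    """Proportion of differing sites between two aligned sequences."""
    assert len(seq1) == len(seq2), "sequences must be aligned to equal length"
    diffs = sum(a != b for a, b in zip(seq1, seq2))
    return diffs / len(seq1)

def jc_distance(seq1, seq2):
    """Jukes-Cantor corrected evolutionary distance d = -(3/4) ln(1 - 4p/3)."""
    p = p_distance(seq1, seq2)
    return -0.75 * math.log(1 - 4 * p / 3)

a = "ACGTACGTACGTACGTACGT"  # hypothetical aligned ITS fragments
b = "ACGTACGAACGTACGTACCT"
print(round(p_distance(a, b), 2))   # 0.1
print(round(jc_distance(a, b), 4))  # slightly above 0.1: the correction adds back hidden hits
```

Richer models such as Tamura-Nei (used for the final trees) additionally distinguish transition and transversion rates and unequal base frequencies, but the correction logic is the same in spirit.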
"content": "Introduction\n\nThe carnivorous plant family Droseraceae is well known for its complex taxonomic diversity in temperate climatic regions. The family comprises nearly 200 species, with two monotypic genera, Aldrovanda and Dionaea, and one large genus, Drosera (popularly known as sundew), containing the majority of the species1,2,3. The name Drosera is derived from the Greek word meaning ‘dewdrops’. These plants usually exhibit remarkable tolerance to high-stress habitats and have acquired adequate reproductive fitness on the evolutionary ladder for their survival4. The specialized carnivory traps common to all Drosera species are in fact highly modified leaves lined with mucilaginous glandular trichomes or tentacles. Drosera species mostly inhabit regions of the Southern hemisphere and Southwestern Australia. In India, Drosera species are found in some parts of the Northeastern region, the Deccan peninsular region, Southern India and parts of West Bengal5,6. Of the three known Drosera species (D. burmannii Vahl, D. indica L. and D. peltata Thunb.) reported in India, two are found in Meghalaya, i.e., D. burmannii and D. peltata7.\n\nDrosera species can be grouped into five different habits depending on their growth forms: temperate sundews, pygmy sundews, subtropical sundews, tuberous sundews and the petiolaris complex. The diversity of growth forms in this genus is so vast that it comprises annual species forming hibernacula in winter dormancy or underground tubers in extremely dry summers. The long tentacles on the leaves are often brightly coloured and tipped with nectar-secreting glands, adhesive compounds, as well as digestive enzymes. These tentacles move inward to bring as many secretory glands as possible into contact with the prey upon capture. According to Darwin8, glandular formations present in Drosera leaves secrete proteolytic enzymes similar to those found in the animal stomach. 
He also demonstrated that the substances solubilized and decomposed by the action of these enzymes are absorbed by the plant foliage. In some species (for example, D. burmannii), the tentacle motion is quite remarkable, as the glands can bend 180° in just fractions of a second.\n\nMany Drosera species are best known for their valuable natural products. Secondary metabolites from sundews, such as 1,4-naphthoquinones and flavonoids, have significantly contributed to folklore medicinal practices worldwide9. There are reports in ancient literature describing the medicinal usage of different species of Drosera in treating epilepsy10. Many species of Drosera are threatened in India due to their confined distribution and extensive usage in the herbal industry, and have thus been categorized as vulnerable by the International Union for Conservation of Nature6,11,12.\n\nCandolle proposed the first infrageneric classification of Drosera, with two recognized sections based upon the characteristics and morphology of their styles13. Later, Seine and Barthlott14 described three subgenera and 11 sections based on morphological, anatomical, palynological and cytotaxonomical studies. The phylogenetic study of Williams et al.15, based on ribulose bisphosphate carboxylase (rbcL) sequences and morphological data, identified three major lineages within Drosera, with subgenus regia emerging as the first branch, followed by subgenus capensis. Rivadavia et al.16 attempted to understand Drosera systematics based on the rbcL and 18S regions. The study highlighted D. regia and D. arcturi as basal species of Drosera.\n\nThe genus Drosera is distributed across both hemispheres, with ~80 species in Australia, ~30 species in Africa (including North Africa and South Africa), ~30 species in South America, and fewer than 10 species in North America and Eurasia17. 
Phylogeography is not merely an extension of phylogenetic principles to the intraspecific level; rather, it describes population strata by utilizing the information embedded in the geographical patterns of ancestral lineages across the range of a species18. Understanding the process of colonization and population divergence of these species is fundamental to the study of their evolutionary diversification. Previous studies based on rbcL markers16 showed that the South American Drosera species arose from Australian species by dispersal, and that the African species other than D. regia and D. indica arose subsequently from their ancestors in South America. A multidisciplinary study by Rivadavia et al.19 of D. meristocaulis, prevalent in the Neblina highlands of northern South America, proposed a long-distance dispersal from Australia to South America. It was also found that the section Bryastrum diversified from its ancestor about 13-12 MYA, which does not agree with a Gondwanan origin for D. meristocaulis19,20. Rivadavia et al.16 argued for a South African/Australian origin of Drosera. Though the outcomes of their analysis could be attributed to Croizat's Gondwanan vicariance, this origin of Drosera is not supported by recent studies on Droseraceae and their evolution16. This implies that more work needs to be done to fully understand the evolution of the family Droseraceae and of the genus Drosera in particular.\n\nIn the present study, the phylogenetic position and biogeography of the genus Drosera are revisited using the two species of the genus (D. burmannii and D. peltata) found in Meghalaya (Northeast India). 
The purposes of this study were (1) to investigate the monophyly of the genus Drosera and reconstruct the phylogenetic relationships and ancestral area of the genus within the family Droseraceae, (2) to infer the origin and dispersal of Drosera, and (3) to infer the phylogenetic relationships among Aldrovanda, Dionaea, and Drosera, using molecular markers from the whole ITS (18S, 28S, ITS1, ITS2) region and rbcL sequences.\n\n\nMethods\n\nInsectivorous plant species of the genus Drosera were collected from different regions of Meghalaya, according to their present availability. The collected plants included two species of Drosera, viz. D. peltata and D. burmannii (Figure 1 and Figure 2). Drosera burmannii Vahl was collected from Jarain, Jaintia Hills District, Meghalaya (N 25°36ʹ, E 92°15ʹ) and Drosera peltata Sm. was collected from Cherrapunjee, East Khasi Hills District, Meghalaya (N 25°07ʹ, E 91°28ʹ). Identification of these insectivorous plants was carried out at the Botanical Survey of India (BSI), Eastern circle, Shillong, Meghalaya. Herbarium specimens were prepared and deposited at the BSI and the Department of Botany, North-Eastern Hill University (NEHU), Shillong. The specimen voucher numbers (NEHU) and accession numbers (BSI) of Drosera burmannii are 11924 and 86843, and those of Drosera peltata are 11962 and 86840, respectively. We amplified the whole ITS and rbcL regions from all the above-mentioned plants for the proposed work. 
In addition, we collected GenBank data that included these markers from representative species belonging to the genera Nepenthes, Drosera, Aldrovanda, Dionaea and Sarracenia, along with their geographical distribution information (Table 1).\n\n(a) Plant in natural habitat, (b) close up view of leaf, (c) trapped insect, and (d) whole plant.\n\n(a) Plant in natural habitat, (b) close up view of leaf, (c) trapped insects (blue arrows), and (d) whole plant.\n\nFor Drosera sp., the leaves and the stems were taken, washed thoroughly with water to remove all dirt and insect debris, kept in 70% alcohol for a few minutes, dried, and then wrapped in aluminum foil and stored in liquid nitrogen for further use. Total genomic DNA isolation from Drosera was carried out using the DNeasy Plant Mini Kit (Qiagen, USA), according to the manufacturer's instructions with minor modifications (combination of a borate extraction buffer with the DNA extraction kit, and a proteinase K treatment during extraction). The chosen markers were subjected to PCR amplification with the desired forward and reverse primer pairs, as listed in Table 2. Reactions were performed in PCR tubes with a final volume of 100 µl. Each reaction mixture contained 4 µl of genomic DNA (30 ng/µl), 6 µl of dNTP* (2 mM), 6 µl of 10X taq buffer B*, 4 µl of MgCl2 (25 mM)*, 0.8 µl of taq polymerase (3 units/µl)*, and 8 µl of each primer (10 pmol) (Metabion, Germany); the final volume was made up with sterilized Millipore water (*Bangalore Genei, India). DNA amplification was performed in an Applied Biosystems Gene-Amp PCR System 2700 programmed for an initial denaturation of 4 min at 94°C, followed by 35 cycles of 30 sec at 94°C, 1 min at 56°C and 40 sec at 72°C, with a final extension of 10 min at 72°C. Sequencing was carried out at Macrogen, Inc., Korea.\n\nMaximum likelihood. All ITS and rbcL sequences were first aligned using MUSCLE21 and subsequently concatenated using MESQUITE V3.0322. Highly variable sequence regions were excluded from analyses of the extended data set. 
Because initial separate calculations using noncoding spacer regions and coding matK sequences yielded congruent but incompletely resolved topologies, both partitions were combined in all subsequent analyses. Maximum likelihood (ML) analyses were carried out using MEGA 723. To find the best substitution model for our analyses, ML fits of 24 different nucleotide substitution models were performed. Models with the lowest BIC (Bayesian Information Criterion) scores are considered to describe the best substitution pattern. For each model, the AICc value (Akaike Information Criterion, corrected), Maximum Likelihood value (lnL), and the number of parameters (including branch lengths) were also computed. The following models were evaluated for this study: General Time Reversible (GTR); Hasegawa-Kishino-Yano (HKY); Tamura-Nei (TN93); Tamura 3-parameter (T92); Kimura 2-parameter (K2); Jukes-Cantor (JC).\n\nFor the ITS and rbcL dataset, the evolutionary history was inferred using the ML method based on the Tamura-Nei model24. Sequence information for the aligned dataset pertaining to the total number of sites (excluding sites with gaps/missing data), sites with alignment gaps or missing data, invariable (monomorphic) sites, G+C content, parsimony-informative sites, number of haplotypes (h), haplotype gene diversity (Hd), nucleotide diversity per site (Pi), and average number of nucleotide differences (k) was computed and is shown in Table 3. The evolutionary history of the taxa analyzed is represented by the bootstrap consensus tree from 500 replicates of the original dataset25. Branches recovered in fewer than 55% of bootstrap replicates were collapsed. Initial tree(s) for the heuristic search were obtained by applying the Neighbor-Join and BioNJ algorithms to a matrix of pairwise distances estimated using the Maximum Composite Likelihood (MCL) approach. Evolutionary analyses were conducted in MEGA 7. The ML tree was further used in divergence time analysis.\n\nBayesian inference. 
The concatenated dataset from the previous analysis (in nexus format) was further analyzed with MrBayes v.3.1.226. The model of best fit was (GTR + Γ + I), determined with Modeltest. Posterior probabilities were estimated by sampling trees from the posterior probability distribution using the Metropolis-coupled Markov chain Monte Carlo approach (MCMCMC) implemented in MrBayes26, using default priors. The temperature of the heated chain was set to 0.2. Four chains were run three times for 1 million generations each. Consensus trees, clade posterior probabilities, and mean branch lengths were computed from the trees sampled every 10 generations after the burn-in, and the tree was viewed in FigTree v1.3.1. A split decomposition median network was generated in SplitsTree427 from the variable positions in the same dataset.\n\nA visualization analysis tool, TaxonGap 2.4.128, was used to illustrate the sequence divergences within and between species for the candidate markers from the ITS, rbcL and matK regions representing the family Droseraceae.\n\nConsensus structures of the ITS2 regions for the three genera in the Droseraceae family were predicted using LocARNA29 from the Freiburg RNA tools server, which outputs a multiple alignment together with a consensus structure. For the folding, a realistic energy model for RNAs was used that features RIBOSUM-like similarity scoring and realistic gap costs. The high performance of LocARNA is mainly achieved by employing base pair probabilities during the alignment procedure.\n\nA Time-tree was generated using the RelTime method in MEGA 723,30. Divergence times for all branching points in the user-supplied topology for estimating the phylogenetic history and divergence times of Droseraceae were calculated using the ML method based on the Tamura-Nei model24. The estimated log likelihood value of the topology shown is -26876.8062. 
A discrete Gamma distribution was used to model evolutionary rate differences among sites (5 categories (+G, parameter = 2.0910)). The rate variation model allowed for some sites to be evolutionarily invariable ([+I], 0.0000% sites). The tree is drawn to scale, with branch lengths measured in the relative number of substitutions per site. There were a total of 530 positions in the final dataset.\n\nFossil Droseraceae pollen from the Eocene (55-38 MYA)31, together with present-day Aldrovanda vesiculosa and its ancestral species (pollen fossil records) dating to the Miocene in the South Urals, the Eocene in Kazakhstan and Belgium, and the Paleocene in East Germany, were considered as internal calibration points while drawing the Time-tree phylogeny. The first true Drosera pollen appears in sediments from the Miocene (25-5 MYA)31. Based on these studies and the fossil data on Droseraceae31, the following constraints were applied with a normal prior distribution that spanned the full range of nodal age estimates: the most recent common ancestor (MRCA) of Droseraceae (divergence between Drosera and Aldrovanda species) was set with minimum and maximum divergence times of 38 and 55 MYA, respectively; the MRCA of Drosera species was set to 22-5 MYA.\n\nFor biogeographic inference, Bayesian Binary MCMC (BBM) and Statistical Dispersal-Vicariance Analysis (S-DIVA) methods were employed, in which biogeographic reconstructions were averaged over a sample of highly probable Bayesian trees32. In S-DIVA the occurrence of an ancestral range at a node was computed using all alternative reconstruction frequencies generated by the DIVA algorithm for each tree in the data set. To account for both phylogenetic and ancestral-state uncertainty, S-DIVA was applied to the entire posterior distribution of trees. 
The different geographic areas of endemism for the carnivorous plants considered in this study, consistent with their present distribution and with both outgroup and in-group sampling, are outlined in Table 1.\n\n\nResults\n\nThe rbcL gene and ITS regions were separately aligned, and ML and maximum parsimony trees were used for the phylogenetic analyses. The nucleotide sequences were aligned without any insertions or deletions. A total of 52 accessions from the Droseraceae family, including N. khasiana (Nepenthaceae) and S. flava (Sarraceniaceae) (used as outgroups), were considered for revisiting the Drosera phylogeny (Table 1). A separate concatenated dataset of ITS and rbcL was taken for Bayesian phylogeny reconstruction. The consensus core secondary structures of the ITS regions for Drosera species according to their geographical distribution were drawn and are shown in Figure 3. The ML tree was further used for time divergence studies.\n\nNepenthes khasiana and Sarracenia flava were taken as outgroups. Substitutions per site between taxon groups according to the model of best fit (GTR + Γ + I) determined with Modeltest in MEGA 7. Consensus ITS2 core secondary structures drawn in LOCARNA for representative Drosera species based on geographical distribution are shown next to their respective phylogenetic groupings.\n\nMajor clades within Drosera determined via phylogenetic analysis were subjected to relative rate tests using ML estimates of substitutions per site between taxon groups according to the model of best fit (GTR + Γ + I), determined with Modeltest in MEGA 723. The relative rates were intended to provide evidence of whether certain longer branches within Drosera were the result of rate acceleration in individual species. Models with the lowest BIC scores were considered suitable for the analysis. For each model, the AICc value, ML value and the number of parameters (including branch lengths) are also presented (Table 4). 
Evolutionary rates among sites and their non-uniformity were modeled using a discrete Gamma distribution (+G) with 5 rate categories. Estimates of the gamma shape parameter and/or the estimated fraction of invariant sites are shown (Table 4). For each model, assumed or estimated values of the transition/transversion bias (R) are shown, followed by nucleotide frequencies (f) and rates of base substitutions (r) for each nucleotide pair. The sum of r values is normalized to 1 for each model.\n\nFor estimating ML values, a user-specified topology was used. The analysis involved 52 nucleotide sequences. Codon positions included were 1st+2nd+3rd+Noncoding. All positions containing gaps and missing data were eliminated. There were a total of 530 positions in the final dataset. All analyses supported the monophyly of Droseraceae. Aldrovanda and Dionaea species formed a well-supported clade and emerged as a sister branch to the genus Drosera. N. khasiana and S. flava emerged as outgroups. The Bayesian tree and the split tree also agreed with the results of the ML tree, though there were slight changes in the overall topology of Drosera species, with very high support (100 percent bootstrap support [BS] values; 1.0 Bayesian posterior probability). The Drosera species (D. burmannii and D. peltata) from Meghalaya grouped separately and emerged as evolutionarily primitive (Figure 4 and Figure 5).\n\nColour codes represent phylogenetic nodes with percent probability scores. Bayesian phylogeny reconstruction obtained from posterior probabilities for the nodes in the ML tree. The GTR evolutionary model was implemented in MrBayes 3.2 with four chains over 50,000 generations, and trees were sampled every 100 generations.\n\nA DNA barcode marker is judged by its resolving power to discriminate species at generic and infrageneric levels. 
The intra- and inter-specific sequence divergences amongst the candidate markers chosen for the present study showed a comparative pictorial barcode gap in the form of taxon plots for the marker candidates (ITS, matK, rbcL) for species representing the family Droseraceae. The results are summarized in Figure 6. For each species, sequence similarity of the same gene within the same species was high; therefore, the relevant intra-specific variation (shown as dark grey bars) was low. TaxonGap plots have the discriminatory power to gauge better barcode markers when phylogenetic trees for multiple genes need to be compared. Moreover, TaxonGap uses the same scaling for depicting distance values based on individual biomarkers, thus making it straightforward to evaluate multiple genes without the need to compare separate gene trees drawn for each taxonomic unit. In the present study, it emerged that a combination of different markers (rbcL+ITS+matK) would provide better discriminatory power for identifying species within the carnivorous plant diversity.\n\nThe grey and black bars represent the intra- and inter-specific variations, respectively. The thin, black lines denote the smallest inter-specific variation. Names appearing next to the dark bars denote the closest species to that listed on the left.\n\nSeveral genera of Droseraceae are represented by fossil pollen. A single record, called Fischeripollis, from the European Mid-Miocene has been assigned to Dionaea33. Fossil seed information and even leaves have contributed to the understanding of Drosera origins on the geological timescale. Drosera pollen has been recorded from New Zealand since the Lower Miocene34. Several finds of Tertiary pollen from the Mid-Miocene of Europe have been assigned to either Drosera (Droserapollis) or Nepenthes (Droseridites)35. 
The molecular calibration, taking its cue from previous studies, is congruent with the fossil record of Droseraceae pollen, thus testifying to a wide distribution of the progenitors of Aldrovanda in the Droseraceae family since the Late Cretaceous (Figure 7).\n\nDivergence times for all branching points in the user-supplied calibrated topology were calculated using the ML method based on the Tamura-Nei model. A discrete Gamma distribution was used to model evolutionary rate differences among sites [5 categories (+G, parameter = 2.0910)]. Numbers at nodes are median ages in millions of years (Ma) with two internal calibration points. Evolutionary analyses were conducted in MEGA7.\n\nThe RASP tree (Figure 8) indicated that the phylogenetic roots of Dionaea and Aldrovanda originate in the northern hemisphere, while Drosera species would most probably have had an Australasian origin. Apparently all palaeoendemics (D. meristocaulis, D. burmannii, D. arcturi) are scattered throughout the southern hemisphere and also in tropical America. In this respect, the extant fossil record, i.e. the European Miocene fossils, is somewhat noteworthy. A very old age (Cretaceous) can therefore be hypothesized for the whole family, dating back to stages of tectonic development when South America, Africa, and Australia were in closer proximity compared to present-day geographical barriers. From the recent studies, it emerges that Australia is perhaps the secondary center of diversity of the genus Drosera, and most of the Drosera descendants can be assumed to have originated there.\n\nAlternative ancestral ranges of nodes (with frequency of occurrence) are shown in pie chart form. Bootstrap support values/Bayesian posterior probabilities (50% and higher) are indicated near the pie charts in the Bayesian tree. The colour key indicates possible ancestral ranges at the different nodes. 
Nepenthes khasiana and Sarracenia flava were taken as out-groups.\n\n\nDiscussion\n\nThe family Droseraceae was recovered as monophyletic with representative species from Drosera, Dionaea, and Aldrovanda (Figure 3 and Figure 4), confirming that these highly diverse plants merit further investigation with a larger number of markers from different genomic regions. Although this study targeted several popular markers from the nuclear and extrachromosomal regions, comparable markers other than rbcL could not be found in public repositories for other species of the family Droseraceae to substantiate our findings. Carnivorous plants are excellent evolutionary models, yet despite their dramatic evolutionary history, botanical carnivory remains a severely understudied area. In this study, the family Droseraceae was revisited with present-day DNA marker tools from the chloroplast and nuclear regions to address some of the outstanding scientific questions these carnivorous plants pose. Phylogenetic trees based on the concatenated rbcL and ITS markers from the rDNA datasets exhibited 100% bootstrap support in most of the clades. The ML tree for the combined dataset also showed that Dionaea and Aldrovanda form a sister group with 100% bootstrap values (Figure 3). Although the trapping mechanism of Drosera differs markedly from the snap-trap system of Dionaea and Aldrovanda, some structures still show strong resemblance at the molecular level, reflecting homology between them. A strong similarity is seen in the cellular architecture of the stalked glands of Drosera and the trigger hairs of Aldrovanda, whose origin can be traced to adhesive glands seen in the Plumbaginaceae and other families that are out-groups to the family Droseraceae. This study hints at a common evolutionary origin of trapping mechanisms in Drosera, Dionaea and Aldrovanda. 
All these findings attest to the sister relationship of Dionaea and Aldrovanda, indicating a single evolutionary origin of the elaborate snap-trap system in carnivorous plants.\n\nThe clade from D. barbigera to D. glanduligera in Figure 3 covers a wide range of species spread across Australasia. Species in this clade are well adapted to dry environments and have tubers and stout roots. Subspecies with each adaptive trait form a different clade. Except for D. pygmaea, the other species in section Bryastrum have pentamerous flowers and are endemic to southwestern Australia. D. pygmaea has been placed in a different section owing to its unique distributional features and tetramerous flowers36,37. This implies that tetramerous flowers are an autapomorphic character that would have evolved from the pentamerous flowers shared by the other pygmy sundews. More work needs to be done on the different sections of Drosera for a systematic revision of these plants.\n\nFor the species D. burmannii and D. sessilifolia of section Thelocalyx, plesiomorphic pollen features are quite apparent, with simple cohesion similar to Aldrovanda and Dionaea instead of the cross-wall cohesion observed in other Drosera species, except D. glanduligera. These two species share a common ancestor with 100% bootstrap support in the Bayesian phylogeny (Figure 4). Although the overall topology of the family Droseraceae was monophyletic, the genus Drosera showed a polyphyletic nature, with many subclades within the tree. D. uniflora and D. stenopetala formed a sister group, which is also supported by their similar morphological characters. The clades from D. capillaris to D. hamiltonii (Figure 3 and Figure 4) encompass species distributed in Eurasia and America. D. rotundifolia and D. anglica are widely distributed in both Eurasia and North America, while the other species studied are native to North and South America. While the clades from D. graminifolia to D. 
hirtella are distributed in South America, mainly in central and eastern Brazil, the clade from D. collinsiae to D. slackii is composed of African species. The tree from the concatenated rbcL and ITS dataset supports the geographical classification of Drosera species grouped into separate clades. Further, the TaxonGap analysis suggests the combinatorial use of the ITS and rbcL markers to design smart barcodes for delineating and discriminating species with high resolving power (Figure 6).\n\nAlthough the phylogenetic reconstruction approach could reveal some clades to be in agreement with the morphological characters and geographic distribution of Drosera species, it becomes imperative to advance this phylogenetic research with a genome-to-phenome approach by targeting more species and new markers from the genomes of this highly diverse and interesting group of plants.\n\n\nBiogeographic hypotheses\n\nDrosera is widely held to have colonized both hemispheres37, and Australia is the center of diversity of the genus, where more than 80 species thrive38–40. Over 30 species are distributed in northern Africa, and half of the species are distributed in South Africa. South America also has about 30 species, some of which have migrated into North America. Eurasia and North America harbor nearly 10 species, although some of these are cosmopolitan. Aldrovanda is widely distributed in both hemispheres, including Australia and Africa, while Dionaea is restricted to North America.\n\nThe different phylogenetic trees (Figures 3-5 and 7) corroborate some of the previous hypotheses on the origin and dispersal of Drosera species. Australia-to-South America dispersal can be seen in the clade that includes D. burmannii and D. sessilifolia. D. stenopetala has disjunct distributions in South America and New Zealand. 
New Zealand and South American Drosera species have been reported to share close relationships41, and there might be some unknown mechanism of long-distance dispersal between these two landmasses. Dispersal events from Australia to Asia via Southeast Asia have been reported for D. burmannii, D. indica, and D. peltata, without any proper explanation for such events. A larger number of Drosera species are spread across the Southern Hemisphere than the Northern Hemisphere, which implies that the species in the Northern Hemisphere (D. indica, D. capillaris, D. burmannii, D. anglica, D. brevifolia, D. filiformis, D. peltata and D. rotundifolia) would have expanded their distributions to the Southern Hemisphere. Further analyses with more taxa would be required to confirm this inference.\n\n\nConclusions\n\nThe combinatorial use of different markers along with different computational tools, ideally the NeighborNet algorithm42, takes a different approach to inferring species relationships: a relationship network is drawn, rather than forcing the data into a single rigid tree structure, by incorporating MP trees onto an ML tree. The present study corroborates most of the findings of previous studies. The basal position of Droseraceae within the non-carnivorous Caryophyllales, indicated by the tree topologies and fossil findings, strongly supports a date of origin for Droseraceae during the Paleocene (55-65 MYA). Contrary to this hypothesis, which makes the family more ancient, Rivadavia et al.16 argue that the Droseraceae are located close to the tip of the angiosperm phylogenetic tree. Within Droseraceae, the sister relationship between Aldrovanda and Dionaea is supported by the combined rDNA marker dataset [ITS (18S, ITS1, 5.8S, ITS2, 28S) + rbcL]. Our study would further help comparative and experimental studies using carnivorous taxa under similarly strong selective pressures. 
Drosera species are thus genuine plant model systems for addressing a wide array of questions concerning evolutionary and ecological studies governing botanical carnivory.\n\n\nData availability\n\nSequence data have been submitted to GenBank: accession numbers KR081966 – KR081968; KF015998, KF015996; KR081983 - KR081985; KT794003, KT794002, KT285307.1.\n\nThe remaining sequences from previous studies were downloaded from GenBank at NCBI and are outlined in Table 1.",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis work was supported by the Department of Biotechnology, Government of India (http://btisnet.gov.in/; grant ID BT/BI/04/035/98 sanctioned to DKB and PT) and University Grants Commission-Rajiv Gandhi National Fellowship to SY.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nWe acknowledge the support received from the DBT-sponsored Bioinformatics Centre at North-Eastern Hill University, Shillong for carrying out the research work.\n\n\nReferences\n\nJuniper BE, Robins RJ, Joel DM: The carnivorous plants. Academic Press, New York; 1989. Reference Source\n\nKról E, Płachno BJ, Adamec L, et al.: Quite a few reasons for calling carnivores ‘the most wonderful plants in the world’. Ann Bot. 2012; 109(1): 47–64. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMcPherson S: Carnivorous plants and their habitats. Redfern Natural History Productions, Poole, Dorset, England; 2010; 2. Reference Source\n\nEllison AM, Gotelli NJ: Energetics and the evolution of carnivorous plants--Darwin's 'most wonderful plants in the world'. J Exp Bot. 2009; 60(1): 19–42. PubMed Abstract | Publisher Full Text\n\nJayaram K, Prasad MN: Drosera indica L. and D. burmanii Vahl., medicinally important insectivorous plants in Andhra Pradesh-regional threats and conservation. Curr Sci. 2006; 91(7): 943–947. Reference Source\n\nMajumdar K, Datta BK, Shankar U: Community structure and population status of Drosera burmanii Vahl. with new distributional record in Tripura, India. J Ecol Nat Environ. 2011; 3(13): 410–414. Reference Source\n\nJoseph J, Joseph KM: Insectivorous Plants of Khasi and Jaintia Hills, Meghalaya, India: A Preliminary Survey. Calcutta: Botanical Survey of India; 1986. Reference Source\n\nDarwin C: Insectivorous Plants. New York: D Appleton and Company; 1875. 
Reference Source\n\nWang Q, Su J, Zeng L: [The isolation and identification of flavonoids from Drosera burmannii]. Zhong Yao Cai. 1998; 21(8): 401–403. PubMed Abstract\n\nClarke JH: A dictionary of practical Materia Medica. Health Science Press; 1982. Reference Source\n\nRavikumar K, Ved DK: Hundred red listed medicinal plants of conservation concern in Southern India. Bangalore, India: Foundation for Revitalization of Local Health Traditions Publisher limited; 2000. Reference Source\n\nZhuang X: Drosera burmanni. The IUCN Red List of Threatened Species. 2011; e.T169038A6566220. Publisher Full Text\n\nCandolle AD: Droseraceae. In: Prodromus systematis naturalis regni vegetabilis. Treuttel and Wutz, Paris; 1824; 1.\n\nSeine R, Barthlott W: Some proposals on the infrageneric classification of Drosera L. Taxon. 1994; 43(4): 583–589. Publisher Full Text\n\nWilliams SE, Albert VA, Chase MW: Relationships of Droseraceae: a cladistic analysis of rbcL sequence and morphological data. Am J Bot. 1994; 81(8): 1027–1037. Publisher Full Text\n\nRivadavia F, Kondo K, Kato M, et al.: Phylogeny of the sundews, Drosera (Droseraceae), based on chloroplast rbcL and nuclear 18S ribosomal DNA sequences. Am J Bot. 2003; 90(1): 123–130. PubMed Abstract | Publisher Full Text\n\nLowrie A: Carnivorous plants of Australia. Western Australia, Australia: University of Western Australia Press; 1998; 3. Reference Source\n\nAvise JC: Molecular markers, natural history and evolution. Chapman and Hall, J Evol Biol.1994; 7(6): 766–767. Publisher Full Text\n\nRivadavia F, de Miranda VF, Hoogenstrijd G, et al.: Is Drosera meristocaulis a pygmy sundew? Evidence of a long-distance dispersal between Western Australia and northern South America. Ann Bot. 2012; 110(1): 11–21. PubMed Abstract | Publisher Full Text | Free Full Text\n\nYesson C, Culham A: Phyloclimatic modeling: combining phylogenetics and bioclimatic modeling. Syst Biol. 2006; 55(5): 785–802. 
PubMed Abstract | Publisher Full Text\n\nEdgar RC: MUSCLE: a multiple sequence alignment method with reduced time and space complexity. BMC Bioinformatics. 2004; 5: 113. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMaddison WP, Maddison DR: Mesquite: a modular system for evolutionary analysis. 2016; Version 2.75. Reference Source\n\nKumar S, Stecher G, Tamura K: MEGA7: Molecular Evolutionary Genetics Analysis version 7.0 for bigger datasets. Mol Biol Evol. 2016; 33(7): 1870–4. PubMed Abstract | Publisher Full Text\n\nTamura K, Nei M: Estimation of the number of nucleotide substitutions in the control region of mitochondrial DNA in humans and chimpanzees. Mol Biol Evol. 1993; 10: 512–526. PubMed Abstract | Publisher Full Text\n\nFelsenstein J: Confidence limits on phylogenies: An approach using the bootstrap. Evolution. 1985; 39: 783–791. PubMed Abstract | Publisher Full Text\n\nRonquist F, Huelsenbeck JP: MrBayes 3: Bayesian phylogenetic inference under mixed models. Bioinformatics. 2003; 19(12): 1572–1574. PubMed Abstract | Publisher Full Text\n\nHuson DH, Bryant D: Application of phylogenetic networks in evolutionary studies. Mol Biol Evol. 2006; 23(2): 254–267. PubMed Abstract | Publisher Full Text\n\nSlabbinck B, Dawyndt P, Martens M, et al.: TaxonGap: a visualization tool for intra- and inter-species variation among individual biomarkers. Bioinformatics. 2008; 24(6): 866–867. PubMed Abstract | Publisher Full Text\n\nWill S, Joshi T, Hofacker IL, et al.: LocARNA-P: accurate boundary prediction and improved detection of structural RNAs. RNA. 2012; 18(5): 900–14. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTamura K, Battistuzzi FU, Billing-Ross P, et al.: Estimating divergence times in large molecular phylogenies. Proc Natl Acad Sci U S A. 2012; 109(47): 19333–19338. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDegreef JD: Early history Drosera and Drosophyllum. Carnivorous Plant Newsletter. 1989; 18(3): 86–89. 
Reference Source\n\nYu Y, Harris AJ, Blair C, et al.: RASP (Reconstruct Ancestral State in Phylogenies): a tool for historical biogeography. Mol Phylogenet Evol. 2015; 87: 46–49. PubMed Abstract | Publisher Full Text\n\nKrutzsch W: Zur Kenntnis fossiler disperser Tetradenpollen. Paläontologische Abhandlungen, Abt. B: Paläobotanik, Berlin. 1970; 3(3–4): 399–433.\n\nMildenhall DC: New Zealand late Cretaceous and cenozoic plant biogeography: A contribution. Palaeogeogr Palaeoclimatol Palaeoecol. 1980; 31: 197–233. Publisher Full Text\n\nKrutzsch W: Über Nepenthes-Pollen im europäischen Tertiär. Gleditschia. 1985; 13: 89–93.\n\nPlanchon JE: Sur la famille des Droséracées. Annales des Sciences Naturelles Botanique. 1848; 3(9): 79–98, 185–207, 285–309. Reference Source\n\nSchlauer J: A dichotomous key to the genus Drosera L. (Droseraceae). Carnivorous Plant Newsletter. 1996; 25: 67–88. Reference Source\n\nLowrie A: Carnivorous plants of Australia. University of Western Australia Press, Western Australia, Australia, 1987; 1. Reference Source\n\nLowrie A: Carnivorous plants of Australia. University of Western Australia Press, Western Australia, Australia, 1989; 2. Reference Source\n\nLowrie A: Carnivorous plants of Australia. University of Western Australia Press, Western Australia, Australia, 1998; 3. Reference Source\n\nYokoyama J, Suzuki M, Iwatsuki K, et al.: Molecular phylogeny of Coriaria, with special emphasis on the disjunct distribution. Mol Phylogenet Evol. 2000; 14(1): 11–19. PubMed Abstract | Publisher Full Text\n\nBryant D, Moulton V: Neighbor-net: an agglomerative method for the construction of phylogenetic networks. Mol Biol Evol. 2004; 21(2): 255–65. PubMed Abstract | Publisher Full Text"
}
|
[
{
"id": "25549",
"date": "20 Sep 2017",
"name": "Andreas Fleischmann",
"expertise": [],
"suggestion": "Not Approved",
"report": "Not Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe article was reviewed for a different journal by myself previously, and rejected there. Most critics raised there were not improved in the present submission. The article still claims to present a global phylogeny of Droseraceae based on taxa sampled from India. However only 5 taxa of a genus with globally ca. 250 taxa occur in India. And only sequences from those taxa occurring in India were generated de novo, ALL other sequence data was taken from those published in Rivadavia et al. (2003) and Rivadavia et al. (2012). Interestingly, those previously published phylogenies did also include the 5 respective Indian taxa newly sampled here, thus none of the data presented here is actually new. The paper repeats the results of above-mentioned two articles, without adding any substantial new information.\n\nAdditionally, some of the literature consulted is cited in wrong context, e.g. when stating “Apparently all palaeoendemics (D. meristocaulis, D. burmannii, D. arcturi) are scattered throughout the southern hemisphere and also in tropical America.” Rivadavia et al. (2003) CLEARLY show evidence that D. burmannii (and its sister D. sessilifolia) is NOT a palaeoenemdic, and Rivadavia et al. (2012) do so for D. meristocaulis – both are cases of recent LDD, not old vicariance, thus cannot be palaeoendmics. ONLY D. arcturi can considered a palaeonedemic of the list of species presented here (in addition to D. regia from South Africa). 
In contrast, Dionaea and Aldrovanda, which can be considered palaeoendemic lineages, occur in the Northern Hemisphere.\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nIs the study design appropriate and is the work technically sound? Partly\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Partly\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": []
},
{
"id": "25929",
"date": "28 Sep 2017",
"name": "Lingaraj Sahoo",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe manuscript has attempted to place the phylogenetic relationship and ancestral origin of two species of Drosera found in the Meghalya, to that of other species of world using nuclear and chloroplastic markers. The evidences given are fair enough to support the claims.\n\nAt places, the authors have used strong ornamental worlds such as “meat-eater plants”, “bewildering scientific stories” etc. must be replaced with scientific words.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-1454
|
https://f1000research.com/articles/5-2431/v1
|
03 Oct 16
|
{
"type": "Research Article",
"title": "Mursamacin: a novel class of antibiotics from soil-dwelling roundworms of Central Kenya that inhibits methicillin-resistant Staphylococcus aureus",
"authors": [
"Ryan Musumba Awori",
"Peter Njenga Ng'ang'a",
"Lorine Nanjala Nyongesa",
"Nelson Onzere Amugune",
"Peter Njenga Ng'ang'a",
"Lorine Nanjala Nyongesa",
"Nelson Onzere Amugune"
],
"abstract": "Antibiotic-resistant bacteria, also called “superbugs”, can at worst retrogress modern medicine to an era where even sore throats resulted in death. A solution is the development of novel types of antibiotics from untapped natural sources. Yet, no new class of antibiotic has been developed in clinical medicine in the last 30 years. Here, bacteria from insect-killing Steinernema roundworms in the soils of Central Kenya were isolated and subjected to specific molecular identification. These were then assayed for production of antibiotic compounds with potential to treat methicillin-resistant Staphylococcus aureus infections. The bacteria were identified as Xenorhabdus griffiniae and produced cell free supernatants that inhibited S. aureus. Fermenting the bacteria for 4 days yielded a heat stable anti-staphylococcal class of compounds that at low concentrations also inhibited methicillin-resistant S. aureus. This class contained two major compounds whose identity remains unknown. Thus X. griffinae isolated from Steinernema roundworms in Kenya have antimicrobial potential and may herald novel and newly sourced potential medicines for treatment of the world’s most prevalent antibiotic resistant bacteria.",
"keywords": [
"Antibiotic resistant bacteria",
"Xenorhabdus bacteria",
"novel antimicrobials"
],
"content": "Introduction\n\nAntibiotic resistant bacteria, otherwise known as “superbugs”, are an imminent threat to every existing healthcare system as they could obviate current clinical antibiotics and thereby retrogress humanity to that dark age of lethal sore throats1. Of note is methicillin-resistant Staphylococcus aureus (MRSA). In this study, we examine the antimicrobial activity of Xenorhabdus griffiniae fermentation media against MRSA.\n\nMRSA not only causes human diseases such as mastitis, chronic open wound infections and endocarditis, but also livestock diseases such as mastitis in dairy cattle and lameness in poultry and rabbits2, that together result in economic losses of billions of dollars to the agricultural sector3. In both humans and animals, an MRSA infection can quickly turn lethal. This is because MRSA is resistant to two antibiotic classes, beta lactams and macrolides, and only lipopeptides and glycopetides remain effective4; this development has contributed in large part to the rise of the vancomycin-resistant S. aureus superbugs. A recommended solution1 to the superbug conundrum is to develop novel classes of antibiotics that can replace those to which disease-causing bacteria have mutated to resist their inhibitory effect.\n\nA potential source of novel anti-MRSA antibiotics are Xenorhabdus bacteria5 that naturally dwell in the guts of Steinernema roundworms. These 1 mm-long6 roundworms are found in soils worldwide7 and live by infecting and killing insects such as moths, caterpillars and weevils. The Xenorhabdus bacteria carried in their gut aid this insect killing lifestyle. Explicitly, Steinernema roundworms enter an insect body and release Xenorhabdus bacteria that secrete insecticidal toxins that quickly kill the insect8. 
To secure this rich food source for the roundworms alone, the Xenorhabdus produce an armory of antibiotic compounds that effectively destroy competing soil fungi and microorganisms9, a mechanism that has been demonstrated to have medical potential against human diseases8,10–14. Each Xenorhabdus species has been shown to produce its own unique array of antibiotics11, and this is prompting new studies on the classes of antimicrobial compounds produced. For example, X. cabanillasii JM26 from Jamaica led to the discovery of nemaucin, a novel and highly potent antibiotic compound against methicillin-resistant S. aureus15.\n\nPolitically, “superbugs” are today’s top global health issue, exemplified by antibiotic-resistant bacteria being the agenda of the 2016 United Nations High-Level General Meeting: this is only the fourth time in the organization’s history that a health issue has been the reason for this annual meeting. Technically, superbugs have been detected in all 114 countries recently surveyed, with the number of pan-drug-resistant superbugs, those resistant to every antibiotic available, on the rise1. Consistent with this trend, Kenyan prevalence levels of MRSA have been steadily increasing16,17.\n\nPreviously18, we demonstrated that Xenorhabdus bacteria from Kenya can produce antibiotics against MRSA. However, the identity of the species, the antibiotic classes it produces and their inhibitory concentrations remained unknown. Here, we elucidate the specific molecular identity of Kenyan Xenorhabdus isolates and determine the efficacy of the produced antimicrobial compounds against MRSA. Our findings highlight a novel antibiotic class designated “mursamacin”19, obtained from Xenorhabdus bacteria found in Kenyan soils, which is highly active against methicillin-resistant S. aureus.\n\n\nMethods\n\nMRSA strain 133 cultures were obtained as a gift from Dr. John Ndemi of the Kenya Medical Research Institute, Centre for Microbiology Research, Nairobi, Kenya. 
Pure nematode cultures of Steinernema roundworm isolates were obtained from the nematode culture collection of Horticulture Research Institute Thika, Kenya.\n\nUsing the indirect haemolymph method20 X. griffiniae strains XN45 and L67 were isolated from Steinernema sp. Scarpo and Steinernema sp. L67 nematodes respectively. The isolates were cultured on Xenorhabdus differential media NBTA, composed of nutrient agar (Himedia) supplemented with 0.0025% (w/v) bromothymol blue (Sigma-Aldrich) and 0.004% (w/v) 2,3,5 triphenyl tetrazolium chloride (Sigma-Aldrich)20. Identification of the bacteria as Xenorhabdus was based on the presence of the following characteristics: swarming motility on NBTA of 1% (w/v) agar concentration; swimming motility on NBTA of 0.5% (w/v) agar concentration; and yellow green colony pigmentation on NBTA5. Specific identification of the bacteria was done by multi locus sequence typing of the 16s rRNA, serC and recA genes (See Dataset 1).\n\nTotal DNA extraction from the bacterial strains was done using FastDNA®SPIN Kit for Soil (MP Biomedicals, USA). Isolation of a 1397 base pair(bp) 16s rRNA gene fragment was done by Polymerase Chain Reaction (PCR) with primer sequences (Inqaba Biotech) as follows: (27f-AGA GTT TGA TCA TGG CTC AG) and (1391r-ACG GGC GGT GTG TGC)21. The genes were amplified in a 25 μl reaction volume containing final concentrations of 0.5 U Q5 DNA polymerase (New England Biolabs, USA), 200μM each dNTP, 2mM MgCl2, and 0.05μM of each primer. Cycling conditions were set at 98°C for 30 s, 40 cycles of 98°C for 30 s, 42°C for 15 s (first 20 cycles) and 47°C for 15 s (final 20 cycles), 72°C for 1 min and a final extension of 72°C for 2 min (MJ Research PTC-100, USA). Isolations of 670bp serC and 400bp recA gene fragments were done by PCR with primers (recA-FW CCA ATG GGC CGT ATT GTT GA) and (recAREV-TCA TAC GGA TCT GGT TGA TGA A) and (serCF-CCA CCA GCA ACT TTG TCC TTT C) and (serCR- AAA GAA GCA GAA AAA TAT TGC AC) respectively22. 
They were amplified in a 50 μl reaction volume containing final concentrations of 2 U MyTaq® DNA polymerase (Bioline, USA), 200μM of each dNTP, 3mM MgCl2, and 0.4μM of each primer. For both, cycling conditions were set at 95°C for 1 min, then 40 cycles of 95°C for 15 s, 52°C for 15 s, 72°C for 40 s, with a final extension of 72°C for 5 min (Thermo Scientific Arktik, USA).\n\nPCR products were visualized on 1.2% (w/v) agarose gels stained with ethidium bromide at final concentrations of 0.5 μg/ml. Typical electrophoresis conditions were 4V/cm for 72 min. Expected bands were excised and purified with Quick Clean II Gel extraction kits® (Genscript, USA). Products were outsourced for sequencing (Macrogen, Netherlands), and obtained sequences were quality checked, assembled and poor-quality base calls trimmed in the BioEdit23 and MEGA624 software suites.\n\nPhylogenetic reconstruction was performed using a multi-locus concatenate of 16s rRNA, serC and recA gene sequences that jointly constituted 2076 positions. A dataset of n=13 (1 from this study and 12 from public databases) was used, containing the 11 strains of Xenorhabdus and a Photorhabdus luminescens (the out-group sequence) for which public database sequences of all three genes were available (see data files). Database sequences were checked for quality and ambiguous nucleotides resolved in the MEGA6 software suite24. Multiple sequence alignments were performed in the same suite using the MUSCLE algorithm25. The evolutionary history was inferred by the ML method based on the General Time Reversible (GTR) model (500 bootstraps) (Nei and Kumar, 2000). Initial trees for the heuristic search were obtained automatically by applying the Neighbor-Joining and BioNJ algorithms to a matrix of pairwise distances estimated using the Maximum Composite Likelihood (MCL) approach, and then selecting the topology with the superior log likelihood value. The analysis involved 13 nucleotide sequences. 
All positions containing gaps and missing data were eliminated resulting in a total of 2076 positions used in the reconstruction. Xenorhabdus DNA sequences generated in this study were deposited in the DNA databank of Japan with the partial 16s rRNA gene sequences of X. griffiniae L671, L672, L673, L675, XN45 assigned the following accession numbers respectively; AB987698.1, AB987700.1, AB987701.1, AB987699.1, AB987697.1. The partial recA, serC gene sequences of X. griffiniae L67 and XN45 were assigned LC096094, LC096092 and LC096093, LC096091 respectively. Percentage sequence similarities between gene sequences from this study and other 16s rRNA, recA and serC gene database sequences was determined using blast searches26.\n\nFermentation was done using X. griffiniae XN45 bacterial cultures. Multiple colonies (2–3) of an individual isolate were selected, inoculated into 5 ml of Luria Bertani (LB) media containing 1% tryptone, 0.5% yeast extract and 1% NaCl (w/v) and incubated on a rotatory shaker at 150 rpm at 33°C for 24 h. These served as 1% (v/v) starter inocula. Sterile LB media (500 ml) was dispensed into sterile 1-L Erlenmeyer flasks, with starter cultures (5 ml) thereafter inoculated and LB media incubated at 150 rpm at 33°C for 355 h, 180 h and 108.5 h. LB with no inoculum was also incubated to serve as a control for sterility. After fermentation, cells were removed by centrifugation of broths at 20,000 g for 25 min at 4°C (Beckman Avanti J-25, USA) followed by decanting cell free supernatants (cfs). These were heat-treated by autoclaving at 121°C and 15 p.s.i for 20 min to yield a sterile heat stable fraction of the whole broth extract (antibiotic) that was designated “mursamacin”. These were stored at 4°C until use.\n\nThe broth macro dilution assay was prepared as previously described11 with modifications, using antibiotics obtained from each of the fermentation durations (180.5 h and 355 h) as the antibiotic and MRSA as the test bacterium. 
MRSA overnight cultures were inoculated into each dilution, at a final concentration of 2.3×10⁴ cfu/ml, and then incubated for 18 h at 37°C without agitation. The following controls were included in every replicate: negative control of 2× LB media inoculated with bacteria without antibiotic; sterility control of 2× LB media with no inoculated bacteria; and sterility control of undiluted antibiotic with no inoculated bacteria.\n\nAfter incubation, turbidity of each dilution was measured by determining the A600nm (see dataset files). This was used in the following formula, modified from Houard et al.13, which included a correction factor for inhibition by the broth media, to calculate the percentage growth inhibition of bacterial cultures by an antimicrobial.\n\n\n\nWhere g = A600nm of bacteria in broth culture without antibiotic, and gx = A600nm of bacteria in broth culture with antibiotic.\n\nTo extract an organic heat-stable mursamacin class of antibiotics, cfs from a 108.5 h fermentation reaction were lyophilized to yield a yellow powder, whose measured amounts were dissolved in known volumes of methanol, and vortexed for 2 min to yield a dark orange methanol extract. These were centrifuged at 20,000 g for 13 min at room temperature to separate liquid methanol extracts from insoluble residues. Methanol extracts were pipetted into sterile 1.5 ml tubes while pelleted solids were aseptically air dried and then incubated at 37°C overnight to evaporate residual methanol. To provide a negative control, the same procedure was repeated for lyophilized powders of the fermentation media. The difference in weight of powder before and after methanol extraction was then measured to determine the concentration of total dissolved compounds in the methanol extracts.\n\nThis was modified from a previously described method27. 
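The growth-inhibition calculation described above can be sketched in Python. Note that the formula itself did not survive extraction; the sketch assumes the standard form 100 × (g − gx)/g implied by the variable definitions, and the correction factor for inhibition by the broth media mentioned in the text is not recoverable from the excerpt and is therefore omitted.

```python
def percent_inhibition(g, gx):
    """Percentage growth inhibition from A600 turbidity readings.

    g  -- A600 of the bacterial culture grown without antibiotic
    gx -- A600 of the culture grown with antibiotic

    Assumed standard form only; the media-correction factor described
    in the Methods is not reproduced here.
    """
    return 100.0 * (g - gx) / g

# Hypothetical readings (not data from the study):
strong = percent_inhibition(1.00, 0.19)   # large drop in turbidity
none = percent_inhibition(0.80, 0.80)     # no inhibition -> 0
```

Applied to hypothetical neat-concentration readings, values near 100 would correspond to near-complete suppression of MRSA growth, while values near 0 indicate the antibiotic had no effect.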
To determine the minimum inhibitory concentrations (MICs) against MRSA, 100 μl of known concentrations of organic heat-stable fractions of mursamacin antibiotics were dispensed into sterile 96-well micro-titre plate starting wells. These were left in a biological safety cabinet to evaporate the methanol, resulting in visible orange solid residues; each was then re-dissolved in 200 μl RPMI media supplemented with 5% (v/v) LB28. Every other well along each row was filled with 100 μl RPMI media supplemented with 5% (v/v) LB; 100 μl of starting well mixture was then dispensed to the subsequent well, thereby halving the antibiotic concentration. This was repeated for all wells along the rows, resulting in a 2-fold dilution series. MRSA inocula (10 μl) were used that had been previously prepared from plate cultures dissolved in physiological saline to a turbidity of 0.5 McFarland standard (ca. concentration = 2.6×10⁶ cfu/ml), then subjected to 10⁻³ dilution in RPMI media supplemented with 5% (v/v) LB. A positive control row of Daptomycin and a negative control of methanol extract of LB media only were incorporated in every replicate. Plates were incubated at 37°C for 21 h without agitation. The experiment was performed in 10 replicates across three reproductions.\n\nTo identify the compounds contained in the organic heat-stable fraction of mursamacin antibiotics, analytical reverse phase chromatography of the methanol extract was performed as previously described13 and modified to use a C18 column (Agilent Zorbax Eclipse Plus C18; 3.5 μm, 4.6 × 100 mm) under isocratic conditions of 60% acetonitrile and UV detection at 224 nm.\n\n\nResults and discussion\n\nThe Xenorhabdus isolates identified were most closely related to Xenorhabdus griffiniae (Figure 1). The apt thresholds for Xenorhabdus species identification based on sequence similarities are currently considered to be 98.65% and 97% for 16S rRNA and for both recA and serC gene fragments respectively29,30.
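The two-fold dilution scheme and MIC read-out described above can be sketched as follows. The starting concentration, well count, and growth cutoff are illustrative assumptions chosen only to echo the 8.25 μg/ml figure reported later in the text, not values taken from the study's plates.

```python
def dilution_series(start_conc, n_wells):
    """Concentrations (same units as start_conc) along a row of wells
    after repeated two-fold dilution."""
    return [start_conc / (2 ** i) for i in range(n_wells)]

def read_mic(concentrations, grew):
    """MIC read-out: the lowest tested concentration showing no growth.
    `grew` holds one growth flag per well (e.g. turbidity above a cutoff)."""
    inhibitory = [c for c, g in zip(concentrations, grew) if not g]
    return min(inhibitory) if inhibitory else None

# Hypothetical row: starting well at 66 ug/ml, with growth reappearing once
# the antibiotic is diluted below its inhibitory level (~8.25 ug/ml here).
concs = dilution_series(66.0, 8)       # 66, 33, 16.5, 8.25, ...
growth = [c < 8.25 for c in concs]
print(read_mic(concs, growth))         # -> 8.25
```

With replicate rows, a reported MIC would then be an average of per-row read-outs, as the footnote to Table 1 indicates.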
In this study, sequence similarities to X. griffiniae strains, including type strains, were 99.52%, 98.57% and 97.68% for 16S rRNA, recA and serC respectively (See data files).\n\nThe tree was based on a concatenation of 16S rRNA, recA, and serC gene sequences and was reconstructed with a general time-reversible model and a test of phylogeny of 500 bootstrap replicates. X. griffiniae isolated from this study (red triangle) clustered in the X. griffiniae clade (red), and was most closely related to X. griffiniae from Malaysia.\n\nXenorhabdus griffiniae has only previously been isolated from Indonesia31, Malaysia32 and South Africa33; therefore our data strongly suggest a new strain of X. griffiniae originating from Central Kenya.\n\nThe growth of methicillin-resistant S. aureus was inhibited when cultured in X. griffiniae cell-free supernatants (cfs) that we termed ‘mursamacin’ (Figure 2). Furthermore, cfs were obtained from X. griffiniae fermentations of various durations and had the following percentage growth inhibition against methicillin-resistant S. aureus at neat concentrations: 81% (cfs from the 180 h ferment), 94% (cfs from the 355 h ferment) (Figure 3). In both instances, cfs were heat-sterilized by autoclaving and pH adjusted to that of the control (6.6–7.0), indicating that inhibition was due to heat-stable compounds contained therein.\n\nMRSA cultures were incubated with (tube a) and without (tube b) mursamacin antibiotics respectively. A clear tube denotes no bacterial growth while a turbid tube denotes bacterial growth.\n\nThe longer fermentation duration (355 h) produced antibiotics that were generally more inhibitory to methicillin-resistant S. aureus. Dotted line graphs represent linear equations derived from the raw inhibition values.
The high R-squared values demonstrate that the concentration of the antibiotic was predominantly responsible (97–98%) for the level of percentage growth inhibition.\n\nPrevious studies9,11–13,34,35 have demonstrated that Xenorhabdus bacteria are prolific antimicrobial producers, with each species producing its unique array of antibiotics that often contain novel compounds. Consistent with these reports, our data demonstrate unprecedented evidence of heat-stable X. griffiniae antimicrobials and further suggest that their production is affected by how long X. griffiniae is cultured in fermentation media.\n\nAn organic heat-stable fraction of mursamacin antibiotics inhibited methicillin-resistant Staphylococcus aureus at a standardized concentration of 8.25 μg/ml, while its negative control (an organic extract of autoclaved fermentation media only) displayed growth at all concentrations, confirming that this inhibition was due to antibiotic compounds only (Table 1). However, this concentration was 17-fold higher than the positive control Daptomycin, which gave 0.5 μg/ml.\n\nMICs of a, b, c are averages of 3, 4, 3 replicates respectively (See data files).\n\n*These are projected MIC values should tests be done under standard assay conditions.\n\nCurrently, there are two major clinical antibiotics with inhibitory concentrations against methicillin-resistant S. aureus that are considered effective: Daptomycin (0.5 μg/ml) and Vancomycin (2 μg/ml)4. On the other hand, the large majority of today’s clinical drugs have inhibitory concentrations against methicillin-resistant S. aureus that are considered ineffective: Azithromycin (128 μg/ml), Amoxicillin/Clavulanic acid (64 μg/ml), Ceftriaxone (64 μg/ml), Erythromycin (32 μg/ml) and Imipenem (16 μg/ml)4. Yet our data demonstrate a heat-stable class of compounds with an inhibitory concentration of 8.25 μg/ml. This strongly suggests that antimicrobials contained in this class are even more potent against methicillin-resistant S.
aureus, as pure compounds.\n\nFurther high performance liquid chromatographic analysis revealed that this fraction contained two major compounds, which eluted at 1.862 min and 2.775 min and absorbed at 224 nm when dissolved in methanol (Figure 4). Previous studies have characterized classes of organic Xenorhabdus antibiotics that were highly effective against S. aureus11,36 and had peak absorptions in the range 200–230 nm15. Of note, the PAX lipopeptides10 isolated from X. cabanillasii and X. nematophila were highly effective against MRSA at concentrations of 0.5 μg/ml. Yet in contrast to our results, no class has previously been characterized as heat-stable and isolated from X. griffiniae.\n\nThe top chromatogram represents the solvent only, while the bottom chromatogram is of solvent containing organic mursamacin antibiotics. Two dominant compounds, indicated by the red arrows, were detected in this fraction.\n\nIn conclusion, we demonstrate that X. griffiniae antibiotic compounds, termed “mursamacin”, contain an organic heat-stable fraction that is highly effective against methicillin-resistant S. aureus, and that this antimicrobial activity seems attributable to two dominant uncharacterized compounds. This may offer a foundation for further development of clinical drugs from X. griffiniae, giving hope to thousands of patients affected by methicillin-resistant S. aureus infections.\n\n\nData availability\n\nF1000Research: Dataset 1. Raw data of antibiotics from soil dwelling roundworms of Central Kenya inhibiting methicillin resistant Staphylococcus aureus, 10.5256/f1000research.9652.d13696637\n\nAccession numbers of sequences generated from this study are: AB987698.1, AB987700.1, AB987701.1, AB987699.1, AB987697.1, LC096094, LC096092, LC096093 and LC096091.",
"appendix": "Author contributions\n\n\n\nRMA, PNN, LNN conceived the study and carried out the research. NA designed the experiments and supervised the study. RMA wrote the manuscript. All authors were involved in the revision of the manuscript and have agreed to the final content.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nKenya National Commission for Science Technology and Innovation funded this study through grants NCST/5/003/3rdCALL/017 and NCST/5/003/3rd CALL/016 assigned to RMA and PNN respectively.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nWe wish to acknowledge Daniel Masiga for design of molecular biology experiments and provision of equipment and reagents, Janet Irungu for her chemistry expertise and provision of equipment for HPLC and Hosea Mokaya for training in HPLC techniques. We also wish to acknowledge Chris Beadle for his innumerable revisions of the manuscript.\n\n\nReferences\n\nWorld Health Organization: Antimicrobial resistance global report on surveillance: 2014 summary. 2014. Reference Source\n\nFitzgerald JR: Livestock-associated Staphylococcus aureus: origin, evolution and public health threat. Trends Microbiol. 2012; 20(4): 192–8. PubMed Abstract | Publisher Full Text\n\nBradley AJ: Bovine mastitis: an evolving disease. Vet J. 2002; 164(2): 116–28. PubMed Abstract | Publisher Full Text\n\nYu VL, McKinnon PS, Peolquin C, et al.: Antimicrobial Therapy and Vaccines: Volume 2 Antimicrobial Agents. Pittsburgh, PA USA: Esun Technologies LLC. 2005; 1206. Reference Source\n\nBoemare N, Akhurst R: The genera Photorhabdus and Xenorhabdus. The Prokaryotes: Springer. 2006; 451–94. Publisher Full Text\n\nWaturu CN, Hunt DJ, Reid AP: Steinernema karii sp. n. (Nematoda: Steinernematidae), a new entomopathogenic nematode from Kenya. Int J Nematol. 1997; 7(1): 68–75.
Reference Source\n\nHominick W, Gaugler R: Biogeography. Entomopathogenic nematology. 2002; 115–43.\n\nHerbert EE, Goodrich-Blair H: Friend and foe: the two faces of Xenorhabdus nematophila. Nat Rev Microbiol. 2007; 5(8): 634–46. PubMed Abstract | Publisher Full Text\n\nForst S, Nealson K: Molecular biology of the symbiotic-pathogenic bacteria Xenorhabdus spp. and Photorhabdus spp. Microbiol Rev. 1996; 60(1): 21–43. PubMed Abstract | Free Full Text\n\nFuchs SW, Proschak A, Jaskolla TW, et al.: Structure elucidation and biosynthesis of lysine-rich cyclic peptides in Xenorhabdus nematophila. Org Biomol Chem. 2011; 9(9): 3130–2. PubMed Abstract | Publisher Full Text\n\nFurgani G, Böszörményi E, Fodor A, et al.: Xenorhabdus antibiotics: a comparative analysis and potential utility for controlling mastitis caused by bacteria. J Appl Microbiol. 2008; 104(3): 745–58. PubMed Abstract | Publisher Full Text\n\nGualtieri M, Aumelas A, Thaler JO: Identification of a new antimicrobial lysine-rich cyclolipopeptide family from Xenorhabdus nematophila. J Antibiot (Tokyo). 2009; 62(6): 295–302. PubMed Abstract | Publisher Full Text\n\nHouard J, Aumelas A, Noël T, et al.: Cabanillasin, a new antifungal metabolite, produced by entomopathogenic Xenorhabdus cabanillasii JM26. J Antibiot (Tokyo). 2013; 66(10): 617–20. PubMed Abstract | Publisher Full Text\n\nReimer D: Identification and characterization of selected secondary metabolite biosynthetic pathways from Xenorhabdus nematophila. Johann Wolfgang Goethe-Universität, 2013. Reference Source\n\nGualtieri M, Villain-Guillot P, Givaudan A, et al.: Nemaucin, an antibiotic produced by entomopathogenic Xenorhabdus cabanillasii. Google Patents. 2012. Reference Source\n\nKesah C, Ben Redjeb S, Odugbemi TO, et al.: Prevalence of methicillin-resistant Staphylococcus aureus in eight African hospitals and Malta. Clin Microbiol Infect. 2003; 9(2): 153–6. 
PubMed Abstract | Publisher Full Text\n\nOuko TT, Ngeranwa JN, Orinda GO, et al.: Oxacillin resistant Staphylococcus aureus among HIV infected and non-infected Kenyan patients. East Afr Med J. 2010; 87(5): 179–86. PubMed Abstract | Publisher Full Text\n\nAwori RM: Phylogeny and antibiotic activity of Xenorhabdus spp. isolated from nematode symbionts in Kenya. Nairobi: University of Nairobi; 2015. Reference Source\n\nAwori RM, Ng'ang'a PN, Nyongesa LN, et al.: Antimicrobial agents produced by Xenorhabdus griffiniae strain xn45. Google Patents. 2015. Reference Source\n\nAkhurst R: Morphological and functional dimorphism in Xenorhabdus spp., bacteria symbiotically associated with the insect pathogenic nematodes Neoaplectana and Heterorhabditis. J Gen Microbiol. 1980; 121(2): 303–9. Publisher Full Text\n\nLane DJ: 16S/23S rRNA sequencing. Nucleic acid techniques in bacterial systematics. 1991; 125–75.\n\nStock SP, Goodrich-Blair H: Nematode parasites, pathogens and associates of insects and invertebrates of economic importance. Manual of techniques in invertebrate pathology. 2nd edn Academic, San Diego. 2012; 373–426. Publisher Full Text\n\nHall TA: BioEdit: a user-friendly biological sequence alignment editor and analysis program for Windows 95/98/NT. Nucleic acids symposium series. 1999; 41: 95–98. Reference Source\n\nTamura K, Stecher G, Peterson D, et al.: MEGA6: Molecular Evolutionary Genetics Analysis version 6.0. Mol Biol Evol. 2013; 30(12): 2725–9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nEdgar RC: MUSCLE: multiple sequence alignment with high accuracy and high throughput. Nucleic Acids Res. 2004; 32(5): 1792–7. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAltschul SF, Gish W, Miller W, et al.: Basic local alignment search tool. J Mol Biol. 1990; 215(3): 403–10. PubMed Abstract | Publisher Full Text\n\nOkanya PW, Mohr KI, Gerth K, et al.: Marinoquinolines A–F, pyrroloquinolines from Ohtaekwangia kribbensis (Bacteroidetes).
J Nat Prod. 2011; 74(4): 603–8. PubMed Abstract | Publisher Full Text\n\nLin L, Nonejuie P, Munguia J, et al.: Azithromycin Synergizes with Cationic Antimicrobial Peptides to Exert Bactericidal and Therapeutic Activity Against Highly Multidrug-Resistant Gram-Negative Bacterial Pathogens. EBioMedicine. 2015; 2(7): 690–8. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKim M, Oh HS, Park SC, et al.: Towards a taxonomic coherence between average nucleotide identity and 16S rRNA gene sequence similarity for species demarcation of prokaryotes. Int J Syst Evol Microbiol. 2014; 64(pt 2): 346–51. PubMed Abstract | Publisher Full Text\n\nTailliez P, Laroui C, Ginibre N, et al.: Phylogeny of Photorhabdus and Xenorhabdus based on universally conserved protein-coding sequences and implications for the taxonomy of these two genera. Proposal of new taxa: X. vietnamensis sp. nov., P. luminescens subsp. caribbeanensis subsp. nov., P. luminescens subsp. hainanensis subsp. nov., P. temperata subsp. khanii subsp. nov., P. temperata subsp. tasmaniensis subsp. nov., and the reclassification of P. luminescens subsp. thracensis as P. temperata subsp. thracensis comb. nov. Int J Syst Evol Microbiol. 2010; 60(pt 8): 1921–37. PubMed Abstract | Publisher Full Text\n\nTailliez P, Pagès S, Ginibre N, et al.: New insight into diversity in the genus Xenorhabdus, including the description of ten novel species. Int J Syst Evol Microbiol. 2006; 56(pt 12): 2805–18. PubMed Abstract | Publisher Full Text\n\nLee M-M, Stock SP: A multilocus approach to assessing co-evolutionary relationships between Steinernema spp. (Nematoda: Steinernematidae) and their bacterial symbionts Xenorhabdus spp. (gamma-Proteobacteria: Enterobacteriaceae). Syst Parasitol. 2010; 77(1): 1–12. 
PubMed Abstract | Publisher Full Text\n\nMothupi B, Featherston J, Gray V: Draft Whole-Genome Sequence and Annotation of Xenorhabdus griffiniae Strain BMMCB Associated with the South African Entomopathogenic Nematode Steinernema khoisanae Strain BMMCB. Genome Announc. 2015; 3(4): pii: e00785-15. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFuchs S: Investigation of the biosynthesis of bacterial natural products. Univ.-Bibliothek; 2014. Reference Source\n\nPark D, Ciezki K, van der Hoeven R, et al.: Genetic analysis of xenocoumacin antibiotic production in the mutualistic bacterium Xenorhabdus nematophila. Mol Microbiol. 2009; 73(5): 938–49. PubMed Abstract | Publisher Full Text\n\nFodor A, Fodor AM, Forst S, et al.: Comparative analysis of antibacterial activities of Xenorhabdus species on related and non-related bacteria in vivo. J Microbiol Antimicrob. 2010; 2(4): 36–46. Reference Source\n\nAwori RM, Ng'ang'a PN, Nyongesa LN, et al.: Dataset 1 in: Mursamacin: a novel superbug-destroying class of antibiotics from soil-dwelling roundworms of Central Kenya. F1000Research. 2016. Data Source"
}
|
[
{
"id": "16762",
"date": "31 Oct 2016",
"name": "Habil Andras Fodor",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nI consider the publication of Awori RM, Ng'ang'a PN, Nyongesa LN and Amugune NO, entitled “Mursamacin: a novel class of antibiotics from soil-dwelling roundworms of Central Kenya that inhibits methicillin-resistant Staphylococcus aureus”, to be well-conducted research. I have read this submission.\nI believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard. I accept it, but some missing information should be added to get my approval (see details). The subject of the paper is extremely important, since the “super bugs”, that is, the dramatically increasing numbers of multiresistant pathogens, are a real danger for mankind and pose a new challenge to antibiotics researchers, and the research concept outlined in this publication is excellent.\nAs for the title: it is appropriate for the content of the article.\nThe abstract represents a suitable summary of the work. As for the article content: the design, methods and analysis of the results from the study have been properly explained and they are appropriate for the topic being studied.\nThe information for the conclusions: the conclusions are sensible, well-balanced and justified on the basis of the results of the study. Enough information has been provided to be able to replicate the experiment.
The data are presented in a usable format and all the data we need to understand have been provided.\n\nSome detail:\nThe paper reports the discovery of a new, heat-stable antimicrobial (probably peptide) complex effective against the “superbug” MRSA in both in vitro tests and at larger scale. It is a very important discovery, even if we are still rather far from a commercial product. The conception and the applied methods are suitable, and the discovery and proper molecular identification of the antibiotic-producing Xenorhabdus griffiniae from Kenya is very fortunate and great.\n\nI think that the quality of the research and the article is scientifically sound.\n\nI have only one important question:\nTable 1: The reader needs a definition of how the “Standardized MIC” values were calculated from the “Average MICs”. It seems to be the result of a statistical calculation, but I cannot see the source, as mentioned.\n\nI have two remarks concerning the content:\nFig 4 and the related text in the “Methods” and “Results and discussions” chapters: some details related to the “high performance liquid chromatographic analysis” are badly missing, such as the specific name of the method, the column data, the eluting media, and at least an approximate molecular size of the two active fractions eluting at 1.862 and 2.775 min.\n\nIf my question is answered and my remarks are taken into consideration, I will give my full approval.",
"responses": []
},
{
"id": "17200",
"date": "03 Nov 2016",
"name": "Selcuk Hazir",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nMueller-Hinton media should have been used in the antimicrobial tests instead of Luria Bertani (LB).\n\nThe optimum growth temperature of Xenorhabdus spp. is 28°C. Why did the authors use 33°C for bacterial fermentation?\n\nIf the supernatant was to be autoclaved anyway, why were centrifugation and filtration methods used to remove the cells?\n\nNot only autoclaved but also non-autoclaved cell-free supernatants should have been tested against S. aureus.\n\nTube A in Figure 2 looks like there are several tubes in a beaker! This photo needs to be retaken.",
"responses": []
},
{
"id": "17736",
"date": "17 Nov 2016",
"name": "Khushbu Sharma",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nIntroduction\n3rd Para, Line 9 “The Xenorhabdus produce an armory of antibiotic compounds….” Can the author name some antibiotic compounds produced by Xenorhabdus spp.?\n\nLast Para, Line last 3rd “designated “mursamacin”19 obtained from Xenorhabdus bacteria” Name the spp. of Xenorhabdus bacteria used.\nMethods\nPage 3, Phylogenetic analysis, line 12, (Nei and Kumar 2000): according to the reference style, the name should not be included; use a number instead. Also, this reference is cited in the text but not mentioned in the Reference part of the manuscript.\n\nPage 3, Bacterial fermentation, line 1, only one bacterial strain was used for fermentation; what about the second strain mentioned in the manuscript?\n\nPage 3, Bacterial fermentation, line last 4th, “These were heat-treated by autoclaving……” If these were further autoclaved, even after inoculation, the antibiotics produced might get degraded. Explain.\n\nPage 3, Bacterial fermentation, line 5th, the ideal temperature for growth of Xenorhabdus spp. is 28°C, so why was 33°C used for bacterial fermentation?\n\nPage 4, High performance liquid chromatographic analysis, line 1, “To identify the compounds…..” How can compounds be identified using HPLC only? This part is not explained well as per scientific standard.\n\nPage 4, High performance liquid chromatographic analysis, line 5, “Eclipse Plus C18; 3.5um, 4.6 ×100 nm….” Instead of nm it should be mm, and C18 should be written like this, not as the author wrote.\n\nHow were the MICs calculated?\n\nIf my questions are answered with full justification, I will give my full approval for this manuscript.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-2431
|
https://f1000research.com/articles/6-924/v1
|
16 Jun 17
|
{
"type": "Opinion Article",
"title": "Puzzles in modern biology. V. Why are genomes overwired?",
"authors": [
"Steven A. Frank"
],
"abstract": "Many factors affect eukaryotic gene expression. Transcription factors, histone codes, DNA folding, and noncoding RNA modulate expression. Those factors interact in large, broadly connected regulatory control networks. An engineer following classical principles of control theory would design a simpler regulatory network. Why are genomes overwired? Neutrality or enhanced robustness may lead to the accumulation of additional factors that complicate network architecture. Dynamics progresses like a ratchet. New factors get added. Genomes adapt to the additional complexity. The newly added factors can no longer be removed without significant loss of fitness. Alternatively, highly wired genomes may be more malleable. In large networks, most genomic variants tend to have a relatively small effect on gene expression and trait values. Many small effects lead to a smooth gradient, in which traits may change steadily with respect to underlying regulatory changes. A smooth gradient may provide a continuous path from a starting point up to the highest peak of performance. A potential path of increasing performance promotes adaptability and learning. Genomes gain by the inductive process of natural selection, a trial and error learning algorithm that discovers general solutions for adapting to environmental challenge. Similarly, deeply and densely connected computational networks gain by various inductive trial and error learning procedures, in which the networks learn to reduce the errors in sequential trials. Overwiring alters the geometry of induction by smoothing the gradient along the inductive pathways of improving performance. Those overwiring benefits for induction apply to both natural biological networks and artificial deep learning networks.",
"keywords": [
"Gene regulation",
"complex traits",
"artificial intelligence",
"deep learning",
"induction"
],
"content": "Introduction\n\nWhat determines gene expression? The list keeps growing: transcription factors, methylation, histone codes, DNA folding, intron sequences, RNA splicing, noncoding RNA, and others1,2.\n\nHundreds of genomic variants affect human traits, such as height3. Consider pathways of influence. Numerous factors affect gene expression. Many genes affect a trait. Vast wiring connectivity links genomic influence to a trait.\n\nAn engineer following classic principles of control theory would design a simpler system with fewer connections4. Genomes are overwired. They have far more nodes and connections than classically engineered systems.\n\nWhy are genomes overwired? I discuss possible causes. I then consider wiring density more broadly. What other sorts of systems tend to be overwired?\n\nComputational neural networks in artificial intelligence stand out. Deeply, densely connected computational networks pervade modern life. New computational systems often outperform humans.\n\nThe recent computational concepts and methods comprise deep learning. The learning simply means using data, or past experience, to improve classification of inputs and adjustment of response. The deep qualifier refers to the multiple layers of deep and dense network connections5,6.\n\nThat wiring depth, and the computational techniques to use vast connectivity, triggered the revolutionary advances in performance. I discuss genomic wiring in relation to deep learning. I suggest that the inductive systems of biological adaptation and computational learning gain in similar ways from diffusely and densely wired networks.\n\n\nCauses\n\nWhy do so many factors modulate gene expression? Why is the regulatory network architecture for traits often complex?\n\nA noncoding RNA may, by chance, alter the expression of various genes. Small modulations of expression may have relatively little effect on fitness. If so, a novel noncoding RNA variant may be effectively neutral.
Nearly neutral variants accumulate by chance.\n\nMany nearly neutral variants may accumulate over time. As each variant spreads, it changes the genomic environment of gene regulation. When the aggregate effect of many nearly neutral variants becomes significant, natural selection will retune expression to compensate.\n\nAfter compensation occurs, one cannot remove the layers of accumulated modulating factors without causing deleterious changes in gene expression. What began as neutral accumulation becomes integral to genomic function. Wiring complexity increases irreversibly.\n\nLynch’s neutral theory of genome architecture makes predictions7,8. Smaller population sizes increase chance fluctuations. Greater fluctuations allow larger fitness effects to become nearly neutral. Broader neutrality enhances the rate at which changes accumulate. Smaller populations may tend toward overwiring.\n\nBy contrast, large populations more efficiently prune small effects on fitness. Small modulations of gene expression accumulate more slowly. Larger populations may not overwire as readily as smaller populations.\n\nIf the fitness effects of modulation tend to be larger, nearly neutral variants will be less common. Prokaryotes may tend to have relatively large deleterious fitness effects of novel modulating factors, because increased genome size and complexity may slow the speed of cellular replication. Eukaryotic genomes may be less sensitive to size and complexity because organismal replication is less strongly coupled to speed of cell division.\n\nOverall, prokaryotes tend to have larger populations and greater sensitivity to genome size and complexity. Such characteristics restrict the scope for neutral accumulation and overwiring. By contrast, eukaryotes tend toward smaller populations and less sensitivity to genome size and complexity. Those characteristics favor neutral accumulation and overwiring. 
Stronger predictions arise when one can compare closely related organisms that differ in population size and genomic sensitivity.\n\nModulating factors combine to influence traits. The mechanism of combination matters. Consider two alternatives.\n\nFirst, suppose modulating factors add together to determine a trait. Then, the more modulating factors, the greater the trait’s variance. Put another way, the more things that cause fluctuations in gene expression, the more variable the trait. In the classical summation model, the variance contribution of each factor is σ². Summing n components yields a trait variance of nσ², rising with the number of components.\n\nSecond, suppose modulating factors average together to determine a trait9. When averaging n components, we divide the effect of each component by n. As the number of components rises, the effect of each component declines. Averaging n components yields a trait variance of σ²/n, declining with the number of components.\n\nOne can think about each additional modulating component as perturbing trait expression. Robustness is decreased sensitivity to perturbation. In the averaging model, the greater the number of factors, the weaker the effect of each individual perturbing factor. Thus, averaging reduces sensitivity to each perturbation, enhancing robustness.\n\nIf modulating factors average together, the benefits of enhanced robustness can favor an increase in the number of factors9. Generally, if the effect of an additional factor causes a sufficient decline in the average contribution of each factor, then natural selection can favor a tendency for the number of factors to increase. Ultimately, many factors of small effect modulate trait expression.\n\nUnder the averaging model, evolutionary dynamics follows an interesting path. An additional modulating factor may be favored because it reduces sensitivity to perturbation.
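The contrast between the summation and averaging models can be checked with a quick simulation; the Gaussian factors, unit variance, and trial counts below are illustrative assumptions, not values from the cited work.

```python
import random
import statistics

def trait_variance(n_factors, combine, trials=100_000, sigma=1.0):
    """Variance of a trait built from n_factors independent Gaussian
    modulating factors, combined by `combine` (summation or averaging)."""
    rng = random.Random(42)
    traits = []
    for _ in range(trials):
        factors = [rng.gauss(0.0, sigma) for _ in range(n_factors)]
        traits.append(combine(factors))
    return statistics.variance(traits)

mean = lambda xs: sum(xs) / len(xs)

for n in (1, 4, 16):
    # Summation model: variance grows roughly as n * sigma^2.
    # Averaging model: variance shrinks roughly as sigma^2 / n.
    print(n, round(trait_variance(n, sum), 2), round(trait_variance(n, mean), 3))
```

Adding a factor therefore makes the trait noisier under summation but more robust under averaging, which is the asymmetry the argument turns on.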
Once the new factor is added and sensitivity is reduced, selective intensity against perturbations weakens. Weaker selection allows the accumulation of additional mutations with larger perturbing effects. That shift in mutation-selection balance causes a decay in the average fitness effect of each factor.\n\nDynamics progresses like a ratchet10,11. New factors get added for their enhanced robustness. All factors then decay. Taking away a recently added factor exposes the increased deleterious effects of the remaining factors. Exposure of those deleterious effects opposes reversal. One cannot go back.\n\nHundreds of genomic variants influence traits, such as human height and weight. Most variants have small effects. Many small effects smooth the gradient of trait values.\n\nA smooth gradient means that a trait may potentially change steadily, or monotonically, with respect to underlying genomic changes. We may think of a smoothly increasing path from a starting point up to the highest peak or down to the lowest valley.\n\nOverwiring leads to many genomic variants of small effect, which in turn smooths the gradient. Thus, we may say that overwiring causes a smooth gradient. What about the converse? Do the benefits of a smooth gradient favor overwiring? Consider three potential benefits.\n\nA smooth gradient enhances adjustability. A densely wired regulatory network has many different connections that can alter traits by a small amount. Such overwired connectivity allows inputs to modulate expression smoothly.\n\nA smooth gradient promotes learning12. Learning requires adjustment in response to input and measurement of success. A system learns as it steadily climbs the gradient of success by smoothly adjusting expression in response to inputs.\n\nA smooth gradient boosts evolutionary adaptability13,14. Natural selection is essentially a trial and error learning algorithm. 
The advantages of densely overwired control for learning apply to evolutionary adaptation by natural selection.\n\nThe smooth gradient benefits of adjustability, learning, and adaptability can potentially favor overwiring.\n\n\nDeep learning\n\nSystems can easily adjust, learn, and evolve if they have smooth gradients. Many of the algorithmic tricks and underlying concepts of machine learning and artificial intelligence come down to how one smooths the gradient5,6. A smooth gradient provides a steadily improving path from the starting point to an improved target point.\n\nSome biological networks may be densely wired because of the benefits of gradient smoothing. Ideally, we could analyze how network architecture and connectivity strengths affect gradients. However, we do not yet know enough about the details of biological networks. By contrast, the study of computational networks has advanced greatly in recent years. Those advances in computational studies hint at some principles of networks and gradient smoothing. Those principles provide clues about the design of biological networks by natural selection.\n\nComputational networks are loosely modeled after biological neural networks. A set of nodes takes inputs from the environment. Each input node connects to another set of nodes. Each of those intermediate nodes combines its inputs to produce an output that connects to yet another set of nodes, and so on. The final nodes classify the environmental state, possibly taking action based on that classification.\n\nA network learns by altering its parameters5,6. The parameters set the connection strength between nodes, and how individual nodes combine their many inputs to determine the strength of their output. For example, the input to a network may be an image of a numerical digit. The input nodes are sensors that react to the image. Those sensors initiate activations that pass through all of the connections and layers of the network. 
The final layer provides a set of ten probabilities, one probability for each of the digits 0, 1, …, 9.\n\nThe network, when presented with an image of the digit 7, classifies the image by returning a set of ten probabilities. The optimal classification is a probability of one for 7 and zero for all other digits. We can calculate an error distance between the optimal classification and the network’s guess. An error distance is a function of the differences in the probabilities of the optimal and guessed classification.\n\nThe error distance can be used to update the network’s parameters. We find a set of small changes in the network parameters that would have yielded a small reduction in the error distance. By following this gradient of improving performance, the network may learn from experience.\n\nThat learning approach works as long as there is a smooth path of increasing performance. Improved performance means that the adjustment process truly learns the general features of digit images that enhance future classification. Performance does not improve if adjustments focus on unusual features of the digit images used to train the network. Those unusual features may not be present in many other digit images.\n\nA deep neural network has many layers of nodes between initial inputs and final outputs. Until recently, deep and densely connected computational networks often learned slowly and then got stuck, unable to learn from further information.\n\nGetting stuck often means an unsmooth gradient. Initially, the system learns. It uses past trials to adjust its parameters, yielding a reduction in the error distance for future trials. Then the system gets stuck. Parameter adjustments do not improve future performance.\n\nPut another way, initially the system descended smoothly along the error gradient, improving performance as the error became smaller. 
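The update loop described above — guess, measure the error distance, find small parameter changes that reduce it — can be sketched for a toy one-layer classifier. This is an illustration, not the digit network itself; the numeric-gradient approach, the tiny input, and all names are assumptions:

```python
import math
import random

def softmax(z):
    # turn raw class scores into probabilities
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def error_distance(probs, target):
    # squared distance between the guessed and the optimal classification
    return sum((p - t) ** 2 for p, t in zip(probs, target))

def predict(weights, x):
    # one linear layer: each row of weights scores one class
    return softmax([sum(w * xi for w, xi in zip(row, x)) for row in weights])

def train_step(weights, x, target, lr=0.5, h=1e-5):
    # numerically estimate the error gradient, then take a small downhill step
    base = error_distance(predict(weights, x), target)
    updated = [row[:] for row in weights]
    for i, row in enumerate(weights):
        for j in range(len(row)):
            row[j] += h
            grad = (error_distance(predict(weights, x), target) - base) / h
            row[j] -= h
            updated[i][j] -= lr * grad
    return updated, base

random.seed(0)
weights = [[random.uniform(-0.1, 0.1) for _ in range(2)] for _ in range(3)]
x, target = [1.0, -0.5], [0.0, 1.0, 0.0]  # one toy input and its one-hot optimum
errors = []
for _ in range(50):
    weights, err = train_step(weights, x, target)
    errors.append(err)
print(round(errors[0], 3), "->", round(errors[-1], 3))
```

Because this toy error surface is smooth, each small parameter adjustment reliably lowers the error distance; the "getting stuck" discussed next corresponds to surfaces where such downhill steps stop helping.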
Then the gradient flattened out, so that adjustments of the parameters either did not change future error or increased future error.\n\nFrom that stuck location of parameters, there are no easily discovered altered parameters that follow a smoothly continuing path to a lower point on the error gradient. Other parameter combinations with better performance often exist. But there is no smoothly descending path on the error gradient from the current location to those better combinations.\n\nAn improved learning system means a system that smooths the gradient sufficiently, descending on the error gradient to the better locations. The recent revolutionary increase in the performance of deep learning networks arose from a variety of computational adjustments. Many of those adjustments were discovered by trial and error, simply finding that they worked well on real problems5,6.\n\nFor example, limiting the connection strength between nodes prevents dominance by a small set of pathways of connectivity. It seems that broad, densely connected networks that retain many pathways of connectivity have greater learning potential. In essence, a deep, densely and broadly connected network provides a robustly smoothed gradient.\n\nOther adjustments include the functions by which individual nodes combine inputs to determine output. No available theory describes exactly how to construct such functions. Again, trial and error has shown certain functions to work well. Most likely, those successful functions enhance the breadth of pathways that can adjust by small amounts in response to new information, again smoothing the gradient.\n\nNetwork architecture also affects performance. Architecture includes the number of layers of nodes and the manner in which nodes connect. Connections feed forward from inputs to outputs or feed back from later nodes toward earlier nodes. The feature detectors in the sensory input nodes set the initial representation of environmental states. 
The network generalizes that low-level representation as information passes through the network layers.\n\nPresumably, architecture and representation ultimately contribute to performance through better gradient smoothing. In a sense, better capacity to learn and better gradient smoothing are nearly the same thing. But the emphasis on gradient smoothing can be useful, because it calls attention to the mechanisms by which particular network properties may contribute to better performance.\n\nOver time, we may come to understand the mechanisms that improve performance and smooth gradients in deep learning networks. We can then consider how those advances in computational networks may provide insight into genomic network architecture, sensory representation, and the consequences for gradient smoothing.\n\nWe know that densely connected computational and biological neural networks perform spectacularly at learning, and that densely connected genomic networks perform spectacularly in terms of adjustability and evolvability. We are still trying to understand why.\n\n\nGeometry of induction\n\nThe spectacular performance of large densely wired networks hints at key underlying principles. I conclude by suggesting that large networks are particularly good at smoothing gradients in a way that facilitates induction. Before turning to induction, it is useful to consider deductive principles.\n\nControl theory deduces general principles of wiring to achieve particular design goals4. For example, simple feedback often keeps a system near a setpoint. The setpoint may be a fixed temperature or a fixed concentration. Deviation of the output from the setpoint is fed back to the system as an additional input to the controller. If the feedback signal tells the system that it is below its setpoint, the controller triggers increased output.\n\nMany examples of genomic wiring follow simple feedback15–17. 
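Simple feedback of the kind described, with deviation from the setpoint fed back as a corrective input to the controller, can be sketched as a discrete-time toy model. This is an illustration only; the gain and decay values are assumptions, not from the article:

```python
def simulate_feedback(setpoint, gain, decay=0.05, steps=60):
    # discrete-time sketch of simple negative feedback: the output decays each
    # step, and the controller adds production proportional to the deviation
    level = 0.0
    history = []
    for _ in range(steps):
        error = setpoint - level        # deviation fed back to the controller
        level += gain * error - decay * level
        history.append(level)
    return history

trace = simulate_feedback(setpoint=10.0, gain=0.6)
print(round(trace[-1], 2))  # → 9.23, settling just below the setpoint
```

The system converges to the fixed point gain·setpoint/(gain + decay); pure proportional feedback of this kind holds the output near, but not exactly at, the setpoint, which is why real controllers often add integral terms.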
Other classic control theory motifs also occur frequently in genomic wiring pathways18. The deductive theoretical principles of control successfully predict key aspects of genomic wiring.\n\nHowever, more complex challenges in engineering and in genomes often seem to be solved by deeply, densely wired networks. I call those networks overwired, in the sense that their connectivity patterns are much deeper, denser and broader than predicted by classical deductive principles.\n\nOverwired systems may have embedded within them feedback loops and other classic wiring motifs. But those motifs no longer act alone in a simply interpreted manner. Instead, they are enmeshed within such a large web of diffuse connectivity that it is often difficult to trace their particular effects and functions.\n\nWhy do some systems wire simply along classical deductive lines and other systems overwire? I have argued that overwired systems smooth gradients to allow adjustability and adaptability. Put another way, such networks can change in response to experience. A sequence of specific events can lead to improvement of future performance. The networks somehow use their specific experience to find general solutions to a challenge. The networks inductively use specific examples to learn general solutions.\n\nInductive improvement often requires a smooth gradient. Overwiring may be favored because it enhances the scope for small changes in parameters to descend smoothly along a gradient of decreasing error.\n\nThe problem is essentially geometric. How do topological changes in network architecture reshape the error gradient? How do particular bounds on connectivity parameters smooth the gradient? How do particular nodal transformations of inputs into outputs alter gradient shape? How do the input sensors and input representations change the error gradient and consequent inductive performance?\n\nInductive improvement occurs on various timescales. 
Over short periods of time, an organism may adjust its response to the environment by changing various parameters within its regulatory network. Over long periods of time, natural selection reshapes the design of the regulatory network. Both short-term adjustments and long-term changes in design arise inductively. Biological systems do not deduce principles. They inductively arrive at abstract representations of environmental challenges. They narrow the error distance along the geometric path of inductive improvement.\n\nMany biological regulatory networks are simple, following closely along classical deductive design principles. In those cases, inductive evolutionary processes discovered those simple deductive principles. Other biological networks are overwired, apparently tuned for inductive potential.\n\nFinal questions arise. What sorts of environmental challenges favor classically deductive wiring? What sorts of challenges favor inductive overwiring? What historical aspects of organismal evolution constrain network design? How can we relate deep learning solutions of engineering problems and genomic wiring solutions of biological problems to a more general geometric theory of induction?",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nNational Science Foundation (grant DEB–1251035) supports my research.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nReferences\n\nAlberts B, Johnson A, Lewis J, et al.: Molecular Biology of the Cell. Garland Science, New York, 6th edition, 2014. Reference Source\n\nPollard TD, Earnshaw WC, Lippincott-Schwartz J, et al.: Cell Biology. Elsevier, San Diego, 3rd edition, 2017. Reference Source\n\nWood AR, Esko T, Yang J, et al.: Defining the role of common variation in the genomic and biological architecture of adult human height. Nat Genet. 2014; 46(11): 1173–1186. PubMed Abstract | Publisher Full Text | Free Full Text\n\nOgata K: Modern Control Engineering. Prentice Hall, New York, 5th edition, 2009. Reference Source\n\nNielsen MA: Neural Networks and Deep Learning. Determination Press, 2015. Reference Source\n\nGoodfellow I, Bengio Y, Courville A: Deep Learning. MIT Press, Cambridge, MA, 2016. Reference Source\n\nLynch M: The Origins of Genome Architecture. Sinauer Associates, Sunderland, MA, 2007. Reference Source\n\nFernández A, Lynch M: Non-adaptive origins of interactome complexity. Nature. 2011; 474(7352): 502–505. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFrank SA: Genetic variation of polygenic characters and the evolution of genetic degeneracy. J Evol Biol. 2003; 16(1): 138–142. PubMed Abstract | Publisher Full Text\n\nFrank SA: Maladaptation and the paradox of robustness in evolution. PLoS One. 2007; 2(10): e1021. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFrank SA: Evolution of robustness and cellular stochasticity of gene expression. PLoS Biol. 2013; 11(6): e1001578. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBengio Y: Learning deep architectures for AI. Foundations and Trends in Machine Learning. 2009; 2(1): 1–127. 
Publisher Full Text\n\nGavrilets S: Fitness Landscapes and the Origin of Species. Princeton University Press, Princeton, NJ, 2004. Reference Source\n\nFrank SA: Natural selection. II. Developmental variability and evolutionary rate. J Evol Biol. 2011; 24(11): 2310–2320. PubMed Abstract | Publisher Full Text\n\nAlon U: An Introduction to Systems Biology: Design Principles of Biological Circuits. CRC press, Boca Raton, Florida, 2007. Reference Source\n\nIglesias PA, Ingalls BP: Control Theory and Systems Biology. MIT Press, Cambridge, MA, 2009. Reference Source\n\nCosentino C, Bates DG: Feedback Control in Systems Biology. CRC Press, Boca Raton, Florida, 2011. Reference Source\n\nAlon U: Network motifs: theory and experimental approaches. Nat Rev Genet. 2007; 8(6): 450–461. PubMed Abstract | Publisher Full Text"
}
|
[
{
"id": "23536",
"date": "03 Jul 2017",
"name": "Sean Nee",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis is a stimulating, original and thought-provoking and, so, I recommend publication. I see nothing incorrect, so no changes are requested by me as a referee.\nSome thoughts provoked in my mind are as follows.\nFirst: is engineering really as cut-and-dried as we suppose? A cursory reading of fly-by-wire disasters suggests that the elegant theorems of classical control theory may not be as powerful as one would wish. My impression is that engineers are acutely aware that new technologies such as \"cyber physical\" systems1, which are most akin to biological systems, are necessitating a complete rethink of the conceptual foundations of their subject matter. Even for more traditional technologies, it is not obvious to me that engineering is as purely “deductive” a subject as we might like to think as we board an aircraft, using the word “deductive” in a way that I may be misconstruing as Frank’s usage.\nSecond: to use Dawkin’s convenient metaphor, could a sighted watchmaker really design a “simpler” immune system, for example, than a blind one? If so, is that to do with historical aspects of evolution/population sizes/mutational spectra and so on? If so, what sort of science are we as biologists looking to create: one that says, for example: genome duplication events and large population sizes are responsible for … what? 
This would be restricting our thinking about evolution to providing explanations of the contrasting failings of different groups of creatures.\nMore interesting to me is what I believe Frank is suggesting: the blind watchmaker may have much to teach the sighted ones. This is particularly so in the case of Artificial Intelligences. These are of great interest as both biologists and engineers are only at the starting gate of understanding, and we are all dealing with the question of the design of systems which have a tiny number of component types – “neurons”. I expect to see a unification of psychology and AI engineering in the near future.\nTime will tell whether any notions that may have floated around in classical thinking about evolutionary genetics will advance this program.\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Yes\n\nAre arguments sufficiently supported by evidence from the published literature? Yes\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Yes",
"responses": [
{
"c_id": "2936",
"date": "14 Aug 2017",
"name": "Steven Frank",
"role": "Author Response F1000Research Advisory Board Member",
"response": "I thank Sean Nee for his thoughtful comments that extend the scope of the discussion. This exchange is included as part of the final publication of my article, so I confine my response to these comments.Italics quote from Nee’s review.First: is engineering really as cut-and-dried as we suppose? ... it is not obvious to me that engineering is as purely “deductive” a subject as we might like to think as we board an aircraft, using the word “deductive” in a way that I may be misconstruing as Frank’s usage.I certainly agree that engineering in practice works as much by trial and error as it does by an abstractly pure deduction. I had intended the labeling of engineering design as \"deductive\" primarily as a comparison with how evolutionary design by natural selection is relatively more \"inductive\" in character.Engineers do use deductive principles of control theory to aid in the design of control systems. When novel deductive understanding of control principles arises, engineers readily alter their approach to design, benefitting from the improved general insight. By contrast, natural selection is essentially a purely inductive process. That inductive process cannot throw out a past design and start over, but must improve only by layering small inductive gain upon small inductive gain. Second: to use Dawkin’s convenient metaphor, could a sighted watchmaker really design a “simpler” immune system, for example, than a blind one? ... More interesting to me is what I believe Frank is suggesting: the blind watchmaker may have much to teach the sighted ones. This is particularly so in the case of Artificial Intelligences.I meant the comparison in both of the ways that Nee discusses. A sighted watchmaker would make a different immune system from a blind watchmaker. Whether that different immune system of the sighted watchmaker would be simpler or better is hard to say. 
However, I suspect that it would be simpler because humans tend to design systems that they can analyze and understand, whereas blind induction does not care about the logic or the complexity of the mechanism.\n\nI agree with the latter aspect in Nee's comments: that the blind watchmaker provides new insights about design that the sighted watchmaker may consider. We are seeing this now in the great advances in artificial intelligence: the goal of the sighted watchmaker has become to improve the ways in which the blind watchmaker's trial and error induction proceeds."
}
]
},
{
"id": "23733",
"date": "24 Jul 2017",
"name": "David M. McCandlish",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe outsides of organisms are most often exquisitely, even ruthlessly, adaptive. Inside the organism’s body, the situation is more heterogeneous. Both physiology and the function of macromolecular complexes are in many instances technically stunning. On the other hand, the wiring diagram of the cell is bedlam. Even based on our current — and likely quite incomplete — state of knowledge, regulatory networks appear to be both more densely and more broadly interconnected than would seem necessary. This is surely a puzzle of modern biology, and Frank has done us a service by cataloging the live hypotheses and pointing us towards the possibility of a resolution wherein this “over-wiring” simply reflects general principles of inductive inference.\nMy main criticism of this article is that it does not engage sufficiently strongly with either the contemporary or historical literature. On the contemporary side, it would seem appropriate to directly address the ongoing efforts by several groups to formally link population genetics to general principles of inference (of course, Frank has contributed substantially in this area himself by clarifying the relationship between natural selection and information geometry, see e.g. Frank 2012 “Natural selection V. How to read the fundamental equations of evolutionary change in terms of information theory”). 
These efforts have been recently reviewed by Watson and Szathmary 2016 in a TREE piece “How can evolution learn?”, which hits on many of the same themes as the latter half of the current manuscript. Watson’s work in this area seems particularly relevant, and indeed he calls his theory “Evolutionary connectionism” (Watson et al. 2015). An important insight from this series of papers is a possible relationship between the evolutionary problem of evolvability and the statistical problem of overfitting. In particular, they suggest that pressure for developmental simplicity can improve the ability of evolutionary systems to generalize in a manner similar to how regularization, dropout, or early stopping can prevent over-fitting in machine learning (e.g. Kouvaris et al. 2017 “How evolution learns to generalize, using the principles of learning theory to understand the evolution of developmental organisation”). The idea that the topology of regulatory networks is a generic consequence of evolution by gene duplication (as in, e.g. the work of Ricard Solé), and more generally by the expansion of gene families, also seems like it deserves a mention as at least a possible proximal cause of over-wiring.\nOn the historical side, I think more could be done to link the current discussion with historical themes in evolutionary thought. For instance, the discussion about many possible genomic changes with small effects smoothing the gradient and allowing evolutionary optimization could be put in the context of Fisher and Wright’s disagreements over the structure of fitness landscapes. Wright thought that the reality of building a functional physiology would produce fitness landscapes with many local maxima, so that the key question in evolution was to identify the population-genetic regimes where progress on such a landscape is possible (Wright 1931, 1932). 
Fisher thought that in high dimensions, these local maxima would largely turn into saddle points, and that in any case, environments were generally changing fast enough that populations were usually chasing a moving optimum rather than adapting on a fixed fitness landscape (Fisher 1930). Frank’s discussion of “getting stuck” in the current manuscript provides additional nuance to this classical disagreement by emphasizing the possibility of extended, high-dimensional plateaus that, while strictly speaking saddle points, function in an evolutionary sense more like local optima. The reader interested in resolving this puzzle should also be directed to some of the stone-cold classics in this area such as Wagner and Altenberg 1996 and Stoltzfus 1999 (“On the possibility of constructive neutral evolution”).\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? Partly\n\nAre all factual statements correct and adequately supported by citations? Yes\n\nAre arguments sufficiently supported by evidence from the published literature? Yes\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Yes",
"responses": [
{
"c_id": "2937",
"date": "14 Aug 2017",
"name": "Steven Frank",
"role": "Author Response F1000Research Advisory Board Member",
"response": "I thank David McCandlish for his thoughtful comments and his broad knowledge of the literature. I have added an Appendix that includes the suggested references plus a few others. I agree that there is much prior work and relevant literature. In this series of articles, I am writing to a strict word limit and in a way that can be read by the beginning student as well as the advanced professional. In this case, I perhaps had limited my references too much. The Appendix should provide a start for those who wish to follow up in the literature."
}
]
}
] | 1
|
https://f1000research.com/articles/6-924
|
https://f1000research.com/articles/5-2333/v1
|
16 Sep 16
|
{
"type": "Research Article",
"title": "Revisiting inconsistency in large pharmacogenomic studies",
"authors": [
"Zhaleh Safikhani",
"Petr Smirnov",
"Mark Freeman",
"Nehme El-Hachem",
"Adrian She",
"Quevedo Rene",
"Anna Goldenberg",
"Nicolai J. Birkbak",
"Christos Hatzis",
"Leming Shi",
"Andrew H. Beck",
"Hugo J.W.L. Aerts",
"John Quackenbush",
"Benjamin Haibe-Kains",
"Zhaleh Safikhani",
"Petr Smirnov",
"Mark Freeman",
"Nehme El-Hachem",
"Adrian She",
"Quevedo Rene",
"Anna Goldenberg",
"Nicolai J. Birkbak",
"Christos Hatzis",
"Leming Shi",
"Andrew H. Beck",
"Hugo J.W.L. Aerts",
"John Quackenbush"
],
"abstract": "In 2013, we published a comparative analysis mutation and gene expression profiles and drug sensitivity measurements for 15 drugs characterized in the 471 cancer cell lines screened in the Genomics of Drug Sensitivity in Cancer (GDSC) and Cancer Cell Line Encyclopedia (CCLE). While we found good concordance in gene expression profiles, there was substantial inconsistency in the drug responses reported by the GDSC and CCLE projects. We received extensive feedback on the comparisons that we performed. This feedback, along with the release of new data, prompted us to revisit our initial analysis. Here we present a new analysis using these expanded data in which we address the most significant suggestions for improvements on our published analysis — that targeted therapies and broad cytotoxic drugs should have been treated differently in assessing consistency, that consistency of both molecular profiles and drug sensitivity measurements should both be compared across cell lines, and that the software analysis tools we provided should have been easier to run, particularly as the GDSC and CCLE released additional data.\n\nOur re-analysis supports our previous finding that gene expression data are significantly more consistent than drug sensitivity measurements. The use of new statistics to assess data consistency allowed us to identify two broad effect drugs and three targeted drugs with moderate to good consistency in drug sensitivity data between GDSC and CCLE. For three other targeted drugs, there were not enough sensitive cell lines to assess the consistency of the pharmacological profiles. We found evidence of inconsistencies in pharmacological phenotypes for the remaining eight drugs.\n\nOverall, our findings suggest that the drug sensitivity data in GDSC and CCLE continue to present challenges for robust biomarker discovery. 
This re-analysis provides additional support for the argument that experimental standardization and validation of pharmacogenomic response will be necessary to advance the broad use of large pharmacogenomic screens.",
"keywords": [
"drug sensitivity",
"cancer",
"pharmacogenomics",
"consistency",
"pharmacogenomic agreement"
],
"content": "\n\n\n\nIn 2013 we reported inconsistency in the drug sensitivity phenotypes measured by the Genomics of Drug Sensitivity in Cancer (GDSC) and the Cancer Cell Lines Encyclopedia (CCLE) studies. Here we revisit that analysis and address a number of potential concerns raised about our initial methodology:\n\nDifferent drugs should be compared based on the observed pattern of response. To address this concern, we considered drugs falling into three classes: (1) drugs with no observed activity in any of the cell lines; (2) drugs with sensitivity observed for only a small subset of cell lines; and (3) drugs producing a response in a large number of cell lines. For each class, we assessed the correlation in drug response between studies using a variety of metrics, selecting the metric that performed best in each individual comparison. While no metric identified any substantial consistency for the first class (sorafenib, erlotinib, and PHA−665752) due to no activity, judicious choice of metric found high consistency for three of eight highly targeted therapies in the second class (nilotinib, crizotinib, and PLX4720), but no metric found better than moderate correlation for two of four broad effect drugs in the third class (PD−0332901 and 17-AAG).\n\nMeasure of consistency for targeted drugs. Beyond considering drug response profiles, targeted drugs should be treated differently when assessing consistency. We used six different statistics to test consistency, using both continuous and discretized drug sensitivity data. We confirmed that Spearman rank correlation, used in our 2013 study, does not detect consistency for the three highly targeted therapies profiled by GDSC and CCLE. Other statistics, such as Somers' Dxy or Matthews correlation coefficient, yielded moderate to high consistency for specific drugs, but there was no single metric that found good consistency for each of the targeted drugs.\n\nConsistency of molecular profiles across cell lines. 
In our initial published analysis, we reported correlations based on comparing drug response “across cell lines” while gene expression levels were compared “between cell lines.” It has been suggested that it would be more appropriate to compute correlations “across cell lines” for both molecular and pharmacological data. Here we report a number of statistical measures of consistency for both gene expression and drug response compared across cell lines and confirm our initial finding that gene expression is significantly more consistent than the reported drug phenotypes.\n\nSome published biomarkers are reproducible between studies. In our initial comparative study we found that the majority of known biomarkers predictive of drug response are reproducible across studies. We extended the list of known biomarkers and found that seven out of 11 are significant in GDSC and CCLE. While one can find such anecdotal examples, they do not lead to a general process for discovering a new biomarker in one study that can be applied to another study.\n\nResearch reproducibility. The code we provided with our original paper was incompatible with updated releases of the GDSC and CCLE datasets. We developed PharmacoGx, which is a flexible, open-source software package based on the statistical language R, and used it to derive the results reported here.\n\n\nIntroduction\n\nThe goal of precision medicine is the identification of the best therapy for each patient and their own unique manifestation of a disease. This is particularly important in oncology where multiple cytotoxic and targeted drugs are available, but their therapeutic benefits are often insufficient or limited to a subset of cancer patients. 
Large-scale pharmacogenomics studies in which experimental and approved drugs are screened against panels of molecularly characterized cancer cell lines, have been proposed as a means for identifying drugs effective against specific cancers and for developing genomic biomarkers predictive of drug response. The Genomics of Drug Sensitivity in Cancer project (GDSC, referred to as the Cancer Genome Project [CGP] in our initial study)1, and the Cancer Cell Line Encyclopedia (CCLE)2 have each reported results of such screens, providing data on drug sensitivities and molecular profiles for collections of representative cancer cell lines.\n\nPresented with these two large studies, our hope was that we could use the data to identify new molecular biomarkers of drug response in one study that would predict response in the second. We3 and others4–6 reported difficulties in building and validating biomarkers of response using the GDSC and CCLE datasets, even when the analysis was limited to the drugs and cell lines screened in both studies. To understand the cause of this failure, we compared the gene expression profiles and the drug response data reported by the GDSC and CCLE7,8. We found that, although the gene expression data showed reasonable consistency between the two studies, the drug sensitivity measurements were surprisingly inconsistent. This inconsistency can be clearly seen by plotting drug response reported for each of the 15 drugs provided in both GDSC and CCLE for the 471 cell lines assayed by both studies7–10. Since the publication of our comparative analysis, we received a great deal of constructive feedback from the scientific community regarding multiple aspects of the analysis we reported, including suggestions for analytical methods that might uncover greater consistency between the studies. 
Moreover, both GDSC and CCLE have released new drug sensitivity and molecular profiling data, allowing us not only to revisit our initial analysis, but also to extend it using these new data.\n\nTo begin, we investigated alternative statistics to assess the inter-study consistency for drugs exhibiting different patterns of response across the collection of cell lines common to both studies. We then considered statistical methods for highly targeted drugs expected to be sensitive only in a subset of cell lines. We compared consistency estimates between continuous and discretized molecular features (gene expression, copy number variations and mutations) and drug sensitivity data, and importantly, assessed how potential discordance may affect the discovery of molecular features (biomarkers) predictive of drug response. We also revisited our analysis of consistency of molecular data between studies and evaluated “known biomarkers” of response expected to be predictive in these studies.\n\nThis extensive reanalysis found that by selecting specific statistical measures on a case-by-case basis, one can identify moderate to good consistency for two broad effect and three highly targeted therapies. However, overall, our results support our initial observations that drug sensitivity data in GDSC and CCLE are inconsistent for the majority of the drugs, even when considering metrics yielding the highest consistency for individual drugs. Our present analysis adds further evidence supporting the need for robust and standardized experimental pipelines to assure generation of comparable, biologically relevant measures of drug response as well as unbiased statistical and machine learning methods to better predict response. 
Failure to do so will continue to limit the potential for use of large-scale pharmacogenomic screens in reliable drug development and precision medicine applications.\n\n\nResults\n\nThe overall analysis design of our study is represented in Figure 1.\n\nGDSC: Genomics of Drug Sensitivity in Cancer; AE: ArrayExpress; Cosmic: Catalogue of Somatic Mutations in Cancer; CGHub: Cancer Genomics Hub; CCLE: Cancer Cell Line Encyclopedia.\n\nTo identify the largest set of cell lines and drugs profiled by both GDSC and CCLE, we used the PharmacoGx computational platform11, which is able to store, analyze, and compare curated pharmacogenomic datasets. We created curated datasets for the new releases of the GDSC (July 2015) and CCLE (February 2015) projects. The improved curation of new data using PharmacoGx11 identified 15 drugs in common between GDSC and CCLE, as well as 698 cell lines originating from 23 tissue types (Figure 2). This is the same number of shared drugs, but the updated datasets contain a larger number of common cell lines than the 471 reported in our previous analysis7.\n\nOverlap of (A) drugs, (B) cell lines and (C) tissue types.\n\nTo check the accuracy of cell line name matching, we compared single nucleotide polymorphism (SNP) fingerprints using data released in both studies. We first controlled for the quality of the SNP arrays and excluded 11 of 1,396 profiles due to low quality (see Methods). We then compared SNP fingerprints of cell lines with identical names, using > 80% as the threshold for concordance12,13. Consistent with the results reported by the CCLE2, the vast majority of cell lines had highly concordant fingerprints (462 out of 470 cell lines with SNP profiles available in both GDSC and CCLE; Dataset 1).
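As an illustration of this fingerprint check, the sketch below compares two toy genotype vectors and applies the > 80% concordance threshold. The genotype encoding (0/1/2 minor-allele copies) and the vectors are assumptions for illustration; the real comparison uses genome-wide SNP array calls and is implemented in R.

```python
# Illustrative sketch (not the pipeline code): concordance of two SNP
# fingerprints, each a list of genotype calls (0/1/2 copies of the minor
# allele), with profiles flagged as the same cell line when > 80% of
# calls agree.

def fingerprint_concordance(calls_a, calls_b):
    """Fraction of SNP genotype calls that agree between two profiles."""
    assert len(calls_a) == len(calls_b)
    matches = sum(a == b for a, b in zip(calls_a, calls_b))
    return matches / len(calls_a)

def same_cell_line(calls_a, calls_b, threshold=0.80):
    """Apply the > 80% concordance threshold used in the text."""
    return fingerprint_concordance(calls_a, calls_b) > threshold

# Toy example: two profiles agreeing at 9 of 10 SNPs are called concordant.
gdsc_calls = [0, 1, 2, 1, 0, 0, 2, 1, 1, 0]
ccle_calls = [0, 1, 2, 1, 0, 0, 2, 1, 1, 2]
concordant = same_cell_line(gdsc_calls, ccle_calls)  # 0.9 > 0.8
```

Profiles falling below the threshold are the candidates for mislabeling or contamination discussed next.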
We found eight cell lines with the same identifier but different SNP identity (Figure 3); these were removed from our subsequent analyses to avoid discrepancies due to the use of possibly mislabeled or contaminated cell lines.\n\nWe used the viability measures for each drug concentration in GDSC and CCLE to fit dose-response curves and assess their quality. An important factor influencing the fitting of drug dose-response curves is the range of concentrations used for each cell line/drug combination. In CCLE, all dose-response curves were measured at eight concentrations: 2.5×10^-3, 8×10^-3, 2.5×10^-2, 8×10^-2, 2.5×10^-1, 8×10^-1, 2.5, and 8 μM. However, in GDSC response was measured at a different set of concentrations for each drug. The minimum concentrations for different drugs range from 3.125×10^-5 to 15.625 μM. In each case, the concentrations tested by GDSC form a geometric sequence of nine terms with a common ratio of two between successive concentrations. Thus, the maximum concentration tested for each drug is 256 times the minimum concentration for that drug and ranges from 8×10^-3 to 4000 μM.\n\nTo properly fit drug dose-response curves, one must make multiple assumptions regarding the cell viability measurements generated by the pharmacological platform used in a given study. For instance, one assumes that viability ranges between 0% and 100% after data normalization and that consecutive viability measurements remain stable or decrease monotonically, reflecting response to the drug being tested. Quality controls were implemented to flag dose-response curves that strongly violate these assumptions (Supplementary Methods). We identified 2315 (2.9%) and 123 (1%) dose-response curves that failed to pass in GDSC and CCLE, respectively, as exemplified in Figure 4 (all noisy curves are provided in Supplementary File 1). We excluded these cases to avoid erroneous curve fitting.\n\nThe grey area represents the common concentration range between studies.
(A) JNS-62 cell line treated with 17-AAG; (B) LS-513 treated with nutlin-3; (C) HCC70 cell line treated with PD-0332991; and (D) EFM-19 cell line treated with PD-0325901.\n\nWe used least squares optimization to fit a three-parameter sigmoid model (Methods) for the drug dose-response curves in GDSC and CCLE (Supplementary File 2). For each fitted curve, we computed the two most widely used drug activity metrics: the area under the curve (AUC) and the drug concentration required to inhibit 50% of cell viability (IC50).\n\nWe began by computing the area between the two drug dose-response curves (ABC) to assess the consistency of cell viability data for each drug-cell line combination screened in both GDSC and CCLE using the common concentration range. ABC measures the difference between two drug dose-response curves by estimating the absolute area between these curves, which ranges from 0% (perfect consistency) to 100% (perfect inconsistency). The ABC statistic identified highly consistent (Figure 5A, B) and highly inconsistent (Figure 5C, D) dose-response curves between GDSC and CCLE. The mean of the ABC estimates for all drug-cell line combinations was 10% (Supplementary Figure 1A), with paclitaxel yielding the highest discrepancies (Supplementary Figure 1B).\n\nExamples of (A,B) consistent and (C,D) inconsistent drug dose-response curves in GDSC and CCLE. The grey area represents the common concentration range between studies. (A) COLO-320-HSR cell line treated with AZD6244; (B) HT-29 treated with PLX4720; (C) CAL-85-1 cell line treated with 17-AAG; and (D) HT-1080 cell line treated with PD-0332991.\n\nWe compared biological replicates in GDSC, which were performed independently at the Massachusetts General Hospital (MGH) and the Wellcome Trust Sanger Institute (WTSI). These experiments comprise 577 cell lines treated with AZD6482, a PI3Kβ inhibitor screened in GDSC (Supplementary File 3).
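To make these quantities concrete, the sketch below evaluates a three-parameter sigmoid (Hill) curve and computes an AUC-style sensitivity score and the ABC statistic in pure Python. The parameterization (EC50, HS, E_inf) and the convention that AUC is one minus the normalized area under the viability curve (so higher AUC means greater sensitivity) are assumptions for illustration; the actual implementation lives in the R package PharmacoGx.

```python
import math

def viability(conc, ec50, hs, e_inf):
    """Three-parameter sigmoid: fraction of viable cells at concentration `conc` (uM)."""
    return e_inf + (1.0 - e_inf) / (1.0 + (conc / ec50) ** hs)

def auc(concs, ec50, hs, e_inf):
    """1 - trapezoidal area under the viability curve over log10(conc), scaled to [0, 1]."""
    xs = [math.log10(c) for c in concs]
    ys = [viability(c, ec50, hs, e_inf) for c in concs]
    area = sum((ys[i] + ys[i + 1]) / 2 * (xs[i + 1] - xs[i]) for i in range(len(xs) - 1))
    return 1.0 - area / (xs[-1] - xs[0])

def abc(concs, params_a, params_b):
    """Absolute area between two dose-response curves over a shared range (0-100%)."""
    xs = [math.log10(c) for c in concs]
    d = [abs(viability(c, *params_a) - viability(c, *params_b)) for c in concs]
    area = sum((d[i] + d[i + 1]) / 2 * (xs[i + 1] - xs[i]) for i in range(len(xs) - 1))
    return 100.0 * area / (xs[-1] - xs[0])

# CCLE's eight tested concentrations (uM); curve parameters here are invented.
concs = [2.5e-3, 8e-3, 2.5e-2, 8e-2, 2.5e-1, 8e-1, 2.5, 8]
identical_abc = abc(concs, (0.1, 1.0, 0.2), (0.1, 1.0, 0.2))  # 0.0: perfect consistency
```

In the real analysis the two curves come from the GDSC and CCLE fits for the same drug-cell line pair, and the integral runs over the grey shared concentration window shown in the figures.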
We computed the ABC of these biological replicates and observed both highly consistent and inconsistent cases (Supplementary Figure 2). We then computed the median ABC values for each pair of drugs in GDSC and used these as a distance metric for complete linkage hierarchical clustering. We found that the MGH- and WTSI-administered AZD6482 experiments clustered together, suggesting that the differences between dose-response curves of biological replicates were smaller than the differences observed between different drugs (Supplementary Figure 3A). We performed the same clustering analysis by computing the ABC-based distance between all the drugs in GDSC and CCLE and observed that only three out of the 15 common drugs clustered tightly (17-AAG, lapatinib, and PHA−665752; Supplementary Figure 3B). Despite the small number of cell lines exhibiting sensitivity to PHA−665752 and lapatinib, these drugs closely clustered between GDSC and CCLE; however, this was not the case for other highly targeted therapies, such as AZD0530, nilotinib, crizotinib and TAE684 (Supplementary Figure 3B).\n\nAlthough the ABC values provide a measure of the degree of consistency between studies, it is the AUC and IC50 estimates, and their correlation with molecular features (such as mutational status and gene expression), that are commonly used to assess drug response. Therefore, we revisited our comparative analysis of the drug sensitivity data using the expanded data and the standardized methods implemented in our PharmacoGx platform. Using the same three-parameter sigmoid model to fit drug dose-response curves in GDSC and CCLE (see Methods), we recomputed AUC and IC50 values and observed very high correlation between published and recomputed drug sensitivity values for each study individually (Spearman > 0.93; Figure 6; Dataset 2).\n\n(A) AUC in GDSC; (B) AUC in CCLE; (C) IC50 in GDSC; and (D) IC50 in CCLE.
SCC stands for Spearman correlation coefficient.\n\nIt has been suggested that some of the observed inconsistencies between the GDSC and CCLE may be due to the nature of targeted therapies, which are expected to have selective activity against some cell lines10,14,15. This is a reasonable assumption, as the measured response in insensitive cell lines may represent random technical noise that one should not expect to be correlated between experiments. We therefore decided to clearly discriminate between highly targeted drugs with narrow growth inhibition effects and drugs with broader effects. We used the full GDSC and CCLE datasets to compare the variation of the drug sensitivity data of known targeted and cytotoxic therapies as classified in the original studies (Supplementary Figure 4). We observed that drugs can be classified into these two categories based on the median absolute deviation (MAD) of the estimated AUC values (Youden’s optimal cutoff16 of AUC MAD > 0.13 for cytotoxic drugs). We then used this cutoff on the common drug-cell line combinations in GDSC and CCLE to define three classes of drugs (Supplementary Figure 5):\n\nNo effect: Drugs with minimal observed activity (typically active in fewer than five sensitive cell lines with AUC > 0.2 or IC50 < 1 µM in either study). This class includes sorafenib, erlotinib and PHA−665752.\n\nNarrow effect: Highly targeted drugs with activity observed for only a small subset of cell lines (AUC MAD ≤ 0.13). This group includes nilotinib, lapatinib, nutlin-3, PLX4720, crizotinib, PD-0332991, AZD0530, and TAE684.\n\nBroad effect: Drugs producing a response in a large number of cell lines (AUC MAD > 0.13).
This includes AZD6244, PD-0325901, 17-AAG and paclitaxel.\n\nWe then compared the AUC (Figure 7, Supplementary Figure 6 and Supplementary Figure 7 for published AUC, recomputed AUC and AUC computed based on the common concentration range, respectively) and IC50 (Supplementary Figure 8 and Supplementary Figure 9) values and calculated the consistency of drug sensitivity data between studies using all common cases and only those that the data suggested were sensitive in at least one study (Figure 8 and Supplementary Figure 10 for AUC and IC50, respectively, and Dataset 3). Given that no single metric can capture all forms of consistency, we extended our previous study by using the Pearson correlation17, Spearman18, and Somers' Dxy19 rank correlation coefficients to quantify the consistency of continuous drug sensitivity measurements across studies (see Methods).\n\nFor cytotoxic drugs (paclitaxel), cell lines with AUC < 0.4 were considered insensitive, while for targeted therapies cell lines with AUC < 0.2 were considered insensitive (grey dashed lines). In case of perfect consistency, all points would lie on the grey diagonal.\n\nAs expected, no consistency was observed for drugs with “no effect” (Figure 8A). For the AUC of drugs with narrow and broad effects, Somers' Dxy was the most stringent, with consistency estimated to be < 0.4 except for two drugs (PD-0325901 and 17-AAG), which were also the two drugs identified as the most consistent using Spearman correlation (ρ ~ 0.6; Figure 8A). However, these statistics did not capture potential consistency for the most highly targeted therapies, nilotinib, crizotinib, and PLX4720, for which the Pearson correlation coefficient gave the best evidence of concordance, as this statistic is strongly influenced by a small number of highly sensitive cell lines (Figure 7).
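To show why these three statistics can disagree on the same data, minimal pure-Python versions are sketched below (toy AUC vectors, not the PharmacoGx implementations; the Somers' Dxy tie convention, dividing by pairs not tied on the first variable, is a simplification).

```python
from itertools import combinations

def pearson(x, y):
    """Pearson correlation: sensitive to a few extreme (highly sensitive) values."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def midranks(v):
    """Ranks with ties replaced by midranks."""
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0.0] * len(v)
    i = 0
    while i < len(v):
        j = i
        while j + 1 < len(v) and v[order[j + 1]] == v[order[i]]:
            j += 1
        for k in range(i, j + 1):
            r[order[k]] = (i + j) / 2 + 1
        i = j + 1
    return r

def spearman(x, y):
    """Spearman: Pearson correlation of the ranks."""
    return pearson(midranks(x), midranks(y))

def somers_dxy(x, y):
    """(concordant - discordant) pairs over pairs not tied on x (simplified)."""
    nc = nd = untied = 0
    for (xi, yi), (xj, yj) in combinations(zip(x, y), 2):
        if xi == xj:
            continue
        untied += 1
        s = (xi - xj) * (yi - yj)
        nc += s > 0
        nd += s < 0
    return (nc - nd) / untied

# Toy AUC values for the same four cell lines in two studies.
auc_a = [0.1, 0.2, 0.7, 0.8]
auc_b = [0.15, 0.1, 0.9, 0.6]
```

On this toy example Spearman sees only the two swapped rank pairs, while Somers' Dxy penalizes every discordant pair, which is why the two can rank the same drug differently.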
Our results concur with the recent comparative study published by the GDSC and CCLE investigators15.\n\n(A) Consistency assessed using the full set of cancer cell lines screened in both studies. (B) Consistency assessed using only sensitive cell lines (AUC ≥ 0.4 for broad effect drugs, and AUC ≥ 0.2 for drugs with narrow effects). (C) Consistency assessed by discretizing the drug sensitivity data using the aforementioned cutoffs for AUC. PCC: Pearson correlation coefficient; SCC: Spearman rank-based correlation coefficient; DXY: Somers’ Dxy rank correlation; MCC: Matthews correlation coefficient; CRAMERV: Cramer’s V statistic; INFORM: Informedness. The symbol '*' indicates whether the consistency is statistically significant (p < 0.05).\n\nWe then restricted our analysis to the cell lines identified as sensitive in at least one study and computed the same consistency measures (Figure 8B). To our surprise, eliminating the insensitive cell lines resulted in decreased consistency for most drugs, which suggests a high level of inconsistency across sensitive cell lines, the only exceptions being the highly targeted drugs nilotinib and crizotinib.\n\nTo test whether discretization of drug sensitivity data into binary calls (“insensitive” vs. “sensitive”; see Methods) improves consistency across studies, we used three association statistics: the Matthews correlation coefficient20, Cramer’s V21, and informedness22 (Figure 8C). These statistics are designed for use with imbalanced classes, which is particularly relevant in large pharmacogenomic datasets where, for targeted therapies, there are often many more insensitive cell lines than sensitive ones. As expected, the highly targeted therapies nilotinib and PLX4720 (and nutlin-3 using informedness) yielded a high level of consistency, but this was not the case for the other targeted therapies.
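For binary calls these three statistics reduce to simple functions of the 2×2 confusion table; in the two-class case Cramer's V is the absolute value of the phi (Matthews) coefficient. A hedged sketch with toy sensitivity calls (not our production code):

```python
def confusion(calls_a, calls_b):
    """2x2 confusion counts for binary sensitive (1) / insensitive (0) calls."""
    tp = sum(a and b for a, b in zip(calls_a, calls_b))
    tn = sum((not a) and (not b) for a, b in zip(calls_a, calls_b))
    fp = sum((not a) and b for a, b in zip(calls_a, calls_b))
    fn = sum(a and (not b) for a, b in zip(calls_a, calls_b))
    return tp, tn, fp, fn

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient; 0 if any margin is empty."""
    denom = ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5
    return (tp * tn - fp * fn) / denom if denom else 0.0

def cramers_v(tp, tn, fp, fn):
    """For a 2x2 table, Cramer's V equals |phi| = |MCC|."""
    return abs(mcc(tp, tn, fp, fn))

def informedness(tp, tn, fp, fn):
    """Sensitivity + specificity - 1 (Youden's J)."""
    return tp / (tp + fn) + tn / (tn + fp) - 1.0
```

Because all three are built from margins of the confusion table rather than raw accuracy, they remain interpretable when insensitive cell lines vastly outnumber sensitive ones.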
We also found that the drug sensitivity calls for drugs with broader inhibitory effects were poorly correlated between studies (Figure 8C).\n\nWe performed the same analysis using IC50 values truncated to the maximum concentration used for each drug in each study separately. We observed similar patterns, with nilotinib and crizotinib yielding moderate to high consistency across studies (Supplementary Figure 10). Note that Somers' Dxy rank correlation is biased in the presence of many repeated values in the datasets being analyzed, which is the case for truncated IC50: pairs of cell lines with identical IC50 values in one dataset but not in the other are not counted as evidence of inconsistency. This explains the artifactual perfect consistency it suggests for both nilotinib and crizotinib.\n\nDiscovering new biomarkers predictive of drug response requires both robust pharmacological data and molecular profiles. In our original study, we showed that the gene expression profiles for each cell line profiled by both GDSC and CCLE were highly consistent. However, we found that mutation profiles were only moderately consistent, a result that was later confirmed by Hudson et al.23.\n\nThere have been questions as to whether the measures of consistency we reported for drug response should be compared to those we reported for gene expression. Specifically, we reported correlations based on comparing drug response “across cell lines,” meaning that we examined the correlation of the response of each cell line to a particular drug reported by the GDSC with the response of the same cell line to the same drug reported by the CCLE. In contrast, we reported correlation of gene expression levels “between cell lines,” meaning that we compared the expression of all genes within each cell line in the GDSC to the expression of all genes in the same cell line in the CCLE (see Supplementary Methods).
It has been suggested that a more valid comparison would be to compare both drug response and gene expression across cell lines. We report the results of such an “across cell lines” analysis of gene expression here, computed using techniques analogous to those we used to compare drug response.\n\nWe began by comparing the distributions of gene expression measurements generated using the Affymetrix HG-U219 microarray platform in GDSC, and the Affymetrix HG-U133PLUS2 microarray platform and the new Illumina RNA-seq data in CCLE (Supplementary Figure 11). We observed similar bimodal distributions, suggesting the presence of a natural cutoff to discriminate between lowly and highly expressed genes. We therefore fit a mixture of two Gaussians and identified an expression cutoff for each platform separately (Supplementary Figure 11). We then compared the consistency of continuous and discretized gene expression values between (i) the Affymetrix HG-U133PLUS2 microarray and Illumina RNA-seq platforms within CCLE (intra-lab consistency); (ii) the Affymetrix HG-U219 and HG-U133PLUS2 microarray platforms used in GDSC and CCLE, respectively (microarray, inter-lab consistency); and (iii) the Affymetrix HG-U219 microarray and Illumina RNA-seq platforms used in GDSC and CCLE, respectively (inter-lab consistency). We performed a similar analysis for CNV log-ratios and observed high consistency across cell lines (Figure 9A). Supporting our previous observations, we found that CNV and gene expression measurements are significantly more consistent than drug sensitivity values when using all cell lines (Wilcoxon rank sum test p-value < 0.05; Figure 9A; Supplementary Figure 12A).\n\n(A) Consistency assessed using the full set of cancer cell lines screened in both studies. (B) Consistency assessed using only sensitive cell lines (AUC ≥ 0.4 for broad effect drugs, and AUC ≥ 0.2 for drugs with narrow effects). (C) Consistency assessed by discretizing the molecular and drug sensitivity data.
PCC: Pearson correlation coefficient; SCC: Spearman rank-based correlation coefficient; DXY: Somers’ Dxy rank correlation; MCC: Matthews correlation coefficient; CRAMERV: Cramer’s V statistic; INFORM: Informedness.\n\nSimilarly to the filtering we performed for drug sensitivity data, we subsequently restricted our analysis to the cell lines showing high expression of a given gene/cell line combination in at least one study. Again, CNV and gene expression measurements were significantly more consistent than drug sensitivity values in this case (Wilcoxon rank sum test p-value < 0.05; Figure 9B; Supplementary Figure 12B). When dichotomizing the data into lowly/highly expressed, amplified/deleted, wild-type/mutated, and insensitive/sensitive calls, the CNV and gene expression data were still more consistent (Figure 9C), although the difference was not always significant (Supplementary Figure 12C). Concurring with the report of Hudson et al.23, we observed low consistency for mutation calls across cell lines (Figure 9C).\n\nThe primary goal of the GDSC and CCLE studies was to identify new genomic predictors of drug response for both targeted and cytotoxic therapies. We therefore evaluated whether the good consistency in drug sensitivity data observed for nilotinib, PLX4720 and crizotinib, and the moderate consistency observed for 17-AAG and PD-0325901, would translate into reproducible biomarkers. We estimated gene–drug associations by fitting, for each gene and drug, a linear regression model including gene expression, CNV and mutations as predictors of drug sensitivity, adjusted for tissue source (see Methods). As illustrated in Figure 1, we used the molecular and pharmacological data generated independently in GDSC and CCLE to identify and compare gene-drug associations.
This approach prevents any information leak between the two datasets, which could lead to overoptimistic consistency between the studies, as in the recent comparative study published by the GDSC and CCLE investigators9. Given the high correlation between the published and recomputed AUC values in each study (Figure 6) and their similar consistency (Figure 9), all gene-drug associations were computed using published AUC for clarity.\n\nWe first computed the strength and significance of each expression-based gene-drug association in both datasets separately. Similarly to our initial study7, the strength of a given gene-drug association is provided by the standardized coefficient associated with the corresponding gene expression in the linear model, and its significance is provided by the p-value of this coefficient (see Methods). We then identified gene-drug associations that were reproducible in both datasets (same sign and False Discovery Rate [FDR] < 5%) or that were dataset-specific (different sign or significant in only one dataset) using continuous (Supplementary Figure 13 and Supplementary Figure 14 for common and all cell lines, respectively) and discretized (Supplementary Figure 15 and Supplementary Figure 16 for common and all cell lines, respectively) published AUC values as drug sensitivity data. We assessed the overlap of gene-drug associations discovered in both datasets using the Jaccard index24. All Jaccard indices were low, with nilotinib yielding the largest overlap of gene-drug associations (32%), followed by PD-0325901 and erlotinib (almost 20%), while the other drugs yielded less than 15% overlap (Supplementary Figure 17). Our results further indicate that a larger overlap exists for gene-drug associations identified using the continuous drug sensitivity data compared with associations using discretized drug sensitivity calls (Wilcoxon signed rank test p-values of 4×10^-2 and 2×10^-3 for the common set and the full set of cell lines, respectively).
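The Jaccard index used for this overlap comparison is the size of the intersection of the two sets of significant associations divided by the size of their union; a sketch with hypothetical gene symbols:

```python
def jaccard(a, b):
    """Jaccard index of two sets: |intersection| / |union| (0.0 for two empty sets)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical sets of significant gene-drug associations for one drug.
gdsc_hits = {"NQO1", "ABCB1", "MDM2", "EGFR"}
ccle_hits = {"NQO1", "MDM2", "ERBB2"}
overlap = jaccard(gdsc_hits, ccle_hits)  # 2 shared of 5 total -> 0.4
```

Note that the index penalizes both missed and dataset-specific associations, so even a drug with several shared hits can score low when each study also reports many private ones.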
We therefore focused our analyses on the gene-drug associations identified using continuous published AUC values. The number (and identity) of gene-drug associations computed using continuous published AUC values are provided in Supplementary Table 1 and Supplementary Table 2 (Dataset 5 and Dataset 6) for common and all cell lines, respectively.\n\nGiven that simply intersecting significant gene-drug associations identified in each dataset separately yielded poor reproducibility for all drugs, we sought to more closely mimic the biomarker discovery and validation process. We therefore used one dataset to discover significant gene-drug associations and tested whether this subset of markers was validated in an independent dataset. Using the discovery dataset, gene-drug associations are first ranked by nominal p-values and their FDR is computed. An association is selected if it is part of the top 100 markers and its FDR is less than 5%. This procedure controls both the significance and the number of selected biomarkers, which can vary with respect to the cell line panel used for the analysis (larger panels enable the identification of more significant biomarkers due to increased statistical power). A gene-drug association is validated in an independent dataset if its nominal p-value is less than 0.05 and its “direction”, that is, whether the marker is associated with sensitivity or resistance, is identical to the one estimated during the discovery process.\n\nWe computed the proportions of validated gene-drug associations for each drug using gene expression data in GDSC as the discovery set and CCLE as the validation set, and vice versa (Figure 10). Overall, we found that biomarkers for PD-0325901 and nilotinib yielded a high validation rate (> 80%) with either dataset as discovery set using the common cell lines screened in GDSC and CCLE (Figure 10A).
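The selection rule just described (top 100 by nominal p-value with FDR < 5% in the discovery set, then validation at p < 0.05 with matching sign) can be sketched as follows. The Benjamini-Hochberg FDR adjustment and the toy per-gene inputs (p-value, sign) are illustrative assumptions, not our production code.

```python
def bh_fdr(pvals):
    """Benjamini-Hochberg adjusted p-values (monotone step-up)."""
    n = len(pvals)
    order = sorted(range(n), key=lambda i: pvals[i])
    adj = [0.0] * n
    running_min = 1.0
    for pos in range(n - 1, -1, -1):
        i = order[pos]
        running_min = min(running_min, pvals[i] * n / (pos + 1))
        adj[i] = running_min
    return adj

def discover(results, fdr_cutoff=0.05, top_k=100):
    """Select associations in the top `top_k` by p-value whose FDR is below the cutoff.

    `results` maps gene -> (nominal p-value, sign of the standardized coefficient).
    """
    genes = list(results)
    adj = bh_fdr([results[g][0] for g in genes])
    ranked = sorted(genes, key=lambda g: results[g][0])[:top_k]
    return {g for g in ranked if adj[genes.index(g)] < fdr_cutoff}

def validated(gene, discovery, validation, alpha=0.05):
    """Validated if nominal p < alpha and the direction matches the discovery set."""
    p, sign = validation[gene]
    return p < alpha and sign == discovery[gene][1]
```

Capping the list at 100 markers keeps the number of selected biomarkers comparable between panels of different sizes, since larger panels would otherwise pass many more associations at the same FDR.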
When using the entire cell line panels used in each study, two more drugs, lapatinib and erlotinib, yielded a high validation rate (Figure 10B). 17-AAG and PLX4720 yielded validation rates between 60% and 80%, while the other drugs yielded a validation rate around 50% or lower. For eight out of the fifteen drugs, using the entire panel of cell lines screened in each study (Figure 10B) improved the validation rate compared to limiting the analysis to common cell lines (Figure 10A). However, the validation rate decreased for five other drugs, suggesting that using large, but different, panels of cell lines may increase statistical power but could also introduce biases in the biomarker discovery process.\n\nIn blue and red are the gene-drug associations identified in GDSC and CCLE, respectively. Associations are identified using gene expression data as input and (A) continuous published AUC values as output in a linear model using only common cell lines or (B) all cell lines. The number of selected gene-drug associations in each dataset is provided in parentheses. The symbol '*' represents the significance of the proportion of validated gene-drug associations, computed as the frequency of 1000 random subsets of markers of the same size having an equal or greater validation rate compared to the observed rate.\n\nWe then investigated whether higher validation rates would be obtained by using more stringent significance thresholds and relaxing the constraint on the number of significant associations in the discovery set (Supplementary Figure 18 and Supplementary Figure 19). Using common cell lines, we found that the proportion of validated gene-drug associations increases monotonically with FDR stringency for six drugs, with a very high validation rate at the most stringent FDR cutoff (validation rate > 80% for FDR < 0.1%) for 17-AAG, PD-0325901, PLX4720 and nilotinib using either dataset as the discovery set (Supplementary Figure 18).
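The permutation test behind the '*' symbols draws random marker subsets of the same size and counts how often they validate at least as well as the selected markers. A sketch under assumed toy inputs (the marker universe, validation outcomes, and fixed seed are invented for illustration; 1000 permutations as in the text):

```python
import random

def validation_rate(markers, is_validated):
    """Fraction of a marker set that validates in the independent dataset."""
    return sum(is_validated[m] for m in markers) / len(markers)

def permutation_pvalue(selected, all_markers, is_validated, n_perm=1000, seed=0):
    """Frequency of random same-size subsets validating at least as well as `selected`."""
    rng = random.Random(seed)  # fixed seed for a reproducible sketch
    observed = validation_rate(selected, is_validated)
    hits = 0
    for _ in range(n_perm):
        subset = rng.sample(all_markers, len(selected))
        hits += validation_rate(subset, is_validated) >= observed
    return hits / n_perm
```

A small p-value indicates that the discovered markers validate more often than size-matched random gene sets would by chance.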
Using the entire panel of cell lines in each study improved the validation rate for six drugs: AZD6244, TAE684, AZD0530 and lapatinib, as well as erlotinib and sorafenib, for which an insufficient number of sensitive cell lines was screened in both GDSC and CCLE (Supplementary Figure 19). However, the validation rate decreased for 17-AAG, crizotinib and PLX4720, which suggests again that large, but different, panels of cell lines might introduce selection bias for some drugs.\n\nAs reported in the original GDSC1 and CCLE2 publications and in recent reports10,14,15, several known biomarkers for targeted therapies have been shown to be predictive in both GDSC and CCLE. In our initial comparative study we also found the following known gene-drug associations:\n\nBRAF mutations were significantly associated with sensitivity to MEK inhibitors (AZD6244 and PD-0325901) and the BRAFV600E inhibitor (PLX4720), with nominal p-values < 0.01; see Supplementary File 10–Supplementary File 13 of our initial study.\n\nERBB2 expression was significantly associated with sensitivity to lapatinib, with nominal p-values = 0.04 and 8.4×10^-15 for GDSC and CCLE, respectively; see Supplementary File 4 and Supplementary File 5 of our initial study.\n\nNQO1 expression was significantly associated with sensitivity to 17-AAG, with nominal p-values = 2.4×10^-13 and 6.2×10^-14 for GDSC and CCLE, respectively; see Supplementary File 4 and Supplementary File 5 of our initial study.\n\nMDM2 expression was significantly associated with sensitivity to nutlin-3, with nominal p-values = 7.7×10^-18 and 7×10^-8 for GDSC and CCLE, respectively; see Supplementary File 4 and Supplementary File 5 of our initial study.\n\nALK expression was significantly associated with sensitivity to TAE684, with nominal p-values = 1.6×10^-9 and 1.7×10^-9 for GDSC and CCLE, respectively; see Supplementary File 4 and Supplementary File 5 of our initial study.\n\nWe revisited our biomarker analysis using the new data released by GDSC and CCLE to test
whether additional known biomarkers can be identified. In addition to the expression-based gene-drug associations reported in Dataset 6, we recomputed all gene-drug associations based on mutations (Dataset 7) and gene fusions using the entire panel of cell lines in each study. We confirmed the reproducibility of the known associations reported in our initial study, but we were not able to find reproducible associations for EGFR mutations with response to AZD0530 and erlotinib, or for HGF expression with response to crizotinib (Table 1). The reproducibility of the majority of these previously known associations attests to the relevance of the GDSC and CCLE datasets, although our results demonstrate that the noise and inconsistency in drug sensitivity data render the discovery of new biomarkers difficult for the majority of the drugs.\n\nGene-drug associations were estimated using the full panel of cell lines and AUC as the measure of drug sensitivity.\n\n\nDiscussion\n\nOur original motivation in analyzing the GDSC and CCLE data was to develop predictive gene expression biomarkers of drug response. When we applied a number of methods using one study to select gene expression features and to train a classifier, and then applied it to predict reported drug response in the second study, our predictive models failed to validate for half of the drugs tested3. Indeed, out of nine predictors yielding a concordance index25 ≥ 0.65 in cross-validation in the training set (GDSC), only four were validated in identical cell lines treated with the same drugs in the validation set (CCLE)3.\n\nAs we explored the reasons for this failure, we first checked whether cell lines could have drifted and consequently exhibited different transcriptional profiles between GDSC and CCLE. We found that any genome-wide expression profile in one study would almost always identify “itself” (its purported biological replica) as being most similar among the cell lines in the other study.
In a way this is not surprising. When gene expression studies were in their infancy, there were many reports that compared the results from studies and found them inconsistent and unreproducible in new studies, as demonstrated by the countless microarray signatures that failed to reproduce beyond their initial publication. As a result, scientists involved in gene expression studies “circled the wagons” and developed both much more standardized laboratory protocols and “best practices” for reproducible analysis, including data normalization and batch correction, which now mean that independent measurements from different laboratories are far more often consistent and so can be used for signature development and validation26,27.\n\nUnexpectedly, when we compared phenotypic measures of drug response that were released by the GDSC and CCLE projects, we found discrepancies in the growth inhibition effects of multiple anticancer agents. What that means in practice is that, for some drugs, a molecular biomarker of drug response learned from one study would not likely be predictive of the reported response in the other. Consequently, neither of the studies might be useful in predicting response in patients, as many had hoped when these large pharmacogenomic screens were published.\n\nThe feedback from the scientific community on our analysis, the availability of new data from the GDSC and CCLE, as well as improvements in the PharmacoGx software platform we developed to support this type of analysis11, prompted us to revisit the question of consistency in these studies to see if we could find a principled way to identify correlated drug response phenotypes. By testing a variety of methods of classifying the data, and choosing the metric that gave the best consistency for each drug, we were able to find moderate to good consistency of sensitivity data for two broad effect and three highly targeted drugs.
We also confirmed the overall lack of consistency between the studies for eight drugs, while there were not enough sensitive cell lines that had been screened by both GDSC and CCLE to properly assess consistency for the remaining three drugs. The summary box included with this paper briefly describes the most significant issues that people have raised in discussing our previous findings with us and summarizes what we have found in our reanalysis.\n\nSome have suggested that one way to improve correlation would have been to compare the studies and throw out the most discordant data as noise and then compare the remaining concordant data. While this would certainly find concordance in the remaining data, the approach is equivalent to fitting data to a desired result, which is bad practice and certainly could not be extended to other data sets or to the classification of patient tumors as responsive or nonresponsive to a particular therapy.\n\nThere is, however, merit in the suggestion that one would not expect to see correlation in noise. And noise is precisely what one would expect to see in drug response data from cell lines that are resistant to a particular drug or nonresponsive across the range of doses tested. As reported here, filtering the data in each study independently to classify cell lines in a binary fashion, and then comparing the binary classification between studies using a variety of metrics developed to handle the intricacies of this sort of response data, also failed to find simple correlations in the data, except for three of the highly targeted therapies, nilotinib, PLX4720 and crizotinib. 
What this ultimately means is that the most and the least sensitive cell lines would not appear to be the same when comparing the two studies.\n\nThere are many reasons for potential differences in the measured phenotypes reported by the GDSC and CCLE, including substantial differences in the doses used for each drug and in the methods used both to assay cell viability and to estimate drug response parameters. By comparing GDSC and CCLE with an independent pharmacogenomic dataset published by GlaxoSmithKline (GSK), we showed that higher consistency is achieved when the same pharmacological assay is used (GSK and CCLE used the CellTiter-Glo assay, while GDSC used Syto60)7,8. Genentech also used the CellTiter-Glo assay and observed higher consistency of drug sensitivity data with CCLE compared to GDSC10. The authors elegantly evaluated the impact of cell viability readout, growth medium, and seeding density. They observed only a weak impact of the choice of pharmacological assay, as their follow-up screen with the Syto60 assay clustered closer to their own CellTiter-Glo screen than to GDSC, suggesting that other parameters might have driven the inconsistency observed with GDSC10. They further showed that increased fetal bovine serum and seeding cell density had a systematic effect on mean cell viability. Pozdeyev et al. showed that restricting the computation of AUC to the concentration range shared between GDSC and CCLE, the equivalent of our AUC* drug sensitivity measure, yielded a small but statistically significant improvement in the consistency of pharmacological profiles28.
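Restricting the AUC computation to a shared concentration range can be sketched as follows; the dose grids and viability values below are invented for the example and are not GDSC, CCLE, or Pozdeyev et al. data.

```python
import numpy as np

def auc_over_range(conc, viability, lo, hi, n=200):
    """Area above the viability curve (i.e., drug effect) over the log10
    concentration window [lo, hi], normalized by the window length so the
    result lies in [0, 1]. Viability is linearly interpolated on a common
    log-concentration grid; a hand-rolled trapezoid rule is used."""
    grid = np.linspace(lo, hi, n)
    effect = 1.0 - np.interp(grid, np.log10(conc), viability)
    area = np.sum((effect[1:] + effect[:-1]) / 2 * np.diff(grid))
    return float(area / (hi - lo))

# Two hypothetical labs test the same cell line on different dose grids (µM):
conc_a = np.array([0.008, 0.04, 0.2, 1.0, 5.0])
via_a  = np.array([1.00, 0.95, 0.70, 0.40, 0.20])
conc_b = np.array([0.0025, 0.025, 0.25, 2.5, 25.0])
via_b  = np.array([1.00, 0.98, 0.72, 0.35, 0.15])

# Keep only the concentration window covered by both experiments.
lo = max(float(np.log10(conc_a[0])),  float(np.log10(conc_b[0])))
hi = min(float(np.log10(conc_a[-1])), float(np.log10(conc_b[-1])))
auc_a = auc_over_range(conc_a, via_a, lo, hi)
auc_b = auc_over_range(conc_b, via_b, lo, hi)
```

Because both AUCs are now computed over the same dose window, a disagreement between them reflects the measurements rather than the tested ranges.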
Ultimately, what our analysis and these recent reports suggest is that not only must drug sensitivity measurements be carefully and appropriately compared, but also that there is a pressing need for standardization of both laboratory and computational methods for assaying drug response.\n\nThe primary goal of the GDSC and CCLE studies was to link molecular features of a large panel of cancer cell lines to their sensitivity to cytotoxic and targeted drugs. The reproducibility of most of the known gene-drug associations provides evidence that these large pharmacogenomic datasets are biologically relevant. When we investigated whether we could find significant gene-drug associations discovered in one dataset that validate in the other independent dataset, we observed a validation rate of over 75% for the most significant molecular biomarkers for eight of 15 drugs, which is a major improvement over our initial comparative study. However, this does not suggest that one can use these studies to find new, reproducible gene-drug associations for the rest of the drugs (excluding paclitaxel and PHA-665752, for which no significant biomarkers could be identified), as the majority of associations can be found in only one dataset but not in both.\n\nThis study has several potential limitations. First, while the raw drug sensitivity data are publicly available for GDSC, these data have not been released within the CCLE study. We could not fit the drug dose-response curves using the technical triplicates but rather relied on the published median sensitivity values. Second, we discretized drug sensitivity values by selecting a common threshold to discriminate between insensitive (AUC ≤ 0.2 and IC50 ≥ 1 µM) and the rest of the cell lines for all the targeted agents.
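The binary discretization just described can be sketched as follows; the cytotoxic-drug cutoffs (AUC ≤ 0.4, IC50 ≥ 10 µM) are the ones given in the Methods, and the example values are invented.

```python
# Cutoffs defining "insensitive" cell lines; the AUC and IC50 cutoffs are
# alternative formulations of the same rule, as in the text.
CUTOFFS = {"targeted":  {"auc": 0.2, "ic50_uM": 1.0},
           "cytotoxic": {"auc": 0.4, "ic50_uM": 10.0}}

def is_insensitive(value, measure, drug_class):
    """Binary sensitivity call from a single measure ('auc' or 'ic50_uM').
    Low AUC means little drug effect; high IC50 means a high dose is
    needed for a half-maximal effect; both indicate insensitivity."""
    cut = CUTOFFS[drug_class][measure]
    return value <= cut if measure == "auc" else value >= cut

# Illustrative AUC values for a targeted agent:
calls = [is_insensitive(a, "auc", "targeted") for a in (0.05, 0.15, 0.35, 0.60)]
```

Any cell line not called insensitive is classified as sensitive, which is the binary phenotype compared between studies.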
However, it is clear that such a threshold could be optimized for each drug, which might have an impact on the consistency of drug phenotypes and gene-drug associations based on binary sensitivity calls (note that the same applies for molecular data as well). Unfortunately, the size of the current drug sensitivity datasets is not sufficient to develop drug-specific thresholds for sensitivity values, but the release of larger pharmacogenomic studies may allow us to address this issue in the near future. Lastly, the current set of mutations assessed in both studies is small (64 mutations), which drastically limits the search for mutation-based biomarkers and other genomic aberrations associated with drug response. The exome-sequencing data available within the new GDSC1000 dataset will make it possible to better explore the genomic space of biomarkers in cancer cell lines, and their reproducibility across studies.\n\n\nConclusion\n\nAs is true of many scientists working in genomics and oncology, we were excited when the GDSC and CCLE released their initial data sets and were hopeful that these projects would help to accelerate drug discovery and further the development of precision medicine in oncology. However, what we found initially, and what the reanalysis presented here further indicates, is that there are inconsistencies between the measured phenotypic responses to drugs in these studies. Even in our reanalysis, where we used methods specific to individual drugs and the response characteristics of the cell lines tested, we were only able to find new biomarkers predictive of response for around half of the drugs screened in both studies.
Consequently, it is challenging to use the data from these studies to develop general purpose classification rules for all drugs.\n\nOur finding that molecular profiles are significantly more consistent than drug sensitivity data indicates that the main barrier to biomarker development using these data is the unreliability of the reported response phenotypes for many drugs. For studies such as these to realize their full potential, additional work must be done to develop robust and reproducible experimental and analytical protocols so that the same compound, tested on the same set of cell lines by different groups, yields consistent and comparable results. Barring this, a predictive biomarker of response developed from one study is unlikely to be reliably validated on another and, consequently, is unlikely to be useful in predicting patient response.\n\nHaving worked on large-scale genomic analyses ourselves, we recognize the challenges involved in planning and executing such studies and commend the GDSC and CCLE for their work and for making all the data available. However, we strongly encourage the GDSC, the CCLE, and the pharmacogenomics and bioinformatics communities as a whole to invest the necessary time and effort to standardize drug response assays in order to achieve greater consistency and to ensure that measurements in cell lines are relevant for predicting response in patients. The recent report from Genentech is a significant step in this direction. Ultimately, that effort will help to ensure that mammoth undertakings in drug characterization can deliver on their promise to identify better therapies and biomarkers predictive of response.\n\n\nMethods\n\nThe lack of standardization of cell line and drug identifiers hinders comparison of molecular and pharmacological data between large-scale pharmacogenomic studies, such as the GDSC and CCLE.
To address this issue, we developed PharmacoGx, a computational platform enabling users to download and interrogate large pharmacogenomic datasets that were extensively curated to ensure maximum overlap and consistency11. PharmacoGx provides (i) a new object class, called PharmacoSet, that acts as a container for the high-throughput pharmacological and molecular data generated in large pharmacogenomics studies (detailed structure provided in Supplementary Methods); and (ii) a set of parallelized functions to assess the reproducibility of pharmacological and molecular data and to identify molecular features associated with drug effects. The PharmacoGx package is open-source and publicly available on Bioconductor.\n\nDrug sensitivity data. We used data release 5 (June 2014), with 6,734 new IC50 values, for a total of 79,903 drug dose-response curves for 139 different drugs tested on a panel of up to 672 unique cell lines. The data are accessible from ftp://ftp.sanger.ac.uk/pub4/cancerrxgene/releases/release-5.0/.\n\nMolecular profiles. Gene expression data were downloaded from ArrayExpress, accession number E-MTAB-3610. These new data were generated using the Affymetrix HG-U219 microarray platform. We processed and normalized the CEL files using RMA29 with a BrainArray30 chip description file based on Ensembl gene identifiers (version 19). This resulted in a matrix of normalized expression values for 17,616 unique Ensembl gene ids. SNP array data for the Genome-Wide Human SNP Array 6.0 platform were downloaded from GEO with the accession number GSE36139. We processed the raw CEL data using Affymetrix Power Tools (APT) v1.16.1. Copy number segments were generated using HAPSEG v1.1.131 based on RMA-normalized signal intensities and Birdseed v2-called genotypes. These segments were further refined using ABSOLUTE v1.0.632 to identify allele-specificity within each segment.
Mutation and gene fusion calls were downloaded from the GDSC website and processed as in our initial study7.\n\nDrug sensitivity data. We used the drug sensitivity data available from the CCLE website (https://portals.broadinstitute.org/ccle/data/browseData), updated in February 2015, with a total of 11,670 dose-response curves for 24 drugs tested in a panel of up to 504 cell lines.\n\nMolecular profiles. Gene expression data were downloaded from the CCLE website and CGHub33 for the Affymetrix HG-U133PLUS2 and Illumina HiSeq 2500 platforms, respectively. SNP array data were downloaded from EMBL-EBI with the accession number EGAD00010000644. Normalization of microarray data (1036 cell lines) and SNP array data (1190 cell lines) was performed in the same way as for GDSC. RNA-seq data (935 cell lines) were downloaded as BAM files previously aligned using TopHat34, and the quantification of gene expression was performed using Cufflinks34 based on the Ensembl GRCh37 human reference genome. Mutation data were retrieved from the CCLE website and processed as in our initial study7.\n\nThe lack of standardization of cell line names and drug identifiers represents a major barrier for performing comparative analyses of large pharmacogenomics studies, such as GDSC and CCLE. We therefore curated these datasets to maximize the overlap in cell lines and drugs by assigning a unique identifier to each cell line and drug. Entities with the same unique identifier were matched. Manual search was then applied to match any remaining cell lines or drugs that were not matched based on string similarity; annotations were consistently extracted from Cellosaurus35. The cell line curation was validated by ensuring that cell lines with matched names had a similar SNP fingerprint (see below).
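As a simple illustration of identifier-based matching, cosmetic variants of a cell line name can be collapsed to one key before comparing studies; the names and the `normalize_name` helper below are ours for the example, not the actual curation pipeline.

```python
import re

def normalize_name(name):
    """Map cosmetic variants of a cell line name (case, hyphens, spaces,
    other punctuation) to a single canonical key."""
    return re.sub(r"[^a-z0-9]", "", name.lower())

# Hypothetical study-specific spellings of the same three cell lines:
study_a = ["MCF-7", "NCI-H1092", "Kuramochi"]
study_b = ["MCF7", "NCIH1092", "KURAMOCHI"]
matches = {a: b for a in study_a for b in study_b
           if normalize_name(a) == normalize_name(b)}
```

Pairs that survive this kind of matching would still need to be validated independently, for example by SNP fingerprinting as described above.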
The drug curation was validated by examining the extended fingerprint of each of their SMILES strings36 and ensuring that the Tanimoto similarity37 between any two drugs called the same, as determined by this fingerprint, was above 0.95.\n\nTo assess the identity of cell lines from GDSC and CCLE, data of low quality were first excluded from our analysis panel (detailed procedure described in Supplementary Methods). Of the 973 CEL files from GDSC, only 66 (6.88%) fell below the 0.4 threshold for contrast QC scores, indicating issues in resolving base calls. Additionally, five of the 1,190 CEL files from CCLE had an absolute difference between contrast QC scores for Nsp and Sty fragments greater than 2, thus indicating some issues with the efficacy of one enzyme set during sample preparation. CEL files with contrast QC scores indicative of some sort of issue with the assay that would affect the genotype call rate or Birdseed accuracy were removed, and genotype calling was conducted on the remaining CEL files using Birdseed version 2. The resulting files were then filtered to keep only the 1006 SNP fingerprints that originated from CEL files that had a common cell line annotation between GDSC and CCLE (503 CEL files from each). Finally, pairwise concordances of all SNP fingerprints were generated according to the method outlined by Hong et al.12.\n\nTo identify artefactual drug dose-response curves due to experimental or normalization issues, we developed simple quality controls (QC; details in Supplementary Methods). Briefly, we checked whether normalized viability measurements range between 0% and 100% and that consecutive measurements remain stable or decrease monotonically, reflecting response to the drug being tested.
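The two viability checks just described can be sketched as follows; the noise tolerance value is our assumption for the example, not a threshold from the paper.

```python
def passes_qc(viability, tol=0.1):
    """QC for one dose-response curve. `viability` holds fractions in
    increasing-dose order; the curve passes if all values lie in [0, 1]
    and viability never rises by more than `tol` between consecutive
    doses (an illustrative allowance for measurement noise)."""
    in_range = all(0.0 <= v <= 1.0 for v in viability)
    monotone = all(b <= a + tol for a, b in zip(viability, viability[1:]))
    return in_range and monotone

good = passes_qc([1.0, 0.9, 0.6, 0.3, 0.1])
bad  = passes_qc([1.0, 0.5, 0.9, 0.3, 0.1])  # viability jumps back up
```

Curves failing either check would be flagged rather than force-fitted, since a fitted curve through such points is not meaningful.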
The drug dose-response curves that did not pass these simple QCs were flagged and removed from subsequent analyses, as the curve fitting would have yielded erroneous results.\n\nAll dose-response curves were fitted to the equation\n\ny(x) = E∞ + (1 − E∞) / (1 + (x/EC50)^HS)\n\nwhere y(x) denotes the viability at drug concentration x, y = 0 denotes death of all cells, y = y(0) = 1 denotes no effect of the drug dose, E∞ denotes the viability observed in the presence of an arbitrarily large concentration of drug, EC50 is the concentration at which viability is reduced to half of the viability observed in the presence of an arbitrarily large concentration of drug, and HS is a parameter describing the cooperativity of binding. HS < 1 denotes negative binding cooperativity, HS = 1 denotes noncooperative binding, and HS > 1 denotes positive binding cooperativity. The parameters of the curves were fitted using the least squares optimization framework. Comparison of our dose-response curve model with those used in the GDSC and CCLE publications is provided in Supplementary Methods.\n\nDrug sensitivity data. To discretize the drug sensitivity data, we used AUC ≤ 0.2 (IC50 ≥ 1 µM) and AUC ≤ 0.4 (IC50 ≥ 10 µM) to identify the “insensitive” cell lines for targeted and cytotoxic drugs, respectively, while the rest of the cell lines are classified as “sensitive”. These reasonable, although somewhat arbitrary, cutoffs enabled us to explore the potential of such binary drug sensitivity calls as new drug phenotypic measures to find consistency in drug sensitivity data and gene-drug associations.\n\nGene expression data. To discretize the gene expression data into lowly vs. highly expressed genes, we fit a mixture of two Gaussians of unequal variance using the full distribution of expression values of the 17,401 genes in common between the GDSC and CCLE datasets. We defined the expression threshold as the expression value for which the posterior probability of belonging to the high-expression distribution is 10%.\n\nMutation data.
Similarly to the GDSC and CCLE publications, we transformed the original mutation data into binary values that represent the absence (0) or presence (1) of any missense mutation in a given gene in a given cell line.\n\nWe assessed the association, across cell lines, between a molecular feature and response to a given drug, referred to as a gene-drug association, using a linear regression model adjusted for tissue source:\n\nY = β0 + βiGi + βtT\n\nwhere Y denotes the drug sensitivity variable, Gi and T denote the expression of gene i and the tissue source, respectively, and the βs are the regression coefficients. The strength of the gene-drug association is quantified by βi, above and beyond the relationship between drug sensitivity and tissue source. The variables Y and G are scaled (standard deviation equal to 1) to estimate standardized coefficients from the linear model. Significance of the gene-drug association is estimated by the statistical significance of βi (two-sided t test). When applicable, p-values were corrected for multiple testing using the FDR approach38.\n\nAs we recognized that continuous drug sensitivity is not normally distributed, which violates one of the assumptions of the linear regression model described above, we also assessed the consistency of gene-drug associations using discretized (binary) drug sensitivity calls as the response variable in a logistic regression model adjusted for tissue source, similarly to the linear regression model.\n\nArea between curves (ABC). To quantify the difference between two dose-response curves, we computed the area between curves (ABC). ABC is calculated by taking the unsigned area between the two curves over the intersection of the concentration ranges tested in the two experiments of interest, and normalizing that area by the length of the intersection interval. In the present study, we compared the curves fitted for the same drug-cell line combinations tested in both GDSC and CCLE.
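The curve model and the ABC measure can be sketched together in Python. The three-parameter Hill-type form and the parameter values below are illustrative, not fitted GDSC or CCLE curves; `e_inf`, the viability plateau at an arbitrarily large concentration, is our notation for the asymptote.

```python
import numpy as np

def hill(x, ec50, hs, e_inf):
    """Viability at concentration x: y(0) = 1, y -> e_inf for large x,
    hs controls the steepness (cooperativity) of the curve."""
    return e_inf + (1.0 - e_inf) / (1.0 + (x / ec50) ** hs)

def abc(p1, p2, lo, hi, n=500):
    """Unsigned area between two fitted curves over the shared log10
    concentration window [lo, hi], normalized by the window length
    (hand-rolled trapezoid rule)."""
    logx = np.linspace(lo, hi, n)
    diff = np.abs(hill(10.0 ** logx, *p1) - hill(10.0 ** logx, *p2))
    area = np.sum((diff[1:] + diff[:-1]) / 2 * np.diff(logx))
    return float(area / (hi - lo))

# Two hypothetical fits (ec50, hs, e_inf) of the same drug-cell line
# combination in two studies:
delta = abc((0.5, 1.0, 0.1), (2.0, 1.2, 0.2), lo=-3, hi=1)
```

ABC is zero only when the two fitted curves coincide over the shared concentration window, so it summarizes disagreement across the whole curve rather than at a single summary statistic such as IC50.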
Further details are provided in Supplementary Methods.\n\nPearson correlation coefficient (PCC). PCC is a measure of the linear correlation between two variables, giving a value between +1 and −1 inclusive, where 1 represents total positive correlation, 0 represents no correlation, and −1 represents total negative correlation17. PCC is sensitive to the presence of outliers, such as the few sensitive cell lines in drug sensitivity data measured for highly targeted therapies, or rarely expressed genes.\n\nSpearman rank correlation coefficient (SCC). SCC is a nonparametric measure of statistical dependence between two variables and is defined as the Pearson correlation coefficient between the ranked variables18. It assesses how well the relationship between two variables can be described using a monotonic function. If there are no repeated data values, a perfect Spearman correlation of +1 or −1 occurs when each of the variables is a perfect monotone function of the other. Contrary to PCC, SCC can capture non-linear relationships between variables and is insensitive to outliers, which are frequent in drug sensitivity data measured for highly targeted therapies or for rarely expressed genes.\n\nSomers’ Dxy rank correlation (DXY). DXY is a non-parametric measure of association equivalent to (C − 0.5) × 2, where C represents the concordance index25, that is, the probability that two variables will rank a random pair of samples the same way19.\n\nMatthews correlation coefficient (MCC). MCC20 is used in machine learning as a measure of the quality of classification predictions. It takes into account true and false positives and negatives, acting as a balanced measure which can be used when the classes are of different sizes.
MCC is in essence a correlation coefficient between two binary classifications; it returns a value between −1 (perfect opposite classification) and +1 (identical classifications), with 0 representing association no better than random chance.\n\nCramer’s V (CRAMERV). CRAMERV is a measure of association between two nominal variables, based on Pearson's chi-squared statistic, giving a value between 0 (no association) and +1 (perfect association)21. In the case of a 2×2 contingency table, such as binary drug sensitivity or gene expression measurements, CRAMERV is equivalent to the Phi coefficient.\n\nInformedness (INFORM). For a 2×2 contingency table comparing two binary classifications, INFORM can be defined as Specificity + Sensitivity − 1, which is equivalent to true positive rate − false positive rate22. The magnitude of INFORM gives the probability of an informed decision between the two classes, where INFORM > 0 represents appropriate use of information, INFORM = 0 represents a chance-level decision, and INFORM < 0 represents perverse use of information.\n\n\nData and software availability\n\nOpen Science Framework: Dataset: Revisiting inconsistency in large pharmacogenomics studies, doi 10.17605/OSF.IO/CD8Z239\n\nData: The list of all the pharmacogenomic datasets available through the PharmacoGx platform can be obtained from R using the availablePSets() function from the R/Bioconductor library PharmacoGx.\n\nThe GDSC and CCLE PharmacoSets used in this study are available from pmgenomics.ca/bhklab/sites/default/files/downloads/ using the downloadPSet() function.\n\nCode: The R code necessary to replicate all the results presented in this article is available from the cdrug2 GitHub repository.
"appendix": "Author contributions\n\n\n\nZ Safikhani, P Smirnov, M Freeman, R Quevedo, N El-Hachem, and A She were responsible for downloading and curating the pharmacogenomic data. Z Safikhani wrote most of the analysis code with contributions of P Smirnov, M Freeman, and R Quevedo. Z Safikhani, J Quackenbush and B Haibe-Kains designed the study. B Haibe-Kains supervised the study. All authors participated in the interpretation of the results. Z Safikhani, A Goldenberg, N Juul Birkbak, C Hatzis, L Shi, A Beck, H Aerts, J Quackenbush and B Haibe-Kains participated in the manuscript writing.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nZ Safikhani was supported by the Cancer Research Society (Canada; grant #19271) and the Ontario Institute for Cancer Research through funding provided by the Government of Ontario. P Smirnov was supported by the Canadian Cancer Society Research Institute. C Hatzis was supported by Yale University. N Juul Birkbak was funded by The Villum Kann Rasmussen Foundation. L Shi was supported by the National High Technology Research and Development Program of China (2015AA020104), the National Natural Science Foundation of China (31471239), the 111 Project (B13016), and the National Supercomputer Center in Guangzhou, China. J Quackenbush was supported by grants from the NCI GAME-ON Cancer Post-GWAS initiative (5U19 CA148065) and the NHLBI (5R01HL111759). 
B Haibe-Kains was supported by the Canadian Institutes of Health Research, Cancer Research Society, Terry Fox Research Institute, and the Gattuso Slaight Personalized Cancer Medicine Fund at Princess Margaret Cancer Centre.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nThe authors would like to thank the investigators of the Genomics of Drug Sensitivity in Cancer (GDSC) and the Cancer Cell Line Encyclopedia (CCLE) who have made their invaluable data available to the scientific community. We thank the MAQC/SEQC consortium and the scientific community for their constructive feedback.\n\n\nSupplementary material\n\n(A) Histogram of ABC estimates for all common drug dose-response curves between GDSC and CCLE. (B) Boxes represent the median and interquartile range of ABC for drug-cell line combinations screened in GDSC and CCLE.\n\nExamples of (A,B) consistent and (C,D) inconsistent replicated experiments screening AZD6482 in GDSC. The grey area represents the common concentration range between studies. (A) NCI-H1092; (B) BPH-1; (C) A498; and (D) KURAMOCHI cell line treated with AZD6482.\n\n(A) Dendrogram of the clustering of all drugs in GDSC based on their mean ABC values. (B) Dendrogram of the clustering of all drugs in CCLE and GDSC based on their mean ABC values; overlapping drugs are shown in the same colour.\n\nComparison of median absolute deviation (MAD) of published AUC values between cytotoxic and targeted drugs using all cell lines in (A) GDSC and (B) CCLE.\n\nComparison of median absolute deviation (MAD) of published AUC values between drugs using common cell lines in (A) GDSC and (B) CCLE.\n\nFor cytotoxic drugs (paclitaxel), cell lines with AUC < 0.4 were considered insensitive, while for targeted therapies cell lines with AUC < 0.2 were considered insensitive (grey dashed lines).
In case of perfect consistency, all points would lie on the grey diagonal.\n\nFor cytotoxic drugs (paclitaxel), cell lines with AUC* < 0.4 were considered insensitive, while for targeted therapies cell lines with AUC* < 0.2 were considered insensitive (grey dashed lines). In case of perfect consistency, all points would lie on the grey diagonal.\n\nFor cytotoxic drugs (paclitaxel), cell lines with IC50 ≥ 10 μM were considered insensitive, while for targeted therapies cell lines with IC50 ≥ 1 μM were considered insensitive (grey dashed lines). In case of perfect consistency, all points would lie on the grey diagonal.\n\nFor cytotoxic drugs (paclitaxel), cell lines with IC50 ≥ 10 μM were considered insensitive, while for targeted therapies cell lines with IC50 ≥ 1 μM were considered insensitive (grey dashed lines). In case of perfect consistency, all points would lie on the grey diagonal.\n\n(A) Consistency assessed using the full set of cancer cell lines screened in both studies. (B) Consistency assessed using only sensitive cell lines (IC50 < 10 μM for broad effect drugs, and IC50 < 1 μM for drugs with narrow effects). (C) Consistency assessed by discretizing the drug sensitivity data using the aforementioned cutoffs for IC50. PCC: Pearson correlation coefficient; SCC: Spearman rank-based correlation coefficient; DXY: Somers’ Dxy rank correlation; MCC: Matthews correlation coefficient; CRAMERV: Cramer’s V statistic; INFORM: Informedness. The symbol ’*’ indicates whether the consistency is statistically significant (p < 0.05).\n\nEach cell in the matrix represents the p-value (coded by colour) for a given pairwise comparison of consistency estimates. For instance, consistency of gene expression data is statistically significantly higher than consistency of drug sensitivity data.
GE.CCLE.ARRAY.RNASEQ: Consistency between gene expression data generated using the Affymetrix HG-U133PLUS2 microarray and Illumina RNA-seq platforms within CCLE; GE.ARRAYS: Consistency between gene expression data generated using the Affymetrix HG-U133A and HG-U133PLUS2 microarray platforms in GDSC and CCLE, respectively; GE.ARRAY.RNASEQ: Consistency between gene expression data generated using the Affymetrix HG-U133A microarray and Illumina RNA-seq platforms in GDSC and CCLE, respectively; CNV: Consistency of copy number variation data between GDSC and CCLE; MUTATION: Consistency of mutation profiles between GDSC and CCLE; AUC.PUBLISHED: Consistency of AUC values as published in GDSC and CCLE; AUC.RECOMPUTED: Consistency of AUC values in GDSC and CCLE as recomputed using PharmacoGx; AUC.STAR: Consistency of AUC values in GDSC and CCLE as recomputed from the common concentration range using PharmacoGx; IC50.PUBLISHED: Consistency of IC50 values as published in GDSC and CCLE; IC50.RECOMPUTED: Consistency of IC50 values in GDSC and CCLE as recomputed using PharmacoGx.\n\nGene-drug associations are identified using gene expression data and continuous published AUC as input and output of a linear model, respectively. In case of perfect consistency, all points would lie on the grey diagonal.\n\nGene-drug associations are identified using molecular profiles including gene expression, mutation and copy number variation data and continuous published AUC as input and output of a linear model, respectively. In case of perfect consistency, all points would lie on the grey diagonal.\n\nGene-drug associations are identified using gene expression data and discretized published AUC as input and output of a linear model, respectively. Note that the small number of cell lines classified as “sensitive” did not allow for finding enough significant gene-drug associations for the majority of the drugs.
This is due to the lack of convergence of the logistic regression model when three or fewer cell lines are in one category.\n\nGene-drug associations are identified using gene expression data and discretized published AUC as input and output of a linear model, respectively. Note that the small number of cell lines classified as “sensitive” did not allow for finding enough significant gene-drug associations for PHA-665752 and sorafenib. This is due to the lack of convergence of the logistic regression model when three or fewer cell lines are in one category.\n\n’Continuous Common’ refers to the associations identified using continuous published AUC values on the common cell lines in GDSC and CCLE; ’Continuous All’ refers to the associations identified using continuous published AUC values on the entire panel of cell lines screened in each study; ’Binary Common’ refers to the associations identified using the discretized (binary) published AUC values on the common cell lines in GDSC and CCLE; ’Binary All’ refers to the associations identified using the discretized (binary) published AUC values on the entire panels of cell lines screened in each study.\n\nGene-drug associations are identified using gene expression data and continuous published AUC as input and output of a linear model, respectively. The symbol ’*’ represents the significance of the proportion of validated gene-drug associations, computed as the frequency of 1000 random subsets of markers of the same size having equal or greater validation rate compared to the observed rate.\n\nGene-drug associations are identified using gene expression data and continuous published AUC as input and output of a linear model, respectively.
The symbol ’*’ represents the significance of the proportion of validated gene-drug associations, computed as the frequency of 1000 random subsets of markers of the same size having equal or greater validation rate compared to the observed rate.\n\nThe proportion of associations that are dataset-specific or reproducible across GDSC and CCLE is provided in the last three columns. The column ’% Both’ reports the overlap of gene-drug associations between the two studies, as computed using the Jaccard index.\n\nThe proportion of associations that are dataset-specific or reproducible across GDSC and CCLE is provided in the last three columns. The column ’% Both’ reports the overlap of gene-drug associations between the two studies, as computed using the Jaccard index.\n\nSupplementary file 1.\n\nAll the noisy curves identified in GDSC and CCLE.\n\nSupplementary file 2.\n\nAll drug dose-response curves in common between GDSC and CCLE.\n\nSupplementary file 3.\n\nAll drug dose-response curves for replicated experiments using AZD6482 in GDSC.\n\nSupplementary methods.\n\n\nReferences\n\nGarnett MJ, Edelman EJ, Heidorn SJ, et al.: Systematic identification of genomic markers of drug sensitivity in cancer cells. Nature. 2012; 483(7391): 570–5. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBarretina J, Caponigro G, Stransky N, et al.: The Cancer Cell Line Encyclopedia enables predictive modelling of anticancer drug sensitivity. Nature. 2012; 483(7391): 603–7. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPapillon-Cavanagh S, De Jay N, Hachem N, et al.: Comparison and validation of genomic predictors for anticancer drug sensitivity. J Am Med Inform Assoc. 2013; 20(4): 597–602.
PubMed Abstract | Publisher Full Text | Free Full Text\n\nDong Z, Zhang N, Li C, et al.: Anticancer drug sensitivity prediction in cell lines from baseline gene expression through recursive feature selection. BMC Cancer. 2015; 15: 489. PubMed Abstract | Publisher Full Text | Free Full Text\n\nJang IS, Neto EC, Guinney J, et al.: Systematic assessment of analytical methods for drug sensitivity prediction from cancer cell line data. Pac Symp Biocomput. 2014; 63–74. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCortés-Ciriano I, van Westen GJ, Bouvier G, et al.: Improved large-scale prediction of growth inhibition patterns using the NCI60 cancer cell line panel. Bioinformatics. 2016; 32(1): 85–95. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHaibe-Kains B, El-Hachem N, Birkbak NJ, et al.: Inconsistency in large pharmacogenomic studies. Nature. 2013; 504(7480): 389–93. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHatzis C, Bedard PL, Birkbak NJ, et al.: Enhancing Reproducibility in Cancer Drug Screening: How Do We Move Forward? Cancer Res. 2014; 74(15): 4016–23. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSafikhani Z, El-Hachem N, Quevedo R, et al.: Assessment of pharmacogenomic agreement [version 1; referees: 3 approved]. F1000 Res. 2016; 5: 825. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHaverty PM, Lin E, Tan J, et al.: Reproducible pharmacogenomic profiling of cancer cell line panels. Nature. 2016; 533(7603): 333–7. PubMed Abstract | Publisher Full Text\n\nSmirnov P, Safikhani Z, El-Hachem N, et al.: PharmacoGx: an R package for analysis of large pharmacogenomic datasets. Bioinformatics. 2016; 32(8): 1244–6. PubMed Abstract | Publisher Full Text\n\nHong H, Xu L, Liu J, et al.: Technical reproducibility of genotyping SNP arrays used in genome-wide association studies. PLoS One. 2012; 7(9): e44483. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nYu M, Selvaraj SK, Liang-Chu MM, et al.: A resource for cell line authentication, annotation and quality control. Nature. 2015; 520(7547): 307–11. PubMed Abstract | Publisher Full Text\n\nGoodspeed A, Heiser LM, Gray JW, et al.: Tumor-derived Cell Lines as Molecular Models of Cancer Pharmacogenomics. Mol Cancer Res. 2015; 14(1): 3–13. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCancer Cell Line Encyclopedia Consortium, Genomics of Drug Sensitivity in Cancer Consortium: Pharmacogenomic agreement between two cancer cell line data sets. Nature. 2015; 528(7580): 84–7. PubMed Abstract | Publisher Full Text\n\nYouden WJ: Index for rating diagnostic tests. Cancer. 1950; 3(1): 32–5. PubMed Abstract | Publisher Full Text\n\nPearson K: Note on Regression and Inheritance in the Case of Two Parents. Proc R Soc Lond. 1895; 58: 240–2. Reference Source\n\nSpearman C: The proof and measurement of association between two things. By C. Spearman, 1904. Am J Psychol. 1987; 100(3–4): 441–71. PubMed Abstract\n\nSomers RH: A New Asymmetric Measure of Association for Ordinal Variables. Am Sociol Rev. 1962; 27(6): 799–811. Reference Source\n\nMatthews BW: Comparison of the predicted and observed secondary structure of T4 phage lysozyme. Biochim Biophys Acta. 1975; 405(2): 442–51. PubMed Abstract | Publisher Full Text\n\nCramér H: Mathematical Methods of Statistics. Princeton: Princeton University Press, 1946. Reference Source\n\nPowers DM: Evaluation: from Precision, Recall and F-measure to ROC, Informedness, Markedness and Correlation. 2011. Reference Source\n\nHudson AM, Yates T, Li Y, et al.: Discrepancies in cancer genomic sequencing highlight opportunities for driver mutation discovery. Cancer Res. 2014; 74(22): 6390–6. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nJaccard P: Etude comparative de la distribution florale dans une portion des Alpes et du Jura. Impr Corbaz; 1901; 37: 547–579. Reference Source\n\nHarrell FE Jr, Califf RM, Pryor DB, et al.: Evaluating the yield of medical tests. JAMA. 1982; 247(18): 2543–6. PubMed Abstract | Publisher Full Text\n\nMAQC Consortium, Shi L, Reid LH, et al.: The MicroArray Quality Control (MAQC) project shows inter- and intraplatform reproducibility of gene expression measurements. Nat Biotechnol. 2006; 24(9): 1151–61. PubMed Abstract | Publisher Full Text | Free Full Text\n\nShi L, Campbell G, Jones WD, et al.: The MicroArray Quality Control (MAQC)-II study of common practices for the development and validation of microarray-based predictive models. Nat Biotechnol. 2010; 28(8): 827–38. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPozdeyev N, Yoo M, Mackie R, et al.: Integrating heterogeneous drug sensitivity data from cancer pharmacogenomic studies. Oncotarget. 2016; 7(32): 51619–51625. PubMed Abstract | Publisher Full Text\n\nIrizarry RA, Hobbs B, Collin F, et al.: Exploration, normalization, and summaries of high density oligonucleotide array probe level data. Biostatistics. 2003; 4(2): 249–64. PubMed Abstract | Publisher Full Text\n\nde Leeuw WC, Rauwerda H, Jonker MJ, et al.: Salvaging Affymetrix probes after probe-level re-annotation. BMC Res Notes. 2008; 1: 66. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCarter SL, Meyerson M, Getz G: Accurate estimation of homologue-specific DNA concentration-ratios in cancer samples allows long-range haplotyping. Scott L Carter. 2011; 59. Reference Source\n\nCarter SL, Cibulskis K, Helman E, et al.: Absolute quantification of somatic DNA alterations in human cancer. Nat Biotechnol. 2012; 30(5): 413–21. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nWilks C, Cline MS, Weiler E, et al.: The Cancer Genomics Hub (CGHub): overcoming cancer through the power of torrential data. Database (Oxford). 2014; 2014: pii: bau093. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTrapnell C, Roberts A, Goff L, et al.: Differential gene and transcript expression analysis of RNA-seq experiments with TopHat and Cufflinks. Nat Protoc. 2012; 7(3): 562–78. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBairoch A: ExPASy - Cellosaurus [Internet]. Cellosaurus. 2015. [cited 2016 Jan 26]. Reference Source\n\nAnderson E, Veith GD, Weininger D: SMILES, a Line Notation and Computerized Interpreter for Chemical Structures. 1987. Reference Source\n\nTanimoto TT: An Elementary Mathematical Theory of Classification and Prediction. International Business Machines Corporation; 1958. (Internal Technical Report). Reference Source\n\nBenjamini Y, Hochberg Y: Controlling the False Discovery Rate: A Practical and Powerful Approach to Multiple Testing. J R Stat Soc Series B Stat Methodol. 1995; 57(1): 289–300. Reference Source\n\nSafikhani Z, Smirnov P, Freeman M, et al.: Dataset: Revisiting inconsistency in large pharmacogenomics studies. Open Science Framework. 2016. Data Source"
}
|
[
{
"id": "16370",
"date": "03 Nov 2016",
"name": "Michael T. Hallett",
"expertise": [],
"suggestion": "Not Approved",
"report": "Not Approved\n\nThis manuscript seeks to compare two large pharmacogenomics datasets (several hundred cancer cell lines screened against 15 common drugs) and evaluate their level of agreement via (1) the drug sensitivity values and (2) gene expression profiles of the cell lines. Broadly speaking, the value of these profiles is the discovery of (gene) biomarkers that could predict response of cells to the drugs. Previous efforts, including the authors' previous attempts, have had trouble with reproducibility. The authors have previously given harsh critiques regarding the reproducibility of the two datasets.\nThis manuscript is very important, and it has the potential to dissect sources of both agreement and disagreement that can be amplified or minimized in the future respectively. The reviewer also has little doubt that there is in fact disagreement between these two datasets and, moreover, it is significant enough to interfere with the discovery of biomarkers. The reviewer also agrees with the authors that this is important to point out and understand, and the \"call to arms\" in the Discussion (the best written part of the manuscript) should certainly be listened to.\nHowever, because this manuscript is very important, the cornerstones of the comparative analysis must be correct. 
The Supplemental Methods are near impossible to decipher and are littered with undefined terms, confused mathematical notation, poor equation formatting, and non-intuitive statements that do not assist the reader with understanding the numerous design choices (from throwing out poor quality data, to model fitting, to different measures of consistency). The Results section is not sufficiently methodical to follow the argument of what does or does not represent ***statistically significant*** disagreement. Almost every paragraph until the conclusion presented serious challenges to this referee. They are included below.\nThis is an important effort and the authors should return with an improved manuscript. Many of the co-authors are skilled mathematicians and they are strongly recommended to revisit every line of this manuscript to ensure correctness and to present with craftsmanship. This is especially true in the Supplementary Methods that actually provide the \"meat\" of the methodology: this I believe must have been an oversight with this submission.\n\nThis manuscript needs to be published and I believe there are important lessons to be learnt here but there has to be a more focused, tighter argument to establish where there is disagreement and hypotheses as to why (in the Discussion) and what can be done about it. But the main issue here is that the basics of the paper are not solid, or at least they cannot be evaluated. The authors should be commended for the effort to be reproducible (in the sense they give the list of R packages used and their code) but that is only one aspect. The mathematics and statistics require clarity and correctness. 
Terms such as \"metric\" should be used properly, and novel equations that are derived (e.g. their \"E\" parameter) must be done so in a careful correct manner, with attempts made to justify these parameters (e.g. \\epsilon, \\rho, 2*\\epsilon, the $E$ parameter from the modified fit, etc.).\nI would be happy to view a revised version of the manuscript and I hope that my comments aid in this important project.\npg5: What is “Dataset 1”? The link here doesn’t lead anywhere that I can tell.\nFigure 3. I’m not really sure what the value is in plotting the density functions for the mismatched and matched cell lines. First, wouldn’t one density function suffice with a threshold I guess? Second, do you really need it at all?\nIn the Methods “Cell line identity….”, it is stated that 66 samples fell below threshold with a reference to the Supplementary methods. However I don’t see anything in the supplementary methods that discusses this. Moreover in the text, it seems that you threw 8 cases away. This is confusing.\npg 5. I’m not sure what you mean by “remain stable or decrease monotonically”? Do you just mean “monotonically non-increasing”?\nPlease see comment regarding “Filtering of drug dose-response curves” from Supp Methods below. I think this really needs to be reworked, and I have to trust you guys here that you are doing the right thing.\n“as exemplified…” depicted?\nIn Figure 4, is it possible to relate this back to the choice of \\rho, \\epsilon and 2 \\cdot \\epsilon from the Supp Methods, or perhaps integrate a version of this figure (but annotated) into the Supp Methods. In panel A, the grey area is a bit non-intuitive, no? I would say that post 0.03, it’s looking pretty good, and it’s not measured in GDSC after that. 
However the first points are off.\nI don’t know what \\epsilon or \\rho are so it’s hard to relate what is depicted in Figure 4 back to your model.\nWhen I look through Supplemental Figure 1 (all the excluded comparisons), it seems like your criterion for excluding a comparison boils down to cases where at least one of the curves has high variance, and the cutoff 2\\epsilon I think is a constant independent of the distribution of points for either curve. I don’t see in your equations how you encode that the sequence is monotonically non-increasing, or how “order” along the left to right sweep is incorporated.\n\nSupp Methods Filtering of drug dose-response curves\n\ni +1^{st} -> (i+1)^{st}\nIt seems like there is a formatting problem here. In my pdf there is something like a squiggle after “in some large fraction ??? of the cases” in (1). I guess that’s supposed to be \\rho right?\nI’m not sure I understand this sentence “Our quality control …” Are you saying that \\Delta_{i, i+1} < \\epsilon in some fraction \\phi of the cases?\nequation #2 below: I also have never seen set notation such as \\{ \\Delta_{i,i+1} | \\Delta_{i,i+1} < \\epsilon \\}. Are you trying to say “given all the \\Deltas that are less than \\epsilon”? So the vertical | means cardinality here right? But then the denominator has the cardinality of a value, or do you mean absolute value? What is \\rho? It’s undefined.\n“Unfortunately …” The English is a bit rough - could be rephrased in terms of specificity and sensitivity, I guess.\nI don’t understand the significance of the sentence “Consider, for instance, …” But in the main text, you say that it remains stable or decreases monotonically. Here it increases monotonically for many successive points, so this violates your model, no?\n\nI think that this subsection wants simply to spell out mathematically the thresholds and also provide some rationale for the parameters. 
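For concreteness, here is my reading of the intended filter, sketched in Python. The rule and both parameter values below are my own guesses from the text, not something the authors state in these terms:

```python
# Sketch (my reading, not the authors' code) of the curve filter:
# a dose-response curve, ordered by increasing concentration, is kept
# only if "most" successive steps are non-increasing up to a noise
# tolerance. The epsilon and rho defaults are assumed for illustration.

def is_noisy(viabilities, epsilon=0.05, rho=0.8):
    """Flag a curve as noisy if fewer than a fraction rho of successive
    steps satisfy viability[i+1] - viability[i] < epsilon."""
    steps = list(zip(viabilities, viabilities[1:]))
    ok = sum(1 for prev, curr in steps if curr - prev < epsilon)
    return ok / len(steps) < rho

# A cleanly decreasing curve is kept; a strongly fluctuating one is flagged.
print(is_noisy([1.0, 0.95, 0.7, 0.4, 0.2]))  # False: monotone, kept
print(is_noisy([1.0, 0.5, 1.1, 0.3, 0.9]))   # True: flagged as noisy
```

If something like this is indeed the rule, spelling it out this explicitly, with a justification for the two constants, would address most of my confusion above.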
I think the text doesn’t really do a good job of establishing this rationale and needs work. Perhaps define the parameters precisely and then phrase the exposition in standard terms, e.g. specificity and sensitivity for different \\epsilon, etc.\n\nYour equation (2) is inconsistent. In the text you specify \\Sigma_{\\forall i,j} \\Delta_{i,j} but below your criterion seems to change to (the correct) $i < j$. It is also sufficient to write \\Sigma_{i < j} \\Delta_{i,j} and avoid the double summation.\nYou should probably define D_i in the text and not make the reader deduce it from the figure below.\nThe comments here w.r.t. the Supplementary Methods also apply to the associated subsection of the Methods. The mathematical correctness of some comments needs attention. For example, it is not quite correct to say that the “curve fitting would have yielded erroneous results”. The curve fitting is just that, curve fitting. It’s not really an error. Then in the Methods, you claim to use this equation but this is inconsistent with the discussion in the Supplemental Methods (where you have an equation with this undefined parameter $E$).\n\nThe least-squares method using a three-parameter sigmoid model. I understand the intuition for this, but when I look through Supplementary File 2, I think there are a lot of cases where this is perhaps not the correct pattern to assume (e.g. straight lines). Moreover, there are some very strange fits, for example, AZD6244:G−361, AZD6244:SK−MM−2, PLX4720:MDA−MB−175−VII. In some cases the curve is always above all of the observations. Perhaps this is because the measurements of viability > 100%? In your model you have removed the $E_0$ from Barretina et al. for different reasons.\n\nSupp Methods - Fitting of drug dose-response curves\nI’m a bit lost with your choice of notation here. For example, you define y as an equation, not a function, and then write y(0) which I assume is supposed to be something like y(x). 
Ok but then you have y=0 and y = y(0) = 1. I’m not sure this is correct mathematically. (I think you mean to say that y(x) = … *where* y(0) = 1.)\n\n“viability is reduced to half … concentration of the drug”… so the “Top”, E_\\infinity … I find this a bit wordy.\n“The dose response equation now becomes …”\nSo I deduce that E is the new parameter?!?! Where has E_\\infinity gone?!?!\n\n(But then down below E is constrained to be in [0,1] and seems to be related to the fitness of neoplastic cells. I’m not sure I understand this.)\nThere is no derivation of this formula whatsoever. In fact, I don’t see how this could be correct any longer. Couldn’t this be expressed as a mixture of two cell types, and y would be then a sort of weighted average?\nI really don’t see how this was derived. This is a very central part of your paper (since the manuscript is measuring agreement) and therefore it needs to be bulletproof.\nPlease define “extant drug”. Also HS is allowed to vary apparently but I don’t see where it is then optimized in your analysis later on. This is confusing.\n\nConsistency of Drug Sensitivity Data\nIs the ABC method standard? Are there citations for this? You should probably define properly what you mean by the “intersection of the concentration range”. Elsewhere it seems that you are referring to this as “common concentration range” e.g. SFig 2. More generally, isn’t this a sort of (non-statistical) version of the Kolmogorov-Smirnov test?\nFigure 5A: Actually there are many such cases in your Supplementary Figures. Doesn’t this just mean that the range of concentrations is not sufficient in both datasets?\npg 7 “We then computed the median …” I don’t understand what your distance metric is here. If I understand correctly, you computed the ABC for each pair of drugs in the GDSC dataset. From that I can imagine a distance matrix D where D_ij is the ABC between drugs i and j. But you said you take the median ABC? median over what? cell lines? repeats? 
Whatever the case, are you sure it’s a distance metric?! Is it really true that distances derived from ABCs are metrics? I think this should be shown in the Supplemental Methods. Also as a minor comment, the caption in Supp Figure 3 says that you are using the mean ABC value but elsewhere it says median.\np8. I am not sure what the significance of Supplementary Figure 3 is: are any of these clusters significant? Why are two drugs coloured red in panel A? On page 8 it is claimed that the samples split by hospital (MGH vs WTSI) but I don’t see how this is represented in Supplemental Figure 3.\n\npg 8. I have a very hard time estimating the significance of a statement like “…3 out of 15 common drugs clustered tightly”. I am not sure what tight means here. When I look at Supp Figure 3 there has been no effort to annotate the clusters with their reproducibility e.g. pvclust or measure their significance in some other way. When I look at the figure I think there appears to be a lot of co-clustering of the drugs, at least given that the median ABC across a diverse collection of cell lines might not be such a great “distance” measure.\n\npg 9. What do you mean by “highly targeted therapies”?\npg 9. paragraph “Although the ABC values …” I think the ABC is interesting but it takes a very prominent role in your paper when there are other standard techniques already like AUC and IC_50. Perhaps the manuscript should have comments about why you have chosen this approach that is not standard. Also I think you would need to make precise what the differences are between how GDSC and CCLE computed the AUC and IC_50 that are different from how PharmacoGX does. This is a very central concept in your comparison so it would have to have a very solid definition and analysis. Supplemental Figure 4 suggests in a round-about way that the only difference is in the number of cell lines (figure caption). This is a bit confusing.\nAgain, I am not sure what “Dataset 2” refers to. 
Perhaps the manuscript would benefit from the addition of some interpretation as to what you believe Supp Figure 4 means.\nI don’t understand the definition of your three classes of drugs (no effect (AUC > 0.2); narrow effect AUC \\leq 0.13 or broad effect AUC > 0.13). I don’t see how this definition clearly delineates between “no effect” and “narrow effect”.\n\nThe bottom paragraph of the first column is one sentence that spans 8.5 lines. It references 2 main figures of the paper and 5 supplementary figures. To be honest, this is very frustrating. I have gone through the Supplemental Methods very closely and I don’t see anywhere where the authors have distinguished between “recomputed AUC” and “AUC computed based on the common concentration range”. Then “IC_50 (figure figure) values” ??\n\nI’m not sure what to interpret re: Figure 7 for example. To me it looks like there is excellent agreement except for perhaps the first row. Only paclitaxel fits into this “cytotoxic drug” category but for the life of me, I don’t see where this is defined. The authors just defined three types of drugs (no effect, narrow effect and broad effect) but that’s not what they are using here. I don’t understand this. To me it simply seems to be that at low AUCs there is high variance in the last 3 distributions of the first line (17-AAG, PD-0325901 and AZD6244), but actually they look like they pretty well agree at higher AUCs. I’m not sure what that means.\n\n“and calculated the consistency of drug sensitivity data between studies using all common cases and only those that the data suggested were sensitive in at least one study.”\nMaybe a table would help, especially if each of these different objects were properly defined in the Methods/Supp Methods.\n\n“Given that no single metric can capture all forms of consistency, …” So you add three more. I don’t see the point here. Why these three? and how is something like Pearson \\rho applied here. What is the vector? 
I would guess that Supp Figures should show the distribution of correlations for all three distributions so that we can look at the different moments of these distributions (e.g. skew). In Figure 8, there is a use of a * but how were these p-values estimated? Are these empirical estimations of the p-value?!\n\nI am totally confused here. You say in Figure 8 that panel A is “full data”. But panel B is “sensitive cell lines”. Where is this defined? The parentheses beside this in the figure caption? But why did you introduce these “broad, narrow, no effect” definitions only to redefine something else here?\nI’m not sure I understand Supplemental Figure 11. Is this just all probe groups for the Affy arrays, or how were features chosen? What is an “RNA-seq expression value”? How is this formed? rpm? Most importantly, I just don’t know what the message is here, and if there is any statistics to support that statement.\nI’m not sure I understand Figure 9 or what the take home message should be. I have a hard time understanding the labels along the x-axis in these figures. I just don’t really know statistically how one can conclude that gene expression is more “consistent” than the drug sensitivity values. There could be a million things going on in those arrays. There are so many more data points and you have literally a hundred thousand probes that probably don’t have an IQR > 1.5 on those arrays that “pump up” the correlation values, I would guess. What does this analysis mean?",
"responses": [
{
"c_id": "2875",
"date": "25 Jul 2017",
"name": "Benjamin Haibe-Kains",
"role": "Author Response",
"response": "We thank the reviewer for his constructive comments. We too believe it is important for the community to be aware of the challenges for biomarker discovery stemming from the lack of consistency across large-scale pharmacogenomic datasets. We have addressed most of the reviewer’s comments, as detailed below.\n\npg5: What is “Dataset 1”? The link here doesn’t lead anywhere that I can tell.\n\nWe have updated the manuscript to add a description of each “Dataset”.\n\nFigure 3. I’m not really sure what the value is in plotting the density functions for the mismatched and matched cell lines. First, wouldn’t one density function suffice with a threshold I guess? Second, do you really need it at all?\n\nThe representation of both mismatched and matched cell line density functions serves to illustrate two main points: to give context to the concordance scores, and to show that the mismatched cell lines have a concordance score that is distinct and separate from the matched cell lines. By representing only a single density function, a reader may not be able to appreciate the bimodal nature of matched/mismatched concordances, and that the distance between the two functions is large enough to allow for a robust classification scheme.\n\nIn the Methods “Cell line identity….”, it is stated that 66 samples fell below threshold with a reference to the Supplementary methods. However I don’t see anything in the supplementary methods that discusses this. Moreover in the text, it seems that you threw 8 cases away. This is confusing.\n\nWe analyzed the SNP array profiles of all 973 and 1190 CEL files available for GDSC and CCLE respectively. From these samples 66 and 5 with low quality SNP arrays in GDSC and CCLE have been removed from the SNP fingerprinting pipeline. 
We continued the SNP fingerprinting pipeline with 503 cell lines with high quality SNP profiles available in common between CCLE and GDSC. We compared the genotype concordance score for 503 out of 698 cell lines in common between CCLE and GDSC. We confirm that we removed 8 of 503 cell lines from analyses because their genotype concordance scores fell below the 0.8 threshold, as well as within the range of genotype concordances for cell lines with discordant genotypes. As such, we concluded that despite having the same annotations, these cell lines may have been contaminated or mislabelled in one of the two studies and further analysis on drug sensitivity cannot be compared between them. We have reported the quality scores and concordance scores in Dataset 1.\n\npg 5. I’m not sure what you mean by “remain stable or decrease monotonically”? Do you just mean “monotonically non-increasing”?\n\nThat is correct. We changed the manuscript accordingly.\n\nPlease see comment regarding “Filtering of drug dose-response curves” from Supp Methods below. I think this really needs to be reworked, and I have to trust you guys here that you are doing the right thing. “as exemplified…” depicted?\n\nWe agree and updated the manuscript accordingly.\n\nIn Figure 4, is it possible to relate this back to the choice of \\rho, \\epsilon and 2 \\cdot \\epsilon from the Supp Methods, or perhaps integrate a version of this figure (but annotated) into the Supp Methods.\n\nFigure 4 legend is updated with the explanation of why these curves are identified as noisy.\n\nIn panel A, the grey area is a bit non-intuitive, no? I would say that post 0.03, it’s looking pretty good, and it’s not measured in GDSC after that. However the first points are off.\n\nIn this panel the problematic curve is the CCLE curve. While the curve is monotonically decreasing for just 62% of points (5 out of 8), it is expected to be for at least 80% (the value considered for the \\rho parameter) of points. 
However, the GDSC curve is consistently decreasing monotonically for all points except for one. So we think it is a good example of how the constraints we considered are useful in identifying noisy curves like the CCLE curve in this panel.\n\nI don’t know what \\epsilon or \\rho are so it’s hard to relate what is depicted in Figure 4 back to your model.\n\nFigure 4 legend is updated with the default values of \\epsilon and \\rho which have been used to flag noisy curves in our study.\n\nWhen I look through Supplemental Figure 1 (all the excluded comparisons), it seems like your criterion for excluding a comparison boils down to cases where at least one of the curves has high variance, and the cutoff 2\\epsilon I think is a constant independent of the distribution of points for either curve. I don’t see in your equations how you encode that the sequence is monotonically non-increasing, or how “order” along the left to right sweep is incorporated.\n\nBy definition, we expect to see two types of drug response curves. When the cell is resistant to the drug it is expected that the viability fluctuates slightly around 100%, and when the cell is sensitive a monotonically decreasing pattern is expected. However, noise is unavoidable in these experiments. So we assumed that the viability of each point on the curve may exceed the viability of its immediate predecessor by at most \\epsilon. To filter the largely noisy experiments and keep the slightly noisy ones at the same time, we consider this constraint to be true for the majority of points, the fraction defined by \\rho. Applying these simple constraints will result in omitting all the curves in which viability is increasing monotonically or fluctuating widely.\n\nSupp Methods Filtering of drug dose-response curves\n\ni +1^{st} -> (i+1)^{st}\n\nThanks for pointing this out, we corrected this in the revised manuscript.\n\nIt seems like there is a formatting problem here. 
In my pdf there is something like a squiggle after “in some large fraction ??? of the cases” in (1). I guess that’s supposed to be \\rho right?\n\nThanks for pointing it out. You are right and we corrected this.\n\nI’m not sure I understand this sentence “Our quality control …” Are you saying that \\Delta_{i, i+1} < \\epsilon in some fraction \\phi of the cases?\n\nYou are correct and more explanation is presented in our previous comment.\n\nequation #2 below: I also have never seen set notation such as \\{ \\Delta_{i,i+1} | \\Delta_{i,i+1} < \\epsilon \\}. Are you trying to say “given all the \\Deltas that are less than \\epsilon”? So the vertical | means cardinality here right? But then the denominator has the cardinality of a value, or do you mean absolute value?\n\nVertical bars are used as cardinality notation in both numerator and denominator of that equation.\n\nWhat is \\rho? It’s undefined. “Unfortunately …” The English is a bit rough - could be rephrased in terms of specificity and sensitivity, I guess.\n\nWe updated the manuscript accordingly.\n\nI don’t understand the significance of the sentence “Consider, for instance, …” But in the main text, you say that it remains stable or decreases monotonically. Here it increases monotonically for many successive points, so this violates your model, no?\n\nApplying only equation (1) will not filter all the noisy cases and the curve explained in this sentence is one of those cases. Hence we also applied equations (2) and (3) to filter these remaining noisy curves. We clarified this in the revised manuscript.\n\nI think that this subsection wants simply to spell out mathematically the thresholds and also provide some rationale for the parameters. I think the text doesn’t really do a good job of establishing this rationale and needs work. Perhaps define the parameters precisely and then phrase the exposition in standard terms, e.g. specificity and sensitivity for different \\epsilon, etc.\n\nWe thank the reviewer for his suggestion. 
We improved the clarity of our equations and clearly state the parameters we used. We agree that the selection of the parameter values is arbitrary, although reasonable. It would be possible to create a set of manually curated curves as a gold standard set and tune our parameters accordingly. Although the idea is appealing, it would require a large set of curators as manual classification tends to be unstable too. Such an analysis is definitely of interest and we will pursue this in future studies.\n\nYour equation (2) is inconsistent. In the text you specify \\Sigma_{\\forall i,j} \\Delta_{i,j} but below your criterion seems to change to (the correct) $i < j$. It is also sufficient to write \\Sigma_{i < j} \\Delta_{i,j} and avoid the double summation.\n\nCorrected.\n\nYou should probably define D_i in the text and not make the reader deduce it from the figure below.\n\nCorrected.\n\nThe comments here w.r.t. the Supplementary Methods also apply to the associated subsection of the Methods. The mathematical correctness of some comments needs attention. For example, it is not quite correct to say that the “curve fitting would have yielded erroneous results”. The curve fitting is just that, curve fitting. It’s not really an error. Then in the Methods, you claim to use this equation but this is inconsistent with the discussion in the Supplemental Methods (where you have an equation with this undefined parameter $E$).\n\nWe agree with the reviewer and updated the manuscript accordingly.\n\nThe least-squares method using a three-parameter sigmoid model. I understand the intuition for this, but when I look through Supplementary File 2, I think there are a lot of cases where this is perhaps not the correct pattern to assume (e.g. straight lines). Moreover, there are some very strange fits, for example, AZD6244:G−361, AZD6244:SK−MM−2, PLX4720:MDA−MB−175−VII. In some cases the curve is always above all of the observations. Perhaps this is because the measurements of viability > 100%? 
In your model you have removed the $E_0$ from Barretina et al. for different reasons. Data points where the measured viability exceeds 100% always lie above their respective curves of best fit, since the functional form of the equation forces predicted viability to lie between 0 and 100% for all drug concentrations. The three-parameter sigmoid model’s intuition and flexibility make it an attractive choice for the majority of cases, and for ease and uniformity of analysis, we felt it prudent to choose the same model to fit all curves, even if it may not have fit well in a few anomalous cases. Supp Methods - Fitting of drug dose-response curves: I’m a bit lost with your choice of notation here. For example, you define y as an equation, not a function, and then write y(0), which I assume is supposed to be something like y(x). Ok, but then you have y = 0 and y = y(0) = 1. I’m not sure this is correct mathematically. (I think you mean to say that y(x) = … *where* y(0) = 1.) “viability is reduced to half … concentration of the drug” … so the “Top”, E_\\infinity … I find this a bit wordy. “The dose response equation now becomes …” So I deduce that E is the new parameter?!?! Where has E_\\infinity gone?!?! (But then down below E is constrained to be in [0,1] and seems to be related to the fitness of neoplastic cells. I’m not sure I understand this.) There is no derivation of this formula whatsoever. In fact, I don’t see how this could be correct any longer. Couldn’t this be expressed as a mixture of two cell types, and y would then be a sort of weighted average? I really don’t see how this was derived. This is a very central part of your paper (since the manuscript is measuring agreement) and therefore it needs to be bulletproof. Please define “extant drug”. Also HS is allowed to vary, apparently, but I don’t see where it is then optimized in your analysis later on. This is confusing. We thank the reviewer for his comments and we apologise for the lack of clarity of our Supplementary Methods. 
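The three-parameter sigmoid under discussion (viability starting at 1 at zero dose and falling toward E_infinity, with an EC50 and Hill slope HS) can be sketched as follows. This is a hypothetical Python illustration on synthetic data, not the authors' PharmacoGx implementation (which is in R); the parameter names are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def viability(conc, ec50, hs, e_inf):
    # Three-parameter sigmoid: y(0) = 1 and y -> e_inf as conc -> infinity,
    # so predicted viability is always between e_inf and 1 (0% and 100%).
    return e_inf + (1.0 - e_inf) / (1.0 + (conc / ec50) ** hs)

# synthetic, noiseless dose-response observations (fraction of control)
conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])
obs = viability(conc, ec50=0.5, hs=1.2, e_inf=0.1)

fitted, _ = curve_fit(viability, conc, obs,
                      p0=[1.0, 1.0, 0.5],
                      bounds=([1e-6, 0.1, 0.0], [1e3, 10.0, 1.0]))
ec50, hs, e_inf = fitted
```

By construction the fitted curve stays between e_inf and 1, which is why measurements above 100% viability always sit above the fit, as noted in the response above.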
We have now rewritten this section to improve clarity and address all the reviewer’s comments. Consistency of Drug Sensitivity Data: Is the ABC method standard? Are there citations for this? You should probably define properly what you mean by the “insertion of the concentration range”. Elsewhere it seems that you are referring to this as “common concentration range”, e.g. SFig 2. More generally, isn’t this a sort of (non-statistical) version of the Kolmogorov-Smirnov test? We created and used the ABC method as a convenient and intuitive way of quantifying the agreement of analogous dose-response curves in different datasets over the intersection of the concentration ranges tested by them (henceforth referred to as their “common concentration range”). This method is inspired by two recent publications in which the authors restricted the analysis to the common concentration range between datasets (Pozdeyev et al, Oncotarget 2016) and compared two curves directly (Yadav et al, Scientific Reports 2014). While ABC does have some similarities to Kolmogorov-Smirnov, it evaluates the area between curves rather than the maximum vertical linear distance between them. Furthermore, it takes into account the behaviour of the fitted dose-response curves over their common concentration range only, rather than across their entire domains. Since the behaviour of fitted dose-response curves at concentrations far outside the concentration ranges over which they were fitted tends not to be robust to noise, we felt that ABC was a more appropriate test than Kolmogorov-Smirnov for assessing accordance of dose-response curves in this study. We updated the manuscript with these references and clarifications. Figure 5A: Actually there are many such cases in your Supplementary Figures. 
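The ABC measure as described (area between two fitted curves, evaluated only over the intersection of their tested concentration ranges) can be sketched numerically. This is a hypothetical Python illustration, not the PharmacoGx code; it approximates the normalised area as the mean absolute gap on a uniform log-dose grid.

```python
import numpy as np

def abc(curve1, curve2, range1, range2, n=512):
    # Area Between Curves over the common (intersecting) concentration
    # range, normalised by that range: 0 means the curves coincide over
    # the shared doses; 1 means maximal disagreement (viability 1 vs 0).
    lo = max(range1[0], range2[0])
    hi = min(range1[1], range2[1])
    if lo >= hi:
        return float("nan")  # no common concentration range to compare
    x = np.logspace(np.log10(lo), np.log10(hi), n)  # uniform in log-dose
    return float(np.mean(np.abs(curve1(x) - curve2(x))))

same = abc(lambda x: 1 / (1 + x), lambda x: 1 / (1 + x), (0.01, 10), (0.1, 100))
worst = abc(lambda x: np.ones_like(x), lambda x: np.zeros_like(x), (0.01, 10), (0.01, 10))
```

Identical curves give 0, a completely killed versus a completely unaffected cell line gives 1, and non-overlapping ranges make the comparison impossible.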
Doesn’t this just mean that the range of concentrations is not sufficient in both datasets? Given the limited concentration range tested in high-throughput in vitro drug screening studies, such as GDSC and CCLE, it is not possible to rule out that a drug yielding no effect on cell viability could actually yield a substantial effect at a higher dose. However, these higher doses are likely to be clinically irrelevant. pg 7 “We then computed the median …” I don’t understand what your distance metric is here. If I understand correctly, you computed the ABC for each pair of drugs in the GDSC dataset. From that I can imagine a distance matrix D where D_ij is the ABC between drugs i and j. But you said you take the median ABC? Median over what? Cell lines? Repeats? Whatever the case, are you sure it’s a distance metric?! Is it really true that distances derived from ABCs are metrics? I think this should be shown in the Supplemental Methods. The ABC is computed for each cell line and the median of the ABC values was used as a measure of “distance” between two drugs. Also, as a minor comment, the caption in Supp Figure 3 says that you are using the mean ABC value but elsewhere it says median. The caption has been corrected to read ‘median’ rather than ‘mean’. p8. I am not sure what the significance of Supplementary Figure 3 is: are any of these clusters significant? Why are two drugs coloured red in panel A? On page 8 it is claimed that the samples split by hospital (MGH vs WTSI) but I don’t see how this is represented in Supplemental Figure 3. As described in the manuscript, GDSC drug sensitivity experiments have been performed in two centers (MGH and WTSI) separately. The only drug that has been tested by both centres is AZD6482 (the ones in red in panel A of Supplementary Figure 3). The aim of that figure is to illustrate how well the biological replicates have been clustered together. 
However, the other drugs are not expected to cluster according to their corresponding center; thus there is no such labeling in this figure. pg 8. I have a very hard time estimating the significance of a statement like “…3 out of 15 common drugs clustered tightly”. I am not sure what tight means here. When I look at Supp Figure 3 there has been no effort to annotate the clusters with their reproducibility (e.g. pvclust) or measure their significance in some other way. When I look at the figure I think there appears to be a lot of co-clustering of the drugs, at least given that the median ABC across a diverse collection of cell lines might not be such a great “distance” measure. We refer to the closest neighbor for each drug. We agree with the reviewer that our statement should be more quantitative. We therefore compared the ABC values between common drugs and different drugs and observed a significant difference (one-sided Wilcoxon test p-value = 0.004). We agree that median ABC might not be the best distance measure, which is why we updated Supplementary Figure 3 with other distance measures for completeness. pg 9. What do you mean by “highly targeted therapies”? It meant drugs for which there are a few sensitive cell lines in the CCLE and GDSC (narrow effect). However, we changed it to “targeted” to avoid any confusion. pg 9. paragraph “Although the ABC values …” I think the ABC is interesting but it takes a very prominent role in your paper when there are other standard techniques already, like AUC and IC_50. Perhaps the manuscript should have comments about why you have chosen this approach that is not standard. Also I think you would need to make precise what the differences are between how GDSC and CCLE computed the AUC and IC_50 that are different from how PharmacoGx does. GDSC and CCLE fit a different family of curves to their dose-response data, as described in Haibe-Kains et al, Nature 2013. 
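The replicate check discussed here (biological replicates should end up as each other's nearest neighbours under the median-ABC "distance") can be illustrated with standard hierarchical clustering. The matrix below is made up, and, per the reviewer's point, average-linkage clustering only requires a dissimilarity, not a true metric:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

# Hypothetical symmetric matrix of median-ABC dissimilarities between four
# drugs, where drugs 0/1 and 2/3 behave like biological replicates.
med_abc = np.array([
    [0.00, 0.05, 0.40, 0.45],
    [0.05, 0.00, 0.42, 0.44],
    [0.40, 0.42, 0.00, 0.06],
    [0.45, 0.44, 0.06, 0.00],
])

# condense the square matrix to the pairwise vector scipy expects,
# then cut the average-linkage tree into two clusters
z = linkage(squareform(med_abc), method="average")
labels = fcluster(z, t=2, criterion="maxclust")
```

The two replicate pairs merge first (at 0.05 and 0.06) and only then join each other, so cutting the tree at two clusters recovers the pairs.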
To eliminate this source of heterogeneity, we fitted the same three-parameter model to all the CCLE and GDSC curves, as implemented in PharmacoGx. Once the curve is fitted, GDSC, CCLE, and PharmacoGx agree on how to calculate its AUC and IC_50. This is a very central concept in your comparison so it would have to have a very solid definition and analysis. Supplemental Figure 4 suggests in a round-about way that the only difference is in the number of cell lines (figure caption). This is a bit confusing. Again, I am not sure what “Dataset 2” refers to. Perhaps the manuscript would benefit from the addition of some interpretation as to what you believe Supp Figure 4 means. We have updated the manuscript with a clear interpretation of Suppl Figure 4, which shows that drugs listed as targeted therapies exhibit less variation (as estimated by the median absolute deviation) in drug sensitivity (AUC) than cytotoxic therapies. Although expected, these results allowed us to define a cutoff for MAD(AUC) to classify drugs into broad vs narrow effect, as described in the manuscript. We have also updated the manuscript to add a description of each “Dataset”, a denomination required by F1000Research formatting guidelines. I don’t understand the definition of your three classes of drugs (no effect (AUC > 0.2); narrow effect AUC \\leq 0.13 or broad effect AUC > 0.13). I don’t see how this definition clearly delineates between “no effect” and “narrow effect”. We apologize for the confusion. We corrected the manuscript with the following definitions. Drugs with “no effect”: all AUC values < 0.2 (no sensitive cell lines); “narrow effect”: MAD(AUC) <= 0.13 (see Suppl Figure 4); “broad effect”: MAD(AUC) > 0.13 (see Supp Figure 4). The bottom paragraph of the first column is one sentence that spans 8.5 lines. It references 2 main figures of the paper and 5 supplementary figures. To be honest, this is very frustrating. 
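The corrected definitions just stated ("no effect": all AUC < 0.2; "narrow effect": MAD(AUC) <= 0.13; "broad effect": MAD(AUC) > 0.13) translate directly into code. A Python sketch; the function name is hypothetical and the cutoffs are those given in the response above:

```python
import numpy as np

def classify_drug(aucs, sens_cutoff=0.2, mad_cutoff=0.13):
    # "no effect": no cell line reaches the sensitivity cutoff; otherwise
    # split narrow vs broad by the median absolute deviation of AUC
    # across cell lines, with the cutoffs stated in the response above.
    aucs = np.asarray(aucs, dtype=float)
    if np.all(aucs < sens_cutoff):
        return "no effect"
    mad = np.median(np.abs(aucs - np.median(aucs)))
    return "narrow effect" if mad <= mad_cutoff else "broad effect"
```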
I have gone through the Supplemental Methods very closely and I don’t see anywhere where the authors have distinguished between “recomputed AUC” and “AUC computed based on the common concentration range”. Then “IC_50 (figure figure) values” ?? We have now clearly stated these definitions in the manuscript. Recomputed AUC and AUC computed based on the common concentration range are both computed by our PharmacoGx package by fitting the sigmoid model described in the Supplemental Methods. The only difference between these metrics is that the former is computed over the whole concentration range for each study while the latter is computed over the common concentration range between CCLE and GDSC. We updated the manuscript to reflect the fact that recomputed IC_50 values have been used in Supplementary Figure 8. Recomputed IC_50 values are inferred from the sigmoid model fitted to the data by means of the PharmacoGx package. I’m not sure what to interpret re: Figure 7, for example. To me it looks like there is excellent agreement except for perhaps the first row. Only paclitaxel fits into this “cytotoxic drug” category but for the life of me, I don’t see where this is defined. The authors just defined three types of drugs (no effect, narrow effect and broad effect) but that’s not what they are using here. I don’t understand this. To me it simply seems that at low AUCs there is high variance in the last 3 distributions of the first line (17-AAG, PD-0325901 and AZD6244), but actually they look like they pretty well agree at higher AUCs. We agree with the reviewer that we have not been consistent in our definition of drugs with no, narrow and broad effect. This is now fixed in the updated manuscript. In Figure 7 (and all other figures) we have ordered the drugs by their “status” (no, narrow and broad effect). 
For ease of interpretation, we also chose to color each AUC based on a standard cutoff for sensitivity of AUC > 0.2 (and therefore cell lines with AUC <= 0.2 are called “insensitive”). Although paclitaxel is the only drug that is referred to as cytotoxic in the literature, we observed that 17-AAG, PD-0325901, and AZD6244 decrease cell viability for a large number of cell lines. As their MAD(AUC) > 0.13 (Supp Figure 4), we classified these drugs as “broad effect”. In this case, the consistency of drug sensitivity data (AUC) seems to be poor, with CCLE having many more sensitive cell lines than GDSC. Drugs with narrow effect (MAD(AUC) <= 0.13) (2 middle rows) yield better consistency for some drugs (e.g., crizotinib, PLX4720, lapatinib) but there are still cell lines with AUC > 0.2 (“sensitive”) that are far off the diagonal. The last row includes all the drugs with “no effect”, i.e., the vast majority of cell lines yielded AUC <= 0.2, where no consistency is expected due to the low signal / noise ratio. I’m not sure what that means: “and calculated the consistency of drug sensitivity data between studies using all common cases and only those that the data suggested were sensitive in at least one study.” The consistency of drug sensitivity data was assessed twice. First, using all the cell lines in common between the two studies. Second, using only the cell lines that are “sensitive” (AUC > 0.2) in at least one dataset. The second analysis aims to address a criticism we received from the community that only sensitive cell lines should be compared. We rephrased this part in the updated version of the manuscript. Maybe a table would help, especially if each of these different objects were properly defined in the Methods/Supp Methods. We enriched the “acronym table” in the Supplementary Methods to add definitions of the additional objects and concepts used in our paper. “Given that no single metric can capture all forms of consistency, …” So you add three more. 
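The two-pass analysis described here (first all common cell lines, then only cell lines sensitive in at least one dataset) amounts to correlating per-drug AUC vectors twice. A hypothetical Python sketch with made-up AUC values; Pearson and Spearman are the correlation measures named in the discussion:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def consistency(auc_a, auc_b, sens_cutoff=0.2):
    # Pass 1: correlate AUC across all common cell lines.
    # Pass 2: keep only cell lines sensitive (AUC > cutoff) in >= 1 dataset.
    a, b = np.asarray(auc_a, dtype=float), np.asarray(auc_b, dtype=float)
    keep = (a > sens_cutoff) | (b > sens_cutoff)
    return {
        "pearson_all": pearsonr(a, b)[0],
        "spearman_all": spearmanr(a, b)[0],
        "pearson_sensitive": pearsonr(a[keep], b[keep])[0],
        "spearman_sensitive": spearmanr(a[keep], b[keep])[0],
    }

# made-up AUC vectors for one drug across five common cell lines
res = consistency([0.05, 0.10, 0.30, 0.50, 0.70],
                  [0.08, 0.12, 0.25, 0.55, 0.65])
```

Restricting to sensitive lines discards the noisy all-insensitive pairs that otherwise dominate the vectors for drugs with "no effect".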
I don’t see the point here. Why these three? And how is something like Pearson \\rho applied here? What is the vector? I would guess that Supp Figures should show the distribution of correlations for all three distributions so that we can look at the different moments of these distributions (e.g. skew). In Figure 8, there is a use of a * but how were these p-values estimated? Are these empirical estimations of the p-value?! In the absence of a gold-standard measure of consistency for drug sensitivity data, we decided to include other measures that could be used as alternatives to the Pearson and Spearman correlations already used in previous publications. The consistency measures are computed across cell lines. For each drug, a vector of drug sensitivity measurements (AUC, IC_50, ...) is extracted from GDSC and CCLE and then compared. P-values were computed analytically, as described in the updated Supplemental Methods. We updated the caption of Figure 8 to state these important points. I am totally confused here. You say in Figure 8 that panel A is “full data”. But panel B is “sensitive cell lines”. Where is this defined? The parentheses beside this in the figure caption? But why did you introduce these “broad, narrow, no effect” definitions only to redefine something else here? We apologise for the lack of definition and inconsistency. We have updated the figure to use a consistent classification of drugs and now clearly define the restriction to “sensitive data” (now renamed as “sensitive cell lines” for clarity). I’m not sure I understand Supplemental Figure 11. Is this just all probe groups for the Affy arrays, or how were features chosen? What is an “RNA-seq expression value”? How is this formed? rpm? Most importantly, I just don’t know what the message is here, and if there are any statistics to support that statement. Brainarray probe gene mapping CDF files have been used to quantify the expression value for each gene represented on the Affymetrix arrays. 
FPKM values for genes annotated in the Gencode v19 annotation were normalized by transforming them to log2(FPKM+1). The aim of Supplementary Figure 11 is to show the distribution of expression data for each platform and how well a mixture of two Gaussians could help define a cutoff to binarize the data. The caption has been updated to clearly state how the cutoffs have been determined. I’m not sure I understand Figure 9 or what the take-home message should be. I have a hard time understanding the labels along the x-axis in these figures. I just don’t really know statistically how one can conclude that gene expression is more “consistent” than the drug sensitivity values. There could be a million things going on in those arrays. There are so many more datapoints and you have literally a hundred thousand probes that probably don’t have an IQR > 1.5 on those arrays that “pump up” the correlation values, I would guess. What does this analysis mean? We have now clearly defined the labels of the x-axis in the caption. We agree with the reviewer that sensitivity data and gene expression data have very different properties. However, as we looked at univariate biomarkers, one gene at a time, we sought to assess whether the expression of each individual gene suffers from the same level of inconsistency as drug sensitivity data across cell lines. We have added a word of caution in the text to reflect on the limitations of this analysis."
}
]
},
{
"id": "17399",
"date": "21 Dec 2016",
"name": "Paul T. Spellman",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nSafikhani et al. have updated their previous analysis of two of the largest systematic drug screening projects linked to genomics data. The previous findings indicated that there is a lack of concordance between the two datasets that makes finding biomarkers of response difficult. Updating these studies with a wider array of methods in response to comments about the original article leaves largely the same result. Drug sensitivity profiles show significant variation between the two groups, likely due to differences in assay conditions.\nSafikhani et al. follow this analysis up with a discussion on how to improve the situation, and here I have some significant issues. The base argument is that the differences in platform are creating biases in the results and therefore the platforms need to be standardized. I think this is completely wrong. This makes sense if one platform were known to recapitulate in vivo response more accurately, but that is not true. We do not know if one platform is more physiologically relevant than another, so the lack of standardization actually tells you something: it tells you when a predictor result is robust against biological context and is therefore more likely to work in new biological contexts. 
I would argue we need *more* variability in assays and platforms to broaden the scope of biological systems, not less.\nSimilarly, the statement is made that there is a 75% validation rate for eight drugs but that \"this does not suggest that one can use these studies to find new, reproducible gene-drug associations...\", I actually think it does, but perhaps I am missing a subtlety.\nFinally, I think it is possible to set drug specific thresholds for each dataset. We have done this, I believe successfully, with datasets far smaller.",
"responses": [
{
"c_id": "2874",
"date": "25 Jul 2017",
"name": "Benjamin Haibe-Kains",
"role": "Author Response",
"response": "We thank the reviewer for his constructive comments. We agree with the reviewer that we need to update the discussion to reflect this important point. In the absence of a “gold standard” screening platform, the best biomarkers are likely those that are robust to the use of different assays, as these assays assess different biological aspects of growth inhibition. However, one must clearly distinguish between technical and biological variations. While biological variations might be interesting for biomarker discovery, assay variation must be kept as low as possible. Looking at the replicates performed for AZD6482 in GDSC, we found that drug sensitivity data lack consistency even when the same assay is used (see new Supplementary Figure 2E). In this setting, one cannot claim that the inconsistencies observed between GDSC and CCLE are solely due to differences in the type of assay used for drug screening. Although we agree with the reviewer, we believe there is still work to be done to improve the robustness of each pharmacological assay. We have updated the discussion section of our manuscript accordingly. Similarly, the statement is made that there is a 75% validation rate for eight drugs but that \"this does not suggest that one can use these studies to find new, reproducible gene-drug associations...\", I actually think it does, but perhaps I am missing a subtlety. Although our new analysis revealed reasonable consistency for biomarker discovery for 8 out of 15 drugs, we could not get such a validation rate for the rest of the drugs, where the biomarkers are only significant in one study but not the other. Finally, I think it is possible to set drug specific thresholds for each dataset. We have done this, I believe successfully, with datasets far smaller. Like this reviewer, we too tried to identify drug-specific cutoffs that would allow us to binarize the drug sensitivity data while optimising the consistency across datasets. 
Our best efforts were not successful though, except for Nilotinib (see Safikhani et al, Nature 2016; PMID: 27905430). This is not to say that it is not feasible, but we found it very challenging to define a cutoff within a dataset that would yield good concordance across datasets."
}
]
},
{
"id": "22599",
"date": "10 May 2017",
"name": "David G. Covell",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThe paper under review, 'Revisiting inconsistency in large pharmacogenomics studies' by Zhaleh Safikhani, Petr Smirnov, Mark Freeman, Nehme El-Hachem, Adrian She, Quevedo Rene, Anna Goldenberg, Nicolai J. Birkbak, Christos Hatzis, Leming Shi, Andrew H. Beck, Hugo J.W.L. Aerts, John Quackenbush, Benjamin Haibe-Kains, reports an updated analysis of results from two previously published systematic drug screening projects[1,2]. As explained in their introductory material, this report is motivated in part by the expansion of data from these earlier studies, and as a means to document alternative data analysis strategies that have been proposed for improving the original publication[3].\n\nThe authors address two highly important areas in basic and clinical research: data reproducibility and predictive (gene expression) biomarkers based on drug sensitivity data. The former issue represents a hallmark of basic science research, where results derived from different labs and measurement techniques serve to establish strong confidence in a proposed experimental protocol. The latter issue pertains best to highly confident (e.g. reproducible or consistent) experimental measurements, while data inconsistencies foreshadow a Pandora’s Box of alternatives in the search for the origins of these differences.\n\nWith respect to the issue of data reproducibility, I find no fault in the new manuscript. 
All of the results reported in the original 2013 paper and current paper under review can be obtained from their Supplementary R code. With a bit of diligence and tenacity, sequentially stepping through their R-code will yield the reported figures and tables. Towards that end, the original R-code is tedious, but their addition of an open-source R-package, PharmacoGx, relieves much of the tedium. In fact, the authors must be applauded for making their analysis completely reproducible, a feat rarely achieved with biological results.\n\nNotwithstanding, the results remain largely the same; inconsistencies remain in the drug sensitivity profiles between the GDSC[2] and CCLE[1] groups. Data analysis based on alternative methods appears to be constructed around the arguments proposed in the pair of Brief Communications Arising from the original paper[4,5]. The general idea of this alternative data analysis is based on the limited role of weakly responsive tumor cells, and thus a failure to contribute to meaningful statistics. While segregating the data into three classes (drugs with no observed tumor cell activity, activity in a few tumor cells and activity in a large number of tumor cells) improves the statistics, the differences largely remain.\n\nSpeculations about the differences between the two datasets focus, naturally, on each measurement platform. The current manuscript’s proposal of internal standardization may help identify the origin(s) of these differences, but this alone may not be sufficient. In this regard, I would recommend the Supplementary Information (all 47 pages) from the original article[3]. Specifically, Section 3, Comparison of experimental protocols, and the included Comparative table. The details of this section identify a number of platform differences that may underlie their measurement differences. 
Although not within the scope of the current article, a future study focused on these differences, combined with standardization, rather than looking for answers by segregating the data into three response classes, would be highly informative. An alternative speculation regarding data inconsistencies considers the ‘dated’ possibility that each tumor cell’s drug sensitivity and underlying phenotypic architecture (expression, mutation, snp, etc.) exemplifies a ‘snowflake’ phenomenon. Each tumor cell represents a unique circumstance, which can be modulated by any sort of environmental condition. Thus a drug response, even for the same tumor cell, may exhibit variation. Under these circumstances, the functional pathways, represented by groups of genes and their concordant expressions, become the focus, and derivation of pathway-based scoring schemes may significantly overcome inconsistencies between experimental groups. Clearly an appropriate pathway fitness scoring scheme has yet to be devised.\n\nIn summary, the analysis is sound, the results are clear, and the analyses of inconsistent data, as a means to obtain predictive biomarkers, remains a significant challenge.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": [
{
"c_id": "2862",
"date": "10 Jul 2017",
"name": "Benjamin Haibe-Kains",
"role": "Author Response",
"response": "We thank Dr Covell for his constructive comments regarding our study. We are glad to hear that our PharmacoGx package is useful for reproducing our analysis results. The hope is that our package will enable other research groups to analyze and compare their own data with published large-scale pharmacogenomic datasets. We agree with the reviewer that more investigation is required to better assess the technical vs biological variations for each of the pharmacological assays. Biological variations could then be leveraged, at the pathway level as the reviewer suggested, to define more robust biomarkers."
}
]
}
] | 1
|
https://f1000research.com/articles/5-2333
|
https://f1000research.com/articles/6-852/v1
|
08 Jun 17
|
{
"type": "Opinion Article",
"title": "Pathogenic helminths in the past: Much ado about nothing",
"authors": [
"Christian Mulder"
],
"abstract": "Despite a long tradition of research on the extent to which Romanisation has improved human health, some recent studies suggest that Romanisation in general, and Roman sanitation in particular, may not have made people any healthier, given that in Roman times gastrointestinal parasites were apparently widespread, whilst in the present day such parasites rarely cause diseases. Unfortunately, this novel claim neglects the empirical evidence that worldwide infections in over 1.5 billion people are caused by ubiquitous foodborne nematodes. Therefore, many may wonder whether fossil remains of soil-transmitted helminths have been reported in ancient sanitation infrastructures. Beneficial access to improved sanitation should always be prioritized, so how can historical sanitation efforts ever have been harmful? In this short article, a strong plea for caution is given, asking for an augmented nematological record and showing that there is no evidence against Roman sanitation, either in the past or in the present.",
"keywords": [
"Human diseases",
"Endoparasitic nematodes",
"Roman settlements",
"Fossil eggs"
],
"content": "\n\nIn her Nature feature, Chelsea Wald1 reviewed some of the conclusions by Piers D. Mitchell2 and described the fascinating rise of latrines in Mesopotamia, Greece and the Roman Empire. Both authors tried to point out that most of these sanitation facilities were not doing much for the residents’ health, despite the idea that sophisticated plumbing systems, like those of ancient Rome, may have acted as a kind of control that could benefit even the poor. This debated interpretation was based on the fact that human hosts mainly acquire infective nematodes via the faecal–oral route through the soil, and that unembryonated eggs can remain viable in the soil for 15 years. Helminth preservation seems to be the highest in moist anaerobic environments like latrines3; therefore, even Roman latrines, with continuous flushing and related sediments (coprolites), can become valuable for the reconstruction of past gastrointestinal infections, if evaluated correctly.\n\nAs a matter of fact, water purification will always be one of the most intriguing examples of how public health and societal health are interwoven. Amazing examples come from Roman history, where water and wastewater systems rapidly became pillars for European civilisation. The large-scale introduction by the Romans of fountains into or near public buildings, together with closed aqueducts, can be seen as the very first Water Safety Plan. Interestingly, archaeologists somehow seem to be ideologically motivated to conceptualize diseases and outbreaks in Roman times, despite the thin palynological record from ancient sanitation infrastructures around the Mediterranean Sea1,2 (Figure 1).\n\nBackground map implemented with palaeoparasitological records of roundworms (Ascaris) and whipworms (Trichuris) recovered from a global selection of archaeological sites built between 200 BCE and 500 CE (Common Era)2,4–6,9,10. 
The background map has been adapted from World Health Organization, program on Control of Neglected Tropical Diseases (gamapserver.who.int/mapLibrary/Files/Maps/STH_2011_global.png).\n\nMany pathogens are reported in ancient latrines because they are intrinsically correlated to human settlements, and not to sanitation infrastructures themselves, which are supposed to reduce the risk of contact with outbreak sources. On one hand, it is true that fossil remains of roundworms, whipworms and hookworms (collectively referred to as soil-transmitted helminths) have been reported from ancient sanitation infrastructures. On the other hand, Romans were fully aware of the importance of clean water and efficient sanitation systems. Already during the short reign of Nerva in Rome (96 – 98 CE), Frontinus decided that water from different sources had to be kept separate: clean water was reserved for potable use, intermediate quality water was used for recreation and only poor quality water was sent for irrigation. Thanks to sophisticated hydraulic systems for aqueducts, cisterns, pipes, thermae, baths, fountains and latrines, the capital of the Roman Empire became famous as Roma regina aquarum. Owing to the great care given to their waters for the health and security of the capital, and later of the other cities, these neglected latrines became an open archaeological window not so much on Roman sanitation, but on our civilisation as a whole. Fossilized helminth eggs in dung sediments from latrines are very peculiar tools to reconstruct migration routes, trades, animal domestication, diets, past outbreaks and even urban catastrophes4–6.\n\nNematodes are the most frequently occurring invertebrates. These primitive soil organisms occupy diverse trophic levels in ecological networks and can act either as antagonists for soil-borne pests or be pathogens themselves. 
It can be dangerous to suggest that sanitation may not have made people any healthier, as humans can also get infected with soil nematodes by ingesting unclean vegetables or by contact with infected domestic animals. Along the aforementioned faecal–oral route, behavioural and allometric factors have been put forward in the existing literature7, with host-related factors linked to human body size being prominent. According to host–parasite regression models for mammals7, and assuming an average adult body weight of 62.0 kg (corresponding to a volume of 61,400 cc), each infected human might contain up to 12,300 helminths.\n\nHence, it is not surprising to find helminths in the sanitation systems of ancient settlements, especially if only the palaeoparasitological data for sites at which these pathogens were detected are gathered together. For instance, archaeological records of common-source outbreaks can be collated to support the idea that sanitation facilities historically linked to Romanisation spread helminths, although these cosmopolitan endoparasites are well known to occur during Roman times even around the Pacific Ocean, including the New World in pre-colonial times5,8–10 (Figure 1). Thus, we have to realize that there would have been many more helminth eggs around ancient settlements if these sanitation facilities had not been there in the past. Surprisingly, archaeologists like to invert this basic framework, and suggestive interpretation may be worse than no interpretation at all.\n\nBut even if such pathogens are identified, it remains challenging either to exclude false parasitism (the incidental presence in human faeces of eggs resulting from the consumption of an infected animal9,11) or to determine human outbreaks with certainty (helminth eggs might demonstrate their human origin by circumstantial evidence only3,11). 
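The back-of-envelope burden estimate quoted above can be reproduced in a few lines. Note that the scaling constants below are inferred purely from the figures quoted in the text (62.0 kg ≈ 61,400 cc ≈ up to 12,300 helminths), not taken from the regression model of reference 7 itself:

```python
# Back-of-envelope helminth burden estimate from host body size.
# Constants are inferred from the figures quoted in the text
# (62.0 kg adult ~ 61,400 cc ~ up to 12,300 helminths); they are NOT
# the fitted parameters of the host-parasite regression model itself.

BODY_DENSITY_G_PER_CC = 62_000 / 61_400   # ~1.01 g/cc, as implied by the text
WORMS_PER_CC = 12_300 / 61_400            # ~0.20 helminths per cc of host

def max_helminth_burden(body_weight_kg: float) -> int:
    """Upper-bound helminth count for a host of the given body weight,
    assuming burden scales linearly with body volume."""
    volume_cc = body_weight_kg * 1000 / BODY_DENSITY_G_PER_CC
    return round(volume_cc * WORMS_PER_CC)

print(max_helminth_burden(62.0))  # -> 12300, reproducing the text's estimate
```

The linear scaling is a simplification for illustration; the cited allometric models use power-law, not strictly linear, relationships.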
Allometric rules that express parasites and non-infected animals per square meter12, in tandem with the several possible contamination pathways, will always lead to diseases with a high global burden13. Moreover, a parasitic occurrence can also be related to open water contamination, for instance from livestock grazing in upland areas causing outbreaks downstream. This has nothing to do with any sanitation structure.\n\nOmitting such a relevant weight of evidence in any comparison between archaeological excavations will introduce de facto a strong bias towards false-positive results into palaeoecological meta-analyses. In the future, to avoid interesting but geographically misleading or even statistically speculative conclusions, one of the most intriguing approaches would be the microscopic investigation of soils from archaeological sites either associated with a sanitation infrastructure or lacking such sanitation. In the case of Roman sanitation, thanks to Hadrian’s Wall bordering the northern part of the Roman Empire with all its social infrastructures, including latrines, England (entirely inside the Wall during Roman times) and Scotland (outside the Wall during Roman times) can together provide the perfect study area.\n\nThere are 2.5 billion people still living on Earth without improved sanitation facilities. A correct Big Data mining of all nematological palaeorecords14, combined with an objective interpretation of probably thin circumstantial evidence, will require great care, as the conclusions will have implications for ongoing global control programs relating to helminthiases. On the other hand, as the taxonomic status of Ascaris is contentious15, palaeoecological evidence from archaeological sites in synergy with present-day molecular ecology can become an unexplored avenue to improve current control programs.",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nWald C: The secret history of ancient toilets. Nature. 2016; 533(7604): 456–458. PubMed Abstract | Publisher Full Text\n\nMitchell PD: Human parasites in the Roman World: health consequences of conquering an empire. Parasitology. 2017; 144(1): 48–58. PubMed Abstract | Publisher Full Text\n\nMoore PD: Life seen from a medieval latrine. Nature. 1981; 294: 614. Reference Source\n\nAraújo A, Reinhard KJ, Ferreira LF, et al.: Parasites as probes for prehistoric human migrations? Trends Parasitol. 2008; 24(3): 112–115. PubMed Abstract | Publisher Full Text\n\nReinhard KJ: Archaeoparasitology in North America. Am J Phys Anthropol. 1990; 82(2): 145–163. PubMed Abstract | Publisher Full Text\n\nReinhard KJ, Confalonieri UE, Herrmann B, et al.: Recovery of parasite remains from coprolites and latrines: Aspects of paleoparasitological technique. Homo. 1986; 37(4): 217–239. Reference Source\n\nGeorge-Nascimento M, Munoz G, Marquet PA, et al.: Testing the energetic equivalence rule with helminth endoparasites of vertebrates. Ecol Lett. 2004; 7(7): 527–531. Publisher Full Text\n\nMejia R, Vicuña Y, Broncano N, et al.: A novel, multi-parallel, real-time polymerase chain reaction approach for eight gastrointestinal parasites provides improved diagnostic capabilities to resource-limited at-risk populations. Am J Trop Med Hyg. 2013; 88(6): 1041–1047. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGonçalves ML, Araújo A, Ferreira LF: Human intestinal parasites in the past: new findings and a review. Mem Inst Oswaldo Cruz. 2003; 98(Suppl 1): 103–118. PubMed Abstract | Publisher Full Text\n\nLeles D, Reinhard K, Fugassa MH, et al.: A parasitological paradox: Why is ascarid infection so rare in the prehistoric Americas? J Archaeol Sci. 2010; 37(7): 1510–1520. 
Publisher Full Text\n\nBrinkkemper O, van Haaster H: Eggs of intestinal parasites whipworm (Trichuris) and mawworm (Ascaris): Non-pollen palynomorphs in archaeological samples. Rev Palaeobot Palynol. 2012; 186: 16–21. Publisher Full Text\n\nHechinger RF: Parasites help find universal ecological rules. Proc Natl Acad Sci U S A. 2015; 112(6): 1656–1657. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBethony J, Brooker S, Albonico M, et al.: Soil-transmitted helminth infections: ascariasis, trichuriasis, and hookworm. Lancet. 2006; 367(9521): 1521–1532. PubMed Abstract | Publisher Full Text\n\nDallas T: helminthR: an R interface to the London Natural History Museum’s Host–Parasite Database. Ecography. 2016; 39(4): 391–393. Publisher Full Text\n\nSøe MJ, Kapel CM, Nejsum P: Ascaris from Humans and Pigs Appear to Be Reproductively Isolated Species. PLoS Negl Trop Dis. 2016; 10(9): e0004855. PubMed Abstract | Publisher Full Text | Free Full Text"
}
|
[
{
"id": "23627",
"date": "17 Jul 2017",
"name": "Karl J. Reinhard",
"expertise": [
"Reviewer Expertise Archaeoparasitology",
"palynology",
"paleoparasitology",
"paleonutrition",
"archaeobotany",
"palynology"
],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nChristian Mulder’s “Pathogenic helminths in the past: Much ado about nothing” reveals some basic problems with the research presented by Piers Mitchell in “Human parasites in the Roman World: health consequences of conquering an empire. Parasitology”. The critique recognizes that the conclusions of the study are overblown and points to errors in methods and theory. I read Mitchell’s work and concur with Mulder. Mitchell does not approach the problem from the perspective of the science of archaeology nor the science of parasitology. Mulder suggests that Mitchell did not control for false parasitism through spread of eggs through the environment. In the science of archaeology, as applied to parasitology, concentrations of parasite eggs per ml or gr are calculated by researchers. These calculations document the distribution of eggs through strata within pits and across ancient village landscapes. This leads to statistical identification of transmission points. When these archaeological egg accumulations are verified as fecal deposits via ancillary pollen and seed analysis, then fecal contamination “hot spots” are defined. Dating of these points can lead to very solid information about emergence and control of geohelminths. This has been demonstrated archaeologically by the references of Fisher and Trigg’s work below. 
Secondly, the geohelminth life cycle of ascarids seems to be misunderstood by Mitchell, who asserted that people could have been infected in Roman baths.\n\nI have a paper that has just been published addressing rigor in archaeological parasitology. This highlights the sorts of issues, such as false parasitism, noted by Mulder.\n\nI believe that Mulder is spot on with regard to Mitchell’s assertion. I strongly recommend this work for indexing and I would hope that Mulder includes references to the work by Fisher’s team and Trigg’s team. The work by Trigg is in press and can be obtained via Heather Trigg or Steve Mrozowski.\n\nUseful References:\nFisher et al. (2007)1 Trigg et al. (2017)2 Reinhard (2017)3\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Partly\n\nAre arguments sufficiently supported by evidence from the published literature? Yes\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-852
|
https://f1000research.com/articles/6-1413/v1
|
10 Aug 17
|
{
"type": "Research Article",
"title": "Effects of Schiff base aromatic amino acid derivatives on antioxidant and immune system disturbances in a rat model of aflatoxin B1 induced experimental mycotoxicosis",
"authors": [
"Margarita Malakyan",
"Violeta Ayvazyan",
"Gayane Manukyan",
"Laura Hovsepyan",
"Elina Arakelova",
"Diana Avetyan",
"Hovsep Ghazaryan",
"Ani Melkonyan",
"Roksana Zakharyan",
"Arsen Arakelyan",
"Violeta Ayvazyan",
"Gayane Manukyan",
"Laura Hovsepyan",
"Elina Arakelova",
"Diana Avetyan",
"Hovsep Ghazaryan",
"Ani Melkonyan",
"Roksana Zakharyan"
],
"abstract": "Background: Alflatoxin B1 (AFB1) is the most hepatotoxic and hepatocarcinogenic of the aflatoxins and occurs as a contaminant in a variety of foods. The toxicity of AFB1 has been shown to be associated with a wide range of pathological events, such as enhanced apoptosis and oxidative events. Currently there is no treatment for mycotoxin exposure. The aim of this study was to evaluate the potential ability of picolinyl-L-phenylalaninate (PLP), picolinyl-L-tryptophanate (PLT), and nicotinyl-L-tryptophanate (NLT) Schiff base amino acid derivatives to act against damaging effects of AFB1 using a rat model of mycotoxicosis. For this purpose, a range of markers of immune and antioxidant systems in liver and blood plasma samples, as well as the apoptotic rate in neutrophils and monocytes was assessed. Methods: Mongrel white pubescent rats (with 180-200g b/w) were used in all experiments. Concentration of the markers of immune and antioxidant systems was measured in plasma by ELISA, using commercially available kits according to manufacturers’ instructions. The rate of apoptosis in neutrophils and monocytes was analyzed by flow cytometry. Results: AFB1 induced mycotoxicosis caused significant elevation of malonic dialdehyde contents (plasma and liver: p = 0.0001 compared with untreated rats), the levels of superoxide dismutase (p=0.005), total non-enzymatic water-soluble antioxidants (p = 0.0001), and terminal complement complex (p = 0.021). Moreover, the increased rates of early and late apoptosis in neutrophils and monocytes were observed as well. Treatment with PLP, PLT and NLT were shown to mitigate these effects, though to a different extent. Conclusions: The results obtained in this study clearly demonstrated that chronic AFB1 exposure induced oxidative cell damage, immunosuppression and apoptosis of circulating immune cells. 
The oral administration of Schiff base cyclic amino acid derivatives was capable of minimizing the detrimental effects of mycotoxicosis by possessing multi-mechanistic effects that target AFB1-induced pathological events.",
"keywords": [
"Aflatoxin B1",
"apoptosis",
"Schiff bases amino acid derivatives",
"immune system",
"antioxidant system"
],
"content": "Introduction\n\nAflatoxin B1 (AFB1) is a mycotoxin produced by Aspergillus flavus and related fungi that grow in staple foods, including cereals and nuts, such as corn, rice and peanuts1, especially in areas with appropriate conditions of moisture and heat where these fungi are ubiquitous. AFB1 causes a serious threat for human and animal health and, in extreme cases, lead to death2. The toxicity of AFB1 has been shown to be associated with a wide range of pathological events such as enhanced apoptosis, oxidative events and carcinogenesis3–5. Moreover, AFB1 is the one of mycotoxins that were adopted for the use in bioterrorism6–7.\n\nAFB1 is metabolized into aflatoxin-8,9-epoxide, reactive oxygen species (ROS)8–9, which can react with proteins and DNA to form adducts and cause mutations in the p53 and other genes essential for cell malignant transformation3,10. AFB1 has also been shown to be immunotoxic to animals and is suspected to be immunosuppressive in humans11–14.\n\nCurrently there is no treatment for mycotoxin exposure, except of supporting therapy, such as diet and hydration15. Development of the efficient measures neutralization of toxic effects of mycotoxins and prevention of associated pathological changes is an issue of great importance. The key aspect here is that the potential therapeutics should possess multi-mechanistic actions and simultaneously target a range of pathological processes caused by AFB1.\n\nOur previous studies have demonstrated that Schiff base amino acid derivatives picolinyl-L-phenylalaninate (PLP), picolinyl-L-tryptophanate (PLT) and nicotinyl-L-tryptophanate (NLT) are capable of scavenging free-radicals, elevating the capacities of antioxidant and the immune system in radiation injury, and possessing anticytotoxic, antigenotoxic and antimutagenic properties16,17. Based on these results, we suggest that the above-mentioned multifunctional compounds may have protective effects against mycotoxins. 
This suggestion is also supported by other results indicating that several Schiff base derivatives are capable of decreasing the concentrations of aflatoxin M1 in artificially contaminated raw milk18, and have good neutralization activity against Aspergillus niger19. In addition, L-tryptophan was shown to alleviate aflatoxin-induced chicken growth retardation and immunosuppression20, while phenylalanine may prevent ochratoxin A-induced suppression of the immune response21 and inhibition of protein synthesis in spleen, kidney and liver22.\n\nThis study aimed to evaluate the ability of novel Schiff base cyclic amino acid derivatives to protect against oxidative stress and immunosuppression in an animal model of AFB1 mycotoxicosis.\n\n\nMethods\n\nThe synthesis of PLT was performed as described previously17 by condensation of picolinaldehyde (2-pyridinecarboxaldehyde) and the potassium salt of L-tryptophan in alcohol solution (ethanol or methanol) at a 1:1 molar ratio in the temperature range 5–25°C. A similar procedure was used for the synthesis of NLT, which is a condensation product of nicotinaldehyde (3-pyridinecarboxaldehyde) and L-tryptophan potassium salt, and of PLP, which is a condensation product of picolinaldehyde (2-pyridinecarboxaldehyde) and L-phenylalanine potassium salt (Figure 1).\n\nChemical structure of picolinyl-L-phenylalaninate (A), picolinyl-L-tryptophanate (B) and nicotinyl-L-tryptophanate (C).\n\nMongrel white pubescent male rats (180–200 g; Animal Facility of the Institute of Molecular Biology, National Academy of Sciences, Armenia) were used in all experiments. 
Twelve randomly selected animals were used in each of the following groups: 1) Controls – no treatment; 2) AFB1 mycotoxicosis – rats orally treated with AFB1 mycotoxin for 21 days at a 25μg/kg per day dose level and left for 10 additional days without any treatment; 3) AFB1 mycotoxicosis + Schiff bases – rats orally treated with AFB1 mycotoxin for 21 days at a 25μg/kg dose level, followed by 10-day oral treatment with PLP, PLT or NLT at a 10 mg/kg per day dose level; 4) Schiff-base only – rats received 10-day oral treatment with 10mg/kg PLP, PLT or NLT. Oral treatment with AFB1 and Schiff bases was delivered with water. At the end of the treatment period, animals were euthanized by decapitation.\n\nDuring the treatment period animals were allowed free access to water and food and were kept (maximum 5 animals per cage) in pathogen-free conditions at regular 12-hour day-night cycles. Animal care, handling and use in research were performed according to the international regulations adopted by the Ministry of Health of the Republic of Armenia. The described experimental protocols of the animal studies were considered and approved by the Ethical Committee acting at the Institute of Molecular Biology. All efforts were made to ameliorate any suffering of the animals; animal decapitation was performed in a dedicated room located away from the animal care facility by trained personnel using sharpened guillotines regularly adjusted to ensure proper performance.\n\nFresh trunk blood samples were collected in EDTA-containing tubes during decapitation. Fresh blood aliquots were immediately used for flow cytometry (see below). Plasma was separated by centrifugation (10 minutes at 3000g at 4°C) and stored at -30°C until further analyses.\n\nMalonic dialdehyde (MDA) content, as a marker of the terminal phase of lipid peroxidation, was measured in blood plasma (0.1 mL)23 and liver homogenate. 
Homogenate was obtained by excising 100 mg of liver and pulverizing it in a solution containing 0.3 mL of 40mM Tris-HCl buffer (pH = 7.4), 0.3 mL of 12×10-6 M Mohr’s salt and 0.3 mL of 0.8 mM ascorbic acid24. Lipid peroxidation activity in blood plasma and liver was estimated from the amount of MDA formed, which, on interaction with 0.8 mL of 0.12M thiobarbituric acid, gives a color reaction measured at a wavelength of 535 nm using a UV-752 UV-VIS Spectrophotometer (Shanghai Phenix Optical Scientific Instrument Co. Ltd, China). Optical density was converted to concentration units using Excel 2007. The MDA concentration was calculated using an extinction coefficient and expressed as µMol MDA/g liver tissue or µMol MDA/mL plasma.\n\nThe integral antioxidant activity (AOA) represents the summed capacity of the hydrophilic and lipophilic low-molecular non-enzymatic water-soluble antioxidants (ascorbic acid, glutathione and uric acid) of blood serum. AOA was analyzed by photochemiluminescence detection using a Photochem analyzer and ACW Kit (Analytik Jena AG, Jena, Germany), as per the manufacturer’s instructions. In this assay, the free radicals generated are partially eliminated by reaction with the antioxidants present in the serum samples, and the remaining radicals are quantified by luminescence generation. An ascorbate calibration curve was used to evaluate AOA levels. 
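The optical-density-to-concentration step of the MDA assay follows the Beer–Lambert law. A minimal sketch, assuming the commonly used TBA–MDA extinction coefficient of 1.56 × 10^5 M⁻¹ cm⁻¹ and a 1 cm cuvette path length (neither value is stated in the text), might look like:

```python
# Beer-Lambert conversion of optical density at 535 nm to MDA concentration.
# The extinction coefficient below is the commonly used value for the
# TBA-MDA adduct; the text does not state which coefficient was applied,
# so treat both constants as illustrative assumptions.

EPSILON_M_CM = 1.56e5   # M^-1 cm^-1, TBA-MDA adduct (assumed)
PATH_CM = 1.0           # standard 1 cm cuvette (assumed)

def mda_micromolar(od_535: float, dilution_factor: float = 1.0) -> float:
    """MDA concentration in µM from absorbance at 535 nm (c = A / (eps * l))."""
    molar = od_535 / (EPSILON_M_CM * PATH_CM) * dilution_factor
    return molar * 1e6  # M -> µM
```

For example, an absorbance of 0.156 in an undiluted 1 cm cuvette corresponds to 1.0 µM MDA under these assumptions; any sample dilution is passed in via the hypothetical `dilution_factor` parameter.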
The results were expressed as conventional units equal to mMol ascorbate with equivalent activity.\n\nPlasma levels of circulating immune complexes (CIC) (Rat Circulating Immune Complexes ELISA kit; BlueGene Biotech, Shanghai, China), terminal complement complex (C5b-C9) (Rat Terminal complement complex (C5b-9) ELISA kit; BlueGene Biotech), superoxide dismutase (SOD) (Rat Superoxide Dismutase Copper (SOD) ELISA kit; BlueGene Biotech), and catalase (CAT) (Rat Catalase ELISA kit; BlueGene Biotech) were measured using commercially available ELISA kits, according to manufacturer’s instructions using StatFax-2100 plate reader (Awareness Technology Inc, USA). The detection limit for CICs, C5b9, SOD, CAT was 0.1ng/mL, 0.1pg/mL, 0.1µg/mL, 0.1ng/mL, respectively.\n\n100 μL of whole blood from each studied rat was used to quantify apoptotic rate and percentage of non-viable cells. Erythrocytes were discarded by lysis (ammonium chloride lysis buffer); white blood cells were washed in Annexin-binding buffer and stained with 5 μL of Annexin V–FITC conjugate for 20 minutes, followed by staining with 1 μg/mL of propidium iodide (PI). Apoptotic rate and cell viability were analyzed on a Partec CyFlow Space (Partec, Germany). 10 000 events were collected from each sample. The neutrophil and monocyte populations in peripheral blood were distinguished by forward scatter and side scatter. Gating and determination of early and late apoptotic rates were done by FlowJo vX0.7 software (Tree Star Inc, USA). The positively stained apoptotic cells were counted, and the apoptotic index was calculated as the percentage of apoptotic cells within the total number of cells. Cells that stained only for Annexin V were considered early apoptotic (Annexin V+/PI-), and cells that dually stained for both Annexin V and PI were considered as late apoptosis (Annexin V+/PI+).\n\nData is presented as the mean ± SD, unless otherwise specified. 
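The Annexin V/PI decision rule described for the flow cytometry analysis can be made explicit in a few lines. The gate thresholds and event values below are hypothetical illustrations, not the FlowJo gates actually used:

```python
# Annexin V-FITC / PI double-staining decision rule, as described in the text:
# Annexin V+/PI- -> early apoptotic; Annexin V+/PI+ -> late apoptotic.
# Gate thresholds are hypothetical illustration values, not the FlowJo gates.

def classify_event(annexin: float, pi: float,
                   annexin_gate: float = 1e3, pi_gate: float = 1e3) -> str:
    av, p = annexin > annexin_gate, pi > pi_gate
    if av and not p:
        return "early apoptotic"   # Annexin V+/PI-
    if av and p:
        return "late apoptotic"    # Annexin V+/PI+
    if p:
        return "necrotic"          # Annexin V-/PI+
    return "viable"                # Annexin V-/PI-

def apoptotic_index(events) -> float:
    """Percentage of (early + late) apoptotic cells among (annexin, pi)
    fluorescence pairs - the apoptotic index defined in the text."""
    labels = [classify_event(a, p) for a, p in events]
    apoptotic = sum(label.endswith("apoptotic") for label in labels)
    return 100.0 * apoptotic / len(labels)
```

With four synthetic events, one per quadrant, `apoptotic_index` returns 50.0, since only the early and late apoptotic events are counted.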
Comparison of intergroup mean differences between the levels of the studied markers in controls, AFB1-exposed, as well as treated groups was performed using one-way analysis of variance (ANOVA). P values <0.05 were considered significant. Statistical analysis of plasma markers was performed using GraphPad Prism 5.0 software (GraphPad Software, Inc, USA).\n\n\nResults and discussion\n\nFirst, we evaluated the effect of Schiff bases on the studied parameters in intact animals (Schiff-base only group). In the blood plasma (Figure 2A) and liver (Figure 2B) of the intact animals, treatment with PLP (blood: p = 0.0023, liver: p = 0.0001), PLT (blood: p = 0.0002, liver: p = 0.0342), and NLT (blood: p = 0.0001, liver: p = 0.0001) caused a significant decrease in MDA levels. No changes in SOD and CAT levels were observed during treatment with Schiff bases (Figure 3A and B), while a significant increase in the total soluble AOA of blood serum was observed (Figure 4).\n\nMDA levels in the plasma (A) and liver (B) of studied groups. CNTRL – intact animals (n = 10); PLP, PLT and NLT intact animals (n = 10 in each group) received 10-day oral treatment with corresponding Schiff bases at 10mg/kg dosage; AFB1 – rats (n = 10) treated with AFB1 for 21 days at 25μg/kg dosage; AFB1+PLP, AFB1+PLT, AFB1+NLT – AFB1 exposed rats (n = 10 in each group) treated with corresponding Schiff bases (mycotoxin and Schiff base dosages were similar in all treated groups). Data presented as mean±SD; *p<0.05 vs. CNTRL; #p<0.05 vs. AFB1. AFB1, aflatoxin B1; MDA, malonic dialdehyde; PLP, picolinyl-L-phenylalaninate; PLT, picolinyl-L-tryptophanate; NLT, nicotinyl-L-tryptophanate.\n\nLevels (µg/mL) of SOD (A) and CAT (B) in blood plasma of studied groups. 
CNTRL – intact animals (n = 10); PLP, PLT and NLT intact animals (for SOD: n = 10 in each group; for CAT: n = 10 in PLP and PLT, n = 9 in NLT groups) received 10-day oral treatment with corresponding Schiff bases at 10mg/kg dosage; AFB1 – rats (n = 5) treated with AFB1 for 21 days at 25μg/kg dosage; AFB1+PLP, AFB1+PLT, AFB1+NLT – AFB1 exposed rats (n = 5 in each group) treated with corresponding Schiff bases (mycotoxin and Schiff base dosages were similar in all treated groups). Data presented as mean±SD. *p<0.05 vs. CNTRL; #p<0.05 vs. AFB1. AFB1, aflatoxin B1; SOD, superoxide dismutase; CAT, catalase; PLP, picolinyl-L-phenylalaninate; PLT, picolinyl-L-tryptophanate; NLT, nicotinyl-L-tryptophanate.\n\nTotal soluble antioxidant activity (expressed as conventional units, c.u.) in blood plasma of studied groups. CNTRL – intact animals; PLP, PLT and NLT intact animals received 10-day oral treatment with corresponding Schiff bases at 10mg/kg dosage; AFB1 – rats treated with AFB1 for 21 days at 25μg/kg dosage; AFB1+PLP, AFB1+PLT, AFB1+NLT – AFB1 exposed rats treated with corresponding Schiff bases (mycotoxin and Schiff base dosages were similar in all treated groups). Number of animals in all groups was 6. Data presented as mean±SD. *p<0.05 vs. CNTRL; #p<0.05 vs. AFB1. AFB1, aflatoxin B1; AOA, antioxidant activity; PLP, picolinyl-L-phenylalaninate; PLT, picolinyl-L-tryptophanate; NLT, nicotinyl-L-tryptophanate.\n\nNext, we analyzed the changes of the above-mentioned parameters in the AFB1 and AFB1+Schiff bases groups of animals. We assessed the intensity of AFB1-induced lipid peroxidation and the effects of the Schiff bases. According to our results, AFB1 induced a significant increase of MDA both in plasma (59%, p = 0.0001) and liver (85%, p = 0.0001) in rats of the mycotoxicosis group (Figure 2), which suggests activation of lipid peroxidation processes. 
Treatment of mycotoxicosis with Schiff bases caused a significant decrease of MDA levels both in blood and liver (Figure 2).\n\nNext, we tested the status of the enzymatic free-radical defense system during AFB1 mycotoxicosis and Schiff base treatment at a 10 mg/kg dose level. The levels of SOD were significantly increased (p = 0.005) in the plasma of AFB1-treated rats, while no difference was observed for CAT levels (Figure 3A). Treatment with PLT and NLT decreased SOD levels to control values, while treatment with PLP reduced them further (p = 0.001) (Figure 3A).\n\nFinally, we observed a fourfold elevation of the total soluble AOA of blood serum non-enzymatic water-soluble antioxidants in AFB1 mycotoxicosis (p = 0.0001) compared to control. Treatment with PLT (p = 0.0001) and NLT (p = 0.0013) tended to normalize AOA levels in AFB1-exposed animals, while PLP had no clear impact on their levels (Figure 4).\n\nThe increase of malonic dialdehyde in blood plasma and liver homogenate suggests the intensification of lipid peroxidation at both the organ and systemic levels. It is well known that AFB1 metabolizes into aflatoxin-8,9-epoxide, which aggressively interacts with DNA and forms adducts8,9. Moreover, in line with our results, it has been shown that AFB1 induces lipid peroxidation in the liver25. Meanwhile, we observed an increase in the levels of SOD and water-soluble non-enzymatic antioxidants, which can be a compensatory reaction to counterbalance oxidative stress. Schiff bases (PLT and NLT) were shown to normalize both SOD and AOA levels, as well as to decrease the levels of MDA, which indicates their ability to interfere with the process of ROS generation in response to AFB1 exposure. 
Though the precise mechanism of their action is unknown, it was proposed that it might be related to the contents of active hydroxyl and amino groups of the Schiff bases26,27.\n\nIn order to evaluate the immune system changes caused by exposure to AFB1 and the effects of Schiff bases, the levels of terminal complement component (C5b-C9, TCC), circulating immune complexes, as well as the rates of early and late apoptosis of neutrophils and monocytes were assessed.\n\nThe results showed a statistically significant increase (p = 0.021) of TCC levels (marker of complement activation) and reduced (p = 0.003) levels of CICs in the blood plasma of AFB1-exposed animals (Figure 5). The treatment with NLT and PLP further increased the levels of TCC (p = 0.001 and p = 0.003 compared to controls, respectively), in the meantime normalizing CIC levels to that of controls. By contrast, treatment with PLT decreased the levels of TCC to the control levels without affecting low CIC levels. Neither CIC nor TCC levels were affected by the treatment with Schiff bases alone (Figure 5).\n\nLevels (pg/mL) of terminal complement complexes (A) and CICs (B) in blood plasma of studied groups. CNTRL – intact animals (n = 10); PLP (n = 9), PLT (n = 8) and NLT (n = 10) intact animals received 10-day oral treatment with corresponding Schiff bases at 10mg/kg dosage; AFB1 – rats treated with AFB1 for 21 days at 25μg/kg dosage (n = 8); AFB1+PLP (n = 6), AFB1+PLT (n = 6), AFB1+NLT (n = 9) – AFB1 exposed rats treated with corresponding Schiff bases (mycotoxin and Schiff base dosages were similar in all treated groups). Data presented as mean±SD. *p<0.05 vs. CNTRL; #p<0.05 vs. AFB1. 
AFB1, aflatoxin B1; CICs, circulating immune complexes; PLP, picolinyl-L-phenylalaninate; PLT, picolinyl-L-tryptophanate; NLT, nicotinyl-L-tryptophanate.\n\nThe spontaneous apoptotic rates (early and late) of whole blood neutrophils and monocytes from control rats were not different compared with those from the rats treated with NLT, PLT, and PLP (Figure 6A and Figure 7A). The only exception was significantly reduced late apoptosis of neutrophils in the blood of rats treated with PLT (p = 0.032) and PLP (p = 0.034). AFB1 had a profound effect on the apoptosis of neutrophils and monocytes (Figure 6 and Figure 7). In particular, significantly accelerated early apoptosis of neutrophils and monocytes (p = 0.045) and late apoptosis of monocytes (p = 0.0027) were observed in rats with induced mycotoxicosis. Meanwhile, the treatment of AFB1-exposed rats with NLT significantly decreased the apoptotic rates of neutrophils (early and late: p = 0.05 and p = 0.036, respectively) and monocytes (early and late: p = 0.011 and p = 0.0025, respectively). We also observed a significant effect of PLT on the late apoptosis of neutrophils (p = 0.035) and monocytes (p = 0.0075) in the rats with developed aflatoxicosis compared to untreated rats. A less prominent effect was shown for PLP. The only significant difference was found in the late apoptosis of monocytes (p = 0.016).\n\nSpontaneous (A) and induced apoptosis (B) of neutrophils in studied groups. CNTRL – intact animals (n = 5); PLP, PLT and NLT intact animals received 10-day oral treatment with corresponding Schiff bases at 10mg/kg dosage (n = 9 in each group); AFB1 – rats treated with AFB1 for 21 days at 25μg/kg dosage (n = 9); AFB1+PLP, AFB1+PLT, AFB1+NLT – AFB1 exposed rats treated with corresponding Schiff bases (mycotoxin and Schiff base dosages were similar in all treated groups) (n = 9 in each group). Data presented as mean±SD. *p<0.05 vs. CNTRL; #p<0.05 vs. AFB1. 
AFB1, aflatoxin B1; PLP, picolinyl-L-phenylalaninate; PLT, picolinyl-L-tryptophanate; NLT, nicotinyl-L-tryptophanate.\n\nSpontaneous (A) and induced apoptosis (B) of monocytes in studied groups. CNTRL – intact animals (n = 5); PLP, PLT and NLT intact animals received 10-day oral treatment with corresponding Schiff bases at 10mg/kg dosage (n = 9 in each group); AFB1 – rats treated with AFB1 for 21 days at 25μg/kg dosage (n = 9); AFB1+PLP, AFB1+PLT, AFB1+NLT – AFB1 exposed rats treated with corresponding Schiff bases (mycotoxin and Schiff base dosages were similar in all treated groups) (n = 9 in each group). Data presented as mean±SD. *p<0.05 vs. CNTRL; #p<0.05 vs. AFB1. AFB1, aflatoxin B1; PLP, picolinyl-L-phenylalaninate; PLT, picolinyl-L-tryptophanate; NLT, nicotinyl-L-tryptophanate.\n\nMycotoxins may induce severe immunosuppression by downregulation of T and B lymphocyte activity, inhibition of antibody production and of the synthesis of complement components and interferon, as well as impairment of macrophage effector-cell function. While the exact mechanisms of mycotoxin action on the immune system are presently unknown, oxidative stress, DNA damage, and inhibition of gene expression and protein synthesis can be involved in the immunosuppressive action of mycotoxins23,28,29.\n\nIn this study, we observed massive AFB1-induced apoptosis of neutrophils and monocytes, which is in line with a previous report30. It is known that apoptotic cells activate complement. Complement binding by apoptotic cells in normal human plasma occurs mainly on late apoptotic, secondary necrotic cells, and the dominant mechanism involves the classical pathway of complement activation by antibodies. Depletion of antibodies abolishes most complement fixation by apoptotic cells and delays their clearance31. 
Furthermore, we report here, for the first time, a decrease of circulating immune complexes and an increase of the circulating terminal complement component in AFB1-treated rats, which is in line with these findings. In this regard, the Schiff bases (PLP and NLT) were shown to have a modulating effect on immunity, decreasing the apoptotic rate and restoring efficient removal of apoptotic cells, as reflected in the restored TCC and CIC levels.\n\n\nConclusions\n\nThe results obtained in this study clearly demonstrated that AFB1 administration induced oxidative cell damage, immunosuppression and apoptosis of circulating immune cells. Oral administration of Schiff base cyclic amino acid derivatives can minimize the detrimental effects of mycotoxicosis through multi-mechanistic effects that target AFB1-induced pathological events.\n\n\nData availability\n\nDataset 1. Raw values for all the commercial ELISA kits (SOD, CAT, TCC and CIC) and for the MDA and AOA assays. Dataset 1 contains tables with raw data for all measured assays except flow cytometry. CNTRL – intact animals; PLP, PLT and NLT – intact animals that received 10-day oral treatment with the corresponding Schiff bases at 10 mg/kg dosage; AFB1 – rats treated with AFB1 for 21 days at 25 μg/kg dosage; AFB1+PLP, AFB1+PLT, AFB1+NLT – AFB1-exposed rats treated with the corresponding Schiff bases (mycotoxin and Schiff base dosages were similar in all treated groups). Measurement units are provided in the corresponding table legends. doi: 10.5256/f1000research.11756.d16918032\n\nFACS output files for neutrophils and monocytes are available at Zenodo33.",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe authors acknowledge funding from the International Science and Technology Center (ISTC) in the frame of A-2116 (MM and AA) grant.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nThe authors would like to thank Hakob Devejyan and Svetlana Kirakosyan for their advice and assistance in experiments.\n\n\nReferences\n\nWild CP, Gong YY: Mycotoxins and human disease: a largely ignored global health issue. Carcinogenesis. 2010; 31(1): 71–82. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAdzahan N, Jalili M, Jinap S: Survey of aflatoxins in retail samples of whole and ground black and white peppercorns. Food Addit Contam Part B Surveill. 2009; 2(2): 178–182. PubMed Abstract | Publisher Full Text\n\nMeki AR, Abdel-Ghaffar SK, El-Gibaly I: Aflatoxin B1 induces apoptosis in rat liver: protective effect of melatonin. Neuro Endocrinol Lett. 2001; 22(6): 417–426. PubMed Abstract\n\nQin H, Li H, Zhou X, et al.: Effect of superoxide and inflammatory factor on aflatoxin B1 triggered hepatocellular carcinoma. Am J Transl Res. 2016; 8(9): 4003–4008. PubMed Abstract | Free Full Text\n\nIARC: IARC monographs on the evaluation of carcinogenic risks to humans: some traditional herbal medicines, some mycotoxins, naphthalene and styrene. IARC Press, 2002; 82. Reference Source\n\nAnderson PD: Bioterrorism: toxins as weapons. J Pharm Pract. 2012; 25(2): 121–9. PubMed Abstract | Publisher Full Text\n\nMalír F, Roubal T, Ostrý V, et al.: Mycotoxins and bioterrorism. Chem Listy. 2007; 101: 119–121. Reference Source\n\nFodor J, Meyer K, Gottschalk C, et al.: In vitro microbial metabolism of fumonisin B1. Food Addit Contam. 2007; 24(4): 16–20. PubMed Abstract | Publisher Full Text\n\nPeraica M, Domijan AM: Contamination of food with mycotoxins and human health. 
Arh Hig Rada Toksikol. 2001; 52(1): 23–35. PubMed Abstract\n\nSmela ME, Currier SS, Bailey EA, et al.: The chemistry and biology of aflatoxin B1: from mutational spectrometry to carcinogenesis. Carcinogenesis. 2001; 22(4): 535–545. PubMed Abstract | Publisher Full Text\n\nBakheet SA, Attia SM, Alwetaid MY, et al.: β-1,3-Glucan reverses aflatoxin B1-mediated suppression of immune responses in mice. Life Sci. 2016; 152: 1–13. PubMed Abstract | Publisher Full Text\n\nChang CF, Hamilton PB: Impaired phagocytosis by heterophils from chickens during aflatoxicosis. Toxicol Appl Pharmacol. 1979; 48(3): 459–466. PubMed Abstract | Publisher Full Text\n\nGiambrone JJ, Ewert DL, Wyatt RD, et al.: Effect of aflatoxin on the humoral and cell-mediated immune systems of the chicken. Am J Vet Res. 1978; 39(2): 305–8. PubMed Abstract\n\nThaxton JP, Tung HT, Hamilton PB: Immunosuppression in chickens by aflatoxin. Poult Sci. 1974; 53(2): 721–5. PubMed Abstract | Publisher Full Text\n\nBennett JW, Klich M: Mycotoxins. Clin Microbiol Rev. 2003; 16(3): 497–516. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMalakyan M, Bajinyan S, Boyajyan A, et al.: Radioprotection by Cu(II) Chelates of Nicotinyl-L-Aminoacidates. Radioprotection. 2008; 43(5). Publisher Full Text\n\nMalakyan M, Babayan N, Grigoryan R, et al.: Synthesis, characterization and toxicity studies of pyridinecarboxaldehydes and L-tryptophan derived Schiff bases and corresponding copper (II) complexes [version 1; referees: 2 approved]. F1000Res. 2016; 5: 1921. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBoyajyan A, Poghosyan A, Hovsepyan T, et al.: Cyclic Amino Acid Derivatives as New Generation of Radioprotectors. Advanced Sensors for Safety and Security. 2013; 271–278. Publisher Full Text\n\nVenugopala KN, Jayashree BS: Synthesis of carboxamides of 2'-amino-4'-(6-bromo-3-coumarinyl) thiazole as analgesic and antiinflammatory agents. Indian J Heterocy Ch. 2003; 12(4): 307–310. 
Reference Source\n\nLovely PS, Christudhas M: Synthesis, characterization and antimicrobial activity of Schiff base complexes of Cu (II) and Ni (II). Asian J Res Chem. 2012; 5(9): 1143–1149. Reference Source\n\nPatil RJ, Tyagi JS, Sirajudeen M, et al.: Effect of Dietary Melatonin and L-Tryptophan on Growth Performance and Immune Responses of Broiler Chicken under Experimental Aflatoxicosis. Iran J Appl Anim Sci. 2013; 3(1): 139–144. Reference Source\n\nHaubeck HD, Lorkowski G, Kölsch E, et al.: Immunosuppression by Ochratoxin A and Its Prevention by Phenylalanine. Appl Environ Microbiol. 1981; 41(4): 1040–1042. PubMed Abstract | Free Full Text\n\nPlacer ZA, Cushman LL, Johnson BC: Estimation of product of lipid peroxidation (malonyl dialdehyde) in biochemical systems. Anal Biochem. 1966; 16(2): 359–364. PubMed Abstract | Publisher Full Text\n\nMihara M, Uchiyama M, Fukuzawa K: Thiobarbituric acid value on fresh homogenate of rat as a parameter of lipid peroxidation in aging, CCl4 intoxication, and vitamin E deficiency. Biochem Med. 1980; 23: 302–311. PubMed Abstract | Publisher Full Text\n\nShen HM, Shi CY, Shen Y, et al.: Detection of elevated reactive oxygen species level in cultured rat hepatocytes treated with aflatoxin B1. Free Radic Biol Med. 1996; 21(2): 139–146. PubMed Abstract | Publisher Full Text\n\nGuo Z, Xing R, Liu S, et al.: The synthesis and antioxidant activity of the Schiff bases of chitosan and carboxymethyl chitosan. Bioorg Med Chem Lett. 2005; 15(20): 4600–4603. PubMed Abstract | Publisher Full Text\n\nAnouar el H, Raweh S, Bayach I, et al.: Antioxidant properties of phenolic Schiff bases: structure-activity relationship and mechanism of action. J Comput Aided Mol Des. 2013; 27(11): 951–964. PubMed Abstract | Publisher Full Text\n\nBondy GS, Pestka JJ: Immunomodulation by fungal toxins. J Toxicol Environ Health B Crit Rev. 2000; 3(2): 109–143. PubMed Abstract | Publisher Full Text\n\nCorrier DE: Mycotoxicosis: Mechanisms of immunosuppression. 
Vet Immunol Immunopathol. 1991; 30(1): 73–87. PubMed Abstract | Publisher Full Text\n\nWang F, Shu G, Peng X, et al.: Protective effects of sodium selenite against aflatoxin B1-induced oxidative stress and apoptosis in broiler spleen. Int J Environ Res Public Health. 2013; 10(7): 2834–2844. PubMed Abstract | Publisher Full Text | Free Full Text\n\nZwart B, Ciurana C, Rensink I, et al.: Complement Activation by Apoptotic Cells Occurs Predominantly via IgM and is Limited to Late Apoptotic (Secondary Necrotic) Cells. Autoimmunity. 2004; 37(2): 95–102. PubMed Abstract | Publisher Full Text\n\nMalakyan M, Ayvazyan V, Manukyan G, et al.: Dataset 1 in: Effects of Schiff base aromatic amino acid derivatives on antioxidant and immune system disturbances in a rat model of aflatoxin B1 induced experimental mycotoxicosis. F1000Research. 2017. Data Source\n\nMalakyan M, Ayvazyan V, Manukyan G, et al.: FACS output files for neutrophils and monocytes. [Data set]. Zenodo. 2017. Data Source"
}
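The results above report two-group comparisons as mean±SD with p-values (e.g., CNTRL vs. AFB1 apoptotic rates, n = 5 vs. n = 9). The article does not state which test was used, so as one standard option, here is a minimal pure-Python sketch of Welch's unequal-variance t statistic with the Welch–Satterthwaite degrees of freedom; the sample values are hypothetical apoptosis percentages, not the study's raw data (those are in Dataset 1):

```python
import math
from statistics import mean, stdev

def welch_t(a, b):
    """Welch's t statistic and approximate degrees of freedom for two
    independent samples with unequal variances."""
    va = stdev(a) ** 2 / len(a)  # squared standard error, group a
    vb = stdev(b) ** 2 / len(b)  # squared standard error, group b
    t = (mean(a) - mean(b)) / math.sqrt(va + vb)
    # Welch-Satterthwaite approximation of the degrees of freedom
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, df

# Hypothetical early-apoptosis percentages (illustrative only)
cntrl = [4.1, 3.8, 5.0, 4.4, 4.7]            # n = 5
afb1 = [9.5, 8.8, 10.2, 9.9, 9.1, 10.5, 9.0, 9.8, 10.1]  # n = 9
t, df = welch_t(cntrl, afb1)  # t is strongly negative: CNTRL mean is lower
```

A p-value would then come from the t distribution with `df` degrees of freedom (e.g., `scipy.stats.t.sf`); the point here is only the group-comparison arithmetic behind the reported p-values.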
|
[
{
"id": "26258",
"date": "27 Oct 2017",
"name": "Jalil Mehrzad",
"expertise": [
"Immunology and single-immune cell technologies"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe authors showed here more interesting effects of Schiff base aromatic amino acid derivatives on antioxidant and immune system disturbances caused by AFB1 and Schiff base aromatic amino acid derivatives as potential good candidates. I appreciate the authors' focus on this interesting topic, strategy and detoxification, but regardless of its potentials there are some questions and uncertainties needed to be clarified in the text, methods and interpretation. Also the way of addressing at its current format with some unnecessary words, speculations and incomplete methods. We also like to see the interesting point of the immunological mechanistic effects of this work in the rat model.\n\nThe way of writing should be switched towards the following direction and ...\n\nThe acute toxicity is not an issue for the world, the chronic and invisible exposure is an issue. The authors should reformat their points and direction and in the introduction the authors should more precisely explain the issues related to chronic aspects/exposure/toxicity of AFB1, and some immunosuppressing effects of this toxin in human and animals’ immune cells and molecules. Also apply this direction for your interpretation of the works. 
Some useful papers it might be useful for this work to mention are referenced below this report.\n\nIn the methods the authors should improve their methodology of work; Flow cytometry needs to be improved and data related to gating, and selecting the specific cells for the assays should be shown, and readers might want to see and learn. How you gated the neutrophils and monocytes and the process of flow cytometry needed for improving its quality. How you chose the cells in the gates?\n\nAlso in the methods it is necessary how you prepared and dissolved AFB1 and mixed them with your Schiff base aromatic amino acid derivatives?\n\nAnd better explain the administration schedule for your rats? They are unclear; please make them clearer.\n\nThe detailed methods for antioxidant assessments and luminometry should be addressed.\n\nThe paper needs some revision and improvement and it is the authors’ duty to improve their nice works.\n\nFor the caption of each figs please add the main message of each figure?\n\nEnglish and typing errors through the text should be improved and cleaned…\n\nTerminology issues should be improved. For example, what do you mean for the “apoptotic rat? I do not understand. AFB1 (1 should be subscripted through the text). Sometimes you should use rat rather than animals.\n\nGood luck\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Partly\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Partly\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": []
},
{
"id": "29183",
"date": "24 Apr 2018",
"name": "Gökhan Eraslan",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThe purpose of the work should be described in more detail. No post hoc test was performed on the statistical evaluation. Discussion and result parts are given together. While the findings are given in detail, discussion of the data is not appropriate. The mechanisms of the possible toxic effects of aflatoxin should be discussed in detail. The mechanisms of the positive effects of Schiff base aromatic amino acids should also be considered in detail. For this reason, the discussion part is written insufficiently. The conclusion should be rearranged taking into account the scientific results.\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nPartly\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-1413
|
https://f1000research.com/articles/6-687/v1
|
17 May 17
|
{
"type": "Research Note",
"title": "Polyp bailout in Pocillopora damicornis following thermal stress",
"authors": [
"Alexander J Fordyce",
"Emma F Camp",
"Tracy D Ainsworth",
"Emma F Camp",
"Tracy D Ainsworth"
],
"abstract": "Polyp bailout is an established but understudied coral stress response that involves the detachment of individual polyps from the colonial form as a means of escaping unfavourable conditions. This may influence both the mortality and asexual recruitment of coral genotypes across a range of species. It was first described by Goreau & Goreau (1959) and has been observed in response to numerous stressors, including high salinity and low pH. However, polyp bailout has not previously been described in association with thermal stress and the coral bleaching response, which is becoming increasingly common around the world. We present the first qualitative observation of polyp bailout following thermal stress in a mesocosm experiment. Detached polyps show similar characteristics to those described in previous studies, including the retention of endosymbiotic zooxanthellae and the ability to disperse across short distances. As the frequency of thermal stress increases globally, we suggest further detailed research into the prevalence of this response in situ and its implications for the survival of individual corals, as well as the potential for migration into cooler micro-habitats within the coral reef environment.",
"keywords": [
"polyp bailout",
"thermal stress",
"coral bleaching",
"Pocillopora damicornis"
],
"content": "Introduction\n\nCoral reefs around the world are facing increasingly frequent acute thermal stress events (Ainsworth et al., 2016; Hughes et al., 2017). As such there has been a corresponding increase in research into how corals respond to high temperatures, how these responses vary within individuals and communities, and how variation influences the resilience, recovery and structure of coral communities. Understanding this variation helps predict patterns of bleaching-induced mortality and reef-wide degradation. As the possibility of frequent, severe bleaching events increases (van Hooidonk et al., 2016), it is important to understand the drivers of variability in order to improve management and target restoration efforts. Polyp bailout is a possible source of variation that may influence the survival of individual genotypes and recruitment at local scales.\n\nPolyp bailout has been observed in at least six species of scleractinian coral (Serrano et al., 2017) and involves the withdrawal of individual polyps from the coenosarc followed by their detachment from the skeleton (Sammarco, 1982). The detachment of individual polyps from a parent coral colony has previously been recorded in response to poor water quality (Sammarco, 1982; Serrano et al., 2017), large changes in pH (Kvitt et al., 2015), changes in salinity (Shapiro et al., 2016) and following competition from macroalgae (Sin et al., 2012). As polyps often retain their endosymbiotic dinoflagellates (zooxanthellae) and are able to re-settle, polyp bailout is thought to be a generalised escape response from detrimental conditions (Kvitt et al., 2015; Sammarco, 1982). It may therefore constitute rapid migration away from local sources of mortality.\n\nDespite increasing temperatures being arguably the most significant threat to coral reefs (Hughes et al., 2017), this response has yet to be described during thermal stress. 
Here we present the first qualitative observation of polyp bailout following thermal stress in Pocillopora damicornis during an ex situ mesocosm study.\n\n\nMethods\n\nFragments of P. damicornis were collected from the Heron Island reef flat in January 2017, from a maximum depth of two metres. They were housed in four 500 litre aquaria as part of an outdoors, semi-closed system supplied by a continuous flow of unfiltered seawater from the reef flat. During the week preceding simulated thermal stress, all fragments were acclimated to the aquaria and subjected to ambient conditions (ca. 7.980 – 8.020 pH; conductivity of 53 – 54 μS/m; temperature of 26 – 30°C; and PAR of 0 – 3875 K). Following this, temperature was gradually increased in two mesocosms on top of natural variation for six days up to a peak daytime temperature of 34°C to simulate a severe bleaching event (as previously reported by Ainsworth et al., 2016). Two control mesocosms continued to be exposed to ambient conditions, differing from treatments in temperature only. Fragments were monitored throughout the day and when polyps were observed to bail out, they were collected using a wide-ended pipette and examined under an Olympus SZX16 stereomicroscope.\n\n\nResults\n\nOn the fifth day of the simulated bleaching event, polyps were observed to begin bailing out at approximately 09:30 (Figure 1; Dataset 1, Fordyce et al., 2017). At this time, peak daytime temperature was 33°C, which is the equivalent of 13 degree heating days, a measure of accumulated heat stress used in the prediction of mass bleaching events. By the end of day six, at peak temperature of 34°C, reflecting 18 degree heating days, all polyps had detached (Dataset 1, Fordyce et al., 2017). At the end of the bailout period, polyps began to detach in sheets rather than as individuals. This suggests that thermal stress was too severe to allow successful withdrawal of all polyps from the coenosarc. 
In contrast, fragments in the control mesocosms showed no signs of bleaching or polyp bailout (Supplementary Material 1).\n\nMacrophotograph of polyps, having withdrawn from the connective coenosarc, dropping off the skeleton of the fragments of Pocillopora damicornis. Photograph taken with an Olympus Stylus Tough TG-4.\n\nBailed polyps were slightly negatively buoyant, sinking slowly, but could easily be re-suspended with mild disturbance. The detached, individual polyps retained their zooxanthellae and many were observed to extend coiled mesenterial filaments (as described in Richmond, 1985; Figure 2). Clusters of detached polyps were also observed, however these lacked calcified tissue and so did not resemble the larval clusters described by Richmond (1985) (Figure 2).\n\nMicrograph of single polyps and clusters of polyps placed in a glass petri dish, taken using an Olympus SZX16 stereomicroscope with a computer-linked 2.0x objective lens. Total magnification is 17.0x. Small brown dots in the polyp tissue are endosymbiotic zooxanthellae. Coiled filaments are adhesive mesenterial filaments, presumed to aid in rapid settlement.\n\n\nImplications and future research\n\nIn past observations of polyp bailout, corals have been subjected to extreme aquarium conditions such as high salinity (up to 54 parts per thousand; Shapiro et al., 2016), low pH (7.2; Kvitt et al., 2015) or little to no water replacement resulting in anoxic and low nutrient conditions (Serrano et al., 2017; Sin et al., 2012). This makes it difficult to apply these results to the context of natural systems and elucidate the possible role of this response during environmental stress. The present observation was in aquaria with near-natural conditions. The peak daily temperature of 34°C was sustained over several days, reflecting severe thermal stress. However, accumulated heat stress amounted to between two and three degree heating weeks (14–21 degree heating days). 
This is becoming widely reported during bleaching events on the Great Barrier Reef (Ainsworth et al., 2016; Hughes et al., 2017). Therefore, this observation suggests that polyp bailout can occur in natural reef environments, in response to currently observed temperature increases. For coral species that utilise this response, the present record has implications for coral recruitment and recovery on local scales, and also suggests that these processes can occur independently of sexual reproduction and the impact of thermal stress on reproductive potential.\n\nDetached polyps appear to be viable, able to settle near the parent colony and capable of being dispersed short distances in a reef environment. Sammarco (1982) previously described low (< 5%) settlement and survival rates of detached polyps on settlement tiles contained in jars but survival of individual polyps within a simulated or natural reef environment has yet to be investigated, particularly how it is affected by reef degradation. If polyps were to ‘escape’ into cooler conditions with higher food availability, it may boost their survival and settlement success rates beyond the 5% observed by Sammarco. In the current study, we noted that each parent colony detached a large population of individual polyps (>50 per fragment), showing that even survival of 5% has the potential to allow for some immediate re-seeding of the local reef habitat. Furthermore, obvious sources of nearby refugia are the neighbouring reef slope and mesophotic reef environments (Bridge et al., 2013; Smith et al., 2014), down to which these negatively buoyant polyps may slowly sink. 
However, light attenuation may limit viable settlement depths as individual polyps would need to rapidly acclimate to lower light conditions in order to settle and successfully begin asexual division.\n\nClearly, extensive future research is needed to explore the survival of individual polyps in both simulated refugia and within the reef habitat following thermal stress events. This will lead to greater understanding of the ecology and wider implications of this stress response and its potential role in coral recruitment and reef recovery following bleaching events. We additionally lack in situ observations of this phenomenon. Time-intensive ecological surveying during a predicted bleaching event is needed to reveal whether this is a widespread response to thermal stress. Subsequent use of mass mark and recapture of coral polyps, tagged with stable heavy isotopes, would then allow the tracking of the fate of coral polyps following bailout. Despite being a well-established response to stress, little research has focused on how polyp bailout may influence the survival and recovery of local coral populations.\n\n\nData availability\n\nDataset 1. Table of qualitative observations of polyp bailout in control and heat-treated mesocosms. Indicated is the peak daytime temperature (± 0.5°C) of the four treated mesocosms, the accumulated heat stress corals are exposed to and any observations during the four day bleaching period, including at the beginning and end of polyp bailout.\n\nDOI, 10.5256/f1000research.11522.d161213 (Fordyce et al., 2017)",
"appendix": "Author contributions\n\n\n\nTDA made the initial observation of polyp bailout. EFC photographed bailout for Figure 1. AJF photographed polyps for Figure 2 and prepared the first draft of the manuscript. All authors contributed to subsequent revisions of the manuscript and have agreed to the final content.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nAcknowledgements\n\nThe authors thank the Heron Island Scientific Services for their support during field work.\n\n\nSupplementary Material\n\nSupplementary Material 1. Comparative photographs of control and bleached fragments to show bleaching. Photographs were taken using an Olympus Stylus Tough TG-4 camera. A) Photograph of control fragment taken on 31/01/17; B) Photograph of bleached fragment, with visibly withdrawn individual polyps still attached to the skeleton, taken on 30/01/17.\n\nClick here to access the data.\n\n\nReferences\n\nAinsworth TD, Heron SF, Ortiz JC, et al.: Climate change disables coral bleaching protection on the Great Barrier Reef. Science. 2016; 352(6283): 338–342. PubMed Abstract | Publisher Full Text\n\nBridge TC, Hoey AS, Campbell SJ, et al.: Depth-dependent mortality of reef corals following a severe bleaching event: implications for thermal refuges and population recovery [version 3; referees: 2 approved, 1 approved with reservations]. F1000Res. 2013; 2: 187. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFordyce AJ, Camp EF, Ainsworth TD: Dataset 1 in: Polyp bailout in Pocillopora damicornis following thermal stress. F1000Research. 2017. Data Source\n\nGoreau TH, Goreau NI: The physiology of skeleton formation in corals. II. Calcium deposition by hermatypic corals under various conditions in the reef. Biol Bull. 1959; 117(2): 239–250. 
Publisher Full Text\n\nHughes TP, Kerry JT, Álvarez-Noriega M, et al.: Global warming and recurrent mass bleaching of corals. Nature. 2017; 543(7645): 373–377. PubMed Abstract | Publisher Full Text\n\nKvitt H, Kramarsky-Winter E, Maor-Landaw K, et al.: Breakdown of coral colonial form under reduced pH conditions is initiated in polyps and mediated through apoptosis. Proc Natl Acad Sci U S A. 2015; 112(7): 2082–2086. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRichmond RH: Reversible metamorphosis in coral planulae larvae. Mar Ecol Prog Ser. 1985; 22: 181–185. Publisher Full Text\n\nSammarco PW: Polyp bail-out: an escape response to environmental stress and a new means of reproduction in corals. Mar Ecol Prog Ser. 1982; 10: 57–65. Publisher Full Text\n\nSerrano E, Coma R, Inostroza K, et al.: Polyp bail-out by the coral Astroides calycularis (Scleractinia, Dendrophylliidae). Mar Biodiv. Published online. 2017; 1–5. Publisher Full Text\n\nShapiro OH, Kramarsky-Winter E, Gavish AR, et al.: A coral-on-a-chip microfluidic platform enabling live-imaging microscopy of reef-building corals. Nat Commun. 2016; 7: 10860. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSin LC, Walford J, Goh BP: The effect of benthic macroalgae on coral settlement. Contrib Mar Sci. 2012; 2012: 89–93. Publisher Full Text\n\nSmith TB, Glynn PW, Maté JL, et al.: A depth refugium from catastrophic coral bleaching prevents regional extinction. Ecology. 2014; 95(6): 1663–1673. PubMed Abstract | Publisher Full Text\n\nvan Hooidonk R, Maynard J, Tamelander J, et al.: Local-scale projections of coral reef futures and implications of the Paris Agreement. Sci Rep. 2016; 6: 39666. PubMed Abstract | Publisher Full Text | Free Full Text"
}
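The heat-stress bookkeeping in the Results above (13 degree heating days at a 33 °C peak, 18 by the end of day six, and "two and three degree heating weeks" as 14–21 degree heating days) can be sketched as simple accumulation of daily temperature exceedances. A minimal illustration follows; the 29 °C threshold stands in for the local maximum monthly mean (MMM) and is an assumption, not a value from the paper, and exact operational definitions vary (NOAA's Degree Heating Week product, for instance, accumulates only anomalies of at least 1 °C over a 12-week window):

```python
def degree_heating_days(daily_peak_temps, mmm):
    """Sum the positive daily exceedances (in degrees Celsius) of the
    climatological maximum monthly mean temperature (MMM)."""
    return sum(max(t - mmm, 0.0) for t in daily_peak_temps)

# Hypothetical six-day ramp to a 34 degree-C peak, with an assumed MMM of 29 C
daily_peaks = [30.0, 31.0, 32.0, 33.0, 33.0, 34.0]
dhd = degree_heating_days(daily_peaks, mmm=29.0)  # 1 + 2 + 3 + 4 + 4 + 5 = 19.0
dhw = dhd / 7.0  # degree heating weeks; here roughly 2.7
```

The dhw value for this hypothetical ramp falls in the two-to-three degree-heating-week band the authors describe as increasingly common on the Great Barrier Reef.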
|
[
{
"id": "23072",
"date": "19 Jun 2017",
"name": "Sylvain Agostini",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThis paper reports on the qualitative observation of polyp bailout for the coral species Pocillopora damicornis under heat stress in mesocosms.\nThe study remains qualitative and the authors are clear on this point.\nThe only remark I would have is in the discussion. The sentence \"Detached polyps appear to be viable, able to settle near the parent colony and capable of being dispersed short distances in a reef environment.\" sounds like that resettlement of the bailed polyp were observed. However it was not observed and remain a speculation. I would suggest to rephrase it to something like that: \"While resettlement of the bailed out polyps could not be observed, the polyps appear to be viable, and could be able to settle near the parent colony and to be dispersed accross short distances in a reef environment.\"\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate? Yes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": []
},
{
"id": "22841",
"date": "24 Jul 2017",
"name": "Dan Tchernov",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nGeneral remarks:\n\nIn the Abstract the authors claim: ”However, polyp bailout has not previously been described in association with thermal stress and the coral bleaching response…”\na. This is completely inaccurate-polyp bailout or expulsion in response to thermal stress has been reported in 2007 by Kruzic P.\nb. Moreover, the claim that bleaching is also involved: “in association with thermal stress and the coral bleaching response” is misleading - there is not a single experimental proof or measurement of bleaching in the current report, merely some remarks on “Paling observed in coral tissue” and “All corals fully bleached\" (Dataset 1.). Moreover, the polyps are clearly detaching with the zooxanthellae, (Figure 2: “Small brown dots in the polyp tissue are endosymbiotic zooxanthellae”). So where is the bleaching? In the coenosarc? If so - experimental evidence should be provided. Or the sentence: “..the coral bleaching response” should have been omitted.\n\nIn the Results, the use of the term “heating days” is unclear, and should have been explained (“At this time, peak daytime temperature was 33°C, which is the equivalent of 13 degree heating days, a measure of accumulated heat stress used in the prediction of mass bleaching events “).\n\nFigure 2 is unclear, especially the presumed “coiled filaments are adhesive mesenterial filaments, presumed to aid in rapid settlement”. 
Given currently available imaging technologies, I would expect a better image to convince the reader that these are indeed “adhesive mesenterial filaments”.\n\nThe literature is not correctly cited.\nIn the Abstract, the paper by Goreau & Goreau (1959) does not clearly describe polyp bailout or expulsion. It deals mainly with calcification, and merely mentions in the Discussion, as unpublished results, that polyps can detach. This citation should have been omitted from the Abstract and placed elsewhere.\n\nThe paper of Kramarsky-Winter et al. 1997 on polyp expulsion is not cited.\n\nThe paper of Kruzic P. 2007 on polyp expulsion under thermal stress, which is the most relevant paper to this report, is not cited.\n\nIs the work clearly and accurately presented and does it cite the current literature? No\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate? Not applicable\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? No",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-687
|
https://f1000research.com/articles/6-1396/v1
|
09 Aug 17
|
{
"type": "Method Article",
"title": "Accurate cytogenetic biodosimetry through automated dicentric chromosome curation and metaphase cell selection",
"authors": [
"Jin Liu",
"Yanxin Li",
"Ruth Wilkins",
"Farrah Flegal",
"Joan H.M. Knoll",
"Peter K. Rogan",
"Jin Liu",
"Yanxin Li",
"Ruth Wilkins",
"Farrah Flegal",
"Joan H.M. Knoll"
],
"abstract": "Accurate digital image analysis of abnormal microscopic structures relies on high quality images and on minimizing the rates of false positive (FP) and negative objects in images. Cytogenetic biodosimetry detects dicentric chromosomes (DCs) that arise from exposure to ionizing radiation, and determines radiation dose received based on DC frequency. Improvements in automated DC recognition increase the accuracy of dose estimates by reclassifying FP DCs as monocentric chromosomes or chromosome fragments. We also present image segmentation methods to rank high quality digital metaphase images and eliminate suboptimal metaphase cells. A set of chromosome morphology segmentation methods selectively filtered out FP DCs arising primarily from sister chromatid separation, chromosome fragmentation, and cellular debris. This reduced FPs by an average of 55% and was highly specific to these abnormal structures (≥97.7%) in three samples. Additional filters selectively removed images with incomplete, highly overlapped, or missing metaphase cells, or with poor overall chromosome morphologies that increased FP rates. Image selection is optimized and FP DCs are minimized by combining multiple feature based segmentation filters and a novel image sorting procedure based on the known distribution of chromosome lengths. Applying the same image segmentation filtering procedures to both calibration and test samples reduced the average dose estimation error from 0.4 Gy to <0.2 Gy, obviating the need to first manually review these images. This reliable and scalable solution enables batch processing for multiple samples of unknown dose, and meets current requirements for triage radiation biodosimetry of high quality metaphase cell preparations.",
"keywords": [
"Ionizing Radiation",
"Radiation Exposure",
"Computer-Assisted Image Processing",
"Software",
"Quality Control",
"Cytogenetics",
"Metaphase",
"Mass casualty incidents"
],
"content": "Abbreviations\n\nADCI, Automated Dicentric Chromosome Identifier and dose estimator; CNL, Canadian Nuclear Laboratories; DC, Dicentric chromosome; DCA, Dicentric chromosome assay; FP, False positive; HC, Health Canada; K–S, Kolmogorov–Smirnov test; MC, Monocentric chromosome; MC-DC SVM, Monocentric-Dicentric Support Vector Machine; ML, Machine learning; SCS, Sister chromatid separation; SD, Standard deviation; SVM, Support Vector Machine; TP, True positive.\n\n\nIntroduction\n\nAnalysis of microscopy images of metaphase cells demonstrates the damaging effects of ionizing radiation and can be used to measure the amount of radiation absorbed. The gold standard method for radiation biodosimetry, the dicentric chromosome assay (DCA), uses the frequency of aberrant dicentric chromosomes (DCs) formed after radiation exposure to determine the dose received by an individual (in Gy). While some aspects of the assay have been successfully streamlined, the overall throughput remains limited by the labour-intensive identification of DCs in many cells. This affects the timely estimation of radiation exposure, especially for testing multiple affected individuals in a large accident or a mass casualty nuclear event1,2.\n\nThe selection of images of adequate quality for accurate identification of the chromosome damage is a prerequisite to automating DCA. The decision to select or exclude particular microscope images based on the quality of metaphase cells has been performed manually, which is impractical given the increasing sizes of datasets produced by automated image capture systems. Image quality assessment has traditionally compared new data relative to reference images3, complex mathematical models4, or distortions from a training set recognized by machine learning5. 
Such generic approaches are not appropriate in the DCA because features tailored for ranking morphologically diverse chromosome images are not easily generalized as entropic or other measures applying frequency filters to intensity distributions. We demonstrate that quality chromosomal images can be selected for the DCA using supervised, image segmentation rules aimed at categorizing the preferred images and eliminating false positive (FP) DCs.\n\nWe previously developed the Automated Dicentric Chromosome Identifier and Dose Estimator (ADCI) software to automate DC detection and estimate radiation exposures6–11. Briefly, ADCI uses image segmentation techniques to extract possible chromosomes. Preprocessing image filters remove most but not all non-chromosomal objects (e.g. debris, nuclei, overlapping chromosomes). Each remaining object is regarded as a single, intact, post-replication “chromosome-like” object. Each of these objects is processed by a series of algorithms7–10 which create a quantitative profile measuring chromosome width from one telomere to the other. Potential centromere locations (“centromere candidates”) are identified at constrictions in the width profile (Figure 1)12. Machine learning (ML) modules then use features sourced from computer vision analysis of each chromosome to classify centromeres and dicentric chromosomes6,11. An initial Support Vector Machine (SVM) ranks potential centromere candidates in each chromosome according to their corresponding distances to the hyperplane that distinguishes centromeres from non-centromeric constrictions; then, another SVM scores the chromosome as either monocentric (MC) or dicentric (DC), using features derived from the top two centromere candidates.\n\n(A) Monocentric and (B) Dicentric chromosome. Chromosome contour overlaid in green, long-axis centreline in red. For reference, the minimum bounding box of the contour is also displayed in magenta and green. 
Yellow and cyan markers on the centerline indicate the top-ranked and 2nd-ranked centromere candidates, respectively, and all other candidates are indicated with a dark blue marker. For each centromere candidate, the corresponding width traceline (crossing through the candidate and running approximately orthogonal to the centerline) is displayed in dark blue. The arc lengths of width tracelines running down the centerline (not all shown) are used to construct a chromosomal width profile. Note that for the monocentric chromosome (A), the top-ranked candidate correctly labels the true centromere location, while the 2nd-ranked candidate labels a minor non-centromeric constriction. Meanwhile, for the dicentric example (B), both the top and 2nd-ranked candidates label true centromere locations. By comparing features extracted from the top 2 candidates (including width and pixel intensity information), the software will determine if the chromosome is monocentric or dicentric.\n\nSamples from blood exposed ex vivo to known radiation doses are processed by ADCI to construct a dose-response calibration curve. The average frequency of DCs per cell in dose calibrated samples, i.e. the radiation response, is fit to a linear-quadratic function. Responses for test samples exposed to unknown radiation levels can then be analyzed with this function to estimate the corresponding doses.\n\nWe noticed that metaphase cell images of inconsistent, lower quality can affect the accuracy of dose estimation by ADCI. Previous studies evaluated the efficacy of ADCI at chromosome classification and dose estimation10,11. While the sensitivity (recall) for DCs was acceptable (~70%) and relatively constant at all radiation exposure levels, precision showed a strong dependence on dose. Chromosome misclassifications, in particular FPs, comprised a larger fraction of DCs at low (≤1 Gy) relative to high (3–4 Gy) doses; at 1 Gy, FPs could outnumber true positive (TP) dicentrics by a factor of 4 to 5. 
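To make this dose dependence concrete, precision can be computed directly from TP and FP counts. The counts below are illustrative only, chosen to mirror the 4–5:1 FP:TP ratio quoted for 1 Gy; they are not measurements from the paper:

```python
# Precision = TP / (TP + FP); recall stays roughly constant across doses,
# but precision collapses when FPs dominate at low dose.
# All counts here are illustrative, not measured values.
def precision(tp, fp):
    return tp / (tp + fp)

low_dose_precision = precision(10, 45)    # FP:TP = 4.5, as at ~1 Gy -> ~0.18
high_dose_precision = precision(10, 3)    # FPs are a minority at 3-4 Gy
```
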
Consequently, ADCI-processed samples exhibited a reduced range of accurate responses to radiation compared to manually scored samples. Although use of the same algorithm to derive the calibration curve compensates for some of these differences, reliability of the dose estimation ultimately hinges on DC classification accuracy. As DCs are always greatly outnumbered by MCs in a cell (background frequency in normal, unexposed individuals is one DC per 1000 cells6), this study focuses on improving the distinction between TP and FP DCs without compromising sensitivity.\n\nFPs reflect inadequacies in interpreting certain chromosome morphologies or non-chromosomal objects as DCs. To improve overall DC classification accuracy, FPs must be selectively identified and removed without limiting TP counts. We first investigated FPs to categorize problematic cases and devised a set of post-processing object segmentation filters to eliminate them. Then, to ensure consistent overall performance within a set of images from a sample, statistical filters were developed to remove poor quality cells. Frequently, these images either lacked any chromosomes or contained incomplete metaphase cells, misclassified interphase or micro-nuclei as metaphase cells, or incorrectly segmented sister chromatids as individual chromosomes. Chromatid separation and chromosome fragments increase the object count in an image, but the pixel areas of said objects are smaller than actual chromosomes. Chromosome-overlaps reduce the object count, but their areas tend to exceed those of discrete chromosomes. Each proposed statistical filter was tested individually, and the best performing filters were applied cumulatively, then tested on cytogenetic dosimetry data at various radiation exposures. 
Effects of these filters on classification performance and dose estimation were then evaluated with dose-blinded, irradiated samples obtained from biodosimetry laboratories at Health Canada (HC) and Canadian Nuclear Laboratories (CNL).\n\nThis hybrid approach selects images based on optimal metaphase cell image properties and customized segmentation, and identifies and eliminates FP DCs. These improvements in ADCI ensure timely, reproducible, and accurate quantitative assessment of acute radiation exposure.\n\n\nMethods\n\nCytogenetic image data were obtained at biodosimetry laboratories at HC and CNL, according to International Atomic Energy Agency (IAEA) guidelines. Blood samples were irradiated by an XRAD-320 (Precision X-ray, North Branford, CT) at Health Canada and processed at both laboratories. Samples were obtained with written informed consent from anonymous donors by the HC laboratory as approved by the Health Canada and Public Health Agency of Canada’s Research Ethics Board of protocol: “Development of Biological Dosimeters for Ionizing Radiation.” Peripheral blood lymphocyte samples were cultured, fixed, and stained at each facility according to established protocols2,12. Metaphase images from Giemsa-stained slides were captured independently by each laboratory using an automated microscopy system (Metasystems, Newton, MA). One set of metaphase images from CNL and two sets from HC (Table 1) were used for development and initial testing of the proposed algorithms. After image processing by ADCI, the identified DCs were manually reviewed, and the numbers of TPs and FPs were tallied. Calibration curves were prepared based on 6 samples of known radiation dose (Table 2). An additional 6 samples11 were initially blinded to the actual radiation exposures as test samples (Table 3). Test samples were exposed to a range of radiation doses bounded by the doses of samples used to construct the calibration curve. 
The sample naming convention is the laboratory name followed by the sample identifier, e.g. HC1Gy signifies the 1 Gy calibration sample prepared at HC, whereas CNL-INTC03S04 represents the test sample, INTC03S04, from an international laboratory inter-comparison exercise that was prepared at CNL (which had been exposed to 1.8 Gy).\n\n*HC-mixed refers to a combined set of all images from both the HC-low + HC-high datasets\n\n**Defined as number of valid segmented objects defined by ADCI.\n\nn/a: data were not available.\n\nEach calibration and test sample consisted of images from the same individual. HC provided an unselected set of all metaphase cells that were automatically recognized and captured using the default classifier of the microscopy system. By contrast, CNL previously manually curated a set of 500 high quality metaphase cell images, selected according to IAEA guidelines12, which deem metaphase cells analyzable based on chromosome count, distribution and morphology.\n\nADCI software (V1.0)11 was used for DC detection and dose prediction, setting the tuning parameter, σ, for the MC-DC SVM to 1.5. Software libraries were initially developed as available MATLAB scripts to test segmentation filters that detected FP DCs; once validated, C++ versions of these libraries were integrated into ADCI. For validation, two low-dose datasets and one high-dose dataset were used (Table 1; the combination from HC comprises the HC-mixed image set).\n\nQuantitative morphological filters to delineate FP DCs were created and tested (i-viii, below). Each filter is designed to detect one or more of 6 morphological subclasses of FPs (described in Supplementary File 1). 
The FPs result from either I) excessive sister chromatid separation (SCS), II) fragmented or III) overlapping chromosomes, IV) chromosomes with highly variable boundaries or contours, V) non-chromosomal cellular debris, or VI) errors in the machine learning algorithms that detect centromere candidates and distinguish MCs from DCs.\n\nThe set of N chromosomes in any metaphase image is denoted by {c1,…,cN} and c* denotes the predicted DC of interest. Each filter (designated i – viii, below) classifies c* as either a TP or FP by comparing its filter score against a heuristically-defined threshold that is independent of laboratory source. Quantitative thresholds were established for each filter to eliminate the maximum number of FPs, without compromising detection of TP. Due to the relatively low frequency of DCs in the samples, maximal detection of TPs is essential for accurate dose estimation. Since FPs generally produce lower filter scores than TPs (i.e. lower area, lower width, less oblong footprint, more asymmetrical), FPs were selected by eliminating candidate DCs with scores below each threshold. The corresponding FP filter scores were calculated for all DCs in the HC-mixed image set (Table 1), and a heuristic threshold (to 2 significant digits; see below) was set to the minimum value observed in TPs for each filter. Thresholds for filters vi, vii and viii were calculated by repeating the same procedure on a set of 244 TP chromosomes from the MC-DC SVM training set6, and the final thresholds were set to the lower of each pair of values.\n\ni. Area filter: A(c) denotes the pixel area occupied by chromosome c (Figure 2B). c* was classified as FP, if A(c*)/median({A(c1),…,A(cN)}) < 0.74 or as TP otherwise. This filter targets small chromosomes commonly displaying SCS (Figure 2A) and chromosome fragments.\n\nii. Mean width filter: Wmean(c) denotes the mean value of the width profile of chromosome c (Figure 2C). 
c* was classified as FP if Wmean(c*)/median({Wmean(c1),…, Wmean(cN)}) < 0.80 or as TP otherwise. This filter targets SCS and chromosome fragments.\n\niii. Median width filter: Wmed(c) denotes the median value of the width profile of chromosome c (Figure 2C). c* was classified as FP if Wmed(c*)/median({Wmed(c1),…, Wmed(cN)}) < 0.77, or as TP otherwise. This filter targets SCS and chromosome fragments.\n\niv. Max width filter: Wmax(c) denotes the maximum value of the width profile of chromosome c (Figure 2C). c* was classified as FP if Wmax(c*)/median({Wmax(c1),…, Wmax(cN)}) < 0.83, or as TP otherwise. This filter targets SCS and chromosome fragments.\n\nv. Centromere width filter: Wcent(c) denotes the width of chromosome c at the position of the top-ranked centromere candidate (Figure 2C). c* was classified as FP if Wcent(c*)/median({Wcent(c1),…,Wcent(cN)}) < 0.72, or as TP otherwise. This filter targets SCS and chromosome fragments.\n\nvi. Oblongness filter: S(c) denotes the pair of side lengths of the minimum bounding rectangle enclosing the contour of chromosome c (Figure 2D). c* was classified as FP if 1 − min(S(c*))/max(S(c*)) < 0.28, or as TP otherwise. This filter targets acrocentric chromosomes with SCS and some cases of overlapping chromosomes.\n\nvii. Contour symmetry filter: L(c) denotes the pair of arc lengths of contour halves produced by partitioning the contour of chromosome c at its centerline endpoints (Figure 2E). c* was classified as FP if min(L(c*))/max(L(c*)) < 0.51, or as TP otherwise. This filter targets SCS.\n\nviii. Intercandidate contour symmetry filter: LC(c) denotes the pair of arc lengths of the contour regions of chromosome c that run between the traceline endpoints of its top 2 centromere candidates (Figure 2F). c* was classified as FP if min(LC(c*))/max(LC(c*)) < 0.42, or as TP otherwise. This filter targets SCS and some instances of overlapping chromosomes.\n\nDC filters are defined in Methods. 
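The shared ratio-threshold pattern of filters i–viii can be sketched as follows. The thresholds (0.74, 0.28, 0.51) are taken from the text, but the function names and inputs are hypothetical illustrations, not ADCI's actual implementation:

```python
# Sketch of three of the FP filters: each compares a morphology score against
# a fixed threshold, and scores below the threshold mark the candidate as FP.
from statistics import median

def area_filter(candidate_area, all_areas, threshold=0.74):
    # Filter i: FP if the candidate's pixel area is small relative to the
    # median chromosome area in the image (targets SCS and fragments).
    return candidate_area / median(all_areas) < threshold

def oblongness_filter(side_lengths, threshold=0.28):
    # Filter vi: FP if the minimum bounding rectangle is insufficiently oblong.
    return 1 - min(side_lengths) / max(side_lengths) < threshold

def contour_symmetry_filter(half_arc_lengths, threshold=0.51):
    # Filter vii: FP if the two contour halves differ too much in arc length.
    return min(half_arc_lengths) / max(half_arc_lengths) < threshold

def is_false_positive(candidate_area, all_areas, sides, halves):
    # A predicted DC is reclassified as FP if any enabled filter fires.
    return (area_filter(candidate_area, all_areas)
            or oblongness_filter(sides)
            or contour_symmetry_filter(halves))
```

As in the text, a candidate DC whose scores clear every threshold keeps its DC call; any single filter firing reclassifies it.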
(A) A processed FP (chromosome with SCS), with contour in green, centerline in red, top-ranked centromere candidate and its width traceline in yellow, 2nd-ranked centromere candidate and its width traceline in cyan. (B) Filter i: Thresholded binary image of the chromosome is used to calculate pixel area (in white). (C) Filters ii–v: Width profile along centerline is shown in red (horizontal axis plots centerline location, vertical axis plots width), with mean width in green (filter ii), median width in blue (filter iii), max width in magenta (filter iv), and width of top centromere candidate in yellow (filter v). (D) Filter vi: Contour in blue and its minimum bounding rectangle in magenta and green. (E) Filter vii: Partitioning of contour at centerline endpoints (intersection of red line with contour) into two segments, green and blue. (F) Filter viii: Traceline endpoints of top 2 centromere candidates (intersection of yellow and cyan lines with contour) are used to partition contour into 4 segments (1 blue, 1 green, 2 magenta); relative arc lengths of blue and green segments are taken into consideration.\n\nDetermination of optimal filter subset: The same chromosome segmentation features were present in different segmentation filters, usually in combination with other elements (i.e. width for filters ii–v, contour symmetry for vi–viii) and/or targeted the same morphological subclass (notably, SCS). Thus, the “optimal” filter subset (termed “FP filters”) was defined as the subset of filters that reclassified the maximum number of FPs while minimizing redundant detection of the same FPs. The performance for a given set of filters was the cumulative percentage of FPs removed by any of its filters, based on the HC-mixed set of images (Table 1). 
Using a forward selection approach, individual filters were added iteratively to identify those that produced the largest improvement in performance.\n\nModifications to ADCI: After chromosome processing and MC-DC SVM classification11 but prior to dose determination, all DC chromosomes inferred by ADCI were analyzed with the FP filters. DCs classified as FPs by any of the filters were reclassified and the remaining TP DCs were used for dose determination. The contours of DCs that were reclassified as MCs are outlined in yellow in the ADCI metaphase image viewer11 (Figure 3; top centre).\n\nGraphical User Interface for viewing cell images within a sample processed by ADCI11. Valid segmented objects (generally chromosomes, but occasionally nuclei or debris) are shown with coloured contours. Red contours indicate predicted DCs, yellow contours indicate chromosomes that were initially classified as DC and then reclassified by the FP filters (example at 12 o’clock), green contours indicate predicted MCs, and blue contours indicate objects that could not be further processed after image segmentation. Below the cell image, options were added to allow manual inclusion or exclusion of images within a sample from dose determination.\n\nIn ADCI, a pre-computed dose-response calibration curve is also used to estimate radiation absorbed in samples with unknown whole body exposures11. For a given sample, the radiation response is the ratio of the number of DCs detected to the number of selected metaphase cells. Calibration curves can be generated either from a set of samples of known exposures either by determining the response for each sample automatically with ADCI, or by entering the corresponding response from manually scored samples, and fitting the dose-response paired data to a linear-quadratic curve by regression. 
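A minimal sketch of that calibration-and-estimation step, assuming a plain NumPy polynomial fit stands in for ADCI's regression. The dose and response values below are synthetic, not data from the paper:

```python
# Fit DCs/cell vs dose (Gy) to a linear-quadratic curve Y = c + a*D + b*D^2,
# then invert the curve to estimate the dose of a test sample.
import numpy as np

doses = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 4.0])       # calibration doses (Gy)
responses = 0.001 + 0.02 * doses + 0.025 * doses ** 2  # synthetic DCs/cell

b, a, c = np.polyfit(doses, responses, 2)              # coefficients of D^2, D, 1

def estimate_dose(y):
    # Solve c + a*D + b*D^2 = y and keep the non-negative real root.
    roots = np.roots([b, a, c - y])
    real = roots[np.abs(roots.imag) < 1e-9].real
    return float(max(real[real > -1e-9].min(), 0.0))
```

The quadratic has two roots; only the non-negative one is physically meaningful as a dose, which is why the negative root is discarded.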
Because sample preparation protocols can vary and affect results, dose estimation of test samples (of unknown exposures) was performed with calibration curves generated with data from the same laboratory11.\n\nThe impact of FP-removing segmentation filters on calibration curves was determined for the 0, 0.5, 1, 2, 3 and 4 Gy calibration samples. Radiation doses were estimated for CNL and HC test samples using the HC calibration curve after applying the same FP filters (Table 6).\n\nWe compared HC calibration curves derived from manually curated samples with the FP filters either enabled or disabled to assess the impact of image selection on dose accuracy (Table 2). The criteria for manually curated HC samples were similar to the manual image selection performed by CNL. These images required: A) a complete complement of approximately 46 chromosomes, >40 segmented objects, <5 segmented objects from different nuclei if multiple nuclei present; B) exclusion of metaphase cells with “harlequin” hemi-stained chromosomes (indicative of multiple rounds of division after radiation exposure) that distort true DC frequencies10; C) images with <5 incorrectly-segmented chromosomes (chromosome overlaps indicating poor spreading), fragments (indicating sister chromatid separation) and overly-noisy contours (indicating poor image contrast); and D) an adequate degree of chromosome condensation. Depending on the stage of metaphase arrest, the degree of chromosome condensation can differ1,13. Prometaphase cells have longer chromosomes, are less rigid, exhibit greater overlap and less well-defined centromere constrictions, all of which pose significant challenges for automated chromosome classifiers1,14. Metaphase images with longer, thinner chromosomes (roughly corresponding to >550-band level14) were also excluded.\n\nA minimum sample size of 500 cells per dose was adopted from IAEA recommendations12. 
Cell images selected from HC samples with automatic morphology filtering (see Methods section #5) were compared with a high quality set of images that were manually identified using the ADCI microscope viewer. For each sample, consecutive images meeting all criteria were evaluated manually until a sufficient number of cells were accrued. DC classifications were hidden during image selection to minimize bias. After generation of the curated HC calibration curves, the radiation doses of the three HC test samples (Table 3) were re-estimated on the new curves, with and without the FP filters enabled.\n\nManual selection of images assures consistency and reliability of metaphase data, which increases accuracy in DC analysis. Exclusion of lower quality images was automated in ADCI, since it was expected to reduce the number of FP DCs, thereby more accurately estimating radiation exposures.\n\nWe derived a set of image selection filters, implemented as available Python scripts, based on segmentation features (I-VI, below) that eliminate metaphase cells in a sample with characteristics that increase the number of FPs:\n\nI. Length-width ratio filter (LW) is based on the average length-width ratio of all chromosomes in an image. For a given chromosome c in a given image I containing N chromosomes, L(c,I) denotes the arc length of the centerline of c, Wmean(c,I) denotes the mean value of the width profile of c, SD denotes the standard deviation operator, and T denotes the threshold multiple of SD, common to all of these filters, that distinguishes acceptable from outlier images. MW(I) is defined as mean{L(c1,I)/Wmean(c1,I),…,L(cN,I)/Wmean(cN,I)}. I* is removed if MW(I*) > mean{MW(I1),…,MW(IM)} + T×SD{MW(I1),…,MW(IM)}.\n\nII. Centromere candidate density filter (CD) counts occurrences of centromere candidates in chromosomes and eliminates images containing chromosomes with a high density of candidate centromeres. 
For a given chromosome c in image I containing N chromosomes, L(c,I) denotes the arc length of the centerline of c, and Ncent(c,I) denotes the number of centromere candidates along c. CD(I) is defined as the mean{Ncent(c1,I)/L(c1,I),…,Ncent(cN,I)/L(cN,I)}. I* is removed if CD(I*) > mean{CD(I1),…,CD(IM)} + T×SD{CD(I1),…,CD(IM)}.\n\nIII. Contour finite difference filter (FD) represents the smoothness of contours of segmented objects in an image. It eliminates images with non-chromosomal objects with smooth contours, such as nuclei or micronuclei. For a given chromosome c in a given image I containing N chromosomes, WPD(c,I) denotes the set of first differences of the normalized width profile of c (range normalized to interval [0,1]). WD(I) is defined as the mean{mean{abs{WPD(c1,I)}},…,mean{abs{WPD(cN,I)}}}. I* is removed if WD(I*) < mean{WD(I1),…,WD(IM)} – T×SD{WD(I1),…,WD(IM)}.\n\nIV. Total object count (ObjCount) filter is based on the number of all objects detected in an image. Values lying outside of a threshold range are rejected to eliminate images with multiple metaphases or excessive cellular debris. Based on empirical analyses, the suggested object count range falls within the interval [40, 60].\n\nV. Segmented object count (SegObjCount) filter is based on the number of objects processed by the gradient vector flow7 (GVF) algorithm in an image. It is applied in the same way as filter IV. The suggested range for the object count interval is [35, 50].\n\nVI. Classified object ratio (ClassifiedRatio) filter is derived from the ratio of objects recognized as chromosomes to the total number of segmented objects. It excludes images in which ADCI fails to process the majority of chromosomes. 
An image is removed if the ratio is less than either 0.6 or 0.7, which is determined by the desired level of stringency for this filter.\n\nFilters I and II detect cells in prometaphase (having relatively long and thin chromosomes), with prominent sister chromatid separation, and with highly bent and twisted chromosomes. Filter III detects overly-smooth contours characteristic of images containing intact nuclei and otherwise incomplete chromosome sets. The total object count (IV) and segmented object count (V) filters enrich for nearly normal metaphase images of approximately 46 chromosomes. These filters are then used to exclude images with extreme object counts. Filter VI selects images based on effectiveness of chromosome recognition by ADCI.\n\nImage level filters I-III are based on the z-scores of different properties and take all objects in an image into account. For metaphase image I* in a sample containing M images, {I1,...,IM}, {c1,…,cN} denotes the set of N chromosomes within image I*. The threshold T was determined to be 1.5 by varying its value and applying these filters to the HC 2Gy calibration sample (Table 2). The corresponding thresholds for filters IV-VI were also derived from testing multiple samples.\n\nImage ranking by combining image selection filters: Applying these filters sequentially to the same image distinguished the metaphase images used for dose estimation from lower quality images. Features consisting of counts, ratios and Z-scores for image filters I-VI were linearly combined to globally assess image quality. The combined score is one representation of the degree to which a particular image deviates from the population in a sample:\n\nCombined Z Score = w(LW)×z(LW) + w(CD)×z(CD) − w(FD)×z(FD) + w(ObjCount)×|z(ObjCount)| + w(SegObjCount)×|z(SegObjCount)| − w(ClassifiedRatio)×z(ClassifiedRatio)\n\nSmaller Combined Z Scores represent higher quality images. 
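The combined score can be written out directly. In this sketch the weights default to 1.0, whereas the text derives them by grid search over calibration samples; the helper name and dict-based interface are illustrative, not ADCI code:

```python
# Linear combination of the six image-filter terms; sign conventions follow
# the Combined Z Score formula in the text (FD and ClassifiedRatio enter
# negatively, the two object-count terms enter as absolute z-scores).
def combined_z_score(z, w=None):
    w = w or {name: 1.0 for name in z}
    return (w['LW'] * z['LW']
            + w['CD'] * z['CD']
            - w['FD'] * z['FD']
            + w['ObjCount'] * abs(z['ObjCount'])
            + w['SegObjCount'] * abs(z['SegObjCount'])
            - w['ClassifiedRatio'] * z['ClassifiedRatio'])
```

Smaller scores rank an image higher within its own sample; as the text notes, scores are z-normalized per sample and so are not comparable across samples.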
Longer and thinner chromosomes in the image will increase the LW score, whereas bent and twisted chromosomes increase the CD term. Decreased chromosome concavity results in a higher FD score. The object and segmented object counts and their respective Z scores are related to chromosome distribution, and the level of sister chromatid separation in an image. These terms contribute to higher Combined Z Scores for images exhibiting either incomplete cells, multiple cells or severe sister chromatid separation. The Classified Ratio terms produce high scores for images that the algorithm does not process accurately. Each feature has a positive free parameter, weight, to adjust its contribution to the total score. Weights are determined by evaluating many possible weights using a grid search technique, and selecting those that minimize the error in curve calibration. The optimal weights for calibration samples are expected to perform similarly on test samples exposed to unknown radiation levels, assuming that the calibration and test samples have comparable chromosome morphologies. The Combined Z Score, however, cannot be used to compare the overall qualities of different samples, as Z-scores are normalized within each sample.\n\nImage comparisons based on chromosome length distributions: The previously described tests use image morphology as the primary consideration in assessing metaphase image quality. The most common problems in lower quality metaphase cells are severe sister chromatid separation, excessive chromosome overlap, fragments of chromosomes in image segmentation, and multiple cells or incomplete cells in the same image. These often result in changes in either the number or the sizes of segmented objects. 
These tests do not account for the known relationships between the chromosomes in a cell with a nearly normal karyotype.\n\nWe derived a novel quality measure based on the observation that lengths and areas of chromosome images (in pixels) are approximately proportional to the well-known base-pair counts for each human chromosome. By comparing the distribution of observed chromosome object lengths with this “gold standard” inferred from the lengths of chromosomes in the reference human genome sequence, the overall quality of chromosome segmentation can be assessed in each cell image. Excluding chromosome abnormalities, which result from radiation exposure and are randomly distributed among cells, individual chromosome lengths are approximated by their corresponding chromosome areas (in pixels), since the actual chromosome lengths are difficult to measure accurately. Once noisy non-chromosomal objects, nuclei and large overlapped chromosome clusters have been removed, the areas of each remaining object are then determined relative to the total area of all chromosomes. The chromosomes in a metaphase cell are binned into three groups corresponding to the ISCN cytogenetic classification system16: The (AB) set comprises the A and B chromosome groups, (C) contains all of group C, and (DG) includes the D, E, F, and G groups. A single chromosome in group AB contains > 2.9% of nucleotides in the complete genome (determined by the shortest B group chromosome). A chromosome in category C has < 2.9% (determined by the longest C group chromosome), but > 2% (determined by the shortest C group chromosome) of nucleotides in the complete genome. Any chromosome in category DG contains < 2% of the complete genome (determined by the longest D group chromosome). These thresholds, 2.9% and 2% of the genome length, are respectively considered to be the maximum lengths of X and Y chromosomes. 
These thresholds are then applied to the areas of each chromosome object to count the number of chromosomes in each category in a metaphase image. An ideal metaphase image will have 10 AB chromosomes, 16 C chromosomes and 20 DG chromosomes in a female, and 10 AB chromosomes, 15 C chromosomes and 21 DG chromosomes in a male. We find that images with many overlapping chromosomes will have increased AB chromosome counts, while images with excessive sister chromatid separation generally have elevated DG chromosome counts. The quality of a metaphase image is determined by comparing the observed quantities of chromosomes in each group to the female or male standard. In practice, the result for an image is treated as a 3-element vector (AB, C, DG) and the Euclidean distance between the observed vector and the ideal standard is determined. Larger group bin distances correspond to less satisfactory images. This measure appears to be universally applicable to metaphase images from different samples.\n\nSorting all images in a sample by either their Combined Z Score or by chromosome area Group Bin distance ranks cells according to metaphase quality for subsequent DC analyses. Image selection models can also be created in multiple stages by first qualifying images with chromosome morphology filters and then by selecting the top scoring images according to their Combined Z Scores or Group Bin distances.\n\nCytogenetic artifacts, such as sister chromatid separation and chromosome fragmentation, interfere with correct identification of DCs, thereby compromising the reliability of dose estimates. This motivated the development of criteria to evaluate how well automated cell and FP curation improves sample quality. Samples exposed to low linear energy transfer, whole-body irradiation exhibit DC distributions that follow a Poisson distribution17 in all cells. The number of DC occurrences per cell thus forms the basis of a probability model for each sample. 
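The group binning and Euclidean comparison to the ideal karyotype vector described above can be sketched as follows. The thresholds (2.9% and 2% of total area) and the ideal counts come from the text; the function name and input layout are assumptions:

```python
import math

# Ideal (AB, C, DG) counts per sex, from the text.
IDEAL = {"female": (10, 16, 20), "male": (10, 15, 21)}

def group_bin_distance(object_areas, sex="female"):
    """Bin segmented chromosome objects into the AB, C and DG groups by
    their fraction of total chromosome area, then return the Euclidean
    distance between the observed counts and the ideal karyotype.
    Larger distances indicate less satisfactory images."""
    total = sum(object_areas)
    counts = [0, 0, 0]  # AB, C, DG
    for area in object_areas:
        frac = area / total
        if frac > 0.029:      # > 2.9% of the genome -> A/B group
            counts[0] += 1
        elif frac > 0.02:     # 2%-2.9% -> C group
            counts[1] += 1
        else:                 # < 2% -> D/E/F/G group
            counts[2] += 1
    return math.dist(counts, IDEAL[sex])
```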
Each DC is assumed to be independent of other DCs in the first cell division and the rate at which DCs occur is constant for a single sample at a given radiation dose. The DC distribution detected either manually or by ADCI can be approximated by the Poisson statistic, with the λ parameter corresponding to the average number of DCs per cell in a sample.\n\nDeviation from the Poisson distribution can occur when either some TPs are not accounted for or when FP DCs have not been reclassified. We evaluated post-processing sample quality by comparing the observed distribution of DCs in each sample (manual and automated) to its corresponding Poisson distribution. Observed and Poisson DC distributions were analyzed with the Pearson chi-squared goodness-of-fit test, which indicates the likelihood of rejecting the null hypothesis that the DCs were Poisson distributed. Only samples with ≥ 1 DC were analyzed. Very low p-values at or below α = 0.005 (99.5% confidence level) reject the null hypothesis and indicate lower quality samples.\n\n\nResults\n\nFalse positive DCs (n=98) from a set of metaphase cells exposed to low dose radiation were classified into morphological subclasses to identify and ultimately eliminate these objects (described in Supplementary File 1). FP subclasses (Figure S1; subclasses A–F) included those exhibiting high levels of sister chromatid separation (A, n=51), chromosome fragmentation (B, n=10), overlap (C, n=17), noisy contour (D, n=5), cellular debris (E, n=4), as well as inaccurate recognition by either the centromere candidate10 or MC/DC6 machine learning algorithms (F, n=11).\n\nSegmentation filters i–viii were applied to reclassify FPs in these images. Scale-invariant filters were tested to determine thresholds that selectively removed subclasses I-III without eliminating any TPs. Of the 51 SCS cases, 35 involved short, acrocentric chromosomes. 
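The Poisson goodness-of-fit comparison described in the Methods above can be sketched with the standard library alone. The binning (one bin per DC count, without merging) and the function name are simplifying assumptions:

```python
import math

def poisson_chi2_stat(dc_counts_per_cell):
    """Pearson chi-squared statistic comparing the observed DCs-per-cell
    distribution to a Poisson with lambda = mean DCs/cell.
    Returns (statistic, degrees_of_freedom); the p-value would come from
    the chi-square survival function with these degrees of freedom."""
    n = len(dc_counts_per_cell)
    lam = sum(dc_counts_per_cell) / n
    max_k = max(dc_counts_per_cell)
    stat = 0.0
    for k in range(max_k + 1):
        observed = dc_counts_per_cell.count(k)
        expected = n * math.exp(-lam) * lam ** k / math.factorial(k)
        stat += (observed - expected) ** 2 / expected
    # (number of bins - 1), minus 1 for the estimated lambda parameter
    return stat, max_k - 1
```

Low p-values reject the null hypothesis that DCs are Poisson distributed and thus flag lower-quality samples, as described above.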
FPs were distinguished from TPs based on either their lower relative pixel area or width (filters i–v), substantially non-oblong footprint (filter vi), or substantial contour asymmetry across the centerline (filters vii and viii). For filters i-v, normalization to median scores of other objects in the same image was performed, as well as normalization to other measures of central tendency (e.g. z-score, mean, and mode after binning scores). FPs could be eliminated for each morphological subclass (Table S1), with most of the segmentation filters acting on their targeted subclass. However, the effects of each filter were not exclusive to those subclasses.\n\nTo evaluate individual filter performance, the percentage of FPs removed by each filter was calculated for the HC-mixed image set (Table 4). A two-sample Kolmogorov–Smirnov test (K–S) was also performed for each filter (α=0.05) on the same data, where one group consisted of the filter scores of all TPs (n=183) and the other group consisted of the scores of all FPs (n=158). All 8 filters rejected the null hypothesis (Table 4), suggesting that these groups are distinguishable by thresholded segmentation filters. Applying the intercandidate contour symmetry filter (filter viii) achieved the largest overall reduction of FPs (44.9%), and eliminated the most SCS-induced FPs (43 of 51) in the low dose exposure set of metaphase images (Table S1). The max width filter (filter iv) yielded the next largest reduction in FPs (27.8%) and was the most efficient filter for detecting the fragmented chromosome class of FPs (8 of 10).\n\n*Calculated from HC-mixed image set from Table 1.\n\n**See Methods section 2 for description of each filter.\n\nFPs were eliminated cumulatively by combining multiple segmentation filters. 
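The two-sample Kolmogorov–Smirnov comparison of TP and FP filter-score distributions reported above reduces to the maximum vertical distance between two empirical CDFs. A stdlib-only sketch (the significance decision itself would use the usual asymptotic critical values, omitted here):

```python
def ks_statistic(sample_a, sample_b):
    """Two-sample K-S statistic: the largest absolute difference
    between the empirical CDFs of the two score distributions."""
    a, b = sorted(sample_a), sorted(sample_b)
    d = 0.0
    for x in sorted(set(a) | set(b)):
        cdf_a = sum(v <= x for v in a) / len(a)
        cdf_b = sum(v <= x for v in b) / len(b)
        d = max(d, abs(cdf_a - cdf_b))
    return d
```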
Since individual filters were separately thresholded to avoid removal of TPs, the inclusive disjunction (logical “or” operation) of multiple filters produced a stronger FP discriminator, but was not expected to reduce the TP count. Different combinations of filters were tested using forward selection (Table S2). The best performing filter set removed 58.9% of FPs and consisted of 5 FP filters (i + iv + v + vi + viii). Of these, iv and viii accounted for 54.4% of the FPs, with the others identifying the remaining FPs. Performance was evaluated with independent sets of metaphase images (Table 5), consisting of two HC image sets at low and high dose exposures (HC-low and HC-high) and one CNL image set exposed to low dose radiation (CNL-low). On average, 55 ± 9.6% of FPs were removed among all sets; individually, the filters eliminated 52% of FPs from CNL-low, 66% from HC-low, and 48% from HC-high. All TPs were retained in each of the sets after FP filtering (i.e. 100% specificity).\n\n*FP filters refer to the subset of filters i + iv + v + vi + viii (Methods section 2).\n\n**See Table 1 for sample details.\n\n*FP filters refer to the subset of filters i + iv + v + vi + viii (Methods section 2). Calibration curve image data were not curated or filtered. HC samples were unselected (INTC03S01: 540 images, INTC03S08: 637 images, and INTC03S10: 708 images) and the CNL samples were previously manually selected (INTC03S04: 448 images; INTC03S05: 500 images; INTC03S07: 385 images).\n\n**See Table 3 for sample details.\n\nDose calibration curves for HC and CNL data were generated in ADCI to investigate the impact of the FP filters on dose estimation accuracy (Figure 4). Dose estimation errors, the absolute difference between the ADCI dose estimate and the known physical dose, were determined for three CNL and three HC test samples; results for uncorrected vs. FP-filtered images were then compared (Table 6). 
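The inclusive disjunction of thresholded filters described above amounts to flagging a DC candidate when any individual filter flags it. A sketch with hypothetical predicate functions standing in for the real filters (the feature names and thresholds below are invented for illustration):

```python
def is_false_positive(candidate, filters):
    """A DC candidate is reclassified as a FP if ANY thresholded
    segmentation filter flags it (logical "or" of the filters)."""
    return any(f(candidate) for f in filters)

# Hypothetical stand-ins for two filters (e.g. a relative-width filter
# like iv and a relative-area filter like i); thresholds are invented.
example_filters = [
    lambda c: c["rel_width"] < 0.5,  # unusually thin object
    lambda c: c["rel_area"] < 0.3,   # unusually small object
]
```

Because each filter is thresholded so that it (ideally) never fires on a TP, OR-ing the filters increases FP recall without reducing the TP count.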
In manually curated samples from CNL, accuracy was also improved >2-fold by applying the FP filters (average error decreased from 0.43 Gy to 0.18 Gy).\n\nThe dose-response calibration curves for (A) HC and (B) CNL metaphase cell image sample data. Response (mean DC frequency/cell) is on the vertical axis, corresponding radiation dose (Gy) on the horizontal axis. Green curves are based on unfiltered images, cyan curves were derived by recomputing DC frequencies after applying false positive (FP) filters (filters i + iv + v + vi + viii). HC and CNL curves were constructed by fitting a linear-quadratic curve through their respective HC and CNL calibration samples (refer to Table 2). The CNL curves consistently showed a more pronounced quadratic component than the HC curves, which exhibited a nearly linear response. After application of the filters, the HC and CNL curves showed diminished response at different Gy levels, due to elimination of some FP DCs.\n\nSurprisingly, the dose accuracy of the HC samples did not improve after application of the FP filters (mean absolute error increased from 0.85 Gy to 1.03 Gy). All objects eliminated with these filters in the three HC test samples were reviewed and manually classified as either TP or FP, and the FP specificity across the samples was determined (Table 7), where FP specificity was defined as the ratio of FPs to all filtered objects. Similar to our earlier findings, the FP filters exhibited very high specificity for FPs (97.7–100%), indicating that the filters rarely removed TPs in the HC samples.\n\nTP, true positive\n\n*FP filters refer to the subset of filters i + iv + v + vi + viii (Methods #2).\n\n**See Table 3 for sample details.\n\nWe hypothesized that a difference in image selection protocols between the two laboratories was responsible for the discrepancies seen in classification performance and dose estimation accuracy. 
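Given a fitted linear-quadratic calibration curve Y = c + αD + βD², the dose estimate for an observed mean DC frequency is obtained by inverting the quadratic, and the dose estimation error is the absolute difference from the known physical dose. A sketch under that assumption (the coefficient values in the test are invented, not the fitted HC or CNL coefficients):

```python
import math

def estimate_dose(dc_frequency, c, alpha, beta):
    """Invert Y = c + alpha*D + beta*D**2 for the dose D, taking the
    non-negative root of the quadratic."""
    if beta == 0:
        return (dc_frequency - c) / alpha
    disc = alpha ** 2 + 4 * beta * (dc_frequency - c)
    return (-alpha + math.sqrt(disc)) / (2 * beta)

def dose_error(estimated, physical):
    """Absolute dose estimation error, as used to evaluate accuracy."""
    return abs(estimated - physical)
```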
CNL manually selected images deemed suitable for DC analysis, whereas HC image selection was done with an automated metaphase classifier that effectively eliminates only images lacking metaphase cells. Manual review of images in these HC and CNL samples suggested differences in input image quality due to these image selection protocols. In concordance with findings from our previous study1, CNL data contained more images with well-spread, minimally-overlapping chromosomes, and fewer images with extreme SCS and chromosome fragments. The HC data contained a greater percentage of high-band-level (less condensed) chromosomes, characteristic of prometaphase/early-metaphase cell images. These chromosomes were the source of many unfiltered FPs, due to the lack of a strong primary constriction at the centromere, which affects automated chromosome classification15.\n\nA new set of HC calibration curves was then generated from manually curated, selected images from calibration samples (Figure 5). Images were excluded based on IAEA criteria17, along with cells exhibiting long chromosomes in early prometaphase16. Dose estimation accuracy of the HC test samples was significantly improved by enabling the FP segmentation filters (mean absolute error on unfiltered, curated images was 0.37 Gy prior to and 0.15 Gy after filtering; Table 8). Application of FP filters to both CNL and curated HC data led to a >2-fold reduction in the mean absolute error of the estimated dose (p = 0.024, paired two-tailed t-test). These results motivated the development of approaches to automatically select higher quality metaphase cell images.\n\nThe dose-response calibration curves for HC sample data, with and without false positive (FP) filters applied, before and after curation. Response (mean DC frequency/cell) on vertical axis, corresponding radiation dose (Gy) on horizontal axis. 
The green curve is uncurated and includes all images; the cyan curve is uncurated with FP DC filters applied; the red curve is curated but unfiltered; and the dark blue curve is curated with FP filters applied. Uncurated curves were generated from 0, 0.5, 1, 2, 3 and 4 Gy calibration image data (Table 2). Curated curves were generated from the same data (except 0.5 Gy, which was excluded) after lower quality images were manually removed (Methods section 4). After manual curation, the HC curves show a stronger quadratic component, similar to the original manually curated CNL curves (Figure 4).\n\n*Sample identifier, physical dose (Gy). False positive filters were enabled. Out of bounds indicates that the estimated dose exceeded the maximum calibrated dose.\n\n^Figure 6 indicates the effect of these image selection models on DC frequency for INTC03S04, INTC03S05, INTC03S07 as a function of the number of images analyzed.\n\nSamples exposed to different radiation levels and generated by each laboratory can be toggled and compared using the drop down menu (top left). The static image in the portable document format displays this relationship for the HC sample exposed at 3 Gy. Images were ranked by different scoring methods (see key). DC frequencies based on unordered, unselected images (order corresponds to the alphabetized file names, which is random with respect to image quality) are indicated with a blue line, images ranked by Group Bin Distance are shown in orange, and those ranked according to Combined Z Score are shown in green. The lowest count numbers in the ranked images correspond to the highest quality, and lower quality images are progressively introduced as the count increases. Graphs were generated with Plotly (https://plot.ly/).\n\nTo the best of our knowledge, assessment of metaphase cell image quality for DC analysis has not been objectively and quantitatively standardized between laboratories. 
Cell selection by cytogenetic experts is based on their knowledge of metaphase chromosome conformation, sensitivity, and even individual preferences in interpreting images, which can sometimes be inconsistent. Therefore, image selection methods were evaluated through dose estimation of filtered test samples and comparisons with known physical exposures. Images in all calibration and test samples from the same laboratory were processed by the same image selection model. Dose estimates of test samples were calculated using a curve fit to the dose-response of calibration samples. Dose estimation errors indicate the accuracy of dicentric chromosome detection, and therefore provide a means of assessing the accuracy of the image selection model used.\n\nEach image in a sample was ranked based on its Combined Z Score, which is the sum of the products of the Z score for each of the filters (I – VI) and their corresponding weights. Weights were assigned integer values from 1 to 5. The optimal weights were obtained by searching all possible integer combinations and selecting those with the smallest residual differences between the curve-fitted dose estimates of the calibration samples and their physical doses. This approach, while limiting the search space and reducing the computational complexity, ensured that diverse combinations of weights were used to evaluate each sample. The three optimal weight vectors resulting from this analysis, [5, 2, 4, 3, 4, 1], [4, 3, 4, 5, 2, 1], and [1, 2, 1, 5, 1, 5], were used to independently estimate doses of test samples of unknown exposure.\n\nAfter images from a sample were assigned either Combined Z Scores or Group Bin Scores and sorted by rank, the 250 top ranked images were selected to determine dicentric aberration frequency for that sample. An adequate number of top ranked images must be selected to generate a reproducible DC frequency for that sample. 
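The exhaustive search over integer weight vectors described above (weights 1–5 for each of the six image-level filters, i.e. 5^6 = 15,625 combinations) can be sketched as follows; the error callback is a hypothetical stand-in for fitting the calibration samples and summing residuals against physical dose:

```python
from itertools import product

def grid_search_weights(calibration_error, n_filters=6):
    """Try every integer weight vector in {1..5}^n_filters and keep the
    one minimizing the calibration residual returned by the callback."""
    best_w, best_err = None, float("inf")
    for w in product(range(1, 6), repeat=n_filters):
        err = calibration_error(w)
        if err < best_err:
            best_w, best_err = list(w), err
    return best_w, best_err
```

Retaining several near-optimal vectors, as was done above, is a trivial extension: keep a sorted list of the best few instead of a single minimizer.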
In the absence of a predicate filtering step, the ranking procedure has to effectively remove poor quality images that could distort the DC frequency. The IAEA recommends that >100 DCs be counted for samples with physical doses < 1 Gy17. In practice, laboratories usually score >250 images, but at least 500 cells may be required to achieve this level of DC detection, and often more are required for samples with low radiation exposures. Selecting at least the 250–300 top scoring images resulted in stable dicentric frequencies for samples from both laboratories over a range of exposures (Figure 6: the interactive version allows viewing of individual calibration samples from 0 to 4 Gy exposure and three blinded samples from both the CNL and HC laboratories; the HC3Gy sample is shown in the static PDF version). Compared to the unselected, unordered images, the image selection models show a monotonic increase of DC frequency with radiation dose for image counts with stable frequencies for most samples (e.g. HC2Gy, HC3Gy, HC-INTC03S10, CNL2Gy, CNL-INTC03S05, CNL-INTC03S07). However, DC frequencies can differ by image selection method. For higher ranking images, the Combined Z Score more consistently eliminated cells with DCs than the Group Bin Distance scoring method, resulting in slightly lower overall DC frequencies, which may be due to more stringent selection of cells possessing fewer FPs. Dose responses for the image selection methods are generally lower for samples with large numbers of top ranked, high quality images, and gradually increase with lower image quality due to the presence of increasing numbers of unfiltered FP DCs. By contrast, unfiltered randomly sampled images from the same sample exhibit higher overall DC frequencies due to increased numbers of FP DCs. 
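Computing a sample's DC frequency from its top-ranked images, as described above, is a sort-and-truncate operation. A sketch assuming each image is a (score, dc_count) pair, with lower scores indicating better quality (both the Combined Z Score and the Group Bin distance are "lower is better"):

```python
def dc_frequency_top_images(images, n_top=250):
    """Rank images by quality score (ascending), keep the n_top best,
    and return the mean number of DCs per selected cell image."""
    ranked = sorted(images, key=lambda im: im[0])[:n_top]
    return sum(dc for _, dc in ranked) / len(ranked)
```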
As expected, all of the DC frequencies converge to the same value when none of the images are excluded by the ranking methods.\n\nDeviations of the estimated doses of the HC and CNL test samples from their corresponding physical exposures were determined for various image selection models (Table 8 and Table 9, respectively). For comparison, the dose estimation results of unselected, comprehensive sets of images for each sample are also shown. Deviations of ≤ 0.5 Gy from their calibrated physical dose are considered acceptable in triage biodosimetry5,12,17. For the unfiltered HC samples, the average absolute error was 0.8 Gy, and only a single sample, INTC03S01, fulfilled the triage criteria. The image selection model comprising filters I-III sorted by Chromosome Group Bin rank was the most accurate, with dose estimates for 4 HC samples (INTC03S01, INTC03S08, INTC03S10 and INTC03S05) exhibiting acceptable error tolerances (± 0.5 Gy). The Combined Z Score ranking with weights: [1, 2, 1, 5, 1, 5] had the lowest dose estimation accuracy for the HC samples (average error is ~1 Gy), with only INTC03S05 having an acceptable dose estimate. Of the 5 unfiltered, manually curated CNL samples, only INTC03S08 had an acceptable dose estimate. However, an image selection model consisting of all Z Score filters I-VI was the most accurate for CNL samples (mean absolute error of ~0.3 Gy), with 4 of 5 samples (INTC03S08, INTC03S04, INTC03S05 and INTC03S07) having acceptable estimated doses.\n\n*Sample identifier, physical dose (Gy). False positive filters were enabled. 
Out of bounds indicates that the estimated dose exceeded the maximum calibrated dose.\n\n^Figure 6 indicates the effect of these image selection models on DC frequency for INTC03S01, INTC03S08, INTC03S10 as a function of the number of images analyzed.\n\nWhile automated image selection rejects poor images and reduces FP DCs, dose estimates can only be considered reliable if sufficient numbers of images remain after filtering. Application of image filters can result in fewer than the recommended number of images for accurate dose estimation. Samples CNL-INTC03S08 and HC-INTC03S07 had 195 and 109 metaphase cells, respectively, after filtering and image selection. HC-INTC03S07 was of relatively lower quality, and the unfiltered set of 477 metaphase images contained fewer than the recommended minimum number after filtering (Table 10).\n\n*Poisson score is the p-value of the chi-square goodness of fit (without merging bins) of the observed distribution of DCs/cell vs. the Poisson distribution determined from the average DC frequency. Filtering parameters chosen for each laboratory exhibit dose estimates that are closest to the physical dose:\n\n^HC image sets were selected with filters I-III and ranked by chromosome group bin score;\n\n+Minimum positive floating-point value in the Windows operating system;\n\n#CNL image sets were selected with filters I-VI.\n\nTo determine if image selection improved sample quality, a chi-squared goodness of fit test on Poisson-distributed DCs was performed, both before and after automated and manual image selection (Table 10). Manual image selection for CNL samples was performed by CNL during sample preparation, while image selection for HC samples was performed on unselected datasets (samples HC-INTC03S01, HC-INTC03S08, HC-INTC03S10 were analyzed, despite <500 available images). The optimal image selection models were used for FP and image filtering for each laboratory (Table 8 and Table 9). 
The HC samples were selected with filters I-III and the chromosome group bin method, whereas the CNL samples were processed with filters I-VI. At the 1% significance level (i.e. Chi-square goodness-of-fit, p ≤ 0.01), 86% (19 of 22) of unfiltered samples were significantly different from the Poisson distribution, and 76% (13 of 17) of manually- and 77% (17 of 22) of automatically-selected samples did not differ. Manually curated and uncurated sample groups also significantly differed from each other (p = 0.0021; one-tailed Wilcoxon Signed-Rank Test, α=0.05, n=17). Therefore, the Poisson goodness of fit measures improvements in overall sample quality from image model selection. While the overall goodness of fit is improved for all of the automatically selected datasets, the Poisson distributions of DCs in the lowest quality samples (CNL1Gy, CNL05Gy, CNL-INTC03S01, HC-INTC03S05, HC-INTC03S07) were still rejected at a 0.5% significance threshold after filtering.\n\n\nDiscussion\n\nAutomated biodosimetric methods to detect DCs can produce incorrect assignments because the algorithms cannot capture the full range of morphological variability inherent in chromosome images of metaphase cells. Accuracy of these radiation exposure estimates can be improved by morphology-based chromosome image segmentation filters that eliminate suboptimal metaphase cell images and false positive DCs in the remaining images. Compared to results generated by the previous version of ADCI which did not reclassify FPs or remove any cell images11, the filters described here reduced FP DC rates by ~55% across a wide range of radiation exposure levels. Additionally, we showed that the object segmentation filters were highly specific for FPs in test image sets consisting of irradiated samples blinded to known dose (97.7–100%, n=6). 
Overall, the FP filters substantially improved DC classification accuracy.\n\nThe segmentation filters successfully target the majority of cells with SCS and chromosome fragments. The intercandidate contour symmetry filter is a particularly promising SCS detector, individually eliminating 84% of all SCS-induced FPs in our test dataset. Acrocentric chromosomes were disproportionally susceptible to SCS-induced errors compared to other chromosome types (69% of SCS cases, despite making up only 22% of human chromosomes). Given the rarity of acrocentric TP DCs (due to width profile inaccuracies at the extreme ends of chromosomes7–9), filters targeting acrocentric or small chromosomes, in general (such as filters i and vi), can also be useful for reducing SCS-induced FPs.\n\nCertain FP subclasses were commonly targeted by multiple filters. Redundancy among the segmentation features resulted in only a subset of the filters being required to maximize FP elimination. Notably, filters ii–v eliminated FPs based on different definitions of chromosome width. The final FP filter combination consisted of only 5 of the 8 originally proposed filters. However, it should be noted that a combination of only 2 of the filters - the intercandidate contour symmetry (viii) and max width (iv) filters - achieved nearly the same level of FP detection in the test sample dataset, with the others having only incremental benefit.\n\nThe image selection filters were required to be scale-invariant, since chromosome structures may vary between cells, individual samples, and laboratory preparations. Scale invariance is also necessary to control for pixel-based chromosome measurements affected by chromosome condensation differences within a metaphase cell, and differences arising from optical magnification. This was achieved by either using image level filter scores normalized to the median “raw” score of all objects within the same cell image (i.e. 
filters I–V), or by determining scores from the ratios of pixel-based features (i.e. filters VI–VIII).\n\nDifferences in accuracy between the manually- and automatically-selected images for dose estimation revealed limitations of the current set of filters. The FP object filters in the manually curated CNL and HC image samples reduced the average dose estimation error from 0.4 Gy to <0.2 Gy (with a maximum error of 0.4 Gy). However, solely applying the FP object filters to unselected HC metaphase data was insufficient to correct this problem (average error increased by 0.15 Gy), and led to more inaccurate dose response values.\n\nVariability in cell image quality contributed to this source of error. Some unselected HC samples contained images with high levels of SCS, which, upon processing, produced large numbers of incorrectly classified chromosome fragments in some cells. While FP DC filters i–v target detection of these fragments, they were not reclassified in these cells, because they comprised the predominant chromosome morphology. For similar reasons, FP filtering was not suitable for removal of FPs in prometaphase cells containing many high resolution, long chromosomes (>700 band level). These observations suggested the need for another class of morphological filters that operate on complete images to remove those of low quality prior to dose estimation.\n\nImage quality is a critical aspect of accurate DC detection and dose estimation. Manual inspection and quality control of metaphase selection is a common and essential practice in cytogenetic and biodosimetry laboratories, but it can be labor-intensive and is frequently not automated. Image-level filtering automatically applies statistical thresholds to eliminate chromosomes with morphological features and non-chromosomal objects that predispose to FP DC assignments. 
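The scale-invariant, median-normalized scoring mentioned above can be illustrated in a few lines; dividing each object's raw pixel-based score by the within-image median makes the score insensitive to uniform changes in magnification or chromosome condensation:

```python
from statistics import median

def normalize_to_image_median(raw_scores):
    """Return each object's raw score divided by the median raw score
    of all objects in the same cell image."""
    m = median(raw_scores)
    return [score / m for score in raw_scores]
```

Scaling every raw score by a constant factor leaves the normalized scores unchanged, which is exactly the scale-invariance property required of the filters.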
Image scoring methods can also select a defined number of top-ranked, processed images for dose estimation. These FP filtering and image scoring methods can be applied either individually or in combination, resulting in improvements in the accuracy of DC frequency. Errors in dose estimates are considerably reduced using suitable image selection models in samples with ≥250 images. Doses were accurately estimated for most test samples within ±0.5Gy of their physical doses, as recommended17. Therefore, the image selection models presented provide reliable quality control, and can minimize manual review or DC analysis.\n\nAutomated image selection aims to simulate manual image curation. At this point, it does not quite achieve the same overall accuracy as manual image selection, especially for samples containing numerous images of lower quality. However, the respective differences in dose estimates of higher quality samples from HC and CNL, especially at exposures >2 Gy, are not significant. Automating image selection, nevertheless, offers unique advantages over manual image selection by introducing a uniform approach for chromosome analyses, ensuring both increased reliability and speed.\n\n\nData and software availability\n\nPython code and sample data files for “Accurate cytogenetic biodosimetry through automated dicentric chromosome curation and metaphase cell selection” are available at http://doi.org/10.5281/zenodo.83353618.\n\nMATLAB code and sample data files for “Accurate cytogenetic biodosimetry through automated dicentric chromosome curation and metaphase cell selection” are available at http://doi.org/10.5281/zenodo.83354019.\n\nSource code license: CC-BY 4.0",
"appendix": "Competing interests\n\n\n\nPKR and JHMK cofounded CytoGnomix Inc., which is commercializing ADCI. YL and BCS are employees of CytoGnomix. ADCI is copyrighted and protected by existing and pending patents (US Pat. No. 8,605,981, German Pat. No. 112011103687).\n\n\nGrant information\n\nThis study was supported by the Build in Canada Innovation Program (Contract No. EN579-172270/001/SC) and CytoGnomix Inc. Previous software versions and associated algorithms were supported by the Western Innovation Fund; the Natural Sciences and Engineering Research Council of Canada (NSERC Discovery Grant 371758-2009); US Public Health Service (DART-DOSE CMCR, 5U01AI091173-0); the Canadian Foundation for Innovation; Canada Research Chairs, and CytoGnomix Inc.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nSupplementary material\n\nSupplementary File 1: File containing Morphological characterization of FPs, Figure S1, and Table S1 and Table S2.\n\n\nReferences\n\nBlakely WF, Salter CA, Prasanna PG: Early-response biological dosimetry--recommended countermeasure enhancements for mass-casualty radiological incidents and terrorism. Health Phys. 2005; 89(5): 494–504. PubMed Abstract | Publisher Full Text\n\nWilkins RC, Romm H, Kao TC, et al.: Interlaboratory comparison of the dicentric chromosome assay for radiation biodosimetry in mass casualty events. Radiat Res. 2008; 169(5): 551–560. PubMed Abstract | Publisher Full Text\n\nZhou W, Bovik AC, Sheikh HR, et al.: Image quality assessment: from error visibility to structural similarity. IEEE Trans Image Process. 2004; 13(4): 600–612. PubMed Abstract | Publisher Full Text\n\nNill NB, Bouzas B: Objective image quality measure derived from digital image power spectra. Opt Eng. 1992; 31(4): 813–825. Publisher Full Text\n\nNarwaria M, Lin W: Objective image quality assessment based on support vector regression. 
IEEE Trans Neural Netw. 2010; 21(3): 515–519. PubMed Abstract | Publisher Full Text\n\nLi Y, Knoll JH, Wilkins RC, et al.: Automated discrimination of dicentric and monocentric chromosomes by machine learning-based image processing. Microsc Res Tech. 2016; 79(5): 393–402. PubMed Abstract | Publisher Full Text\n\nArachchige AS, Samarabandu J, Knoll J, et al.: An image processing algorithm for accurate extraction of the centerline from human metaphase chromosomes. In Image Processing (ICIP), 2010 17th IEEE International Conference on, 2010; 3613–3616. Publisher Full Text\n\nArachchige AS, Samarabandu J, Rogan PK, et al.: Intensity integrated Laplacian algorithm for human metaphase chromosome centromere detection. In Electrical & Computer Engineering (CCECE), 2012 25th IEEE Canadian Conference on. 2012; 1–4. Publisher Full Text\n\nArachchige AS, Samarabandu J, Knoll JH, et al.: Intensity integrated Laplacian-based thickness measurement for detecting human metaphase chromosome centromere location. IEEE Trans Biomed Eng. 2013; 60(7): 2005–2013. PubMed Abstract | Publisher Full Text\n\nSubasinghe A, Samarabandu J, Li Y, et al.: Centromere detection of human metaphase chromosome images using a candidate based method [version 1; referees: 2 approved with reservations]. F1000Res. 2016; 5(5): 1565. Publisher Full Text\n\nRogan PK, Li Y, Wilkins RC, et al.: Radiation Dose Estimation by Automated Cytogenetic Biodosimetry. Radiat Prot Dosimetry. 2016; 172(1–3): 207–217. PubMed Abstract | Publisher Full Text\n\nInternational Atomic Energy Agency: Cytogenetic Analysis for radiation dose assessment, a manual. Technical Reports Series. No. 405. IAEA, Vienna, 2001. Reference Source\n\nRieder CL, Palazzo RE: Colcemid and the mitotic cycle. J Cell Sci. 1992; 102(Pt 3): 387–392. PubMed Abstract\n\nSethakulvichai W, Manitpornsut S, Wiboonrat M, et al.: Estimation of band level resolutions of human chromosome images. 
In Computer Science and Software Engineering (JCSSE), 2012 International Joint Conference on. 2012; 276–282. Publisher Full Text\n\nCarothers A, Piper J: Computer-aided classification of human chromosomes: a review. Stat Comput. 1994; 4(3): 161–171. Publisher Full Text\n\nInternational Standing Committee on Human Cytogenetic Nomenclature, McGowan-Jordan J, Simons A, et al.: ISCN 2016: An International System for Human Cytogenomic Nomenclature (2016). Karger, 2016. Reference Source\n\nInternational Atomic Energy Agency: Cytogenetic Dosimetry: Applications in Preparedness for and Response to Radiation Emergencies. International Atomic Energy Agency, Vienna, 2011. Reference Source\n\nLi Y, Liu J: Image Selection Code for Automated Dicentric Chromosome Identification. Zenodo. 2017. Data Source\n\nLiu J, Li Y: FP Filter code for Automated Dicentric Chromosome Identification. Zenodo. 2017. Data Source"
}
|
[
{
"id": "24918",
"date": "25 Aug 2017",
"name": "David Lloyd",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis paper presents a considerable amount of work devoted to improving the accuracy of existing automatic dicentric systems for biological dosimetry. The authors have developed some very interesting ideas in constructing filters that detect false positive dicentrics and thereby improve the accuracy of ‘hands-off’ microscopy. The work has demonstrated that inter-laboratory differences, commonly reported in biological dosimetry, and evident between the two labs here, can be considerably improved by application of the filters. It is particularly interesting to see how the linear dose response reported by one lab, over a dose range that should show a lin/quadratic curve, can indeed be converted to linear quadratic by filtering out the false positives. Moreover they have demonstrated that much of the improvement can be achieved by a sub-set of their filters which should simplify future developments by not needing to employ all the methods.\nThe single most pressing remaining problem is the selection of metaphases of sufficient quality for passing onto the filtration procedures. Manual selection still seems better for removing the wide range of unsuitable material although the authors have demonstrated that the automated approach is getting there. 
The authors suggest that manual screening of candidate metaphases is labour intensive but most experienced cytogeneticists would probably consider that a list of, say, 1000 automatically presented images can be screened by eye in a few minutes. This is probably acceptable for much routine biological dosimetry but I agree that fully automated image selection would be particularly advantageous when rapid triage dosimetry sorting of many cases is needed.\nIt is gratifying to see that the accuracy of dose estimations using the procedures described here falls well within the requirements for triage sorting following a major radiological incident.\nI note that this system uses Giemsa stained material examined with bright-field microscopy. There is an alternative approach being employed in biological dosimetry which uses fluorescent probes to highlight centromeres. I wonder if the authors would like to comment on this and speculate on the extent to which their filtration ideas could be applied to this approach too.\nOverall they are to be congratulated on a well-presented account of an improved approach to automated ‘dicentric-hunting’.\n\nIs the rationale for developing the new method (or application) clearly explained? Yes\n\nIs the description of the method technically sound? Yes\n\nAre sufficient details provided to allow replication of the method development and its use by others? Yes\n\nIf any results are presented, are all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions about the method and its performance adequately supported by the findings presented in the article? Yes",
"responses": [
{
"c_id": "3091",
"date": "12 Oct 2017",
"name": "Peter Rogan",
"role": "Author Response",
"response": "Regarding the use of fluorescent probes to highlight centromeres:PNA-FISH or any type of FISH can achieve higher intensities and contrast at centromeres than Giemsa staining, and would presumably require fewer morphological segmentation features to distinguish these structures. In our early publications (Figure 5 of Subasinghe et al. 2010; Figure 7 of Subasinghe et al. 2013 [compare panels a and b with panels c – f]), we stained chromosomes with DAPI to digitally locate centromeres by epifluorescence. This type of staining also results in significant contrast between centromeric regions and non-centromeric regions. In contrast with FISH or PNA probes, DAPI is inexpensive, abundant, and available from many sources. It would be feasible to develop epifluorescent staining approaches that could be integrated into the software described in this study, were the market and the investment to justify it. There are additional steps in FISH laboratory procedures that are not required for Giemsa staining, which means that more time is required to obtain chromosome imaging data. This and the associated FISH reagent costs would have a substantial impact on the throughput of diagnoses for any type of radiation mass casualty. We shifted our algorithm development several years ago to Giemsa staining, because the published IAEA protocol was established with this approach. Besides, it has been standardized, it is inexpensive, and uses less complex, bright field microscopy systems. It is also a bedrock method in just about every cytogenetics laboratory. Subasinghe A, J Samarabandu, Knoll J, Khan W, Rogan PK. An Accurate Image Processing Algorithm for Detecting FISH Probe Locations Relative to Chromosome Landmarks on DAPI Stained Metaphase Chromosome Images. IEEE 2010 Canadian Conference on Computer and Robot Vision. 2010. Pp 223-230. DOI: 10.1109/CRV.2010.36. Subasinghe AA, J Samarabandu, J Knoll, PK Rogan. 
Intensity Integrated Laplacian Based Thickness Measurement for Detecting Human Metaphase Chromosome Centromere Location. IEEE Trans. Biomedical Engineering, 60:2005-13, 2013."
}
]
},
{
"id": "25106",
"date": "31 Aug 2017",
"name": "Eric Gregoire",
"expertise": [
"Reviewer Expertise Biological dosimetry"
],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis paper presents an approach for the improvement of automatic detection of dicentrics and particularly the removal of some kinds of False Positives (FP). Indeed, the dicentric is the standard bio-marker in biological dosimetry. However the manual scoring of dicentric chromosome is time consuming and the improvement of the result time is necessary in biological dosimetry. For dose assessment in case of triage of radiation exposed population, automatization of dicentric detection is very useful. In order to increase the dicentric detection, the authors concentrate the study on the removal of FPs as well as on the selection of quality images. Apparently the main FPs in the metaphases of the 2 study laboratories are Sister Chromatid Separation (SCS) (84% of the FPs). This study shows that the authors reach to eliminate them with filters (about 55% of SCS removed). This is great and encouraging to remove other kind of FPs. The study showed also the importance of selecting metaphases before the analysis and the detection. The quality of images and chromosomes is an important factor. The dose assessment becomes better when the images were selected than when they are not selected. The data show a difference in dose assessment between the 2 labs. The filters gave better results for CNL images than for HC images particularly on low doses. The dose assessment may be a problem of sampling or of the number of analyzed metaphases. this could be improved. 
However the selection of good quality metaphases is also time-consuming. There is a good comparison between manual and automatic triage of metaphases for the 2 labs, highlighted by the statistical tests. On the other hand, the number of metaphases analyzed remains important for the dose estimation as well as the detection of true dicentrics. Perhaps it would be interesting to compare this algorithm to the Metafer DCScore algorithm proposed by Metasystems in terms of detected dicentric rate and dose assessment. Overall the authors are to be congratulated on a well-presented study of their work.\nPage 7, last paragraph: prominent sister CHROMOSOME separation. Is it right or is it a mistake instead of CHROMATID?\n\nIs the rationale for developing the new method (or application) clearly explained? Yes\n\nIs the description of the method technically sound? Yes\n\nAre sufficient details provided to allow replication of the method development and its use by others? Yes\n\nIf any results are presented, are all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions about the method and its performance adequately supported by the findings presented in the article? Yes",
"responses": [
{
"c_id": "3092",
"date": "12 Oct 2017",
"name": "Peter Rogan",
"role": "Author Response",
"response": "We agree that a direct comparison of ADCI with DCScore might be worthwhile in the future, but it was not possible to carry this out in the present study. In our previous study (Rogan et al. 2016), we did point out a number of differences between our respective approaches. ADCI can accurately process bent chromosomes, those with indistinct or fuzzy boundaries, and ones with some separation of sister chromatids. It can separate and segment touching chromosomes. For these reasons, most manual pre- and post-processing steps in DCScore are unnecessary with ADCI. Nevertheless, users have the ability to curate individual images in ADCI should they desire. Unlike DCScore, ADCI is intentionally restricted from processing highly overlapped chromosomes, so objects with two centromeres that are not single chromosomes called DCs. The single interface of ADCI for processing metaphases is integrated with calibration curve generation and dose estimation. DCScore does not generate or store calibration curves or estimate dose of test samples. The software allows reuse of previously processed biodosimetry curves and can estimate doses across a range of sensitivies and specificities for DC detection, set by the user. Rogan PK, Li Y, Wilkins R, Flegal FN, and Knoll JHM. Radiation Dose Estimation by Automated Cytogenetic Biodosimetry. Rad. Prot. Biodosimetry, 172(1-3):207-217, 2016."
}
]
}
] | 1
|
https://f1000research.com/articles/6-1396
|
https://f1000research.com/articles/6-1019/v1
|
28 Jun 17
|
{
"type": "Research Article",
"title": "The choice between surgical scrubbing and sterile covering before or after induction of anaesthesia: A prospective study",
"authors": [
"Irene Sellbrandt",
"Metha Brattwall",
"Pether Jildenstål",
"Margareta Warrén Stomberg",
"Jan G. Jakobsson",
"Irene Sellbrandt",
"Metha Brattwall",
"Pether Jildenstål",
"Margareta Warrén Stomberg"
],
"abstract": "Background: Day surgery is increasing, and safe and effective logistics are sought. One part of the in-theatre logistics commonly discussed is whether surgical scrub and sterile covering should be done before or after induction of anaesthesia. The aim of the present study was to compare the impact of surgical scrub and sterile covering before vs. after the induction of anaesthesia in male patients scheduled for open hernia repair.\n\nMethods: This is a prospective randomised study. Sixty ASA 1-3 patients scheduled for open hernia repair were randomised to surgical scrub and sterile covering before or after induction of anaesthesia; group “awake” and group “anaesthetised”, respectively. Patients and theatre nurses were asked about their experiences and willingness to have the same logistics on further potential surgeries, through a survey provided before post-surgery. Duration of anaesthesia, surgery, theatre time, recovery room stay and time to discharge was studied. Results: There was no difference in the patients’ assessment of quality of care, and only one patient in the awake group would prefer to be anaesthetised on a future procedure. All nurses found pre-anaesthesia scrubbing acceptable as routine. The duration of anaesthesia was shorter and doses of propofol and remifentanil were reduced by 10 and 13%, respectively, in the awake group. Time in recovery area was significantly reduced in the awake group (p<0.05), but time to discharge was not different. Conclusion: Surgical scrub and sterile covering before the induction of anaesthesia can be done safely and without jeopardising patients’ quality of care.",
"keywords": [
"Day surgery",
"operating theatre turn around",
"theatre efficacy",
"surgical scrubbing",
"sterile covering"
],
"content": "Introduction\n\nDay surgery, where the patient leaves the hospital on the same day as surgery, is increasing. Shortening a hospital stay is associated to several benefits: Early ambulation reduces the risk for thromboembolic complications, as well as postoperative infections; and it will reduce health care costs. Thus, its implementation is of value for both patients and society. However, day surgery calls for good perioperative care, enabling rapid recovery to send patients safely home after a few hours following the end of surgery/anaesthesia. Shortening anaesthesia time, avoiding unnecessary anaesthetic exposure, has several potential benefits, including avoiding unnecessary cardiovascular depression, since there is a miss-match between anaesthetic depression and stimuli, thus requiring vasoactive support, improving early recovery, and reducing the amount of anaesthetic used.\n\nThe aim of the present study was to compare surgical scrub and sterile covering before vs. after induction of anaesthesia. Our hypothesis was that avoiding prolonged anaesthesia by inducing anaesthesia prior to surgical scrubbing and sterile covering would reduce the need for vasoactive medication. Additionally, the study aimed to determine if this different theatre logistic further affect drug doses of anaesthetic agents, post-anaesthesia care unit (PACU) time and quality of care?\n\n\nMethods\n\nThe study protocol has been reviewed and approved by the Gothenburg Ethical Committee (Dnr. 751-16 scientific secretary Sven Wallerstedt).\n\nThe study was conducted at Capio Lundby Hospital in Gothenburg, November 2016 – February 2017. Male patients scheduled for elective open hernia repair with a modified Lichtenstein technique under general anaesthesia were requested to participate. Exclusion criteria was severe cardiovascular, respiratory, hepatic or renal disease, and American Society of Anaesthesiology (ASA) score of >3. 
Sixty ASA 1-3 patients scheduled for elective open hernia repair, modified Lichtenstein procedure, participated in the study following verbal and written informed consent. These patients were randomised by envelope technique into two groups:\n\n1. Awake group: Surgical scrub and sterile covering before induction of anaesthesia, having the patient awake but sedated.\n\n2. Anaesthetised group: Surgical scrub and sterile covering performed with the patient asleep, i.e. after anaesthesia induction, securing of the airway and start of maintenance.\n\nPatients received all medication and care in accordance to routine procedures of the department, apart from the scrubbing and sterile covering. Premedication was with paracetamol and diclofenac.\n\nAnaesthesia was induced and maintained with propofol and remifentanil (total intravenous anaesthesia; TIVA). Anaesthesia was adjusted per clinical signs. No EEG-based depth of anaesthesia monitor was used. Patients had local anaesthesia in the wound area during the surgery. Postoperative nausea and vomiting (PONV) prophylaxis was administered based on risk, assessed by Apfel score.\n\nAll patients received care in accordance to routines of the department, apart from the preoperative preparation, surgical scrub and sterile covering awake or after induction of anaesthesia.\n\nPatients’ assessment of their experience being awake or anaesthetised during surgical scrub and sterile covering was collected using a postoperative survey. 
The survey used a visual analogue scale (VAS; 0, unacceptable to 10, fully acceptable) to describe the experience, and the question ‘would you like to have the same care if you needed further surgery?’ (yes/no/I don’t know).\n\nPerioperative observations were collected from the patient case record.\n\nOperating room nurses (n=7) were asked whether they found the surgical scrub and covering acceptable from a patient care perspective (VAS scale); this was asked only regarding awake patients.\n\nAll data is presented as mean ± standard deviation (SD), unless otherwise stated. Differences between groups’ continuous data, e.g. demographics and perioperative observations, were assessed by Student’s t-test, and categorical data with the Chi-square test. A p<0.05 was considered statistically significant. Data was analysed with StatView (v1.04) for Mac.\n\nThe number of patients studied was based on a power analysis from findings in a pilot study; awake surgical scrub and sterile covering should reduce the need for vasoactive medication. A reduction from 10 to 5 mg (composite), with an SD of 6 and a power of 80%, would require two groups of 23 patients each to show a difference at p<0.05.\n\n\nResults\n\nQuality of care during surgical scrub and sterile covering was assessed by all 60 patients; three patients were excluded from analysis of anaesthesia and recovery, as the surgery became more extensive than planned or for social reasons, and the patients were kept as inpatients. There were only minor demographic differences between the groups: the awake group was on average 5 years older, but the ASA class was not different (Table 1).\n\nData is displayed as the mean (standard deviation), unless otherwise stated.\n\nIn total, 27 of the awake patients would undergo surgery using the same logistics, two were indifferent and one was “negative”, while 21 of the anaesthetised patients would like to have the same logistics, and nine were indifferent. 
The theatre nurses rated patients being awake during surgical scrub and sterile covering as acceptable; 4/7 rated 10/10, while the remaining three rated as follows: 1, 6/10; 2, 8/10. All 7 nurses involved in the patient care considered it feasible to perform surgical scrub and sterile covering before induction of anaesthesia as a routine procedure.\n\nDuration of anaesthesia and time with laryngeal mask airway were shorter in the awake group (p>0.05). The amount of propofol and remifentanil required was lower in the awake group: 10% reduction in propofol and 13% in remifentanil, but this was not significant. We found no difference in vasoactive need during surgery between groups. There were no differences in early recovery or vital signs, and pain was regained at similar times in both groups. Time in PACU was shorter for the awake patients (p<0.05), but time to discharge, pain and PONV showed no difference between groups (Table 2).\n\nData is displayed as the mean (standard deviation), unless otherwise stated. Two patients in the awake group and one in the anaesthetised group were excluded from analysis, since they were kept as inpatients. Surgery time is defined as the time the patient is being operated on; theatre time is defined as from entrance to theatre until leaving for the PACU. LMA, laryngeal mask airway; PACU, post-anaesthesia care unit; VAS, visual analogue scale.\n\n\nDiscussion\n\nWe found surgical scrub and sterile covering prior to induction of anaesthesia feasible and with a maintained quality of care. It reduced drug usage and shortened time in the recovery area. However, no difference in need for vasoactive medication was found.\n\nMany theatre nurses in Sweden prefer the patient being asleep while surgical scrubbing and sterile covering is performed. This prolongs the time of anaesthesia, may cause additional need for vasoactive medications and may prolong recovery. 
The argument is that it may be distressful for patients if they are awake during preparation, surgical scrubbing and sterile covering. However, in this study, patients generally did not mind being awake; on the contrary, some patients gave positive feedback about being awake during preparation. Some nurses also feel that the liquid used for scrubbing may cause a freezing sensation in patients; we did not hear any comments supporting this notion. There are also discussions regarding that awake patients may be at an increased risk for surgical site infections (SSI). In a majority of SSI cases, the pathogen source is the native flora of the patient’s skin and there is no firm evidence that the anaesthetic technique used, i.e. patient being awake or asleep during scrub and sterile covering, should impact the infection risk1,2. Two recent studies in a large number of patients undergoing orthopaedic procedures did not show any difference between general anaesthesia, spinal anaesthesia and peripheral blocks3,4.\n\nShortening the duration of anaesthesia and drug doses may be of value, especially for elderly patients. There are studies suggesting that tailoring anaesthesia reduces the risks for cognitive side effects5. We did not follow patients beyond discharge. All our patients had total intravenous anaesthesia; whether use of inhaled anaesthesia for maintenance could further improve emergence, the early recovery, and quality of recovery cannot be assessed from the present study. We found in a previous study that inhaled anaesthesia facilitates early recovery6. We could, however, not see any reduced need for vasoactive medication; thus our primary hypothesis was negative. We cannot give any firm explanation as to why this occurred, since in the awake group the need for both propofol and remifentanil was reduced.\n\nTheatre turnaround time is of increasing importance. Efforts to improve efficacy have been addressed in several studies. Koenig et al. 
studied anaesthesia induction when the surgeon was in theatre or not, and its impact on waiting time and unnecessary anaesthesia duration7. They found a significant shortening of anaesthesia time when surgeons were readily available in the theatre. Saha et al. found that transfer of patients to and from theatre has a significant impact on theatre turnaround time8. We found clear logistical benefits associated with the use of local anaesthesia and sedation as compared to general anaesthesia in a previous study9. The benefit of a local anaesthesia-sedation technique has also been supported by others for vaginal prolapse surgery10,11. Open hernia repair is commonly done under local anaesthesia only12. Thus, avoiding anaesthesia during surgery preparation also seems to be a feasible alternative when patients are undergoing general anaesthesia.\n\nThere are limitations to our study. We studied only one procedure, in elderly male patients scheduled for inguinal hernia repair. Whether these results are transferable to other procedures needs further studies. Proper information about the importance of scrubbing and covering should be given to patients, and providing light sedation should be done in accordance with a patient’s wish. Whether fine tuning anaesthetic delivery could further impact the results cannot be stated. We could not find studies assessing the impact of surgical scrub and sterile covering on quality of care or theatre time events; thus we are not able to truly put our findings into the perspective of previous similar results. We still believe that our findings are of interest, as lean operating theatre planning is of growing importance. 
There are studies looking at different anaesthetic techniques and the use of a holding area for theatre preparation, which show benefits of introducing peripheral blocks before entry to the theatre13.\n\nIn conclusion, preparation, surgical scrub and sterile covering, before induction of anaesthesia is feasible, and does not jeopardise quality of care. In addition, it reduced the need for anaesthetic agents and may thus shorten recovery room stay.\n\n\nData availability\n\nDataset 1: Demographics, perioperative observations, and response to questionnaire of the patients undergoing surgical scrub and covering pre- and post-anaesthesia. doi, 10.5256/f1000research.11965.d16603414",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis study has been supported by Capio Lundby Hospital. No external funding or financial support has been received.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nThe authors gratefully acknowledge the staff of the Operating unit, PACU and Postoperative Ward in Capio Lundby Hospital, Gothenburg for help with collecting all data for this study.\n\n\nReferences\n\nGrant MC, Yang D, Wu CL, et al.: Impact of Enhanced Recovery After Surgery and Fast Track Surgery Pathways on Healthcare-associated Infections: Results From a Systematic Review and Meta-analysis. Ann Surg. 2017; 265(1): 68–79. PubMed Abstract | Publisher Full Text\n\nIorio R, Osmani FA: Strategies to Prevent Periprosthetic Joint Infection After Total Knee Arthroplasty and Lessen the Risk of Readmission for the Patient. J Am Acad Orthop Surg. 2017; 25(Suppl 1): S13–S16. PubMed Abstract | Publisher Full Text\n\nCurry CS, Smith KA, Allyn JW: Evaluation of anesthetic technique on surgical site infections (SSIs) at a single institution. J Clin Anesth. 2014; 26(8): 601–5. PubMed Abstract | Publisher Full Text\n\nKopp SL, Berbari EF, Osmon DR, et al.: The Impact of Anesthetic Management on Surgical Site Infections in Patients Undergoing Total Knee or Total Hip Arthroplasty. Anesth Analg. 2015; 121(5): 1215–21. PubMed Abstract | Publisher Full Text\n\nOliveira CR, Bernardo WM, Nunes VM: Benefit of general anesthesia monitored by bispectral index compared with monitoring guided only by clinical parameters. Systematic review and meta-analysis. Braz J Anesthesiol. 2017; 67(1): 72–84. PubMed Abstract | Publisher Full Text\n\nDolk A, Cannerfelt R, Anderson RE, et al.: Inhalation anaesthesia is cost-effective for ambulatory surgery: a clinical comparison with propofol during elective knee arthroscopy. 
Eur J Anaesthesiol. 2002; 19(2): 88–92. PubMed Abstract\n\nKoenig T, Neumann C, Ocker T, et al.: Estimating the time needed for induction of anaesthesia and its importance in balancing anaesthetists' and surgeons' waiting times around the start of surgery. Anaesthesia. 2011; 66(7): 556–62. PubMed Abstract | Publisher Full Text\n\nSaha P, Pinjani A, Al-Shabibi N, et al.: Why we are wasting time in the operating theatre? Int J Health Plann Manage. 2009; 24(3): 225–32. PubMed Abstract | Publisher Full Text\n\nSellbrant I, Pedroletti C, Jakobsson JG: Pelvic organ prolapse surgery: changes in perioperative management improving hospital pathway. Minerva Ginecol. 2017; 69(1): 18–22. PubMed Abstract | Publisher Full Text\n\nBuchsbaum GM, Albushies DT, Schoenecker E, et al.: Local anesthesia with sedation for vaginal reconstructive surgery. Int Urogynecol J Pelvic Floor Dysfunct. 2006; 17(3): 211–4. PubMed Abstract | Publisher Full Text\n\nFlam F: Sedation and local anaesthesia for vaginal pelvic floor repair of genital prolapse using mesh. Int Urogynecol J Pelvic Floor Dysfunct. 2007; 18(12): 1471–5. PubMed Abstract | Publisher Full Text\n\nPrakash D, Heskin L, Doherty S, et al.: Local anaesthesia versus spinal anaesthesia in inguinal hernia repair: A systematic review and meta-analysis. Surgeon. 2017; 15(1): 47–57. PubMed Abstract | Publisher Full Text\n\nLohela TJ, Chase RP, Hiekkanen TA, et al.: Operating unit time use is associated with anaesthesia type in below-knee surgery in adults. Acta Anaesthesiol Scand. 2017; 61(3): 300–308. PubMed Abstract | Publisher Full Text\n\nSellbrandt I, Brattwall M, Jildenstål P, et al.: Dataset 1 in: The choice between surgical scrubbing and sterile covering before or after induction of anaesthesia: A prospective study. F1000Research. 2017. Data Source"
}
|
[
{
"id": "23882",
"date": "17 Jul 2017",
"name": "Jakob Walldén",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nGeneral comments:\n\nInteresting pragmatic study evaluating if surgical scrub should be done before or after induction of anesthesia. The perspective is both from the patients and nurses point of view. The authors have found no major differences between the approaches and the major conclusion of the study is adequate.\n\nMy major concerns regarding the manuscript is that the authors must be more consistent with the endpoints in the study, there is no clear line in the manuscript of the primary and secondary endpoints. The power analysis is done on the reduction of an undefined vasoactive drug and this is consistent with the hypothesis stated in the end of the introduction, but this outcome is not even mentioned in the abstract.\n\nPlease define the endpoint in the study clearly, and be consistent with these endpoint when presenting and discussing the results throughout the manuscript.\n\nPlease correct tense throughout the manuscript and use past tense when describing the study. Please also proof-read so that grammar is correct throughout the manuscript, there are still quite a few grammatical errors.\n\nSpecific comments:\n\nAbstract/Results: Present the figures of the main results, not only the p-values. Abstract/Conclusion:\n\nIs the statement \"safe\" supported\"? No outcomes regarding safety.\n\nIntroduction: Relevant section. 
Punctuation instead of question-mark in last sentence of introduction.\n\nMethods:\nPlease use correct name of the review board (Regional Ethical Review Board in Gothenburg). State date for decision.\n\nPlease state main outcome variables, primary and secondary.\n\nDuplicated sections?\n\nPatients received all medication and care in accordance to routine procedures of the department, apart from the scrubbing and sterile covering. Premedication was with paracetamol and diclofenac.\n\nAll patients received care in accordance to routines of the department, apart from the preoperative preparation, surgical scrub and sterile covering awake or after induction of anaesthesia.\n\nWhat was the procedure/routine for giving vasoactive drugs during anesthesia? First choice, second choice? Blood pressure cutpoints?\n\nPower analysis: Reduction in what drug?\n\nResults and Table 1: Present the results in a structured order according to primary and secondary outcomes.\n\nThe primary endpoint (vasoactive drugs) is a parameter that might need to be presented more extensively. Three drugs are presented, and there is a small tendency that the awake group received more vasoactive drugs. Another dimension to explore the data is to present the number of patients that needed vasoactive drugs.\n\nNurse rating: unclear what you mean with:\n\n/ … follows: 1, 6/10; 2, 8/10./\n\nIs there one nurse rating missing?\n\nDiscussion: Main conclusions adequate in first paragraph.\nIn the second paragraph regarding theatre nurses, there are many statements without references (…prefer patients being asleep… distressful for patients… freezing sensations… discussion regarding that awake patients may be at an increased risk for surgical site infections…). Please support the statements if possible.\n\nDiscuss more extensively the differences between the groups and possible impact on the results (i.e. age differences, anesthetic doses).\n\nDiscuss if the study was properly powered. 
Would it be better to use another variable to calculate power?\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Partly\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nPartly\n\nAre all the source data underlying the results available to ensure full reproducibility? Partly\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": [
{
"c_id": "2917",
"date": "09 Aug 2017",
"name": "Jan Jakobsson",
"role": "Author Response F1000Research Advisory Board Member",
"response": "Thank you for adequate and constructive comments;My major concerns regarding the manuscript is that the authors must be more consistent with the endpoints in the study, there is no clear line in the manuscript of the primary and secondary endpoints. The power analysis is done on the reduction of an undefined vasoactive drug and this is consistent with the hypothesis stated in the end of the introduction, but this outcome is not even mentioned in the abstract. Please define the endpoint in the study clearly, and be consistent with these endpoints when presenting and discussing the results throughout the manuscript. The study was set-up to study differences between the two logistics, sterile washing and dressing before or after induction of anaesthesia. The power analysis was based on a pilot study looking at the amount of ephedrine needed, with MAP 60 and SAP 90 as cut-off values.Secondary outcomes where patients and scrub nurses assessment, any signs of safety concerns, and last theatre turn time around and PACU stay duration. We have now clarified and amended the manuscript accordingly. Please correct tense throughout the manuscript and use past tense when describing the study. Please also proof-read so that grammar is correct throughout the manuscript, there are still quite a few grammatical errors. We have amended the language to past tense and further checked spelling and grammar Specific comments: Abstract/Results: Present the figures of the main results, not only the p-values. Added accordinglyAbstract/Conclusion: Is the statement \"safe\" supported\"? No outcomes regarding safety. Amended/clarifiedIntroduction:Relevant section. Punctuation instead of question-mark in in last sentence of introduction. Methods: Please use correct name of the review board (Regional Ethical Review Board in Gothenburg). State date for decision. Amended/added Please state main outcome variables, primary and secondary. Clarified Duplicated sections? 
Patients received all medication and care in accordance to routine procedures of the department, apart from the scrubbing and sterile covering. Premedication was with paracetamol and diclofenac. All patients received care in accordance to routines of the department, apart from the preoperative preparation, surgical scrub and sterile covering awake or after induction of anaesthesia. Corrected. How was the procedure/routine for giving vasoactive drugs during anesthesia? First choice, second choice? Blood pressure cutpoints? Power analysis: Reduction in what drug? Ephedrine. Results and Table 1: Present the results in a structured order according to primary and secondary outcomes. The primary endpoint (vasoactive drugs) is a parameter that might need to be presented more extensively. Three drugs are presented, and there is a small tendency that the awake group received more vasoactive drugs. Another dimension to explore the data is to present the number of patients that needed vasoactive drugs. Nurse rating: unclear what you mean with: / … follows: 1, 6/10; 2, 8/10./ Is one nurse rating missing? Discussion: Main conclusions adequate in first paragraph. In second paragraph regarding theatre nurses, there are many statements without references. ( ..prefer patients being asleep… distressful for patients…. freezing sensations…discussion regarding that awake patients may be at an increased risk for surgical site infections… ). Please support the statements if possible. Discuss more extensively the differences between the groups and possible impact on the results (i.e. age differences, anesthetic doses). Further commented"
}
]
},
{
"id": "24416",
"date": "04 Aug 2017",
"name": "Adam Magos",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis is a simple, small prospective RCT addressing the issue of scrubbing and prepping patients undergoing hernia repair (\"anaesthetised\" group) before or after (\"awake\" group) induction of general anaesthesia. The manuscript is clearly written and provide adequate details about methodology, including a power calculation. The results show that the only statistically significant advantage an \"awake\" approach was less time in recovery by 9 (48 v 39) minutes. None of the other variables were significantly different although there was a trend to less anaesthetic usage in the awake group associated with a 4 (64 v 60) minutes reduction in anaesthetic time.\nWhether or not there is a meaningful advantage of pre-anaesthetic scrubbing and prepping is not confirmed by this study, perhaps because it was underpowered, and this is reflected in the authors' conclusions.\nNonetheless, this is an interesting study and one which may stimulate larger studies in different surgical areas. For this reason, publication would be worthwhile.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? 
Yes\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-1019
|
https://f1000research.com/articles/6-1374/v1
|
08 Aug 17
|
{
"type": "Method Article",
"title": "Identification and analysis of functional associations among natural eukaryotic genome editing components",
"authors": [
"Estienne C. Swart",
"Cyril Denby Wilkes",
"Pamela Y. Sandoval",
"Cristina Hoehener",
"Aditi Singh",
"Dominique I. Furrer",
"Miroslav Arambasic",
"Michael Ignarski",
"Mariusz Nowacki",
"Cyril Denby Wilkes",
"Pamela Y. Sandoval",
"Cristina Hoehener",
"Aditi Singh",
"Dominique I. Furrer",
"Miroslav Arambasic",
"Michael Ignarski",
"Mariusz Nowacki"
],
"abstract": "During development in the ciliate Paramecium, excess DNA interspersed throughout the germline genome is deleted to generate a new somatic genome. In this process, most of the intervening DNA is excised by a Piggybac-derived transposase, assisted by small RNAs (scnRNAs and iesRNAs) and chromatin remodelling. As the list of genes involved in DNA elimination has been growing, a need for a general approach to discover functional relationships among these genes now exists. We show that deep sequencing-based comparisons of experimentally-induced DNA retention provide a sensitive, quantitative approach to identify and analyze functional associations among genes involved in native genome editing. This reveals two functional molecular groups: (i) iesRNAs/scnRNAs, the putative Piwi- and RNA-binding Nowa1/2 proteins, and the transcription elongation factor TFIIS4; and (ii) PtCAF1 and Ezl1, two proteins involved in chromatin remodelling. Comparative analyses of silencing effects upon the largely unstudied regions comprising most developmentally eliminated DNA in Paramecium suggests a continuum between precise and imprecise DNA elimination. These findings show there is now a way forward to systematically elucidate the main components of natural eukaryotic genome editing systems.",
"keywords": [
"genome editing",
"DNA excision",
"sRNA",
"transposase",
"chromatin remodelling"
],
"content": "Introduction\n\nParamecium, like other ciliates, contains both a large, somatic, polyploid macronucleus (MAC), and a relatively tiny, gametic, diploid micronucleus (MIC), within the same cell. During sexual or asexual “reproduction” (conjugation or autogamy), a copy of Paramecium’s micronucleus develops into a new macronucleus, and at the same time the micronuclear genome it contains is transformed into a new macronuclear genome by extensive, targeted DNA deletion. Concomitant with DNA deletion, the developing MAC genome is amplified from 2N to ~800N. In Paramecium, the best studied germline DNA sequences destined to be deleted, known as internally eliminated sequences (IESs), are interspersed among the macronuclear-destined sequences (MDSs) (Figure 1). Characterized Paramecium IESs are typically short (> 90% are < 150 bp), TA-flanked, and unique (∼45,000 IESs per Paramecium tetraurelia MIC genome1). These IESs are recognized and generally precisely removed by the PiggyMac transposase complex2. For some IESs, development-specific small RNAs (sRNAs) are required for the recognition and efficient removal of all their copies3.\n\nA representative micronuclear chromosome is transformed into macronuclear chromosomes via precise excision of internally eliminated sequences (IESs) and imprecise elimination of other eliminated sequences (OESs). Macronuclear-destined sequences (MDSs) inbetween are joined and capped with new telomeres to give rise to the final macronuclear chromosomes. When DNA is imprecisely eliminated, the telomere-capped ends of the macronuclear chromosomes occur at different sites. In some cases, this process also produces alternative macronuclear chromosome forms. 
Note that the distinction between OESs and IESs, though useful for the purposes of description and analysis here, is an artificial one, and that there may only be one main class of eliminated DNA in Paramecium (see later in Discussion).\n\nParamecium’s current annotated IESs comprise just a small fraction (∼13%) of all the micronuclear-specific DNA that is eliminated during development1. Though much of the remaining eliminated DNA has been assembled during IES annotation1, it has yet to be assembled as a complete micronuclear genome, and comparatively little is known about it. For the sake of explanation, and because some of this DNA may reside at chromosome ends, and hence may not be internally eliminated, we classify this DNA as “other eliminated sequences” (OESs; see Figure 1).\n\nDevelopmental elimination of the few OESs investigated in detail4–6 is associated with two possible outcomes: either breakage of the MIC chromosomes into smaller MAC chromosomes, accompanied by end capping with telomeres, or fusion of the regions flanking the removed region5–7. The lengths of studied OESs are on the order of kilobases to tens of kilobases4–6. The boundaries of MAC chromosome telomere addition sites occur at similar locations to the boundaries of the internally removed sequences6. In contrast to classical IESs, the cut sites of these internally removed sequences are imprecise6. During chromosome breakage, the exact locations of the telomere addition sites vary over a span of at least hundreds of base pairs, but whether this is due to resectioning of DNA before telomere addition or due to initial imprecise DNA cleavage, or both, is unknown6.\n\nAn important question has therefore been whether the mechanisms underlying imprecise DNA elimination are distinct from those employed in precise IES excision, as previously proposed6, or whether some of the same developmental DNA deletion pathway is shared, as previously proposed4. 
Suggesting the latter may be true, substantial retention of OESs occurs after silencing the gene encoding PiggyMac (PGM)1, and OESs and transposons embedded within them are typically retained when gene silencing leads to IES retention3,8–12.\n\nIn ciliates, small RNAs (> 21 nt and < 32 nt) involved in MAC genome development have proposed roles either in DNA targeting and excision (as in Tetrahymena and Paramecium13–16) or in marking/protecting DNA for retention (as in Oxytricha17,18). Paramecium tetraurelia has two classes of sRNAs involved in DNA elimination: scnRNAs and iesRNAs3,13. scnRNAs are produced in gametic MICs during micronuclear meiosis, early during sexual development. Newly formed scnRNAs are transported to the existing, old MAC where MDS-matching scnRNAs are removed (i.e. removed by “RNA scanning”, as originally proposed in the ciliate Tetrahymena19), after which the non-MDS-matching scnRNAs (including IES-matching scnRNAs) are transported to the developing new MAC where they target DNA deletion3,13. Paramecium’s scnRNAs have been proposed to bind to longer RNA transcripts, rather than directly to DNA, in both the old and new MAC12,20. iesRNAs are produced in the developing MAC, where, in principle, they can immediately target unexcised IESs, and their production peaks later during development3. iesRNAs are hypothesized to be produced from transcripts of excised IESs3.\n\nscnRNAs and RNA scanning were discovered in the ciliate Tetrahymena (belonging to class Oligohymenophorea, like Paramecium)14,16,21. Other than the use of scnRNAs to target excision, Tetrahymena’s macronuclear genome development resembles that of Paramecium in a number of other key ways, most notably in employing a domesticated PiggyBac in DNA elimination22 and in having two phases of development-specific sRNA production16. 
There are also important differences between Paramecium and Tetrahymena genome development; in particular, unlike Paramecium’s classical short, precisely excised, single copy IESs, Tetrahymena’s conventional IESs tend to be longer, imprecisely excised, multicopy elements23,24. In contrast to Paramecium, which produces scnRNAs and iesRNAs with distinct Dicer-like proteins (Dcl2/3 for scnRNAs13; Dcl5 for iesRNAs3), Tetrahymena produces two types of scnRNAs (early-scnRNAs and late-scnRNAs16) with a single Dicer-like protein (Dcl1)25,26. These scnRNAs are differentiated by the Piwi proteins (Twi1 and Twi11) processing Dcl1-generated dsRNA precursors16. At present, it is not known how the development-specific sRNAs of Paramecium and Tetrahymena guide the domesticated PiggyBacs to their DNA targets, nor what role proteins implicated in their genome editing have in mediating the intervening interactions.\n\nIn the past, the effect on the excision of Paramecium IESs following microinjection of IES DNA copies into the old MAC during autogamy was investigated for a small set of IESs. IESs were classified either as maternally controlled (i.e. governed by the old or maternal MAC) when the DNA injection led to IES retention in the new MAC, or as non-maternally controlled when there was no observable IES retention27,28. In principle, since DCL2/3 co-silencing stops transmission of epigenetic information from the old macronucleus to the new one by eliminating the scnRNAs bearing this information, we previously classified IESs affected by DCL2/3 silencing as epigenetically controlled, and those that were not, as non-epigenetically controlled3,15. This classification is a simplification, since there is no clear boundary between epigenetically and non-epigenetically controlled IESs (and between maternally and non-maternally controlled IESs), and because it may be difficult to detect low levels of epigenetic control. 
After DCL2/3 silencing, some non-maternally controlled IESs are weakly retained, below the detection level of conventional PCR analyses3, and hence are weakly epigenetically controlled. IESs affected by DCL5 silencing do not correlate well with those affected by DCL2/3 silencing, suggesting that a substantial amount of Paramecium's sRNA-dependent IES excision is not epigenetically controlled3,15.\n\nIn the last few years, additional proteins influencing IES removal in Paramecium have been reported, including homologs of proteins with known functions in other eukaryotes, such as those involved in histone modification (PtCAF1, Ezl1) and RNA transcription (TFIIS4)10–12 (see also Table 1 and Table 2). These studies, following existing models in Tetrahymena29,30, have led to the proposal that scnRNAs in Paramecium direct certain chromatin modifications that may facilitate DNA excision10,11. Proteins involved in IES removal with no readily detectable homologs in other eukaryotes have also been investigated8. In addition to these proteins, two closely related protein paralogs, Nowa1 and Nowa2 (Nowa1/2), were previously implicated in IES elimination and proposed to be involved in scnRNA-mediated, trans-nuclear crosstalk31. Identification and establishment of the relationships between all these proteins and the development-specific sRNAs is necessary to determine how genome editing functions in Paramecium, and is the main objective of the research presented here.\n\nEffects of knockdowns as described in this and existing studies. See Results for a description of the functional group assignment.\n\nProteins are those considered in this study. Timing of expression during development is gauged from microarray data33. Localization is that determined from GFP fusion proteins. 
Putative function indicates the clearest known function.\n\nIn this paper, highlighting the utility of DNA retention analyses in discovering associations between genome editing proteins, we present what we have learnt about the role of Nowa1/2 proteins and their relation to other Paramecium genome editing components. We report strong correlations between the IESs affected by NOWA1/2 co-silencing (NOWA1/2-KD; KD=knockdown), DCL2/3/5 triple silencing (DCL2/3/5-KD) and TFIIS4 silencing (TFIIS4-KD), suggesting that Dcl2/3/5, Nowa1/2 and TFIIS4 are all components of an sRNA-guided DNA excision subsystem. Strong correlations also exist between PTCAF1-KD and EZL1-KD IES retention, whereas the correlations between the IES retention of these gene knockdowns and those of RNA-related genes are somewhat weaker. We also observe more IES retention due to the silencing of chromatin modifying genes (PTCAF1 and EZL1) than due to depletion of most IES-matching sRNAs by DCL2/3/5-KD, suggesting that chromatin modification due to PtCaf1 and Ezl1 also influences genome editing in an sRNA-independent manner. The correlations in OES retention following gene knockdown suggest most OESs are effectively the longest IESs.\n\n\nMethods\n\nAll experiments were carried out with Paramecium tetraurelia stock 51, which may be obtained from the ATCC as ATCC30567. Cells were grown in a wheat grass powder (WGP; Pines International, Lawrence, KS) infusion medium bacterized the previous day with Klebsiella pneumoniae supplemented with 0.8 mg/l β-sitosterol. Cultivation and autogamy were carried out at 27°C as previously described28.\n\nGene silencing (knockdown) was carried out by RNA interference by feeding of Paramecium cells with E. coli expressing doubled-stranded RNA from a portion of the target gene cloned into the L4440 vector, as previously described3,8,10,31. Sequence regions used for each of the genes silenced in DCL2/3/5-KD were as previously described3,13. 
Silencing of PTCAF1 and ND7 was as previously described10,31.\n\nThree knockdowns of NOWA1/2 were performed; where necessary we distinguish between these knockdowns (i.e. NOWA1/2_a-KD, NOWA1/2_b-KD, NOWA1/2_c-KD). Macronuclear DNA from postautogamous cells from the first and third knockdowns was used to produce Illumina paired-end sequence libraries. Illumina sRNA libraries were produced from NOWA1/2_b-KD at the same time as those produced for sRNAs from DCL2/3-KD, DCL5-KD and control cells - the latter knockdowns were described in \"Experiment 1\" of 3.\n\nThe following controls were used in this manuscript:\n\nControl 1 = feeding with an empty vector (EV) construct. This was the control for DCL2/3/5-KD and PDSG2-KD;\n\nControl 2 = ND7-KD. This was the control for PGM-KD1;\n\nControl 3 = ND7-KD. This was the control for an unpublished experiment (Singh et al., in preparation);\n\nControl 4 = ND7-KD. This was the control for EZL1-KD11.\n\nND7 is a trichocyst discharge gene which is not expected to affect IES retention32.\n\nTests of survival of sexual progeny cells were performed by isolating and refeeding 30 individual cells. NOWA1/2 knockdown is generally lethal, i.e., typically all cells die within at most the time taken for 3 divisions of control cells following autogamy; for DCL2/3/5-KD 90% of cells were dead 3 days after autogamy; for PDSG2-KD after 3 days 50% of cells were dead and 47% appeared to be “sick” (i.e. dividing slower than usual); for PTCAF1-KD after 3 days 80% of cells were dead and 17% were sick.\n\nMorphological characteristics of MAC development were used to assign the developmental stages of the control and knockdown samples: \"Early\", \"Middle\" and \"Late\" correspond to 50% of cells with fragmented MACs; 100% of cells with fragmented MACs; and 100% of cells with fragmented MACs and a new MAC (Figure S3A,C). 
Note that because Paramecium cells are not completely synchronous during induction of autogamy33 and because the underlying assessments are somewhat subjective, due caution should be exercised when comparing developmentally staged analyses between gene knockdowns.\n\nTo assess the effectiveness of DCL2/3/5-KD, northern blotting was performed as in 3. The DCL2 probe was from gene position 857-1810 (ParameciumDB: GSPATG00003051001); the DCL3 probe was from gene position 518-1354 (ParameciumDB: GSPATG00027456001); the DCL5 probe was from gene position 2074-2461 (ParameciumDB: GSPATG00012932001). For PTCAF1-KD and control (ND7-KD) cells, northern blotting was performed as in10,31.\n\nA portion of Nowa1's C-terminal domain (amino acids 744-1024) fused to an N-terminal Strep-Tag was expressed in E. coli BL21-CodonPlus(DE3)-RIL cells (formerly Stratagene, currently Agilent, USA). The protein was column purified, followed by removal of the tag by HRV 3C protease cleavage. Immunization of a rabbit with this protein was performed by Eurogentec, Belgium, following their 28-day polyclonal antibody protocol; the resulting antibody was then affinity purified using the matrix-bound C-terminal domain portion of Nowa1, before western blotting was performed using a 1:500 antibody dilution (Figure S3B). As a loading control, western blotting was performed using 1:5000 dilution of monoclonal anti-alpha-tubulin (Sigma T8203 lot 111M4828) and 2% BSA for blocking.\n\nFor small RNA electrophoretic analysis, total RNA samples (5.0 μg) were 5'-end-labelled with [γ-32P] ATP (5000 Ci/mmol, Amersham) by the exchange reaction of T4 polynucleotide kinase (Fermentas), then denatured and run on a 15% polyacrylamide, 7 M urea gel.\n\nA synthetic Nowa1 gene, codon-adjusted for E. coli translation (GENART synthesis), was cloned into the pMAL-c2x vector and expressed in Rosetta2(DE3)p-LysS (formerly Novagen, currently Merck) cells. Protein translation was performed at 37°C; cells were supplied with 1 mM IPTG and 0.2% glucose. 
Protein was purified according to the pMAL™ Protein Fusion and Purification System (NEB e8200s). EMSA was performed with the LightShift® Chemiluminescent RNA EMSA Kit (Thermo scientific, 201558) using 6.25 μM of RNA or DNA, 4 μg of MBP, and 12 μg of MBP-Nowa1.\n\nMacronuclear DNA was isolated from post-autogamous cells as previously described1. For PCR analyses of IES retention, DNA was extracted from approximately 800 cells with a Nucleospin Tissue kit (Macherey-Nagel).\n\nTotal RNA was extracted from 400 ml of culture per the TRI Reagent BD protocol (Sigma) and was resuspended in RNase free water. A MirVana miRNA isolation kit (Ambion) was used for sRNA enrichment.\n\nMAC genomic DNA and small RNA libraries were produced and sequenced per standard Illumina protocols. Paired-end Illumina DNA-seq libraries were constructed and sequenced on either the Illumina GAIIX, HiSeq 2000 or HiSeq 3000 sequencers.\n\nSequence data corresponding to the MAC DNA from NOWA1/2_a-KD can be obtained from the GenBank Sequence Read Archive (SRA) under the accession SRR646462. MAC DNA sequence data for NOWA1/2_c-KD, DCL2/3/5-KD, PTCAF1-KD, PDSG2-KD, ND7-KD and control 1 (empty vector) is available from the European Nucleotide Archive, respectively under the accession numbers ERS1033672, ERS1033670, ERS1033674, ERS1033673, ERS1033676, ERS1033671. 
sRNA sequence data for NOWA1/2_b-KD and empty vector control can be obtained from the European Nucleotide Archive accessions ERS1455004, ERS1455005, ERS1455006, ERS1455007, ERS1455008, ERS1455009, and for DCL2/3/5-KD, from ERS1455358, ERS1455359, ERS1455360.\n\nPCR conditions and primers used to test IES retention following NOWA1/2_a-KD were the same as those in 3.\n\nThe following reference genomes for Paramecium tetraurelia stock 51, obtained from ParameciumDB34, were used in the IES analyses and read mapping:\n\nMAC: http://paramecium.cgm.cnrs-gif.fr/download/fasta/assemblies/ptetraurelia_mac_51.fa\n\nMAC+IES: http://paramecium.cgm.cnrs-gif.fr/download/fasta/assemblies/ptetraurelia_mac_51_with_ies.fa (md5 checksums: dbcc54fb2987c8f60f8e765db7ed274c and 3e5b3fa65ebfaa484566a1ffddf20239, respectively).\n\nIES retention scores (IRSs) were determined by ParTIES35. For each IES, ParTIES counts mapped reads with unexcised IESs (IES+) and excised IESs (IES-) to determine an IES retention score, IRS = IES+ ÷ (IES+ + IES-). A table of all the IES retention scores used in this manuscript is included as Dataset 1 (see Data and software availability). IES retention scores from previous knockdown experiments, i.e. PGM-KD1, TFIIS4-KD12, EZL1-KD (EZL1-2 silencing fragment)11, DCL2/3-KD3 and DCL5-KD3, were downloaded from ParameciumDB34. Unless otherwise indicated, NOWA1/2-KD IRSs are for NOWA1/2_a-KD.\n\nTo generate the IES and OES score regression matrices, shown in Figures 3A–C, Figure 4E and Figures S1A–B, we provide the program “after_ParTIES.py” (see Data and software availability), which can be run on Dataset 1 and Dataset 2 (IES and OES retention scores, respectively). 
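The IRS calculation defined above can be sketched in a few lines of Python. This is only an illustration: the read counts below are hypothetical, and in the study ParTIES derives the IES+ and IES- counts from reads mapped across each IES.

```python
# Illustrative sketch of the ParTIES-style IES retention score (IRS).
# Hypothetical counts; ParTIES derives IES+ (reads supporting an
# unexcised IES) and IES- (reads supporting the excised junction)
# from mapped sequencing reads.

def ies_retention_score(ies_plus, ies_minus):
    """IRS = IES+ / (IES+ + IES-): 0 means fully excised, 1 fully retained."""
    total = ies_plus + ies_minus
    if total == 0:
        raise ValueError("no informative reads for this IES")
    return ies_plus / total

# e.g. 30 IES+ reads and 70 IES- reads give an IRS of 0.3
irs = ies_retention_score(30, 70)
```

An IRS near 0 thus indicates normal excision, while values near 1 indicate strong retention after a knockdown.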
This was tested using Python3.6 and relevant modules for scientific computing from the Anaconda software distribution from Continuum Analytics (version: Anaconda3-4.3.1).\n\nThree regressions are graphically shown for IRS pairs: an ordinary least squares (OLS) regression, a LOWESS regression (non-parametric) and an orthogonal distance regression (ODR) with a linear function initialized by the slope and intercept from the OLS. OLS and ODR used standard SciPy functions. LOWESS regressions used the Python statsmodels library. ODR was performed since errors in both the \"dependent\" and \"independent\" regression variables are expected (both IRSs), not just the dependent variable as assumed by OLS. The combined effect of background IES retention and systematic IES retention errors on the regressions, and the effect of subtracting the mean of multiple control IES retention scores is shown in Figure S1A. For each pair of knockdown IES retention scores, Spearman’s rank correlation coefficient (rs) was calculated instead of Pearson’s correlation coefficient to mitigate the effect of outliers. Reproducibility of IES retention scores between experiments is demonstrated by the strongly correlated scores of different NOWA1/2 silencing experiments (Figure S1B).\n\nTwo-sided p-values, calculated with SciPy’s spearmanr function, were used to test the null hypothesis of no correlation (for p ≤ α/m ≤ 0.01/45; i.e. with a Bonferroni correction for multiple hypothesis testing; uncorrected p-values are given in Supplementary Data S1–4), which was rejected in all the displayed correlations in the main figures, except where indicated by a circumflex accent. To facilitate assessment of the statistical significance of differences between rs values, tables containing the two-tailed p-values are provided in Supplementary Data S5-7. 
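As a minimal illustration of the correlation measure used above: Spearman's rs is Pearson's correlation computed on average ranks. The sketch below is a pure-Python stand-in for SciPy's spearmanr (which the study actually used), with no p-value calculation; the Bonferroni divisor of m = 45 pairwise tests is taken from the text.

```python
# Pure-Python sketch of Spearman's rank correlation (rs), i.e.
# Pearson's correlation on average ranks; ties receive average ranks.
# Illustrative only -- the study used SciPy's spearmanr.

def _ranks(values):
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over a run of tied values
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # 1-based average rank for the tied run
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rs(x, y):
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Bonferroni-corrected significance threshold for m = 45 pairwise tests:
alpha_corrected = 0.01 / 45
```

Because rs depends only on ranks, a few extreme IES retention scores cannot dominate the statistic the way they would with Pearson's correlation, which is why it was preferred here.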
p-values were calculated using methods from the R statistical package cocor version 1.1-336 that employ Fisher’s r-to-Z transformation (which are also applicable to rs37). Appropriate methods for calculating test statistics for correlations with independent groups or dependent groups with either overlapping or non-overlapping variables were used from cocor (noted in the supplementary data files). All the comparisons of rs differences described in the text are statistically significant at α=0.01 with a Bonferroni correction for multiple hypothesis testing.\n\nGenomic regions that are not part of the current reference Paramecium MAC+IES assembly were previously assembled1, but we decided to use an alternative, metagenome assembler to attempt to produce a better assembly, and to minimize bacterial contamination. Reads from PGM-KD1 not mapping to the current Paramecium tetraurelia strain 51 MAC genome and 51 MAC genome with IESs (bwa38 version 0.7.12-r1039; default parameters) were assembled with IDBA-UD39 version 1.0.9 (default parameters). To remove bacterial contaminants from the assembly, only scaffolds with GC < 40% were selected (bacterial scaffolds peak around 58% GC; Figure S5A). This procedure resulted in a ∼20 Mb assembly consisting of 7266 scaffolds. As a small fraction of the P. tetraurelia MAC genome is currently missing from the reference assembly, this fraction was present in the ∼20 Mb assembly. To exclude the missing MAC genome regions, empty vector control (control 1) reads were mapped to the 7266 scaffolds with HISAT240 version 2.0.4 (parameters: “-X 600 --pen-noncansplice 0 --mp 24,8 --max-intronlen 1000”), and then, using a custom Python script, only those regions with little or no sequence coverage (≤ 2.0×; compared to a mean MAC coverage of ∼100×) were extracted (Figure S5C shows an illustrative scaffold from which OESs were extracted). 
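The GC-based decontamination step above can be sketched as follows. The scaffold names and sequences are invented for illustration; the 40% GC cut-off is the one stated in the text.

```python
# Sketch of the GC-content filter used to drop bacterial contaminant
# scaffolds (bacterial scaffolds peak near 58% GC; scaffolds with
# GC < 40% were kept). Sequences below are toy examples.

def gc_fraction(seq):
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def keep_low_gc(scaffolds, threshold=0.40):
    """Retain scaffolds whose GC fraction is below the threshold."""
    return {name: s for name, s in scaffolds.items()
            if gc_fraction(s) < threshold}

scaffolds = {
    "putative_paramecium": "ATATTAGCAATTATAT",  # AT-rich, kept
    "putative_bacterial": "GCGCGGCCATGCGCGC",   # GC-rich, dropped
}
kept = keep_low_gc(scaffolds)
```

This works for Paramecium because its genome is strongly AT-rich, so a simple GC threshold separates it cleanly from most bacterial contamination.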
Coverage was smoothed using a bi-directional average of two exponentially weighted moving averages (EWMA) with a 200 bp span. This procedure extracted 4799 sequence regions (12.3 Mb).\n\nTo minimize the effects of potential genome assembly chaff (short assembled regions, esp. those that might arise from sequence errors), analyses were continued with only the low control sequence coverage regions extracted from scaffolds longer than 300 bp and with a PGM-KD sequence coverage of between 3× and 70× (Figure S5B). These 3882 regions, obtained from 5236 scaffolds, corresponding to 11.8 Mb of sequence data, are classified as putative OESs (Supplementary Data S8). Note that OES boundaries are a first approximation, since imprecise DNA elimination generates variable boundaries (as illustrated in Figure 1 and shown in Figure S5C), and many OESs situated at the ends of the scaffolds are likely incomplete due to the limitations of genome assembly (esp. since the assembly may end when untraversable repetitive regions are encountered).\n\nFor each knockdown, to quantify the effects of gene knockdown on OES retention, reads were mapped to the OESs (identified in the previous methods section) with HISAT240 version 2.0.4 (parameters: “-X 600 --pen-noncansplice 0 --mp 24,8 --max-intronlen 1000”) before determining coverage, and normalized to the median coverage of the knockdown reads mapped with HISAT2 to the Paramecium tetraurelia strain 51 MAC genome plus IES assembly, i.e. OES retention score = OES coverage ÷ median(MAC-with-IES genome coverage). OES retention scores may not be as accurate as IES retention scores, both because their denominators use a measure of global, rather than local, sequence coverage and because only reads that map within OES boundaries are counted. In the latter case, read mapping may fail to detect short matches to an OES when only a short part of an OES/MDS-derived read matches, and so the shorter the OES, the greater the underestimation of its retention. 
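A rough sketch of the bi-directional EWMA smoothing and the OES retention score described above. It assumes the common span-to-alpha conversion alpha = 2/(span + 1) (the pandas `ewm` convention; the text does not state which convention was used), and the coverage values are invented.

```python
# Sketch of per-base coverage smoothing (average of a forward and a
# backward EWMA, span in bp) and the OES retention score
# (OES coverage / median MAC+IES genome coverage). Toy numbers only.

def ewma(values, span):
    alpha = 2.0 / (span + 1.0)  # assumed span-to-alpha convention
    s = values[0]
    out = []
    for v in values:
        s = alpha * v + (1.0 - alpha) * s
        out.append(s)
    return out

def smooth_bidirectional(values, span=200):
    fwd = ewma(values, span)
    bwd = ewma(values[::-1], span)[::-1]
    return [(f + b) / 2.0 for f, b in zip(fwd, bwd)]

def median(values):
    s = sorted(values)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2.0

def oes_retention(mean_oes_coverage, mac_ies_coverages):
    # global (median) denominator, as described in the text
    return mean_oes_coverage / median(mac_ies_coverages)
```

Averaging the forward and backward passes removes the directional lag a single EWMA introduces, which matters when the smoothed coverage profile is used to call region boundaries.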
Shorter OESs are more weakly retained in most knockdowns affecting genome reorganization in Paramecium (Figure S5D), but at least some of this effect may be due to boundary-related underestimation of retention. In the future, when a complete Paramecium MIC genome is assembled, the relationship between OES length and retention can be fully assessed. OES retention estimates are provided in Dataset 2. Control OES retention was subtracted from the knockdown OES retention for each OES used in the regression analyses.\n\nsRNAs were mapped and histograms of their genomic distribution were produced as previously3, with the addition of OESs.\n\nSequence logos were created with WebLogo41, version 3.3, with redundant sRNAs and base frequencies A=U=0.4 and C=G=0.1.\n\n\nResults\n\nPreviously, each newly silenced Paramecium gene involved in IES excision appeared to generate a different distribution of IES retention3,11,12. From additional knockdowns we have subsequently performed, it can now be seen that the IES retention distributions for NOWA1/2 knockdown, TFIIS4 knockdown and DCL2/3/5 triple knockdown are very similar (Figure 2A; see Methods for the IES retention score calculation). Given the roles of TFIIS4 and Dcl2/3/5 in the production and processing of RNAs, and the suggested role of Nowa1 in RNA interactions (see “Nowa1/2 is involved in RNA scanning, may mediate scnRNA- and iesRNA-targeted DNA cutting and excision” later in the results), we classify these proteins as \"RNA-associated\" proteins involved in Paramecium MAC genome development. The IES retention score distributions of the EZL1 knockdown and PTCAF1 knockdown also have similar forms, though the effect of the former knockdown is slightly stronger (Figure 2B). Since both Ezl1 and PtCAF1 affect histone modifications10,11, we classify these proteins as those involved in \"histone modification\".\n\n(A–D) For the sake of clarity, histogram bars with more than 10000 IESs have been truncated. 
The number of IESs binned in the left-most bars is as follows: (A) NOWA1/2-KD - 17468, TFIIS4-KD - 14152, DCL2/3/5-KD - 18915 (Figure S3B and S3D provide Western and Northern blots, for NOWA1/2-KD and DCL2/3/5-KD, respectively); (B) PTCAF1-KD - 12614, EZL1-KD - 8047; (C) DCL2/3-KD - 37410, DCL5-KD - 35019, DCL2/3/5-KD - 22136; (D) Control - 43294; PDSG2-KD - 20730. Note that, as judged by survival tests in comparison to those previously published8, PDSG2-KD may have been of reduced efficiency. (E,F) IES retention scores (scales on right y-axes) vs. IES length over shorter (E) (≤ 200 bp) and longer (F) (≤ 1000 bp) IES length scales. The IES length distribution is shown in grey in the background (scale on left y-axes). Lines are exponentially weighted moving averages (EWMA) with spans of 5 (E) or 50 (F) bp. (G–H) Mean base frequencies of positions 1-3 after the TA repeat are plotted relative to IES retention score for IESs from the third (45-55 bp) length peak (only the highest frequency bases are shown — see (H) for the base color scheme; similar trends are visible for ~10 bp windows surrounding longer IES length peaks). IES length is restricted to control for IES end base frequency variation associated with large-scale (26-1000 bp) IES length variation. EWMA lines for spans of 10 data points (intervals of 0.01) are plotted for data within 2 standard deviations (dotted vertical lines) of the mean retention score (dashed vertical line). Mean IES lengths (bp; counting only one of the two TA pointers) vs. IES retention score are plotted in the bottom subgraphs (navy diamonds). See Figure S2 for end base frequency graphs of EZL1-KD and TFIIS4-KD and for end base frequency graphs of shorter IESs (26-36 bp). Equivalent graphs for DCL2/3-KD, DCL5-KD and PGM-KD can be seen in 15.\n\nAs we previously reported, the effects of either DCL2/3 knockdown or DCL5 knockdown on IES retention alone are quite modest3,15. 
Even when their IES retention scores are summed, it is apparent that the knockdowns of DCL2/3 and DCL5 influence considerably fewer IESs, and to a lesser degree, than the PGM knockdown. For DCL2/3/5, NOWA1/2 and TFIIS4, given that the gene knockdowns are efficient (as judged by depletion of their mRNAs or proteins; Figure S3B,D and12), we think the similarity of their IES retention score distributions indicates that the upper bounds of IES retention for these knockdowns have been approached. Consequently, these knockdowns may identify most of the Paramecium IESs requiring scnRNAs and iesRNAs for their excision.\n\nThough it is an interesting idea, we find no evidence for the proposed missing class of sRNAs11,12 responsible for the targeting and excision of additional IESs. Instead, we observe that after DCL2/3/5-KD the levels of IES-matching sRNAs are negligible (Figure 5C). We therefore infer that most IES excision in Paramecium does not require IES-targeting sRNAs. This is consistent with past studies reporting that most tested IESs were non-maternally controlled27,28, i.e. that scnRNAs alone are insufficient for most IES excision3,15. On the other hand, from IES retention following DCL2/3/5-KD we infer that many IESs require some scnRNAs or iesRNAs to cleanly remove all their copies (the exact number of affected IESs depending on the chosen retention score threshold, e.g. 19043 IESs are affected by DCL2/3/5-KD above a threshold of 0.1 retention).\n\nAn important observation from the DCL2/3/5 knockdown is the considerably stronger IES retention compared to the separate DCL2/3 and DCL5 knockdowns (Figure 2C); for example, at an IES retention score > 0.1, for DCL2/3/5-KD 19043 IESs are affected vs. 4815 IESs for DCL2/3-KD, and 5384 IESs for DCL5-KD. This suggests some co-operation between scnRNAs and iesRNAs in IES excision. 
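Threshold-based counts of affected IESs, such as those quoted above, reduce to simple comparisons over per-IES retention score arrays; a minimal sketch with toy scores (not the real data):

```python
import numpy as np

def n_affected(scores, threshold=0.1):
    """Count IESs whose retention score exceeds the given threshold."""
    return int(np.sum(np.asarray(scores) > threshold))

# Toy retention scores (illustrative only): the triple knockdown can
# push IESs over the threshold that neither partial knockdown does.
dcl23_kd = [0.05, 0.20, 0.02, 0.04]
dcl5_kd = [0.04, 0.03, 0.03, 0.15]
dcl235_kd = [0.60, 0.30, 0.25, 0.20]
```

With such toy data, `n_affected(dcl23_kd) + n_affected(dcl5_kd)` is smaller than `n_affected(dcl235_kd)`, the super-additive pattern described above.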
From the DCL2/3/5 triple silencing, it is clear that multiple gene co-silencing may be necessary to gauge the full contributions of individual genes to IES excision. This is a critical consideration in gene function investigations in Paramecium tetraurelia with its abundance of paralogs that arose from successive whole genome duplications42.\n\nAs we previously proposed, IES excision/recognition efficiency and the need for particular genome reorganization proteins vary as a function of IES end bases and lengths15. A trend of increasing IES retention with IES length was shown for the knockdowns of DCL2/3, TFIIS4 and EZL13,11,12,15. It can now be seen that, with the exceptions of PGM-KD and DCL5-KD, most knockdowns leading to IES retention show this trend (Figure 2E,F). A new observation with respect to IES length is that IES retention for the knockdowns of DCL2/3/5, TFIIS4 and NOWA1/2 also has an obvious 10-11 base periodicity for ~45-130 bp IESs (Figure 2E), mirroring that of the IES length peaks (which are proposed to reflect the constraints of DNA twist on transposase excision1). This signal is not apparent in knockdowns with weaker effects (DCL2/3-KD, DCL5-KD and PDSG2-KD), and only weakly visible in the knockdowns with stronger effects (PTCAF1 and EZL1). We previously also observed that the terminal IES base frequencies relative to IES length show a periodic, 10-11 bp signal (Figure 2A of 15). As in knockdowns of DCL2/3, DCL5 and PGM15, for all the new knockdowns we examined, the end base frequencies of longer IESs (e.g. 45-55 bp) vary in a similar manner with respect to their IES retention scores (Figure 2G–H; Figure S2), consistent with the idea that some IESs have a greater requirement for the products of these genes than others15.\n\nWhile similar IES retention distributions suggest different gene knockdowns might affect retention of particular IESs in a similar manner, examination of pairwise correlations is necessary to clearly establish this. 
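All-against-all comparisons of knockdown retention profiles of the kind just described can be sketched with scipy; the knockdown names and scores below are placeholders:

```python
from scipy.stats import spearmanr

def pairwise_rs(retention):
    """Spearman's rank correlation (rs) for every pair of knockdowns.

    retention maps a knockdown name to its per-IES retention scores,
    with all arrays given in the same IES order.
    """
    names = sorted(retention)
    rs = {}
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            rho, _pval = spearmanr(retention[a], retention[b])
            rs[(a, b)] = float(rho)
    return rs
```

Rank correlation is the natural choice here because retention score distributions are strongly skewed and differ in scale between knockdowns.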
We therefore determined all the possible pairwise correlations between IES retention scores of different gene knockdowns (Figure 3). From this figure, it can be seen that the DCL2/3/5, NOWA1/2, TFIIS4 (group 1), and histone modification genes (group 2) not only have the most similar IES retention score distributions within each of these two groups, but also the strongest IES retention score correlations. Though not as strong as those within the DCL2/3/5, NOWA1/2, TFIIS4, and histone modification groups, positive correlations are also evident between these groups (Figure 3; e.g. Spearman’s rank correlation coefficient, rs=0.74 between DCL2/3/5-KD and PTCAF1-KD). The correlations between PGM knockdown and other gene knockdowns are generally weak — not much greater than the correlations between the control (ctrl3) and other gene knockdowns. Such weak correlations are an expected consequence of the ability of PiggyMac to excise most IES copies without scnRNAs/iesRNAs and without chromatin state changes brought about by Ezl1 and PtCAF1.\n\nSince IES retention increases with IES length for many of the knockdowns influencing IES excision (see Figure 2E,F), we examined the retention score correlations for the abundant IESs corresponding to the first length peak (e.g. ≤ 35 bp; ∼15600 IESs; the IES length peaks can be seen in the background of Figure 2E) compared to those for longer IESs (e.g. ≥ 500 bp; ~380 IESs; patterns of correlations between these length intervals are intermediate to those in Figure 2E,F). For short IESs there are good correlations between all the possible pairs of IES retention scores from DCL2/3/5-KD, NOWA1/2-KD, TFIIS4-KD, PTCAF1-KD and EZL1-KD. This could be because the protein products of all the genes involved in these knockdowns are cooperating in the excision of short IESs. For the subset of short IESs, the right-skewed IES retention score distributions of these knockdowns also all resemble each other (diagonal of Figure 3B). 
We previously noted that DCL5-KD leads to greater retention of short IESs than DCL2/3-KD, i.e. iesRNAs are marginally more important for short IES excision than scnRNAs, and short IESs are generally weakly epigenetically controlled3,15. This is reflected by the stronger correlations in IES retention score for short IESs between DCL5-KD and the knockdowns of NOWA1/2, TFIIS4, PTCAF1 and EZL1, relative to the correlations between DCL2/3-KD and the same genes.\n\nWhen focusing on longer IESs (e.g., those > 500 bp), the IES retention score correlations between the knockdowns of histone-associated genes, and also those between DCL2/3/5, NOWA1/2, TFIIS4 and histone-associated genes, weaken substantially. Compared to their short counterparts, the retention score distributions of longer IESs also look very different, with those of DCL2/3, DCL2/3/5, NOWA1/2, TFIIS4 closer to uniform, and those of the histone modification group closer to normal. The forms of the histone modification group IES retention score distributions now also become more like that of PGM-KD. These patterns are consistent with the chromatin state changes brought about by EZL1-KD and PTCAF1-KD exerting more influence on long IESs than short ones, with a much lesser requirement for iesRNAs, and generally some requirement for scnRNAs (i.e. long IESs are generally more strongly epigenetically controlled). At longer length scales, the effects of PTCAF1-KD and EZL1-KD on IES retention do not correlate with each other as well (e.g. rs=0.58 at ≥ 500 bp) as at shorter scales (e.g. rs=0.88 at ≤ 35 bp), likely reflecting that the products of these genes do not have exactly the same consequences on chromatin state at longer scales. 
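The significance tests for differences between rs values were computed in R with cocor (see Methods). For the simplest case of two correlations estimated from independent groups, the underlying Fisher r-to-Z calculation can be sketched in Python as follows (cocor's tests for dependent or overlapping groups are more involved and not shown):

```python
import math

def fisher_z_test_independent(r1, n1, r2, n2):
    """Two-tailed test for a difference between two correlation
    coefficients estimated from independent groups, via Fisher's
    r-to-Z transformation (approximate when applied to Spearman's rs).
    Returns the Z statistic and the two-tailed p-value."""
    z1, z2 = math.atanh(r1), math.atanh(r2)
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    z = (z1 - z2) / se
    p = math.erfc(abs(z) / math.sqrt(2.0))  # 2 * (1 - Phi(|z|))
    return z, p
```

For instance, with group sizes like those above (~15600 short IESs vs. ~380 long IESs), a difference between rs=0.88 and rs=0.58 is highly significant under this approximation.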
The decrease in IES retention score correlations between either DCL2/3-KD or DCL2/3/5-KD and histone modification groups largely reflects that iesRNAs have little influence on long IES excision.\n\n(A) For the purposes of regression, for each IES, negative control IES retention scores have been subtracted and zeroed when the subtraction yields a negative value, i.e. IRS_regression = max(IRS_experiment − mean(IRS_controls), 0) (controls = controls 1-4). This calculation was performed to reduce the effects of natural, background IES retention and systematic errors in IES retention score estimation as much as possible (see Figure S1, which examines these effects on regression). Hexagonal binning of IES retention scores was used to generate the plots in the lower triangular matrix. Red lines are for OLS regression, orange lines for LOWESS, and grey lines for ODR. IES retention score distributions given along the diagonal are shown to a maximum of 4000 IESs except for PGM-KD (maximum 8500 IESs). For each plot in the lower triangular matrix (Mi,j), Spearman’s rank correlation coefficient (rs) is given in the corresponding position diagonally opposite in the upper triangular matrix (Mj,i). Note rs=0.91 for biological replicates of NOWA1/2-KD (Figure S1B). (B) as in (A) but for short IESs (≤ 35 bp). IES retention score distributions given along the diagonal are shown to a maximum of 1500 IESs except for PGM-KD (maximum 3200 IESs). (C) as in (A) but for long IESs (≥ 500 bp). IES retention score distributions given along the diagonal are shown to a maximum of 50 IESs except for PGM-KD (maximum 75 IESs).\n\nTo date, information about the effects of gene silencing upon Paramecium’s other eliminated sequences (OESs), the remaining deleted DNA currently not annotated as IESs, has typically come from Southern blotting or dot blotting, e.g. 8,10,11,13,31, though more recent studies have also begun to investigate the overall effects of gene knockdown on these regions11,12. 
Most gene knockdowns that lead to IES retention also lead to retention of transposons within OESs8,10–13,31. For each of the knockdowns used to investigate IES retention, we examined OES retention by mapping the knockdown reads to genomic regions, extracted from a PGM-KD DNA assembly, which do not match MDSs and IESs from the reference P. tetraurelia assemblies (see Methods for more details). Since PGM-KD leads to the most IES retention, we assume, as in previous reports11,12, that much of the remaining DNA eliminated from the MIC genome can be recovered from this knockdown.\n\nExamination of the distributions of OES retention (Figure 4A–D), generated similarly to those for IES retention (Figure 2A–D), shows that while some features of OES retention resemble those of IES retention, e.g. the forms of the NOWA1/2-KD, TFIIS4-KD and DCL2/3/5-KD OES retention distributions are roughly similar, there are also important differences. OES retention due to DCL5-KD is very weak, hardly differing from that of a negative control, and DCL2/3/5-KD OES retention is weaker than that of DCL2/3-KD. Compared to the very strong IES retention following PGM-KD versus all other knockdowns (Figure 2D), PGM-KD OES retention is not as pronounced (Figure 4D). As with IESs, in part this may be because the strength of OES retention in the other knockdowns is greater, but it is also possible that DNA cutting enzymes other than PiggyMac may be involved in OES elimination.\n\n(A–D) For the sake of clarity, histogram bars with more than 2000 OESs have been truncated. The number of OESs binned in the left-most bars is as follows: (A) NOWA1/2-KD - 1238, TFIIS4-KD - 1230, DCL2/3/5-KD - 2812; (C) DCL2/3-KD - 473, DCL5-KD - 3788, DCL2/3/5-KD - 2812; (D) Control - 3882; PDSG2-KD – 3062; PGM-KD – 709. (E) OES retention score correlations and regressions. 
For the purposes of regression, for each OES, the mean of two negative control OES retention scores (control 1 and control 2) has been subtracted and zeroed when the subtraction yields a negative value, i.e. ORS_regression = max(ORS_experiment − mean(ORS_controls), 0). This calculation was performed to reduce the effects of natural, background OES retention and systematic errors in OES retention score estimation as much as possible (but note that when OES retention scores are very low, some residual background retention may lead to some correlation, e.g. between DCL5-KD and control 3). Hexagonal binning of OES retention scores was used to generate the plots in the lower triangular matrix. Red lines are for OLS regression, orange lines for LOWESS, and grey lines for ODR. For each plot in the lower triangular matrix (Mi,j), Spearman’s rank correlation coefficient (rs) is given in the corresponding position diagonally opposite in the upper triangular matrix (Mj,i).\n\nExamination of the correlations in OES retention in Figure 4E reveals a number of important effects: (i) between PGM-KD and other knockdowns, OES retention correlation is greater than general IES retention score correlations (Figure 3A); (ii) NOWA1/2-KD correlates most strongly with TFIIS4-KD and DCL2/3-KD; PTCAF1-KD and EZL1-KD correlate the best with each other, and moderately with DCL2/3-KD, DCL2/3/5-KD, TFIIS4-KD and NOWA1/2-KD; (iii) though a clear role for the Pdsg2 protein has yet to be defined, OES retention following PDSG2-KD correlates most strongly with PGM-KD; (iv) as OES length increases (e.g. > 4 kb) the PTCAF1-KD and EZL1-KD OES retention scores become very narrowly distributed in contrast to those of DCL2/3-KD, NOWA1/2-KD and PGM-KD, suggesting that chromatin state changes brought about by PtCaf1 and Ezl1 may have a very general effect on OES retention. 
From these observations, and from previously reported analyses of the overall sequence complexity of retained OES DNA following gene silencing11,12, it follows that OES elimination mostly does not need iesRNAs, whereas scnRNAs, together with TFIIS4 and Nowa1/2, are necessary for OES elimination. The stronger correlations in OES retention scores of other knockdowns relative to those of PGM-KD suggest greater cooperation between PiggyMac, scnRNAs and other protein factors in OES elimination than in general IES excision. Overall, the patterns of OES retention correlation for the different knockdowns resemble those of retention correlations for long IESs, suggesting most OESs may be treated like long IESs.\n\nAs the Nowa1/2 proteins have previously been implicated in removal of maternally controlled IESs31, but their precise role in Paramecium genome development has not yet been established, we wished to obtain additional insight into the function of these proteins, and their relation to other molecules involved in this process. Nowa1/2 proteins contain two distinct domains: an unstructured N-terminal domain consisting of alternating blocks of ‘FRG’ repeats and ‘GGWG’ repeats, and a structured C-terminal domain which is sufficient to route Nowa1 from the maternal MAC to the zygotic nuclei31. Nowa1/2's ‘FRG’ repeats resemble the ‘RGG’ boxes present in many hnRNP and other RNA-binding proteins, raising the possibility that they could facilitate interaction between Piwi-bound sRNAs and nascent transcripts in the old and the new MAC. One way Nowa1/2 may mediate these interactions is via its \"GGWG\" repeats, which resemble the \"WG/GW\" repeats found in Argonaute binding proteins43,44 (Argonaute proteins are members of the Piwi protein superfamily45). \"WG/GW\" repeat-containing proteins in Tetrahymena (Wag1 and CnjB) have been shown to interact with its Twi1 Piwi protein46. 
In principle, Nowa1/2 could interact with Paramecium’s sRNA-bound Piwi proteins in both the old and the developing new MAC, and during the internuclear transport of the sRNAs. Like PtCAF1-KD, NOWA1/2-KD also substantially decreases H3K9me3 and H3K27me3 in the old and developing new macronuclei10.\n\nTo gain more insight into the role of Nowa1/2 in Paramecium genome reorganization we analyzed the effects of NOWA1/2-KD upon IES retention and the sRNA population during macronuclear development (Figure 5). We previously showed that in control and DCL5-KD cells, MAC genome-matching scnRNAs decrease during development, consistent with RNA scanning, and that there is a progressive increase in iesRNAs3. It can now also be seen that OESs give rise mainly to 25 nt scnRNAs and, compared to IESs, relatively few iesRNAs (Figure 5A; see also Figure S3G,H).\n\n(A) Length distribution of small RNAs from control cells and NOWA1/2-KD cells (see Methods for details of histogram construction and normalization). Vector=vector backbone only. Figure 2SA shows the cytological characterization of the cells from which these sRNAs were isolated. (B) Radioactively end-labelled total RNA from NOWA1/2-KD cells and control cells (cells fed with E. coli producing dsRNAs corresponding to the plasmid L4440, which has no sequence target in the Paramecium genome; the gel portion showing the control sRNAs was previously shown in Figure 2A of 3 as \"Control 1; Experiment 1\"). (C) sRNA length distribution for DCL2/3/5-KD; the peak at 33 nt corresponds to an unknown sRNA family (e.g. 5'-GGAUCUAUCGUAUAGUGGUUAGUACCUGAGGCU-3') that matches a handful of locations in the MAC genome, and which we have also observed in other controls and knockdowns (e.g. from 8). See Figure 2SE for cytological characterization. (D) To examine the relationship between maternal control and IES retention scores (values determined in Table S1), maternal control scores were obtained from the maximal IES retention observed in Figure 6 of 28. 
Linear regression lines are shown along with Pearson's r and the two-tailed p-values for hypothesis testing with a null hypothesis that the regression slope is zero. (E) Retention of IESs tested by PCR for control and NOWA1/2-KD cells. NOWA1/2-KD IES retention scores are given to the right. Additional bands indicate the presence of heterodimers of ssDNAs with excised and unexcised IESs. (F) Electrophoretic mobility shift assay with MBP (maltose-binding protein), MBP-Nowa1 (alternating lanes) and oligonucleotides, visualized on a native polyacrylamide gel.\n\nSince it is known that smaller IESs can be nested within larger IESs47, we searched for IESs embedded within OESs. Visual inspection of DNA reads mapping to the assembled OESs from different gene knockdowns confirms the presence of shorter IESs in OESs, often with matching iesRNAs. We previously noted that, though both scnRNAs and iesRNAs are enriched at IES ends, scnRNAs overlap IES ends, whereas iesRNAs map very specifically within the limits of IESs3. Given the patterns of OES retention shown in Figure 4E, we therefore suggest that iesRNAs primarily facilitate precise IES excision, whereas scnRNAs facilitate both precise IES excision and imprecise OES elimination.\n\nCompared to control cells, scnRNAs accumulate and iesRNAs are reduced after NOWA1/2 silencing (Figure 5A, B). scnRNA accumulation is consistent with inhibition of RNA scanning when NOWA1/2 is silenced, i.e. MDS scnRNAs are not removed. We do not think that NOWA1/2-KD affects RNA scanning by interfering with transport of scnRNA-bearing Piwis (Ptiwi01/099), as we observed no differences in localization of GFP-tagged Ptiwi09 with and without silencing NOWA1/2 (Figure S4). On the other hand, the pronounced mobility shift in an electrophoretic mobility shift assay upon addition of synthetic MBP-Nowa1 protein (Figure 5F) indicates Nowa1 may interact with RNA, RNA:DNA hybrids, and, to a lesser degree, DNA. 
Hence NOWA1/2-KD-induced interference in RNA scanning in the old MAC could be due to disruption of interactions normally mediated by Nowa1/2 between nucleic acids and sRNA-carrying Ptiwi proteins.\n\nFrom the electrophoretic visualization of sRNAs (Figure 5B) it can be seen that iesRNAs are reduced following NOWA1/2-KD (since Figure 5A shows only proportions, it is not possible to deduce whether absolute levels of sRNAs change from this figure). A decrease in iesRNAs was also observed following DCL2/3 co-silencing, and, to a lesser degree, for the independent DCL2 and DCL3 silencings3. For the lower quantity of iesRNAs in NOWA1/2-KD cells, there are two possible explanations, both of which may be valid: (i) Nowa1/2 is directly involved in iesRNA production or stability; (ii) Nowa1/2 depletion inhibits scnRNA-dependent IES excision due to the inhibition of RNA scanning, which in turn inhibits iesRNA production. Teasing apart these possibilities is a promising future experimental avenue for gaining additional insight into the function of the Nowa1/2 proteins.\n\nNOWA1/2 silencing was previously reported to lead to retention of maternally controlled IESs, and indeed maternally controlled IES retention levels (following microinjection of complementary IESs into the old MAC) correlate well with the IES retention scores (Figure 5D). However, following NOWA1/2 silencing we also observe low levels of retention of some non-maternally controlled IESs by PCR analyses and in DNA-seq data (Figure 5D, E; Table S1). As the non-maternally controlled IESs that are weakly retained following NOWA1/2-KD are also weakly retained following DCL2/3-KD, we infer that they are weakly epigenetically controlled (Table S1). 
Therefore, due to the sensitivity of IES retention detection in DNA-seq data, some \"non-maternally controlled\" IESs may actually be weakly epigenetically controlled (it is also possible that with more sensitive methods these IESs would be classified as weakly maternally controlled). From Figure 3, we infer that Nowa1/2 is not only involved in scnRNA-dependent IES excision (epigenetically controlled IES excision), but also iesRNA-dependent IES excision, since IES retention scores of NOWA1/2-KD are positively correlated with those of both DCL2/3-KD and DCL5-KD, and strongly correlated with those of DCL2/3/5-KD.\n\nFrom these observations of the effects of NOWA1/2-KD, we propose that Nowa1/2's WG/GW (Argonaute-binding) repeats could mediate IES targeting by both Ptiwi01/09-bound scnRNAs (both old and new MAC) and iesRNAs bound to other Ptiwi proteins48 in the new MAC. The apparent failure to remove MAC genome-matching scnRNAs after NOWA1/2 silencing (Figure 5A) is consistent with the involvement of Nowa1/2 in RNA scanning in the old macronucleus, as expected if these proteins mediate pairing of scnRNAs with nascent maternal non-coding transcripts. In principle, Nowa1/2 proteins could interact with such transcripts via their FRG (RNA-binding RG boxes) repeats. From the correlations in Figure 4, we infer that Nowa1/2 proteins may operate predominantly in concert with scnRNAs and not iesRNAs during OES elimination.\n\n\nDiscussion\n\nThe strong similarity of IES retention following silencing of DCL2/3/5, NOWA1/2 and TFIIS4 suggests that the protein products of these genes may work in the same sRNA-dependent DNA elimination pathway during short IES excision, and that DCL2/3, NOWA1/2 and TFIIS4 work in the predominantly scnRNA-dependent DNA pathway responsible for OES and long IES elimination. PGM-KD-induced DNA retention correlates very weakly with that of other knockdowns for short IESs, but moderately with longer IESs and OESs. 
This suggests that for most short DNA excision, interactions between PiggyMac and the other proteins and scnRNAs/iesRNAs are less important than for longer DNA excision. For comparison, the silencing effects upon these and related genes affecting Paramecium genome development, and the expression/localization patterns of their encoded proteins, are summarized in Table 1 and Table 2, respectively.\n\nEven though the knockdowns of DCL2/3/5, NOWA1/2 and TFIIS4 seem to affect short IESs in a very similar manner, they affect the development-specific sRNAs in quite different ways. DCL2/3/5-KD leads to elimination of both scnRNAs and iesRNAs; NOWA1/2-KD inhibits RNA scanning (MAC genome-matching scnRNAs are not properly removed) and iesRNA production; TFIIS4-KD has little, if any, effect upon RNA scanning, but does eliminate iesRNA production12. Though the analysis was limited to four IESs, TFIIS4 is thought to be involved in general IES transcription in the developing new MAC12, including that of an IES whose excision has no apparent requirement for scnRNAs, iesRNAs or TFIIS412 (51A4404; see Table S1). This transcription is proposed to produce RNA to which the sRNAs bind, facilitating IES targeting12. Since TFIIS4-KD also leads to OES retention, it is likely that TFIIS4 is also involved in extensive transcription of these regions. It is also apparent that even when there is a good correlation in IES or OES retention scores, the normal expression and localization patterns of tagged proteins encoded by the genes tested in these knockdowns may be quite distinct, e.g. 
Nowa1/2 proteins are present in both old and new macronuclei, whereas TFIIS4 is predominantly present in new macronuclei (Table 2).\n\nAt present, the simplest explanation which reconciles the observed correlations in IES and OES retention scores between gene knockdowns, the effects of these knockdowns on sRNAs, and the expression and localization of the proteins encoded by these genes is that: (i) TFIIS4-KD removes the RNA transcripts which both scnRNAs and iesRNAs need to bind to their target IESs (or OESs, in the case of scnRNAs), hence IES retention following TFIIS4-KD correlates well with that following DCL2/3/5-KD (and TFIIS4-KD OES retention correlates modestly with DCL2/3-KD); (ii) NOWA1/2-KD prevents binding of Piwi-bound scnRNAs and iesRNAs to their targets via the nascent RNAs produced by TFIIS4, hence NOWA1/2-KD IES retention correlates well with both TFIIS4-KD and DCL2/3/5-KD IES retention (and NOWA1/2-KD OES retention correlates well with both TFIIS4-KD and DCL2/3-KD OES retention).\n\nFor the shortest IESs (those < 35 bp), we propose that the high IES retention score correlations between DCL2/3/5-KD, NOWA1/2-KD, TFIIS4-KD, PTCAF1-KD and EZL1-KD may be a consequence of cooperation of scnRNAs, iesRNAs, RNA transcripts and chromatin state changes allowing these genomic regions to be recognized and targeted (Figure 3B). For longer IESs the correlations in IES retention score between the different possible pairs of DCL2/3-KD, DCL2/3/5-KD, NOWA1/2-KD, TFIIS4-KD are all strong, as are those between PTCAF1-KD and EZL1-KD, but those between the latter two knockdowns and the former ones are modest (Figure 3C). 
This reflects that long IESs have a greater requirement for scnRNAs, Nowa1/2 proteins and TFIIS4-dependent RNA transcripts, and very little requirement for iesRNAs, but that only some IES retention due to PTCAF1-KD and EZL1-KD also occurs when either scnRNAs, Nowa1/2 or RNA transcripts are depleted.\n\nThough potential co-operation between chromatin modifying genes and scnRNA/iesRNA genes in IES excision is suggested by the modest correlations between their knockdown-induced IES retention scores, much of the excision requiring the Ezl1 and PtCAF1 chromatin-modifying proteins does not appear to need development-specific sRNAs, as there is generally much less IES retention after DCL2/3/5-KD (median 0.053) than after EZL1-KD or PTCAF1-KD (median 0.37 and 0.23, respectively) even though DCL2/3/5-KD effectively eliminates the IES-matching sRNAs (Figure 5C). DCL2/3 knockdown abolishes most H3K27me3 and H3K9me3 during new MAC development11, like EZL1-KD, PTCAF1-KD and NOWA1/2-KD10,11, whereas DCL5-KD does not influence these modifications11. Thus it has been suggested that scnRNAs alone may guide histone modifications that facilitate DNA elimination in Paramecium11.\n\nThe idea that Paramecium’s scnRNAs guide chromatin state changes in the developing new MAC should, however, be interpreted cautiously. Though both PTCAF1-KD and NOWA1/2-KD affect H3K27me3 and H3K9me3 histone modifications in the new MAC, PtCAF1 and Nowa1/2 also localize in the actively expressed old MAC, and both knockdowns affect H3K9me3 and H3K27me3 and the scanning of scnRNAs proposed to occur in this nucleus10. This shows that PtCAF1 and Nowa1/2 (and potentially also the chromatin modifications that they influence) operate upstream of any possible scnRNA-directed chromatin modifications in the new MAC. It will thus be important to determine whether, like PTCAF1-KD and NOWA1/2-KD, EZL1-KD influences RNA scanning and the scnRNA population. 
Assuming fairly accurate quantification, more IESs are very strongly retained following DCL2/3/5-KD than EZL1-KD or PTCAF1-KD (e.g. at a threshold of > 0.7, 4316 IESs for DCL2/3/5-KD, compared to 1077 and 649 IESs for EZL1-KD and PTCAF1-KD, respectively), suggesting there may be some scnRNA/iesRNA-dependent IES excision which does not require the chromatin state changes induced by Ezl1 and PtCAF1. In the future, it will thus be necessary to determine how the silencing of chromatin modifying genes, such as EZL1 and PTCAF1, and the removal of histone modifications, prevent excision of scnRNA/iesRNA-independent IESs, and, reciprocally, how depletion of scnRNAs and iesRNAs by DCL2/3/5 silencing prevents IES excision independently of Ezl1- or PtCAF1-dependent histone modifications.\n\nPreviously, it was proposed that the mechanisms involved in imprecise DNA elimination in Paramecium are distinct from those involved in IES excision6. For imprecise DNA elimination, it was suggested that the resultant boundaries are a mixture of AT-rich direct repeats6. Since these repeats are of arbitrary length and are in relatively AT-rich genomic regions, it is straightforward to find them purely by chance: e.g. 39 out of 40 of the proposed direct repeats bounding the imprecisely eliminated DNA contained a TA dinucleotide, with all 7 of the two-nucleotide repeats being TA6. We examined additional junctions of imprecisely eliminated regions (OESs) in DNA from control cells and consistently found that where direct repeats were present and unambiguous, they invariably contained a TA dinucleotide. We therefore favor the simple explanation that, like IESs, the imprecisely eliminated OESs are excised by PiggyMac transposases, with TA dinucleotides as the invariant bases necessary for this cleavage47. 
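The junction check described above can be sketched as follows; the boundary representation and helper names are hypothetical simplifications, not the code used for the analysis:

```python
def boundary_direct_repeat(mds_end, eliminated_start):
    """Longest direct repeat spanning an elimination boundary: the
    longest sequence that is both a suffix of the retained (MDS)
    flank and a prefix of the eliminated sequence."""
    for k in range(min(len(mds_end), len(eliminated_start)), 0, -1):
        if mds_end[-k:] == eliminated_start[:k]:
            return mds_end[-k:]
    return ""

def repeat_contains_ta(mds_end, eliminated_start):
    """Does the boundary direct repeat contain a TA dinucleotide?"""
    return "TA" in boundary_direct_repeat(mds_end, eliminated_start)
```

In practice the flanking sequences would come from aligned junction reads; the point of the check is that an arbitrary-length repeat drawn from AT-rich sequence is very likely to contain a TA by chance alone.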
We previously noted that long IESs in Paramecium tend to be underrepresented in MIC genome regions that develop into coding sequences in the MAC genome, suggesting counterselection against such IESs49, either due to their lower excision efficiency or their higher excision imprecision. Supporting the latter, we find that IESs with evidence of alternative excision in their vicinity are longer than those without such evidence (Figure S5E).\n\nFor the correlations between pairs of gene knockdowns, the effects on short IESs can be seen to differ from those on long IESs, and OES retention correlations more closely resemble those of long IESs than those of short IESs. We therefore propose that there is a continuum of excision precision, which is highest for the shortest IESs and lowest for the longest IESs and OESs. In this case, the abbreviation “IES” may be a misnomer, since excision of longer DNA may lead to chromosome fragmentation rather than internal elimination. In a model in which PiggyMac operates as a dimer1 that recognizes and cuts at two sites, for short IESs it is most likely that both subunits recognize and cut where expected. As IES length increases, DNA elimination imprecision increases. When IESs/OESs become sufficiently long, the two PiggyMac proteins become increasingly unlikely to make contact, or the cut sites may be too far apart to be rejoined efficiently (the latter possibility was proposed in ref. 6), and thus imprecise chromosome breakage results. Thus, consistent with our overall observations, it follows that the recognition and delimitation of the longest eliminated DNA sequences, including transposons, benefit most from additional factors, such as sRNAs or chromatin alterations.\n\nIn conclusion, we have shown that comparisons of DNA excision readout following gene silencing by RNAi now provide the means to identify and study associations among genome editing components in Paramecium on a genomic scale. 
Using this approach, we have proposed putative associations between pairs of Dicer-like and Piwi proteins involved in DNA elimination, which were subsequently confirmed by experimental pulldowns of these proteins and their bound sRNAs48. In principle, the application of similar methods would also be useful in identifying and analyzing the molecular components of pathways in forms of developmental DNA deletion found in diverse eukaryotes, including that occurring in parasitic nematodes50, sea lamprey51, and copepods52. In the case of the parasitic Ascaris nematodes, this includes DNA elimination which is considered unlikely to have a requirement for development-specific sRNAs such as those in ciliates53, but which could conceivably require specific chromatin modifications. As highlighted by the analysis of NOWA1/2 knockdown effects in this paper, these methods, together with complementary experiments, set the stage to begin generally disentangling interactions between candidate genome editing genes.\n\n\nData and software availability\n\nDatasets 1 and 2, IES and OES retention scores, respectively, are available from Zenodo: DOI, 10.5281/zenodo.82376654.\n\nThe latest source code of after_ParTIES is available from GitHub: https://github.com/gh-ecs/After_ParTIES.\n\nThe archived source code as at the time of publication is available from Zenodo: DOI, 10.5281/zenodo.83223655.\n\nLicense: BSD 2-Clause License.",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis work was supported by the Swiss National Science Foundation (31003A_129957, 31003A_146257 and 31003A_166407 and NCCR RNA & Disease to MN); the European Research Council (EPIGENOME and G-EDIT to MN); the European Cooperation in Science and Technology (Action BM1102 \"Ciliates as model systems to study genome evolution, mechanisms of non-Mendelian inheritance, and their roles in environmental adaptation\" to MN). CDW was supported by a Ph.D. fellowship from the French Ministère de l’Enseignement Supérieur et de la Recherche and the French National Research Agency (“Inferno”).\n\n\nAcknowledgements\n\nThe authors wish to thank Linda Sperling (Institute for Integrative Biology of the Cell (I2BC), CNRS, CEA, University of Paris Sud, France) for her feedback on this manuscript.\n\n\nSupplementary material\n\nFigure S1. IES retention score correlations among controls and evaluation of reproducibility of NOWA1/2-KD. (A) Where indicated, IES retention score subtractions are to a minimum value of zero. IES retention scores for CtrlM are the mean of the other controls. Randomly permuted Ctrl1 IES retention scores were produced with NumPy's random.permutation function. For each plot in the lower triangular matrix (Mi,j), Spearman’s rank correlation coefficient (rs) is given in the corresponding position diagonally opposite in the upper triangular matrix (Mj,i). (B) Correlation between NOWA1/2_a-KD and NOWA1/2_c-KD IES retention scores. Spearman’s rank correlation coefficient (rs) is given. As described in the Methods section, mean control retention scores were subtracted before performing the regressions.\n\nClick here to access the data.\n\nFigure S2. Relationships between knockdown IES retention scores and IES end base frequencies. Graphs constructed as in Figure 2G–H.\n\nClick here to access the data.\n\nFigure S3. 
Cytological characterization and verification of knockdown efficacy. (A) Cytological characterization of control and NOWA1/2-KD cells whose sRNAs were sequenced. Bar graphs represent the percentages of cells in the vegetative stage or during MAC development, coloured per the observed developmental stages: \"Skein\" - cells displaying maternal MACs stretched to a characteristic convoluted structure; \"Frag\" - cells with fragmentation of the maternal MAC with no evident new MAC; and \"Frag + new MAC\" - cells at later stages of development with a fragmented MAC and a new developing MAC. In the early stage (\"E\") ~50% of cells have fragmented maternal MACs. The middle stage (\"M\") and late stage (\"L\") correspond to succeeding stages with an increasing proportion of new MACs. (B) Western blot (6% SDS-PAGE) for control and NOWA1/2-KD cells. (C) Cytological characterization of DCL2/3/5-KD. Legend as in (A). Cytological characterization of DCL2/3-KD and DCL5-KD cells was given in ref. 3. (D) Northern blot for DCL2/3/5-KD and control cells. (E) Cytological characterization of PTCAF1-KD and control (ND7-KD) cells. (F) Northern blot for PTCAF1-KD and control (ND7-KD) cells. (G) Sequence logo of 25 nt scnRNAs from late development control cells. (H) Sequence logo of 27 nt iesRNAs from late development control cells.\n\nClick here to access the data.\n\nFigure S4. Localisation of Ptiwi09-GFP during sexual reproduction in control and NOWA1/2-KD cells. The GFP construct was obtained from ref. 9. N-terminally GFP-tagged Ptiwi09 localized in the maternal MAC upon meiosis and later in the macronuclear fragments and developing new MACs in control and NOWA1/2-KD cells. Scale bar: 10 μm.\n\nClick here to access the data.\n\nFigure S5. Properties of non-reference P. tetraurelia MAC+IES scaffolds, OESs and IESs. (A) Base composition of 7266 IDBA-UD scaffolds assembled from PGM-KD reads not mapping to the P. tetraurelia MAC genome and MAC genome with IES assemblies. 
(B) PGM-KD sequence coverage vs. scaffold length of the 4799 scaffolds with little retention in an empty silencing vector control. (C) Illustrative read mapping by BWA, with empty silencing vector control (EV) reads aligned to a scaffold assembled by IDBA-UD using PGM-KD reads not mapping to the current P. tetraurelia MAC+IES genome assembly. OESs between the MDSs are extracted by identifying genomic regions with low read coverage. The positions of two IESs between MDSs are shown. Note that the boundaries of MDSs and IESs are not precise – e.g. the right-most MDS region has a low-coverage region preceding the main MDS region. (D) OES retention scores (scales on right y-axes) vs. OES length. Lines are exponentially weighted moving averages (EWMA) with spans of 50 bp.\n\nClick here to access the data.\n\nTable S1. Comparison of maternal control scores and IES retention scores. IESs are those described in Duharcourt et al. (1998), and maternal control scores were calculated as the maximal observed retention described previously (Duharcourt et al., 1998; Sandoval et al., 2014). Maternally controlled IESs are those whose maternal control score is > 0; non-maternally controlled IESs are those with maternal control score = 0. 
For all the IESs listed in this table the mean retention scores across the four controls used in this paper are 0.00.\n\nClick here to access the data.\n\nSupplementary Data S1 to S8 is available from Zenodo: DOI, 10.5281/zenodo.82376652.\n\nSupplementary Data S1: Statistics for Spearman’s correlations for all IESs\n\nSupplementary Data S2: Statistics for Spearman’s correlations for IESs <= 35 bp\n\nSupplementary Data S3: Statistics for Spearman’s correlations for IESs >= 500 bp\n\nSupplementary Data S4: Statistics for Spearman’s correlations for OESs (>= 300 bp and 3x-70x PGM-KD coverage)\n\nSupplementary Data S5: Statistics for comparisons of Spearman correlation differences for all IESs\n\nSupplementary Data S6: Statistics for comparisons of Spearman correlation differences for all OESs\n\nSupplementary Data S7: Statistics for comparisons of the differences in Spearman correlations between IESs and OESs\n\nSupplementary Data S8: OES sequences\n\n\nReferences\n\nArnaiz O, Mathy N, Baudry C, et al.: The Paramecium germline genome provides a niche for intragenic parasitic DNA: evolutionary dynamics of internal eliminated sequences. PLoS Genet. 2012; 8(10): e1002984. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBaudry C, Malinsky S, Restituito M, et al.: PiggyMac, a domesticated piggyBac transposase involved in programmed genome rearrangements in the ciliate Paramecium tetraurelia. Genes Dev. 2009; 23(21): 2478–83. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSandoval PY, Swart EC, Arambasic M, et al.: Functional Diversification of Dicer-like Proteins and Small RNAs Required for Genome Sculpting. Dev Cell. 2014; 28(2): 174–88. PubMed Abstract | Publisher Full Text\n\nAmar L: Chromosome end formation and internal sequence elimination as alternative genomic rearrangements in the ciliate Paramecium. J Mol Biol. 1994; 236(2): 421–6. 
PubMed Abstract | Publisher Full Text\n\nForney JD, Blackburn EH: Developmentally controlled telomere addition in wild-type and mutant paramecia. Mol Cell Biol. 1988; 8(1): 251–8. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLe Mouël A, Butler A, Caron F, et al.: Developmentally regulated chromosome fragmentation linked to imprecise elimination of repeated sequences in paramecia. Eukaryot Cell. 2003; 2(5): 1076–90. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCaron F: A high degree of macronuclear chromosome polymorphism is generated by variable DNA rearrangements in Paramecium primaurelia during macronuclear differentiation. J Mol Biol. 1992; 225(3): 661–78. PubMed Abstract | Publisher Full Text\n\nArambasic M, Sandoval PY, Hoehener C, et al.: Pdsg1 and Pdsg2, novel proteins involved in developmental genome remodelling in Paramecium. PLoS One. 2014; 9(11): e112899. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBouhouche K, Gout JF, Kapusta A, et al.: Functional specialization of Piwi proteins in Paramecium tetraurelia from post-transcriptional gene silencing to genome remodelling. Nucleic Acids Res. 2011; 39(10): 4249–64. PubMed Abstract | Publisher Full Text | Free Full Text\n\nIgnarski M, Singh A, Swart EC, et al.: Paramecium tetraurelia chromatin assembly factor-1-like protein PtCAF-1 is involved in RNA-mediated control of DNA elimination. Nucleic Acids Res. 2014; 42(19): 11952–64. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLhuillier-Akakpo M, Frapporti A, Denby Wilkes C, et al.: Local effect of enhancer of zeste-like reveals cooperation of epigenetic and cis-acting determinants for zygotic genome rearrangements. PLoS Genet. 2014; 10(9): e1004665. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMaliszewska-Olejniczak K, Gruchota J, Gromadka R, et al.: TFIIS-Dependent Non-coding Transcription Regulates Developmental Genome Rearrangements. PLoS Genet. 2015; 11(7): e1005383. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nLepère G, Nowacki M, Serrano V, et al.: Silencing-associated and meiosis-specific small RNA pathways in Paramecium tetraurelia. Nucleic Acids Res. 2009; 37(3): 903–15. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMochizuki K, Fine NA, Fujisawa T, et al.: Analysis of a piwi-related gene implicates small RNAs in genome rearrangement in tetrahymena. Cell. 2002; 110(6): 689–99. PubMed Abstract | Publisher Full Text\n\nSwart EC, Wilkes CD, Sandoval PY, et al.: Genome-wide analysis of genetic and epigenetic control of programmed DNA deletion. Nucleic Acids Res. 2014; 42(14): 8970–83. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNoto T, Kataoka K, Suhren JH, et al.: Small-RNA-Mediated Genome-wide trans-Recognition Network in Tetrahymena DNA Elimination. Mol Cell. 2015; 59(2): 229–42. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFang W, Wang X, Bracht JR, et al.: Piwi-interacting RNAs protect DNA against loss during Oxytricha genome rearrangement. Cell. 2012; 151(6): 1243–55. PubMed Abstract | Publisher Full Text | Free Full Text\n\nZahler AM, Neeb ZT, Lin A, et al.: Mating of the stichotrichous ciliate Oxytricha trifallax induces production of a class of 27 nt small RNAs derived from the parental macronucleus. PLoS One. 2012; 7(8): e42371. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMochizuki K, Gorovsky MA: Conjugation-specific small RNAs in Tetrahymena have predicted properties of scan (scn) RNAs involved in genome rearrangement. Genes Dev. 2004; 18(17): 2068–73. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLepère G, Bétermier M, Meyer E, et al.: Maternal noncoding transcripts antagonize the targeting of DNA elimination by scanRNAs in Paramecium tetraurelia. Genes Dev. 2008; 22(11): 1501–12. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nSchoeberl UE, Kurth HM, Noto T, et al.: Biased transcription and selective degradation of small RNAs shape the pattern of DNA elimination in Tetrahymena. Genes Dev. 2012; 26(15): 1729–42. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCheng CY, Vogt A, Mochizuki K, et al.: A domesticated piggyBac transposase plays key roles in heterochromatin dynamics and DNA cleavage during programmed DNA deletion in Tetrahymena thermophila. Mol Biol Cell. 2010; 21(10): 1753–62. PubMed Abstract | Publisher Full Text | Free Full Text\n\nChalker DL, Yao MC: DNA elimination in ciliates: transposon domestication and genome surveillance. Annu Rev Genet. 2011; 45: 227–46. PubMed Abstract | Publisher Full Text\n\nFass JN, Joshi NA, Couvillion MT, et al.: Genome-Scale Analysis of Programmed DNA Elimination Sites in Tetrahymena thermophila. G3 (Bethesda). 2011; 1(6): 515–22. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMalone CD, Anderson AM, Motl JA, et al.: Germ line transcripts are processed by a Dicer-like protein that is essential for developmentally programmed genome rearrangements of Tetrahymena thermophila. Mol Cell Biol. 2005; 25(20): 9151–64. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMochizuki K, Gorovsky MA: A Dicer-like protein in Tetrahymena has distinct functions in genome rearrangement, chromosome segregation, and meiotic prophase. Genes Dev. 2005; 19(1): 77–89. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDuharcourt S, Butler A, Meyer E: Epigenetic self-regulation of developmental excision of an internal eliminated sequence on Paramecium tetraurelia. Genes Dev. 1995; 9(16): 2065–77. PubMed Abstract | Publisher Full Text\n\nDuharcourt S, Keller AM, Meyer E: Homology-dependent maternal inhibition of developmental excision of internal eliminated sequences in Paramecium tetraurelia. Mol Cell Biol. 1998; 18(12): 7075–85. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nLiu Y, Taverna SD, Muratore TL, et al.: RNAi-dependent H3K27 methylation is required for heterochromatin formation and DNA elimination in Tetrahymena. Genes Dev. 2007; 21(12): 1530–45. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTaverna SD, Coyne RS, Allis CD: Methylation of histone h3 at lysine 9 targets programmed DNA elimination in tetrahymena. Cell. 2002; 110(6): 701–11. PubMed Abstract | Publisher Full Text\n\nNowacki M, Zagorski-Ostoja W, Meyer E: Nowa1p and Nowa2p: novel putative RNA binding proteins involved in trans-nuclear crosstalk in Paramecium tetraurelia. Curr Biol. 2005; 15(18): 1616–28. PubMed Abstract | Publisher Full Text\n\nSkouri F, Cohen J: Genetic approach to regulated exocytosis using functional complementation in Paramecium: identification of the ND7 gene required for membrane fusion. Mol Biol Cell. 1997; 8(6): 1063–71. PubMed Abstract | Publisher Full Text | Free Full Text\n\nArnaiz O, Goût JF, Bétermier M, et al.: Gene expression in a paleopolyploid: a transcriptome resource for the ciliate Paramecium tetraurelia. BMC Genomics. 2010; 11: 547. PubMed Abstract | Publisher Full Text | Free Full Text\n\nArnaiz O, Sperling L: ParameciumDB in 2011: new tools and new data for functional and comparative genomics of the model ciliate Paramecium tetraurelia. Nucleic Acids Res. 2011; 39(Database issue): D632–6. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDenby Wilkes C, Arnaiz O, Sperling L: ParTIES: a toolbox for Paramecium interspersed DNA elimination studies. Bioinformatics. 2016; 32(4): 599–601.PubMed Abstract | Publisher Full Text\n\nDiedenhofen B, Musch J: cocor: a comprehensive solution for the statistical comparison of correlations. PLoS One. 2015; 10(3): e0121945. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMyers L, Sirois MJ: Spearman Correlation Coefficients, Differences between. Encyclopedia of Statistical Sciences. 
John Wiley & Sons, Inc.; 2004. Publisher Full Text\n\nLi H, Durbin R: Fast and accurate short read alignment with Burrows-Wheeler transform. Bioinformatics. 2009; 25(14): 1754–60. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPeng Y, Leung HC, Yiu SM, et al.: IDBA-UD: a de novo assembler for single-cell and metagenomic sequencing data with highly uneven depth. Bioinformatics. 2012; 28(11): 1420–8. PubMed Abstract | Publisher Full Text\n\nKim D, Langmead B, Salzberg SL: HISAT: a fast spliced aligner with low memory requirements. Nat Methods. 2015; 12(4): 357–60. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCrooks GE, Hon G, Chandonia JM, et al.: WebLogo: a sequence logo generator. Genome Res. 2004; 14(6): 1188–90. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAury JM, Jaillon O, Duret L, et al.: Global trends of whole-genome duplications revealed by the ciliate Paramecium tetraurelia. Nature. 2006; 444(7116): 171–8. PubMed Abstract | Publisher Full Text\n\nEl-Shami M, Pontier D, Lahmy S, et al.: Reiterated WG/GW motifs form functionally and evolutionarily conserved ARGONAUTE-binding platforms in RNAi-related components. Genes Dev. 2007; 21(20): 2539–44. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPartridge JF, DeBeauchamp JL, Kosinski AM, et al.: Functional separation of the requirements for establishment and maintenance of centromeric heterochromatin. Mol Cell. 2007; 26(4): 593–602. PubMed Abstract | Publisher Full Text\n\nSwarts DC, Makarova K, Wang Y, et al.: The evolutionary journey of Argonaute proteins. Nat Struct Mol Biol. 2014; 21(9): 743–53. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBednenko J, Noto T, DeSouza LV, et al.: Two GW repeat proteins interact with Tetrahymena thermophila argonaute and promote genome rearrangement. Mol Cell Biol. 2009; 29(18): 5020–30. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nGratias A, Lepère G, Garnier O, et al.: Developmentally programmed DNA splicing in Paramecium reveals short-distance crosstalk between DNA cleavage sites. Nucleic Acids Res. 2008; 36(10): 3244–51. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFurrer DI, Swart EC, Kraft MF, et al.: Two Sets of Piwi Proteins Are Involved in Distinct sRNA Pathways Leading to Elimination of Germline-Specific DNA. Cell Rep. 2017; 20(2): 505–20. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSwart EC, Nowacki M: The eukaryotic way to defend and edit genomes by sRNA-targeted DNA deletion. Ann N Y Acad Sci. 2015; 1341: 106–14. PubMed Abstract | Publisher Full Text\n\nWang J, Mitreva M, Berriman M, et al.: Silencing of germline-expressed genes by DNA elimination in somatic cells. Dev Cell. 2012; 23(5): 1072–80. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSmith JJ, Antonacci F, Eichler EE, et al.: Programmed loss of millions of base pairs from a vertebrate genome. Proc Natl Acad Sci U S A. 2009; 106(27): 11212–7. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSun C, Wyngaard G, Walton DB, et al.: Billions of basepairs of recently expanded, repetitive sequences are eliminated from the somatic genome during copepod development. BMC Genomics. 2014; 15: 186. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWang J, Czech B, Crunk A, et al.: Deep small RNA sequencing from the nematode Ascaris reveals conservation, functional diversification, and novel developmental profiles. Genome Res. 2011; 21(9): 1462–77. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSwart EC: Identification and analysis of functional associations among natural eukaryotic genome editing components. Zenodo. 2017. Data Source\n\ngh-ecs: gh-ecs/After_ParTIES: First release. Zenodo. 2017. Data Source"
}
|
[
{
"id": "24870",
"date": "15 Aug 2017",
"name": "Kazufumi Mochizuki",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nIn this manuscript, the authors aimed to classify genes reported to be involved in DNA elimination of Paramecium by comparing the sensitivities of internal eliminated sequences (IESs) and other eliminated sequences (OESs) to RNAi knockdown (KD) of the genes. Their classification method provides quantitative scores for pairwise correlations of genes in the DNA elimination pathway, and they successfully separated two groups of genes: RNA-associated genes and histone modification genes. The authors also showed that shorter and longer IESs have different sensitivities to RNAi KD of different genes and that the sensitivities of OESs were similar to those of longer IESs. Therefore, they suggested that the mechanisms for IES and OES elimination are overlapping. I believe the presented data is compelling and the strategy described is useful to elucidate the function of uncharacterized genes in DNA elimination in future studies.\n\nMajor points\nBecause this is a method article, I suggest the authors minimize biological interpretations and rather extensively discuss methodological/strategic validity and technical/theoretical disadvantages, if any. Related to this point, it may be better to present the data shown in Figure 5 in another paper.\n\nThe authors' strategy does not distinguish the direct roles of a gene in DNA elimination from its indirect effects on DNA elimination. I believe this point should be discussed. 
Some possibilities that may be discussed are: a) histone modification genes may affect DNA elimination not only directly, through histone modifications on IESs/OESs, but also indirectly, through regulating the expression of genes involved in DNA elimination; b) RNA-associated genes may act not only on chromatin but also in the post-transcriptional regulation of genes involved in DNA elimination, like classical RNAi. In these cases, the direct targets of RNA-associated genes and histone modification genes may be identical, and the observed differences are due to indirect effects.\n\nI am confused by the following descriptions in the 2nd and the 3rd paragraphs of Results (page 8), which need some clarification. a) Though they first mentioned that \"these knockdowns may indicate most of the Paramecium IESs requiring scnRNAs and iesRNAs for their excision\", they then wrote in the next paragraph that \"We therefore infer that most IES excision in Paramecium does not require IES-targeting sRNAs.\" b) I do not see why \"most IES excision in Paramecium does not require IES-targeting sRNAs\" is \"consistent\" with the fact that \"scnRNAs alone are insufficient for most IES excision\". I think a requirement cannot be inferred from an insufficiency.\n\nAccording to the authors' recent publication (Allen et al. 2017), the production of iesRNAs requires circularization of IESs. Therefore, the different sensitivities of short and long IESs (and OESs) to KDs of the DNA elimination genes are likely related to their ability to form circles when they are eliminated. I think this possibility should be discussed.\n\nThere is very little discussion of PDSG2 in the manuscript. The IES retention scores of PDSG2 KD show no strong correlation with those of any other KDs. Does this mean PDSG2 forms a third group? If so, what would be the function of this third group? 
If they do not provide any additional explanation, it may be better to consider removing the PDSG2-related data from this study.\n\nOther minor points\nAbstract: I do not see \"comparisons of experimentally-induced DNA retention\" in this study.\n\nAbstract: Although I agree that the approach is quantitative, it is not clear how the authors can claim it is sensitive.\n\nPtCAF1 encodes a protein similar to a chromatin remodeling factor, and thus classifying this gene as \"histone modification\" would be a misnomer.\n\nFigure 3A: The colors for \"IESs per bin\" are not visible in the individual plots (all dots look purple). Same for Figure 4E.\n\nIs the rationale for developing the new method (or application) clearly explained? Yes\n\nIs the description of the method technically sound? Yes\n\nAre sufficient details provided to allow replication of the method development and its use by others? Yes\n\nIf any results are presented, are all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions about the method and its performance adequately supported by the findings presented in the article? Yes",
"responses": []
},
{
"id": "24871",
"date": "22 Aug 2017",
"name": "Martin Simon",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nIn Paramecium, as in other ciliates, massive genome rearrangements in the form of the elimination of thousands of transposon-derived IES sequences occur after sexual fertilization. Some of these excision events have been described to be controlled by small RNAs acting in a genome comparison mechanism. Swart and colleagues compared IES retention and OES retention in F1 progeny after dsRNA-induced knockdown of genes involved in these genome rearrangements, and aimed to classify the functional relationships of genes by the similarity of the individual IES/OES retention. They show that RNA-associated and chromatin-associated genes form individual groups. In addition, useful information is provided by the de novo assembly of eliminated sequences apart from IESs. The paper represents an interesting analysis providing useful tools and solid data for future research applications.\n\nMajor points:\nAlthough the described method to classify IES retention after silencing of different genes is innovative, the paper is a balancing act between a methods and a research paper. In its current shape, the conclusions drawn and the main results of this paper are of biological significance, pushing the methodological aspect into the background. Right now, the paper is more a research paper. 
The authors need to decide on a strategy.\n\nMinor Points:\nThe identification of the functional groups of Dcl2/3/5, TFIIS4, NowaX and Ezl1, PtCAF also reflects the expression behavior of these enzymes during development (with the summary of Dcl2/3 early - Dcl5 late). As it makes sense at first glance that cooperating enzymes need to become induced at the same time, the authors should also discuss whether the dsRNA feeding efficiency could be different in early and late stages of development.\n\nIn Table 2, the induction time points for some enzymes differ from the microarray data in ParameciumDB described in Arnaiz et al. 2010. For instance, TFIIS4 is indicated to be expressed late in Table 2, but according to Maliszewska-Olejniczak et al. 2015 and ParameciumDB, this gene is induced early during autogamy.\n\nAs most of the analyses here rely on the calculation of IES retention scores, please give a more precise description of their definition in the text, as this term could be new for readers outside the field. Maybe also give more information in the Methods on which reads that map to an IES junction or to an IES are counted, in particular those which only partially overlap the IES. Also, are any cutoffs set at any point of the calculation?\n\nIn Methods: IES and OES retention score regressions: It is nice to see the effort put into creating different kinds of regression. However, the authors do not discuss the regressions in the text (also not in Supp. Fig. S1); they only focus on the correlation coefficients and the IES retention score distributions.\n\nIntroduction: asexual reproduction would be cell division in paramecia, which is not autogamy or conjugation.\n\nIn Methods, Estimation of OES retention section: the first sentence is too long and difficult to follow. Kindly rephrase it. 
The authors do not explain how exactly OES scores are obtained.\n\nIt is unclear how the authors mapped the 3882 OESs with variable boundaries (obtained from the 5236 scaffolds in the new assembly) to the already available 697 scaffolds (in the MAC+IES assembly).\n\nIn Results: Examination of basic IES retention properties (last paragraph): shouldn't it be Figure 2G–J?\n\nFigure 2: In the legend, H should be replaced with J. Also, in the sentence \"(G–H) Mean base frequencies of positions 1-3 after the TA repeat relative are plotted relative to IES retention score\" there seems to be a typographical error with the use of the word relative. And, for G–J, it would be nice to say \"Bases after TA repeat\" along the y-axis of the base frequency plots.\n\nDiscussion: the sentence beginning \"The idea that…\" is too long.\n\nThe authors say they have attempted to produce a better assembly. It would be helpful to provide this assembly.\n\nFor readers not used to Paramecium MAC development, the different classifications of IESs into epigenetic/non-epigenetic, scnRNA-dependent/non-scnRNA-dependent, long/short, etc. can be confusing. Together with the classification of the genetic requirements, is it possible to sum the discussion up in a kind of Venn diagram to compare IES properties and genetic requirements for excision? As a suggestion…\n\nIs the rationale for developing the new method (or application) clearly explained? Yes\n\nIs the description of the method technically sound? Yes\n\nAre sufficient details provided to allow replication of the method development and its use by others? Yes\n\nIf any results are presented, are all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions about the method and its performance adequately supported by the findings presented in the article? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-1374
|
https://f1000research.com/articles/6-1033/v1
|
30 Jun 17
|
{
"type": "Study Protocol",
"title": "Service evaluation of the implementation of a digitally-enabled care pathway for the recognition and management of acute kidney injury",
"authors": [
"Alistair Connell",
"Hugh Montgomery",
"Stephen Morris",
"Claire Nightingale",
"Sarah Stanley",
"Mary Emerson",
"Gareth Jones",
"Omid Sadeghi-Alavijeh",
"Charles Merrick",
"Dominic King",
"Alan Karthikesalingam",
"Cian Hughes",
"Joseph Ledsam",
"Trevor Back",
"Geraint Rees",
"Rosalind Raine",
"Christopher Laing"
],
"abstract": "Acute Kidney Injury (AKI), an abrupt deterioration in kidney function, is defined by changes in urine output or serum creatinine. AKI is common (affecting up to 20% of acute hospital admissions in the United Kingdom), associated with significant morbidity and mortality, and expensive (excess costs to the National Health Service in England alone may exceed £1 billion per year). NHS England has mandated the implementation of an automated algorithm to detect AKI based on changes in serum creatinine, and to alert clinicians. It is uncertain, however, whether ‘alerting’ alone improves care quality.\n\nWe have thus developed a digitally-enabled care pathway as a clinical service to inpatients in the Royal Free Hospital (RFH), a large London hospital. This pathway incorporates a mobile software application - the “Streams-AKI” app, developed by DeepMind Health - that applies the NHS AKI algorithm to routinely collected serum creatinine data in hospital inpatients. Streams-AKI alerts clinicians to potential AKI cases, furnishing them with a trend view of kidney function alongside other relevant data, in real-time, on a mobile device. A clinical response team comprising nephrologists and critical care nurses responds to these AKI alerts by reviewing individual patients and administering interventions according to existing clinical practice guidelines.\n\nWe propose a mixed methods service evaluation of the implementation of this care pathway. This evaluation will assess how the care pathway meets the health and care needs of service users (RFH inpatients), in terms of clinical outcome, processes of care, and NHS costs. It will also seek to assess acceptance of the pathway by members of the response team and wider hospital community. All analyses will be undertaken by the service evaluation team from UCL (Department of Applied Health Research) and St George’s, University of London (Population Health Research Institute).",
"keywords": [
"nephrology",
"acute kidney injury",
"AKI",
"e-alert"
],
"content": "Introduction\n\nAcute kidney injury (AKI) is a sudden loss of kidney function, defined by a rise in serum creatinine or a fall in urine volume1. It has diverse causes, which include sepsis or acute infection, hypovolaemia or hypoperfusion, nephrotoxicity (from drugs or radiological contrast), obstruction of the renal tract, and primary renal diseases such as acute glomerulonephritis. In the United Kingdom, AKI affects up to 15% of hospital admissions and 20% of emergency admissions2,3. AKI may result in fluid overload, respiratory failure and metabolic derangements such as hyperkalaemia4, and is thus strongly associated with adverse outcomes including death5, prolonged hospitalisation6, requirement for renal replacement therapy7, and a need for high dependency or intensive care8. It is also associated with an increased lifetime risk of chronic kidney disease9. AKI is also expensive: the associated excess costs to the National Health Service (NHS) in England may exceed £1 billion per annum3.\n\nManagement of AKI involves four processes of care: timely recognition, general supportive care, therapy directed at the underlying cause, and management of complications. Across the NHS, there are substantial cross-pathway deficits in care10. Increasing awareness of these and of the clinical and economic impact of this condition has led to local, regional, national and global initiatives to try to prevent AKI from occurring, and to encourage timely and appropriate interventions to prevent progression and deliver more rapid recovery. Clinical practice guidelines for the management of AKI have now been developed11.\n\nMore recently, on the basis that the prompt and reliable identification of AKI cases to clinicians may trigger improved care, NHS England issued a national patient safety alert on “standardising the early identification of Acute Kidney Injury”12. 
This mandated the installation of a new detection algorithm in each NHS hospital, so that potential AKI incidents could be flagged to treating clinicians. The ‘Think Kidneys’ NHS England National programme has provided best practice examples of how AKI alerts may be clinically deployed13. However, simply alerting a clinician to the presence of a possible AKI incident may be insufficient to improve outcomes14. A much richer clinical dataset is required to help clinicians prioritise, diagnose and manage patients. The UK’s National Institute for Health and Care Excellence (NICE) guidelines on AKI management suggest that patients with more severe AKI might benefit from care delivery by suitably expert clinicians, for example as part of a ‘rapid referral’ nephrology service11.\n\nThe Royal Free Hospital (RFH) will implement existing standards for best practice through deployment of a digitally-enabled care pathway as a core service to hospital inpatients. A key component of this is a mobile software application (Streams-AKI). This application will identify potential new cases of AKI by applying the NHS AKI algorithm to a live stream of serum creatinine data, providing real-time monitoring of kidney function. The application will provide real-time alerts of potential AKI cases, alongside other critical clinical data, to a clinical response team comprising nephrologists and critical care nurses. 
This response team will assess the data provided by the application, prioritise cases and then deliver investigations and therapies according to current best practice guidelines11.\n\nWe propose a service evaluation of the introduction of the digitally-enabled care pathway with respect to processes of care, patient outcomes, qualitative feedback from patients and staff, and NHS costs.\n\n\nAims and objectives\n\nThis service evaluation aims to assess the benefits of the implementation of the digitally-enabled AKI pathway from January 2017, with respect to clinical outcome, renal recovery, processes of care, and NHS costs. The evaluation will also assess the experience of service users (RFH in-patients who have been treated by the pathway), members of the clinical response team and the wider clinical community of RFH.\n\n\nMethods\n\nThe Streams-AKI app and clinical response team are deployed at a single hospital site within the Royal Free London Foundation Trust (RFLFT): the RFH, an 800-bed teaching hospital which also provides diverse specialist and tertiary services, including a dialysis unit and 34-bed intensive care unit with renal replacement therapy onsite. The outcomes of this evaluation will inform further development of the care pathway and its deployment in other RFLFT sites.\n\nFollowing the implementation of the digitally-enabled care pathway, data from the RFH will be compared with data from the RFH prior to deployment in addition to pre-deployment and post-deployment data from a second hospital that is part of the RFLFT, Barnet General Hospital (BGH). 
BGH is a 450-bed district general hospital providing acute care (including onsite renal replacement therapy, a 21-bed ITU and on-site nephrology services), with similar arrangements for the care of AKI patients to that at the RFH prior to implementation of the digitally-enabled care pathway.\n\nThe RFH has onsite wireless internet networks available to clinicians, and was classified as a global digital exemplar (GDE) by NHS England in April 2016.\n\nThe AKI care pathway prior to implementation. Serum creatinine, an indicator of kidney function, is currently measured in the hospital laboratory. The current AKI detection algorithm in the laboratory information management system (which predates the national algorithm) identifies potential AKI cases and presents a message for clinicians in both the hospital results system and electronic patient record. This message also flags the availability of clinical guidance and education via a link to a webpage displaying local guidelines (www.londonaki.net). Prior to implementation, such results were normally batch reviewed by nonspecialists at the end of the day and may have been seen several hours after the results first became available. Clinicians may have opted to review results earlier, but this process relied upon repeated accessing of the results systems, as clinicians did not know when results were ready. Where blood tests suggested AKI, the biochemistry laboratory may have communicated this by telephone to the clinical teams responsible for the patient. However, this process was cumbersome and may have been unreliable.\n\nPatients develop AKI in multiple wards and settings. Early management of AKI was overwhelmingly delivered by nonspecialist, ward-based teams with primary clinical responsibility for the patient. Specialist nephrology review of kidney function blood tests or of patients with AKI only occurred if requested by the patient’s responsible (or ‘home’) clinical team. 
This required the home team to assess kidney function results, assess the patient, decide to request a specialist review, contact the renal team via phone or pager systems and await a response. The renal team would then receive verbal referral information or would manually access results and other clinical data to prioritise the referral, managing information relating to multiple referrals with paper-based processes. The hospital’s critical care nursing team did not receive automated referrals and were entirely reliant on being contacted by pager systems when ward staff were concerned that a patient was deteriorating.\n\nThe RFLFT deployed clinical guidelines and had an active AKI education programme to support clinical teams, but local audit showed that performance in managing AKI varied, and did not consistently meet national standards15. Quality improvement in AKI care is an RFLFT organisational priority, and has driven the development of the care pathway and its proposed evaluation.\n\nThe digitally-enabled care pathway is the service whose implementation is being evaluated.\n\nThe Streams-AKI application. Streams-AKI is a mobile application that is deployed on iPhone Operating System (iOS)-enabled smartphones. It processes routinely-collected demographic data (i.e. patient identifiers, location, responsible consultant, and responsible medical specialty), and also serum creatinine data in real time according to the nationally mandated NHS AKI algorithm, which grades patients’ AKI stage from 1 to 3. When the algorithm identifies a case of AKI, a patient-specific notification is delivered directly to the clinician user’s iOS device. In current clinical practice, clinicians must distinguish patients with clinically relevant changes in creatinine from those without, through review of current and historical blood tests, or elements of past medical history that indicate disease causality, complications or pre-existing risk. 
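The ratio-to-baseline grading that the algorithm performs can be illustrated with a deliberately simplified sketch. The thresholds below are KDIGO-style assumptions, not the full NHS algorithm specification, which also defines baseline selection, absolute creatinine thresholds and repeat-test logic:

```python
def aki_stage(creatinine, baseline):
    """Grade AKI from the ratio of current to baseline serum creatinine.

    Simplified sketch only: ratio thresholds follow KDIGO-style staging
    (assumed here); the real NHS algorithm has additional rules.
    """
    ratio = creatinine / baseline
    if ratio >= 3.0:
        return 3
    if ratio >= 2.0:
        return 2
    if ratio >= 1.5:
        return 1
    return 0  # no alert

# Example: creatinine doubled from a baseline of 80 umol/L
print(aki_stage(160, 80))  # prints 2
```

A stage of 0 here corresponds to no alert being raised; stages 1 to 3 would each trigger a patient-specific notification.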
These routine data are therefore displayed in-app alongside the AKI alert to facilitate interpretation and clinical decision making. The Streams-AKI app is fully integrated with the existing RFH electronic health record system, and operates on Fast Healthcare Interoperability Resources (FHIR) interoperability standards. During implementation, it will be utilised alongside existing electronic health record software. Data security is ensured through the use of on-disk (AES256) and in-flight encryption (TLS v1.2) for all app data in compliance with NHS Digital information security guidelines. The app was first registered with the Medicines and Healthcare Products Regulatory Agency (MHRA) as a Class I, non-measuring, non-sterile medical device on 30/08/2016.\n\nThe clinical response team. At the RFLFT, Streams-AKI will be installed on RFLFT-owned iOS devices. These will be held by members of a dedicated clinical response team, consisting of a clinical lead nephrologist, a duty consultant nephrologist, a specialty nephrology registrar, and a critical care outreach nurse. Following in-app review of alerts, all patients determined to be suffering from clinically-relevant AKI will receive a prompt bedside review from the nephrology team. Members of the clinical response team will then administer a standardised care protocol. The critical care outreach nurse will also receive alerts on more severe (stage 2 and 3) cases and will respond to the most severely unwell cases according to clinical judgement of patient risk. All interventions and future care requirements will be communicated to responsible clinicians verbally and through a standard written proforma entered into the patient record. Where necessary, the clinical response team will arrange a further review within 24 hours. 
The Streams-AKI app will further alert the team if the patient’s AKI stage subsequently worsens and will also alert after 48 hours if a patient is still suffering from AKI, as determined by the national AKI algorithm. The team will respond to such follow up alerts according to best practice and clinical judgement.\n\nAll clinicians using the app first receive training. This comprises:\n\n- A detailed review of the Streams-AKI app\n\n- Introduction to the devices being used to host the app: RFLFT-owned iPhones (Apple, Inc., Cupertino, Calif., USA)\n\n- An introduction to the other members of the response team\n\n- Review of the standardised digitally-enabled AKI care protocol.\n\nStreams-AKI was deployed in January 2017. Following a pilot phase of 12 weeks to allow optimisation of the care pathway, an 18-week evaluation phase commenced, during which outcome data will be accrued (see Figure 1). For comparative analysis, data will be collected from the two hospital sites (RFH and BGH) at three time points:\n\n- One year before deployment (January to August 2016)\n\n- Immediately before deployment (September 2016 to January 2017)\n\n- During deployment (January to August 2017)\n\nPhases of the evaluation are listed in order. A summary of the main aims of each phase is also listed. The digitally-enabled care pathway was implemented in January 2017.\n\nAn interim analysis is planned half way through the evaluation phase. Results from this may inform the duration of the service evaluation.\n\nEvaluation sample. The service implementation evaluation will include data from all inpatients triggering an AKI alert as defined by the national AKI detection algorithm who are aged 18 or over.\n\nUtilising a daily data feed from the RFLFT chronic dialysis database, AKI alerts for patients receiving kidney dialysis will be identified by the application itself and removed. Alerts for inpatients on the Acute Kidney or Critical Care Units will also be removed. 
Patients on an end-of-life pathway at the time of the AKI alert will be excluded.\n\nOutcome measures. The evaluation of the pathway will employ a mixture of quantitative and qualitative methods. The primary outcome will be recovery of renal function, which will be defined as a return to a creatinine level within 120% of the baseline (as defined by the National AKI algorithm) prior to discharge from hospital. Secondary outcomes will be categorised into four areas: processes of care, clinical outcomes, Trust-wide metrics, and NHS costs. Definitions of each outcome, and the sources of all data to be collected, are provided in Table 1–Table 4.\n\nHealth Level 7 (HL7) messages are used to transfer information between different healthcare IT systems.\n\nQualitative evaluation. During the 12-week pilot phase, the clinical response team was observed by a member of the service evaluation team, allowing key issues relating to both the technological and clinical aspects of the enhanced care pathway (including resource use) to be recorded. Semi-structured interviews were carried out with a selection of response team members (including nephrology consultants and specialty registrars, and critical care outreach nurses). The interviews explored whether the clinical response team members found that the new care pathway and the Streams-AKI application helped them provide best-practice care for patients, which aspects of the digitally-enabled pathway worked well or where they could be improved, adverse experiences or consequences of app use, and any unexpected indirect beneficial or adverse effects. Observational work and interviews carried out during the pilot phase were used to drive iterative improvements in the digitally-enabled care pathway prior to the beginning of the evaluation phase.\n\nAt the conclusion of the evaluation phase, a selection of doctors and nurses will be invited to a second series of semi-structured interviews. 
We will specifically target those responsible for the care of those patients who have been reviewed by the AKI response team during the evaluation. Participants will be purposively sampled to include a mixture of grades of clinicians (House Officers; Senior House Officers; Registrars; Consultants) and nurses (Staff Nurse; Charge Nurse). These will explore strengths and weaknesses of the digitally-enabled care pathway; how these map to perceived deficiencies in AKI care; how the new pathway affected the quality and equity of patient care; and how they feel the service could be improved. Approximately 20 interviews will be carried out. The exact number of health care professionals interviewed will be determined by the need to achieve sufficient diversity in elicited accounts for the questions to fully address all major themes, with no new issues arising at further interviews.\n\nFigure 1 outlines the timeframes for each phase of the service evaluation, as described above.\n\nWe will use interrupted time series segmented regression analysis to examine the effect of the new AKI service on the weekly patient recovery rate for acute kidney injury within 30 days; recovery is defined as a return to creatinine level within 120% of baseline level. The primary dependent variable (recovery of renal function) will be modelled using a generalised linear model assuming a binomial distribution and using a logit link. This modelling approach will ensure that predicted values yielded from the model cannot fall outside the valid 0–100% range. We will also allow for autocorrelation in the model, which can be an issue with time series data. 
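As a sketch of how such a segmented model could be set up, the interrupted time series covariates (intercept, time, post-intervention level change, post-intervention slope change) might be constructed as follows. `its_design` is an illustrative helper, not code from the evaluation itself; in practice the weekly recovery counts would be fit with a binomial GLM (logit link) on this matrix:

```python
import numpy as np

def its_design(n_weeks, intervention_week):
    """Build a segmented-regression design matrix with columns:
    intercept, time, post-intervention level change, and
    post-intervention slope change (weeks since intervention)."""
    t = np.arange(n_weeks, dtype=float)
    post = (t >= intervention_week).astype(float)
    time_since = np.where(post > 0, t - intervention_week, 0.0)
    return np.column_stack([np.ones(n_weeks), t, post, time_since])

X = its_design(10, 6)
# Row for week 5 (pre-intervention): level-change and slope-change terms are 0.
# Row for week 8 (post-intervention): level change is 1, slope-change term is 2.
```

The coefficient on the third column estimates the change in level at implementation, and the coefficient on the fourth column the change in slope, which is exactly what the analysis tests for.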
Using this approach, it will be possible to test for a change in level and/or regression slope following the implementation of the intervention.\n\nPathological and clinical endpoints will also be compared to those from a partner hospital (BGH) not deploying the digitally-enabled care pathway over the same time period, and from the pre-implementation periods as specified above. It is anticipated that the care pathway will be subsequently deployed in BGH, with such deployment informed by the results of this service evaluation. Although comparison with the BGH control site should negate any seasonal effects, this is based on the assumption that the effect of time is the same in both intervention and control sites. The inclusion of the second pre-intervention control period (i.e. immediately preceding the intervention period) will allow us to assess whether this is a fair assumption.\n\nFor the economic analysis, unit costs for each of the components will be obtained from two sources. First, we will use tariffs from the NHS National Tariff Payment System. Second, we will use local tariffs at RFLFT sites. Costs will be calculated using both sets of unit costs. We will multiply resource use by unit costs for each patient/cost component and sum across patients to calculate total costs per patient. The output will be a patient-level dataset of total costs per patient before and after the introduction of the digitally-enabled care pathway. Any attributable cost savings will be balanced against the additional costs of the alerting system and care pathway. This will include estimates of the use of clinician time; an activity observation exercise will be carried out with response team members during the evaluation period. 
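The per-patient costing step (multiplying resource use by unit costs and summing) is straightforward to express. The sketch below uses entirely hypothetical cost components and values, not actual NHS National Tariff or RFLFT figures:

```python
# Hypothetical cost components and unit costs, for illustration only;
# the evaluation itself will use NHS National Tariff and local RFLFT tariffs.
unit_costs = {"bed_day": 400.0, "dialysis_session": 250.0, "nephrology_review": 120.0}

def total_cost(resource_use, unit_costs):
    """Multiply each resource quantity by its unit cost and sum per patient."""
    return sum(qty * unit_costs[item] for item, qty in resource_use.items())

# One (invented) patient: five bed-days and two nephrology reviews.
print(total_cost({"bed_day": 5, "nephrology_review": 2}, unit_costs))  # prints 2240.0
```

Running this over every patient before and after implementation yields the patient-level cost dataset described above.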
A sensitivity analysis will be completed to assess the cost of care delivered to patients with AKI alerts that were discarded at clinician triage.\n\nAt the point of writing this service evaluation protocol, there are no published sample size calculations available for determining the number of timepoints needed for a well powered service evaluation using an interrupted time series design. In line with best practice in service evaluation, we therefore utilised simulations implementing the SIMSAM command in Stata (v14) (StataCorp LLC, College Station, Texas, USA) to establish the sample size needed. We simulated data containing weekly referral rates for four years prior to intervention, where the intervention will occur at 208 weeks. The average baseline recovery rate was assumed to be 0.51 (SD 0.08) which was determined using one year of pre-intervention data from the Royal Free Hospital, the site where the intervention will be implemented. One hundred observations (patients) or more per timepoint are encouraged16, which is a viable assumption based on historical data. A normally distributed random variable with mean of zero and standard deviation of 0.08 was generated to simulate the variation in recovery rate. The pre-intervention regression slope and the change in the effect of the intervention over time following the intervention were both assumed to have an odds ratio of one. The recovery rate was generated as a function of these effects, the average baseline referral rate and the random variable.\n\nThe number of timepoints (measured on a weekly basis) needed to detect an odds ratio of 1.15 for the intervention effect with 90% power assuming a significance level of 5%, determined by simulation, was 11 in total. This number of post-intervention timepoints increased to 32 if the size of effect to be detected has an odds ratio of 1.1, i.e. a 10% increase in the odds of recovery. 
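The simulation-based approach to power can be illustrated crudely in Python. This sketch substitutes a simple two-proportion z-test for the full segmented regression, so its power estimates will not reproduce the SIMSAM results reported above; the parameter values come from the text, but the testing procedure is an assumption made for illustration only:

```python
import math
import random

def simulate_once(weeks, n_per_week, base_rate, odds_ratio, rng):
    """Simulate equal pre- and post-intervention periods and return the
    two-proportion z statistic for the change in recovery rate."""
    base_odds = base_rate / (1 - base_rate)
    post_odds = base_odds * odds_ratio
    post_rate = post_odds / (1 + post_odds)
    n = weeks * n_per_week
    pre = sum(rng.random() < base_rate for _ in range(n))
    post = sum(rng.random() < post_rate for _ in range(n))
    p_pool = (pre + post) / (2 * n)
    se = math.sqrt(p_pool * (1 - p_pool) * 2 / n)
    return abs(post - pre) / n / se

def estimate_power(n_sims=500, weeks=11, n_per_week=100,
                   base_rate=0.51, odds_ratio=1.15, seed=1):
    """Fraction of simulated datasets in which the intervention effect is
    detected at the two-sided 5% level (|z| > 1.96)."""
    rng = random.Random(seed)
    hits = sum(simulate_once(weeks, n_per_week, base_rate, odds_ratio, rng) > 1.96
               for _ in range(n_sims))
    return hits / n_sims
```

As in the text, increasing the number of post-intervention weeks or the size of the target odds ratio raises the estimated power.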
The number of timepoints needed to detect an odds ratio of 1.1 for the intervention effect with 80% power assuming a significance level of 5%, determined by simulation, was 20 in total. In further analyses, the segmented regression model will be extended to include a control for comparison of the change in level or change in slope post intervention. The control will be Barnet Hospital, which will not receive the intervention.\n\nAll other quantitative data collected (see Table 1) will be analysed using Stata. Data will be screened for normality and homogeneity of variance prior to analysis.\n\nA series of interim analyses will be performed midway through the evaluation phase. These will include inter- and intra-operator analyses of variance for transcription of process of care data from patient notes, and initial modelling of primary outcomes. These analyses will be carried out by staff from University College London and St George’s, University of London.\n\nAnalysis of qualitative data. The semi-structured interviews will be digitally recorded and transcribed verbatim. We will analyse the data from a realist viewpoint17, but our model will be revised to fit emerging interpretations of the data. We will apply a framework approach18, whereby the initial transcripts will undergo scrutiny by one member of the service evaluation team (AC) in order to gain familiarity with the data and to identify key themes. These will be discussed and further scrutinised with another service evaluation team member (RR), and emerging interpretations or questions will be shared and critically explored with the entire service evaluation team. Transcripts will be analysed in a systematic manner by applying the coding framework and by rearranging the data according to the thematic content. Generalisations that represent the total set and that address each of the objectives will be developed. 
During analysis, we will maintain constant vigilance for deviant cases that may question the emerging thematic and conceptual relations.\n\nSteering committee. A steering committee for this service evaluation has been convened at RFLFT. This committee includes an independent chair with no relationship to the project, a patient member and a nephrologist from a different NHS Trust. This committee will review the results of the interim analyses discussed above.\n\nRFLFT uses Streams as part of its provision of care for patients. To support this service, DeepMind Health processes Patient Identifiable Data. This is in line with the governance arrangements for all other clinical software applications, and this arrangement forms part of an information processing agreement with the Trust, which has been published on the DeepMind website. The digitally-enabled care pathway will be a new standard service at RFH, and under NHS guidance there are therefore no consent requirements for patients for the processing of their personally identifiable data for direct patient care functions.\n\nWhen undertaking the service evaluation, data will be transferred from RFLFT to the Department of Applied Health Research, University College London for analysis. Prior to transfer and analysis, all data will be de-identified, meaning that no consent will be required from patients for this purpose.\n\nPlans for the evaluation of the digitally-enabled care pathway have been independently reviewed by the University College London Joint Research Office. They directed that this project falls under the remit of service evaluation, as per “Defining Research” guidance from the NHS Health Research Authority. As such, the service evaluation has been registered locally with the RFLFT Audit Lead and Medical Director. 
The service evaluation has the approval of the RFLFT Executive, RFLFT Board and Sub-Board Patient Safety Committee (including patient governor representatives and a non-Executive Chair and RFLFT Board Member). The Patient Safety Committee and RFLFT Board will receive reports on the results of the service evaluation. The service evaluation has also been presented to the UK’s National Institute for Health Research (NIHR) North Thames Collaboration for Leadership in Applied Health Research and Care (CLAHRC) Patient and Public Involvement Panel. The service evaluation has the full support of the Royal Free Kidney Patients Association.\n\nIt is theoretically possible that implementation of the digitally-enabled care pathway could have unforeseen adverse consequences. These will be sought through clinical feedback at weekly Trust implementation meetings and monthly patient safety programme operational group meetings. Additionally, during the service evaluation, broad metrics of care quality and safety will be monitored by the Trust, and these will be reviewed by the independent steering committee at RFLFT for this evaluation (described above).\n\nResults will be presented to the renal, acute medicine and critical care departments, and to the RFLFT Patient Safety Committee and RFLFT Board. The findings will inform subsequent developments of the care pathway and will inform the RFLFT strategy for detecting and managing AKI. Data will also be presented on the Trust website via our patient safety portal and presented in lay language. Our findings will be shared at a learning event of the UCL Partners AKI Quality Improvement Collaborative (which has 9 participant Trusts), at a London AKI Network event, and as a case study to Think Kidneys (the NHS England National AKI Programme). We will publish in relevant scientific journals, and present at conferences.",
"appendix": "Competing interests\n\n\n\nCL, HM, GR, and RR are paid clinical advisors to DeepMind. AC’s clinical research fellowship is part-funded by DeepMind. DeepMind will remain independent from the collection and analysis of all data. HM co-holds a patent on a fluid delivery device which might ultimately help in preventing some (dehydration-related) cases of AKI occurring.\n\n\nGrant information\n\nThe service evaluation will be conducted by NIHR-CLAHRC North Thames, based at UCL and directed by RR. RR is also an NIHR Senior Investigator. GR is funded in part by the NIHR University College London Hospitals Biomedical Research Centre. HM was similarly funded during project inception. The views expressed are those of the author(s) and not necessarily those of the NHS, the NIHR or the Department of Health.\n\n\nReferences\n\nKidney Disease Improving Global Outcomes (KDIGO) Acute Kidney Injury Work Group: KDIGO Clinical practice guidelines for acute kidney injury. Kidney Int. 2012; (Suppl 2): 1–138. Reference Source\n\nPorter CJ, Juurlink I, Bisset LH, et al.: A real-time electronic alert to improve detection of acute kidney injury in a large teaching hospital. Nephrol Dial Transplant. 2014; 29(10): 1888–93. PubMed Abstract | Publisher Full Text\n\nKerr M, Bedford M, Matthews B, et al.: The economic impact of acute kidney injury in England. Nephrol Dial Transplant. 2014; 29(7): 1362–1368. PubMed Abstract | Publisher Full Text\n\nConnell A, Laing C: Acute kidney injury. Clin Med (Lond). 2015; 15(6): 581–584. PubMed Abstract | Publisher Full Text\n\nWang HE, Muntner P, Chertow GM, et al.: Acute kidney injury and mortality in hospitalized patients. Am J Nephrol. 2012; 35(4): 349–355. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAitken E, Carruthers C, Gall L, et al.: Acute kidney injury: outcomes and quality of care. QJM. 2013; 106(4): 323–332. 
PubMed Abstract | Publisher Full Text\n\nMetcalfe W, Simpson M, Khan IH, et al.: Acute renal failure requiring renal replacement therapy: incidence and outcome. QJM. 2002; 95(9): 579–583. PubMed Abstract | Publisher Full Text\n\nUchino S, Kellum JA, Bellomo R, et al.: Acute renal failure in critically ill patients: a multinational, multicenter study. JAMA. 2005; 294(7): 813–818. PubMed Abstract | Publisher Full Text\n\nChawla LS, Amdur RL, Amodeo S, et al.: The severity of acute kidney injury predicts progression to chronic kidney disease. Kidney Int. 2011; 79(12): 1361–1369. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAlleway R: Acute Kidney Injury: Adding Insult to Injury. (NCEPOD, 2009). Reference Source\n\nNational Institute for Health and Care Excellence: Acute kidney injury: prevention, detection and management. (NICE, 2013). Reference Source\n\nSelby NM, Hill R, Fluck RJ, et al.: Standardizing the Early Identification of Acute Kidney Injury: The NHS England National Patient Safety Alert. Nephron. 2015; 131(2): 113–117. PubMed Abstract | Publisher Full Text\n\nHill R, Selby NM: Acute Kidney Injury Warning Algorithm: Best Practice Guidance. (Think Kidneys, 2014). Reference Source\n\nWilson FP, Shashaty M, Testani J, et al.: Automated, electronic alerts for acute kidney injury: a single-blind, parallel-group, randomised controlled trial. Lancet. 2015; 385(9981): 1966–74. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAcute kidney injury: Quality Standard. (National Institute for Health and Clinical Excellence, 2014). Reference Source\n\nWagner AK, Soumerai SB, Zhang F, et al.: Segmented regression analysis of interrupted time series studies in medication use research. J Clin Pharm Ther. 2002; 27(4): 299–309. PubMed Abstract | Publisher Full Text\n\nPatton MQ: Qualitative Research & Evaluation Methods: Integrating Theory and Practice. (SAGE Publications, 2014). 
Reference Source\n\nRitchie J, Lewis J, Nicholls CM, et al.: Qualitative Research Practice: A Guide for Social Science Students and Researchers. (SAGE, 2013). Reference Source"
}
|
[
{
"id": "23972",
"date": "03 Jul 2017",
"name": "Mitchell H Rosner",
"expertise": [
"Reviewer Expertise Acute kidney injury"
],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe authors provide the rationale and study design for the implementation and evaluation of a digitally-enable care pathway for the recognition and management of acute kidney injury (AKI). The paper is well-written and the topic is timely given the impact of AKI on short and long-term outcome. This program has the potential to radically change how patients with the earliest signs of AKI are managed and the team should be applauded for these efforts. I had just a few minor queries:\nIt might be nice to show the actual protocol for intervention based upon the NICE guidelines\n\nIs the timeline to intervention once the alert triggered standardized?\n\nI am not sure I understand the rationale behind including Barnet General Hospital as compared to RFH- it would seem that the patient mix (cardiac surgery, etc) might be very different between these centers. Is there any methodology used to match causes of AKI and severity of illness?\n\nOverall, this is an exciting study with potentially profound implications for care of patients with AKI.\n\nIs the rationale for, and objectives of, the study clearly described? Yes\n\nIs the study design appropriate for the research question? Yes\n\nAre sufficient details of the methods provided to allow replication by others? Partly\n\nAre the datasets clearly presented in a useable and accessible format? Yes",
"responses": [
{
"c_id": "2932",
"date": "07 Aug 2017",
"name": "Alistair Connell",
"role": "Author Response",
"response": "We thank Prof. Rosner for his insightful comments, to which we are happy to be able to respond: We have appended the care protocol for the intervention as a supplementary figure. The timeline to intervention after alerts has not been standardized, recognising that ‘real world’ clinical pressures and judgement may require prioritisation of actions within available resource constraints. We have, instead, specified immediate review for patients with life threatening results (e.g. hyperkalaemia), and that other, less urgent, cases are viewed within hours. The Streams app allows clinicians to view results and triage patients for early review based on perceived clinical urgency. The timeframe for review will therefore depend on clinical prioritisation and other (clinically determined) variable demands. Barnet General Hospital has been included as a comparison site to allow us to demonstrate that impacts attributed to the service being evaluated are not due to systemic changes in process locally or nationally (e.g. relating to the English national awareness campaign, ‘Think Kidneys’). Resources do not allow us to implement the new care pathway simultaneously in Barnet and Hampstead sites, though we plan to implement the new service on the Barnet site as part of our staged service improvement plans. Learning from this evaluation will inform that process. We anticipate site-specific variation in outcomes relating to casemix, as Professor Rosner rightly points out. Baseline care pathways are, however, similar at Barnet and Hampstead sites: there is a legacy alert system in the results viewing platform, non-specialists undertake batch review of results, biochemistry staff phone out some AKI results, early care is administered by unsupervised non-specialist teams with discretionary escalation to expert input via bleep or phone referrals. We thus feel it is a useful site to compare process outcomes and time-series trends in both process and outcome measures. 
We will collect demographic and clinical data for all patients with AKI at both sites during the time periods stated, and will present descriptive comparison data at the point of publication."
}
]
},
{
"id": "23970",
"date": "13 Jul 2017",
"name": "Christopher J. Kirwan",
"expertise": [
"Reviewer Expertise AKI detection",
"management and follow up"
],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis is the publication of a protocol describing a much needed intervention study regarding the detection and treatment of AKI. This project is clearly described and is already very advanced and close to the completion stage.\n\nThis project relies on a new technology to rapidly process patient data through the NHS England national algorithm for the detection of AKI. This technology then distributes AKI alerts, via an ‘app’ to a dedicated specialist multi disciplinary clinical team who will then act according to national guidelines. The aim is to improve outcomes of AKI.\n\nThis project tackles a complex medical problem yet I feel is appropriately designed to try and provide answers to the following question; Does the provision of real time processing and delivery of patient data regarding AKI to the dedicated MDT of clinicians, allow the implementation of an intervention (AKI care bundles) that improves outcome.\n\nThe main difficulty will be demonstrating that outcome has improved, as this in itself is very hard to define as a primary measure, and if it has not, where in the process has this failed. However, I think the methodology and the inclusion of a number of secondary outcome definitions is sensible, useful and achievable in the context of the clinical problem. There is only really one other study that, unsuccessfully, tried to answer similar clinical questions (Wilson et al) so the authors have minimal literature to guide them yet there approach is sound. 
The analysis will incorporate a before-and-after comparison at the intervention hospital, but there is also a second-site control, which adds strength to the protocol.\n\nThis study cannot be replicated without access to the Streams AKI app; however, should this trial show benefit, a wider distribution by the sponsor to a number of different hospital types for a multicentre study would seem logical.\n\nI would, however, ask the authors to consider one further outcome measure. The literature is clear that renal function measured by serum creatinine at discharge following significant illness can overestimate renal function (Prowle et al CJASN). This may confound the interpretation of success by a d/c creatinine being within 20% of baseline. A series of follow-up creatinine measures at 3 and potentially 6 months would be a more robust measure of AKI treatment success, as limiting the long-term effects of AKI on renal function is potentially the most important measure of success for early AKI intervention.\n\nFinally, following recent media coverage in the UK in relation to a court judgement regarding the sharing of patient data between the RFLFT and DeepMind, can the authors clarify for the benefit of all readers that this project remains legally and ethically sound?\n\nI very much look forward to the results of this important study.\n\nIs the rationale for, and objectives of, the study clearly described? Yes\n\nIs the study design appropriate for the research question? Yes\n\nAre sufficient details of the methods provided to allow replication by others? Yes\n\nAre the datasets clearly presented in a useable and accessible format? Yes",
"responses": [
{
"c_id": "2933",
"date": "07 Aug 2017",
"name": "Alistair Connell",
"role": "Author Response",
"response": "We thank Dr. Kirwan for his helpful comments, to which we are happy to be able to respond: We agree that structured recording of follow-up creatinine at three and six months would be highly desirable. RFLFT does not currently have the infrastructure to implement long term hospital-based AKI follow-up for all patients with AKI exposed to the pathway under evaluation, nor do we have funding or resource for such work at this time. This may be a future development. However, we will present follow-up creatinine data from both intervention and comparator sites and time periods where these are readily available through existing clinical practice on the reviewer’s recommendation. Though hospital-based, structured AKI follow up clinics are not part of the current service improvement it is quite possible that more reliable detection and more rigorous early management will result in more reliable AKI follow up either through referral to renal clinic, non-renal secondary care clinics or through primary care follow up. We will attempt to study this effect. We fully understand the reviewer raising this point, and agree that it requires clarification. The determination by the Office of the Information Commissioner (ICO) primarily focused on the clinical safety testing phase of Streams-AKI that occurred between app development (using only synthetic data) and the full clinical implementation that is being evaluated. The ICO ruled that the processing of data for the clinical safety testing of the application did not amount to processing for direct care purposes and so did not fully comply with data protection law. Concerns were also raised that not enough was done to ensure that patients and the public were aware of the project prior to data processing. RFLFT undertook such clinical safety testing in the interests of patient safety, but has accepted the determination by the ICO and is in the process of completing her specified undertakings. 
The Trust statement on this determination is available on their website: https://www.royalfree.nhs.uk/patients-visitors/how-we-use-patient-information/information-commissioners-office-ico-investigation-into-our-work-with-deepmind/"
}
]
}
]
|
1
|
https://f1000research.com/articles/6-1033
|
https://f1000research.com/articles/6-787/v1
|
06 Jun 17
|
{
"type": "Method Article",
"title": "Improved deconvolution of very weak confocal signals",
"authors": [
"Kasey J. Day",
"Patrick J. La Rivière",
"Talon Chandler",
"Vytas P. Bindokas",
"Nicola J. Ferrier",
"Benjamin S. Glick",
"Kasey J. Day",
"Patrick J. La Rivière",
"Talon Chandler",
"Vytas P. Bindokas",
"Nicola J. Ferrier"
],
"abstract": "Deconvolution is typically used to sharpen fluorescence images, but when the signal-to-noise ratio is low, the primary benefit is reduced noise and a smoother appearance of the fluorescent structures. 3D time-lapse (4D) confocal image sets can be improved by deconvolution. However, when the confocal signals are very weak, the popular Huygens deconvolution software erases fluorescent structures that are clearly visible in the raw data. We find that this problem can be avoided by prefiltering the optical sections with a Gaussian blur. Analysis of real and simulated data indicates that the Gaussian blur prefilter preserves meaningful signals while enabling removal of background noise. This approach is very simple, and it allows Huygens to be used with 4D imaging conditions that minimize photodamage.",
"keywords": [
"deconvolution",
"Gaussian blur",
"fluorescence microscopy",
"confocal microscopy",
"4D microscopy",
"signal-to-noise",
"Huygens"
],
"content": "Introduction\n\nDeconvolution is an established method for sharpening fluorescence images and removing background noise (Biggs, 2010; Sage et al., 2017). The usual input to a deconvolution algorithm is a Z-stack of optical sections generated by widefield or confocal microscopy. Because the benefits of deconvolution are fully realized when the signals are strong, the creators of deconvolution software recommend capturing a large number of photons while keeping the pixel sizes and Z-step intervals relatively small.\n\nThose conditions are hard to meet with live cell imaging if Z-stacks are being collected at regular intervals to create a 3D time-lapse (4D) data set (De Mey et al., 2008). Intracellular structures are dynamic, so the images need to be taken rapidly. Moreover, the number of captured photons is severely constrained by the need to avoid photodamage to the cells and fluorophores (Carlton et al., 2010; Pawley, 2006). Such issues are prominent in our 4D confocal microscopy studies of secretory compartments in yeast cells (Bevis et al., 2002; Losev et al., 2006; Papanikou et al., 2015). We maximize the scan speed, minimize the intensities of the excitation lasers, and set the pixel sizes and Z-step intervals at the Nyquist limit to achieve a tolerable light exposure while ensuring accurate representation of the imaged structures (Day et al., 2016; Pawley, 2006). The resulting data sets typically comprise thousands of optical sections and have a low signal-to-noise ratio (SNR).\n\nEven though the characteristics of our 4D data are not ideal for deconvolution, the Huygens deconvolution software from Scientific Volume Imaging (SVI) can facilitate the analysis. 
A number of other freeware and commercial software packages are also available for deconvolution (Biggs, 2010; Sage et al., 2017), but in our experience, those programs are unsuitable for processing of multi-channel 4D confocal data due to some combination of cumbersome user interfaces, lack of compatibility with relevant file formats, and inadequate noise removal. Huygens is unique in that it readily deconvolves our data sets (Day et al., 2016; Papanikou et al., 2015). This software is widely used in the cell biology research community. Importantly, in addition to removing noise, Huygens smooths the uneven shapes and intensities obtained with low-SNR data to generate images that are easy to view and quantify.\n\nHuygens works well for certain low-SNR fluorescence images, but this approach pushes the software to a regime for which it is not optimally designed, and when the fluorescence signals are very weak, Huygens performs poorly (Arigovindan et al., 2013). We encountered this problem when imaging low-abundance proteins associated with yeast organelles. In such experiments, a pixel in the signal-containing portion of a confocal section may capture as few as 1–2 photons. To enable deconvolution of images with a very low SNR, Agard and colleagues developed deconvolution software called ER-Decon, which employs a novel regularization method tailored to fluorescence data (Arigovindan et al., 2013). However, ER-Decon has incompletely defined parameters, and it proved to be challenging to use. We therefore sought a method for processing very weak fluorescence signals with Huygens.\n\n\nMethods\n\n4D imaging of live yeast cells expressing Vps8-GFP and Sec7-mCherry was performed as previously described (Day et al., 2016) with a Leica SP5 confocal microscope. Briefly, images were collected at the maximum scan speed with a 63x 1.4 NA objective using a voxel size of 80x80x250 nm, a pinhole setting of 1.2 Airy units, and HyD hybrid detectors in photon counting mode. 
Z-stacks of 28 optical sections were captured at 2 s intervals with the line accumulation (summing) set to either 8x or 1x. Image manipulations other than deconvolution, including 2D and 3D Gaussian blurs, employed 64-bit ImageJ 1.51i (http://rsbweb.nih.gov/ij/) (RRID: SCR_003070). This software has a sophisticated Gaussian blur algorithm that chooses a suitable kernel based on the user-specified radius (sigma) value. Multi-channel 8-bit confocal 3D time series data were converted to TIFF format, and the TIFF images were converted to 16-bit format, multiplied by 256, and Gaussian blurred where indicated. After deconvolution, the image stacks were average projected and then scaled to provide a quantitatively accurate view of the fluorescent structures (Hammond & Glick, 2000), and the series of projections was exported to AVI movie format. An online tool was used to convert the movies to MP4 format (http://video.online-convert.com/convert-to-mp4).\n\nFor labeling the yeast nuclear envelope and peripheral ER membranes, gene replacement was used to tag Hmg1 with GFP. The accompanying pHMG1-GFP.dna SnapGene file (Supplementary File 1) shows the plasmid used for the strain construction. That file can be opened with SnapGene Viewer (http://www.snapgene.com/products/snapgene_viewer/). The construction steps can be visualized using History view, and instructions for tagging Hmg1 by gene replacement can be found in the Description Panel. Confocal imaging of yeast cells expressing Hmg1-GFP was performed with a Leica SP8 confocal microscope using the same parameters as for 4D imaging, except that 31 optical sections were captured. A series of 14 optical sections (numbers 13–26) representing approximately 3 μm from the central portions of the cells were processed and average projected as described above.\n\nA simulated point-like fluorescent object was generated in a voxel array of XYZ dimensions 200x200x40 with a voxel size of 80x80x250 nm. 
The fluorescent object was centered along the Z-axis, and was duplicated at an XY spacing of 20 pixels to create an 8x8 array.\n\nThe effective confocal point spread function (PSF) was generated by multiplying simulated excitation and emission PSFs, which were produced by the ImageJ plugin PSF Generator (http://bigwww.epfl.ch/algorithms/psfgenerator/) using the Born & Wolf 3D optical model with the following parameters: refractive index = 1.5, numerical aperture = 1.4, voxel size = 80x80x250 nm, excitation wavelength = 488 nm, emission wavelength = 510 nm.\n\nThe effective confocal PSF was convolved with the simulated objects using fast Fourier transform-based 3D convolution, and the image values were normalized so that the maximum pixel value corresponded to an average of 1 detected photon. The resulting image stack represented the average detected image, which comprised a total of 17.7 photons per object. Where indicated, random background noise was included by adding a value of 0.01 photons to every voxel in the average detected image. This information was used as input to a Poisson random number generator, yielding a simulated image stack in which each voxel value was drawn from a Poisson distribution whose mean was equal to the corresponding voxel value in the average detected image.\n\nThe output was saved in 8-bit TIFF format, and was scaled so that a pixel value of 255 corresponded to 4 photons. Further processing was carried out as for the live cell confocal data, except that Gaussian blurring and/or deconvolution were performed with 8-bit format. The images were then converted to 16-bit format and multiplied by 256 followed by average projection.\n\nTo quantify the signal intensity for an object after average projection, ImageJ was used to create a selection of 20x20 pixels centered on the object, and the integrated density was measured. 
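The simulation pipeline described above can be sketched in Python. This is an illustrative stand-in rather than the authors' code: an anisotropic 3D Gaussian substitutes for the Born & Wolf PSF generated with the ImageJ PSF Generator plugin, so the per-object photon totals (17.7 in the text) are not reproduced, and the measured object position is an assumption.

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)

# 8x8 array of point-like objects in a (Z, Y, X) = (40, 200, 200) voxel
# grid, centered along Z and duplicated at a 20-pixel XY spacing.
obj = np.zeros((40, 200, 200))
for y in range(10, 170, 20):
    for x in range(10, 170, 20):
        obj[20, y, x] = 1.0

# Stand-in effective PSF: an anisotropic 3D Gaussian, wider along Z to
# mimic the 80x80x250 nm voxel anisotropy (not the Born & Wolf model).
zz, yy, xx = np.mgrid[-6:7, -6:7, -6:7]
psf = np.exp(-(zz**2 / (2 * 2.0**2) + (yy**2 + xx**2) / (2 * 1.5**2)))
psf /= psf.sum()

# FFT-based 3D convolution; normalize so the brightest voxel averages
# 1 detected photon, then add 0.01 photons/voxel of background.
mean_img = fftconvolve(obj, psf, mode="same")
mean_img = mean_img / mean_img.max() + 0.01

# Each voxel is drawn from a Poisson distribution with that mean.
noisy = rng.poisson(mean_img)

# Save-style scaling: a pixel value of 255 corresponds to 4 photons.
img8 = np.clip(np.round(noisy * 255 / 4), 0, 255).astype(np.uint8)

# Integrated density of one object: sum a 20x20-pixel box centered on
# the object at (y, x) = (30, 30) in the average projection.
proj = img8.mean(axis=0)
integrated_density = proj[20:40, 20:40].sum()
```
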
For the deconvolved image, the numbers were multiplied by a correction factor to compensate for scaling of the image by Huygens.\n\nDeconvolution with Huygens Essential 15.10 software (https://svi.nl/HomePage) (RRID: SCR_014237) was performed on an iMac using up to 40 iterations of the Classic Maximum Likelihood Estimation algorithm with a theoretical PSF. Background correction was automatic, except in the case of the simulated confocal Z-stacks with added background noise, for which the background setting was manually adjusted to 0.8. The SNR setting, adjusted empirically to give satisfactory results, was as follows: 4 for the live cell 4D confocal data; 7 for the confocal images of cells with a labeled nuclear envelope; 7 for the simulated confocal Z-stacks with no added background noise; 1 for the simulated confocal Z-stack with added background noise; or 10 for the widefield data. The other parameters used by the Huygens algorithm were configured for either confocal microscopy of live yeast cells (Day et al., 2016), confocal microscopy of simulated fluorescent objects under the conditions specified during the simulation, or widefield microscopy under the conditions reported for the ER-Decon software (Arigovindan et al., 2013).\n\nThe ER-Decon software and associated image data were obtained from the University of California, San Francisco (http://msg.ucsf.edu/IVE/Download/). Images of fluorescent yeast Zip1 filaments were obtained as part of the ER-Decon package, and were converted to TIFF format using the Bio-Formats Importer plugin for ImageJ (http://www.openmicroscopy.org/site/support/bio-formats5.1/).\n\n\nResults and discussion\n\nWe generated two small 4D data sets to illustrate confocal imaging of organelles in live Saccharomyces cerevisiae cells. 
The parameters were adjusted to capture either weak signals using a line accumulation of 8x, where each line in the image was scanned eight times and the results were summed (Video S1), or very weak signals using a line accumulation of 1x, where each line in the image was scanned only once (Video S2). Projections of representative Z-stacks from the two movies are shown in Figure 1. The organelles were dynamic, so the labeling patterns in the two movies are not identical, but the movies were brief and were captured sequentially, so the labeling patterns are similar enough to allow for comparison. With 8x line accumulation, the raw projections are noisy and display fluorescent structures with uneven shapes and intensities (Video S1 and Figure 1A). Deconvolution with Huygens efficiently removed the background noise and smoothed the structures. With 1x line accumulation, the data quality is even lower, but fluorescent structures can still be discerned in the raw projections (Video S2 and Figure 1B). In this case, deconvolution with Huygens erased almost all of the fluorescent structures. The processing employed standard settings in the Huygens software, including deconvolution with the Classic Maximum Likelihood Estimation algorithm. Although a larger percentage of the fluorescent structures in very weak data sets could be preserved by greatly reducing the number of deconvolution iterations or by using different SNR or background settings, the preserved structures often had distorted shapes (not shown). Similar loss or poor preservation of very weak fluorescent structures was seen with the Good’s roughness Maximum Likelihood Estimation algorithm, which is recommended for use with noisy confocal data (not shown). Based on these observations, we have continued to use standard settings in Huygens. 
Our data sets often lie between the two extremes depicted in Figure 1, and when movies are generated after deconvolution, the fluorescent structures blink because a given structure is erased in some movie frames but not in others (see Video S2). Such movies cannot be productively analyzed.\n\nFigure 1. Gene replacement in Saccharomyces cerevisiae was used to label late Golgi cisternae with Sec7-mCherry (red) and prevacuolar endosomes with Vps8-GFP (green) (Papanikou et al., 2015). Cells were imaged by 4D confocal microscopy. In consecutive movies, line accumulation was set to (A) 8x or (B) 1x. The data were average projected either with no processing, or after deconvolution with Huygens, or after prefiltering with a 2D Gaussian blur using a radius of 0.75 pixels followed by deconvolution. Fluorescence data are superimposed on differential interference contrast images of the cells (blue). Shown are representative frames from Video 1 (8x) and Video 2 (1x). The fluorescence patterns in (A) and (B) are similar, but not identical because the labeled structures changed during the interval between the two movies. Scale bar, 2 µm.\n\nFigure 2. Gene replacement in Saccharomyces cerevisiae was used to label ER membranes with Hmg1-GFP (Koning et al., 1996). A confocal Z-stack was captured with line accumulation set to (A) 8x or (B) 1x. The data were average projected either with no processing, or after deconvolution with Huygens, or after prefiltering with a 2D Gaussian blur using a radius of 0.75 pixels followed by deconvolution. Fluorescence data are superimposed on differential interference contrast images of the cells (gray). Scale bar, 2 µm.\n\nFigure 3. Simulated confocal Z-stacks of fluorescent point sources were created as described in Methods, either (A) without background noise or (B) with background noise. The data were processed and average projected as in Figure 1.\n\nFigure 4. These images of fluorescent yeast Zip1 filaments correspond to Figure 4 of Arigovindan et al. (2013). 
The two exposure levels represent strong (100%) or weak (0.25%) signals, respectively. Where indicated, the data were subjected either to a Gaussian blur with a radius of 1.00 pixel, or to deconvolution with Huygens, or to a Gaussian blur prefilter followed by deconvolution. The theoretical point spread function was based on imaging parameters supplied with ER-Decon.\n\nIn the course of testing several types and combinations of image filters (Day et al., 2016), we discovered that for very weak confocal signals, the key step was to prefilter the optical sections in ImageJ with a Gaussian blur. That prefilter dramatically improved the results obtained after deconvolution (Video S2 and Figure 1B). Fluorescent structures were no longer erased, and instead were preserved and smoothed while the background noise was largely eliminated. Most of the structures visualized by this method were biologically relevant because they persisted between movie frames (Video S2). Essentially identical results were obtained with 2D and 3D Gaussian blurs (not shown), so we use a 2D Gaussian blur because the processing is faster. This prefiltering step enables us to generate useful 4D movies from data sets that contain very weak confocal signals.\n\nApplication of the Gaussian blur prefilter requires the data to be in a suitable format. Our images are collected with a high-sensitivity detector in photon counting mode, and the pixel values are in 8-bit format. For very weak signals, typical pixel values are 0, 1, or 2 because a pixel rarely captures more than 2 photons. To obtain a meaningful blur, the numbers are scaled up to allow for intermediate integer values. We convert the images to 16-bit format and multiply by 256, resulting in typical pixel values of 0, 256, and 512. 
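A minimal numpy/scipy sketch of this format conversion, as a stand-in for the ImageJ steps (the pixel values and section size are illustrative):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# A very weak 8-bit photon-counting section: pixels hold 0, 1, or 2 photons.
section = np.zeros((64, 64), dtype=np.uint8)
section[32, 32] = 2
section[31, 33] = 1
section[33, 31] = 1

# Convert to 16-bit and multiply by 256, so typical values become
# 0, 256, and 512, leaving room for intermediate integer values.
scaled = section.astype(np.uint16) * 256

# 2D Gaussian blur prefilter with a radius (sigma) of 0.75 pixels,
# applied per optical section before deconvolution.
prefiltered = gaussian_filter(scaled, sigma=0.75)
```

Because the input is integer-typed, the blurred output also holds integers, which spreads each photon's signal over neighboring pixels as described above.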
A Gaussian blur then generates a range of intermediate values, effectively spreading the individual photon signals over multiple pixels.\n\nAn important question is how to determine whether a confocal data set is suitable for processing with a Gaussian blur prefilter. Ideally, this prefilter would be used routinely, because even if the average signal intensity is strong, some structures may have very weak signals. The concern with routine application of a Gaussian blur prefilter is that blurring might be propagated to the deconvolved images. Indeed, when the Gaussian blur prefilter was applied to signals strong enough to be preserved during normal deconvolution, we saw some blurring of the fluorescent structures (Video S1 and Figure 1A). However, this effect was minor with suitable parameters for the prefilter (see below). Our results indicate that a Gaussian blur prefilter can be used to image structures with a range of signal intensities, resulting in preservation of very weak signals without significant degradation of stronger signals.\n\nBecause the labeled structures in our 4D data sets were punctate, we tested whether a Gaussian blur prefilter would also improve deconvolution of other shapes. For this purpose, GFP was fused to a yeast endoplasmic reticulum (ER) protein that localizes mainly to the nuclear envelope (Koning et al., 1996). A single confocal Z-stack was captured at a low excitation laser setting. As shown in Figure 2A, which employed 8x line accumulation, the labeled protein appeared as prominent nuclear envelope rings with weaker labeling of peripheral ER membranes. Deconvolution of the raw data preserved the nuclear envelope rings. When a Gaussian blur was applied before deconvolution, additional signals outside the nuclear envelope were preserved. Figure 2B shows a parallel analysis with 1x line accumulation. 
In this case, the fluorescence signals were completely erased by deconvolution of the raw data, but application of a Gaussian blur before deconvolution preserved the nuclear envelope rings. We conclude that for various types of fluorescence patterns, a Gaussian blur prefilter preserves very weak confocal signals during deconvolution with Huygens.\n\nTo explore the Gaussian blur effect systematically, and to confirm that it was not limited to the particular configuration of our confocal microscopy setup, we used simulated data. 3D confocal imaging was simulated for an array of 64 faintly fluorescent point-like objects, each of which was represented by about 10–25 photons spread over multiple optical sections. Figure 3A shows projections of this simulated Z-stack before and after processing. After deconvolution with Huygens, only 7 objects were preserved, but after a Gaussian blur prefilter followed by deconvolution, all 64 objects were preserved. The total signal intensities for the objects were largely unchanged after either a Gaussian blur alone or a Gaussian blur followed by deconvolution (Supplementary Figure 1). A setting of 0.75 pixels for the radius (sigma) parameter of the prefilter preserved signals while causing very little blur in the final images (Supplementary Figure 2). When the simulation was repeated with added background noise, a Gaussian blur prefilter followed by deconvolution removed most of the noise while preserving all of the objects (Figure 3B). In this case, deconvolution in the absence of a prefilter completely erased the objects. The voxels in those simulations were 80x80x250 nm to mimic Nyquist imaging with our confocal system (Pawley, 2006), but similar results were obtained with voxels of 40x40x120 nm (Supplementary Figure 3). 
Thus, a Gaussian blur prefilter preserves weak confocal signals during deconvolution under multiple real and simulated conditions.\n\nThe paper describing the ER-Decon software showed that Huygens could give unsatisfactory results with low-SNR widefield images (Arigovindan et al., 2013). We processed low-SNR widefield microscopy data from that study with a Gaussian blur prefilter before deconvolution. The improvement was only moderate because Huygens did not erase the structures, but when the signal was weak, the prefilter did increase contrast between labeled structures and the background (Figure 4, right panels), yielding results similar to those obtained with ER-Decon (compare to Figure 4 in Arigovindan et al., 2013). The combined observations demonstrate that a Gaussian blur prefilter consistently improves deconvolution of low-SNR fluorescence data.\n\nThe reason for this beneficial effect of the prefilter is not fully understood. Gaussian blurs suppress high-frequency noise. That approach reduces pixel-to-pixel intensity variations, and it can facilitate analysis methods such as edge detection and particle tracking (Cheezum et al., 2001; Russ & Neal, 2015). A different mechanism presumably underlies the ability of a Gaussian blur to prevent loss of very weak signals during deconvolution. Huygens apparently “expects” a gradually varying distribution of the signal intensities within a set of nearby voxels, and the Gaussian blur prefilter converts the data to a form suitable for the Huygens algorithm.\n\nIs a Gaussian blur prefilter before deconvolution an acceptable procedure? Processing of images before deconvolution is not generally recommended, but a Gaussian blur is relatively safe. This filter causes a simple and well behaved transformation of the data, and it preserves the total intensity of a fluorescent structure (Burger & Burge, 2008). Gaussian blurs have previously been employed during deconvolution to suppress noise buildup (Agard et al., 1989). 
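For readers who want to experiment, the pipeline's shape can be reproduced with a hand-rolled Richardson-Lucy loop, a maximum-likelihood deconvolution related to, but not the same as, Huygens' proprietary CMLE algorithm. This toy 2D sketch shows where the prefilter sits in the pipeline; it does not reproduce Huygens' background handling or its erasure behavior:

```python
import numpy as np
from scipy.signal import fftconvolve
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)

# One faint point source (~15 expected photons) imaged through a
# Gaussian stand-in PSF, with Poisson photon-counting noise.
yy, xx = np.mgrid[-4:5, -4:5]
psf = np.exp(-(yy**2 + xx**2) / (2 * 1.5**2))
psf /= psf.sum()
truth = np.zeros((33, 33))
truth[16, 16] = 15.0
mean = np.clip(fftconvolve(truth, psf, mode="same"), 0, None)
data = rng.poisson(mean).astype(float)

def richardson_lucy(d, psf, n_iter=30, eps=1e-12):
    """Basic Richardson-Lucy maximum-likelihood deconvolution."""
    est = np.full_like(d, d.mean() + eps)
    for _ in range(n_iter):
        ratio = d / (fftconvolve(est, psf, mode="same") + eps)
        est = est * fftconvolve(ratio, psf[::-1, ::-1], mode="same")
    return est

# Deconvolve the raw data, and the data after a Gaussian blur prefilter.
raw_est = richardson_lucy(data, psf)
pre_est = richardson_lucy(gaussian_filter(data, sigma=0.75), psf)
```

The prefilter is simply one extra `gaussian_filter` call ahead of the deconvolution step; the observed benefit with very weak signals is specific to Huygens, as noted above.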
A Gaussian blur prefilter was actually proposed by the founder of SVI as a method for reducing noise sensitivity during deconvolution of confocal data (van Kempen et al., 1997). Therefore, it seems reasonable to apply this prefilter to very weak confocal signals for the novel purpose of avoiding complete erasure of biologically meaningful structures. When the signals are stronger, the Gaussian blur prefilter has a barely detectable effect on the final image, so there seems to be little risk in applying this prefilter routinely.\n\nIt could be argued that the Gaussian blur prefilter merely sidesteps a software flaw, in which case a better option would be to fix the Huygens algorithm. However, Huygens is optimized for processing images that exceed a minimum signal strength, and our confocal data sometimes fall below this threshold. Other deconvolution algorithms may perform differently. The available evidence specifically shows that the Gaussian blur prefilter is useful with Huygens. This straightforward method allows us to take advantage of the flexibility, noise removal capability, and smoothing properties of the Huygens software to process very weak fluorescence signals.\n\n\nData availability\n\nDataset 1: TIFF files for the experimental and simulated image data are provided in the compressed folder Original Image Files.zip. The following files are included: 4D_movie_1x.tif and 4D_movie_8x.tif are the 4D confocal data sets used for Figure 1 and Supplementary Figure 1, and for Video S1 and Video S2; Hmg1_1x.tif and Hmg1_8x.tif are the confocal image stacks used for Figure 2; simulation_80x80x250.tif is the simulated confocal image stack used for Figure 3A and Supplementary Figure 2; simulation_80x80x250_plus_noise.tif is the simulated confocal image stack used for Figure 3B; simulation_40x40x120.tif is the simulated confocal image stack used for Supplementary Figure 3; and Zip1_0.25%.tif and Zip1_100%.tif are the widefield image stacks used for Figure 4. 
doi: 10.5256/f1000research.11773.d163336 (Day et al., 2017).",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nFunding was provided through the Biological Systems Science Division, Office of Biological and Environmental Research, Office of Science, U.S. Dept. of Energy, under Contract DE-AC02-06CH11357.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgments\n\nThanks for helpful discussion to Marc Bruce of Microvolution.\n\n\nSupplementary material\n\nSupplementary File 1. SnapGene file for the Hmg1-GFP construct.\n\nClick here to access the data.\n\nSupplementary Figure 1. Preservation of signal intensities after image processing. For the simulated array shown in Figure 3A, the intensities after average projection for the eight objects in the top row were measured using either the raw data, or the data after a Gaussian blur with a radius of 0.75 pixels, or the data after a Gaussian blur followed by deconvolution with Huygens. Intensity values are in arbitrary units.\n\nClick here to access the data.\n\nSupplementary Figure 2. Comparison of different radius values for the Gaussian blur prefilter. The simulated confocal Z-stack in Figure 3A was subjected to a Gaussian blur prefilter using the indicated radius (sigma) values in pixels, then deconvolved with Huygens and average projected. A radius of 0.50 was not completely effective at preserving the objects, and a radius of 1.00 caused slight blurring in the final image, indicating that a radius of 0.75 was a good compromise.\n\nClick here to access the data.\n\nSupplementary Figure 3. Effect of a Gaussian blur prefilter with a smaller voxel size. A simulated confocal Z-stack was generated and processed as in Figure 3A, except that the voxel size was 40x40x120 nm.\n\nClick here to access the data.\n\nVideo S1. Movie generated with weak signals from labeled yeast organelles. 
Confocal Z-stacks were collected at 2 s intervals with a line accumulation setting of 8x. The final frame of this movie corresponds to Figure 1A.\n\nVideo S2. Movie generated with very weak signals from labeled yeast organelles. Confocal Z-stacks were collected at 2 s intervals with a line accumulation setting of 1x. The final frame of this movie corresponds to Figure 1B.\n\n\nReferences\n\nAgard DA, Hiraoka Y, Shaw P, et al.: Fluorescence microscopy in three dimensions. In Fluorescence Microscopy of Living Cells in Culture, Part B. Methods Cell Biol. Taylor DL and Wang Y, editors. Academic Press, San Diego. 1989; 30: 353–377. PubMed Abstract | Publisher Full Text\n\nArigovindan M, Fung JC, Elnatan D, et al.: High-resolution restoration of 3D structures from widefield images with extreme low signal-to-noise-ratio. Proc Natl Acad Sci U S A. 2013; 110(43): 17344–17349. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBevis BJ, Hammond AT, Reinke CA, et al.: De novo formation of transitional ER sites and Golgi structures in Pichia pastoris. Nat Cell Biol. 2002; 4(10): 750–756. PubMed Abstract | Publisher Full Text\n\nBiggs DS: 3D deconvolution microscopy. Curr Protoc Cytom. 2010; Chapter 12: Unit12.19.1–20. PubMed Abstract | Publisher Full Text\n\nBurger W, Burge MJ: Digital Image Processing: An Algorithmic Introduction using Java. Springer, New York NY. 2008. Publisher Full Text\n\nCarlton PM, Boulanger J, Kervrann C, et al.: Fast live simultaneous multiwavelength four-dimensional optical microscopy. Proc Natl Acad Sci U S A. 2010; 107(37): 16016–16022. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCheezum MK, Walker WF, Guilford WH: Quantitative comparison of algorithms for tracking single fluorescent particles. Biophys J. 2001; 81(4): 2378–2388. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDay KJ, Papanikou E, Glick BS: 4D Confocal Imaging of Yeast Organelles. 
Methods Mol Biol. 2016; 1496: 1–11. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDay KJ, La Rivière PJ, Chandler T, et al.: Dataset 1 in: Improved deconvolution of very weak confocal signals. F1000Research. 2017. Data Source\n\nDe Mey JR, Kessler P, Dompierre J, et al.: Fast 4D Microscopy. Methods Cell Biol. 2008; 85: 83–112. PubMed Abstract | Publisher Full Text\n\nHammond AT, Glick BS: Raising the speed limits for 4D fluorescence microscopy. Traffic. 2000; 1(12): 935–940. PubMed Abstract | Publisher Full Text\n\nKoning AJ, Roberts CJ, Wright RL: Different subcellular localization of Saccharomyces cerevisiae HMG-CoA reductase isozymes at elevated levels corresponds to distinct endoplasmic reticulum membrane proliferations. Mol Biol Cell. 1996; 7(5): 769–789. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLosev E, Reinke CA, Jellen J, et al.: Golgi maturation visualized in living yeast. Nature. 2006; 441(7096): 1002–1006. PubMed Abstract | Publisher Full Text\n\nPapanikou E, Day KJ, Austin J, et al.: COPI selectively drives maturation of the early Golgi. eLife. 2015; 4: pii: e13232. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPawley JB: Handbook of Biological Confocal Microscopy, Second Edition. Springer. 2006; 985. Publisher Full Text\n\nRuss JC, Neal FB: The Image Processing Handbook, Seventh Edition. CRC Press, Boca Raton FL. 2015; 957–1101. Publisher Full Text\n\nSage D, Donati L, Soulez F, et al.: DeconvolutionLab2: An open-source software for deconvolution microscopy. Methods. 2017; 115: 28–41. PubMed Abstract | Publisher Full Text\n\nvan Kempen GM, van Vliet LJ, Verveer PJ, et al.: A quantitative comparison of image restoration methods for confocal microscopy. J Microsc. 1997; 185(3): 354–365. Publisher Full Text"
}
|
[
{
"id": "23574",
"date": "19 Jun 2017",
"name": "Akihiko Nakano",
"expertise": [
"Reviewer Expertise Membrane traffic",
"live cell imaging"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis paper describes a tip on using the commercial software Huygens for fluorescent image processing. The authors show that a Gaussian prefilter is useful for preserving very weak signals when the data are deconvolved by Huygens, whose algorithm cannot adequately deal with such images. From a strictly scientific point of view, we think that options of image processing including the use of filters should be discussed for general algorithms not on particular commercial software packages, because they often contain hidden algorithms. Indeed, Huygens’s algorithms are not completely open. However, for most of the cell biologists who are not professional in mathematics, the use of a commercial software package for data processing is common and a tip on obtaining reasonably good-looking data may be helpful for Huygens users.\n\nWe have the following comments.\n\nThe deconvolution method needs be defined more clearly. Making arguments on algorithms with a black box is problematic. Can the details of maximum likelihood algorithm be disclosed, for example, by referring to a document about the method employed by Huygens? How the parameter setting was determined should also be explained.\n\nTo test the effects of Gaussian blurring on deconvolution of weak signals, the authors performed simulations with a generated set of fluorescent points and PSF. We agree the signal to noise ratio is very important here. 
In reality, the essential difference between very small numbers of measured photons and background noise must be carefully assessed. Can the authors confirm whether similar conditions are realized in actual imaging under a microscope?\n\nThe consideration of the unfavorable influence of the Gaussian prefilter is insufficient. The authors suggest using the filter routinely because it gives only a minor blurring effect on strong signals. This is too qualitative. The limits of the range over which the method is valid should be clearly stated. It will also depend on the purpose of the measurements, whether quantitative numerical analysis is required with precision, such as particle tracking and edge detection, or qualitative analysis is sufficient, such as description of organelle localization dynamics.\n\nThe remaining comments are minor.\n\nER-Decon depends on a totally different methodology and its comparison is not relevant in this paper.\n\nAlthough the Gaussian blur prefilter has good characteristics, rounding error may still cause an unexpected influence on the deconvolution algorithm when the Gaussian radius is not large enough compared to the voxel size. The authors must be aware of this and may want to mention it.\n\nIs the rationale for developing the new method (or application) clearly explained? Yes\n\nIs the description of the method technically sound? Partly\n\nAre sufficient details provided to allow replication of the method development and its use by others? Partly\n\nIf any results are presented, are all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions about the method and its performance adequately supported by the findings presented in the article? Yes",
"responses": [
{
"c_id": "2931",
"date": "07 Aug 2017",
"name": "Benjamin Glick",
"role": "Author Response",
"response": "Thanks to the referees for their thoughtful feedback. Please also see the Notes to Readers for an explanation of how we approached this revision. 1) The SVI website provides considerable information about their deconvolution algorithms, but Huygens is necessarily a black box to some degree. Most of the deconvolution parameter settings that we used were either automatic or determined by the properties of the imaging system. The SNR value is crucial, and was chosen empirically for each data set as described in the Methods. 2) The simulated data were chosen to give results similar to those we obtain from imaging yeast organelles by confocal microscopy. Noise levels in Figure 3B are actually higher than the levels we typically observe. 3) Apart from minor blurring of the final deconvolved structures, we have not noticed ill effects of a Gaussian blur prefilter under any imaging conditions. We have endeavored to emphasize this point in the text. As documented in the paper, our method preserves quantitative information about fluorescence intensities. 4) ER-Decon uses a different algorithm, but like Huygens with a Gaussian blur prefilter, it serves the purpose of deconvolving very weak fluorescence signals. The ER-Decon paper was the first to address this topic in depth and is therefore relevant for our discussion. 5) Empirically, we find that a radius of 0.75 – 1.00 pixel in the Gaussian blur prefilter yields good results. This point is now emphasized more clearly in the text. Note that by using 16-bit data, we avoid the rounding errors that might occur when working with smaller integer values"
}
]
},
{
"id": "23779",
"date": "26 Jun 2017",
"name": "Vladimir Denic",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nIn this manuscript, Day et al. provide a protocol for preventing signal loss resulting from deconvolution of low-intensity fluorescence microscopy images by the Huygens deconvolution software. The authors convincingly demonstrate that image pre-processing using a Gaussian filter prevents loss of low-intensity fluorescent objects in the deconvolved image. Their method is well-described except for the omission of a few details (see below). Their experimental validation demonstrates the power of the method but might be further strengthened by small additional experiments that confirm a one-to-one correspondence between low-intensity structures derived by their image processing and the same structures derived by detection of higher-intensity signals.\n\nDescription of methods: The authors clearly describe the rationale for their method and how it fits with pre-existing deconvolution technologies. The flow of their analysis pipeline is generally well-described, though some important details are left out:\nHow is the standard deviation (sigma) of the Gaussian filter determined? Is it empirically optimized to produce minimal loss of signal? After optimization for one image, can the same sigma be reliably applied to other images from the same dataset with consistent results?\n\nThe authors should address possible artifacts that may arise from spreading fluorescence detected by an SPC to adjacent pixels. 
Is it appropriate to artificially spread photons from single pixels to adjacent regions that may not contain excited fluorophores? How does this relate point-spread fluorescence of multiple photons to adjacent regions in high-intensity images?\n\nFigures 3-4 contain all essential information for interpretation of the figure and do not require additional work. Minor adjustments are recommended for Figures 1 and 2, and a small additional experiment is recommended for Figure 1:\nFigure 1:\nBecause of mobility of imaged structures within the cell, it is difficult to fully ascertain whether blurring and deconvolution of low-intensity fluorescence results in the appearance of biologically relevant structures or artificial creation of structures from background noise. This issue could in principle be resolved by: 1. repeating the experiment in Figure 1 with fixed cells in which the structures are immobile, allowing more direct comparison between visualized structures with 8 scans vs. 1 scan; or (to avoid potential cell fixation artifacts introduced by the above approach) 2. a high-abundance protein that colocalizes with the dim structures could be fluorescently labeled in a second channel and co-visualized (alternatively, a second copy of the same protein could be fused to a bright fluor). This would help validate the expectation that the blurred, deconvolved weak signal co-localizes with a stronger marker for the same structure.\n\nBlurred, pre-deconvolution intermediate images should be shown for both the high-intensity (8 scans) and low-intensity (1 scan) images.\nFigure 2:\nBlurred, pre-deconvolution intermediate images should be shown as indicated for Figure 1.\n\nIs the rationale for developing the new method (or application) clearly explained? Yes\n\nIs the description of the method technically sound? Partly\n\nAre sufficient details provided to allow replication of the method development and its use by others? 
Partly\n\nIf any results are presented, are all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions about the method and its performance adequately supported by the findings presented in the article? Yes",
"responses": [
{
"c_id": "2930",
"date": "07 Aug 2017",
"name": "Benjamin Glick",
"role": "Author Response",
"response": "Thanks to the referees for their thoughtful feedback. Please also see the Notes to Readers for an explanation of how we approached this revision. 1) The radius (sigma) value for the Gaussian blur was optimized empirically. We found that a radius of 0.75 – 1.00 pixels worked well for a variety of real and simulated confocal images. This point has been clarified in the text. 2) The pixel size in our images was 80 nm, which is below the resolution limit of the confocal system (~200 nm). With stronger signals, the photons from a fluorescent point source would be spread over multiple pixels. Therefore, our Gaussian blur prefilter generates patterns akin to those seen with well-sampled images. 3) We find that the best way to determine if a structure is biologically relevant is to ask whether it can be tracked between successive frames in a video. By this criterion, the videos associated with the paper confirm that most of the structures seen in the blurred and deconvolved images are indeed biologically relevant. 4) The blurred raw images are actually not very informative, and an extra panel would disrupt the 1:1 correspondence between the figures and the videos. Therefore, unless the referee feels strongly that the blurred images are essential, we would prefer to omit them."
}
]
}
] | 1
|
https://f1000research.com/articles/6-787
|
https://f1000research.com/articles/6-1333/v1
|
07 Aug 17
|
{
"type": "Research Article",
"title": "Treatment of femoral shaft fractures with monoaxial external fixation in polytrauma patients",
"authors": [
"Gianluca Testa",
"Domenico Aloj",
"Alessandro Ghirri",
"Eraclite Petruccelli",
"Vito Pavone",
"Alessandro Massé",
"Domenico Aloj",
"Alessandro Ghirri",
"Eraclite Petruccelli",
"Vito Pavone",
"Alessandro Massé"
],
"abstract": "Background: Femoral shaft fractures, typical in younger people, are often associated with polytrauma followed by traumatic shock. In these situations, despite intramedullary nailing being the treatment of choice, external fixation could be used as the definitive treatment. The aim of this study is to report evidence regarding definitive treatment of femoral shaft fractures with monoaxial external fixation. Methods: Between January 2006 and December 2015, 83 patients with 87 fractures were treated at the Department of Orthopaedics and Traumatology CTO of Turin, with a monoaxial external fixation device. Mean age at surgery, type of fracture, mean follow-up, time and modalities of treatment, non-weight bearing period, average healing, external fixation removal time, and complications were reported. Results: The average patient age was 31.43±15.19 years. In 37 cases (42.53%) the right femur was involved. 73 (83.91%) fractures were closed, and 14 (16.09%) were open. The average follow-up time was 61.07±21.86 weeks. In 68 (78.16%) fractures the fixation was carried out in the first 24 hours, using a monoaxial external fixator. In the remaining 19 cases, the average delay was 6.80±4.54 days. Mean non-weight bearing time was 25.82±27.66 days (ranging from 0 to 120). The 87 fractures united at an average of 23.60±11.37 weeks (ranging from 13 to 102). The external fixator was removed after an average of 33.99±14.33 weeks (ranging from 20 to 120). Reported complications included 9.19% of delayed union, 1.15% of septic non-union, 5.75% of malunion, and 8.05% cases of loss of reduction. Conclusions: External fixation of femoral shaft fractures in polytrauma is an ideal method for definitive fracture stabilization, with minimal additional operative trauma and an acceptable complication rate.",
"keywords": [
"femoral shaft fractures",
"polytrauma",
"monoaxial external fixator",
"definitive treatment"
],
"content": "Introduction\n\nFemoral shaft fractures are typical in younger people1, and can be caused by car accidents, falling down from heights or gunshot wounds2. Several intensive traumatic agents frequently bring about comminuted and open femoral shaft fractures3. These fractures are typically associated with polytrauma, followed by traumatic shock4.\n\nIntramedullary nailing is considered to be the treatment of choice for fixation of most femoral shaft fractures5–7. However, there are instances where fixation with intramedullary nailing cannot not be performed, for example during severe polytrauma, when the general condition of patients precludes major surgery and there are severe open fractures with extensive soft tissue damage. In these situations, external fixation is used for temporary fixation. Surgical conversion from external fixation to intramedullary nailing within one to two weeks of the injury is the standard practice8; however, due to financial constraints, in large parts of the world external fixation of femoral shaft fractures is often the definitive treatment9.\n\nThe aim of this study is to report monoaxial external fixation as the definitive treatment of femoral shaft fractures.\n\n\nMethods\n\nBetween January 2006 and December 2015, 160 patients with 182 femoral shaft fractures were treated at the Department of Orthopaedics and Traumatology CTO of Turin, with monoaxial external fixation, Orthofix Procallus®. The study was conducted according to the principles expressed in the Declaration of Helsinki. Only fractures with external fixation as the definitive treatment were included. Patients who did attend follow-ups or who died for reasons unrelated to the fracture, such as cardiopulmonary arrest or septicaemia, were ruled out.\n\nData on 83 patients with 87 fractures were gathered retrospectively, from hospital records. Follow-ups were carried out for a minimum period of 39 weeks (9 months), or until bone union. 
The reasons for injury were motor vehicle accidents in all cases.\n\nAge at surgery, gender, injured side, location and type of fracture, AO classification, mean follow-up time and modalities of treatment, non-weight bearing period, average union time, and external fixation removal time were recorded. Bone union was clinically and radiographically evaluated, according to common criteria in the literature. At clinical assessment, fractures were considered healed when there was no movement or pain on stress at the fracture site. Radiographic union was achieved in the presence of uniform and continuous ossification of callus, with consolidation and development of trabeculae across the fracture site10.\n\nUnion time of more than 26 weeks in closed fractures and 39 weeks in open fractures was considered a delayed union11–13. The diagnosis of non-union was made in the presence of abnormal movement at the fracture site at least 9 months after the injury and with no progressive signs of healing for at least 3 months, despite continuing treatment12. Malunion was defined by one of the following criteria: shortening of more than 2.5 cm, angulation of more than 10°, or rotational malalignment of more than 5°. Major and minor complications with secondary surgical procedures were noted.\n\n\nResults\n\nThe average patient age was 31.43±15.19 years (ranging from 14 to 87). There were 66 men (79.52%) and 17 women (20.48%). Four patients (4.82%) had a bilateral femur fracture. In 37 cases (42.53%) the right femur was involved, and in 50 cases (57.47%) the left femur. In 14 cases (16.09%) the fracture was located in the proximal third of the femur, in 57 cases (65.52%) in the middle third of the femur and in 16 cases (18.39%) in the distal third. 73 fractures (83.91%) were closed, and 14 (16.09%) were open (Table 1). 
Following the AO classification of fractures, there were: 4 (4.60%) 32A1, 13 (14.94%) 32A2, 21 (24.14%) 32A3, 7 (8.05%) 32B1, 11 (12.64%) 32B2, 12 (13.79%) 32B3, 3 (3.45%) 32C1, 3 (3.45%) 32C2, and 13 (14.94%) 32C3 (Table 2). Of 14 open fractures, following Gustilo-Anderson classification, there were: 6 GI (42.86%), 4 GII (28.57%), 1 GIIIa (7.14%), 2 GIIIb (14.29%), and 1 GIIIc (7.14%) (Table 3).\n\nThe average follow-up time was 61.07±21.86 weeks (ranging from 28 to 160). In 68 fractures (78.16%) the fixation was carried out in the first 24 hours, using a monoaxial external fixator. In the remaining 19 cases, the average delay was 6.80±4.54 days (ranging from 3 to 20). Of these 19 patients, 7 (8.05%) had skeletal traction and 12 (13.79%) a stabilization with temporary external fixation. Mean surgery duration was 55.36±11.13 minutes (ranging from 35 to 80).\n\nThe patients were mobilized with crutches as soon as possible, with a gradual increase of weight bearing within tolerable limits of pain. Weight bearing was not immediately allowed in patients with other associated lower limb fractures or severe systemic complications. Mean non-weight bearing time was 25.82±27.66 days (ranging from 0 to 120).\n\nThe 87 fractures united at an average of 23.60±11.37 weeks (ranging from 13 to 102). The external fixator was removed after sufficient callus was seen at an average of 33.99±14.33 weeks (ranging from 20 to 120) (Table 4). See Figure 1 for the progression of a patient treated with external fixation after a femoral shaft fracture.\n\nA. Femoral shaft fracture in a patient, AO type 32C3. B. Post-operative radiographic evaluation in AP view. C. Post-operative radiographic evaluation in lateral view. D. Radiographic evaluation after three months from surgery. E. External fixator removal after 7 months from surgery.\n\nExcluding the delayed unions, 79 (90.8%) fractures united at an average time of 20.80±3.59 weeks (ranging from 13–30 weeks). 
Eight fractures (9.2%) had delayed union, with an average union time of 51.25±21.97 weeks (ranging from 36–102 weeks). All of the delayed unions occurred in closed comminuted fractures, with or without bone loss in multiply injured patients. Bone loss of 5 cm or more was noted in 9 patients (10.34%) (Table 5).\n\nSecondary surgical procedures were performed in eight cases of delayed union (9.19%): 2 corticocancellous grafts 12 months after injury; one fibular graft 11 months after injury; 2 applications of ring fixator 8 months after injury; 2 applications of circular hexapod external fixator 8 months after injury; and 1 reduction and fixation procedure with plates and screws after 15 months. Septic non-union occurred in two fractures (2.3%), and treatment involved surgical debridement and application of a ring fixator. Malunion occurred in five (5.75%) cases: two shortenings of 3 cm and one varus deformity corrected with application of a ring external fixator; two recurvatum deformities associated with an internal rotation deformity of 20°, treated with a hexapod external fixator. Two re-fractures occurred (2.3%), which were successfully treated with repeat monoaxial external fixation.\n\nLoss of reduction after external fixation was observed in seven cases (8.05%) and treated with an external fixator reset.\n\nOne major complication, a decrease in the range of motion of the knee, occurred in one patient (1.15%). The fracture was located in the distal third of the femur. In this case a Judet arthromiolysis14,15 was performed.\n\nMinor complications, namely pin-tract infections, were noted in 12 (13.8%) cases but did not influence the outcome; they were managed by improvement of hygiene and antibiotic therapy. Breakage of Schanz screws was reported in one case (1.15%) and successfully managed by debridement, removal, and re-insertion of the screw. 
One patient (1.15%) had pain at the fracture location after removal of the external fixator, so it was repositioned for another 2 months (Table 6).\n\n\nDiscussion\n\nIntramedullary nailing for the treatment of femoral shaft fractures was introduced by Groves in the United Kingdom and Küntscher in Germany16–18. Today, reduction and fixation with reamed intramedullary nailing is considered the gold standard for the treatment of most femoral shaft fractures5–7.\n\nExternal fixation is not widely used for femoral shaft fractures, and there are few studies in the literature that have reported this use. External fixation has been generally reserved for initial stabilization of polytrauma patients, or for open fractures19. Early stabilization in polytrauma patients could decrease morbidity and mortality, avoiding pulmonary complications, including pneumonia, fat embolism and acute respiratory failure20,21, although a delay of surgery up to 72 hours does not increase the risk of complications22. The reported benefits included improved patient mobility, improvement of pulmonary hygiene, decreased pain and a reduced need for narcotics23. Moreover, the procedure is rapid and could be performed in around 30 minutes, even though in our study the mean duration was 55 minutes, because more time was needed to treat soft tissue and skin. This is particularly important for patients in critical condition, and in cases of open fractures with relevant damage to the vascular supply of the bone24.\n\nOpen fractures are usually associated with severe comminution at the fracture site and bone loss25, so external fixation is the treatment of choice because it stabilizes the fracture and allows any soft-tissue wound to be treated daily, as necessary19. In these cases, unlike internal fixation devices24, external fixation spares uninjured tissue planes and the periosteal circulation, allowing vascular repair26. 
In our series, only 14 (16.09%) fractures were open, more than 50% of which were types II, IIIa, IIIb, and IIIc. The treatment of such fractures was associated with increased risk of infection and delayed union27.\n\nWe applied the concept of damage control surgery, based on management of multiply-injured patients with associated fractures of long bones and pelvic fractures. This concept consists of an early temporary stabilization of unstable fractures, control of haemorrhage and treatment of possible abdominal or intracranial lesions. When the condition of the patient has been optimized, it is possible to perform a delayed definitive management of fractures. The delayed, definitive stabilization procedure of femoral fractures that has been most commonly used was removal of the external fixation and intramedullary nailing of the fracture28. In our series, we performed definitive external fixation within 24 hours in 78% of cases, while in the remaining 22% skeletal traction or temporary external fixation was performed.\n\nMean healing time in our study of femoral fractures treated by external fixation was 23.60 weeks (ranging from 13 to 102), similar to previous studies9,19,29,30. In most reported cases, patients had been given some form of after-support (braces, casts) after approximately 3 to 7 months8,26. In our series, removal of external fixation was performed at an average time of 34 weeks, with application of a brace until clinical stability of the fracture.\n\nMain complications reported in the literature were pin-tract infections and contracture of the knee joint29,30; the risk of these can be minimized with good pin hygiene, antibiotic therapy and knee exercises29. 
Pin-tract infections were registered in 13.8% of cases in our study, while there was only one severe contracture of the knee joint, treated with Judet arthromyolysis15.\n\nOther complications, such as delayed union and re-fractures, were successfully resolved with secondary surgical procedures, such as corticocancellous grafts or application of a ring fixator9. In one case, a fibular graft was necessary. Malunion was typically treated by changing the external fixation. In only one case was the external fixator removed and an internal fixation device applied to correct a recurrent valgus deformity.\n\nAlthough external fixation is considered a safe procedure to achieve temporary rigid stabilization in patients with multiple injuries at risk of an adverse outcome8, we performed external fixation as definitive management because, for patients with polytrauma, we preferred to avoid another surgical procedure such as conversion to an internal device. Indeed, our rate of septic nonunion was 2.3%, which is comparable to the rate seen with intramedullary nailing8,19. Septic nonunion was managed with surgical debridement and application of a ring fixator.\n\nIn conclusion, external fixation of femoral shaft fractures in polytrauma patients is an ideal method of fracture stabilization, with minimal additional operative trauma. Satisfactory outcomes can be achieved using a damage control strategy for these fractures before definitive external fixation, with acceptable complication rates and a reduced need for other open and invasive surgical procedures. A strict postoperative protocol, including early weight-bearing, intensive physical therapy and protection of the bone after complete removal of the fixator, needs to be followed. 
Pin-tract infections are the main complication and can be treated by local wound care and antibiotic therapy.\n\n\nData availability\n\nDataset 1: Data and details of the 83 patients who underwent treatment for femoral shaft fractures, used as a basis for the findings in this study. DOI: 10.5256/f1000research.11893.d17064531\n\n\nEthical statement\n\nThis study has been conducted according to the principles expressed in the Declaration of Helsinki.\n\nEthical approval was not necessary in this study because the data and clinical pictures have been sufficiently anonymised. Written informed consent for anonymous publication of their clinical details and clinical images was obtained from all patients. The Department Chief, Alessandro Massé, authorized the authors to consult patient records, allowing their use for this study. CTO Hospital of Turin owns the patient data that was recorded for Gianluca Testa and Alessandro Ghirri.",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nAcknowledgements\n\nWe thank Andrea Vescio for assistance in statistical analysis, and Giuseppe Sessa for editing the manuscript. Both are affiliated with the University of Catania.\n\n\nReferences\n\nWolinsky PR, McCarty E, Shir Y, et al.: Reamed intramedullary nailing of the femur: 551 cases. J Trauma. 1999; 46(3): 392–9. PubMed Abstract\n\nJurkovich JG, Carrico CJ: Trauma-Management of Acutely Injured Patients. In: Sabiston D ed. Textbook of surgery: The Biological basis of modern surgical practice-fifteenth edition. W.B. Saunders Company, 1997; 296–340.\n\nAgarwal-Harding KJ, Meara JG, Greenberg SL, et al.: Estimating the global incidence of femoral fracture from road traffic collisions: a literature review. J Bone Joint Surg Am. 2015; 97(6): e31. PubMed Abstract | Publisher Full Text\n\nHalvorson JJ, Pilson HT, Carroll EA, et al.: Orthopaedic management in the polytrauma patient. Front Med. 2012; 6(3): 234–42. PubMed Abstract | Publisher Full Text\n\nWinquist RA, Hansen ST Jr, Clawson DK: Closed intramedullary nailing of femoral fractures. A report of five hundred and twenty cases. J Bone Joint Surg Am. 1984; 66: 529–39. PubMed Abstract | Publisher Full Text\n\nBishop JA, Rodriguez EK: Closed intramedullary nailing of the femur in the lateral decubitus position. J Trauma. 2010; 68(1): 231–5. PubMed Abstract | Publisher Full Text\n\nLi AB, Zhang WJ, Guo WJ, et al.: Reamed versus unreamed intramedullary nailing for the treatment of femoral fractures: A meta-analysis of prospective randomized controlled trials. Medicine (Baltimore). 2016; 95(29): e4248. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPairon P, Ossendorf C, Kuhn S, et al.: Intramedullary nailing after external fixation of the femur and tibia: a review of advantages and limits. Eur J Trauma Emerg Surg. 
2015; 41(1): 25–38. PubMed Abstract | Publisher Full Text\n\nTuttle MS, Smith WR, Williams AE, et al.: Safety and efficacy of damage control external fixation versus early definitive stabilization for femoral shaft fractures in the multiple-injured patient. J Trauma. 2009; 67(3): 602–605. PubMed Abstract | Publisher Full Text\n\nKessel L: Clinical and radiographic diagnosis of Watson-Jones’ fractures and joint injuries. Wilson, Edinburgh, 1992; 258–9.\n\nZlowodzki M, Prakash JS, Aggarwal NK: External fixation of complex femoral shaft fractures. Int Orthop. 2007; 31(3): 409–13. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTaylor J: Delayed union and nonunion of fractures. In: Crenshaw A ed. Campbells’ operative orthopedics. St Louis: Mosby, 1992; 858–84.\n\nRixen D, Steinhausen E, Sauerland S, et al.: Randomized, controlled, two-arm, interventional, multicenter study on risk-adapted damage control orthopedic surgery of femur shaft fractures in multiple-trauma patients. Trials. 2016; 17: 47. PubMed Abstract | Publisher Full Text | Free Full Text\n\nOliveira VG, D'Elia LF, Tirico LE, et al.: Judet quadricepsplasty in the treatment of posttraumatic knee rigidity: long-term outcomes of 45 cases. J Trauma Acute Care Surg. 2012; 72(2): E77–80. PubMed Abstract | Publisher Full Text\n\nNoda M, Saegusa Y, Takahashi M, et al.: Decreasing Complications of Quadricepsplasty for Knee Contracture after Femoral Fracture Treatment with an External Fixator: Report of Four Cases. J Orthop Case Reports. 2013; 3(1): 3–6. PubMed Abstract | Free Full Text\n\nScannell BP, Waldrop NE, Sasser HC, et al.: Skeletal traction versus external fixation in the initial temporization of femoral shaft fractures in severely injured patients. J Trauma. 2010; 68: 633–640. PubMed Abstract | Publisher Full Text\n\nRozbruch SR, Müller U, Gautier E, et al.: The evolution of femoral shaft plating technique. Clin Orthop Relat Res. 1998; (354): 195–208. 
PubMed Abstract\n\nBabalola OM, Ibraheem GH, Ahmed BA, et al.: Open Intramedullary Nailing for Segmental Long Bone Fractures: An Effective Alternative in a Resource-restricted Environment. Niger J Surg. 2016; 22(2): 90–5. PubMed Abstract | Publisher Full Text | Free Full Text\n\nStojiljković P, Golubović Z, Mladenović D, et al.: External skeletal fixation of femoral shaft fractures in polytrauma patients. Med Pregl. 2008; 61(9–10): 497–502. PubMed Abstract | Publisher Full Text\n\nBone LB, Johnson KD, Weigelt J, et al.: Early versus delayed stabilization of femoral fractures: a prospective randomized study. 1989. Clin Orthop Relat Res. 2004; (422): 11–6. PubMed Abstract\n\nRichards JE, Matuszewski PE, Griffin SM, et al.: The Role of Elevated Lactate as a Risk Factor for Pulmonary Morbidity After Early Fixation of Femoral Shaft Fractures. J Orthop Trauma. 2016; 30(6): 312–8. PubMed Abstract | Publisher Full Text\n\nKazakos KJ, Verettas DJ, Tilkeridis K, et al.: External fixation of femoral fractures in multiply injured intensive care unit patients. Acta Orthop Belg. 2006; 72: 39–43. PubMed Abstract\n\nKobbe P, Micansky F, Lichte P, et al.: Increased morbidity and mortality after bilateral femoral shaft fractures: myth or reality in the era of damage control? Injury. 2013; 44(2): 221–5. PubMed Abstract | Publisher Full Text\n\nSabharwal S, Kishan S, Behrens F: Principles of external fixation of the femur. Am J Orthop (Belle Mead NJ). 2005; 34(5): 218–23. PubMed Abstract\n\nLong WT, Chang W, Brien EW: Grading system for gunshot injuries to the femoral diaphysis in civilians. Clin Orthop Relat Res. 2003; (408): 92–100. PubMed Abstract\n\nDabezies EJ, D'Ambrosia R, Shohji H, et al.: Fractures of the femoral shaft treated by external fixation with the Wagner device. J Bone Joint Surg Am. 1984; 66: 360–4. 
PubMed Abstract | Publisher Full Text\n\nDella Rocca GJ, Crist BD: External fixation versus conversion to intramedullary nailing for definitive management of closed fractures of the femoral and tibial shaft. J Am Acad Orthop Surg. 2006; 14(10 Spec No): S131–5. PubMed Abstract\n\nD'Alleyrand JC, O'Toole RV: The evolution of damage control orthopedics: current evidence and practical applications of early appropriate care. Orthop Clin North Am. 2013; 44(4): 499–507. PubMed Abstract | Publisher Full Text\n\nBabar IU: External fixation in closed comminuted femoral shaft fractures in adults. J Coll Physicians Surg Pak. 2004; 14(9): 533–5. PubMed Abstract\n\nBonnevialle P, Manset P, Cariven P, et al.: Single-plane external fixation of fresh fractures of the femur: critical analysis of 53 cases. Rev Chir Orthop Reparatrice Appar Mot. 2005; 91(5): 446–56. PubMed Abstract | Publisher Full Text\n\nTesta G, Aloj D, Ghirri A, et al.: Dataset 1 in: Definitive treatment of femoral shaft fractures with monoaxial external fixator in polytrauma. F1000Research. 2017. Data Source"
}
|
[
{
"id": "24819",
"date": "10 Aug 2017",
"name": "Dario Pitino",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nI suppose this choice was due to critical conditions of patients. My suggestion is to specify this concept in the materials and methods. This may explicate also the validity of this kind of definitive treatment, associated with high rate of complications.\nI consider the manuscript valid and suitable for publication.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
},
{
"id": "24818",
"date": "29 Aug 2017",
"name": "Lorenza Marengo",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis is a well written manuscript on femoral shaft fracture in polytrauma patients treated by monoaxial external fixation as definitive treatment. The sample characteristic are adequate and the study is well designed, I suggest to define more precisely the amount of loss of range of motion when considered as a complication as well as when arhrtomyolisis was indicated.\nFurthermore, it could be interesting to compare the average duration of the surgical procedure in closed and open fracture groups. The reported duration for external fixator application as initial stabilization device for femoral shaft fractures in polytrauma patients is around 30 minutes. In this study, the average duration is higher (55 minutes). The authors of the study suggested that this difference in duration is explained by the fact that “more time is needed to treat soft tissue and skin”. However, the difference might be the result of using the external fixation as definitive treatment, which requires better reduction and more stable construct. If this is the case, the surgical time might increase also for closed fracture group.\nIn conclusion, this manuscript reported external fixation to be a valid option as definitive treatment for femoral shaft fracture in polytrauma patients. 
This technique allows early fracture stabilization and eliminates the need for additional surgery to obtain definitive fracture reduction and stabilization in 78% of the patients.\nThe topic of this manuscript is interesting and deserves serious consideration for publication.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate? Yes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
},
{
"id": "24820",
"date": "31 Aug 2017",
"name": "Saverio Comitini",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe manuscript treated about definitive monoaxial external fixation for shaft femural fractures in polytrauma cases. I agree with the choice of Authors for this option of treatment, useful to minimize the surgical duration time and blood loss. External fixation technique is easy and reproducible, but associated with minor complications, as superficial pin infections. This is the reason why surgeons are not encouraged to use this device as definitive solution, and also for the poor attitude to allow an early-weight bearing.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-1333
|
https://f1000research.com/articles/6-1134/v1
|
17 Jul 17
|
{
"type": "Research Note",
"title": "ChemMaps: Towards an approach for visualizing the chemical space based on adaptive satellite compounds",
"authors": [
"J. Jesús Naveja",
"José L. Medina-Franco",
"J. Jesús Naveja"
],
"abstract": "We present a novel approach called ChemMaps for visualizing chemical space based on the similarity matrix of compound datasets generated with molecular fingerprints’ similarity. The method uses a ‘satellites’ approach, where satellites are, in principle, molecules whose similarity to the rest of the molecules in the database provides sufficient information for generating a visualization of the chemical space. Such an approach could help make chemical space visualizations more efficient. We hereby describe a proof-of-principle application of the method to various databases that have different diversity measures. Unsurprisingly, we found the method works better with databases that have low 2D diversity. 3D diversity played a secondary role, although it becomes increasingly relevant as 2D diversity increases. For less diverse datasets, taking as few as 25% satellites seems to be sufficient for a fair depiction of the chemical space. We propose to iteratively increase the satellites number by a factor of 5% relative to the whole database, and stop when the new and the prior chemical space correlate highly. This Research Note warrants the full application of this method for several datasets.",
"keywords": [
"chemical space",
"data visualization",
"epigenetics",
"principal components analysis",
"similarity matrix"
],
"content": "Introduction\n\nVisual representation of chemical space has multiple implications in drug discovery for virtual screening, library design and comparison of compound collections, among others1. Amongst the multiple methods to explore chemical space, principal component analysis (PCA) of pairwise similarity matrices computed with structural fingerprints has been used to analyze compound datasets2,3. A drawback of this approach is that it becomes impractical for large libraries due to the large dimension of the similarity matrix4. Other approaches use molecular representations different from structural fingerprints, such as physicochemical properties or complexity descriptors, or methods different from PCA, such as multidimensional-scaling and neural networks5,6.\n\nIn representation of the chemical space based on PCA there have been “chemical satellite” approaches, such as ChemGPS, which select satellites molecules that might not be included in the database to visualize, but have extreme features that place them as outliers, with the intention to reach as much of the chemical space as possible7–9. Although we concur with the fact that not all compounds in a compound data set should be necessary to generate a meaningful chemical space, there are still obvious limitations of using a fixed set of satellites to which the user is blinded.\n\nWe therefore suggest the hybrid approach, ChemMaps, in which a portion of the database to be represented is used as satellite, thereby decreasing the computational effort required to compute the similarity matrix without losing adaptability of the method to any particular database. 
Since it is expected that more diverse sets would require more satellites, a second goal of this study was to qualitatively explore the relationship between the internal diversity of compound datasets and the fraction of compounds required as satellites in order to generate a good approximation of the chemical space.\n\n\nMethods\n\nTable 1 summarizes the six compound data sets considered in this study. Note that small median similarity values imply higher diversity. The datasets were selected from a large-scale profiling study of epigenetic datasets (unpublished study, Naveja JJ and Medina-Franco JL) with relevance in epigenetic drug discovery. We also included DrugBank as a diverse control dataset10. Briefly, we selected focused libraries of inhibitors of DNMT1 (a DNA methyltransferase; diverse 2D and 3D), L3MBTL3 (a histone methylation reader; diverse 3D, less diverse 2D), SMARCA2 (a chromatin remodeller; diverse 2D, less diverse 3D), and CREBBP (a histone acetyltransferase; less diverse both 2D and 3D). Datasets were selected based on their different internal diversity (as measured with the Tanimoto index/MACCS keys for 2D measurements and Tanimoto combo/OMEGA-ROCS for 3D; see Figure S1 in Supplementary File 1). Data sets in this work have approximately the same number of compounds except for HDAC1 and DrugBank, which were selected to benchmark the method in larger databases (Table 2). 
We evaluated 2D diversity using the median of Tanimoto/MACCS similarity measures in KNIME version 3.3.2, and 3D diversity using the median of the Combo Score from ROCS, version 3.2.2, and OMEGA, version 2.5.1 (OpenEye software)11–14.\n\naMedian of Tanimoto/MACCS similarity; bMedian of Tanimoto/ECFP4 similarity; cMedian of OMEGA-ROCS similarity; NC: not calculated\n\nTo test the hypothesis of this work we followed two main approaches: A) Backwards approach: start by computing the full similarity matrix of each data set and remove compounds systematically; and B) Forward approach: add compounds to the similarity matrix until the reduced number of compounds (called ‘satellites’) required to reach a visualization of the chemical space very similar to that obtained from the full similarity matrix is found. The second approach would be the usual and realistic approach from a user standpoint. Each method is further detailed in the next two subsections.\n\nThe following steps were implemented in an automated workflow in KNIME, version 3.3.215:\n\n1. For each compound in the dataset with N compounds, generate the N × N similarity matrix using Tanimoto/extended connectivity fingerprints radius 4 (ECFP4) generated with CDK KNIME nodes.\n\n2. Perform PCA of the similarity matrix generated in step 1 and select the first 2 or 3 principal components (PCs).\n\n3. Compute all pairwise Euclidean distances based on the scores of the 2 or 3 PCs generated in step 2. This set of distances is later used as the reference or ‘gold standard’.\n\n4. Repeat steps 1 to 3 with one compound as satellite, generating an N × 1 similarity matrix. The first compound was selected randomly. In this case, for example, it is only possible to calculate one PC, but as the number of satellites increases, we can again compute 2 or 3 PCs.\n\n5. 
Calculate the correlation between the pairwise distances generated in step 3 using the whole matrix (i.e., the gold standard) and those obtained in step 4.\n\n6. Iterate over steps 4 and 5, increasing the number of satellites one by one until N - 1 satellites are reached. To select the second, third, etc. compounds, two approaches were followed: selecting compounds at random and selecting compounds with the largest diversity relative to those previously selected (i.e., a Max-Min approach).\n\n7. Estimate the proportion of satellite compounds required to preserve a ‘high’ correlation (of at least 0.9).\n\n8. Repeat the prior steps five times for each dataset in order to capture the stability of the method.\n\nThe former approach is useful only for validating the methodology as a proof-of-principle. However, the obvious objective of a satellite approach is to avoid calculating the complete similarity matrix (i.e., step 1 of the backwards approach). To this end, we developed a satellite-adding or forward approach, in contrast with the formerly introduced backwards approach. We started with 25% of the database as satellites and at each iteration we added 5% until the correlation of the pairwise Euclidean distances remained high (at least 0.9).\n\n\nResults\n\nIn this pilot study, we assessed a few variables to tune the method, such as the number of PCs used (2 or 3) and the selection of satellites at random or by diversity. We found that selection at random is more stable, especially in less diverse datasets (Figure 1 and Figure 2; Figure S2 and Figure S3). Likewise, with 2 PCs the performance is slightly better and more stable (compare Figure 1 and Figure 2 against Figure S2 and Figure S3).\n\nThe correlation with the results from the whole matrix was calculated with increasing numbers of satellites. 
Each colored line represents one of the five random sets.\n\nThe correlation with the results from the whole matrix was calculated with increasing numbers of satellites. Each colored line represents one of the five random sets.\n\nTherefore, from this point onwards we will focus on the results of the random satellite selection using 2 PCs (Figure 2). From the four datasets, we conclude that for datasets with lower 2D diversity (CREBBP and L3MBTL3, see Table 1), around 25% of satellite compounds are enough to obtain a high correlation (≥ 0.9) with the gold standard (i.e., PCA on the whole matrix), whereas for 2D-diverse datasets (i.e., DNMT1 and SMARCA2) up to 75% of the compounds could be needed to ensure a high correlation. Nonetheless, even for these datasets, using 25% of the compounds as satellites the correlation with the gold standard is already between 0.6 and 0.8, and using 50% of the compounds as satellites it is between 0.7 and 0.9. Hence, the higher the diversity of a dataset (especially 2D), the higher the number of satellites required.\n\nEvidently, a useful method for reducing computing time and disk usage should not rely on the PCA of the whole similarity matrix to determine an adequate number of satellites for each dataset. With that in mind, we decided to design a method that starts with a given percentage of the database as satellites, and then keeps adding a proportion of them until the correlation between the former and the updated data is at least 0.9. In Figure 3 we depict this approach on the same databases in Table 1 for step sizes of 5%, starting from zero. As in the backwards method, around 5 steps (25% of the database) are usually necessary to reach a stable, high correlation between steps. Figure S4 shows that for step sizes of 10% there is no further improvement. 
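The satellite-adding loop can be sketched in a short, self-contained Python script. This is only an illustrative toy, not the authors' KNIME workflow: random bit sets stand in for ECFP4 fingerprints, and the PCA projection is omitted for brevity, with rows of the partial similarity matrix used directly as coordinates for the distance comparison.

```python
# Toy sketch of the forward ("satellite-adding") approach: grow the set of
# satellite compounds until pairwise distances computed from the partial
# N x k similarity matrix correlate highly (r >= 0.9) with those from the
# full N x N matrix. Random bit sets stand in for ECFP4 fingerprints and
# the PCA step is omitted; this is an illustration, not the KNIME workflow.
import random

random.seed(7)

def fingerprint(n_bits=64, density=0.3):
    """Random bit set standing in for a molecular fingerprint."""
    return frozenset(i for i in range(n_bits) if random.random() < density)

def tanimoto(a, b):
    """Tanimoto (Jaccard) similarity between two bit sets."""
    union = len(a | b)
    return len(a & b) / union if union else 0.0

def pairwise_distances(rows):
    """Euclidean distances between all pairs of rows of a similarity matrix."""
    dists = []
    for i in range(len(rows)):
        for j in range(i + 1, len(rows)):
            dists.append(sum((x - y) ** 2 for x, y in zip(rows[i], rows[j])) ** 0.5)
    return dists

def pearson(x, y):
    """Pearson correlation between two equal-length lists of distances."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

mols = [fingerprint() for _ in range(60)]

# Gold standard: distances derived from the full N x N similarity matrix.
full_matrix = [[tanimoto(m, s) for s in mols] for m in mols]
gold = pairwise_distances(full_matrix)

# Forward approach: start with 25% of the database as satellites, add 5%
# per iteration, and stop once the correlation reaches 0.9 (at worst the
# loop terminates when all compounds are satellites, where r is 1).
n = len(mols)
k = max(1, int(0.25 * n))
while True:
    partial = [[tanimoto(m, s) for s in mols[:k]] for m in mols]  # N x k
    r = pearson(gold, pairwise_distances(partial))
    if r >= 0.9 or k >= n:
        break
    k = min(n, k + max(1, int(0.05 * n)))

print(f"satellites used: {k}/{n}, distance correlation: {r:.3f}")
```

Note that the sketch checks each new map against the gold standard only for illustration; in the actual forward approach the stopping criterion is the correlation between successive iterations, precisely so that the full matrix never needs to be computed.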
Therefore we suggest that the method should, by default, start with 25% of compounds as satellites and then keep adding 5% until a correlation between steps of at least 0.9 is reached.\n\nIn this pilot study we applied the method to visualize the chemical space of two larger datasets (HDAC1 and DrugBank, with 3,257 and 1,900 compounds, respectively; Table 1). As shown in Table 2, a significant reduction in computation time was achieved compared to the gold standard, and the correlation between the gold standard and the satellites approach was in both cases higher than 0.9. Figure 4 depicts the chemical spaces generated in both instances. Although the orientation of the map changed for HDAC1, the shape and distances remain quite similar, which is the main objective. This preliminary work supports the hypothesis that a reduced number of compounds is sufficient to generate a visual representation of the chemical space (based on PCA of the similarity matrix) that is quite similar to that obtained from PCA of the full similarity matrix.\n\nChemical space of DrugBank using (A) the adaptive satellites approach or (B) the gold standard, and of HDAC1 using (C) the adaptive satellites approach or (D) the gold standard.\n\n\nConclusion and future directions\n\nThis proof-of-concept study suggests that ChemMaps, using adaptive satellite compounds, is a plausible approach to generate a reliable visual representation of the chemical space based on PCA of similarity matrices. The approach works better for relatively less-diverse datasets, although it seems to remain robust when applied to more diverse datasets. For datasets with small diversity, fewer satellites seem to be enough to produce a representative depiction of the chemical space. The higher relevance of 2D diversity over 3D in this study is likely related to the fact that the chemical space depiction is based on 2D fingerprints. 
Therefore, the performance of the method when the chemical space depiction is based on 3D fingerprints could also be assessed.\n\nA major next step is to conduct a full benchmark study to assess the general applicability of the approach proposed herein, including in larger databases, in which we anticipate this method would be even more useful. A second step is to propose a metric that determines the number of compounds required as satellites for PCA representation of the chemical space based on similarity matrices.\n\n\nData availability\n\nDataset 1: This file contains five compound datasets used in this work in SDF format. No special software is required to open the SDF files; any commercial or free software capable of reading SDF files will open the data sets supplied. The HDAC1 dataset is available from ChEMBL, version 23, at https://www.ebi.ac.uk/chembl/. DOI: 10.5256/f1000research.12095.d16832216",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nConsejo Nacional de Tecnología (CONACyT) scholarship 622969 (JJN). Universidad Nacional Autónoma de México (UNAM), Programa de Apoyo a la Investigación y el Posgrado PAIP, grant 5000-9163 (JLMF) and Programa de Apoyo a Proyectos de Investigación e Innovación Tecnológica PAPIIT, grant IA204016 (JLMF).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nInsightful discussions with Dr. Jakyung Yoo (Daewoong Life Science Research Institute) are highly appreciated. The authors thank OpenEye for the academic license granted.\n\n\nSupplementary material\n\nSupplementary File 1: File with four supporting figures. Figure S1: 3D-Consensus Diversity Plot depicting the diversity of the datasets used for the backwards approach; Figure S2: Backwards analysis with 3PCs picking satellites by diversity; Figure S3: Backwards analysis with 3PCs picking satellites at random; Figure S4: Forward analysis with 2PCs picking satellites at random with step sizes of 10%.\n\nClick here to access the data.\n\n\nReferences\n\nMedina-Franco J, Martinez-Mayorga K, Giulianotti M, et al.: Visualization of the chemical space in drug discovery. Curr Comput-Aided Drug Discov. 2008; 4(4): 322–333. Publisher Full Text\n\nReymond JL: The chemical space project. Acc Chem Res. 2015; 48(3): 722–730. PubMed Abstract | Publisher Full Text\n\nNaveja JJ, Medina-Franco JL: Activity landscape sweeping: insights into the mechanism of inhibition and optimization of DNMT1 inhibitors. RSC Adv. 2015; 5(78): 63882–63895. Publisher Full Text\n\nMaggiora GM, Bajorath J: Chemical space networks: a powerful new paradigm for the description of chemical space. J Comput Aided Mol Des. 2014; 28(8): 795–802. 
PubMed Abstract | Publisher Full Text\n\nMedina-Franco JL: Interrogating novel areas of chemical space for drug discovery using chemoinformatics. Drug Dev Res. 2012; 73(7): 430–438. Publisher Full Text\n\nOsolodkin DI, Radchenko EV, Orlov AA, et al.: Progress in visual representations of chemical space. Expert Opin Drug Discov. 2015; 10(9): 959–973. PubMed Abstract | Publisher Full Text\n\nLarsson J, Gottfries J, Muresan S, et al.: ChemGPS-NP: tuned for navigation in biologically relevant chemical space. J Nat Prod. 2007; 70(5): 789–794. PubMed Abstract | Publisher Full Text\n\nLarsson J, Gottfries J, Bohlin L, et al.: Expanding the ChemGPS chemical space with natural products. J Nat Prod. 2005; 68(7): 985–991. PubMed Abstract | Publisher Full Text\n\nRosén J, Lövgren A, Kogej T, et al.: ChemGPS-NP(Web): chemical space navigation online. J Comput Aided Mol Des. 2009; 23(4): 253–259. PubMed Abstract | Publisher Full Text\n\nWishart DS, Knox C, Guo AC, et al.: DrugBank: a comprehensive resource for in silico drug discovery and exploration. Nucleic Acids Res. 2006; 34(Database issue): D668–72. PubMed Abstract | Publisher Full Text | Free Full Text\n\nOpenEye Scientific Software, Santa Fe NM: ROCS 3.2.1.4. 2017. Reference Source\n\nOpenEye Scientific Software, Santa Fe NM: OMEGA 2.5.1.4. 2017. Reference Source\n\nHawkins PC, Skillman AG, Warren GL, et al.: Conformer generation with OMEGA: algorithm and validation using high quality structures from the Protein Databank and Cambridge Structural Database. J Chem Inf Model. 2010; 50(4): 572–584. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHawkins PC, Skillman AG, Nicholls A: Comparison of shape-matching and docking as virtual screening tools. J Med Chem. 2007; 50(1): 74–82. PubMed Abstract | Publisher Full Text\n\nBerthold MR, Cebron N, Dill F, et al.: KNIME - the Konstanz information miner. SIGKDD Explor Newsl. 2009; 11(1): 26. 
Publisher Full Text\n\nNaveja JJ, Medina-Franco JL: Dataset 1 in: ChemMaps: Towards an approach for visualizing the chemical space based on adaptive satellite compounds. F1000Research. 2017. Data Source"
}
|
[
{
"id": "24274",
"date": "20 Jul 2017",
"name": "Gerald Maggiora",
"expertise": [
"Reviewer Expertise Physical chemistry",
"biophysics",
"computer-aided drug design",
"chemical informatics"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nGraphically representing coordinate-based chemical spaces requires some type of dimensionality reduction. One method involves the use of similarity matrices treated as data matrices that are subsequently subjected to principal component analysis (PCA). The first two or three PCs are then used as a basis to graphically depict the chemical space. Although this approach works reasonably well, the size of chemical spaces that can be treated is somewhat limited, since the PCA transformation requires diagonalizing a matrix whose dimension is equal to the number of molecules in the chemical space of interest. The work of Naveja and Medina-Franco seeks to overcome this limitation by building a lower dimensional representation of chemical space in a stepwise manner using “backwards” or “forward” procedures. While the method has the potential for accomplishing their goals, it does not in my estimation provide a sufficiently rigorous test of the approximations that are the foundation of their approach. For this reason additional work needs to be done before their method can be applied with confidence.\n\nMy objection is based on the authors’ use of the first 2 or 3 PCs as the ‘gold standard’ for representing the entire chemical space, and as a basis for all subsequent comparisons of the approximate chemical spaces. I would at least like to see what percent of the total sample variance is accounted for by these PCs. 
If it is an insignificant amount, then approximating these PCs by whatever method will not produce a sufficiently accurate model of the chemical space and their model will have to be improved. The true ‘gold standard’ is the original set of column vectors in their data matrix from which the PCs are obtained. This will produce the ‘true’ distance between ‘molecular points’ in the full dimensional chemical space, but because of its very high dimension computing distances in the original chemical space can be a problem. An alternative is to carry out the PCA and choose a larger subset of PCs (say 6 or 8) that do account for most of the sample variance and then use these in the correlation or error analysis.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Partly\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate? Yes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": [
{
"c_id": "2920",
"date": "28 Jul 2017",
"name": "José L. Medina-Franco",
"role": "Author Response",
"response": "Dear Dr. Maggiora, We thank you for your feedback on this Application Note. We entirely agree with your comment that if the variance captured by the first 2 or 3 PCs is not high enough, the visual representation of the chemical space will not be meaningful. For the data sets included in this work, we have seen that the variance is high. We also agree that formally speaking the ´true gold standard´ would involve computing the distances for the full matrix. Based on your feedback we are preparing a revised version of this manuscript."
}
]
},
{
"id": "24276",
"date": "28 Jul 2017",
"name": "Dmitry I. Osolodkin",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThe paper under consideration presents an elegant approach to efficient mapping of chemical space using principal component analysis. Being technically sound in general, well-written and easily understandable, the paper lacks several technical details without which it is not complete. In particular:\n\nThe concept of 'chemical satellites' is discussed in a rather concise manner, a bit more details may be added and the seminal paper by Oprea & Gottfries [1] needs to be cited. The approach suggested here is rather different from the Oprea's one, because satellites are defined there as intentional outliers, whereas in the current work they are just extracted from the mapped dataset. This difference should be stated in a clearer way.\n\nDataset processing routine is not presented. Although the suggested technique would work on totally random datasets (by the way, addition of such a dataset to the list of examples would be beneficial and illustrative), standardization of structures should be performed for consistency and for more informative application of similarity measures. Targeted datasets in the supplement look standardized, but DrugBank contains metal ions, unconnected molecules, and macromolecules, all of which may significantly distort the comparison. 
For HDAC1 inhibitors the procedure to obtain this dataset from ChEMBL should be provided, because simple target keyword search for 'hdac1' gives 9 different datasets.\n\nDiversity of datasets may be additionally illustrated by any of currently available visualization methods. A method that clearly shows compound clustering or diversity of the dataset would be preferred.\n\nVisual comparison of figures is not sufficient to make conclusions about preference of random selection over diversity-based (Figures 1, 2, S2, S3). Differences are visible, but their importance and significance are not obvious (maybe just for me), so use of a quantitative measure would be highly appreciated. Random selection sometimes shows lower stability of the backwards analysis (larger difference between the iterations), and this observation could be discussed.\n\nSome analysis of the technique's applicability domain would significantly improve the conclusions of the paper. One parameter that deserves attention is the dataset diversity threshold above which the technique becomes unstable or less useful. Will it work well for totally random or intentionally diverse compounds or for datasets with two or three large congeneric series? A slightly more thorough characterization of example datasets would be useful to deal with this question.\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nIs the study design appropriate and is the work technically sound? Partly\n\nAre sufficient details of methods and analysis provided to allow replication by others? Partly\n\nIf applicable, is the statistical analysis and its interpretation appropriate? Partly\n\nAre all the source data underlying the results available to ensure full reproducibility? Partly\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": [
{
"c_id": "2926",
"date": "04 Aug 2017",
"name": "José L. Medina-Franco",
"role": "Author Response",
"response": "Dear Dr Osolodkin, thank you, we highly appreciate your comments on this Research Note. Regarding the modifications we have made in response to your comments: We added a citation to the first publication related to ChemGPS by Oprea and Gottfries. In the Introduction, we further, although briefly (given the extension limit of a Research Note), explained the differences between these two approaches. We added a Supplementary Information file describing the data curation methodology used. We also added the HDAC1 dataset to the supplementary files. Supplementary Figure 1 should address the visualization of the diversity of the datasets. We find your observations about quantifying the stability of the iterations and about determining the applicability domain of the approach (including defining a diversity threshold) quite interesting. Based on this Research Note we are planning an extensive study fully addressing these concerns."
}
]
},
{
"id": "24277",
"date": "31 Jul 2017",
"name": "Jean-Louis Reymond",
"expertise": [
"Reviewer Expertise Cheminformatics and drug design"
],
"suggestion": "Approved",
"report": "Approved\n\nJ. Jesús Naveja et al present a methodology for representation of chemical space of small sets of compounds. In general, the approach involves selection of satellite compounds from the database, computing the similarities of all compounds in the database to these satellites, and finally projection of the resulting similarity matrix using principal component analysis. J. Jesús Naveja et al further report various methods for selecting satellite compounds (backward or forward selection approach; selection at random or selection by diversity check) and show how the number of selected satellite compounds influence the quality of projection.\nComments:\nThe authors are completely hiding the fact that similarity mapping is quite well-known and absolutely not new, the authors should read and cite Awale et al., J. Chem. Inf. Model., 2015, 55 (8), pp 1509–1516 and the detailed discussion of literature precedents on similarity mapping presented therein.\n\nThe authors compare their satellites to the satellite compounds used by T. Oprea in his 2001 approach to mapping chemical space. Obviously either they did not read Oprea’s paper or they misunderstood it: Oprea’s satellites are artificial molecules with extreme properties such as to orient the PCA projection and stretch its dimensions in reproducible directions. However the projection is simply PCA, and does not involve similarity mapping. 
In similarity mapping the satellites are molecules from within the database to which similarities are calculated.\n\nIn the abstract, the authors mentioned that “3D diversity played a secondary role, although it becomes increasingly relevant as 2D diversity increases”. However, I did not find the relevant explanation in the main text supporting this statement.\n\nFigure 1 and Figure 2: The five random sets in the legend. It's not clear exactly what the author meant by five random sets. As per my understanding the author used the complete set of compounds for each target and what is changing is the random selection of satellites, which is repeated five times.\n\nIn case of forward selection approach: “..With that in mind, we decided to design a method that starts with a given percentage of the database as satellites, and then keeps adding a proportion of them until the correlation between the former and the updated data is of at least 0.9. ” The correlation between projections obtained from the current set of satellites and projections obtained from the former set of satellites might well be high, but still the correlation to the projection obtained from the complete similarity matrix is low. How can one assure the quality of projection in this case?\n\nFor all plots the axis labels are too small to read.\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate? Not applicable\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": [
{
"c_id": "2927",
"date": "04 Aug 2017",
"name": "José L. Medina-Franco",
"role": "Author Response",
"response": "Dear Dr. Reymond, thank you for your comments on this Research Note. Regarding the modifications we have made in response to your comments: In the Introduction we briefly discuss other similarity approaches to visualize the chemical space. We have expanded that discussion there with a reference to the Similarity Mapplet approach. We did not intend to imply that Oprea’s and Gottfries’ ChemGPS approach is based on structural similarity. To clarify this point we rephrased that in the introduction. In the corresponding figure legends we replaced “random sets” with “iterations”. We added to the Supplementary Information a discussion on the correlation between the Euclidean distances from the complete similarity matrix and those obtained using only 2 and 3 PCs. However, we would like to highlight that our approach is intended to approximate the best possible chemical space visualization using PCA, which is given by at most the first 3 PCs. We increased the font size in all figures."
}
]
}
] | 1
|
https://f1000research.com/articles/6-1134
|
https://f1000research.com/articles/6-94/v1
|
31 Jan 17
|
{
"type": "Method Article",
"title": "Computational design of molecular motors as nanocircuits in Leishmaniasis",
"authors": [
"Dipali Kosey",
"Shailza Singh",
"Dipali Kosey"
],
"abstract": "Cutaneous leishmaniasis is the most common form of leishmaniasis, caused by Leishmania major, and is spread by the bite of a sandfly. This species infects macrophages and dendritic cells. Due to multi-drug resistance, there is a need for a new therapeutic technique. Recently, a novel molecular motor of Leishmania, Myosin XXI, was classified and characterized. In addition, the drug resistance in this organism has been linked with the overexpression of ABC transporters. Systems biology aims to study the simulation and modeling of natural biological systems, whereas synthetic biology deals with building novel and artificial biological parts and devices. Together they have contributed enormously to drug discovery, vaccine design and development, and infectious disease detection and diagnostics. Synthetic genetic regulatory networks with desired properties, like toggling and oscillation, have been proposed to be useful for gene therapy. In this work, a nanocircuit with a coupled bistable switch – repressilator has been designed and simulated in the presence and absence of inducer, in silico, using Tinker Cell. When inducer is added, the circuit has been shown to produce reporter at high levels, which will impair the activity of Myosin XXI and ABC transporters. Validation of the circuit was also performed using GRENITS and BoolNet. The influence of inducer on the working of the circuit, i.e., on the type of gene expression, the response time delay, the steady states formed by the circuit and the quasipotential landscape of the circuit, was analyzed. It was found that the addition of inducer reduced the response time delay in the graded type of gene expression and removed the multiple intermediate attractors of the circuit. Thus, the inducer increased the probability of the circuit being present in the dominant stable state with high reporter concentration, and hence the designed nanocircuit may be used for the treatment of leishmaniasis.",
"keywords": [
"Leishmaniasis",
"nanocircuit",
"synthetic biology",
"molecular motor"
],
"content": "Introduction\n\nLeishmaniasis is a neglected tropical disease, caused by the protozoan parasite of the genus Leishmania. It mainly infects macrophages and dendritic cells of the immune system. Although several drugs are available for treatment, the rapid development of resistance in Leishmania to these drugs constantly demands new therapies (Hadighi et al., 2006). Of the three major types of leishmaniasis, viz. cutaneous, mucocutaneous and visceral leishmaniasis, the cutaneous form is caused by the species Leishmania major.\n\nA novel molecular motor, Myosin XXI, has been identified in Leishmania and has been characterized recently. Its main role is suggested to be membrane anchorage and intracellular trafficking activity. Studies on Myosin XXI have shown that it is very important for the survival of the parasite and that it is the only myosin isoform present in the organism (Batters et al., 2014). It is also present only in the Leishmania genus, with less than 35% identity with other myosin isoforms. Its phosphorylation and dephosphorylation may be mediated by Myosin Heavy Chain Kinase and Protein Phosphatase 2A (PP2A), respectively (Batters et al., 2014). Although the complete structure of Myosin XXI has not yet been reported, the various motifs present in it have been identified. Among these, there are six calmodulin binding (CB) motifs, which function by binding to calmodulin in the presence of calcium. This binding regulates Myosin XXI motility during various cellular functions. Amino acids 809–823 form the most promising CB motif. Therefore, antisense RNA to this CB motif will inhibit its translation and impair the motility of Myosin XXI. Also, the exposure to high PP2A levels will cause dephosphorylation of myosin, hindering its normal activity (Batters et al., 2014). Thus, the intracellular trafficking activity of Myosin XXI will be affected to a great extent, leading to the death of the parasite.\n\nAccording to Katta et al. 
(2009) Myosin XXI is the only myosin isoform present in Leishmania. It is also exclusively a Leishmanial protein product. Attempts to obtain a Myosin XXI null mutant were unsuccessful (Katta et al., 2009), due to ploidy generation, indicating its essentiality for the organism. Further, the reduction of its levels has been found to result in a loss of endocytosis within the parasite’s flagellar pocket and impairment of other intracellular trafficking processes (Katta et al., 2009). Therefore, the specificity and essentiality of the target support the selection of Myosin XXI as the target. In addition, though RNA silencing machinery is absent in L. major, antisense RNA has been used for interference with snoRNA in this species (Liang et al., 2005), showing that ssRNA silencing is possible in this species. Antisense RNA of the CB motif may directly impede the motility of Myosin XXI by interfering with the translation of the motif.\n\nThe resistance shown by Leishmania to the current therapeutic drugs is a rising concern among those combating the neglected disease, leishmaniasis. As an approach to devise a novel therapeutic strategy and to fight leishmanial drug resistance, this study aims to target the molecular motor, Myosin XXI, as well as ABC transporters of L. major. The basic idea of this study is that impairment of the targets can be brought about by either antisense RNA against the most promising calmodulin binding motif (amino acids 809–823) of Myosin XXI or by PP2A, which dephosphorylates Myosin XXI and ABC transporters.\n\nEven though numerous studies have been performed on L. major, virtually no study is available on its molecular motor, Myosin XXI. Ever since Foth et al. (2006) established a new myosin class, Myosin XXI, for the previously unclassified trypanosomatid myosin, attempts have been made to understand it. Still, in-depth knowledge of its characteristics, including its structure and mechanical properties, is lacking. 
Exploring this novel molecular motor deeper might lead to newer therapeutic approaches for leishmaniasis, and to the best of our knowledge this is the first study on treatment of leishmaniasis by targeting Myosin XXI. The use of a nanocircuit comprising a coupled bistable switch and repressilator for the treatment of disease is a novel idea, promising a whole new vista for therapeutic approaches.\n\n\nMethods\n\nA coupled bistable switch and repressilator, with the bistable switch under the control of the repressilator, can be delivered using a nano-sized liposome with antibodies (Ab) targeting the lipophosphoglycan (LPG) displayed on an L. major-infected macrophage. The liposome will be designed to withstand the acidic conditions of the phagolysosome of targeted macrophages. The circuit with the bistable switch under the control of the repressilator is given in Figure 1.\n\nThe repressilator has TetR (represses PLTet01) under the control of the PLlac01 promoter, lacI (represses PLlac01) under the control of λPR, and λcI (represses λPR) under the control of PLTet01. All the repressilator proteins repress each other in a cyclic fashion. The bistable switch comprises cIts (represses PLs1con), which is under the control of PLlac01, and lacI (represses PLlac01) under the control of PLs1con. PLlac01 is repressed by lacI of both the repressilator and the bistable switch. The reporter gene is placed downstream of PLlac01.\n\nThe addition of an inducer of PLTet01, namely aTc, will increase the production of λcI, which will inhibit λPR. As production of lacI of the repressilator is stopped, some of the repression on PLlac01 will be released. cIts represses the production of lacI of the bistable switch. 
Thus, the reporter gene present downstream of PLlac01 will be expressed at increased levels.\n\nThe expression will be kept under control, as the repression of lacI production in the repressilator will release the repression on TetR (represses PLTet01), which will counteract the inducer aTc.\n\nThe choice of the reporter gene in the nanocircuit may be based on one of the following strategies.\n\nStrategy I (Figure 2). The reporter gene may be devised such that its transcription results in ssRNA (antisense RNA), which will be complementary to the mRNA strand coding for the amino acids 809–823 – LQWVEEASNMFPDF (Batters et al., 2014). The inhibition of this calmodulin binding (CB) motif in Myosin XXI will result in the hindrance of CB, necessary for the motility of the parasite. There will be an increase in the levels of calmodulin calcium complex in the system. The calcium calmodulin complex activates the myosin heavy chain kinases, which will result in the continuous contraction of myosin. Thus, the motility and intracellular trafficking of the parasite will be disturbed. Also, the complex of calcium and calmodulin will choose to bind to other CB proteins, namely calcineurin. Activation of calcineurin will in turn activate PP1. PP1 and PP2A function as a molecular switch that reciprocally regulates the eukaryotic phosphatases, suggesting that PP1 modulates PP2A activity in cells (Ohki et al., 2001). As more PP1 is activated, phosphorylation of ABC transporter proteins necessary for their regulation will also be affected, thus disturbing the efflux of molecules. Consequently, L. major will be put under stress as the motility, efflux of endogenous metabolites, survival mechanisms and intracellular trafficking are affected, leading to the death of the parasite.\n\nCBF, Calmodulin Binding Motif; PP1, protein phosphatase 1; ABC transporters, ATP-binding cassette transporters.\n\nStrategy II (Figure 3). 
The reporter gene can be devised to synthesize Protein Phosphatase 2A (PP2A), which dephosphorylates the myosin heavy chain. Excess production of PP2A by the circuit will result in continuous relaxation of myosin. This will in turn affect the parasite’s motility and intracellular trafficking, endangering its survival. PP2A will also disturb the phosphorylation states needed for ABC transporter activity (Berridge, 2010). Also, dephosphorylation of mitogen-activated protein kinase (MAPK) by PP2A has been noted in several studies (Toivola, 1999). Several studies have reported that since extracellular signal-regulated kinase (ERK) is involved in activation of transcription factors, like Ap-1, which is associated with ABCC1 (multidrug resistance protein-1) expression, dephosphorylation by PP2A may also have an effect on ABC transporter expression (Mohammed et al., 2012; Xu et al., 2006). Thus, the efflux of molecules will be affected. The trafficking mechanism of L. major, its survival mechanisms and its drug resistance property break down as a result. Thus, L. major will cease to exist, unable to cope with the build-up of stress.\n\nMAPK, Mitogen activated protein kinase; ABCC1, ATP Binding Cassette Subfamily C Member 1; PP2A, Protein phosphatase-2A; Ap-1, Activator protein 1; ABC transporters, ATP-binding cassette transporters\n\nCofactor – streptavidin and biotin. Streptavidin has been selected as a cofactor to ensure the internalization of the reporter produced by the circuit. This is based on the fact that biotin-streptavidin-based isolation of Leishmania has been widely practiced. Biotin, if encapsulated inside the nanoliposome containing the nanocircuit, will become attached to the surface proteins of L. major inside the macrophages. 
Therefore, the streptavidin-reporter complexes attach to this bound biotin, finally leading to internalization of the complex (Figure 4).\n\nThe nanocircuit was designed using different biological parts available in Tinker Cell (http://www.tinkercell.com/). The transcriptional repression processes were also indicated in the circuit. Sequences for the different parts of the gene modules were obtained from various sources: the reporter DNA and protein sequences were obtained from NCBI; the DNA and protein sequences for repressor proteins, ribosome binding sites and terminators were obtained from the Registry of Standard Biological Parts; and the promoter sequences with binding strengths for the repressor proteins were acquired using TRANSFAC and TFBind (Supplementary File 1). These sequences were entered in appropriate places in the Text Attributes dialog box in Tinker Cell. A steady-state parameter scan was performed for the designed circuit. Simulation of the working of the circuit was done by altering the values of protein degradation rates, RBS and promoter strengths, dissociation constants, and Hill coefficients, until the desired graph was obtained. Inducer was added to PLTet01 of the circuit, and the simulation was performed again. The final results were exported in SBML format for further analysis. The SBML files were validated using an online SBML validator (http://sbml.org/Facilities/Validator/).\n\nThe SBML files of the simulated circuit (with and without inducer) were loaded in COPASI (v4.18; software for simulation and modeling of biochemical networks; http://copasi.org/) and time course data for a period of 10s was generated for the protein concentrations. The acquired files were analyzed with the GRENITS package (v1.24.0; https://www.bioconductor.org/packages/release/bioc/html/GRENITS.html) for the circuit’s convergence and network link probabilities. The inferred network of the circuit components was obtained from the network links with probability greater than 0.8. 
The files were also analyzed by BoolNet (v2.1.3; https://cran.r-project.org/web/packages/BoolNet/index.html). The possible attractor states formed by the circuit, the probability of Boolean network transitions and the network wiring were found. Robustness of the circuit to random perturbations was also checked. Attractor results from BoolNet were exported in .net format and the circular layout of the state transitions was obtained using Pajek (v4.10; http://mrvar.fdv.uni-lj.si/pajek/).\n\nThe SBML files of the designed and simulated circuit (with and without inducer) were loaded in COPASI, and the ODE files of the circuit were exported in .mmd format. The ODE files were loaded into Berkeley Madonna (v8.3.18; www.berkeleymadonna.com). The Bistable switch - Repressilator coupling equation was included as follows:\n\ncod1 = LcI1+ (PLlac01_strength*rs4_c*rs5_c * DefaultCompartment)\n\nThe ODEs were integrated using Euler’s method from 0 to 10s. A plot of Antisense RNA or PP2A vs time was obtained. The plots for the circuit with and without inducer were compared to determine the type of gene expression in the nanocircuit and to understand the effect of inducer on the response time delay. A plot of Antisense RNA or PP2A vs LacI3, showing the phase plane of the two proteins, was obtained.\n\nA nullcline was plotted using the option Nc in Berkeley Madonna. Another nullcline was obtained by plotting several trajectories from different initial conditions using the option Ic.\n\nThe point of intersection of the two nullclines shows the steady state of the designed circuit.\n\n1. The difference equation for the quasipotential, Vq = -(((Antisense_RNA_or_PP2A)^2) + ((lacI3)^2))*DT, was included and integration was performed again.\n\n2. A plot of Antisense RNA or PP2A vs LacI3 vs Vq was obtained.\n\n3. 
The values were exported from Berkeley Madonna and 3D plots (contour plot, 3D mesh, linearized 3D mesh) of the quasipotential landscape were obtained using SigmaPlot 12.0 (http://www.sigmaplot.co.uk/). The landscapes for the circuit with and without inducer were compared.\n\n\nResults\n\nThe circuit was designed using Tinker Cell, a CAD software tool. All the sequences for the different parts of the gene modules were entered as detailed in the Methods section.\n\nFigure 5 shows the designed nanocircuit without inducer.\n\nThe upper portion of the circuit forms the repressilator, with the TetR, Lambda cI and LacI proteins repressing each other in a cyclic fashion. The lower portion forms the bistable switch, comprising cIts and lacI, which mutually repress each other. LacI of the repressilator represses PLLac01 of the bistable switch, forming the coupling between the repressilator and the bistable switch. The reporters Antisense RNA or PP2A, as well as streptavidin, are placed downstream of PLLac01 of the bistable switch.\n\nSimulation was performed and the desired graph was obtained for the values shown in Table 1.\n\nFigure 6 shows the working of the circuit in the absence of the inducer aTc. cIts, and therefore Antisense RNA or PP2A (and streptavidin), are in the off state, due to the repression by LacI2, while LacI3 is on. The repressilator proteins TetR, LcI and LacI2 form oscillations.\n\nFigure 7 shows the circuit with inducer aTc added, which will reduce the repression caused by TetR on PLTet01.\n\nTable 2 shows the values of the three parameters of the induced activation, for which the desired graph was obtained. The rest of the values of the circuit were kept constant. 
The graph (Figure 7) shows the working of the circuit after the addition of aTc.\n\nThe reporters (Antisense RNA or PP2A) are switched on when the Antisense RNA or PP2A degradation rate is in the range of 0–2s, while lacI3 gets switched off, as in Figure 8.\n\nThe cofactor Streptavidin also peaks in the same range (Figure 9).\n\nThe repressilator proteins TetR, LcI and LacI2 form limit cycle oscillations in this range (Figure 10).\n\nThus, the circuit was designed and its working was simulated in the absence and presence of inducer. The corresponding values of the parameters were also noted.\n\nThe validation of the circuit with and without inducer was performed using the GRENITS and BoolNet packages in R (v3.2.2; https://cran.r-project.org/).\n\n(i) Convergence\n\nConvergence of the circuit was checked using the Markov Chain Monte Carlo algorithm in GRENITS. Various parameters, such as the indicator variables of the Gibbs sampler, the coefficients of regression, the network connectivity parameter, the precision of each regression and the intercepts of regression of Chain 1 and Chain 2 of state transitions, were analyzed.\n\nThe graphs in Figure 11 show the convergence of the indicator variables of the Gibbs sampler (Gamma), the coefficients of regression (B), the network connectivity parameter (Rho), the precision of each regression (Lambda) and the intercepts of each regression (Mu) of state transition chains 1 and 2, thus validating the circuit without inducer.\n\nThe graphs in Figure 12 show that convergence of the parameters was also obtained for the circuit with inducer, thus validating it.\n\n(ii) Network link analysis plots\n\nThe links between the different genetic components in the circuit were analyzed using the GRENITS package. Links with probability > 0.8 were considered to infer the network.\n\nFigure 13 shows the link probabilities between the components. 
From the plot, it is evident that LacI3 of the bistable switch is the most important and strongest regulator of the circuit, as it regulates the expression of all other proteins.\n\nFigure 14 shows the number of regulating parents for each of the genes and the probability of each parent regulating it. It shows that LacI3 has no parent regulating it.\n\nThe inferred network between the genetic components, considering the links with probabilities > 0.8 (Figure 15A), is shown in Figure 15B.\n\n(A) Link probability threshold and (B) the inferred network for the nanocircuit.\n\nSimilarly, links in the nanocircuit with inducer were analyzed using GRENITS, which gave the same results: LacI3 was the main regulator (Figure 16) and had no parents regulating it (Figure 17).\n\nThe inferred network from the links with probability > 0.8 (Figure 18A) is shown in Figure 18B.\n\n(A) Link probability threshold and (B) the inferred network for the nanocircuit with inducer.\n\nThe inferred network shows that all the components are linked to one another with high probability. This demonstrates the non-randomness of the network.\n\n(iii) Attractor state analysis\n\nThe possible attractor states formed by the circuit were determined using BoolNet to visualize the switching on and off of the genes comprising the circuit.\n\nFigure 19 shows the ten possible attractor states, with continuous switching of the repressilator genes between on and off states. The reporters are off when LacI3 is on and on when LacI3 is off, which is evidence of the toggling between the genes of the bistable switch.\n\nFigure 20 depicts the ten attractors, with either 8 or 16 states forming the basin of attraction.\n\nThe attractor state analysis of the nanocircuit with inducer also yielded 10 attractors with 8 or 16 states forming the basin of attraction (Figure 21 and Figure 22).\n\nFigure 23 illustrates the Probability Boolean Network transitions in the circuit with and without inducer.
It gives the binary representation of the attractors and their basins.\n\n(iv) Robustness analysis\n\nThe robustness of the circuit to 10 different perturbations was checked using BoolNet (a state transition table was generated, one or several transitions were perturbed randomly, and the gene transition functions were reconstructed from the modified transition table to test their robustness).\n\nFigure 24 shows the percentage of original attractors present in the perturbed networks of the circuit: 10% of the original attractors were present in 6 perturbed networks; 20% in 1; 30% in 2; and 40% in 1 network. Hence, 70% of the perturbed networks retained the original attractors.\n\nFigure 25 shows the Gini index of the state in-degrees of the circuit. The in-degree of a particular state is the number of transitions leading to it. The Gini index of the in-degrees is returned as a characteristic value of the network; it is a measure of inequality. If all states have an in-degree of 1, the Gini index is 0. If all state transitions lead to one single state, the Gini index is 1 (Kim et al., 2007). The higher the Gini index, the more the transitions converge on a small number of states. This parameter is used because, in biological networks, many state transitions lead to the same states. Figure 25 shows that 96% of the states have a high Gini index, indicating that the designed circuit is a biologically valid network and not a random network.\n\nSimilar results were obtained for the nanocircuit with inducer with BoolNet. In total, 70% of the perturbed networks had the original attractors (Figure 26), while 96% of the states had a high Gini index of in-degrees (Figure 27). Hence, the biological validity of the circuit has been demonstrated.\n\n(v) Network wiring\n\nBased on the transition function of each gene, found from the links between the genetic components, the network wiring was constructed using BoolNet.
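BoolNet itself is an R package, but the two analyses described above, exhaustive attractor search and the Gini index of state in-degrees, can be illustrated on a toy synchronous Boolean network. The three-gene cyclic-repression network below is a hypothetical stand-in for the actual circuit, chosen only because its attractors and in-degrees are easy to verify by hand.

```python
from itertools import product

# Toy synchronous Boolean network: each gene is the NOT of its neighbour,
# mimicking cyclic repression (a hypothetical stand-in, not the real circuit).
def step(state):
    x1, x2, x3 = state
    return (int(not x3), int(not x1), int(not x2))

states = list(product((0, 1), repeat=3))

# Exhaustive attractor search: follow each state until a cycle repeats,
# then store the cycle in a canonical rotation (smallest state first).
attractors = set()
for s in states:
    seen = []
    while s not in seen:
        seen.append(s)
        s = step(s)
    cycle = seen[seen.index(s):]
    k = cycle.index(min(cycle))
    attractors.add(tuple(cycle[k:] + cycle[:k]))

# Gini index of state in-degrees (number of transitions leading to a state).
indeg = {s: 0 for s in states}
for s in states:
    indeg[step(s)] += 1
d = sorted(indeg.values())
n = len(d)
gini = sum((2 * (i + 1) - n - 1) * v for i, v in enumerate(d)) / (n * sum(d))
```

This toy network has two attractors (a 2-cycle and a 6-cycle), and because every state has exactly one incoming transition, the Gini index of in-degrees is 0, which is the limiting case quoted above; funnel-like biological networks, where many transitions converge on the same states, score closer to 1.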
From the network (Figure 28), it is evident that LacI3 controls the expression of the other proteins of the circuit and is also a function of itself, while not being controlled by other components.\n\n(vi) Circular layout of the state transitions\n\nFigure 29A and B show the circular layout of the 2129 transitions and the probable attractors in the circuit and the circuit with inducer, respectively.\n\nCircular layout of the state transitions of the nanocircuit (A) without and (B) with inducer.\n\n(i) Gene expression and response time delay\n\nThe plot of Antisense RNA or PP2A vs time (Figure 30 and Figure 31) was obtained using Berkeley Madonna. The gradual increase in the concentration of the reporter shows that there is graded expression in the circuit. Comparison between the two graphs shows that, in addition to a high level of reporter production, the response time delay has also been reduced by the addition of inducer.\n\nConsidering the biological fact that at equilibrium protein concentration does not change over time, nullclines (dx/dt = 0 or dy/dt = 0) were plotted for the reporters and LacI3 in the phase plane of the two proteins, i.e. d(Antisense RNA or PP2A)/dt = 0 or d(LacI3)/dt = 0. The points of intersection indicate the steady states (d(Antisense RNA or PP2A)/dt = 0 and d(LacI3)/dt = 0). The nullclines of a bistable switch intersect three times, with two of the intersections being stable steady states. From Figure 32 and Figure 33, it is evident that the designed nanocircuit favors only one steady state, with high reporter concentration and low LacI3 concentration. The individual trajectories in the graph show the evolution of the system from their respective initial conditions.\n\nFrom the sharp threshold of the steady state under equilibrium conditions, it is apparent that the circuit is ultrasensitive.\n\n(ii) Quasipotential landscape\n\nTo evaluate the dynamics and evolution of the circuit under non-equilibrium conditions, the quasipotential landscape was mapped.
It also gives a quantitative measure of the epigenetic landscape. The directionality of the circuit can also be studied.\n\nFigure 34 shows the contour plot of the quasipotential energy at different levels of the reporter and LacI3. The regions marked by yellow lines are local minima in the quasipotential surface. These correspond to the stable states, since the areas of minimum potential have the highest probability of occupancy. This plot also shows that the circuit works by autoactivation. Autoactivation of the reporter leads to amplification of transient differences in expression between the reporter and LacI3.\n\nContour plot of the quasipotential of the nanocircuit (A) without and (B) with inducer.\n\nThe 3D quasipotential landscape is given in Figure 35. The branching valleys and ridges depict the stable cellular states and the barriers between those states, respectively. The “deeper” valleys in the figure are associated with a higher probability of occupancy than the “shallower” valleys. Figure 35 shows that autoactivation has caused multistability in the circuit. A circuit with an initial condition of low reporter and high LacI3 will flow downhill to the intermediate stable state with minimal quasipotential (Figure 36). For the circuit to flow from the intermediate state to the stable steady state with high reporter and low LacI3, it has to cross the potential barrier between the two states, which requires additional, energy-consuming reactions; the probability of this transition is therefore low. In Figure 36, it can be seen that there is a smooth downhill flow to the stable steady state, with high reporter levels and low LacI3 levels, from all initial conditions when inducer is added. There is a translational burst of the reporter, driving it to the dominant attractor. Therefore, the probability of occupancy of this stable steady state is very high.
The number of intermediate stable states with minimal potential has been reduced by the inducer.\n\n3D quasipotential landscape of the nanocircuit (A) without and (B) with inducer.\n\nLinearized quasipotential landscape of the nanocircuit (A) without and (B) with inducer.\n\nFigure 36 shows the linearized 3D plot, from which the relative distances between the stable states can be inferred.\n\nIn the absence of inducer, there is a larger number of closely spaced intermediate attractors (Figure 36A). When inducer is added, the dominant attractor is placed well away from the initial condition, with a smooth downhill flow (Figure 36B). This occurs due to the high-level expression of the reporter placed downstream of the active promoter and the low levels of regulatory LacI3, which is upstream of this promoter.\n\nThus, it can be concluded that the circuit works by autoactivation of the reporter while repressing LacI3. This has led to multistability. Addition of inducer reduces the multistability caused by the autoactivation in the circuit and smooths the undulating quasipotential landscape. Therefore, the circuit will occupy the stable state of high reporter levels with high probability.\n\n\nDiscussion\n\nThe basic idea of this study is that impairment of the targets can be brought about by either antisense RNA against the most likely calmodulin-binding motif (amino acids 809-823) of Myosin XXI or protein phosphatase 2A (PP2A; dephosphorylates Myosin XXI and ABC transporters). The nanocircuit behaves as a nanomachine when a chemical gradient, e.g. an inducer, is applied, and produces high levels of reporter. Antisense RNA against the CB motif may directly impede the motility of Myosin XXI by interfering with the translation of the motif. It may also affect the phosphorylation levels of myosin by increasing the cytoplasmic levels of the Ca2+/calmodulin complex, in turn activating the myosin heavy chain kinase that phosphorylates myosin.
Abnormal phosphorylation of Myosin XXI, due to high levels of ssRNA made by the circuit, leads to continuous contraction, which hampers its activity. Similarly, high levels of PP2A made available by the nanocircuit may cause continuous relaxation by dephosphorylating myosin XXI and inhibiting phosphorylation by Myosin Heavy Chain Kinase.\n\nIn a nutshell, the simulation, validation and behavior studies of the nanocircuit confirmed the hypothesis that the designed nanocircuit is a robust biological network and, when inducer is added, will lead to high amounts of the reporter with reduced response time delay. While the circuit by itself favors only this stable steady state, though with multiple attractors, the inducer increases the probability of occupancy of this state under various initial conditions, through a translational burst of the reporter. Intracellular trafficking and motility by myosin XXI, as well as metabolite efflux and the survival mechanism of the ABC transporters, will be hindered, leading to the death of the leishmanial parasite. Future studies will involve in vitro and in vivo validation of the circuit. The nanocircuit would be constructed in a plasmid and delivered using Lipophosphoglycan-antibody coated nanoliposomes. The method adopted helps to devise a nanocircuit for treating Leishmaniasis that may behave as a nanomachine.\n\n\nData availability\n\nDataset 1: Source codes (zipped file). The file contains all the source codes and other files for the nanocircuit designing, simulation and validation. doi: 10.5256/f1000research.10701.d150383 (Kosey & Singh, 2017).",
"appendix": "Author contributions\n\n\n\nSS designed and conceptualized the study. DK performed the study. SS and DK participated in the discussion and writing of the manuscript. All authors read and approved the manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe authors would like to thank Department of Biotechnology, Ministry of Science and Technology, Government of India (BT/PR10286/BRB/10/1258/2013) for funding the work.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nDipali Kosey acknowledges her Junior Research Fellowship (JRF) from Department of Biotechnology (BT/PR10286/BRB/10/1258/2013), Ministry of Science and Technology, Government of India. We would like to thank our Director, National Centre for Cell Science (NCCS) for supporting the Bioinformatics and High Performance Computing Facility (BHPCF) at National Centre for Cell Science, Pune, India.\n\n\nSupplementary material\n\nSupplementary File 1: Source sequences. All the sequences for the different parts of the gene modules are available.\n\nClick here to access the data.\n\n\nReferences\n\nBatters C, Ellrich H, Helbig C, et al.: Calmodulin regulates dimerization, motility, and lipid binding of Leishmania myosin XXI. Proc Natl Acad Sci U S A. 2014; 111(2): E227–E236. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBerridge MJ: ‘Ion Channels’. Cell Signaling Biology. 2012. Publisher Full Text\n\nEl Azreq MA, Naci D, Aoudjit F: Collagen/β1 integrin signaling up-regulates the ABCC1/MRP-1 transporter in an ERK/MAPK-dependent manner. Mol Biol Cell. 2012; 23(17): 3473–3484. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFoth BJ, Goedecke MC, Soldati D: New insights into myosin evolution and classification. Proc Natl Acad Sci U S A. 2006; 103(10): 3681–3686. 
Hadighi R, Mohebali M, Boucher P, et al.: Unresponsiveness to Glucantime treatment in Iranian cutaneous leishmaniasis due to drug-resistant Leishmania tropica parasites. PLoS Med. 2006; 3(5): e162.\n\nKatta SS, Tammana TV, Sahasrabuddhe AA, et al.: Trafficking activity of myosin XXI is required in assembly of Leishmania flagellum. J Cell Sci. 2010; 123(Pt 12): 2035–44.\n\nKim S, Kim J, Cho KH: Inferring gene regulatory networks from temporal expression profiles under time-delay and noise. Comput Biol Chem. 2007; 31(4): 239–245.\n\nKosey D, Singh S: Dataset 1 In: Computational design of molecular motors as nanocircuits in Leishmaniasis. F1000Research. 2017.\n\nLiang XH, Uliel S, Hury A, et al.: A genome-wide analysis of C/D and H/ACA-like small nucleolar RNAs in Trypanosoma brucei reveals a trypanosome-specific pattern of rRNA modification. RNA. 2005; 11(5): 619–45.\n\nOhki S, Eto M, Kariya E, et al.: Solution NMR structure of the myosin phosphatase inhibitor protein CPI-17 shows phosphorylation-induced conformational changes responsible for activation. J Mol Biol. 2001; 314(4): 839–849.\n\nToivola DM, Eriksson JE: Toxins affecting cell signalling and alteration of cytoskeletal structure. Toxicol In Vitro. 1999; 13(4–5): 521–530.\n\nXu C, Shen G, Yuan X, et al.: ERK and JNK signaling pathways are involved in the regulation of activator protein 1 and cell death elicited by three isothiocyanates in human prostate cancer PC-3 cells. Carcinogenesis. 2006; 27(3): 437–445."
}
|
[
{
"id": "19802",
"date": "15 Feb 2017",
"name": "Jaime J. Seguel",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe article by Kosey and Singh proposes a novel molecular motor intended to impair the activity of Myosin and ABC transporters as agents of Leishmaniasis. A wide array of simulations and measurements seems to support the authors' claims, specifically the claim that the addition of an inducer increases the possibility of avoiding undesired basins of attraction in the circuit's dynamics, and the claim that both, Myosin XXI motility and the metabolite efflux will be hindered as result of the action of the proposed circuit. Design and simulation are by themselves a significant undertaking. Molecular motors such as dynein or myosin are engineered by Nature, and the artificial replication of these remarkable designs is one of the greatest challenges of biological engineering. A still limited understanding of the laws of molecular mechanics at different scales in a biological system prevents a design based on solid theoretical principles. Genetic circuit design is yet an experimental art. And mostly as a result of early experimentation in protein and metabolic engineering, a wide variety of tools that include several computational methods, have been developed and are currently used to predict the effects of a genetic circuit design in a larger biological system. Besides the novelty of the circuit design, this article is an excellent display of the tools and methodological approach to design verification. 
My sole critique is not about the novelty of the design or the strength of the tools and methodology, but rather the limited discussion of the results. I think the article would benefit greatly from the addition of interpretations of the statistics and of the BoolNet and GRENITS outputs, which in the current version are mostly left to the reader.",
"responses": []
},
{
"id": "23063",
"date": "31 Jul 2017",
"name": "Prinessa Chellan",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis study attempts to explore the molecular motor, Myosin XXI, using computational methods in order to assist discovery of new therapeutic approaches for leishmaniasis. The authors designed a nanocircuit with a coupled bistable switch inducer, in silico, using Tinker Cell. The basic premise for this study was that the intended targets could be impaired either through antisense RNA for the calmodulin binding motif or protein phosphatase 2A. The simulations carried out showed that the nanocircuit acted as a nanomachine when a chemical gradient was applied. The authors concluded that their designed circuit was found to be a good biological network which could be manipulated to potentially disrupt several processes such as intracellular trafficking, metabolite efflux, motility by Myosin XXI and the survival mechanism of ABC transporters causing mortality of the leishmanial parasite.\nOverall, the article is well written and the experimental approach appears to be well thought out. I cannot comment on the specifics of the computational methods as I have limited experience in this area.\nI recommend indexing of this article as it contributes a new strategy for elimination of the Leishmanial parasite.\n\nIs the rationale for developing the new method (or application) clearly explained? Yes\n\nIs the description of the method technically sound? Yes\n\nAre sufficient details provided to allow replication of the method development and its use by others? 
Yes\n\nIf any results are presented, are all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions about the method and its performance adequately supported by the findings presented in the article? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-94
|
https://f1000research.com/articles/6-1315/v1
|
03 Aug 17
|
{
"type": "Opinion Article",
"title": "The new alchemy: Online networking, data sharing and research activity distribution tools for scientists",
"authors": [
"Antony J. Williams",
"Lou Peck",
"Sean Ekins",
"Lou Peck",
"Sean Ekins"
],
"abstract": "There is an abundance of free online tools accessible to scientists and others that can be used for online networking, data sharing and measuring research impact. Despite this, few scientists know how these tools can be used or fail to take advantage of using them as an integrated pipeline to raise awareness of their research outputs. In this article, the authors describe their experiences with these tools and how they can make best use of them to make their scientific research generally more accessible, extending its reach beyond their own direct networks, and communicating their ideas to new audiences. These efforts have the potential to drive science by sparking new collaborations and interdisciplinary research projects that may lead to future publications, funding and commercial opportunities. The intent of this article is to: describe some of these freely accessible networking tools and affiliated products; demonstrate from our own experiences how they can be utilized effectively; and, inspire their adoption by new users for the benefit of science.",
"keywords": [
"Online networking",
"Social networking",
"Research data sharing",
"altmetrics",
"Alternative metrics"
],
"content": "Introduction\n\nIn the past 40 years, society has undergone considerable changes driven by the development of affordable personal computers, the internet and most recently, mobile devices, which allow widespread connection to the internet. In turn, these technologies have shaped how we interact with each other and form online networks. Since 2000 there has been an almost 800% increase in the number of people using the internet, with over 3.5 billion people online. Online sharing now ranges from posting opinions, 140 character nuggets on Twitter, updates and discussions on LinkedIn, photos or videos via free platforms such as YouTube or Vimeo, presentation slides on SlideShare and so on. The specific online media sites preferred in each country across the world differs. Despite the prevalence of social media tools, the vast majority of scientists do not use these tools to help share, evidence and amplify their scientific research1. We believe there are a number of reasons that most scientists do not use these tools. It could be because few people would even think of using online media tools for their scientific research, or because they do not understand the potential value. The lack of credit for sharing pre-published data, code or other forms of research outputs, especially in terms of citations that can contribute to career progression, may also be an issue. Maybe scientists do not see this activity as a valuable use of their time or they require initial guidance to help navigate use of these tools. Other issues include fear of being scooped, and the idea that if your work is solid, it should be able to stand on its own merits without needing to be “amplified.”\n\nWith new tools being developed, it is difficult to keep track, determine what works, and optimize their use, especially in the context of science. 
For example, Kramer and Bosman (2017) manage a growing list containing over 400 tools and innovations in scholarly communications, some of which can be used for online sharing. This list reinforces the breadth of software tools available, and demonstrates how confusing it can be for any scientist to know where to start.\n\nAs the world has changed in the past three decades, scientific publishing, seen increasingly as the currency of science for at least 50 years, has also seen a dramatic shift. Since the publishing of the first modern journal in 1665 (Le Journal des Sçavans), a staggering number of scholarly articles have been published, reaching the milestone of 50 million by 20092. Recently, Ware and Mabe3 projected that between 1.8 and 1.9 million scientific articles are now being published every year, contributing one aspect of Big Data for science4. In recent years, there has also been an explosion in the number of new Open Access publishers, a number of which seem to be focused on the profit potential of this new marketplace as predatory open access publishers. This exploitative open-access publishing business model charges publication fees without providing the editorial and publishing services associated with legitimate journals. This could allow scientists to game the system by publishing even poor quality science in predatory journals, effectively padding their resumes. This was tested in an elaborate sting in which a spoof research article was accepted by dozens of open access publishers5. It even extended to fake editors being given roles on their editorial boards6.
Such dubious practices in the publishing world, especially in terms of gaming a publication record, can unfortunately extend to the use of online networking and sharing tools to boost online profiles and game altmetric scores (vide infra).\n\nWith such a large quantity of scientific research finding its way into published works, it is hard for scientists of any description, whether chemists, biologists or physicists, to make sure their work rises above the ‘noise’. This is of course important if it is to be seen by peers and in turn used by them and cited, to ultimately be captured and further used to infer the importance of any given publication through various citation metrics. Historically, publishing was seen as the end result of the scientific endeavor, which one might consider as a simplistic linear process from hypothesis – experiment – publication (Figure 1). However, the traditional approach of a particular piece of research dissemination reaching its conclusion once it has been published was a limitation of printed journals and should not pose such limitations in an electronic online world. Now, with the advent of internet-based technologies, the last decade has seen an explosive growth in additional forms of media for the dissemination of publications and associated research data, analyses and results, including wikis, blogs and code-sharing platforms (e.g. GitHub). The process of research publication is now non-linear, with a potentially infinite variety of dissemination steps that could be taken between hypothesis and publication, and beyond (Figure 1). This could include early release of data and scientific manuscripts as preprints. Sharing details regarding a research study and associated data offers a number of potential positive effects that can contribute to the quality of science. The historical approach of peer review was intended both to improve and to ensure the quality of the science and the published output.
Sharing research data and preprints allows for early feedback on both the results and preliminary findings from peer groups and reviewers.\n\nThis has many potential implications, one being a way to extend the life of a scientific publication and its enmeshing in a network of other electronic outputs. Another is the considerable effort that would be required to fulfill even a sampling of these approaches. An example of the increasing importance of these non-traditional sources of scientific information is the growing prevalence of links to Wikipedia, blog posts and code-sharing platforms in the reference lists of scientific publications (~35,000 citations to Wikipedia, ~11,000 for YouTube, ~10,500 for Facebook, and ~7000 for Twitter). This will increase in the future.\n\nThe vast majority of researchers depend on research funding in order to progress their science. However, acquiring research funding continues to be a challenge, with only 20.7% (14,457/69,973) of NIH Research project grants funded in 2015. This increased competition drives requirements to assess the quality and volume of research output. Such assessment exercises are becoming more commonplace; for example, the Research Excellence Framework (REF2020) is the current system for assessing the quality of research in UK higher education institutions and, similarly, the Excellence in Research in Australia project (ERA). While there are various subjective judgements regarding the “impact” of a publication (including impact factors, CiteScore7, h-index and citations) for a research effort and ultimately for the performance and impact of a scientist, it should be obvious that in our present time the sharing, networking and outreach of research work could bring benefits. What those benefits actually are, and how to deliver them reproducibly, is one of the mysteries of this new scientific networking ecosystem.
How does one use technology to maximally increase the visibility of one's research outputs and potentially lead to breakthroughs that benefit humanity? This could be seen as the ‘new alchemy’ of science. Suggested benefits might be broader reach for a publication and Altmetric scores (vide infra) that could be used by granting agencies or others for assessing the scientist. By engaging or participating in online media, for example, scientists can be empowered by using and adopting the tools available to them to share and improve access to their research, facilitating networking and potentially resulting in greater impacts on their careers. However, these benefits come at a cost, as they require time and effort that could distract from other efforts, such as research, writing grants, reviewing, mentoring and so on, and they are not yet in the standard workflow of the scientist.\n\nBased on our collective experiences over the past five years, we believe there is a significant return on investment, far higher than simply relying on publishers to invest informed and targeted efforts in sharing a scientific publication. This in fact rarely happens unless the publisher, or you, invest in a press release, and the vast majority of publications are certainly never picked up by the general press, primarily due to the sheer volume of scientific papers being published. For a publisher producing many thousands, if not tens of thousands, of publications in a year, investing effort to share information about one publication above and beyond one tweet, or a rudimentary blog post on their website, is highly unlikely. If publishers were to promote each article, book or report they publish, this would flood their communication channels, thus creating too much noise. Their own marketing efforts would be less effective through mass communication and may also reduce reach, especially to the already overwhelmed target audiences that would benefit most from the author's research.
Ultimately, the best people to raise awareness are the author(s) themselves, who can leverage their own networks, whether through electronic tools or personal interactions. These may be as effective as a press release when it comes to spreading the word on what is likely a hot topic relevant to a few scientists in a specialized field.\n\nA scientific publication is considered the most basic and historical path to “sharing” the product/s of one’s research. It is also generally considered the final output: it details the purpose of a particular piece of research, provides enough detail to reproduce the work, and gives access to sufficient data (either directly in the manuscript, as supplementary info files, or via links to data stored on other external websites). Nowadays, however, even a tweet with links to further scientific information or data can potentially represent a “nanopublication”, although the longevity of some of these tools is questionable. This effort does not have to wait until publication, as you could tweet ideas at the very earliest stages of the research, depending on your level of openness (especially in regard to open data) and your concern about being scooped by people following you. In the process of our own experiments to determine the benefits and paths towards online sharing of our research, we have identified at least four steps that the reader could put into action. These four steps are to explain, to share, to gather evidence of increased awareness of the work, and to gather feedback from the community. These efforts may only take a few minutes of a scientist’s time, which is a fractional investment compared with the hundreds to thousands of hours spent on the scientific research from conception to publication.
These steps, in turn, may increase the impact of the research efforts.\n\nInvesting additional effort into sharing data, research outputs such as presentations, or the final published products of the research work may directly benefit a scientist’s career (especially with the growing attention given to alternative metrics, “altmetrics”, vide infra), leading to new collaborations, new funding or even new discoveries. While there is certainly no shortage of software tools available to share and amplify research efforts, we will discuss a small number of free tools that we use ourselves; of course, others are likely to have their own personal favorites. We hope this serves to whet the appetite, encourage further exploration to find out what tools work best for you to meet your objectives, and even form the foundation to discover new or other tools not identified in this piece.\n\n\nWhat are your goals at the start?\n\nTo extract the most value from these efforts it is certainly important to identify your primary goals and objectives for using these networking, sharing and amplification tools. For example, you might be interested in: 1) saving time by using integrated tools; 2) sharing your work using a small number of online services for dissemination and amplification; 3) tracking and providing evidence of your research “impact” to potentially help with funding applications; and/or 4) furthering your outreach to those outside your own network and perhaps engaging with the general public. Once you have defined your focus, you will be better positioned to decide which software services to use and whether to utilize a mixture or concentrate your efforts on one or two only.
Your institution, colleagues, collaborators or even publishers may also have some recommendations for you or may already be partnered with specific services of which you can take advantage.\n\n\nCategories of tools: Networking, sharing, tracking and research amplification\n\nWe separated these software tools into four specific categories: networking, sharing, tracking and amplification. We acknowledge that the lines between many of these are actually rather fuzzy and that networking sites, such as Facebook and Twitter, are primarily used for sharing. Many of these tools actually serve more than one particular function, and our article is acknowledged to be subjective and based on our own experience and usage. There are obviously more generic tools that have long existed, for example email, and newer tools like Slack or Yammer, which are generally used for private communication and sharing. While these newer tools may be important for science and collaboration within small groups, they are of limited use in extending your online network more broadly. Clearly, the changes we have seen with technology may not predict what we will see in future in terms of communication style and utility, as technologies themselves can redefine communication styles, as is the case with the 140-character tweet. Throughout this article we will point to our own profiles on these various online tools as examples.\n\nLinkedIn. While LinkedIn has a primary role to form connections and expose your career to potential employers and partners, it is likely the de facto networking tool for many professional scientists. LinkedIn is also the number one networking site for business and crosses all domains. The networking of scientists, business and investors offers potential opportunities for innovation and commercialization. Other platforms (e.g., ResearchGate) are more geared towards research.
LinkedIn offers both free and paid levels of service and, to date, we have only used the free services. We think that anyone creating a LinkedIn profile should invest a minimal effort in terms of adding a “professional” photograph, a career history back to their research training (i.e. college or university), a minimum of two to three research publications, and links to public presentations on other sites (vide infra). We also suggest that career interests be listed and some effort invested in building out a network of work colleagues, ex-supervisors, co-authors, etc. We believe that LinkedIn should be considered an opportunity for career networking, not family event and activity sharing, and that the highest level of professionalism should be maintained on this site. The ‘networking effect’ will likely result fairly quickly in new contacts reaching out to join your network, and we suggest being somewhat selective in accepting these offers. Some proposed filters for accepting new contacts include: 1) whether you have met them face-to-face; 2) the size of overlapping networks (for example, when you have ~10 contacts in common); and 3) when you have already had either phone or email contact with the person. As well as recommendations from those you have worked with directly, the endorsement facility of LinkedIn allows members of your network to identify and endorse you for specific skills and this, in particular, can be valuable in terms of having a publicly acknowledged list of capabilities visible to the LinkedIn user base. An example of such an endorsement list is shown in Figure 2, which shows a partial list of the featured skills and endorsements for one of the authors (SE). This list can be pruned according to which skills you would want to be identified as having. It is also possible to directly list projects that you have been (or are) involved with, as well as associating the various members of the project team.
This further defines the networking aspect of the site.\n\nNote that these can be directly managed by the account owner to remove endorsements that they do not want to be visible.\n\nThose who use the site will have noticed business managers sharing their greatest coaching ideas or meaningful quotes. A scientist can use the same facility to post an update regarding present activities or simply something of scientific interest to share with followers. In this regard, the site can be considered as a more expansive version of Twitter. Some of these types of updates can be incredibly useful as tools to highlight a recent achievement for your group or collaborators. From our experience, keeping them brief and adding an image certainly helps in obtaining more views and ‘likes’. We have used this facility to provide updates on new papers or grants received, as well as updates on specific projects. We have found that positive or engaging news can quickly gather momentum (e.g. announcing new grants, new hires, new jobs) and can drive contacts that lead to offline follow-up. As an example, AJW posts updates on LinkedIn regarding his most recent project, the CompTox Chemistry Dashboard8. The associated Google Analytics for the site shows that these postings are very effective in driving traffic to the site, commonly a few dozen visits within a day of posting an update about the site. This can be informative about your network and perhaps beyond, in that good news travels fast. One or more scientific publications can be associated with the profile to illustrate your latest research efforts, and we suggest associating your highest-profile publications with the site. Ideally, LinkedIn would make use of Digital Object Identifiers (DOIs) and would allow a resolver service, such as CrossRef, to display the publication details rather than forcing manual entry as it does at present.
DOIs are one of the primary ways that scientific online tools integrate their data streams (i.e. ORCID, Altmetric, Publons). Besides using LinkedIn to share links to your latest publications, one can also insert PowerPoint presentations, PDF files and other document forms using the embed functionality available via SlideShare (described in more detail later).\n\nThe regularity of updates regarding new publications and presentations can be representative of your productivity, and requires active attention to your LinkedIn profile. The sharing of articles, presentations and science news that interests you can also help drive attention to other people’s work and elevate interest in it. For sharing via the site, we suggest using URL shortener services like TinyURL, Goo.gl or Bitly to track the number of times people click on your link (clickthroughs). However, it should be remembered that some services are currently banned in countries like China (e.g. Google), so you may want to ensure you are using “globally friendly” services for this purpose. In our opinion LinkedIn is a pivotal tool, and the purchase of LinkedIn by Microsoft9 suggests that it will become more closely integrated with Microsoft’s products, maybe even extending to an integration with their Microsoft Academic site, which operates in the same space as Google Scholar Citations.\n\nResearchGate and Academia. There are several tools for networking around your publications. Two of the most popular are ResearchGate and Academia. While there are differences in functionality, both sites provide similar abilities in terms of sharing preprint manuscripts, presentations, posters and other forms of general communication of your science. For example, one author’s (AJW) Academia site lists ~250 publications, presentations, book chapters, magazine articles and other research outputs and has had ~30,000 views.
The detailed analytics page includes geographical details, views and downloads, and Academia’s measures of impact. A recent study in PLOS ONE10 states that papers uploaded to Academia receive a 69% boost in citations over five years.\n\nResearchGate appears to be more expansive in terms of what can be hosted on the site and can include, for example, datasets, project updates, patents, working papers, code, and negative results. Users are encouraged to fill out their complete profile, and list awards and their previous work experience at a minimum. It also provides a forum for technical questions and answers, and most recently, allows users to group publications and data into ‘projects’. SE’s profile includes 418 contributions as of 19th March 2017 and provides a reach of ca. 15,000 researchers, who can learn about his publications, presentations and postings to the site. It should be noted that he barely uses this platform, as he prefers to put his efforts elsewhere. Academia and ResearchGate, when used to develop a network of followers, both offer an excellent communication path for sharing activities and research outputs with the community using the sites. These sites require considerable effort to respond to requests and engage with others if you are to fully maximize their value.\n\nIt is important to note that any upload of published work to these services requires permission from the publisher. A strong movement from publishers, which is beginning to gather momentum, puts pressure on these services to ensure they educate their users on appropriate copyright. You may find the publisher will contact you if they feel that your published work is listed on ResearchGate in conflict with your agreed copyright. To ensure that ResearchGate is used without risking a breach of copyright, it is necessary to read the Copyright Transfer Agreement or transfer license associated with the publisher to see how the paper can be shared.
Open access policies can be checked on the journal’s website or by using SHERPA RoMEO, a site that presents policies based on a journal title or ISSN search.\n\nCurrently, the most widely adopted social sharing platform is Facebook, approaching two billion users as of May 2017. While we acknowledge the penetration, versatility and general global acceptance of Facebook as a platform, we collectively use it mainly for sharing with friends and family, only rarely sharing links to some of our scientific works. We prefer to separate career activities from personal pursuits, but acknowledge that this is also a personal choice. For younger generations this may be inverted: their followers on Facebook may be colleagues rather than family, and for them it would be seen as appropriate to share on Facebook, on Instagram or on group chat sites, such as WhatsApp. Instead, for sharing we use alternative online tools as outlined below.\n\nBlogs. All three of us have managed blogs for a number of years, and two of us (SE and AJW) invested considerable amounts of time in sharing opinions, data and ideas. We have developed collaborations of value, asked for opinions and guidance, and used the blogs to share information regarding our (or others’) latest research efforts, industry research and news/updates. Overall though, it seems that there is less interest in blogging, probably due to the amount of time needed to develop content, and we ourselves more commonly share information now in smaller soundbites (via Twitter and LinkedIn). We also try to reach other networks by being guest bloggers on the blogs of others or on news services, and by using these other sites to share data. There are so many other applications now available for sharing relative to just a few years ago that we will therefore focus our discussion on those sites that we use on a more regular basis.\n\n“Small nugget sharing”: Twitter/Google Plus.
In many ways both Twitter and Google Plus are for sharing bite-sized communications, presently limited to only 140 characters for Twitter, but possibly to be extended. In terms of sharing, Twitter is an ideal application for pointing your network to papers, publications, data sets, and blog posts via embedded short links. Retweets across your network can drive attention to these various types of resources. It is simple to use, and pushing out a URL to something you wish to share can drive attention easily, though only for a fairly short period of time. Once an item of interest has been prepared on some other form of social media, or a website link to a publication requires sharing, Twitter and Google Plus then become ideal ways to point to these items. Examples of nugget sharing include letting people know in advance that you will be at a conference (and potentially setting up a “tweetup” with new connections), or live tweeting at a conference11. One of the primary reasons that we use these platforms is for amplification of our publications (vide infra). Both AJW and SE have initiated collaborations and specific data sharing activities via Twitter. For example, SE attended a meeting and shared the need for an application to build on a Green Chemistry dataset that was available only as a PDF file. What was initiated as a public Twitter exchange resulted in a mobile application being available within a few days12, an example of a specific collaboration initiated via 140-character exchanges. Another example is a Twitter exchange that gained support for adding a new chemical identifier, associated with the EPA CompTox Chemistry Dashboard, to Wikidata. Also, an open research project on Ebola (SE) was initiated, which in the space of two years facilitated the identification of new antivirals, publications13–17 and an eventual NIH grant.
Others have described how Twitter can be used by scientists to extend their networks and ultimately find jobs18 and, in our domain of chemistry, for sharing “Real-time Chemistry”.\n\nTaking advantage of the communities of followers on Twitter and Google Plus is only possible, of course, once you have established a community. Building a community requires engaging in the platform: following other users who post interesting content, engaging with that content, sharing other people’s content and posting your own. Developing your own following is an incremental process, which may take years, and there are more expansive guidelines available on how to do this.\n\nMedia sharing: Presentations, preprints and videos. Most scientists present their work at conferences, either as talks or posters. Without using online sharing tools, the only people who would see your presentations, which commonly take hours to put together, would be those at the meeting. However, online tools for sharing and distributing these same presentations can result in a much broader reach and, importantly, keep the work alive for a period much longer than the limited presentation time, and associated audience, at the conference. One commonly used presentation sharing platform is SlideShare (acquired by LinkedIn in 2012). An advantage is that SlideShare and LinkedIn now both belong to the same company (Microsoft). This allows a relatively simple process for associating a presentation with your profile with one click (“Add to Profile”) to make the presentation visible (Figure 3). SlideShare is not limited to simply sharing PowerPoint presentations: the user can include article preprints, infographics and other documents, and videos can be embedded via YouTube.
One approach (adopted by AJW) to derive most value from the network effect of multiple connected platforms, reach the broadest audience, and share the work in various forms is as follows: 1) a PowerPoint presentation delivered at a meeting is shared on SlideShare (and also figshare, ResearchGate and Academia); 2) a narrated version with voiceover to capture the presentation is made and published to YouTube; 3) the YouTube embed function is used to insert the video into the second slide on SlideShare; and 4) a viewer then has the choice to view the slide deck, download it for local storage, and, if they want, hear the author present the slide deck with the voiceover.\n\nWhile we acknowledge that the most popular global video sharing platform is YouTube, there are geographical issues, not only based on language, in terms of all countries accepting the sharing platform. Streaming content in China via YouTube is an issue, and other platforms, such as Vimeo or Weibo, may be an option. We think that scientists should not mix their scientific movies (for example, narrated presentations, lab activities, etc.) with family movies!\n\nSlideShare is part of the LinkedIn application family and only one click is required to share presentations to the LinkedIn profile and share them with all account followers.\n\nData sharing. There are myriad platforms available for data sharing, and it is difficult to be exhaustive in this short article as, depending on the particular domain of science, there will be biases. Climate science, versus chemistry, physics or medical sciences, has its own favorite platforms. We have experience of the Dryad Digital Repository, figshare and Mendeley Data for sharing most data types. Other sites can be used for specific types of data sharing, for example, PubChem for sharing BioAssay data.
For the purposes of this article we focus only on the figshare site for data sharing, as we have the greatest experience in terms of using this platform, as well as the fact that it is now integrated with many publishers that we have published our articles with (e.g., PLOS, the American Chemical Society, Springer and Wiley). Importantly, figshare offers the advantage of creating DOIs, which give unique, persistent identifiers that can then be resolved across platforms. Datasets published to figshare can be embargoed, cited in a manuscript, and made open at the time of publication. This provides important benefits as specific datasets are now fully citable (via DOI), the number of views and downloads are directly tracked, the altmetrics can be measured and, overall, there is significantly more insight into how data is used and accessed than simply putting a file up as supplementary info with a publication. An example of a shared dataset, including views, downloads and an associated “Altmetric donut”, is shown in Figure 4. The donut represents the Altmetric Attention Score and is designed to identify how much and what type of attention a research output has received. The colors of the donut represent the different sources of attention, and the details regarding how the score is calculated are available online.\n\nAt deposition of the file, a digital object identifier (DOI) can be requested which, for this file, is accessible at https://doi.org/10.6084/m9.figshare.3578313.v1.\n\nWith the data (or presentations or documents) on figshare, we can then share details via Twitter and use the associated DOI to cite our datasets again on ResearchGate. figshare also allows us to share figures before they are used in manuscripts, define them with a CC-BY Creative Commons license, and then use them in our publications.
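As an aside on the DOIs just mentioned: a DOI becomes a resolvable link simply by prefixing it with https://doi.org/, and figshare's versioned DOIs append a .vN suffix, as in the dataset link above. A tiny helper for scripting this (the function name is our own):

```python
from typing import Optional

def doi_url(doi: str, version: Optional[int] = None) -> str:
    """Return the https://doi.org/ resolver URL for a DOI,
    optionally pinned to a figshare-style version suffix (.vN)."""
    doi = doi.strip()
    if version is not None:
        doi = f"{doi}.v{version}"
    return f"https://doi.org/{doi}"

print(doi_url("10.6084/m9.figshare.3578313", version=1))
# https://doi.org/10.6084/m9.figshare.3578313.v1
```

Citing the versioned DOI pins readers to the exact deposit you used, while the unversioned DOI always resolves to the latest version.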
In this way, we are not transferring copyright of our figures to the journals, and other people who wish to use our figures do not have to request permission as they are already openly licensed. figshare is also a platform where we share our presentations, posters and preprints (other preprint servers do exist, such as arXiv, bioRxiv, and the impending ChemRxiv). It is also where we tend to host the data that we point to from our publications, as well as using subsets of the overall data in the supplementary information with publications. Tagging of any form of information that we share via the site makes it more discoverable, and a search across a specific application website can therefore surface this to interested parties. More recently figshare has added ‘collections’, and we have made use of this around our work on the Zika virus. As multiple academic publishers have now accepted figshare as a data repository of choice, its reach appears to be growing and it is likely to have increasing importance as a data repository for science.\n\nCode sharing. It is advantageous to those who produce computer code, as a part of their scientific output, to be able to share it and allow others to consume it. Major code repositories, such as GitHub and SourceForge, offer many advantages for collaborative software development and versioning, so have great utility in their own right. They are integrated into online sharing platforms, thereby ensuring that code updates are transmitted across the community, keeping audiences who may require code access informed of new depositions that they can consume. Most scientists may never use these repositories, but they are of central importance in the science ecosystem and other data sharing tools can learn from them. There are other reviews regarding the adoption of these platforms and readers are pointed to these for more detail19.
We, like many others, believe that code used in projects to deliver the software underpinning, for example, data processing, analysis and reporting should be citable and ultimately form part of the altmetrics feeding a scientist’s acknowledgement for their contributions to science.\n\nAs scientists, one of our interests is to track our publication records, have access to our citation statistics, and potentially measure the impact of our work. Impact can be estimated by a number of statistics, such as the h-index. There are even programs available so you can generate your own statistics.\n\nIn recent years, catalyzed specifically by the work of Priem et al. and the release of the “altmetrics manifesto”, altmetric statistics have started to gain general acceptance within the community as measures of interest in a scientist’s work. This acceptance is likely to increase as their algorithms become more mature and produce increasingly relevant results. Altmetric statistics not only take account of standard publication metrics, such as citations, article views and downloads, but may also track views and downloads for presentations and videos, and include measures of attention for discussion of publications online, via blogs and on platforms such as Twitter and Facebook. Altmetric statistics may also attempt to measure the impact of the reuse of data sets and code. This, to many, may seem incredibly complex and go far beyond just tracking “the paper” itself, but this is the world that is evolving. A number of tools that attempt to integrate and track these altmetric impact statistics have been established. These include ImpactStory, Altmetric and PlumX.
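As a concrete example of the classical statistics mentioned above, the h-index is simple to compute yourself from a list of per-paper citation counts: it is the largest h such that at least h of your papers have h or more citations each. A minimal sketch:

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank          # the paper at this rank still has enough citations
        else:
            break             # sorted descending, so no later paper can qualify
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers have at least 4 citations
print(h_index([25, 8, 5, 3, 3]))  # 3: one huge paper alone does not raise it
```

The second example illustrates why the different services report different h-indices for the same person: the value depends entirely on which citations each service counts, not on the formula.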
This section discusses some of the sites we use for managing our publication records and tracking our classical citation statistics, as well as those that we use for measuring our impact in the realm of altmetrics.\n\nFor the purposes of “publication and citation tracking” we use: ORCID, Google Scholar, and Microsoft Academic Search. While other platforms, such as ResearchGate, do an excellent job of informing us via email when there are new publications to confirm and associate with our profile, we use ResearchGate more as a networking site and a catch-all for the bulk of our research outputs (see earlier). These three sites are more focused on simply tracking publications and, in the case of ORCID, this expands to include presentations listed on figshare (via the DOI integration). Each of these websites is free to use for an individual scientist, while ORCID also offers access to an institutional package (vide infra) that allows organizations to mesh together contributions for their staff into an institutional representation of activity.\n\nORCID. An Open Researcher and Contributor Identifier (ORCID iD) is a unique numeric identifier for a researcher that is free to claim and can be obtained simply by registering at the website. Almost four million identifiers have been claimed (or issued) at the time of writing. The ORCID iD is a derivative of previous efforts by Clarivate Analytics (formerly the Intellectual Property and Science business of Thomson Reuters) to produce the ResearcherID, and has the distinct benefit of disambiguating authors and their association with publications, which is an issue for other sites (see later for a distinct example of this problem with Google Scholar). Once a scientist has claimed their unique identifier, they are responsible for defining the content associated with it, including whether they wish the content to be public or private.
They can add a short biography and associate a number of their websites and public profiles with the ORCID site. Increasingly, these identifiers are expected or accepted by publishers at the time of manuscript submission, and funding agencies are also starting to use them. The website allows for an online resume to be assembled from publications by searching based on your name and editing the list as necessary. The data collected are then available via an application programming interface and can be used, for example, by publishers on their own platforms to enhance the linkages between an author’s publications. As a starting point, it is possible to upload a list of publications in a standard format, such as BibTeX or EndNote. It is also possible for a scientist with a ResearcherID to connect and migrate the existing content to ORCID and expand from that point. The ORCID application programming interface and authorization module allow connectivity between web-applications.\n\nSince the ORCID iD itself has value independent of the capabilities of the website as a representation of a scientist’s publication record and resume, obtaining an ORCID iD is, in our estimation, one of the primary entry points into the scientific networking regime today and we encourage registration. Inclusion of the ORCID iD in PowerPoint templates for presentations shared online, and on other scientific networking and data sharing sites, ensures that a simple web search in the future will aggregate the majority of your public works labeled with the identifier. We also add the ORCID iD either on the first or last slide of our slide decks with the intention that it is captured by the search engines, allowing a simple search to provide a list of ORCID-indexed works.\n\nGoogle Scholar and Microsoft Academic Search. Google Scholar (GS) is a free website for assembling what is effectively a list of your publications and their citations.
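An aside on the ORCID iD format itself: the final character of the 16-character identifier is a check character computed with the ISO/IEC 7064 MOD 11-2 algorithm, so obvious typos can be caught locally before querying any service. A sketch of the check (0000-0002-1825-0097 is the sample iD used in ORCID's own documentation):

```python
def orcid_check_char(base15: str) -> str:
    """ISO/IEC 7064 MOD 11-2 check character over the first 15 digits."""
    total = 0
    for ch in base15:
        total = (total + int(ch)) * 2
    result = (12 - total % 11) % 11
    return "X" if result == 10 else str(result)

def is_valid_orcid(orcid: str) -> bool:
    """True if a dash-separated 16-character ORCID iD passes the checksum."""
    digits = orcid.replace("-", "")
    if len(digits) != 16 or not digits[:15].isdigit():
        return False
    return orcid_check_char(digits[:15]) == digits[15].upper()

print(is_valid_orcid("0000-0002-1825-0097"))  # True
print(is_valid_orcid("0000-0002-1825-0079"))  # False (transposed digits)
```

This is why a checksum-valid iD printed on a slide or poster is safer than a name: a mistyped digit almost always fails validation rather than silently pointing at another researcher.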
GS also provides metrics, such as the h-index, that we have found to be generally much higher than in the Web of Science (WoS), which could be because GS includes self-citations, whereas WoS does not. There have been comments that GS is a better predictor of actual citation numbers than WoS. GS is useful for searching for publications and perhaps picking up citations that the commercial tools are missing. We have seen a worrying recent trend in terms of auto-associating publications: recently AJW identified that 70 publications that were not his had been associated with his GS profile and had to be manually deleted. Profile maintenance is therefore necessary by the user, and there has to be careful curation and pruning on an ongoing basis. Microsoft Academic is much like GS, but we have found it to be less useful because it was unable to capture citations to our publications.\n\n“Alternative metrics”: Plum Analytics, Altmetric, ImpactStory. These tools aggregate the citations from blogs, tweets, Facebook, etc., and use their own algorithms to derive a score for each paper. Twitter is commonly a major contributor in terms of social media counts, and a weighted approach in terms of the importance of a social media event can be taken into account. For example, Altmetric gives a news item a higher weighting than a tweet. Interestingly, ResearchGate also derives a score for each author, though it remains a little unclear how the score is calculated.\n\nKudos. The emerging area of author support tools has very limited research findings (e.g. data and code) available. However, Kudos has been recently highlighted as useful in this regard20. While there are plenty of websites for an author to post their papers and preprints, enriching these research outputs to add more information about what commentaries have been made about the work, linking to additional presentations, datasets, etc., is less supported in general.
Kudos tracks citations (supported only by WoS statistics at present), altmetrics on publications (supported by Altmetric currently), and other statistics, like usage, where available (e.g., downloads and clickthroughs). It also provides a dashboard of an author’s papers using a CrossRef DOI as the basis of the data feed and can be linked to your ORCID account (Figure 5). For a new user there is a downside if they have a large publication record, as it will take a very long time to enrich every publication with links to other content, but a user can of course choose to ignore their historical record of publications and focus only on new publications moving forward. Our adoption of the platform has allowed us to enrich publications with information (with examples described below), share the associated Kudos page, drive traffic to the paper and track this activity. In our experience, Kudos results can be improved when multiple authors contribute and work together to improve awareness of a publication, because the publication is then disseminated through multiple authors’ networks and online networking efforts.\n\nExternal resources such as presentations, videos, interviews, figures, datasets or related publications can be linked to an article, as shown in Figure 5 (right-hand side). The claimed article is available directly by appending the DOI to the growkudos.com URL (https://www.growkudos.com/publications/10.1016%252Fj.envint.2015.12.008).\n\nThe enrichment capability offered by the Kudos platform delivers a valuable capability to the authors of the publication: the ability to keep the publication up to date by linking to related information, for example, blog posts regarding the work, media coverage, presentation slide decks from conferences, later publications by the authors themselves or derivative works by other scientists.
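A note on the growkudos.com link above: the DOI's slash is percent-encoded twice ("/" becomes "%2F", whose "%" in turn becomes "%252F"), which keeps the DOI intact through URL routing. A sketch that reproduces the printed link (the double encoding simply mirrors the URL shown above; Kudos's own routing rules are not documented in this article):

```python
from urllib.parse import quote

KUDOS_BASE = "https://www.growkudos.com/publications/"

def kudos_url(doi: str) -> str:
    """Append a DOI to the growkudos.com publications URL,
    percent-encoding it twice as in the link printed above."""
    once = quote(doi, safe="")          # "/" -> "%2F"
    twice = quote(once, safe="")        # "%" -> "%25", giving "%252F"
    return KUDOS_BASE + twice

print(kudos_url("10.1016/j.envint.2015.12.008"))
# https://www.growkudos.com/publications/10.1016%252Fj.envint.2015.12.008
```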
One of the authors (AJW) coined the term “forward citation” to refer to this capability, as citations are retrospective in nature and point only to earlier work. Enrichment of an article can continue to keep the research reported in a publication updated with follow-on information as later works of various types are associated. An example of this from the author’s list of publications is associated with the synthesis of the chemical known as “Olympicene”, a small molecule synthesized during the time of the 2012 Olympic Games as a form of molecular commemoration. The claimed publication has been enriched with a YouTube movie, multiple blog posts, various types of media coverage and even detailed discussions regarding part of the chemical synthesis. In particular, multiple scientific papers by other authors referring specifically to the trivial chemical name of Olympicene have been associated with the original paper thereby connecting the work directly. While search engines and referential systems from publishers attempt to do this in an automated manner, in this case the claiming author(s) have an opportunity to directly make the association and comment as appropriate, regarding the value and nature of the related information.\n\nIn June 2016, Kudos and the Altmetrics Research Team at the Centre for HEalthy and Sustainable CitieS (CHESS), and the Wee Kim Wee School of Communication and Information at Nanyang Technological University (NTU, Singapore) analyzed user data and found that 51% of registered Kudos users were STEM researchers sharing their work, 29% were social sciences researchers and 8% were humanities researchers. This demonstrates the engagement of STEM researchers, and how they are leading the way in using innovative tools to disseminate their research. 
However, with seven to nine million active researchers a year3, Kudos only has around 1% penetration into the research community, so there is huge growth potential here for these types of services to help more of the scientific community.\n\nThis article has primarily focused on the benefits of these tools for the individual researcher in terms of sharing their research and work outputs in the forms of data, presentations, publications and other outputs. A number of the tools also offer capabilities to track “organizational impact” (Kudos; Altmetric; ORCID). LinkedIn already allows aggregation of users into organizations so that they can stay informed, connected and follow and post updates (see here and here). Being aware of your institution’s research, in terms of what is being disseminated and what comments are circulating in the public domain via social networks and media, is certainly of interest to any organization. For example, Altmetric data can help track the influence of an institution’s work on public policy and provide insight into the value of research outputs. Since these tools can give almost immediate details about public engagement with work outputs as soon as they are published, an organization can ensure its work is being interpreted correctly and address any potential issues arising from follow-on reports. In terms of funding, many of these sites can be valuable in engaging new collaborators and sponsors, as well as providing information valuable in obtaining future support. Online activity benchmarking relative to other peer organizations can be important in terms of identifying contributions and productivity.\n\n\nPotential downsides to this new alchemy\n\nThis article has outlined various tools and approaches that can be used to develop an online network and has described a number of benefits resulting from participation. For balance, it is appropriate to also identify potential downsides to engaging in these activities. 
It is certainly true that there is a potential for noise in the network as millions of people have moved online and share their views, commentaries and concerns. In the domain of science, as secluded as it may be from the engagement of the masses, if all scientists were to take advantage of these software applications to share their activities, the advantages enjoyed by the early participants in online networking for scientists could be diluted as the noise in the system increases. Separating the true signals from the noise will require developing the skills to participate online in a manner that rises above the noise. There is likely more to lose by not participating than by working to develop an online presence that both contributes to the greater community and develops your own following.\n\nOnline platforms will certainly be used to push both good and bad, or weak, science. However, these platforms also offer opportunities for true scientific discourse that can direct those who have less scientific knowledge to better sources of information, with the potential to re-educate and advise. While bad science has already developed a voice online, there is an active community just as willing to participate in the debate, and reactions can be swift.\n\nScience is meant to be data-driven and objective, yet open to discussion and reinterpretation relative to multiple hypotheses. Historically, this discourse has been paced by the release of relevant publications to the public. Peer review has been limited to a small audience prior to publication, commonly between three and five scientists, before release to a larger community for consumption. Historically, responses to articles would take the form of letters to the editor and would be slow to move into press, likely with exchanges behind the scenes between editors, authors and potential critics. 
Post-publication peer review is now facilitated by publishers, allowing direct comments to be posted against articles, in general with moderation, but any publication can now be critiqued immediately after release. Naysayers to scientific work can be summarily disregarded and their commentaries debated in the public domain. Meanwhile, online networking tools also provide an exciting and engaging means to cautiously discuss science, and even conduct further work in the laboratory to validate the reported findings. From our own domain, the Hexacyclinol controversy was taken online to a discussion with a community of interested scientists, and while the community disassembled the science published in February 2006, it was a full six years before retraction21, after dozens of blog posts and online discussions. Similarly, an article regarding oxidation by a reducing agent22, sodium hydride, was dismissed by “peer review in the blogosphere” in a matter of days following publication. This included blog posts from labs showing NMR spectra and detailed exchanges between scientists on blogs. Sadly, all of this discourse failed to make it to the journal article, where the simple message communicated by the American Chemical Society on the journal page is “This manuscript has been withdrawn for scientific reasons”22, and the science reported online has been lost to posterity as a result of the majority of links decaying into obsolescence (e.g. http://www.coronene.com/blog/?p=842 and http://www.organic-chemistry.org/totalsynthesis/?p=1903; both fail to link to the original posts). This points to the somewhat temporary nature of internet exchanges and the challenge of maintaining and archiving these for future retrieval.\n\nHundreds of millions of tweets are exchanged every day, an average of 6000 per second as of this writing. Blog posts are loaded and commented on. 
The number of English Wikipedia pages is approaching 5.5 million, with about 750 pages added every day and multiple edits being made at any point in time. Can we depend on the Internet Archive Wayback Machine to capture all of this content? While the Wayback Machine did capture the decayed page regarding the oxidation by sodium hydride (https://web.archive.org/web/20090801231430/http://www.coronene.com/blog/?p=842), it is highly unlikely that capturing all internet knowledge is even feasible, as the machine takes irregular snapshots. As with the information contained within books, as with knowledge itself, internet content can decay and morph, yet society, science and humanity continue to move forward unabated. Not all contributions and engagements in the online networking world will make a difference, and we can only hope that there is useful signal in the noise.\n\n\nConclusions\n\nThe public online networking, tracking and amplification tools described in this article that can be used for raising awareness of scientific outputs are just the tip of the iceberg. We acknowledge that our efforts invested in them would dissipate if the software tools cease to exist or are overtaken by new offerings. For example, WhatsApp, with a worldwide user base of 1.3 billion users, is not used by any of the authors yet! We are not alone in our ignorance of the app; however, it was also omitted from the recent Times Higher Education listing of social media for academia. Perhaps this app represents an untapped tool for communication and networking in science. We imagine that the software tools used in five years by scientists are likely to be different, though some of the existing sites will persist, so there may be a hurdle to overcome before engaging with a new software application. Social media tools overall can be viewed as a conversation container that is most relevant when it is current and whose value degrades over time; i.e. 
a tweet from five years ago might not be as relevant as when it happened. This also raises the question of longevity, as these efforts may not be around as long as the papers, which can remain useful for many years or decades.\n\nA new application must either offer some specific advantages over existing apps or include ways to import your information to avoid re-entering data. If you want to learn more about our personal use of online media for science communication and how it has evolved over the years, please review our supplemental materials (https://www.slideshare.net/AntonyWilliams/; https://www.slideshare.net/ekinssean/). Our involvement in sharing data, research activities, presentations and publications has developed over a period of almost a decade. AJW has presented dozens of times at educational institutions and governmental organizations. SE most recently presented his experiences at the AAPS conference. LP has supported hundreds of scientists and authors in numerous disciplines over the years to help them publish, disseminate and increase the impact of their research. In addition, LP also advises publishers and libraries on how best to support their researchers. From these efforts, a couple of common observations emerge. Adoption of online media tools appears to be generational, with much faster uptake by early-career scientists, while scientists of older generations generally avoid them. Some scientists do not see participation in open data sharing, posting their presentations and putting effort into amplifying their research as an appropriate or useful activity. 
Similarly, there appear to be true advocates of openness, especially with the increasing drive towards open access publishing, but we have certainly met scientists who take a very neutral or skeptical position on open science and sharing in general.\n\nOne of the recurring themes of our engagements is that many of these software tools exist in isolation and there is no way to link them all together, thereby requiring multiple efforts to populate them with data and information. This results in repetitive effort and wasted time for users. It does, however, present a potential commercial opportunity to support those with little time to invest in starting or maintaining use of these various tools, which could enhance the visibility of their scientific outputs. A useful service would be to offer an integration tool to update multiple online media profiles, with at the very least the most basic of information, and to show the relative benefits from these different tools. Or one could set this as a project for their younger lab members who might be more adept with the technologies!\n\nWhile we have mentioned a number of online platforms that we use in our own efforts to network, share and amplify our research, these are not necessarily the best tools available for every individual scientist’s use case. AJW primarily operates in the field of chemistry, cheminformatics and chemical biology, while SE is focused more on drug discovery for rare and neglected diseases, and our chosen tools are based primarily on our early adoption, familiarity and cross-fertilization from collaborating over the past decade. The most appropriate sites for physicists and biologists to share their data may well be very different. There will be more websites and applications coming online in the future that may be even more fit-for-purpose for a scientist operating in a particular field. We encourage experimentation, and adoption if you find them of benefit. 
To begin with, we suggest keeping it simple: use a few tools and focus on fundamentals – be smart with the time you have available. We find ORCID identifiers to be increasingly in demand by publishers, and they will be an expected part of every scientist’s profile before long. Our Google Scholar Citations profiles are our primary means of tracking publications and, as a beneficial side effect, they inform us of citations to our work. LinkedIn is the primary professional networking site at present and it is worth the effort to develop an extensive profile. SlideShare (or similar) is valuable for sharing presentations and documents, figshare (or alternatives) for sharing citable data, Kudos for post-publication enhancement by associating with later or relevant works, and Twitter for bite-sized sharing into a large network of potential engagement. While a scientist may not see much traction with any one tool, coordinating the use of more than one is key, and should reveal the benefits of engaging with this ‘new alchemy’. We think you will quickly discover what works best by measuring activities and their impact, as the answer may be different for each scientist.\n\nThe approaches outlined here regarding sharing details about a scientific manuscript, or simply a research study and associated data, also offer a number of potential positive effects that can contribute to the quality of science. The historical approach of peer review was to, hopefully, both improve and ensure the quality of the science and the published output. Sharing research data, presentations, posters and preprints allows for early feedback on results and preliminary findings, and thereby offers the opportunity to hear from peer groups and reviewers. 
This can certainly help contribute to the quality of science before the final published record in a journal is established.\n\n\nDisclaimer\n\nThe views expressed in this article are those of the authors and do not necessarily reflect the views or policies of the U.S. Environmental Protection Agency. Mention of or referral to commercial products or services, and/or links to non-EPA sites does not imply official EPA endorsement.",
"appendix": "Competing interests\n\n\n\nLP was previously an employee for Kudos, but is now an independent consultant. SE is an employee of Collaborations Pharmaceuticals, Inc. and Phoenix Nest, Inc.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nAuthor information\n\nTwitter accounts for AJW, LP and SE can be viewed at:\n\nAJW: @ChemConnector\n\nLP: @loupeckconsult\n\nSE: @collabchem\n\n\nReferences\n\nCollins K, Shiffman D, Rock J: How Are Scientists Using Social Media in the Workplace? PLoS One. 2016; 11(10): e0162680. PubMed Abstract | Publisher Full Text | Free Full Text\n\nJinha AE: Article 50 million: an estimate of the number of scholarly articles in existence. Learn Publ. 2010; 23(3): 258–263. Publisher Full Text\n\nWare M, Mabe M: The STM Report. 2015. Reference Source\n\nWilliams AJ, Pence HE: The future of chemical information is now. Chem International. 2017; 39(3), In press. Publisher Full Text\n\nBohannon J: Who's afraid of peer review? Science. 2013; 342(6154): 60–5. PubMed Abstract | Publisher Full Text\n\nSorokowski P, Kulczycki E, Sorokowska A, et al.: Predatory journals recruit fake editor. Nature. 2017; 543(7646): 481–483. PubMed Abstract | Publisher Full Text\n\nVan Noorden R: Controversial impact factor gets a heavyweight rival. Nature. 2016; 540(7633): 325–326. PubMed Abstract | Publisher Full Text\n\nMcEachran AD, Sobus JR, Williams AJ: Identifying known unknowns using the US EPA’s CompTox Chemistry Dashboard. Anal Bioanal Chem. 2017; 409(7): 1729–1735. PubMed Abstract | Publisher Full Text\n\nJamerson J: Microsoft Closes Acquisition of LinkedIn. In Wall Street Journal. 2016. Reference Source\n\nNiyazov Y, Vogel C, Price R, et al.: Open Access Meets Discoverability: Citations to Articles Posted to Academia.edu. PLoS One. 2016; 11(2): e0148257. PubMed Abstract | Publisher Full Text | Free Full Text\n\nEkins S, Perlstein EO: Ten simple rules of live tweeting at scientific conferences. 
PLoS Comput Biol. 2014; 10(8): e1003789. PubMed Abstract | Publisher Full Text | Free Full Text\n\nEkins S, Clark AM, Williams AJ: Incorporating Green Chemistry Concepts into Mobile Chemistry Applications and Their Potential Uses. ACS Sustain Chem Eng. 2013; 1(1): 8–13. Publisher Full Text\n\nEkins S, Freundlich JS, Coffee M: A common feature pharmacophore for FDA-approved drugs inhibiting the Ebola virus [version 1; referees: 2 approved]. F1000Res. 2014; 3: 277. PubMed Abstract | Publisher Full Text | Free Full Text\n\nEkins S, Coffee M: FDA approved drugs as potential Ebola treatments [version 1; referees: 1 approved, 1 approved with reservations]. F1000Res. 2015; 4: 48. PubMed Abstract | Publisher Full Text | Free Full Text\n\nEkins S, Southan C, Coffee M: Finding small molecules for the ‘next Ebola’ [version 2; referees: 2 approved]. F1000Res. 2015; 4: 58. PubMed Abstract | Publisher Full Text | Free Full Text\n\nEkins S, Freundlich JS, Clark AM, et al.: Machine learning models identify molecules active against the Ebola virus in vitro [version 2; referees: 2 approved]. F1000Res. 2015; 4: 1091. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLitterman N, Lipinski C, Ekins S: Small molecules with antiviral activity against the Ebola virus [version 1; referees: 2 approved]. F1000Res. 2015; 4: 38. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBaker M: Social media: A network boost. Nature. 2015; 518(7538): 263–5. PubMed Abstract | Publisher Full Text\n\nThung F, Bissyande TF, Lo D: Network Structure of Social Coding in GitHub. In Software Maintenance and Reengineering (CSMR), 2013 17th European Conference on. IEEE. 2013. Publisher Full Text\n\nPerkel JM: Scientific writing: the online cooperative. Nature. 2014; 514(7520): 127–8. PubMed Abstract | Publisher Full Text\n\nLa Clair JJ: Retraction: Total syntheses of hexacyclinol, 5-epi-hexacyclinol, and desoxohexacyclinol unveil an antimalarial prodrug motif. Angew Chem Int Ed Engl. 
2012; 51(47): 11661. PubMed Abstract | Publisher Full Text\n\nWang X, Zhang B, Wang DZ: Reductive and transition-metal-free: oxidation of secondary alcohols by sodium hydride. J Am Chem Soc. 2011; 133(13): 5160. PubMed Abstract | Publisher Full Text"
}
|
[
{
"id": "24792",
"date": "18 Aug 2017",
"name": "Alice Meadows",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis is an interesting paper that is hard to classify, as it doesn't fit neatly into any one category of article. It is a mix of best practice recommendations for and a review of online networking, data sharing, and research activity distribution tools for scientists - a blend of fact and opinion, part editorial and part research paper.\nOn the plus side, this is a very helpful overview and evaluation of many of the ever-increasing tools and services available to help scientists promote and share their research online, from a group of people who understand why and how to use them. Gathering all this information into a single paper is a real service to the community and should, I hope, prompt more scientists to follow the authors' advice, and: \"encourage further exploration to find out what tools work best for you to meet your objectives, and even form the foundation to discover new or other tools not identified in this piece.\"\nHowever, the paper is weakened by the fact that there are a number of points where the authors express their own opinion without providing any evidence to back it up. For example, from the Introduction: \"We believe there are a number of reasons that most scientists do not use these tools. It could be because few people would even think of using online media tools for their scientific research, or because they do not understand the potential value. 
The lack of credit for sharing pre-published data, code or other forms of research outputs, especially in terms of citations that can contribute to career progression, may also be an issue. Maybe scientists do not see this activity as a valuable use of their time or they require initial guidance to help navigate use of these tools.” Or, from Categories of Tools (Networking): \"While LinkedIn has a primary role to form connections and expose your career to potential employers and partners, it is likely the de facto networking tool for many professional scientists.\" These and other unsupported statements are most likely correct, however, they would be more credible if supported by evidence in the form of published research, as is the case for much of the paper.\nBy the authors' own acknowledgement, the article is based largely on their own experiences. They are clearly expert and long-term users and consumers of the tools and services they describe, which makes the information they share very helpful, especially for those who are less expert. But their experience is also, inevitably, limited by their own geography (USA and UK) and specialisms (chemistry). So the challenges of sharing via social media in China, for example, only get a brief mention. And as they point out: \"these are not necessarily the best tools available for every individual scientist’s use case... The most appropriate sites for physicists and biologists to share their data may well be very different.\"\nOverall, I found this a very thorough and helpful contribution to the literature on online tools and services for researchers. It would be even more valuable if it could be updated regularly with information about new tools and services, taking into account the needs of a wider population of researchers by discipline and region.\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? 
Yes\n\nAre all factual statements correct and adequately supported by citations? Partly\n\nAre arguments sufficiently supported by evidence from the published literature? Partly\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Yes",
"responses": []
},
{
"id": "24790",
"date": "24 Aug 2017",
"name": "Matthew R. Hartings",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThe expanding world of on-line dissemination and sharing and conversations surrounding scientific research can be daunting for many scientists. Williams, Peck, and Ekins are certainly qualified and experienced in this area. And their article \"The new alchemy\" will certainly be useful to many as they try to figure out how best to navigate this online environment.\n\nThe greatest benefit of this article is the clear description of what the different on-line platforms are meant to be and what they are best used for. Readers are sure to find the authors' examples and guidance helpful. I am glad that they have written this manuscript and think that F1000 is an ideal place for publication.\n\nThere are two issues that I wish the authors had commented on a little more extensively.\n\nThe first is \"impact.\" I appreciate that the article is meant to lay bare what tools are out there for scientists to use in sharing their research. I also appreciate that \"impact\" is a loaded term and is exceedingly difficult to gauge (and is often done poorly). But, it remains that it is what many scientists focus on. I have personally found that tools like Plum Analytics, ImpactStory and Altmetric are useful in helping me to compare my article-level metrics to those of my peers and to those of other articles published in the same journal. 
What these tools specifically try to measure is the effectiveness of how well you are able to share your content across these online tools that the authors describe. Ultimately, we scientists need a reason to share our work. For selfish reasons, I think that many scientists will find benefit in these \"impact\" tools.\n\nThe second theme that I would have liked the authors to frame slightly differently is \"post-publication peer review.\" The scientists who engage with F1000 do not need to be told the benefits of post-publication peer review. However, it would have been gratifying for me to see the authors list sites that have post-publication review options as a way of sharing research rather than just in the sections on \"potential downsides.\" Science is not static. That sentiment goes for published science as well. Scientists would be wise to welcome and actively engage with post-publication peer review. It is this active engagement that is key for making sharing worthwhile. Sharing for the sake of sharing is only a half-step. If a scientist wants to have lasting effects, fully engaging in the process of sharing is absolutely required.\n\nI applaud the authors for putting this article together. I know that there are many within the scientific community who can benefit from their commentary.\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Yes\n\nAre arguments sufficiently supported by evidence from the published literature? Yes\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Yes",
"responses": []
},
{
"id": "24793",
"date": "30 Aug 2017",
"name": "Stacy R. Konkiel",
"expertise": [
"Reviewer Expertise Altmetrics",
"bibliometrics"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThis opinion piece is a detailed explanation of what the authors believe to be some of the best tools for promoting one's research online.\nThe authors explore many aspects of online engagement, and make a solid case for why the average researcher should care to promote her own work. They are careful to address caveats in their argument (e.g. how disciplinarity or region might affect an author's ability to effectively promote his work on, say, Facebook). They cover a great deal of ground and share many relevant tools that would be valuable to the average researcher.\nIn general, the piece could make a stronger argument for how these tools help one to achieve true impact (rather than just drawing attention to one's work--the Olympicene example is a good illustration of the difference here--or being a prolific author--which is what one is arguably tracking when adding papers to ORCID or Google Scholar).\nMore evidence beyond the authors' own experiences would be helpful to see.\nPositioning arguments in relation to existing resources (e.g. 
the Fast Track Impact Handbook and the Impactstory 30 Day Impact Challenge) can help readers understand the unique importance of this paper's recommendations to their own work.\nThe metaphor of \"new alchemy\" might also be explained in greater depth, and more connections made from it to the specific recommendations, limitations, etc contained in the article.\nThe Conclusion points to the need for an integrated suite of tools, to avoid duplication of effort on the part of researchers, but I'd argue that Kudos gets one part of the way there--it's a one-stop-shop for promoting one's work across a number of social media sites.\nThe \"organizational impact\" section could be cut, as most of the paper focuses upon how individual researchers can manage their own outreach/engagement.\nIt's possible that the piece could be strengthened overall by choosing one area to focus upon (e.g. data sharing OR online networking), rather than trying to cover so much ground.\nThere are points throughout the paper where the authors' opinions are presented as fact. I suggest clarifying.\nThere are also several points where the authors link to blog posts as opposed to peer reviewed research that exists on a topic (e.g. the CostalPathogens post). The paper would be strengthened by a thorough literature review on the topics presented here and supporting arguments being made on the basis of those published papers, in addition to blog and Twitter posts.\nI'll speak specifically to the altmetrics-related content from here, given my area of expertise. Disclaimer: I'm employed by Altmetric.\n\nThe authors describe altmetrics as including citations, views, etc alongside other types of data, but many would argue that altmetrics are distinct from citations and usage statistics, and that cites and usage statistics are better described as article-level metrics. 
To that end, a brief explanation of why article-level metrics are more useful than journal-level metrics like the journal impact factor is probably appropriate here (assuming that the intended reader--a beginner--is not likely to have heard such arguments before).\nThe tweet from Bilder that is used as the basis of claiming certain coverage for altmetrics across the research literature is actually just the coverage provided by one service (Crossref Event Data). While it's still useful as an illustration, that should be clarified in the text.\nIn the sentence where \"altmetric algorithms\" are described, it would be helpful to explain what you mean by \"produces increasingly relevant results\".\nIt would also be helpful to readers to hear more about at least one recommended altmetrics tool that they can use to track attention to their work (e.g. Impactstory profiles or the Altmetric bookmarklet), and how the promotion strategies you recommend affect various types of altmetrics (more examples like the Kudos/NTU study would be good here).\nFinally, the authors should take care to distinguish Altmetric (the company) from altmetrics (the larger data type that's provided by many companies, including Impactstory and Plum Analytics in addition to Altmetric). Specifically, avoid referring to \"altmetric scores\" (too easily confused with the proprietary Altmetric Attention Score) or to the company Altmetric as \"Altmetrics\".\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? Partly\n\nAre all factual statements correct and adequately supported by citations? Partly\n\nAre arguments sufficiently supported by evidence from the published literature? Partly\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-1315
|
https://f1000research.com/articles/6-1303/v1
|
02 Aug 17
|
{
"type": "Research Article",
"title": "Using sheep genomes from diverse U.S. breeds to identify missense variants in genes affecting fecundity",
"authors": [
"Michael P. Heaton",
"Timothy P.L. Smith",
"Bradley A. Freking",
"Aspen M. Workman",
"Gary L. Bennett",
"Jacky K. Carnahan",
"Theodore S. Kalbfleisch",
"Timothy P.L. Smith",
"Bradley A. Freking",
"Aspen M. Workman",
"Gary L. Bennett",
"Jacky K. Carnahan"
],
"abstract": "Background: Access to sheep genome sequences significantly improves the chances of identifying genes that may influence the health, welfare, and productivity of these animals.\n\nMethods: A public, searchable DNA sequence resource for U.S. sheep was created with whole genome sequence (WGS) of 96 rams. The animals shared minimal pedigree relationships and represent nine popular U.S. breeds and a composite line. The genomes are viewable online with the user-friendly Integrated Genome Viewer environment, and may be used to identify and decode gene variants present in U.S. sheep. Results: The genomes had a combined average read depth of 16, and an average WGS genotype scoring rate and accuracy exceeding 99%. The utility of this resource was illustrated by characterizing three genes with 14 known coding variants affecting litter size in global sheep populations: growth and differentiation factor 9 (GDF9), bone morphogenetic protein 15 (BMP15), and bone morphogenetic protein receptor 1B (BMPR1B). In the 96 U.S. rams, nine missense variants encoding 11 protein variants were identified. However, only one was previously reported to affect litter size (GDF9 V371M, Finnsheep). Two missense variants in BMP15 were identified that had not previously been reported: R67Q in Dorset, and L252P in Dorper and White Dorper breeds. Also, two novel missense variants were identified in BMPR1B: M64I in Katahdin, and T345N in Romanov and Finnsheep breeds. Based on the strict conservation of amino acid residues across placental mammals, the four variants encoded by BMP15 and BMPR1B are predicted to interfere with their function. However, preliminary analyses of litter sizes in small samples did not reveal a correlation between variants in BMP15 and BMPR1B and litter size in daughters of these rams. Conclusions: Collectively, this report describes a new resource for discovering protein variants in silico and identifies alleles for further testing of their effects on litter size in U.S. breeds.",
"keywords": [
"Sheep",
"Whole genome sequence",
"GDF9",
"BMP15",
"BMP1RB",
"Fecundity",
"Litter size"
],
"content": "Introduction\n\nThere are currently 48 Mendelian traits and disorders in sheep where the causative variants are known1. Many of these variants affect the gene’s protein sequence, and thereby alter its normal function. Although gene function may be affected by a wide range of large and small scale genomic sequence differences2,3, variants that alter amino acid sequences via missense, nonsense, frameshift, and splice site variants, are among those most likely to affect function4. DNA polymorphisms encoding these protein variants are readily identified by aligning genomic sequences of animals to a high-quality, annotated reference genome assembly like that available for sheep3. Identifying protein variants encoded by individuals in a population is an essential first step in characterizing genes known to influence traits5,6.\n\nIn principle, protein variants may be identified in silico for a gene of interest with access to population-scale whole genome sequence (WGS) data, like that found at the National Center for Biotechnology Information (NCBI) BioProjects and Sequence Read Archives (SRA). The first large ovine BioProject was deposited by the International Sheep Genomics Consortium (ISGC), which included the genome sequences of 75 sheep from 43 breed groups and two wild species from around the world (PRJNA160933). Although global diversity is outstanding in these sheep, these animals are not ideally suited for protein variant discovery across U.S. sheep populations due to their exotic breed composition and low numbers within breed. In addition, the terabyte size of SRA datasets is challenging to work with, and not readily searchable by gene or accessible on the internet with a user-friendly environment, such as the Integrated Genome Viewer (IGV)7,8.\n\nWe previously showed in cattle that protein variants for a gene of interest may be identified in silico with the appropriate population sample and 14x WGS datasets9. 
To that end, we created a similar publicly accessible, 16x WGS resource of 96 rams that is viewable online with IGV. The rams share minimal pedigree relationships, and represent nine popular U.S. breeds and a composite line. Their genomes may be used to identify DNA polymorphisms in genes that affect the protein sequences in U.S. sheep populations. To highlight the utility of this resource, we analyzed three well-studied genes previously shown to encode protein variants affecting litter size in sheep: growth and differentiation factor 9 (GDF9), bone morphogenetic protein 15 (BMP15), and bone morphogenetic protein receptor 1B (BMPR1B). Together, these genes have 14 previously reported missense, nonsense, and frameshift variants that affect protein function, and thereby ovulation rate and litter size10,11.\n\nThe proteins encoded by GDF9 and BMP15 are oocyte-secreted paralogs of the transforming growth factor-beta (TGF-β) superfamily that form homo- and heterodimeric ligands, and are essential for ovarian and follicular development12. These ligands synergistically regulate folliculogenesis through complex interactions with multiple receptors, such as BMPR1B. The BMPR1B gene encodes a type 1 membrane protein receptor that binds GDF9 and BMP15 in some mammals, although the identities of the BMPR1B ligands in sheep are unknown13. The amino acid sequences of GDF9, BMP15, and BMPR1B are highly conserved among placental mammals, and variants that alter key residues can diminish function and affect traits like ovulation rate and litter size. For example, substitution of arginine (R) for glutamine (Q) at position 249 (Q249R) in BMPR1B causes attenuation of BMPR1B signaling and ultimately leads to an increased ovulation rate14,15. Likewise, missense, nonsense, and frameshift variants in GDF9 and BMP15 may abolish function and cause an increase in ovulation rate in carrier ewes, while causing sterility in homozygous ewes10.
However, some homozygous missense variants only diminish the protein’s biological activity. For example, the homozygous substitution of methionine (M) for valine (V) at position 371 (V371M) in GDF9 allows ewes to remain fertile and hyperprolific. Since the types and distribution of protein variants encoded by these genes were unknown in U.S. sheep, we sought to identify them with WGS from the set of 96 U.S. rams.\n\nWe identified nine missense variants and 11 encoded protein variants in the three genes evaluated. Only one variant was previously known to be associated with increased litter size (GDF9, V371M). However, four variants were not previously reported. In BMP15, a Q for R substitution was observed at position 67 (R67Q), and a proline (P) for leucine (L) substitution was observed at position 252 (L252P). In BMPR1B, an isoleucine (I) for M substitution was observed at position 64 (M64I), and an asparagine (N) for threonine (T) substitution at position 345 (T345N). Based on the pattern of evolutionary conservation for these residues in vertebrates, it was hypothesized that some of these novel missense variants could interfere with protein function, affect litter size, and be useful for producers interested in modulating lamb production to match available resources.\n\n\nMethods\n\nThis article contains no studies performed with animal subjects. The archival DNA samples used were collected between the years 2000 and 200616. The reproduction records used were from daughters born between 2001 and 2007. All animal procedures were reviewed and approved by the United States Department of Agriculture (USDA), Agricultural Research Service (ARS), U.S. Meat Animal Research Center (USMARC) Animal Care and Use Committee prior to their implementation (Experiment Number 5438-31000-037-04). Because health status is important for providing purified DNAs to an international community as described here, tissues were collected from healthy sheep, without signs or history of clinical disease.
The source flock’s history of disease surveillance is also relevant when requesting reference samples described in this report. Since first stocking sheep in 1966, USMARC has not had a known case of scrapie. Until 2002, surveillance consisted of monitoring sheep for possible signs of scrapie and submitting brain samples to the USDA Animal and Plant Health Inspection Service (APHIS) National Veterinary Services Laboratory in Ames, IA for testing. All tests have been negative. Since April 2002, USMARC has voluntarily participated in the APHIS Scrapie Flock Certification Program, is in compliance with the National Scrapie Eradication Program, and is certified as scrapie-free. With regards to other transmissible diseases, it is recognized that the USMARC flock of 2000 to 4000 breeding ewes is located in a bluetongue medium incidence area and is known to have some prevalence of contagious ecthyma (sore mouth), foot rot, paratuberculosis (Johne's disease), ovine progressive pneumonia (visna-maedi), and pseudotuberculosis caseous lymphadenitis.\n\nThe purpose of the USMARC Sheep Diversity Panel version 2.4 (MSDPv2.4) was to provide a set of 96 samples for variant allele discovery and frequency estimation in U.S. sheep. Details of the panel design strategy have been published elsewhere16. Briefly, the panel consists of 96 rams from Dorper, White Dorper, Dorset, Finnsheep, Katahdin, Rambouillet, Romanov, Suffolk, and Texel breeds; a composite line (USMARC III: 1/2 Columbia, 1/4 Hampshire, and 1/4 Suffolk17); and one Navajo-Churro ram (Figure 1). In addition to their contributions to the U.S. sheep industry, the breeds were selected to represent genetic diversity for traits such as fertility, prolificacy, maternal ability, growth rate, carcass leanness, wool quality, mature weight, and longevity. The Navajo-Churro ram was included for its rare lysine 171 (K171) substitution in the prion gene. 
The rams sampled from each breed were chosen to minimize their genetic relationships at the grandparent level. DNA samples of all 96 rams have been made available for global use as genotyping reference material since 201016.\n\nThis group of 96 rams was sampled from USMARC and private U.S. flocks.\n\nDNA was extracted from whole blood with a typical phenol:chloroform method and stored at 4°C in 10 mM TrisCl, 1 mM EDTA (pH 8.0) as previously described16. Library preparation for DNA sequencing was also accomplished as previously described9. Briefly, 2 μg of ovine genomic DNA was fragmented and used to make indexed, 500 bp, paired-end libraries. Pooled libraries were sequenced with a massively parallel sequencing machine and high-output kits (NextSeq500, two by 150 paired-end reads, Illumina Inc.). Pooled libraries with compatible indexes were repeatedly sequenced until 40 GB of data with greater than Q20 quality was collected for each ram, thereby producing at least 10-fold mapped read coverage for each index. This level of coverage provides scoring rates and accuracies that exceed 99%9,18. The DNA sequence alignment process was similar to that previously reported18. FASTQ files were aggregated for each animal, and DNA sequences were aligned individually to Oar_v3.1 with the Burrows-Wheeler Alignment tool (BWA) aln algorithm version 0.7.1219, then merged and collated with the bwa sampe command. The resulting sequence alignment map (SAM) files were converted to binary alignment map (BAM) files, and subsequently sorted via SAMtools version 1.3.120. Potential PCR duplicates were marked in the BAM files using the Genome Analysis Toolkit (GATK) version 3.621.
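The alignment steps described above (bwa aln on each read end, bwa sampe to pair them, then SAM-to-BAM conversion, sorting, and indexing) can be sketched as an ordered command list. This is an illustrative reconstruction, not the authors' actual script: the file names (`Oar_v3.1.fa`, `ram_R1.fastq.gz`, and so on) are hypothetical, and duplicate marking and indel realignment with GATK would follow these steps as described in the text.

```python
# Sketch of the per-animal alignment workflow described above.
# Tool names follow the text (BWA 0.7.12, SAMtools 1.3.1); all file
# names are hypothetical placeholders.

def alignment_commands(ref="Oar_v3.1.fa", r1="ram_R1.fastq.gz",
                       r2="ram_R2.fastq.gz", out="ram"):
    """Return the shell commands, in order, for one animal's alignment."""
    return [
        f"bwa aln {ref} {r1} > {out}_1.sai",             # align each read end
        f"bwa aln {ref} {r2} > {out}_2.sai",
        f"bwa sampe {ref} {out}_1.sai {out}_2.sai {r1} {r2} > {out}.sam",
        f"samtools view -bS {out}.sam > {out}.bam",      # SAM -> BAM
        f"samtools sort -o {out}.sorted.bam {out}.bam",  # coordinate sort
        f"samtools index {out}.sorted.bam",              # index for IGV viewing
    ]

for cmd in alignment_commands():
    print(cmd)
```

Each command consumes the previous command's output, so running them in order for one animal yields the sorted, indexed BAM that IGV can load.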
Regions in the mapped dataset that would benefit from realignment due to small indels were identified with the GATK module RealignerTargetCreator, and realigned using the module IndelRealigner. The BAM files produced at each of these steps were indexed using SAMtools. The resulting indexed BAM files were made immediately available via the Intrepid Bioinformatics genome browser with groups of animals linked at the USDA, ARS, USMARC internet site.\n\nThe raw reads were deposited at NCBI BioProject PRJNA324837. Mapped datasets for each animal were individually genotyped with the GATK UnifiedGenotyper with arguments “--alleles” set to the VCF file (Supplementary File S1), “--genotyping_mode” set to “GENOTYPE_GIVEN_ALLELES”, and “--output_mode” set to “EMIT_ALL_SITES”. Lastly, some SNP variants were identified manually by inspecting the target sequence with IGV software version 2.1.287,8 (described below in Methods section entitled ‘Identifying protein variants encoded by GDF9, BMP15, and BMPR1B genes’). In these cases, read depth, allele count, allele position in the read, and quality score were taken into account when the manual genotype determination was made.\n\nGenotypes from a set of 163 reference SNPs were used as an initial verification of the WGS datasets. These DNA markers have been used for parentage determination, animal identification, and disease traceback22. The 163 reference SNPs were previously genotyped across the MSDPv2.4 by multiple overlapping PCR-Sanger sequencing reactions, multiplexed matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) genotyping assays, and 50 k bead array platforms22. The genotype call rate was defined as the number of SNP sites with three or more mapped reads, divided by the total number of sites tested. The error rate in the WGS data was estimated by comparing the independently-derived consensus genotypes for these SNPs to the WGS genotypes. 
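The two verification metrics defined above (call rate as the fraction of sites with three or more mapped reads, and accuracy as agreement with the independently derived consensus genotypes) can be sketched with a few invented sites; the genotypes below are made up for illustration.

```python
# Minimal sketch of the WGS verification metrics described above,
# using invented genotypes for five of the 163 reference SNP sites.
# Call rate: fraction of sites with >= 3 mapped reads.
# Accuracy: fraction of called WGS genotypes matching the consensus.

sites = {
    # site_id: (mapped_read_depth, wgs_genotype, consensus_genotype)
    "SNP01": (18, "A/G", "A/G"),
    "SNP02": (11, "C/C", "C/C"),
    "SNP03": (2,  None,  "G/T"),   # < 3 reads: no call made at this site
    "SNP04": (15, "T/T", "C/T"),   # heterozygote missed at low-ish depth
    "SNP05": (22, "G/G", "G/G"),
}

called = {k: v for k, v in sites.items() if v[0] >= 3}
call_rate = len(called) / len(sites)
accuracy = sum(wgs == ref for _, wgs, ref in called.values()) / len(called)

print(f"call rate = {call_rate:.0%}, accuracy = {accuracy:.0%}")
```

The toy SNP04 row also illustrates the error mode noted in the Results: the typical WGS genotype error is an undetected heterozygous allele.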
An animal’s WGS dataset passed initial verification when the accuracy of the WGS genotypes exceeded 97%, and the average mapped read depth was proportional to the amount of WGS data collected. Animals’ datasets that failed this initial verification were inspected for contaminating and/or missing files. Once identified, the dataset was corrected and reprocessed. Linear regression analysis was accomplished in Excel version 2016. Access to the sequence was made available via USDA, ARS, USMARC internet site. Because the raw datasets were available online as they were produced, the raw FASTQ files were deposited in the NCBI SRA only after they were validated as described above. These 96 sets of files may be accessed through BioProject PRJNA324837 in the Project Data table under the Resource Name: SRA Experiments.\n\nSNPs from the OvineSNP50 BeadChip (Illumina Inc.) were selected for comparison because they were numerous, uniformly distributed across the ovine genome, and available. Based on the nucleotide sequence of the 54,242 probes obtained from the manufacturer, the positions of 51,796 SNPs were verified via a BLAT process, as previously described18. There were 50,357 of these that mapped uniquely to autosomes and were used for analysis (Supplementary File S1). The genotypes from the WGS data were compared to those from the 50 k bead array with a custom program written specifically for this operation.\n\nThe nucleotide variation in the exon regions of GDF9, BMP15, and BMPR1B was visualized through the public access portal at ARS USMARC with open source software installed on a laptop computer. Variants were recorded manually in a spreadsheet as previously described9. Briefly, a Java Runtime Environment version 8, update 131 (Oracle Corporation, Redwood Shores, CA) was first installed on the computer. 
When links to the data were selected from the appropriate web page, IGV software version 2.1.287,8 automatically loaded from a third-party site (University of Louisville, Louisville KY) and the mapped reads were loaded in the context of the ovine Oar_v3.1 reference genome assembly. Gene variants were viewed by loading WGS from a set of eight animals of different breeds, and the IGV browser was directed to the appropriate genome region by entering the gene abbreviation in the search field (e.g., GDF9). The IGV zoom function was used to view the first exon at nucleotide resolution with the “Show translation” option selected in IGV. Since GDF9 was in the reverse orientation with regards to the Oar_v3.1 assembly, the reference sequence was reversed so the translation was correctly viewed from right to left. The exon sequences were visually scanned for polymorphisms that would alter amino acid sequences, such as missense, nonsense, frameshift, and splice site variants. Once identified, the nucleotide position corresponding to a protein variant was viewed and recorded for all 96 animals. Using IGV, codon tables, and knowledge of the ovine GDF9, BMP15, and BMPR1B protein sequences (NP_001136360.2, NP_001108239.1, and NP_001009431.1, respectively), the codons affected by nucleotide alleles were translated into their corresponding amino acids and their Oar_v3.1 positions noted. Haplotype-phased protein variants were unambiguously assigned in individuals that were either: 1) homozygous for all variant sites, or 2) had exactly one heterozygous variant site. Maximum parsimony phylogenetic trees were manually constructed from the unambiguously phased protein variants. The phylogenetic trees were used, together with simple maximum parsimony assumptions, to infer haplotype phase in seven rams where two heterozygous variant sites occurred in GDF9. The protein phylogenetic trees were rooted by comparing the variable residues in sheep to those from related species. 
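The codon-table step used above (translating an affected codon into a "V371M"-style amino acid change) can be sketched with the standard genetic code. The `missense_label` helper is hypothetical, and the GTG → ATG codon change shown is inferred, since V371M as a single-nucleotide missense change implies a G→A substitution in a GTG valine codon.

```python
# Sketch of converting a variant codon into the amino-acid-change
# notation used above. The codon table is the standard genetic code,
# built with the conventional TCAG ordering ('*' marks stop codons).

bases = "TCAG"
amino_acids = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
codons = [a + b + c for a in bases for b in bases for c in bases]
codon_table = dict(zip(codons, amino_acids))

def missense_label(ref_codon, alt_codon, position):
    """Return e.g. 'V371M', or None for a synonymous change."""
    ref_aa, alt_aa = codon_table[ref_codon], codon_table[alt_codon]
    return None if ref_aa == alt_aa else f"{ref_aa}{position}{alt_aa}"

print(missense_label("GTG", "ATG", 371))  # the GDF9 change discussed above
```

A synonymous change (e.g. CTG → TTG, both leucine) returns `None`, matching the paper's focus on variants that alter the protein sequence.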
Ovine peptide sequences for GDF9, BMP15, and BMPR1B were used to search NCBI's refseq_protein database with BLASTP 2.6.123,24. Aligned protein sequences from a representative subset of 29 vertebrate species were used for the comparison.\n\nLambing records for daughters of carrier rams were retrieved from the USMARC historical database and analyzed with the mixed-model analysis of variance procedure (MIXED) of SAS (SAS Inst., Inc., Cary, NC; version 9.3). The phenotype evaluated was total number of lambs born (including stillborn) as a repeated record for each ewe. Different sets of ewes contributed to the analysis of each gene locus, and breed-specific genotype contrasts were evaluated. There were, however, similar models employed for all of the analyses. The models included fixed effects of classification for ewe age, and the sire-derived genotype class for the allele contrast in question. Three groups were created for ewe age to combine similar biologically performing ages: Group 1, ewe lambs; Group 2, ewes aged 2–5 years; and Group 3, ewes older than 5 years. The random effect of “ewe” was fitted and used to test the genotype contrast mean square. The Kenward-Roger option was used to approximate denominator degrees of freedom associated with the random effect of “ewe”. For analysis of the X-linked BMP15 allele contrasts, the sire-derived gamete in these daughters was known directly. For analyses of autosomal genotype contrasts, it was inferred that rams of different genotypes had different distributions of daughter genotypes sampled. This inference reduced the power of analysis compared to a direct allelic test because we cannot determine the maternal-derived allele.\n\n\nResults\n\nThe average amount of genomic DNA sequence collected per animal was 50.4 GB (range 40.0 - 97.7, SD 10.4). 
Independently-derived genotypes from two sets of reference SNPs were used to confirm the identity and evaluate the quality of these data: those from 163 parentage markers, and those from approximately 50,000 markers on the OvineSNP50 bead array. Both sets contain SNPs that are well distributed and highly informative, and both have been widely used. The WGS-derived genotypes for the 163 parentage SNPs were obtained by manually viewing an animal’s mapped reads at the relevant genome coordinates via the internet and third-party software (illustrated in Figure 2A, and described in Methods). The expected genotypes and read depths were consistent for all but one of the 96 datasets, owing to missing data for that animal. After rectifying the data omission and performing regression analysis of the data for all 96 rams, the average calculated read depth (17.0) was directly proportional to the amount of sequence collected for each animal (range 11.9 - 33.9, SD 3.6; Figure 2B).\n\nThe genotype call rate for the 163 parentage markers was 99.7% when WGS data was used, i.e. 47 missing of 15,159 possible. Most of the missing genotypes (32) were attributed to a single SNP site (DU191809, chr1:187087905). The source of the difficulty appeared to be a misassembly of Oar_v3.1 in that region, leading to a mismapping of reads; this site averaged only 3.5 reads per animal. The overall accuracy of WGS genotypes for the 163 reference SNPs was 99.4%, and no animals had a SNP genotype accuracy less than 97% (i.e., not more than 4 errors in 163 SNP genotypes; Figure 2C). The few WGS genotype errors observed were typically caused by undetected heterozygous alleles at sites with low read coverage.
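The proportionality check between read depth and the amount of Q20 data collected amounts to an ordinary least-squares fit; a sketch follows. The (GB, depth) pairs are invented for illustration and are not the study's measurements, though they are scaled to resemble the ranges reported above.

```python
# Least-squares sketch of the read depth vs. data volume relationship
# described above. The (GB, read depth) pairs are invented values, not
# the actual measurements for the 96 rams.

pairs = [(40.0, 13.4), (45.0, 15.2), (50.4, 17.0), (60.0, 20.1), (97.7, 33.1)]

n = len(pairs)
mean_x = sum(x for x, _ in pairs) / n
mean_y = sum(y for _, y in pairs) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in pairs)
         / sum((x - mean_x) ** 2 for x, _ in pairs))
intercept = mean_y - slope * mean_x

print(f"depth = {slope:.3f} * GB + {intercept:.2f}")
```

An intercept near zero and a positive slope are what "directly proportional" implies: depth scales linearly with data collected, so an animal's mapped coverage can be predicted from its sequencing yield.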
Thus, comparing genotypes from 163 reference SNPs to those derived from the WGS file sets was effective for discovering and repairing errors, and independently verifying coverage.\n\nThe coverage and integrity of the WGS datasets were also evaluated at 50,357 evenly distributed, autosomal SNP sites from bead array data25. When plotted as a distribution of read depths by SNPs for all animals combined, the read depth was normally distributed with a mode near 16 (Figure 3A). The calculated average read depth per SNP per animal was 16.8 for the 50 k bead array SNPs (Min 11.7, Max 34.2, SD 3.5), compared to 17.0 for the 163 reference SNPs above. Averaged over all animals, the concordance between WGS genotypes and those from the bead array was 99.5% (Figure 3B) compared to 99.4% for the 163 reference SNPs. The genotype concordance reached a maximum at approximately 99.89% for the animal with the highest read depth (34.2-fold, 97.7 GB Q20 data). Taken together, the WGS genotype results for the 163 reference SNPs were consistent with those for the 50 k bead array SNPs and indicated that the WGS datasets from these 96 rams are of sufficient quality and coverage for use in identifying and decoding gene variants in U.S. sheep.\n\n(A) Computer screen image of one animal’s WGS data aligned to ovine reference assembly Oar_v3.1 at a reference SNP site. The heterozygous C/T genotype is shown as viewed with the IGV software7,8. (B) Linear relationship between mapped read depth and the amount (Gb) of Q20 WGS data collected. At each SNP position, the read depth and genotypes were visualized and manually recorded for 163 parentage SNPs. (C) Genotype scoring accuracy for 163 parentage SNPs in 96 sires. Consensus reference genotypes (n = 15,684) for the parentage SNPs were previously determined by multiple methods22.\n\n(A) The distribution of average WGS read depth across 45,946 SNP sites for 96 sires combined.
(B) A comparison of the average WGS read depth per animal to the average genotype concordance between 45,946 WGS and bead array genotypes.\n\nThe WGS data for the 96 rams were used to analyze the coding regions of GDF9, BMP15, and BMPR1B. These genes encode proteins of 453, 393, and 502 amino acids, respectively, each with multiple functional domains (Figure 4A). Viewing the aligned sequences and detecting variants was simple, fast, and accurate with the IGV software and a publicly available web-based browser developed for this purpose (Figure S1, Table S1). Nine missense variants were observed in the three genes with the 96 genomes (Table 1). Four of the nine variants were not previously reported: BMP15 (R67Q, L252P) and BMPR1B (M64I, T345N). No other missense, nonsense, frameshift, splice site, or indel variants affecting the coding region were detected. A comparative list of the coding variants discovered here is given in Table 2, together with those previously reported for the three genes. Eleven protein sequence isoforms were predicted from phased combinations of codon variants (Table 3). Haplotypes were translated and placed in the context of a phylogenetic tree for predicted variants for GDF9, BMP15, and BMPR1B (Figure 4B). The trees were rooted based on the pattern of evolutionary conservation of the residues in vertebrates (Figure 5). All four of the previously unreported protein variants encoded by BMP15 and BMPR1B were on the distal nodes of their respective tree, indicating they arose after those on adjacent nodes. The previously reported GDF9 V371M variant was present in our reference panel only in Finnsheep (Table 4). Alleles encoding the M371 residue are associated with increased litter size in both carriers and homozygous individuals (Table 2). The novel BMP15 R67Q and L252P variants were confined to the Dorset and Dorper breed groups of our reference panel, respectively. 
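The minor allele frequencies footnoted in Table 1 follow from simple genotype counts. A sketch, with invented counts chosen to match the heterozygote-only case described for GDF9 V371M in Finnsheep (five carriers among ten rams), is:

```python
# Sketch of the minor allele frequency (MAF) computation behind
# Table 1. The genotype counts are invented, but the example mirrors
# the GDF9 V371M case described for the ten Finnsheep rams: five
# non-carriers and five heterozygous carriers.

def minor_allele_frequency(hom_ref, het, hom_alt):
    """MAF from diploid genotype counts at an autosomal site."""
    total_alleles = 2 * (hom_ref + het + hom_alt)
    alt = het + 2 * hom_alt
    freq = alt / total_alleles
    return min(freq, 1.0 - freq)

print(minor_allele_frequency(hom_ref=5, het=5, hom_alt=0))
```

Five heterozygotes contribute five alternate alleles out of twenty, giving a frequency of 0.25 for this toy case.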
The novel BMPR1B M64I variant was only present in the Katahdin breed group, while the novel BMPR1B N345 variant was observed in both Romanov and Finnsheep breed groups.\n\n(A) Physical maps of GDF9, BMP15, and BMPR1B exon and protein domains in relationship to missense variants. (B) Maximum parsimony phylogenetic trees of haplotype-phased protein variants identified in the sheep diversity panel. For each gene analyzed, the most frequent protein isoform was defined as “variant 1” and used as the reference sequence for each tree. Each node in a tree represents a different protein isoform that varies by one amino acid compared to adjacent nodes. The areas of the circles are proportional to the variant frequency in the panel of 96 rams. The trees were rooted based on evolutionary conservation of residues in closely related species. The predicted root of GDF9 was not observed in the 96 rams.\n\naAll variants and sequences are oriented from the sense strand perspective. However, GDF9, BMP15, and BMPR1B are oriented in the opposite direction with regards to the Oar_v3.1 reference assembly.
Alphabetical abbreviations for relevant amino acids: E, glutamate; H, histidine; I, isoleucine; K, lysine; L, leucine; M, methionine; N, asparagine; P, proline; Q, glutamine; R, arginine; T, threonine; and V, valine.\n\nbProtein domain abbreviations: PRO, propeptide; MAT, mature peptide; AR, activin receptor domain; and AS-AL, between the active site proton acceptor and the activation loop domains.\n\ncIUPAC/IUBMB ambiguity codes used for nucleotides: R = a/g, Y = c/t, M = a/c, K = g/t, S = c/g, W = a/t35.\n\ndThe major allele is listed first.\n\neMinor allele frequency in MSDPv2.4.\n\nfThe L11ΔL variant is an abbreviation for p.(Leu10_11delinsLeu), the recommended nomenclature for this variant by the Human Genome Variation Society36.\n\naBold font indicates previously unreported variants affecting the protein sequence.\n\naThe bolded residues are those differing from “variant 1” in each gene.\n\nbThe protein variant frequency.\n\naThe variants correspond to those shown in Figure 4. The distinctive missense variant or reference isoform is indicated in parentheses.\n\nbGDF9 protein “variant 4\" contains the M371 amino acid previously associated with litter size in Finnish landrace sheep11,26,37.\n\ncBMP15 protein “variants 3 and 4\" contain the previously unreported P252, and Q67 residues, respectively.\n\ndBMPR1B protein “variants 2 and 3” contain the previously unreported I64 and N345 missense variants, respectively.\n\neHyphen indicates the variant was not detected in that group.\n\nAligned protein sequences from a representative subset of 29 vertebrate species were compared. 
Abbreviations and symbols are as follows: TMRCA, estimated time to most recent common ancestor in millions of years48; letters, IUPAC/IUBMB codes for amino acids; dot, amino acid residues identical to those in sheep “variant 1”; triangle, net deletion of one leucine residue in BMP15 positions 10 and 11 where two leucine residues are commonly present; ni, a fourth protein variant was not identified for BMPR1B; nr, not in refseq_protein database and thus residues were determined by analyzing WGS data; dash, not enough sequence similarity for comparison or missing polypeptide region; nm, did not match a refseq_protein in the database for that species.\n\nAn analysis of amino acid sequence conservation among species helped identify critical residues more likely to be involved in important protein functions. Ovine GDF9, BMP15 and BMPR1B were 80, 88, and 99% identical to other Artiodactyla species, at the propeptide sequence level (Figure 5). We predict that variant residues in highly conserved protein domains are more likely to affect ovulation rate and litter size. The most well conserved residue in GDF9 showing missense variation among sheep breeds is V371, located in the TGF-β-like domain. Throughout Eutheria, sheep were the only species observed to have the M371 residue encoded by GDF9 (Figure 5). This is consistent with the substitution of the M371 residue causing reduced protein function, and therefore increased litter size in Finnish Landrace sheep (Table 2). Less conserved were the GDF9 residues at positions 87, 241, and 332, which are variable throughout Eutheria species and have not been associated with fecundity in sheep. With regards to missense variants in the other TGF-β ligand, BMP15 residues at positions 11, 67, and 252, were conserved through most of the Laurasiatheria, although the L11 deletion variant is common in sheep and has not been associated with fecundity (Table 2). 
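Percent identity figures like those quoted above come from pairwise comparison of aligned residues, with gap positions excluded. A minimal sketch, using short invented peptide fragments rather than the actual GDF9/BMP15/BMPR1B alignments, is:

```python
# Sketch of the percent-identity calculation behind the cross-species
# comparisons above. The two aligned fragments are invented examples;
# '-' marks an alignment gap, which is excluded from the comparison.

def percent_identity(seq_a, seq_b):
    """Identity over aligned, ungapped positions of two equal-length strings."""
    pairs = [(a, b) for a, b in zip(seq_a, seq_b) if a != "-" and b != "-"]
    return 100.0 * sum(a == b for a, b in pairs) / len(pairs)

sheep_frag = "MVLLSILRILF-AVDMG"   # hypothetical ovine fragment
other_frag = "MVLLSALRLLFQAVD-G"   # hypothetical fragment from another species
print(f"{percent_identity(sheep_frag, other_frag):.1f}% identical")
```

Applied column by column over a full multiple alignment, the same count of matches over comparable positions yields the per-species identity percentages reported for the three proteins.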
Since Q67 and P252 substitutions in BMP15 have not been previously reported, their impact on protein function or reproductive phenotype has yet to be determined.\n\nConservation in the TGF-β ligand receptor, BMPR1B, is particularly striking, with 98% propeptide identity observed throughout Eutheria species, compared to approximately 76% and 77% for GDF9 and BMP15, respectively. Moreover, BMPR1B residues at positions 64 and 345 are also conserved throughout the Eutheria, suggesting that the I64 and the N345 substitutions in sheep may affect protein function. The I64 substitution in Katahdin sheep is in the extracellular activin receptor domain, whereas the N345 substitution in Romanov sheep is between the active site proton acceptor domain and the activation loop of the cytoplasmic domain (Figure 4A). Although intriguing, the potential effects of the observed substitutions encoded by GDF9, BMP15 and BMPR1B in U.S. sheep are unknown.\n\nThe potential effects of the observed GDF9, BMP15 and BMPR1B variants on reproductive phenotypes were examined by analyzing lambing records from daughters of the rams sequenced in this project. There were no database records for daughters of the five Finnsheep rams carrying the GDF9 allele encoding the M371 residue (i.e., “Variant 4”, Table S2). There were, however, records for 403 daughters sired by eight rams with at least one of the four BMP15 or BMPR1B variants. Together, the eight rams sired 480 lambs in various flocks in seven years, although not all variant genotypes were frequent in these rams (Table S3–Table S6). Analyses of these data did not reveal a significant correlation between litter size and any of the four BMP15 or BMPR1B variants (95% confidence interval). However, this simple test for association lacked power, and could only detect litter size effects.
It remains possible that a well-designed, prospective genetic study may detect biologically and economically relevant differences associated with these variants of highly-conserved residues in developmentally important genes.\n\n\nDiscussion\n\nWe created a searchable and publicly viewable online genomics resource consisting of 96 individuals representing a broad cross section of U.S. sheep breeds, and demonstrated its use for identifying protein variants. The DNA for these 96 rams, together with their 95 tetrad families, is also available for confirming segregation of alleles identified in the WGS16. A minimum of 40 GB of short read, paired-end DNA sequence data provided at least 11-fold mapped genome coverage for each animal. The aligned sequences were made available for downloading or viewing online with customized IGV visualization software that supports accurate manual assessment of gene-specific genetic variation. The average coverage of the sheep diversity panel was 16.8-fold and resulted in an average genotype accuracy of approximately 99.5%. These numbers were consistent with previous results obtained with 96 beef bulls9. This online resource provides the ability to readily inspect gene variants reported in one breed, evaluate them in other breeds, and search for any additional variants that may affect protein structure. The ability to identify the full range of protein variants in a population is critical for designing studies intended to test a candidate gene’s influence on a trait.\n\nThe web-based platform worked well for analyzing three ovine genes with previously documented missense variants affecting ovulation rate and litter size. In a matter of hours, each gene was evaluated for any obvious coding variants, scored in the group of 96 rams, and compared to the previously known variants. Of the 14 known causative variants affecting litter size in sheep, only one was observed in the 96 U.S. rams, and only in the Finnsheep breed (GDF9 V371M).
This is consistent with reports that the highly prolific Finnish Landrace sheep are the source of the V371M variant11,26. With regards to U.S. Finnsheep, the frequency of the GDF9 V371M variant was 0.25, with five of the 10 rams having zero copies of the V371M variant. Since ewes homozygous for the M371 variant are known to be fertile, there is a good opportunity for breeders to modulate the frequency of the GDF9 V371M variant in their purebred Finnsheep flocks, and thereby attain a more optimal litter size for their ewes.\n\nThe WGS analysis also revealed four previously unreported missense variants: BMP15 R67Q and L252P; BMPR1B M64I and T345N. Although our preliminary tests for association between variants and litter size did not detect a significant difference, the evidence for dismissing these candidates is not compelling due to the limited number of sires with the variant allele. In spite of having no direct evidence of phenotypic effects associated with these alleles, analysis of the evolutionary conservation of residues at variant sites, their locations within the protein domains, and the effects on ovulation in other species has provided some insight. For example, the BMP15 R67Q variant found in Dorset was the least conserved, and predicted to be the least likely to affect function among placental mammals. Since the Q67 residue is present in several other Eutheria, and is not part of the mature BMP15 peptide ligand, its occurrence would seem to be a functional evolutionary option (Figure 5). In humans, the equivalent variant (R68Q) was reported in the 1000 Genomes Project with no apparent disease effect noted (rs782187019)27. However, a tryptophan (W) substitution at this same position in humans causes premature ovarian failure and primary ovarian insufficiency (i.e., R68W)28.
Thus, some substitutions at this position may cause loss of function in some mammals, but it appears as though Q67 may not be one of them.\n\nUnlike the R67Q variant, the P252 residue encoded by the BMP15 L252P variant was not observed in any other vertebrate species, and L252 was strictly conserved throughout the Laurasiatheria. The P252 residue does not appear in the mature BMP15 peptide; however, it is plausible that the non-conservative substitution of P252 for L252 could interfere with post-translational processing of the mature peptide. In primate species, M253 is the residue equivalent to ovine L252, and healthy human individuals represented in the 1000 Genomes Project have rare heterozygous substitutions of V253 and T253 with no pathology reported. Because alleles with the P252 residue were present at a high frequency (1.0 in four White Dorper rams), it is unlikely that the homozygous state causes sterility in ewes. However, the possibility remains that the P252 residue may decrease function, and that two copies of a slightly less functional BMP15 may increase the ovulation rate and litter size.\n\nIn contrast to the numerous missense variants encoded by the ovine GDF9 and BMP15 genes, there has been only one missense variant identified in the receptor gene, BMPR1B (Q249R). This variant was first discovered in Booroola Merino sheep14,29, and subsequently reported in Garole30, Javanese30, Chhotanagpuri31, Iranian Kalehkoohi32, small-tailed Han33, Hu and Chinese Merino34 sheep. In the present report, we did not observe the Q249R variant in any of the WGS from 96 U.S. sheep. Rather, two previously unrecognized BMPR1B variants were identified: M64I and T345N. The M64I variant was present in two of eight Katahdin rams (including a homozygote), and two of 17 composite rams containing Suffolk, Columbia and Hampshire germplasm.
The I64 substitution was not present in other vertebrate protein sequences, and M64 was conserved throughout the Theria with the notable exceptions of humans, manatees, and armadillos. No variants have been reported in the 1000 Genomes Project for the equivalent position in humans. The M64I variant is positioned in the extracellular activin receptor domain, whose function is to bind ligands for receptor activation. It is plausible that the enhanced fertility and prolificacy for which the Katahdin breed is known are conferred in part by this variant.\n\nThe second BMPR1B variant, T345N, is located inside the cell between two closely spaced active site domains and was present in three of ten Romanov rams (including a homozygote), and one of ten Finnsheep rams. The T345 residue is conserved throughout Tetrapoda species and N345 was not found in any Vertebrata species. A search for human variants in the 1000 Genomes Project revealed only a rare S345 substitution with no pathology reported. Based on the location of the T345 variant near the active site, its strict evolutionary conservation in vertebrates, and that it was found in the two most prolific U.S. breeds, we hypothesize that the N345 residue diminishes the function of the BMPR1B receptor and may influence ovulation and litter size. The BMPR1B T345N variant thus represents a high-priority candidate allele for validation studies in these breeds. If any of these newly discovered variants are confirmed to be associated with litter size, DNA-based tests for them could be incorporated into existing genetic testing platforms and used to select for important traits and manage production.
Since the number of lambs produced per ewe per year is of fundamental economic importance to sheep production regardless of the production system, these types of DNA tests would be helpful for producers interested in modulating lamb production to match available resources and maintain long-term sustainability.\n\n\nConclusion\n\nIn summary, the WGS resources described here are suitable for use in identifying and decoding gene variants in the vast majority of U.S. sheep. When applied to GDF9, BMP15 and BMPR1B genes, the findings suggest there may be variants circulating in the U.S. that could be further evaluated for potential use to increase litter size in U.S. breeds. These resources, including the web interface, underlying sequence data, and the associated information are available to researchers, companies, veterinarians, and producers for use without restriction.\n\n\nData availability\n\nValidated sheep FASTQ files are available in the NCBI SRA under accession numbers SRX2185832-SRX2185868; SRX2185872-SRX2185977; SRX2186010-SRX2186189; SRX2186191-SRX2186294; SRX2186381-SRX2186766; SRX2186768-SRX2186784; SRX2186786-SRX2186798; SRX2186800-SRX2186879.\n\nThe data have also been deposited with links to BioProject accession number PRJNA324837 in the NCBI Bio-Project database.\n\nIn addition, access to the aligned sequences is available via USDA internet site: http://www.ars.usda.gov/Services/Docs.htm?docid=25585. Download access to the BAM files is available at the Intrepid Bioinformatics site:\n\nhttp://server1.intrepidbio.com/FeatureBrowser/customlist/record?listid=7918711123\n\nLambing records for daughters of carrier rams were retrieved from the USMARC historical database, which is not accessible to the public. Table S3–Table S6 provide summary data from these records, which is adequate for the reproducibility and re-analysis purposes of this article.",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nFunding for this research was provided by the USDA, ARS appropriated projects 5438-32000-029-00D (MPH) and 5438-31320-012-00D (TPLS).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nWe thank the USMARC Core Facility staff for outstanding technical assistance, and J. Watts for secretarial support. We thank Dr. K. Leymaster for assistance in developing the sheep diversity panel and thoughtful discussions and improvements to the manuscript. This work was conducted in part using the resources of the University of Louisville’s research computing group and the Cardinal Research Cluster, and we thank Mr. H. Simrall for his assistance.\n\nMention of trade names or commercial products in this publication is solely for the purpose of providing specific information and does not imply recommendation or endorsement by the USDA. The USDA is an equal opportunity provider and employer.\n\n\nSupplementary material\n\nTable S1. GDF9, BMP15, and BMPR1B genotypes recorded manually from WGS reads mapped to OAR_v3.1 assembly for the USMARC Sheep Diversity Panel v2.4.\n\nClick here to access the data.\n\nTable S2. Haplotype-phased genotypes (diplotypes) for GDF9, BMP15, and BMPR1B genes in the MSDPv2.4.\n\nClick here to access the data.\n\nTable S3. Effect of sire copies of BMP15 P252 alleles (“Variant 3”) on daughter litter size.\n\nClick here to access the data.\n\nTable S4. Effect of sire copies of BMP15 Q67 alleles (“Variant 4”) on daughter litter size.\n\nClick here to access the data.\n\nTable S5. Effect of sire copies of BMPR1B I64 alleles (“Variant 2”) on daughter litter size.\n\nClick here to access the data.\n\nTable S6. Effect of sire copies of BMPR1B N345 alleles (“Variant 3”) on daughter litter size.\n\nClick here to access the data.\n\nFigure S1. 
Screen image of Integrated Genome Viewer (IGV) software displaying GDF9 V332I genotype data for eight sheep.\n\nClick here to access the data.\n\nSupplementary File 1. VCF file of 50,357 SNP variants used in comparing WGS genotypes to those from the OvineSNP50 bead array.\n\nClick here to access the data.\n\n\nReferences\n\nNicholas FW, Hobbs M: Mutation discovery for Mendelian traits in non-laboratory animals: a review of achievements up to 2012. Anim Genet. 2014; 45(2): 157–70. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBickhart DM, Liu GE: The challenges and importance of structural variation detection in livestock. Front Genet. 2014; 5: 37. PubMed Abstract | Publisher Full Text | Free Full Text\n\nJiang Y, Xie M, Chen W, et al.: The sheep genome illuminates biology of the rumen and lipid metabolism. Science. 2014; 344(6188): 1168–73. PubMed Abstract | Publisher Full Text | Free Full Text\n\n1000 Genomes Project Consortium, Abecasis GR, Altshuler D, et al.: A map of human genome variation from population-scale sequencing. Nature. 2010; 467(7319): 1061–73. PubMed Abstract | Publisher Full Text | Free Full Text\n\nJordan DM, Ramensky VE, Sunyaev SR: Human allelic variation: perspective from protein function, structure, and evolution. Curr Opin Struct Biol. 2010; 20(3): 342–50. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMacArthur DG, Manolio TA, Dimmock DP, et al.: Guidelines for investigating causality of sequence variants in human disease. Nature. 2014; 508(7497): 469–76. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRobinson JT, Thorvaldsdóttir H, Winckler W, et al.: Integrative genomics viewer. Nat Biotechnol. 2011; 29(1): 24–6. PubMed Abstract | Publisher Full Text | Free Full Text\n\nThorvaldsdóttir H, Robinson JT, Mesirov JP: Integrative Genomics Viewer (IGV): high-performance genomics data visualization and exploration. Brief Bioinform. 2013; 14(2): 178–92. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nHeaton MP, Smith TP, Carnahan JK, et al.: Using diverse U.S. beef cattle genomes to identify missense mutations in EPAS1, a gene associated with pulmonary hypertension [version 2; referees: 2 approved]. F1000Res. 2016; 5: 2003. PubMed Abstract | Publisher Full Text | Free Full Text\n\nJuengel JL, Davis GH, McNatty KP: Using sheep lines with mutations in single genes to better understand ovarian function. Reproduction. 2013; 146(4): R111–23. PubMed Abstract | Publisher Full Text\n\nMullen MP, Hanrahan JP: Direct evidence on the contribution of a missense mutation in GDF9 to variation in ovulation rate of Finnsheep. PLoS One. 2014; 9(4): e95251. PubMed Abstract | Publisher Full Text | Free Full Text\n\nde Castro FC, Cruz MH, Leal CL: Role of Growth Differentiation Factor 9 and Bone Morphogenetic Protein 15 in Ovarian Function and Their Importance in Mammalian Female Fertility - A Review. Asian-Australas J Anim Sci. 2016; 29(8): 1065–74. PubMed Abstract | Publisher Full Text | Free Full Text\n\nReader KL, Haydon LJ, Littlejohn RP, et al.: Booroola BMPR1B mutation alters early follicular development and oocyte ultrastructure in sheep. Reprod Fertil Dev. 2012; 24(2): 353–61. PubMed Abstract | Publisher Full Text\n\nMulsant P, Lecerf F, Fabre S, et al.: Mutation in bone morphogenetic protein receptor-IB is associated with increased ovulation rate in Booroola Mérino ewes. Proc Natl Acad Sci U S A. 2001; 98(9): 5104–9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRegan SL, McFarlane JR, O'Shea T, et al.: Flow cytometric analysis of FSHR, BMRR1B, LHR and apoptosis in granulosa cells and ovulation rate in merino sheep. Reproduction. 2015; 150(2): 151–63. PubMed Abstract | Publisher Full Text\n\nHeaton MP, Leymaster KA, Kalbfleisch TS, et al.: Ovine reference materials and assays for prion genetic testing. BMC Vet Res. 2010; 6: 23. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nLeymaster KA: Straightbred comparison of a composite population and the Suffolk breed for performance traits of sheep. J Anim Sci. 1991; 69(3): 993–9. PubMed Abstract | Publisher Full Text\n\nKalbfleisch T, Heaton MP: Mapping whole genome shotgun sequence and variant calling in mammalian species without their reference genomes [version 1; referees: 1 approved with reservations]. F1000Res. 2013; 2: 244. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLi H, Durbin R: Fast and accurate long-read alignment with Burrows-Wheeler transform. Bioinformatics. 2010; 26(5): 589–95. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLi H, Handsaker B, Wysoker A, et al.: The Sequence Alignment/Map format and SAMtools. Bioinformatics. 2009; 25(16): 2078–9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMcKenna A, Hanna M, Banks E, et al.: The Genome Analysis Toolkit: a MapReduce framework for analyzing next-generation DNA sequencing data. Genome Res. 2010; 20(9): 1297–303. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHeaton MP, Leymaster KA, Kalbfleisch TS, et al.: SNPs for parentage testing and traceability in globally diverse breeds of sheep. PLoS One. 2014; 9(4): e94851. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAltschul SF, Madden TL, Schäffer AA, et al.: Gapped BLAST and PSI-BLAST: a new generation of protein database search programs. Nucleic Acids Res. 1997; 25(17): 3389–402. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAltschul SF, Wootton JC, Gertz EM, et al.: Protein database searches using compositionally adjusted substitution matrices. FEBS J. 2005; 272(20): 5101–9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKijas JW, Lenstra JA, Hayes B, et al.: Genome-wide analysis of the world's sheep breeds reveals high levels of historic mixture and strong recent selection. PLoS Biol. 2012; 10(2): e1001258. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nVåge DI, Husdal M, Kent MP, et al.: A missense mutation in growth differentiation factor 9 (GDF9) is strongly associated with litter size in sheep. BMC Genet. 2013; 14: 1. PubMed Abstract | Publisher Full Text | Free Full Text\n\n1000 Genomes Project Consortium, Auton A, Brooks LD, et al.: A global reference for human genetic variation. Nature. 2015; 526(7571): 68–74. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRossetti R, Di Pasquale E, Marozzi A, et al.: BMP15 mutations associated with primary ovarian insufficiency cause a defective production of bioactive protein. Hum Mutat. 2009; 30(5): 804–10. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSouza CJ, MacDougall C, MacDougall C, et al.: The Booroola (FecB) phenotype is associated with a mutation in the bone morphogenetic receptor type 1 B (BMPR1B) gene. J Endocrinol. 2001; 169(2): R1–6. PubMed Abstract | Publisher Full Text\n\nDavis GH, Galloway SM, Ross IK, et al.: DNA tests in prolific sheep from eight countries provide new evidence on origin of the Booroola (FecB) mutation. Biol Reprod. 2002; 66(6): 1869–74. PubMed Abstract | Publisher Full Text\n\nOraon T, Singh DK, Ghosh M, et al.: Allelic and genotypic frequencies in polymorphic Booroola fecundity gene and their association with multiple birth and postnatal growth in Chhotanagpuri sheep. Vet World. 2016; 9(11): 1294–9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMahdavi M, Nanekarani S, Hosseini SD: Mutation in BMPR-IB gene is associated with litter size in Iranian Kalehkoohi sheep. Anim Reprod Sci. 2014; 147(3–4): 93–8. PubMed Abstract | Publisher Full Text\n\nChu MX, Liu ZH, Jiao CL, et al.: Mutations in BMPR-IB and BMP-15 genes are associated with litter size in Small Tailed Han sheep (Ovis aries). J Anim Sci. 2007; 85(3): 598–603. 
PubMed Abstract | Publisher Full Text\n\nGuan F, Liu SR, Shi GQ, et al.: Polymorphism of FecB gene in nine sheep breeds or strains and its effects on litter size, lamb growth and development. Anim Reprod Sci. 2007; 99(1–2): 44–52. PubMed Abstract | Publisher Full Text\n\nNC-IUB: Nomenclature for incompletely specified bases in nucleic acid sequences. Recommendations 1984. Nomenclature Committee of the International Union of Biochemistry (NC-IUB). Proc Nat Acad Sci U S A. 1986; 83(1): 4–8. PubMed Abstract | Free Full Text\n\nden Dunnen JT, Dalgleish R, Maglott DR, et al.: HGVS Recommendations for the Description of Sequence Variants: 2016 Update. Hum Mutat. 2016; 37(6): 564–9. PubMed Abstract | Publisher Full Text\n\nHanrahan JP, Gregan SM, Mulsant P, et al.: Mutations in the genes for oocyte-derived growth factors GDF9 and BMP15 are associated with both increased ovulation rate and sterility in Cambridge and Belclare sheep (Ovis aries). Biol Reprod. 2004; 70(4): 900–9. PubMed Abstract | Publisher Full Text\n\nKhodabakhshzadeh R, Mohammadabadi MR, Esmailizadeh AK, et al.: Identification of point mutations in exon 2 of GDF9 gene in Kermani sheep. Pol J Vet Sci. 2016; 19(2): 281–9. PubMed Abstract | Publisher Full Text\n\nSouza CJ, McNeilly AS, Benavides MV, et al.: Mutation in the protease cleavage site of GDF9 increases ovulation rate and litter size in heterozygous ewes and causes infertility in homozygous ewes. Anim Genet. 2014; 45(5): 732–9. PubMed Abstract | Publisher Full Text\n\nSilva BD, Castro EA, Souza CJ, et al.: A new polymorphism in the Growth and Differentiation Factor 9 (GDF9) gene is associated with increased ovulation rate and prolificacy in homozygous sheep. Anim Genet. 2011; 42(1): 89–92. PubMed Abstract | Publisher Full Text\n\nNicol L, Bishop SC, Pong-Wong R, et al.: Homozygosity for a single base-pair mutation in the oocyte-specific GDF9 gene results in sterility in Thoka sheep. Reproduction. 2009; 138(6): 921–33. 
PubMed Abstract | Publisher Full Text\n\nMartinez-Royo A, Jurado JJ, Smulders JP, et al.: A deletion in the bone morphogenetic protein 15 gene causes sterility and increased prolificacy in Rasa Aragonesa sheep. Anim Genet. 2008; 39(3): 294–7. PubMed Abstract | Publisher Full Text\n\nMonteagudo LV, Ponz R, Tejedor MT, et al.: A 17 bp deletion in the Bone Morphogenetic Protein 15 (BMP15) gene is associated to increased prolificacy in the Rasa Aragonesa sheep breed. Anim Reprod Sci. 2009; 110(1–2): 139–46. PubMed Abstract | Publisher Full Text\n\nGalloway SM, McNatty KP, Cambridge LM, et al.: Mutations in an oocyte-derived growth factor gene (BMP15) cause increased ovulation rate and infertility in a dosage-sensitive manner. Nat Genet. 2000; 25(3): 279–83. PubMed Abstract | Publisher Full Text\n\nDemars J, Fabre S, Sarry J, et al.: Genome-wide association studies identify two novel BMP15 mutations responsible for an atypical hyperprolificacy phenotype in sheep. PLoS Genet. 2013; 9(4): e1003482. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBodin L, Di Pasquale E, Fabre S, et al.: A novel mutation in the bone morphogenetic protein 15 gene causing defective protein secretion is associated with both increased ovulation rate and sterility in Lacaune sheep. Endocrinology. 2007; 148(1): 393–400. PubMed Abstract | Publisher Full Text\n\nWilson T, Wu XY, Juengel JL, et al.: Highly prolific Booroola sheep have a mutation in the intracellular kinase domain of bone morphogenetic protein IB receptor (ALK-6) that is expressed in both oocytes and granulosa cells. Biol Reprod. 2001; 64(4): 1225–35. PubMed Abstract | Publisher Full Text\n\nHedges SB, Marin J, Suleski M, et al.: Tree of life reveals clock-like speciation and diversification. Mol Biol Evol. 2015; 32(4): 835–45. PubMed Abstract | Publisher Full Text | Free Full Text"
}
|
[
{
"id": "24897",
"date": "29 Aug 2017",
"name": "Christine Couldrey",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe manuscript entitled “Using sheep genomes from diverse U.S. breeds to identify missense variants in genes affecting fecundity” provides an informative overview of a publicly searchable DNA sequence resource for U.S. sheep that the authors have generated. The manuscript is well written. The manuscript is largely descriptive in nature but does include a proof of principle type study to highlight the potential usefulness of this dataset. The study is appropriately designed and technically sound. Most of the conclusions drawn are adequately supported by the results. Where the authors have not identified phenotypic effects of amino acid changes they have acknowledged that this study does lack power to make definitive statements.\n\nA few details require further clarification or modification:\nIn the last paragraph on page 4 where the manuscript states “when the accuracy of the WGS genotypes exceeded 97%”…… Please explain why 97% was chosen as the threshold.\n\nPage 5 under heading Identifying protein variants encoded by GDF9, BMP15, and BMPR1B: the methods describe haplotype phasing, however, it is unclear as to how many of the 96 sheep were able to be phased and used in analysis. It is therefore difficult to determine the validity of this method. The methods or corresponding results section should be expanded to include this information.
Page 5, Methods section under heading Statistical analysis of litter size in daughters of carrier rams: Please include information on how many rams had daughter lambing records for each breed and each of the variants identified.\n\nPage 6: The authors should include some discussion/information about the inclusion of animals that had the lowest concordances with other genotyping platforms, particularly the animals with ~17X coverage and a concordance of ~97%. In some research facilities this (and some of the other animals with greater than 10X coverage and less than 99% concordance) would be treated as suspect and excluded, or further analysis undertaken - if the latter is the case, please include the further analysis.\n\nPage 7: the sentence “Alleles encoding the M371 residue…” refers to Table 2, however this residue is not referred to in Table 2. Please correct.\n\nPage 7: “We predict that variant residues in highly conserved protein domains are more likely to affect ovulation rate and litter size”. While this sentence could be true, it is also possible that protein domains are highly conserved for functions other than ovulation rate and litter size.\n\nPage 12: There is insufficient data to really make the statement “Thus some substitutions at this position may cause loss of function in some mammals but it appears as though Q67 may not be one of them”, given that phenotypes around ovarian failure in sheep are likely not well recorded owing to the culling of sheep relatively early in life.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nI cannot comment. A qualified statistician is required.\n\nAre all the source data underlying the results available to ensure full reproducibility?
Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
},
{
"id": "25465",
"date": "06 Sep 2017",
"name": "Eyal Seroussi",
"expertise": [
"Reviewer Expertise Animal genetics",
"genomics and bioinformatics"
],
"suggestion": "Approved",
"report": "Approved\n\nThe main aim of the publication entitled “Using sheep genomes from diverse U.S. breeds to identify missense variants in genes affecting fecundity” is to establish a publicly accessible resource of sequenced genomes that represent popular U.S. sheep breeds. A key factor for the usefulness of such resource is the fold coverage. While in Introduction a “16x WGS resource” is mentioned, in Results it is indicated that: “the average calculated read depth (17.0)”, the submissions to the SRA database are all entitled “12x WGS of USMARC Sheep…” and the author’s website referred in the manuscript: “USDA, ARS, USMARC internet site” is entitled: “10x WGS of …”. The latter number would have indicated a limited utility for Genotyping-By-Sequencing (GBS) of SNPs, as the typical coverage is not evenly distributed along the genome, and many loci would lack the minimal coverage of 5x. This coverage allows detecting homozygotes with less than a 0.05% uncertainty, assuming ideal conditions of a 0.5 probability to detect an allele and no sequencing errors. Practically, detection rate is biased and sequence errors do exist. Therefore, the authors put much effort to estimate the rate of genotyping errors by comparing their genotypes with bead array data, concluding that no animals had a SNP genotype accuracy of less than 97%.
Since this calculation of error rate involves errors introduced by both genotyping methods, a more straightforward approach can be considered by analyzing non-autosomal loci on chromosome X; there all genotypes must be homozygous. Table S1 offers such a possibility for the BMP15 gene; where two heterozygotes were encountered out of 288 genotypes, which indicates that either sequencing errors or contaminations may introduce 0.7% of the error.\n\nIndeed, sequence cross contamination is a known problem of sequenced genomes1; and sequencing projects should be routinely controlled for DNA contaminants. As the authors did not refer to this problem, I tested one of their 923 submissions that has a median size (4.6 G bases, SRX2186704) by analyzing the sequence reads that do not map to the sheep genome (Oar_v4). Using the GAP5 software 2, the reads sent to the failures.seq file were de-novo assembled into contigs using MIRA4 sequence assembler 3. As the current genome version lacks the Y chromosome, most of these contigs were similar to submissions of Y chromosome orthologs of other ruminants; yet, several contigs resembled the Onchocerca flexuosa genome, the largest of which was of 3372 bp and had 502 reads with 83% identity to this worm genome. This suggests that this individual was infected by a parasitic roundworm similar to the species that infect red deer, and that the authors present a valuable resource that is important for parasitology 4. As I encountered no other DNA contaminants, it is likely that the data presented by the authors is solid and of the highest quality and that the worm DNA was extracted from this animal’s blood.\n\nAs for identifying missense variants in 3 genes affecting fecundity, I conclude that the authors left no stone unturned to ensure the validity of their genotypes. E. 
g., Table S1 indicates that the unique homozygous genotype for BMPR1B (individual 200117552) had coverage of 41x fold, suggesting that sequence coverage was increased for this individual to ensure this result. Yet, the use of modern tools for predicting the functional effect of amino acid substitutions5 should have warned them that this variation (T345N) is not likely to produce a phenotype (PROVEAN score = -2.191, Neutral). In this respect, the 4 novel variations described in the fecundity genes are of minor importance. Nevertheless, the observation that none of the U.S. popular breeds carries the Booroola mutation should have an impact, as introgression of this mutation revolutionized sheep production in Spain and Israel6. Despite the apparent weakness of the work in identifying important novel variants of fecundity genes, I approve this work as a valuable genomic resource. The authors are advised to control for DNA contaminants, to avoid the discrepancies described by extending the clear and accurate presentation of this work to the affiliated webpages and to discuss the issues raised by this review.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? No\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-1303
|
https://f1000research.com/articles/6-1294/v1
|
01 Aug 17
|
{
"type": "Software Tool Article",
"title": "cophesim: A comprehensive phenotype simulator for testing novel association methods",
"authors": [
"Ilya Y. Zhbannikov",
"Konstantin G. Arbeev",
"Anatoliy I. Yashin",
"Anatoliy I. Yashin"
],
"abstract": "Simulation is important in evaluating novel methods when input data is not easily obtainable or specific assumptions are needed. We present cophesim, a software to add the phenotype to generated genotype data prepared with a genetic simulator. The output of cophesim can be used as a direct input for different genome wide association study tools. cophesim is available from https://bitbucket.org/izhbannikov/cophesim.",
"keywords": [
"Phenotype simulation",
"GWAS"
],
"content": "Introduction\n\nGenome-wide association studies (GWAS) are routine in population research. New methods are being developed for better assessing complex associations between genotypes and phenotypes, uncovering genotype structures or testing evolutionary hypotheses. Testing the novel methods requires experimental data, which may not be easily obtainable. One solution is to use artificial data simulated with specific assumptions.\n\nExisting phenotype simulators, such as GENOME1, Plink2, phenosim3, CoaSim4, Fregene5, ForSim6, QuantiNemo7, GCTA8, HapGen9, SeqSimla10, and SimRare11, offer quantitative and dichotomous simulated phenotypes. However, these tools have limitations that may deter users: (i) the majority, if not all, do not offer simulation of survival traits/time-to-event outcomes, making it impossible to test respective hypotheses of associations; (ii) some are difficult to use because of the wide range of parameters the user has to provide and control (rather than have calculated automatically); (iii) phenotype simulation is often offered only as an auxiliary part of a genetic simulation routine, so the user first has to perform an unavoidable, time-consuming genetic simulation in order to obtain the phenotype; (iv) when the genetic data has already been simulated with other tools, only phenosim and GCTA offer adding a simulated phenotype to such data. Consequently, it is necessary to have a new, simple and flexible phenotype simulation tool with plain algorithmic assumptions.\n\nHere we present cophesim, a comprehensive phenotype simulation tool developed to add a phenotype to corresponding genotypes simulated by other simulation tools (Table S1).
cophesim offers simulation of continuous, dichotomous and survival traits, with different (user-provided) effect sizes of causal variants, and with the ability to simulate epistatic interactions. It can also simulate phenotypes under gene-environment interaction assumptions using up to 10 covariates.\n\n\nMethods\n\nThe workflow (see Figure 1) includes the following stages: (i) input data pre-processing; (ii) phenotype simulation; (iii) generation of final output files.\n\nThe workflow of cophesim has three stages: (1) Input stage, where the input data (provided in one of three formats: Plink, MS or GENOME; see the user manual - Supplementary File 1) along with the other input parameters (such as causal variants with effect sizes, output format, etc.) is prepared for phenotype simulation; (2) Phenotype simulation stage, where different types of phenotypic traits are simulated: dichotomous, continuous and time-to-event (‘survival’); (3) Output stage - the final stage, where simulated phenotype data are packed into various formats in order to be directly usable by six GWAS tools: EMMAX, BLOSSOC, Plink, QTDT, TASSEL and GenABEL. Summary statistics are generated at the output stage as well.\n\nCurrently cophesim accepts the genotype output data from the Plink, MS12 and GENOME software applications.
Phenotypes (dichotomous, continuous and survival) are then added according to the following simulation scenarios.\n\nA dichotomous phenotype for the ith individual (i = 1...N, where N is the total number of individuals in a dataset) is simulated according to the logistic model (if the user provided effect sizes for causal variants): P(yi = 1) = exp(b0 + Σj bj·gij) / (1 + exp(b0 + Σj bj·gij)), where gij is the genotype (0, 1 or 2) of the ith individual at the jth causal variant and bj is its effect size.\n\nIf the user did not provide the effect sizes for causal variants, the following strategy is then used: the linear predictor Σj bj·gij is replaced by the sum of standardized genotypes, Σj wij.\n\nHere wij is a weight computed as wij = (gij − 2MAFj) / (2MAFj(1 − MAFj))^(1/2) (a standardization procedure; the matrix W with elements wij is called a standardized genotype matrix8), where MAFj is the minor allele frequency of the jth genetic variant and the other values are the same as described above. This strategy allows using a defined genetic architecture in a simulated population.\n\nA quantitative (continuous) phenotype for the ith individual is simulated according to the linear regression scenario, using equation (2) or, if effect sizes were not supplied, equation (3).\n\nWe model a survival phenotype from the proportional hazards model using the inverse probability method13: if U is uniform on (0, 1) and S(·|z) is the conditional survival function derived from the proportional hazards model, S(t|z) = exp(−H0(t)·e^(b′z)), then the random variable T = H0^(−1)(−log(U)·e^(−b′z)) has survival function S(·|z), where H0 is the cumulative baseline hazard and b is the vector of effect sizes.\n\nThe simplest way to simulate collinearity (linkage disequilibrium, LD) between two SNPs, g1 and g2, with effect sizes E1 and E2, is to replace some portion of g2 with g1 values according to a user-provided r12^2 coefficient, which reflects the correlation between the two SNPs. We also consider applying other techniques, such as copulas, in order to simulate LD.\n\nThese are modeled with the following equation for the ith individual:\n\nOutput files are produced in formats that serve as direct inputs for the following tools: EMMAX14, BLOSSOC4, Plink (.ped file), QTDT15, TASSEL16, GenABEL17 (see Table 1).\n\nApplying one of the options shown below controls the output format. Each output format has a special suffix, which defines the file format. 
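The three simulation scenarios described above (logistic dichotomous trait, linear continuous trait, and survival times via the inverse probability method, together with the standardized genotype matrix) can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not cophesim's actual code; the Weibull baseline hazard and all effect sizes are hypothetical values chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 1000, 5                                # individuals, causal SNPs
maf = rng.uniform(0.1, 0.5, m)                # minor allele frequencies
g = rng.binomial(2, maf, (n, m))              # genotype matrix (0/1/2)

# Standardized genotype matrix: w_ij = (g_ij - 2*MAF_j) / sqrt(2*MAF_j*(1 - MAF_j))
w = (g - 2 * maf) / np.sqrt(2 * maf * (1 - maf))

beta = np.array([0.5, -0.3, 0.0, 0.2, 0.0])   # hypothetical effect sizes
eta = g @ beta                                # linear predictor

# Dichotomous trait: logistic model
y_bin = rng.binomial(1, 1 / (1 + np.exp(-eta)))

# Continuous trait: linear model with Gaussian noise
y_cont = eta + rng.normal(0, 1, n)

# Survival trait: inverse probability method (Bender et al.) with an
# assumed Weibull cumulative baseline hazard H0(t) = lam * t**k, so
# H0^{-1}(x) = (x / lam)**(1/k) and T = H0^{-1}(-log(U) * exp(-eta))
lam, k = 0.01, 1.5
u = rng.uniform(0, 1, n)
t_surv = (-np.log(u) * np.exp(-eta) / lam) ** (1 / k)
```

When no effect sizes are supplied, the fallback strategy described in the text corresponds to replacing `eta = g @ beta` with `eta = w.sum(axis=1)`, the row sums of the standardized genotype matrix.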
These output formats are concordant with those used in phenosim.\n\ncophesim is freely available for download from the following link: https://bitbucket.org/izhbannikov/cophesim. Requirements: Python v2.7.10 or newer, plinkio v0.9.6, R v3.2.4 or newer, and Plink v1.07 (needed to run the examples). The user manual is provided in a separate file “cophesim.pdf” located in the program directory and is also available as Supplementary File 1.\n\n\nUse case\n\nBelow we present an example showing simulation of genetic data followed by simulation of three different phenotypic traits. Other examples and installation instructions are provided on the program website and in the user manual. Refer to the user manual for a description of the input parameters.\n\nIn this example, we first (Step 1) simulate genetic data using Plink. We simulate N.cases = N.control = 5,000 cases and controls and 1,000 SNPs (defined in the wgas.sim file; refer to the Plink website for documentation of this file type). Then (Step 2) we convert the binary sim.plink.bed file to sim.plink.ped (option --recode in Plink). This step is not required, since cophesim can handle binary Plink files (.bed, .bim, .fam), but we include it to demonstrate the program’s ability to deal with the Plink PED format. Finally (Step 3), we simulate dichotomous (by default), continuous (option -c) and survival (option -s) traits from the previously simulated data stored in the files sim.plink.ped and sim.plink.map. Note that we simulate the survival trait with a Gompertz hazard function (option -gomp); effect sizes for causal variants are provided in the file effects.txt (included via option -ce).\n\nWe provide Receiver-Operating Characteristic (ROC) curves (Figure 2) constructed from association tests performed on a simulated dataset. Simulation and association testing were performed with the Plink suite. The following parameters were used: N = 10,000 individuals, with N.snp.c = 100 causal variants out of N.snp = 1,000 variants in total. 
Causal variants were labeled with ‘1’ and the other (neutral) variants were labeled with ‘0’. These labels were later used as true identifiers during calculation of the TPR (true positive rate) and FPR (false positive rate). Dichotomous, continuous and survival phenotypic traits were simulated with cophesim. Association tests were then performed with Plink for the dichotomous and continuous traits (using the Plink flags --logistic and --linear, respectively). Association tests for the survival trait were performed with the R package GenABEL. The p-values produced by the association tests for each variant were then compared to a significance threshold; variants that passed the threshold were classified as causal and associated with the simulated phenotype. These classification results were then compared to the true identifiers (defined above) in order to obtain the TPR and FPR. For all these tests, we varied the significance threshold from 0 to 1 with an increment of 0.001.\n\nTPR: true positive rate, FPR: false positive rate. These results were calculated for dichotomous, continuous and survival traits. The dashed 45-degree line represents random guessing.\n\nThe R code to construct the ROC curves is provided in the file “roc.R”. This file is attached to this computer note and is also available in the data repository: https://bitbucket.org/izhbannikov/cophesim_data/ROC/roc.R\n\n\nConclusion\n\nIn this work we presented cophesim, a tool for phenotype simulation from genetic data obtained either from simulation or from real data collection. 
cophesim makes it possible to simulate phenotypes for various demographic models under user-defined scenarios.\n\n\nSoftware and data availability\n\nTool and source code available from: https://bitbucket.org/izhbannikov/cophesim\n\nArchived source code as at time of publication: doi:10.5281/zenodo.81019518\n\nLicense: MIT\n\nThe example script and output files for the software are available at: https://doi.org/10.5281/zenodo.80409019.\n\nTo test cophesim we provide a repository, “cophesim_data”: https://bitbucket.org/izhbannikov/cophesim_data. Download or clone this repository to be able to run the tests.",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis work was supported by the National Institute on Aging of the National Institutes of Health (NIA/NIH) under award numbers P01AG043352, R01AG046860, and P30AG034424.\n\nThe content is solely the responsibility of the authors and does not necessarily represent the official views of the NIA/NIH.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nSupplementary material\n\nTable S1: Best available phenotype/genotype simulation software applications and their comparison to cophesim in terms of ability to simulate different types of phenotypic traits. (https://f1000researchdata.s3.amazonaws.com/supplementary/11968/c65c7ddd-d305-4043-a722-e850f2413f10.docx)\n\nSupplementary File 1: User manual for cophesim (https://f1000researchdata.s3.amazonaws.com/supplementary/11968/42ab5de2-8130-4b8c-a7ce-abb2f3d55648.pdf).\n\n\nReferences\n\nLiang L, Zöllner S, Abecasis GR: Genome: a rapid coalescent-based whole genome simulator. Bioinformatics. 2007; 23(12): 1565–7. PubMed Abstract | Publisher Full Text\n\nPurcell S, Neale B, Todd-Brown K, et al.: Plink: A tool set for whole-genome association and population-based linkage analyses. Am J Hum Genet. 2007; 81(3): 559–575. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGünther T, Gawenda I, Schmid KJ: phenosim--A software to simulate phenotypes for testing in genome-wide association studies. BMC Bioinformatics. 2011; 12(1): 265. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMailund T, Schierup MH, Pedersen CN, et al.: Coasim: A flexible environment for simulating genetic data under coalescent models. BMC Bioinformatics. 2005; 6(1): 252. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHoggart CJ, Chadeau-Hyam M, Clark TG, et al.: Sequence-level population simulations over large genomic regions. Genetics. 
2007; 177(3): 1725–1731. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLambert BW, Terwilliger JD, Weiss KM: Forsim: a tool for exploring the genetic architecture of complex traits with controlled truth. Bioinformatics. 2008; 24(16): 1821–2. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNeuenschwander S, Hospital F, Guillaume F, et al.: quantinemo: an individual-based program to simulate quantitative traits with explicit genetic architecture in a dynamic metapopulation. Bioinformatics. 2008; 24(13): 1552–3. PubMed Abstract | Publisher Full Text\n\nYang J, Lee SH, Goddard ME, et al.: Gcta: A tool for genome-wide complex trait analysis. Am J Hum Genet. 2011; 88(1): 76–82. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSpencer CC, Su Z, Donnelly P, et al.: Designing genome-wide association studies: sample size, power, imputation, and the choice of genotyping chip. PLoS Genet. 2009; 5(5): e1000477. PubMed Abstract | Publisher Full Text | Free Full Text\n\nChung RH, Shih CC: SeqSIMLA: a sequence and phenotype simulation tool for complex disease studies. BMC Bioinformatics. 2013; 14(1): 199. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLi B, Wang G, Leal SM: Simrare: a program to generate and analyze sequence-based data for association studies of quantitative and qualitative traits. Bioinformatics. 2012; 28(20): 2703–4. PubMed Abstract | Publisher Full Text | Free Full Text\n\nEwing G, Hermisson J: MSMS: a coalescent simulation program including recombination, demographic structure and selection at a single locus. Bioinformatics. 2010; 26(16): 2064–5. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBender R, Augustin T, Blettner M: Generating survival times to simulate Cox proportional hazards models. Stat Med. 2005; 24(11): 1713–1723. PubMed Abstract | Publisher Full Text\n\nKang HM, Sul JH, Service SK, et al.: Variance component model to account for sample structure in genome-wide association studies. Nat Genet. 
2010; 42(4): 348–54. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAbecasis GR, Cardon LR, Cookson WO: A general test of association for quantitative traits in nuclear families. Am J Hum Genet. 2000; 66(1): 279–292. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBradbury PJ, Zhang Z, Kroon DE, et al.: TASSEL: software for association mapping of complex traits in diverse samples. Bioinformatics. 2007; 23(19): 2633–5. PubMed Abstract | Publisher Full Text\n\nAulchenko YS, Ripke S, Isaacs A, et al.: GenABEL: an R library for genome-wide association analysis. Bioinformatics. 2007; 23(10): 1294–6. PubMed Abstract | Publisher Full Text\n\nZhbannikov I: izhbannikov/release-1.4.1. Zenodo. 2017. Data Source\n\nZhbannikov I: izhbannikov/cophesim_data: First release. Zenodo. 2017. Data Source"
}
|
[
{
"id": "24718",
"date": "08 Aug 2017",
"name": "Arnold B. Mitnitski",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nIn the manuscript “Cophesim: a comprehensive phenotype simulator for testing novel association methods”, I. Zhbannikov and colleagues from the Duke University, presented a software that allowed to generate genotype data prepared with a genetic simulator for the use in the investigations of the genome wide association study (GWAS) tools. The rational for development of the software is clearly explained. The idea of the study is to use computer simulations to model data with specific assumptions. Similar simulators are known but all of them do not allow simulate survival. There are several other disadvantage with the existing simulators reviewed by the authors.\n\nThe description of the software is technically sound. The methods section is clearly presented. Dichotomous phenotype are simulated according to the logistic model with the covariates being genetic variants and covariates. Continuous phenotypes are simulated using the linear regression. Survival phenotype is modeled using the proportional hazards with inverse probability method.\nThe details of the code, methods and analysis allow replication of the software and its use by the others. The methods section is clearly presented. Dichotomous phenotype are simulated according to the logistic model with the covariates being genetic variants and covariates. The output formats are compatible with the other applications (Table 1). It is useful example if using the simulator and the other examples are available in the manual. 
The ROC curve example is also very useful. The information provided is quite sufficient to allow interpretation of the expected results.\n\nIn short, cophesim is a useful tool that can be helpful in genetic analyses. The article is scientifically sound and the methods are described in detail – this article will greatly help researchers interested in applied genetic analyses.\n\nIs the rationale for developing the new software tool clearly explained? Yes\n\nIs the description of the software tool technically sound? Yes\n\nAre sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? Yes\n\nIs sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? Yes\n\nAre the conclusions about the tool and its performance adequately supported by the findings presented in the article? Yes",
"responses": []
},
{
"id": "25816",
"date": "18 Sep 2017",
"name": "Lars Rönnegård",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nZhbannikov and co-workers present the cophesim software that they have developed for simulating phenotypic data using genotype information. The input and output file formats are compatible with many of the most commonly used computer programs for genome wide association studies. The software is flexible, well documented and fills a gap in existing tools, especially for simulating time-to-event phenotypes. The paper is well written and easy to follow, and we only have some minor comments and suggestions.\n\nMinor comments\nClosing bracket missing in the sentence below equation (3) In equation (4), if the user does not provide gene effects then the phenotype is built by the sum of the standardized genotypes for each individual. Could you motivate this choice a bit and explain why it would be useful? In equation (5), the subscripts look wrong. a_i should be a_j In the Linkage Disequilibrium section the term “copula” is used. 
We do not think most readers of this paper can be expected to be acquainted with copulas, so a reference is needed.\n\nConsider adding a short paragraph where you discuss limitations and the possibility of adding further functionality in the future, including: dominance effects; a probit link for binary data; simulation of correlated traits; and alternative ways to simulate LD, including a copula approach.\n\nCheck that the following link (at the end of the paper) works: https://bitbucket.org/izhbannikov/cophesim_data/ROC/roc.R (we were able to retrieve the code from https://bitbucket.org/izhbannikov/cophesim_data/src/)\n\nIs the rationale for developing the new software tool clearly explained? Yes\n\nIs the description of the software tool technically sound? Yes\n\nAre sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? Yes\n\nIs sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? Yes\n\nAre the conclusions about the tool and its performance adequately supported by the findings presented in the article? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-1294
|
https://f1000research.com/articles/6-1293/v1
|
01 Aug 17
|
{
"type": "Research Article",
"title": "Child psychiatry: A scientometric analysis 1980-2016",
"authors": [
"Sadiq Naveed",
"Ahmed Waqas",
"Salman Majeed",
"Muhammad Zeshan",
"Nusrat Jahan",
"Muhammad Haaris Sheikh",
"Sadiq Naveed",
"Salman Majeed",
"Muhammad Zeshan",
"Nusrat Jahan",
"Muhammad Haaris Sheikh"
],
"abstract": "Background: The field of child and adolescent psychiatry lags behind adult psychiatry significantly. In recent years, it has witnessed a significant increase in the publication of journals and articles. This study provides a detailed bibliometric analysis of articles published from 1980 to 2016, in the top seven journals of child and adolescent psychiatry. Methods: Using the Web of Science core collection, we selected 9,719 research papers published in seven psychiatric journals from 1980 to 2016. We utilized the Web of Science Analytics tool and Network Analysis Interface for Literature Studies (NAILS) Project scripts to delineate the general trends of publication in these journals. Then, co-citation analysis and hierarchical cluster analysis was performed using CiteSpace to map important papers, landmark theories and foci of research in child and adolescent psychiatry. Results: The field of child and adolescent psychiatry has experienced an increasing trend in research, which was reflected in the results of this study. Hierarchical cluster analysis revealed that the research foci in psychiatry were primarily studies related to the design of psychometric instruments, checklists, taxonomy, attention deficit hyperactivity disorder (ADHD), depression, PTSD, social phobia, and psychopharmacology. Moreover, several landmark studies, including the validation of a child behavior checklist, Ainsworth's empirical evidence of Bowlby's attachment theory, and adult outcomes of childhood dysregulation were published. This study also reports rapid expansion and innovation in research areas in the field of child and adolescent psychiatry from 1980-2016. Conclusions: Rapid expansion and innovation in research areas in the field of child and adolescent psychiatry has been observed, from 1980 to 2016.",
"keywords": [
"evolution",
"scientometrics",
"bibliometric",
"citespace",
"child psychiatry",
"influential",
"publication output"
],
"content": "Introduction\n\nMental health disorders are very prevalent among children and adolescents, resulting in a significant impact on society. It is estimated that 13–20% of children living in the United States (1 out of 5 children) suffer from a mental disorder every year, resulting in an annual economic loss of 247 billion USD1. Despite these statistics, the field of child psychiatry has attracted a little research interest as compared to other specialties of medicine. This is evident from the fact that not even a single article related to mental health among children was published in the first 45 years of the publication history of American Journal of Insanity2. In a clinical context, the first ever hospital specializing in the treatment of sick children; La’Hôpital des Enfants-Malades, was established in 1802 on the Rue deSévres in Paris. It was also the first time that the field of pediatrics was recognized as an established specialty of medicine2. Institutions specializing in mental health of children, however, did not develop until after World War I, when August Hamberger established his outpatient clinic in the University of Heidelberg3. Experts believe that child psychiatry evolved as a separate field after America’s first juvenile court was established in 18994. Then, during World War II, Great Britain started to patronize the psychological development of its children, for a better future5. Similar strides were made in the US, when President Harry Truman declared war on mental illness in 1946, after signing the National Mental Health Act that led to the birth of the National Institute of Mental Health6. This resulted in an explosion of research in understanding the nature of psychiatric diseases, its diagnoses and taxonomy and psychopharmacological and behavioral treatments. 
At present, child and adolescent psychiatry has established itself as a distinct specialty globally; however, major disparities exist between developed and developing countries7.\n\nWhile the development of infrastructure and facilities in any field of medicine is important, scientific research, new discoveries and influential publications are the true markers that ensure its constant progress and evolution. The history of research and development in child psychiatry is quite intricate, spanning discoveries in several domains of medical and social sciences. It is very important to map the research output in a field to help guide policy makers, researchers and funding agencies toward areas where restriction or expansion of research activity is required. In recognition of its importance, several reproducible statistical methods have been developed under the umbrella of scientometrics. Scientometrics is the “quantitative study of science, communication in science, and science policy”, helping to evaluate the impact of journals, scientists and institutes on the development and innovation of a scientific field8.\n\nThe present study analyzes the trends of research in the field of child and adolescent psychiatry by employing reproducible scientometric techniques. Although several scientometric studies have been published in general psychiatry9,10 and other fields of medicine, there is a paucity of such studies mapping the research output in the field of child psychiatry, hence warranting this study. The present study identifies influential publications, landmark theories, authors, countries and major funding agencies contributing to child psychiatry from 1980–2016.\n\n\nMethods\n\nFor the purpose of this scientometric analysis, we selected seven journals (Table 1) indexed under the term “Child Psychiatry” in Google Scholar. 
These journals were selected on the basis of their ISI impact factor, h5-index and h5-median, following the methodology of previous scientometric articles published in the field of psychiatry9,10. The Web of Science core collection was utilized to download bibliographic records of articles published in these journals from 1980–2016 to provide an overview of recent advances in this field. These records included title, author names, abstract, key words and cited references. This search was performed in December 2016, and records for a total of 9,719 articles published during 1980–June 2016 were retrieved. There were no restrictions on the type or language of the articles included in these analyses.\n\nThis study utilized three software tools for data analysis: the Web of Science core collection records, Network Analysis Interface for Literature Studies (NAILS) Project scripts11 and CiteSpace (v4.0 R5, Drexel University, Pennsylvania, USA)12. The Web of Science core collection online analysis platform was used to document journal-wise influential authors, institutions, funding agencies and countries. The NAILS scripts were utilized to identify the most cited keywords in these journals.\n\nCiteSpace (v4.0 R5, Drexel University, Pennsylvania, USA) is a Java-based platform that allows knowledge mapping by visualization of bibliographic data; hence, it is a popular and user-friendly tool for co-citation analyses12. According to the theory of document co-citation13,14, a co-citation relationship between two documents exists when they are cited together by another document.\n\nUsing “time slicing”, the bibliographic records were divided into four groups according to year of publication (1980–1990, 1991–2000, 2001–2010 and 2011–2016); the years per slice was set to 1, with each year represented by the top 50 articles based on the number of citations. 
The ‘term sources’ selected were ‘title’, ‘abstract’, ‘author keywords’ and ‘keywords plus’, and the ‘node type’ selected was ‘cited reference’.\n\nNetwork analyses were run with the link reduction method using pathfinder network scaling, and the bibliographic records for each time slice were then visualized separately. Articles were represented as nodes and co-citation links as edges. Using this technique, several key results could be identified, such as new theories/concepts related to a field (visualized as a purple ring), centrality reflecting the status of a publication in its network/field, citation bursts (hot topics of research), and citation tree rings representing the year-wise citation pattern of a node (article). Articles with centrality values > 0.1 were considered significant entities controlling important resources in their collaborative networks. Based on these techniques, researchers can observe and understand bibliographic trends in order to identify patterns of research in particular fields and regularities of citations in particular time periods15.\n\n\nResults\n\nA total of 9,719 research papers were published in the seven psychiatric journals ‘Journal of Child Psychology and Psychiatry’, ‘Journal of the American Academy of Child & Adolescent Psychiatry’, ‘European Child & Adolescent Psychiatry’, ‘Child Psychiatry & Human Development’, ‘Child and Adolescent Psychiatric Clinics of North America’, ‘Clinical Child Psychology and Psychiatry’, and ‘Child and Adolescent Psychiatry and Mental Health’ (Table 1), from 1980 to June 2016. All journals publish multidisciplinary research articles in the fields of child and adolescent psychiatry, ensuring constant improvement and evolution of the field toward a cutting-edge, evidence-based clinical specialty.\n\nFigure 1 shows the yearly publication volume in the field of child and adolescent psychiatry since 1980, documenting the rapid increase in publication volume in this field since the 1980s. 
The publication output in these journals rose from fewer than 100 journal articles in 1980 to more than 500 in 2015. Figure 2 shows the yearly trend in the number of citations received by the articles included in our analyses. The articles were cited a total of 137,006 times; however, this number dropped to 127,119 after removing self-citations. The total number of articles citing these publications was 81,551 (77,853 after excluding self-citations). The average number of citations per item was 16.85, contributing to an h-index of 132.\n\nThe increasing publication trend shows that the field of child psychiatry is constantly evolving because of new discoveries in epidemiology, assessment techniques, genetics, neurosciences and therapeutics. However, the research output in journals related to child psychiatry still lags behind that of general psychiatry and other specialties of medicine16.\n\nIt is interesting to note that the highest output of research in child psychiatry comes from institutions in developed countries. According to Albayrak et al., this regional disparity is attributed to the high GDP of these countries, greater available funding, and the availability of more public health resources, specialty training programs and mental health professionals committed to the field of child psychiatry7. In addition, regions like South Asia, and a high percentage of developing European countries (23%), do not have training programs in child psychiatry, hence their low research output7. Similar trends were identified in our study. According to the Web of Science (core database) citation report for these seven journals, the countries with the highest research output were the USA, England, Netherlands, Germany, Spain, Canada, Australia, Sweden, Switzerland and Norway. 
Globally, the most productive organizations were the University of London, King's College London, Yale University, the University of California, University College London, Harvard University, the Pennsylvania Commonwealth System of Higher Education, Vrije Universiteit Amsterdam, Radboud University Nijmegen and the University of Pittsburgh in the USA (Table 2).\n\nAccording to our analysis, the top foci of research in child psychiatry correspond with the most common mental health conditions globally. Figure 3 details the top cited keywords, representing the top foci of research in these selected journals. The top cited psychopathologies were depression, attention deficit hyperactivity disorder (ADHD), autism, anxiety, conduct disorder, obsessive compulsive disorder, post-traumatic stress disorder, bipolar disorder, suicide and aggression. This is in accordance with Polanczyk et al., who identified the worldwide prevalence of any anxiety disorder to be 6.5%, any depressive disorder to be 2.6%, ADHD to be 3.4%, and any disruptive disorder to be 5.7%17. Methylphenidate was identified as the top cited keyword for a drug used in child psychiatry. Our results are in accordance with López Muñoz et al., who reported methylphenidate to be the most researched drug for attention deficit hyperactivity disorder (ADHD), also correlating this with an increasing trend in its use18.\n\nIn further analysis, CiteSpace was used to identify important articles based on their centrality values. Articles with centrality values > 0.1 were considered significant. These articles were considered important within their collaborative network, focused on a specific research domain (Table 3, Figure 4). 
Visualization of these clusters also helped in the identification of purple nodes, which represent important and groundbreaking theories and form a link between two different clusters12.\n\nFigure 4 represents a visual co-citation network of 412 research documents published in the field of child psychiatry from 1980–1990. The rings represent several key results, such as new theories/concepts related to a field (visualized as a purple ring), centrality reflecting the status of a publication in its network/field, and citation tree rings representing the year-wise citation pattern of an article.\n\nFrom 1980–1990, there were 412 nodes (articles) and 646 edges (co-citation links). The most important paper, with a centrality value of 0.42 (identified as a purple node), was titled “Diagnostic significance of masked depression”, by Carlson & Cantwell19. It was a landmark study in the field of child psychiatry, as it elucidated the diagnostic significance of unmasking depression in adolescents who presented with other comorbidities and symptoms. Cantwell reviewed the evidence linking hyperactivity with antisocial behavior in youth, and Lewis et al. compared the neuropsychiatric, intellectual, and educational status of extremely violent and less violent incarcerated boys20,21.\n\nThe development of questionnaires and rating scales specific to child psychiatry is another milestone in its history. As in any other scientific discipline, this allowed researchers and clinicians to reliably quantify emotional problems among children and adolescents. It also helps them to track symptoms and response to treatment22. During this decade, two important rating scales were developed, subsequently influencing research in child psychiatry. Shaffer et al.’s ‘Children’s Global Assessment Scale’ adapted the adult version of the Global Assessment Scale to evaluate overall functioning among children, as a complement to their clinical diagnoses23. 
Similarly, Achenbach’s exploration of DSM-III from the perspective of child psychopathology led to the development of the Child Behavior Checklist, which integrates information from a variety of sources: parent, child, and teacher24,25.\n\nOther important publications explored the relationship between caretakers and the child. Bowlby’s attachment theory propounded that a child initially forms only one attachment relationship, that this attachment figure becomes a base for all future relationships26, and that disrupting it can lead to long-term consequences. Ainsworth’s studies provided the first empirical evidence for Bowlby’s attachment theory26, which was subsequently explored in a longitudinal study by Egeland and Sroufe27. In a similar context, Gaensbauer & Sands studied the therapeutic relationship between abused/neglected infants and their caretakers, concluding that personality traits of the child may contribute to disturbance in the caretaker-infant interaction, leading to abuse and neglect28. All of these studies showed significant betweenness centrality in this time period.\n\nThere were 315 nodes and 508 edges in this time period (Table 4, Figure 5). In recent decades, the integration of epidemiological evidence into child psychiatry has truly helped it reach its scientific potential. This discipline helped child psychiatry in three principal ways: a) identifying the burden of childhood psychiatric illnesses, b) identifying new risk factors for psychiatric illnesses, and c) exploring the validity and reliability of the Diagnostic and Statistical Manual.\n\nDuring this period, several landmark epidemiological studies were conducted, including three studies focusing on suicide, PTSD and ADHD. The most important paper, with a centrality value of 0.34 and identified as a purple node, was entitled “Risk factors for adolescent suicide: a comparison of adolescent suicide victims with suicidal inpatients” by Brent et al.29.
It identified the most prevalent risk factors for suicidal behaviors in adolescents and emphasized their proper identification. Pynoos et al.’s work on acute PTSD garnered a lot of attention during this period; it concluded that these symptoms were not affected by age, gender or ethnicity, and that the severity of acute PTSD symptoms correlated with proximity to violence30. Barkley et al. identified hyperactivity as a pattern of behavioral symptoms that is highly stable over time and associated with considerably greater risk for family disturbance and negative academic and social outcomes in adolescence31.\n\nAlthough DSM-III was published in 1980, it did not appear as an influential entity in child psychiatry during the 1980s. However, it attracted a lot of epidemiological studies from 1990–2000, mainly because it operationalized the diagnostic criteria of mental illnesses and used a phenomenological approach. Bird et al. (1993), in their epidemiologic study, identified patterns of comorbidity across four major diagnostic domains (attention deficit disorders, conduct/oppositional disorders, depression and anxiety disorders) among children32, and Cohen et al. (1993) reported that patterns of diagnoses varied by both age and gender33. Subsequent publications include Anderson et al.’s work investigating the prevalence of DSM-III disorders in preadolescent children34, which found that the most prevalent disorders were attention deficit, oppositional and separation anxiety disorders, while the least prevalent were depression and social phobia. Bird et al. delineated the demographic correlates of maladjustment and its DSM-III diagnostic domains35.\n\nFigure 5 represents a visual co-citation network of 315 research documents published in the field of child psychiatry from 1991–2000.
The rings represent several key results, such as new theories/concepts related to a field (visualized as a purple ring), centrality reflecting the status of a publication in its network/field, and citation tree rings representing the year-wise citation pattern of an article.\n\nThe publication of the revised third edition of the DSM (DSM-III-R) by the American Psychiatric Association was significant in its own collaborative network, and was visualized as a purple node representing a landmark work in the field of child psychiatry36. Using DSM-III-R criteria, Lewinsohn et al. identified the prevalence and incidence of depression37 and other DSM-III-R disorders among high school students. This decade also saw the development of the Diagnostic Interview Schedule for Children-Revised (DISC-R), to be used among children, by Schwab-Stone et al.38. This was a very important development, as it could be administered by clinicians as well as lay interviewers with no formal clinical training38.\n\nThe decade also included the publication of DSM-IV by the American Psychiatric Association39. A study by Lahey et al. compared the psychometric properties of the DSM-IV criteria for oppositional defiant disorder and conduct disorder with previous DSM diagnostic formulations40, concluding that the DSM-IV definitions of oppositional defiant disorder and conduct disorder have somewhat better validity than the DSM-III-R definitions.\n\nThe introduction of cutting-edge techniques in child psychiatry integrated new disciplines such as genetics. In 1992, Biederman et al. provided evidence for family-genetic influences in the development of ADHD41. Table 5 provides a detailed analysis of the articles selected based on their centrality values. Note that some of these articles were published in the previous decade but influenced other research in this decade.\n\nThere were 306 nodes and 483 edges (Table 5, Figure 6). Angold et al.
(1999) conducted a meta-analysis to provide an understanding of the comorbidity of different psychiatric disorders42. Angold et al. also highlighted that the severity of symptomatology among children and the resulting impairment contributed significantly to parents’ burden43. Costello et al. conducted a 10-year review update to track recent progress in child and adolescent psychiatric epidemiology44, summarizing the burden of mental illnesses among the youth and the available methods to screen for and diagnose them44. Similarly, Ford et al. identified the prevalence of DSM-IV disorders by conducting the British Child and Adolescent Mental Health Survey45.\n\nFigure 6 represents a visual co-citation network of 306 research documents published in the field of child psychiatry from 2001–2010. The rings represent several key results, such as new theories/concepts related to a field (visualized as a purple ring), centrality reflecting the status of a publication in its network/field, and citation tree rings representing the year-wise citation pattern of an article.\n\nThis decade was influenced by a number of landmark prospective cohort studies. Prospective cohort studies provide strong evidence regarding temporality and causality, and minimize recall bias. These studies revolutionized the understanding of the developmental course of psychopathologies. Kim-Cohen et al.’s work (2003) emphasized the public health importance of juvenile disorders by concluding that most adult disorders might be an extension of juvenile disorders46. Pine et al. identified anxiety and depressive disorders in adolescence to be a strong risk factor for early-adulthood anxiety and depressive disorders47. Costello et al. (1996) influenced this era with their renowned Great Smoky Mountains Study, which examined the prevalence of psychiatric disorders in urban and rural populations48.
The study identified several key findings, such as a high burden of psychiatric illnesses among rural populations, externalizing and internalizing dimensions of psychopathologies, and poverty and puberty as risk factors for depression48.\n\nIn 2000, the American Psychiatric Association published the DSM-IV-TR49. Similar to its previous versions, DSM-IV-TR brought about landmark changes (visualized as a purple node) in the field of child psychiatry.\n\nIn this decade, a particular emphasis was also observed in the study of gene–environment interaction in the development and progression of disease. These studies stimulated a great deal of research in the domain of psychiatric genetics, especially in ADHD and mania. Castellanos et al. concluded that genetic and/or early environmental influences on brain development in ADHD are fixed, non-progressive, and unrelated to stimulant treatment50. Faraone et al. reviewed important genetic variants and their association with ADHD51. Leibenluft et al. described the clinical phenotypes of juvenile mania52. Gottesman and Gould’s work emphasized the importance of endophenotypes in understanding the neurobiological correlates of psychiatric disorders53.\n\nSimilar to previous decades, the work in this period also focused on the importance of assessment, diagnosis and taxonomy of childhood psychiatric disorders. Two scales in particular provided a strong base for screening, diagnosis and research pertaining to ADHD. The Conners' Rating Scales-Revised User Manual, published in 1997, garnered a lot of attention in this decade54. The scale represents a validated instrument with excellent psychometric properties for the evaluation, diagnosis, and treatment response of children with ADHD and comorbid disorders54. Goodman et al. employed the Strengths and Difficulties Questionnaire (SDQ) to screen for child psychiatric disorders in a community sample55.
Owing to their inexpensive use by non-trained individuals, these scales are extensively used in screening for ADHD among children in school settings and community settings, as well as in research.\n\nProviding an updated version of DISC-R, Shaffer et al. assessed the reliability of the NIMH Diagnostic Interview Schedule for Children Version 2.3 (DISC-2.3) in the MECA study56. Subsequent papers included Shaffer et al.’s comparison of the NIMH Diagnostic Interview Schedule for Children Version IV (NIMH DISC-IV) with its previous versions, and its reliability for some common diagnoses57.\n\nIn this decade, a lot of pharmacological research was guided by several influential studies on depression, anxiety and ADHD among children. The most important paper, with a centrality value of 0.26 and identified as a purple node, was titled “Fluoxetine, cognitive-behavioral therapy, and their combination for adolescents with depression: Treatment for Adolescents With Depression Study (TADS) randomized controlled trial” by March et al.58. This study concluded that the combination of CBT and an SSRI is the most efficacious treatment for major depression among the adolescent population. It also helped guide the National Institute for Health and Clinical Excellence (NICE) guidelines for treating adolescent depression.\n\nAnother trial proved that fluvoxamine is efficacious in childhood and adolescent anxiety disorders59. The MTA Cooperative Group tested treatment strategies for ADHD60 and also identified the moderators and mediators of treatment response for children with ADHD61. This was one of the most influential studies guiding future research on the treatment of ADHD; since the NIMH study was published in 1999, thousands of additional peer-reviewed studies have been published on the topic of ADHD treatment.\n\nThere were 209 nodes and 313 edges (Table 6, Figure 7). The fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) was identified as the most important publication62.
Willcutt et al.63 reviewed the validity of DSM-IV attention deficit/hyperactivity disorder symptom dimensions and subtypes. Our analysis identified two important works in epidemiology elucidating the prevalence of childhood psychiatric disorders. In his meta-analysis, Polanczyk analyzed the causes of worldwide variation in ADHD prevalence estimates64. Kessler et al.65 conducted a survey to estimate the lifetime prevalence and age-of-onset distributions of DSM-IV disorders in the National Comorbidity Survey Replication.\n\nThis decade also attracted a lot of research on disruptive disorders among children. Stringaris & Goodman66 recognized three dimensions of oppositionality in youth (irritable, headstrong, and hurtful), deeming them differential predictors of aetiology, prognosis and treatment responsiveness. Later, Stringaris & Goodman assessed the longitudinal outcomes of these three dimensions, with irritability predicting depression and anxiety, the headstrong dimension predicting ADHD, and the hurtful dimension predicting aggressive conduct disorder67. Frick and White68 reviewed the importance of callous-unemotional traits for developmental models of aggressive and antisocial behavior, and Wolke et al.69 investigated whether drop-out in the Avon Longitudinal Study of Parents and Children (ALSPAC) is systematic or random and, if systematic, whether it affects the prediction of disruptive behavior disorders.\n\nEgger & Angold (2006) conducted an important review of common emotional and behavioral disorders in preschool children70. Subsequent papers include Leibenluft’s work on severe mood dysregulation, irritability, and the diagnostic boundaries of bipolar disorder in youth71.
It emphasized that severe mood dysregulation is a different diagnostic entity from bipolar disorder, while Simonoff et al.72 elucidated that social anxiety disorders, ADHD and conduct disorders were the most common comorbidities of autism spectrum disorders.\n\nFigure 7 represents a visual co-citation network of 209 research documents published in the field of child psychiatry from 2011–2016. The rings represent several key results, such as new theories/concepts related to a field (visualized as a purple ring), centrality reflecting the status of a publication in its network/field, and citation tree rings representing the year-wise citation pattern of an article.\n\nTwo prospective studies influenced research in this decade. These included Althoff et al.’s 14-year follow-up study, which concluded that childhood dysregulation identified using the Child Behavior Checklist-Dysregulation Profile (CBCL-DP) could predict anxiety and disruptive behavior disorders in adulthood73. Copeland et al.74 identified childhood and adolescent psychiatric disorders as predictors of young adult disorders.\n\nWalkup et al.75 published a randomized controlled trial to assess treatment options for childhood anxiety disorders, concluding that a combination of cognitive behavioral therapy and sertraline was of greatest efficacy in childhood anxiety. Birmaher & Brent identified practice parameters for the assessment and treatment of children and adolescents with depressive disorders76. This decade also included two influential studies on non-pharmacological interventions. In contrast to previous studies that had focused on pharmacological treatments for ADHD, Sonuga-Barke et al. reviewed the evidence for non-pharmacological interventions77 such as free fatty acid supplementation, artificial food color exclusion, behavioral interventions, neurofeedback, cognitive training, and restricted elimination diets.
Similarly, Silverman et al.78 reviewed the evidence-based psychosocial treatments for phobic and anxiety disorders in children and adolescents.\n\nThis decade was also particularly influenced by Nylund and colleagues’ work on latent class analysis and mixture modeling, techniques commonly used in behavioral and social science research for identifying patterns of behaviors, psychiatric symptoms and disorders, and the co-occurrence of aspects of the social environment79.\n\n\nDiscussion\n\nThe field of child psychiatry is still relatively new and expanding in comparison with other specialties of medicine, and it has evolved significantly over recent decades. Most of the literature in this field was contributed by the United States of America and European countries, with small contributions from developing countries. Similar trends were observed in the regional distribution of funding agencies and authors contributing to this field. Several pharmaceutical companies were also identified among the top ten funding agencies. A number of landmark papers and research foci were also identified. During the first three decades, researchers focused on assessment tools, taxonomy, identification of risk factors, and symptomatology. Over time, there has been growing interest in finding better assessment tools and more effective psychopharmacological options. More recently, there has been rapid development in several areas including neurobiology, neuroimaging, and molecular genetics, with researchers increasingly interested in exploring the etiological factors leading to psychiatric illnesses. The authors strongly believe that these innovative trends in the field will help identify and manage childhood psychiatric disorders at an earlier stage, and also improve the quality of life of patients, their families and caretakers.\n\n\nData availability\n\nDataset 1: Source data utilized in this study, compiled into text documents.
The data are also accessible via the Clarivate Analytics Web of Science Core Collection database. DOI: 10.5256/f1000research.12069.d17067080
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nAcknowledgements\n\nThe authors thank Dr. Hamid Hassan, Assistant Professor of Applied Physiology, Multan Medical and Dental College, Pakistan, for improving the use of English in this manuscript. The authors also thank Professor Chaomei Chen, Professor of Informatics in the College of Computing and Informatics at Drexel University, for providing his valuable comments to improve the paper.\n\n\nReferences\n\nO’Connell M, Boat T, Warner K: Preventing mental, emotional, and behavioral disorders among young people: Progress and possibilities. 2009. Reference Source\n\nRey JM, Assumpção FB Jr, Bernad CA, et al.: HISTORY OF CHILD PSYCHIATRY. 2015. Reference Source\n\nBewley T: Development of specialties - Child psychiatry. (Madness to Mental Illness. A History of the Royal College of Psychiatrists.). 1–11. Reference Source\n\nMcCord J, Spatz Wisdom C, Crowell N: Juvenile Crime, Juvenile Justice. Panel on Juvenile Crime: Prevention, Treatment, and Control. National Research Council and Institute of Medicine, Committee on Law and Justice and Board on Children, Youth, and Families.; 2001. Reference Source\n\nStewart J: US Influences on the Development of Child Guidance and Psychiatric Social Work in Scotland and Great Britain during the Interwar Period. In: Andresen A, Elybakken KT, Hubbard W, editors. Public Health and Preventive Medicine, 1800–2000: Knowledge, Co-operation, and Conflict. Bergen, Stein Rokkan Centre for Social Studies; 2000; 85–95.\n\nSchowalter J: A history of child and adolescent psychiatry in the United States. Psychiatr Times. 2003. Reference Source\n\nAlbayrak O, Föcker M, Wibker K, et al.: Bibliometric assessment of publication output of child and adolescent psychiatric/psychological affiliations between 2005 and 2010 based on the databases PubMed and Scopus. 
Eur Child Adolesc Psychiatry. 2012; 21(6): 327–37. PubMed Abstract | Publisher Full Text\n\nHess D: Science Studies: An advanced introduction. New York: New York University Press; 1997. Reference Source\n\nWu Y, Duan Z: Analysis on evolution and research focus in psychiatry field. BMC Psychiatry. 2015; 15(1): 105. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWu Y, Duan Z: Visualization analysis of author collaborations in schizophrenia research. BMC Psychiatry. 2015; 15(1): 27. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKnutas A, Hajikhani A, Salminen J, et al.: Cloud-based bibliometric analysis service for systematic mapping studies. Proc 16th Int Conf Comput Syst Technol. 2015; 184–91. Publisher Full Text\n\nChen C, Sanjuan FI, Hou J: The Structure and Dynamics of Co-Citation Clusters: A Multiple Perspective Co-Citation Analysis. J Am Soc Inf Sci Technol. 2010; 61(7): 1386–409. Publisher Full Text\n\nSmall H: Co-citation in the scientific literature: A new measure of the relationship between two documents. J Am Soc Inf Sci. 1973; 24(4): 265–9. Publisher Full Text\n\nMarshakova I: System of connections between documents based on references (as the Science Citation Index). Nauchno-Tekhnicheskaya Informatsiya Seriya. 1973; 2(6): 3–8.\n\nLiu Z, Chen Y, Ying L, et al.: Mapping knowledge domains methods and application. Beijing: People’s Publishing House; 2008. Reference Source\n\nISI Thomson Reuters. Master Journal List. Reference Source\n\nPolanczyk GV, Salum GA, Sugaya LS, et al.: Annual Research Review: A meta‐analysis of the worldwide prevalence of mental disorders in children and adolescents. J Child Psychol Psychiatry. 2015; 56(3): 345–65. PubMed Abstract | Publisher Full Text\n\nLópez-Muñoz F, Alamo C, Quintero-Gutiérrez FJ, et al.: A bibliometric study of international scientific productivity in attention-deficit hyperactivity disorder covering the period 1980–2005. Eur Child Adolesc Psychiatry.
2008; 17(6): 381–91. PubMed Abstract | Publisher Full Text\n\nCarlson GA, Cantwell DP: Unmasking masked depression in children and adolescents. Am J Psychiatry. 1980; 137(4): 445–9. PubMed Abstract | Publisher Full Text\n\nCantwell DP: Hyperactivity and Antisocial Behavior. J Am Acad Child Psychiatry. 1978; 17(2): 252–62. PubMed Abstract | Publisher Full Text\n\nLewis DO, Shanok SS, Pincus JH, et al.: Violent juvenile delinquents: psychiatric, neurological, psychological, and abuse factors. J Am Acad Child Psychiatry. 1979; 18(2): 307–19. PubMed Abstract | Publisher Full Text\n\nRey JM, Assumpção FB Jr, Bernad CA, et al.: Chapter. 2015; 1–72.\n\nShaffer D, Gould MS, Brasic J, et al.: A children’s global assessment scale (CGAS). Arch Gen Psychiatry. 1983; 40(11): 1228–31. PubMed Abstract | Publisher Full Text\n\nAchenbach T: DSM-III in light of empirical research on the classification of child psychopathology. J Am Acad Child Psychiatry. 1980; 19(3): 395–412. PubMed Abstract | Publisher Full Text\n\nAchenbach T: Manual for the child behavior checklist and revised child behavior profile. Department of Psychiatry of the University of Vermont; 1983. Reference Source\n\nAinsworth M, Blehar M, Waters E, et al.: Patterns of attachment: A psychological study of the strange situation. Hillsdale, NJ: Erlbaum; 1978. Reference Source\n\nEgeland B, Sroufe LA: Attachment and early maltreatment. Child Dev. 1981; 52(1): 44–52. PubMed Abstract | Publisher Full Text\n\nGaensbauer TJ, Sands K: Distorted affective communications in abused/neglected infants and their potential impact on caretakers. J Am Acad Child Psychiatry. 1979; 18(2): 236–50. PubMed Abstract | Publisher Full Text\n\nBrent DA, Perper JA, Goldstein CE, et al.: Risk factors for adolescent suicide: a comparison of adolescent suicide victims with suicidal inpatients. Arch Gen Psychiatry. 1988; 45(6): 581–8.
PubMed Abstract | Publisher Full Text\n\nPynoos RS, Frederick C, Nader K, et al.: Life threat and posttraumatic stress in school-age children. Arch Gen Psychiatry. 1987; 44(12): 1057–63. PubMed Abstract | Publisher Full Text\n\nBarkley RA, Fischer M, Edelbrock CS, et al.: The adolescent outcome of hyperactive children diagnosed by research criteria: I. An 8-year prospective follow-up study. J Am Acad Child Adolesc Psychiatry. 1990; 29(4): 546–557. PubMed Abstract | Publisher Full Text\n\nBird HR, Gould MS, Staghezza BM: Patterns of diagnostic comorbidity in a community sample of children aged 9 through 16 years. J Am Acad Child Adolesc Psychiatry. 1993; 32(2): 361–8. PubMed Abstract | Publisher Full Text\n\nCohen P, Cohen J, Kasen S, et al.: An epidemiological study of disorders in late childhood and adolescence--I. Age- and gender-specific prevalence. J Child Psychol Psychiatry. 1993; 34(6): 851–67. PubMed Abstract | Publisher Full Text\n\nAnderson JC, Williams S, Mcgee R, et al.: DSM-III disorders in preadolescent children. Prevalence in a large sample from the general population. Arch Gen Psychiatry. 1987; 44(1): 69–76. PubMed Abstract | Publisher Full Text\n\nBird HR, Canino G, Rubio-Stipec M, et al.: Estimates of the prevalence of childhood maladjustment in a community survey in Puerto Rico. The use of combined measures. Arch Gen Psychiatry. 1988; 45(12): 1120–6. PubMed Abstract | Publisher Full Text\n\nAmerican Psychiatric Association: Committee on nomenclature and statistics. Diagnostic and Statistical Manual of Mental Disorders, Revised Third Edition. American Psychiatric Association; 1987.\n\nLewinsohn PM, Hops H, Roberts RE, et al.: Adolescent psychopathology: I. Prevalence and incidence of depression and other DSM-III-R disorders in high school students. J Abnorm Psychol. 1993; 102(1): 133–44. 
PubMed Abstract | Publisher Full Text\n\nSchwab-Stone M, Fallon T, Briggs M, et al.: Reliability of diagnostic reporting for children aged 6–11 years: a test-retest study of the Diagnostic Interview Schedule for Children-Revised. Am J Psychiatry. 1994; 151(7): 1048–54. PubMed Abstract | Publisher Full Text\n\nAmerican Psychiatric Association: Diagnostic and statistical manual of mental disorders (DSM). American Psychiatric Association; 1994; 143–147. Reference Source\n\nLahey BB, Applegate B, Barkley RA, et al.: DSM-IV field trials for oppositional defiant disorder and conduct disorder in children and adolescents. Am J Psychiatry. 1994; 151(8): 1163–71. PubMed Abstract | Publisher Full Text\n\nBiederman J, Faraone SV, Keenan K, et al.: Further evidence for family-genetic risk factors in attention deficit hyperactivity disorder. Patterns of comorbidity in probands and relatives psychiatrically and pediatrically referred samples. Arch Gen Psychiatry. 1992; 49(9): 728–38. PubMed Abstract | Publisher Full Text\n\nAngold A, Costello EJ, Erkanli A: Comorbidity. J Child Psychol Psychiatry. 1999; 40(1): 57–87. PubMed Abstract | Publisher Full Text\n\nAngold A, Messer SC, Stangl D, et al.: Perceived parental burden and service use for child and adolescent psychiatric disorders. Am J Public Health. 1998; 88(1): 75–80. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCostello EJ, Egger H, Angold A: 10-year research update review: the epidemiology of child and adolescent psychiatric disorders: I. Methods and public health burden. J Am Acad Child Adolesc Psychiatry. 2005; 44(10): 972–86. PubMed Abstract | Publisher Full Text\n\nFord T, Goodman R, Meltzer H: The British Child and Adolescent Mental Health Survey 1999: the prevalence of DSM-IV disorders. J Am Acad Child Adolesc Psychiatry. 2003; 42(10): 1203–11.
PubMed Abstract | Publisher Full Text\n\nKim-Cohen J, Caspi A, Moffitt TE, et al.: Prior juvenile diagnoses in adults with mental disorder: developmental follow-back of a prospective-longitudinal cohort. Arch Gen Psychiatry. 2003; 60(7): 709–17. PubMed Abstract | Publisher Full Text\n\nPine DS, Cohen P, Gurley D, et al.: The risk for early-adulthood anxiety and depressive disorders in adolescents with anxiety and depressive disorders. Arch Gen Psychiatry. 1998; 55(1): 56–64. PubMed Abstract | Publisher Full Text\n\nCostello EJ, Angold A, Burns BJ, et al.: The Great Smoky Mountains Study of Youth: goals, design, methods, and the prevalence of DSM-III-R disorders. Arch Gen Psychiatry. 1996; 53(12): 1129–36. PubMed Abstract | Publisher Full Text\n\nAmerican Psychiatric Association: Diagnostic criteria from DSM-IV-TR. 2000.\n\nCastellanos FX, Lee PP, Sharp W, et al.: Developmental trajectories of brain volume abnormalities in children and adolescents with attention-deficit/hyperactivity disorder. JAMA. 2002; 288(14): 1740–8. PubMed Abstract | Publisher Full Text\n\nFaraone SV, Perlis RH, Doyle AE, et al.: Molecular genetics of attention-deficit/hyperactivity disorder. Biol Psychiatry. 2005; 57(11): 1313–23. PubMed Abstract | Publisher Full Text\n\nLeibenluft E, Charney DS, Towbin KE, et al.: Defining clinical phenotypes of juvenile mania. Am J Psychiatry. 2003; 160(3): 430–7. PubMed Abstract | Publisher Full Text\n\nGottesman II, Gould TD: The endophenotype concept in psychiatry: etymology and strategic intentions. Am J Psychiatry. 2003; 160(4): 636–45. PubMed Abstract | Publisher Full Text\n\nConners CK: Conners’ Rating Scales-revised. Multi-Health Systems, Incorporated; 1997.\n\nGoodman R, Ford T, Simmons H, et al.: Using the Strengths and Difficulties Questionnaire (SDQ) to screen for child psychiatric disorders in a community sample. Br J Psychiatry. 2000; 177(6): 534–9.
PubMed Abstract | Publisher Full Text\n\nShaffer D, Fisher P, Dulcan MK, et al.: The NIMH Diagnostic Interview Schedule for Children Version 2.3 (DISC-2.3): description, acceptability, prevalence rates, and performance in the MECA Study. Methods for the Epidemiology of Child and Adolescent Mental Disorders Study. J Am Acad Child Adolesc Psychiatry. 1996; 35(7): 865–77. PubMed Abstract | Publisher Full Text\n\nShaffer D, Fisher P, Lucas CP, et al.: NIMH Diagnostic Interview Schedule for Children Version IV (NIMH DISC-IV): description, differences from previous versions, and reliability of some common diagnoses. J Am Acad Child Adolesc Psychiatry. 2000; 39(1): 28–38. PubMed Abstract | Publisher Full Text\n\nMarch J, Silva S, Petrycki S, et al.: Fluoxetine, cognitive-behavioral therapy, and their combination for adolescents with depression: Treatment for Adolescents With Depression Study (TADS) randomized controlled trial. JAMA. 2004; 292(7): 807–20. PubMed Abstract | Publisher Full Text\n\nWalkup JT, Labellarte MJ, Riddle MA, et al.: Fluvoxamine for the treatment of anxiety disorders in children and adolescents. The Research Unit on Pediatric Psychopharmacology Anxiety Study Group. N Engl J Med. 2001; 344(17): 1279–85. PubMed Abstract | Publisher Full Text\n\nThe MTA Cooperative Group: A 14-month randomized clinical trial of treatment strategies for attention-deficit/hyperactivity disorder. The MTA Cooperative Group. Multimodal Treatment Study of Children with ADHD. Arch Gen Psychiatry. 1999; 56(12): 1073–86. PubMed Abstract | Publisher Full Text\n\nThe MTA Cooperative Group: Moderators and mediators of treatment response for children with attention-deficit/hyperactivity disorder: the Multimodal Treatment Study of children with Attention-deficit/hyperactivity disorder. Arch Gen Psychiatry. 1999; 56(12): 1088–96. PubMed Abstract | Publisher Full Text\n\nAmerican Psychiatric Association (APA): Diagnostic and statistical manual of mental disorders (DSM-5). 2013. 
Reference Source\n\nWillcutt EG, Nigg JT, Pennington BF, et al.: Validity of DSM-IV attention deficit/hyperactivity disorder symptom dimensions and subtypes. J Abnorm Psychol. 2012; 121(4): 991–1010. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPolanczyk G, de Lima S, Horta BL: The Worldwide Prevalence of ADHD: A Systematic Review and Metaregression Analysis. Am J Psychiatry. 2007; 164(6): 942–8. Reference Source\n\nKessler RC, Berglund P, Demler O, et al.: Lifetime prevalence and age-of-onset distributions of DSM-IV disorders in the National Comorbidity Survey Replication. Arch Gen Psychiatry. 2005; 62(6): 593–602. PubMed Abstract | Publisher Full Text\n\nStringaris A, Goodman R: Three dimensions of oppositionality in youth. J Child Psychol Psychiatry Allied Discip. 2009; 50(3): 216–23. PubMed Abstract | Publisher Full Text\n\nStringaris A, Goodman R: Longitudinal outcome of youth oppositionality: irritable, headstrong, and hurtful behaviors have distinctive predictions. J Am Acad Child Adolesc Psychiatry. 2009; 48(4): 404–12. PubMed Abstract | Publisher Full Text\n\nFrick PJ, White SF: Research review: The importance of callous-unemotional traits for developmental models of aggressive and antisocial behavior. J Child Psychol Psychiatry. 2008; 49(4): 359–75. PubMed Abstract | Publisher Full Text\n\nWolke D, Waylen A, Samara M, et al.: Selective drop-out in longitudinal studies and non-biased prediction of behaviour disorders. Br J Psychiatry. 2009; 195(3): 249–56. PubMed Abstract | Publisher Full Text | Free Full Text\n\nEgger HL, Angold A: Common emotional and behavioral disorders in preschool children: presentation, nosology, and epidemiology. J Child Psychol Psychiatry. 2006; 47(3–4): 313–37. PubMed Abstract | Publisher Full Text\n\nLeibenluft E: Severe mood dysregulation, irritability, and the diagnostic boundaries of bipolar disorder in youths.
Am J Psychiatry. 2011; 168(2): 129–42. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSimonoff E, Pickles A, Charman T, et al.: Psychiatric disorders in children with autism spectrum disorders: prevalence, comorbidity, and associated factors in a population-derived sample. J Am Acad Child Adolesc Psychiatry. 2008; 47(8): 921–9. PubMed Abstract | Publisher Full Text\n\nAlthoff RR, Verhulst FC, Rettew DC, et al.: Adult outcomes of childhood dysregulation: a 14-year follow-up study. J Am Acad Child Adolesc Psychiatry. 2010; 49(11): 1105–16. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCopeland WE, Shanahan L, Costello EJ, et al.: Childhood and adolescent psychiatric disorders as predictors of young adult disorders. Arch Gen Psychiatry. 2009; 66(7): 764–72. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWalkup JT, Albano AM, Piacentini J, et al.: Cognitive behavioral therapy, sertraline, or a combination in childhood anxiety. N Engl J Med. 2008; 359(26): 2753–66. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBirmaher B, Brent D, AACAP Work Group on Quality Issues: Practice parameter for the assessment and treatment of children and adolescents with depressive disorders. J Am Acad Child Adolesc Psychiatry. 2007; 46(11): 1503–26. PubMed Abstract | Publisher Full Text\n\nSonuga-Barke EJ, Brandeis D, Cortese S, et al.: Nonpharmacological interventions for ADHD: systematic review and meta-analyses of randomized controlled trials of dietary and psychological treatments. Am J Psychiatry. 2013; 170(3): 275–89. PubMed Abstract | Publisher Full Text\n\nSilverman WK, Pina AA, Viswesvaran C: Evidence-based psychosocial treatments for phobic and anxiety disorders in children and adolescents. J Clin Child Adolesc Psychol. 2008; 37(1): 105–30. PubMed Abstract | Publisher Full Text\n\nNylund KL, Asparouhov T, Muthén BO: Deciding on the number of classes in latent class analysis and growth mixture modeling: A Monte Carlo simulation study. 
Struct Equ Model. 2007; 14(4): 535–69. Publisher Full Text\n\nNaveed S, Waqas A, Majeed S, et al.: Dataset 1 in: Child psychiatry: A scientometric analysis from 1980–2016. F1000Research. 2017. Data Source"
}
|
[
{
"id": "24874",
"date": "10 Aug 2017",
"name": "Miyuru Chandradasa",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe publication, Child psychiatry: A scientometric analysis 1980-2016 was read with enthusiasm and admiration. It is true that child and adolescent psychiatry significantly lags behind general adult psychiatry in producing valid clinically useful research. Assessing research output in a scientific field is of utmost importance as it provides a guide for the future development of that field. This article demonstrates the research output trends in child and adolescent psychiatry over the previous four decades. It allows us to generate an understanding of the future research trends and the gaps that should be addressed by researchers in this field.\n\nThe mapping of research output in a scientific field is an extremely difficult task. This is made even more cumbersome in the field of child and adolescent psychiatry, as there is an overlap of articles in fields of child psychiatry, child psychology, neuro developmental psychiatry and paediatrics. The authors have employed a sound technique to achieve this complex task by using reputed software such as Web of Science core collection records, Network Analysis Interface for Literature Studies Project scripts and Citespace.\n\nThe article highlights the emergence of new concepts in the discipline of child and adolescent psychiatry over the decades. It is interesting to note that some older publications influencing the field after many years from their initial publication.\n\nFew limitations were considered for this publication. 
The authors have analysed articles from the journals with the highest impact factors at present. However, it should be noted that in the past different journals had higher impact factors for child and adolescent psychiatry. Many decades ago, certain reputed journals in general psychiatry published significant numbers of articles in child and adolescent psychiatry, as journals dedicated solely to this subspecialty were less popular. It is logically understandable that the authors selected the seven journals according to the current impact factors to make this project practical.\n\nIn relation to the discussion, it would have been interesting to compare some aspects of the results of this study with similar scientometric analyses in other medical disciplines and General Psychiatry. The authors could have concentrated on research output related to aetiology, symptomatology and management compared to other fields of psychiatry and medicine. In the future, it would be exciting to compare these results with other subspecialties in psychiatry such as forensic psychiatry, mental health related to intellectual disability, addiction psychiatry and old age psychiatry.\nIn conclusion, the authors have made a valid and practical attempt to assess and analyse the research output in the discipline of child and adolescent psychiatry over the past four decades. Further scientometric analyses are required to assess the impact of other influential publications in child and adolescent psychiatry published in journals known for general or other specified medical specialities.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Partly\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nI cannot comment. 
A qualified statistician is required.\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
},
{
"id": "25167",
"date": "05 Sep 2017",
"name": "Muhammad Waqar Azeem",
"expertise": [
"Reviewer Expertise Child and adolescent psychiatry",
"autism"
],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nExcellent work by authors in reviewing the literature on child and adolescent psychiatry.\nWould be important to mention in the limitations that quite a few child and adolescent psychiatry articles are published in journals for paediatrics and general psychiatry.\nSecondly Clinics of North America usually publish articles which are usually review of the literature about different issues.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nI cannot comment. A qualified statistician is required.\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
},
{
"id": "26279",
"date": "01 Nov 2017",
"name": "Deepa Mishra",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nIn this study, authors conducted a bibliometric analysis of articles published from 1980 to 2016, in the top seven journals of child and adolescent psychiatry. The paper is well written but can only be accepted has after these changes. Specifically:\nIn the Abstract Section, Conclusions is just a restatement of the Results Section. I believe the paper has much more to offer than what is mentioned in Conclusions.\n\nDiscussion section should be rewritten to highlight some future research directions also. This will definitely improve the quality of the paper.\n\nFor Visualization of important nodes (Figure 4), authors can use other software like Gephi, as the figure is not depicting the results properly. It is very difficult for readers to understand the concept.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nPartly\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-1293
|
https://f1000research.com/articles/6-85/v1
|
27 Jan 17
|
{
"type": "Research Article",
"title": "Comparison of color discrimination in chronic heavy smokers and healthy subjects",
"authors": [
"Thiago Monteiro de Paiva Fernandes",
"Natanael Antonio dos Santos",
"Natanael Antonio dos Santos"
],
"abstract": "Background: Cigarette smoke is probably the most significant source of exposure to toxic chemicals for humans, involving health-damaging components, such as nicotine, hydrogen cyanide and formaldehyde. The aim of the present study was to assess the influence of chronic heavy smoking on color discrimination (CD). Methods: All subjects were free of any neuropsychiatric disorder, identifiable ocular disease and had normal acuity. No abnormalities were detected in the fundoscopic examination and in the optical coherence tomography exam. We assessed color vision for healthy heavy smokers (n = 15; age range, 20-45 years), deprived smokers (n = 15, age range 20-45 years) and healthy non-smokers (n = 15; age range, 20-45 years), using the psychophysical forced-choice method. All groups were matched for gender and education level. In this paradigm, the volunteers had to choose the pseudoisochromatic stimulus containing a test frequency at four directions (e.g., up, down, right and left) in the subtest of Cambridge Colour Test (CCT): Trivector. Results: Performance on CCT differed between groups, and the observed pattern was that smokers had lower discrimination compared to non-smokers. In addition, deprived smokers presented lower discrimination to smokers and non-smokers. Contrary to expectation, the largest differences were observed for medium and long wavelengths. Conclusions: These results suggests that cigarette smoke and chronic exposure to nicotine, or withdrawal from nicotine, affects CD. This highlights the importance of understanding the diffuse effects of nicotine either attentional bias on color vision.",
"keywords": [
"cigarette smoking",
"visual system",
"color discrimination",
"color vision",
"Cambridge Colour Test",
"Trivector",
"nicotinic receptor"
],
"content": "Introduction\n\nCigarette smoking is still a major source of exposure to chemicals that are toxic for humans. The compounds in cigarettes and cigarette smoke, such as nicotine, oxygen dioxide and formaldehyde, are highly harmful to health1. Data from the World Health Organization (WHO) hypothesize that by 2030, cigarettes could kill nearly 9 million people a year around the world2,3.\n\nCigarette nicotine deprivation in chronic users may impair cognitive and attentional abilities even after long time of cessation4,5. The neurotoxic effects of chronic use and smoking abstinence on the nervous system have not been extensively studied6–8. The need for establishing reliable measures that assess the effects of smoking on sensory, perceptual and cognitive domains is crucial to fill these gaps.\n\nA visual percept may consist of stimuli that vary over the space (spatial contrast), time (temporal contrast) or direction of motion, and vary in luminance (achromatic) and chromaticity (saturation and hue color)9–11. Thus, chromatic contrast involves chromaticity differences, which can be expressed by the distance in the CIE 1976 uniform chromaticity scale diagram and assessed by the size of MacAdam ellipses on the Cambridge Color Test (CCT), for example 12,13.\n\nWe base our rationale on the premise that chronic exposure to nicotine will led to receptor desensitization and not suffer influence of arousal and increase in attentional resources in smokers14. The purpose of the present study was to assess the influence of chronic heavy smoking on color discrimination (CD).\n\n\nMethods\n\nIn this study, 15 non-smokers (mean age = 32.5 years; SD = 9.1; 7 male), 15 cigarette smokers (mean age = 32.1 years; SD = 5.7; 7 male) and 15 deprived smokers (mean age = 31.9 years; SD = 6.3; 7 male) between the ages of 20 and 45 years, who were working as staff or were students at Federal University of Paraiba, were recruited through printed advertisements. 
Participants were excluded if they met any one of the following criteria: younger than 20 or older than 45 years (since the effects of human visual system immaturity or aging could overestimate the results15,16); current history of neuropsychiatric disorder; a history of head trauma, color blindness, current or previous drug abuse; drinking more than 10 alcoholic drinks per week or current use of medications that may affect visual processing and cognition. In addition, subjects were required to have good ocular health: no abnormalities were detected in the fundoscopic examination and in the optical coherence tomography (OCT) exam. All participants had normal or corrected-to-normal vision as determined by a visual acuity of at least 20/20.\n\nSmokers reported a smoking history of at least 8 years, currently smoked more than 20 cigarettes/day and had a score of >5 on the Fagerstrom Test for Nicotine Dependence (FTND)17. Deprived smokers were asked to smoke only after the experiment (≈ 6/7 hours of deprivation). Smokers and deprived smokers began smoking at an average of 16.5 years of age (SD = 3.25) and had been smoking for an average of 15 years (SD = 6.45). Non-smokers had never smoked a cigarette. Smokers were allowed to smoke until the beginning of the experiment.\n\nThis research followed the ethical principles of the Declaration of Helsinki and was approved by the Committee of Ethics in Research of the Health Sciences Center of the Federal University of Paraiba (CAAE: 60944816.3.0000.5188). Written informed consent was obtained from all participants.\n\nStimuli were presented on a 19 inch LG CRT monitor with 1024 × 768 resolution and a refresh rate of 100 Hz. Stimuli were generated using a VSG 2/5 video card (Cambridge Research Systems), which was run on a Precision T3500 microcomputer with a W3530 processor. All procedures were performed in a room at 26±1°C, with the walls covered in grey for better control of luminance during the experiments. 
All measurements were performed with binocular vision. Monitor luminance and chromatic calibrations were performed with a ColorCAL MKII photometer (Cambridge Research Systems).\n\nThe color vision test was performed using the CCT, version 2.0, with the Trivector subtest (Cambridge Research Systems; http://www.crsltd.com/tools-for-vision-science/measuring-visual-functions/cambridge-colour-test/). The CCT was performed in a darkened room with illumination provided only by the monitor used to present visual stimuli. Trivector provides a clinical assessment of color vision deficiencies as a rapid means of screening for the existence of congenital or acquired deficits12. The CCT uses pseudoisochromatic stimuli (Landolt C) defined by the test colors that are to be discriminated, on an achromatic background. The figure and the background are composed of grouped circles randomly varying in diameter and having no spatial structure (variation of 5.7 arcmin in external diameter and 2.8 arcmin in internal diameter). The luminance variation in each presentation prevents learning effects or the use of luminance cues to respond correctly.\n\nThe four-alternative forced-choice12,18 (4-AFC) method was used, and the subjects' task was to identify, using a remote control response box, whether the gap of the Landolt 'C' stimulus was presented at the left, right, up or down side of the monitor screen. The participant was instructed to answer even if they could not identify the stimulus gap12. After each correct answer, the chromaticity of the target moved closer to that of the background, while each wrong answer or omission was followed by the presentation of the target at a greater chromatic distance from the background. The step on the staircase was doubled or halved after each incorrect or correct answer, respectively. This process took place throughout the experiment. 
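The adaptive procedure described above can be sketched as follows (an illustrative simulation, not the CCT implementation; for simplicity the chromatic distance itself, rather than a separate step size, is halved after a correct response and doubled after an error, and the starting value and floor are arbitrary):

```python
import statistics

# Illustrative sketch of the staircase described in the text (NOT the
# CCT software): the chromatic distance between target and background
# is halved after a correct response and doubled after an error or
# omission; the run stops after 11 reversals and the threshold is the
# mean of the final six reversal values.
# `start` and `floor` are arbitrary illustrative values in 10^-4 u'v' units.
def run_staircase(respond, start=1100.0, floor=20.0,
                  n_reversals=11, n_final=6):
    distance = start
    last_direction = None
    reversals = []
    while len(reversals) < n_reversals:
        correct = respond(distance)
        direction = -1 if correct else +1   # -1 = make the task harder
        if last_direction is not None and direction != last_direction:
            reversals.append(distance)      # response direction flipped
        last_direction = direction
        distance = max(floor, distance / 2 if correct else distance * 2)
    return statistics.mean(reversals[-n_final:])
```

With a deterministic simulated observer whose true threshold is 100 (`lambda d: d >= 100`), the estimate settles at about 103, close to the simulated threshold.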
The experiment ended after 11 reversals for each axis and the threshold was estimated from the six final reversals19.\n\nThe Trivector testing protocol estimates sensitivity for the long, medium and short wavelengths through the protanopic, deuteranopic, and tritanopic confusion axes, respectively19,20. The Trivector protocol uses vectors as its central measurement. The advantage of this brief test is that it can be performed in about 5 minutes and provides a reliable result12. The three confusion axes converge at a point called the 'point of intersection', and the xy coordinates used were: protan (0.6579, 0.5013), deutan (-1.2174, 0.7826) and tritan (0.2573, 0.0000) (for more details, see 13).\n\nIn general, we used a default setting where the Landolt ‘C’ had an opening of 1° of visual angle, minimum luminance of 8 cd/m², maximum luminance of 18 cd/m², 6 s of response time for each trial and a distance of 269 cm between participant and monitor screen.\n\nMost of these procedures were performed late in the morning or mid-afternoon. Here we used Weber contrast21: CW = (Lmax − Lmin)/Lmin, where Lmax and Lmin are the maximum and minimum stimulus luminances.\n\nThe distributions for each group were tested with the Shapiro-Wilk test. All groups showed non-normal distributions, thus non-parametric statistical methods were used to analyze the data. For group comparisons, non-parametric univariate analysis was used, with pairwise comparisons by the Mann-Whitney U test. Spearman’s rank correlation coefficients (rho) were computed to assess the relationship between color discrimination data and biosociodemographic variables, such as age, gender and education level. All calculations were made using SPSS®, version 21.0.\n\nThe effect size (r) was estimated from the conversion of the z-score22,23: r = Z/√N, where N is the total number of observations.\n\nResults are presented as medians. 
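The pairwise analysis described above can be sketched in a small pure-Python form (illustrative only; the authors ran these tests in SPSS, and this version omits the tie correction in the variance):

```python
import math

# Illustrative pure-Python Mann-Whitney U with average ranks for ties,
# the normal-approximation z-score (no tie correction in the variance),
# and the effect size conversion r = z / sqrt(N) used in the analysis.
def mann_whitney_u(x, y):
    n1, n2 = len(x), len(y)
    values = list(x) + list(y)
    order = sorted(range(n1 + n2), key=lambda i: values[i])
    rank = [0.0] * (n1 + n2)
    i = 0
    while i < n1 + n2:
        j = i
        while j < n1 + n2 and values[order[j]] == values[order[i]]:
            j += 1
        avg = (i + j + 1) / 2.0          # average of ranks i+1 .. j
        for k in range(i, j):
            rank[order[k]] = avg
        i = j
    r1 = sum(rank[:n1])                  # rank sum of the first sample
    u1 = r1 - n1 * (n1 + 1) / 2.0
    u = min(u1, n1 * n2 - u1)            # report the smaller U
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (u - n1 * n2 / 2.0) / sigma
    return u, z, z / math.sqrt(n1 + n2)
```

For example, `mann_whitney_u([1, 2, 3], [4, 5, 6])` returns U = 0 with a large negative effect size, analogous in form to the U and r values reported in the Results.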
Center lines show the medians; box limits indicate the 25th and 75th percentiles as determined by SPSS software; whiskers extend 1.5 times the interquartile range from the 25th and 75th percentiles (the ends of the whiskers are the maximum and minimum values). When presented, error bars represent standard deviations (SD) of the median based on 1000 bootstrap resamplings. Bonferroni correction was used to adjust the P-values. P < 0.016 was accepted as statistically significant for multiple comparisons and P < 0.025 for pairwise comparisons.\n\n\nResults\n\nThere were significant differences in discrimination thresholds between groups along the protan (χ²(2) = 26.53, P < 0.001), deutan (χ²(2) = 22.40, P < 0.001) and tritan (χ²(2) = 14.93, P < 0.001) axes. The results of the Trivector measurements are shown in Figure 1.\n\nTrivector test: box-and-whiskers plots for protan (A), deutan (B) and tritan (C) confusion lines. Data are presented in 10^-4 u’v’ units. Each box-and-whiskers plot is based on results for 45 participants. * P < 0.05; ** P < 0.01; *** P < 0.001.\n\nAlong the protan vectors (Figure 1A), pairwise comparisons showed significant differences between non-smokers vs. smokers (U = 132, P = 0.002, r = -.61), non-smokers vs. deprived smokers (U = 105, P < 0.001, r = -.85) and smokers vs. deprived smokers (U = 136, P = 0.002, r = -.58).\n\nAlong the deutan vectors (Figure 1B), pairwise comparisons showed significant differences between non-smokers vs. smokers (U = 136, P = 0.001, r = -.58), non-smokers vs. deprived smokers (U = 108, P < 0.001, r = -.83), and smokers vs. deprived smokers (U = 154, P = 0.024, r = -.43).\n\nAlong the tritan vectors (Figure 1C), pairwise comparisons showed significant differences between non-smokers vs. smokers (U = 140, P = 0.003, r = -.55) and non-smokers vs. deprived smokers (U = 126, P < 0.001, r = -.67). There was no statistically significant difference between smokers and 
deprived smokers (P = 0.250).\n\nThere was no relationship between color discrimination and gender (chi-square = 72, df = 39, P > 0.05). A Spearman correlation showed no correlation between FTND and Trivector data (P > 0.050), color discrimination and education years [rho = .078, P = 0.515], or color discrimination and age [rho = .096, P = 0.347].\n\n\nDiscussion\n\nThe data indicated that the smoker groups, as a whole, had poorer discrimination when compared to non-smokers (P < 0.05), indicating the existence of a diffuse impairment in visual processing.\n\nThe small differences in blue-yellow color processing suggest that sensory neurons responsive to short wavelengths may operate differently from those responding to medium and long wavelengths. Indeed, the koniocellular pathway may not suffer from the influences of tobacco components.\n\nAlong the Trivector protocol, smokers showed the largest threshold elevations along the protanopic and deuteranopic confusion axes (Figure 1). An effect size analysis confirmed that smokers had the largest discrimination errors for the protanopic (r = -.85) and deuteranopic (r = -.82) confusion axes when compared against non-smokers. As stated, this result does not support the idea of channel selectivity. However, we base our rationale on the existence of a diffuse processing impairment, which may include the magno- and parvocellular pathways.\n\nNicotine enhances dopamine (DA) release through a balance of activation and desensitization of nicotinic acetylcholine receptors (nAChRs) located mainly in the ventral tegmental area and in the striatum14,28. There are also nAChRs and DA receptors in the retina, so it is not hard to understand that the use of nicotine would enhance attentional resources29–31. However, we did not observe improvements in color discrimination. So, is there any relationship between smoking and color discrimination? The answer may lie in desensitization, which is one of many brain changes caused by addiction32. 
In addition, chronic nicotine exposure leads to nAChR desensitization through brain upregulation33,34. Another property of cigarettes is that the greater the exposure, the greater the amount of nicotine needed to activate the receptors, which changes the affinity and response properties of the nAChRs35,36. Given that nicotine's enhancing effects decay and then remain unchanged after chronic exposure, this may explain the lower discrimination, but also the partial similarity, between smokers and non-smokers in some of our data (Figure 1).\n\nThen, why did the deprived smoker group have poorer discrimination? This can be explained by the withdrawal effect, which induces a hypofunctional state of DA release37,38, affecting both visual processing39–41 and brain reward function42. Visual attention plays a role in the detection of environmental stimuli43.\n\nAs stated, the impairments observed in color discrimination can occur due to cone saturation, amplification of the signals that reach the visual cortex, or the action of nicotine on the parvocellular pathway44. In agreement with previous studies, color vision impairments may be related to the ventral stream, which processes color45. However, our tests used pseudoisochromatic stimuli; thus, color discrimination may have involved both the dorsal and ventral streams. It may be too soon to conclude anything, but there may be nAChRs in both the dorsal and ventral streams. In addition, both streams may suffer from the action of DA hypofunction, directly affecting visual processing38–40,42.\n\nGiven the expression of nAChRs in bipolar, amacrine and ganglion cells28,46, we suggest that smoking affects visual processing, regardless of deprivation. Although the differences between smokers and non-smokers were small, we cannot ignore the existence of many compounds in cigarettes that are harmful to vision. As noted in other studies, exposure to cigarette smoking47–51 and solvents52,53 affects vision. 
Thus, smoking can be harmful even for passive smokers.\n\nOur limitations need to be considered. We evaluated cigarette smoking as a whole, not nicotine-only effects49,50. This suggests further studies using nicotine gum and the same paradigm used here. Clearly, further work is needed, but this study highlights the relationship between smoking and color discrimination, involving short, medium and long wavelengths. We conclude that cigarette compounds affect vision more than nicotine alone54,55.\n\n\nData availability\n\nDataset 1: Patient demographics and Trivector results. Raw data of the subjects' biosociodemographic variables and Trivector (protan, deutan and tritan) results. doi: 10.5256/f1000research.10714.d15005959",
"appendix": "Author contributions\n\n\n\nTM: design of the work, data collection and interpretation, and drafting the article. NA: design of the work, data analysis and interpretation, and critical revision of the article. All authors approved the final version to be published.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nNational Counsel of Technological and Scientific Development (CNPq), Brazil (grant no., 303822/2010-4), CAPES and Federal University of Paraiba funded this paper.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgments\n\nNatalia Leandro de Almeida for helping with data collection and interpretation.\n\n\nReferences\n\nMathers CD, Loncar D: Projections of Global Mortality and Burden of Disease from 2002 to 2030. PLoS Med. 2006; 3(11): e442. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWHO: Who report on the global tobacco epidemic, 2013. Reference Source\n\nWHO: WHO Report on the Global Tobacco Epidemic, 2015: Raising Taxes on Tobacco. 2015. Reference Source\n\nBell SL, Taylor RC, Singleton EG, et al.: Smoking after nicotine deprivation enhances cognitive performance and decreases tobacco craving in drug abusers. Nicotine Tob Res. 1999; 1(1): 45–52. PubMed Abstract | Publisher Full Text\n\nHarrison EL, Coppola S, McKee SA: Nicotine Deprivation and Trait Impulsivity Affect Smokers’ Performance on Cognitive Tasks of Inhibition and Attention. Exp Clin Psychopharmacol. 2009; 17(2): 91–98. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDurazzo TC, Meyerhoff DJ, Nixon SJ: Chronic Cigarette Smoking: Implications for Neurocognition and Brain Neurobiology. Int J Environ Res Public Health. 2010; 7(10): 3760–3791. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nThurgood SL, McNeill A, Clark-Carter D, et al.: A Systematic Review of Smoking Cessation Interventions for Adults in Substance Abuse Treatment or Recovery. Nicotine Tob Res. 2016; 18(5): 993–1001. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWang C, Xu X, Qian W, et al.: Altered human brain anatomy in chronic smokers: a review of magnetic resonance imaging studies. Neurol Sci. 2015; 36(4): 497–504. PubMed Abstract | Publisher Full Text\n\nCornsweet T: Visual Perception. Academic Press; 2012. Reference Source\n\nDeValois KK, Webster MA: Color vision. Scholarpedia. 2011; 6(4): 3073. Publisher Full Text\n\nDeValois RL, DeValois KK: Spatial Vision. Oxford University Press. 1990. Reference Source\n\nMollon JD, Regan BC: Cambridge Colour Test Handbook. 2000. Reference Source\n\nParamei GV: Color discrimination across four life decades assessed by the Cambridge Colour Test. J Opt Soc Am A Opt Image Sci Vis. 2012; 29(2): A290–297. PubMed Abstract | Publisher Full Text\n\nD’Souza MS, Markou A: Neuronal mechanisms underlying development of nicotine dependence: implications for novel smoking-cessation treatments. Addict Sci Clin Pract. 2011; 6(1): 4–16. PubMed Abstract | Free Full Text\n\nRegan BC, Reffin JP, Mollon JD: Luminance noise and the rapid determination of discrimination ellipses in colour deficiency. Vision Res. 1994; 34(10): 1279–1299. PubMed Abstract | Publisher Full Text\n\nReffin JP, Astell S, Mollon JD: Trials of a computer-controlled colour vision test that preserves the advantages of pseudoisochromatic plates. In: Drum B, Moreland JD, Serra A, eds. Colour Vision Deficiencies X. Documenta Ophthalmologica Proceedings Series. Springer: Netherlands; 1991; 69–76. Publisher Full Text\n\nField A: Discovering Statistics Using IBM SPSS Statistics. SAGE; 2013. Reference Source\n\nRosenthal R, Rosnow RL: Effect sizes for experimenting psychologists. Can J Exp Psychol. 2003; 57(3): 221–237. 
PubMed Abstract | Publisher Full Text\n\nAllen AE, Brown TM, Lucas RJ: A distinct contribution of short-wavelength-sensitive cones to light-evoked activity in the mouse pretectal olivary nucleus. J Neurosci. 2011; 31(46): 16833–16843. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDelahunt PB, Brainard DH: Control of chromatic adaptation: signals from separate cone classes interact. Vision Res. 2000; 40(21): 2885–2903. PubMed Abstract | Publisher Full Text\n\nPurves D, Augustine GJ, Fitzpatrick D, et al.: Cones and Color Vision. Accessed November 9, 2016. 2001. Reference Source\n\nBocanegra BR, Zeelenberg R: Emotional cues enhance the attentional effects on spatial and temporal resolution. Psychon Bull Rev. 2011; 18(6): 1071–1076. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKunchulia M, Pilz KS, Herzog MH: Small effects of smoking on visual spatiotemporal processing. Sci Rep. 2014; 4: 7316. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAhmadi K, Pouretemad HR, Esfandiari J, et al.: Psychophysical Evidence for Impaired Magno, Parvo, and Konio-cellular Pathways in Dyslexic Children. J Ophthalmic Vis Res. 2015; 10(4): 433–440. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLevin ED: Nicotinic Receptors in the Nervous System. CRC Press; 2001. Reference Source\n\nPotter AS, Newhouse PA: Acute nicotine improves cognitive deficits in young adults with attention-deficit/hyperactivity disorder. Pharmacol Biochem Behav. 2008; 88(4): 407–417. PubMed Abstract | Publisher Full Text\n\nQuisenaerts C, Morrens M, Hulstijn W, et al.: The nicotinergic receptor as a target for cognitive enhancement in schizophrenia: barking up the wrong tree? Psychopharmacology (Berl). 2014; 231(3): 543–550. PubMed Abstract | Publisher Full Text\n\nQuisenaerts C, Morrens M, Hulstijn W, et al.: Acute nicotine improves social decision-making in non-smoking but not in smoking schizophrenia patients. Front Neurosci. 2013; 7: 197. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nKoob GF, Sanna PP, Bloom FE: Neuroscience of addiction. Neuron. 1998; 21(3): 467–476. PubMed Abstract | Publisher Full Text\n\nBalfour DJ, Munafò MR: The Neuropharmacology of Nicotine Dependence. Springer, 2015; 24. Publisher Full Text\n\nGovind AP, Vezina P, Green WN: Nicotine-induced upregulation of nicotinic receptors: underlying mechanisms and relevance to nicotine addiction. Biochem Pharmacol. 2009; 78(7): 756–765. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBesson M, Granon S, Mameli-Engvall M, et al.: Long-term effects of chronic nicotine exposure on brain nicotinic receptors. Proc Natl Acad Sci U S A. 2007; 104(19): 8155–8160. PubMed Abstract | Publisher Full Text | Free Full Text\n\nVallejo YF, Buisson B, Bertrand D, et al.: Chronic nicotine exposure upregulates nicotinic receptors by a novel mechanism. J Neurosci. 2005; 25(23): 5563–5572. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDe Biasi M, Dani JA: Reward, addiction, withdrawal to nicotine. Annu Rev Neurosci. 2011; 34: 105–130. PubMed Abstract | Publisher Full Text | Free Full Text\n\nZhang L, Dong Y, Doyon WM, et al.: Withdrawal from chronic nicotine exposure alters dopamine signaling dynamics in the nucleus accumbens. Biol Psychiatry. 2012; 71(3): 184–191. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBodis-Wollner I: Visual deficits related to dopamine deficiency in experimental animals and Parkinson’s disease patients. Trends Neurosci. 1990; 13(7): 296–302. PubMed Abstract | Publisher Full Text\n\nJackson CR, Ruan GX, Aseem F, et al.: Retinal dopamine mediates multiple dimensions of light-adapted vision. J Neurosci. 2012; 32(27): 9359–9368. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWitkovsky P: Dopamine and retinal function. Doc Ophthalmol. 2004; 108(1): 17–40. PubMed Abstract | Publisher Full Text\n\nWise RA, Rompre PP: Brain dopamine and reward. Annu Rev Psychol. 
1989; 40(1): 191–225. PubMed Abstract | Publisher Full Text\n\nAguirre CG, Madrid J, Leventhal AM: Tobacco withdrawal symptoms mediate motivation to reinstate smoking during abstinence. J Abnorm Psychol. 2015; 124(3): 623–634. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFigueiró LR, Bortolon CB, Benchaya MC, et al.: Assessment of changes in nicotine dependence, motivation, and symptoms of anxiety and depression among smokers in the initial process of smoking reduction or cessation: a short-term follow-up study. Trends Psychiatry Psychother. 2013; 35(3): 212–220. PubMed Abstract | Publisher Full Text\n\nPestilli F, Viera G, Carrasco M: How do attention and adaptation affect contrast sensitivity? J Vis. 2007; 7(7): 9.1–912. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLee TH, Baek J, Lu ZL, et al.: How arousal modulates the visual contrast sensitivity function. Emotion. 2014; 14(5): 978–984. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPollux PM, Hall S, Roebuck H, et al.: Event-related potential correlates of the interaction between attention and spatiotemporal context regularity in vision. Neuroscience. 2011; 190: 258–269. PubMed Abstract | Publisher Full Text\n\nButler PD, Zemon V, Schechter I, et al.: Early-stage visual processing and cortical amplification deficits in schizophrenia. Arch Gen Psychiatry. 2005; 62(5): 495–504. PubMed Abstract | Publisher Full Text | Free Full Text\n\nClaeys KG, Dupont P, Cornette L, et al.: Color discrimination involves ventral and dorsal stream visual areas. Cereb Cortex. 2004; 14(7): 803–822. PubMed Abstract | Publisher Full Text\n\nNeal MJ, Cunningham JR, Matthews KL: Activation of nicotinic receptors on GABAergic amacrine cells in the rabbit retina indirectly stimulates dopamine release. Vis Neurosci. 2001; 18(1): 55–64. PubMed Abstract | Publisher Full Text\n\nGundogan FC, Durukan AH, Mumcuoglu T, et al.: Acute effects of cigarette smoking on pattern electroretinogram. Doc Ophthalmol. 
2006; 113(2): 115–121. PubMed Abstract | Publisher Full Text\n\nVarghese SB: The effects of nicotine on the human adult visual pathway and processing. 2013. Reference Source\n\nVarghese SB, Reid JC, Hartmann EE, et al.: The Effects of Nicotine on the Human Electroretinogram. Invest Ophthalmol Vis Sci. 2011; 52(13): 9445–9451. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNaser NT, Loop M, Than T, et al.: Color Vision: Effects of Nicotine Gum in Non-Smokers. Invest Ophthalmol Vis Sci. 2011; 52(14): 4902. Reference Source\n\nOliveira AR: Avaliações psicofísicas cromática e acromática de homens e mulheres expostos a solventes orgânicos. 2015. Reference Source\n\nLacerda EM da CB, Ventura DF, Silveira LC de L: Visual assessment by psychophysical methods of people subjected to occupational exposure to organic solvents. Psicol USP. 2011; 22(1): 117–145. Publisher Full Text\n\nBarreto GE, Iarkov A, Moran VE: Beneficial effects of nicotine, cotinine and its metabolites as potential agents for Parkinson’s disease. Front Aging Neurosci. 2015; 6: 340. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPowledge TM: Nicotine as therapy. PLoS Biol. 2004; 2(11): e404. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMazzone P, Tierney W, Hossain M, et al.: Pathophysiological impact of cigarette smoke exposure on the cerebrovascular system with a focus on the blood-brain barrier: expanding the awareness of smoking toxicity in an underappreciated area. Int J Environ Res Public Health. 2010; 7(12): 4111–4126. PubMed Abstract | Publisher Full Text | Free Full Text\n\nArda H, Mirza GE, Polat OA, et al.: Effects of chronic smoking on color vision in young subjects. Int J Ophthalmol. 2015; 8(1): 77–80. PubMed Abstract | Publisher Full Text | Free Full Text\n\nErb C, Nicaeus T, Adler M, et al.: Colour vision disturbances in chronic smokers. Graefes Arch Clin Exp Ophthalmol. 1999; 237(5): 377–380. 
PubMed Abstract | Publisher Full Text\n\nde Paiva Fernandes TM, dos Santos NA: Dataset 1 in: Comparison of color discrimination in chronic heavy smokers and healthy subjects. F1000Research. 2017. Data Source"
}
|
[
{
"id": "20559",
"date": "10 Mar 2017",
"name": "Goro Maehara",
"expertise": [],
"suggestion": "Not Approved",
"report": "Not Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe authors measured color discrimination thresholds in chronic smokers and non-smokers using the Cambridge Color Test. The color discrimination thresholds were significantly higher for chronic smokers than non-smokers. Although their methods were scientifically sound, the thresholds for chronic smokers were not high enough to conclude that smoking impairs their color discrimination abilities.\n\nMajor concern\n\n1. The thresholds for chronic smokers According to Mollon & Regan (2000)1, “normal limits for performance for first examination on the basic “Trivector” test are 100 (protan), 100 (deutan) and 150 (tritan).” The threshold medians for chronic smokers were lower than those values, except for the deutan threshold median for deprived smokers. Although there were statistically significant difference between chronic smokers and non-smokers, the thresholds for non-smokers were very low (about 40, 50, and 80 in 10−4 u’ v’ units for protan, deutan, and tritan, respectively). In addition, the differences in thresholds (about 60) make little change in color appearance. Taken together, it seems difficult to conclude that smoking impairs color discrimination abilities.\n\nMinor concern\n\n1. Equation 1 I am not sure why Weber contrast (equation 1) needs to be explained in the Methods section. The authors should state clearly how they used the equation in their experiment.\n\n2. Results The authors just listed the statistical results in the Results section. 
I suggest describing the results in a more detailed way (e.g. thresholds were higher for smokers than non-smokers, U = 132, P = 0.002, r = -.61).\n\n3. 3rd paragraph in the Discussion section: The authors stated that “smokers were more sensitive to protanopic and deutanopic confusion axes.” This sentence is confusing. Which does this sentence mean, “more sensitive than non-smokers” or “more sensitive than to tritanopic axes”?",
"responses": [
{
"c_id": "2553",
"date": "13 Mar 2017",
"name": "Thiago P Fernandes",
"role": "Author Response",
"response": "Dear Goro Maehara, We respectfully thank you for the reading and responding to our manuscript. If I may contest your decision of this manuscript, we respectfully do not agree that it can not be accepted as an acceptable scientific standard. We believe that the data contribute in terms of scientific validity, and because we know that there are few studies on color vision in chronic smoking (and abstinence). We will try to answer your questions below: \"Although their methods were scientifically sound, the thresholds for chronic smokers were not high enough to conclude that smoking impairs their color discrimination abilities.\" Although we did not observe large differences between the thresholds of the control group and the group of smokers, we agree with Lakens (2013) and Field (2013) noting that the statistically significant differences are consonant with the related effect sizes which fluctuated between mid-to-high values (r values between .50 and .61; chronic smokers x controls). Moreover, when comparing the group of smokers with deprived smokers, we observed that these differences were not so large (r values reaching .50). We do not know if the smoking habit, cigarette compounds or smoking per se, are responsible for the decrease in color discrimination. But there was a loss of color discrimination, suggesting the idea that visual color processing may be diffusely impaired in smokers (Besson et al., 2007; Vallejo, Buisson, Bertrand, & Green, 2005; Zhang, Dong, Doyon, & Dani, 2012). We base this hypothesis on the fact that the many cigarette compounds, including organic solvents in the cigarette smoke, impairs color processing per se. Major Concern 'The threshold medians for chronic smokers were lower than those values, except for the deutan threshold median for deprived smokers. 
Although there was a statistically significant difference between chronic smokers and non-smokers, the thresholds for non-smokers were very low (about 40, 50, and 80 in 10−4 u’ v’ units for protan, deutan, and tritan, respectively). In addition, the differences in thresholds (about 60) make little change in color appearance. Taken together, it seems difficult to conclude that smoking impairs color discrimination abilities.' Many thanks for the review. Based on our expertise in the use of the Cambridge Colour Test: the lower the threshold, the better the discrimination. If a group (in this case, the smokers group) has a higher threshold, this means that they needed more chromatic contrast to detect the stimuli. Thus, higher thresholds mean lower discrimination along confusion axes (Hasrod & Rubin, 2015). After the publication of Mollon and Reffin's work on the CCT (2000), several studies using the Trivector have been published, including preliminary norms for the use of the CCT (Ventura et al., 2003), which were acknowledged by the creators of the CCT on the software website (http://www.crsltd.com/tools-for-vision-science/measuring-visual-functions/cambridge-colour-test/). We re-checked our Trivector data, compared them to several studies, and observed that we obtained similar values for control groups. If another group (such as smokers or deprived smokers) had higher thresholds, it means that they differ from the standard values and are likely to have color vision impairments. As shown in the other studies, values for control subjects have fluctuated precisely around the values that we obtained in our data (Costa et al., 2007; Goulart et al., 2008; Paramei, 2012, 2014; Ventura et al., 2002). Thus, the raised thresholds of smokers are possibly connected with smoking conditions, since we matched all possible intervening variables. Taken together, we cannot ignore that, although not as large, the differences were significant in this sample. 
Based on previous studies, even though there are small differences, they need to be noted, since we agree that this is an important area that requires further research. Minor concern 1. Equation 1; I am not sure why Weber contrast (equation 1) needs to be explained in the Methods section. The authors should state clearly how they used the equation in their experiment. Oops! Many thanks! Since the CCT already uses this default setting, we strongly agreed with your review. These changes will be in the second version of the manuscript (we will remove it). 2. Results; The authors just listed the statistical results in the Results section. I suggest describing the results in a more detailed way (e.g. thresholds were higher for smokers than non-smokers, U = 132, P = 0.002, r = -.61). Many thanks again. We believe that the way you suggested will facilitate the reader's understanding and will enhance the scientific level of our writing. We fixed it. They will be more descriptive in the next version. 3. 3rd paragraph in the Discussion section; The authors stated that “smokers were more sensitive to protanopic and deutanopic confusion axes.” This sentence is confusing. Which does this sentence mean, “more sensitive than non-smokers” or “more sensitive than to tritanopic axes”? We appreciate the suggestion and agree that the use of two forms of explanation may actually confuse the reader. We will correct this. However, when we mention that the smoking group was more sensitive to the protanopic or deutanopic axes, we simply mean that they made more errors than the control group, for example. That is, they needed more chromatic contrast (they were more sensitive) than the comparison group. The confusion axes refer to the red (protanopic), green (deutanopic) and blue (tritanopic) axes. Thus, if any group was more sensitive to the red confusion axis, for example, it means that they possibly had impairments in the processing of this wavelength. 
In this way, based on the points above, the 'not approved' status is honestly inconsistent with the content of the work. We ask you to reconsider your decision, and we are grateful for the comment, reading, and review of the manuscript. References 1. Lakens D. Calculating and reporting effect sizes to facilitate cumulative science: a practical primer for t-tests and ANOVAs. Front. Psychol. [Internet]. 2013 [cited 2017 Mar 10];4. Available from: http://journal.frontiersin.org/article/10.3389/fpsyg.2013.00863/abstract 2. Field A. Discovering Statistics Using IBM SPSS Statistics. SAGE; 2013. 3. Besson M, Granon S, Mameli-Engvall M, Cloëz-Tayarani I, Maubourguet N, Cormier A, et al. Long-term effects of chronic nicotine exposure on brain nicotinic receptors. Proc. Natl. Acad. Sci. 2007;104:8155–60. 4. Vallejo YF, Buisson B, Bertrand D, Green WN. Chronic Nicotine Exposure Upregulates Nicotinic Receptors by a Novel Mechanism. J. Neurosci. Off. J. Soc. Neurosci. 2005;25:5563–72. 5. Zhang L, Dong Y, Doyon WM, Dani JA. Withdrawal from Chronic Nicotine Exposure Alters Dopamine Signaling Dynamics in the Nucleus Accumbens. Biol. Psychiatry. 2012;71:184–91. 6. Mollon JD, Regan BC. Cambridge Colour Test Handbook [Internet]. 2000. Available from: https://sites.oxy.edu/clint/physio/article/CAMBRIDGECOLOURTESTHandbook.pdf 7. Ventura DF, Silveira LC de L, Rodrigues A, Costa MF. Preliminary Norms for the Cambridge Colour Test. 2003;331–9. 8. Costa MF, Oliveira AGF, Feitosa-Santana C, Zatz M, Ventura DF. Red-Green Color Vision Impairment in Duchenne Muscular Dystrophy. Am. J. Hum. Genet. 2007;80:1064–75. 9. Feitosa-Santana C, Barboni MTS, Oiwa NN, Paramei GV, Simões ALAC, Da Costa MF, et al. Irreversible color vision losses in patients with chronic mercury vapor intoxication. Vis. Neurosci. 2008;25:487–91. 10. Goulart PRK, Bandeira ML, Tsubota D, Oiwa NN, Costa MF, Ventura DF. A computer-controlled color vision test for children based on the Cambridge Colour Test. Vis. Neurosci. 
2008;25:445–50. 11. Paramei GV. Color discrimination across four life decades assessed by the Cambridge Colour Test. J. Opt. Soc. Am. A Opt. Image Sci. Vis. 2012;29:A290-297. 12. Paramei GV, Oakley B. Variation of color discrimination across the life span. JOSA A. 2014;31:A375–84. 13. Pelli DG, Bex P. Measuring contrast sensitivity. Vision Res. 2013;90:10–4. 14. Hogg RE, Chakravarthy U. Visual function and dysfunction in early and late age-related maculopathy. Prog. Retin. Eye Res. 2006;25:249–76. 15. DeValois KK, Webster MA. Color vision. Scholarpedia. 2011;6:3073."
},
{
"c_id": "2569",
"date": "20 Mar 2017",
"name": "Goro Maehara",
"role": "Reviewer Response",
"response": "Dear Thiago, I am happy to review the revised manuscript. Please make it clear that the thresholds of normal observers were comparable with those reported by previous studies using the Cambridge color test. According to Thornton, Edwards, Mitchell, Harrison, Buchan & Kelly (2005), there is a strong association between current smoking and age-related macular degeneration. This line of studies could strengthen your paper. Regards, Goro"
},
{
"c_id": "2580",
"date": "30 Mar 2017",
"name": "Thiago P Fernandes",
"role": "Author Response",
"response": "Dear Goro, Many thanks for the quick answer. Based on your suggestions, substantial changes were made. We inserted a subsection in the methods where we explained the cutoff points (where the results would be normal and where the discrimination losses would be). In addition, we better describe the results section, making it clear to the reader that the higher the threshold, the lower the color discrimination. Also, based on your last suggestion, we added a few paragraphs about the relationship between the harmful cigarette compounds and the damage they cause to the retina, and consequently, visual processing. I hope we have answered the suggestions. Again, we ask you to reconsider your decision about the status of our work. Best regards,"
}
]
},
{
"id": "20562",
"date": "17 Mar 2017",
"name": "Marine Raquel Diniz da Rosa",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe article investigates and compares color discrimination in chronic smokers and healthy individuals. The authors found a lower significant color discrimination in chronic smokers.\nHowever, I believe that in order to clarify the cut-off point of color discrimination, authors should write in Methods the standard of normality used to rate low or high color discrimination. In addition, in the description of the results, for better understanding, the authors should explain better the results (which are below the expected) and then the significance of them.\nThe results suggest a possible, even small, important change to the color discrimination of smokers which deserves attention and should be better studied. Therefore, I believe that the paper should be accepted for publication with the aforementioned suggestions.",
"responses": [
{
"c_id": "2579",
"date": "30 Mar 2017",
"name": "Thiago P Fernandes",
"role": "Author Response",
"response": "Dear Marine, First of all, many thanks for the reading and suggestions for our manuscript. We will try to answer your questions below: However, I believe that in order to clarify the cut-off point of color discrimination, authors should write in Methods the standard of normality used to rate low or high color discrimination. We appreciate the suggestion. These changes were made. We stand by calculating the average of the control group of all studies using Trivector in Brazil and using the normative data for age groups used by Paramei et al.1 In addition, in the description of the results, for better understanding, the authors should explain better the results (which are below the expected) and then the significance of them. We appreciate the suggestion. Although the description of Trivector's results is quite directive (for details, see 2-5), we agreed that the way the results were presented was below expectations.These changes were made. The results suggest a possible, even small, important change to the color discrimination of smokers which deserves attention and should be better studied. In agreement. Based on this suggestion, we added a few paragraphs on the importance of the present study, see Introduction and Discussion. References: Paramei GV. Color discrimination across four life decades assessed by the Cambridge Colour Test. J Opt Soc Am A Opt Image Sci Vis. 2012;29(2):A290-297. Lacerda EM da CB, Ventura DF, Silveira LC de L: Visual assessment by psychophysical methods of people subjected to occupational exposure to organic solvents. Psicol USP. 2011;22(1):117–145. 10.1590/S0103-65642011005000011 Oliveira AR: Avaliações psicofísicas cromática e acromática de homens e mulheres expostos a solventes orgânicos.2015. Reference Source Goulart PRK, Bandeira ML, Tsubota D, Oiwa NN, Costa MF, Ventura DF. A computer-controlled color vision test for children based on the Cambridge Colour Test. Vis Neurosci. 2008;25(3):445-450. 
doi:10.1017/S0952523808080589. Costa MF, Oliveira AGF, Feitosa-Santana C, Zatz M, Ventura DF. Red-Green Color Vision Impairment in Duchenne Muscular Dystrophy. Am J Hum Genet. 2007;80(6):1064-1075"
}
]
}
] | 1
|
https://f1000research.com/articles/6-85
|
https://f1000research.com/articles/6-763/v1
|
01 Jun 17
|
{
"type": "Research Article",
"title": "Using spectral decomposition of the signals from laurdan-derived probes to evaluate the physical state of membranes in live cells",
"authors": [
"Serge Mazeres",
"Farzad Fereidouni",
"Etienne Joly",
"Serge Mazeres",
"Farzad Fereidouni"
],
"abstract": "Background: We wanted to investigate the physical state of biological membranes in live cells under the most physiological conditions possible. Methods: For this we have been using laurdan, C-laurdan or M-laurdan to label a variety of cells, and a biphoton microscope equipped with both a thermostatic chamber and a spectral analyser. We also used a flow cytometer to quantify the 450/530 nm ratio of fluorescence emissions by whole cells. Results: We find that using all the information provided by spectral analysis to perform spectral decomposition dramatically improves the imaging resolution compared to using just two channels, as commonly used to calculate generalized polarisation (GP). Coupled to a new plugin called Fraction Mapper, developed to represent the fraction of light intensity in the first component in a stack of two images, we obtain very clear pictures of both the intra-cellular distribution of the probes, and the polarity of the cellular environments where the lipid probes are localised. Our results lead us to conclude that, in live cells kept at 37°C, laurdan, and M-laurdan to a lesser extent, have a strong tendency to accumulate in the very apolar environment of intra-cytoplasmic lipid droplets, but label the plasma membrane (PM) of mammalian cells ineffectively. On the other hand, C-laurdan labels the PM very quickly and effectively, and does not detectably accumulate in lipid droplets. Conclusions: From using these probes on a variety of mammalian cell lines, as well as on cells from Drosophila and Dictyostelium discoideum, we conclude that, apart from the lipid droplets, which are very apolar, probes in intracellular membranes reveal a relatively polar and hydrated environment, suggesting a very marked dominance of liquid disordered states. PMs, on the other hand, are much more apolar, suggesting a strong dominance of liquid ordered state, which fits with their high sterol contents.",
"keywords": [
"Membrane",
"lipid bilayer",
"microdomains",
"solvatochromic",
"spectral unmixing",
"spectral decomposition",
"rafts",
"biphoton microscope."
],
"content": "Introduction\n\nThe lipid bilayer is the main architectural component of biological membranes, in which a variety of proteins are more or less deeply embedded. Over the past two decades, it has become widely accepted that biological membranes are not simply homogeneous seas of lipids studded with proteins, but contain microdomains that play crucial roles in the assembly of signalling platforms and in intracellular transport between various compartments (Lingwood & Simons, 2010). Although the physical forces that govern membrane microdomains are still poorly understood, it is now commonly accepted that the degree of order in the lipids is highly relevant to the formation of lipid domains in eukaryotic cell membranes (Rosetti et al., 2017; Sezgin et al., 2017).\n\nTwo important characteristics of membrane microdomains are their small size (typically below 100 nm, i.e. too small to be seen by standard optical microscopy) and their very dynamic nature (Ali et al., 2006). Because of these characteristics, the physical state of biological membranes, and of the microdomains within them, has proven very difficult to observe and characterise. One way to circumvent these difficulties is to use fluorescent lipids with solvatochromic properties, i.e. fluorescent probes whose emission spectra are influenced by the polarity of their environment (Klymchenko & Kreder, 2014). 
Because microdomains are commonly viewed as structures with greater lipid order than the surrounding membrane lipids, and because this greater order is linked to a reduced exposure to water and thus to a less polar environment, the emission spectra of solvatochromic lipid probes differ when they are within or outside membrane microdomains.\n\nThe probes most frequently used for this type of approach belong to five main families: laurdan-, Nile Red- (NR12S), ANEPPDHQ-, 3-hydroxyflavone- and pyrene-based lipid probes (Klymchenko & Kreder, 2014; Le Guyader et al., 2007; Niko et al., 2016; Owen et al., 2011). Among these, we prefer the probes derived from laurdan, for two main reasons. Firstly, laurdan and its derivatives have no marked preference for ordered or disordered lipid environments and thus tend to distribute evenly in lipid bilayers harbouring co-existence of lipid domains in different physical states. Secondly, the emission spectra of the laurdan-family probes show a relatively clear-cut dichotomy in their emission spectra in different types of membranes: in the apolar environments of phospholipid bilayers in either liquid ordered (Lo) or solid (So) states their emission maxima are at 440 nm, whereas when they are in the relatively polar environments of membranes in the liquid disordered state (Ld) their emission maxima are at 490 nm (Bagatolli et al., 2003; Parasassi & Gratton, 1995).\n\nThe most widely accepted model to explain the ‘red shift’ of laurdan from an emission maximum of 440 nm to a maximum of 490 nm in more polar environments is based on the reorientation, or relaxation, of structured water molecules in the direct vicinity of the probe (Bagatolli, 2013). 
Such a model explains the ‘all or nothing’ response in the emission spectrum of the laurdan-family probes: in the excited state, which has a half-life of a few nanoseconds, similar to the timeframe of water relaxation, either the fluorochrome does not dissipate energy towards a water molecule and the emitted photon has a wavelength around 440 nm, or there is a water molecule in the direct vicinity of the fluorochrome towards which relaxation of part of the excitation energy can be transferred, and the emitted photon will then have a wavelength around 490 nm.\n\nThe ANEPPDHQ, Nile Red and pyrene-derived probes, by contrast, undergo more progressive shifts in their emission maxima depending on the dielectric constants of their environments. Furthermore, the shifts in emission spectra of these probes are not necessarily linked to the degree of order of the bilayers. Indeed, when the ANEPPDHQ or Nile Red probes are inserted in model bilayers of pure phospholipids below their transition temperature, i.e. bilayers in a highly ordered So or gel state, their emission spectra tend to resemble those corresponding to a Ld state. This may be explained if these probes tend to be excluded from the crystalline mesh that the phospholipids form when they switch to a gel or So state, and thus they may accumulate in imperfections or 'cracks' in the bilayer where they will be more exposed to water. A similar phenomenon of red-shifted emission occurs with laurdan when it is inserted into model bilayers of pure glycosphingolipids, especially those bearing large headgroups; this may be explained by the formation of sphingolipid aggregates resulting in the exposure of the probe to water molecules (Bagatolli et al., 1998).\n\nLaurdan was first synthesised by Weber in 1979 (Macgregor & Weber, 1981; Weber & Farris, 1979); it has since been widely used to investigate the physical state of biological membranes (Gaus et al., 2005; Jay & Hamilton, 2017; Yu et al., 1996). 
For a thorough review of laurdan’s history and properties, the reader is referred to the book chapter by Luis Bagatolli (Bagatolli, 2013). Previous studies, however, were mostly of cells at room temperature that were very often fixed with a crosslinking agent such as formaldehyde. We wanted to study the plasma membranes (PMs) of living cells at 37°C exposed to as little disturbance as possible. At 37°C, however, laurdan tends to accumulate in intracellular compartments and to label the PM rather poorly. C-laurdan, a probe first synthesised and characterised in 2007, was derived from laurdan by adding a carboxyl group to the polar head, and labels the PM of eukaryotic cells more effectively than laurdan. In 2014, we reported an optimised protocol for the synthesis of C-laurdan and of M-laurdan, an intermediate in the synthesis that showed promising properties (Mazeres et al., 2014).\n\nHere, we have used these three laurdan-family probes to label live cells maintained in tissue culture medium and at 37°C for the duration of the labelling procedure and observation. This, combined with an analysis based on spectral decomposition, has allowed us to obtain a clearer picture of the physical state of membranes in live cells compared with previous studies based on generalised polarisation (GP), which is calculated simply from the fluorescence emissions at 440 and 490 nm. Our study provides an estimate of the proportion of apolar versus polar environments in the lipid bilayers of various cellular compartments.\n\n\nMethods\n\nLaurdan, M-laurdan and C-laurdan were all synthesised in-house as previously described (Mazeres et al., 2014). Aliquots of 2 mM stock solutions in DMSO were kept in long-term storage at -20°C; when needed for experiments, aliquots were kept at 4°C for, at most, six weeks. Various staining procedures were used for different experiments, as described later. 
Nile Red was obtained from Sigma; staining was performed at 10 ng/ml for 30–60 min at 37°C in tissue culture medium without serum. LysoTracker Red DND-99 was obtained from Invitrogen; staining was performed at 20 ng/ml for 30–60 min at 37°C in tissue culture medium without serum. Methyl-β-cyclodextrin (MßCD) (cell culture tested) was obtained from Sigma.\n\nMultilamellar large vesicles (MLVs) were prepared as follows. DPPC (1,2-dipalmitoyl-sn-glycero-3-phosphocholine), DOPC (1,2-dioleoyl-sn-glycero-3-phosphocholine) and cholesterol were purchased from Sigma–Aldrich, and prepared as stock solutions in chloroform. The appropriate amounts of these stock solutions to obtain a final concentration of either 100 μM DOPC (for Ld MLVs), or 60 μM DPPC and 40 μM cholesterol (for Lo MLVs) in a final volume of 3 ml were placed in glass tubes. The chloroform solvent was first evaporated under nitrogen flow and then under vacuum for 2 hours. Three ml of MOPS buffer (3-(N-morpholino)propane sulfonic acid, 10 mM, pH 7.3, NaCl 100 mM, EDTA 10 μM) was then added either at room temperature (for the DOPC MLVs) or at 50°C (for the MLVs containing 60% DPPC and 40% cholesterol). The tubes were then vortexed vigorously for 30 seconds to form MLVs. Stock solutions of the various probes were prepared in DMSO such that 1/1000 v/v of the probe solution was added to the aqueous suspension of MLVs. To prevent dye aggregation in water, the probes were always added while vortexing the tubes containing the MLVs. They were then incubated at room temperature for at least one hour before analysis in a QM4 Spectrofluorimeter (Photon Technology International), or observation by biphoton microscopy (see below).\n\nMammalian cells were all grown in DMEM with 10% fetal calf serum (FCS) at 37°C with 5% CO2 and passaged by trypsinisation every three or four days. HeLa, HCT116 and L(tk-) cells were obtained from the ATCC. 
The primary culture of human foreskin fibroblasts was a gift from Laure Gibot (IPBS, Toulouse, France). Drosophila melanogaster Kc cells were provided by Vanessa Gobert (CBD, Toulouse, France) and were grown at room temperature in Shields and Sang medium with 10% FCS. Dictyostelium discoideum AX2 cells, provided by François Letourneur (DIMNP, Montpellier, France), were grown in HL5 medium at room temperature.

All mammalian cells were grown for 2–4 days on glass coverslips in six-well plates, with 2 ml of medium in each well. On the day of the assay, the glass coverslips were placed inside stainless steel culture chambers (either from SKE Research Equipment (Milan) or from ThermoFisher), which had been pre-warmed in the same incubator as the cells for at least 30 minutes. One ml of the medium that the cells were growing in was then transferred from the well of the six-well plate to the chamber, and the chamber containing the coverslip with the cells on it was then returned to the incubator for at least 30 minutes. The cells were then rinsed once with 1 ml of DMEM with neither serum nor phenol red, which had been pre-warmed to 37°C. The probes (in 1 µl DMSO) were placed in an Eppendorf tube, and 1 ml of pre-warmed DMEM was quickly pipetted up and down twice in the tube before transferring onto the cells. The chamber was then returned to the tissue culture incubator for various times (at least 20 minutes for C-laurdan, and 45 minutes for M-laurdan and laurdan) before observation.
For the time-courses, such as the ones shown in Figure 7, C-laurdan was added by pipetting 500 µl out of the chamber, pipetting this liquid up and down twice into an Eppendorf tube containing the adjusted amount of probe in DMSO, and returning the 500 µl to the observation chamber, pipetting up and down a few times.

Drosophila Kc cells were grown for 48 hours on coverslips coated with poly-L-lysine (0.1 mg/ml for 1 hour followed by three rinses in PBS), and stained with 800 nM C-laurdan in Shields and Sang serum-free medium. Dictyostelium AX2 cells were incubated for approximately two hours in HL5 medium at room temperature to allow them to adhere to untreated coverslips. They were then stained with 800 nM C-laurdan in Sörensen's buffer.

Since the probes fluoresce only in a lipid environment, the cells were imaged in the staining medium. (Note that if rinsing is necessary, it should be done with medium without serum, or the labelling will fade rapidly, especially in the case of C-laurdan. If cells need to be returned to serum-containing medium, for example for very long incubation times, one should consider using laurdan or M-laurdan.)

Mammalian cells were maintained at 37°C, 5% CO2 throughout the imaging procedure. All images were recorded on a LSM 710 NLO-Meta confocal laser-scanning microscope controlled by Zen software (2010B SP1, v 6.0.0.485), equipped with a 40x/1.2 water immersion objective, a gas-controlled thermostatic chamber and a spectral detection module (Zeiss, Germany), and coupled to a biphoton laser source (3 watts at 800 nm; Chameleon Vision II, Coherent, France).
When tuned to 720 nm, the laser delivered 2.14 watts, and with the AOTF set to 4%, the average measured power delivered at the level of the objective was 6 mW.

Unless otherwise specified, the settings of the microscope were: biphoton tunable laser set at 720 nm; 2–4% power; pixel dwell time of 6.3 µsec; averaging over four measurements; pinhole open (600) or sometimes closed to 150 (never more) to gain higher resolution of close-up pictures (zoom 4 and above). When the settings differed from these, the specific details are given in the legends of the corresponding figures. The images were all acquired as stacks in lambda-mode with spectral resolution steps of 9.8 nm. For acquisitions with the laurdan-derived probes, we recorded the 17 channels between 418 nm and 584 nm plus a channel for transmitted light. When Nile Red or LysoTracker Red were included in the experiment, we recorded the 29 channels between 418 nm and 700 nm, plus a channel for transmitted light. All the images shown in all the figures are representative of at least six images (of six different fields), acquired on at least two different days.

Although the Zen software we used for acquisition of images on the Zeiss microscope allows extensive and convenient image analysis, including spectral decomposition (called ‘linear unmixing’ in the Zen software), this commercial software is not widely available. We have therefore chosen to base all the image analyses reported in this paper on ImageJ software (current version 1.48V, Java 1.6.0_65) complemented with various plugins, which are all open-source and freely available.

The PoissonNMF plugin (https://github.com/neherlab/PoissonNMF) performs nonnegative matrix factorisation to allow spectral decomposition without knowing the initial spectra (Neher et al., 2009). Here, we used it to perform spectral decomposition analysis of reference spectra acquired separately on MLVs of defined composition.
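The principle behind such reference-based unmixing can be illustrated with a toy example. This is not the PoissonNMF algorithm itself (which handles Poisson noise and enforces non-negativity through matrix factorisation); it is only a hypothetical ordinary least-squares decomposition of a single pixel's spectrum into two fixed reference spectra, solved via the 2×2 normal equations:

```python
def unmix_two(pixel, ref1, ref2):
    """Least-squares coefficients (a, b) such that pixel ≈ a*ref1 + b*ref2,
    obtained by solving the 2x2 normal equations of the fit."""
    s11 = sum(r * r for r in ref1)
    s22 = sum(r * r for r in ref2)
    s12 = sum(p * q for p, q in zip(ref1, ref2))
    b1 = sum(p * q for p, q in zip(ref1, pixel))
    b2 = sum(p * q for p, q in zip(ref2, pixel))
    det = s11 * s22 - s12 * s12
    return (s22 * b1 - s12 * b2) / det, (s11 * b2 - s12 * b1) / det

# Synthetic check with made-up 5-channel reference spectra: a pixel that is
# 70% "apolar" reference and 30% "polar" reference should be recovered exactly.
ref_apolar = [0.0, 1.0, 4.0, 1.0, 0.0]
ref_polar = [0.0, 0.0, 1.0, 4.0, 1.0]
mix = [0.7 * x + 0.3 * y for x, y in zip(ref_apolar, ref_polar)]
a, b = unmix_two(mix, ref_apolar, ref_polar)
```

Applied pixel by pixel over a lambda stack, this yields one image per reference spectrum, which is what the plugin produces.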
The ‘blind’ mode used on live cells labelled with laurdan-derived probes usually gave results very similar to those obtained by using reference spectra, but the spectral identification was sometimes less accurate, especially when the signal to noise ratios were low.

When using fixed spectra with the PoissonNMF plugin, we used ten iterations (as recommended by Richard Neher, personal communication). The other values were the default settings suggested by the program: subsamples, 2; segregation bias, 0; saturation threshold, 4000; background threshold, 50; background spectrum, minimal values, and the spectral values were specified manually to match those of the spectral analyser of the biphoton microscope. Note: for the PoissonNMF plugin to function correctly when using fixed spectra, it is essential that the number of channels specified matches exactly the number of channels in the stack.

The Fraction Mapper plugin (see Software and data availability) was developed specifically for this study. This plugin converts the two-image stack obtained through spectral decomposition into fractional intensities by dividing the unmixed images by the total intensity image. The fractional intensities always sum to 1, and since we only unmix two components, the fractional intensity can be indexed with a single number. Next, the fractional intensity is mapped onto the total intensity image and colour-coded using either readily available or custom-made look-up tables (LUTs). The saturation of the colours is scaled with the pixel value of the total intensity. The plugin allows the choice of various LUTs and also thresholding of the image. When run on the stack of unmixed images, the plugin asks for the desired LUT, then generates the colour-coded fractional intensity image and a 2D histogram in which the frequency is colour-coded. The axes of the histogram plot are the intensity of the two unmixed images.
The plugin allows the user to make regions of interest on the 2D plot and to project them back into the fractional colour-coded image, rather like the gating of flow cytometry plots.

For analysis by flow cytometry, Hela cells were harvested by trypsinisation, and a single-cell suspension was prepared in tissue culture medium (DMEM with 10% FCS). Cells were then washed twice in pre-warmed DMEM with neither serum nor phenol red before adding the probes (1/1000 v/v of the DMSO stocks) followed by incubation in a 37°C water bath placed near the flow cytometer.

For cholesterol depletion, a fresh 10X stock solution of MßCD was prepared at 50 mM (66 mg/ml of MßCD powder) in PBS on the day of each experiment. Cell suspensions were rinsed twice in DMEM without serum before incubating with 5 mM MßCD for 30 minutes at 37°C. Cells were then rinsed twice with warm DMEM without serum before staining with the laurdan-family probes. (Note, treatment of the cells with MßCD after labelling with the laurdan-family probes results in very effective removal of the probes.)

At the end of the incubations, the cells were analysed on a Becton Dickinson LSR II flow cytometer equipped with a 355 nm UV laser, and the following set of filters was used for the analysis of the emitted light: first channel: 505LP + 530/30 BP; second channel: 450/50 BP. For each sample, 5000 cells gated on FSC/SSC were analysed.


Results

In a previous study, we compared the photophysical properties of C-laurdan and two intermediates in its synthesis, M-laurdan and MoC-laurdan, to those of laurdan (Mazeres et al., 2014). We found that the probes distributed very differently inside live cells. Laurdan stained predominantly cytoplasmic ‘vesicles’, which we speculated might belong to the endolysosome pathway (Mazeres et al., 2014).
Since then, we have observed that these vesicular bodies are very apolar, as they were seen much more clearly at 440 nm than at 490 nm (Supplementary File 1), which suggested that they may in fact be lipid droplets. To discriminate between these two possibilities, we wanted to perform double labelling with either LysoTracker Red (which labels lysosomes) or Nile Red (which labels lipid droplets) and each of the three laurdan-derived probes. We found, however, that these vesicles were also very mobile (Supplementary File 1), so we could not perform successive image acquisitions to investigate whether the laurdan-based probes co-localised with LysoTracker Red or with Nile Red.

To circumvent this problem, we used the lambda mode of acquisition on our microscope to collect the fluorescent light emitted by the two dyes at exactly the same time, and then ‘unmixed’ the signals of the two probes by spectral decomposition. Thus, the mobility of the vesicles in live cells no longer confounded the interpretation of localisation of the two dyes. To unmix the emission spectra, we first used cells labelled with only one probe to acquire the corresponding emission spectrum from 420 to 700 nm. We then used these emission spectra to perform spectral decomposition on images of cells stained with two probes (Figure 1). Decomposition revealed that the staining patterns obtained with LysoTracker Red did not coincide with those obtained with any of the three laurdan probes (Figure 2). By contrast, the staining patterns obtained with Nile Red were essentially superimposable on those obtained with all three laurdan-derived probes, and in particular with laurdan: there was an almost perfect coincidence of the staining patterns obtained with laurdan and Nile Red (Figure 3). We conclude from this that the cytoplasmic vesicles detected with laurdan (and to a certain extent with M-laurdan) do not belong to the endolysosome pathway, as we initially postulated, but are lipid droplets.
These findings are supported by those of Owen and colleagues, who noted in the troubleshooting table of their article (Owen et al., 2012a) that lipid droplets tend to sequester laurdan. As previously documented by time-resolved microscopy (Ghosh et al., 2013), the environment inside lipid droplets is relatively viscous and very apolar, but not necessarily very ordered. Our findings are, therefore, a useful reminder that the solvatochromism of the laurdan-family probes reflects not the order but the polarity of their environment; in other words, the presence or absence of water molecules in the immediate environment of the probes.

To perform spectral decomposition on live cells labelled with two fluorescent dyes, we first acquired lambda stacks of cells labelled with either Nile Red or laurdan. The emission spectra of the dyes were then extracted from those stacks. Those emission spectra were then used to perform spectral decomposition on the lambda stacks acquired from cells stained with both dyes (bottom right panel).

Hela cells were grown on glass coverslips for four days before double labelling with LysoTracker Red and either laurdan (top row), M-laurdan (middle row) or C-laurdan (bottom row), as described in the Methods. The live cells, at 37°C, were then imaged over 30 channels: 29 fluorescence channels from 420–700 nm, and transmitted light (left column). Spectral decomposition was then performed on the lambda stacks using the PoissonNMF plugin and the spectra acquired with cells labelled with only one probe as references. The second column shows the signals attributed to the laurdan-family probes artificially coloured in green. The third column shows the signals attributed to LysoTracker Red artificially coloured in red. The fourth column shows a merge of the two signals, and the fifth column shows 10x magnifications of the areas indicated by the white squares in the fourth column.
Width of the squares, 10 µm.\n\nHela cells were grown on glass coverslips for four days before double labelling with Nile Red and either laurdan (top row), M-laurdan (middle row) or C-laurdan (bottom row), as described in the Methods. The live cells, at 37°C, were then imaged over 30 channels: 29 fluorescence channels from 420–700 nm, and transmitted light (left column). Spectral decomposition was then performed on the lambda stacks by using the PoissonNMF plugin and the spectra acquired with cells labelled with only one probe as references. The second column shows the signals attributed to the laurdan-family probes artificially coloured in green. The third column shows the signals attributed to Nile Red artificially coloured in red. The fourth column shows a merge of the two signals, and the fifth column shows 10x magnifications of the areas indicated by the white squares in the fourth column. Width of the squares, 10 µm.\n\nOf note, in the clumps of 15–20 cells labelled with either laurdan or M-laurdan (Figure 2 and Figure 3), the labelling intensity of the cells inside the clumps appeared somewhat weaker than the labelling of cells at the periphery. This may be due to inaccessibility of the probes to the cells inside the clumps, but more likely reflects a difference in the lipid composition of the cells in the crowded environment inside the clumps when compared to the cells at the periphery, as suggested previously (Frechin et al., 2015; Gray et al., 2015).\n\nTo reconstitute colour pictures from the stacks of images obtained in lambda mode, we used a chromatic LUT designed to match the wavelengths of the corresponding channels (Figure 4). For each of the four solvatochromic probes, we noticed differences in the colours emitted by various cellular compartments. 
For the laurdan-based probes, as expected from the results of several previous studies (Golfetto et al., 2013; Niko et al., 2016; Owen & Gaus, 2010; Owen et al., 2011; Yu et al., 1996), the cytoplasm was slightly greener than the blue PMs and lipid droplets. With Nile Red, however, there was a striking difference between the staining of the lipid droplets, which were bright yellow, and the staining of the intracellular membranes, which appeared red–orange. This solvatochromism of Nile Red in lipid droplets, which is due to their very hydrophobic nature, was documented previously (Greenspan & Fowler, 1985).

From left to right: Hela cells labelled with laurdan, M-laurdan, C-laurdan and Nile Red, respectively. Top row, transmitted light. Bottom row, recoloured emitted fluorescent light after acquisition by the multichannel spectral analyser. Colouring was achieved using the built-in ‘time-lapse colour coding’ plugin of ImageJ, after converting the 29 channel stacks into 29 frame stacks and a custom-made LUT called Rainbeau (Supplementary File 3). In designing this LUT, we attempted to match the wavelengths of the corresponding channels of the microscope’s spectral analyser.

These observations led us to wonder whether we could decompose the various spectral components of the different colours emitted by the laurdan-derived solvatochromic probes, using an approach similar to the one we used above to unmix the signals from two probes. Indeed, rather than a progressive shift in their emission maximum, the laurdan-derived probes have more of a bimodal response to changes in the dielectric constants of their environment, which is particularly well marked between the more water-exposed environment of the probes in lipid membranes in disordered phase and the more hydrophobic, or apolar, environment of the probes in more ordered membranes, whether Lo or gel phases (Bagatolli, 2013; Bagatolli et al., 2003; Parasassi & Gratton, 1995).
Thus, these bimodal fluorescence signals should be particularly amenable to spectral decomposition, and this might allow us to evaluate the proportion of ordered and disordered states of cell membranes.\n\nWe first compared the emission spectra obtained from fluorescently labelled DPPC–cholesterol (Lo) or DOPC (Ld) MLVs in a regular spectrofluorimeter to those obtained with the biphoton microscope in lambda mode from 480 to 600 nm for the three laurdan-based probes, and from 520 to 700 nm for Nile Red (Figure 5). The emission curves extracted using ImageJ software on the MLV images acquired on the biphoton microscope were remarkably similar to those obtained from the same MLVs in cuvettes in the spectrofluorimeter. Thus, the spectral analyser of the biphoton microscope can be used to analyse emission spectra in two dimensions at the submicrometer scale.\n\nThe four fluorescent lipid probes were inserted into either Lo or Ld MLVs and their emission curves were recorded in cuvettes on a spectrofluorimeter (thick lines), or imaged on the biphoton microscope in the same conditions as used for live cells. After defining appropriate ROIs in the lambda stacks, the emission curves of individual MLVs were extracted using the ImageJ software. The thin lines correspond to the average curves obtained from 6–10 separate MLVs (the error bars are smaller than the symbols). Lo, DPPC–chol MLVs (blue or orange); Ld, DOPC MLVs (green or red).\n\nWe then tried our decomposition approach on the most challenging task available to us, i.e. we performed spectral decomposition into four different channels on stacks of 29 images acquired after double staining with laurdan and Nile Red (Figure 6). 
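The ImageJ step of extracting an emission curve from an ROI, described above, amounts to averaging each spectral channel over the selected pixels and normalising to the peak. A minimal stand-in sketch (toy data, not our actual measurements):

```python
def roi_spectrum(stack, mask):
    """Mean emission intensity per spectral channel over the pixels selected
    by `mask` (a flat list of booleans), normalised to the peak channel,
    mimicking the manual ROI extraction performed in ImageJ."""
    n = sum(mask)
    spectrum = []
    for channel in stack:  # one flat image per spectral channel
        spectrum.append(sum(v for v, m in zip(channel, mask) if m) / n)
    peak = max(spectrum)
    return [s / peak for s in spectrum]

# Toy 3-channel lambda stack over 4 pixels; the ROI covers the first two pixels.
stack = [[2.0, 4.0, 0.0, 0.0],
         [6.0, 10.0, 1.0, 0.0],
         [3.0, 5.0, 0.5, 0.0]]
mask = [True, True, False, False]
spec = roi_spectrum(stack, mask)
```

Averaging several MLVs then simply means repeating this per vesicle and averaging the resulting curves.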
By using the four emission curves obtained with each probe on either DPPC–cholesterol (Lo) or DOPC (Ld) MLVs to perform spectral decomposition with the PoissonNMF plugin, we obtained a very convincing separation of the signals, where laurdan was found to be in an apolar, water-poor environment in lipid droplets and at the PM, whilst being much more exposed to water in the intra-cellular membranes. For Nile Red, there was effectively no detectable staining of the PM, but a marked dichotomy between hydrophobic lipid droplets and more hydrophilic intra-cellular membranes.\n\nThe PoissonNMF plugin was used to perform spectral decomposition on the lambda stack obtained from live cells double labelled with laurdan and Nile Red (see Figure 3). The reference curves used (lower panel) were obtained from MLVs labelled either with laurdan (Lo, DPPC–chol, cyan curve; Ld, DOPC, orange curve) or with Nile Red (Lo, DPPC–chol, green curve; Ld, DOPC, red curve). The settings used were all as described in Methods. Top left (cyan), laurdan in an apolar (hydrophobic) environment; bottom left (orange), laurdan in a water-exposed environment; top right (green), Nile Red in an apolar (hydrophobic) environment; bottom right (red), Nile Red in a water-exposed environment.\n\nFrom these results, we came to suspect that spectral decomposition could turn out to be a very powerful approach to analyse the cellular distribution of solvatochromic probes such as laurdan, and more importantly, the actual physical state of their environment in different cellular compartments.\n\nMost signalling events take place at the PM. Given its cellular distribution, C-laurdan appeared as the best suited probe to investigate such events. 
At the beginning of our study, however, we found that the intra-cellular distribution of C-laurdan, as well as the polar/apolar ratios in various cellular compartments, was highly variable, not only from one experiment to another, but even between cells on the same coverslip.

The first factor that we identified as having a major influence on the heterogeneity of our results was cellular stress. This could result, for instance, from increased osmotic pressure due to water evaporation when we carried out the staining steps in small volumes, from sudden temperature changes if we used buffers that had not been pre-warmed to the same temperature as that of the cells, or simply from unstable, oscillating temperatures when we used chambers of insufficient thermal inertia.

Intra-sample homogeneity was much improved by the use of heavy stainless steel chambers in which we could perform the staining steps with large volumes (0.5 – 1 ml) of buffers pre-warmed to 37°C (see Methods).

Despite these improvements, we still observed considerable variability in the staining patterns of the cells stained with C-laurdan. In particular, we noticed that these patterns tended to change over time, with the high prominence of the apolar component at the PM that we obtained at the very early time points (< 5 min) tending to fade, and even disappear almost completely over time (cyan colour in Figure 7 upper row). We found, however, that this did not occur when we used lower concentrations of the probe (see lower panel of Figure 7).

Hela cells growing on coverslips were placed in observation chambers in 1 ml of serum-free medium and placed in the thermostatic chamber of the biphoton microscope. After focusing on a chosen group of cells, C-laurdan was added to a final concentration of either 4 µM (top panels) or 200 nM (bottom panels).
Lambda stack images were then taken at the indicated times, with the laser power set at 2% for staining with 4 µM, and 4% for 200 nM C-laurdan. All other settings were as described in Methods and spectral decomposition was carried out as described in Methods. The pictures shown are dual-colour overlays. Cyan, C-laurdan in apolar environments; orange, C-laurdan in water-exposed environments.

Having observed this, we felt that it was important to document at what precise concentrations this phenomenon was occurring. In addition, we also wanted to optimise our staining protocols with all three probes for both the concentrations used and the incubation times. One of the major drawbacks of fluorescence microscopy, however, lies in the difficulty of generating quantitative measurements. Another difficulty of microscopy lies in the relatively small number of samples that can be analysed.

To circumvent this problem, we turned to flow cytometry, which allows quantitative analysis of many cells in numerous different samples. For each cell, emissions in the 400–500 and 500–560 nm windows were recorded simultaneously. We realised that the ratio of the signals recorded in the first channel over that in the second should vary with the proportion of the probe emitting fluorescence in apolar versus water-containing environments, similarly to the GP factor, which is commonly used for the analysis of laurdan-based studies. Given that the intensities of the signals are completely dependent on the instrument settings, however, the values obtained are purely relative, and can thus only be used to compare the samples of one experiment with one another.

As can be seen in Figure 8, when we incubated cells with increasing concentrations of C-laurdan for 60 min at 37°C, fluorescence intensities increased in both channels, but the 450/530 ratio decreased progressively, and very notably, for concentrations above 1.2 µM.
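The quantity tracked in these flow cytometry experiments is simply the per-sample ratio of the two mean fluorescence intensities. A sketch with made-up illustrative numbers (not our measured data), showing how preferential saturation of the 450 nm signal pulls the ratio down at high probe concentrations:

```python
def ratio_450_530(mfi_450, mfi_530):
    """Instrument-relative polarity index used in the flow cytometry
    experiments: MFI in the 450 nm channel over MFI in the 530 nm channel.
    Arbitrary units, only comparable within a single experiment."""
    return mfi_450 / mfi_530

# Hypothetical dose series: the 450 nm signal plateaus at high probe
# concentration (autoquenching) while the 530 nm signal keeps rising,
# so the ratio decreases monotonically. Values are illustrative only.
doses_uM = [0.2, 0.6, 1.2, 2.4]
mfi450 = [4000.0, 9000.0, 14000.0, 15000.0]
mfi530 = [2500.0, 6000.0, 10000.0, 14000.0]
ratios = [ratio_450_530(a, b) for a, b in zip(mfi450, mfi530)]
```

In this toy series the ratio falls from 1.6 towards ~1.07 as the dose increases, mirroring the trend described for C-laurdan above 1.2 µM.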
When we looked at the evolution of the 450/530 ratio over time in cells labelled with 200 nM of C-laurdan, we found that this ratio was very stable over time, and, if anything, tended to go up slightly for incubation times longer than 30 minutes (Figure 8A, upper right panel). The progressive disappearance of staining of the PM seen by microscopy at high concentrations of C-laurdan fits with the reduction in the 450/530 ratio seen by flow cytometry with increasing concentrations of the probe. Thus, taken together, our results strongly suggest that, in cells kept at 37°C, C-laurdan does not have an intrinsic affinity for an intra-cellular compartment in which it would get progressively trapped. A more likely explanation seems to be that some autoquenching of the probe occurs preferentially in the PM. This more marked tendency of C-laurdan to autoquench in a Lo environment was confirmed by staining MLVs with increasing concentrations of C-laurdan and measuring their fluorescence in a standard spectrofluorimeter. As can be seen in Figure 8B, the fluorescence intensities recorded plateaued above 10 µM of C-laurdan, and this was even more noticeable at 440 nm than at 490 nm, with the signal at 440 nm actually decreasing for DPPC–cholesterol MLVs stained with 20 µM of C-laurdan.

(A) FACS was used to quantify the mean fluorescence intensities (MFIs) of Hela cells labelled with C-laurdan (top), M-laurdan (middle) or laurdan (bottom). Blue squares indicate MFI at 450 nm and green circles indicate MFI at 530 nm, as plotted on the y-axis on the left side of each plot. Orange triangles indicate the ratio of 450/530 MFIs as plotted on the orange y-axis on the right side of each plot. The plots in the left column show dose–response curves after labelling cells at 37°C for 60 minutes. The plots in the right column show time courses of cells labelled at 37°C with either 200 nM C-laurdan, 500 nM M-laurdan or 4 µM laurdan.
The unfilled blue squares and orange triangles indicate measurements as for the filled symbols, but on cells treated with 5 mM MßCD for 30 min at 37°C before labelling. (B) The preferential autoquenching of C-laurdan in Lo environments, suggested by the drop in the 450/530 ratio in the FACS analysis of cells labelled with high concentrations of the probe, can also be seen in MLVs analysed on a standard spectrofluorimeter. Serial two-fold dilutions of C-laurdan in DMSO were added (1% vol), whilst vortexing, to aliquots of MLVs prepared either with DOPC (Ld, dotted lines) or with DPPC–chol (Lo, continuous lines). After 20 min incubation at 37°C, the emission spectra of each aliquot were recorded in quartz cuvettes in a standard spectrofluorimeter (excitation: 360 nm), and the fluorescence intensities at 440 nm (blue lines) and 490 nm (green lines) were plotted against the concentration of C-laurdan.

When similar experiments were performed with cells labelled either with laurdan or M-laurdan, we found evidence for some autoquenching occurring with M-laurdan at concentrations above 600 nM, but not with laurdan (Figure 8A, middle and bottom panels). It can be noted that the tendency to autoquench correlates with the efficiency in labelling cells. In the experiment shown, the autoquenching becomes noticeable for fluorescence levels above 10,000 for both C-laurdan and M-laurdan. The reason why autoquenching is not seen for laurdan may thus be that the levels of staining with that probe never reach these values.

When it came to optimising the times necessary for staining cells, C-laurdan turned out to be a much simpler probe to use. Indeed, for C-laurdan, levels of staining and the 450/530 ratio were found to be remarkably stable between 10 and 30 min, but this was not the case for the other two probes. First, levels of staining with laurdan and M-laurdan kept increasing over time all the way to 60 minutes.
Additionally, for M-laurdan the 450/530 ratios decreased during the first 20 minutes, probably as a consequence of the probe's diffusion from the surface to the intracellular membranes.

In order to validate the use of cytometry to evaluate the polar/apolar ratio in cell membranes, we performed staining of cells that had been treated with methyl-β-cyclodextrin (MßCD) to deplete them of cholesterol (dotted lines and hollow symbols in the right-hand panels of Figure 8A). As expected, this resulted in very significant decreases of the 450/530 ratio for all three probes, with time courses that paralleled those recorded on untreated cells.

Based on these results, we elected to use 200 nM of C-laurdan, 500 nM of M-laurdan or 2 µM of laurdan as the concentrations that give the best compromise between good levels of staining and minimal autoquenching, and to incubate the cells with the probes for at least 45 min at 37°C before observation for the latter two probes, and 20 minutes for C-laurdan. We also find that maintaining the cells in the staining medium for imaging gives the best results. Of note, if rinsing needs to be performed, it should be done with serum-free medium, or the labelling will rapidly fade, especially for C-laurdan. In our experience, however, rinsing is not necessary, since the probes are only fluorescent when inserted in a lipid environment.

Having established the appropriate concentrations and incubation times for the three probes, we could then compare the staining patterns obtained on Hela cells after spectral decomposition. The results of the decomposition consist of two-channel stacks, which correspond, respectively, to the fraction of the probe in either apolar or more water-exposed environments. After numerous tests and surveys among users to find out what two-colour combination worked best for most people, we chose cyan for the first channel and orange for the second.
As can be seen in Figure 9, all three probes resulted in very similar overall patterns, but with some noticeable differences: whilst laurdan tends to accumulate inside cells, and very noticeably inside lipid droplets, C-laurdan labels the PM more efficiently, and M-laurdan has an intermediate behaviour.

Live Hela cells grown on glass coverslips for two days were labelled with either 2 µM laurdan (top row), 500 nM M-laurdan (middle row) or 200 nM C-laurdan (bottom row), as described in Methods. Lambda stacks were then acquired by biphoton microscopy of the live cells at 37°C, and spectral decomposition was carried out, all with the standard conditions specified in Methods. Left column, transmitted light; second column (cyan), decomposed fluorescent signal corresponding to the probes in an apolar (hydrophobic) environment; third column (orange), decomposed fluorescent signal corresponding to the probes in a polar (water-exposed) environment; fourth column, overlay of the signals shown in the second and third columns.

Whilst the cyan-orange two-colour representation is already quite informative, we found that it was not well suited to evaluating the ratio of the two channels in the various cellular compartments. To this end, we generated the Fraction Mapper plugin for ImageJ, which proceeds in successive steps (Figure 10). Starting from a two-channel image stack, such as the ones obtained with the PoissonNMF plugin, Fraction Mapper first calculates the fraction of signal in the first channel for every pixel (i.e. value of pixel in first channel / sum of values for that pixel in first and second channel). Fraction Mapper then colour-codes this fraction according to a chosen LUT, and gives each pixel the intensity of the sum of the two components. For our purpose, we find that a LUT based on six discrete colours from blue to red gives the clearest results (Supplementary File 2).
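The core computation just described can be sketched as follows (a simplified stand-in for the plugin, leaving out the LUT rendering, the 2D histogram and the brightness scaling details):

```python
def fraction_map(ch1, ch2, threshold=0.0):
    """For each pixel of a two-channel unmixed stack, the fraction of
    total intensity carried by the first channel (ch1/(ch1+ch2)),
    paired with the total intensity used for brightness scaling.
    Pixels with total signal at or below the threshold are masked."""
    out = []
    for a, b in zip(ch1, ch2):
        total = a + b
        frac = a / total if total > threshold else None  # None = masked pixel
        out.append((frac, total))
    return out

# Toy 2x2 image flattened to a list of pixels (apolar and polar channels).
apolar = [30.0, 10.0, 0.0, 5.0]
polar = [10.0, 30.0, 0.0, 15.0]
fm = fraction_map(apolar, polar)
```

Each resulting pair is then what the plugin colour-codes: the fraction selects the LUT colour, the total intensity sets the pixel brightness.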
Compared to relying just on the fluorescence recorded in the 440 and 490 nm channels, as classically used when calculating GP and shown on the panels at the bottom of Figure 10, the spectral decomposition approach based on combining PoissonNMF and Fraction Mapper can clearly result in a very marked improvement of the resolution between different cellular compartments.

Starting from a stack of two channels, Fraction Mapper calculates the fraction of the intensity in the first channel for every pixel (ch1/(ch1+ch2)), which can be obtained as a fraction map. A 2D plot of the intensities in each of the channels can also be generated. The plugin then colour-codes the numbers corresponding to the fraction according to a chosen LUT, and gives the pixels the light intensity of the sum of the two channels. The LUT used was based on six discrete colours from blue to red (LUT ‘6 colours 2,3,3,3,3,2’ available for download as Supplementary File 2). Bottom row: Starting from the 440 nm and 490 nm channels, classically used for calculating GP, the picture obtained is much less informative. Of note, GP values are closely related to the fraction values we use in this study: the GP can be obtained simply by multiplying the fraction by two and subtracting 1. Owen et al. have produced and published a ‘GP Calculator’ plugin (Owen et al., 2011) that provides very similar results to those obtained with Fraction Mapper, albeit by a more indirect approach.
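The relation between GP and the fraction noted above follows directly from the definitions: with f = I440/(I440+I490), 2f − 1 = (I440 − I490)/(I440 + I490) = GP. A small sketch of the conversion both ways:

```python
def gp_from_intensities(i440, i490):
    """Classical generalised polarisation from the two channel intensities."""
    return (i440 - i490) / (i440 + i490)

def fraction_to_gp(f):
    """GP = 2*fraction - 1, the relation noted in the text."""
    return 2.0 * f - 1.0

def gp_to_fraction(gp):
    """Inverse mapping: fraction = (GP + 1) / 2."""
    return (gp + 1.0) / 2.0
```

So a fraction of 0.5 (equal intensity in the two channels) corresponds to a GP of 0, and the two representations carry exactly the same information on a different scale.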
The 10× magnifications correspond to the 52×52 pixel regions indicated by white squares in the adjacent image; each pixel is 100×100 nm and the whole pictures cover 52×52 microns.\n\nWith the representation obtained using Fraction Mapper, one can rapidly see that, in Hela cells stained with the three laurdan-derived probes (Figure 11), although the distribution of the three probes is quite different, the overall picture is basically the same: intra-cellular compartments harbour a majority of water-exposed probes (yellow, orange and red), corresponding most likely to membranes predominantly in Ld states, whilst the regions near the PM show a strong dominance of apolar environments, which presumably correspond mostly to Lo states.\n\nIn the middle column, the Fraction Mapper plugin was used on the two-colour stacks from Figure 9 (shown in left column), which were obtained by spectral decomposition using PoissonNMF. The right column shows 10× magnifications of the 52×52 pixel regions indicated by white squares in the middle column. Each pixel is 100×100 nm and the whole pictures cover 52×52 microns. Top row, 2 µM laurdan; middle row, 500 nM M-laurdan; bottom row, 200 nM C-laurdan.\n\nUpon ticking the corresponding box, Fraction Mapper will produce a ‘Fraction Map’ image, which can then be used to quantify the actual numerical values of the fraction of the first channel in a defined area of the pictures, such as individual pixels or ROIs drawn on particular regions of the cells.\n\nFraction Mapper can also generate 2D plots based on the intensity of each pixel in the two channels of the stacks of images used as starting material. On those 2D plots, it is possible to draw regions of interest or gates and to then use the ‘Gate to Image’ plugin tool to generate an image based on the fraction map, but containing only the pixels falling within the gate. 
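As a rough illustration of what the ‘Gate to Image’ tool does (a NumPy sketch under assumed behaviour, not the plugin's actual code; the function name and gate are hypothetical), a gate drawn on the 2D intensity plot can be modelled as a boolean condition on the two channel values, and the gated image as the fraction map with all other pixels blanked out:

```python
import numpy as np

def gated_fraction_map(ch1, ch2, gate, eps=1e-12):
    """Fraction map (ch1/(ch1+ch2)) restricted to pixels whose
    (ch1, ch2) intensities satisfy the gate; other pixels set to NaN."""
    frac = ch1 / np.maximum(ch1 + ch2, eps)
    mask = gate(ch1, ch2)
    return np.where(mask, frac, np.nan)

# Toy 1x3 image; the (hypothetical) gate keeps only reasonably bright pixels
ch1 = np.array([[90.0, 10.0, 1.0]])
ch2 = np.array([[10.0, 90.0, 1.0]])
bright = lambda a, b: (a + b) > 50
gated = gated_fraction_map(ch1, ch2, bright)  # fractions 0.9 and 0.1 kept; dim pixel -> NaN
```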
This function was implemented because we felt that, considering the very sharp regional differences seen between the PM and the intra-cellular membranes, we might be able to identify separate populations of pixels on this type of graph, much like what can be done for cells in flow cytometry. This has, however, proven not to be the case: as long as the cells are alive and healthy, and as long as there are no over-exposed areas in the field, we observe no obvious clouds of dots, but always a regular distribution of the pixels instead. This has been true for all the numerous dot plots we examined in the course of this study, acquired on a variety of cell lines.\n\nThe Fraction Mapper plugin provided us with the capacity to rapidly analyse the physical state of the cellular membranes we were imaging with much improved clarity and precision compared to the two-colour cyan-orange pictures obtained through spectral decomposition. In turn, this allowed us to adjust the settings we were using to ensure the reproducibility of our results; in other words, to make sure that we were not damaging the cells during the acquisition process. Indeed, one of the major pitfalls of using biphoton excitation lies with the very high intensities of light necessary for the efficient simultaneous absorption of two photons. This is of particular concern in live cells because this type of illumination can very easily induce cell damage, in particular at the level of membranes. 
For our study, which focuses on the physical state of biological membranes, the localized delivery of heat was also of particular concern to us because, as anybody who has ever left butter in direct sunshine, or tried to spread butter on toast when it comes straight out of the fridge knows only too well, the physical state of lipids is especially sensitive to changes in temperature.\n\nUltimately, we found that the systematic sequential acquisition of two pictures, coupled with the Fraction Mapper representation, was a very effective way to adjust our settings to obtain good signals, whilst ensuring that we were not inducing damage, or even just modifications to the membranes of the cells we were studying (Figure 12). For example, we find that using laser powers over 6% can very rapidly induce membrane blebs, and we therefore never use powers over 4%. Although biphoton excitation is already limited to a depth of about 1 μm, this resolution along the z axis can be improved a little further by closing the pinhole normally used for confocal acquisition with one-photon excitation. Whilst we did find that closing the pinhole could marginally improve the resolution in the pictures, we also found that closing it past a value of 150 resulted in an excessive loss of photons, which required increasing the laser power above 4%. Other factors that we identified as possibly resulting in altered pictures between the first and the second passage are excessively long dwell times per pixel or an excessive number of averages, if only because both can also result in very long acquisition times. In the end, the standard settings we have elected to use are as follows: laser power: 2–4% (corresponding to 3–6 mW average power delivered, as measured at the level of the sample), pixel dwell time: 6.3 µs, average on 4 measures, pinhole: 150. 
With such settings, the acquisition of a 512×512 picture takes 15 seconds, and as can be seen in Figure 12, the state of the cells remains unaltered during the second acquisition compared to the first passage, whilst the micro-heterogeneities seen at the level of individual 100 nm pixels are themselves remarkably fluctuating.\n\nHela cells grown for two days on glass coverslips were stained with 200 nM C-laurdan and imaged twice sequentially using the standard conditions defined in Methods (power 4%, speed 6, 4 averages, pinhole 150). With these settings, the acquisition procedure takes 15 seconds. The second pictures were thus taken 15 seconds after the first ones. Each pixel is 100×100 nm, and each picture, taken at zoom 4, covers 52×52 microns.\n\nHaving established these optimized settings and conditions using Hela cells, we turned our attention to a panel of other cell lines, and found very similar patterns in all the mammalian cells we looked at (see Figure 13 for three examples).\n\nHuman HeLa cells (first column), HCT116 cells (second column), foreskin fibroblasts in primary culture (third column) and mouse L(tk-) cells (fourth column) were all grown on glass coverslips before staining with C-laurdan, imaging and analysing the images by the standard procedure, as described in Methods. Top row: overlay of the two channels obtained after spectral decomposition with PoissonNMF (cyan, apolar environment; orange, polar, water-exposed environment). Middle row: the two-channel stacks were processed with the Fraction Mapper plugin using the six-colour 2,3,3,3,3,2 LUT shown on the right-hand side. Bottom row: 10× magnifications of the 52×52 pixel regions indicated by white squares in the middle row. Each pixel is 100×100 nm, and each picture, taken at zoom 4, covers 52×52 microns.\n\nConsidering the importance of temperature in regulating the state and order in lipid bilayers, we wondered what the situation would be in cells from ectotherms. 
To this end, we turned our attention to cells from Drosophila melanogaster, and to Dictyostelium discoideum, which both grow at room temperature. For both of these cell types, we found that we needed to use slightly higher levels of the C-laurdan probe to reach satisfactory levels of staining (which, however, did not lead to any detectable autoquenching of the probe). The fact that these organisms are less efficiently labelled by C-laurdan may be due to their lipid composition being different from that of mammalian cells, such as different sterols and lipid chains better adapted to life at more variable temperatures. In line with this hypothesis, the results shown in Figure 14 reveal that the C-laurdan probe, when inserted in the membrane compartments of these two very different organisms, tends to emit fluorescent light that is blue-shifted compared to when it is in mammalian cells: the cytoplasm appears dominantly yellow rather than red-orange, and the PMs show a high prominence of dark blue rather than cyan pixels. On the whole, C-laurdan thus seems to be slightly less exposed to water when it is inserted in the membranes of those cells than in the membranes of mammalian cells, which would suggest that the lipid membranes of these ectotherm organisms tend to be a bit less permissive for water to penetrate deep into the lipid bilayers. At the cellular scale, however, the overall picture remains the same, with the environment of the PMs of the cells of these two ectotherms being much more apolar than their intra-cellular compartments.\n\nLeft column: Drosophila melanogaster Kc cells were grown at room temperature for 48 hours on PLL-coated coverslips in six-well plates before being transferred to an observation chamber and stained with 400 nM C-laurdan in serum-free medium. 
Middle and right columns: Dictyostelium discoideum amoebae (AX2 strain) growing at room temperature were placed directly into an observation chamber with a glass coverslip bottom for two hours in axenic medium before staining with 800 nM C-laurdan in Sörensen's buffer (see Methods). The pictures in the right column were taken 15 seconds after those in the middle column; they show that the motility of these photoreactive amoebae, induced by the first acquisition, resulted in no noticeable modification of the overall state of their lipid membranes. The microscope settings and analyses were identical to those described previously for mammalian cells. All pictures in this figure were taken at zoom 6. Whole picture, 35×35 µm; pixel size, 69×69 nm.\n\n\nDiscussion\n\nThe capacity to perform spectral decomposition on lambda-stack images of cells stained with solvatochromic membrane probes of the laurdan family provides much clearer results than simply relying on the 440 and 490 nm channels. The idea of making use of the additional information contained in a lambda stack acquired via a spectral analyzer, compared to simply using the 440 and 490 nm channels, was recently explored by Sezgin et al. (2015). Whilst the spectral imaging approach described by these authors did result in improved accuracy for the detection of differences in polarity in model and cellular membranes, they did, however, find that this approach was not very well suited for staining of live cells with C-laurdan. 
After attempting to use their “GP plugin”, we concur with their conclusions, and find that our approach based on spectral decomposition with PoissonNMF provides much clearer pictures, as well as being much less demanding in terms of computing time.\n\nOne important point to bear in mind, however, is that, although the results we report here rely on precise computer-based mathematical calculations, the numbers obtained should only be taken as a rough indication of the actual ratio of the probes in apolar and water-exposed environments, and this, in turn, should only be considered as a rough proxy for the ratio of membranes in disordered versus ordered states. Among the many reasons why this type of approach cannot be rigorously quantitative, the main one is that the results of the decomposition procedure are very sensitive to the settings used, including the power of the laser, the speed and number of acquisitions, or the actual reference curves used for the decomposition process. For example, although the emission curves of the three laurdan-derived probes are very similar to one another, we found that we could get very noticeably different results if we inadvertently used the reference curves for one probe to analyse the results obtained with another probe. This seems particularly relevant given that the reference curves we used for each probe were acquired on simple DOPC and DPPC-chol MLVs (in other words, on completely artificial membrane systems), and that sphingolipids, which are very prominent in biological membranes, can somewhat alter the emission spectra of those laurdan-based probes (Bagatolli et al., 1998; Mazeres et al., 2014). 
All in all, however, based on numerous experiments performed on a whole variety of cell lines, and in line with the results of several other previous studies (Dodes Traian et al., 2012; Golfetto et al., 2013; Kucherak et al., 2010; Niko et al., 2016; Owen & Gaus, 2010; Owen et al., 2011; Yu et al., 1996), we feel that we can very confidently state that, in plasma membranes, there is a very significant dominance of organized domains, whilst lipid bilayers in intra-cellular compartments are mostly in a liquid disordered state.\n\nThe degree of order inside lipid bilayers is directly linked to their local viscosity, and this can significantly influence the fluorescence lifetime of a variety of lipid fluorescent probes. Because fluorescence lifetimes are effectively independent of the probes’ concentration, imaging based on fluorescence lifetimes is often considered more reliable than relying on spectral differences to perform quantitative analyses. Such approaches have used a variety of solvatochromic probes, such as laurdan (Golfetto et al., 2013; Owen & Gaus, 2010; Owen et al., 2012b), di-4-ANEPPDHQ (Owen & Gaus, 2010), F2N12S (Kilin et al., 2015), PA (Niko et al., 2016) or different molecular rotors (Dent et al., 2016, and references therein). The results of these various studies all concur in showing that the PMs of eukaryotic cells are a much more organized environment (and more impermeable to water) than the membranes of the intra-cellular compartments. This is in very good agreement with the ordering properties of cholesterol, which drives lipid bilayers towards a liquid organized state, since there is a gradient of cholesterol (or other sterols in plants, fungi or invertebrates) going from the nucleus to the plasma membrane (van Meer et al., 2008). In the nuclear envelope and endoplasmic reticulum, sterols represent less than 10% of the lipids. 
They get progressively enriched through the Golgi apparatus, account for more than 30% of lipids at the PM, and can even make up almost 50% of PMs’ lipids in some cases. In the reverse direction, i.e. during endocytosis, sterols get progressively eliminated from endosomes, and are virtually absent from lysosomes (Darwich et al., 2014).\n\nUsing quantitative analyses based on time-resolved lifetime imaging coupled to phasor analyses, Owen and colleagues famously evaluated that the PM of live mammalian cells comprises approximately three quarters ordered lipid domains (Owen et al., 2012b). Our results appear to be in very good accordance with this previous study, and with all the others cited above. A major drawback of approaches based on fluorescence lifetime is, however, the long acquisition time inherent to the technique of time-correlated single-photon counting (TCSPC). Although our approach based on spectral decomposition and fraction map analysis may not be as rigorously quantitative as those based on time-resolved approaches, one of its major advantages lies with the speed of data acquisition. Using the standard settings described here, a whole 512×512 picture takes 15 seconds to acquire, but this could be considerably accelerated by simply reducing the size of the imaging field, and even further by shortening the pixel dwell time by a factor of two or three without too much effect on the signal-to-noise ratio. This type of approach would thus seem very well suited to more dynamic studies, such as documenting the cellular responses to temperature shifts or to focalized stimuli.\n\nRegarding the dominance of a disordered state of the intra-cellular membrane compartments, this is not entirely surprising since most of those membranes would be expected to correspond to the ER, where it is known that there are very low levels of cholesterol. 
We were, however, somewhat surprised not to observe readily detectable peri-nuclear areas of decreased polarity corresponding to the Golgi or to endosomes (Niko et al., 2016). In future work, we plan to address such questions by making use of red-fluorescent proteins targeted to various cellular compartments. With such markers, given the capacity of our system to separate red fluorescent signals from those emitted by the laurdan-derived probes (see Figure 2, Figure 3 and Figure 6), we should be in a position to explore the degree of order of various intra-cellular compartments, with a particular emphasis on the ER > Golgi > PM exocytosis axis.\n\nIn most of our pictures, the thickness of PMs appears to span over several pixels, corresponding to several hundred nanometers. This is far wider than the thickness of a lipid bilayer, i.e. 4–10 nm, but this is not surprising given that we are not using super-resolution microscopy. Indeed, the limit of resolution in our pictures corresponds to 0.4 times the wavelength divided by the numerical aperture of the microscope objective, i.e. 0.4×500/1.2 = 167 nm at best, corresponding to two pixels in most of our pictures. 
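The back-of-the-envelope resolution estimate above is easy to reproduce. The snippet below is an illustrative calculation (the function name is hypothetical, and the 0.4·λ/NA prefactor is simply the one used in the text) confirming the ~167 nm figure and converting it into pixels at the 100 nm pixel size used here.

```python
def lateral_resolution_nm(wavelength_nm, numerical_aperture, prefactor=0.4):
    """Approximate lateral resolution limit, as used in the text:
    resolution ~ prefactor * wavelength / NA."""
    return prefactor * wavelength_nm / numerical_aperture

res = lateral_resolution_nm(500, 1.2)  # emission ~500 nm, NA 1.2 objective
pixels = res / 100                     # at a pixel size of 100 nm
print(round(res), round(pixels))       # prints: 167 2
```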
In this regard, although it may be tempting to ponder whether the pixel-wide heterogeneities seen both at the level of the cytoplasm and the PM in all of our Fraction Mapper results could correspond to the famously elusive microdomains, this can simply not be the case, and those heterogeneities must thus result from noise, since we are at the limits of sensitivity of this type of acquisition procedure.\n\nAnother factor that can contribute to increasing the apparent width of the membranes is that, even if the depth of the biphoton excitation is very limited (in other words, the size of the voxel along the z axis), this depth is still of the order of one micron (as mentioned earlier, closing the pinhole of the confocal microscope can result in a reduction of this thickness, but with the immediate drawback of dramatically reducing the intensity of the captured light). Over such a thickness of several hundred nanometers, the PM is therefore very unlikely to be completely flat and perfectly aligned with the microscope’s objective. It is therefore not so surprising that the areas of high hydrophobicity that surround the cells and are expected to correspond to the PM should span multiple pixels, corresponding to several hundred nanometers. In addition, the intra-cellular membrane compartments that are adjacent to the PM might become superimposed with the PM, and make it look wider than it actually is. Such compartments would be expected to correspond mostly to vesicles of exocytosis coming from the Golgi, and of endocytosis coming straight from the PM (Darwich et al., 2014). Both those types of vesicles would contain significant levels of cholesterol, and hence be likely to represent more apolar environments compared to the membranes of the ER, which contain very low levels of cholesterol. On the other hand, the superimposition of mostly-disordered intra-cellular membranes with the plasma membrane would result in a reduction of its apparent order. 
In line with this hypothesis, in most of the pictures we have obtained, the dark blue pixels, which are expected to correspond to very ordered membranes, are mostly found on the outer edge of cells (zooms in Figure 12–Figure 14), i.e. in areas where the PM cannot be superimposed with intra-cellular membranes. In turn, one may wonder how functional microdomains (aka rafts) may actually form in an environment that is already so dominantly in a liquid organized form. Whilst some have expressed the view that this may rely on the coalescence of small discontinuous domains (Owen et al., 2012b), or even on the formation of ‘reverse rafts’ (Wassall & Stillwell, 2009), another possibility lies with the formation of even more densely organized domains (de Almeida & Joly, 2014; Joly, 2004).\n\n\nSoftware and data availability\n\nSource code for the Fraction Mapper plugin: https://github.com/farzadf58/FractionMapper/tree/1.0\n\nArchived source code as at time of publication: doi: 10.5281/zenodo.581815 (Fereidouni, 2017)\n\nLicense: MIT\n\nDataset 1: Intermediary flow cytometry data for Figure 8: Using flow cytometry (FACS) to quantify staining intensities at 450 and 530 nm, as well as the ratio between the two. doi: 10.5256/f1000research.11577.d162294 (Mazeres et al., 2017).",
"appendix": "Author contributions\n\n\n\nSM prepared the MLVs, provided the technical expertise for the use of the biphoton microscope and the spectrofluorimeters, as well as the ImageJ and Excel software.\n\nFF developed the Fraction Mapper plugin.\n\nEJ conceived and performed the experiments, interpreted the data, assembled the figures and wrote the manuscript.\n\nAll authors contributed to the revision and verification of the manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nSM’s salary was paid by the CNRS and EJ’s salary by the INSERM. The funds for this study were provided by the Membrane and DNA Dynamics team budget, which is directed by Laurence Salomé.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgments\n\nThe authors thank Laurence Salomé and Evert Haanappel for enriching discussions, Andrey Klymchenko and Rodrigo de Almeida for their constructive criticisms of the manuscript in preparation, Carol Featherstone for her comments and corrections on the manuscript, Richard Neher for writing the PoissonNMF plugin and for his advice on its use, the TRI platform for providing us access to the biphoton microscope, spectrofluorimeters and flow cytometer, Laure Gibot (Toulouse) for the HFF cells, Vanessa Gobert (Toulouse) for the Drosophila Kc cells, Francois Letourneur (Montpellier) for the Dictyostelium AX2 cells.\n\n\nSupplementary material\n\nSupplementary File 1: Film S1: In live Hela cells kept at 37°C, the intra-cellular vesicular bodies labelled by laurdan are very mobile. Hela cells were grown on coverslips for two days before labelling them with laurdan and imaging in lambda mode with our biphoton microscope as described in Methods. 10 successive pictures were taken every 30 seconds. 
The channels centred on 442 nm (left panel), 491 nm (center panel) and that for transmitted light were then extracted from the resulting stack, and recombined to create this movie. The fact that the vesicles are much more visible at 442 nm than at 491 nm suggests that they correspond to a very hydrophobic environment.\n\nSupplementary File 2: 6 colours 2,3,3,3,3,2.lut. Look up table for the presentation of Fraction Mapper results. This LUT is based on six discrete colours from blue to red.\n\nSupplementary File 3: Rainbeau.lut. Look up table used for recoloring the image stacks acquired in lambda mode over 29 channels between 418 nm and 700 nm.\n\n\nReferences\n\nAli MR, Cheng KH, Huang J: Ceramide drives cholesterol out of the ordered lipid bilayer phase into the crystal phase in 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine/cholesterol/ceramide ternary mixtures. Biochemistry. 2006; 45(41): 12629–12638. PubMed Abstract | Publisher Full Text\n\nBagatolli LA: Fluorescent methods to study biological membranes. (Heidelberg: Springer), 2013; 13. Publisher Full Text\n\nBagatolli LA, Gratton E, Fidelio GD: Water dynamics in glycosphingolipid aggregates studied by LAURDAN fluorescence. Biophys J. 1998; 75(1): 331–341. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBagatolli LA, Sanchez SA, Hazlett T, et al.: Giant vesicles, Laurdan, and two-photon fluorescence microscopy: evidence of lipid lateral separation in bilayers. Methods Enzymol. 2003; 360: 481–500. PubMed Abstract | Publisher Full Text\n\nDarwich Z, Klymchenko AS, Dujardin D, et al.: Imaging lipid order changes in endosome membranes of live cells by using a Nile Red-based membrane probe. RSC Adv. 2014; 4: 8481–8488. Publisher Full Text\n\nde Almeida RF, Joly E: Crystallization around solid-like nanosized docks can explain the specificity, diversity, and stability of membrane microdomains. 
Front Plant Sci. 2014; 5: 72. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDent MR, López-Duarte I, Dickson CJ, et al.: Imaging plasma membrane phase behaviour in live cells using a thiophene-based molecular rotor. Chem Commun (Camb). 2016; 52(90): 13269–13272. PubMed Abstract | Publisher Full Text\n\nDodes Traian MM, González Flecha FL, Levi V: Imaging lipid lateral organization in membranes with C-laurdan in a confocal microscope. J Lipid Res. 2012; 53(3): 609–616. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFereidouni F: farzadf58/FractionMapper 1.0 [Data set]. Zenodo. 2017. Data Source\n\nFrechin M, Stoeger T, Daetwyler S, et al.: Cell-intrinsic adaptation of lipid composition to local crowding drives social behaviour. Nature. 2015; 523(7558): 88–91. PubMed Abstract | Publisher Full Text\n\nGaus K, Chklovskaia E, Fazekas de St Groth B, et al.: Condensation of the plasma membrane at the site of T lymphocyte activation. J Cell Biol. 2005; 171(1): 121–131. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGhosh S, Chattoraj S, Mondal T, et al.: Dynamics in cytoplasm, nucleus, and lipid droplet of a live CHO cell: time-resolved confocal microscopy. Langmuir. 2013; 29(25): 7975–7982. PubMed Abstract | Publisher Full Text\n\nGolfetto O, Hinde E, Gratton E: Laurdan fluorescence lifetime discriminates cholesterol content from changes in fluidity in living cell membranes. Biophys J. 2013; 104(6): 1238–1247. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGray EM, Díaz-Vázquez G, Veatch SL: Growth Conditions and Cell Cycle Phase Modulate Phase Transition Temperatures in RBL-2H3 Derived Plasma Membrane Vesicles. PLoS One. 2015; 10(9): e0137741. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGreenspan P, Fowler SD: Spectrofluorometric studies of the lipid probe, nile red. J Lipid Res. 1985; 26(7): 781–789. 
PubMed Abstract\n\nJay AG, Hamilton JA: Disorder Amidst Membrane Order: Standardizing Laurdan Generalized Polarization and Membrane Fluidity Terms. J Fluoresc. 2017; 27(1): 243–249. PubMed Abstract | Publisher Full Text\n\nJoly E: Hypothesis: could the signalling function of membrane microdomains involve a localized transition of lipids from liquid to solid state? BMC Cell Biol. 2004; 5: 3. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKilin V, Glushonkov O, Herdly L, et al.: Fluorescence lifetime imaging of membrane lipid order with a ratiometric fluorescent probe. Biophys J. 2015; 108(10): 2521–2531. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKim HM, Choo HJ, Jung SY, et al.: A two-photon fluorescent probe for lipid raft imaging: C-laurdan. Chembiochem. 2007; 8(5): 553–559. PubMed Abstract | Publisher Full Text\n\nKlymchenko AS, Kreder R: Fluorescent probes for lipid rafts: from model membranes to living cells. Chem Biol. 2014; 21(1): 97–113. PubMed Abstract | Publisher Full Text\n\nKucherak OA, Oncul S, Darwich Z, et al.: Switchable nile red-based probe for cholesterol and lipid order at the outer leaflet of biomembranes. J Am Chem Soc. 2010; 132(13): 4907–4916. PubMed Abstract | Publisher Full Text\n\nLe Guyader L, Le Roux C, Mazères S, et al.: Changes of the membrane lipid organization characterized by means of a new cholesterol-pyrene probe. Biophys J. 2007; 93(12): 4462–4473. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLingwood D, Simons K: Lipid rafts as a membrane-organizing principle. Science. 2010; 327(5961): 46–50. PubMed Abstract | Publisher Full Text\n\nMacgregor RB, Weber G: Fluorophores in polar media: spectral effects of the Langevin distribution of electrostatic interactions. Annals of the New York Academy of Sciences. 1981; 366(1): 140–154. 
Publisher Full Text\n\nMazeres S, Fereidouni F, Joly E: Dataset 1 in: Using spectral decomposition of the signals from laurdan-derived probes to evaluate the physical state of membranes in live cells. F1000Research. 2017. Data Source\n\nMazeres S, Joly E, Lopez A, et al.: Characterization of M-laurdan, a versatile probe to explore order in lipid membranes [version 2; referees: 2 approved, 1 approved with reservations]. F1000Res. 2014; 3: 172. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNeher RA, Mitkovski M, Kirchhoff F, et al.: Blind source separation techniques for the decomposition of multiply labeled fluorescence images. Biophys J. 2009; 96(9): 3791–3800. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNiko Y, Didier P, Mely Y, et al.: Bright and photostable push-pull pyrene dye visualizes lipid order variation between plasma and intracellular membranes. Sci Rep. 2016; 6: 18870. PubMed Abstract | Publisher Full Text | Free Full Text\n\nOwen DM, Gaus K: Optimized time-gated generalized polarization imaging of Laurdan and di-4-ANEPPDHQ for membrane order image contrast enhancement. Microsc Res Tech. 2010; 73(6): 618–622. PubMed Abstract | Publisher Full Text\n\nOwen DM, Rentero C, Magenau A, et al.: Quantitative imaging of membrane lipid order in cells and organisms. Nat Protoc. 2011; 7(1): 24–35. PubMed Abstract | Publisher Full Text\n\nOwen DM, Rentero C, Magenau A, et al.: Quantitative imaging of membrane lipid order in cells and organisms. Nat Protoc. 2012a; 7(1): 24–35. PubMed Abstract | Publisher Full Text\n\nOwen DM, Williamson DJ, Magenau A, et al.: Sub-resolution lipid domains exist in the plasma membrane and regulate protein diffusion and distribution. Nat Commun. 2012b; 3: 1256. PubMed Abstract | Publisher Full Text\n\nParasassi T, Gratton E: Membrane lipid domains and dynamics as detected by Laurdan fluorescence. J Fluoresc. 1995; 5(1): 59–69. 
PubMed Abstract | Publisher Full Text\n\nRosetti CM, Mangiarotti A, Wilke N: Sizes of lipid domains: What do we know from artificial lipid membranes? What are the possible shared features with membrane rafts in cells? Biochim Biophys Acta. 2017; 1859(5): 789–802. PubMed Abstract | Publisher Full Text\n\nSezgin E, Levental I, Mayor S, et al.: The mystery of membrane organization: composition, regulation and roles of lipid rafts. Nat Rev Mol Cell Biol. 2017; 18(6): 361–374. PubMed Abstract | Publisher Full Text\n\nSezgin E, Waithe D, Bernardino de la Serna J, et al.: Spectral imaging to measure heterogeneity in membrane lipid packing. Chemphyschem. 2015; 16(7): 1387–1394. PubMed Abstract | Publisher Full Text | Free Full Text\n\nvan Meer G, Voelker DR, Feigenson GW: Membrane lipids: where they are and how they behave. Nat Rev Mol Cell Biol. 2008; 9(2): 112–124. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWassall SR, Stillwell W: Polyunsaturated fatty acid-cholesterol interactions: domain formation in membranes. Biochim Biophys Acta. 2009; 1788(1): 24–32. PubMed Abstract | Publisher Full Text\n\nWeber G, Farris FJ: Synthesis and spectral properties of a hydrophobic fluorescent probe: 6-propionyl-2-(dimethylamino)naphthalene. Biochemistry. 1979; 18(14): 3075–3078. PubMed Abstract | Publisher Full Text\n\nYu W, So PT, French T, et al.: Fluorescence generalized polarization of cell membranes: a two-photon scanning microscopy approach. Biophys J. 1996; 70(2): 626–636. PubMed Abstract | Publisher Full Text | Free Full Text"
}
|
[
{
"id": "23165",
"date": "05 Jun 2017",
"name": "Erdinc Sezgin",
"expertise": [
"Membrane biophysics",
"imaging",
"lipids"
],
"suggestion": "Approved",
"report": "Approved\n\nIn this article, Mazeres et al. report a new way (spectral decomposition) of analysing spectral imaging data to obtain a quantitative measure of the lipid organisation in plasma and internal membranes. The authors also provide an open-access image analysis platform compatible with ImageJ.\nThe manuscript is well-written and explanatory. The focus of the manuscript is not novel aspects of membrane organisation; it is rather the methodology, which is quite useful in my opinion.\nI recommend publication after minor revision. Here are my specific comments:\nIn the introduction, there are statements:\n“…laurdan derivatives have no preference for ordered and disordered domains…”\nThis is not exactly accurate: the partitioning of these probes has been tested in GUVs, and it is probably true in the GUV systems they were tested in; however, it has recently been shown that partitioning of membrane probes is highly governed by the lipid packing of the coexisting domains (Sezgin et al., 2015), and the Laurdan signal predominantly comes from ordered domains of cell-derived vesicles (Sezgin et al., 2014). 
Therefore, I would formalise this sentence a bit differently so that it is not misleading.\n\n“The most widely accepted model to explain the ‘red shift’ of laurdan from an emission maximum of 440 nm to a maximum of 490 nm in more polar environments is based on the reorientation, or relaxation, of structured water molecules in the direct vicinity of the probe”\n\nI believe the photophysics behind this spectral shift is a bit more complicated and it would be nice to mention this and refer the reader to appropriate reports such as the nice review by Amaro et al., 2014 3\n\n“The ANEPPDHQ, Nile Red and pyrene-derived probes, by contrast, undergo more progressive shifts in their emission maxima depending on the dielectric constants of their environments. Furthermore, the shifts in emission spectra of these probes are not necessarily linked to the degree of order of the bilayers.”\nHere, the statements need references. The original paper of di-4-ANEPPDHQ paper (Jin et al., 2006 4) as well as Sezgin et al., 20142 show the progressive shift of these dyes. And a recent report by Amaro et al., 20175 shows the complicated spectral shift of ANEP dye unlike a robust response of Laurdan to molecular order.\n\n“C-laurdan, a probe first synthesised and characterised in 2007 was derived from laurdan by adding a carboxyl group to the polar head, and labels the PM of eukaryotic cells more effectively than laurdan.”\nHere the original C-Laurdan should be cited (Kim et al., 2007)6\n\nFrom the Materials and Methods, I could not get what is the excitation wavelength of NileRed?\n\nAuthors should comment on whether Fraction Mapper can read any spectral imaging file? (lsm, czi, lif etc)\n\nIn Figure 2, there is significant “yellow signal”. 
The conclusion of the colocalisation is very qualitative, I suggest authors to perform Pearson correlation to have quantitative measure of the colocalisation (Fig 2 and 3).\n\nWould be very useful to write the excitation wavelength of the probes in the figure legends as well as emission bands.\n\nAlso, it would be extremely useful to have the figures a bit more informative and less dependent to the legend. In the current form, I had to go back and forth between the figure and the legend many times to understand the figure. Instead, if the authors put a bit of descriptive text on the figures, it would have been a lot easier to understand the figures (like they did for Figure 7 where they 3min, 15 min and 30 min as well as 4 uM and 200 nm on the figure itself). For instance, for figure 2, I would write; Laurdan, M-Laurdan, C-Laurdan on the rows and “brightfield”, “probe”, “lysotracker”, “merge” and “close-up” for the columns. Similarly I would do it for every figure so as soon as the reader looks at the figure, she can get what the figure tells without reading the legend. Similarly, they should add labels to Figure 5 and 8 on the figure what thin and bold lines or colors etc mean.\n\nAuthors should comment on possible FRET between Laurdan and NileRed when they do the spectral decomposition between these two.\n\nAuthors mention the “autoquenching” because of the decrease in signal with concentration. Are the authors sure that this is what happens? 
Can that be something else?\n\n3min pictures in Fig7 is different than the rest, why?\n\nFor Figure 10-13, it is necessary to show some quantification, qualitative assessment with the colors isn’t enough.\n\n“Considering the importance of temperature in regulating the state and order in lipid bilayers….” Authors should give a reference here such as Burns et al., 20177.\n\nIn the first paragraph of the Discussion, authors should also discuss the recently published improved approach of spectral imaging analysis (Aron et al., 20178)\n\nIn the discussion, when authors discuss the differences of lipid packing, they only consider the lipid composition. They should also add a short discussion on the presence of actin cytoskeleton for plasma membrane (and absence of it for intracellular membranes) may be responsible for relatively more ordered plasma membrane compared to intracellular ones (Dinic et al., 20139).\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nNot applicable\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": [
{
"c_id": "2903",
"date": "01 Aug 2017",
"name": "Etienne Joly",
"role": "Author Response F1000Research Advisory Board Member",
"response": "We are very grateful to Dr. Sezgin for his careful evaluation of our manuscript and for his constructive comments and suggestions to improve it. We have followed most of them, or have explained below why we chose not to. In this article, Mazeres et al. report a new way (spectral decomposition) of analysing the spectral imaging data to obtain a quantitative measure of the lipid organisation in plasma and internal membranes. Authors’ comment: As discussed extensively in the manuscript, we suggest that the results of the approach we describe should only be considered as semi-quantitative. Here are my specific comments: In the introduction; there are statements; “…laurdan derivatives have no preference for ordered and disordered domains…” This is not exactly accurate, the partitioning of these probes has been tested in GUVs and it is probably true in GUV systems they were tested, however, recently it has been shown that partitioning of the membrane probes is highly governed by the lipid packing of the coexisting domains (Sezgin et al., 20151), and Laurdan signal predominantly comes from ordered domains of cell derived vesicles (Sezgin et al., 20142). Therefore, I would formalise this sentence a bit differently so that it is not misleading. Authors reply: Following this comment, we went back to re-read those papers to see if we had missed anything relating to the unequal partitioning of Laurdan or C-laurdan between ordered and disordered domains, but could not find any definite evidence relating to partitioning of Laurdan or C-laurdan in either of these papers. Incidentally, we also noticed that C-laurdan was used at 4uM in the 2015 study, which our data suggests would probably lead to very significant auto-quenching, preferentially in organized domains, and render any quantitative analysis meaningless. 
“The most widely accepted model to explain the ‘red shift’ of laurdan from an emission maximum of 440 nm to a maximum of 490 nm in more polar environments is based on the reorientation, or relaxation, of structured water molecules in the direct vicinity of the probe” I believe the photophysics behind this spectral shift is a bit more complicated and would be nice to mention this and refer the reader to appropriate reports such as the nice review by Amaro et al., 2014 3 Authors reply: In the review by Amaro et al, the authors’ conclusion is: « Taking all the above facts together, one can conclude that Laurdan τr reflects the rearrangement of hydrated sn-1 carbonyls of a lipid bilayer in the liquid crystalline phase, whereas Δν mirrors the polarity, which is related to the hydration level of sn-1 carbonyls.” To us, this is just a more elaborate way (not to say complicated) to say the same thing as what we stated in our paper, i.e. that the red shift is based on the reorientation, or relaxation, of structured water molecules in the direct vicinity of the probe. “The ANEPPDHQ, Nile Red and pyrene-derived probes, by contrast, undergo more progressive shifts in their emission maxima depending on the dielectric constants of their environments. Furthermore, the shifts in emission spectra of these probes are not necessarily linked to the degree of order of the bilayers.” Here, the statements need references. The original paper of di-4-ANEPPDHQ paper (Jin et al., 2006 4) as well as Sezgin et al., 20142 show the progressive shift of these dyes. And a recent report by Amaro et al., 20175 shows the complicated spectral shift of ANEP dye unlike a robust response of Laurdan to molecular order. Authors reply: We thank Dr. Sezgin for his suggestion and have included the references in the updated version of our manuscript. 
“C-laurdan, a probe first synthesised and characterised in 2007 was derived from laurdan by adding a carboxyl group to the polar head, and labels the PM of eukaryotic cells more effectively than laurdan.” Here the original C-Laurdan should be cited (Kim et al., 2007)6 Authors’ reply: This reference was inadvertently deleted at some stage during the writing of the paper and will be re-inserted in the updated version of our manuscript. From the Materials and Methods, I could not get what is the excitation wavelength of NileRed? Authors’ reply: As was already stated in the paper, the bi-photon excitation was carried out at 720 nm. Whilst this is clearly not the optimal wavelength for Nile Red or LysoTracker Red DND-99, it still resulted in very efficient excitation of those red probes. For Nile Red, the concentration actually had to be diluted tenfold to give signal intensities similar to those of the laurdan-based probes. We have now added a sentence in the results section to clarify and emphasize this point. Authors should comment on whether Fraction Mapper can read any spectral imaging file? (lsm, czi, lif etc) Authors’ reply: As clearly stated in the manuscript, Fraction Mapper does not read spectral imaging files, but stacks of two images. As long as ImageJ can open such stacks, one should be able to run Fraction Mapper on them. If not, the stacks should simply be converted to standard TIFF format beforehand. In Figure 2, there is significant “yellow signal”. The conclusion of the colocalisation is very qualitative, I suggest authors to perform Pearson correlation to have quantitative measure of the colocalisation (Fig 2 and 3). Authors’ reply: Dr. Sezgin is absolutely right: the analysis is indeed purely qualitative, but we feel that the results are sufficiently clear and the conclusions clear-cut that there is really no need for complicated mathematical analyses that would bring nothing additional. 
Would be very useful to write the excitation wavelength of the probes in the figure legends as well as emission bands. Authors’ reply: As indicated in the methods section, bi-photon excitation was set at 720 nm, and this was true for absolutely all the microscopy experiments described in this manuscript. It therefore does not seem useful to specify this in all the legends as well. Also, it would be extremely useful to have the figures a bit more informative and less dependent to the legend. In the current form, I had to go back and forth between the figure and the legend many times to understand the figure. Instead, if the authors put a bit of descriptive text on the figures, it would have been a lot easier to understand the figures (like they did for Figure 7 where they 3min, 15 min and 30 min as well as 4 uM and 200 nm on the figure itself). For instance, for figure 2, I would write; Laurdan, M-Laurdan, C-Laurdan on the rows and “brightfield”, “probe”, “lysotracker”, “merge” and “close-up” for the columns. Similarly I would do it for every figure so as soon as the reader looks at the figure, she can get what the figure tells without reading the legend. Similarly, they should add labels to Figure 5 and 8 on the figure what thin and bold lines or colors etc mean. Authors’ reply: This type of presentation was used out of habit (many journals actually discourage the use of lettering and legends within the figures). But we agree with Dr. Sezgin that it would probably be better to have some legends within the figures, and have re-introduced them in the updated version of the manuscript. Authors should comment on possible FRET between Laurdan and Nile Red when they do the spectral decomposition between these two. Authors’ reply: What sort of comment? As underlined above, the analysis of the double staining experiment with laurdan and Nile Red is purely qualitative. 
We fail to grasp how FRET, if it actually occurred, would influence these qualitative results, and our conclusions. Authors mention the “autoquenching” because of the decrease in signal with concentration. Are the authors sure that this is what happens? Can that be something else? Authors’ reply: If we have not mentioned anything else, it is because we cannot think of any other mechanism that may realistically cause this effect. But we are open to suggestions … 3min pictures in Fig7 is different than the rest, why? Authors’ reply: At three minutes, the fluorescence intensities are much weaker because the probe had not had time to stain the cells completely. The signal to noise ratios were thus much lower, which explains the duller aspect of the pictures, both at 4uM and 200 nM. This has now been clarified in the figure legend. For Figure 10-13, it is necessary to show some quantification, qualitative assessment with the colors isn’t enough. Authors’ reply: We beg to differ on this subject. For a variety of reasons developed in the discussion, we feel that our results can only be considered as semi-quantitative. It is for this reason that we have chosen to represent the data with a LUT based on six discrete colors, and have refrained from carrying out any further quantitation. “Considering the importance of temperature in regulating the state and order in lipid bilayers….” Authors should give a reference here such as Burns et al., 20177. Authors’ reply: We thank Dr Sezgin for his suggestion and have included this reference in the updated version of the manuscript. In the first paragraph of the Discussion, authors should also discuss the recently published improved approach of spectral imaging analysis (Aron et al., 20178) Authors’ reply: Despite its title, this particular paper is not really about spectral imaging analysis, but about a bioinformatics toolbox for high throughput GP analyses. We do not see how this is relevant to our results. 
In the discussion, when authors discuss the differences of lipid packing, they only consider the lipid composition. They should also add a short discussion on the presence of actin cytoskeleton for plasma membrane (and absence of it for intracellular membranes) may be responsible for relatively more ordered plasma membrane compared to intracellular ones (Dinic et al., 20139). Authors’ reply: We thank Dr Sezgin for his suggestion and have included this reference in the updated version of the manuscript."
}
]
},
{
"id": "23159",
"date": "13 Jun 2017",
"name": "Bruno Antonny",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis is a very careful study of the physical states of cellular membranes of different organisms under well defined and physiological conditions (e.g . temperature). The experimental approaches are well explained and performed in the most precise manner. This type of analysis, together with the recent study performed by Niko, Klymchenko and coauthors (Scientific Reports) using a different probe, provide a clear cut and most informative picture of the geography of cell membranes and should be very useful in the future for membrane biologists. Clearly, the plasma membrane differs from intracellular membranes by its high order state. As simple as it may seem, this result is fundamental and has been overlooked in the past as compared to other ideas and models of cell membrane organization, which are compatible, but which put the emphasis on nanodomains, which remain difficult to capture. Note that in addition to the cholesterol gradiant, other parameters could also contribute to the increase in lipid ordering along the secretory pathway; most notably lipid remodeling, where the acyl chains of some bulk lipids tend to become more saturated at the PM as compared to organelles of the early secretory pathway. See references below.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? 
Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": [
{
"c_id": "2904",
"date": "01 Aug 2017",
"name": "Etienne Joly",
"role": "Author Response F1000Research Advisory Board Member",
"response": "Bruno Antonny wrote “Note that in addition to the cholesterol gradient, other parameters could also contribute to the increase in lipid ordering along the secretory pathway; most notably lipid remodeling, where the acyl chains of some bulk lipids tend to become more saturated at the PM as compared to organelles of the early secretory pathway.” Authors’ reply: We thank Dr Antonny for his appreciation of our work and for his useful suggestion. We have included an appropriate reference in the updated version of our manuscript"
}
]
}
] | 1
|
https://f1000research.com/articles/6-763
|
https://f1000research.com/articles/6-1287/v1
|
31 Jul 17
|
{
"type": "Software Tool Article",
"title": "BlobTools: Interrogation of genome assemblies",
"authors": [
"Dominik R. Laetsch",
"Mark L. Blaxter",
"Mark L. Blaxter"
],
"abstract": "The goal of many genome sequencing projects is to provide a complete representation of a target genome (or genomes) as underpinning data for further analyses. However, it can be problematic to identify which sequences in an assembly truly derive from the target genome(s) and which are derived from associated microbiome or contaminant organisms. We present BlobTools, a modular command-line solution for visualisation, quality control and taxonomic partitioning of genome datasets. Using guanine+cytosine content of sequences, read coverage in sequencing libraries and taxonomy of sequence similarity matches, BlobTools can assist in primary partitioning of data, leading to improved assemblies, and screening of final assemblies for potential contaminants. Through simulated paired-end read dataset,s containing a mixture of metazoan and bacterial taxa, we illustrate the main BlobTools workflow and suggest useful parameters for taxonomic partitioning of low-complexity metagenome assemblies.",
"keywords": [
"Bioinformatics",
"visualisation",
"genome assembly",
"quality control",
"contaminant screening"
],
"content": "Introduction\n\nAdvances in next generation sequencing technologies have generated vast amounts of data and knowledge (Goodwin et al., 2016). The decrease in cost per nucleotide lead to an increased application of these technologies to non-model organisms, life forms which have so far not been intensively studied by the research community. Genome-enabled science on these species can then illuminate novel processes and reveal the patterns of evolution. For non-model species, the luxury of large amounts of material from cultured isolates is often not possible, and research must progress from organisms sourced from the wild or from complex mixtures of species. DNA extracted from a sample may actually contain genomes from multiple organisms – food sources, host material, symbionts, pathogens, commensals and external contaminants – in addition to the target organism. In some cases, the associated genomes can be considered “contaminants”, while in others, they can provide insights into the biology of the target organism. In all cases they should be identified, isolated and investigated with care.\n\nInterrogation of genome assemblies to assure single-taxon origin is an elemental step in the genome sequencing process. Failure to identify non-target sequence can lead to false conclusions regarding the biology of the target organism, such as metabolic potential and events of horizontal gene transfer (HGT) between species. Several reports of HGTs into eukaryotic genomes have later been shown to have been based on undetected contamination in assemblies. Identification of contamination can radically change the conclusions of a study, as shown for the starlet sea anemone Nematostella vectensis (Artamonova & Mushegian, 2013) and the tardigrade Hypsibius dujardini (Koutsovoulos et al., 2016). 
Importantly, undetected non-target sequence contamination of published genomes will pollute public sequence databases and promote propagation of annotation errors.\n\nReliable assignment of a DNA sequence from a new assembly to its species-of-origin, i.e. the association of the sequence ID with a unique numerical identifier (TaxID) of the National Centre for Biotechnology Information (NCBI) Taxonomy database (Federhen, 2012), is a non-trivial problem. Current contaminant screening pipelines are based on sequence similarity to sequences of known origin, sequence composition signatures such as k-mers, and/or shared coverage profiles across different datasets. Few are readily applicable to datasets of eukaryotic genomes of any size (Eren et al., 2015; Kumar et al., 2013; Mallet et al., 2017; Tennessen et al., 2016). Anvi’o (Eren et al., 2015) partitions assemblies by clustering sequences based on the output of CONCOCT (Alneberg et al., 2014). CONCOCT uses Gaussian mixture models to predict the cluster membership of sequences by considering sequence composition and coverage profiles. PhylOligo (Mallet et al., 2017) relies exclusively on sequence composition and performs iterative, partially supervised clustering of sequences based on sequence composition profiles. ProDeGe (Tennessen et al., 2016) uses a fully unsupervised method based on sequence similarity to databases and sequence composition to partition assemblies using principal component analysis (PCA). It should be noted that while taxonomic assignment based on higher order sequence composition (such as k-mers of length 4 or greater) is highly effective for bacterial sequences, its success has been limited for eukaryotic genomes, as the information content, represented by the number of coding bases, is lower, and sequence composition spectra often show multimodal distributions (Chor et al., 2009).\n\nExisting contaminant screening pipelines also differ in the way results are presented. 
Anvi’o depicts assemblies through interactive plots with rich annotations of sequence composition features, coverages across datasets and taxonomic/binning results. PhylOligo offers heatmaps of hierarchical clusterings of sequences, tree visualisations, and t-SNE (t-Distributed Stochastic Neighbor Embedding) plots, where sequence composition clusterings have been reduced to two dimensions. ProDeGe displays sequences in an interactive, three-dimensional k-mer PCA plot.\n\nBlobPlots, or taxon-annotated GC-coverage plots (Kumar et al., 2013) are another contamination detection and data partitioning methodology. BlobPlots are two-dimensional scatter plots, in which sequences are represented by dots and coloured by taxonomic affiliation based on sequence similarity search results. For each sequence, the position on the Y-axis is determined by the base coverage of the sequence in the coverage library, a proxy for molarity of input DNA. The position on the X-axis is determined by the GC content, the proportion of G and C bases in the sequence, which can differ substantially between genomes.\n\nHere, we present BlobTools, a modular command-line solution for the visualisation of genome assemblies as BlobPlots, and taxonomic interrogation for purposes of quality control. BlobTools is a complete reimplementation of the Blobology pipeline (Kumar et al., 2013) focussed on usability, improved taxonomic assignment of sequences based on custom user input, and support for coverage information based on multiple formats and sequencing libraries. We demonstrate the features of BlobTools using synthetic datasets, and offer guidelines for efficient adoption of BlobTools into genome assembly programmes.\n\n\nMethods\n\nBlobTools is written in Python and consists of a main executable that allows the user to interact with the implemented modules (see Table 1). 
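The two quantities that position a sequence in the BlobPlots described above are simple to compute per sequence: the proportion of G and C bases, and the number of mapped bases divided by sequence length. A minimal Python sketch (the sequences and read lengths are invented for illustration; this is not the BlobTools source):

```python
# GC proportion (X-axis) and base coverage (Y-axis) of a BlobPlot,
# as described in the text. Inputs are hypothetical.

def gc_proportion(seq):
    """Fraction of G/C among unambiguous A, C, G, T bases (Ns are ignored)."""
    bases = [b for b in seq.upper() if b in "ACGT"]
    if not bases:
        return 0.0
    return sum(b in "GC" for b in bases) / len(bases)

def base_coverage(seq_length, mapped_read_lengths):
    """Total mapped read bases divided by sequence length, a proxy for molarity."""
    return sum(mapped_read_lengths) / seq_length
```

For example, a 1 kb sequence with fifty mapped 100 bp reads has a base coverage of 5.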
It offers a simple, modular command line interface which can easily be adapted to process multiple datasets simultaneously using GNU parallel (Tange, 2011). Inputs for BlobTools are standard file formats commonly created during the course of genome assembly projects. The primary processing in BlobTools constructs a BlobDB data structure based on user input. From this data structure, BlobTools generates easily interpretable, two-dimensional visualisations ready for publication, in conjunction with tabular output, enabling the user to partition sequences and paired-end (PE) reads contributing to them, for separate downstream processing. We present two recommended workflows, one targeted at de novo genome assembly projects in the absence of a reference genome (Figure 1A) and another for projects where a reference genome is available (Figure 1B).\n\n(A) Workflow A. Targeted at de novo genome assembly projects in the absence of a reference genome. 1: Creation of a BlobDB data structure based on input files. 2: Visualisation of assembly and generation of tabular output. 3: Partitioning of sequence IDs in assembly, based on user-defined parameters informed by the visualisations. 4: Partitioning of PE reads based on sequence IDs. (B) Workflow B. Targeted at projects where a reference genome is available. 1: Reads are mapped against the reference genome. 2: BAM file is processed to generate FASTQ files based on read mapping behaviour. 3: FASTQ files of read pairs where neither read maps to the reference genome (UnUn) are assembled de novo and used in workflow A. 4: Read pairs of the target taxon recovered from workflow A are assembled together with the other target-taxon read pairs from step 2 and used in workflow A.\n\nTaxonomy assignment in BlobTools is based on user-supplied, tab-separated-value (TSV) files composed of three columns: the input sequence ID, a NCBI TaxID, and a numerical score. We refer to these TSV files as ‘hits’ files below. 
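As an illustration of how hits lines can be reduced to a single taxonomic label per sequence, here is a minimal Python sketch of the ‘bestsum’ summation, the minimal-score and minimal-difference thresholds, and the ‘bestsumorder’ behaviour across several hits files, all of which are described in this section (TaxIDs, scores and function names are invented; this is not the BlobTools implementation):

```python
# Sketch of per-sequence taxonomic assignment: hit scores are summed per
# TaxID, then score thresholds decide between a TaxID, 'no-hit' and
# 'unresolved'. Hypothetical inputs; not the BlobTools source.
from collections import defaultdict

def assign_taxid(hits, min_score=0.0, min_diff=0.0):
    """hits: iterable of (taxid, score) pairs for one sequence."""
    totals = defaultdict(float)
    for taxid, score in hits:
        totals[taxid] += score           # 'bestsum': sum scores per TaxID
    if not totals:
        return "no-hit"
    ranked = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
    best_taxid, best_score = ranked[0]
    if best_score < min_score:           # --min_score threshold
        return "no-hit"
    if len(ranked) > 1 and best_score - ranked[1][1] < min_diff:
        return "unresolved"              # --min_diff threshold
    return best_taxid

def bestsumorder(hit_files, **thresholds):
    """'bestsumorder' sketch: only the first hits file with any hits is used."""
    for hits in hit_files:
        if hits:
            return assign_taxid(hits, **thresholds)
    return "no-hit"
```

For instance, two hits for TaxID 6239 summing to 550 outscore a single 100-point hit for TaxID 9606, so the sequence is assigned to 6239; with a minimal difference larger than the score gap it would instead be labelled ‘unresolved’.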
They can be generated from the output of sequence similarity searches, such as BLAST (Camacho et al., 2009) or Diamond blastx (Buchfink et al., 2015) searches against public or reference databases, or the output of other contaminant identification tools. The BlobTools module taxify allows easy conversion of tabular file formats to BlobTools compatible input, in addition to annotation of similarity search results based on NCBI TaxID mapping files, as available from UniProt and NCBI.\n\nBased on these inputs, BlobTools assigns a single NCBI taxonomy for each sequence in the assembly, based on the highest scoring NCBI TaxID at the following taxonomic ranks: species, genus, family, order, phylum, and superkingdom. Score calculation can be controlled by the user through a minimal score threshold (--min_score) and a minimal difference in scores (--min_diff) between the best and second-best scoring taxonomy. In addition, three non-canonical taxonomic annotations are possible: ‘no-hit’, the suffix ‘-undef’ and ‘unresolved’. Sequences not assigned to any taxonomic group, or not passing the --min_score threshold, are labelled ‘no-hit’. If a NCBI TaxID has no explicit parent at a taxonomic rank, the suffix ‘-undef’ is appended to the next upper taxonomic rank for which one does exist. In cases where the score difference between the best and second-best hits is smaller than --min_diff, sequences are labelled as ‘unresolved’.\n\nMultiple ‘hits’ files can be provided as input. In this case, the behaviour of the taxonomy assignment process can be controlled further through ‘taxrules’. 
The highest scoring taxonomy can either be inferred across all files (‘bestsum’) or successively (‘bestsumorder’) in the order they were supplied as input, allowing only sequences that received no hits from one file to be considered for taxonomic annotation in the next file, thereby leveraging reliability of scores of different input file sources.\n\nThe original blobology pipeline (Kumar et al., 2013) recommended the use of a single, best BLAST hit per sequence for taxonomy assignment. However, taxonomically mis-annotated sequences in databases (derived from inclusion of un-screened genome assemblies) can lead to erroneous taxonomic annotation. BlobTools mitigates this issue by accepting multiple hits per sequence and allocating taxonomy based on the highest sum of scores.\n\nIt should be noted that a definitive taxonomic placement for every sequence in the assembly is not required for successful taxonomic partitioning of sequences, since differential coverage and sequence composition profiles between the genomes are often sufficient.\n\nIn BlobTools, sequences are depicted as circles in BlobPlots (as opposed to dots in the blobology pipeline), with diameters proportional to sequence length. The scatter-plot is decorated with coverage and GC histograms for each taxonomic group, which are weighted by the total span (cumulative length) of sequences occupying each bin. A legend reflects the taxonomic affiliation of sequences and lists count, total span and N50 by taxonomic group. Taxonomic groups can be plotted at any taxonomic rank and colours are selected dynamically from a colour map. The number of taxonomic groups to be plotted can be controlled (--plotgroups, default is ‘7’) and remaining groups are binned into the category ‘others’. An example is shown in Figure 2A.\n\n(A) BlobPlot of the assembly. 
Sequences in the assembly are depicted as circles, with diameter scaled proportional to sequence length and coloured by taxonomic annotation (at the rank of ’order’) based on BLASTn and Diamond blastx similarity search results provided in this order and using taxrule ’bestsumorder’. Circles are positioned on the X-axis based on their GC proportion and on the Y-axis based on the sum of coverage across both library A and library B. (B) ReadCovPlot of library A. (C) ReadCovPlot of library B. In ReadCovPlots, mapped reads are shown by taxonomic group at the rank of ’order’.\n\nThe power of differential coverage profiles across different sequencing libraries for partitioning sequences in an assembly prompted the development of CovPlots (Figure 3) (Koutsovoulos et al., 2016), which are analogous to BlobPlots, except that the GC-axis is substituted by the coverage-axis from another sequencing library. CovPlots can be used for the visualisation of patterns of differential coverage signatures between taxonomic groups in the assembly.\n\nSequences in the assembly are depicted as circles, with diameter scaled proportional to sequence length and coloured by taxonomic annotation (at the rank of ’order’) based on BLASTn and Diamond blastx similarity search results provided in this order and using taxrule ’bestsumorder’. Circles are positioned on the X-axis based on coverage in library A and on the Y-axis based on coverage in library B. 
Parameters for partitioning the sequences in the assembly (which were applied to the tabular representation of the BlobDB) are indicated as dotted grey lines and text annotations in the scatter plot.\n\nThe modules for generating BlobPlots and CovPlots support additional input parameters controlling visualisation behaviour, including cumulative addition (--cumulative) or generation of separate plots for each taxonomic group (--multiplot), exclusion (--exclude) or relabelling (--relabel) of taxonomic groups, assignment of specific HEX colours to groups (--colour) or labelling sequences based on arbitrary, user defined categories (--catcolour). The latter could be, for instance, binned categories of RNAseq mappings to sequences in the assembly as shown in Koutsovoulos et al. (2016).\n\nReadCovPlots (Figure 2B and 2C) visualise the proportion of reads of a library that are unmapped or mapped, showing the percentage of mapped reads by taxonomic group, as barcharts. These can be of use for rapid taxonomic screening of multiple sequencing libraries within a single project. The underlying data of ReadCovPlots and additional metrics are written to tabular text files for custom analyses by the user.\n\nBlobTools supports coverage input (BAM/CAS format) from multiple sequencing libraries. As these data formats contain more information than needed, BlobTools parses coverage information of sequences (normalised base coverage and read coverage) into COV files in TSV format. These files can be generated through the module map2cov prior to construction of a BlobDB.\n\nWithin the BlobDB data structure, base and read coverage information is stored for each sequence in the assembly. If more than one coverage file is supplied, BlobTools constructs an additional coverage library (‘cov_sum’) internally, containing the sum of coverages for each sequence across all coverage files. 
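That summing step across coverage libraries can be sketched as follows (the dictionary layout is assumed for illustration; the actual BlobDB structure may differ):

```python
# Sum per-sequence coverage across all supplied coverage libraries into
# an extra 'cov_sum' library, as described in the text. Library names
# and coverage values are invented for illustration.

def cov_sum(libraries):
    """libraries: {library_name: {seq_id: coverage}} -> {seq_id: summed coverage}."""
    total = {}
    for covs in libraries.values():
        for seq_id, cov in covs.items():
            total[seq_id] = total.get(seq_id, 0.0) + cov
    return total
```

So a sequence covered at 10x in one library and 5x in another carries a summed coverage of 15x.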
This internal coverage library is considered when extracting views or plotting visualisations.\n\nSystem requirements for BlobTools include a UNIX-based operating system, Python 2.7, and pip. An installation script is provided, which installs Python dependencies, downloads and processes a copy of the NCBI TaxDump, and downloads and compiles a copy of samtools (Li et al., 2009). Instructions for installation and execution of BlobTools can be found at https://github.com/DRL/blobtools.\n\nTwo common BlobTools workflows for taxonomic interrogation of paired-end (PE) read datasets are depicted in the flowchart in Figure 1. Workflow A is targeted at de novo genome assembly projects where there is no preexisting reference genome. Workflow B should be followed where a reference genome is available.\n\nWorkflow A (Figure 1A) proceeds through construction of a BlobDB data structure based on input files (step A1), visualisation of the assembly and generation of tabular output (A2), partitioning of sequence IDs based on user-defined parameters informed by the visualisations (A3) and partitioning of PE reads based on sequence IDs (A4). It should be noted that while the BlobTools module create (step A1) supports multiple mapping formats, it is recommended that these are processed in advance using map2cov. Generation of tabular ‘hits’ files is simplified through the module taxify, which allows annotation of similarity search results based on TaxID mapping files or based on custom user input in tabular format.\n\nBlobTools can process both PE and single-end read files. The module bamfilter in step A4 is only of relevance if PE read data is used, since single-end read data can easily be partitioned using GNU grep or other tools. The module bamfilter can be controlled with a list of sequence IDs to include or to exclude. Use of an exclusion list causes all sequence IDs, except those specified, to be included. 
In both cases it will output up to four interleaved FASTQ files depending on the actual mapping behaviour of the read pairs and whether the parameter --include_unmapped is provided. Possible mapping behaviours of read pairs are: both reads mapping to included sequences (included-included: InIn), one read mapping to an included sequence and the other being unmapped (InUn), and one read mapping to an included sequence and the other mapping to an excluded sequence (ExIn). If the --include_unmapped parameter is specified, the module also writes read pairs where neither read maps to the assembly (UnUn). The latter case can occur if the assembler used for generating the sequences did not make use of all reads in the dataset. The resulting partitioned PE read files can then be assembled separately and the workflow is repeated. The decision concerning which PE read files to use is left at the discretion of the user. However, as a general rule, if target taxa have been sequenced at low coverage it might be preferable to be inclusive (using the InIn, ExIn, InUn and UnUn FASTQ files for assembly) and risk including non-target reads, than to be exclusive (using only the InIn and InUn files for assembly) and risk losing significant proportions of reads from target genomes.\n\nWorkflow B (Figure 1B) should be applied when a reference genome is available. Reads are mapped against the reference genome (B1) and the resulting BAM file is processed with the module bamfilter (B2) using the parameter --include_unmapped and without providing a list of sequences. This will result in three FASTQ files: InIn, InUn and UnUn. Since the taxonomic origin of the InIn and InUn reads has been established through the mapping step, only the UnUn reads are assembled de novo (B3) and processed via workflow A. This decreases computational requirements substantially. 
If workflow A yields a PE read partition of the target organism, which will consist of parts of the organism’s genome not present in the reference, these reads can be used together with the InIn and InUn reads from step B2 to generate a new assembly (B4), which should be screened again via Workflow A. This iterative procedure can easily be applied to projects studying highly variable species, where segmental presence-absence is common and a reference genome is expanded (to form a pangenome) as new samples are sequenced, or to holobiomes, where reference genomes of multiple taxa are expanded as new samples are added.\n\n\nUse cases\n\nA detailed description of the programs and commands used can be found in Supplementary File 1.\n\nTo illustrate workflow A (Figure 1A), we simulated read libraries for the nematode Caenorhabditis elegans contaminated with other organisms (see Table 2). Library A contains C. elegans reads contaminated with reads from Escherichia coli, Homo sapiens chromosome 19 and the H. sapiens mitochondrial (mtDNA) genome, mimicking a dataset where the target genome is contaminated with DNA from food (E. coli) and operator (H. sapiens). Library B is composed of C. elegans reads contaminated with Pseudomonas aeruginosa, mimicking a project where the metazoan target species is heavily colonised by a prokaryotic organism.\n\nWe assembled both read datasets together and mapped each library individually against the assembly. We supplied the assembly to BlobTools, in addition to coverage information extracted from both BAM files and the results of sequence similarity searches.\n\nTo simulate cases where sequences of genomes in the assembly are not part of public sequence databases, we removed all sequences annotated under the taxonomic terms ‘Caenorhabditis elegans’, ‘Hominids’, ‘Escherichia’, ‘Pseudomonas’, and ‘Other sequences’ before conducting sequence similarity searches. 
The search results provided to BlobTools were a BLASTn megablast search against NCBI nt (-outfmt ’6 qseqid staxids bitscore std’ -max_target_seqs 1 -max_hsps 1 -evalue 1e-25) and a Diamond blastx search against UniProt Reference Proteomes (--outfmt 6 --sensitive --max-target-seqs 1 --evalue 1e-25), supplied in this order and using taxrule ‘bestsumorder’.\n\nA BlobPlot (Figure 2A), ReadCovPlots (Figure 2B and C) and a CovPlot (Figure 3) were generated at the taxonomic rank of ’order’. A tabular view of the BlobDB was generated using the module view under the taxrule ’bestsumorder’ and for the taxonomic ranks of ’superkingdom’, ’phylum’, and ’order’. We partitioned sequences based on differential coverage and taxonomy annotation (Figure 3) using the tabular view and the UNIX tools GNU grep, GNU cut, and GNU awk. Subsequently, read pairs were partitioned based on mapping behaviour to these sequence partitions using the module bamfilter, and read pairs where both reads mapped to included sequences (i.e. the InIn set) were assembled by taxonomic group.\n\nWe then generated BlobPlots for the four assemblies (named ‘rhabditida-BT’, ‘primates-BT’, ‘pseudomonadales-BT’ and ‘enterobacterales-BT’) (Figure 4). Coverage information was based on mapping of both simulated sequencing libraries against all four assemblies and sequences were coloured based on the genome-of-origin of the simulated reads mapping to them.\n\nCoverage was obtained by mapping original reads to assemblies. Sequences are taxonomically annotated with ’true’ taxonomy based on the origin of simulated reads mapping to them. Sequences labelled as ’no-hit’ did not receive any reads mapped to them. (A) Assembly of partition of Rhabditida reads (’rhabditida-BT’). One P. aeruginosa sequence (span 4,886 nt) remains. (B) Assembly of partition of Primates reads (’primates-BT’). Five E. coli sequences (total span 3,838 nt) remain. (C) Assembly of partition of Pseudomonadales reads (’pseudomonadales-BT’). 
(D) Assembly of partition of Enterobacterales reads (’enterobacterales-BT’). One sequence of P. aeruginosa (span 254 nt) remains.\n\nCleaned assemblies were evaluated based on the count of simulated reads, by genome-of-origin, mapping to them (Table 3), and based on standard assembly metrics (Table 4).\n\n*: Reads that did not map to any sequence are listed under ’Not Mapped’. Bold: Zero reads mapped.\n\nTo account for assembly and mapping biases, the original simulated read sets were also assembled separately by taxon, yielding the assemblies CELEG-SIM (reads simulated from the C. elegans genome), HSAPI-SIM (reads simulated from H. sapiens chromosome 19 and mtDNA), PAERU-SIM (reads simulated from the P. aeruginosa genome), and ECOLI-SIM (reads simulated from the E. coli genome).\n\nWe evaluated the effect of parameters of similarity searches against public databases on taxonomic annotation using BlobTools (see Supplementary File 2). Since exhaustive searches against large databases require time and computing power, we focussed on parameters that limit resource usage and control the number of returned results. Both BLASTn and Diamond blastx implement the options -max_target_seqs and -max_hsps (written --max-target-seqs and --max-hsps in Diamond). The former is an early filter applied during the primary search and excludes initial hits from later examination. The latter controls the number of high-scoring pairs (HSPs) reported between a query and a subject in the search. The BLAST-specific parameter -culling_limit controls the number of hits that can be allocated to a given region on the query. For this dataset, the best trade-off between false positive and false negative taxonomic annotations was achieved by combining a BLASTn search (-max_target_seqs 10 -evalue 1e-25) against NCBI nt with a Diamond blastx search (--evalue 1e-25 --max-target-seqs 1) against UniProt Reference Proteomes, in this order, using BlobTools taxrule ’bestsumorder’. 
However, a much faster search with an acceptable outcome was achieved by changing the BLASTn parameters to -max_target_seqs 1 -max_hsps 1.\n\n\nSummary\n\nWe have presented the BlobTools pipeline and illustrated the main BlobTools workflow (Figure 1A) by successfully disentangling read pairs from two simulated datasets composed of metazoan and bacterial genomes. The small fraction of read pairs that received an erroneous taxonomic assignment or were left out during the partitioning step (Table 3) had little effect on the overall assembly success for each taxon (Table 4). The outcome could have been improved further by being more inclusive during the partitioning step of sequences (to decrease the number of unassigned read pairs), combined with a second round of BlobTools workflow A (to remove read pairs which were partitioned into the wrong taxonomic group).\n\nThe ease of interpretation of BlobPlots has favoured adoption by users, and the current implementation of BlobTools has been applied successfully to genome projects involving tardigrades (Koutsovoulos et al., 2016; Yoshida et al., 2017), mealybugs and their endosymbionts (Husnik & McCutcheon, 2016), ectoparasitic mites (Dong et al., 2017), diptera (Dikow et al., 2017), honeybees and their metagenomes (Gerth & Hurst, 2017), nematodes (Eves-van den Akker et al., 2016; Gawryluk et al., 2016; Slos et al., 2017; Szitenberg et al., 2017), bacteria (Fuller et al., 2017; Mellbye et al., 2017; Samad et al., 2016; Wang & Chandler, 2016), butterflies (Nowell et al., 2017), a fungal pathogen of barley (McGrann et al., 2016), and fungi (Compant et al., 2017).\n\nBlobTools is a user-friendly and reliable solution for visualisation, quality control and taxonomic partitioning of genome datasets. 
Wider adoption of BlobTools screening by the research community will help control the influx of taxonomically mis-annotated sequences into public sequence databases and prevent inaccurate biological conclusions based on contaminated genome assemblies.\n\n\nSoftware and data availability\n\nBlobTools source code: https://github.com/DRL/blobtools\n\nArchived source code as at time of publication: http://doi.org/10.5281/zenodo.833879 (Laetsch et al., 2017)\n\nLicense: GNU-GPL\n\nA walk through for all analyses in this study is deposited at https://github.com/DRL/blobtools_manuscript, together with additional code and resulting output files.",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nDRL was supported by a James Hutton Institute/Edinburgh University School of Biological Sciences fellowship. MLB was supported by a BBSRC research grant (Project reference BB/P024238/1).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nWe thank members of the Blaxter Nematode and Neglected Genomics lab in Edinburgh for support, criticism and suggestions. We thank Georgios Koutsovoulos, Sujai Kumar, Tim Booth, and Jason Stajich for contributions to the BlobTools code base. We thank Carlos Caurcel and Sujai Kumar for comments on the manuscript. We thank Judith Risse and the team of Edinburgh Genomics for feature requests and implementing BlobTools in their quality control pipeline. We thank all GitHub users who have raised questions, issues and submitted feature requests.\n\n\nSupplementary material\n\nSupplementary File 1: Supplementary methods.\n\nClick here to access the data.\n\nSupplementary File 2: Supplementary results, including:\n\nTable S1: F-scores for evaluation of influence of parameters of BLASTn searches against NCBI nt and Diamond blastx searches against UniProt Reference Proteomes on taxonomic assignment by BlobTools.\n\nTable S2: Precision and recall for evaluation of influence of parameters of BLASTn searches against NCBI nt and Diamond blastx searches against UniProt Reference Proteomes on taxonomic assignment by BlobTools.\n\nTable S3: Number of true positive (TP), false positive (FP), true negative (TN), and false negative (FN) bases for evaluation of influence of parameters of BLASTn searches against NCBI nt and Diamond blastx searches against UniProt Reference Proteomes on taxonomic assignment by BlobTools.\n\nClick here to access the data.\n\n\nReferences\n\nAlneberg J, Bjarnason BS, de Bruijn I, et al.: Binning metagenomic contigs by 
coverage and composition. Nat Methods. 2014; 11(11): 1144–1146. PubMed Abstract | Publisher Full Text\n\nArtamonova II, Mushegian AR: Genome sequence analysis indicates that the model eukaryote Nematostella vectensis harbors bacterial consorts. Appl Environ Microbiol. 2013; 79(22): 6868–6873. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBuchfink B, Xie C, Huson DH: Fast and sensitive protein alignment using DIAMOND. Nat Methods. 2015; 12(1): 59–60. PubMed Abstract | Publisher Full Text\n\nCamacho C, Coulouris G, Avagyan V, et al.: BLAST+: architecture and applications. BMC Bioinformatics. 2009; 10: 421. PubMed Abstract | Publisher Full Text | Free Full Text\n\nChor B, Horn D, Goldman N, et al.: Genomic DNA k-mer spectra: models and modalities. Genome Biol. 2009; 10(10): R108. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCompant S, Gerbore J, Antonielli L, et al.: Draft Genome Sequence of the Root-Colonizing Fungus Trichoderma harzianum B97. Genome Announc. 2017; 5(13): pii: e00137–17. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDikow RB, Frandsen PB, Turcatel M, et al.: Genomic and transcriptomic resources for assassin flies including the complete genome sequence of Proctacanthus coquilletti (Insecta: Diptera: Asilidae) and 16 representative transcriptomes. PeerJ. 2017; 5: e2951. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDong X, Armstrong SD, Xia D, et al.: Draft genome of the honey bee ectoparasitic mite, Tropilaelaps mercedesae, is shaped by the parasitic life history. Gigascience. 2017; 6(3): 1–17. PubMed Abstract | Publisher Full Text | Free Full Text\n\nEren AM, Esen ÖC, Quince C, et al.: Anvi'o: an advanced analysis and visualization platform for 'omics data. PeerJ. 2015; 3: e1319. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nEves-van den Akker S, Laetsch DR, Thorpe P, et al.: The genome of the yellow potato cyst nematode, Globodera rostochiensis, reveals insights into the basis of parasitism and virulence. Genome Biol. 2016; 17(1): 124. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFederhen S: The NCBI Taxonomy database. Nucleic Acids Res. 2012; 40(Database issue): D136–43. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFuller SL, Savory E, Weisberg AJ, et al.: Isothermal amplification and lateral flow assay for detecting crown gall-causing Agrobacterium spp. Phytopathology. 2017. PubMed Abstract | Publisher Full Text\n\nGawryluk RM, Del Campo J, Okamoto N, et al.: Morphological Identification and Single-Cell Genomics of Marine Diplonemids. Curr Biol. 2016; 26(22): 3053–3059. PubMed Abstract | Publisher Full Text\n\nGerth M, Hurst GDD: Short reads from honey bee (Apis sp.) sequencing projects reflect microbial associate diversity. PeerJ. 2017; 5: e3529. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGoodwin S, McPherson JD, McCombie WR: Coming of age: ten years of next-generation sequencing technologies. Nat Rev Genet. 2016; 17(6): 333–351. PubMed Abstract | Publisher Full Text\n\nHusnik F, McCutcheon JP: Repeated replacement of an intrabacterial symbiont in the tripartite nested mealybug symbiosis. Proc Natl Acad Sci U S A. 2016; 113(37): E5416–24. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKoutsovoulos G, Kumar S, Laetsch DR, et al.: No evidence for extensive horizontal gene transfer in the genome of the tardigrade Hypsibius dujardini. Proc Natl Acad Sci U S A. 2016; 113(18): 5053–5058. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKumar S, Jones M, Koutsovoulos G, et al.: Blobology: exploring raw genome data for contaminants, symbionts and parasites using taxon-annotated GC-coverage plots. Front Genet. 2013; 4: 237. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nLaetsch DR, Koutsovoulos G, Booth T, et al.: DRL/blobtools: BlobTools v1.0. Zenodo. 2017. Data Source\n\nLi H, Handsaker B, Wysoker A, et al.: The sequence alignment/map format and SAMtools. Bioinformatics. 2009; 25(16): 2078–2079. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMallet L, Bitard-Feildel T, Cerutti F, et al.: PhylOligo: a package to identify contaminant or untargeted organism sequences in genome assemblies. Bioinformatics. 2017. PubMed Abstract | Publisher Full Text\n\nMcGrann GR, Andongabo A, Sjökvist E, et al.: The genome of the emerging barley pathogen Ramularia collo-cygni. BMC Genomics. 2016; 17: 584. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMellbye BL, Davis EW 2nd, Spieck E, et al.: Draft Genome Sequence of Nitrobacter vulgaris Strain Ab1, a Nitrite-Oxidizing Bacterium. Genome Announc. 2017; 5(18): pii: e00290-17. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNowell RW, Elsworth B, Oostra V, et al.: A high-coverage draft genome of the mycalesine butterfly Bicyclus anynana. Gigascience. 2017; 6(7): 1–7. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSamad A, Trognitz F, Antonielli L, et al.: High-Quality Draft Genome Sequence of an Endophytic Pseudomonas viridiflava Strain with Herbicidal Properties against Its Host, the Weed Lepidium draba L. Genome Announc. 2016; 4(5): pii: e01170–16. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSlos D, Sudhaus W, Stevens L, et al.: Caenorhabditis monodelphis sp. n.: defining the stem morphology and genomics of the genus Caenorhabditis. BMC Zool. 2017; 2(1): 4. Publisher Full Text\n\nSzitenberg A, Salazar-Jaramillo L, Blok VC, et al.: Comparative genomics of apomictic root-knot nematodes: Hybridization, ploidy, and dynamic genome change. BioRxiv. 2017. Publisher Full Text\n\nTange O: GNU Parallel - the command-line power tool. login: The USENIX Magazine. 2011; 36(1): 42–47. 
Reference Source\n\nTennessen K, Andersen E, Clingenpeel S, et al.: ProDeGe: a computational protocol for fully automated decontamination of genomes. ISME J. 2016; 10(1): 269–272. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWang Y, Chandler C: Candidate pathogenicity islands in the genome of ‘Candidatus Rickettsiella isopodorum’, an intracellular bacterium infecting terrestrial isopod crustaceans. PeerJ. 2016; 4: e2806. PubMed Abstract | Publisher Full Text | Free Full Text\n\nYoshida Y, Koutsovoulos G, Laetsch DR, et al.: Comparative genomics of the tardigrades Hypsibius dujardini and Ramazzottius varieornatus. BioRxiv. 2017. Publisher Full Text"
}
|
[
{
"id": "24671",
"date": "11 Aug 2017",
"name": "A. Murat Eren",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe study by Laetsch and Blaxter describes the workflow of BlobTools, an open source software package for the curation of low-complexity metagenomic assemblies. The work is well-written and clear, and the efficacy of the tool has already been demonstrated by many previous studies. Operational procedures and use cases laid out in the current work will likely be very useful to researchers who wish to rapidly screen their assemblies.\nI have two minor suggestions. The first one is about the following sentence:\nAnvi’o (Eren et al., 2015) partitions assemblies by clustering sequences based on the output of CONCOCT (Alneberg et al., 2014).\nThis is not quite accurate. Anvi'o can employ CONCOCT to automatically partition contigs into genome bins, however, it is only optional. 
The default mode of anvi'o uses multiple aspects of data (including the differential normalized coverage of contigs across libraries, if multiple samples are available, GC-content, and/or tetranucleotide frequencies) to generate a hierarchical clustering dendrogram that can be used for the identification of distinct genome bins.\nMy second suggestion is to include a citation to the study by Delmont and Eren, \"Identifying contamination with advanced visualization and analysis practices: metagenomic approaches for eukaryotic genome assemblies\"1 as I believe it would make an appropriate addition to the introduction.\nThe readers could definitely benefit from an appropriate discussion of the limitations and advantages of the 2D approach BlobTools promotes in contrast to other ways to do it. 2D plots are inherently limited with respect to the number of layers of data they can display. After adding coverage and GC-content as axes to organize data points on an ordination, these displays are enriched with the use of colors (i.e. for taxonomy or any other single categorical data) and dot sizes (i.e. for sequence length or any other single continuous data). Besides the simpler attributes of data, the use of anvi'o in doi:10.7717/peerj.1839 brings into a single interactive display many additional perspectives, including the abundance of transcripts matching to contigs, the occurrence of contigs in different sequencing libraries, and horizontally transferred genes as claimed by others, that can benefit expert investigations of assemblies. That being said, it is important to note that the visualization strategy anvi'o relies on has disadvantages: it requires the computation of a hierarchical clustering dendrogram, and the computational complexity of this step limits the number of contigs that can be processed and displayed in a reasonable amount of resources to about 25,000. 
This creates a need for efficient and intuitive tools like BlobTools to rapidly process large metagenomic assembly datasets of low-complexity.\n\nIs the rationale for developing the new software tool clearly explained? Partly\n\nIs the description of the software tool technically sound? Yes\n\nAre sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? Yes\n\nIs sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? Yes\n\nAre the conclusions about the tool and its performance adequately supported by the findings presented in the article? Yes",
"responses": [
{
"c_id": "2963",
"date": "21 Aug 2017",
"name": "Dom Laetsch",
"role": "Author Response",
"response": "Dear Murat, Let me first thank you for reviewing our manuscript. We completely agree with your comments and suggestions and will: expand on our description of the Anvi’o pipeline; add the suggested citation in the introduction; and elaborate on the limitations of the visualisations generated by BlobTools. We will upload the corrections as soon as possible. All the best, Dom"
}
]
},
{
"id": "25294",
"date": "27 Sep 2017",
"name": "Richard M Leggett",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThis paper describes BlobTools, an open source software package for partitioning of genomic data, principally for contamination control. It is a reimplementation of the Blobology pipeline previously described by one of the authors.\nThe paper makes a compelling case for the usefulness of blob plots, by citing a large number of previous works that have adopted the approach. The operation of the tool and the use cases look well thought out.\nThe manuscript states that the software should work on a UNIX-based operating system, but I had some difficulties with Mac OS. I found I needed to install wget, but then encountered issues with the python installation and pip that I was unable to overcome. Some guidance for Mac users in the instructions would be appreciated, as these do make up a significant number of users of bioinformatics software. I was, however, able to install very easily on a Linux machine.\nThough the simulated dataset examples are useful, I would have liked to see a use case involving a real dataset, showing the real impact that BlobTools had. It would also be useful if the authors could provide a brief tutorial based around a small dataset (real or simulated).\nA few minor comments: In Abstract, a typo in final paragraph “dataset,s”. In Introduction paragraph, “The decrease in cost per nucleotide lead” should be “has led”. The introduction paragraph feels a little like it was written a few years ago - i.e. 
non-model organisms have been sequenced for many years. Second paragraph: interrogation of genome assemblies… is an elemental step in the genome sequencing process. More a part of genome assembly than sequencing? Second paragraph: “Several reports of HGTs… have been shown…” - provide references.\n\nIs the rationale for developing the new software tool clearly explained? Partly\n\nIs the description of the software tool technically sound? Partly\n\nAre sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? Partly\n\nIs sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? Yes\n\nAre the conclusions about the tool and its performance adequately supported by the findings presented in the article? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-1287
|
https://f1000research.com/articles/6-919/v1
|
15 Jun 17
|
{
"type": "Software Tool Article",
"title": "CASAS: Cancer Survival Analysis Suite, a web based application",
"authors": [
"Manali Rupji",
"Xinyan Zhang",
"Jeanne Kowalski",
"Manali Rupji",
"Xinyan Zhang"
],
"abstract": "We present CASAS, a Shiny R-based tool for interactive survival analysis and visualization of results. The tool provides a web-based one-stop shop to perform the following types of survival analysis: quantile, landmark and competing risks, in addition to standard survival analysis. The interface makes it easy to perform such survival analyses and obtain results using the interactive Kaplan-Meier and cumulative incidence plots. Univariate analysis can be performed on one or several user-specified variable(s) simultaneously, the results of which are displayed in a single table that includes log-rank p-values and hazard ratios along with their significance. For several quantile survival analyses from multiple cancer types, a single summary grid is constructed. The CASAS package has been implemented in R and is available via http://shinygispa.winship.emory.edu/CASAS/. The developmental repository is available at https://github.com/manalirupji/CASAS/.",
"keywords": [
"survival",
"quantile survival",
"landmark",
"competing risk"
],
"content": "Introduction\n\nKaplan-Meier (KM) estimates and the Cox Proportional Hazards model have gained huge popularity among clinicians when depicting survival trends and identifying prognostic biomarkers in cancer research. There is a range of commercial software (SAS, STATA, SPSS, PRISM) available for researchers to carry out survival analysis. However, these programs have several disadvantages; commercial software is proprietary and involves restricted usage with rigid outputs, which cannot be changed easily. Open source software such as R is gaining popularity, but the user needs to learn programming skills, which may be very time consuming for clinicians and biomedical researchers with limited programming exposure.\n\nStandard survival analysis involves a single cause of failure. However, in other cases, clinicians may encounter many other causes of failure in addition to a specific cause of interest. In such cases, a competing risk analysis needs to be carried out, where an individual is exposed to two or more causes of failure but their eventual failure is due to only one cause. While several packages are available to conduct competing risk survival analysis in R, making the right choice presents another layer of confusion to the user.\n\nIn traditional KM or Cox regression analysis, a risk factor measured at baseline is typically examined for its association with survival thereafter. During follow-up, however, things may have changed: either the effect of a fixed baseline risk factor may vary over time, resulting in a weakening or strengthening of associations, or the risk factor itself may vary over time. In the former case, such an effect is often seen in what appears to be significant differences in survival, not necessarily overall and among all survival times, but early on or at later survival times. 
We address such time-dependent effects on survival by creating two additional tools, one for landmark1 and another for quantile survival analysis2,3. As an example, the user may want to study the effect of chemotherapy on a specific cancer population and thus divides the data into a responder vs a non-responder group. The issue with this approach is that the responder cannot be deemed one unless they survive until the time of response. In addition, being in the responder group gives them an unfair survival advantage, leading to an immortal time bias. To overcome these issues, the investigator may need to perform a landmark analysis by removing the patients with an event (or censored) before the landmark time from the analysis.\n\nMost tools are available as separate packages. As an alternative, CASAS provides a comprehensive suite of the survival analysis tools commonly encountered in cancer research. By providing a GUI interface, the user can readily perform any number of these analyses by simply uploading their data and selecting the variables relevant to the analysis.\n\nIn summary, the CASAS suite of tools is a one-stop shop for conducting survival analysis without requiring any prior programming knowledge. It is a web-based application that, as a single tool, can carry out KM plots, univariate HRs, landmark analysis, quantile survival analysis and competing risk analysis. It also allows a user to combine results from various studies or cancer types.\n\n\nMethods\n\nStandard survival analysis uses the NCCTG Lung Cancer data in the ‘survival’ package in R4 and uses the ‘survminer’ package to display the KM plots. Either categorical or continuous variables can be used for stratification. Continuous variables can be dichotomized at either the 25th, 50th or 75th percentile, or at an optimal cut point. The log-rank test is used to estimate the overall differences between the survival curves (Figure 1). 
A single overall survival curve without stratification can be plotted using the ‘All patients’ option.\n\nThe interface shows an example KM plot using NCCTG Lung Cancer data in the ‘survival’ package5. The left side comprises a user menu and the right includes the result plots/tables. The KM plot is interactive and will change depending on the categorical variable selected. Continuous data can be divided at the 25th, 50th or 75th percentile, or the user may opt to use an optimal cut point based on martingale residuals using the ‘survMisc’ package (also available with a plot in a separate tab). Alternatively, if the user selects the ‘All patients’ dropdown, a single KM curve is created using all the data instead of separate KM curves by the categorical variable of interest. The user can also choose to output the number of patients at risk. Univariate survival association analysis, based on the Cox proportional hazards model, can be output by entering one variable into the model at a time. Multiple variables can be selected to generate the output table. In addition, the user can test for the proportional hazards assumption by selecting “Test for Proportional Hazards Assumption” as “Yes”; an additional column with the p-value for the proportional hazards assumption will then be displayed as the rightmost column.\n\nCompeting risk survival analysis is based on Fine and Gray’s model6, using the ‘cmprsk’ package in R. Data from 35 patients with AML or ALL5 who underwent hematopoietic stem cell transplantation (HSCT) are used. The cumulative incidence plot is generated using the CumIncidence.R function available in the package. This tool also displays Gray’s p-value based on the competing risk code (Figure 2). Either categorical or continuous group variables/‘All patients’ can be used.\n\nThis tab includes competing risk analysis using the BMT data (http://www.stat.unipg.it/luca/R/). 
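For intuition about what such a cumulative incidence plot reports, the estimator can be sketched outside R as well. The toy function below is ours, not part of CASAS or the ‘cmprsk’ package, and it implements the simpler nonparametric Aalen-Johansen estimate rather than Fine and Gray’s regression model: each cause-specific increment is weighted by the overall Kaplan-Meier survival just before the event time.

```python
from collections import defaultdict

def cumulative_incidence(times, causes):
    """Aalen-Johansen cumulative incidence for competing risks.

    times  : follow-up time for each subject
    causes : 0 = censored, 1..K = failure from cause k
    Returns {cause: [(time, CIF value), ...]} step functions.
    """
    data = sorted(zip(times, causes))
    n_at_risk = len(data)
    overall_surv = 1.0            # KM survival for failure from *any* cause
    cif = defaultdict(float)
    out = defaultdict(list)
    i = 0
    while i < len(data):
        t = data[i][0]
        at_t = [c for tt, c in data if tt == t]
        d_by_cause = defaultdict(int)
        for c in at_t:
            if c != 0:
                d_by_cause[c] += 1
        # cause-specific increments use survival just *before* time t
        for c, d in d_by_cause.items():
            cif[c] += overall_surv * d / n_at_risk
            out[c].append((t, cif[c]))
        d_total = sum(d_by_cause.values())
        if d_total:
            overall_surv *= 1.0 - d_total / n_at_risk
        n_at_risk -= len(at_t)
        i += len(at_t)
    return dict(out)

# Toy data: one failure from each of two causes, one censoring.
print(cumulative_incidence([1, 2, 3], [1, 2, 0]))
```

Unlike one-minus-KM applied per cause, the cause-specific incidences here plus the overall survival always sum to one, which is the property a competing-risk plot is meant to preserve.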
The left panel includes options to select variables for univariate survival association analysis based on Fine and Gray’s model, and options to select variables for cumulative incidence function analysis. For the univariate analysis, the user chooses the variable of interest to enter into the analysis as well as the event and the censor code. To generate a Cumulative Incidence Function (CIF) plot and table, the user can choose either a categorical variable to compare or ‘All Patients’ (as in Figure 1). Users can also select the appropriate time unit based on the data and the time points of interest, where applicable. The censor code can also be specified. The univariate result table or plot is displayed on the right panel.\n\nLandmark analysis is based on a user-input landmark time and uses the ‘dynpred’ R package. The same dataset as in Figure 1 is used. The tool generates an overall KM plot and a landmark KM plot with log-rank test p-values (Figure 3). The user can also opt for a CIF curve instead of a KM plot, with similar categorical/continuous variable inputs.\n\nThis tab includes landmark analysis using the same data as in Figure 1. The left panel includes options to select variables for landmark analysis and the landmark time. The program calls the ‘dynpred’ package to generate the new landmark dataset and then produces either KM or CIF plots on the right panel. Users can also select the appropriate time unit based on the data.\n\nQuantile survival analysis is based on the method developed in 2,3, implemented in the ‘cequre’ package in R. The example data comprise expression and clinical data for 553 TCGA BRCA patients given radiotherapy4. The survival time difference between groups defined by the dichotomized continuous or categorical variable is estimated with a 95% CI for each quantile (Figure 4). 
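The landmark construction described above for Figure 3 reduces to a simple data step: subjects whose event or censoring occurs before the landmark time are dropped, and the clock restarts at the landmark for everyone else. A hypothetical helper (ours, not the ‘dynpred’ API) makes the idea concrete:

```python
def landmark_subset(times, events, landmark):
    """Landmark analysis data step: keep only subjects still under
    observation at the landmark time, and restart the clock there.

    Subjects with an event or censoring before the landmark are
    dropped, since they cannot contribute to post-landmark
    comparisons; keeping them would reintroduce immortal time bias.
    """
    kept = [(t - landmark, e) for t, e in zip(times, events) if t >= landmark]
    return [t for t, _ in kept], [e for _, e in kept]

# The subject followed for only 0.5 time units is excluded at a landmark of 1.0;
# the remaining times are re-anchored at the landmark.
times, events = landmark_subset([0.5, 2, 3, 5], [1, 0, 1, 1], 1.0)
```

A standard KM estimate on the returned subset then gives the landmark curve, conditional on surviving to the landmark time.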
A forest plot representing the quantile-wise differences between the means, as well as the overall difference, is also provided as output.\n\nThe quantile regression tab shows three plots based on TCGA BRCA data for patients who received radiation therapy. The first is the overall KM plot with a number-at-risk table for overall survival and the log-rank test p-value. The second is the survival time difference with 95% CI between two dichotomized groups at the 10 quantiles Q1 to Q10 (defined as the 10th to the 100th percentile, in steps of 10, of mean survival time among all patients). The third plot is a forest-plot summary of the survival time differences at the 10 quantiles. The ‘overall’ entry in the forest plot corresponds to the transformed HR and 95% CI for overall survival (log [1/HR]).\n\n\nImplementation\n\nThe CASAS software is written in R and tested using version 3.3.0. The interactive KM and CIF plots and data tables are made visible through a web browser using the shiny R package (www.rstudio.com/shiny).\n\n\nOperation\n\nUsing a Windows 7 Enterprise SP1 PC with 32.0 GB RAM and a 3.30 GHz Intel® Xeon® Processor E5 Family, the 228-patient NCCTG Lung Cancer dataset4 took 11.89s to create an interactive KM plot and 2.74s to generate a landmark plot. For data from 35 patients with AML or ALL5 who underwent hematopoietic stem cell transplantation (HSCT), it took 2.66s to display the CIF plot. TCGA BRCA data for 553 patients who received radiotherapy took 6.52s. The developmental repository is available at https://github.com/manalirupji/CASAS/.\n\nArchived source code as at the time of publication is available at: http://doi.org/10.5281/zenodo.8029177.\n\n\nDiscussion\n\nCASAS is a suite of tools that allows a user to conduct various types of survival analysis through an interactive application in R. We show, by example, the various types of cancer survival analyses that can be performed based on the questions of interest. 
Our tool will serve as a platform for many physicians and researchers to conduct preliminary analyses before heading to statisticians to conduct advanced analyses.\n\n\nData and software availability\n\nThe CASAS web tool (http://shinygispa.winship.emory.edu/CASAS/) includes preprocessed example data under each tab. The user can use the example data or upload a dataset of their choice in the same format as the example data. For Kaplan-Meier survival analysis and landmark analysis, NCCTG Lung Cancer data from 228 patients, as available in the ‘survival’ package, is used5. The data can be accessed in R using data(lung). For quantile survival analysis, Level 3 RNASeqV2 Breast Cancer (BRCA) data was downloaded from the TCGA data portal4. 553 patients who received radiotherapy and had survival information were used. Gene expression data for a specific biomarker gene was log2 transformed. The user has the choice to divide the data at either the 25th percentile (set as default), the 50th or the 75th percentile, or at an optimal cut point based on the martingale residuals. Similarly, for competing risk analysis, the user can choose the example data or upload their own. The example data consist of 35 patients with acute leukemia (AML or ALL8) who underwent hematopoietic stem cell transplantation (HSCT) (http://www.stat.unipg.it/luca/R).\n\nThe developmental repository is available at https://github.com/manalirupji/CASAS/.\n\nArchived source code as at the time of publication: http://doi.org/10.5281/zenodo.8029177\n\nLicense: CASAS is available under the GNU public license (GPL-3).",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nResearch reported in this publication was supported in part by the Biostatistics and Bioinformatics Shared Resource of Winship Cancer Institute of Emory University and NIH/NCI under award number P30CA138292. The content is solely the responsibility of the authors and does not represent the official views of the National Institutes of Health.\n\n\nReferences\n\nAnderson JR, Cain KC, Gelber RD: Analysis of survival by tumor response. J Clin Oncol. 1983; 1(11): 710–9. PubMed Abstract | Publisher Full Text\n\nHuang Y: Restoration of monotonicity respecting in dynamic regression. J Am Stat Assoc. 2016; (In press). Publisher Full Text\n\nHuang Y: Quantile Calculus and Censored Regression. Ann Stat. 2010; 38(3): 1607–1637. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNCI and NHGRI: The Cancer Genome Atlas (TCGA) Data Portal. Accessed in December 2015. Reference Source\n\nLoprinzi CL, Laurie JA, Wieand HS, et al.: Prospective evaluation of prognostic variables from patient-completed questionnaires. North Central Cancer Treatment Group. J Clin Oncol. 1994; 12(3): 601–7. PubMed Abstract | Publisher Full Text\n\nFine JP, Gray RJ: A Proportional Hazards Model for the Subdistribution of a Competing Risk. J Am Stat Assoc. 1999; 94(446): 496–509. Publisher Full Text\n\nmanalirupji: manalirupji/CASAS: CASASv1.0.0. Zenodo. 2017. Data Source\n\nScrucca L, Santucci A, Aversa F: Competing risk analysis using R: an easy guide for clinicians. Bone Marrow Transplant. 2007; 40(4): 381–7. PubMed Abstract | Publisher Full Text"
}
|
[
{
"id": "24053",
"date": "06 Jul 2017",
"name": "Gang Han",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nAuthors of this article attempted to facilitate the routine survival analysis for clinicians and general health science professionals by building a web based survival analysis software with underlying R programming. Because survival data has increased usage in clinical research, this work is useful to fill out the gap of survival analysis for researchers without training in statistics and programming. But given a number of limitations, its practical value is questionable. Due to the following multiple missing components, this article does not look complete. This reviewer suggests major revision to improve its usability:\n\nThe first missing component is the test of model assumptions. The validity of the proportional hazard assumption was not described.\n\nThe second missing piece is the detailed description of required data format. For example, some practitioners may label event/censor as “1/0”, while others will write “not censored/censored.” What is the requirement for the data file header? For missing values, some may leave it blank, others may write “missing” or “unknown.” These things are all trivial for statisticians but useful for clinical users to know and bear in mind. The authors may add a separate section to emphasize the correct format.\n\nThe third missing piece is a comprehensive example. Although some figures were shown, potential users may look for a detailed example to follow. 
The authors had made the point that this software can perform certain survival analyses, but they failed to explain/illustrate how to perform these analyses. A major revision is necessary to answer the question of “how” in an example that looks similar to majority of the users’ data and analytical needs.\n\nIs the rationale for developing the new software tool clearly explained? Yes\n\nIs the description of the software tool technically sound? Yes\n\nAre sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? No\n\nIs sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? No\n\nAre the conclusions about the tool and its performance adequately supported by the findings presented in the article? Yes",
"responses": [
{
"c_id": "2894",
"date": "31 Jul 2017",
"name": "Manali Rupji",
"role": "Author Response",
"response": "We thank the reviewer for their thoughtful comments that have led to an improved paper. Below, we respond individually to each comment.1. The first missing component is the test of model assumptions. The validity of theproportional hazard assumption was not described. We appreciate this reviewer’s careful review of the tool and in bringing this point to our attention. A test for proportional hazards (PH) was included as part of the standard survival analysis option. We recognize that several methods exist for checking the proportional hazards assumption and results may vary depending on the method applied. In response to the reviewer’s comment, we have included under the documentation tab, “Read Me”, that the proportional hazards assumption is examined by applying the method of Schoenfeld residuals. If this method indicates a violation of the proportional hazards assumption as indicated by a significant test of PH (versus alternative of non-PH) based on such residuals, then other methods should be examined that are outside the scope of this tool since results may vary depending on the method implemented (Hiller et al. 2015). Ref:Hiller L, Marshall A, Dunn J. Assessing violations of the proportional hazards assumption in cox regression: does the chosen method matter? Trials. 2015; 16(suppl 2): P134. 2. The second missing piece is the detailed description of required data format.For example, some practitioners may label event/censor as “1/0”, while otherswill write “not censored/censored.” What is the requirement for the data fileheader? For missing values, some may leave it blank, others may write“missing” or “unknown.” These things are all trivial for statisticians but useful forclinical users to know and bear in mind. The authors may add a separate section to emphasize the correct format We appreciate the reviewer’s point of clarity here and have addressed their recommendation of including a separate paragraph that highlights the required format. 
This new paragraph is listed under the “Data and software availability” section in the paper.\n\n3. The third missing piece is a comprehensive example. Although some figures were shown, potential users may look for a detailed example to follow. The authors had made the point that this software can perform certain survival analyses, but they failed to explain/illustrate how to perform these analyses. A major revision is necessary to answer the question of “how” in an example that looks similar to majority of the users’ data and analytical needs.\n\nIn response to the reviewer’s comment, we have updated the one example initially used for illustrating the quantile survival analysis to also illustrate the following: standard survival analysis, a test of proportional hazards assumption, and selection of a cutpoint for a continuous variable. Additionally, we have updated the specific dataset for landmark analysis, to include the time corresponding to the time-dependent cohort variable. We have left the other previous examples provided in the initial submission to supplement the ‘how to’ of individual methods. While we recognize the importance of having a single example to streamline the methods of survival analysis, we also thought it important to emphasize that the methods are not dependent on each other and that they can be implemented as several stand-alone methods under one tool."
}
]
}
] | 1
|
https://f1000research.com/articles/6-919
|
https://f1000research.com/articles/6-1234/v1
|
26 Jul 17
|
{
"type": "Research Article",
"title": "Plasmapheresis in neurological disorders: six years experience from University Clinical center Tuzla",
"authors": [
"Osman Sinanović",
"Sanela Zukić",
"Adnan Burina",
"Nermina Pirić",
"Renata Hodžić",
"Mirza Atić",
"Mirna Alečković-Halilović",
"Enisa Mešić",
"Osman Sinanović",
"Adnan Burina",
"Nermina Pirić",
"Renata Hodžić",
"Mirza Atić",
"Mirna Alečković-Halilović",
"Enisa Mešić"
],
"abstract": "Background: Therapeutic plasma exchange (TPE) is an extracorporeal blood purification technique that is designed to remove substances with a large molecular weight. The TPE procedure includes removal of antibodies, alloantibodies, immune complexes, monoclonal protein, toxins or cytokines, and involves the replenishment of a specific plasma factor. The aim of the study was to describe the clinical response to TPE in various neurological patients, and to assess the clinical response to this therapy. Methods: The study was retrospective. We analyzed the medical records of 77 patients who were treated at the Department of Neurology, University Clinical Center (UCC) Tuzla from 2011 to 2016.\n\nResults: 83 therapeutic plasma exchanges were performed in the 77 patients. There was a slight predominance of male patients (54.5%), with an average age of 51±15.9 years. The most common underlying neurological diseases were Guillain–Barré syndrome (GBS) (37.7%), then chronic inflammatory demyelinating polyneuropathy (CIDP) (23.4%), multiple sclerosis (MS) (11.7%) and myasthenia gravis (10.4%). Less frequent neurological diseases that were encountered were paraneoplastic polyneuropathies (5.2%), neuromyelitis optica (also known as Devic’s disease) (3.9%), motor neuron disease (3.9%), polymyositis (2.6%) and multifocal motor neuropathy (1.2%). Conclusions: Six years experience of therapeutic plasma exchange in neurological patients in our department have shown that, following evidence-based guidelines for plasmapheresis, the procedure was most effective in patients with GBS, CIDP and myasthenia gravis.",
"keywords": [
"plasmapheresis",
"therapeutic plasma exchange",
"neurological disorders",
"myasthenia gravis",
"Guillain-Barré syndrome",
"demyelinating diseases",
"chronic inflammatory demyelinating polyneuropathy"
],
"content": "Background\n\nTherapeutic plasma exchange (TPE) is an extracorporeal blood purification technique designed for the removal of large molecular weight substances. The basic premise of the treatment is that removal of these substances will allow for the reversal of the pathologic processes related to their presence1. In Asia and Australia TPE is most commonly used for treatment of digestive system diseases, whereas in Europe and USA neurologic disorders prevail2. While first experiences with TPE relate to acute life-threatening conditions, such as treatment of Guillain-Barre syndrome (GBS) or myasthenic crisis, therapeutic success hass also been shown for chronic diseases where immunosuppressive therapy is often required for long-term management3. The TPE procedure includes removal of antibodies, alloantibodies, immune complexes, monoclonal protein, toxins or cytokines, and involves the replenishment of a specific plasma factor4–7.\n\nThe aim of the study was to describe the clinical response to TPE in various neurological patients, and to assess the clinical response to this therapy.\n\n\nMethods\n\nThis study is retrospective, and examines medical records of patients who were treated at the Department of Neurology, University Clinical Center (UCC) Tuzla from January 2011 to December 2016. We recorded the patient demographics, the neurological findings of patients on admission, the diagnosis that prompted treatment with TPE, comorbidities, and any medical complications that took place. Hematological parameters including blood counts, serum proteins, electrolytes and coagulation profiles were monitored after every TPE. The neurological state of patients and the recovery and outcome of therapy were assessed again when discharged. 
The study received institutional ethical approval from the University Clinical Center Tuzla, and written informed consent was obtained from all patients who were treated with TPE.\n\n\nResults\n\n83 therapeutic plasma exchanges were performed in 77 patients over the course of six years (2011–2016) at the Department of Neurology, University Clinical Center (UCC) Tuzla. Some of the patients received more than one course of plasmapheresis. There was a slight predominance of male patients (54.5%), with an average age of 51±15.9 years. The youngest patient was 16, and the oldest 78 (Table 1). Most patients were from the Tuzla Canton, but 28 of them were from other cantons of Bosnia and Herzegovina, and one patient was from Croatia.\n\nTPE is usually carried out across three sessions. In 27 patients, it was carried out in five sessions, and in one case of severe polyradiculoneuritis in a young patient with tetraplegia, seven sessions were carried out. The most common underlying neurological disease was polyradiculoneuritis, Guillain-Barré syndrome (GBS), presenting in 29 patients. These patients had a very good response to the therapy, and a good recovery of motor strength was observed. All patients with paraparesis or quadriparesis recovered some movement, even those with quadriplegia. All patients continued physical therapy as inpatients.\n\nTwo patients had complications, including deep vein thrombosis, but continued with physical therapy after treatment. One patient developed pneumonia, due to immobility (hypostatic pneumonia), not related to TPE. Unfortunately, one patient died after the third session of plasmapheresis.\n\n18 patients with chronic inflammatory demyelinating polyneuropathy (CIDP) underwent TPE. Due to the disease being chronic, improvement was generally slower than in the acute form of demyelinating polyneuropathy. All patients experienced recovery of motor strength and improved sensory function after treatment. 
In some of these patients TPE had been repeated over the years.\n\nPatients with severe forms of myasthenia gravis, with generalized muscle weakness and, in some cases, respiratory failure, also underwent TPE. None of these procedures had complications, and all the patients recovered motor strength except one, in whom no benefit was observed.\n\nTPE was also carried out on patients with demyelinating diseases, including nine with a chronic progressive form of MS and three with neuromyelitis optica. Patients who were treated with TPE had progressive forms of MS, with a high score on the Expanded Disability Status Scale (EDSS > 7.0), and we achieved improvements in symptoms such as tremors, spasms or paresthesias, and a slight improvement in motor strength. One MS patient in the progressive stage of disease died. One patient with neuromyelitis optica died after the treatment, while on palliative care. The other two patients with Devic’s disease, with spastic quadriplegia, managed to take a few steps with an orthopedic aid after discharge, during physical therapy.\n\nSignificant improvements after TPE were observed in two patients with polymyositis, including better mobility and pain reduction, and in four patients with paraneoplastic syndromes improvements in motor strength and reduced paresthesia were observed. One of the patients had a diagnosis of cerebellar paraneoplastic disorder, caused by breast cancer, with distal weakness, tremor, ataxia and loss of perception, and inability to walk. After three courses of plasmapheresis, of five sessions each, the patient was able to walk for short distances with help. A patient with multifocal motor neuropathy had severe muscle weakness and mild atrophy, but after treatment he noticed improvement in muscle strength. 
However, in three patients with motor neuron disease, plasmapheresis had no effect.\n\nAlongside the TPE, patients were receiving treatment for their underlying neurological condition (steroids, immunosuppressive agents). Also, it is important to emphasize that all patients continued with physical therapy. A good outcome of the procedure was observed in 87% of patients (improvements were registered in 25 out of 29 GBS patients, in 18 patients with CIDP, 8 with MS, 7 with myasthenia gravis, 4 with paraneoplastic disorders, 2 with Devic’s disease, 2 with polymyositis and in one patient with multifocal motor neuropathy). Only one complication was observed (pneumothorax), but the patient fully recovered. Death was registered in three patients: two had severe, progressive forms of demyelinating disease and one patient with GBS experienced sudden death. The deaths were not directly related to plasmapheresis, but rather were the result of complications associated with the disease.\n\n\nDiscussion\n\nThe American Academy of Neurology proposed an evidence-based guideline for plasmapheresis in neurological disorders. According to these recommendations, there is strong evidence that treatment is beneficial in severe forms of GBS (severe enough to impair independent walking or to require mechanical ventilation), and also as a short-term treatment for patients with CIDP8–10. Following these evidence-based guidelines, we treated 29 GBS patients with plasmapheresis (37.7%) and 18 CIDP patients (23.4%). Furthermore, according to these guidelines there was good evidence for treating polyneuropathy patients with plasmapheresis, and also that it had shown benefits as an adjunctive treatment in relapsing forms of MS11–15. We treated MS patients with progressive forms of the disease, and we only achieved a mild improvement of symptoms, but no significant improvement of EDSS scores.\n\nThe study by Láinez-Andrés et al. 
concluded that TPE proved to be an effective alternative treatment for diseases such as GBS, CIDP and myasthenia gravis. In comparative studies with intravenous immunoglobulin, the efficacy of both therapies is similar16. A recent, large meta-analysis by Ortiz-Salas et al. (2016) concluded that there is no evidence that either treatment is more effective or safer in the management of GBS and myasthenia gravis17.\n\n\nConclusion\n\nSix years’ experience of therapeutic plasma exchange in neurological patients in our department has shown that, following evidence-based guidelines for plasmapheresis, the procedure was most effective in patients with GBS, CIDP and myasthenia gravis. We did not record any significant complications associated with the procedure itself, only complications arising during the course of the patients’ neurological disease.\n\n\nData availability\n\nDataset 1. Data of neurological patients who were treated with therapeutic plasma exchange, with demographic and clinical characteristics.\n\ndoi: 10.5256/f1000research.11841.d16917918",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nKaplan AA: Therapeutic plasma exchange: a technical and operational review. J Clin Apher. 2013; 28(1): 3–10. PubMed Abstract | Publisher Full Text\n\nMalchesky PS, Bambauer R, Horuchi T, et al.: Apheresis technologies: an international perspective. Artif Organs. 1995; 19(4): 315–23. PubMed Abstract | Publisher Full Text\n\nSchröder A, Linker RA, Gorld R: Plasmapheresis for neurological disorders. Expert Rev Neurother. 2009; 9(9): 1331–1339. PubMed Abstract | Publisher Full Text\n\nWeinstein R: Therapeutic apheresis in neurological disorders. J Clin Apher. 2000; 15(1–2): 74–128. PubMed Abstract | Publisher Full Text\n\nStrauss RG, Ciavarella D, Gilcher RO, et al.: An overview of current management. J Clin Apher. 1993; 8(4): 189–194. PubMed Abstract | Publisher Full Text\n\nGafoor VA, Jose J, Saifudheen K, et al.: Plasmapheresis in neurological disorders: Experience from a tertiary care hospital in South India. Ann Indian Acad Neurol. 2015; 18(1): 15–19. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKes P, Bašić V: Plasmapheresis in neurologic disorders. Acta Clin Croat. 2000; 39(4): 237–245. Reference Source\n\nAssessment of plasmapheresis. Report of the Therapeutics and Technology Assessment Subcommittee of the American Academy of Neurology. Neurology. 1996; 47(3): 840–843. PubMed Abstract\n\nKhatri BO: Evidence-based guideline update: plasmapheresis in neurologic disorders. Neurology. 2011; 77(17): e101; author reply e103–4. PubMed Abstract | Publisher Full Text\n\nMcQuillen MP: Evidence-based guideline update: plasmapheresis in neurologic disorders. Neurology. 2011; 77(17): e101; author reply e103–4. PubMed Abstract\n\nKaminski H, Cutter G, Ruff RL, et al.: Evidence-based guideline update: plasmapheresis in neurologic disorders. Neurology. 
2011; 77(17): e101–2; author reply e103-4. PubMed Abstract\n\nStork AC, Notermans NC, Vrancken AF, et al.: Evidence-based guideline update: plasmapheresis in neurologic disorders. Neurology. 2011; 77(17): e102–3; author reply e103-4. PubMed Abstract\n\nMateen FJ, Zubkov A, Muralidharan R, et al.: Evidence-based guideline update: plasmapheresis in neurologic disorders. Neurology. 2011; 77(17): e103; author reply e103–4. PubMed Abstract\n\nWinter MM: Evidence-based guideline update: plasmapheresis in neurologic disorders. Neurology. 2011; 77(17): e104–5; author reply e105. PubMed Abstract\n\nFreeman C: Evidence-based guideline update: plasmapheresis in neurologic disorders. Neurology. 2011; 77(17): e105; author reply e105. PubMed Abstract\n\nLáinez-Andrés JM, Gascón-Giménez F, Coret-Ferrer F, et al.: [Therapeutic plasma exchange: applications in neurology]. Rev Neurol. 2015; 60(3): 120–131. PubMed Abstract\n\nOrtiz-Salas P, Velez-Van-Meerbeke A, Galvis-Gomez CA, et al.: Human Immunoglobulin Versus Plasmapheresis in Guillain-Barre Syndrome and Myasthenia Gravis: A Meta-Analysis. J Clin Neuromuscul Dis. 2016; 18(1): 1–11. PubMed Abstract | Publisher Full Text\n\nSinanović O, Zukić S, Burina A, et al.: Dataset 1 in: Plasmapheresis in neurological disorders: six years experience from University Clinical center Tuzla. F1000Research. 2017. Data Source"
}
|
[
{
"id": "24544",
"date": "29 Aug 2017",
"name": "David B. Vodusek",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis report on a retrospective analysis of neurological patients treated with PE reflects the practice in a big neurological department in the years 2011-2016. As such it is intrinsically interesting as it provides data on neurological practice in an European region. It would add to the information if some additional general data were provided (population served by the department; number of patients treated per year)\nThe outcome of PE treatment should be - if possible - described in more detail, and only results, no explanation should be given in the Results section (\"..due to disease being chronic...\"). Improvements as reported subjectively by patients, and those resulting in objective improvement in function, should be differentiated. In GBS, CIDP, MG and whenever else possible the authors should describe when in the time course of the disease the PE was given, what were the deficits before PE, when was the improvement noted and what was the final outcome in terms of function.\nIt is stated that the patients treated with PE had also immunosuppressive drugs - but it remains unclear whether all patients (GBS??), and which drugs, and whether the drug regime was changed during PE. 
It would be interesting to note why it was decided to give PE in MND.\nIt would be best to describe all untoward effects of PE in one paragraph.\nIt would be interesting to note whether IVIG has also been used in the department in the same time period.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nNot applicable\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
},
{
"id": "25550",
"date": "06 Sep 2017",
"name": "Hidenori Matsuo",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThe experience of therapeutic apheresis in Tuzia is described.\nThe authors should discuss further the use of TPE in patients with polymyositis or motor neuron disease. According to guidelines, TPE in these diseases seems ineffective. Why did the authors perform TPE for these patients, and how do they think it affects polymyositis?\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nNot applicable\n\nAre all the source data underlying the results available to ensure full reproducibility? No source data required\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-1234
|
https://f1000research.com/articles/6-1225/v1
|
25 Jul 17
|
{
"type": "Research Article",
"title": "Overuse of prophylactic antibiotics for elective caesarean delivery in Medani Hospital, Sudan",
"authors": [
"Tahani E. Abbas",
"Ishag Adam",
"Elhassan M. Elhassan",
"Imad Eldin M. Tag Eldin",
"Mirgani Abdel Rahman",
"Tahani E. Abbas",
"Elhassan M. Elhassan",
"Imad Eldin M. Tag Eldin",
"Mirgani Abdel Rahman"
],
"abstract": "Background: Antibiotics for prophylaxis are widely used to reduce the risk of post-caesarean delivery infection. The dosage regimens are often inappropriate and may result in the appearance of drug-resistant organisms, which will increase the cost. Objectives: A cross-sectional study was conducted to investigate the prescribing patterns of prophylactic antibiotics for elective caesarean delivery (CD) at Medani Hospital, Sudan. Method: The medical records of women who underwent elective CD from April 2015 to June 2015 were reviewed retrospectively. Results: The main reasons for CD among these women (n=202) were repeat CD, breech and antepartum haemorrhage. The mean (±SD) age of the women was 28.7 (±6.2) years. Ceftizoxime was the most commonly prescribed antibiotic, prescribed for 63.9% of women. It was used alone in 12.4% of cases, and in combination with gentamicin and metronidazole in 49.5% of cases. Cefuroxime was used in combination with gentamicin and metronidazole in 26.7% of women, and in combination with metronidazole only in 9.4% of women, making the overall percentage 36.1%. Antibiotics were administered for 5 days in 32.7% of cases. 91.1% of women received antibiotic prophylaxis after clamping of the cord. All women received oral antibiotic prophylaxis on discharge for five to seven days. Oral cefuroxime in combination with metronidazole was the most preferred regime (77.2 %). Conclusions: The current study shows overuse of antibiotics for elective CD. Injectable ceftizoxime in combination with gentamicin and metronidazole after cord clamping was the most commonly prescribed regime.",
"keywords": [
"antibiotics",
"caesarean delivery",
"ceftizoxime",
"metronidazole"
],
"content": "Introduction\n\nCesarean delivery (CD) is a procedure mainly performed to save the lives of mother and child to ensure a healthy outcome when normal vaginal delivery is not possible. It is the most common major surgical procedure performed worldwide (Blanchette, 2011; DeFrances & Hall, 2007). With increasing CD rates, postpartum maternal infections are likely to become an increasing health and economic burden and their prevention remains a public health priority (Martin et al., 2010). Postpartum endometritis and abdominal wound infections are the most common infectious complications following childbirth, and their incidence has increased due to CD becoming a routine procedure (Chaim et al., 2000).\n\nThe Cochrane Database of Systematic Reviews has reported that antibiotic prophylaxis has reduced the risk of infectious morbidity of CD by 50% to 70% (Smaill & Grivell, 2014; Tita et al., 2009). The hospitals that used antibiotic prophylaxis for elective CD demonstrated high compliance and decreased rates of postpartum infectious complications (Skjeldestad et al., 2015). The desired antibiotic used for prophylaxis should have maximum efficacy against the organism at the surgical site, have a long duration of action and be delivered at an acceptable dose. (Burke, 2001).\n\nRecent reports have shown that a single dose of antibiotic used for CD is equally as adequate as multiple doses or multiple antibiotics, reducing the cost without increasing the infection rate (Gidiri & Ziruma, 2014; Ijarotimi et al., 2013; Westen et al., 2015a). Prolonged use of prophylactic antibiotics can lead to emergence of resistant bacterial strains (Harbarth et al., 2000).\n\nAdministration of antibiotic prophylaxis prior to CD is as effective as when given after cord clamping in reducing the risk of infectious morbidity (Costantine et al., 2008; Lamont & Joergensen, 2014). 
Various epidemiological studies have been conducted in different countries to assess the use of prophylactic antibiotics in a clinical setting (Gouvêa et al., 2015; Huskins et al., 2001). In Sudan, almost two fifths of babies are delivered by caesarean (Abbaker et al., 2013). In spite of this high rate, there are few published studies on the effects of antibiotic prophylaxis for CD in Sudan (Ahmed et al., 2004; Elbur et al., 2014; Osman et al., 2013). Therefore, we investigated the prescription patterns of prophylactic antibiotics for elective CD at Medani Hospital, Sudan.\n\n\nMethods\n\nA cross-sectional study was conducted at Medani Hospital, Sudan by reviewing medical records of women who underwent elective CD at Medani Maternity Hospital, Sudan from April to June 2015. Medical records were taken from the Hospital Medical Archive system. The hospital owns the patient records that were accessed. The data were managed anonymously. Thus, the files were reviewed anonymously and patient consent was not necessary. The Review Board of Medani Maternity Hospital Medical, Sudan approved the use of the data (# 2015/26). Data collected included age, parity, indication for CD, type of anaesthesia given (general or spinal), time at which prophylactic antibiotics were administered (during induction of anaesthesia or after cord clamping), type and strength of injectable antibiotic prescribed, duration of treatment, and the regimen of oral antibiotic given after discharge from the hospital.\n\n\nStatistical analysis\n\nThe sample size of 202 women was estimated according to the equation: n = (Z1-α)² × P(1-P)/D².\n\nSPSS for Windows version 20.0 was used for data analysis. Continuous and categorical data were expressed as mean (±SD) and as proportions, respectively.\n\n\nResults\n\nIn total, 202 medical records were reviewed. The mean (±SD) age was 28.7 (±6.2) years. 
The main indications for the operation were repeat CD, previous miscarriage and intrauterine fetal death (Figure 1).\n\nThe mean (±SD) duration of antibiotic treatment was 6.4 (±1.3) days. Ceftizoxime was the most commonly prescribed antibiotic, at an overall rate of 63.9%. It was used alone in 12.4% of cases, and in combination with gentamicin and metronidazole infusion in 49.5% of cases. In 2% of the women, ceftizoxime was administered in combination with metronidazole alone. The second most common regime involved cefuroxime. It was used in combination with gentamicin and metronidazole infusion in 26.7% of women, and it was administered with metronidazole only in 9.4% of women.\n\nThe majority (91.1%) of women received antibiotic prophylaxis after clamping of the cord. All patients received oral antibiotic prophylaxis after discharge, for five to seven days. Oral cefuroxime (zinoxamore) in combination with metronidazole was the most prescribed regimen (77.2%).\n\n\nDiscussion\n\nThe main finding of the current study was that multiple regimens for antibiotic prophylaxis were being administered to women undergoing elective CD at the Medani Maternal Hospital. The use of antibiotic prophylaxis for CD has been shown to be effective in reducing postoperative morbidity, cost and duration of hospitalization (Clifford & Daley, 2012; Smaill & Grivell, 2014; Tita et al., 2009).\n\nThe duration of prophylactic treatment administered to the women at Medani Maternal Hospital was extended to an average of 6.4 days. This is inconsistent with international guidelines, which recommend a short duration of prophylaxis (usually <24 hours), giving the benefit of minimal toxicity and a decreased risk of antibiotic resistance (Dellinger et al., 1994; Giuliani et al., 1999). A number of studies have concluded that a single dose regime was equally as effective as multiple dose regimes 
(Bhattachan et al., 2013; Westen et al., 2015b; Ziogos et al., 2010).\n\nInternational and global clinical guidelines have been prepared by a number of advisory committees on the use of antibiotic prophylaxis for women undergoing CDs. Evidence-based recommendations for the prevention of surgical site infections (Berríos-Torres et al., 2017; Review, 2017) state that antibiotic prophylaxis should be administered before skin incision, and no additional doses should be administered after the surgical incision is closed (Bhattachan et al., 2013; Ziogos et al., 2010). The guidelines from the American Society of Health-System Pharmacists (ASHP) recommend the use of a single dose of cefazolin administered before surgical incision (Bratzler et al., 2013). Clinical Practice Guidelines approved by the Executive and Council of the Society of Obstetricians and Gynaecologists of Canada recommend the use of a single dose of a first-generation cephalosporin, 15 to 60 minutes prior to skin incision, with no additional doses (van Schalkwyk et al., 2010). A national clinical guideline developed by the Scottish Intercollegiate Guidelines Network for antibiotic prophylaxis in surgery recommends the use of a single standard dose of narrow-spectrum, more affordable antibiotics for prophylaxis (Scottish Intercollegiate Guidelines Network, 2008).\n\nAlthough many national and international guidelines recommend the use of a single dose of antibiotic for prophylaxis, in our study the average duration of prophylaxis was extended to five to seven days, which is of concern. Antibiotic prophylaxis in surgery is used for prevention of surgical site infections, with optimal use involving administration of the antibiotic agent at a dosage that ensures adequate serum and tissue concentrations during the period of potential contamination (Burke, 2001). The antibiotic should be administered for the shortest feasible period to minimize the risk of adverse effects, development of resistance, and costs. 
Therefore, there is no need for an extended duration of antibiotic use, as observed in this study.\n\nGurusamy and colleagues have demonstrated that multiple prophylactic antibiotics or an increased duration of antibiotic prophylaxis is of no advantage to surgical patients with respect to the reduction of MRSA infection (Gurusamy et al., 2013). The administration of single dose antibiotic prophylaxis also reduces the load on staff and decreases costs, which is beneficial in low-resource settings and should be adopted where costs must be reduced (Gidiri & Ziruma, 2014; Ijarotimi et al., 2013; Westen et al., 2015a).\n\nThe prolonged use of prophylactic antibiotics can lead to emergence of resistant bacterial strains (Harbarth et al., 2000). The indiscriminate use of antibiotic prophylaxis, coupled with the great adaptive capacity of microorganisms, enables the emergence of resistant strains, which requires the synthesis of increasingly expensive drugs, resulting in significant increases in healthcare costs.\n\nInjectable ceftizoxime in combination with gentamicin and metronidazole after cord clamping was the most commonly prescribed regime in our study. The second-generation cephalosporin cefuroxime in combination with metronidazole was used as the second most common regime. In a similar study, Elbur and colleagues found great variation in prescribing patterns between different obstetrics units in Khartoum (Elbur et al., 2014).\n\nThe use of third generation cephalosporins, imidazole derivatives and second generation cephalosporins resembles the patterns seen in Asian countries, where broad spectrum cephalosporin use is predominant (Al-Momany et al., 2009; Mahdaviazad et al., 2011). Inappropriate use of both prophylactic and therapeutic antibiotics in surgical procedures was observed in Malaysia (Lim et al., 2015). Joyce et al. also demonstrated inappropriate use of antibiotics in patients undergoing gynaecologic surgery in Texas (Joyce et al., 2017). 
In spite of this, many developed countries prefer the use of first generation cephalosporins or a combination of penicillin and beta-lactamase inhibitors (Durando et al., 2012; Hosoglu et al., 2009). The inappropriate use of antibiotics may result in the development of drug-resistant organisms, which is concerning (Dancer, 2004). The overuse of third-generation cephalosporins leads to the development of new strains producing extended spectrum beta-lactamases (ESBLs), MRSA, vancomycin-resistant enterococci (VRE), and Clostridium difficile (Dancer, 2001).\n\nOur study revealed that the majority (91.1%) of women received antibiotic prophylaxis after clamping of the cord. However, fewer surgical wound infections have been observed when antibiotics were administered prior to skin incision, with no increase in adverse effects on the neonates (Dlamini et al., 2015; Lamont & Joergensen, 2014; Tita et al., 2009).\n\n\nConclusions\n\nThe current study shows an overuse of antibiotics for elective CD. Injectable ceftizoxime in combination with gentamicin and metronidazole after cord clamping was the most commonly prescribed regime at Medani Hospital, Sudan.\n\n\nData availability\n\nDataset 1: Raw data collected as the basis for this study. DOI: 10.5256/f1000research.11919.d168395 (Adam et al., 2017).\n\n\nEthical approval\n\nThe study was approved by the Review Board of the Medani Maternity Hospital Medical, Sudan (# 2015/26).",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nAbbaker AO, Abdullahi H, Rayis DA: An Epidemic of Cesarean Deliveries at Khartoum Hospital in Sudan with Over Two-Fifths of Neonates Delivered through the Abdomen. J Womens Health Issues Care. 2013; 2(6): 10–13. Publisher Full Text\n\nAdam I, Abbas T, Elhassan E, et al.: Dataset 1 in: Overuse of prophylactic antibiotics for elective caesarean delivery in Medani Hospital, Sudan. F1000Research. 2017. Data Source\n\nAhmed ET, Mirghani OA, Gerais AS, et al.: Ceftriaxone versus ampicillin/cloxacillin as antibiotic prophylaxis in elective caesarean section. East Mediterr Health J. 2004; 10(3): 277–88, [Accessed May 5, 2017]. PubMed Abstract\n\nAl-Momany NH, Al-Bakri AG, Makahleh ZM, et al.: Adherence to international antimicrobial prophylaxis guidelines in cardiac surgery: a Jordanian study demonstrates need for quality improvement. J Manag Care Pharm. 2009; 15(3): 262–71. PubMed Abstract | Publisher Full Text\n\nBratzler DW, Dellinger EP, Olsen KM, et al.: Clinical practice guidelines for antimicrobial prophylaxis in surgery. Am J Heal Syst Pharm. 2013; 70(3): 195–283. PubMed Abstract | Publisher Full Text\n\nBerríos-Torres SI, Umscheid CA, Bratzler DW, et al.: Centers for Disease Control and Prevention Guideline for the Prevention of Surgical Site Infection, 2017. JAMA Surg. 2017. PubMed Abstract | Publisher Full Text\n\nBhattachan K, Baral GN, Gauchan L: Single Versus Multiple Dose Regimen of Prophylactic Antibiotic in Cesarean Section. NJOG. 2013; 8(2): 50–53. Publisher Full Text\n\nBlanchette H: The rising cesarean delivery rate in America: what are the consequences? Obstet Gynecol. 2011; 118(3): 687–90. PubMed Abstract | Publisher Full Text\n\nBurke JP: Maximizing Appropriate Antibiotic Prophylaxis for Surgical Patients: An Update from LDS Hospital, Salt Lake City. 
Clin Infect Dis. 2001; 33(Suppl 2): S78–83. PubMed Abstract | Publisher Full Text\n\nChaim W, Bashiri A, Bar-David J, et al.: Prevalence and clinical significance of postpartum endometritis and wound infection. Infect Dis Obstet Gynecol. 2000; 8(2): 77–82. PubMed Abstract | Publisher Full Text | Free Full Text\n\nClifford V, Daley A: Antibiotic prophylaxis in obstetric and gynaecological procedures: a review. Aust N Z J Obstet Gynaecol. 2012; 52(5): 412–9. PubMed Abstract | Publisher Full Text\n\nCostantine MM, Rahman M, Ghulmiyah L, et al.: Timing of perioperative antibiotics for cesarean delivery: a metaanalysis. Am J Obstet Gynecol. 2008; 199(3): 301.e1–6. PubMed Abstract | Publisher Full Text\n\nDancer SJ: The problem with cephalosporins. J Antimicrob Chemother. 2001; 48(4): 463–78. PubMed Abstract | Publisher Full Text\n\nDancer SJ: How antibiotics can make us sick: the less obvious adverse effects of antimicrobial chemotherapy. Lancet Infect Dis. 2004; 4(10): 611–9. PubMed Abstract | Publisher Full Text\n\nDeFrances CJ, Hall MJ: 2005 National Hospital Discharge Survey. Adv Data. 2007; (385): 1–19. PubMed Abstract\n\nDellinger EP, Gross PA, Barrett TL, et al.: Quality standard for antimicrobial prophylaxis in surgical procedures. The Infectious Diseases Society of America. Infect Control Hosp Epidemiol. 1994; 15(3): 182–8. PubMed Abstract\n\nDlamini LD, Sekikubo M, Tumukunde J, et al.: Antibiotic prophylaxis for caesarean section at a Ugandan hospital: a randomised clinical trial evaluating the effect of administration time on the incidence of postoperative infections. BMC Pregnancy Childbirth. 2015; 15: 91. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDurando P, Bassetti M, Orengo G, et al.: Adherence to international and national recommendations for the prevention of surgical site infections in Italy: Results from an observational prospective study in elective surgery. Am J Infect Control. 2012; 40(10): 969–972. 
PubMed Abstract | Publisher Full Text\n\nElbur AI, Yousif MA, El Sayed AS, et al.: Misuse of prophylactic antibiotics and prevalence of postoperative wound infection in obstetrics and gynecology department in a Sudanese hospital. Health (Irvine Calif). 2014; 6(2): 158–164. Publisher Full Text\n\nGidiri MF, Ziruma A: A randomized clinical trial evaluating prophylactic single-dose vs prolonged course of antibiotics for caesarean section in a high HIV-prevalence setting. J Obstet Gynaecol. 2014; 34(2): 160–164. PubMed Abstract | Publisher Full Text\n\nGiuliani B, Periti E, Mecacci F: Antimicrobial prophylaxis in obstetric and gynecological surgery. J Chemother. 1999; 11(6): 577–80. PubMed Abstract | Publisher Full Text\n\nGouvêa M, Novaes Cde O, Pereira DM, et al.: Adherence to guidelines for surgical antibiotic prophylaxis: a review. Braz J Infect Dis. 2015; 19(5): 517–24. PubMed Abstract | Publisher Full Text\n\nGurusamy KS, Koti R, Wilson P, et al.: Antibiotic prophylaxis for the prevention of methicillin-resistant Staphylococcus aureus (MRSA) related complications in surgical patients. Cochrane Database Syst Rev. 2013; (8): CD010268. PubMed Abstract | Publisher Full Text\n\nHarbarth S, Samore MH, Lichtenberg D, et al.: Prolonged antibiotic prophylaxis after cardiovascular surgery and its effect on surgical site infections and antimicrobial resistance. Circulation. 2000; 101(25): 2916–21. PubMed Abstract | Publisher Full Text\n\nHosoglu S, Aslan S, Akalin S, et al.: Audit of quality of perioperative antimicrobial prophylaxis. Pharm World Sci. 2009; 31(1): 14–7. PubMed Abstract | Publisher Full Text\n\nHuskins WC, Ba-Thike K, Festin MR, et al.: An international survey of practice variation in the use of antibiotic prophylaxis in cesarean section. Int J Gynaecol Obstet. 2001; 73(2): 141–5. 
PubMed Abstract | Publisher Full Text\n\nIjarotimi AO, Badejoko OO, Ijarotimi O, et al.: Comparison of short versus long term antibiotic prophylaxis in elective caesarean section at the Obafemi Awolowo University Teaching Hospitals Complex, Ile-Ife, Nigeria. Niger Postgrad Med J. 2013; 20(4): 325–30, [Accessed May 5, 2017]. PubMed Abstract\n\nJoyce J, Langsjoen J, Sharadin C, et al.: Inappropriate use of antibiotics in patients undergoing gynecologic surgery. Proc (Bayl Univ Med Cent). 2017; 30(1): 30–32. PubMed Abstract | Free Full Text\n\nLamont RF, Joergensen JS: Prophylactic antibiotics for caesarean section administered preoperatively rather than post cord clamping significantly reduces the rate of endometritis. Evid Based Med. 2014; 19(1): 17. PubMed Abstract | Publisher Full Text\n\nLim MK, Lai PS, Ponnampalavanar SS, et al.: Antibiotics in surgical wards: use or misuse? A newly industrialized country’s perspective. J Infect Dev Ctries. 2015; 9(11): 1264–71. PubMed Abstract | Publisher Full Text\n\nMahdaviazad H, Masoompour SM, Askarian M: Iranian surgeons’ compliance with the American Society of Health-System Pharmacists guidelines: antibiotic prophylaxis in private versus teaching hospitals in Shiraz, Iran. J Infect Public Health. 2011; 4(5–6): 253–9. PubMed Abstract | Publisher Full Text\n\nMartin JA, Hamilton BE, Sutton PD, et al.: Births: final data for 2007. Natl Vital Stat Rep. 2010; 58(24): 1–85. PubMed Abstract\n\nOsman B, Abbas A, Ahmed MA, et al.: Prophylactic ceftizoxime for elective cesarean delivery at Soba Hospital, Sudan. BMC Res Notes. 2013; 6: 57. PubMed Abstract | Publisher Full Text | Free Full Text\n\nReview E: Centers for Disease Control and Prevention Guideline for the Prevention of Surgical Site Infection, 2017. JAMA Surg. 2017. PubMed Abstract | Publisher Full Text\n\nScottish Intercollegiate Guidelines Network: Antibiotic Prophylaxis in Surgery: A National Clinical Guideline. 2008; 1–71. 
Reference Source\n\nSkjeldestad FE, Bjørnholt JV, Gran JM, et al.: The effect of antibiotic prophylaxis guidelines on surgical-site infections associated with cesarean delivery. Int J Gynaecol Obs. 2015; 128(2): 126–130. PubMed Abstract | Publisher Full Text\n\nSmaill FM, Grivell RM: Antibiotic prophylaxis versus no prophylaxis for preventing infection after cesarean section. Cochrane Database Syst Rev. 2014; (10): CD007482. PubMed Abstract | Publisher Full Text\n\nTita AT, Rouse DJ, Blackwell S, et al.: Emerging concepts in antibiotic prophylaxis for cesarean delivery: a systematic review. Obstet Gynecol. 2009; 113(3): 675–82. PubMed Abstract | Publisher Full Text | Free Full Text\n\nvan Schalkwyk J, Van Eyk N; Society of Obstetricians and Gynaecologists of Canada Infectious Diseases Committee: Antibiotic prophylaxis in obstetric procedures. J Obstet Gynaecol Can. 2010; 32(9): 878–892. PubMed Abstract | Publisher Full Text\n\nWesten EH, Kolk PR, van Velzen CL, et al.: Single-dose compared with multiple day antibiotic prophylaxis for cesarean section in low-resource settings, a randomized controlled, noninferiority trial. Acta Obstet Gynecol Scand. 2015a; 94(1): 43–49. PubMed Abstract | Publisher Full Text\n\nWesten EH, Kolk PR, van Velzen CL, et al.: Single-dose compared with multiple day antibiotic prophylaxis for cesarean section in low-resource settings, a randomized controlled, noninferiority trial. Acta Obstet Gynecol Scand. 2015b; 94(1): 43–49. PubMed Abstract | Publisher Full Text\n\nZiogos E, Tsiodras S, Matalliotakis I, et al.: Ampicillin/sulbactam versus cefuroxime as antimicrobial prophylaxis for cesarean delivery: a randomized study. BMC Infect Dis. 2010; 10: 341. PubMed Abstract | Publisher Full Text | Free Full Text"
}
|
[
{
"id": "27122",
"date": "15 Nov 2017",
"name": "Hansa Dhar",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThe above research is a very relevant topic of concern, as the authors have truly highlighted the advantages of prophylactic antibiotics in caesarean sections in recent times. Prophylactic antibiotics have reduced the complications of post-operative wound infection, maternal infections, endometritis and pyrexia in caesarean delivery. In this study various drugs as prophylaxis with different durations have been used. But there is no comparative study of the prophylactic drug regimes and the durations for which these drugs were used in the hospital. The correct prophylactic regime accepted worldwide is a single second generation cephalosporin (cefazolin) used within 30 minutes to one hour prior to surgery, which is helpful in avoiding postoperative infections. The regimes used in the above study appear to be therapeutic rather than prophylactic. A comparative study highlighting the advantages versus disadvantages of the overuse of the various multiple drug regimes in post-operative follow-up of these cases would improve the study. Patients who had postoperative complications, if any, have not been noted in the results, thereby making it an incomplete study.\n\nRecommendations: 1. The study needs to be elaborated further. 2. A comparative study of all the drug regimes needs to be done. 3. Complications observed in the study group should be noted. 4. 
Graphic representation in the form of a table should be added after comparison of all regimes.\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nIs the study design appropriate and is the work technically sound? No\n\nAre sufficient details of methods and analysis provided to allow replication by others? Partly\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nPartly\n\nAre all the source data underlying the results available to ensure full reproducibility? Partly\n\nAre the conclusions drawn adequately supported by the results? No",
"responses": []
},
{
"id": "27120",
"date": "27 Nov 2017",
"name": "Musa Sekikubo",
"expertise": [
"Reviewer Expertise Maternal infectious diseases"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nIt is difficult to evaluate antibiotic overuse without reference to locally available guidelines on prophylaxis.\nThere are mixed thoughts on the timing of prophylactic antibiotics1 - 30 minutes prior to incision vs after clamping the cord - what do the local guidelines recommend? It is not clear what was done in patients with preterm rupture of membranes.\nThe design seemed more of a retrospective records review than a cross-sectional study.\nThe results do not spur future practice, as they cannot be pegged against current guidelines.\nRecommendations:\na. The study could benefit from additional information on local guidelines on antibiotic prophylaxis during Caesarean section to enable the reader to compare adherence to guidelines.\nb. I have significant reservations about the study despite its importance in the era of increasing antibiotic resistance.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? No\n\nAre sufficient details of methods and analysis provided to allow replication by others? No\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nPartly\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-1225
|
https://f1000research.com/articles/6-1223/v1
|
25 Jul 17
|
{
"type": "Opinion Article",
"title": "We need a NICE for global development spending",
"authors": [
"Kalipso Chalkidou",
"Anthony J. Culyer",
"Amanda Glassman",
"Ryan Li",
"Anthony J. Culyer",
"Amanda Glassman",
"Ryan Li"
],
"abstract": "With aid budgets shrinking in richer countries and more money for healthcare becoming available from domestic sources in poorer ones, the rhetoric of value for money or improved efficiency of aid spending is increasing. Taking healthcare as one example, we discuss the need for and potential benefits of (and obstacles to) the establishment of a national institute for aid effectiveness. In the case of the UK, such an institute would help improve development spending decisions made by DFID, the country’s aid agency, as well as by the various multilaterals, such as the Global Fund, through which British aid monies are channelled. It could and should also help countries becoming increasingly independent from aid build their own capacity to make sure their own resources go further in terms of health outcomes and more equitable distribution. Such an undertaking will not be easy given deep suspicion amongst development experts towards economists and arguments for improving efficiency. We argue that it is exactly because needs matter that those who make spending decisions must consider the needs not being met when a priority requires that finite resources are diverted elsewhere. These chosen unmet needs are the true costs; they are lost health. They must be considered, should be minimised, and must therefore be measured. Such exposition of the trade-offs of competing investment options can help inform an array of old and newer development tools, from strategic purchasing and pricing negotiations for healthcare products to performance based contracts and innovative financing tools for programmatic interventions.",
"keywords": [
"Value for money",
"aid",
"transition",
"cost-effectiveness",
"health technology assessment",
"priority setting",
"universal coverage"
],
"content": "Aid is wasted\n\nUp to 40% of healthcare spending is wasted according to the World Health Organization (WHO) (World Health Organization, 2010). A recent report by the Organisation for Economic Cooperation and Development puts healthcare waste – defined broadly as over-utilisation of technologies, unwarranted hospital admissions, corruption and inefficient pharmaceutical markets – at 20% (OECD, 2017). WHO analysis suggests that even the poorest or most fragile states in Sub-Saharan Africa rely on external funding for less than a quarter of their total healthcare spending, and in all but a handful the trend shows domestic spending rising relative to foreign spending (Soucat, 2017). By implication, a lot of healthcare aid money does not buy health, while even the world’s poorest countries increasingly finance their healthcare systems (whether wastefully or efficiently) out of extremely limited domestic resources.\n\nNone of this means that more external money for development is now unimportant, but what matters, at least as much as the amount of aid, is what Glassman (2015) calls the “priorities ditch”: a dearth of investment in governance know-how for setting spending priorities locally, and in better incentives that link aid investment to development results, in a context-sensitive and hence more effective manner (Glassman & Chalkidou, 2012). There are lots of things donors can do, but don’t do, to fill this governance gap and to create the capacities (both internally and within countries) for wise spending, and, where countries are within reach of the Sustainable Development Goals, to smooth a transition to a world less dependent on aid.\n\n\nFilling the priorities ditch\n\nGiven today’s aid scepticism in major funding nations, credible mechanisms that actually deliver better development outcomes – whether poverty reduction, health, sanitation, nutrition, education, upholding of human rights – for the poorest and most vulnerable people must surely be our priority. 
The dramatic proposed cuts in the USAID budget (Konyndyk, 2017) and the reluctant UK commitment to a 0.7% GDP aid spending target reflect public scepticism of the return-on-investment of aid. The numerous channels through which UK aid is distributed also suggest multiple objectives and a lack of strategic purpose and coordination: the Department for International Development (DFID), Foreign and Commonwealth Office, UK Trade, Research Councils, Commonwealth Development Corporation, Ministry of Defence, etc. The scepticism is shared by some sections of the UK popular media and Parliament. It ought to be possible to moderate such scepticism through a loud and clear commitment by the US and UK administrations to ensuring that money spent on aid is money well spent: good value for those giving and for those receiving. Emphasis on analysis, evidence and performance, rather than evidence-free advocacy and non-performance-linked targets, may be a win-win for both donors and recipients of aid (Chalkidou et al., 2017). But this will require a much sharper focus within major global donors and – most importantly – in-country training in the necessary analytical skills and in-country infrastructure for recipients, so that competent agencies and ministerial departments are created and sustained (Li et al., 2017).\n\nTo make the case, it is no use relying solely on emotional case-by-case appeals to “doing the right thing” or on a parade of advocacy-friendly global statistics. Instead, before any funds are committed, donor countries should insist that the necessary capacity in recipient countries is brought into being to ensure that only cost-effective investments are considered in the first place. Mere effectiveness is not enough. 
The only relevant kind of cost-effectiveness is that determined by the social values and development objectives (better health, better education, and so on) in the country, the budget it proposes, and the associated realistic cost-effectiveness threshold, above which not a single significant investment should be made without very compelling reasons. A serious and sustained institutionalised effort is needed to analyse and publish every significant aid programme’s return-on-investment, monitoring and evaluating it both during implementation and in the longer run. Ian Mitchell of the Center for Global Development proposes that any bilateral aid programme above £10m should be disallowed unless this kind of analysis has been done and the proposed investment passes the relevant tests (Mitchell, 2017). He also proposes the establishment of NIDE, a National Institute for Development Effectiveness, to play this role on behalf of the UK.\n\n\nNIDE: NICE for aid?\n\nModelled along the lines of the National Institute for Health and Care Excellence (NICE), NIDE would be an independent public body accountable to the Government, whose function would be to assess the value-for-money of overseas development assistance on behalf of DFID and other relevant agencies. NIDE would evaluate the value-for-money of investing in aid not only for bilateral programmes, but also for monies spent through multilateral agencies – in health aid, such agencies include the Global Fund for AIDS, TB and Malaria, UNITAID, and Gavi – where the opportunities for efficiencies, given the bulk purchasing of commodities on behalf of a large part of the world, are significant. 
Indeed, Her Majesty’s Government made a start in this direction with last September’s Performance Agreement with the Global Fund: “Through our membership of the Board of the Global Fund, the UK will work to strengthen independent advice and scrutiny of the Global Fund to ensure that it is following best practice in seeking value for money.” (Department for International Development and The Global Fund to Fight Aids, Tuberculosis and Malaria, 2016). What are missing, and what a NICE-for-aid could provide, are value-for-money indicators and valid, reliable processes for measuring and reporting against them.\n\nNIDE would also determine two other key factors in conjunction with government departments of recipient countries. One is the nature of criteria other than cost-effectiveness to be deployed to inform investment decisions. For health investments, evident candidates include the benefits of financial protection that the investments may enable and the contribution the investment would make to reducing the worst inequalities in health. The other factor is the cost-effectiveness threshold suitable for each client country, set so that when the country eventually absorbs the costs of whatever investments the aid has established (for instance, vaccination programmes), those costs are affordable with domestic resources – and, in the case of health, not set at the absurdly generous thresholds inherited from past WHO priority-setting (Culyer, 2016; Revill et al., 2014). 
NIDE would also need to play a key role in the creation of training and capacity-building programmes – targeting a broad spectrum of stakeholders ranging from policymakers and technical officers to researchers and the like – programmes preferably located in recipient countries, and preferably ones that also train trainers, thereby avoiding as far as possible excessive ongoing dependence on the UK and other high-income countries (Li et al., 2017).\n\nThe “NICE-for-aid” idea is not new for public policy, at least not in the UK. The What Works Centres claim “a world first” for any government to have “…taken a national approach to prioritising the use of evidence in decision-making.” (Ruiz & Breckon, 2014). What Works Centres span health (NICE is part of the What Works network), crime, education and ageing, but do not (yet) cover development. The NIDE proposal simply extends this world innovation in public policy to overseas development assistance. This ought to win over domestic aid sceptics by championing strong governance and institutions in recipient countries alongside rigorous value-for-money assessments – assessments using world-class skills, which the UK prides itself on and has in abundance (Hasan et al., 2015).\n\nNIDE must not be a patronising or culturally imposing exercise. The Bill and Melinda Gates Foundation (BMGF), Rockefeller and DFID already fund the international Decision Support Initiative (iDSI) based at Imperial College. This is a multinational, multiprofessional and multidisciplinary initiative aiming at improving health resource allocation decisions in low- and middle-income countries (LMICs) using evidence of comparative clinical and cost-effectiveness (Chalkidou et al., 2017). It operates by strengthening local capacities for decision-making and priority-setting, by promoting best practice in analytical and statistical methods, and by sharing experiences and results actively through its expanding network. 
iDSI works with the national governments or national health insurers of India, South Africa, Ghana, Indonesia, Vietnam and China to improve value-for-money in healthcare investments. The World Bank is coordinating a Joint Learning Network for Universal Health Coverage, which is somewhat broader in scope and has recently launched an efficiency collaborative with similar objectives to those of iDSI. And whilst demand from governments for such support is less evident in low-income economies, funding channels, such as the Global Fund and UNITAID, can play this role. In its market-shaping strategy, the Global Fund commits to: “…proactively engage with recipients to share relevant analyses and information about likely product costs and comparative health technology assessments…the GF Secretariat...will connect recipients with these resources to inform country-driven health technology assessment. Engaging in this process can also be an opportunity to build country capacity for health technology assessment and how to incorporate this into product selection decisions.” (Kanpirom et al., 2017).\n\n\nNICE for aid: Setting it up\n\nSetting up a NICE-for-aid would hardly be an easy task. There are challenges in setting the scope (health, education, other areas of social policy, commodities or services, etc.) and in defining (even inventing) the methods to be followed for evaluating aid investment projects both ex ante and, perhaps, ex post in monitoring and evaluation, where perhaps much of the ODA Research Council money could be channelled.\n\nOn scope, healthcare spending is an obvious starting point given the global drive for universal health coverage – Target 3.8 of the Sustainable Development Goals. 
Within healthcare, NIDE should seek a broad agreement on the value of evidence and the principles according to which evidence would be appraised; on the principles for determining value-for-money (economic efficiency); on suitable outcome measures and their appropriate uses; and on the principles to be used in assessing ethical and distributional issues and their integration into Health Technology Assessment (Chalkidou et al., 2016). NIDE should also make an inventory of the evidence-based policy expertise – essentially expertise in evidence generation, governance, and policy – that the UK has built up through the Cochrane Collaboration, more than 15 years’ experience of NICE, and two decades of Health Technology Assessment. NIDE should also liaise with the National Institute for Health Research and the Research Councils to develop a research programme that addresses important unanswered but researchable questions – ones whose answers would enhance NIDE’s effectiveness.\n\nOn methods, it makes sense to start by articulating a Reference Case – that is, an agreed “gold standard” for conducting and reporting economic analyses – to drive better economic evaluations. iDSI already has a Reference Case for health economic evaluations in low- and middle-income countries (Wilkinson et al., 2016). A Reference Case would have two main tasks: first to list the essential characteristics of competent research and research reporting, and second to list those specific contextual matters that can be resolved only in-country. NIDE could usefully revisit the current DFID definition of Value for Money (which has little economic content) and its 4Es framework, which confusingly comprises three Es (economy, efficiency and effectiveness) plus a CE for cost-effectiveness, yet inexplicably discusses neither opportunity costs nor allocative efficiency (Department for International Development, 2011).\n\nNIDE should avoid taking a blinkered approach on the scope of economic and epidemiological analyses. 
They provide useful tools for looking at countries transitioning away from aid, to inform, for instance, appropriate co-financing levels to maximise return-on-investment. Recent work has advanced the field of efficiency and performance measurement of whole healthcare systems (Smith & Yip, 2016), moving well beyond the NICE approach of assessing only individual pharmaceutical products to include the analysis of delivery platforms, health system strengthening interventions and human resource constraints (van Baal et al., 2017; Morton et al., 2016). This is not easy – a recent attempt at dealing with a DFID programmatic intervention on maternal and child health in Nigeria as a technology whose value could be assessed against the next best thing proved to be incredibly hard (Jones, 2017). Hard – but not impossible. Hard – but in fact essential if such investments are not to waste aid money. At the very least, the Nigeria study threw up areas where more research, empirical or methodological, or data collection exercises, or investment in skills were needed.\n\nData are another important prerequisite for systematically assessing the value for money of aid interventions. Systematic assessment is not only valuable for the substantive implications it has for particular policy choices; it also identifies data gaps and focuses attention on future data collection priorities amongst types of information that would make a difference in future investment decisions. 
Needed are data on unit costs (price transparency is one important element of this) and resource use (how much does it cost the National Health Insurance of Ghana to treat a stroke?); on cost incidence (how much of the full cost falls on private individuals); on individual and social preferences (reflecting local cultural and religious realities); on risks (incidence data for most conditions, for example, are absent for most Sub-Saharan African countries) and effect sizes (inevitably based on pragmatic research carried out in an in-country context (BOLDER Research Group, 2016)); and on the treatment of residual uncertainty (for example, where data are absent and resort has to be had to modelling and “expert opinion”). Such a targeted, decision-needs-driven approach to data generation, ideally through systems’ own routine mechanisms, need not necessarily be complemented by monster demographic health surveys, but it would make decisions easier to make, the reasons for them clearer, and decision makers more accountable to their populations and their donors. NICE has had that very effect in the National Health Service (NHS): costing data on technologies and services (like reference costs and later diagnosis-related groups) became more readily available as NICE used them for informing real investment decisions.\n\n\nThe opposition?\n\nNIDE is bound to provoke hostility. If it does things wrong, this will be deserved. But let us review a few grounds that would not be good grounds for complaint.\n\nIt will be said that costs ought not to be considered when setting health care priorities (Loewy, 1980). Only need matters. The charge is deeply wrong because it is inconsistent. It is exactly because needs matter that NIDE must consider the needs not being met when a priority requires that resources go elsewhere. These chosen unmet needs are the true costs. They are lost health. 
They must be considered, and should be minimised (and must therefore be measured).\n\nIt will be said that people’s willingness to pay ought not to determine the priorities in a public health insurance system because of the huge inequalities in abilities to pay in most LMICs. Indeed, individual willingness ought not to be the determinant. But explicit collective willingness to pay is essential; it is most handily expressed by a cost-effectiveness threshold, and NIDE will have to help countries to decide what this should be. The WHO’s threshold of 1–3 times GDP per capita for health, until its quiet withdrawal by WHO sometime in 2016/17, may well have caused more harm than good, being overly generous and not nearly country-specific enough to inform meaningful local spending decisions (Revill et al., 2014; Woods et al., 2016). Here is an issue over which one size definitely does not fit all. In particular, setting a threshold that is too high ensures that it becomes impossible to implement a cost-effective programme of care. It ensures that services recommended on its basis are unaffordable.\n\nSome will rail against economics and economists, attributing to them an indifference to inequities and uncritical worship of market solutions. Major figures in global development have attributed inequalities in healthcare outcomes and access to economics and economists. “Value for money”, “sustainability”, a “minimum healthcare package” and “limited resources” are deemed too un-aspirational, depriving the poor in developing countries of services they need, in the name of efficiency (Paul Farmer, Who Lives and Who Dies; Farmer, 2015). Senior WHO officials recently declared any efforts to value the benefits of the latest expensive on-patent drugs, or “the value of life”, simply “unfeasible”. 
And economists are accused of “institutionalising inequality” and being collectively against “access free at the point of delivery”, which “kills the soul of an economist” (Richard Horton, quoted in The Atlantic; Meyer, 2013). There are of course some economists who (still) believe in healthcare markets, consumer supremacy and prices as the best proxies for people’s preferences, all ill-suited to healthcare and most public policy, but these economists are not amongst those who published the Economists’ Declaration on UHC (Summers, 2015). True, NIDE must choose its economists well!\n\nNone of these objections should stop the UK from improving the effectiveness of aid spending, starting with its own investment and building on the firm principle that we seek to maximise the impact of our aid money on the health of the people we choose to help. If DFID were to create a NICE-for-aid it would at a stroke improve purchasers’ ability to buy effectively at all levels: from the way DFID spends its own money, through bilateral country programmes or organisations such as the Global Fund, UNITAID and Gavi, as well as the WHO and the World Bank, to influencing how the likes of the Global Fund or World Bank trust funds contract directly with product manufacturers or pass money on to countries for them to buy from service providers or product manufacturers. Such an approach would give new meaning to “strategic purchasing” or “evidence-informed procurement”. It could also inform upstream investment decisions made by the likes of DFID’s CDC, the UK’s private investment arm, active in South Asia and Sub-Saharan Africa.\n\n\nWhat would it cost?\n\nIt has been estimated that the HTA programme in the UK has much more than paid for itself. 
Implementing just ten reports of the hundreds produced, for only one year, and conservatively assuming a yield of just 12% of the total benefit assumed in the HTA analyses, generates enough value to cover the running of the whole HTA programme for 20 years (Guthrie et al., 2016). That is surely a return-on-investment to die for. NICE, one of the most expensive of HTA agencies in the world, costs less than 0.05% of the NHS budget. A review of priority-setting institutions around the world found that those countries that do have them spend less than 1/1000 of their resources on them (Glassman et al., 2012). Spending less than 0.1% of the total health care budget on deciding how to spend the remaining 99.9% more wisely, improving outcomes and access, and building the needed local technical and administrative capacity in the process, is surely the definition of a good investment.\n\n\n…and without a NICE-for-aid?\n\nWhat is the alternative? The risk is that the UK’s (and possibly USA’s) aid spending becomes increasingly vulnerable to sceptics, leading to further budget cuts, and further fragmentation between spending channels across government departments. The “business as usual” approach would rule: serving short-term objectives rather than maximising long-term impact on reducing poverty, and jeopardising progress toward valuable development goals, such as universal health coverage. Without the work a NIDE could do, countries transitioning away from aid will become more at risk of regressing back to being aid-dependent states, the local capacity for making good decisions will remain unbuilt, and the world risks losing hard-won gains towards sustainable global development (Glassman & Temin, 2016). It can be done. Let’s do it!",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis work was supported by a grant from the Bill and Melinda Gates Foundation [OPP1134345] and the UK’s Department for International Development.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nReferences\n\nBOLDER Research Group: Better Outcomes through Learning, Data, Engagement, and Research (BOLDER) – a system for improving evidence and clinical practice in low and middle income countries [version 1; referees: 2 approved]. F1000Res. 2016; 5: 693. PubMed Abstract | Publisher Full Text | Free Full Text\n\nChalkidou K, Glassman A, Marten R, et al.: Priority-setting for achieving universal health coverage. Bull World Health Organ. 2016; 94(6): 462–467. PubMed Abstract | Publisher Full Text | Free Full Text\n\nChalkidou K, Li R, Culyer AJ, et al.: Health Technology Assessment: Global Advocacy and Local Realities; Comment on “Priority Setting for Universal Health Coverage: We Need Evidence-Informed Deliberative Processes, Not Just More Evidence on Cost-Effectiveness.” Int J Health Policy Manag. 2017; 6(4); 233–236. Publisher Full Text\n\nCulyer AJ: Cost-effectiveness thresholds in health care: a bookshelf guide to their meaning and use. Health Econ Policy Law. 2016; 11(4): 415–32. PubMed Abstract | Publisher Full Text\n\nDepartment for International Development: DFID’s Approach to Value for Money (VfM). Department for International Development. 2011. Reference Source\n\nDepartment for International Development and The Global Fund to Fight Aids, Tuberculosis and Malaria: Performance Agreement. United Kingdom and The Global Fund to Fight Aids, Tuberculosis and Malaria. 2016. Reference Source\n\nFarmer P: Who Lives and Who Dies: LRB 5 February 2015. [Online]. 2015; 37(3): 17–20, [Accessed: 6 June 2017]. 
Reference Source\n\nGlassman A, Chalkidou K: Priority-Setting in Health: Building Institutions for Smarter Public Spending. 2012. Reference Source\n\nGlassman A, Chalkidou K, Giedion U, et al.: Priority-setting institutions in health: recommendations from a center for global development working group. Global Heart. 2012; 7(1): 13–34. PubMed Abstract | Publisher Full Text\n\nGlassman A, Kenny C: In Health Spending, Middle-Income Countries Face a Priorities Ditch, Not a Financing Ditch – But That Still Merits Aid | Center For Global Development. [Online]. 2015, [Accessed: 5 June 2017]. Reference Source\n\nGlassman A, Temin M: Millions Saved: New Cases of Proven Success in Global Health. Center for Global Development. 2016. Reference Source\n\nGuthrie S, Hafner M, Bienkowska-Gibbs T, et al.: Returns on Research Funded Under the NIHR Health Technology Assessment (HTA) Programme: Economic Analysis and Case Studies. Rand Health Q. 2016; 5(4): 5. PubMed Abstract | Free Full Text\n\nHasan N, Curran S, Jhass A, et al.: The UK’s strong contribution to health globally. Lancet. 2015; 386(9989): 117–118. PubMed Abstract | Publisher Full Text\n\nJones A: Integration of iDSI’s Reference Case principles for economic evaluation and DFID’s approach to value for money analysis. Opportunities and challenges. F1000Res. 2017. Publisher Full Text\n\nKanpirom K, Luz AC, Chalkidou K, et al.: How Should Global Fund Use Value-for-Money Information to Sustain its Investments in Graduating Countries? Int J Health Policy Manag. 2017; 6: 1–5. Reference Source\n\nKonyndyk J: Our First Peek at Trump’s Aid Budget: Big Changes, but Will Congress Play Along? Center For Global Development [Online]. 2017, [Accessed: 6 June 2017]. Reference Source\n\nLi R, Ruiz F, Culyer AJ, et al.: Evidence-informed capacity building for setting health priorities in low- and middle-income countries: A framework and recommendations for further research [version 1; referees: 2 approved]. F1000Res. 2017; 6: 231. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nLoewy EH: Cost should not be a factor in medical care. N Engl J Med. 1980; 302(12): 697. PubMed Abstract | Publisher Full Text\n\nMeyer R: Is Economics “The Biggest Fraud Ever Perpetrated on the World?” The Atlantic [Online]. 2013, [Accessed: 6 June 2017]. Reference Source\n\nMitchell I: What a UK Election Manifesto on Development Might Look Like: 19 Proposals from CGD. Center For Global Development [Online]. 2017, [Accessed: 5 June 2017]. Reference Source\n\nMorton A, Thomas R, Smith PC: Decision rules for allocation of finances to health systems strengthening. J Health Econ. 2016; 49: 97–108. PubMed Abstract | Publisher Full Text\n\nOECD: Tackling wasteful spending on health. OECD Publishing; 2017. Reference Source\n\nRevill P, Asaria M, Phillips A, et al.: WHO Decides What is Fair? International HIV Treatment Guidelines, Social Value Judgements and Equitable Provision of Lifesaving Antiretroviral Therapy. CHE Research Paper 99, 2014. Reference Source\n\nRuiz F, Breckon J: The NICE Way: Lessons for Social Policy and Practice from the National Institute for Health and Care Excellence. Alliance for Useful Evidence. 2014. Reference Source\n\nSmith PC, Yip W: The economics of health system design. Oxf Rev Econ Pol. 2016; 32(1): 21–40. Publisher Full Text\n\nSoucat A: Building institutions for an effective transition towards UHC. Sustainability and Transition Meeting. 2017. Reference Source\n\nSummers LH: Economists’ declaration on universal health coverage. Lancet. 2015; 386(10008): 2112–2113. PubMed Abstract | Publisher Full Text\n\nvan Baal P, Thongkong N, Severens JL: Human resource constraints and the methods of economic evaluation of health care technologies [version 1; not peer reviewed]. F1000Res. 2017; 6: 468. Publisher Full Text\n\nWilkinson T, Sculpher MJ, Claxton K, et al.: The International Decision Support Initiative Reference Case for Economic Evaluation: An Aid to Thought. Value Health. 
2016; 19(8): 921–928. PubMed Abstract | Publisher Full Text\n\nWoods B, Revill P, Sculpher M, et al.: Country-Level Cost-Effectiveness Thresholds: Initial Estimates and the Need for Further Research. Value Health. 2016; 19(8): 929–935. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWorld Health Organization: Health Systems Financing: The Path to Universal Coverage. 2010. Reference Source"
}
|
[
{
"id": "24492",
"date": "07 Aug 2017",
"name": "Samantha A. Hollingworth",
"expertise": [
"Reviewer Expertise Health technology assessment and health services research"
],
"suggestion": "Approved",
"report": "Approved\n\nThis interesting paper argues the case for an agency potentially named the National Institute for Development Effectiveness (NIDE) - like a NICE for global health spending by bilateral and multilateral aid agencies. The authors are well placed to propose this agency given their wealth of experience in this area. The paper outlines the need, the proposed mechanism, aspects of establishment, and requirements (e.g. methods and data). The authors pre-emptively outline some potential arguments opposing NIDE and posit the alternative – what would happen if there was no NIDE? They also outline the anticipated costs.\n\nThe authors note that this is not a new idea for the UK (p4) and mention the UK Government’s What Works initiative (some aspects of social policy) and NICE (health technology assessment). It might be useful to hear more about what other countries or multilateral agencies may have done in this area, especially with regard to establishment, methods, and data sources.\n\nIt appears that health technology assessment would be a major role for NIDE but it may be worth also considering the costs of scale-up and implementing interventions (and not only budget impact analysis).\n\nThe authors anticipate some hostility. They mention several aspects but this is based on selected challenges once NIDE would be established. The authors may want to comment on one or two reasons that some may oppose even the establishment of NIDE. 
There may be several (and possibly competing) imperatives for some particular ‘aid’ projects funded by government(s). Concerning the negative arguments about ‘economics’, one can point to NICE itself for the benefits of ‘economics’!\n\nThe authors helpfully outline what NIDE might cost but it would be instructive to know the amount of money spent on aid – at least by the UK. Indeed, the authors hint at the excellent anticipated return on investment for NIDE - allocation efficiency for aid!\n\nSome particular comments:\nP3, col 2 - The authors pose some good information requirements [“What are missing, and what a NICE-for-aid could provide, are value-for-money indicators and valid, reliable processes for measuring and reporting against them.”], but it would be helpful to have some more detail on these vital components. P4 col 1 - Could mention that the UK government funds the ‘What works’ initiative (for non-UK audiences). P4 col 2 - Please define the three Es from the 3E framework. P5 col 1 - The use of the term ‘incidence’ (presumably of disease or condition) is unclear, given its example ‘(how much of the full cost falls on private individuals)’, which seems related to funding sources.\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Yes\n\nAre arguments sufficiently supported by evidence from the published literature? Yes\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Yes",
"responses": []
},
{
"id": "24491",
"date": "15 Aug 2017",
"name": "Daniel A. Ollendorf",
"expertise": [
"Reviewer Expertise Health technology assessment",
"health economics",
"real world evidence"
],
"suggestion": "Approved",
"report": "Approved\n\nThis is a thought-provoking piece advocating for a quasi-governmental body that uses evidence-based principles to ensure the efficient, cost-effective, and equitable use of donor funds for global health programming directed at low- and middle-income countries. The authors have a wealth of experience in managing the use of international aid monies in the developing world, and have been heavily involved in training and implementation of health technology methods and decision support techniques in those settings.\n\nThe authors' work with iDSI is instructive, but readers will benefit from specific examples of the change brought about by the organization's activities. For example, has an evaluation been performed to compare decision-making processes before and after an iDSI \"intervention\" in India, Ghana, etc.? I've been recently made aware of an \"affordable cancer drugs list\" for India - was this an iDSI effort and how has its performance contrasted with prior approaches?\nSpecific comments are also provided below:\nAbstract: just a minor grammatical correction. \"...is channeled\" should be \"are channeled.\" Pg 3, column 1, paragraph 3: Proposed cuts to USAID and UK commitments certainly reflect skepticism around return-on-investment, but they also likely reflect the presence of nationalist politics, and this should be mentioned. 
Pg 3, column 2, paragraph 3: This seems an opportune place to introduce budgetary impact analysis as a companion effort to cost-effectiveness and related efforts. While successful empiric approaches to setting WTP thresholds will explicitly consider health budgets, BIA can be flexibly defined to address contingencies common in the developing world (e.g., regime change, changes in aid vs. domestic funding balance, etc.). Pg 4, column 1, paragraph 4: Scope challenges are appropriately noted, and I agree that health spending is a natural starting point. However, it would be useful to see some context around spending challenges across sectors in the developing vs. developed world. Page 5, column 1, paragraph 1: It's very difficult for me to disentangle the very real data needs mentioned from the reality of trying to collect data in these settings. An example of a data need that was simply and efficiently addressed in an LMIC would be helpful. The example of NICE's impact is less relevant for me given the resources available in the UK. Page 5, column 2, paragraph 3: It may be premature to do this, but ROI is both about paying for and sustaining NIDE's activities AND reducing wasteful aid spending. Some estimate of wasteful spending in the UK, and what percentage reduction would essentially pay for NIDE and increase access to health services in some number of countries would be helpful, even if back-of-the-envelope.\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Yes\n\nAre arguments sufficiently supported by evidence from the published literature? Yes\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-1223
|
https://f1000research.com/articles/6-1221/v1
|
25 Jul 17
|
{
"type": "Research Article",
"title": "The relationship between altmetric score with received citations in Iranian pediatrics articles",
"authors": [
"Leila Nemati-Anaraki",
"Hamed Aghajani Koupaei",
"Mohammadreza Alibeyk",
"Leila Nemati-Anaraki",
"Mohammadreza Alibeyk"
],
"abstract": "Background: Today, in addition to citations and with the expansion of social media, the use of altmetrics has gained attention as a tool necessary for evaluating the effects of scientific publications. The present study intended to monitor Iranian pediatrics articles, as one of the leading areas of scientific publications in Iran, between the years 2010-2016 using altmetrics and citation-metrics, and then evaluate the relationship between the altmetric score and number of received citations. Methods: This is a practical study of the analytical descriptive type and the research methodology is scientometrics. This research included 1332 research articles, review articles and conference articles in the field of pediatrics from Iran during 2010-2016, published in the Web of Science. Authors, year, journal and social media was determined in these articles. Data analysis was carried out using SPSS21 software and descriptive and inferential statistics (Kolmogorov-Smirnov test and Spearman correlation). Results: A total of 1138 articles have citations and 256 articles had altmetric activity. The results indicate a significant correlation among the articles’ altmetric scores and number of received citations. Among the data sources of altmetric score, mentions of articles in Mendeley, Twitter, and Facebook had the highest ranking. The number of times an article was read in Mendeley had a significant correlation with the number of citations. Conclusions: It seems that altmetrics better represent the impact of newer articles, while older articles had received more citations. In addition, a high number of reads in Mendeley correlates with received citations. However, Mendeley reads do not involve altmetric score calculation algorithms, and this should be implemented in the future.",
"keywords": [
"Pediatrics",
"Articles",
"Scientometrcis",
"Altmetric",
"Citation",
"Iran"
],
"content": "Introduction\n\nEvaluating the impact of scientific publications is carried out using various methods. “Scientific citations” are published for different sources, such as journals, books, articles, conference abstracts, and patents, and have been one of the most important evaluation tools for the impact of scientific publications. However, even though citations have various advantages, relying on the number of scientific citations can have disadvantages and there are various criticisms surrounding them. For example:\n\nThe long waiting time for review and publication results in the content becoming out of date, leading to a delay in scientific citations to those sources and hence analysis of the impact of the research;\n\nMany of the evaluation methods based on scientific citations, such as impact factor are actually indices for journal evaluation and not the articles (Riyahi et al., 1995);\n\nThe type and field of citations is often not evaluated and it is not possible to become aware of the citation motives. For example, citations might be given to criticize or reject a certain theory;\n\nThe significance of the citing source in citation-metrics is not considered for most indices. For example, the value of citing a source in a letter to the editor, or a review article or research article is not the same;\n\nWhile the review process, publication and citations of texts is very lengthy, many people prefer to use unofficial sources, such as online sources in order to publish the initial results of their research, which are not included in scientific citations (Wang et al., 2013);\n\nToday, online communication and virtual scientific discussions have obtained a special place in scientific communities. 
Results of numerous research studies are used in these online sources through various means, while it is even possible that no citation is given to them by official sources (Erfanmanesh, 2015).\n\nConsequently, the weaknesses and deficiencies of methods based on citations alone, together with the growing tendency to use the online environment, have led to the appearance of other criteria for scientific evaluation and analysis of the scientific process. Priem et al. state that today methods based on citations are not the only criteria for evaluation because citations only measure visible effects, while nowadays, using Web 2.0 tools, circumstances have been provided for creating new measurement criteria, which make the inconspicuous results of research evident. Priem named it “altmetrics”, or alternative metrics (Priem et al., 2012). Altmetrics are a means for measuring the effect of scientific publications on the web, especially in social media, which has a more expansive audience. It is an easy-access method with better capability for tracing the effects of publications in various forms of media (Piwowar, 2013).\n\nCurrently, Altmetric.com is considered one of the most authoritative databases for calculating the altmetric score of approximately five million articles. Briefly, in order to report the altmetric score, categorize and provide an up-to-date description of articles, this database considers the following sources:\n\nPublic policymaking documents\n\nSocial media, such as Facebook, Twitter, Google Plus, LinkedIn, Sina Weibo, Pinterest\n\nMulti-media online environments, such as YouTube, Reddit, Q&A\n\nWikipedia\n\nBlogs\n\nOnline resource management, e.g. Mendeley\n\nMainstream sources, e.g. 
news agencies and newspapers\n\nTools for highlighting research, such as F1000\n\nReview environments for published articles, such as Pubpeer and Publons\n\nEvaluating the effects of scientific publications through official channels using scientific citations and on the Web using altmetrics each has its advantages and disadvantages. Until now, using methods based on citations and altmetrics in various scientific communities has been discussed to a great extent, and much research has been carried out to evaluate the relationship between these two methods. Since the use of social media has increased during the past few years, article coverage using altmetric scores has also increased during these years, and as a result it is better that scientific citations and altmetric scores complement each other and be evaluated more for recent articles (Costas et al., 2014).\n\nEvaluating the impact of scientific publications with citation- and alt-metric methods has been carried out thus far in various fields (Salajegheh, 2015; Hassan & Gillani, 2016; Costas et al., 2014; Robinson-García et al., 2014). Pediatrics is among the topics of great regard in various countries, since this field, in addition to having an important role in promoting a country’s ranking in the World Health Organization (WHO), assures children’s wellbeing and is part of the country’s development infrastructure. Iranian scientific publication in pediatrics accounts for approximately one percent of the world's pediatrics publications, and also includes more than one percent of all scientific publications in Iran during the past five years in the Web of Science database. 
According to the WHO statistics, Iran’s significant scientific activities and advances in the field of pediatrics have resulted in impressive development and improvement of health indices in this field (http://apps.who.int/gho/data/node.country.country-IRN).\n\nThe present study is an attempt to monitor the impacts of Iranian pediatric articles, as one of the leading fields of scientific publication in Iran, with citation-metric and altmetric methods, and hence obtain the relationship between altmetric score and number of received citations. More importantly, by accurately studying the number of times articles are mentioned in sources and media involved with the altmetric score, their relationship with the number of citations the articles received will be considered, and finally the correlation between the number of articles mentioned in these sources and the altmetric score will be evaluated. The specific objectives were as follows:\n\n1. Determine the number of citations for Iranian pediatrics articles;\n\n2. Determine the altmetric score of Iranian pediatrics articles;\n\n3. Determine the relationship between altmetric score and number of citations received for Iranian pediatrics articles;\n\n4. Determine mentions of Iranian pediatrics articles in altmetric score data sources;\n\n5. Determine the relationship between mentions in altmetric score data sources with number of citations received for Iranian pediatrics articles;\n\n6. Determine the relationship between the number of mentions in altmetric score data sources with the altmetric score itself.\n\n\nMethods\n\nThis is a practical study of an analytical descriptive type, and the research methodology is scientometric. The literature included was research articles, review articles, and conference abstracts (due to their importance in this field and scientometric studies) in the field of pediatrics published from Iran during March 2010 to September 2016, as found by the Web of Science database. 
This time period was used due to the prevalence of social media use during these years.\n\nThe sample included 1332 articles, which had been retrieved with the following procedure:\n\n1. Pediatrics journals (120 titles; Supplementary File 1) were chosen using the Journal Citation Reports of 2015.\n\n2. These journal titles were merged together in an advanced search of the Web of Science database using the OR operator and all of their articles were retrieved.\n\n3. The retrieved articles were limited to the geographical region of Iran, 2010–2016, and type, including research article, review article, and conference abstract.\n\nAltmetric data was found using the altmetric.com database, which requires a unique identifier for articles. The following steps were undertaken for finding the altmetric data of all articles:\n\n1. All 1332 articles were initially added to Endnote and were evaluated using the DOI and PubMed ID. Then, articles that had none of the aforementioned identifiers were searched using the "find reference updates" button in Endnote to complete reference information and obtain their DOI or PMID.\n\n2. In cases where identifiers were not found by this method, searches were done in PubMed, Crossref, doi.org, and Scopus in an attempt to obtain their DOI or PMID.\n\n3. From the 1332 articles, the aforementioned identifiers were obtained for 1138 articles. All 1138 articles having identifiers were searched using the Webometric Analysis software (v.2 beta student) and also the altmetric.com API in order to find the altmetric score and details of the scores. A total of 256 articles had an altmetric score; the citation counts of these 256 articles were then obtained using the HistCite software.\n\nData analysis was performed using SPSS v21 software. 
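The two computational steps in this pipeline, looking up an article's altmetric record by identifier and correlating altmetric scores with citation counts, can be sketched in Python. This is an illustrative reimplementation, not the authors' actual tooling (they used Webometric Analysis, the altmetric.com API, and SPSS): the endpoint pattern and JSON field names follow the public altmetric.com v1 API as I understand it and should be verified against current documentation, the Spearman helper is a plain pure-Python version of the test the authors ran in SPSS, and all data values are invented.

```python
# Sketch of (1) building an altmetric.com v1 API lookup by DOI and reading
# counts out of its JSON payload, and (2) the Spearman rank correlation
# between altmetric scores and citation counts. Endpoint and field names are
# assumptions based on the public v1 API; verify before use.

ALTMETRIC_V1 = "https://api.altmetric.com/v1/doi/{doi}"

def altmetric_url(doi: str) -> str:
    """URL for one article's altmetric record, keyed by DOI."""
    return ALTMETRIC_V1.format(doi=doi)

def extract_counts(payload: dict) -> dict:
    """Pull the score and a few per-source counts from a parsed API response."""
    return {
        "score": payload.get("score", 0.0),
        "twitter": payload.get("cited_by_tweeters_count", 0),
        "facebook": payload.get("cited_by_fbwalls_count", 0),
        "mendeley_readers": payload.get("readers", {}).get("mendeley", 0),
    }

def _average_ranks(values):
    """Rank values from 1..n, giving tied values the average of their ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman correlation = Pearson correlation of the rank vectors."""
    rx, ry = _average_ranks(x), _average_ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return cov / den

# Invented toy data: altmetric scores and citation counts for five articles.
scores = [0.25, 1.0, 3.0, 8.0, 502.0]
cites = [0, 1, 2, 5, 37]
print(spearman_rho(scores, cites))  # both monotonically increasing -> 1.0
```

Spearman is used here (as in the paper) rather than Pearson because both variables are heavily skewed count data, so only the rank ordering is assumed to be meaningful.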
Descriptive and inferential statistics (Kolmogorov-Smirnov test and Spearman correlation) were calculated.\n\n\nResults\n\nThe findings indicate that of the 1332 articles evaluated, a total of 2879 citations were obtained, and the highest citation rate for an article was 37 citations. Among the 1332 articles, 609 articles obtained no citations. In total, 256 articles also had altmetric scores. Of these, the altmetric score ranged from 0.25 to 502. Articles obtained an overall score of 1023, and the altmetric score mean was about 4 points (standard deviation, 31). The reason for this high standard deviation was a single article with an altmetric score of 502. Evaluations indicate a positive correlation (P=0.018) between the altmetric score and the number of citations obtained (Table 1).\n\nThe findings suggest that among the articles, those by Bakhshayesh et al., Ajallouyean et al., and Rezai et al. had the highest number of citations, and articles by Forouzanfar et al., Mirshemirani et al., and Farahani et al. had the highest altmetric score (Table 2 and Table 3).\n\nThe Iranian Journal of Pediatrics, Pediatric Nephrology and Pediatric Research had the highest number of published articles in the field of pediatrics from Iran, with 388, 114, and 87 articles, respectively. The Iranian Journal of Pediatrics, International Journal of Pediatric Otorhinolaryngology and Child's Nervous System had the highest number of citations (762, 196 and 110, respectively) and JAMA Pediatrics, Iranian Journal of Pediatrics, and International Journal of Pediatric Otorhinolaryngology had the highest altmetric scores (502, 124 and 47, respectively).\n\nAs indicated in Table 4, the most articles were published in 2010, 2011 and 2013, with 245, 247 and 227 articles, respectively. 
The highest number of citations was allotted to 2010, 2011 and 2013, and the highest altmetric scores were allotted to 2013, 2015 and 2016, respectively.\n\nBy evaluating the detail pages of the articles’ altmetric score, it was found that 256 articles were mentioned in Mendeley, CiteULike, weblogs, mainstream media, Twitter, Reddit, Facebook, Pinterest, F1000, and Google Plus. The number of mentions of these articles in the reference managers Mendeley and CiteULike (which are not regarded as mentions in social media and are instead called ‘reads’) was 2606 overall, and in social media, including weblogs, mainstream media, Twitter, Facebook, Reddit, Pinterest, F1000, and Google Plus, they were mentioned 918 times overall.\n\nOur findings suggest that among the 1332 articles evaluated, 234 articles were read 2595 times in Mendeley and the maximum number of readings for an article was 124 times. There was a significant relationship between the number of reads in Mendeley and the citations received (R=0.3; P=0.00).\n\nIn CiteULike, eight articles were read 11 times, and the highest reading rate for one article was four times. There was no significant relationship between the number of reads in CiteULike and the number of citations received (R=-0.09; P=0.14).\n\nIn scientific weblogs, 13 articles were mentioned 18 times, and the maximum number of mentions for one article in weblogs was twice. There was no significant relationship between articles mentioned in weblogs and the number of citations received (R=0.19; P=0.52).\n\nIn the mainstream media, eight articles were mentioned 48 times, for which the maximum mentions for one article was 41 times. There was no significant relationship between article mentions in the mainstream media and the number of citations received (R=0.08; P=0.84).\n\nOn Twitter, 222 articles were mentioned 705 times, for which the maximum mentions for one article was 242 times. 
There was no significant relationship between articles mentioned on Twitter and the number of citations received (R=0.1; P=0.7).\n\nOn Facebook, 67 articles were mentioned 136 times, for which the maximum number of mentions for one article was 11 times. There was no significant relationship between articles mentioned on Facebook and the number of citations received (R=0.06; P=0.76).\n\nOverall, the number of articles mentioned was very low in F1000 (four articles), Pinterest (two articles), Reddit (one article), and Google Plus (four articles). Each of these articles was mentioned only once. Thus, measuring their correlation with the number of received citations was disregarded. Table 5 shows the details of articles mentioned in each of the altmetric data sources.\n\nSD, standard deviation.\n\nIn order to evaluate the effectiveness and relationship of each of the data sources in the altmetric score, the correlation between the number of mentions and readers in altmetric data sources and the altmetric score was also evaluated. However, as previously mentioned, social media, including Google Plus, Pinterest, Reddit, and F1000, were eliminated from this correlation test due to the very low number of mentions. Findings indicate that the number of mentions in Mendeley, weblogs, Twitter, and Facebook have a positive relationship with altmetric score (R>0; P<0.05) (Table 6).\n\nFinally, with regard to the capabilities of altmetric.com in presenting the ranking of an article among other published articles in a journal with consideration of the altmetric score, this study attempted to divide the position of each published article by the total number of articles published in that journal, in order to obtain the ranking of Iranian pediatrics articles. The findings indicate that the highest-ranking articles of Iran are among the top one percent of the best articles and the lowest ranking was the last article. 
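The journal-ranking calculation described above, an article's position among its journal's articles divided by the journal's total number of articles, can be sketched in a few lines. This is an illustration of the arithmetic only; the convention that position 1 means the highest altmetric score in the journal is an assumption, and the example values are invented.

```python
# Sketch of the per-journal ranking fraction: an article's position within
# its journal (assumed: 1 = highest altmetric score) divided by the number
# of articles the journal published. Values near 0 are top articles; values
# near 1 are bottom articles, so a mean of 0.43 sits in the upper half.

def rank_fraction(position: int, journal_total: int) -> float:
    return position / journal_total

def mean_rank_fraction(pairs):
    """Average ranking fraction over (position, journal_total) pairs."""
    return sum(rank_fraction(p, t) for p, t in pairs) / len(pairs)

# Invented examples: an article ranked 1st of 100 sits in the top 1%.
print(rank_fraction(1, 100))                   # 0.01
print(mean_rank_fraction([(1, 10), (9, 10)]))  # 0.5
```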
On average, published articles from Iran in the field of pediatrics hold a ranking of 0.43 (upper half of articles).\n\n\nDiscussion\n\nArticles published from Iran were about 1% of the published articles in the field of pediatrics in the world, and regarding the population of Iran (80 million; about 1% of the world population) this seems to be an adequate publication quantity. Regarding the first research question, on received citations for Iranian pediatrics articles, our findings indicate that 1332 articles received 2879 citations (on average two citations), while over 40% of the articles received no citations, which suggests the unbalanced quality of articles. The findings indicate that among the 1332 articles, only 256 had altmetric scores. The low number of articles with altmetric scores may be due to lack of coverage by the Altmetric.com database for many of the articles, such as those that did not have unique identifiers, coverage of only preferred and not all social media, and also difficulty in tracing and monitoring the impacts of the articles, due to the high amount of information found on the Internet.\n\nThe present findings indicate that there is a direct correlation between the altmetric score and number of received citations for articles. Therefore, it seems that altmetrics can also be considered as a tool alongside scientific citations for the evaluation of scientific publication impacts. The correlation between citations and altmetric scores has been previously confirmed in various research, for example by Costas et al. (2014); Hassan & Gillani (2016); Robinson-García et al. (2014), and Salajegheh (2015).\n\nThe three top articles with the highest number of citations were all written with the cooperation of more than five authors; two articles had international cooperation. The article that obtained half of the total altmetric score (502 of the 1024 total) was written with the cooperation of authors from around the world. 
Therefore, it seems that cooperative writing, especially international cooperation, can have a significant effect on the impact of scientific publications in the field of pediatrics in Iran. In a study by Erfanmanesh (2015), approximately half of the articles with the highest altmetric score were articles with international cooperation. Among the top ten articles with regard to the number of received citations and the obtained altmetric score, only one article appeared in both lists.\n\nThe Iranian Journal of Pediatrics, Pediatric Nephrology and Pediatric Research had the highest number of publications; however, the most citations were obtained by the Iranian Journal of Pediatrics, International Journal of Pediatric Otorhinolaryngology, and Child's Nervous System. JAMA Pediatrics, Iranian Journal of Pediatrics, and International Journal of Pediatric Otorhinolaryngology were also the top journals regarding altmetric score. That the Iranian Journal of Pediatrics and International Journal of Pediatric Otorhinolaryngology were the top journals for both the number of citations and the altmetric score could be a confirmation of the relationship between citations and altmetric score. On the other hand, with regard to the second- and third-ranked journals, none of these were among the five journals receiving the highest number of citations and altmetric score, and it seems that Iranian authors need to put greater effort into choosing appropriate journals to publish their works in the field of pediatrics.\n\nCalculating the mean number of citations to articles in each year shows that the years of 2010, 2011 and 2012 had the highest mean number of citations. In contrast, the highest altmetric scores obtained are related to more recent years, including 2016, 2013, and 2015. 
With regard to the fact that in the evaluated years, earlier years had higher citations and more recent years had higher altmetric scores, it could be stated that contrary to receiving citations, which is a long and time-consuming process, new publications have obtained higher altmetric scores. This issue can be due to the greater use of social media during recent years. On the other hand, it could be stated that new articles are discussed and cited in social media at a faster pace after publication and up-to-date topics have more advocates in social media. This issue was also discussed in the study by Erfanmanesh (2015). Therefore, regardless of citations, which are a better indication of the effectiveness of older articles, evaluation of new scientific publications may be better specified by altmetric score. Regarding the relationship between altmetric score and citations, which has been shown in this and previous studies, the higher altmetric scores obtained by new publications could offer a new and better outlook on the potential of these articles to obtain citations in the future; this would, however, require further investigation.\n\nFindings indicate that pediatrics articles in Iran have been mentioned in 10 media platforms, including Mendeley, CiteULike, weblogs, mainstream media, Twitter, Reddit, Facebook, Pinterest, F1000 and Google Plus. Among these, the reads of articles in reference manager resources (Mendeley and CiteULike) are over two and a half times the mentions of articles in social media. The pediatrics articles from Iran were mentioned 918 times in social media, including Twitter, Facebook, weblogs, mainstream media, Pinterest, Reddit, Google Plus, and F1000. Over all social media platforms, Twitter alone had 76% of the mentions. In previous studies, the number of reads in Mendeley and mentions in Twitter were higher than the other sources (Araújo et al., 2015; Erfanmanesh, 2015; Hassan & Gillani, 2016; Robinson-García et al., 2014; Zahedi et al., 2014). 
The number of mentions of articles in Pinterest, Reddit, Google Plus and F1000 was very low, so the evaluation of their relationship with scientific citations was disregarded. In a study by Thelwall et al. (2013), the same instance occurred for Google Plus and Reddit.\n\nRegarding the relationship between the data sources used in the altmetric score and received citations in Iranian pediatrics articles, findings show that the number of readers in Mendeley and the number of scientific citations obtained had a direct and significant relationship. In studies by Asadi et al. (2014); Bar-Ilan et al. (2012); Erfanmanesh (2015); Haustein et al. (2014); Li et al. (2012), and Zahedi (2014), the relationship between the number of Mendeley reads and citations was confirmed. However, in the present study there was no significant relationship observed between the readers of CiteULike and scientific citations. In studies by Asadi et al. (2014); Bar-Ilan et al. (2012), and Li et al. (2012), the relationship between readers of CiteULike and number of citations was significant.\n\nThe altmetric.com database does not include the number of Mendeley reads in the altmetric score calculation algorithm because this database cannot access user profiles in Mendeley, and user tendency towards a specific author, journal, publisher or organization in Mendeley is not clear for this database (https://www.altmetric.com/about-our-data/the-donut-and-score/). In addition, it seems that, with regard to the direct correlation between the number of readers of articles in Mendeley and received citations, which was also confirmed in previous studies (Asadi et al., 2014; Bar-Ilan et al., 2012; Erfanmanesh, 2015; Haustein et al., 2014; Li et al., 2012; Zahedi, 2014), in order to show the potential of Mendeley reads to represent the article’s impact, it would be better to include them in the altmetric score calculation algorithm. 
It should be considered that, with a high number of users and article reads in Mendeley, the effect of any individual user on the general trend of an article is quite difficult to measure.\n\nA significant relationship was not observed between the number of mentions in the evaluated social media and the number of citations. Finally, for certainty, further investigation showed that there was also no meaningful relationship between the total mentions of articles in all social media and the number of citations. In a study by Xia et al. (2016), a significant relationship was found between the number of citations and mentions on Twitter. In addition, in a study by Thelwall et al. (2013), there was a significant relationship between citations and articles on Facebook, weblogs, and mainstream media.\n\nIn order to further evaluate the correlation, the relationship between the number of mentions in social media and altmetric scores was also evaluated, and it was found that the number of mentions on Twitter, Facebook, weblogs, mainstream media and Mendeley had a direct and significant relationship with the altmetric score. This may confirm the fact that the general algorithm created for the altmetric score in the altmetric.com database is accurate. 
However, this issue definitely requires further investigation at different points in time, due to the extensive area of social media, its increasing growth, and changes in society’s approval of it.\n\nComparing the ranking of Iranian pediatrics articles to other articles published by the same journals, the ranking of 0.43 for Iranian articles indicates that articles published from Iran are in the upper half of articles in pediatrics journals.\n\n\nConclusion\n\nToday, with the increasing orientation of the public and researchers to social media, due to easy access, a wide-ranging audience, expansive relationships with other specialists, and sharing of scientific publications and related discussions, disregarding the effects of the web domain in the evaluation of scientific publications does not seem to be correct. Based on the current research findings related to the direct relationship between altmetric score and scientific citations, as also seen in previous research, it seems that alongside scientific citations, altmetrics can also assist in better evaluation of the impact of scientific publications, especially for newer publications. Altmetrics cover various indices and a wide range of social media and are updated at a higher speed. However, scientific citations are time-consuming and they are affected by the publishing source; thus it seems that evaluation of a new document, and also of articles that are published in unfamiliar journals, such as journals not indexed in authoritative databases, is better performed with altmetrics. On the other hand, the expansive realm of social media makes it much more difficult to manipulate the altmetric score than the number of citations.\n\nIn this and previous research, Mendeley among reference managers, and Twitter and Facebook among social media, have the greatest share of article mentions. 
According to this research and previous studies, the number of readers, especially in Mendeley, has a direct relationship with the number of citations, and it would be better for these reads to be included in the altmetric score calculation algorithm. The relationship between mentions in the social media involved in the altmetric score and the number of citations also requires more extensive evaluation.\n\nFinally, by considering the extensive realm of Web 2.0, its massive amount of data, the quick growth of social media, the appearance of new facilities and new social media, and changes in people’s orientation, presenting an appropriate strategy for calculating the altmetric score requires extensive research and should be kept up-to-date.\n\n\nData availability\n\nDataset 1: CSV file with article titles, authors, years, journals, PMID, DOI, Mendeley readers, CiteULike readers, and Twitter, Facebook, Blog, Wikipedia, Google Plus, Pinterest, and F1000 mentions. doi: 10.5256/f1000research.12020.d168908 (Nemati-Anaraki et al., 2017).",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nSupplementary material\n\nSupplementary File 1: List of pediatric journals in Journal Citation Reports 2015.\n\nClick here to access the data.\n\n\nReferences\n\nAjallouyean M, Amirsalari S, Yousefi J, et al.: A repot of surgical complications in a series of 262 consecutive pediatric cochlear implantations in Iran. Iran J Pediatr. 2011; 21(4): 455–60. PubMed Abstract | Free Full Text\n\nAraújo RF, Murakami TR, De Lara JL, et al.: Does the Global South have altmetrics? Analyzing a Brazilian LIS journal. In Proceedings of ISSI 2015-15th Intl conf of the Internafional Society for Scientometrics and Informetrics. 2015; 111–112. Reference Source\n\nAsadi H, Naghshineh N, Nazari M: The study of scientific social networks as a replace tools or supplement in the assessment of Iranian researchers. Scientometric Bull. 2014; 1(2).\n\nBakhshayesh AR, Hänsch S, Wyschkon A, et al.: Neurofeedback in ADHD: a single-blind randomized controlled trial. Eur Child Adolesc Psychiatry. 2011; 20(9): 481–91. PubMed Abstract | Publisher Full Text\n\nBar-Ilan J, Haustein S, Peters I, et al.: Beyond citations: Scholars' visibility on the social Web. arXiv preprint arXiv: 1205.5611. 2012. Reference Source\n\nBoztug K, Rosenberg PS, Dorda M, et al.: Extended spectrum of human glucose-6-phosphatase catalytic subunit 3 deficiency: novel genotypes and phenotypic variability in severe congenital neutropenia. J Pediatr. 2012; 160(4): 679–683, e672. PubMed Abstract | Publisher Full Text\n\nCostas R, Zahedi Z, Wouters P: Do “altmetric” correlate with citations? Extensive comparison of altmetric indicators with citations from a multidisciplinary perspective. J Assoc Inf Sci Technol. 2014; 66(10): 2003–2019. 
Publisher Full Text\n\nErfanmanesh MA: The Presence of Iranian Information Science and Library Science Articles in Social Media: An Altmetric Study. Iranian Research Institute for Information Science and Technology. 2015; 32(2): 349–373. Reference Source\n\nFarahani LA, Ghobadzadeh M, Yousefi P: Comparison of the effect of human milk and topical hydrocortisone 1% on diaper dermatitis. Pediatr Dermatol. 2013; 30(6): 725–729. PubMed Abstract | Publisher Full Text\n\nFeizizadeh S, Salehi-Abargouei A, Akbari V: Efficacy and safety of Saccharomyces boulardii for acute diarrhea. Pediatrics. 2014; 134(1): e176–91. PubMed Abstract | Publisher Full Text\n\nHassan S-U, Gillani UA: Altmetrics of “altmetrics” using Google Scholar, Twitter, Mendeley, Facebook, Google-plus, CiteULike, Blogs and Wiki. arXiv preprint arXiv: 1603.07992, 2016. Reference Source\n\nHaustein S, Peters I, Bar-Ilan J, et al.: Coverage and adoption of altmetrics sources in the bibliometric community. Scientometrics. 2014; 101(2): 1145–1163. Publisher Full Text\n\nGlobal Burden of Disease Pediatrics Collaboration, Kyu HH, Pinho C, et al.: Global and National Burden of Diseases and Injuries Among Children and Adolescents Between 1990 and 2013: Findings From the Global Burden of Disease 2013 Study. JAMA Pediatr. 2016; 170(3): 267–287. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLi X, Thelwall M, Giustini D: Validating online reference managers for scholarly impact measurement. Scientometrics. 2012; 91(2): 461–471. Publisher Full Text\n\nMalamiri RA, Ghaempanah M, Khosroshahi N, et al.: Efficacy and safety of intravenous sodium valproate versus phenobarbital in controlling convulsive status epilepticus and acute prolonged convulsive seizures in children: a randomised trial. Eur J Paediatr Neurol. 2012; 16(5): 536–541. PubMed Abstract | Publisher Full Text\n\nMirshemirani AR, Sadeghyian N, Mohajerzadeh L, et al.: Diphallus: report on six cases and review of the literature. Iran J Pediatr. 
2010; 20(3): 353–7. PubMed Abstract | Free Full Text\n\nNemati-Anaraki L, Aghajani Koupaei H, Alibeyk M: Dataset 1 in: The relationship between altmetric score with received citations in Iranian pediatrics articles. F1000Research. 2017. Data Source\n\nPiwowar H: Altmetrics: Value all research products. Nature. 2013; 493(7431): 159. PubMed Abstract | Publisher Full Text\n\nPriem J, Piwowar HA, Hemminger BM: Altmetrics in the wild: Using social media to explore scholarly impact. arXiv preprint arXiv:1203.4745, 2012. Reference Source\n\nRadmanesh F, Nejat F, El Khashab M: Dermal sinus tract of the spine. Childs Nerv Syst. 2010; 26(3): 349–357. PubMed Abstract | Publisher Full Text\n\nRiyahi M, Braun T, Glänzel W, et al.: Scientometric Indicators, comparative assessment of publishing activities and citations impact of 32 countries. Rahyaft. 1995; 8: 70–80.\n\nRobinson-García N, Torres-Salinas D, Zahedi Z, et al.: New data, new possibilities: exploring the insides of Altmetric.com. arXiv preprint arXiv:1408.0135, 2014. Reference Source\n\nSalajegheh M: The relationship between altmetric and SNIP, SCImago Journal Rank, Eigen Factor and Impact Factor of medical journals. National Study of Library and Information Organization. 2015; 27(1).\n\nSoltani R, Soheilipour S, Hajhashemi V, et al.: Evaluation of the effect of aromatherapy with lavender essential oil on post-tonsillectomy pain in pediatric patients: a randomized controlled trial. Int J Pediatr Otorhinolaryngol. 2013; 77(9): 1579–1581. PubMed Abstract | Publisher Full Text\n\nThelwall M, Haustein S, Larivière V, et al.: Do altmetrics work? Twitter and ten other social web services. PLoS One. 2013; 8(5): e64841. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWang X, Wang Z, Xu S: Tracing scientist’s research trends realtimely. Scientometrics. 2013; 95(2): 717–729. Publisher Full Text\n\nXia F, Su X, Wang W, et al.: Bibliographic Analysis of Nature Based on Twitter and Facebook Altmetrics Data. PLoS One. 
2016; 11(12): e0165997. PubMed Abstract | Publisher Full Text | Free Full Text\n\nZahedi Z: Analyzing readerships of International Iranian publications in Mendeley: an altmetrics study. Paper presented at the First National Scientometrics Conference, Isfahan. 2014. Publisher Full Text\n\nZahedi Z, Costas R, Wouters P: How well developed are altmetrics? A cross-disciplinary analysis of the presence of ‘alternative metrics’ in scientific publications. Scientometrics. 2014; 101(2): 1491–1513. Publisher Full Text"
}
|
[
{
"id": "25149",
"date": "21 Sep 2017",
"name": "Zohreh Zahedi",
"expertise": [],
"suggestion": "Not Approved",
"report": "Not Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis paper has analyzed the correlation between citations received by WoS articles, reviews and conference papers in the field of Pediatrics and their altmetrics scores obtained from altmetric.com. It is a very descriptive study. It make sense to do such an analysis for the field of Pediatrics as one of major fields of scientific publication in Iran, according to the authors. However, there are lots of altmetrics studies that have used correlation analysis between altmetrics and bibliometrics indicators as this paper and hence this is not a new topic, doesn’t contribute to the altmetrics literature and hence has no added value. It is not clear the importance of this study, lacking enough relation, in other words, what gaps is going to be filled and for what reasons? Also, to my idea, the paper needs major revisions in terms of English language (grammar, use of identifier such as the, etc. and not well written) and lacks sufficient justification of the study and interpretation of the results and other issues that are explained below. 
Hence, I don't suggest the paper for indexing due to these and the following reasons:\nThe introduction needs stronger justification in terms of the aim of this study since correlation between altmetrics and citations has been done before in many studies and is not a new thing;\n\nThe part including objective and aim of the study overlaps and repeated several times, the repeated sentences should be merged and summarized in one or two sentences.\n\nThe review of the literature on the correlation between altmetrics is not comprehensive and it misses some major studies which were done based on WoS, Scopus and PubMed databases across different fields. Additionally, the study by Thelwall et al. (2013)1 or the one by Haustein, et. al (2013)2, or other relevant studies are missing here in which they have done the same correlation with all the indicators from altmetric.com across medical fields. It would be nice to compare the results of the above papers in terms of coverage, proportion, and correlation analysis, etc. with the result of the current study.\n\nRegarding the following sentence the same holds true for altmetrics as well since motivation for tweeting scholarly papers is not known as well : “it is not possible to become aware of the citation motives. 
For example, citations might be given to criticize or reject a certain theory”\n\nI don’t agree with the following statement, the sentence should be rephrased: “Since the use of social media has increased during the past few years, article coverage using altmetric scores has also increased during these years, and as a result it is better that scientific citations and altmetric scores complement each other and be evaluated more for recent articles3”\n\nThis sentence should be removed, it seems that it is a translation from Persian language and is not common to use it in the paper: “This is a practical study of an analytical descriptive type, and the research methodology is scientometric”.\n\nRegarding the methodology, the authors have used different identifiers. I wonder how the authors handle duplicates in terms of papers with more than one identifier? Please clarify it in more details in the paper.\n\nAlso, I would suggest to provide the % and details of data collection in a table. A descriptive table of the data set with citations, altmetrics, coverage, and percentage, etc. is lacking. In the result section, I would suggest to add the proportion next to the raw values. For example, page 5: “it was found that 256 articles were mentioned in Mendeley…”, better to add the % next to 256 articles both in the text and in a descriptive table that explained above.\n\n“Outcomed” should be replaced with ‘collected’ in the following sentence: “From 1332 articles, the aforementioned identifiers were obtained for 1138 articles. All 1138 articles having identifiers were searched using the Webometric Analysis software (v.2 beta student) and also altmetric.com API in order to find the altmetric score and details of the scores. 
A total of 256 articles had an altmetric score then the citations number of these 256 articles were outcomed using the HistCite software”.\n\nplease replace “evaluated” with “searched” using DOI: “All 1332 articles were initially added to Endnote and were evaluated using the DOI and PUBMED ID. Then, articles that had none of the aforementioned identifiers were searched through \"find reference updates\" button on EndNote to complete reference information and obtain their DOI or PMID”.\n\nAlso regarding comparing citations with altmetrics score, comparing citations with individual altmetrics (tweet, Mendeley, etc.) is fine but with the aggregated altmetrics score is not recommended since it is a combined score of different altmetrics and doesn’t include all metrics in the score, the weight is also different for different source.\n\nTo my idea it is better to use “received by” or other equivalents instead of “allotted” in the following sentence: “The highest number of citations was allotted to 2010, 2011 and 2013, and the highest altmetric scores were allotted to 2013, 2015 and 2016, respectively.\n\nIn the following section and some other parts, “evaluated” and “eliminated” should be replaced by “analysed or studied” and “excluded”: “Relationship between altmetric data sources and altmetric score. In order to evaluate the effectiveness and relationship of each of the data sources in the altmetric score, the correlation between number of mentions and readers in altmetric data sources with the altmetric score was also evaluated. 
However, as it was previously mentioned, social media, including Google Plus, Pinterest, Reddit, and F1000, were eliminated …”.\n\n“impacts” should be replaced by “impact”.\n\nThe following statement is very strong and it is not true to mention that due to the existence of correlation, altmetrics can be used for evaluation disregarding challenges and quality issues for altmetrics data: “The present findings indicate that there is a direct correlation between the altmetric score and number of received citations for articles. Therefore, it seems that altmetrics can also be considered as a tool alongside scientific citations for the evaluation of scientific publication impacts”.\n\nThe sentence should be rephrased: “Therefore, regardless of citations, which are a better indication of the effectiveness of older articles, evaluation of new scientific publications may be better specified by altmetric score”.\n\nPage 7: discussion, ‘are’ should be removed: “the correlation between altmetrics scores has are been ….).\n\nI would suggest to distinguish between altmetrics score and individual altmetrics in the paper. Altmetrics score is the aggregated and combined score calculated by altmetric.com and which doesn’t include all altmetrics in its calculation but individual altmetrics refers to reader counts, tweets, etc. Also, for Mendeley reader counts, it is better to use Mendeley API than altmetric.com for reader counts.\n\nIt is better to report the correlation coefficients like r=0.01 instead of 0/01. Also, to my knowledge, it is not common to report the p value for correlation in the table. Also, the authors need to specify the type of correlation analysis (Pearson or Spearman) they have used, due to the skewness of altmetrics and citations Spearman correlation analysis is suggested.\n\nThe results and discussion part needs to be strengthen regarding discussing the current results, comparing them with other studies, and their interpretation. 
The current format is very descriptive. Also, it might be good to provide some recommendations.\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nIs the study design appropriate and is the work technically sound? Partly\n\nAre sufficient details of methods and analysis provided to allow replication by others? Partly\n\nIf applicable, is the statistical analysis and its interpretation appropriate? Partly\n\nAre all the source data underlying the results available to ensure full reproducibility? No source data required\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": []
},
{
"id": "27209",
"date": "19 Jun 2024",
"name": "Ramin Sadeghi",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe manuscript \"The relationship between altmetric score with received citations in Iranian pediatrics articles\" is a well written study regarding a fairly new index of scientometrics namely \"altmetric score\".\nThe following comments may help improving the manuscript.\nThe study needs English editing to be made more readable. More explanation regarding \"altmetrics\" would be very useful in the introduction.\n\nThe remainder of the article seems acceptable to me.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Partly\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-1221
|
https://f1000research.com/articles/6-1212/v1
|
25 Jul 17
|
{
"type": "Research Article",
"title": "In vitro regulation of reactive oxygen species formation in red blood cells of homozygous sickle cell patients using Vitamin C",
"authors": [
"Ogechukwu Egini",
"Edouard Guillaume",
"Titilope Adeyemo",
"Chiemeziem Nwanyanwu",
"Fnu Shweta",
"Eric Jaffe",
"Edouard Guillaume",
"Titilope Adeyemo",
"Chiemeziem Nwanyanwu",
"Fnu Shweta",
"Eric Jaffe"
],
"abstract": "Background: Sickle cell patients produce more reactive oxygen species (ROS) than healthy individuals, leading to increased cell membrane damage. Theoretically, reducing ROS formation would preserve red cell membranes of sickle cell patients. Vitamin C is a powerful anti-oxidant capable of inhibiting ROS formation in a variety of situations, by functioning as an electron donor to reduce molecular oxygen. This study aimed to determine whether Vitamin C reduced ROS formation in sickle red cells. Methods: 27 homozygous (HbSS) patients were recruited from the outpatient clinics of Lagos University Teaching Hospital, Nigeria, and annex at the Sickle Cell Foundation, Lagos, Nigeria. Demographic information and EDTA patient blood samples were collected. The test group were red cells preincubated in 80uM and 100uM Vitamin C concentrations before stressing with tertbutylhydroperoxide. These were compared to stressed matched controls preincubated in phosphate buffered saline. Cell staining was done with CellRox Orange followed by flow cytometry to quantify ROS. Results: ROS count for Vitamin C pre-treated red cells was significantly lower than matched controls (p<0.001). Average ROS count for 80uM test samples was 27.5/ul (95% CI, 17.5 to 72.5) and for 100uM 3.9/ul (95% CI, 1.9 to 5.9). Male gender was significantly associated with elevated baseline ROS count (p=0.03). Conclusion: Vitamin C reduced ROS formation in HbSS cells. Future studies should focus on a role for Vitamin C as a safe, cheap addition to maintenance therapy of sickle cell patients.",
"keywords": [
"Regulation",
"Reactive oxygen species",
"HbSS patients",
"Vitamin C"
],
"content": "Introduction\n\nSickle cell anemia is a single-nucleotide polymorphism due to substitution of glutamic acid by valine at position 6 of the B-globin chain1. In the homozygous (HbSS) state, this occurs on both B-globin chains leading to severe disease. Paulson et al. demonstrated a direct quantitative effect of the sickle gene pair on sufferers2. The HbS molecule polymerizes easily under stress conditions with formation of insoluble polymers, which distort red cell shape, leading to membrane damage and eventual red cell breakdown1. Conditions implicated in sickle red cell stress include hypoxia, infections and dehydration. These cause formation of reactive oxygen species (ROS), which oxidatively damage cell membranes3. ROS has a multiplier effect leading to more ROS generation, establishing a positive feedback system, the end-result of which is damage to red cell membranes4. Sickle red cells produce greater amounts of ROS than normal cells and are also more susceptible to oxidation5.\n\nROS formation is maximal during the reperfusion state when there is increased formation of methemoglobin and superoxide ion due to electron transfers that occur between the heme iron and oxygen6. Comparatively, HbS molecules auto-oxidize more than the HbA molecules and also have a poor mechanism to clear generated superoxide ion7. Consequently, the lifespan of the sickle red cell is about 1/5th that of the normal red cell8.\n\nTheoretically, reducing the ROS formation would ameliorate HbS molecule and lipid membrane polymerization and improve red cell life span. In a flow cytomeric comparison of ROS production in normal vs thalassaemic red cells, significantly higher ROS generation in thalassaemic red cells was noted5. In addition, pre-treatment of the red cells with N-acetyl-L-cysteine ameliorated the generation of ROS in both the normal and thalassaemic cells after the cells were stressed with 2’,7’-dichlorofluorescein diacetate, a potent inducer of ROS formation. 
Furthermore, Furman et al. found that sickle red cell pretreatment with Ginkgo biloba extract prior to cell stressing also ameliorated ROS, and reduced membrane polymerization and formation of methemoglobin9.\n\nVitamin C is a powerful anti-oxidant, working as an electron donor capable of reducing molecular oxygen10,11, making it important in a variety of physiological processes and reactions in the human body, including, fatty acid transport12, synthesis of collagen13, prostaglandin metabolism14 and neurotransmitter synthesis15. Vitamin C is almost completely absorbed in the distal small intestine via energy-dependent processes16 and exhibits a steady-state concentration at oral doses of 200–400mg/day, corresponding to plasma concentrations between 60–100umol/L17. Vitamin C is cheap, well-studied and readily available, making it attractive to study as a regulator of ROS formation in sickle red cells.\n\nNigeria has the highest number of people with sickle cell disease (SCD) in the world, with an estimated 91,000 babies born with the condition yearly18. This number is expected to grow. This means that Nigeria and other countries with a high disease burden will continue to require research to improve policies for prevention and management of sickle cell patients. Catastrophic financial burden has occurred in families of sickle cell patients evaluated in a study in Ekiti State, Nigeria19. Many of these families spent over 10% of family income to cover hospital admissions of an SCD patient and greater than 90% of these families had no health insurance and had to borrow to meet their financial needs. This study investigated a possible role for Vitamin C, a cheap drug, in reducing ROS formation in sickle cell patients.\n\n\nMethods\n\nInstitutional approval was given by the Lagos University Teaching Hospital Health Research Ethics Committee (Assigned number: ADM/DCST/HREC/APP/1533). 
27 HbSS patients were recruited from the Sickle Cell Clinic of Lagos University Teaching Hospital (LUTH) and LUTH clinic annex at Sickle Cell Foundation of Nigeria, Lagos Office between May 22–25, 2017. These patients were known HbSS patients documented on file and previously confirmed by hemoglobin electrophoresis. Every 3rd HbSS patient who presented in the clinic and met the inclusion criteria was enrolled in the study after written informed consent was obtained in accordance with the Declaration of Helsinki. Afterwards, a screening questionnaire was applied to obtain each candidate’s demographics followed by weight and height measurements. Four volunteers declined height and weight assessment. 4mls of fresh blood was then drawn from the antecubital vein of each enrolled candidate into EDTA tubes. All samples were analyzed within 4 hours of collection.\n\nInclusion criteria. Patients of all ages who did not receive blood transfusion within 3 months and had not taken Vitamin C or multivitamin supplements within 3 months of the study were recruited. Patients taking hydroxyurea were included if they had been on a stable dose for at least 3 months.\n\nExclusion criteria. Blood transfusion within 3 months of study; acutely-ill patients; use of Vitamin C or multivitamin supplements within 3 months.\n\nGibco Phosphate-buffered saline (PBS; catalog number 20012043; Life Technologies, Grand Island, New York). CellRox Orange Flow Cytometry Assay kit (catalog number: C10493; Life Technologies). The kit includes the fluorophore, CellRox Orange reagent; N-acetyl cysteine (NAC; an antioxidant) and tert-butyl hydroperoxide solution (TBHP; to induce ROS). CellRox Orange reagent localizes to the cell cytoplasm and has absorption/emission maxima of 545/565 nm respectively20. 
L-Ascorbic acid powder (CSPC Weisheng Pharmaceutical, China); a 200uM solution of Vitamin C was prepared in cold PBS and protected from light.\n\nSample preparation and analysis were done at the Nigerian Institute of Medical Research - Human Virology Lab (NIMR-HVL), an ISO-certified lab in Yaba, Lagos. Pre-wash red cell counts for all samples were obtained from the hematology analyzer before centrifugation at 500xg for 10 minutes and supernatant removal. Cells were washed thrice in cold PBS and finally re-suspended in 5mls of PBS followed by determination of post-wash red cell count. This step was an adaptation of a previous process21.\n\nThe protocol for this step was adapted from Life Technologies standardized protocol20. For each sample, volume corresponding to 5 x 105 cells was pipetted into four different micro-centrifuge tubes. One was immediately incubated with CellRox orange stain at a final concentration of 500nM and ROS quantified on the Partec Cyflow Counter Version 2.4 to determine basal ROS present per sample. Another tube was incubated with PBS and then stressed at final concentration of 200uM TBHP to serve as control. Two other tubes were pre-treated with 80uM and 100uM concentrations of Vitamin C respectively, before stressing with 200uM TBHP to represent the test samples. In a supplemental study, nine samples were non-randomly selected and incubated with 200 uM NAC followed by TBHP stressing. Incubation time per added reagent was 30 minutes.\n\nCellRox Orange was prepared with DMSO and then added to all the samples described above at a final concentration of 500nM. After 30 minutes of incubation, 800uL of the solution was transferred into Rohren tubes for immediate analysis on the flow cytometer. A total of 5 × 105 cells were analysed and pulsed gating was used to exclude doublets.\n\nThe primary outcome measure was a comparison of the quantity of ROS formed following stressing of test vs control cells. 
Secondary analyses evaluated the relationship between gender, age, BMI and hydroxyurea use on basal ROS in sickle red cells. Microsoft Excel Version 14 was used for analysis. Analysis of variance (ANOVA) was used for statistical comparisons (significance defined by P values ≤ 0.05).\n\n\nResults\n\n27 participants were recruited into the study. The sample was equally distributed between males and females (Table 1). Blood transfusion rate in this population was lower than among Congolese sickle cell patients22, but higher than for U.S Medicaid patients seen between 2007–201223. Hydroxyurea utilization was higher than among Florida Medicaid patients24 and improved over a previous report from a major Nigerian Teaching Hospital25.\n\n* refers to patients transfused at a time earlier than 3 months prior to study – refer to exclusion criteria.\n\nAverage pre-wash red cell count was 2.89×1012/L. The average male red cell count was higher than females (Table 2). Post-wash red cell count was calculated after final cell suspension. Volume equivalent of 5 x 105 red cells was calculated from the post-wash red cell count (Table 2).\n\nM – male; F – female; HS – Hydroxyurea Status; N/D – not determined; Vit.C – Vitamin C. Scoring system – Age: 0 means <20 years and 1 means ≥20 years; BMI: 0 means <18.5 and 1 means ≥18.5; hydroxyurea status: 0 refers to those not taking and 1 to those taking hydroxyurea.\n\nBaseline ROS count (BRC) refers to number of ROS per ul before any red cell intervention. Average total BRC was 441.9/ul (Table 2). When BRC was matched in terms of gender, men were found to have a significantly higher count than women, with average ROS count of 583.1/ul (95% CI, 373.1 to 793.1; p-value = 0.03; Table 3). 
Average BRC for those ≥ 20 years was higher than for those <20 years; those with BMI ≥ 18.5 had higher average ROS than those <18.5; and those on hydroxyurea also had higher average basal ROS than those not taking hydroxyurea.\n\nA score of 1 each was given to the male subjects, subjects on hydroxyurea at time of recruitment, age ≥20 years and for BMI ≥ 18.5. A score of 0 was applied in each case for women, those not on hydroxyurea, aged <20 years and BMI < 18.5 (Table 2). Average BRC for subjects with scores ≥ 3 was higher than for those with scores ≤ 2, though not significantly (p-value = 0.11). When each score cohort was matched in terms of gender, the average ROS production for men remained higher than for women, without statistical significance (Table 3).\n\nROS counts after TBHP stressing of test and matched control cells were compared for each sample (Figure 1). The controls had a higher ROS count than test cells. As shown in Figure 2, at both concentrations of Vitamin C, there was significantly less ROS formation than in controls (p-value<0.001). Average ROS count for 80uM test samples was 27.5/ul (95% CI, 17.5 to 72.5) and for the 100uM test group, it was 3.9/ul (95% CI, 1.9 to 5.9). No statistical difference existed between Vitamin C pretreatment at 80uM and at 100uM (p-value = 0.31).\n\nWe compared a subset of controls with NAC-pretreated cells. Samples 1 – 9 were pre-treated with 200uM NAC and the results compared with matched controls pretreated with PBS (Table 4 and Figure 3). NAC at 200uM final concentration reduced ROS formation compared to control. However, no statistical significance was noted (p-value = 0.12).\n\n\nDiscussion\n\nIndividually, age ≥ 20 years, BMI ≥ 18.5, hydroxyurea use and male gender appear to be associated with increased baseline ROS count (BRC). In normal subjects, a statistically significant correlation was described by Kukovetz et al. between increasing age and generation of ROS26. 
However, even though our study did not show statistical significance between increasing age and increased BRC, an appropriate path would be to investigate this relationship further with larger patient cohorts. Furthermore, this study has revealed an association between BMI ≥ 18.5 and increased BRC. Previously, a relationship between increasing obesity and generation of serum ROS in Japanese students was noted27, while increased ROS production has been described in obese diabetics, particularly with abdominal obesity28. We also found that patients on hydroxyurea had a higher average BRC than those not taking hydroxyurea. It has been shown that hydroxyurea mediates E. coli cell death by inducing envelope ROS formation, which leads to membrane damage and cell death29,30. Elevated ROS formation has also been seen in hydroxyurea-treated yeast cell cytoplasm31. Hydroxyurea inhibits ribonucleotide reductase leading to replication fork arrest32 and accumulation of ROS. Male gender was an independent factor statistically associated with increased BRC (p<0.05) in this study. Previous studies have shown that men have increased oxidative stress levels, more biomarkers of stress, elevated ROS production and lower antioxidant levels33–36. Among sickle cell patients in a Cameroonian hospital, women had a greater total anti-oxidant capacity compared with males37. Our data showed that, on average, men aged ≥ 20 years whose BMI was ≥ 18.5 and who were also on hydroxyurea produced more ROS. This is equivalent to Score 3 on the baseline ROS score described in the Results section. This group might be a target for an ROS-reducing agent. Further studies are required here.\n\nA significant reduction in ROS formation for all test samples pretreated with 80uM and 100uM concentrations of Vitamin C prior to tert-butyl hydroperoxide stressing vs matched controls (P-values < 0.05) was noted. 
It is evident that Vitamin C inhibited ROS formation in test cells and may have served to protect the red cell membrane from damage. Guaiquil et al. found that increased sensitivity of glutathione-depleted human myeloid cells to membrane damage was significantly reversed by preloading with Vitamin C38. As noted earlier, there is a reduction in ROS formation in murine models with pulmonary contusion treated with Vitamin C11. This finding is significant because if we can reduce ROS formation in sickle cell patients by simply administering Vitamin C, we would theoretically reduce cell injury and improve disease-free intervals for patients. Our study did not find a significant difference in ROS formation between cell cohorts pretreated at 80uM vs 100uM concentrations of Vitamin C. One sample pretreated with 80uM Vitamin C did not show as great a reduction in ROS as was seen with the other samples (Figure 1). This effect was not noted at incubation with 100uM Vitamin C. This might be due to individual variation in response rates at the 80uM concentration. It is possible that extending Vitamin C incubation time beyond 30 minutes might have yielded larger changes.\n\nRate of cell lysis between treated vs control cells was not compared in this study. It would be interesting to see whether the treated cells were more resistant to lysis than matched controls. Additionally, controlled studies to examine the effect on basal ROS formation of administering Vitamin C to sickle cell patient cohorts over time would be appropriate.\n\nN-acetylcysteine did not significantly reduce ROS. When compared to Vitamin C, there was a significant difference in efficacy, with Vitamin C showing superiority over NAC. It may help to consider other concentrations of NAC in future experiments.\n\n\nConclusion\n\nVitamin C significantly decreased ROS formation in stressed red cells of sickle cell patients. 
Future studies are required to evaluate the effect of Vitamin C administration on sickle cell patients.\n\n\nData availability\n\nDataset 1. Baseline characteristics, measured heights, weights and calculated BMI, red cell count and reactive oxygen species formation per sample. M – male; F – female; HS – Hydroxyurea Status; Y/N – Yes/No answers for hydroxyurea use and blood transfusion history; Vit.C – Vitamin C; N/D - not determined. doi, 10.5256/f1000research.12126.d16944939",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nAcknowledgements\n\nWe thank the staff of the Nigerian Medical Research Institute, especially the Director-General, Prof. Salako; Deputy Director of Research, Dr. Audu and Mr. Jamda for providing technical support for our project. We are grateful to Dr. Kalejaiye, Consultant Hematologist at LUTH and Dr. Fezie Nnaji, Primary Care Physician at Simeon Hospital, Lagos who helped recruit sickle cell patients. Valentine Egini and Colins Nwosu supported logistics and transport. Stephanie Egini typeset the manuscript and Dr. Abraham Bezuneh provided valuable advice. Finally, we are very grateful to Dr. Mark Adler, Internal Medicine Program Director, Interfaith Medical Center for manuscript review.\n\n\nReferences\n\nIngram VM: Abnormal human haemoglobins. III. The chemical difference between normal and sickle cell haemoglobins. Biochim Biophys Acta. 1959; 36: 402–411. PubMed Abstract | Publisher Full Text\n\nPauling L, Itano HA, Singer S, et al.: Sickle cell anemia a molecular disease. Science. 1949; 110(2865): 543–548. PubMed Abstract | Publisher Full Text\n\nIlesanmi OO: Pathological basis of symptoms and crises in sickle cell disorder: implications for counseling and psychotherapy. Hematol Rep. 2010; 2(1): e2. PubMed Abstract | Publisher Full Text | Free Full Text\n\nZorov DB, Juhaszova M, Sollott SJ: Mitochondrial reactive oxygen species (ROS) and ROS-induced ROS release. Physiol Rev. 2014; 94(3): 909–950. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAmer J, Goldfarb A, Fibach E: Flow cytometric measurement of reactive oxygen species production by normal and thalassaemic red blood cells. Eur J Haematol. 2003; 70(2): 84–90. 
PubMed Abstract | Publisher Full Text\n\nGu J, Chang TM: Extraction of erythrocyte enzymes for the preparation of polyhemoglobin-catalase-superoxide dismutase. Artif Cells Blood Substit Immobil Biotechnol. 2009; 37(2): 69–77. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHebbel RP, Morgan WT, Eaton JW, et al.: Accelerated autoxidation and heme loss due to instability of sickle hemoglobin. Proc Natl Acad Sci U S A. 1988; 85(1): 237–241. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMcCurdy PR, Sherman AS: Irreversibly sickled cells and red cell survival in sickle cell anemia: a study with both DF32P and 51CR. Am J Med. 1978; 64(2): 253–258. PubMed Abstract | Publisher Full Text\n\nFurman AE, Henneberg R, Hermann PB, et al.: Ginkgo biloba extract (EGb 761) attenuates oxidative stress induction in erythrocytes of sickle cell disease patients. Braz J Pharm Sci. 2012; 48(4): 659–665. Publisher Full Text\n\nRamanathan K, Balakumar BS, Panneerselvam C: Effects of ascorbic acid and alpha-tocopherol on arsenic-induced oxidative stress. Hum Exp Toxicol. 2002; 21(12): 675–680. PubMed Abstract | Publisher Full Text\n\nSirmali R, Giniş Z, Sirmali M, et al.: Vitamin C as an antioxidant: evaluation of its role on pulmonary contusion experimental model. Turk J Med Sci. 2014; 44(6): 905–913. PubMed Abstract | Publisher Full Text\n\nRebouche CJ: Renal handling of carnitine in experimental vitamin C deficiency.1995; 44(12): 1639–43. PubMed Abstract | Publisher Full Text\n\nRonchetti IP, Quaglino D Jr, Bergamini G: Ascorbic acid and connective tissue. Subcell Biochem. Harris JR (Ed), Plenum Press, New York. 1996; 25. : 249–64. PubMed Abstract | Publisher Full Text\n\nHorrobin DF: Ascorbic acid and prostaglandin synthesis. Subcell Biochem. Harris JR (Ed), Plenum Press, New York. 1996; 25. : 109–115. PubMed Abstract | Publisher Full Text\n\nKatsuki H: Vitamin C and nervous tissue. In vivo and in vitro aspects. Subcell Biochem. 
Harris JR (Ed), Plenum Press, New York. 1996; 25. : 293–311. PubMed Abstract | Publisher Full Text\n\nMellors AJ, Nahrwold DL, Rose RC: Ascorbic acid flux across mucosal border of guinea pig and human ileum. Am J Physiol. 1977; 233(5): E374–E379. PubMed Abstract\n\nLevine M, Conry-Cantilena C, Wang Y, et al.: Vitamin C pharmacokinetics in healthy volunteers: evidence for a recommended dietary allowance. Proc Natl Acad Sci U S A. 1996; 93(8): 3704–3709. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPiel FB, Hay SI, Gupta S, et al.: Global burden of sickle cell anaemia in children under five, 2010–2050: modelling based on demographics, excess mortality, and interventions. PLoS Med. 2013; 10(7): e1001484. PubMed Abstract | Publisher Full Text | Free Full Text\n\nOlatunya OS, Ogundare EO, Fadare JO, et al.: The financial burden of sickle cell disease on households in Ekiti, Southwest Nigeria. Clinicoecon Outcomes Res. 2015; 7: 545–53. PubMed Abstract | Publisher Full Text | Free Full Text\n\nThermo Fisher Scientific: CellRox™ Orange Reagent for oxidative stress detection. 2017. Reference Source\n\nHanson MS, Stephenson AH, Bowles EA, et al.: Phosphodiesterase 3 is present in rabbit and human erythrocytes and its inhibition potentiates iloprost-induced increases in cAMP. Am J Physiol Heart Circ Physiol. 2008; 295(2): H786–93. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTshilolo LM, Mukendi RK, Wembonyama SO: Blood transfusion rate in Congolese patients with sickle cell anemia. Indian J Pediat. 2007; 74(8): 735–738. PubMed Abstract | Publisher Full Text\n\nNouraie M, Gordeuk VR: Blood transfusion and 30-day readmission rate in adult patients hospitalized with sickle cell disease crisis. Transfusion. 2015; 55(10): 2331–2338. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRitho J, Liu H, Hartzema AG, et al.: Hydroxyurea use in patients with sickle cell disease in a Medicaid population. Am J Hematol. 2011; 86(10): 888–890. 
PubMed Abstract | Publisher Full Text\n\nAliyu ZY, Babadoko A, Mamman A: Hydroxyurea Utilization in Nigeria, a Lesson in Public Health. Blood. 2007; 110(11): 80. Reference Source\n\nKukovetz EM, Bratschitsch G, Hofer HP, et al.: Influence of age on the release of reactive oxygen species by phagocytes as measured by a whole blood chemiluminescence assay. Free Radic Biol Med. 1997; 22(3): 433–438. PubMed Abstract | Publisher Full Text\n\nKogawa T, Kashiwakura I: Relationship between obesity and serum reactive oxygen metabolites in adolescents. Environ Health Prev Med. 2013; 18(6): 451–457. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHirao K, Maruyama T, Ohno Y, et al.: Association of increased reactive oxygen species production with abdominal obesity in type 2 diabetes. Obes Res Clin Pract. 2010; 4(2): e83–e90. PubMed Abstract | Publisher Full Text\n\nDavies BW, Kohanski MA, Simmons LA, et al.: Hydroxyurea induces hydroxyl radical-mediated cell death in Escherichia coli. Mol Cell. 2009; 36(5): 845–860. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNakayashiki T, Mori H: Genome-wide screening with hydroxyurea reveals a link between nonessential ribosomal proteins and reactive oxygen species production. J Bacteriol. 2013; 195(6): 1226–1235. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHuang ME, Facca C, Fatmi Z, et al.: DNA replication inhibitor hydroxyurea alters Fe-S centers by producing reactive oxygen species in vivo. Sci Rep. 2016; 6: 29361. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRosenkranz HS, Winshell EB, Mednis A, et al.: Studies with hydroxyurea. VII. Hydroxyurea and the synthesis of functional proteins. J Bacteriol. 1967; 94(4): 1025–1033. PubMed Abstract | Free Full Text\n\nBarp J, Araújo AS, Fernandes TR, et al.: Myocardial antioxidant and oxidative stress changes due to sex hormones. Braz J Med Biol Res. 2002; 35(9): 1075–81. 
PubMed Abstract | Publisher Full Text\n\nIde T, Tsutsui H, Ohashi N, et al.: Greater oxidative stress in healthy young men compared with premenopausal women. Arterioscler Thromb Vasc Biol. 2002; 22(3): 438–42. PubMed Abstract | Publisher Full Text\n\nMatarrese P, Colasanti T, Ascione B, et al.: Gender disparity in susceptibility to oxidative stress and apoptosis induced by autoantibodies specific to RLIP76 in vascular cells. Antioxid Redox Signal. 2011; 15(11): 2825–36. PubMed Abstract | Publisher Full Text\n\nBhatia K, Elmarakby AA, El-Remessey AB, et al.: Oxidative stress contributes to sex differences in angiotensin II-mediated hypertension in spontaneously hypertensive rats. Am J Physiol Regul Integr Comp Physiol. 2012; 302(2): R274–82. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAma Moor VJ, Pieme CA, Chetcha Chemegne B, et al.: Oxidative profile of sickle cell patients in a Cameroonian urban hospital. BMC Clin Pathol. 2016; 16(1): 15. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGuaiquil VH, Vera JC, Golde DW: Mechanism of vitamin C inhibition of cell death induced by oxidative stress in glutathione-depleted HL-60 cells. J Biol Chem. 2001; 276(44): 40955–40961. PubMed Abstract | Publisher Full Text\n\nEgini O, Guillaume E, Adeyemo T, et al.: Dataset 1 in: In vitro regulation of reactive oxygen species formation in red blood cells of homozygous sickle cell patients using Vitamin C. F1000Research. 2017. Data Source"
}
|
[
{
"id": "24997",
"date": "29 Aug 2017",
"name": "Eitan Fibach",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe authors measured the in vitro effect of vitamin C on reactive oxygen species generation in RBC of patients with Sickle Cell Disease.\nComments The data are not entirely novel. The authors should cite previous studies on the oxidative status of RBC from patients with sickle cell disease and their response to antioxidants, including Vit. C. For example, Amer et al.1\nThe experimental approach is not clear.\nFor example:\nAbstract “matched controls” – here and elsewhere – what does it mean? RBC of SCA patients not treated with Vit C? what about normal individuals?\nMaterials and reagents “Blood transfusion rate” – Provide data. When were the blood samples obtained? Before/after transfusion?\n“Cell staining and flow cytometry“ – How were the cells analyzed and ROS determined? “number of ROS per ul” – not clear. “per ul” of what? Before or after washing? Why not calculate the ROS per washed RBC?\n\nRelationship between baseline ROS count and patient characteristics “Baseline ROS count (BRC) refers to number of ROS per ul before any red cell intervention.” What does “before any red cell intervention” mean?\n\"Baseline ROS prediction score\" – Explain \"prediction score\"\nOther comments\nIntroduction “Paulson et al. ” – Change to Pauling et al.\n“Paulson et al. demonstrated a direct quantitative effect of the sickle gene pair on sufferers2.” – explain.\n“…during the reperfusion state…” – explain.\nTable 1. Baseline characteristics of study participants. 
The parameters should be shown per males and females.\n\nFigure 3. - What does “positive control” mean?\n\nIs the work clearly and accurately presented and does it cite the current literature? No\n\nIs the study design appropriate and is the work technically sound? No\n\nAre sufficient details of methods and analysis provided to allow replication by others? No\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nI cannot comment. A qualified statistician is required.\n\nAre all the source data underlying the results available to ensure full reproducibility? No\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": []
},
{
"id": "30594",
"date": "26 Feb 2018",
"name": "Edeghonghon Olayemi",
"expertise": [
"Reviewer Expertise Benign haematology specifically sickle cell disease and coagulation"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe authors set out to investigate a possible in vitro role for Vitamin C in reducing Reactive Oxygen Species (ROS) formation in sickle cell disease patients.\nComments\nAbstract Were there matched controls? How were they selected?\nIntroduction The study ignored the role of glucose 6 phosphate dehydrogenase status which plays an important role in protecting red cells against oxidative stress; especially since a significant proportion of the population at the study site will be G6PD deficient. G6PD plays an important role in the production and removal of ROS.\nMethods The sample size is rather small, how was this determined? What was the study design? The inclusion / exclusion criteria did not state if the patients were in steady state or not and if so how was this determined?\nThe calculation of the baseline ROS prediction score is not clear.\nResults It is known that obesity is associated with increased ROS generation, thus, addition of data from overweight patients and the 4 patients with 'unknown' BMI to those of patients with normal BMI could have adversely affected the results. These patients should be excluded in calculating the association between BMI and ROS.\nThe finding that men produced more ROS may be related to their G6PD status.\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nIs the study design appropriate and is the work technically sound? 
Partly\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nI cannot comment. A qualified statistician is required.\n\nAre all the source data underlying the results available to ensure full reproducibility? Partly\n\nAre the conclusions drawn adequately supported by the results? No",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-1212
|
https://f1000research.com/articles/6-1167/v1
|
21 Jul 17
|
{
"type": "Research Note",
"title": "Women as editors-in-chief of environmental science journals",
"authors": [
"Myrna L Yeverino-Gutiérrez",
"Ma del Rosario González-González",
"Ruth Corral-Symes",
"Omar González-Santiago",
"Myrna L Yeverino-Gutiérrez",
"Ma del Rosario González-González",
"Ruth Corral-Symes"
],
"abstract": "This research note describes an analysis regarding the role of women as editors-in-chief of environmental science journals. The list of journals analyzed was obtained from the database of “Web of Science”, published in 2015. This database does not include information on the name or gender of the editors-in-chief of journals, so a web search was performed. The results show that gender inequality is present in this important field of science. Causes of this bias merit more and profound research. The bias observed may not apply to journals of others areas of science.",
"keywords": [
"gender",
"science",
"editor",
"science communication"
],
"content": "Introduction\n\nGender bias has been observed in several aspects of science, mainly in the authorship of scientific papers, first author position, grants and employment1,2. It is possible that this bias is present for other important positions in science, such as the editorial positions in scientific journals. With this in mind, we determined the percentage of women who are editors-in-chief of environmental science journals.\n\n\nMethods\n\nThe list of journals was obtained from the 2015 Thomson Reuters Web of Science database, which groups journals by impact factor and area of scientific expertise. We chose journals grouped into environmental science. Since the name and gender of the editor-in-chief is not reported in this database, a web search was performed. The name of the editor-in-chief was obtained from the respective web page of the journal. In cases where it was not possible to identify the gender with the name only, a more extensive web search was performed. The criteria used to identify the gender was a headshot on the website of the respective institution, a Researchgate profile, or the journal that he or she directs. Differences between genders and amongst groups of journals were determined with a chi-square test. NCSS version 11 was used for statistical analysis.\n\n\nResults and discussion\n\nA total of 103 environmental science journals were analyzed. Of these, 22 journals had an impact factor (IF) < 1; 50 journals had an IF between 1-2; and 31 journals had IF > 2. For 4 journals, it was not possible to identify the gender of the editor-in-chief. The list of journals analyzed is available as a dataset. Overall, the percentage of women that were editors-in-chief was 21.6% (Table 1). This percentage was different according to the IF of the journals. 
In journals with low IF, the percentage of women as editors-in-chief was 33.3%; in journals with IF between 1-2, this percentage was 21.6%; and in journals with IF > 2, the percentage was 14.9%. The decreasing trend was statistically significant.\n\nWomen are underrepresented as editors-in-chief of environmental science journals, which suggests a gender bias. Several factors that could contribute to the underrepresentation of women in science have been previously suggested by other authors and could explain this observation3. Childbearing, forming a family, gender expectations, lifestyle choices and career preferences are among these factors. Another factor could be the scientific area: the percentage of women as editors-in-chief is probably higher in areas where their participation is more active, so this analysis should be repeated with journals that specialize in other fields of science. Finally, more studies that corroborate this outcome and identify its causes are needed.\n\n\nData availability\n\nDataset 1. List of journals included in the analysis. DOI, 10.5256/f1000research.11661.d1690394",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nFreund KM, Raj A, Kaplan SE, et al.: Inequities in Academic Compensation by Gender: A Follow-up to the National Faculty Survey Cohort Study. Acad Med. 2016; 91(8): 1068–73. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLarivière V, Ni C, Gingras Y, et al.: Bibliometrics: global gender disparities in science. Nature. 2013; 504(7479): 211–213. PubMed Abstract | Publisher Full Text\n\nCeci SJ, Williams WM: Understanding current causes of women’s underrepresentation in science. Proc Natl Acad Sci U S A. 2011; 108(8): 3157–3162. PubMed Abstract | Publisher Full Text | Free Full Text\n\nYeverino-Gutierres ML, González-González MdR, Corral-Symes R, et al.: Dataset 1 in: Women as editors-in-chief of environmental science journals. F1000Research. 2017. Data Source"
}
|
[
{
"id": "25001",
"date": "29 Aug 2017",
"name": "Karin Amrein",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nIn a short research note, Yeverino-Gutierrez and colleagues report interesting data on the representation of women as editors in chief in environmental science journals.\n\nA few major aspects should be clarified:\nIn the abstract, the authors should state some specific results of their analysis (no. of journals, no. of editors, % female etc.) The manuscript is indeed very short and would benefit from some greater detail for all sections. The numbers mentioned in the text are discordant to the numbers in the table (e.g. 103 journals analyzed vs 148). Were data missing and if yes, why? I think it would be better to use tertiles in the analysis of impact factor in order to have similar group size as opposed to an arbitrary cutoff for the impact factor. Limitations should be added (only one time point, only one category, etc.) Add the used test to the table legend. A few minor typos/grammar errors are present\n\nPS:\n\nWere any efforts made to contact the journals and obtain more detailed data from them or have more information about the process of assignment for editor in chief? Are the authors aware of data on how the percentage of women in scientists or people working in this field is? To date, the category \"Environmental Sciences\" has well over 200 journals. Were indeed only 148 listed in 2015??\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nIs the study design appropriate and is the work technically sound? 
Partly\n\nAre sufficient details of methods and analysis provided to allow replication by others? Partly\n\nIf applicable, is the statistical analysis and its interpretation appropriate? Partly\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": []
},
{
"id": "24413",
"date": "31 Aug 2017",
"name": "Emilio Bruna",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis study addresses an important topic - the gender balance at the highest levels of journal editorial leadership. There data collection and analyses are straightforward and technically sound. While there is value in documenting the gender ratio of editors-in-chief, however, the study doesn't place these results in a greater context. This is both surprising and disappointing given the substantial research on the topic (and very little of which is cited). Why focus on environmental biology? How do these results compare with those from other fields? Why is the observed gender imbalance a problem and what can be done to remedy it? Without addressing these questions\nI would encourage the authors to move beyond simply presenting the data to interpreting and contextualizing it. This will greatly increase the impact of their substantial effort.\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nIs the study design appropriate and is the work technically sound? Partly\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate? Yes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-1167
|
https://f1000research.com/articles/6-1158/v1
|
21 Jul 17
|
{
"type": "Method Article",
"title": "Bioconductor workflow for single-cell RNA sequencing: Normalization, dimensionality reduction, clustering, and lineage inference",
"authors": [
"Fanny Perraudeau",
"Davide Risso",
"Kelly Street",
"Elizabeth Purdom",
"Sandrine Dudoit",
"Davide Risso",
"Kelly Street",
"Elizabeth Purdom",
"Sandrine Dudoit"
],
"abstract": "Novel single-cell transcriptome sequencing assays allow researchers to measure gene expression levels at the resolution of single cells and offer the unprecendented opportunity to investigate at the molecular level fundamental biological questions, such as stem cell differentiation or the discovery and characterization of rare cell types. However, such assays raise challenging statistical and computational questions and require the development of novel methodology and software. Using stem cell differentiation in the mouse olfactory epithelium as a case study, this integrated workflow provides a step-by-step tutorial to the methodology and associated software for the following four main tasks: (1) dimensionality reduction accounting for zero inflation and over dispersion and adjusting for gene and cell-level covariates; (2) cell clustering using resampling-based sequential ensemble clustering; (3) inference of cell lineages and pseudotimes; and (4) differential expression analysis along lineages.",
"keywords": [
"single-cell",
"RNA-seq",
"normalization",
"dimensionality reduction",
"clustering",
"lineage inference",
"differential expression",
"workflow"
],
"content": "Introduction\n\nSingle-cell RNA sequencing (scRNA-seq) is a powerful and promising class of high-throughput assays that enable researchers to measure genome-wide transcription levels at the resolution of single cells. To properly account for features specific to scRNA-seq, such as zero inflation and high levels of technical noise, several novel statistical methods have been developed to tackle questions that include normalization, dimensionality reduction, clustering, the inference of cell lineages and pseudotimes, and the identification of differentially expressed (DE) genes. While each individual method is useful on its own for addressing a specific question, there is an increasing need for workflows that integrate these tools to yield a seamless scRNA-seq data analysis pipeline. This is all the more true with novel sequencing technologies that allow an increasing number of cells to be sequenced in each run. For example, the Chromium Single Cell 3’ Solution was recently used to sequence and profile about 1.3 million cells from embryonic mouse brains.\n\nscRNA-seq low-level analysis workflows have already been developed, with useful methods for quality control (QC), exploratory data analysis (EDA), pre-processing, normalization, and visualization. The workflow described in Lun et al. (2016) and the package scater (McCarthy et al., 2017) are such examples based on open-source R software packages from the Bioconductor Project (Huber et al., 2015). In these workflows, single-cell expression data are organized in objects of the SCESet class allowing integrated analysis. 
However, these workflows are mostly used to prepare the data for further downstream analysis and do not focus on steps such as cell clustering and lineage inference.\n\nHere, we propose an integrated workflow for downstream analysis, with the following four main steps: (1) dimensionality reduction accounting for zero inflation and over-dispersion, and adjusting for gene and cell-level covariates, using the zinbwave Bioconductor package; (2) robust and stable cell clustering using resampling-based sequential ensemble clustering, as implemented in the clusterExperiment Bioconductor package; (3) inference of cell lineages and ordering of the cells by developmental progression along lineages, using the slingshot R package; and (4) DE analysis along lineages. Throughout the workflow, we use a single SummarizedExperiment object to store the scRNA-seq data along with any gene or cell-level metadata available from the experiment (see Figure 1).\n\nOn the right, main plots generated by the workflow.\n\n\nAnalysis of olfactory stem cell differentiation using scRNA-seq data\n\nThis workflow is illustrated using data from a scRNA-seq study of stem cell differentiation in the mouse olfactory epithelium (OE) (Fletcher et al., 2017). The olfactory epithelium contains mature olfactory sensory neurons (mOSN) that are continuously renewed in the epithelium via neurogenesis through the differentiation of globose basal cells (GBC), which are the actively proliferating cells in the epithelium. When a severe injury to the entire tissue occurs, the olfactory epithelium can regenerate from normally quiescent stem cells called horizontal basal cells (HBC), which become activated to differentiate and reconstitute all major cell types in the epithelium.\n\nThe scRNA-seq dataset we use as a case study was generated to study the differentiation of HBC stem cells into different cell types present in the olfactory epithelium.
To map the developmental trajectories of the multiple cell lineages arising from HBCs, scRNA-seq was performed on FACS-purified cells using the Fluidigm C1 microfluidics cell capture platform followed by Illumina sequencing. The expression level of each gene in a given cell was quantified by counting the total number of reads mapping to it. Cells were then assigned to different lineages using a statistical analysis pipeline analogous to that in the present workflow. Finally, results were validated experimentally using in vivo lineage tracing. Details on data generation and statistical methods are available in Fletcher et al. (2017); Risso et al. (2017); Street et al. (2017).\n\nIt was found that the first major bifurcation in the HBC lineage trajectory occurs prior to cell division, producing either mature sustentacular (mSUS) cells or GBCs. Then, the GBC lineage, in turn, branches off to give rise to mOSN and microvillous (MV) cells (Figure 2). In this workflow, we describe a sequence of steps to recover the lineages found in the original study, starting from the genes-by-cells matrix of raw counts publicly available on the NCBI Gene Expression Omnibus with accession GSE95601.\n\nReprinted from Cell Stem Cell, Vol 20, Fletcher et al., Deconstructing Olfactory Stem Cell Trajectories at Single-Cell Resolution, Pages 817–830, Copyright (2017), with permission from Elsevier.\n\nThe following packages are needed.\n\n\n\nNote that in order to successfully run the workflow, we need the following versions of the Bioconductor packages scone (1.1.2), zinbwave (0.99.6), and clusterExperiment (1.3.2). We recommend running Bioconductor 3.6 (currently the devel version; see https://www.bioconductor.org/developers/how-to/useDevel/).\n\nTo give the user an idea of the time needed to run the workflow, the function system.time was used to report computation times for the time-consuming functions.
Computations were performed with 2 cores on a MacBook Pro (early 2015) with a 2.7 GHz Intel Core i5 processor and 8 GB of RAM. The Bioconductor package BiocParallel was used to allow for parallel computing in the zinbwave function. Users with a different operating system may change the package used for parallel computing and the NCORES variable below.\n\n\n\nCounts for all genes in each cell were obtained from NCBI Gene Expression Omnibus (GEO), with accession number GSE95601. Before filtering, the dataset had 849 cells and 28,361 detected genes (i.e., genes with non-zero read counts).\n\nNote that in the following, we assume that the user has access to a data folder located at ../data. Users with a different directory structure may need to change the data_dir variable below to reproduce the workflow.\n\n\n\n\n\nWe remove the ERCC spike-in sequences and the CreER gene, as the latter corresponds to the estrogen receptor fused to Cre recombinase (Cre-ER), which is used to activate HBCs into differentiation following injection of tamoxifen (see Fletcher et al. (2017) for details).\n\n\n\nThroughout the workflow, we use the class SummarizedExperiment to keep track of the counts and their associated metadata within a single object. The cell-level metadata contain quality control measures, sequencing batch ID, and cluster and lineage labels from the original publication (Fletcher et al., 2017). Cells with a cluster label of -2 were not assigned to any cluster in the original publication.\n\n\n\n\n\nUsing the Bioconductor R package scone, we remove low-quality cells according to the quality control filter implemented in the function metric_sample_filter and based on the following criteria (Figure 3): (1) filter out samples with a low total number of reads or low alignment percentage and (2) filter out samples with a low detection rate for housekeeping genes.
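These criteria amount to simple thresholding on per-cell QC metrics. As a language-neutral sketch of the idea (the workflow itself relies on scone's metric_sample_filter in R; the metric names and thresholds below are illustrative, not scone's data-driven defaults):

```python
# Minimal pure-Python analogue of threshold-based cell QC filtering.
# Field names and cutoffs are hypothetical, for illustration only.

def filter_cells(cells, min_reads=50_000, min_align_rate=0.8, min_hk_rate=0.5):
    """Keep cells passing all three QC criteria: total read count,
    alignment percentage, and housekeeping-gene detection rate."""
    return [c for c in cells
            if c["total_reads"] >= min_reads
            and c["align_rate"] >= min_align_rate
            and c["hk_detect_rate"] >= min_hk_rate]

cells = [
    {"id": "c1", "total_reads": 120_000, "align_rate": 0.91, "hk_detect_rate": 0.95},
    {"id": "c2", "total_reads": 30_000,  "align_rate": 0.88, "hk_detect_rate": 0.90},  # low depth
    {"id": "c3", "total_reads": 200_000, "align_rate": 0.55, "hk_detect_rate": 0.85},  # poor alignment
]
good = filter_cells(cells)
```

In practice scone chooses the cutoffs adaptively from the distribution of each metric rather than from fixed constants, which is why its vignette is the reference for the actual procedure.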
See the scone vignette for details on the filtering procedure.\n\n\n\nAfter sample filtering, we are left with 747 good-quality cells.\n\n\n\n\n\nFinally, for computational efficiency, we retain only the 1,000 most variable genes. This seems to be a reasonable choice for the illustrative purpose of this workflow, as we are able to recover the biological signal found in the published analysis (Fletcher et al., 2017). In general, however, we recommend care in selecting a gene filtering scheme, as an appropriate choice is dataset-dependent.\n\n\n\nOverall, after the above pre-processing steps, our dataset has 1,000 genes and 747 cells.\n\n\n\nMetadata for the cells are stored in the slot colData from the SummarizedExperiment object. Cells were processed in 18 different batches.\n\n\n\n\n\nIn the original work (Fletcher et al., 2017), cells were clustered into 14 different clusters, with 151 cells not assigned to any cluster (i.e., cluster label of -2).\n\n\n\nNote that there is partial nesting of batches within clusters (i.e., cell type), which could be problematic when correcting for batch effects in the dimensionality reduction step below.\n\n\n\nIn scRNA-seq analysis, dimensionality reduction is often used as a preliminary step prior to downstream analyses, such as clustering, cell lineage and pseudotime ordering, and the identification of DE genes. This allows the data to become more tractable, both from a statistical (cf. curse of dimensionality) and computational point of view. Additionally, technical noise can be reduced while preserving the often intrinsically low-dimensional signal of interest (Dijk et al., 2017; Pierson & Yau, 2015; Risso et al., 2017).\n\nHere, we perform dimensionality reduction using the zero-inflated negative binomial-based wanted variation extraction (ZINB-WaVE) method implemented in the Bioconductor R package zinbwave.
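The distribution at the heart of this method is a mixture of a point mass at zero (dropouts) and a negative binomial count component. As a self-contained illustration of that mixture (in Python rather than the workflow's R, with parameter names of our choosing; ZINB-WaVE itself estimates these parameters from gene- and cell-level covariates):

```python
import math

def zinb_pmf(y, mu, theta, pi):
    """Zero-inflated negative binomial probability mass at count y:
    a point mass pi at zero mixed with NB(mu, theta), where mu is the
    NB mean and theta the dispersion parameter. Sketch only."""
    # NB pmf on the log scale for numerical stability.
    log_nb = (math.lgamma(y + theta) - math.lgamma(theta) - math.lgamma(y + 1)
              + theta * math.log(theta / (theta + mu))
              + y * math.log(mu / (theta + mu)))
    nb = math.exp(log_nb)
    return pi * (y == 0) + (1 - pi) * nb

# Zero inflation adds extra mass at zero on top of the NB's own zeros.
pmf0 = zinb_pmf(0, mu=2.0, theta=1.0, pi=0.3)
total = sum(zinb_pmf(y, mu=2.0, theta=1.0, pi=0.3) for y in range(500))
```

For mu = 2 and theta = 1 the NB probability of a zero is 1/3, so the zero-inflated mass at zero is 0.3 + 0.7/3, larger than the NB alone would allow: exactly the excess-zero behavior that motivates the model.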
The method fits a ZINB model that accounts for zero inflation (dropouts), over-dispersion, and the count nature of the data. The model can include a cell-level intercept, which serves as a global-scaling normalization factor. The user can also specify both gene-level and cell-level covariates. The inclusion of observed and unobserved cell-level covariates enables normalization for complex, non-linear effects (often referred to as batch effects), while gene-level covariates may be used to adjust for sequence composition effects (e.g., gene length and GC-content effects). A schematic view of the ZINB-WaVE model is provided in Figure 4. For greater detail about the ZINB-WaVE model and estimation procedure, please refer to the original manuscript (Risso et al., 2017).\n\nThis figure was reproduced with kind permission from Risso et al. (2017).\n\nAs with most dimensionality reduction methods, the user needs to specify the number of dimensions for the new low-dimensional space. Here, we use K = 50 dimensions and adjust for batch effects via the matrix X.\n\nNote that if the users include more genes in the analysis, it may be preferable to reduce K to achieve a similar computational time.\n\n\n\nThe function zinbwave returns a SummarizedExperiment object that includes normalized expression measures, defined as deviance residuals from the fit of the ZINB-WaVE model with user-specified gene- and cell-level covariates. Such residuals can be used for visualization purposes (e.g., in heatmaps, boxplots). Note that, in this case, the low-dimensional matrix W is not included in the computation of residuals to avoid the removal of the biological signal of interest.\n\n\n\nAs expected, the normalized values no longer exhibit batch effects (Figure 5).\n\n\n\nThe principal component analysis (PCA) of the normalized values shows that, as expected, cells do not cluster by batch, but by the original clusters (Figure 6). 
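The PCA just described can be reproduced from the deviance residuals in a few lines. This is a sketch, not the workflow's exact code: it assumes the zinbwave output object `se_norm` stores the residuals in an assay named `normalizedValues` (the assay name may differ across zinbwave versions) and that color palettes `col_batch` and `col_clus` and the published cluster labels `publishedClusters` have been defined.

```r
# Deviance residuals returned by zinbwave
norm <- assays(se_norm)$normalizedValues

# PCA on cells (transpose so that rows are cells)
pca <- prcomp(t(norm))

par(mfrow = c(1, 2))
plot(pca$x[, 1:2], col = col_batch[se_norm$Batch], pch = 20,
     main = "By batch", xlab = "PC1", ylab = "PC2")
plot(pca$x[, 1:2], col = col_clus[publishedClusters], pch = 20,
     main = "By published cluster", xlab = "PC1", ylab = "PC2")
```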
Overall, it seems that normalization was effective at removing batch effects without removing biological signal, in spite of the partial nesting of batches within clusters.\n\nCells are color-coded by batch (left panel) and by original published clustering (right panel).\n\n\n\nThe zinbwave function can also be used to perform dimensionality reduction, where, in this workflow, the user-supplied dimension K of the low-dimensional space is set to K = 50. The resulting low-dimensional matrix W can be visualized in two dimensions by performing multi-dimensional scaling (MDS) using the Euclidean distance. To verify that W indeed captures the biological signal of interest, we display the MDS results in a scatterplot with colors corresponding to the original published clusters (Figure 7).\n\n\n\nThe next step of the workflow is to cluster the cells according to the low-dimensional matrix W computed in the previous step. We use the resampling-based sequential ensemble clustering (RSEC) framework implemented in the RSEC function from the Bioconductor R package clusterExperiment. Specifically, given a set of user-supplied base clustering algorithms and associated tuning parameters (e.g., k-means, with a range of values for k), RSEC generates a collection of candidate clusterings, with the option of resampling cells and using a sequential tight clustering procedure, as in Tseng & Wong (2005). A consensus clustering is obtained based on the levels of co-clustering of samples across the candidate clusterings. The consensus clustering is further condensed by merging similar clusters, which is done by creating a hierarchy of clusters, working up the tree, and testing for differential expression between sister nodes, with nodes of insufficient DE collapsed. 
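A call implementing the choices described in this section might look as follows. The parameter values here are illustrative only, not the workflow's exact settings (consult the clusterExperiment documentation for the full set of options); `se_w` denotes a SummarizedExperiment wrapping the transposed W matrix, and `NCORES` is the core count set at the start of the workflow.

```r
library(clusterExperiment)

ce <- RSEC(se_w,
           k0s = 4:15,                      # range of k for the base clusterings
           clusterFunction = "hierarchical01",
           alphas = 0.1,                    # tightness of the sequential procedure
           betas = 0.8,                     # stringency of the sequential step
           dimReduce = "none",              # W is already low-dimensional
           dendroReduce = "none",           # likewise for the cluster hierarchy
           minSizes = 1,
           isCount = FALSE,                 # input is not a count matrix
           ncores = NCORES,
           random.seed = 23)
```

Setting `dimReduce` and `dendroReduce` to `"none"` implements the decision, described above, to skip RSEC's built-in dimensionality reduction.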
As in supervised learning, resampling greatly improves the stability of clusters and considering an ensemble of methods and tuning parameters allows us to capitalize on the different strengths of the base algorithms and avoid the subjective selection of tuning parameters.\n\nNote that the defaults in RSEC are designed for input data that are the actual (normalized) counts. Here, we are applying RSEC instead to the low-dimensional W matrix from ZINB-WaVE, for which we make a separate SummarizedExperiment object. For this reason, we choose not to use certain options in RSEC. In particular, we do not use the default dimensionality reduction step, since our input W is already in a space of reduced dimension. Specifically, RSEC offers a dimensionality reduction option for the input to both the clustering routines (dimReduce) and the construction of the hierarchy between the clusters (dendroReduce). We also skip the option to merge our clusters based on the amount of differential gene expression between clusters.\n\n\n\n\n\nThe resulting candidate clusterings can be visualized using the plotClusters function (Figure 8), where columns correspond to cells and rows to different clusterings. Each sample is color-coded based on its clustering for that row, where the colors have been chosen to try to match up clusters that show large overlap across rows. 
The first row corresponds to a consensus clustering across all candidate clusterings.\n\n\n\nThe plotCoClustering function produces a heatmap of the co-clustering matrix, which records, for each pair of cells, the proportion of times they were clustered together across the candidate clusterings (Figure 9).\n\n\n\nThe distribution of cells across the consensus clusters can be visualized in Figure 10 and is as follows:\n\n\n\nThe distribution of cells in our workflow’s clustering overall agrees with that in the original published clustering (Figure 11), the main difference being that several of the published clusters were merged here into single clusters. This discrepancy is likely caused by the fact that we started with the top 1,000 genes, which might not be enough to discriminate between closely related clusters.\n\n\n\nFigure 12 displays a heatmap of the normalized expression measures for the 1,000 most variable genes, where cells are clustered according to the RSEC consensus.\n\n\n\nFinally, we can visualize the cells in a two-dimensional space using the MDS of the low-dimensional matrix W and coloring the cells according to their newly-found RSEC clusters (Figure 13); this is analogous to Figure 7 for the original published clusters.\n\n\n\nWe now demonstrate how to use the R software package slingshot to infer branching cell lineages and order cells by developmental progression along each lineage. The method, proposed in Street et al. (2017), comprises two main steps: (1) The inference of the global lineage structure (i.e., the number of lineages and where they branch) using a minimum spanning tree (MST) on the clusters identified above by RSEC and (2) the inference of cell pseudotime variables along each lineage using a novel method of simultaneous principal curves. 
The approach in (1) allows the identification of any number of novel lineages, while also accommodating the use of domain-specific knowledge to supervise parts of the tree (e.g., known terminal states); the approach in (2) yields robust pseudotimes for smooth, branching lineages.\n\nThe two steps of the Slingshot algorithm are implemented in the functions getLineages and getCurves, respectively. The first takes as input a low-dimensional representation of the cells and a vector of cluster labels. It fits an MST to the clusters and identifies lineages as paths through this tree. The output of getLineages is an object of class SlingshotDataSet containing all the information used to fit the tree and identify lineages. The function getCurves then takes this object as input and fits simultaneous principal curves to the identified lineages. These functions can be run separately, as below, or jointly by the wrapper function slingshot.\n\nFrom the original published work, we know that the start cluster should correspond to HBCs and the end clusters to MV, mOSN, and mSUS cells. Additionally, we know that GBCs should be at a junction before the differentiation between MV and mOSN cells (Figure 2). The correspondence between the clusters we found here and the original clusters is as follows.\n\n\n\n\n\nCells in cluster c4 have a cluster label of -2 in the original published clustering, meaning that they were not assigned to any cluster. These cells were actually identified as non-sensory contaminants, as they overexpress gene Reg3g (see Figure S1 from Fletcher et al. (2017) and Figure 14), and were removed from the original published clustering. While it is reassuring that our workflow clustered these cells separately, with no influence on the clustering of the other cells, we removed cluster c4 to infer lineages and pseudotimes, as cells in this cluster do not participate in the cell differentiation process. 
Note that, out of the 77 cells overexpressing Reg3g, 11 are captured in cluster c4 and 21 are unclustered in our workflow’s clustering (see Figure 14). However, we retain the remaining 45 cells to infer lineages as they did not seem to influence the clustering.\n\n\n\nTo infer lineages and pseudotimes, we apply Slingshot to the 4-dimensional MDS of the low-dimensional matrix W. We found that the Slingshot results were robust to the number of dimensions k for the MDS (we tried k from 2 to 5). Here, we use the unsupervised version of Slingshot, where we only provide the identity of the start cluster but not of the end clusters.\n\n\n\nBefore fitting the simultaneous principal curves, we examine the global structure of the lineages by plotting the MST on the clusters. This shows that our implementation has recovered the lineages found in the published work (Figure 15). The slingshot package also includes functionality for 3-dimensional visualization as in Figure 2, using the plot3d function from the package rgl.\n\n\n\nHaving found the global lineage structure, we now construct a set of smooth, branching curves in order to infer the pseudotime variables. Simultaneous principal curves are constructed from the individual cells along each lineage, rather than the cell clusters. This makes them more stable and better suited for assigning cells to lineages. The final curves are shown in Figure 16.\n\n\n\nIn the workflow, we recover a reasonable ordering of the clusters using the unsupervised version of slingshot. However, in some other cases, we have noticed that we need to give more guidance to the algorithm to find the correct ordering. getLineages has the option for the user to provide known end cluster(s). 
Here is the code to use slingshot in a supervised setting, where we know that clusters c3 and c7 represent terminal cell fates.\n\n\n\nAfter assigning the cells to lineages and ordering them within lineages, we are interested in finding genes that have non-constant expression patterns over pseudotime.\n\nMore formally, for each lineage, we use the robust local regression method loess to model in a flexible, non-linear manner the relationship between a gene’s normalized expression measures and pseudotime. We can then test the null hypothesis of no change over time for each gene using the gam package. We implement this approach for the neuronal lineage and display the expression measures of the top 100 genes by p-value in the heatmap of Figure 17.\n\n\n\n\n\nIn an effort to improve scRNA-seq data analysis workflows, we are currently exploring a variety of applications and extensions of our ZINB-WaVE model. In particular, we are developing a method to impute counts for dropouts; the imputed counts could be used in subsequent steps of the workflow, including dimensionality reduction, clustering, and cell lineage inference. In addition, we are extending ZINB-WaVE to identify differentially expressed genes, both in terms of the negative binomial mean and the zero inflation probability, reflecting, respectively, gradual DE and on/off DE patterns. We are also developing a method to identify genes that are DE either within or between lineages inferred from Slingshot.\n\nFinally, a new S4 class called SingleCellExperiment is currently under development (https://github.com/drisso/SingleCellExperiment). This new class is essentially a SummarizedExperiment class with a couple of additional slots, the most important of which is reducedDims, which, much like the assays slot of SummarizedExperiment, can contain one or more matrices of reduced dimension. 
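As a hypothetical sketch of how this could simplify the workflow (the class was still under development at the time of writing, so constructor and accessor names may change before release), the raw counts and the ZINB-WaVE factors might be combined in a single object as:

```r
library(SingleCellExperiment)

# Wrap the raw counts and cell metadata, much as with SummarizedExperiment
sce <- SingleCellExperiment(assays = list(counts = assay(se)),
                            colData = colData(se))

# Store the low-dimensional ZINB-WaVE matrix W alongside the counts
reducedDim(sce, "zinbwave") <- W
```

Downstream steps (MDS, RSEC, slingshot) could then retrieve the factors with `reducedDim(sce, "zinbwave")` instead of carrying a second object.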
This new SingleCellExperiment class would be a valuable addition to the workflow, as we could store in a single object the raw counts as well as the low-dimensional matrix created by the ZINB-WaVE dimensionality reduction step. Once the implementation of this class is stable, we would like to incorporate it into the workflow.\n\n\nConclusion\n\nThis workflow provides a tutorial for the analysis of scRNA-seq data in R/Bioconductor. It covers four main steps: (1) dimensionality reduction accounting for zero inflation and over-dispersion and adjusting for gene and cell-level covariates; (2) robust and stable cell clustering using resampling-based sequential ensemble clustering; (3) inference of cell lineages and ordering of the cells by developmental progression along lineages; and (4) DE analysis along lineages. The workflow is general and flexible, allowing the user to substitute the statistical method used in each step by a different method. We hope our proposed workflow will ease technical aspects of scRNA-seq data analysis and help with the discovery of novel biological insights.\n\n\nSoftware and data availability\n\nThe source code for this workflow can be found at https://github.com/fperraudeau/singlecellworkflow. Archived source code as at time of publication: http://doi.org/10.5281/zenodo.826211 (Perraudeau et al., 2017).\n\nThe four packages used in the workflow (scone, zinbwave, clusterExperiment, and slingshot) are Bioconductor R packages and are available at, respectively, https://bioconductor.org/packages/scone, https://bioconductor.org/packages/zinbwave, https://bioconductor.org/packages/clusterExperiment, and https://github.com/kstreet13/slingshot.\n\nData used in this workflow are available from NCBI GEO, accession GSE95601.",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nDR, KS, EP, and SD were supported by the National Institutes of Health BRAIN Initiative (U01 MH105979, PI: John Ngai). KS was supported by a training grant from the National Human Genome Research Institute (T32000047).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgments\n\nThe authors are grateful to Professor John Ngai (Department of Molecular and Cell Biology, UC Berkeley) and his group members Dr. Russell B. Fletcher and Diya Das for motivating the research presented in this workflow and for valuable feedback on applications to biological data. We would also like to thank Michael B. Cole for his contributions to scone.\n\n\nSupplementary material\n\nSupplementary File 1: sessionInfo.\n\nClick here to access the data.\n\n\nReferences\n\nDijk van D, Nainys J, Sharma R, et al.: MAGIC: A diffusion-based imputation method reveals gene-gene interactions in single-cell RNA-sequencing data. bioRxiv. 2017. Publisher Full Text\n\nFletcher RB, Das D, Gadye L, et al.: Deconstructing Olfactory Stem Cell Trajectories at Single-Cell Resolution. Cell Stem Cell. 2017; 20(6): 817–830.e8. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHuber W, Carey VJ, Gentleman R, et al.: Orchestrating high-throughput genomic analysis with Bioconductor. Nat Methods. 2015; 12(2): 115–21. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLun AT, McCarthy DJ, Marioni JC: A step-by-step workflow for low-level analysis of single-cell RNA-seq data with Bioconductor [version 2; referees: 3 approved, 2 approved with reservations]. F1000Res. 2016; 5: 2122. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMcCarthy DJ, Campbell KR, Lun AT, et al.: Scater: pre-processing, quality control, normalization and visualization of single-cell RNA-seq data in R. Bioinformatics. 
2017; 33(8): 1179–1186. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPerraudeau F, Risso D, Street K, et al.: Bioconductor workflow for single-cell RNA sequencing: Normalization, dimensionality reduction, clustering, and lineage inference: fperraudeau/singlecellworkflow First release. Zenodo. 2017. Data Source\n\nPierson E, Yau C: ZIFA: Dimensionality reduction for zero-inflated single-cell gene expression analysis. Genome Biol. 2015; 16(1): 241. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRisso D, Perraudeau F, Gribkova S, et al.: ZINB-WaVE: A general and flexible method for signal extraction from single-cell RNA-seq data. bioRxiv. 2017. Publisher Full Text\n\nStreet K, Risso D, Fletcher RB, et al.: Slingshot: Cell lineage and pseudotime inference for single-cell transcriptomics. bioRxiv. 2017. Publisher Full Text\n\nTseng GC, Wong WH: Tight Clustering: A Resampling-Based Approach for Identifying Stable and Tight Patterns in Data. Biometrics. 2005; 61(1): 10–6. PubMed Abstract | Publisher Full Text"
}
|
[
{
"id": "24385",
"date": "03 Aug 2017",
"name": "Stephanie C. Hicks",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nIn this article, the authors Perraudeau, Risso, Street, Purdom and Dudoit present a nice workflow for normalization, dimensionality reduction, clustering, and lineage inference of single-cell RNA-seq data (scRNA-seq) using R packages from the open-source Bioconductor project. I enthusiastically agree with the authors on an “increasing need for workflows that integrate these tools to yield a seamless scRNA-seq data analysis pipeline” and this workflow is a great step in the right direction. However, I have some constructive suggestions that will better integrate other previously developed work and improve this workflow.\nIn this workflow, the authors start with a count table. However, the majority of researchers will start with raw reads (e.g. a FASTQ file). It would be great if the author discussed current best practices for the quantification step of scRNA-seq data. Alternatively, the authors could point to other references that have already been developed.\n\nI would like to see the authors take advantage of the rich functionality and data exploration tools for cell- and gene-specific quality control (QC) introduced in low-level analysis workflows such as the one from Lun et al. (2016)1. Also, in this workflow, the authors create multiple SummarizedExperiment objects (e.g. one with only the top 1000 highly variable genes (HVGs), one with all genes, etc). 
This doesn’t seem efficient, especially with large single cell data sets such as the 1.3 million cells from embryonic mouse brains. I think both of these concerns can now be addressed with efforts such as the recently developed SingleCellExperiment Bioconductor object (https://github.com/drisso/SingleCellExperiment). For example, the authors could add a “USE” column in the gene- or cell-specific meta table to represent whether or not a particular gene in a particular cell met the filtering criteria applied. The authors could store W in the reduceDim assay of the SingleCellExperiment object.\n\nIn ZINB-WaVE, the authors specify the number of dimensions for the low-dimensional space (K) to be K=50. Could the authors add more details for the reader explaining why they picked K=50 and describe situations in which a user would want to specify a higher or lower K? In particular, it would be useful to discuss computational time in terms of number of genes and cells. Also, it would be useful to note that if you only wanted to use ZINB-WaVE to remove known covariates for normalization, you can use K=0.\n\nMinor comments:\nWhen selecting the top 1000 HVGs, why do the authors not take into account the overall mean-variance relationship and only select genes based on the variance?\n\nIt would be great if the authors referenced other tools available for similar analyses currently available. For example there are several available packages for normalization of scRNA-seq data, such as calculating global scaling factors can be done with scran (https://bioconductor.org/packages/release/bioc/html/scran.html) or gene and cell-specific scaling factors using SCnorm (https://github.com/rhondabacher/SCnorm). Alternatively, users might want to try using relative transcript counts using Census (https://bioconductor.org/packages/release/bioc/html/monocle.html).\n\nIs the rationale for developing the new method (or application) clearly explained? 
Yes\n\nIs the description of the method technically sound? Yes\n\nAre sufficient details provided to allow replication of the method development and its use by others? Yes\n\nIf any results are presented, are all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions about the method and its performance adequately supported by the findings presented in the article? Yes",
"responses": []
},
{
"id": "24388",
"date": "07 Aug 2017",
"name": "Andrew McDavid",
"expertise": [
"Reviewer Expertise Statistics and bioinformatics"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nMajor comments:\nThere is something wrong with the data download link in the F1000 version so that I am unable to download these files and actually reproduce the workflow. I experimented a bit to see if I could figure out how to download the data anyways, but will reserve further evaluation of this submission until this issue can be resolved by the authors.\n```{r} urls = c(\"https://www.ncbi.nlm.nih.gov/geo/download/?acc=GSE95601&format=file&file=GSE95601%5FoeHBCdiff% \"https://raw.githubusercontent.com/rufletch/p63-HBC-diff/master/ref/oeHBCdiff_clusterLabels.) ```\n\na. This workflow will likely be out-of-date when the underlying packages transition to use SingleCellExperiment. This is actually a positive thing because many of the more opaque lines of code (involving subsetting ERCC genes, etc) will be more streamlined.\nb. It requires installation of the development branch of bioconductor, which impacts the usefulness of the workflow to the average user. I expect the authors will revise this tutorial when Bioconductor 3.6 is released and use of the devel branch is no longer necessary. Additionally `slingshot` is an requirement, but currently only exists on github and no SHA1 provided. I hope that `slingshot` will be added as a bioconductor package shortly. In the meantime, a tag must be added to the git repo for the release being used in this workflow and instructions provided for how to install this tag. 
Additionally, the authors may wish to note that installation instructions for the packages will be provided at the end of the workflow so that someone proceeding sequentially will not be tripped up.\nc. Opaque code is presented in order to generate plots, e.g. ```{r} palDF <- ceObj@clusterLegend[[1]] pal <- palDF[, \"color\"] names(pal) <- palDF[, \"name\"] pal[\"-1\"] = \"transparent\" plot(fit$points, col = pal[primaryClusterNamed(ceObj)], main = \"\", pch = 20, xlab = \"Component1\", ylab = \"Component2\") legend(x = \"topleft\", legend = names(pal), cex = .5, fill = pal, title = \"Sample\") ```\nWhile this complexity may be necessary, perhaps some of it could be encapsulated as accessor functions in the package? Too much complexity here may cause users to miss the forest for the trees.\n\nThe authors could better motivate (or at least explain the impact of) some of the default parameters and procedures.\n\nWhy do we set a zcut threshold of 3 for the `scone` filtering? Why K=50 for zinbwave? RSEC parameters\nHow should a user decide on a value for these parameters?\n\nIs the rationale for developing the new method (or application) clearly explained? Yes\n\nIs the description of the method technically sound? Partly\n\nAre sufficient details provided to allow replication of the method development and its use by others? Partly\n\nIf any results are presented, are all the source data underlying the results available to ensure full reproducibility? Partly\n\nAre the conclusions about the method and its performance adequately supported by the findings presented in the article? Yes",
"responses": []
},
{
"id": "24387",
"date": "07 Aug 2017",
"name": "Michael I. Love",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe authors have developed an easy to follow workflow which goes beyond preparing single-cell data for analysis, showing how to use existing methods and packages to normalize, perform dimension reduction, construct cell lineages and perform differential testing along those lineages. The workflow seems like it will be useful, and I hope the authors can update the workflow as new frameworks come into play (e.g. SingleCellExperiment).\nI have the following suggestions for improving the workflow:\nIt would be useful to put a link to the source code (or to the section where the link to the source code exists) near the top of the document.\nI was confused a bit by \"the first major bifurcation in the HBC lineage trajectory occurs prior to cell division\". Can you be more specific about what you are referring to here by cell division, as without knowledge of the system, I'm not sure where the cell division you refer to should appear.\n\"within a single object\": It may be good to explain what an \"object\" here is. You could, for example, refer to Figure 2 of the Bioconductor Nature Methods paper1.\nMisspelling: \"reasonnable\"\nOn filtering for most variable genes, I understand this decision, and I also recommend it during workshops before making ordination plots. I know that students are not always certain why we care about variance (unsupervised). I like to mention that these are the genes where the \"action\" is. 
A side point, the log(x+1) is not variance stabilizing for RNA-seq counts in general. This filter can give higher priority to low count genes than to genes where there is interesting biological variability (though I do not doubt that the very high biological variance genes will be preserved). It might be useful to show a vsn::meanSdPlot() for the matrix log1p(assay(se))?\n\n\"correcting for batch effects\": What are batch effects? (Of course, I know what they are, but a reader may not, and you could cite some of the single cell literature here.)\n\"Note that, in this case, the low-dimensional matrix W is not included in the computation of residuals to avoid the removal of the biological signal of interest.\": I understood this sentence only on a second pass through. One problem is that you haven't defined W in the text yet (only in Figure 4). I would only reference the matrix W if you have defined it.\nFigure 6: Can you change the figure width so that PC1 is not squished?\nIs there a circularity to the recovery of published clusters in Figure 6? Was ZINB-WaVE used in Fletcher (2017)?\nCan you say what the meaning of the color white is in Figure 8 (in the text or caption near this figure)?\nFigure 15 refers back to Figure 2 but does not use the same color scheme for the known cell types, so the reader cannot verify if you've recovered the lineages from the publication. It would be good therefore to have a legend for these figures (Fig 15 and following) indicating which cell types the colors refer to (this information is in the unlabeled table above, but should be included as a legend here).\nCan you briefly describe what a GAM is ahead of Figure 17?\n\nIs the rationale for developing the new method (or application) clearly explained? Yes\n\nIs the description of the method technically sound? Yes\n\nAre sufficient details provided to allow replication of the method development and its use by others? 
Yes\n\nIf any results are presented, are all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions about the method and its performance adequately supported by the findings presented in the article? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-1158
|
https://f1000research.com/articles/6-1144/v1
|
19 Jul 17
|
{
"type": "Research Article",
"title": "CNS cell-type localization and LPS response of TLR signaling pathways",
"authors": [
"Gizelle M. McCarthy",
"Courtney R. Bridges",
"Yuri A. Blednov",
"R. Adron Harris",
"Gizelle M. McCarthy",
"Courtney R. Bridges",
"Yuri A. Blednov"
],
"abstract": "Background: Innate immune signaling in the brain has emerged as a contributor to many central nervous system (CNS) pathologies, including mood disorders, neurodegenerative disorders, neurodevelopmental disorders, and addiction. Toll-like receptors (TLRs), a key component of the innate immune response, are particularly implicated in neuroimmune dysfunction. However, most of our understanding about TLR signaling comes from the peripheral immune response, and it is becoming clear that the CNS immune response is unique. One controversial aspect of neuroimmune signaling is which CNS cell types are involved. While microglia are the CNS cell-type derived from a myeloid lineage, studies suggest that other glial cell types and even neurons express TLRs, although this idea is controversial. Furthermore, recent work suggests a discrepancy between RNA and protein expression within the CNS. Methods: To elucidate the CNS cell-type localization of TLRs and their downstream signaling molecules, we isolated microglia and astrocytes from the brain of adult mice treated with saline or the TLR4 ligand lipopolysaccharide (LPS). Glial mRNA and protein expression was compared to a cellular-admixture to determine cell-type enrichment. Results: Enrichment analysis revealed that most of the TLR pathway genes are localized in microglia and changed in microglia following immune challenge. However, expression of Tlr3 was enriched in astrocytes, where it increased in response to LPS. Furthermore, attempts to determine protein cell-type localization revealed that many antibodies are non-specific and that antibody differences are contributing to conflicting localization results. Conclusions: Together these results highlight the cell types that should be looked at when studying TLR signaling gene expression and suggest that non-antibody approaches need to be used to accurately evaluate protein expression.",
"keywords": [
"Toll-like receptor",
"MyD88",
"TRIF",
"microglia",
"astrocyte",
"neuroimmune",
"lipopolysaccharide"
],
"content": "Introduction\n\nInnate immune signaling has been well characterized in the body for decades, but the recent appreciation for its role in the brain has raised several questions. In particular, it has brought to light the similarities and differences between the immune response in the periphery and the central nervous system (CNS). At the center of this discussion are microglia, the resident immune cells of the brain. However, there is evidence that microglia have unique functions unrelated to immune signaling, and that other CNS cells can also participate in the immune response.\n\nA key component of innate immunity is Toll-like receptors (TLRs), a family of pattern recognition receptors that detect and respond to pathogen and danger signals. TLRs respond to a variety of bacterial and viral pathogens, including the bacterial endotoxin lipopolysaccharide (LPS), which is a ligand for TLR41. In response to LPS, TLR4 with its co-receptor cluster of differentiation 14 (CD14) can signal through two distinct pathways, the myeloid differentiation primary response protein 88 (MyD88)-dependent pathway and the TIR-domain containing adaptor protein inducing IFNβ (TRIF)-dependent pathway2 (Figure 1). The MyD88-dependent pathway signals through Interleukin 1 receptor associated kinases 1 and 4 (IRAK1 and IRAK4) and TNF receptor associated factor 6 (TRAF6), leading to activation of inhibitors of nuclear factor κB Kinases (IKKs)1. Activation of IKKs causes activation of NF-κB and the production of pro-inflammatory cytokines (e.g. TNF, IL-1β, IL-6). By contrast, the TRIF-dependent pathway utilizes the adaptor protein TRIF and signals through TRAF3, TBK1 and IKKε, leading to phosphorylation and activation of interferon regulatory factor 3 (IRF3)3. Activated IRF3 translocates to the nucleus where it leads to the transcription of type I interferons and interferon inducible genes (e.g. 
IFN-β, CCL5/RANTES, CXCL10/IP-10).\n\nLipopolysaccharide (LPS) is recognized by TLR4 and its co-receptors MD2 and CD14. TLR4 signals through two different pathways, the MyD88-dependent pathway and the TRIF-dependent pathway. The MyD88-dependent pathway utilizes the adaptor protein MyD88, which recruits IRAK4, IRAK1, and TRAF6. Phosphorylation of IRAK1 and ubiquitination of TRAF6 leads to activation of IKKs and NF-κB. Activated NF-κB translocates to the nucleus where it promotes transcription of pro-inflammatory cytokines. TLR2 also signals through the MyD88-dependent pathway. The TRIF-dependent pathway, utilized by TLR3 and TLR4, signals through the adaptor protein TRIF. TRIF recruits TRAF6 and TRAF3. Signaling through TRAF6 leads to NF-κB activation, while signaling through TRAF3 utilizes IKKε to activate IRF3. Activated IRF3 translocates to the nucleus, where it leads to transcription of type I interferons and interferon inducible genes.\n\nTLR signaling has been implicated in several CNS conditions, including ischemia, neurodegeneration, depression, and addiction4–9. However, the cell-type localization of TLR signaling within the CNS remains controversial and impairs our understanding and ability to develop treatments based on these signaling pathways. TLR signaling was originally characterized in peripheral immune cells; thus, it was believed that CNS expression of TLRs would be limited to microglia, the immune cells of the brain. Several studies support microglial expression of TLRs, and many reaffirm the idea that expression is completely or mostly microglial9–12. However, recent studies suggest that TLRs are also expressed and functionally important in other glial cells, such as astrocytes and oligodendrocytes10,11,13–18, or even non-glial CNS cells, like neurons19–24. 
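The two TLR4 branches described above can be captured in a small lookup structure. This is an illustrative sketch, not code from the study; the adaptors, intermediates, and outputs are taken directly from the pathway description in the text.

```python
# Illustrative sketch (not from the study): the two TLR4 signaling branches
# described in the text, encoded as a lookup table of adaptors and outputs.
TLR4_PATHWAYS = {
    "MyD88-dependent": {
        "adaptor": "MyD88",
        "intermediates": ["IRAK4", "IRAK1", "TRAF6", "IKKs"],
        "transcription_factor": "NF-kB",
        "outputs": ["TNF", "IL-1b", "IL-6"],
    },
    "TRIF-dependent": {
        "adaptor": "TRIF",
        "intermediates": ["TRAF3", "TBK1", "IKKe"],
        "transcription_factor": "IRF3",
        "outputs": ["IFN-b", "CCL5/RANTES", "CXCL10/IP-10"],
    },
}

def outputs_for(pathway):
    """Return the pathway outputs named in the text for a given branch."""
    return TLR4_PATHWAYS[pathway]["outputs"]
```

Keeping the branch-to-output mapping explicit like this makes it easy to see which measured genes (e.g. Tnf vs. Ifnb) report on which branch in the expression data below.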
These results are complicated by differences in methodologies across studies, including differences in: protein or mRNA; in vivo, primary cells or in established cell lines; species; and techniques. Interestingly, there seems to be disagreement between cell-type location of mRNA and protein expression for the same molecule, which raises many questions. For example, a brain RNA expression database shows Tlr4 as highly microglial25, while the human protein atlas (proteinatlas.org) shows it only detected in neurons26. Several other TLR signaling molecules (MyD88, IRAK1, TRIF, IRF3) also show highest mRNA expression in microglia, but highest protein expression in neurons.\n\nAlthough many studies have reported the localization of TLRs in the CNS, few have evaluated the expression of the downstream signaling molecules and pathway outputs that are responsible for functional changes. It also remains unclear how immune activation might change cell-type expression of TLR signaling in vivo, as most studies have evaluated the response to TLR agonists using cultured cells9,18,27–29. Recent studies suggest that established cell lines and even primary cultured glial cells don’t accurately reflect the expression profile in vivo30.\n\nAlthough this discrepancy may seem esoteric, it is a major hindrance to the study of neuroimmune signaling. In our lab alone, we have had several problematic studies because it was unclear which cell type to use for a conditional knockout or viral vector, or a gene was knocked out in microglia but couldn’t be verified on the protein level because of neuronal expression. These uncertainties not only result in wasted time and money, but also delay the discovery of important results. 
Given the key role of TLR signaling in CNS pathologies and the desire to manipulate and understand these pathways in the brain, it is imperative that cell-type localization of these molecules is determined and agreed upon.\n\nBased on the disagreement in the field and preliminary results that suggested TLR-signaling mRNAs are localized in microglia while protein is localized in neurons, we sought to investigate TLR signaling localization using glial cells isolated from adult mouse brain. The goals of this study were to identify the cell-type enrichment of TLR pathway mRNAs and proteins with and without immune activation (LPS treatment), and to determine which cells exhibit expression changes following activation. There is literature supporting the idea that cell-type protein expression can change after LPS31, so we hypothesized that key mRNAs will be abundant in microglia so as to allow rapid translation into protein in response to immune activation. Our results revealed that mRNA was primarily microglial, although there were some differences in expression profiles, and that LPS increased mRNA expression in microglia. By contrast, our protein results were inconclusive, due to non-specific antibodies and conflicting results across antibodies for the same protein. Based on our results, we conclude that much of the disagreement in the field is due to antibody failures, and that better antibodies or alternative methods need to be developed to conclusively determine protein localization in CNS cells.\n\n\nMethods\n\nAll procedures were approved by the University of Texas at Austin Institutional Animal Care and Use Committee (animal protocol number AUP-2013-00061) and adhered to the National Institutes of Health Guidelines. The University of Texas at Austin animal facility is accredited by the Association for Assessment and Accreditation of Laboratory Animal Care. All efforts were made to ameliorate any suffering of the mice. 
Any mice that became too sick in response to the LPS injections were euthanized.\n\nStudies were conducted in adult (6–8 weeks old) C57Bl/6J male mice (Jackson Laboratories, Bar Harbor, ME, USA). Mice were individually housed and allowed to acclimate to upright bottles one week before the start of the experiment. The experimental rooms were maintained at an ambient temperature of 21±1°C, 40–60% humidity, and a regular light/dark schedule (7 AM–7 PM). Food and water were available ad libitum. The mice were randomly divided into three groups, each containing 7 LPS treated mice and 5 saline treated mice (additional mice were put in the LPS group in case of death before 24 hours) (Figure S1). The mice were weighed, had water intake measured for two days prior to injection and then were injected with either LPS (2.0 mg/kg) or saline. Mice were weighed and water intake was measured 24 hours post-injection and the mice were sacrificed under anesthesia. Weight and water consumption data are provided in Figure S2.\n\nKnockout (null mutant) mice for TLR2, TLR4, and MyD88 are described in 32. Briefly, the TLR2 knockout mouse was B6.129S1-Tlr2tm1Dgen/J (Jackson Laboratories), which has a neomycin cassette inserted in the gene, making it non-functional33. The TLR4 knockout mouse was B6.B10Scn-Tlr4lps-del/JthJ (Jackson Laboratories), which has the locus containing the Tlr4 gene deleted34. The MyD88 knockout mouse was B6.129P2(SJL)-MyD88tm1.1Defr/J (Jackson Laboratories), and is a cross of Myd88tm1Defr mice (loxP sites flanking exon 3 of Myd88) with Tg(Zp3-cre)93Knw mice35. RT-qPCR was used to determine the transcript expression in the knockout mice (Figure S3). The TLR2 knockout mouse showed increased expression of Tlr2, which is consistent with a larger transcript being produced due to the neomycin cassette36. The TLR4 knockout mouse showed no transcript expression, consistent with previous studies34. 
The MyD88 knockout mouse showed decreased expression of MyD88, likely due to the fact that only exon 3 is removed and the primers are not on exon 3.\n\nFive mice per group were perfused with ice-cold saline and the brain was removed (each group was performed on a different day). The dissected tissue was pooled by treatment within group (i.e. all of group 1 saline samples were combined, see Figure S1). Samples were pooled to get enough microglia for both qPCR and western blots. Approximately 1% of the minced tissue was taken as a total homogenate (TH) sample that includes all cell types. The TH was further divided into 10% for RNA and 90% for protein and centrifuged at 1000 x g for 10 minutes at 4°C. The supernatant was removed and the cells were flash frozen in liquid nitrogen. The remaining sample was used for microglial isolation, as described by Nikodemova et al. 201237. Briefly, tissue suspension was enzymatically dissociated using the Neural Tissue Dissociation Kit-Papain (Miltenyi Biotec, Germany) in conjunction with Pasteur pipette manual dissociation. Dissociated tissue was passed through a 70 μm strainer (Miltenyi Biotec), centrifuged at 300 x g, and resuspended in 30% Percoll (Sigma-Aldrich, St. Louis, MO, USA). The Percoll-cell suspension was centrifuged at 700 x g for 15 minutes at room temperature, with the myelin fraction removed from the top fraction. Cells were washed and then incubated with CD11b MicroBeads (Miltenyi Biotec) and eluted using MS columns to collect CD11b+ cells. Cells were again divided (10% for RNA and 90% for protein) and CD11b+ cell pellets were collected by centrifugation at 300 x g for 10 minutes at 4°C and then flash frozen. The CD11b- fraction was also spun down and the pellet was resuspended in astrocyte-binding ACSA2 MicroBeads (Miltenyi Biotec). 
The ACSA2+ fraction was collected as the CD11b+ fraction was, and the remaining negative fraction (CD11b/ACSA2-) and the astrocyte fraction (ACSA2+) were divided (10% for RNA, 90% for protein), spun down and pellets were flash frozen.\n\nRNA was isolated from all four fractions (TH, CD11b+, ACSA2+, CD11b/ACSA2-) using the MagMax-96 Total RNA Isolation Kit (Thermo Fisher Scientific Inc., Rockford, IL, USA). The RNA yield was quantified on a NanoDrop 1000 spectrophotometer and assessed for quality on an Agilent 2200 TapeStation (Agilent Technologies, Santa Clara, CA, USA). RNA was reverse transcribed into cDNA using the Applied Biosystems High-Capacity cDNA Reverse Transcription Kit (Thermo Fisher Scientific Inc.). cDNA was tested for genomic DNA contamination and showed at least a 10 Cq difference between the +RT (reverse transcription) and –RT samples38. Applied Biosystems TaqMan® Gene Expression Assay (Thermo Fisher Scientific Inc.) primers were used, and specific assay IDs are shown in Table S1. RT-qPCR reactions were performed using SsoAdvanced™ Universal Probes Supermix (BioRad, Hercules, CA, USA) in 10-μL reactions containing 250 pg of cDNA. All reactions were performed in technical triplicates for each biological replicate and included a negative no-template control. Samples were normalized to 18s rRNA and relative expression was determined using the CFX software version 3.1 (BioRad).\n\nCells or tissue were homogenized in lysis buffer (150 mM NaCl, 50 mM Tris-HCl pH 7.4, 1 mM EDTA, 1% Triton-X-100, 1% sodium deoxycholic acid, 0.1% SDS, 1X Halt Protease and Phosphatase Inhibitor Cocktail; Thermo Fisher Scientific Inc.), rocked for 30 minutes at 4°C, centrifuged for 10 minutes at 10,000 x g, aliquoted, and frozen at -80°C. HEK-293 cells were kindly provided by Dr. Mihic’s laboratory. These cells were washed with cold PBS, scraped and washed with lysis buffer, and processed as described above. 
Protein concentrations were determined using the DC Protein Assay (Bio-Rad). Cell lysates (20 μg for fractions, 40 μg for antibody tests) were boiled for 5 minutes, run on 4-15% Mini-Protean TGX Precast Gels (Bio-Rad), and transferred to PVDF membranes using semi-dry transfer. All fraction blots contained a control sample (mouse whole brain lysate) for normalizing across blots. Membranes were blocked with 5% dried milk in TBST (Tris-buffered saline with 0.5% Tween-20) and incubated overnight at 4°C with primary antibody (Table S2). Membranes were washed with TBST and incubated with HRP-conjugated secondary antibodies in 5% dried milk in TBST for 1 hour at room temperature (Table S2). Bands were visualized using Pierce ECL (Thermo Fisher Scientific Inc.) and imaged on film, using G:BOX Chemi XX6 (Syngene, Cambridge, UK). Attempts were made to identify a loading control that was equal across all cell types, but every loading control examined showed differences in expression across fractions.\n\nThe protocol was adapted from Exiqon miRCURY microRNA ISH Optimization Kit (Exiqon, Vedbaek, Denmark). Mice were transcardially perfused with 4% paraformaldehyde (PFA), and the brains were post-fixed overnight in 4% PFA at 4°C and transferred to 30% sucrose overnight at 4°C. Brains were fresh frozen and coronally sectioned on a cryostat (20 μm). Free-floating sections were post-fixed in 10% NBF overnight at room temperature. After three 1x PBS washes (3 minutes per wash), slices were hybridized with a double DIG-labeled custom Locked Nucleic Acid (LNA) probe (Exiqon) for 1 hour at appropriate hybridization temperature (Table S3). Following hybridization, slices were washed in 5x SSC, 1x SSC (2 times), and 0.2x SSC (2 times) at the same temperature as hybridization for 5 minutes per wash. After a final 0.2x SSC wash at room temperature for 5 minutes, slices were blocked with blocking solution (1x PBS, 0.1% Tween-20, 2% donkey serum, and 1% BSA) at room temperature for 15 minutes. 
Various permeabilization steps were also tested (see source data). Slices were then incubated in anti-DIG antibody (for mRNA probe) and appropriate primary antibody for protein of choice (Table S3) overnight at 4°C. All antibodies were diluted in antibody solution (1x PBS, 0.05% Tween-20, 1% donkey serum, and 1% BSA). After three 1x PBS-T (0.1%) washes (5 minutes per wash), appropriate secondary antibodies were applied to the slices and allowed to incubate at room temperature for 1.5 hours. After three final 1× PBS washes (10 minutes per wash), slices were mounted on charged slides and counterstained with DAPI (Fluoromount-G, Southern Biotech). Slides were visualized on a Zeiss Axiovert 200M Fluorescent Microscope and analysis was completed on Photoshop CC5 (Adobe). Probe and antibody information is found in Table S3.\n\nBrains were prepared as stated above and free-floating sections were placed into PBS. Sections were permeabilized in detergent (0.1% Triton-X-100) and blocked in 10% goat or donkey serum for 1 hour at room temperature. Antibody treatment and mounting was performed as described above. Antibody information is in Table S3.\n\nRT-qPCR data was analyzed with a two-way analysis of variance (ANOVA) and Tukey’s multiple comparisons test. All statistical analyses were performed using Prism 7 (GraphPad Software, La Jolla, CA). All p-values are shown in Table S4.\n\n\nResults\n\nFour fractions were collected from the saline and 24-hour LPS treated samples: TH (total homogenate), CD11b+ (microglial fraction), ACSA2+ (astrocyte fraction), and CD/AC- fraction (cells remaining after isolation of microglia and astrocytes, referred to as the negative fraction). RT-qPCR was performed using cell-type markers to determine the cell-type enrichment for each of these fractions (Figure 2). 
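The "enriched X-fold over TH" values reported below follow from relative quantification of each target against 18s rRNA in each fraction versus the total homogenate. The study computed these with BioRad CFX software; the sketch below is the equivalent standard 2^-ΔΔCq hand calculation, with invented Cq values for illustration, and is not the authors' code.

```python
# Standard 2^-ddCq relative quantification (illustrative only; the study used
# BioRad CFX software version 3.1). Cq values are invented for demonstration.
def fold_enrichment(cq_target_fraction, cq_18s_fraction, cq_target_th, cq_18s_th):
    """Fold enrichment of a target gene in a fraction relative to the total
    homogenate (TH), with each compartment normalized to 18s rRNA."""
    dcq_fraction = cq_target_fraction - cq_18s_fraction  # normalize fraction to 18s
    dcq_th = cq_target_th - cq_18s_th                    # normalize TH to 18s
    return 2.0 ** -(dcq_fraction - dcq_th)               # 2^-ddCq
```

For example, a target detected six cycles earlier in a fraction than in the TH (with identical 18s Cq values in both) corresponds to a 2^6 = 64-fold enrichment over TH.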
Cd11b/Itgam was used as a marker for microglia and was found to be highly expressed in the CD11b+ fraction (enriched 57-fold over TH, p < 0.0001), lowly expressed in the TH, and absent in the ACSA2+ and CD/AC- fractions (Figure 2A). Glast/Slc1a3 was used as an astrocyte marker and was found to be lowly expressed in the TH, highly expressed in the ACSA2+ fraction under saline conditions (enriched 8-fold over TH, p < 0.0001), and absent in the CD11b+ and CD/AC- fractions (Figure 2B). Neun was used as a neuronal marker and was expressed at high levels in the TH, low levels in the CD/AC- fraction (0.02-fold compared to TH, p < 0.0001), and expression was absent in the CD11b+ and ACSA2+ fractions (Figure 2C). The reason for the lack of neuronal markers in the negative fraction is that adult neurons don’t usually survive the isolation procedure; therefore, the TH taken before isolation contains the most neurons. Tek was used as a marker for endothelial cells and was highly expressed in the CD/AC- fraction (15-fold over TH, p < 0.0001) and lowly expressed in the TH, CD11b+ fraction and ACSA2+ fraction (Figure 2D). Tek expression decreased significantly in the CD/AC- fraction following LPS (0.2-fold, p < 0.0001). Cd68 was used as a marker of activated microglia and was highly expressed in the CD11b+ fraction and increased following LPS treatment (1.8-fold, p < 0.0001) (Figure 2E).\n\nqPCR analysis of cell-type marker expression in the four fractions. A. The microglial fraction was highly enriched for the microglial marker Cd11b, and Cd11b was absent in the astrocyte and negative fraction. B. The astrocyte fraction was highly enriched for the astrocyte marker Glast, and expression of Glast was extremely low or absent in the microglial and negative fractions. C. The total homogenate (TH) had high expression of the neuronal marker Neun. Neun was absent from the microglial and astrocyte fractions and was expressed in low levels in the negative fraction. D. 
The endothelial cell marker Tek was highly expressed in the negative fraction and lowly expressed in the other three fractions. Tek expression decreased with LPS in the negative fraction. E. The activated microglial marker Cd68 was highly expressed in the microglial fraction, and lowly expressed in the other fractions. Cd68 expression increased with LPS in the microglial fraction. Two bars with the same letter are not statistically different; two bars with no letter in common are statistically different (two-way ANOVA with Tukey’s test for multiple comparisons, p<0.05). SAL, saline; LPS, lipopolysaccharide.\n\nqPCR was used to evaluate the expression of the most widely studied Tlrs: Tlr2, Tlr3, Tlr4, and the TLR4 co-receptor, Cd14. Under basal conditions, expression of Tlr2, Tlr4 and Cd14 was primarily localized to microglia, as evidenced by the high SAL-CD11b+ expression compared to SAL-TH expression (expression in Cd11b+ fraction over TH: Tlr2 41-fold, p = 0.001; Tlr4 25-fold, p < 0.0001; Cd14 75-fold, p < 0.0001) (Figure 3). In response to LPS, Tlr2 and Cd14 expression increased in microglia 4-fold (p < 0.0001) and 2.6-fold (p < 0.0001), respectively (Figures 3A and D). Alternatively, Tlr4 expression decreased by approximately 50% in microglia following LPS (p < 0.0001) (Figure 3C). In contrast to Tlr2, Tlr4 and Cd14, Tlr3 was expressed in all fractions, with highest expression in astrocytes (8-fold enrichment over TH, p < 0.0001). In response to LPS, Tlr3 expression increased in astrocytes (1.5-fold, p = 0.0007), but not in any of the other fractions (Figure 3B). No Tlr expression changes were detected in the total homogenate.\n\nFraction localization and LPS expression changes for TLRs and co-receptors measured by qPCR. A. Tlr2 is expressed primarily in the microglial fraction and expression increases with LPS. B. Tlr3 is enriched in both microglia and astrocytes compared to the total homogenate (TH), with higher expression in astrocytes. 
Astrocyte Tlr3 expression increased with LPS. C. Tlr4 expression is highly microglial and decreases following LPS. D. Cd14 is highly enriched in microglia and increases with LPS. Two bars with the same letter are not statistically different; two bars with no letter in common are statistically different (two-way ANOVA with Tukey’s test for multiple comparisons, p<0.05). SAL, saline; LPS, lipopolysaccharide.\n\nTo determine localization and LPS response, mRNA expression of components of the MyD88-dependent pathway (Myd88, Irak1, Irak4, Traf6, and Ikkb), as well as cytokines produced in response to MyD88-pathway activation (Il1b, Il6, Tnf), was measured (Figure 4). While all MyD88-dependent pathway genes were expressed highest in microglia under basal conditions, the expression patterns were variable. Myd88 and Irak4 displayed low basal expression in other fractions, while Irak1, Traf6, and Ikkb were expressed at greater than 50% of the expression level of microglia, suggesting expression in astrocytes and endothelial cells as well (Figures 4A–E). In contrast, the cytokines were almost exclusively expressed in microglia (Figures 4F–H). In response to LPS, Myd88 expression increased in microglia (1.4-fold, p = 0.0062) while Irak4 decreased (0.64-fold, p = 0.0023) (Figures 4A and C). Interestingly, Traf6 increased in astrocytes (2.1-fold, p = 0.0005) and the CD/AC- fraction (2.1-fold, p = 0.004), while Irak1 trended towards an increase in astrocytes (p=0.02 in t-test, but not significant when corrected for multiple comparisons) (Figures 4B and D). Both Il1b and Tnf increased in microglia following LPS administration, with Tnf increasing almost 14-fold (Figures 4F and H). 
In contrast, Il6 expression did not increase in microglia, but trended towards an increase in astrocytes and the CD/AC- fraction (Figure 4G; p = 0.04 astrocytes and p = 0.03 CD/AC-, uncorrected t-test).\n\nFraction localization and LPS expression changes for components and outputs of the MyD88-Dependent Pathway, measured by qPCR. A. MyD88 is most enriched in the microglial fraction and increases with LPS. B. Irak1 is most enriched in the microglial fraction, but present in moderate levels (expression is 50% or more than that of microglia) in all other fractions. Irak1 expression increases in microglia with LPS. C. Irak4 expression is highly enriched in microglia under basal conditions, and decreases in microglia after LPS. D. With saline, Traf6 is enriched in the microglial fraction, but present in moderate levels in all other fractions. With LPS, Traf6 expression increases in the astrocyte fraction and the negative fraction. E. Ikkb expression was highest in microglia, but expressed in moderate levels in all other fractions. No significant expression changes were seen after LPS treatment. F. Expression of Il1b is only detected in microglia and increases with LPS. G. Expression of Il6 is only detected in microglia with saline, but is detected in all other fractions after LPS. H. Tnf was only detected in the microglial fraction and increased following LPS. Two bars with the same letter are not statistically different; two bars with no letter in common are statistically different (two-way ANOVA with Tukey’s test for multiple comparisons, p<0.05). SAL, saline; LPS, lipopolysaccharide; TH, total homogenate.\n\nExpression of TRIF-dependent pathway components (Trif, Traf3, Ikki, Irf3) and outputs (Ifnb, Ccl5, Cxcl10) was measured under basal conditions and in response to LPS to allow comparison with the MyD88-dependent pathway (Figure 5). 
Trif and Irf3 had similar basal expression profiles with highest expression in microglia (Trif 5-fold enriched over TH, p < 0.0001; Irf3 4-fold enriched over TH, p < 0.0001), but Irf3 was enriched in both the astrocyte and negative fractions (3-fold enrichment in astrocyte fraction, p = 0.01; 2-fold enrichment in negative fraction over TH, p = 0.02), while Trif showed only modest expression in all fractions (Figures 5A and D). Traf3 and Ikki were expressed relatively evenly across the fractions under basal conditions, although Ikki trended towards highest expression in astrocytes (p < 0.0001 using 1-way ANOVA for saline group) (Figure 5B and C). Under basal conditions, Ifnb, Ccl5, and Cxcl10 are virtually undetectable in all fractions, except for some Ccl5 expression in microglia and some Cxcl10 expression in microglia and astrocytes (Figures 5E-G). In response to LPS, Trif and Irf3 expression decreased in microglia (Trif 0.73-fold, p = 0.02; Irf3 0.79-fold, p = 0.0138), while Traf3 expression decreased in the TH (0.62-fold, p = 0.03). In contrast, Ikki showed a 23.5-fold increase in expression following LPS (p < 0.0001). Like Ikki, Ifnb and Ccl5 increased in microglia (Ifnb only detected after LPS, Ccl5 37-fold increase, p < 0.0001), while Cxcl10 trended towards an increase in microglia and astrocytes (microglia 39-fold, p = 0.047 uncorrected t-test; astrocytes 25-fold, p = 0.045 uncorrected t-test).\n\nFraction mRNA localization for components and outputs of the TRIF-Dependent Pathway in saline and LPS treated animals. A. Trif expression was highest in the microglial fraction with saline and decreased with LPS. B. Traf3 expression was relatively even across the fractions, with the only significant difference being between the total homogenate (TH) and the astrocyte fraction. There were no significant changes with LPS. C. Ikki expression was not significantly enriched in any fraction with saline, but was highest in astrocytes. 
With LPS, expression increased in the microglial fraction. D. Irf3 expression was highest in the microglial fraction, but was also significantly enriched over the TH in the astrocyte fraction and negative fraction. E. Ifnb was not detected in any fractions with saline, but was expressed in microglia with LPS. F. Ccl5 was expressed at low levels in microglia with saline, but was detected in the TH with LPS and increased in microglia. G. Cxcl10 was expressed at low levels in the microglial and astrocyte fractions with saline, but was detected in all fractions with LPS, although none of the changes were significant. Two bars with the same letter are not statistically different; two bars with no letter in common are statistically different (two-way ANOVA with Tukey’s test for multiple comparisons, p<0.05). SAL, saline; LPS, lipopolysaccharide.\n\nKnockout mice for TLR2, TLR4, and MyD88 were available in the lab and used to test the specificity of antibodies for those proteins (Figure 6). In addition, HEK-293 cell lysates were used for validation because these cells should not express TLR2, TLR3, TLR4, or IL-1β (www.proteinatlas.org)26. Testing with the TLR2 antibody revealed expression in wild-type brain tissue, HEK-293 cells, and TLR2 knockout tissue, suggesting non-specific binding (Figure 6A). The TLR3 antibody showed strong expression (although at a lower molecular weight than expected) in the WT brain tissue and no expression in the 293 cells (Figure 6B). Two TLR4 antibodies were tested and both produced signals in the 293 lysates and in the TLR4 knockout tissue (Figures 6C and D). Furthermore, the TLR4 (76B357.1) antibody appeared to run at a lower molecular weight than anticipated, though there were multiple bands that appeared at different molecular weights in each lysate (Figure 6C). The IL-1β antibody produced a strong signal in the 293 lysates, suggesting it is also non-specific (Figure 6E). 
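The negative-control criterion being applied in these tests can be summarized as a small helper: any signal in knockout tissue or in HEK-293 lysates, or a band at the wrong molecular weight, disqualifies the antibody. This is an illustrative sketch with assumed field names, not the authors' analysis code.

```python
# Illustrative sketch (assumed field names, not the authors' code) of the
# antibody-validation logic: specific only if signal appears in wild-type
# tissue, is absent from both negative controls, and runs at the expected
# molecular weight.
from dataclasses import dataclass

@dataclass
class BlotResult:
    signal_in_wt: bool       # wild-type brain tissue
    signal_in_ko: bool       # knockout tissue (negative control)
    signal_in_hek293: bool   # HEK-293 lysate (negative control)
    at_expected_mw: bool     # band at the predicted molecular weight

def is_specific(r: BlotResult) -> bool:
    return (r.signal_in_wt
            and not r.signal_in_ko
            and not r.signal_in_hek293
            and r.at_expected_mw)
```

Under this criterion, an antibody producing signal in both negative controls (as the TLR2 antibody did) fails outright, and even an antibody that is clean in HEK-293 cells remains questionable if its band runs at an unexpected molecular weight, as noted for TLR3.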
Five MyD88 antibodies from two different companies were tested in MyD88 knockout tissue (Figures 6F–J). All 5 antibodies produced a signal in the knockout tissue, and sc-74532 appeared at the incorrect molecular weight, indicating that none of these antibodies were specific. These tests suggested that most of the antibodies that we tested were non-specific, and made us skeptical of the ones we could not test in knockout tissue. Responses from the antibody vendors indicated that antibodies were never tested against negative controls, only against blocking peptides.\n\nAntibody tests in negative controls (knockout tissue and HEK-293 cells). Each antibody was run once with just knockout tissue if available, and once with knockout tissue and HEK-293 cell lysates. A. The TLR2 antibody produces a signal in HEK-293 cells and TLR2 knockout tissue (KO), neither of which should express TLR2. B. The TLR3 antibody only produced signal in the WT tissue. C and D. Both TLR4 antibodies produced signals in the HEK-293 cells and the TLR4 knockout tissue, neither of which should express TLR4. E. The IL-1β antibody produced a signal in the HEK-293 cells, which should not express IL-1β. F–J. All five MyD88 antibodies produced a signal in the MyD88 knockout tissue.\n\nBecause antibody specificity could not be verified, full replicates of western blots were not performed, and thus not quantified. However, sample western blot images for each antibody are shown in Figure 7 to demonstrate the variety of expression profiles and how different antibodies to the same protein produce different results. First, as with qPCR, cell-type marker expression was evaluated in the lysates using antibodies for NEUN (neuronal marker), GFAP (astrocyte marker), and IBA1 (microglial marker) (Figure 7A). Differences in markers between qPCR and western blots were due to antibody availability and efficacy. 
Consistent with the qPCR data, NEUN was present in high amounts in the control sample and the total homogenate sample, but not in other fractions. GFAP was expressed in the control sample and the TH, but expressed highest in the astrocyte fraction, also consistent with the qPCR results. IBA1 was expressed very strongly in microglia and could be seen in the control and TH after a much longer exposure that left the microglial expression overexposed. These findings are consistent with the qPCR data which shows that expression of microglial markers is over 50x higher in the microglial fraction than the TH.\n\nFraction protein expression in representative western blot images. The number of experiments for each antibody is indicated in parentheses. A. Cell-type specific antibodies verify cell-type enrichment in the fractions. NEUN, a neuronal marker, is expressed in the control sample and the total homogenate (TH) (n=3). GFAP, an astrocytic marker, is expressed in low levels in the TH and higher levels in astrocytes (n=2). IBA1, a microglial marker, is expressed in microglia (n=3). B. Expression for TLR2 appears to be in all fractions except microglia (n=3), while TLR3 is only detected in the TH (n=2), and TLR4 (n=2) and IL-1β (n=3) are detected in all fractions. C. Blotting with three different MyD88 antibodies produced different results (n=3 for each). Sc-11356 suggested MyD88 is only expressed in the total homogenate, while ab2064 and ab2068 show expression in all fractions, with highest expression in microglia and the negative fraction. D. IRAK1 (n=2) shows expression in all fractions except microglia and IRAK4 (n=3) shows expression in all fractions, but highest expression in the TH. E. Two different TRAF6 antibodies produce multiple bands and different results. Based on predicted molecular weight, both antibodies show highest expression in the TH and lowest expression in microglia (n=2 for each). F. 
IKKβ showed expression in the TH and light expression in the negative fraction (n=2). G. IRF3 was detected in all fractions, but highest in the negative fraction. H. Two antibodies were used to evaluate IKKε. Sc-5693 gave signal only in the TH while sc-376114 produced signal in all cell types (n=3 for each). Sal, saline; LPS, lipopolysaccharide; CD, CD11b+; AC, ACSA2+.\n\nDespite the determination that many of the antibodies were non-specific, localization of TLR protein and IL-1β was investigated to see if these results mirrored some of the confusing data in the literature suggesting non-microglial localization (Figure 7B). Even though Tlr2 mRNA expression was predominantly microglial, TLR2 protein was detected in every fraction except microglia. TLR3, which was found to be microglial and astrocytic at the mRNA level, was found exclusively in the TH at the protein level, suggesting neuronal localization. TLR4 and IL-1β were highly expressed in microglia at the mRNA level, but were expressed in all fractions at the protein level. Furthermore, IL-1β expression was lowest in microglia. These data suggest that studies detecting neuronal localization of TLRs, despite microglial mRNA, may be due to non-specific antibodies.\n\nBecause we had so many antibodies that claimed to detect MyD88, this presented an opportunity to compare localization of the same protein using different antibodies (Figure 7C). MyD88 (sc-11356) signal appeared only in the TH, suggesting neuronal expression. In contrast, ab2064 and ab2068 detected signal in all fractions, although highest in microglia and in the negative fraction. MyD88 (sc-8197) gave such strange results, with vastly different molecular weight bands across the fractions, that it was not included. MyD88 (sc-74532) was not used because tests revealed that the signal was at the wrong molecular weight (Figure 6H). 
These results were particularly concerning because every antibody tested in the knockout was non-specific and different antibodies produced different results.\n\nThe rest of the MyD88 pathway produced equally confusing results. Like TLR2, IRAK1 protein was detected in every fraction except microglia. IRAK4 showed highest expression in the TH, but faint expression in other fractions at a slightly lower molecular weight. For TRAF6, we had two antibodies from the same company, sc-8409 (mouse monoclonal) and sc-7221 (rabbit polyclonal). Both antibodies produced several bands (Figure 7E), making it difficult to determine which signal was real. TRAF6 should run at 60 kD, which corresponds to the middle band on the sc-8409 blot and the top band on the sc-7221 blot. Based on these bands, expression appears to be highest in the TH and the astrocyte fraction. IKKβ was primarily localized to the TH (Figure 7F), which is consistent with the neuronal localization seen in immunohistochemistry data from our lab39, but inconsistent with the qPCR data.\n\nProtein expression evaluation was limited for the TRIF-dependent pathway (due to antibody challenges) and only expression of IRF3 and IKKε was determined (Figures 7G and H). IRF3 was expressed in all fractions, but highest expression was in the negative fraction. IKKε was evaluated using two antibodies, sc-5693 (goat polyclonal) and sc-376114 (mouse monoclonal). IKKε sc-5693 is a very weak antibody, but detected some protein in the control sample and total homogenate (the bands in the astrocyte fraction are suspected to be bleed-through). IKKε (sc-376114) is supposed to be detected at 80 kD, which corresponds to the top band; however, the multiple bands raise concerns.\n\nSeveral of the proteins evaluated with western blot have also been investigated in brain tissue using immunohistochemistry with the same or different antibodies. Examples of these are shown in Figure S4. 
Immunohistochemistry reveals highly neuronal expression in tissue for MyD88 (sc-8197), IRAK1 (sc-7883), and TRAF6 (sc-7221). These results are relatively consistent across TLR-pathway antibodies that have been tested in our lab (high neuronal staining). Interestingly, attempts to look at Irf3 mRNA expression via in situ hybridization also suggested neuronal localization (Figure S5). Because we knew that Irf3 mRNA should be in microglia, we tested a microglial marker, Tmem11940, using the same in situ protocol. Tmem119 also failed to express in microglia (Figures S5C and D), suggesting that there may be a permeability issue when targeting glial cells in tissue, resulting in high background staining in neurons. It is worth noting that Irf3, which is more heterogeneous across cell types, showed a much stronger neuronal signal than Tmem119, which should only be in microglia. This suggested to us that the small amount of Irf3 localized in neurons was all we could detect, while the detected neuronal Tmem119 was just background due to increased probe concentrations.\n\n\nDiscussion\n\nTLR signaling is a key component of the innate immune response and it contributes to many brain disorders, including alcohol use disorders. However, the cell-type specific response to immune stimuli in the CNS remains unclear. Identification of the cell-type localization of TLR signaling and immune response within the brain is necessary to elucidate the functional implications of perturbed signaling and to design future studies with in vivo manipulations. To address this, we used isolated glial cells from adult mice that had been administered either saline or LPS. Using four distinct cell-fractions, we evaluated the mRNA expression of TLRs, their downstream signaling molecules, and the transcriptional outputs of their signaling (Table 1; Figure 8). In addition, we tried to profile the protein expression of TLR signaling molecules. 
Unfortunately, we were not able to draw any conclusions about the protein localization, but we have identified reasons why there may be disagreement in the field.\n\nSummary of 24-hour LPS qPCR data. Colors indicate fraction: microglial, teal; astrocyte, yellow; negative fraction, orange. Primary localization with saline (SAL) is determined by fraction enrichment compared to the total homogenate (TH) under saline conditions, with fold-enrichment shown in the next column. Primary localization with lipopolysaccharide (LPS) is determined by fraction enrichment compared to the TH with LPS treatment, with fold change shown in the next column. Change in each fraction with LPS is determined by comparing expression in that fraction with SAL to expression in that fraction with LPS, with direction and fold-change noted. Red indicates increased expression, while blue indicates decreased expression. Only significant differences are noted (p<0.05, two-way ANOVA with Tukey’s multiple comparisons test). CD, Cd11b+; AC, Acsa2+.\n\nMicroglial and astrocyte cell-type enrichment (compared to TH) is shown for TLR pathway genes in saline and LPS treated mice. The font size of each gene indicates fold-enrichment, with larger sizes meaning larger fold-enrichment. Colors on the LPS side denote whether that gene changed in that cell type with LPS treatment. Red indicates increased gene expression while blue denotes decreased gene expression. Figure created using http://servier.com/Powerpoint-image-bank.\n\nAlthough expression of mRNA from the TLR signaling pathway was primarily microglial as expected, there were extremely variable expression profiles within the pathways. This is consistent with gene expression data from adolescent (P17) mice in the RNA-Seq transcriptome database25. While expression of Tlr2, Tlr4, and Cd14 was highly microglial, expression of Tlr3 was highest in astrocytes, where expression increased in response to LPS. 
These findings are consistent with several studies that have shown Tlr3 to be expressed and functional in astrocytes15,16,41,42, as well as a study showing that in vitro LPS increases Tlr3 expression in primary astrocyte cultures while decreasing Tlr3 expression in primary microglial cultures18. TLR3 signals through the TRIF-dependent pathway; however, the components of the TRIF-dependent pathway showed varied expression and LPS responses. This raises the question of how signaling molecules within one pathway could be expressed in different cell types. Although Trif and Irf3 expression is highest in microglia, there is still significant expression in astrocytes, and Ikki expression trends towards being mostly astrocytic under basal conditions. Therefore, it is possible that signaling is occurring in both cell types and that the mRNA expression of the receptor and its signaling molecules is not 1:1 within the cell. Furthermore, microglial Trif and Irf3 expression decrease following LPS, while Ikki expression increases, suggesting they are independently regulated. This is supported by the involvement of Ikki in other LPS-responsive pathways (e.g. JAK/STAT signaling), which could have different cell-type specificity.\n\nIt is surprising that the expression of Ifnb and Ccl5 is exclusively microglial. There is a trend towards increased expression of Cxcl10 in both astrocytes and microglia after LPS, suggesting that the TRIF-dependent pathway is being activated and inducing downstream signaling in both cell types. This raises the question of how expression of Cxcl10 is increased without Ifnb there to induce it. It is possible that interferon-inducible genes are produced in response to IFNβ in microglia, but are induced in a different manner in astrocytes, which lack a macrophage lineage. Astrocytes produce interferon in a TLR3- and TLR4-dependent manner in vitro43,44, but it is possible that astrocytes respond differently in vivo. 
It is also plausible that the inflammatory response is temporally mediated within each cell type, and that increased expression of interferons would be detected in astrocytes if evaluated earlier or later. It is noteworthy that TLR4 also signals through the TRIF-dependent pathway and is highly microglial, so perhaps TLR3 signaling is predominant in astrocytes, while TLR4-induced TRIF-dependent signaling dominates in microglia, leading to the increased Ifnb, Ccl5, and Cxcl10 seen in the CD11b+ fraction.\n\nComponents of the MyD88 pathway were expressed most highly in microglia, which is consistent with expression of Tlr2 and Tlr4. However, some components of this pathway (Irak1, Traf6, Ikkb) were more evenly distributed across the fractions, and Traf6 expression increased in the astrocyte fraction following LPS. The different expression profiles could be because Traf6 can also be activated via TRIF in response to TLR3 or TLR4, and is involved in other pathways like TGF-β signaling. Additionally, Ikkb is involved in every pathway that signals to NF-κB, not just TLR pathways. Consistent with the notion that MyD88-dependent signaling is mostly occurring in microglia, expression of Il1b, Il6, and Tnf was primarily microglial. However, there is a trend towards an increase in Il6 in astrocytes and the CD/AC- negative fraction in response to LPS. There is evidence that Il6 is also activated in response to LPS and TRIF-dependent signaling in cultured astrocytes18,42, so it is possible that TRIF-dependent increases in Il6 expression occur in astrocytes in vivo.\n\nIt is worth noting that although several robust changes were observed in response to LPS within the cell fractions, none were observed in the total homogenate, which is the typical preparation for evaluation of gene expression. 
This highlights the importance of looking at discrete cell types when evaluating immune changes in the brain, particularly because expression could be decreasing in one cell type while increasing in another (as seen with Tlr3). A caveat to collecting cell fractions is that whole brain samples had to be pooled to get enough RNA for RT-qPCR. Because of this, any brain-region specific changes are missed and the statistical power is reduced. Furthermore, the primers used for RT-qPCR are designed to target a single exon-exon junction, so exon-level expression and splice variants may be missed.\n\nAlthough expression of mRNA and protein is not always 1:1, we were unable to find any examples in the literature showing all mRNA changes occurring in one cell type and all protein changes occurring in a different cell type. Because this is what our preliminary data suggested, we sought to test our hypothesis that many copies of mRNA were found inside microglia to ensure rapid translation in response to danger signals (although this hypothesis did not address why protein was found in neurons). After this study, we are just as uncertain, if not more so, about the protein localization of TLR signaling molecules. However, we do have some thoughts as to what is causing this confusion.\n\nThe western blots we performed show highly variable expression profiles, but the most concerning result is that several TLR signaling proteins do not appear to be expressed in microglia (TLR2, TLR3, MyD88 sc-11356, IRAK1, IKKβ), even though all of them show microglial mRNA expression and most show highest expression in microglia. However, after some quality control steps, we are unable to trust any of the protein results. For the antibodies we could test, all but one showed signal in a negative control. For the antibodies we were unable to test on null mutant tissue, we erred on the side of caution and assumed they were also non-specific. 
Furthermore, different antibodies to the same protein gave very different expression profiles (Figure 7C), reaffirming that the antibodies cannot be trusted. We suspect that antibody specificity is one of the major reasons for disagreement in the field. Even though other researchers have told us that TLR antibodies are notoriously non-specific, they continue to be used in publications and these results continue to be cited as accurate. Even resources like the Human Protein Atlas use antibodies to determine cell-type localization26. For example, the data for MyD88 in the Human Protein Atlas suggest that the protein is highly neuronal, but RNA expression is mostly glial. Interestingly, the antibody they use is sc-11356, which we found to be non-specific (Figure 6F). They do provide information about the antibody validation, but they base the validation on comparison of staining in one tissue type (colon) to the literature.\n\nIt is not surprising that antibodies are non-specific, given that most manufacturers only validate them in transfected cell lines or with blocking peptides. Some companies, when asked, could not even suggest a negative control and claimed it would be too difficult to test all antibodies on knockout tissue. Unless this practice changes, every lab needs to test the antibody in its own hands with positive and negative controls to be confident the results are accurate. Due to the difficulty of testing several antibodies for each protein, other approaches may be better suited for looking at several proteins at once. Proteomic approaches in glial cells have revealed protein changes that more closely match what is expected45. Alternatively, construction of transgenic mice with GFP-tagged expression of TLR genes may be useful to show the CNS cell-type localization.\n\nIn addition to our western blots, our immunohistochemistry and in situ results suggest that glial cells are less likely to be permeable to probes or antibodies. 
Therefore, more stringent permeabilization steps may be needed to detect intracellular molecules in glia. Although we attempted different permeabilization steps, we continued to see neuronal localization.\n\nIn conclusion, this study confirms and expands on mRNA cell-type localization of TLR signaling molecules and evaluates cell-type specific increases following LPS administration. This study was unable to reliably determine the protein localization of TLR signaling molecules, and we suggest this is due to non-specific antibodies and problems with permeabilization. We suggest that future studies evaluating cell-type expression take these results into account and that perhaps other non-antibody approaches be used to determine the protein localization of this important pathway in the CNS.\n\n\nData availability\n\nDataset 1. Dataset containing six files as follows:\n\nIHC localization images: This folder contains all immunohistochemistry images for MyD88, Irak1, and Traf6. The MyD88 folder contains images from 4 different MyD88 antibodies.\n\nImages summary: Powerpoint file containing images for neuronal staining overlayed with each antibody\n\nIrak1\n\nIrak1 and NeuN\n\n■ TIF images for Irak1, NeuN, and merged\n\nMyD88\n\n■ Table summarizing staining in human brain overlayed with NeuN (JPG file)\n\n■ Powerpoint file summarizing MyD88 staining in primary neuronal cultures\n\n■ MyD88 Abcam 2064 antibody: TIF files for MyD88, Neun, and merge\n\n■ MyD88 F-19 antibody\n\n• MyD88 and GFAP: TIF or JPG files showing staining for MyD88, GFAP (astrocyte marker), and merge\n\n• Myd88 and Iba1: TIF, PSD, and JPG files showing staining for MyD88, Iba1 (microglial marker) and merge\n\n• MyD88 and NeuN: TIF and JPG files showing staining for MyD88, NeuN (neuronal marker), and merge\n\n■ MyD88 S.Cruz Full length (HFL-296): TIF files for MyD88, NeuN and merge\n\nISH images: This folder contains in situ images and information\n\nConfocal images: Contains confocal images as TIF files 
showing IRF3, IBA1 and merge\n\nSummary of fluorescent microscope images: Contains TIF images and Word documents with representative images for in situs (probe, protein, and merged)\n\nISH testing summary: Excel file containing all information about different in situ tests\n\nKnockout animal qPCR: This folder contains Bio-Rad CFX data files and the gene study as well as a GraphPad file of data and statistical analysis\n\nSal-LPS qPCR data: This folder contains all Bio-Rad CFX data files as well as an Excel file showing data from the gene study (CFX analysis for multiple plates). It also contains a GraphPad file of all data and statistical tests.\n\nSal-LPS western blots: This folder contains all raw western blot images used in Figure 7 (as TIF files).\n\nValidation western blots: This folder contains all raw western blot images used in Figure 6 (as TIF files).\n\ndoi: 10.5256/f1000research.12036.d16839646",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nFunding for this work was provided by the National Institute on Alcohol Abuse and Alcoholism (grants AA024654 and AA013520).\n\n\nAcknowledgements\n\nThe authors thank Olga Ponomareva, Jillian Benavidez, Mendy Black, and Adriana DaCosta for their technical assistance.\n\n\nSupplementary material\n\nFigure S1: Schematic of study methods. The mice were divided into three subgroups, each containing 5 mice per treatment. Mice were injected with either saline or 2 mg/kg LPS and sacrificed 24 hours later. The whole brain was removed and tissue was pooled within each group by treatment, yielding 3 biological replicates per treatment. 1% of minced tissue was taken as total homogenate and the remaining tissue was used to isolate microglia (Cd11b+) and astrocytes (Acsa2+). The remaining cells (Cd11b−/Acsa2−) were also collected. 10% of each sample was used for RNA isolation and RT-qPCR and 90% was used for protein isolation and western blots.\n\nClick here to access the data.\n\nFigure S2: Weights and water consumption after LPS. Data verifying the effect of LPS treatment. A. All 3 LPS groups showed decreased weight following injection; data points are averages of 5 mice. B. All 3 LPS groups decreased water consumption following LPS injection; data points are averages of 5 mice.\n\nClick here to access the data.\n\nFigure S3: Knockout mouse qPCR. RT-qPCR on knockout mouse brain compared to wild type (C57BL/6J). A. TLR4 knockout tissue showed no Tlr4 mRNA expression. B. TLR2 knockout tissue showed an increase in Tlr2 expression. C. MyD88 knockout tissue showed a decrease in Myd88 expression. * p value < 0.05, 2-tailed t-test, n=10 per group.\n\nClick here to access the data.\n\nFigure S4: Immunohistochemistry for MyD88, IRAK1, and TRAF6. 
Representative images from immunohistochemistry evaluation of MyD88 (n=10), IRAK1 (n=4), and TRAF6 (n=5) expression in the mouse cortex revealed co-localization with the neuronal marker NEUN. Additional images are available in the source data files.\n\nClick here to access the data.\n\nFigure S5: In situ hybridization for Irf3 and microglial marker Tmem119. In situ hybridization compared mRNA expression with cell-type markers. A. Irf3 mRNA shows little overlap with the microglial marker IBA1 (3 biological replicates with at least 3 technical replicates each). B. Irf3 mRNA shows high overlap with the neuronal marker NEUN (2 biological replicates with 2 technical replicates each). C. Tmem119, a microglial marker, shows little overlap with IBA1 (2 biological replicates with 2 technical replicates each). D. Tmem119 shows overlap with the neuronal marker NEUN (1 biological replicate).\n\nClick here to access the data.\n\nTable S1: TaqMan gene expression assays used for RT-qPCR.\n\nClick here to access the data.\n\nTable S2: Antibodies used for western blots.\n\nClick here to access the data.\n\nTable S3: Immunohistochemistry and in situ antibodies/probes. A. Antibodies used for immunohistochemistry, results in Figure S4. B. Probes used for in situ hybridization, results in Figure S5. C. Antibodies used for in situ hybridization, results in Figure S5.\n\nClick here to access the data.\n\nTable S4: Results of statistical tests. This table contains all input data and statistical results for the qPCR data presented in the manuscript. Data in this table is extracted from the GraphPad file included in the source data.\n\nClick here to access the data.\n\n\nReferences\n\nKawai T, Akira S: Signaling to NF-kappaB by Toll-like receptors. Trends Mol Med. 2007; 13(11): 460–469. PubMed Abstract | Publisher Full Text\n\nTakeda K, Akira S: TLR signaling pathways. Semin Immunol. 2004; 16(1): 3–9. 
PubMed Abstract | Publisher Full Text\n\nTakeda K, Akira S: Toll-like receptors in innate immunity. Int Immunol. 2005; 17(1): 1–14. PubMed Abstract | Publisher Full Text\n\nGarcía Bueno B, Caso JR, Madrigal JL, et al.: Innate immune receptor Toll-like receptor 4 signalling in neuropsychiatric diseases. Neurosci Biobehav Rev. 2016; 64: 134–147. PubMed Abstract | Publisher Full Text\n\nGasiorowski K, Brokos B, Echeverria V, et al.: RAGE-TLR Crosstalk Sustains Chronic Inflammation in Neurodegeneration. Mol Neurobiol. 2017; 12: 593. PubMed Abstract | Publisher Full Text\n\nGambuzza ME, Sofo V, Salmeri FM, et al.: Toll-like receptors in Alzheimer's disease: a therapeutic perspective. CNS Neurol Disord Drug Targets. 2014; 13(9): 1542–1558. PubMed Abstract | Publisher Full Text\n\nGesuete R, Kohama SG, Stenzel-Poore MP: Toll-like receptors and ischemic brain injury. J Neuropathol Exp Neurol. 2014; 73(5): 378–386. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCrews FT, Walter TJ, Coleman LG Jr, et al.: Toll-like receptor signaling and stages of addiction. Psychopharmacology (Berl). 2017; 234(9–10): 1483–1498. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHanke ML, Kielian T: Toll-like receptors in health and disease in the brain: mechanisms and therapeutic potential. Clin Sci (Lond). 2011; 121(9): 367–387. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBsibsi M, Ravid R, Gveric D, et al.: Broad expression of Toll-like receptors in the human central nervous system. J Neuropathol Exp Neurol. 2002; 61(11): 1013–1021. PubMed Abstract | Publisher Full Text\n\nLehnardt S, Lachance C, Patrizi S, et al.: The toll-like receptor TLR4 is necessary for lipopolysaccharide-induced oligodendrocyte injury in the CNS. J Neurosci. 2002; 22(7): 2478–2486. PubMed Abstract\n\nOlson JK, Miller SD: Microglia initiate central nervous system innate and adaptive immune responses through multiple TLRs. J Immunol. 2004; 173(6): 3916–3924. 
PubMed Abstract | Publisher Full Text\n\nKielian T: Toll-like receptors in central nervous system glial inflammation and homeostasis. J Neurosci Res. 2006; 83(5): 711–730. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGorina R, Font-Nieves M, Márquez-Kisinousky L, et al.: Astrocyte TLR4 activation induces a proinflammatory environment through the interplay between MyD88-dependent NFκB signaling, MAPK, and Jak1/Stat1 pathways. Glia. 2011; 59(2): 242–255. PubMed Abstract | Publisher Full Text\n\nBorysiewicz E, Doppalapudi S, Kirschman LT, et al.: TLR3 ligation protects human astrocytes against oxidative stress. J Neuroimmunol. 2013; 255(1–2): 54–59. PubMed Abstract | Publisher Full Text\n\nPark C, Lee S, Cho IH, et al.: TLR3-mediated signal induces proinflammatory cytokine and chemokine gene expression in astrocytes: Differential signaling mechanisms of TLR3-induced IP-10 and IL-8 gene expression. Glia. 2006; 53(3): 248–256. PubMed Abstract | Publisher Full Text\n\nBlanco AM, Vallés SL, Pascual M, et al.: Involvement of TLR4/type I IL-1 receptor signaling in the induction of inflammatory mediators and cell death induced by ethanol in cultured astrocytes. J Immunol. 2005; 175(10): 6893–6899. PubMed Abstract | Publisher Full Text\n\nMarinelli C, Di Liddo R, Facci L, et al.: Ligand engagement of Toll-like receptors regulates their expression in cortical microglia and astrocytes. J Neuroinflammation. 2015; 12: 244. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLiu XJ, Liu T, Chen G, et al.: TLR signaling adaptor protein MyD88 in primary sensory neurons contributes to persistent inflammatory and neuropathic pain and neuroinflammation. Sci Rep. 2016; 6: 28188. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAurelian L, Warnock KT, Balan I, et al.: TLR4 signaling in VTA dopaminergic neurons regulates impulsivity through tyrosine hydroxylase modulation. Transl Psychiatry. 2016; 6: e815. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nPeltier DC, Simms A, Farmer JR, et al.: Human neuronal cells possess functional cytoplasmic and TLR-mediated innate immune pathways influenced by phosphatidylinositol-3 kinase signaling. J Immunol. 2010; 184(12): 7010–7021. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPréhaud C, Mégret F, Lafage M, et al.: Virus infection switches TLR-3-positive human neurons to become strong producers of beta interferon. J Virol. 2005; 79(20): 12893–12904. PubMed Abstract | Publisher Full Text | Free Full Text\n\nQi J, Buzas K, Fan H, et al.: Painful pathways induced by TLR stimulation of dorsal root ganglion neurons. J Immunol. 2011; 186(11): 6417–6426. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCrews FT, Qin L, Sheedy D, et al.: High mobility group box 1/toll-like receptor danger signaling increases brain neuroimmune activation in alcohol dependence. Biol Psychiatry. 2013; 73(7): 602–612. PubMed Abstract | Publisher Full Text | Free Full Text\n\nZhang Y, Chen K, Solan SA, et al.: An RNA-sequencing transcriptome and splicing database of glia, neurons, and vascular cells of the cerebral cortex. J Neurosci. 2014; 34(36): 11929–11947. PubMed Abstract | Publisher Full Text | Free Full Text\n\nUhlén M, Fagerberg L, Hallström BM, et al.: Proteomics. Tissue-based map of the human proteome. Science. 2015; 347(6220): 1260419. PubMed Abstract | Publisher Full Text\n\nLawrimore CJ, Crews FT: Ethanol, TLR3, and TLR4 agonists have unique innate immune responses in neuron-like SH-SY5Y and microglia-like BV2. Alcohol Clin Exp Res. 2017; 41(5): 939–954. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRosenberger K, Derkow K, Dembny P, et al.: The impact of single and pairwise Toll-like receptor activation on neuroinflammation and neurodegeneration. J Neuroinflammation. 2014; 11: 166. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nJune HL, Lie J, Warnock KT, et al.: CRF-amplified neuronal TLR4/MCP-1 signaling regulates alcohol self-administration. Neuropsychopharmacology. 2015; 40(6): 1549–1559. PubMed Abstract | Publisher Full Text | Free Full Text\n\nButovsky O, Jedrychowski MP, Moore CS, et al.: Identification of a unique TGF-β-dependent molecular and functional signature in microglia. Nat Neurosci. 2014; 17(1): 131–143. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBéchade C, Colasse S, Diana MA, et al.: NOS2 expression is restricted to neurons in the healthy brain but is triggered in microglia upon inflammation. Glia. 2014; 62(6): 956–963. PubMed Abstract | Publisher Full Text\n\nBlednov YA, Black M, Chernis J, et al.: Ethanol Consumption in Mice Lacking CD14, TLR2, TLR4, or MyD88. Alcohol Clin Exp Res. 2017; 41(3): 516–530. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWerts C, Tapping RI, Mathison JC, et al.: Leptospiral lipopolysaccharide activates cells through a TLR2-dependent mechanism. Nat Immunol. 2001; 2(4): 346–352. PubMed Abstract | Publisher Full Text\n\nPoltorak A, He X, Smirnova I, et al.: Defective LPS signaling in C3H/HeJ and C57BL/10ScCr mice: mutations in Tlr4 gene. Science. 1998; 282(5396): 2085–2088. PubMed Abstract | Publisher Full Text\n\nHou B, Reizis B, DeFranco AL: Toll-like receptors activate innate and adaptive immunity by using dendritic cell-intrinsic and -extrinsic mechanisms. Immunity. 2008; 29(2): 272–282. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWooten RM, Ma Y, Yoder RA, et al.: Toll-like receptor 2 is required for innate, but not acquired, host defense to Borrelia burgdorferi. J Immunol. 2002; 168(1): 348–355. PubMed Abstract | Publisher Full Text\n\nNikodemova M, Watters JJ: Efficient isolation of live microglia with preserved phenotypes from adult mouse brain. J Neuroinflammation. 2012; 9: 147. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nBustin SA, Benes V, Garson JA, et al.: The MIQE guidelines: minimum information for publication of quantitative real-time PCR experiments. Clin Chem. 2009; 55(4): 611–622. PubMed Abstract | Publisher Full Text\n\nTruitt JM, Blednov YA, Benavidez JM, et al.: Inhibition of IKKβ Reduces Ethanol Consumption in C57BL/6J Mice. eNeuro. 2016; 3(5): pii: ENEURO.0256-16.2016. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBennett ML, Bennett FC, Liddelow SA, et al.: New tools for studying microglia in the mouse and human CNS. Proc Natl Acad Sci U S A. 2016; 113(12): E1738–46. PubMed Abstract | Publisher Full Text | Free Full Text\n\nScumpia PO, Kelly KM, Reeves WH, et al.: Double-stranded RNA signals antiviral and inflammatory programs and dysfunctional glutamate transport in TLR3-expressing astrocytes. Glia. 2005; 52(2): 153–162. PubMed Abstract | Publisher Full Text\n\nJack CS, Arbour N, Manusow J, et al.: TLR signaling tailors innate immune responses in human microglia and astrocytes. J Immunol. 2005; 175(7): 4320–4330. PubMed Abstract | Publisher Full Text\n\nReinert LS, Harder L, Holm CK, et al.: TLR3 deficiency renders astrocytes permissive to herpes simplex virus infection and facilitates establishment of CNS infection in mice. J Clin Invest. 2012; 122(4): 1368–1376. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPascual-Lucas M, Fernandez-Lizarbe S, Montesinos J, et al.: LPS or ethanol triggers clathrin- and rafts/caveolae-dependent endocytosis of TLR4 in cortical astrocytes. J Neurochem. 2014; 129(3): 448–462. PubMed Abstract | Publisher Full Text\n\nBell-Temin H, Zhang P, Chaput D, et al.: Quantitative proteomic characterization of ethanol-responsive pathways in rat microglial cells. J Proteome Res. 2013; 12(5): 2067–2077. 
PubMed Abstract | Publisher Full Text\n\nMcCarthy GM, Bridges CR, Blednov YA, et al.: Dataset 1 in: CNS cell-type localization and LPS response of TLR signaling pathways. F1000Research. 2017. Data Source"
}
|
[
{
"id": "24588",
"date": "16 Aug 2017",
"name": "Eitan Okun",
"expertise": [
"Neuroimmunology"
],
"suggestion": "Approved",
"report": "Approved\n\nIn this paper, Gizelle, Courtney, Yuri and Adron have undertaken the difficult task of determining the expression pattern of Toll-like receptors (TLRs), their adapter proteins and downstream effector proteins in CNS cells. This is an important and timely work which emphasizes the importance of establishing the appropriate tools in order to advance an entire field.\nAfter reading the manuscript, I'd like to raise the following points:\nTechnical comment: under \"tissue harvest and microglial isolation\", it is described that the kit used for tissue dissociation uses papain. Is it possible that this type of dissociation degraded some of the extracellular epitopes and altered their expression in subsequent western blots? Perhaps it should be cautioned that different enzymes or dissociation methods can result in different outcomes.\n\nUnder \"protein isolation and western blots\", it is mentioned that appropriate loading controls could not be found for all cell types. This is understandable, as specialized CNS cells contain different levels of proteins such as beta-actin, alpha tubulin, beta tubulin and others. Please indicate which proteins were tested in your hands.\n\nWith respect to the probes used for in-situ hybridization, it is entirely possible that different cells will express different splice variants of the genes. 
Has that been taken into consideration?\n\nOn page 6, results section, it is unclear to me why there is a lack of NeuN expression in the CD11-/ACSA2- fraction (Figure 2C). Should it not contain neuronal cells?\n\nIn the 2nd results paragraph, the fold change in expression of the different genes is indicated. I think it should be further stressed that, although the fold changes are important to note, TLRs 2, 3 and 4 are nevertheless expressed, even if at lower levels, in non-microglial cells.\n\nOn page 7, under \"antibody validation in knockout tissue and HEK-293 cells\": It is debatable whether post-translational modification can alter the size of a protein in a gel. Also, a KO tissue which lacks the target of the antibody can still express a target which will be second-in-highest affinity in the absence of the original target. Therefore, overexpressing the target of interest (TLR2, 3, 4 etc. fused to GFP) will determine clearly whether the commercial antibody is capable of binding its target. This way, if the antibody cannot bind its target TLR but GFP can be detected, this is definite proof that the antibody fails to detect its target under these conditions.\n\nTable S2 should also include the application declared for the antibody by the manufacturer, whether it is denaturing WB, IHC, ELISA etc.\n\nOn the same note, although this information is often missing from datasheets of commercial antibodies, the epitope used to vaccinate against should be indicated.\n\nOn page 13, the issue of a microglia-specific marker, Tmem119, not being detected on microglial cells using the described protocol is of concern. I understand that 0.1% Tween-20 was used. Is it possible that a harsher detergent at a higher concentration would solve this issue? For example, 0.5–1% Triton X-100?\n\nIn the discussion, the issue of adapter-protein expression in various cells is discussed. 
Despite the fact that MyD88 is a signaling mediator of the IL1R family, which is also reported to be abundantly expressed in the brain, it is not discussed in the paper. Therefore, the expression of certain signaling molecules could be the result of pathways unrelated to TLRs but rather of other immune receptors such as IL1R.\n\nIn the last sentence on page 15, the authors rightfully and correctly indicated that the current analysis cannot take into consideration any brain-region-specific expression, and that the probes used cannot address alterations in splice variants. This by itself could support the conclusion that a similar effort should concentrate on a specific TLR at a time, studying the different splice variants in detail throughout the brain.\n\nMinor points:\nPage 4, left column, the end of the second paragraph - a reference has to be added following \"expression in neurons\".\n\nSame page, right column, lines 3-4, immune activation is exemplified by LPS. I think it should be stated that it is TLR4-specific immune activation, as there are numerous ways to activate the immune response by different pathways.\n\nPlease abbreviate \"MS\" on page 5, left column: \"eluted using MS columns\".\n\nSame page, right column, please abbreviate \"NBF\": \"in 10% NBF overnight..\"\n\nOn page 16, first sentence, I think it is better to replace 1:1 with 'correlative', as 1:1 relates to quantifiable ratios, whereas the agreement between mRNA and protein is described as trends.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Partly\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? 
Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
},
{
"id": "26347",
"date": "27 Sep 2017",
"name": "Miles Herkenham",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nI recommend adding text to the Discussion to include supporting data for the conclusion that TLR4, TLR2, and their related signaling components shown in Figure 1 are localized primarily to microglia, secondarily to vascular endothelia, and least in neurons. There are several sources of supporting data that should be described and cited. First, and the most expansive, is the RNA-Seq database available at the Barres Lab Stanford RNA-Seq Transcriptome website at https://web.stanford.edu/group/barres_lab/brain_rnaseq.html. The database is derived from the work published by Zhang et al., 2014 (ref 25). The interactive site provides gene expression levels of most genes of interest, including the TLRs and all the signaling components shown in Figure 1. In agreement with the data in the McCarthy study, Tlr4 mRNA expression levels, provided in values of fragments per kilobase of exon per million reads mapped (FPKM), are highest in microglia (3), second in endothelia (2.3), and very low in all other cell types (~ 0.3-0.4 on astrocytes and OPCs, and ~ 0.1 on neurons). Tlr2 mRNA is abundantly and almost exclusively expressed in microglia (>150). All of the downstream co-factors and pathways are predominantly expressed in microglia, with the exception of the side-chain pathway via TRAF3 and IRF3, which is expressed in all the cell types. The cytokine mRNAs measured by McCarthy, and others such as IL-1b and TNFα, are almost exclusively expressed in microglia. 
Finally, for TLR3, the FPKM values are astrocytes (13), endothelia (9), microglia (2.5), and neurons (< 1.0).\n\nNext, the authors should note that earlier in situ hybridization studies showed Tlr4 mRNA expressed in a variety of blood-brain barrier cell types including choroid plexus epithelial cells, meningeal cells, vascular endothelial cells (Laflamme and Rivest, 2001, DOI:10.1096/fj.00-0339com; Chakravarty and Herkenham, 2005, DOI: 10.1523/JNEUROSCI.4268-04.2005), and within the brain, microglial cells (Chakravarty, 2005). Supporting the microglial localization, a recent article used the very sensitive and specific RNAscope (ACD) colorimetric methodology to co-localize Tlr4 mRNA and Iba1 mRNA (Kashima, 2017, DOI:10.1073/pnas.1705974114) and showed that in the nucleus accumbens, the microglia-TLR4 double-labeled population was the majority (~80%) of all Tlr4 mRNA-positive cells. Finally, the early in situ hybridization autoradiographic work also showed that the TLR4 mRNA expression level was downregulated by LPS at 3 h post-injection (Laflamme and Rivest, 2001), in support of McCarthy’s findings with qPCR.\nA TLR4-bearing cell population overlooked by McCarthy et al. in terms of LPS responsiveness is endothelial cells, which strongly express Tlr4 mRNA. These cells are the major first responders to LPS (Serrats, 2010, DOI: 10.1016/j.neuron.2009.11.032). It is not clear whether this population survived the separation procedure used by McCarthy.\n\nThe LPS stimulation data in the report are supported by other published data, but the experiment was not performed optimally because the 24-h survival time selected was too long. There is an early response to LPS (0.5–2 h) that is chiefly mediated by cells at the blood-brain barrier, and later responses, indicated by induction of IkB and cytokine mRNAs within microglia (Quan, 1997, PMID: 9380746; Quan, 1998 , PMID: 10378870) might be prostaglandin- or cytokine-mediated (Serrats, 2010). 
Note that LPS does not significantly cross the blood brain barrier. At 12 h post-LPS, most responses have died down, and at 24 h, the initial LPS-mediated effects have dropped to near zero (Quan, 1998; Serrats, 2010).\n\nNevertheless, the inclusion of LPS data raises another important observation that should be addressed: neurons are unresponsive to LPS. We reported that primary cultured mouse cortical and hippocampal neurons do not show activation of NF-kB pathways by LPS, indicating that they do not possess functional TLR4 (Listwak, 2013, DOI: 10.1016/j.neuroscience.2013.07.013). Microglia, in contrast, have a massive response to LPS.\nMcCarthy et al. importantly address the lack of specificity of antibodies. That conclusion is also supported by our work on the NF-kB pathways. We attacked that problem in our study showing that many of the published antibodies for the NF-kB subunits p65 and p50 and their activated (phosphorylated) forms were nonspecific (Herkenham, 2011, DOI: 10.1186/1742-2094-8-141). The use of these antibodies had supported the claim in the literature that NF-kB is active in neurons. Thus, we heartily agree that antibodies are not specific in many studies of TLR pathways, especially for immunohistochemistry and especially when epitope levels are low. We endorse the increased demand for rigor in use of antibodies imposed by many journals and by the NIH grant review process. It would be helpful if McCarthy could mention these new demands for rigor, which will raise awareness of the burdens caused by improper antibody use.\nMy main objection to the McCarthy study is the poor quality of their in situ hybridization based on use of locked nucleic acid (LNA) probes and digoxigenin detection. I would recommend that future work be done with the new highly sensitive and selective probes from ViewRNA (Affymetrix) or RNAscope (ACD).\n\nIs the work clearly and accurately presented and does it cite the current literature? 
Partly\n\nIs the study design appropriate and is the work technically sound? Partly\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nNot applicable\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-1144
|
https://f1000research.com/articles/6-1140/v1
|
18 Jul 17
|
{
"type": "Case Report",
"title": "Case Report: An incidentaloma that catches your eye - adrenal myelolipoma",
"authors": [
"Rosanna D'Addosio",
"Joselyn Rojas",
"Valmore Bermúdez",
"Flor Ledesma",
"Kyle Hoedebecke",
"Rosanna D'Addosio",
"Joselyn Rojas",
"Valmore Bermúdez",
"Flor Ledesma"
],
"abstract": "Background: Adrenal incidentaloma refers to the incidental finding of a tumor in the adrenal gland, where nonfunctional forms are the most common variant. Myelolipoma is a rare (0.08-0.4%) occurrence characterized by adipose and hematopoietic tissue. The aim of this case report is to describe the diagnosis and appropriate management of a myelolipoma in an asymptomatic patient, which was originally considered an incidental hepatic hemangioma prior to being identified as a giant adrenal adenoma. Case description: The patient was a 54 year old obese female with a recent diagnosis of diabetes type II and dyslipidemia with recent ultrasound imaging suggestive of a hepatic hemangioma. An MRI was performed revealing a 7x6cm lesion in the right adrenal area indicating a giant adrenal adenoma. An adrenalectomy was performed without complications. The pathology report identified a myelolipoma. Discussion: The incidence of myelolipoma has recently increased due to advances in radiological techniques. Its etiology is unclear and the most accepted theories support a myeloid cell metaplasia in the embryonic stage as a result of stress, infections, or adrenocorticotropic hormone or erythropoietin stimulus. Contributing components may include bone morphogenetic protein 2 and β-catenin, as well as the presence of the chromosomal translocation (3, 21) (q25; p11). Despite its benign nature, the association with other adrenal lipomas must be ruled out. A biochemical evaluation is essential for detecting subclinical states, such as Cushing syndrome and pheochromocytoma. Conclusion: Adrenal myelolipomas are rare benign tumors that are generally asymptomatic. Uncertainty still exists surrounding their etiology. Surgical management depends on hormone production, tumor size, high risk features on imaging and patient consent. Additional information is needed to better understand myelolipomas, their etiology, and clinical management. 
Incidentalomas may confuse the physician and patient. Ensuring proper multidisciplinary management based on the clinical guidelines of endocrinology allowed a satisfactory resolution of this case.",
"keywords": [
"Myelolipoma adrenal",
"adrenal incidentaloma",
"benign adrenal tumor"
],
"content": "Introduction\n\nThe term incidentaloma is derived from “incidental tumor,” describing a mass discovered on imaging by pure chance1. When discussing adrenal incidentalomas (AIs), this refers to a finding of a visible adrenal mass greater than 1cm in diameter found on imaging performed for other medical causes2. In general, adrenal tumors are detected in 0.4% of abdominal ultrasounds and occur with ten times greater frequency in those with a positive cancer history3. Exclusion criteria for AIs include patients who present with manifestations of adrenal dysfunction2 and those with extra-adrenal cancers in the process of stratification4.\n\nAdvances in modern diagnostic methods have produced a greater prevalence of AIs, especially due to advances in CT and MRI technology3. Incidental adrenal masses are found in 2–4% of abdominal CT scans and the frequency increases in correlation with the patient’s age - adding 0.2% in the third decade of life, up to 7% in those greater than 70 years old4. Among these, non-functional adenoma remains the most frequent (60–85%), while a minority present as functional adenomas (5–16%)5. Of functional masses, 6% consist of pheochromocytomas, 5% are subclinical Cushing Syndrome, 5% are adrenal carcinoma, 2% prove to be a metastasis, and the rest belong to other etiologies, such as myelolipomas, hematomas, cysts, or lymphomas6,7.\n\nNevertheless, the retroperitoneal location increases the difficulty of detection during a standard physical exam. This often leads to the late diagnosis of such tumors only when clinical systemic manifestation is present - in the case of functional incidentalomas - or the compromise of the adjacent tissues secondary to abnormal gland growth8. According to endocrinology guidelines, both hormonal and radiographic evaluation must be performed in order to rule out subclinical states9,10. In general, masses ≥4cm are removed surgically, independent of functionality. 
Furthermore, all functional tumors and those with malignant characteristics undergo an adrenalectomy under endocrinologic supervision. Non-functional adenomas, small myelolipomas, and benign asymptomatic cysts do not require surgical intervention10.\n\nWith this in mind, providers must remember two primary questions, first asking “Is the mass hormonally active?” as this differentiates between functional and nonfunctional masses5,6. Additionally, asking “Are there malignant characteristics?” proves equally important. This is determined by radiologic imaging that looks for heterogeneity, poorly delimited borders, the presence of necrosis, hemorrhage, calcification, or an attenuation coefficient greater than 20 Hounsfield Units7.\n\nThis case report describes a giant right upper quadrant incidentaloma in an asymptomatic patient that was initially thought to be a hepatic hemangioma, due to its size and location, which was later confirmed to be an adrenal tumor.\n\n\nCase Report\n\nA 54-year-old asymptomatic female patient was seen by her family physician in Maracaibo, Venezuela, for her annual health exam in January 2014 in a primary care center. She had no complaints, except for recent unintended weight gain. Her past medical and surgical history is notable for a left breast lumpectomy (1973), a salpingectomy (1994), a hysterectomy without oophorectomy for CIN III (2005), and a left unilateral oophorectomy for ovarian torsion (2007). The patient used no medications, had no known allergies, and denied tobacco, alcohol, or drug use. The patient is monogamous and happily married. Her family history is notable for a sister who died of Hodgkin Lymphoma.\n\nOn physical exam, the patient was afebrile with normal vital signs. She weighed 92.5 kg, was 1.74 meters tall, and had a BMI of 30.6. She appeared well hydrated with moist mucous membranes. 
She had an unremarkable exam - no findings of violaceous striae, acanthosis, acrochordons, or signs of virilization.\n\nLaboratory results showed a normal complete blood count, mixed dyslipidemia, fasting blood glucose levels >125 mg/dl (normal range, 70–100 mg/dl) on more than two occasions, and HOMA1-IR index >2.5 (normal index, ≤ 2.5) (Table 1); meeting the diagnostic criteria for type 2 diabetes mellitus (DM2) and metabolic syndrome. Initial recommendations were lifestyle changes, including 30-minute walks five days a week, and a nutritionist consult. Additionally, pharmacotherapy, sitagliptin/metformin (Janumet®, 50/1000mg) 1 tab daily, ezetimibe/simvastatin (Vytorin®, 10/40 mg) 1 tab daily, gemfibrozil (Lipontal®, 900 mg) 1 tab daily, and orlistat (Xerogras®, 120 mg) 1 cap daily, was initiated.\n\n*HOMA-IR = [basal insulin (IU/ml) × fasting glucose (mg/dL)/405]\n\nSimultaneously, a right upper quadrant ultrasound was ordered showing slight hepatic steatosis, as well as a round space occupying lesion with well-defined hyperechoic borders measuring 5.6×7.3cm in segment V of the right lobe suggestive of a hemangioma. Of note, bilateral non-obstructive nephrolithiasis was observed (Figure 1). Due to these findings, the patient was referred to a local hospital diagnostic center for imaging studies, and a triphasic hepatic MRI was performed as part of an additional workup. This identified a 7.0×6.0cm right adrenal space occupying lesion suggestive of a large adrenal adenoma (Figure 2). A hormone profile was performed with normal results - classifying this mass as a non-functional adenoma. A lack of reagents in local laboratories required the patient to travel to the Avila Clinic in Caracas, the capital of Venezuela (Table 2). 
The workup was completed with a serologic evaluation to rule out fungal infection with negative results for mycoplasma IgM (0.15; normal range: 0.00 – 0.90).\n\nA hyperechogenic 5.6 × 7.3 cm image is observed in segment V of the right hepatic lobe suggestive of an incidental hemangioma.\n\nLeft panel, longitudinal cut; right panel, transverse cut. Performed using a SIEMENS Magnetom Essenza 1.5 TESLA.\n\n24 hr urine collection: *3,377 ml/24 hrs and **3,860.0 ml/24 hrs\n\nIn April 2014, a right subcostal adrenalectomy was performed at a level three hospital to ensure the presence of an intensive care unit due to the potential bleeding risk. The pathology report described a 4×7×6cm adrenal mass with a grey-yellow surface, partially covered by a thick grey capsule with brown areas, and a hemorrhagic, yellow adipose center. The microscopic evaluation showed an external layer of clear cortical cells of the adrenal granulosa; a center made of mature adipocytes and all three hematopoietic cell lines without calcifications or fibrosis. The final diagnosis was determined to be an adrenal myelolipoma (Figure 3).\n\nSurgical specimen, macroscopic. Amado Polyclinic, Maracaibo- Edo Zulia (10/04/2013).\n\nThe patient experienced no post-surgical complications. She has subsequently completed regular physical activity and continues with the same treatment at the same dosage. Standard laboratory checks at three months showed notable improvement in all parameters.\n\n\nDiscussion\n\nAdrenal myelolipoma is a rare encapsulated benign tumor first described in 1905 by Gierke11 and later named by the French pathologist Charles Oberling in 192912,13. These tumors are metabolically inactive - or nonfunctional - and composed of adipose and hematopoietic cells originating from the adrenal stroma. 
They are predominantly asymptomatic and tend to be discovered incidentally13–15.\n\nThe incidence of these tumors is between 0.08–0.4%12, although they comprise 15% of the AIs discovered due to advances in radiographic imaging13. They frequently present between the fifth and seventh decades of life without a predominance in either sex - though there is a greater incidence in the right adrenal gland15. Though the adrenal location predominates, there have been discoveries in other locations with a preference for the presacral region, and less frequently in gastric, hepatic, lymph node, cranial, and splenic locations16. These statistics are in accordance with this case report.\n\nThe etiology of adrenal myelolipoma is not clear, with numerous theories being proposed. Some suggest a metaplasia of the adrenal and myeloid cells that migrated during embryogenesis, extramedullary hematopoiesis, and embolization of bone marrow elements17. This metaplasia may occur as a response to necrosis, stress, infections, or prolonged adrenocorticotropic hormone (ACTH) stimulation11,18. For example, Al-Bahri et al.19 reported a case of a large bilateral myelolipoma in a 39-year-old male with a history of congenital adrenal hyperplasia secondary to a 21-α hydroxylase deficiency treated with steroids starting in childhood. This was later stopped during adolescence with subsequent myelolipoma development - supporting the theory that ACTH stimulation causes adrenocortical metaplasia. Finally, giant myelolipomas are usually associated with hematologic disorders, like hereditary spherocytosis, thalassemia, and sickle cell anemia, as a response to adrenal stimulation from erythropoietin20.\n\nRecent cytogenetic analyses propose that myelolipomas are out-of-place masses of myeloid cells. Mitsui et al. described an extremely rare case with the presence of osseous tissue with cells similar to osteoblasts21. 
Upon immunohistochemical analysis, there were positive results for bone morphogenetic protein 2 (BMP2), which acts as an inducer of bone formation, and for β-catenin, which intervenes in the signaling pathway. This finding can help give insight into myelolipoma tumorigenesis.\n\nResearchers have also identified (3,21)(q25;p11) chromosomal translocations in patients with myelolipomas and hematological neoplasias18. Because of this, some consider myelolipomas as variants of multiple endocrine neoplasias22, while others recommend that they be grouped with other tumors, such as lipomas, teratomas, liposarcomas, or angiomyolipomas23,24. Despite its benign characteristics, pathological studies and immunohistochemical evaluation (not performed due to lack of reagents) were recommended, because of the patient's personal and family history that increased the risk for malignant results.\n\nThough these tumors are nonfunctional13–15,25, myelolipoma may coexist with hyperplasia in any of the three adrenal cortical zones26,27. For these cases, treatment is adrenalectomy (just as in any case of myelolipomas >6cm) independent of functionality, due to the risk of intratumoral necrosis, hemorrhage from rupture, or compression of adjacent structures due to mass effect28. Alternatively, nonfunctional tumors ≤4cm with benign characteristics are recommended to be periodically monitored with radiological and biochemical evaluations. For masses between 4 and 6cm, the surgical intervention should be based on presenting characteristics, growth rate, and the patient’s preference7,29.\n\nIt is estimated that 20% of AIs will have subclinical hormone production and these patients represent an at-risk population with greater risk of metabolic disorders and cardiovascular disease7,19. 
In the present case, the patient’s hormone values were within normal parameters - ruling out subclinical states, including Conn's Syndrome (hyperaldosteronism), Cushing Syndrome or pheochromocytoma. Nevertheless, the presence of myelolipoma is associated with obesity, DM2 and dyslipidemia warranting pharmacological intervention30. This was further emphasized through a retrospective review of 34 AIs in patients of both sexes over the age of 50, where over half suffered from hypertension, 20.6% had DM2, and 37% had obesity. Of these, 80% were histopathologically confirmed to be adenomas with one being a myelolipoma25,30.\n\nAs strengths, we can point out the collaboration between different levels of medical care and the shared effort of the family and the patient to travel to another state to complete this care. Despite the Venezuelan medical assistance crisis, a relatively quick resolution of the case was achieved. Lastly, we emphasize the compliance with the protocol for proper management of adrenal tumors.\n\nThe limitations include the inability to perform the hormonal profile and determine whether the tumor was functional or not. Additionally, the choice of imaging could have been better. Specifically, the use of MRI instead of CT is not the first choice for the diagnosis of myelolipoma; however, this occurred because the initial diagnosis was directed towards a hepatic hemangioma.\n\n\nConclusions\n\nAdrenal myelolipomas are rare benign tumors that are generally asymptomatic, whose size ranges from a few millimeters to over a dozen centimeters. Much uncertainty exists surrounding the etiology of these masses with continued debate in the current literature on whether or not they are true neoplasms or manifestations secondary to a reactive process26. In general, surgical management depends on hormone production, tumor size, high risk features on imaging and patient consent. 
Yet additional studies and information are needed to better understand myelolipomas, their etiology, and clinical management.\n\nLastly, this case demonstrates how family physicians can manage various aspects of patient care through the facilitation of medical treatments, surgical interventions, and ensuring a proper multidisciplinary approach based on the endocrinology clinical guidelines.\n\n\nConsent statement\n\nWritten informed consent was obtained from the patient for the publication of the patient’s details and accompanying images.",
"appendix": "Author contributions\n\n\n\nRD contributed to the conception of the article. JR, VB, FL and RD contributed to the design of the work. FL and JR contributed to the acquisition of data. RD prepared the first draft of the manuscript. VB provided expertise in endocrinology. KH translation from Spanish to English. RD, JR, VB and KH participated in the revision of the manuscript draft and agreed on the final content.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nNikièma Z, Yaméogo AA, N'Goran K, et al.: [Enormous adrenal incidentalomas: the role of medical imaging about two cases]. Pan Afr Med J. 2012; 13: 74. PubMed Abstract | Free Full Text\n\nPapierska L, Cichocki A, Sankowski AJ, et al.: Adrenal incidentaloma imaging - the first steps in therapeutic management. Pol J Radiol. 2013; 78(4): 47–55. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMusella M, Conzo G, Milone M, et al.: Preoperative workup in the assessment of adrenal incidentalomas: outcome from 282 consecutive laparoscopic adrenalectomies. BMC Surg. 2013; 13: 57. PubMed Abstract | Publisher Full Text | Free Full Text\n\nChervin R, Herrera J, Juri A, et al.: Mesa 1: Incidentaloma Suprarrenal. Rev Argent Endocrinol Metab. 2009; 46(4): 55–64. Reference Source\n\nOliveira Caiafa R, Salvador Izquierdo R, Buñesch Villalba L, et al.: [Diagnosis and management of adrenal incidentaloma]. Radiologia. 2011; 53(6): 516–30. PubMed Abstract | Publisher Full Text\n\nCho YY, Suh S, Joung JY, et al.: Clinical characteristics and follow-up of Korean patients with adrenal incidentalomas. Korean J Intern Med. 2013; 28(5): 557–64. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGac P, Cabané P, Jans J, et al.: Surgical management of adrenal incidentaloma. Rev Chil Cir. 2012; 64(1): 25–31. 
Publisher Full Text\n\nAndrade C, Espírito Santo Paulo R, Teixeira A: Giant adrenal incidentaloma in young patient. Rev Col Bras Cir. 2000; 27(5): 352–354. Publisher Full Text\n\nKim J, Bae KH, Choi YK, et al.: Clinical Characteristics for 348 Patients with Adrenal Incidentaloma. Endocrinol Metab (Seoul). 2013; 28(1): 20–25. PubMed Abstract | Publisher Full Text | Free Full Text\n\nZeiger MA, Thompson GB, Duh QY, et al.: The American Association of Clinical Endocrinologists and American Association of Endocrine Surgeons medical guidelines for the management of adrenal incidentalomas. Endocr Pract. 2009; 15(Suppl 1): 1–20. PubMed Abstract | Publisher Full Text\n\nWani NA, Kosar T, Rawa IA, et al.: Giant adrenal myelolipoma: Incidentaloma with a rare incidental association. Urol Ann. 2010; 2(3): 130–3. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNabi J, Rafiq D, Authoy FN, et al.: Incidental detection of adrenal myelolipoma: a case report and review of literature. Case Rep Urol. 2013; 2013: 789481. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGac P, Cabané P, Klein E, et al.: Giant adrenal myelolipoma. Rev Chil Cir. 2012; 64(3): 292–6. Publisher Full Text\n\nBenítez G, Obregón F, García E, et al.: Mielolipoma de glándula suprarenal: Reporte de un caso. RFM. 2005; 28(1): 23–6. Reference Source\n\nLópez Martín L, García Cardoso J, Gómez Muñoz J, et al.: Mielolipoma suprarrenal: Aportación de un caso y revisión de la literatura. Arch Esp Urol. 2010; 63(10): 880–3. Reference Source\n\nLeón González O, Pol Herrera P, López Rodríguez P, et al.: Myelolipoma, a rare surgical lesion of the adrenal gland. Rev Cubana Cir. 2012; 51(3): 254–9. Reference Source\n\nCastillo Lario M, Carro Alonso B, Gimeno Peribáñez M, et al.: Giant right adrenal myelolipoma. Arch Esp Urol. 2006; 59(9): 911–3. Publisher Full Text\n\nJoy PS, Marak CP, Nashed NS, et al.: Giant Adrenal Myelolipoma Masquerading as Heart Failure. Case Rep Oncol. 2014; 7(1): 182–7. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nAl-Bahri S, Tariq A, Lowenntritt B, et al.: Giant Bilateral adrenal myelolipoma with congenital adrenal hyperplasia. Case Rep Surg. 2014; 2014: 728198. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBarman S, Mandal KC, Mukhopadhyay M: Adrenal myelolipoma: An incidental and rare benign tumor in children. J Indian Assoc Pediatr Surg. 2014; 19: 236–8. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMitsui Y, Yasumoto H, Hiraki M, et al.: Coordination of bone morphogenetic protein 2 (BMP2) and aberrant canonical Wnt/β-catenin signaling for heterotopic bone formation in adrenal myelolipoma: A case report. Can Urol Assoc J. 2014; 8(1–2): E104–E107. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKetelsen D, Von Weyhern CH, Horger M: Diagnosis of bilateral giant adrenal myelolipoma. J Clin Oncol. 2010; 28(33): e678–9. PubMed Abstract | Publisher Full Text\n\nPareja Megía MJ, Barrero Candau R, Medina Pérez M, et al.: [Giant adrenal myelolipoma]. Arch Esp Urol. 2005; 58(4): 362–5. PubMed Abstract\n\nYildiz BD: Giant Extra-Adrenal Retroperitoneal Myelolipoma With Incidental Gastric Mesenchymal Neoplasias. Int Surg. 2015; 100(6): 1018–20. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAnis-Ul-Islam M, Qureshi AH, Zaidi SZ: Adrenal myelolipoma in a young male - a rare case scenerio. J Pak Med Assoc. 2016; 66(3): 342–4. PubMed Abstract\n\nCampos Arbulú AL, Sadava EE, Kerman J, et al.: [Giant adrenal myelolipoma. Right laparoscopic adrenalectomy]. Medicina (B Aires). 2016; 76(4): 249–50. PubMed Abstract\n\nSu HC, Huang X, Zhou WL, et al.: Pathologic analysis, diagnosis and treatment of adrenal myelolipoma. Can Urol Assoc J. 2014; 8(9–10): E637–40. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRamirez M, Misra S: Adrenal myelolipoma: To operate or not? A case report and review of the literature. Int J Surg Case Rep. 2014; 5(8): 494–6. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nYalagachin GH, Bhat BK: Adrenal incidentaloma does it require surgical treatment? Case report and review of literature. Int J Surg Case Rep. 2013; 4(2): 192–4. PubMed Abstract | Publisher Full Text | Free Full Text\n\nChervin RA, Danilowicz K, Pitoia F, et al.: [A study of 34 cases of adrenal incidentaloma.] Medicina (B Aires). 2007; 67(4): 341–50. PubMed Abstract"
}
|
[
{
"id": "24338",
"date": "25 Jul 2017",
"name": "José Manuel Ramírez Aranda",
"expertise": [
"JMRA - family physician",
"JZVP - endocrine disorders"
],
"suggestion": "Approved",
"report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved - The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations - A number of small changes, sometimes more significant revisions, are required to address specific details and improve the paper's academic merit\n\nNot approved - Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nI think the case report fulfills diagnostic criteria for adrenal incidentaloma, as Kim et al. and Zeiger et al. have rightly pointed out. The clinical case was well studied according to accepted guidelines. I consider that the manuscript is well written. Perhaps the only thing that would call my attention is the Discussion section, since it seems to me unnecessarily extensive, but that depends on journal requirements and policies.\nOne suggestion for the abstract: use Arabic numerals to describe the type of diabetes.\n\nIs the background of the case’s history and progression described in sufficient detail? Yes\n\nAre enough details provided of any physical examination and diagnostic tests, treatment given and outcomes? Yes\n\nIs sufficient discussion included of the importance of the findings and their relevance to future understanding of disease processes, diagnosis or treatment? Yes\n\nIs the case presented with sufficient detail to be useful for other practitioners? Yes",
"responses": []
},
{
"id": "24335",
"date": "31 Jul 2017",
"name": "Ana Nunes Barata",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThis case provides an extensive and comprehensive background and description of a case of an adrenal myelolipoma. As this is not a common condition, sharing the knowledge on a rare case may facilitate other practitioners to reach the correct diagnosis. It also includes description of relevant differential diagnosis as well as treatment.\n\nIs the background of the case’s history and progression described in sufficient detail? Yes\n\nAre enough details provided of any physical examination and diagnostic tests, treatment given and outcomes? Yes\n\nIs sufficient discussion included of the importance of the findings and their relevance to future understanding of disease processes, diagnosis or treatment? Yes\n\nIs the case presented with sufficient detail to be useful for other practitioners? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-1140
|
https://f1000research.com/articles/6-1138/v1
|
18 Jul 17
|
{
"type": "Software Tool Article",
"title": "YODEL: Peak calling software for HITS-CLIP data",
"authors": [
"Lance E. Palmer",
"Mitchell J. Weiss",
"Vikram R. Paralkar",
"Mitchell J. Weiss",
"Vikram R. Paralkar"
],
"abstract": "YODEL is a peak calling software for analyzing RNA sequencing data generated by High-Throughput Sequencing of RNA isolated by Crosslinking Immunoprecipitation (HITS-CLIP; also known as CLIP-SEQ), a method to identify RNA-protein interactions genome-wide. We designed YODEL to analyze HITS-CLIP experiments, in which Argonaute proteins are immunoprecipitated, followed by sequencing of the associated RNA in order to identify bound microRNAs and their mRNA targets. The HITS-CLIP sequenced reads are mapped to the genome, and read peaks are then visualized where clustered sets of reads map to the same region. Several peak calling algorithms have been developed to define the boundaries of these peaks. In contrast to other peak callers for HITS-CLIP data, such as Piranha, YODEL does not map the starts of reads to fixed interval bins, but instead uses a heuristic approach to iteratively find the tallest point within a set of clustered reads and examine bases upstream and downstream of that point until a peak has been determined. This allows the peak boundary to be defined more precisely than coordinates that are multiples of the bin size. Per-sample peak counts are also generated by YODEL, which enables rapid downstream differential representation analysis. YODEL is available at https://github.com/LancePalmerStJude/YODEL/.",
"keywords": [
"HITS-CLIP",
"CLIP-SEQ",
"peak caller"
],
"content": "Introduction\n\nA peak caller that could accurately define a single peak amongst several samples was required to analyze High-Throughput Sequencing of RNA isolated by Crosslinking Immunoprecipitation (HITS-CLIP) (Darnell, 2010) data from fetal liver red blood cell precursors of miR-144/451-/- and wild-type mice (Paralkar et al., 2014; Paralkar VR, Palmer LE, Xu P, Lechauve C, Zhao G, Yao Y, Luan J, Wu G, Vourekas A, Mourelatos Z, Scheutz JD and Weiss MJ; unpublished study). Piranha (Uren et al., 2012) is one such software that is commonly used to identify peaks generated by HITS-CLIP. However, Piranha bins the starts of reads and does not fully define a peak. Consequently, a large bin size may result in multiple peaks being combined into one peak. We found that the identification and resolution of peaks using Piranha was highly dependent on the background threshold (-a) and binSize (-b) parameters, and it was unclear how these parameters should be set in order to obtain the most biologically relevant information. We also found that running Piranha on multiple samples separately results in peak boundaries that may be quite different from sample to sample. Generating initial peak calls from a combined sample dataset creates a single standard set of peak boundaries for all samples, which simplifies downstream analysis. We therefore developed a peak calling algorithm, named YODEL, with the following properties: 1) Incorporate strand specificity (Piranha does this, but many other CHIP-SEQ peak callers do not); 2) Generate per-sample read counts for each peak; 3) Have parameters that have easily understandable implications when changed.\n\n\nMethods\n\nThe main input for the peak caller is a BED file generated by clusterBed from the BEDtools suite (Quinlan & Hall, 2010) with the -s option used (see Supplementary material: Input file formats). 
If multiple samples are to be analyzed simultaneously, the name field must contain the sample name or ID before the first colon, followed by the read ID or other descriptive text. In addition, a sample list must be designated (with the YODEL parameter -sampleList) (see Supplementary material: Input file formats). The sample list will identify which samples are to be included for peak calling. After peak calling, read counts for each peak in all samples will be calculated. If no sample list is provided, all reads will be treated as one sample. As an example of how to process HITS-CLIP FASTQ files to generate the input clustered BED file, see Supplementary material: Analysis of HITS-CLIP data from Chi et al., 2009.\n\nYODEL was written in Python and tested with Python version 2.7. YODEL processes each read cluster as it is encountered within the input clustered BED file. For each cluster, the base coverage at each position for all samples under examination for peak calling is determined. Position-specific counts for all individual samples are calculated as well. Once the counts are tallied, the program iteratively identifies peaks until no additional peaks are found. The program identifies the position with the highest read count. If that read count is less than the minimum peak height (mph), no further peak calling for the cluster is performed. From the position with the highest read count, bases upstream are analyzed one at a time, and the lowest read count (lowestPoint) up to the current base is tracked. dipTolerance (dt) and peakDipBuffer (pdb) are input parameters: if at any position the count is 0, or the count >= (lowestPoint + peakDipBuffer) * dipTolerance, then the peak start has been determined, and it is recorded as the base position where the count was 0, or as the base position of lowestPoint, respectively. This is repeated for bases downstream of the highest point to find the peak end. The peak summit is defined as the median of all the positions with the highest count.
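The iterative peak-expansion heuristic described above can be sketched as follows. This is a simplified, illustrative Python reconstruction based on the text, not the published YODEL code; the parameter names mirror YODEL's minPeakHeight, dipTolerance and peakDipBuffer, and edge-of-cluster behavior is an assumption.

```python
def find_peaks(coverage, mph=5, dip_tolerance=1.5, peak_dip_buffer=1):
    """Illustrative sketch of the peak-expansion heuristic (not the
    published YODEL implementation). coverage is a per-base read-count
    list for one read cluster; returns (start, end, summit, height)."""
    cov = list(coverage)
    peaks = []
    while max(cov) >= mph:                      # stop once tallest remaining point < minPeakHeight
        top = max(cov)
        tied = [i for i, c in enumerate(cov) if c == top]
        summit = tied[len(tied) // 2]           # summit = median of positions tied at the maximum

        def extend(step):
            """Walk away from the summit one base at a time; return the boundary."""
            lowest, boundary = cov[summit], summit
            i = summit + step
            while 0 <= i < len(cov):
                c = cov[i]
                if c == 0:                      # coverage hits zero: boundary is this base
                    return i
                if c >= (lowest + peak_dip_buffer) * dip_tolerance:
                    return boundary             # coverage rises again: boundary is the lowest point seen
                if c < lowest:
                    lowest, boundary = c, i
                i += step
            return boundary                     # ran off the cluster edge (assumed behavior)

        start, end = extend(-1), extend(+1)
        peaks.append((start, end, summit, top))
        for i in range(start, end + 1):         # mask the called peak so no overlapping peak is found
            cov[i] = 0
    return peaks
```

On a toy coverage profile such as [0, 2, 6, 10, 6, 2, 0, 0, 7, 3, 0], this sketch calls two non-overlapping peaks, one around each local maximum, illustrating how masking a called peak lets the loop recover the next-tallest one.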
Two sets of peak boundaries (25% and 50%) are defined as the positions where the coverage is 25% and 50% of the highest point, and the maximum peak heights per sample are determined. The numbers of reads per peak are calculated (at both the 25% and 50% range) by determining the number of reads that overlap the peak by at least the input parameter binSize (bs). Typically, we have used the 25% peak boundary for downstream calculations. The peak counts are determined on a per-sample basis, and for all the samples used in peak calling combined. Once a peak has been determined, the base coverage for positions within the full peak is set to 0 so that no further peaks are called overlapping it. The peak-finding process (starting with finding the position with highest read count) is repeated until no more peaks are identified. After peaks are identified by YODEL, several filtering steps can be applied to remove low-quality peaks (see Supplementary material: Peak filtering, Supplementary Figure 2–Supplementary Figure 6).\n\nYODEL has been tested on both Windows and Linux running Python 2.7 with standard libraries. Some of the tools (e.g. BEDtools) used to generate input files are Linux or OS X specific. There is no minimum memory requirement for YODEL, but the size of any BED files sorted with the Linux sort command may be dependent on system memory. See ‘Supplementary material: Analysis of HITS-CLIP data from Chi et al., 2009’ for instructions on how to preprocess data and run YODEL.\n\n\nUse case\n\nThe output of YODEL is described in Table 1. Figure 1 shows a comparison of YODEL and Piranha output from HITS-CLIP analysis of wild-type and miR-144/451-/- fetal liver erythroblasts. Results from two different YODEL parameter settings are shown in blue in the lower half of the figure. Cab39 (Figure 1A) (Godlewski et al., 2010) and Ywhaz (Figure 1B) (Yu et al., 2010) are two known miR-451a target mRNAs.
For Cab39, the largest peak contains a miR-451a seed match that is not present in the knockout sample. Because Piranha bins the start of reads, the peak defined by Piranha may not actually include the seed match location, as is observed with the Cab39 seed match. The failure of a peak to cover a seed match may prevent a microRNA from being assigned to a peak and subsequently interfere with downstream analysis. Also, the peak calling of Piranha is greatly influenced by bin size. A bin size of 32 (default parameter) is not able to resolve many individual peaks. In Ywhaz mRNA (panel B) YODEL detects three peaks around the miR-451a seed match. One Piranha setting (a=0.98, b=16) identified the three individual peaks, but compromised detection of smaller peaks upstream of the predicted miR-451a binding site. Therefore for our miR144/451-/- data set, YODEL was superior to Piranha in defining HITS-CLIP peaks.\n\nIGV browser (Thorvaldsdóttir et al., 2013) images showing YODEL output for HITS-CLIP data analyzing the 3’ untranslated regions of Cab39 (A) and Ywhaz (B) mRNAs in wild-type and miR-451a/miR-144-/- mouse fetal livers at embryonic day 14.5. The coverage tracks (WT in blue, KO in red, and combined coverage in magenta) show combined sequencing reads from three animals of each genotype mapped to the mouse mm10 genome. The seedMatches track shows microRNA seed matches for miR-451a, miR-144-3p and the three most abundant erythroid microRNAs besides miR-451a (miR-16-5p, miR-486a-5p and miR-122-5p) (Paralkar VR, Palmer LE, Xu P, Lechauve C, Zhao G, Yao Y, Luan J, Wu G, Vourekas A, Mourelatos Z, Scheutz JD and Weiss MJ; unpublished results). Average peak counts per sample are shown for wild-type and miR-451a/miR-144-/- erythroblasts using the YODEL more sensitive parameters (see below). The lower panels show YODEL (blue) and Piranha (green) peak boundaries with the indicated parameter settings. 
For YODEL, full boundaries are shown by thin lines and 25% boundaries by thick lines. YODEL less-sensitive parameter settings: dipTolerance=2, peakDipBuffer=2; more-sensitive settings: dipTolerance=1.5, peakDipBuffer=1. Both parameter settings: binSize=16, minPeakHeight=5. Note that the ability of Piranha to resolve different peaks representing distinct microRNA binding sites is highly dependent on parameter settings. Piranha parameters: a=background threshold, b=binSize. The default for Piranha is a=0.99 and b=32.\n\nWe have also tested the YODEL software on a publicly available HITS-CLIP data set. HITS-CLIP reads from mouse neocortex Argonaute immunoprecipitations (Chi et al., 2009) were retrieved from http://ago.rockefeller.edu/rawdata.php. The reads were pre-processed and run through YODEL, as described in the Supplementary material: Analysis of HITS-CLIP data from Chi et al., 2009. We examined the first four genes identified in Table 1 of Chi et al. as potential targets of microRNAs. Figure 2 shows a comparison of peak calling in the 3’ UTR of the Plod3 gene. Again, it is seen that binning starts of reads will cause potential microRNA seeds to be missed. Figure 3 shows the 3’ UTR of the Cd164 gene. See Supplementary Figure 1 for browser images of Ctdsp1 and Itgb1.\n\nIGV browser image of a portion of the Plod3 3’ UTR. Mouse neocortex HITS-CLIP read coverage is shown along with peak calls from YODEL (lower panel blue bars) and Piranha (lower panel green bars). Relevant microRNA seeds (for seeds mapped see Supplementary material: Seeds mapped for brain neocortex data) are shown in black. For YODEL, full boundaries are shown by thin lines and 25% boundaries by thick lines. YODEL parameters: dipTolerance=1.5, peakDipBuffer=1, binSize=16, minPeakHeight=5. Piranha parameters: a=background threshold, b=binSize. The default for Piranha is a=0.99 and b=32.\n\nIGV browser image of a portion of the Cd164 3’ UTR.
Mouse neocortex HITS-CLIP read coverage is shown along with peak calls from YODEL (lower panel blue bars) and Piranha (lower panel green bars). Relevant microRNA seeds (for seeds mapped see Supplementary material: Seeds mapped for brain neocortex data) are shown in black. For YODEL, full boundaries are shown by thin lines and 25% boundaries by thick lines. YODEL parameters: dipTolerance=1.5, peakDipBuffer=1, binSize=16, minPeakHeight=5. Piranha parameters: a=background threshold, b=binSize. The default for Piranha is a=0.99 and b=32.\n\n\nConclusions\n\nWe have designed a new peak-caller, termed YODEL, for analysis of RNA-seq data generated by HITS-CLIP-type experiments. Advantages of YODEL compared to Piranha, a program commonly used for the same purpose, include standardization of peak calls for comparative analysis of multiple samples, improved resolution of peak boundaries, and more consistent overlap between peak calls and microRNA seed matches.\n\n\nSoftware and data availability\n\nYODEL is a Python script and is available at: https://github.com/LancePalmerStJude/YODEL/\n\nArchived source code as at time of publication: https://doi.org/10.5281/zenodo.820635 (Palmer, 2017).\n\nLicense: GPLv3\n\nInput BED files for peak calling with the miR-144/451 KO HITS-CLIP can be found in the Cab39_Ywhaz.allSamples.fullCollapsed.clusters.bed file within the archived source code listed above. This file contains the clustered BED file used for YODEL input (only around Cab39 and Ywhaz).\n\nSample HITS-CLIP data from the 130kD band from mouse neocortex samples were downloaded from http://ago.rockefeller.edu/rawdata.php (130kD Brain A-E samples).\n\nThe pre-processing BASH pipeline used to generate clustered BED files, as well as some accessory scripts, can be found in the Ch2009 directory within the archived source code link listed above.",
"appendix": "Competing interests\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nLEP was supported by the Cancer Center Support (CORE) Grant (CA021765). VRP was supported by the National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK) grant (K08 5K08DK102533). MJW was supported by the NIDDK grant (R01 DK092318). This work was also supported by ALSAC.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nSupplementary material\n\nSupplementary File 1: YODEL supplementary text. This file contains supplementary text for this study. Within this file are descriptions of input files, publicly available tools used, a second test data set analyzed by YODEL, and a description of a set of filters that can be used after peak calling.\n\n\nReferences\n\nChi SW, Zang JB, Mele A, et al.: Argonaute HITS-CLIP decodes microRNA-mRNA interaction maps. Nature. 2009; 460(7254): 479–86.\n\nDarnell RB: HITS-CLIP: panoramic views of protein-RNA regulation in living cells. Wiley Interdiscip Rev RNA. 2010; 1(2): 266–286.\n\nGodlewski J, Nowicki MO, Bronisz A, et al.: MicroRNA-451 regulates LKB1/AMPK signaling and allows adaptation to metabolic stress in glioma cells. Mol Cell. 2010; 37(5): 620–632.\n\nPalmer L: LancePalmerStJude/YODEL: Yodel Initial Release. Zenodo. 2017.\n\nParalkar VR, Luan J, Sridhar S, et al.: Argonaute HITS-CLIP Reveals Global miRNA-mRNA Networks in Erythropoiesis. Blood. 2014; 124(21): 446.\n\nQuinlan AR, Hall IM: BEDTools: a flexible suite of utilities for comparing genomic features. Bioinformatics. 2010; 26(6): 841–842.\n\nThorvaldsdóttir H, Robinson JT, Mesirov JP: Integrative Genomics Viewer (IGV): high-performance genomics data visualization and exploration. Brief Bioinform. 2013; 14(2): 178–192.\n\nUren PJ, Bahrami-Samani E, Burns SC, et al.: Site identification in high-throughput RNA-protein interaction data. Bioinformatics. 2012; 28(23): 3013–3020.\n\nYu D, dos Santos CO, Zhao G, et al.: miR-451 protects against erythroid oxidant stress by repressing 14-3-3zeta. Genes Dev. 2010; 24(15): 1620–1633."
}
|
[
{
"id": "24325",
"date": "25 Jul 2017",
"name": "Michele Trabucchi",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThe authors developed a new tool called YODEL to identify peaks from Ago2 HITS-CLIP data using a novel approach based on the identification of the peak summit of a read cluster and estimation of the size of the peaks based on read coverage. The work is sound and interesting; however, we have some concerns about the benchmarking of this new tool. Our major questions mainly refer to Bottini et al. (2017)1, which should be cited.\n\nMajor suggestions/concerns:\n\nBenchmarking:\nThe authors showed just for a few selected targets that YODEL identifies peaks that include miRNA seed matches, whereas Piranha did not. This should be shown at the genome-wide scale.\nInclusion of seed match sequences does not per se assure a better performance. In fact, seed matches can be included just by chance due to an overestimation of the peak size. To rule out this possibility, the authors should show the distribution of the peak lengths called by the two programs on both entire datasets and calculate the correlation between peak length and number of seed matches.\n\nFor Ago2 CLIP-seq peak calling programs it is expected that miRNA-binding sites and cross-link-dependent mutations position at the peak centers. How does YODEL perform compared to Piranha?\n\nThe authors defined two thresholds to assess the peak boundaries, namely 25% and 50% of the coverage of the highest point: how were these two thresholds assessed?
Since the peak boundaries are a primary concern for the authors, it should be explained and supported by analysis how they chose these two percentages. Therefore, we recommend benchmarking the thresholds by adding intermediate percentages and calculating the sensitivity toward seed matches identified at the genome-wide scale.\n\nOther major points:\nThe introduction needs improvement: add a brief overview of the software/pipelines available to perform CLIP-seq data analysis and cite some reviews that explain all the steps of the data analysis, including Bottini et al. (2017)2 and Uhl et al. (2017)3.\n\nIt should be clearly stated whether YODEL is able to find peaks enriched when comparing two conditions (differential CLIP) and/or only one condition.\n\nIn the Methods section all the parameters should be clearly stated and explained in the main text and not in the Supplementary Information. Furthermore, it should be added which kind of input data (not only the file format) are needed to run YODEL (replicates, IgG, control/KO …).\n\nFinally, it should be made clear whether YODEL can be applied to analyze only Ago2 HITS-CLIP data or also other RNA-binding proteins, and why.\n\nMinor points:\nSupplemental figure 2 is missing.\n\nThe first sentence on page 2, “A peak caller that could accurately define….”, is odd. A peak caller tool is always needed to identify peaks from CLIP-seq experiments, and not just for the specific case mentioned by the authors.\n\nThe sentence on page 2 “Where dipTolerance (dt) and peakDipBuffer (pbd) are input parameters…” is clumsy.\n\nSome misspellings, such as: publically -> publicly.\n\nIs the rationale for developing the new software tool clearly explained? Partly\n\nIs the description of the software tool technically sound? Yes\n\nAre sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? 
Partly\n\nIs sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? Partly\n\nAre the conclusions about the tool and its performance adequately supported by the findings presented in the article? Partly",
"responses": []
},
{
"id": "24815",
"date": "16 Aug 2017",
"name": "Neelanjan Mukherjee",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThe authors present an alternative peak caller for HITS-CLIP data. While the idea is interesting and the examples are compelling, there is not sufficient analysis presented to determine the utility of YODEL.\nMajor:\nThe benchmarking is insufficient to evaluate the difference between YODEL and PIRANHA. The primary figures only have single examples. There needs to be a transcriptome-wide analysis to evaluate the performance.\n\nThe analysis should include some type of specificity/sensitivity analysis. It would be instructive to design \"true\" positives and \"false\" positives. Generally, the \"true\" positives could be thought of as miRNAs that are expressed in that system vs those that are not. Additionally, one can design 'decoy' seeds that are di-nucleotide shuffled seeds of the expressed miRNAs (that don't match the expressed miRNA seeds) and evaluate the number of counts relative to the actual expressed miRNA seed.\nIn the case of the KO, one could examine whether the peaks called from WT data that contain seed matches to the KO miRNAs change in coverage (WT vs KO), particularly relative to the peaks that contain seed matches to the expressed non-KO miRNAs. Comparing YODEL and PIRANHA in this analysis would be quite instructive.\n\nMinor:\nIn the intro the authors describe three properties of YODEL. The first was:\n\"Incorporate strand specificity (Piranha does this, but many other CHIP-SEQ peak callers do not)\"\nI think this should be removed. 
Any peak finder for RNA interactions needs to be strand-specific. I do not know why CHIP-SEQ peak callers are even mentioned, unless the authors believe this could be beneficial for CHIP-seq data. If so, they would need to compare to common CHIP-seq peak finders, though that would be a distraction in my opinion.\n\nIs the rationale for developing the new software tool clearly explained? Partly\n\nIs the description of the software tool technically sound? Yes\n\nAre sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? Yes\n\nIs sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? No\n\nAre the conclusions about the tool and its performance adequately supported by the findings presented in the article? No",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-1138
|
https://f1000research.com/articles/4-919/v1
|
29 Sep 15
|
{
"type": "Research Article",
"title": "Blood Interferon Signatures Putatively Link Lack of Protection Conferred by the RTS,S Recombinant Malaria Vaccine to an Antigen-specific IgE Response",
"authors": [
"Darawan Rinchai",
"Scott Presnell",
"Damien Chaussabel",
"Darawan Rinchai",
"Scott Presnell"
],
"abstract": "Malaria remains a major cause of mortality and morbidity worldwide. Progress has been made in recent years with the development of vaccines that could pave the way towards protection of hundreds of millions of exposed individuals. Here we used a modular repertoire approach to re-analyze a publicly available microarray blood transcriptome dataset monitoring the response to malaria vaccination. We report the identification of interferon signatures in the blood of subjects on days 1, 3 and 14 following administration of the third dose of the RTS,S recombinant malaria vaccine. These signatures correlate at day 1 with protection, and at days 3 and 14 with susceptibility, upon subsequent challenge of study subjects with live parasites. In addition, we putatively link the decreased abundance of interferon-inducible transcripts observed at days 3 and 14 post-vaccination with the elicitation of an antigen-specific IgE response in a subset of vaccine recipients that failed to be protected by the RTS,S vaccine.",
"keywords": [
"Transcriptome",
"Blood",
"Malaria",
"Vaccination",
"IgE",
"Humoral",
"Bioinformatics",
"Modular repertoires"
],
"content": "Introduction\n\nAbout 3.4 billion people, nearly half of the world’s population, live in areas at risk of malaria transmission1. Malaria infection resulted in an estimated 198 million cases in 2013 that may have caused between 367,000 and 755,000 deaths, according to the World Health Organization2. Recent concerns have been raised by the rise in parasite resistance to artemisinin, which is the last effective monotherapy available for the treatment of malaria3. Cases of artemisinin resistance have been reported from much of Southeast Asia and now appear likely to reach the Indian subcontinent, with potentially dire consequences4. However, significant advances have been made over the past years towards the development of an effective malaria vaccine5. Most notably, these include the successful testing of a live vaccine consisting of radiation-attenuated sporozoites6 and, this year, licensure by regulatory authorities of the first malaria vaccine: the recombinant adjuvanted vaccine developed by the global pharmaceutical company GSK, called RTS,S and also known by its commercial name, Mosquirix®. This is a highly significant landmark, but unfortunately the efficacy of the vaccine, for unknown reasons and despite optimization attempts, remains suboptimal7,8. Thus, identification of the mechanisms underlying protection conferred by this vaccine, or lack thereof, may be key to the development of a broadly effective prophylactic vaccine against malaria. Unbiased “systems approaches”, consisting of profiling all the elements constitutive of a given biological system, have recently been implemented to investigate responses to vaccines9. Such an approach, consisting of measuring blood transcript abundance on a genome-wide scale, has been adopted for the serial profiling of responses to the influenza, pneumococcal, yellow fever or malaria vaccines10–15. In 2010, Vahey et al. 
reported results from a study investigating changes in transcript abundance in blood following administration of the malaria RTS,S vaccine15. In this report we share the results of a re-analysis of the data made available by Vahey et al. upon publication of their findings. We employed an approach developed earlier, in part by other team members in previous publications, which consists of identifying modular transcriptional repertoires – collections of co-clustered gene sets – in order to carry out module-level “fingerprinting” analyses16. This re-analysis led to original findings, with the identification of an interferon transcriptional signature at day 1 post-vaccination correlating with protection, as well as a second interferon signature at days 3 and 14 post-vaccination correlating, this time, with lack of protection of study subjects from subsequent challenges with the malaria parasite.\n\n\nMethods\n\nThe methodology for constructing modular transcriptional repertoires has been described earlier16,17. The particular framework employed in this re-analysis has been described in an earlier study investigating responses to influenza and pneumococcal vaccines12. Briefly, nine datasets were used as input, including blood transcriptome profiles generated from patients with HIV, tuberculosis, sepsis, systemic lupus erythematosus, systemic arthritis, and liver transplant. Each dataset was clustered independently using Hartigan’s k-means clustering, using the elbow criterion to determine the optimal number of clusters for each dataset. Cluster membership information for each gene across the nine datasets was used to build a table recording the number of co-clustering events for each possible gene pair. 
This table was used in turn to build a weighted co-clustering network where each node is a gene and edges indicate co-clustering events, with weight ranging from 1 (pair of genes belonging to the same cluster in 1 out of 9 datasets) to 9 (pair of genes belonging to the same cluster in 9 out of 9 datasets). The module selection process consisted of the identification, within this large network, of cliques, which are densely connected subnetworks. A principled approach was used, starting in the first round with the selection of the largest subnetworks carrying the highest weight (co-clustering in 9 out of 9 datasets, corresponding to the M1 modules), followed by identification and removal from the selection pool of the next largest subnetwork, and so on (with minimum clique size set at 10). When no additional modules could be identified for a given round of selection, the stringency of the selection criteria was progressively relaxed (e.g. co-clustering occurring in 8 out of 9 datasets in the second round of selection, corresponding to the M2 modules; in 7 out of 9 datasets in the third round of selection, corresponding to the M3 modules, etc.). The datasets used for module construction have been deposited in NCBI’s Gene Expression Omnibus: GSE30101.\n\nFunctional analyses were carried out systematically for each module using commercial as well as publicly available tools (primarily MetaCore™ version 5.0 and DAVID version 6.718) and results are reported on a wiki page: http://www.biir.net/public_wikis/module_annotation/V2_Trial_8_Modules. A complete list of the genes forming the modules is also available from the wiki.
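The co-clustering bookkeeping behind the edge weights can be sketched as follows. This is a stdlib-only, illustrative Python sketch under stated assumptions: the per-dataset k-means step is taken as already done, yielding one gene-to-cluster mapping per dataset, and the gene names and toy data below are hypothetical.

```python
from collections import Counter
from itertools import combinations

def coclustering_weights(clusterings):
    """Count, for every gene pair, in how many datasets the pair falls in
    the same cluster. These counts (1..9 for nine input datasets) become
    the edge weights of the co-clustering network from which module
    cliques are then selected."""
    weights = Counter()
    for clustering in clusterings:              # one {gene: cluster_id} dict per dataset
        for g1, g2 in combinations(sorted(clustering), 2):
            if clustering[g1] == clustering[g2]:
                weights[(g1, g2)] += 1          # sorted() keeps pair keys canonical
    return weights

# Toy example with three datasets standing in for the nine used in the study
datasets = [
    {"GENE_A": 1, "GENE_B": 1, "GENE_C": 2},
    {"GENE_A": 1, "GENE_B": 1, "GENE_C": 1},
    {"GENE_A": 1, "GENE_B": 2, "GENE_C": 2},
]
w = coclustering_weights(datasets)
```

In this toy example, GENE_A and GENE_B co-cluster in two of the three datasets, so their edge would carry weight 2; clique selection then proceeds from the highest-weight edges downward, as described above.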
rather than carrying out analyses at the individual gene level, which assume that changes in transcript abundance for each gene occur independently from those of other genes, we performed analyses at the modular level, where changes are assessed for sets of co-clustered genes. Thus we summarize the “modular response” as a single value, the percent of responsive genes for a given module. In earlier analyses the average fold change per module was also used to demonstrate that a high level of concordance could be observed across microarray platforms at the modular level but not at the gene level17. For determining changes for individual subjects post-vaccination, a cutoff is set against which individual genes constitutive of a module are tested. If a gene meets the set criteria it is considered “responsive”. “Module-level” data is subsequently expressed as a % value representing the proportion of responsive transcripts for a given module.\n\nMann Whitney tests were performed on individual module response values expressed as percentages comparing protected and non-protected groups using GraphPad Prism software version 6 (GraphPad Software, San Diego, CA).\n\n\nResults\n\nThe design of the vaccine trial is described in detail by Vahey et al. and in an earlier publication15,19. Briefly, study subjects received the RTS,S vaccine, which consists of sequences of the Plasmodium falciparum Circum Sporozoite Protein (CSP) expressed in hepatitis B surface antigen and formulated with the proprietary adjuvant systems AS01/AS0220. Challenge was performed with a homologous 3D7 strain of P. falciparum delivered by 5 bites from infected mosquitoes. Samples were obtained from study participants at study entry (36 samples); on the day of the third vaccination (44 samples); at day 1 (43 samples), day 3 (43 samples), and day 14 (37 samples) thereafter; and at day 5 post-challenge (39 samples). Whole blood transcriptome profiles were generated using commercial Affymetrix HG-U133 chips. 
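The per-subject module response summary described in the Methods, a cutoff test applied gene by gene and collapsed to a percentage, might be sketched as follows. This is an illustrative reconstruction: the fold-change cutoff, gene names, and module composition are hypothetical, not the study's actual parameters.

```python
# Sketch of the "percent responsive" module summary: a gene counts as
# responsive if its post- vs pre-vaccination fold change passes the
# cutoff in either direction (illustrative cutoff of 2-fold).

def module_response(module_genes, fold_changes, cutoff=2.0):
    """Return the percent of a module's genes whose fold change exceeds
    the cutoff (up-regulation) or falls below its reciprocal (down-
    regulation)."""
    responsive = sum(
        1 for g in module_genes
        if g in fold_changes
        and (fold_changes[g] >= cutoff or fold_changes[g] <= 1.0 / cutoff)
    )
    return 100.0 * responsive / len(module_genes)

# toy example: a 4-gene "interferon module" with made-up fold changes
module = ["IFI1", "IFI2", "IFI3", "IFI4"]
fc = {"IFI1": 3.1, "IFI2": 0.4, "IFI3": 1.1, "IFI4": 2.5}
print(module_response(module, fc))  # 3 of 4 genes responsive -> 75.0
```

Summarizing each module as one percentage per subject is what makes the downstream heatmaps and group comparisons tractable.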
Data processing and normalization methodologies are described in the original publication. Data are available publicly from the NCBI Gene Expression Omnibus (GSE18323). Only the blood transcriptional profiles generated on the day of the third vaccination and at day 1, day 3 and day 14 post-third vaccination were used in our re-analysis.\n\nWe employed a “modular repertoire approach” first described in 2008 in a research paper17, and more recently in a review16. Briefly, this approach consists in identifying a priori the relationships among constituents of a given biological system, which in our case is the blood transcriptome. This in turn makes it possible to analyze transcriptional profiles as functionally interpretable gene sets rather than independent genes. Modular repertoires are established in an entirely data-driven process through the recording of co-clustering patterns of transcripts across a wide range of immune-related diseases. A collection of datasets encompassing infectious as well as autoimmune disorders and primary immune deficiency was used as input in order to capture a wide variety of immune signatures. The module construction process and modular analyses are described in detail in the Methods section.\n\nIn the original analysis of this dataset Vahey et al. report the identification: 1) of a transient signature at 24 hours post-vaccination that was not observed at subsequent time points. This signature is described as being associated with inflammatory processes elicited by the vaccine and was not associated with outcome of the infectious challenge; 2) of a signature at 5 days post-challenge that distinguishes vaccinated from non-vaccinated individuals, thus directly reflecting and demonstrating the effect of vaccination; 3) of a signature at 14 days post-vaccination correlating with protection conferred by the vaccine. 
This 393-gene signature was identified using high resolution Gene Set Enrichment Analysis (GSEA) and consisted of transcripts belonging to the immunoproteasome pathway associated with the processing of major histocompatibility complex class I peptides.\n\nIn our re-analysis we first assessed changes in transcript abundance at the modular level. The percentage of responsive transcripts constitutive of a given module was determined for each individual at days 1, 3 and 14 following administration of the third vaccine dose in comparison to the levels obtained in samples collected just prior to that injection (see Methods for details). Hierarchical clustering was then performed at each time point to group modules (rows) and subjects (columns) based on patterns of changes in blood transcript abundance represented by the percent module response values (day 1, Figure 1A). Modules were filtered to only retain those with changes >15% in at least one subject. This analysis is unsupervised since it does not take knowledge of the outcome of the infectious challenge into account. We observed nonetheless that samples tended to segregate based on whether or not the vaccine conferred protection (Figure 1A). Three modules associated with induction by interferon appeared to be the main elements driving the clustering of study subjects, with higher abundance levels being observed in subjects protected from subsequent infectious challenge. We demonstrated in our previous work that those three interferon modules represent distinct signatures that can be used for stratification of subjects with systemic lupus erythematosus21. Thus, we used in turn the same M1.2, M3.4 and M5.12 modules to stratify malaria vaccine recipients. Hierarchical clustering using only this subset of modules contributed to further separation of subjects based on the outcome of the infectious challenge (Figure 1B). 
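The unsupervised filtering and clustering step described above might look like the following in outline. The data here are synthetic; the >15% filter is the only parameter taken from the text, while the matrix sizes and the average-linkage choice are assumptions for illustration.

```python
# Sketch of the unsupervised step: filter modules showing a >15% response
# in at least one subject, then order both modules (rows) and subjects
# (columns) by hierarchical clustering, as for a heatmap display.
import numpy as np
from scipy.cluster.hierarchy import linkage, leaves_list

rng = np.random.default_rng(0)
# rows = modules, cols = subjects; values = % responsive transcripts
response = rng.uniform(0, 30, size=(10, 8))

# keep modules with a >15% change in at least one subject
keep = (np.abs(response) > 15).any(axis=1)
filtered = response[keep]

# order rows and columns by hierarchical clustering (average linkage)
row_order = leaves_list(linkage(filtered, method="average"))
col_order = leaves_list(linkage(filtered.T, method="average"))
heatmap = filtered[np.ix_(row_order, col_order)]
print(heatmap.shape)
```

Because no outcome label enters this step, any segregation of protected from non-protected subjects in the resulting column order is an emergent property of the data, which is what makes the observed separation notable.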
The difference in % module responsiveness between protected and non-protected subjects was also statistically significant for M1.2 (p=0.0094, Mann Whitney test) (Figure 1C). M3.4 tended to be elevated compared to the pre-vaccination baseline in both protected and non-protected individuals but was not different between those two groups. Abundance of M5.12 transcripts did not change following vaccination.\n\nBlood transcriptional responsiveness to malaria vaccination was determined at the modular level in subjects one day following administration of the third dose of RTS,S. A. The percentage of responsive transcripts was determined for each module (co-clustered gene set) and represented by a colored spot on a heatmap where modules are arranged in rows and samples in columns (using a custom web application; a manuscript describing this resource is in preparation). Increases in transcript abundance compared to the baseline pre-third vaccine sample are shown in red and decreases in transcript abundance in blue. Modules and samples are arranged by hierarchical clustering based on patterns of module responsiveness. B. Grouping of samples based on patterns of responsiveness of interferon modules is shown here. C. Responsiveness of the three interferon modules on day 1 is shown on a plot.\n\nWe next used a similar approach to classify subjects at days 3 and 14 post-third vaccination. Subjects once again segregated based on whether or not protection was conferred by the vaccine (Figures 2A & 2B). Notably, however, at these time points the signature showed a decrease in levels of transcript abundance in comparison to baseline pre-vaccine samples in subjects that were not protected. Thus, in contrast to the signature described at day 1, the signatures at days 3 and 14 correlated with lack of protection by the vaccine. Differences between the protected and non-protected groups were highly significant for M1.2 (Figure 2C, day 3 p<0.0001, day 14 p<0.0001, Mann Whitney test). 
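The group comparison reported above corresponds to a two-sided Mann-Whitney U test on per-subject module response percentages. The paper used GraphPad Prism; an equivalent sketch with SciPy, using made-up values (the real per-subject percentages are not reproduced here), would be:

```python
# Two-sided Mann-Whitney U test comparing protected vs non-protected
# groups on module response percentages (values are illustrative only).
from scipy.stats import mannwhitneyu

protected = [28.0, 35.5, 22.1, 40.2, 31.0, 27.4]      # % M1.2 response
non_protected = [8.3, 12.0, 5.1, 14.8, 9.9, 11.2]

# every protected value exceeds every non-protected value, so the U
# statistic reaches its maximum of 6 * 6 = 36
stat, p = mannwhitneyu(protected, non_protected, alternative="two-sided")
print(f"U={stat}, p={p:.4f}")
```

The nonparametric test is a sensible choice here: the response percentages are bounded, small-sample, and not obviously normally distributed.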
M3.4 and M5.12 did not show significant differences between those groups. Notably, we found that the genes constitutive of M1.2 do not overlap with the day 14 immunoproteasome signature described by Vahey et al. Taken together, the results of our re-analysis of the Vahey dataset using a modular repertoire framework led to an original finding by demonstrating the association between diverging day 1 and days 3 and 14 interferon signatures and protection conferred by the RTS,S vaccine.\n\nBlood transcriptional responsiveness to malaria vaccination was determined at the modular level in subjects 3 and 14 days following administration of the third dose of RTS,S. A. As in Figure 1, the percentage of responsive transcripts was determined for each module (co-clustered gene set) and represented by a colored spot on a heatmap where modules are arranged in rows and samples in columns. Increases in transcript abundance compared to the baseline pre-third vaccine sample are shown in red and decreases in transcript abundance in blue. Modules and samples are arranged by hierarchical clustering based on patterns of module responsiveness. B. Similar description as in A, this time applied to day 14 data. C. Responsiveness of module M1.2 at days 3 and 14 post-vaccination is represented on a plot, showing also the differences between the protected and non-protected groups.\n\nWe have shown in earlier work that the three interferon modules described above tend to become elevated sequentially in patients with systemic lupus and may be associated with differential induction of type I and type II interferon in this disease21. Furthermore, lupus disease severity was found to correlate significantly with M5.12 levels. We have also shown that an interferon response dominated by M1.2 and M3.4 was transiently increased 1 day following vaccination with the trivalent influenza virus12. O’Gorman et al. 
recently demonstrated that this transient interferon response is mediated by flu antigen-specific IgG immune complexes rather than engagement of pathogen-associated molecular pattern receptors22. The day 1 interferon response observed in the context of malaria vaccination could similarly be the result of engagement of CSP-specific IgG immune complexes since it occurs following administration of the third dose of RTS,S, at a time when a pre-existing humoral response would have been elicited by the first two doses.\n\nBut most peculiar is the fact that this increased modular interferon response in protected individuals at day 1 is followed by a persistent decrease in abundance of M1.2 transcripts below the pre-vaccination baseline in individuals that were not protected by the RTS,S vaccine. Indeed, in over 10 years of investigating blood transcriptome responses in a wide range of clinical and experimental settings the authors have not encountered a single instance of such a sustained and uniform decrease in abundance of interferon-inducible transcripts. What is especially striking is the clear-cut association between lack of protection conferred by RTS,S and the decrease in abundance of M1.2 transcripts seen in Figure 2C. This implies that the immunological mechanism underlying this suppressed interferon signature may be key to overcoming current limitations of sub-unit malaria vaccination.\n\nHere we putatively attribute this decrease in abundance of interferon-inducible transcripts and subsequent lack of protection to the elicitation by the vaccine of an antigen-specific IgE response. This assertion is based on an array of converging evidence, as outlined below:\n\nEngagement of the high affinity IgE receptor, FCER1, mediates decreased responsiveness to interferon-inducing stimuli. Gill et al. 
have shown that plasmacytoid dendritic cells (pDCs) isolated from patients with allergic asthma constitutively produce reduced levels of interferon alpha in response to the influenza virus in vitro when compared with pDCs isolated from non-asthmatic controls23. They also demonstrated that production of interferon alpha by pDCs stimulated in vitro with the virus is significantly decreased upon cross-linking of the FCER1 receptor23. Similar findings have been reported more recently in PBMCs exposed to Human Rhinovirus (HRV)24. This is to our knowledge the only immune-mediated mechanism of suppression of interferon responses that may explain the decrease in M1.2 observed following RTS,S vaccination. Thus we hypothesize that the suppression by RTS,S of levels of interferon-inducible transcripts results from the formation of IgE-CSP immune complexes, with anti-CSP IgE being elicited in earlier rounds of vaccination (Figure 3). IgE-antigen immune complexes would cause cross-linking and downstream signaling through the FCER1 that is expressed at the surface of leukocytes of the myeloid lineage. While IgG levels have recently been correlated with protection conferred by RTS,S25, to our knowledge the elicitation of IgE responses by this vaccine has thus far not been reported. Furthermore, our hypothesis is supported by evidence independently linking IgE responses, and specifically engagement of the FCER1, to susceptibility to malaria. Perlmann et al. identified IgE as a pathogenic factor in malaria, with immune complexes contributing to excess TNF induction in peripheral blood mononuclear cells in vitro26. Furthermore, mice deficient for the high affinity IgE receptor showed increased resistance to malaria infection, specifically implicating FCER1-expressing neutrophils as pathogenic mediators27. A more recent study has established a link between asthma and atopic dermatitis and delayed development of clinical immunity to P. falciparum28. 
Notably, in addition to shifting cytokine balance by promoting IL10 and TNF production, engagement of high affinity IgE receptors has been reported to critically impair the phagocytic function of monocytes, a mechanism that is essential for the control of malaria infection29.\n\nOur model infers that the interferon signatures observed on days 1, 3 and 14 post-vaccination correlating with outcome of the infectious challenge are the result of engagement of Fc receptors by immune complexes. According to our model no interferon signatures should be observed following administration of the first vaccine dose in the absence of pre-existing immunity to the Circum Sporozoite Protein (CSP). The injection of the first two doses of vaccine should elicit a humoral response, which in non-protected individuals is dominated by IgE rather than IgG. Ig-CSP immune complexes should form when the third dose of vaccine, which contains the CSP antigen, is administered. The transient interferon response elicited in individuals who develop a protective response, and that we observed at Day 1, could be mediated by engagement of the FCGR by IgG-CSP immune complexes, as has been described earlier in the context of influenza vaccination22. Our model predicts that IgE-CSP complexes form in non-protected individuals and cross-link the high affinity IgE receptor FCER1 at the surface of leukocytes of the myeloid lineage. FCER1 engagement would in turn mediate the reduction in levels of IFN-inducible (IFI) transcripts that is observed on days 3 and 14. This suppression of constitutive levels of IFN-inducible genes in the non-protected group would be countered at least partially on Day 1 by a residual IgG-IC response in subjects displaying mixed IgE/IgG humoral responses. If this is indeed the case, the levels of M1.2 suppression on Day 1 in individuals displaying an IgE response should be inversely correlated with levels of CSP-specific IgG. 
Given the transient nature of the interferon signature that was observed, and that we tentatively attribute to IgG ICs, this partial reversal of IgE-mediated suppression would not be observed at later time points, which would account for the sustained decrease in abundance of interferon-inducible transcripts that was observed at days 3 and 14. Engagement and crosslinking of FCER1 may also result in altered phagocytic and parasite-killing abilities, thus contributing to lack of resistance to infection in cases where the humoral response to RTS,S vaccination is dominated by IgE29.\n\n\nConclusions\n\nGaining an understanding of the immunological mechanisms that confer protection via immunization with the RTS,S malaria vaccine, or conversely that prevent it, can help address decisively the global health challenges caused by malaria infection. In this report we identify a candidate blood transcriptional signature correlating with protection following subsequent infectious challenge. Furthermore, we establish a potential link between the peculiar decrease in abundance of interferon-inducible transcripts observed at days 3 and 14 following administration of the third dose of the vaccine and the possible elicitation of an IgE response in a subset of individuals that subsequently fail to be protected by vaccination. The validity of the model that we are proposing here can easily be tested by groups having ready access to samples obtained from subjects enrolled in the RTS,S vaccine trials. If this model holds true, it would also open the possibility, through the choice of appropriate antigens, adjuvants, or other immune-modulating agents, of designing strategies aimed at preventing, suppressing or skewing the development of IgE responses, and thus of conferring high rates of protection against malaria infection through prophylactic immunization.",
"appendix": "Author contributions\n\n\n\nDR: data analysis & interpretation, manuscript preparation; SP: software & database development; DC: data analysis & interpretation; manuscript preparation.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis work was made possible through funding support from NIH (U01AI082110, U19AI089987, U19AI08998 and U19AI057234) to DC and SP, and the Qatar Foundation to DR and DC.\n\nI confirm that the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nReferences\n\nLoitto VM, Forslund T, Sundqvist T, et al.: Neutrophil leukocyte motility requires directed water influx. J Leukoc Biol. 2002; 71(2): 212–222. PubMed Abstract\n\nMalaria Fact sheet N°94. 2015. Reference Source\n\nAshley EA, Dhorda M, Fairhurst RM, et al.: Spread of artemisinin resistance in Plasmodium falciparum malaria. N Engl J Med. 2014; 371(5): 411–423. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTun KM, Imwong M, Lwin KM, et al.: Spread of artemisinin-resistant Plasmodium falciparum in Myanmar: a cross-sectional survey of the K13 molecular marker. Lancet Infect Dis. 2015; 15(4): 415–421. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHoffman SL, Vekemans J, Richie TL, et al.: The march toward malaria vaccines. Vaccine. 2015; pii: S0264-410X(15)01070-1. PubMed Abstract | Publisher Full Text\n\nSeder RA, Chang LJ, Enama ME, et al.: Protection against malaria by intravenous immunization with a nonreplicating sporozoite vaccine. Science. 2013; 341(6152): 1359–1365. PubMed Abstract | Publisher Full Text\n\nOckenhouse CF, Regules J, Tosh D, et al.: Ad35.CS.01-RTS,S/AS01 Heterologous Prime Boost Vaccine Efficacy against Sporozoite Challenge in Healthy Malaria-Naïve Adults. PLoS One. 2015; 10(7): e0131571. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nRTS,S Clinical Trials Partnership: Efficacy and safety of RTS,S/AS01 malaria vaccine with or without a booster dose in infants and children in Africa: final results of a phase 3, individually randomised, controlled trial. Lancet. 2015; 386(9988): 31–45. PubMed Abstract | Publisher Full Text\n\nHagan T, Nakaya HI, Subramaniam S, et al.: Systems vaccinology: Enabling rational vaccine design with systems biological approaches. Vaccine. 2015; 33(40): 5294–5301. PubMed Abstract | Publisher Full Text\n\nDunachie S, Hill AV, Fletcher HA: Profiling the host response to malaria vaccination and malaria challenge. Vaccine. 2015; 33(40): 5316–5320. PubMed Abstract | Publisher Full Text\n\nGaucher D, Therrien R, Kettaf N, et al.: Yellow fever vaccine induces integrated multilineage and polyfunctional immune responses. J Exp Med. 2008; 205(13): 3119–3131. PubMed Abstract | Publisher Full Text | Free Full Text\n\nObermoser G, Presnell S, Domico K, et al.: Systems scale interactive exploration reveals quantitative and qualitative differences in response to influenza and pneumococcal vaccines. Immunity. 2013; 38(4): 831–844. PubMed Abstract | Publisher Full Text | Free Full Text\n\nQuerec TD, Akondy RS, Lee EK, et al.: Systems biology approach predicts immunogenicity of the yellow fever vaccine in humans. Nat Immunol. 2009; 10(1): 116–125. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTsang JS, Schwartzberg PL, Kotliarov Y, et al.: Global analyses of human immune variation reveal baseline predictors of postvaccination responses. Cell. 2014; 157(2): 499–513. PubMed Abstract | Publisher Full Text | Free Full Text\n\nVahey MT, Wang Z, Kester KE, et al.: Expression of genes associated with immunoproteasome processing of major histocompatibility complex peptides is indicative of protection with adjuvanted RTS,S malaria vaccine. J Infect Dis. 2010; 201(4): 580–589. 
PubMed Abstract | Publisher Full Text\n\nChaussabel D, Baldwin N: Democratizing systems immunology with modular transcriptional repertoire analyses. Nat Rev Immunol. 2014; 14(4): 271–280. PubMed Abstract | Publisher Full Text | Free Full Text\n\nChaussabel D, Quinn C, Shen J, et al.: A modular analysis framework for blood genomics studies: application to systemic lupus erythematosus. Immunity. 2008; 29(1): 150–164. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHuang da W, Sherman BT, Lempicki RA: Systematic and integrative analysis of large gene lists using DAVID bioinformatics resources. Nat Protoc. 2009; 4(1): 44–57. PubMed Abstract | Publisher Full Text\n\nKester KE, Cummings JF, Ofori-Anyinam O, et al.: Randomized, double-blind, phase 2a trial of falciparum malaria vaccines RTS,S/AS01B and RTS,S/AS02A in malaria-naive adults: safety, efficacy, and immunologic associates of protection. J Infect Dis. 2009; 200(3): 337–346. PubMed Abstract | Publisher Full Text\n\nLee S, Nguyen MT: Recent advances of vaccine adjuvants for infectious diseases. Immune Netw. 2015; 15(2): 51–57. PubMed Abstract | Publisher Full Text | Free Full Text\n\nChiche L, Jourde-Chiche N, Whalen E, et al.: Modular transcriptional repertoire analyses of adults with systemic lupus erythematosus reveal distinct type I and type II interferon signatures. Arthritis Rheumatol. 2014; 66(6): 1583–1595. PubMed Abstract | Publisher Full Text | Free Full Text\n\nO'Gorman WE, Huang H, Wei YL, et al.: The Split Virus Influenza Vaccine rapidly activates immune cells through Fcγ receptors. Vaccine. 2014; 32(45): 5989–5997. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGill MA, Bajwa G, George TA, et al.: Counterregulation between the FcepsilonRI pathway and antiviral responses in human plasmacytoid dendritic cells. J Immunol. 2010; 184(11): 5999–6006. 
PubMed Abstract | Publisher Full Text\n\nDurrani SR, Montville DJ, Pratt AS, et al.: Innate immune responses to rhinovirus are reduced by the high-affinity IgE receptor in allergic asthmatic children. J Allergy Clin Immunol. 2012; 130(12): 489–495. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAjua A, Lell B, Agnandji ST, et al.: The effect of immunization schedule with the malaria vaccine candidate RTS,S/AS01E on protective efficacy and anti-circumsporozoite protein antibody avidity in African infants. Malar J. 2015; 14: 72. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPerlmann P, Perlmann H, Flyg BW, et al.: Immunoglobulin E, a pathogenic factor in Plasmodium falciparum malaria. Infect Immun. 1997; 65(1): 116–121. PubMed Abstract | Free Full Text\n\nPorcherie A, Mathieu C, Peronet R, et al.: Critical role of the neutrophil-associated high-affinity receptor for IgE in the pathogenesis of experimental cerebral malaria. J Exp Med. 2011; 208(11): 2225–2236. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHerrant M, Loucoubar C, Bassène H, et al.: Asthma and atopic dermatitis are associated with increased risk of clinical Plasmodium falciparum malaria. BMJ Open. 2013; 3(7): pii: e002835. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPyle DM, Yang VS, Gruchalla RS, et al.: IgE cross-linking critically impairs human monocyte function by blocking phagocytosis. J Allergy Clin Immunol. 2013; 131(2): 491–500.e491–495. PubMed Abstract | Publisher Full Text | Free Full Text"
}
|
[
{
"id": "11692",
"date": "21 Dec 2015",
"name": "Geneviève Milon",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nIt is a pleasure to provide my comments and highlight some issues in your well-conceived and well-conducted re-analysis of the dataset generated and assembled by Vahey et al.; this latter study was conducted to investigate changes in transcript abundance in Peripheral Blood Mononuclear Cells/PBMCs prepared from blood sampled from human volunteers who were recipients of three doses of the recombinant RTS,S vaccine containing a protein named Circum Sporozoite Protein/CSP. Of note, the CSP is displayed at the surface of the human-invasive 3D7 Plasmodium falciparum developmental stage population named the invasive sporozoite population. The invasive properties of the sporozoite population reflect maturation within the salivary glands of the blood-feeding Anopheles stephensi/A.s, and sporozoites are transmitted when these blood-feeding anopheline mosquitoes are either (1) probing the skin or (2) sampling their blood meals; whether it is one or the other process, it is coupled to the delivery of both A. stephensi saliva molecules and sporozoites.\n\nYour re-analysis did allow the generation of interesting data sets, with the identification of an interferon transcriptional signature at day 1 post-vaccination, correlating with protection, as well as the identification of distinct interferon transcriptional signatures at days 3 and 14 post-vaccination correlating, at these time points, with reduced or lack of protection of study subjects from subsequent challenges with sporozoites delivered by insectarium-reared blood-feeding Anopheles hosting mature sporozoites in their salivary glands.\n\nWhen concluding your re-analysis and the putative model accounting for the contrasted interferon signatures monitored at day 1 and days 3 and 14 post RTS,S vaccination, please could you also consider another biologically sound variable, namely the potentially contrasted ratio profile of the Anopheles saliva molecules-binding IgE/IgG: indeed you are aware that infants and adults who durably share the habitats of blood-feeding Anopheles spp are more frequently exposed to Anopheles females that do not act as Plasmodium sporozoite vectors but that deliver saliva in the dermis, a site where many saliva-derived agonists are rapidly sensed by immune sensors, the T cell receptors and the membrane Ig receptors included; such features are reflected by the ability to detect saliva-reactive T cells and saliva-reactive Ig molecules in the blood at very early time points post either skin probing or blood sampling by blood-feeding insects. Please note that the blood-feeding Anopheles does not transmit malaria – the latter term is expected to depict only the symptoms that occur over the asexual P. falciparum intra-erythrocytic developmental program: the blood-feeding Anopheles delivers saliva while transmitting sporozoites that will further reach the hepatocytes, where the generation of P. falciparum merozoites that are invasive for erythrocytes proceeds. Please could you review the Fig 3 legend: \"Figure 3. 
Proposed immunological mechanism determining protection – or lack thereof – in adult human individuals to whom the RTS,S vaccine was inoculated. Our model infers that the interferon signatures observed on days 1, 3 and 14 post-vaccination, correlating with the outcome of the processes that deploy post the co-delivery of Anopheles saliva and P. falciparum sporozoites, are the result of engagement of Fc receptors by immune complexes. According to our model no interferon signatures should be observed following administration of the first dose of the CSP-containing RTS,S vaccine, in the absence of pre-existing immune effectors/regulators reactive to the P. falciparum Circum Sporozoite Protein/CSP. The injection of the first two doses of vaccine should elicit a humoral response, which in non-protected individuals is dominated by CSP-binding IgE rather than IgG. CSP-Ig immune complexes should form when the third vaccine dose is administered. The transient interferon response elicited in individuals who develop a protective response, and that we observed at Day 1, could be mediated by engagement of the FcγR by IgG-CSP immune complexes, as has been described earlier in the context of influenza vaccination22. Our model predicts that IgE-CSP complexes form in non-protected individuals and cross-link the high affinity IgE receptor at the surface of cells of the myeloid lineage. FCER1 engagement would in turn mediate the reduction in levels of IFN-inducible (IFI) transcripts that is observed on days 3 and 14. This reduction of constitutive levels of IFN-inducible transcripts in the non-protected group would be countered at least partially on Day 1 by a residual IgG-IC response in subjects displaying mixed IgE/IgG humoral responses. If this is indeed the case, the levels of M1.2 reduction on Day 1 in individuals displaying CSP-binding IgE should be inversely correlated with levels of CSP-specific IgG.\" Many thanks for your attention",
"responses": [
{
"c_id": "2821",
"date": "18 Jul 2017",
"name": "Damien Chaussabel",
"role": "Author Response",
"response": "We thank you very much for your time and valuable comments. All your edits have been incorporated in the legend of Figure 3. At the end of the conclusion we added the following sentence to reflect the suggestion that you have made, which is made all the more relevant by the data that we have generated and are now presenting in the second version of this manuscript. \"Furthermore, as pointed out by Dr Milon in her comments, extending the investigation to include profiling of IgE which are specific for antigen present in the saliva of the vector is also warranted [REF].\" REF: Milon G. Referee Report For: Blood Interferon Signatures Putatively Link Lack of Protection Conferred by the RTS,S Recombinant Malaria Vaccine to an Antigen-specific IgE Response [version 1; referees: 1 approved, 1 approved with reservations]. F1000Research 2015, 4:919"
}
]
},
{
"id": "11338",
"date": "08 Feb 2016",
"name": "Adrian J. F. Luty",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nOver the last 3 decades or so the development and testing of candidate vaccines for malaria has seen numerous advances but an almost equal number of setbacks. RTS,S (aka Mosquirix) stands apart, for now, as the only such vaccine to have advanced as far as registration. Whilst that is clearly an achievement to be applauded, there nevertheless remain substantial gaps in our basic understanding of the immunological mechanisms involved in the protection conferred by the vaccine. The study reported here by Darawan Rinchai and colleagues represents an attempt to fill some of the gap in that knowledge. The source data that Rinchai and colleagues were able to access for their analyses were derived from an original study authored by Kester and colleagues1 that comprised a comparison of outcomes, including efficacy, of two formulations of RTS,S in different adjuvants administered to adult volunteers. Notably, that study documented higher titres of CSP-specific IgG as well as higher frequencies of 'activated' CSP-specific CD4+ T cells and IFN-γ-producing T cells in individuals who were protected from an experimental challenge infection with sporozoites compared to the study individuals who were not protected. The results of an adjunct to Kester and colleagues' study were published subsequently by Vahey and colleagues2. 
That study documented gene expression analyses of peripheral blood mononuclear cells isolated from the volunteers at different time-points and identified - through the use of high-resolution geneset analysis - immunoproteasome processing of MHC peptides as predictively distinguishing protected from unprotected individuals. Rinchai and colleagues' analytical approach employed a 'modular' method that they had previously developed 'in-house' and had used in other studies of vaccine-induced gene profiles. Here they used the same approach to assess gene expression profiles in the dataset generated by Vahey and colleagues. On that basis it would have been instructive for them to demonstrate the validity of their approach by assessing the immunoproteasome geneset identified by Vahey and colleagues in the context of its predictive capacity. Also, they need to justify the rationale for the module filtering procedure that discarded any displaying changes <15%. On what basis was that threshold chosen? In passing, for accuracy they also need to modify their repeated incorrect reference to 'blood' proteome to the correct 'peripheral blood mononuclear cell' proteome. Rinchai and colleagues' principal finding concerning the association between upregulated expression of an 'interferon module' response and protection conferred by RTS,S appears to be consistent with the enhanced T cell responses associated with protection reported by Kester and colleagues in the same individuals. It would also be consistent with the known role for IFN-α in enhancement of naive B cell differentiation3. My main concern in the context of their discussion and conclusions relates to their speculative interpretation for the under-expression/down-regulation of the interferon module response they recorded both 3 & 14 days after the third RTS,S immunization in non-protected individuals. They contend that that particular profile is most likely to be linked to an aberrant IgE rather than IgG response to CSP. 
They conclude that this would result in IgE-CSP immune complex-mediated cross-linking of IgE receptors on myeloid cells that would in turn mediate the type of down-regulation of pDC activity they observed in the non-protected individuals. Whilst this may indeed be a plausible explanation for their observations, it is far from being the only one, and the authors need to recognize this. Firstly, it is of possibly central relevance here that RTS,S is a hybrid molecule comprising the CSP sequence conjugated to a 226 amino acid stretch of hepatitis B surface antigen (HBsAg). The same portion of HBsAg is co-expressed in unconjugated form along with the conjugate such that the purified vaccine spontaneously forms virus-like particles. Importantly, HBsAg has been shown to inhibit IFN-α production by pDC4, implying that the 'non-malaria' component of RTS,S could thus potentially be exerting some influence on the outcome of immunization in terms of the anti-malarial protection generated. Secondly, a role for IgG-CSP immune complex-mediated inhibition of IFN-α production could also be envisaged through their interaction with the inhibitory FcγRIIB receptor expressed on pDC5. Thus, I feel, the authors need to modify their article to incorporate discussion of these alternative, but not necessarily mutually exclusive explanations for their findings. In the context of their preferred conclusion, I wonder if it might not also be a possibility that they re-interrogate their dataset to try to identify possible changes in relevant 'modules' related to IgE/FCERI expression levels? Finally, I also suggest strongly that they remove references to and discussion of publications that have proposed a role for IgE in malarial morbidity. 
Disease manifestations due to infection with Plasmodium falciparum result from pathophysiological events associated with the asexual blood stage multiplication phase, reference to which in the context of putative antibody responses to a component of a vaccine that targets the pre-erythrocytic sporozoite stage is entirely erroneous in my view. In conclusion I would say that Rinchai and colleagues' findings certainly provide some pointers to elucidating the immunological mechanisms governing the capacity of RTS,S to induce protective responses in some individuals but not others. I would nevertheless add the rider that the vaccine's capacity for induction of immunological responses in malaria non-exposed adults in North America may be quite far removed from its capacity to induce responses in sub-Saharan African newborns or infants, many of whom could have had some exposure to the pathogen or its products already either in utero or in early life. Such exposures could plausibly modulate their capacity to respond to vaccines in general and to a malaria vaccine in particular.",
"responses": [
{
"c_id": "2822",
"date": "18 Jul 2017",
"name": "Damien Chaussabel",
"role": "Author Response",
"response": "We would like to thank the reviewers for their time and constructive feedback. In this revised version, we have added a new result section and figure showing FCER1 transcript profiles in vaccinated individuals, as well as newly generated IgE profiling data measured in plasma samples of children from malaria endemic areas who received the RTS,S vaccine or a comparator vaccine. We have included below our response to the points that were raised in the review. Approved with Reservations ... Rinchai and colleagues' analytical approach employed a 'modular' method that they had previously developed 'in-house' and had used in other studies of vaccine-induced gene profiles. Here they used the same approach to assess gene expression profiles in the dataset generated by Vahey and colleagues. On that basis it would have been instructive for them to demonstrate the validity of their approach by assessing the immunoproteasome geneset identified by Vahey and colleagues in the context of its predictive capacity. Authors: Indeed, this is an important point. We assessed the overlap of the proteasome signature reported by Vahey et al. with our modular genesets. Six genes overlapped with module M5.13 and three with module M5.4 (Supplementary Figure 1). However, neither M5.13 nor M5.4 were identified as being associated with protection, or lack of protection following RTS,S vaccination. We added the paragraph below in the result section “Decreased interferon module response at days 3 and 14 correlate with lack of protection” and show results in supplementary figure 1 “Notably, we found that the genes constitutive of M1.2, M3.4 and M5.12 do not overlap with the day 14 immunoproteasome signature (32 genes) described by Vahey et al (Supplementary figure 1). All but three of the 32 genes mapped to the modules constituting our repertoire. Six genes mapped to module M5.13 and three to module M5.4, the remaining 23 genes mapped to 18 other modules. 
However, none of the genes mapped to any of the three interferon modules identified as associated with protection by RTS,S vaccine at Day 1, 3 or 14 after the third vaccination. Notably, IFNG, which was part of the 32 gene signature, is among the genes constituting M3.6, which is associated with NK cells/Cytotoxic activity.” Also, they need to justify the rationale for the module filtering procedure that discarded any displaying changes <15%. On what basis was that threshold chosen? Authors: We have added the sentence below in the manuscript to clarify this point; “This arbitrary cutoff is set at three times the false discovery rate used for multiple testing correction (5%), which allows effective filtering of false positive results.” In passing, for accuracy they also need to modify their repeated incorrect reference to 'blood' proteome to the correct 'peripheral blood mononuclear cell' proteome. Authors: The manuscript was edited accordingly, with reference now made to “Peripheral blood mononuclear cell transcriptome….” throughout. Rinchai and colleagues' principal finding concerning the association between upregulated expression of an 'interferon module' response and protection conferred by RTS,S appears to be consistent with the enhanced T cell responses associated with protection reported by Kester and colleagues in the same individuals. It would also be consistent with the known role for IFN-α in enhancement of naive B cell differentiation3. My main concern in the context of their discussion and conclusions relates to their speculative interpretation for the under-expression/down-regulation of the interferon module response they recorded both 3 & 14 days after the third RTS,S immunization in non-protected individuals. They contend that that particular profile is most likely to be linked to an aberrant IgE rather than IgG response to CSP. 
They conclude that this would result in IgE-CSP immune complex-mediated cross-linking of IgE receptors on myeloid cells that would in turn mediate the type of down-regulation of pDC activity they observed in the non-protected individuals. Whilst this may indeed be a plausible explanation for their observations, it is far from being the only one, and the authors need to recognize this. Firstly, it is of possibly central relevance here that RTS,S is a hybrid molecule comprising the CSP sequence conjugated to a 226 amino acid stretch of hepatitis B surface antigen (HBsAg). The same portion of HBsAg is co-expressed in unconjugated form along with the conjugate such that the purified vaccine spontaneously forms virus-like particles. Importantly, HBsAg has been shown to inhibit IFN-α production by pDC4, implying that the 'non-malaria' component of RTS,S could thus potentially be exerting some influence on the outcome of immunization in terms of the anti-malarial protection generated. Secondly, a role for IgG-CSP immune complex-mediated inhibition of IFN-α production could also be envisaged through their interaction with the inhibitory FcγRIIB receptor expressed on pDC5. Thus, I feel, the authors need to modify their article to incorporate discussion of these alternative, but not necessarily mutually exclusive explanations for their findings. In the context of their preferred conclusion, I wonder if it might not also be a possibility that they re-interrogate their dataset to try to identify possible changes in relevant 'modules' related to IgE/FCERI expression levels? 
Authors: These comments have contributed significantly to the discussion and interpretation of the IgE profiling data presented in this revised manuscript: With regards to the first point made in the comment above: “An alternative explanation is advanced by Dr Luty in his comments, who mentions the potential relevance of the use of a 226 amino acid stretch of HBsAg as a conjugate for the CSP antigen in the RTS,S vaccine. Indeed this molecule has also been shown to inhibit Toll-like Receptor (TLR)9-mediated IFN-α production by pDC [Shi B., et al., PLoS One. 2012], and “could thus potentially be exerting some influence on the outcome of immunization in term of the anti-malarial protection generated” [Luty AJF., Referee report., F1000Research 2015, 4:919]. This is indeed another possibility, which warrants testing of the inhibitory capacity of the protein fragment used in RTS,S vaccine formulation. Furthermore the cell populations and signaling pathways involved in elicitation and modulation of interferon responses by RTS,S vaccine would also need to be investigated. It was found for instance that, rather unexpectedly, the induction of interferon response by the trivalent influenza vaccine is mediated by immune-complexes rather than TLR engagement [O'Gorman WE., et al., Vaccine. 2014].” With regards to the second point made in the comment above: “At Dr Luty’s suggestion we also examined levels of abundance of FCER1 subunits transcripts in the RTS,S vaccine dataset used in our analysis (Figure 4). Consistently with what is observed during acute malaria infection (GSE34404) [22949651], FCER1A and FCER1G levels were respectively increased and decreased one day post-RTS,S vaccination. However, notably, FCER1G levels remained significantly elevated at day 3 post-vaccination in non-protected individuals, while they decreased to baseline levels in protected individuals (Figure 4D). 
Abundance of FCER1G transcript was also significantly elevated in a third dataset where changes were measured in patients during episodes of asthma exacerbation (GSE24745) [22316092]. Moreover, we also checked the expression levels of FCGR2A and FCGR2B in the same dataset and found that their abundance did not differ between subjects who were protected and those who were not.” Finally, I also suggest strongly that they remove references to and discussion of publications that have proposed a role for IgE in malarial morbidity. Disease manifestations due to infection with Plasmodium falciparum result from pathophysiological events associated with the asexual blood stage multiplication phase, reference to which in the context of putative antibody responses to a component of a vaccine that targets the pre-erythrocytic sporozoite stage is entirely erroneous in my view. Authors: The IgE profiling results that have been newly generated illustrate the point made by Dr Luty. They show a weak association between elevated baseline levels of CSP-specific IgEs and subsequent malaria morbidity, but a much stronger association for MSP-specific IgEs. These findings make the point raised and citations in question all the more relevant. The discussion points have been modified accordingly: “Further evidence pointing to the influence of IgE responses comes from the literature and from results of our preliminary investigation reported below, showing that both the stage and the antigen specificity of IgE responses influence outcomes of malaria infection in children from endemic areas. Perlmann et al. identified IgE as a pathogenic factor in malaria, with immune complexes contributing to excess TNF induction in PBMC in vitro 26. Mice deficient for the high affinity IgE receptor have shown increased resistance to the blood stage of parasite infection, specifically implicating FCER1 expressing neutrophils as pathogenic mediators 27. 
A more recent study has established a link between asthma and atopic dermatitis and delayed development of clinical immunity to P. falciparum 28. Notably, in addition to shifting cytokine balance by promoting IL-10 and TNF production, engagement of high affinity IgE receptors has been reported to critically impair phagocytic function of monocytes, a mechanism that is essential for the control of malaria infection 29. The antigen specificity of IgE responses that have been associated with enhanced malaria morbidity remains to be determined. It is of relevance to RTS,S vaccination, at least in naïve subjects, since the vaccine targets the liver stage rather than the blood stage of the parasite.”"
}
]
}
] | 1
|
https://f1000research.com/articles/4-919
|
https://f1000research.com/articles/6-724/v1
|
18 May 17
|
{
"type": "Review",
"title": "The advantage of channeling nucleotides for very processive functions",
"authors": [
"Diana Zala",
"Uwe Schlattner",
"Thomas Desvignes",
"Julien Bobe",
"Aurélien Roux",
"Philippe Chavrier",
"Mathieu Boissan",
"Uwe Schlattner",
"Thomas Desvignes",
"Julien Bobe",
"Aurélien Roux",
"Philippe Chavrier"
],
"abstract": "Nucleoside triphosphate (NTP)s, like ATP (adenosine 5’-triphosphate) and GTP (guanosine 5’-triphosphate), have long been considered sufficiently concentrated and diffusible to fuel all cellular ATPases (adenosine triphosphatases) and GTPases (guanosine triphosphatases) in an energetically healthy cell without becoming limiting for function. However, increasing evidence for the importance of local ATP and GTP pools, synthesised in close proximity to ATP- or GTP-consuming reactions, has fundamentally challenged our view of energy metabolism. It has become evident that cellular energy metabolism occurs in many specialised ‘microcompartments’, where energy in the form of NTPs is transferred preferentially from NTP-generating modules directly to NTP-consuming modules. Such energy channeling occurs when diffusion through the cytosol is limited, where these modules are physically close and, in particular, if the NTP-consuming reaction has a very high turnover, i.e. is very processive. Here, we summarise the evidence for these conclusions and describe new insights into the physiological importance and molecular mechanisms of energy channeling gained from recent studies. In particular, we describe the role of glycolytic enzymes for axonal vesicle transport and nucleoside diphosphate kinases for the functions of dynamins and dynamin-related GTPases.",
"keywords": [
"Glycolysis",
"oxidative phosphorylation",
"bioenergetics",
"ATP",
"GTP",
"dynamin",
"nucleoside diphosphate kinase",
"creatine kinase"
],
"content": "Introduction\n\nOne hundred years ago, Michaelis and Menten described the enzyme kinetics of invertase, which today still forms the basis of a model describing the kinetic properties of many enzymes [republished in 1]. However, this model of the kinetics of enzyme reactions in vitro may be not always be applicable to those in vivo2. Assumptions that the concentration of substrates and enzymes is large, that the cytosol is a homogeneous aqueous solution, and that diffusion is not a limiting factor, for example, are unlikely to be valid in vivo.\n\nAs early as 1929, the Nobel prize winner F. G. Gowland Hopkins recognised that the cell is not “just a bag of enzymes”3. Today, it is accepted that the exact cellular location of a protein is crucial for its function4,5; however, the view that enzymes and metabolites often do not behave as if they were freely diffusible in solution took quite some time to become widely accepted, mostly due to the lack of suitable methods for the study of subcellular organisation and its functional consequences. In fact, the highly heterogeneous and structured intracellular space imposes various limitations on the diffusion even of small metabolites such as adenine or guanine nucleotides. Notably, among these intracellular spaces, the high viscosity of the intracellular medium6–8 is very rich in various macromolecules (resulting in ‘macromolecular crowding’) and densely packed with bulky structures, like components of the cytoskeleton and membrane systems9–11.\n\nHere, we will first introduce the classical thermodynamical model that determines the free energy released from nucleotide hydrolysis, and then discuss the functional consequences when enzymes are not homogeneously distributed in the cell, but associate with subcellular compartments. 
Although a very simple and intuitive concept, the notion of local energy transfer is somewhat controversial: we will explain this concept of ATP (adenosine 5’-triphosphate) and GTP (guanosine 5’-triphosphate) channeling between a site where these nucleotides are produced and a close second site where they are consumed. This energy transfer, called energy channeling, may be used for several cellular functions to enable a rapid and specific response to high and fluctuating energy requirements. The main purpose of this review is to provide a clear and precise understanding of energy channeling, with an emphasis on recent examples of ATP channeling by glycolytic enzymes to ATPases (adenosine triphosphatases) and GTP channeling by nucleoside diphosphate kinases (NDPKs), in particular, to dynamin and dynamin-related GTPases (guanosine triphosphatases).\n\n\nWhy is ATP the main high-energy molecule used by the cell?\n\nAll cells transform chemical energy into biological work. The three main kinds of biological work are: mechanical work (such as the beating of cilia, muscle contraction, and movement of chromosomes during cell division), transport work (such as pumping substances across membranes against the direction of spontaneous movement), and chemical work that drives thermodynamically unfavourable reactions (such as the synthesis of polypeptides and nucleic acids). In most cases, the source of chemical energy that powers biological work is ATP, the predominant form of chemical energy in all living cells12. ATP is composed of the nitrogenous base, adenine, the five-carbon sugar ribose, and a chain of three phosphate groups. Energy is stored in the covalent bonds between phosphate groups. The hydrolysis of ATP to ADP (adenosine diphosphate) and Pi (inorganic phosphate) is a strongly exergonic reaction, i.e. it releases a large amount of energy (called Gibbs free energy, ΔG), which is used to perform much of the biological work described above12–14. 
The other nucleoside triphosphates (NTPs) have chemical properties similar to those of ATP, but they are used for different tasks in the cell: GTP, which has a guanine base in the place of the adenine in ATP, is important in protein synthesis as well as in signal transduction through G proteins and in tubulin polymerisation15, whereas UTP (uridine 5’-triphosphate) and CTP (cytidine 5’-triphosphate) are used in polysaccharide and phospholipid synthesis, respectively. ATPases and GTPases are the main classes of enzyme that use the Gibbs free energy ΔG of nucleotide hydrolysis. ATPases mostly convert this energy into mechanical force or ion gradients, whereas GTPases often act as molecular switches that use cycles of GTP binding and hydrolysis.\n\nThe standard Gibbs energy (ΔG0’) released by hydrolysis of ATP or GTP is –30.5 kJ/mol (–7.3 kcal/mol) at pH 7.0, 25°C, 1 bar pressure, and concentrations of reactants and products of 1 M. However, in the cell, the concentrations of ATP and GTP, ADP and GDP, and Pi are all different from each other and much lower than 1 M16; cellular pH and temperature may also differ from the standard conditions. Thus, the ΔG of hydrolysis of ATP and GTP under intracellular conditions differs from the standard ΔG0’ 17. Under intracellular conditions, this ΔG is given by the following relationship:\n\nΔGNTP = ΔG0’ NTP + RT ln ([NDP][Pi]/[NTP])\n\nSince the bulk concentrations of NDPs, NTPs and Pi differ, depending on the nucleotide and cell type, the ΔG for hydrolysis of each NTP must also vary. Furthermore, ΔGNTP will change in space and time depending on the metabolic conditions of the cell, which modify the global and/or local nucleotide concentrations. Thus, it is difficult to calculate universal ΔGNTP values in vivo. In general, the intracellular concentration of ATP is about 1.5–4.5 mM and ADP is less than 100 μM, GTP is 100–200 μM and GDP 10–20 μM, and the intracellular concentrations of Pi are similar to those of ATP16. 
For simplicity, we may consider a cell containing 1 mM ATP, 100 μM ADP, 100 μM GTP, 10 μM GDP, and 1 mM Pi. Assuming these concentrations, a pH of 7.0 and a temperature of 25°C, we can calculate an in vivo ΔGATP = ΔGGTP = -53.35 kJ/mol (Figure 1) – much greater than the corresponding ΔG0’. Importantly, GTP is bioenergetically equivalent to ATP, and the ΔG associated with the hydrolysis of CTP and UTP is also close to that of ATP and GTP.\n\nATPases and GTPases hydrolyse their NTP substrates to NDP and inorganic phosphate; both hydrolysis reactions liberate 53 kJ/mol Gibbs free energy. ATP, adenosine 5’-triphosphate; GTP, guanosine 5’-triphosphate; NTP, nucleoside triphosphate; NDP, nucleoside diphosphate; Pi, inorganic phosphate.\n\nIf the amount of energy released by hydrolysis of all NTPs is similar, one fascinating but unresolved question is why ATP rather than GTP, CTP or UTP became the cardinal high-energy intermediate of the cell. Indeed, ATP is the only NTP directly produced by oxidative phosphorylation in mitochondria (the primary source under aerobic conditions) and by glycolysis in the cytoplasm (under anaerobic conditions). It is continuously recycled; the human body contains 250 grams of ATP, on average, and the amount of ATP turned over per day corresponds approximately to body weight. By contrast, to be regenerated from NDPs, the other three NTPs require NDPKs and ATP or nucleoside monophosphate kinases and two molecules of NDP (generating NTP and NMP). As the cellular concentration of ATP is much higher than that of other NTPs, the reversible NDPK reaction is driven towards phosphoryl transfer from ATP to GDP, CDP, or UDP to form their corresponding NTPs. 
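The in vivo values quoted in the text follow directly from the relationship ΔGNTP = ΔG0’ + RT ln([NDP][Pi]/[NTP]). A minimal numerical sketch in Python, using the illustrative concentrations given above (the function and variable names are our own, not from the article):

```python
import math

R = 8.314e-3   # molar gas constant, kJ/(mol*K)
T = 298.15     # 25 degrees C, in kelvin
DG0 = -30.5    # standard Gibbs energy of NTP hydrolysis, kJ/mol

def dg_hydrolysis(ntp, ndp, pi):
    """In vivo Gibbs energy of NTP -> NDP + Pi; concentrations in mol/L."""
    return DG0 + R * T * math.log(ndp * pi / ntp)

# Illustrative cellular concentrations from the text:
# 1 mM ATP, 100 uM ADP, 100 uM GTP, 10 uM GDP, 1 mM Pi
dg_atp = dg_hydrolysis(ntp=1e-3, ndp=100e-6, pi=1e-3)
dg_gtp = dg_hydrolysis(ntp=100e-6, ndp=10e-6, pi=1e-3)

print(f"dG(ATP) = {dg_atp:.1f} kJ/mol")
print(f"dG(GTP) = {dg_gtp:.1f} kJ/mol")
```

Both concentration ratios [NDP][Pi]/[NTP] equal 10⁻⁴ here, which is exactly why ΔGATP and ΔGGTP coincide; the small deviation from the -53.35 kJ/mol quoted above (this sketch gives about -53.3 kJ/mol) comes only from the rounding of R and ΔG0’.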
Although NDPKs are considered non-specific with respect to the base moiety of acceptor nucleotides, guanine nucleotides are their best substrates, whereas cytosine nucleotides are the poorest in terms of both Km and kcat18,19.\n\nThus, ATP may be the dominant energy fuel of the cell simply because most biosynthetic pathways evolved to generate it. This suggests that ATP was the first nucleotide to appear during evolution, and that the much higher cellular concentration of ATP as compared to GTP and other NTPs may have been sufficient for ATP to become the universal energy carrier. Even if ATP and GTP have the same standard ΔG0, and a similar ΔG of hydrolysis at given cellular conditions, enzymes may favour the more highly concentrated ATP for reasons of accessibility, kinetics and reserve.\n\n\nChanneling: A smart strategy to maximize efficiency\n\nThe notion of channeling of a substrate or metabolic intermediate describes its direct delivery from one enzyme to another, or more precisely from one active site to another, without dissociation (‘tight’ channeling) or only minor dissociation (‘leaky’ channeling) into the bulk solution (Figure 2)20. This requires spatial proximity between the participating enzymes, as it occurs in multifunctional enzymes or kinetically stable multienzyme complexes, but also in more dynamic, reversible enzyme complexes or by colocalisation on subcellular particles or biological membranes. Channeling can be considered a general mechanism to increase the efficiency of sequential reactions in a metabolic pathway or as a form of metabolic compartmentation within the cell21,22. Therefore, the transferred metabolite is out of the diffusion equilibrium, resulting in a reaction that is more rapid and efficient than if the enzymes were randomly distributed in the cytosol23. Substrate channeling may also protect a metabolite from being consumed by competing reactions catalysed by other enzymes. 
In addition, by overcoming the reaction equilibrium, substrate channeling creates a unidirectional flux. The physical transfer from one site to another can occur in several ways, e.g. by molecular tunneling, where the substrate moves through a ‘tunnel’ in the protein connecting two active sites; by an electrostatic ‘highway’ that guides a charged substrate from one active site to another, or by substrate attachment to a flexible protein ‘arm’ that moves between several active sites20,24,25. Furthermore, several consecutive enzymes of a metabolic pathway can join together in a transient complex to channel substrates between them. Such a supercomplex, coined ‘metabolon’ by Paul Srere over 30 years ago26, can be found for Krebs cycle enzymes27,28 or demonstrated in vitro by tethering the sequential enzymes of glycolysis on a surface29.\n\na: The sixth step of glycolysis is catalysed by GAPDH, which adds a phosphate group at position one of glyceraldehyde 3-phosphate to produce the intermediate 1,3 bisphosphoglycerate and NADH, H+. This reaction is reversible. The intermediate product and ADP are then transformed by PGK in the seventh step of glycolysis, to produce 3-phosphoglycerate and ATP. b: GAPDH and PGK can associate. In this case, the intermediate product, 1,3 bisphosphoglycerate, is channeled between the two enzymes resulting in a unidirectional reaction. GAPDH, glyceraldehyde 3-phosphate dehydrogenase; Pi, inorganic phosphate; NADH,H+ nicotinamide adenine dinucleotide; ADP, adenosine diphosphate; ATP, adenosine 5’-triphosphate; PGK, phosphoglycerate kinase.\n\nThe ten enzyme-catalysed steps of glycolysis that convert glucose to pyruvate are illustrated. The enzymes shown in red consume ATP, those shown in green produce ATP or NADH and H+, and those shown in yellow are energetically neutral. In the preparatory phase, energy is invested (-2ATP) and in the payoff phase energy is produced (+4ATP +2NADH, H+), as indicated on the right. 
ADP, adenosine diphosphate; ATP, adenosine 5’-triphosphate; NADH,H+ nicotinamide adenine dinucleotide.\n\nA good example of substrate channeling is the coupled reaction between the sixth and the seventh steps of glycolysis, which is catalysed by glyceraldehyde 3-phosphate dehydrogenase (GAPDH) and phosphoglycerate kinase (PGK) (Figure 2). The finding that phosphoryl exchange between these enzymes is unidirectional provided the first indication that these enzymes may be involved in substrate channeling30,31. The interaction between GAPDH and PGK was subsequently confirmed by fluorescence resonance energy transfer and by coimmunoprecipitation32. In this example, the intermediate glycolytic substrate 1,3-bisphosphoglycerate is channeled from GAPDH to PGK in an enzyme–substrate–enzyme complex without its release into the cytosol33. The complex formed by GAPDH and PGK can thus be considered an ATP production module (Figure 2).\n\n\nIncreasing efficiency with ATP channeling\n\nBy analogy with substrate channeling, we refer here to energy channeling as the process whereby phosphonucleotides, like ATP or GTP, are directly transferred between two proteins, one providing them (e.g. enzymes or transporters) and one consuming them (e.g. molecular motors or ion pumps), without full equilibration of these phosphonucleotides with the nucleotide pools of the surrounding medium.\n\nThe first evidence of such direct energy transfer was reported in 1987 by Aflalo and colleagues, who immobilised pyruvate kinase (PK; which catalyses the last step of glycolysis to produce ATP; Figure 3) and hexokinase (HK; which catalyses the first step of glycolysis and consumes ATP; Figure 3) together on beads. They showed that the accessibility of ATP depends on whether these enzymes are bound together on beads or are in the soluble fraction34. Thus, the ATP that is formed close to the immobilised enzymes does not rapidly equilibrate with the ATP pool in the bulk solution. 
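The kinetic payoff of coupling a producing and a consuming module can be illustrated with a toy model (all rate constants below are invented for illustration, not measured values): when the intermediate must accumulate in the bulk before the downstream enzyme reaches its working rate, the final product appears only after a transient lag, whereas direct hand-off eliminates that lag.

```python
# Toy comparison of a bulk-diffusing vs a directly transferred intermediate.
# All rate constants are illustrative, not measured values.
V1 = 1.0      # rate of the first enzyme (intermediate production), uM/s
VMAX2 = 5.0   # Vmax of the second enzyme, uM/s
KM2 = 50.0    # Km of the second enzyme for the intermediate, uM
DT = 0.01     # Euler integration step, s

def simulate(t_end, channeled):
    """Return final product formed after t_end seconds."""
    b = 0.0   # bulk intermediate concentration, uM
    c = 0.0   # final product, uM
    t = 0.0
    while t < t_end:
        if channeled:
            dc = V1            # direct hand-off: product forms at the production rate
            db = 0.0
        else:
            v2 = VMAX2 * b / (KM2 + b)   # Michaelis-Menten consumption from the bulk
            db = V1 - v2
            dc = v2
        b += db * DT
        c += dc * DT
        t += DT
    return c

free = simulate(60.0, channeled=False)
coupled = simulate(60.0, channeled=True)
print(f"product after 60 s: free = {free:.1f} uM, channeled = {coupled:.1f} uM")
```

With these made-up constants, the bulk intermediate has to climb towards Km·V1/(Vmax − V1) ≈ 12.5 µM before production and consumption balance, so roughly that much output is "lost" to filling the bulk pool: the transient time that channeling removes.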
This experiment indicates that, even in vitro, the ATP produced by an enzyme is preferentially used by enzymes in close proximity and that energy channeling may be induced simply by the association of complementary enzymes. It also suggests that energy channeling might be a general strategy to accelerate reactions. In the reverse direction, the products ADP or GDP – which are at least ten times less abundant than ATP and GTP – would be transferred directly from the ATPase or GTPase module back to the ATP- or GTP-generating module.\n\na: The ATP-production module (green) uses the energy of a substrate to convert ADP to ATP, whereas the ATPase module (red) uses the high energy of the phosphoryl bond of ATP to liberate 53 kJ/mol to perform a cellular function. b: As in substrate channeling (see Figure 2b), the association of these two modules results in energetic channeling of ATP from its site of production to its site of consumption without its release into the bulk phase. c: Energetic channeling may also involve ADP and Pi. ADP, adenosine diphosphate; ATP, adenosine 5’-triphosphate; Pi, inorganic phosphate.\n\nBioenergetics provides an exemplary case of highly structured metabolism. Generation and consumption of ATP often occur at specific cellular sites and at very high and/or fluctuating turnover rates. Since the ΔG available for the ATPase reaction depends on the [ATP]/[ADP] ratio (see above), both ATP availability and removal of ATPase reaction products (ADP and Pi) can become a limiting factor35,36. Thus, ‘microcompartments’ have evolved in which ATPases associate with the components necessary for immediate ATP resynthesis from ADP and Pi (Figure 4). These microcompartments may range in size from multiprotein or proteolipid complexes, where more or less tight metabolite channeling can occur21,22,37, to cellular domains with preferential directions for intracellular diffusion, as in oxidative muscle cells. 
These microcompartments have also been referred to as ‘intracellular energetic units’38,39.\n\nLocal regeneration of ATP for channeling to ATPases (Figure 5a) has been shown, for example, for creatine kinase (CK), which uses a highly concentrated ‘high energy’ intermediate, phosphocreatine (PCr), to regenerate ATP, and for glycolytic enzymes, which directly generate ATP. These glycolytic enzymes, which are small, globular proteins of only a few nanometers in diameter, are found associated with macromolecular complexes, cytoskeletal networks, and membranes. This ubiquitous occurrence of channeling modules suggests that local generation of ATP and GTP is a general principle driving many cellular functions, such as membrane trafficking, actin cytoskeleton assembly, molecular pumps, and the beating of flagella and cilia, all of which use processive molecular machines.\n\na: Model of the energetic channeling between an ATP production module (green) in close proximity to an ATPase module (red). ADP and a high energy substrate are converted by the ATP production module to a low energy product and ATP. The ATP (green dot) and ADP (red dot) channel between the two modules (red and green arrows) to fuel a cellular function. Note that in the following panels, ATP, ADP and most of the substrates and products are removed to highlight the energetic channeling. b: In the mitochondrion, CK (purple) is bound to the IMM through its interaction with the anionic phospholipid cardiolipin, where it comes into close proximity with the ANT. CK uses the ATP exported by ANT to generate PCr, which is exported from the mitochondrion by the VDAC. In the cytosol, CK uses PCr to channel ATP directly to the Na+/K+-ATPase in the PM. Thus, CK functions as an ATP production module in the PM and as an ATPase module in the mitochondrion.
c: Coupling of the ATP-exporting VDAC and ANT in mitochondria to the ATP-consuming enzyme HK1 in the cytosol fuels the first step of the preparatory phase of glycolysis: conversion of glucose to glucose 6-phosphate. d: Coupling of the cytosolic ATP-producing module GAPDH–PGK, or the ATP-producing enzyme PK, to the H+-ATPase in the membrane of synaptic vesicles at the presynapse fuels the transport of protons into the vesicles. e: On-board coupling of the ATP-producing module GAPDH–PGK to a molecular motor enables fast axonal transport along microtubules. f: Coupling of the ATP-producing module GAPDH–PGK to the Na+/K+-ATPase pump in the plasma membrane of red blood cells fuels ion transport to maintain cell shape. ADP, adenosine diphosphate; ATP, adenosine 5’-triphosphate; PM, plasma membrane; IMM, inner mitochondrial membrane; OMM, outer mitochondrial membrane; CK, creatine kinase; ANT, adenine nucleotide transporter; PCr, phosphocreatine; VDAC, voltage-dependent anion channel; HK1, hexokinase I; PK, pyruvate kinase; PGK, phosphoglycerate kinase; GAPDH, glyceraldehyde 3-phosphate dehydrogenase.\n\n\nCreatine kinase isoforms establish an energy shuttle\n\nPossibly one of the best-studied examples of ATP channeling in bioenergetics is the CK system, which has become a paradigm for the compartmentalisation of energy metabolism. In this review, only some well-examined examples will be described; further exhaustive information can be found in a number of excellent reviews31,35,40–47.\n\nCK is a key player in maintaining cellular energy homeostasis by reversible phosphoryl transfer between ATP and PCr in the reaction:\n\nPCr+MgADP ⇆ Cr+MgATP\n\nPCr is an alternative energy carrier that, when compared to ATP, is metabolically inert (except for the CK reaction), much smaller and less charged over the physiological pH range, and thus significantly more diffusible than ATP.
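The stake of keeping a high [ATP]/[ADP] ratio near ATPases, noted above, can be made concrete with a short back-of-the-envelope calculation. The concentrations used below are illustrative assumptions, not measured values:

```python
import math

R = 8.314e-3   # gas constant, kJ/(mol*K)
T = 310.0      # physiological temperature, K
DG0 = -30.5    # standard transformed free energy of ATP hydrolysis, kJ/mol

def atp_hydrolysis_dg(atp, adp, pi):
    """Return dG = dG0' + RT*ln([ADP][Pi]/[ATP]); concentrations in mol/L."""
    return DG0 + R * T * math.log(adp * pi / atp)

# Assumed resting cytosol: 5 mM ATP, 50 uM ADP, 5 mM Pi
dg_high_ratio = atp_hydrolysis_dg(5e-3, 50e-6, 5e-3)    # about -56 kJ/mol

# If local ADP accumulates tenfold, each hydrolysis yields less energy
dg_low_ratio = atp_hydrolysis_dg(5e-3, 500e-6, 5e-3)    # about -50 kJ/mol
```

The first figure is of the same order as the ~53 kJ/mol quoted earlier; the difference between the two cases is exactly RT·ln(10) ≈ 5.9 kJ/mol per tenfold change in the ratio, which is the energetic rationale for CK-based buffering of [ADP] close to ATPases.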
In a given cell type, at least one cytosolic isoform – a dimer – is coexpressed with a predominantly octameric mitochondrial isoform (mtCK): in muscle, for example, the cytosolic MCK isoform is coexpressed with a sarcomeric mtCK, whereas in brain the cytosolic BCK isoform is coexpressed with the ubiquitous mtCK isoform35,45. At the cellular level, CK isoforms have two main functions that probably appeared very early during metazoan evolution48. First, CKs build up a large cellular PCr pool that can be used to regenerate ATP when there is a mismatch between ATP generation and consumption (i.e. an energy buffer function). Second, and more important with regard to metabolite channeling, cytosolic and mtCK isoforms interact with protein and lipid partners at various subcellular sites close to ATP-providing and ATP-consuming reactions, and, together with PCr, constitute an energy shuttle that corrects for a spatial mismatch between ATP generation and consumption (i.e. an energy transfer function)47. The CK–PCr energy shuttle is particularly important for large, polar cells with high and/or fluctuating energy demands, such as skeletal and heart muscle or neuronal cells. It may occupy a specialised subcellular metabolic compartment, as in intracellular energetic units38,49. An important feature of these metabolic compartments is that they ensure efficient feedback regulation to stimulate oxidative phosphorylation and thus maintain metabolic stability in the form of high cytosolic [ATP]/[ADP] ratios close to ATPases. This ensures that maximal free energy is released from ATP hydrolysis.\n\nSolid evidence has accumulated for the existence of CK-containing multiprotein and proteolipid complexes, in which CK isoforms either interact directly or, more frequently, come in close proximity to ATP-delivering processes (oxidative phosphorylation, glycolysis) or ATP-consuming processes (motor proteins, ion pumps, etc.).
These channeling complexes also drive the reversible CK reaction predominantly in a given direction, i.e. towards a non-equilibrium state.\n\n\nATP channeling to and from creatine kinase\n\nProbably the best-described example of a channeling complex in which ATP is channeled to CK to drive the reaction towards PCr generation is found in mitochondria, the organelles that provide the bulk of ATP in cells that rely on oxidative metabolism [reviewed in 45]. In this example, mtCK is bound to the outer face of the inner mitochondrial membrane (IMM), facing the intermembrane space (Figure 5b) and the continuous cristae space50,51. High-resolution structures of mtCK isoforms have allowed a detailed analysis of their structure-function relationships45,52,53. Membrane interaction occurs between C-terminal positive charges of mtCK and negatively charged (anionic) phospholipids in the IMM, notably cardiolipin (CL), the IMM signature lipid54–56. Several CL molecules are also tightly bound to an IMM transmembrane protein, the adenine nucleotide transporter (or carrier, ANT)57. This obligatory antiporter exports ATP from the matrix where it is generated by oxidative phosphorylation, and imports ADP into the matrix to stimulate its rephosphorylation58. Due to their common high affinity for CL and their capacity to organise CL-rich membrane patches, mtCK and ANT come in very close proximity and form proteolipid complexes45,59,60. This proximity allows preferred metabolite exchange (Figure 5b), where mtCK uses mainly mitochondrial ATP provided by ANT, together with cytosolic Cr, to generate ADP and PCr35,51. The degree of this direct channeling depends on the species, the tissue and the physiological state61,62, but has been observed in many cell types (it is most pronounced in heart and skeletal muscle) and by means of several methods, including kinetic, radioisotopic and thermodynamic approaches51,54,63,64. 
The channeling between ANT and mtCK also preserves an adenylate pool within mitochondria that communicates only slowly with the cytosol65. In oxidative tissues, this makes PCr the preferred high-energy intermediate exported from mitochondria. Such export occurs via the voltage-dependent anion channel (VDAC), a regulated pore in the outer mitochondrial membrane (OMM)66,67. The portion of mtCK facing the intermembrane space also directly interacts with VDAC67, thus forming a tripartite complex of mtCK, ANT, and VDAC (Figure 5b). This complex establishes contact sites between the IMM and OMM and also allows preferential metabolite exchange between mtCK and VDAC, favoring Cr import from and PCr export to the cytosol (Figure 5b). The degree of metabolite channeling between mtCK and ANT, and also partially between mtCK and VDAC, thus controls the PCr flux out of mitochondria. Similar channeling, where ATP supply drives the CK reaction towards PCr generation, may occur in the cytosol in situations and tissues that favor glycolytic metabolism. Here, a subpopulation of cytosolic CK isoforms is associated with or binds close to glycolytic enzymes that generate ATP, such as pyruvate kinase47,68,69.\n\nCytosolic CK is also localised at or close to cellular ATPases, where constant use of ATP drives the CK reaction towards PCr consumption and ATP regeneration. Probably the best-described channeling of this type, again, occurs in muscle cells, where the cytosolic MCK isoform is localised, in part, at the M-line of myofibrils to supply ATP to the nearby myosin ATPases55,70. The M-line is part of a complex multiprotein structure in striated muscle that holds the myosin filaments in register and is not structurally altered during the contraction cycle. Here, MCK specifically interacts with the M-band proteins M-protein and myomesin71 and possibly also with myosin-binding protein C (MyBPC1)56.
These interactions occur by means of several negative charges that are specific to the MCK isoform and form a ‘clamp’, bridging the various interaction partners in the M-line72. The regenerated ATP can then easily reach the myosin ATPases, since diffusion of such small metabolites is highly anisotropic: it is facilitated in the direction of the myosin filaments, but hindered in the direction of the surrounding cytosol36,73.\n\nAnother fraction of the MCK isoform binds to the sarcoplasmic reticulum of muscle cells to fuel the Ca2+ pump SERCA, which consumes large amounts of ATP74–76. However, the nature of the molecular interactions involved in this case is less well studied than in the preceding examples. SERCA is essential for sequestration of Ca2+, which functions as an intracellular second messenger. The importance of ATP channeling between CK and SERCA is evident from mice with total CK deficiency, whose main phenotype is dysfunctional Ca2+ handling62,77. Similar fueling of the endoplasmic reticulum Ca2+ pump seems to occur in cell types that express the cytosolic BCK isoform43. For example, BCK-mediated Ca2+ homeostasis is also required in the hair cells of the inner ear, in particular for high-sensitivity hearing78. One determinant localising BCK to the endoplasmic reticulum Ca2+ pump is phosphorylation of this isoform at Ser6 by AMP-activated protein kinase79.\n\nA particular type of ATP channeling occurs in the electrocytes of the electric organ of electric fish, such as Torpedo. Their postsynaptic membranes contain many ion channels that allow sodium influx into the cell upon binding of acetylcholine, thus producing an electric discharge80. To restore intracellular resting conditions, a membrane-bound MCK orthologue and high intracellular PCr concentrations are necessary to fuel the very active Na+/K+-ATPase for rapid sodium extrusion out of the cell (Figure 3b)81. 
In vivo 31P-NMR saturation transfer measurements have provided direct evidence for ATP channeling between CK and the Na+/K+-ATPase80.\n\nFinally, the cytosolic BCK isoform engages in many other protein–protein interactions47. Their functional significance is less well studied, but many of them seem to involve ATP channeling. BCK colocalises with and fuels the gastric H+/K+-ATPase pump at the apical membrane and the membranes of the tubulovesicular system82. At the plasma membrane, BCK interacts with and activates the K+ and Cl- cotransporters KCC2 [also known as SLC12A5;83,84], and KCC3 [SLC12A6;85], although in this case no ATPase reaction is involved. Furthermore, BCK fuels actin-related functions, including actin polymerisation, formation of dynamic actin-based protrusions, and phagocytosis in macrophages76,86, as well as cell motility in astrocytes and fibroblasts87. The recruitment of BCK into these actin structures seems to depend on a C-terminal flexible loop of BCK86, although F-actin may not be the direct interaction partner87.\n\n\nMitochondria in the secret service of glycolysis\n\nGlucose is the major source of energy for most cells. It is metabolised by glycolysis in the cytoplasm, which can be divided into two phases (Figure 3): a preparatory phase, in which two molecules of ATP are consumed, and a payoff phase, in which four molecules of ATP are produced. Hence, the net positive yield from glycolysis is two molecules of ATP per molecule of glucose degraded. The end product of glycolysis, pyruvate, is then taken up by mitochondria to fuel the Krebs cycle and drive oxidative phosphorylation, which produces roughly 30 molecules of ATP per molecule of glucose consumed88. Thus, glucose metabolism provides two major sources of ATP for cellular functions: glycolysis and mitochondrial respiration. 
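The ATP bookkeeping behind these two sources can be written out explicitly, using the stoichiometries quoted in the text:

```python
# ATP bookkeeping per molecule of glucose (stoichiometries from the text)
atp_invested = 2        # preparatory phase: hexokinase + phosphofructokinase-1
atp_recovered = 4       # payoff phase: PGK and PK steps, x2 triose phosphates
net_glycolysis = atp_recovered - atp_invested   # net glycolytic yield

atp_oxphos = 30         # approximate yield of full oxidation (text: ~30 ATP)

print(net_glycolysis)                  # 2
print(atp_oxphos // net_glycolysis)    # 15
```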
Whereas the latter produces about 15 times more ATP, the former might be better suited for rapid and localised supply of energy in certain situations.\n\nGlycolytic enzymes are often referred to as sticky proteins because they are found in several subcellular fractions, and also often appear in yeast two-hybrid, coimmunofluorescence, protein pull-down, and coimmunoprecipitation assays89. Two-way coimmunoprecipitation analyses using endogenous proteins rather than overexpressed, tagged constructs are the ‘gold-standard’ approach to demonstrate a specific interaction but, unfortunately, such evidence is available only in a minority of studies. Thus, these interactions are usually considered non-specific and are often ignored. We argue, however, that the ubiquitous presence of glycolytic enzymes in preparations of protein complexes, membranes, and cytoskeletal elements supports the notion of generalised local energy production for many cellular functions related to membrane trafficking processes, molecular pumps, and flagellar and ciliary beating, which involve very processive molecular machines. Processive enzymes repeat their catalytic cycle and so perform multiple rounds of catalysis. If those enzymes are ATPases, they consume ATP at each round. Glycolytic enzymes may, in fact, be ‘glued’ where they are needed, so that processive enzymes can easily be fueled.\n\nThe two ATPase enzymes in the preparatory phase of glycolysis are HK and phosphofructokinase-1 (Figure 3). Both enzymes associate with mitochondria, suggesting possible energy channeling resulting in a direct supply of ATP from mitochondria to glycolytic enzymes90–92. HK catalyses the phosphorylation of glucose to glucose 6-phosphate and uses one ATP molecule93 (Figure 3). The HKI isoform is the most highly expressed of the four HKs, and is mainly found in brain, kidney, and red blood cells.
In mitochondria, VDAC in the OMM interacts with both HK and ANT (Figure 5c)94–96, thus providing a transfer pathway for ATP and ADP that connects the cytosol and the mitochondrial matrix. HK associates with VDAC on the cytoplasmic side of the channel and is, therefore, perfectly placed to receive ATP from mitochondria for the phosphorylation of glucose and to return the reaction product ADP to mitochondria96,97. This is an example of energy channeling in which ATP and ADP are channeled between two compartments, the mitochondrial matrix and the cytosolic face of mitochondria (Figure 5c). This type of channeling seems to be particularly important for cancer cells to maintain their high glycolytic rate97.\n\n\nGlycolysis to reload synaptic vesicles\n\nDuring neurotransmission, synaptic vesicles release their contents into the synaptic cleft and are then rapidly refilled for subsequent rounds of signal transmission. This reloading is driven by specialised membrane pumps that consume ATP98. Early studies on brain slices showed that reducing the concentration of extracellular glucose drastically reduces the release of glutamate at synapses without affecting the global ATP level, suggesting that glycolysis is necessary for this neurotransmission99. Subsequently, this effect was elegantly explained by a study showing that synaptic vesicles carry active glycolytic enzymes that produce sufficient ATP to fuel the glutamate uptake system100. The ATP production module in this case, GAPDH–PGK, is coupled to the vesicular H+-ATPase, which generates an electrochemical proton gradient across the vesicular membrane. This gradient provides the driving force that enables vesicular glutamate transporters to reload synaptic vesicles (Figure 5d). Furthermore, glutamate uptake by synaptic vesicles in an in vitro assay is more efficient when substrates for glycolysis are added to produce ATP locally as compared to addition of exogenous ATP.
This example highlights the kinetic advantage of local energy channeling over a more global and distant supply of ATP.\n\nSimilarly, PK, which catalyses the ATP-producing last step of glycolysis, associates with vesicles and fuels the H+-ATPase that drives glutamate reloading101 (Figure 5d). In an ATP trap experiment, in which soluble HK was added to a preparation of synaptic vesicles to compete with the H+-ATPase for the consumption of ATP, the ATP produced locally by PK was restricted to the vicinity of the vesicle membranes and was used predominantly by the H+-ATPase and not by HK101. This simple experiment reinforces the notion that ATP channels directly from one enzyme to an adjacent one, without diffusing through the bulk cytosol.\n\nATP generated by glycolysis at the surface of synaptic vesicles appears to play an essential role in their rapid refilling with glutamate. Indeed, mitochondria alone may not be able to meet all the energy requirements to maintain rapid neurotransmission, in particular in situations where mitochondria are not located close to the synapses, as observed in half of hippocampal presynaptic termini102.\n\n\nOn-board glycolytic fueling of fast axonal transport\n\nSimilar energetic coupling to that described for synaptic vesicle reloading was recently demonstrated to take place during fast axonal transport (FAT)103,104 (Figure 5e). FAT is an ATP-driven process involving microtubules and the molecular motors kinesin and dynein, which are highly processive, resulting in constant and fast transport over long distances105. In some neurodegenerative diseases, FAT is affected, and changes in both glycolytic and mitochondrial metabolism have also been described, suggesting a possible link between energy supply and vesicular transport in these diseases106,107.
The first evidence that local production of ATP could activate transport by kinesin came from a motility assay108, in which PK was covalently attached to beads that were further linked to microtubules through a biotin–streptavidin link in order to generate ATP directly on microtubules. This locally produced ATP was sufficient to drive the movement of the beads on a glass surface coated with kinesin108. However, does such ATP channeling also fuel kinesin motors in vivo?\n\nAssuming that kinesin motors operate at a velocity of ~ 2 μm/s, take steps of 8 nm (the distance between two tubulin heterodimers), and consume one molecule of ATP per step, one kinesin motor must consume ~ 250 ATP molecules per second109. Thus, the ATP concentration in vivo might be a limiting factor for the very high and constant speed of FAT. However, in studies of cultured primary neurons, when the mitochondrial F1F0-ATP synthase was inhibited acutely and cellular ATP levels fell to 20% of normal, the velocity of transport of vesicles was unaffected103. This indicates that FAT is dependent neither on the bulk concentration of ATP (at least at physiological concentrations) nor on mitochondrial ATP production. In contrast to the transport of vesicles, transport of mitochondria was drastically impaired under these conditions103, suggesting that the molecular motors associated with mitochondria and those associated with vesicles do not use the same pool of ATP. This idea was further substantiated by inhibiting glycolysis, which, as expected, had only a modest effect on cellular ATP levels, while strongly affecting the transport of vesicles, but not of mitochondria103. 
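The per-motor consumption figure used above follows directly from the stepping parameters; a short check:

```python
velocity_nm_per_s = 2.0 * 1000   # ~2 um/s kinesin velocity, in nm/s
step_nm = 8.0                    # one 8-nm step per tubulin heterodimer
atp_per_step = 1                 # one ATP hydrolysed per step

atp_per_second = (velocity_nm_per_s / step_nm) * atp_per_step
print(atp_per_second)            # 250.0 ATP molecules per motor per second
```

A vesicle carried by several motors multiplies this demand accordingly, which underlines why a dedicated, on-board ATP supply is attractive.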
This simple experiment indicates that the transport of mitochondria uses ATP generated by oxidative phosphorylation and the transport of vesicles uses ATP generated by glycolysis.\n\nThe finding that ATP from glycolysis fuels the transport of vesicles, but not of mitochondria, along axons suggests that ATP must be produced in close proximity to these vesicles. Consistent with this idea, an unbiased proteomics study of transport vesicles isolated from mouse brain found all the enzymes of glycolysis associated with this fraction104. Moreover, these vesicles could perform glycolysis and produce ATP when incubated with the various substrates of each step of the pay-off phase104. Hence, in these brain vesicles, the ATP-producing module formed by glycolytic enzymes must be close to the ATPase module formed by the molecular motor complex103,110. ATP channeling between glycolytic enzymes and molecular motors was ultimately demonstrated by means of an elegant and minimal in vitro motility assay comprising only microtubules attached to a glass surface and purified brain vesicles incubated with substrates of the pay-off phase of glycolysis: the ATP produced by glycolysis fueled transport of the vesicles on microtubules, showing that locally produced ATP is sufficient to propel vesicles on microtubules104.\n\nTo investigate this phenomenon in neurons, the amount of GAPDH on vesicles was artificially controlled by genetic approaches. When GAPDH expression was reduced in cultured neurons, FAT was impaired, confirming that glycolysis is essential for FAT103. Moreover, when GAPDH was engineered to bind to vesicles without being present as a soluble, cytosolic enzyme, FAT continued, demonstrating that energetic channeling to molecular motors occurs in cells103.
Depletion of GAPDH from neurons in Drosophila larvae inhibited FAT103, reinforcing the conclusion that energetic channeling occurs in vivo during axonal transport and is evolutionarily conserved.\n\nHuntingtin, a scaffold protein present on vesicles, interacts with proteins of the vesicular molecular motor complex and promotes vesicle transport111. Intriguingly, one of the first proteins found to interact with huntingtin was GAPDH112. Huntingtin might therefore promote FAT by physically linking this ATP-producing glycolytic enzyme with the ATPase of the molecular motor. Consistent with this idea, knockout of huntingtin in mouse brain neurons, and its depletion from neuronal cells in culture by means of gene silencing, specifically depleted GAPDH from vesicles without affecting the total GAPDH level and also reduced FAT, whereas overexpression of an engineered chimeric, vesicle-bound form of GAPDH restored transport103. Thus, the amount of GAPDH on vesicles is crucial for FAT and controls the velocity of transport of the vesicles.\n\nIt would be interesting to know whether energy channeling is specific to the transport of vesicles in neurons or whether it is a more general phenomenon in membrane trafficking. The finding of glycolytic enzymes in clathrin-coated vesicles and in early endosome fractions by proteomics analysis113 suggests that the latter may indeed be the case114,115. Since mitochondria use their own ATP for their transport, not that produced by glycolysis, it would be intriguing to investigate whether a similar energy channeling exists between mitochondrial molecular motors and the ATP delivered by the VDAC in the OMM.
These structurally similar organelles are extensions of the plasma membrane with a central core, or axoneme, composed of a bundle of fused microtubules. The membranes of primary cilia contain receptors and ion channels that coordinate many cellular signaling pathways117,118. External signals, for example the protein sonic hedgehog, are detected by transmembrane receptors at the surface of the cilium and are then transported retrogradely by the dynein-2 motor towards the basal body of the cilium. Conversely, anterograde transport is required for receptor recycling and is mediated by kinesin-2. This bidirectional, intraflagellar transport (IFT) is also necessary for cilium formation and maintenance, and defects in IFT can result in ciliopathies. In IFT, the motor proteins are associated with dense structures called trains, which are multiprotein complexes whose components appear to be specialised for the transport of different sets of cargo proteins. These trains traffic continuously along the axoneme to ensure a constant turnover of proteins along the cilium. Reminiscent of the paternoster lift, in which passengers can freely step on or off at any floor, cargoes such as receptors associate with and dissociate from the IFT trains.\n\nIFT trains move extremely rapidly – faster even than FAT – with anterograde velocities of 1.5–2.5 μm/s and retrograde velocities that can be over 5 μm/s119,120. However, cilia do not contain mitochondria, so the source of energy for this transport, as well as for the beating of motile cilia and flagella, is unknown. In our opinion, cilia provide a perfect experimental system to investigate the role of local energy production and energetic channeling for very processive cellular functions. Glycolytic enzymes have been found by proteomic analysis of primary cilia121 and of the flagellum of the protozoan Trypanosoma brucei122.
Importantly, a PCr–CK shuttle has also been found in flagella; it was first described in the sperm of the echinoderm sea urchin Strongylocentrotus123, and later also in the polychaete Chaetopterus and the tunicate Ciona, all based on specific flagellar isoforms of CK42,48,124. Moreover, analogous to the PCr–CK shuttle, a phosphoarginine–arginine kinase system, comprising a flagellum-specific isoform of arginine kinase (TbAK1-3), has been found in the flagellum of Trypanosoma brucei125. This suggests that ATP buffering and local ATP production are important for the bioenergetics of ciliary functions and that intraflagellar transport might be generally fueled by energy channeling.\n\n\nMembrane glycolysis shapes red blood cells\n\nRed blood cells distribute oxygen in the body by means of the protein hemoglobin, which has a very high affinity for oxygen due to the presence of an iron ion (Fe2+). This high load of iron in red blood cells induces a high osmotic pressure, which is compensated by the exchange of other ions between the cytosol and the blood plasma. Erythrocyte ion transport is driven by Na+/K+- and Ca2+-ATPase pumps. Depletion of ATP from these cells changes their typical biconcave disk shape to an abnormal echinocyte shape126–128. Red blood cells do not have mitochondria, so their ATPases are fueled exclusively by glycolysis. The importance of this glycolytic energy supply is evident from several red cell enzymopathies in which the glycolytic pathway is specifically affected129. In red blood cells, localisation of the entire glycolytic metabolon at the plasma membrane was observed early on130. Many molecular details have been discovered since then131,132, showing the advantages of this metabolon for energy coupling to plasma membrane ion pumps (Figure 5f).
Experiments using inside-out vesicles prepared from red blood cells (in order to access the cytoplasmic membrane surface) demonstrated that membrane-bound glycolytic enzymes, when provided with the substrates for GAPDH and PGK, can synthesise ATP to support active Na+ transport, and that this ATP remains bound to the membrane133. This plasma membrane-bound ATP fuels Na+/K+ and Ca2+ pumps133–135. Direct coupling between the ATPases and glycolysis may be achieved by a specific arrangement of membrane components and cytoskeletal elements involving the ATPase pumps, anion exchanger 1 (also known as Band 3), GAPDH, PGK, PK and ankyrin/β-spectrin134. Interestingly, in a rare genetic anomaly, there also seems to be a CK system present in human erythrocytes136.\n\nEnergy channeling from glycolytic enzymes to membrane ATPases may represent a general mechanism to satisfy high membrane-associated ATP requirements. In cell types other than red blood cells, too, ATP produced by glycolysis rather than by mitochondria seems to be the preferred energy source for cellular functions at the plasma membrane137. For example, GAPDH colocalises with and binds anion exchanger 1, a transporter responsible for the exchange of Cl- and HCO3- across the plasma membrane138,139. Also, the cardiac ATP-sensitive K+ channel associates with the enzymes involved in the payoff phase of glycolysis140. Functional coupling between the glycolytic enzymes GAPDH, PGK and PK, and transport of Ca2+ into the sarcoplasmic reticulum has also been described141. This channeling was first suggested by a trap assay in which HK did not impair Ca2+ transport, and is further supported by the observation that transport was less efficient with exogenous ATP than with locally produced ATP141.
Overall, a close physical association and functional interaction of glycolytic enzymes with ion-handling membrane proteins seems to assure their high activity.\n\n\nDynamin: A membrane fission GTPase\n\nMembers of the dynamin superfamily are evolutionarily conserved membrane-remodeling GTPases involved in both membrane fission, in which a single membrane separates into two, and membrane fusion reactions, in which two topologically separate membranes merge into one142–144. How proteins belonging to the same family participate in two opposite physical processes remains an exciting but unresolved question.\n\nIn the fruit fly Drosophila melanogaster and the nematode Caenorhabditis elegans, one gene encodes several isoforms of dynamin145–147, whereas in mammals, three distinct genes, Dnm1, Dnm2, and Dnm3, encode three isoforms: dynamin-1, expressed at high levels specifically in neuronal tissues and involved in synaptic vesicle endocytosis148; dynamin-2, ubiquitously expressed and involved in clathrin-mediated endocytosis (CME), as well as in some clathrin-independent endocytic pathways149; and dynamin-3, the least well-characterised isoform, enriched in testis and neurons (but in the latter case, at a much lower level than dynamin-1)150–152. These ‘classical’ mammalian isoforms have over 80% amino acid sequence identity and are all cytoplasmic proteins, suggesting a common biological function.\n\nThe best known function of dynamins is to mediate plasma membrane fission during CME, the canonical endocytic pathway in all eukaryotic cell types153–157. The evidence for this comes from a wide variety of in vivo and in vitro systems, ranging from D. melanogaster mutants, genetically modified mice and cells derived from these mice, to artificial lipid membranes of various composition, as well as a huge amount of biochemical, biophysical, and structural data. 
Because the mechanism by which dynamin mediates membrane fission is still debated, and because it is not the main focus of this review, we present here only the major elements for which there is a broad consensus144.\n\nDuring CME, dynamin forms a helical polymer around the neck of the invaginated clathrin-coated pit that constricts the membrane, thus resulting in membrane fission (Figure 6a)158–161. This constriction is proposed to result from torsion of the dynamin helix, which applies torque to the membrane161,162. Multiple rounds of GTP loading and hydrolysis are probably needed for constriction and fission155; the number of GTP molecules hydrolysed to complete a single fission event is estimated to be more than one per dynamin dimer (i.e. more than 15 per helix turn)162. In this constriction model, dynamin is proposed to convert the chemical energy of GTP hydrolysis into mechanical work, in a similar way to the ATPase motor proteins myosin, kinesin and dynein, which hydrolyse ATP to apply force14,163. Dynamin can thus be thought of as a motor protein and, in fact, it is one of the most powerful molecular motors known, with a torque of 1000 pN·nm162, equivalent to that of the bacterial flagellum motor. Paradoxically, GTP is much less concentrated in vivo than is ATP16, which raises the question of how such torque may be generated by such a limited energy source. A closer look at the way dynamins bind and use GTP is useful to understand their energy requirements.\n\nThe GTPase cycle of dynamin is very different to that of the small regulatory GTPases (Ras, for example), which are binary molecular switches that cycle between a GDP-bound, inactive state and a GTP-bound, active state164 that can stably interact with effector molecules165. Small G proteins have a high affinity for GTP (range: Km = 10^-1–10^-5 μM), but a very low intrinsic rate of GTP hydrolysis (range: kcat = 10^-2–10^-3 min^-1)166–169.
To switch from one conformational state to another, small G proteins require guanine nucleotide exchange factors (GEFs) that promote the exchange of G-protein-bound GDP for GTP (favored by a high GTP/GDP concentration ratio), and GTPase-activating proteins (GAPs) that stimulate the basal rate of GTP hydrolysis 10^5–10^6-fold170–173. Since their affinity for guanine nucleotides is high, most small G proteins very rarely change their nucleotide state unless GEFs and GAPs help them to do so.\n\nDynamin is different in two key features of its GTPase activity. First, it has a much lower affinity for GTP (Km = 10–150 μM), which abolishes the requirement for GEFs for GTP loading and implies that dynamin is predominantly loaded with GTP under physiological conditions174,175. Second, dynamin has a higher intrinsic GTPase activity (kcat = 8–30 × 10^-3 s^-1), with rapid GTP hydrolysis and GDP/GTP exchange, which is further stimulated up to 1000-fold by polymerisation176–179. Thus, whereas the GTPase activity of small G proteins is stimulated by GAPs, the GTPase activity of dynamin is stimulated by polymerisation180,181. The low nucleotide affinity and high nucleotide hydrolysis rate of dynamin are also features of the motor proteins myosin and kinesin182, reinforcing the notion of dynamin as a mechanochemical enzyme. Unlike myosin and kinesin, however, which are fuelled by high concentrations of intracellular ATP, the intracellular concentration of GTP may not be sufficient to maintain a high rate of GTP hydrolysis by dynamin.
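To make these comparisons concrete, the following back-of-envelope Python sketch (ours, not taken from the cited studies) estimates how GTP-loaded each enzyme class would be at a physiological GTP concentration, and how many hydrolysis events the measured torque implies. The ~50 kJ/mol free energy per GTP hydrolysed is an assumed, textbook-level in vivo value.

```python
# Back-of-envelope comparison of GTP loading and energy budget.
# Assumed value: ~50 kJ/mol free energy per GTP hydrolysed in vivo
# (a typical textbook estimate, not a figure from this review).

AVOGADRO = 6.022e23

def occupancy(gtp_um, km_um):
    """Fraction of enzyme loaded with GTP (simple hyperbolic binding)."""
    return gtp_um / (km_um + gtp_um)

gtp_um = 100.0  # a physiological GTP concentration

# Small G proteins: Km can be as low as 10^-5 uM, so they are essentially
# always GTP-loaded and need GAPs to switch off.
ras_loading = occupancy(gtp_um, 1e-3)

# Dynamin: Km = 10-150 uM (midpoint ~80 uM), so loading is only partial
# and is sensitive to the local GTP concentration.
dynamin_loading = occupancy(gtp_um, 80.0)

# Energy side: one GTP provides ~50 kJ/mol, i.e. ~83 pN.nm of work per
# molecule (1 pN.nm = 1e-21 J), whereas dynamin's torque is ~1000 pN.nm.
work_per_gtp_pn_nm = 50e3 / AVOGADRO / 1e-21
gtp_per_radian = 1000.0 / work_per_gtp_pn_nm  # at 100% efficiency

print(f"small GTPase loading ~{ras_loading:.5f}")
print(f"dynamin loading      ~{dynamin_loading:.2f}")
print(f"GTP hydrolysed per radian of rotation >= {gtp_per_radian:.0f}")
```

Even in this idealised sketch, roughly half of the dynamin population is nucleotide-free at 100 μM GTP, and a dozen or more hydrolysis events are needed per radian of constriction at full torque, so bulk cytosolic GTP alone may not keep an assembled dynamin helix saturated.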
If so, a mechanism of GTP channeling achieved by enzymes that synthesise GTP in close proximity to dynamin may be required to secure a high GTP/GDP concentration ratio and to promote GTP hydrolysis.\n\n\nMitochondrial dynamins: Fission and fusion GTPases\n\nDynamin-related or dynamin-like proteins are members of the dynamin superfamily that mediate fission and fusion of mitochondria183–186, two processes which determine the shape, size, and number of these organelles in the cell. One of these mitochondrial dynamin-related proteins, DRP1, cycles between the cytosol and the OMM to mediate mitochondrial fission187–189. Biochemical and structural studies point to a DRP1-mediated mitochondrial fission mechanism similar to that of plasma membrane fission by classical dynamins. Indeed, DRP1 constricts membranes upon assembly into a helical structure around the OMM and induces GTP-dependent scission of mitochondria by dividing the outer and inner membranes in order to generate two daughter mitochondria186. Interestingly, recent studies have shown that the classical dynamin-2 is also a component of the mitochondrial division machinery, working in concert with DRP1 to orchestrate sequential constriction events that induce mitochondrial division190.\n\nThe second mitochondrial dynamin-like protein, OPA1 (optic atrophy 1), is located in the IMM facing the intermembrane space, where it drives IMM fusion and remodeling191–193. Although OPA1 mediates membrane fusion rather than fission, its similarity to classical dynamins is striking with respect to its structure, its ability to self-assemble into polymers, and its GTPase activity. OPA1 lacks the PH and PRD domains of classical dynamins. Instead, it contains a transmembrane domain that can be cleaved by mitochondrial proteases, and a CL-binding domain194–196 that mediates the interaction of the protein with CL, the most abundant anionic lipid of the IMM.
OPA1 and its yeast ortholog Mgm1p then polymerise and induce membrane deformation coupled to GTP hydrolysis, as do the classical dynamins, consistent with a mechanoenzyme mechanism rather than a GTPase switch133–135.\n\nAccordingly, Mgm1p has a weak affinity for GTP (Km ~ 300 μM), similar to that of classical dynamins (Km = 10–150 μM), and its basal rate of GTP hydrolysis is around 7 × 10^-3 s^-1, similar to classical dynamins (kcat = 8–30 × 10^-3 s^-1), but much higher than that of small GTPases (20 × 10^-5 s^-1)194. Furthermore, like classical dynamins, the intrinsic GTPase activity of OPA1 is enhanced up to 100-fold by polymerisation195. The fact that Mgm1p- and OPA1-mediated IMM fusion requires high levels of GTP (~ 500 μM), together with the biochemical properties of Mgm1p and OPA1, indicates that efficient and dynamic replenishment of GTP is absolutely necessary to sustain the activity of the mitochondrial dynamin.\n\nThe third type of mitochondrial dynamin-like protein comprises mitofusins 1 and 2, which induce OMM fusion. In contrast to OPA1, they require only low amounts of GTP for OMM fusion in vitro, suggesting that mitofusins 1 and 2 use a different mechanism and are the most divergent members of the dynamin superfamily at the functional level. A very recent crystal structure of mitofusin 1 reveals a nucleotide-triggered dimerisation, which is critical for mitochondrial fusion193.\n\n\nNDPKs fuel dynamin superfamily proteins with GTP\n\nGenetic studies in Drosophila first found evidence of a functional interaction between the gene encoding dynamin, called Shibire, and the gene encoding NDPK, Awd197. A temperature-sensitive mutant of Shibire blocks dynamin function, resulting in paralysis due to defects in endocytosis-mediated neurotransmitter uptake at synaptic junctions.
Remarkably, in a genetic screen designed to identify mutations that modify this neurological phenotype, only Awd mutations were found, indicating that the functional relationship between Shibire and Awd is exceedingly specific. Subsequent work in Drosophila epithelial cells, such as tracheal cells and border cells, confirmed the functional link between Shibire and Awd for internalisation of the growth factor receptor homologs for FGF and PDGF/VEGF198,199. Awd-dependent endocytosis also contributes to the epithelial integrity of the follicular cell layer in the egg chamber by modulating the levels of adherens junction components200. Furthermore, a novel genetic interaction was found recently between DNM-1 and NDK-1, the homologs of dynamin and NDPK in C. elegans, during the engulfment of apoptotic corpses, a process that requires reorganisation of the cytoskeleton and membrane remodeling to extend the surface of the engulfing cell201. Mutant embryos lacking DNM-1 or NDK-1 have similar phenotypes (i.e. both accumulate apoptotic cell corpses), and loss of both DNM-1 and NDK-1 is lethal. Moreover, in a genome-wide RNAi screen for genes involved in membrane trafficking, silencing of NDK-1 caused defects in receptor-mediated endocytosis202. Taken together, these findings clearly indicate that dynamin and NDPK are close functional partners involved in membrane remodeling and trafficking in various model systems.\n\nThe NDPKs, which are encoded in humans by the NM23 (also known as NME, according to the official international gene nomenclature) genes, are nucleotide metabolism enzymes203,204. Ten genes comprise the NM23 family in humans204. The two most abundant and ubiquitously expressed isoforms, NM23-H1 (NDPK-A) and NM23-H2 (NDPK-B), are cytosolic proteins that are 88% identical to each other and 78% identical to Drosophila Awd. 
Whereas neither NM23-H1 nor NM23-H2 is localised in mitochondria203, NM23-H3 (NDPK-C) is reported to be, at least partly, associated with these organelles205. It has a 17-residue N-terminal hydrophobic peptide that is not a canonical mitochondrial targeting sequence, but might potentially anchor the protein to the outer membrane. NM23-H4 (NDPK-D) is the only protein of the family with a true mitochondrial targeting sequence and it is located exclusively in mitochondria206. Like the mitochondrial dynamin OPA1, NM23-H4 in the intermembrane space can bind the IMM by electrostatic interactions with CL207 (Figure 6b). All of these enzymes (NM23-H1/NDPK-A, NM23-H2/NDPK-B, NM23-H3/NDPK-C, and NM23-H4/NDPK-D), which share 58 to 88% amino acid identity, assemble into stable, catalytically active hexamers.\n\nFigure 6. a: Classical endocytic dynamins (dynamin-1 and dynamin-2) are recruited to clathrin-coated pits where they catalyse plasma membrane fission by creating torque. b: NM23, an NDPK that produces GTP from GDP and ATP, is a hexamer with six active sites. c: The NDPKs NM23-H1 and NM23-H2 (green) are recruited to clathrin-coated pits by their physical interaction with dynamin-1 and dynamin-2 (red). The NDPKs thus regenerate local GTP from GDP and intracellular ATP by a channeling mechanism to optimise dynamin activity. d: NM23-H4 activity in the mitochondrial intermembrane space uses the ATP from oxidative phosphorylation to regenerate GTP directly for fusion of the IMM by OPA1. PM, plasma membrane; GTP, guanosine 5’-triphosphate; GDP, guanosine diphosphate; ADP, adenosine diphosphate; ATP, adenosine 5’-triphosphate; Pi, inorganic phosphate; ANT, adenine nucleotide transporter; IMM, inner mitochondrial membrane; OPA1, optic atrophy 1; NDPK, nucleoside diphosphate kinase.\n\nThe different subcellular locations of these four NM23 gene products suggest that they may provide GTP to specific dynamin superfamily proteins in distinct compartments.
Consistent with this idea, studies in Drosophila, C. elegans, and mammals have found that cytosolic NDPKs have a highly specific and evolutionarily conserved function in dynamin-dependent endocytosis. Knockdown of the cytosolic NDPKs, NM23-H1 and NM23-H2, impairs dynamin-mediated endocytosis of receptors, including the transferrin, EGF, and IL-2 beta chain receptors; however, knockdown appears not to affect intracellular trafficking because recycling of the transferrin receptor from endosomes to the plasma membrane is not altered208. Moreover, among the NM23-H1-binding proteins identified in cell lysates and tumours by a proteomics approach, several are intimately connected to endocytosis, including the α2 and β1 subunits of the clathrin adaptor protein complex AP2, the phosphatidylinositol-binding clathrin assembly protein, and 1-phosphatidylinositol-4,5-bisphosphate phosphodiesterase beta-2, which is involved in inositol phospholipid signaling209.\n\nThe catalytic activity of NM23-H1 and NM23-H2 is required for efficient and optimal dynamin-mediated endocytosis and, as in dynamin-null cells210, knockdown of NM23-H1 and NM23-H2 results in a greater density of clathrin-coated pits (CCPs) at the plasma membrane when compared to control cells, as well as more deeply invaginated CCPs with elongated necks. Thus, in the absence of NM23-H1 and NM23-H2, CCPs form properly but fail to detach from the plasma membrane, indicating a role for these NDPKs in dynamin-mediated membrane fission at the CCPs. Consistent with this interpretation, recruitment of the uncoating protein auxilin to CCPs is strongly impaired when NM23-H1 and NM23-H2 are inactive.\n\nStrikingly, although knockdown of NM23-H1 and NM23-H2 causes a dramatic loss of cellular NDPK activity, the global intracellular levels of GTP are not affected, consistent with the hypothesis that these NDPKs deliver GTP locally to dynamin. Three further lines of experimental evidence support this hypothesis.
First, NM23-H1 and NM23-H2 colocalise with the AP-2 complex and dynamin at CCPs and interact with dynamin208. An interaction of NM23-H1 and NM23-H2 with dynamin-1 in mouse brain extract, and similarly, an association with dynamin-2 in HeLa cells, have also been found208. Furthermore, in pull-down assays of HeLa cell lysates, the C-terminal proline-rich domain (PRD) of dynamin-2 was found to interact with endogenous NM23-H1 and NM23-H2. Dimers and hexamers of NM23, probably resulting from incomplete denaturation and/or disulphide cross-linking, were also found to interact with the PRD domain of dynamin-2, indicating that NM23 polymers associate with dynamin208. Together, these data demonstrate that NM23-H1 and NM23-H2 physically interact with the classical cytosolic dynamins at CCPs. The second line of evidence is that catalytically active recombinant NM23-H1 and NM23-H2, once recruited to dynamin-coated tubules, are able to stimulate dynamin GTPase activity, a well-known measure of GTP-loading onto dynamin. This occurs in the absence of GTP, when only NDPK substrates GDP (1 mM) and ATP (1 mM) are present208. However, even in the presence of physiological concentrations of GTP (100 μM), ATP (1 mM) and GDP (10 μM), NM23-H1 and -H2 can increase dynamin GTPase activity by 30–35% relative to GTP-only controls. Thus, both NM23 isoforms stimulate dynamin activity in the presence of physiological nucleotide levels. The third line of evidence that NDPKs deliver GTP locally to dynamin is that NM23-H1 and NM23-H2 trigger dynamin-mediated membrane fission in the presence of ATP and GDP. Classical dynamins tubulate membrane sheets in the absence of GTP and then, in the presence of GTP, fragment the tubules. In the absence of added nucleotides, membrane tubulation induced by dynamin is not altered by adding NM23-H1 and NM23-H2 proteins. Addition of ATP and GDP, however, induces breakage and collapse of the tubule network208. 
Importantly, very similar effects are also observed when the soluble NM23-H1 and NM23-H2 are removed by washing the membranes before addition of the nucleotides, indicating that NM23 bound to membrane-associated dynamin is responsible for dynamin function208. This evidence strongly supports the concept that NM23-H1 and NM23-H2 channel GTP to classical cytosolic dynamins at plasma membrane CCPs to power their activity during endocytosis (Figure 6c).\n\nIn much the same way as the cytoplasmic NDPK isoforms NM23-H1 and NM23-H2 interact with cytoplasmic dynamin to provide GTP for endocytosis, the mitochondrial NDPK isoform, NM23-H4, and the dynamin-related GTPase OPA1, which are both located at the IMM bound to the phospholipid cardiolipin, also interact to increase GTP loading onto OPA1 for membrane fusion. To directly demonstrate the involvement of NM23-H4 in local GTP fueling for OPA1-dependent mitochondrial dynamics, we determined the GTP hydrolysis rate of OPA1, which reflects its GTP loading, under conditions mimicking the mitochondrial environment. Recombinant NM23-H4 protein increases the GTPase activity of OPA1 specifically in the presence of 25% cardiolipin-enriched liposomes, which mimic the composition of the IMM. Like the effect of NM23-H1 and NM23-H2 on dynamin-1 and -2, NM23-H4 is still able to increase the GTPase activity of OPA1 by ~30% in the presence of physiological concentrations of nucleotides. Accordingly, silencing of NM23-H4 results in mitochondrial fragmentation, reflecting fusion defects similar to those seen upon loss of function of the OPA1 gene211, whereas silencing of NM23-H1 and NM23-H2 does not alter mitochondrial morphology. These observations indicate the involvement of the NM23-H4 kinase in supplying GTP locally for OPA1-dependent mitochondrial dynamics.
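The quantitative plausibility of such channeling can be illustrated with a toy calculation. In the sketch below, only the NM23 site turnover (~600 s^-1 per active site) and the assembly-stimulated dynamin rate echo figures quoted in this review; the exchange and demand rate constants, and the single well-mixed microdomain model itself, are our own illustrative assumptions.

```python
# Toy model: a membrane microdomain where dynamin consumes GTP, an NDPK
# regenerates it from GDP + ATP, and nucleotides exchange slowly with the
# bulk cytosol. All rate constants are illustrative, not measured values.

def steady_state_gtp(bulk_um, k_exchange, ndpk_flux, consumption):
    """Local [GTP] where d[GTP]/dt = k_exchange*(bulk - local)
    + ndpk_flux - consumption = 0 (clamped at zero for physical sanity)."""
    return max(0.0, bulk_um + (ndpk_flux - consumption) / k_exchange)

# Capacity of one NM23 hexamer: ~600 GTP/s per site x 6 sites.
hexamer_capacity = 600.0 * 6  # GTP molecules regenerated per second

# An assembly-stimulated dynamin subunit turns over ~20 GTP/s
# (~20e-3 s^-1 basal x 1000-fold stimulation), so one hexamer could
# in principle feed ~180 dynamin subunits:
subunits_fed = hexamer_capacity / 20.0

bulk = 100.0   # uM GTP in the cytosol
k_ex = 0.05    # s^-1, slow nucleotide exchange near a crowded membrane
demand = 5.0   # uM/s drawn by the assembled dynamin polymer (assumed)

no_ndpk = steady_state_gtp(bulk, k_ex, 0.0, demand)    # pool collapses
with_ndpk = steady_state_gtp(bulk, k_ex, 5.0, demand)  # pool maintained

print(subunits_fed, no_ndpk, with_ndpk)
```

With matched fluxes the local pool stays at the bulk level, whereas without local regeneration the same demand empties it; this is the quantitative sense in which an NDPK bound next to its dynamin partner can power fission or fusion.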
The fact that the most abundant mitochondrial proteins, VDAC and ANT, do not bind to NM23-H4212 indicates that the interaction is specific, implicating NM23-H4 in the local and direct delivery of GTP to OPA1 (Figure 6d).\n\nThe evidence described above supports a model in which NDPKs physically interact with dynamin superfamily members in the same subcellular compartment to maintain a high local concentration of GTP for dynamin function in membrane remodeling. NM23-H1 and NM23-H2 fuel cytoplasmic dynamin-1 and -2 at plasma membrane CCPs to drive endocytosis, and NM23-H4 fuels OPA1 at the IMM to drive IMM fusion (Figure 6c and d). The localisation of NM23-H3 at the OMM, where the dynamin-like protein DRP1 is recruited to mediate mitochondrial fission, suggests that NM23-H3 might, likewise, assist DRP1 in this process.\n\nThe impressive NDPK activity of the NM23 hexamer, with its six active sites (kcat ~ 600 s^-1 per active site)213, and its high affinity for GDP compared with the other nucleoside diphosphates19 are ideal for maintaining a high local concentration of GTP for dynamin function. These observations provide a biochemical and thermodynamic explanation of why dynamin superfamily proteins are dependent on NDPKs. However, to fully validate this functional model, more comprehensive structural analyses of the interactions between hexameric NDPKs and their polymeric dynamin partners are needed.\n\n\nEvolution of dynamins and NME–NM23–NDPK family members\n\nThe dynamin superfamily is an ancient family whose genes have been conserved throughout the evolution of prokaryotic and eukaryotic lineages214.
This early origin and conservation throughout the eukaryotic lineage is consistent with the fact that dynamin proteins are involved in key cellular processes like endocytosis, cell division and fusion of mitochondria185,215; dynamin superfamily genes are essential for fundamental cell functions.\n\nThe family of true dynamins appeared as a single Dnm gene in holozoa, which include the metazoans and single-celled sister lineages, excluding fungi, by evolution from a dynamin-like protein precursor216. They coevolved with the metazoan radiation, the emergence of pluricellularity and the nervous system214,217. This late emergence undoubtedly contributed to improving cell–cell interactions and communications and ultimately to accelerating synaptic transmission214. With the emergence of vertebrates, three of the four true dynamin ohnologs (i.e. paralogous genes originating from whole genome duplication events), namely Dnm1, Dnm2, and Dnm3, were retained following the two rounds of whole genome duplication that occurred at the root of the vertebrate lineage218. By contrast, such an expansion was not observed for other members of the dynamin superfamily. This retention of true dynamins and subsequent early functional specialisation suggests that the three retained true vertebrate dynamins might have been essential for perfecting the neuronal system and the evolution of the spinal cord214,217. Concomitant with the expansion of the true dynamins following the two rounds of vertebrate genome duplication, novel vertebrate-specific microRNA genes, mir199, mir214 and mir3120, mirror-miRNA of mir214219, emerged in introns of Dnm1, Dnm2 and Dnm3218. Recent studies suggest that these microRNAs regulate the translation of proteins involved in cell remodeling mechanisms, such as endocytosis or exosome secretion220–222. 
These microRNAs may thus cooperate with dynamins to finely regulate cell membrane remodeling mechanisms by tuning the amount of some protein actors.\n\nLike the dynamin superfamily, the NDPK family emerged at the stem of life, which is likely related to the essential and basal function of NDPKs in regenerating cellular NTPs. The NDPK gene family expanded at the time of the emergence of flagella and pluricellularity, to produce novel protein family members with catalytically inactive NDPK domains223. In the vertebrate lineage, Group I NDPK genes, which have catalytic activity, subsequently expanded224. This expansion was initiated by the first round of vertebrate genome duplication and followed by cis-duplication events224.\n\nAs presented above, dynamin and dynamin-like proteins display subcellular spatial specialisation and functional evolution. The subcellular and functional specialisation of the different dynamins and dynamin-like proteins is consistent with the hypothesis that cellular processes were optimised during evolution by the segregation of specific protein activities within various organelles of the cell with distinct functions. These specialisations of location and activity may thus have helped to accelerate the evolution of cellular processes by being more efficient at a particular function, while avoiding functional redundancy in the mechanisms of fusion and fission, two seemingly opposite processes that nevertheless display striking similarities, such as a common ‘stalk’ hemi-fusion/fission intermediate state225–227. As discussed above, NME–NM23–NDPK family proteins and dynamin superfamily proteins have similar subcellular localisations. 
However, this colocalisation of the dynamins with their cognate NME proteins, and the physical interaction in the case of NME1/2 and DNM1/2, as well as NME4 and OPA1, did not happen simultaneously because dynamins specialised early in the eukaryotic lineage, whereas NME proteins specialised only later during vertebrate evolution. There is thus a gap in the timing of the evolutionary specialisations of both protein superfamilies. Nonetheless, the convergence of the subcellular localisation of an NME protein with its dynamin superfamily counterpart, forming ‘dynamin–NME teams’ in vertebrates, suggests that the subcellular specialisation of dynamins may have influenced that of NME proteins, which can thus channel GTP more efficiently to their dynamin partners. The channeling of GTP, made possible by the colocalisation and direct interaction of dynamins with NMEs, would have boosted dynamin function208, so providing an evolutionary advance.\n\n\nDo NDPKs supply GTP to tubulin during microtubule dynamics?\n\nLike dynamin superfamily proteins, α- and β-tubulins, the building blocks of the microtubule cytoskeleton, bind and hydrolyse GTP during their polymerisation. The α- and β-tubulins form α–β heterodimers that assemble to form the hollow tubular structure of the microtubule. Upon incorporation of the α–β heterodimer at the tip of growing microtubules, the GTP bound to β-tubulin is hydrolysed. As a consequence, GTP-bound β-tubulin is only found in a ‘cap’ at the growing tip, whereas the shaft of the microtubule contains mostly GDP-bound β-tubulin subunits. 
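Simple binding arithmetic suggests why local GTP supply could matter at a growing tip. The sketch below assumes a Km of ~10 μM for GTP binding by tubulin (the value cited in this review) and purely illustrative local concentrations; it is not a model of microtubule dynamics.

```python
def gtp_loaded(gtp_um, km_um=10.0):
    """Fraction of tubulin with GTP bound, simple hyperbolic binding with
    tubulin's weak GTP affinity (Km ~ 10 uM, as quoted in this review)."""
    return gtp_um / (km_um + gtp_um)

# At bulk physiological GTP, most incoming tubulin dimers are GTP-loaded...
bulk_loading = gtp_loaded(100.0)

# ...but if the local pool near a fast-growing tip dipped to a few uM,
# loading would fall steeply, favouring addition of GDP-tubulin:
depleted_loading = gtp_loaded(5.0)

print(round(bulk_loading, 2), round(depleted_loading, 2))
```

A local GTP-regenerating activity would thus help keep newly added subunits in the GTP state, preserving the GTP cap at the growing end while the older shaft remains GDP-bound.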
This is key to the dynamic instability of microtubules: the GTP cap allows for continuous addition of new subunits, but when this cap is lost the GDP-associated conformation of the tubulin heterodimer in the shaft favors the rapid depolymerisation of the microtubule.\n\nLike dynamin, tubulin has a weak affinity for GTP (Km = 10 μM)228, suggesting that NDPKs might provide an advantage by regenerating GTP in the vicinity of the growing tip of polymerising microtubules. Several studies have reported that NDPK copurifies with microtubules; however, two studies found no direct interaction with NM23-H1 or NM23-H2229,230. Interestingly, NM23-H8 and NM23-H9, which are more phylogenetically divergent than NM23-H1 and NM23-H2, bind directly to microtubules231,232.\n\n\nConclusion\n\nIn this review, we have focused on the various cellular mechanisms that involve ATP and GTP channeling at the interface between membranes and the cytosol, thus maintaining a directed energy flux. The evidence and arguments discussed above support a model in which the enzymes that produce ATP and GTP – enzymes of glycolysis and NDPKs, respectively – physically interact or colocalise with certain ATP- and GTP-requiring enzymes in order to channel the NTPs directly to their active sites to perform work more efficiently. Such energy channeling, as a specific case of substrate channeling, has also been described for other ATP-generating enzymes, such as creatine kinases. In general, energy channeling seems to provide a functional advantage for cellular functions that require a rapid supply of energy and maximal ΔGNTP to sustain high turnover reactions. This could be particularly important at the interface between membranes and the cytosol where nucleotide diffusion may be restricted.\n\nA better understanding of the biological properties of membrane surfaces will be required to understand the physicochemical mechanisms of this channeling. 
Other reactions, which are not localised at membranes, may involve energetic coupling directly in the cytosol or nucleus. For example, the presence of glycolytic enzymes in the proteasome suggests that glycolysis may fuel protein degradation233. Advances in video super-resolution microscopy will be invaluable to further our understanding of the bioenergetics of these processes and to precisely locate the various players involved in energy transfer at nanoscale resolution. Another important technological advance needed to understand these bioenergetic processes is the development of new ATP and GTP reporters that allow superior temporal and spatial resolution234. Despite the need for these technical improvements, the data reviewed here strongly suggest that the notion of ATP and GTP diffusing freely in the cell and being available without limitation for any cellular function should be definitively discarded.
"appendix": "Author contributions\n\n\n\nDZ and MB prepared and wrote the first draft of the manuscript, except the evolution section. TD and JB wrote the paragraph concerning evolution. US wrote the paragraph concerning ATP channeling and creatine kinase. AR and PC brought important corrections to the first draft. All authors were involved in the revision of the draft manuscript and have agreed to the final content.\n\n\nCompeting interests\n\n\n\nMB is an Associate Professor and Hospital Practitioner in Cell Biology at the Faculty of Medicine from the University Pierre & Marie Curie. DZ is an Inserm investigator. No other competing interests were disclosed.\n\n\nGrant information\n\nResearch by the authors reviewed here was supported by the Fondation pour la Recherche Médicale, France (FRM DPM20121125557 to US and MB) and the Groupement des Entreprises Françaises contre le Cancer (GEFLUC R16170DD/RAK16044DDA to MB).\n\n\nAcknowledgements\n\nThe authors thank Allen Smith for the software Bricksmith 2.6.1 used for the figure construction (http://bricksmith.sourceforge.net/). Carol Featherstone of Plume Scientific Communication Services edited the manuscript.\n\n\nReferences\n\nMichaelis L, Menten MM: The kinetics of invertin action. 1913. FEBS Lett. 2013; 587(17): 2712–20. PubMed Abstract | Publisher Full Text\n\nGarcía-Contreras R, Vos P, Westerhoff HV, et al.: Why in vivo may not equal in vitro - new effectors revealed by measurement of enzymatic activities under the same in vivo-like assay conditions. FEBS J. 2012; 279(22): 4145–59. PubMed Abstract | Publisher Full Text\n\nSteven AC, Baumeister W, Johnson LN, et al.: Molecular biology of assemblies and machines. New York: Garland Science; 2016. Reference Source\n\nSrere PA, Knull HR: Location-location-location. Trends Biochem Sci. 1998; 23(9): 319–20. PubMed Abstract | Publisher Full Text\n\nBanani SF, Lee HO, Hyman AA, et al.: Biomolecular condensates: organizers of cellular biochemistry. Nat Rev Mol Cell Biol. 
2017; 18(5): 285–98.\n\nLuby-Phelps K: Cytoarchitecture and physical properties of cytoplasm: volume, viscosity, diffusion, intracellular surface area. Int Rev Cytol. 2000; 192: 189–221.\n\nSwaminathan R, Bicknese S, Periasamy N, et al.: Cytoplasmic viscosity near the cell plasma membrane: translational diffusion of a small fluorescent solute measured by total internal reflection-fluorescence photobleaching recovery. Biophys J. 1996; 71(2): 1140–51.\n\nGellerich FN, Wagner M, Kapischke M, et al.: Effect of macromolecules on the regulation of the mitochondrial outer membrane pore and the activity of adenylate kinase in the inter-membrane space. Biochim Biophys Acta. 1993; 1142(3): 217–27.\n\nZhou HX, Rivas G, Minton AP: Macromolecular crowding and confinement: biochemical, biophysical, and potential physiological consequences. Annu Rev Biophys. 2008; 37: 375–97.\n\nMinton AP: How can biochemical reactions within cells differ from those in test tubes? J Cell Sci. 2006; 119(Pt 14): 2863–9.\n\nDix JA, Verkman AS: Crowding effects on diffusion in solutions and cells. Annu Rev Biophys. 2008; 37: 247–63.\n\nNath SS, Nath S: Energy transfer from adenosine triphosphate: quantitative analysis and mechanistic insights. J Phys Chem B. 2009; 113(5): 1533–7.\n\nLipowsky R, Liepelt S, Valleriani A: Energy conversion by molecular motors coupled to nucleotide hydrolysis. J Stat Phys. 2009; 135(5): 951–75.\n\nLipowsky R, Valleriani A: Active biomimetic systems: force generation and cargo transport by molecular machines. Biophys Rev Lett. 2009; 4(1 & 2): 1–4.
Carvalho AT, Szeler K, Vavitsas K, et al.: Modeling the mechanisms of biological GTP hydrolysis. Arch Biochem Biophys. 2015; 582: 80–90.\n\nTraut TW: Physiological concentrations of purines and pyrimidines. Mol Cell Biochem. 1994; 140(1): 1–22.\n\nIotti S, Sabatini A, Vacca A: Chemical and biochemical thermodynamics: from ATP hydrolysis to a general reassessment. J Phys Chem B. 2010; 114(5): 1985–93.\n\nLascu I, Gonin P: The catalytic mechanism of nucleoside diphosphate kinases. J Bioenerg Biomembr. 2000; 32(3): 237–46.\n\nCervoni L, Lascu I, Xu Y, et al.: Binding of nucleotides to nucleoside diphosphate kinase: a calorimetric study. Biochemistry. 2001; 40(15): 4583–9.\n\nHuang X, Holden HM, Raushel FM: Channeling of substrates and intermediates in enzyme-catalyzed reactions. Annu Rev Biochem. 2001; 70: 149–80.\n\nOvádi J, Srere PA: Macromolecular compartmentation and channeling. Int Rev Cytol. 2000; 192: 255–80.\n\nSchlattner U, Wallimann T: Metabolite channeling. In: Lennarz WJ, Lane MD, editors. Encyclopedia of Biological Chemistry. 2nd ed. New York: Academic Press; 2013; 80–5.\n\nSrere PA: Complexes of sequential metabolic enzymes. Annu Rev Biochem. 1987; 56: 89–124.\n\nMiles EW, Rhee S, Davies DR: The molecular basis of substrate channeling. J Biol Chem. 1999; 274(18): 12193–6.\n\nPerham RN: Swinging arms and swinging domains in multifunctional enzymes: catalytic machines for multistep reactions. Annu Rev Biochem. 2000; 69: 961–1004.\n\nSrere PA: The metabolon. Trends Biochem Sci. 1985; 10(3): 109–10.
Srere PA: Is there an organization of Krebs cycle enzymes in the mitochondrial matrix? In: Energy Metabolism and the Regulation of Metabolic Processes in Mitochondria. Hanson RW, WAM editors. Trends in biochemical sciences. New York: Academic Press; 1972; 79–91.\n\nLyubarev AE, Kurganov BI: Supramolecular organization of tricarboxylic acid cycle enzymes. Biosystems. 1989; 22(2): 91–102.\n\nMukai C, Bergkvist M, Nelson JL, et al.: Sequential reactions of surface-tethered glycolytic enzymes. Chem Biol. 2009; 16(9): 1013–20.\n\nLei H, Ugurbil K, Chen W: Measurement of unidirectional Pi to ATP flux in human visual cortex at 7 T by using in vivo 31P magnetic resonance spectroscopy. Proc Natl Acad Sci U S A. 2003; 100(24): 14409–14.\n\nDzeja PP, Terzic A: Phosphotransfer networks and cellular energetics. J Exp Biol. 2003; 206(Pt 12): 2039–47.\n\nTomokuni Y, Goryo K, Katsura A, et al.: Loose interaction between glyceraldehyde-3-phosphate dehydrogenase and phosphoglycerate kinase revealed by fluorescence resonance energy transfer-fluorescence lifetime imaging microscopy in living cells. FEBS J. 2010; 277(5): 1310–8.\n\nWeber JP, Bernhard SA: Transfer of 1,3-diphosphoglycerate between glyceraldehyde-3-phosphate dehydrogenase and 3-phosphoglycerate kinase via an enzyme-substrate-enzyme complex. Biochemistry. 1982; 21(17): 4189–94.\n\nAflalo C, DeLuca M: Continuous monitoring of adenosine 5’-triphosphate in the microenvironment of immobilized enzymes by firefly luciferase. Biochemistry. 1987; 26(13): 3913–20.
PubMed Abstract | Publisher Full Text\n\nWallimann T, Wyss M, Brdiczka D, et al.: Intracellular compartmentation, structure and function of creatine kinase isoenzymes in tissues with high and fluctuating energy demands: the 'phosphocreatine circuit' for cellular energy homeostasis. Biochem J. 1992; 281(Pt 1): 21–40. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAlekseev AE, Guzun R, Reyes S, et al.: Restrictions in ATP diffusion within sarcomeres can provoke ATP-depleted zones impairing exercise capacity in chronic obstructive pulmonary disease. Biochim Biophys Acta. 2016; 1860(10): 2269–78. PubMed Abstract | Publisher Full Text\n\nSrere PA: Channeling: the pathway that cannot be beaten. J Theor Biol. 1991; 152(1): 23. PubMed Abstract | Publisher Full Text\n\nSaks VA, Kaambre T, Sikk P, et al.: Intracellular energetic units in red muscle cells. Biochem J. 2001; 356(Pt 2): 643–57. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSeppet EK, Eimre M, Anmann T, et al.: Intracellular energetic units in healthy and diseased hearts. Exp Clin Cardiol. 2005; 10(3): 173–83. PubMed Abstract | Free Full Text\n\nBessman SP, Carpenter CL: The creatine-creatine phosphate energy shuttle. Annu Rev Biochem. 1985; 54: 831–62. PubMed Abstract | Publisher Full Text\n\nBessman SP, Geiger PJ: Transport of energy in muscle: the phosphorylcreatine shuttle. Science. 1981; 211(4481): 448–52. PubMed Abstract | Publisher Full Text\n\nEllington WR: Evolution and physiological roles of phosphagen systems. Annu Rev Physiol. 2001; 63: 289–325. PubMed Abstract | Publisher Full Text\n\nMcLeish MJ, Kenyon GL: Relating structure to mechanism in creatine kinase. Crit Rev Biochem Mol Biol. 2005; 40(1): 1–20. PubMed Abstract | Publisher Full Text\n\nSaks V, Favier R, Guzun R, et al.: Molecular system bioenergetics: regulation of substrate supply in response to heart energy demands. J Physiol. 2006; 577(Pt 3): 769–77. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nSchlattner U, Tokarska-Schlattner M, Wallimann T: Molecular structure and function of mitochondrial creatine kinases. In: Vial C, editor. Creatine kinase. New York: Nova Science Publishers; 2006; 123–70.\n\nWallimann T, Tokarska-Schlattner M, Schlattner U: The creatine kinase system and pleiotropic effects of creatine. Amino Acids. 2011; 40(5): 1271–96. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSchlattner U, Klaus A, Ramirez Rios S, et al.: Cellular compartmentation of energy metabolism: creatine kinase microcompartments and recruitment of B-type creatine kinase to specific subcellular sites. Amino Acids. 2016; 48(8): 1751–74. PubMed Abstract | Publisher Full Text\n\nEllington WR, Suzuki T: Early evolution of the creatine kinase gene family and the capacity for creatine biosynthesis and membrane transport. Subcell Biochem. 2007; 46: 17–26. PubMed Abstract | Publisher Full Text\n\nSeppet E, Eimre M, Peet N, et al.: Compartmentation of energy metabolism in atrial myocardium of patients undergoing cardiac surgery. Mol Cell Biochem. 2005; 270(1–2): 49–61. PubMed Abstract | Publisher Full Text\n\nStachowiak O, Schlattner U, Dolder M, et al.: Oligomeric state and membrane binding behaviour of creatine kinase isoenzymes: implications for cellular function and mitochondrial structure. Mol Cell Biochem. 1998; 184(1–2): 141–51. PubMed Abstract | Publisher Full Text\n\nDeFuria RA, Ingwall JS, Fossel ET, et al.: The integration of isoenzymes for energy distribution. In: Jacobus WE, Ingwall JS, editors. Heart Creatine Kinase. Baltimore: Williams & Wilkins Co.; 1980; 135–41.\n\nFritz-Wolf K, Schnyder T, Wallimann T, et al.: Structure of mitochondrial creatine kinase. Nature. 1996; 381(6580): 341–5. PubMed Abstract | Publisher Full Text\n\nEder M, Fritz-Wolf K, Kabsch W, et al.: Crystal structure of human ubiquitous mitochondrial creatine kinase. Proteins. 2000; 39(3): 216–25. 
PubMed Abstract\n\nBarbour RL, Ribaudo J, Chan SH: Effect of creatine kinase activity on mitochondrial ADP/ATP transport. Evidence for a functional interaction. J Biol Chem. 1984; 259(13): 8246–51. PubMed Abstract\n\nWallimann T, Schlösser T, Eppenberger HM: Function of M-line-bound creatine kinase as intramyofibrillar ATP regenerator at the receiving end of the phosphorylcreatine shuttle in muscle. J Biol Chem. 1984; 259(8): 5238–46. PubMed Abstract\n\nChen Z, Zhao TJ, Li J, et al.: Slow skeletal muscle myosin-binding protein-C (MyBPC1) mediates recruitment of muscle-type creatine kinase (CK) to myosin. Biochem J. 2011; 436(2): 437–45. PubMed Abstract | Publisher Full Text\n\nPebay-Peyroula E, Dahout-Gonzalez C, Kahn R, et al.: Structure of mitochondrial ADP/ATP carrier in complex with carboxyatractyloside. Nature. 2003; 426(6962): 39–44. PubMed Abstract | Publisher Full Text\n\nKay L, Nicolay K, Wieringa B, et al.: Direct evidence for the control of mitochondrial respiration by mitochondrial creatine kinase in oxidative muscle cells in situ. J Biol Chem. 2000; 275(10): 6937–44. PubMed Abstract | Publisher Full Text\n\nEpand RF, Tokarska-Schlattner M, Schlattner U, et al.: Cardiolipin clusters and membrane domain formation induced by mitochondrial proteins. J Mol Biol. 2007; 365(4): 968–80. PubMed Abstract | Publisher Full Text\n\nCheniour M, Brewer J, Bagatolli L, et al.: Evidence of proteolipid domain formation in an inner mitochondrial membrane mimicking model. Biochim Biophys Acta. 2017; 1861(5 Pt A): 969–76. PubMed Abstract | Publisher Full Text\n\nVentura-Clapier R, Kuznetsov A, Veksler V, et al.: Functional coupling of creatine kinases in muscles: species and tissue specificity. Mol Cell Biochem. 1998; 184(1–2): 231–47. PubMed Abstract | Publisher Full Text\n\nCrozatier B, Badoual T, Boehm E, et al.: Role of creatine kinase in cardiac excitation-contraction coupling: studies in creatine kinase-deficient mice. FASEB J. 2002; 16(7): 653–60. 
PubMed Abstract | Publisher Full Text\n\nSaks VA, Kuznetsov AV, Kupriyanov VV, et al.: Creatine kinase of rat heart mitochondria. The demonstration of functional coupling to oxidative phosphorylation in an inner membrane-matrix preparation. J Biol Chem. 1985; 260(12): 7757–64. PubMed Abstract\n\nSaks V, Kaambre T, Guzun R, et al.: The creatine kinase phosphotransfer network: thermodynamic and kinetic considerations, the impact of the mitochondrial outer membrane and modelling approaches. Subcell Biochem. 2007; 46: 27–65. PubMed Abstract | Publisher Full Text\n\nDolder M, Walzel B, Speer O, et al.: Inhibition of the mitochondrial permeability transition by creatine kinase substrates. Requirement for microcompartmentation. J Biol Chem. 2003; 278(20): 17760–6. PubMed Abstract | Publisher Full Text\n\nHiller S, Garces RG, Malia TJ, et al.: Solution structure of the integral human membrane protein VDAC-1 in detergent micelles. Science. 2008; 321(5893): 1206–10. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGuzun R, Gonzalez-Granillo M, Karu-Varikmaa M, et al.: Regulation of respiration in muscle cells in vivo by VDAC through interaction with the cytoskeleton and MtCK within Mitochondrial Interactosome. Biochim Biophys Acta. 2012; 1818(6): 1545–54. PubMed Abstract | Publisher Full Text\n\nDillon PF, Clark JF: The theory of diazymes and functional coupling of pyruvate kinase and creatine kinase. J Theor Biol. 1990; 143(2): 275–84. PubMed Abstract | Publisher Full Text\n\nKraft T, Hornemann T, Stolz M, et al.: Coupling of creatine kinase to glycolytic enzymes at the sarcomeric I-band of skeletal muscle: a biochemical study in situ. J Muscle Res Cell Motil. 2000; 21(7): 691–703. PubMed Abstract | Publisher Full Text\n\nTurner DC, Wallimann T, Eppenberger HM: A protein that binds specifically to the M-line of skeletal muscle is identified as the muscle form of creatine kinase. Proc Natl Acad Sci U S A. 1973; 70(3): 702–5. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nHornemann T, Kempa S, Himmel M, et al.: Muscle-type creatine kinase interacts with central domains of the M-band proteins myomesin and M-protein. J Mol Biol. 2003; 332(4): 877–87. PubMed Abstract | Publisher Full Text\n\nHornemann T, Stolz M, Wallimann T: Isoenzyme-specific interaction of muscle-type creatine kinase with the sarcomeric M-line is mediated by NH2-terminal lysine charge-clamps. J Cell Biol. 2000; 149(6): 1225–34. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKekenes-Huskey PM, Liao T, Gillette AK, et al.: Molecular and subcellular-scale modeling of nucleotide diffusion in the cardiac myofilament lattice. Biophys J. 2013; 105(9): 2130–40. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRossi AM, Eppenberger HM, Volpe P, et al.: Muscle-type MM creatine kinase is specifically bound to sarcoplasmic reticulum and can support Ca2+ uptake and regulate local ATP/ADP ratios. J Biol Chem. 1990; 265(9): 5258–66. PubMed Abstract\n\nKorge P, Byrd SK, Campbell KB: Functional coupling between sarcoplasmic-reticulum-bound creatine kinase and Ca2+-ATPase. Eur J Biochem. 1993; 213(3): 973–80. PubMed Abstract | Publisher Full Text\n\nKuiper JW, Pluk H, Oerlemans F, et al.: Creatine kinase-mediated ATP supply fuels actin-based events in phagocytosis. PLoS Biol. 2008; 6(3): e51. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSteeghs K, Benders A, Oerlemans F, et al.: Altered Ca2+ responses in muscles with combined mitochondrial and cytosolic creatine kinase deficiencies. Cell. 1997; 89(1): 93–103. PubMed Abstract | Publisher Full Text\n\nShin JB, Streijger F, Beynon A, et al.: Hair bundles are specialized for ATP delivery via creatine kinase. Neuron. 2007; 53(3): 371–86. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nRamírez Ríos S, Lamarche F, Cottet-Rousselle C, et al.: Regulation of brain-type creatine kinase by AMP-activated protein kinase: interaction, phosphorylation and ER localization. Biochim Biophys Acta. 2014; 1837(8): 1271–83. PubMed Abstract | Publisher Full Text\n\nBlum H, Balschi JA, Johnson RG Jr: Coupled in vivo activity of creatine phosphokinase and the membrane-bound (Na+,K+)-ATPase in the resting and stimulated electric organ of the electric fish Narcine brasiliensis. J Biol Chem. 1991; 266(16): 10254–9. PubMed Abstract\n\nBorroni E: Role of creatine phosphate in the discharge of the electric organ of Torpedo marmorata. J Neurochem. 1984; 43(3): 795–8. PubMed Abstract | Publisher Full Text\n\nSistermans EA, Klaassen CH, Peters W, et al.: Co-localization and functional coupling of creatine kinase B and gastric H+/K+-ATPase on the apical membrane and the tubulovesicular system of parietal cells. Biochem J. 1995; 311( Pt 2): 445–51. PubMed Abstract | Publisher Full Text | Free Full Text\n\nInoue K, Yamada J, Ueno S, et al.: Brain-type creatine kinase activates neuron-specific K+-Cl- co-transporter KCC2. J Neurochem. 2006; 96(2): 598–608. PubMed Abstract | Publisher Full Text\n\nInoue K, Ueno S, Fukuda A: Interaction of neuron-specific K+-Cl- cotransporter, KCC2, with brain-type creatine kinase. FEBS Lett. 2004; 564(1–2): 131–5. PubMed Abstract | Publisher Full Text\n\nSalin-Cantegrel A, Shekarabi M, Holbert S, et al.: HMSN/ACC truncation mutations disrupt brain-type creatine kinase-dependant activation of K+/Cl- co-transporter 3. Hum Mol Genet. 2008; 17(17): 2703–11. PubMed Abstract | Publisher Full Text\n\nVenter G, Polling S, Pluk H, et al.: Submembranous recruitment of creatine kinase B supports formation of dynamic actin-based protrusions of macrophages and relies on its C-terminal flexible loop. Eur J Cell Biol. 2015; 94(2): 114–27. 
PubMed Abstract | Publisher Full Text\n\nKuiper JW, van Horssen R, Oerlemans F, et al.: Local ATP generation by brain-type creatine kinase (CK-B) facilitates cell motility. PLoS One. 2009; 4(3): e5030. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRich PR: The molecular machinery of Keilin’s respiratory chain. Biochem Soc Trans. 2003; 31(Pt 6): 1095–105. PubMed Abstract | Publisher Full Text\n\nFeller SM, Lewitzky M: Very 'sticky' proteins - not too sticky after all? Cell Commun Signal. 2012; 10(1): 15. PubMed Abstract | Publisher Full Text | Free Full Text\n\nEl-Bacha T, de Freitas MS, Sola-Penna M: Cellular distribution of phosphofructokinase activity and implications to metabolic regulation in human breast cancer. Mol Genet Metab. 2003; 79(4): 294–9. PubMed Abstract | Publisher Full Text\n\nGraham JW, Williams TC, Morgan M, et al.: Glycolytic enzymes associate dynamically with mitochondria in response to respiratory demand and support substrate channeling. Plant Cell. 2007; 19(11): 3723–38. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCraven PA, Basford RE: ADP-induced binding of phosphofructokinase to the brain mitochondrial membrane. Biochim Biophys Acta. 1974; 354(1): 49–56. PubMed Abstract | Publisher Full Text\n\nAlberts B, Johnson A, Lewis J, et al.: Molecular Biology of the Cell. Garland Sc. New York: Garland Science; 2002. Reference Source\n\nBlachly-Dyson E, Forte M: VDAC channels. IUBMB Life. 2001; 52(3–5): 113–8. PubMed Abstract | Publisher Full Text\n\nAllouche M, Pertuiset C, Robert JL, et al.: ANT-VDAC1 interaction is direct and depends on ANT isoform conformation in vitro. Biochem Biophys Res Commun. 2012; 429(1–2): 12–7. PubMed Abstract | Publisher Full Text\n\nBrdiczka DG, Zorov DB, Sheu SS: Mitochondrial contact sites: their role in energy metabolism and apoptosis. Biochim Biophys Acta. 2006; 1762(2): 148–63. 
PubMed Abstract | Publisher Full Text\n\nShoshan-Barmatz V, Ben-Hail D, Admoni L, et al.: The mitochondrial voltage-dependent anion channel 1 in tumor cells. Biochim Biophys Acta. 2015; 1848(10 Pt B): 2547–75. PubMed Abstract | Publisher Full Text\n\nPyle JL, Kavalali ET, Piedras-Rentería ES, et al.: Rapid reuse of readily releasable pool vesicles at hippocampal synapses. Neuron. 2000; 28(1): 221–31. PubMed Abstract | Publisher Full Text\n\nFleck MW, Henze DA, Barrionuevo G, et al.: Aspartate and glutamate mediate excitatory synaptic transmission in area CA1 of the hippocampus. J Neurosci. 1993; 13(9): 3944–55. PubMed Abstract\n\nIkemoto A, Bole DG, Ueda T: Glycolysis and glutamate accumulation into synaptic vesicles. Role of glyceraldehyde phosphate dehydrogenase and 3-phosphoglycerate kinase. J Biol Chem. 2003; 278(8): 5929–40. PubMed Abstract | Publisher Full Text\n\nIshida A, Noda Y, Ueda T: Synaptic vesicle-bound pyruvate kinase can support vesicular glutamate uptake. Neurochem Res. 2009; 34(5): 807–18. PubMed Abstract | Publisher Full Text | Free Full Text\n\nShepherd GM, Harris KM: Three-dimensional structure and composition of CA3-->CA1 axons in rat hippocampal slices: implications for presynaptic connectivity and compartmentalization. J Neurosci. 1998; 18(20): 8300–10. PubMed Abstract\n\nZala D, Hinckelmann MV, Yu H, et al.: Vesicular Glycolysis Provides On-board Energy for Fast Axonal Transport. Cell. 2013; 152(3): 479–491. PubMed Abstract | Publisher Full Text\n\nHinckelmann MV, Virlogeux A, Niehage C, et al.: Self-propelling vesicles define glycolysis as the minimal energy machinery for neuronal transport. Nat Commun. 2016; 7: 13233. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBrown A: Axonal transport of membranous and nonmembranous cargoes: a unified perspective. J Cell Biol. 2003; 160(6): 817–21. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nHinckelmann MV, Zala D, Saudou F: Releasing the brake: restoring fast axonal transport in neurodegenerative disorders. Trends Cell Biol. 2013; 23(12): 634–43. PubMed Abstract | Publisher Full Text\n\nMillecamps S, Julien JP: Axonal transport deficits and neurodegenerative diseases. Nat Rev Neurosci. 2013; 14(3): 161–76. PubMed Abstract | Publisher Full Text\n\nDu YZ, Hiratsuka Y, Taira S, et al.: Motor protein nano-biomachine powered by self-supplying ATP. Chem Commun (Camb). 2005; (16): 2080–2. PubMed Abstract | Publisher Full Text\n\nHirokawa N, Takemura R: Molecular motors and mechanisms of directional transport in neurons. Nat Rev Neurosci. 2005; 6(3): 201–14. PubMed Abstract | Publisher Full Text\n\nBurré J, Volknandt W: The synaptic vesicle proteome. J Neurochem. 2007; 101(6): 1448–62. PubMed Abstract | Publisher Full Text\n\nGauthier LR, Charrin BC, Borrell-Pagès M, et al.: Huntingtin controls neurotrophic support and survival of neurons by enhancing BDNF vesicular transport along microtubules. Cell. 2004; 118(1): 127–38. PubMed Abstract | Publisher Full Text\n\nBurke J, Enghild JJ, Martin ME, et al.: Huntingtin and DRPLA proteins selectively interact with the enzyme GAPDH. Nat Med. 1996; 2(3): 347–50. PubMed Abstract | Publisher Full Text\n\nDuclos S, Clavarino G, Rousserie G, et al.: The endosomal proteome of macrophage and dendritic cells. Proteomics. 2011; 11(5): 854–64. PubMed Abstract | Publisher Full Text\n\nGirard M, Allaire PD, McPherson PS, et al.: Non-stoichiometric relationship between clathrin heavy and light chains revealed by quantitative comparative proteomics of clathrin-coated vesicles from brain and liver. Mol Cell Proteomics. 2005; 4(8): 1145–54. PubMed Abstract | Publisher Full Text\n\nBlondeau F, Ritter B, Allaire PD, et al.: Tandem MS analysis of brain clathrin-coated vesicles reveals their critical involvement in synaptic vesicle recycling. Proc Natl Acad Sci U S A. 
2004; 101(11): 3833–8. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDrummond IA: Cilia functions in development. Curr Opin Cell Biol. 2012; 24(1): 24–30. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSatir P, Christensen ST: Structure and function of mammalian cilia. Histochem Cell Biol. 2008; 129(6): 687–93. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSatir P, Christensen ST: Overview of Structure and Function of Mammalian Cilia. Annu Rev Physiol. 2007; 69(1): 377–400. PubMed Abstract | Publisher Full Text\n\nHuet D, Blisnick T, Perrot S, et al.: The GTPase IFT27 is involved in both anterograde and retrograde intraflagellar transport. eLife. 2014; 3: e02419. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBuisson J, Chenouard N, Lagache T, et al.: Intraflagellar transport proteins cycle between the flagellum and its base. J Cell Sci. 2013; 126(Pt 1): 327–38. PubMed Abstract | Publisher Full Text\n\nGuzun R, Timohhina N, Tepp K, et al.: Systems bioenergetics of creatine kinase networks: physiological roles of creatine and phosphocreatine in regulation of cardiac cell function. Amino Acids. 2011; 40(5): 1333–48. PubMed Abstract | Publisher Full Text\n\nSubota I, Julkowska D, Vincensini L, et al.: Proteomic analysis of intact flagella of procyclic Trypanosoma brucei cells identifies novel flagellar proteins with unique sub-localization and dynamics. Mol Cell Proteomics. 2014; 13(7): 1769–86. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWothe DD, Charbonneau H, Shapiro BM: The phosphocreatine shuttle of sea urchin sperm: flagellar creatine kinase resulted from a gene triplication. Proc Natl Acad Sci U S A. 1990; 87(13): 5203–7. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSuzuki T, Mizuta C, Uda K, et al.: Evolution and divergence of the genes for cytoplasmic, mitochondrial, and flagellar creatine kinases. J Mol Evol. 2004; 59(2): 218–26. 
PubMed Abstract | Publisher Full Text\n\nVoncken F, Gao F, Wadforth C, et al.: The phosphoarginine energy-buffering system of Trypanosoma brucei involves multiple arginine kinase isoforms with different subcellular locations. PLoS One. 2013; 8(6): e65908. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPatel VP, Fairbanks G: Relationship of major phosphorylation reactions and MgATPase activities to ATP-dependent shape change of human erythrocyte membranes. J Biol Chem. 1986; 261(7): 3170–7. PubMed Abstract\n\nLevin S, Korenstein R: Membrane fluctuations in erythrocytes are linked to MgATP-dependent dynamic assembly of the membrane skeleton. Biophys J. 1991; 60(3): 733–7. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSakota D, Sakamoto R, Yokoyama N, et al.: Glucose Depletion Enhances Sensitivity to Shear Stress-induced Mechanical Damage in Red Blood Cells by Rotary Blood Pumps. Artif Organs. 2009; 33(9): 733–9. PubMed Abstract | Publisher Full Text\n\nvan Wijk R, van Solinge WW: The energy-less red blood cell is lost: erythrocyte enzyme abnormalities of glycolysis. Blood. 2005; 106(13): 4034–42. PubMed Abstract | Publisher Full Text\n\nKurganov BI, Lyubarev AE: [Hypothetical structure of the glycolytic enzyme complex (glycolytic metabolon) formed on erythrocyte membranes]. Mol Biol (Mosk). 1988; 22(6): 1605–13. PubMed Abstract\n\nPuchulu-Campanella E, Chu H, Anstee DJ, et al.: Identification of the components of a glycolytic enzyme metabolon on the human red blood cell membrane. J Biol Chem. 2013; 288(2): 848–58. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCampanella ME, Chu H, Low PS: Assembly and regulation of a glycolytic enzyme complex on the human erythrocyte membrane. Proc Natl Acad Sci U S A. 2005; 102(7): 2402–7. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMercer RW, Dunham PB: Membrane-bound ATP fuels the Na/K pump. 
Studies on membrane-bound glycolytic enzymes on inside-out vesicles from human red cell membranes. J Gen Physiol. 1981; 78(5): 547–68. PubMed Abstract | Publisher Full Text | Free Full Text\n\nChu H, Puchulu-Campanella E, Galan JA, et al.: Identification of cytoskeletal elements enclosing the ATP pools that fuel human red blood cell membrane cation pumps. Proc Natl Acad Sci U S A. 2012; 109(31): 12794–9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPost RL, Merritt CR, Kinsolving CR, et al.: Membrane adenosine triphosphatase as a participant in the active transport of sodium and potassium in the human erythrocyte. J Biol Chem. 1960; 235: 1796–802. PubMed Abstract\n\nKay L, Tokarska-Schlattner M, Quenot-Carrias B, et al.: Creatine kinase in human erythrocytes: A genetic anomaly reveals presence of soluble brain-type isoform. Blood Cells Mol Dis. 2017; 64: 33–37. PubMed Abstract | Publisher Full Text\n\nAw TY: Intracellular compartmentation of organelles and gradients of low molecular weight species. Int Rev Cytol. 2000; 192: 223–53. PubMed Abstract | Publisher Full Text\n\nErcolani L, Brown D, Stuart-Tilley A, et al.: Colocalization of GAPDH and band 3 (AE1) proteins in rat erythrocytes and kidney intercalated cell membranes. Am J Physiol. 1992; 262(5 Pt 2): F892–6. PubMed Abstract\n\nMoriyama R, Makino S: Interaction of glyceraldehyde-3-phosphate dehydrogenase with the cytoplasmic pole of band 3 from bovine erythrocyte membrane: the mode of association and identification of the binding site of band 3 polypeptide. Arch Biochem Biophys. 1987; 256(2): 606–17. PubMed Abstract | Publisher Full Text\n\nWeiss JN, Lamp ST: Cardiac ATP-sensitive K+ channels. Evidence for preferential regulation by glycolysis. J Gen Physiol. 1989; 94(5): 911–35. PubMed Abstract | Publisher Full Text | Free Full Text\n\nXu KY, Zweier JL, Becker LC: Functional coupling between glycolysis and sarcoplasmic reticulum Ca2+ transport. Circ Res. 1995; 77(1): 88–97. 
PubMed Abstract | Publisher Full Text\n\nMorlot S, Roux A: Mechanics of dynamin-mediated membrane fission. Annu Rev Biophys. 2013; 42: 629–49. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRoux A: Reaching a consensus on the mechanism of dynamin? F1000Prime Rep. 2014; 6: 86. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAntonny B, Burd C, De Camilli P, et al.: Membrane fission by dynamin: what we know and what we need to know. EMBO J. 2016; 35(21): 2270–84. PubMed Abstract | Publisher Full Text | Free Full Text\n\nChen MS, Obar RA, Schroeder CC, et al.: Multiple forms of dynamin are encoded by shibire, a Drosophila gene involved in endocytosis. Nature. 1991; 351(6327): 583–6. PubMed Abstract | Publisher Full Text\n\nvan der Bliek AM, Meyerowitz EM: Dynamin-like protein encoded by the Drosophila shibire gene associated with vesicular traffic. Nature. 1991; 351(6325): 411–4. PubMed Abstract | Publisher Full Text\n\nClark SG, Shurland DL, Meyerowitz EM, et al.: A dynamin GTPase mutation causes a rapid and reversible temperature-inducible locomotion defect in C. elegans. Proc Natl Acad Sci U S A. 1997; 94(19): 10438–43. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFerguson SM, Brasnjo G, Hayashi M, et al.: A selective activity-dependent requirement for dynamin 1 in synaptic vesicle endocytosis. Science. 2007; 316(5824): 570–4. PubMed Abstract | Publisher Full Text\n\nWarnock DE, Baba T, Schmid SL: Ubiquitously expressed dynamin-II has a higher intrinsic GTPase activity and a greater propensity for self-assembly than neuronal dynamin-I. Mol Biol Cell. 1997; 8(12): 2553–62. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGray NW, Fourgeaud L, Huang B, et al.: Dynamin 3 is a component of the postsynapse, where it interacts with mGluR5 and Homer. Curr Biol. 2003; 13(6): 510–5. 
PubMed Abstract | Publisher Full Text\n\nLu J, Helton TD, Blanpied TA, et al.: Postsynaptic positioning of endocytic zones and AMPA receptor cycling by physical coupling of dynamin-3 to Homer. Neuron. 2007; 55(6): 874–89. PubMed Abstract | Publisher Full Text | Free Full Text\n\nVaid KS, Guttman JA, Babyak N, et al.: The role of dynamin 3 in the testis. J Cell Physiol. 2007; 210(3): 644–54. PubMed Abstract | Publisher Full Text\n\nSchmid SL, Frolov VA: Dynamin: functional design of a membrane fission catalyst. Annu Rev Cell Dev Biol. 2011; 27: 79–105. PubMed Abstract | Publisher Full Text\n\nFerguson SM, De Camilli P: Dynamin, a membrane-remodelling GTPase. Nat Rev Mol Cell Biol. 2012; 13(2): 75–88. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFaelber K, Held M, Gao S, et al.: Structural insights into dynamin-mediated membrane fission. Structure. 2012; 20(10): 1621–8. PubMed Abstract | Publisher Full Text\n\nChappie JS, Dyda F: Building a fission machine--structural insights into dynamin assembly and activation. J Cell Sci. 2013; 126(Pt 13): 2773–84. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSundborger AC, Hinshaw JE: Regulating dynamin dynamics during endocytosis. F1000Prime Rep. 2014; 6: 85. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHinshaw JE, Schmid SL: Dynamin self-assembles into rings suggesting a mechanism for coated vesicle budding. Nature. 1995; 374(6518): 190–2. PubMed Abstract | Publisher Full Text\n\nTakei K, McPherson PS, Schmid SL, et al.: Tubular membrane invaginations coated by dynamin rings are induced by GTP-gamma S in nerve terminals. Nature. 1995; 374(6518): 186–90. PubMed Abstract | Publisher Full Text\n\nSweitzer SM, Hinshaw JE: Dynamin undergoes a GTP-dependent conformational change causing vesiculation. Cell. 1998; 93(6): 1021–9. PubMed Abstract | Publisher Full Text\n\nRoux A, Uyhazi K, Frost A, et al.: GTP-dependent twisting of dynamin implicates constriction and tension in membrane fission. 
Nature. 2006; 441(7092): 528–31. PubMed Abstract | Publisher Full Text\n\nMorlot S, Galli V, Klein M, et al.: Membrane shape at the edge of the dynamin helix sets location and duration of the fission reaction. Cell. 2012; 151(3): 619–29. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAstumian RD: Thermodynamics and kinetics of molecular motors. Biophys J. 2010; 98(11): 2401–9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nVetter IR, Wittinghofer A: The guanine nucleotide-binding switch in three dimensions. Science. 2001; 294(5545): 1299–304. PubMed Abstract | Publisher Full Text\n\nWittinghofer A, Herrmann C: Ras-effector interactions, the problem of specificity. FEBS Lett. 1995; 369(1): 52–6. PubMed Abstract | Publisher Full Text\n\nBourne HR, Sanders DA, McCormick F: The GTPase superfamily: conserved structure and molecular mechanism. Nature. 1991; 349(6305): 117–27. PubMed Abstract | Publisher Full Text\n\nNeal SE, Eccleston JF, Hall A, et al.: Kinetic analysis of the hydrolysis of GTP by p21N-ras. The basal GTPase mechanism. J Biol Chem. 1988; 263(36): 19718–22. PubMed Abstract\n\nJohn J, Frech M, Wittinghofer A: Biochemical properties of Ha-ras encoded p21 mutants and mechanism of the autophosphorylation reaction. J Biol Chem. 1988; 263(24): 11792–9. PubMed Abstract\n\nBourne HR, Sanders DA, McCormick F: The GTPase superfamily: a conserved switch for diverse cell functions. Nature. 1990; 348(6297): 125–32. PubMed Abstract | Publisher Full Text\n\nBos JL, Rehmann H, Wittinghofer A: GEFs and GAPs: critical elements in the control of small G proteins. Cell. 2007; 129(5): 865–77. PubMed Abstract | Publisher Full Text\n\nCherfils J: GEFs and GAPs: Mechanisms and Structures. In: Wittinghofer A, editor. Ras superfamily small G proteins: Biology and Mechanisms 1. Springer; 2014; 51–63. 
Publisher Full Text\n\nScheffzek K, Ahmadian MR, Kabsch W, et al.: The Ras-RasGAP complex: structural basis for GTPase activation and its loss in oncogenic Ras mutants. Science. 1997; 277(5324): 333–8.\n\nScheffzek K, Lautwein A, Kabsch W, et al.: Crystal structure of the GTPase-activating domain of human p120GAP and implications for the interaction with Ras. Nature. 1996; 384(6609): 591–6.\n\nWarnock DE, Schmid SL: Dynamin GTPase, a force-generating molecular switch. Bioessays. 1996; 18(11): 885–93.\n\nBinns DD, Helms MK, Barylko B, et al.: The mechanism of GTP hydrolysis by dynamin II: a transient kinetic study. Biochemistry. 2000; 39(24): 7188–96.\n\nTuma PL, Collins CA: Activation of dynamin GTPase is a result of positive cooperativity. J Biol Chem. 1994; 269(49): 30842–7.\n\nWarnock DE, Hinshaw JE, Schmid SL: Dynamin self-assembly stimulates its GTPase activity. J Biol Chem. 1996; 271(37): 22310–4.\n\nMarks B, Stowell MH, Vallis Y, et al.: GTPase activity of dynamin and resulting conformation change are essential for endocytosis. Nature. 2001; 410(6825): 231–5.\n\nStowell MH, Marks B, Wigge P, et al.: Nucleotide-dependent conformational changes in dynamin: evidence for a mechanochemical molecular spring. Nat Cell Biol. 1999; 1(1): 27–32.\n\nChappie JS, Mears JA, Fang S, et al.: A pseudoatomic model of the dynamin polymer identifies a hydrolysis-dependent powerstroke. Cell. 2011; 147(1): 209–22.\n\nChappie JS, Acharya S, Leonard M, et al.: G domain dimerization controls dynamin’s assembly-stimulated GTPase activity. Nature. 2010; 465(7297): 435–40.\n\nBagshaw CR: Muscle Contraction. Chapman and Hall; 1993; 155.\n\nHoppins S, Lackner L, Nunnari J: The machines that divide and fuse mitochondria. Annu Rev Biochem. 2007; 76: 751–80.\n\nChan DC: Fusion and fission: interlinked processes critical for mitochondrial health. Annu Rev Genet. 2012; 46: 265–87.\n\nvan der Bliek AM, Shen Q, Kawajiri S: Mechanisms of mitochondrial fission and fusion. Cold Spring Harb Perspect Biol. 2013; 5(6): pii: a011072.\n\nPernas L, Scorrano L: Mito-Morphosis: Mitochondrial Fusion, Fission, and Cristae Remodeling as Key Mediators of Cellular Function. Annu Rev Physiol. 2016; 78: 505–31.\n\nSmirnova E, Shurland DL, Ryazantsev SN, et al.: A human dynamin-related protein controls the distribution of mitochondria. J Cell Biol. 1998; 143(2): 351–8.\n\nLabrousse AM, Zappaterra MD, Rube DA, et al.: C. elegans dynamin-related protein DRP-1 controls severing of the mitochondrial outer membrane. Mol Cell. 1999; 4(5): 815–26.\n\nSmirnova E, Griparic L, Shurland DL, et al.: Dynamin-related protein Drp1 is required for mitochondrial division in mammalian cells. Mol Biol Cell. 2001; 12(8): 2245–56.\n\nLee JE, Westrate LM, Wu H, et al.: Multiple dynamin family members collaborate to drive mitochondrial division. Nature. 2016; 540(7631): 139–43.\n\nMeeusen S, McCaffery JM, Nunnari J: Mitochondrial fusion intermediates revealed in vitro. Science. 2004; 305(5691): 1747–52.\n\nSong Z, Ghochani M, McCaffery JM, et al.: Mitofusins and OPA1 mediate sequential steps in mitochondrial membrane fusion. Mol Biol Cell. 2009; 20(15): 3525–32.\n\nCao YL, Meng S, Chen Y, et al.: MFN1 structures reveal nucleotide-triggered dimerization critical for mitochondrial fusion. Nature. 2017; 542(7641): 372–376.\n\nMeglei G, McQuibban GA: The dynamin-related protein Mgm1p assembles into oligomers and hydrolyzes GTP to function in mitochondrial membrane fusion. Biochemistry. 2009; 48(8): 1774–84.\n\nBan T, Heymann JA, Song Z, et al.: OPA1 disease alleles causing dominant optic atrophy have defects in cardiolipin-stimulated GTP hydrolysis and membrane tubulation. Hum Mol Genet. 2010; 19(11): 2113–22.\n\nGriffin EE, Detmer SA, Chan DC: Molecular mechanism of mitochondrial membrane fusion. Biochim Biophys Acta. 2006; 1763(5–6): 482–9.\n\nKrishnan KS, Rikhy R, Rao S, et al.: Nucleoside diphosphate kinase, a source of GTP, is required for dynamin-dependent synaptic vesicle recycling. Neuron. 2001; 30(1): 197–210.\n\nDammai V, Adryan B, Lavenburg KR, et al.: Drosophila awd, the homolog of human nm23, regulates FGF receptor levels and functions synergistically with shi/dynamin during tracheal development. Genes Dev. 2003; 17(22): 2812–24.\n\nNallamothu G, Woolworth JA, Dammai V, et al.: Awd, the homolog of metastasis suppressor gene Nm23, regulates Drosophila epithelial cell invasion. Mol Cell Biol. 2008; 28(6): 1964–73.\n\nWoolworth JA, Nallamothu G, Hsu T: The Drosophila metastasis suppressor gene Nm23 homolog, awd, regulates epithelial integrity during oogenesis. Mol Cell Biol. 2009; 29(17): 4679–90.\n\nFancsalszky L, Monostori E, Farkas Z, et al.: NDK-1, the homolog of NM23-H1/H2 regulates cell migration and apoptotic engulfment in C. elegans. PLoS One. 2014; 9(3): e92687.\n\nBalklava Z, Pant S, Fares H, et al.: Genome-wide analysis identifies a general requirement for polarity proteins in endocytic traffic. Nat Cell Biol. 2007; 9(9): 1066–73.\n\nLacombe ML, Milon L, Munier A, et al.: The human Nm23/nucleoside diphosphate kinases. J Bioenerg Biomembr. 2000; 32(3): 247–58.\n\nBoissan M, Dabernat S, Peuchant E, et al.: The mammalian Nm23/NDPK family: from metastasis control to cilia movement. Mol Cell Biochem. 2009; 329(1–2): 51–62.\n\nNegroni A, Venturelli D, Tanno B, et al.: Neuroblastoma specific effects of DR-nm23 and its mutant forms on differentiation and apoptosis. Cell Death Differ. 2000; 7(9): 843–50.\n\nMilon L, Meyer P, Chiadmi M, et al.: The human nm23-H4 gene product is a mitochondrial nucleoside diphosphate kinase. J Biol Chem. 2000; 275(19): 14264–72.\n\nTokarska-Schlattner M, Boissan M, Munier A, et al.: The nucleoside diphosphate kinase D (NM23-H4) binds the inner mitochondrial membrane with high affinity to cardiolipin and couples nucleotide transfer with respiration. J Biol Chem. 2008; 283(38): 26198–207.\n\nBoissan M, Montagnac G, Shen Q, et al.: Membrane trafficking. Nucleoside diphosphate kinases fuel dynamin superfamily proteins with GTP for membrane remodeling. Science. 2014; 344(6191): 1510–5.\n\nMarino N, Marshall JC, Collins JW, et al.: Nm23-h1 binds to gelsolin and inactivates its actin-severing capacity to promote tumor cell motility and metastasis. Cancer Res. 2013; 73(19): 5949–62.\n\nFerguson SM, Raimondi A, Paradise S, et al.: Coordinated actions of actin and BAR proteins upstream of dynamin at endocytic clathrin-coated pits. Dev Cell. 2009; 17(6): 811–22.\n\nGriparic L, van der Wel NN, Orozco IJ, et al.: Loss of the intermembrane space protein Mgm1/OPA1 induces swelling and localized constrictions along the lengths of mitochondria. J Biol Chem. 2004; 279(18): 18792–8.\n\nSchlattner U, Tokarska-Schlattner M, Ramirez S, et al.: Dual function of mitochondrial Nm23-H4 protein in phosphotransfer and intermembrane lipid transfer: a cardiolipin-dependent switch. J Biol Chem. 2013; 288(1): 111–21.\n\nLascu I, Schaertl S, Wang C, et al.: A point mutation of human nucleoside diphosphate kinase A found in aggressive neuroblastoma affects protein folding. J Biol Chem. 1997; 272(25): 15599–602.\n\nBramkamp M: Evolution of dynamin: modular design of a membrane remodeling machine (retrospective on DOI 10.1002/bies.201200033). Bioessays. 2015; 37(4): 348.\n\nPraefcke GJ, McMahon HT: The dynamin superfamily: universal membrane tubulation and fission molecules? Nat Rev Mol Cell Biol. 2004; 5(2): 133–47.\n\nDergai M, Iershov A, Novokhatska O, et al.: Evolutionary Changes on the Way to Clathrin-Mediated Endocytosis in Animals. Genome Biol Evol. 2016; 8(3): 588–606.\n\nPawlowski N: Why do we need three dynamins? (Comment on DOI 10.1002/bies.201200033). Bioessays. 2012; 34(8): 632.\n\nDesvignes T, Contreras A, Postlethwait JH: Evolution of the miR199-214 cluster and vertebrate skeletal development. RNA Biol. 2014; 11(4): 281–94.\n\nDesvignes T, Batzel P, Berezikov E, et al.: miRNA Nomenclature: A View Incorporating Genetic Origins, Biosynthetic Pathways, and Sequence Variants. Trends Genet. 2015; 31(11): 613–26.\n\nAranda JF, Canfrán-Duque A, Goedeke L, et al.: The miR-199-dynamin regulatory axis controls receptor-mediated endocytosis. J Cell Sci. 2015; 128(17): 3197–209.\n\nScott H, Howarth J, Lee YB, et al.: MiR-3120 is a mirror microRNA that targets heat shock cognate protein 70 and auxilin messenger RNAs and regulates clathrin vesicle uncoating. J Biol Chem. 2012; 287(18): 14726–33.\n\nvan Balkom BW, de Jong OG, Smits M, et al.: Endothelial cells require miR-214 to secrete exosomes that suppress senescence and induce angiogenesis in human and mouse endothelial cells. Blood. 2013; 121(19): 3997–4006.\n\nDesvignes T, Pontarotti P, Bobe J: Nme gene family evolutionary history reveals pre-metazoan origins and high conservation between humans and the sea anemone, Nematostella vectensis. Kolokotronis S-O, editor. PLoS One. 2010; 5(11): e15506.\n\nDesvignes T, Pontarotti P, Fauvel C, et al.: Nme protein family evolutionary history, a vertebrate perspective. BMC Evol Biol. 2009; 9(1): 256.\n\nMattila JP, Shnyrova AV, Sundborger AC, et al.: A hemi-fission intermediate links two mechanistically distinct stages of membrane fission. Nature. 2015; 524(7563): 109–13.\n\nFrolov VA, Zimmerberg J: Cooperative elastic stresses, the hydrophobic effect, and lipid tilt in membrane remodeling. FEBS Lett. 2010; 584(9): 1824–9.\n\nKozlov MM, McMahon HT, Chernomordik LV: Protein-driven membrane stresses in fusion and fission. Trends Biochem Sci. 2010; 35(12): 699–706.\n\nMejillano MR, Himes RH: Binding of guanine nucleotides and Mg2+ to tubulin with a nucleotide-depleted exchangeable site. Arch Biochem Biophys. 1991; 291(2): 356–62.\n\nGallagher BC, Parrott KA, Szabo G, et al.: Receptor activation regulates cortical, but not vesicular localization of NDP kinase. J Cell Sci. 2003; 116(Pt 15): 3239–50.\n\nMelki R, Lascu I, Carlier MF, et al.: Nucleoside diphosphate kinase does not directly interact with tubulin nor microtubules. Biochem Biophys Res Commun. 1992; 187(1): 65–72.\n\nDuriez B, Duquesnoy P, Escudier E, et al.: A common variant in combination with a nonsense mutation in a member of the thioredoxin family causes primary ciliary dyskinesia. Proc Natl Acad Sci U S A. 2007; 104(9): 3336–41.\n\nSadek CM, Jiménez A, Damdimopoulos AE, et al.: Characterization of human thioredoxin-like 2. A novel microtubule-binding thioredoxin expressed predominantly in the cilia of lung airway epithelium and spermatid manchette and axoneme. J Biol Chem. 2003; 278(15): 13133–42.\n\nVerma R, Chen S, Feldman R, et al.: Proteasomal proteomics: identification of nucleotide-sensitive proteasome-interacting proteins by mass spectrometric analysis of affinity-purified proteasomes. Mol Biol Cell. 2000; 11(10): 3425–39.\n\nPelosse M, Cottet-Rousselle C, Grichine A, et al.: Genetically Encoded Fluorescent Biosensors to Explore AMPK Signaling and Energy Metabolism. EXS. 2016; 107: 491–523."
}
|
[
{
"id": "23402",
"date": "12 Jun 2017",
"name": "Petras Dzeja",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis is a nice comprehensive review regarding advantages of close proximity and metabolite channelling in energy supply to cellular processes. Role of phosphotransfer enzymes – creatine kinase, nucleoside diphosphokinase and glycolysis is analysed in “on site” fuelling energy dependent processes. New evidence is reviewed.\nSome parts of the review could be improved and clarified. For example in Fig. 3 presented glycolytic pathway represents an old classical view which has undergone changes in recent years including localization and spatial cluster arrangement of glycolytic metabolism. Localization of hexokinase close to mitochondria enables pickup of high-energy phosphoryls generated in mitochondria and to transfer on glycolytic intermediates and deliver to ATP consuming sites. Thus, first stage of glycolysis is better to call Energy Investment, energy is not released and not consumed (-2ATP?) but rather transferred on other molecules. So the total energetic balance of glycolysis is 6 ATP or 6 ~P (2~P from OxPhos and 4~P from glycolysis) transferred and delivered to remote ATPases. Since glycolytic rate can be high and close proximity to ATP consumption sites makes glycolysis energy efficient pathway delivering over 30% of OxPhos and rest of glycolytic ~P. 
This could be addressed, not to repeat textbook mistakes.\nSecond, an important seminal paper directly demonstrating how positioning of “energetic” enzymes affect energy supply and cell motility is not discussed and cited: van Horssen R, Janssen E, Peters W, van de Pasch L, Lindert MM, van Dommelen MM, Linssen PC, Hagen TL, Fransen JA, Wieringa B. Modulation of cell motility by spatial repositioning of enzymatic ATP/ADP exchange capacity. J Biol Chem. 2009 Jan 16;284(3):1620-7[Ref-1].\nThis can improve an overall nice review.\n\nIs the topic of the review discussed comprehensively in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Yes\n\nIs the review written in accessible language? Yes\n\nAre the conclusions drawn appropriate in the context of the current research literature? Yes",
"responses": [
{
"c_id": "2878",
"date": "18 Jul 2017",
"name": "mathieu boissan",
"role": "Author Response",
"response": "This is a nice comprehensive review regarding advantages of close proximity and metabolite channeling in energy supply to cellular processes. Role of phosphotransfer enzymes – creatine kinase, nucleoside diphosphokinase and glycolysis is analysed in “on site” fuelling energy dependent processes. New evidence is reviewed. Some parts of the review could be improved and clarified. For example in Fig. 3 presented glycolytic pathway represents an old classical view which has undergone changes in recent years including localization and spatial cluster arrangement of glycolytic metabolism. Localization of hexokinase close to mitochondria enables pickup of high-energy phosphoryls generated in mitochondria and to transfer on glycolytic intermediates and deliver to ATP consuming sites. Thus, first stage of glycolysis is better to call Energy Investment, energy is not released and not consumed (-2ATP?) but rather transferred on other molecules. So the total energetic balance of glycolysis is 6 ATP or 6 ~P (2~P from OxPhos and 4~P from glycolysis) transferred and delivered to remote ATPases. Since glycolytic rate can be high and close proximity to ATP consumption sites makes glycolysis energy efficient pathway delivering over 30% of OxPhos and rest of glycolytic ~P. This could be addressed, not to repeat textbook mistakes. Authors’ response We decided not to include the notion of a metabolome in this figure, however, because the purpose of the figure is simply to present the biochemical steps of glycolysis, which are thereafter used as examples of channeling. Also, the contribution of mitochondria in the investment phase is not a universal rule, for example, it is not the case in red blood cells. We have, however, already discussed the contribution of OxPhos in the investment phase of glycolysis in the section headed ‘Mitochondria in the secret service of glycolysis’. 
Second, an important seminal paper directly demonstrating how positioning of “energetic” enzymes affect energy supply and cell motility is not discussed and cited: van Horssen R, Janssen E, Peters W, van de Pasch L, Lindert MM, van Dommelen MM, Linssen PC, Hagen TL, Fransen JA, Wieringa B. Modulation of cell motility by spatial repositioning of enzymatic ATP/ADP exchange capacity. J Biol Chem. 2009 Jan 16;284(3):1620-7[Ref-1]. Authors’ response Although we agree that the example of the effects of spatial repositioning of ATP supply enzymes for cell motility in the article by van Horssen et al. is interesting, we decided not to include this particular example in the article, which does not aim to exhaustively review all of the known examples of energetic channeling. References 1. van Horssen R, Janssen E, Peters W, van de Pasch L, Lindert MM, van Dommelen MM, Linssen PC, Hagen TL, Fransen JA, Wieringa B: Modulation of cell motility by spatial repositioning of enzymatic ATP/ADP exchange capacity.J Biol Chem. 2009; 284 (3): 1620-7 PubMed Abstract | Publisher Full Text"
}
]
},
{
"id": "23090",
"date": "13 Jun 2017",
"name": "Dragomir Milovanovic",
"expertise": [
"Reviewer Expertise biophysical chemistry",
"membrane biochemistry",
"lipid metabolism",
"protein trafficking",
"vesicle sorting",
"synaptic transmission"
],
"suggestion": "Approved",
"report": "Approved\n\nThe manuscript by Diana Zala, Mathieu Boissan and colleagues is a well-written, thorough, and timely review of the literature on the biological relevance of nucleotide channeling, focusing on ATP and GTP. This piece has three strong aspects. First, in order to build a case for the importance of spatial and temporal regulation of NTPs, this review provides a range of biological examples from basic metabolism, to membrane trafficking, to specialized subcellular structures. Second, the coverage of literature encompasses both the classical papers in the field as well as the insights from the recent publications. Third, the Authors nicely elaborate on the evolutionary link between the biochemical abundance, subcellular localization and specialized functions of NTPs. Hence I strongly endorse the publication of this manuscript.\nI have two suggestions. Recently a milestone paper was published suggesting that ATP may, in fact, act as the biological hydrotrope1. Given that the cytoplasm is a very crowded environment the high concentration of ATP increases the solubility of macromolecules. The authors should discuss this novel, additional role of ATP in the cell. Also, I recommend including the example of glycolytic enzymes that are shown to assemble into a non-membrane bound compartment in in vivo synapses under stress conditions2 to maintain the necessary levels of ATP. 
Minor comment: For a clearer overview, include the chemical structures of ATP and GTP next to cartoons in Figure 1, and the formulas of the intermediates in glycolytic pathway in Figures 2 and 3.\n\nIs the topic of the review discussed comprehensively in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Yes\n\nIs the review written in accessible language? Yes\n\nAre the conclusions drawn appropriate in the context of the current research literature? Yes",
"responses": [
{
"c_id": "2877",
"date": "18 Jul 2017",
"name": "mathieu boissan",
"role": "Author Response",
"response": "The manuscript by Diana Zala, Mathieu Boissan and colleagues is a well-written, thorough, and timely review of the literature on the biological relevance of nucleotide channeling, focusing on ATP and GTP. This piece has three strong aspects. First, in order to build a case for the importance of spatial and temporal regulation of NTPs, this review provides a range of biological examples from basic metabolism, to membrane trafficking, to specialized subcellular structures. Second, the coverage of literature encompasses both the classical papers in the field as well as the insights from the recent publications. Third, the Authors nicely elaborate on the evolutionary link between the biochemical abundance, subcellular localization and specialized functions of NTPs. Hence I strongly endorse the publication of this manuscript. I have two suggestions. Recently a milestone paper was published suggesting that ATP may, in fact, act as the biological hydrotrope1. Given that the cytoplasm is a very crowded environment the high concentration of ATP increases the solubility of macromolecules. The authors should discuss this novel, additional role of ATP in the cell. Authors’ response As you suggest, in Version 2 of the manuscript we now discuss the findings reported in the paper by Patel et al. (new reference number 20) that ATP functions as a hydrotrope. The new text can be found on page 4 at the end of the section headed ‘Why is ATP the main high-energy molecule used by the cell’, as follows: A new function of ATP was described recently (20) in which it acts as a hydrotrope that contributes to the solubility of proteins in the very crowded environment of the cell. This might explain why ATP is found at millimolar concentrations even though ATP-dependent enzymes require only micromolar concentrations. GTP has similar amphiphilic properties to ATP, however, so the puzzle of why ATP is the universal currency of energy in the cell remains unresolved. 
Also, I recommend including the example of glycolytic enzymes that are shown to assemble into a non-membrane bound compartment in in vivo synapses under stress conditions2 to maintain the necessary levels of ATP. Authors’ response As you recommend, we now discuss the example of the glycolytic metabolon found in C. elegans synapses under stress conditions reported in the paper by Jang et al. (new reference number 106). The new text can be found on page 10 at the end of the section ‘Glycolysis to reload synaptic vesicles’, as follows: In addition to synaptic reload, local ATP production of both mitochondria and glycolysis are required to sustain active synaptic transmission (105). For example, glycolysis is an important player in synaptic vesicles endocytosis in C. elegans. Under hypoxia, pharmacological or optogenetic synaptic stimulation, glycolytic enzymes translocate from an axonal and diffused location to pre-synapses to form a glycolytic metabolome associated to scaffold proteins (106). Minor comment: For a clearer overview, include the chemical structures of ATP and GTP next to cartoons in Figure 1, and the formulas of the intermediates in glycolytic pathway in Figures 2 and 3. Authors’ response We have now included the chemical formulas in Figures 1 and 3. We have decided not to overload Figure 2, because the formulas are already included in Figure 3 References 1. Patel A, Malinovska L, Saha S, Wang J, Alberti S, Krishnan Y, Hyman AA: ATP as a biological hydrotrope.Science. 2017; 356 (6339): 753-756 PubMed Abstract | Publisher Full Text 2. Jang S, Nelson JC, Bend EG, Rodríguez-Laureano L, Tueros FG, Cartagenova L, Underwood K, Jorgensen EM, Colón-Ramos DA: Glycolytic Enzymes Localize to Synapses under Energy Stress to Support Synaptic Function.Neuron. 2016; 90(2): 278-91 PubMed Abstract | Publisher Full Text"
}
]
},
{
"id": "23472",
"date": "14 Jun 2017",
"name": "Judit Ovádi",
"expertise": [
"Reviewer Expertise metabolic regulation",
"structure and functions of proteins"
],
"suggestion": "Approved",
"report": "Approved\n\nThe channeling for controlling metabolic pathways as an original idea was introduced in 1987 by Professor Srere who suggested that “the intermediates are considered to be out of diffusion equilibrium with identical molecules in the bulk phase of the same compartment of the cell” in the case of multienzyme complexes1. Even in these years and afterwards a couple of papers and reviews were published related to the metabolism of different enzyme systems providing pro and contra data concerning the validity of the channeling mostly in in vitro systems and doubt its existence in living cells and organisms. Due to the contradictory data produced by the believers and unbelievers, a special issue of Journal of Theoretical Biology entitled \"Physiological significance of intermediate channeling: Author's response to commentaries\" was published under the editorship of Professor Cornish-Bowden (Ovádi J. (1991) J Theor Biol 152). From that time a number of related papers have been published, nevertheless, direct unambiguous in vivo evidence for the proof of the channeling is rare, if at all, in spite of the fact that in addition to the multifarious experimental studies experiment-based mathematical modeling was also developed that can be considered as the seed of the system biology discipline.\n\nThe review by Zala et al. objects to timely summarize the data related to the potential advantage of nucleotide channeling in the energy consumption which is highly appreciated. 
It is, however, intriguing that the authors use the channeling term for nucleotide transferring reactions, presented by large number of examples from their own and other’s studies without mentioning the validation problem of the channeling as occurred in the case of metabolite channeling. If there are direct evidences for the function of ATP or GTP transfer in coupled reactions in vivo it should be presented in a more emphasized-mode. It would be supported since most of the evidences for existence of nucleotide channeling are based upon in vitro data obtained by using different systems and approaches. The presentation of a Table involving key parameters of the coupled systems such as organization levels, methods used for identification, consequences, references, remarks, could enormously help the readers.\n\nIn addition, we would expect hypothesis for the situation which occur not rarely when the same enzyme is involved in both the metabolite and nucleotide channeling processes by interacting with distinct partners; what kind of mechanism can control this situation? A good example is the GAPDH, a multifunctional glycolytic enzyme. Only loose interaction between GAPDH and PGK was found by fluorescence resonance energy transfer and by co-immunoprecipitation in vivo (reference 32). By assuming effective intracellular association of these two glycolytic enzymes there is still an open question how this ATP module could interconnect with the ATP consuming partner?\n\nThe review is solid, and the figures are well designed. The authors review many-many energy-consuming processes, sometimes those which are unrelated to the channeling issue, for example, the description of the dynamin system is too detailed. In fact, this section is suggested to be significantly reduced, and focus on the role of channeling effects in the cases of the isoforms.\n\nIs the topic of the review discussed comprehensively in the context of the current literature? 
Yes\n\nAre all factual statements correct and adequately supported by citations? Yes\n\nIs the review written in accessible language? Yes\n\nAre the conclusions drawn appropriate in the context of the current research literature? Yes",
"responses": [
{
"c_id": "2876",
"date": "18 Jul 2017",
"name": "mathieu boissan",
"role": "Author Response",
"response": "The channeling for controlling metabolic pathways as an original idea was introduced in 1987 by Professor Srere who suggested that “the intermediates are considered to be out of diffusion equilibrium with identical molecules in the bulk phase of the same compartment of the cell” in the case of multienzymes complexes1. Even in these years and afterwards a couple of papers and reviews were published related to the metabolism of different enzyme systems providing pro and contra data concerning the validity of the channeling mostly in in vitro systems and doubt its existence in living cells and organisms. Due to the contradictory data produced by the believers and unbelievers, a special issue of Journal of Theoretical Biology entitled \"Physiological significance of intermediate channeling:” was published by the edition of Professor Cornish-Bowden (Ovádi J. (1991) J Theor Biol 152). From that time a number of related papers have been published, nevertheless, direct unambiguous in vivo evidence for the proof of the channeling is rare, if at all, in spite of the fact that in addition to the multifarious experimental studies experiment-based mathematical modeling was also developed that can be considered as the seed of the system biology discipline. The review by Zala et al. objects to timely summarize the data related to the potential advantage of nucleotide channeling in the energy consumption which is highly appreciated. It is, however, intriguing that the authors use the channeling term for nucleotide transferring reactions, presented by large number of examples from their own and other’s studies without mentioning the validation problem of the channeling as occurred in the case of metabolite channeling. If there are direct evidences for the function of ATP or GTP transfer in coupled reactions in vivo it should be presented in a more emphasized-mode. 
It would be supported since most of the evidences for existence of nucleotide channeling are based upon in vitro data obtained by using different systems and approaches. The presentation of a Table involving key parameters of the coupled systems such as organization levels, methods used for identification, consequences, references, remarks, could enormously help the readers. In addition, we would expect hypothesis for the situation which occur not rarely when the same enzyme is involved in both the metabolite and nucleotide channeling processes by interacting with distinct partners; what kind of mechanism can control this situation? A good example is the GAPDH, a multifunctional glycolytic enzyme. Only loose interaction between GAPDH and PGK was found by fluorescence resonance energy transfer and by co-immunoprecipitation in vivo (reference 32). By assuming effective intracellular association of these two glycolytic enzymes there is still an open question how this ATP module could interconnect with the ATP consuming partner? Authors’ response In response to your comments, we have now mentioned the controversy between ‘believers and unbelievers’ concerning metabolic channeling. Also, we have warned the readers about the lack of a robust evidence in the literature that channeling occurs in vivo. These changes to the text appear in Version 2 on page 5 at the end of the penultimate paragraph of the section headed ‘Channeling: A smart strategy to maximize efficiency’. The new text reads: The notion that in cells the kinetics of reaction may not be diffusion-driven has alimented a long-term controversy. Even today, despite many publications, metabolic channeling is not universally accepted. In particular, there remains a technical bottleneck to measuring directly metabolic channeling in vivo. For a historical point of view regarding the debate, the reader should refer to a review from 1991 (31). 
We also agree with you that nucleotide channeling is a particular case of metabolic challenging (here referred as substrate channeling, Figure 2). Although we agree that a deeper description of general metabolic channeling might lead the reader to a better comprehension of the mechanisms, we feel that this should be the subject of a second review. Also, we feel that adding the table you propose would put more emphasis on the methodologies used to analyze channeling and less on the biological importance of nucleotide channeling, which is the scope of this review. For these reasons, we have not included these suggestions in Version 2. The review is solid, and the figures are well designed. The authors review many-many energy-consuming processes, sometimes those which are unrelated to the channeling issue, for example, the description of the dynamin system is too detailed. In fact, this section is suggested to be significantly reduced, and focus on the role of channeling effects in the cases of the isoforms. References 1. Srere PA: Complexes of sequential metabolic enzymes.Annu Rev Biochem. 1987; 56: 89-124 Authors’ response We disagree that the description of the dynamin system is too detailed, therefore, we have decided not to reduce this section."
}
]
}
] | 1
|
https://f1000research.com/articles/6-724
|
https://f1000research.com/articles/6-1133/v1
|
17 Jul 17
|
{
"type": "Research Article",
"title": "The effects of the Er:YAG laser on trabecular bone micro-architecture: Comparison with conventional dental drilling by micro-computed tomographic and histological techniques",
"authors": [
"Jihad Zeitouni",
"Bret Clough",
"Suzanne Zeitouni",
"Mohammed Saleem",
"Kenan Al Aisami",
"Carl Gregory",
"Bret Clough",
"Suzanne Zeitouni",
"Mohammed Saleem",
"Kenan Al Aisami"
],
    "abstract": "Background: The use of lasers has become increasingly common in the field of medicine and dentistry, and there is a growing need for a deeper understanding of the procedure and its effects on tissue. The aim of this study was to compare the erbium-doped yttrium aluminium garnet (Er:YAG) laser and conventional drilling techniques, by observing the effects on trabecular bone microarchitecture and the extent of thermal and mechanical damage. Methods: Ovine femoral heads were employed to mimic maxillofacial trabecular bone, and cylindrical osteotomies were generated to mimic implant bed preparation. Various laser parameters were tested, as well as a conventional dental drilling technique. The specimens were then subjected to micro-computed tomographic (μCT) histomorphometric analysis and histology. Results: Herein, we demonstrate that μCT measurements of trabecular porosity provide quantitative evidence that laser-mediated cutting preserves the trabecular architecture and reduces thermal and mechanical damage at the margins of the cut. We confirmed these observations with histological studies. In contrast with laser-mediated cutting, conventional drilling resulted in trabecular collapse, reduction of porosity at the margin of the cut and histological signs of thermal damage. Conclusions: This study has demonstrated, for the first time, that μCT and quantification of porosity at the margin of the cut provides a quantitative insight into damage caused by bone cutting techniques. We further show that with laser-mediated cutting, the marrow remains exposed to the margins of the cut, facilitating cellular infiltration and likely accelerating healing. However, with drilling, trabecular collapse and thermal damage are likely to delay healing by restricting the passage of cells to the site of injury and causing localized cell death.",
"keywords": [
"dental drilling",
"Er:YAG laser",
"micro-computed tomography"
],
    "content": "Introduction\n\nSince the pioneering work of Stern, Sognnaes and the Goldman brothers on the ruby laser in the 1960s, followed by the CO2 and Nd:YAG lasers in the 1980s (Coluzzi & Convissar, 2004; Featherstone & Nelson, 1987), and the erbium series of lasers in 1989 (Hibst & Keller, 1989), there has been considerable interest in the use of laser radiation for cutting of bone tissue, particularly in the field of dentistry.\n\nOver the past ten years, the Er:YAG laser, with a working wavelength of 2940 nm, has been one of the most commonly used in dentistry (Romanos, 2015). It has been suggested that the Er:YAG laser is probably the least destructive of the bone cutting lasers because it generates light at an energy level that is readily absorbed by water and thus minimizes carbonization and adjacent tissue necrosis (Bornstein, 2003). While a handful of studies have suggested that Er:YAG laser energy is indeed sparing of tissue (Baek et al., 2015; Gholami et al., 2013; Panduric et al., 2014; Yoshino et al., 2009), the field is controversial, with at least one study predicting that laser energy causes excessive thermal damage (Martins et al., 2011). Furthermore, studies that specifically address the microstructure of bone after exposure to laser radiation are qualitative.\n\nTo help address these concerns, we propose a method to quantitatively evaluate thermal and mechanical destruction of trabecular bone by cutting techniques, and ask definitively whether the Er:YAG laser causes less thermal tissue damage than conventional drilling techniques. Motivated by forensic studies (Thompson, 2005), we reasoned that trabecular structure would collapse during thermal or mechanical challenge, and this could be quantified by standard measures of porosity. Moreover, this assay could be rapidly performed with standard modern μCT and histomorphometry methodologies. 
Herein, we compare the effect of typical Er:YAG laser parameters with conventional drilling techniques on trabecular microarchitecture with μCT scanning, computational histomorphometry and histology.\n\n\nMethods\n\nFemoral heads of 1 year-old lambs were acquired from a meat distributor (Antonis Butchers, Paralimni, Cyprus) and used within five days of acquisition. Ethical approval was not required in this case because the specimens utilized were from pre-existing biological material, rather than from animals euthanized for the purpose of a scientific study. The articular surface of the femoral head was thoroughly cleaned and then covered by a 3 mm layer of silicone to prevent contamination by outside particulates. Guide holes were made in the silicone to ensure that the diameter of the hole created by the laser was consistent across all samples. Using the various means described below, cylindrical osteotomies (4 mm diameter by 5 mm depth) were created to mimic a typical implant bed. The laser (Lambda Pluser, Brendola, Italy) was used to create the three osteotomies using three typically utilized settings, designated hereafter as conditions 1, 2 and 3 (Table 1–Table 3). To compare to a conventional drilling technique (Bicon Drill System, Bicon, Boston, MA, USA), an osteotomy was generated with a 2 mm pilot drill at 1250 rpm with irrigation, followed by enlargement at 50 rpm in the absence of irrigation. As a positive control, to validate porosity measurements and histological observations, an abrasive diamond-coated dental burr (Strauss, model 836KR, Palm Coast, FL, USA) was also used, which provided highly damaged reference material for comparison with experimental samples. Negative control blocks that did not receive holes were also prepared. 
Blocks of bone (10×10×10 mm) harboring each hole were cut from the femoral head with a diamond coated rotary blade (0.2 mm by 15 mm diameter, Strauss Diamond) fitted to a heavy duty drill (Foredom K5300 Blackstone Industries, Bethel, CT, USA).\n\nWith the holes in the vertical orientation, the bone blocks were scanned at 40 kV/661 μA at 21 μm resolution using a Skyscan 1174 µCT unit (Bruker, Kontich, Belgium). Data were collected at 1° increments over the 360° with flat field, random movement and geometrical correction activated. After acquisition, the data were thresholded to a scale ranging between 350 and 2554 Hounsfield units, so as to maximize visualization of trabecular bone. Axial images corresponding to 20 μm sections were then obtained using NRecon software (Vers 1.5.1.1, Skyscan) and saved as JPEG files.\n\nIn an attempt to objectively quantify damage to bone, the change in trabecular porosity was measured at the margin of the cut. Trabecular structures collapse under extreme heat and abrasion caused by conventional drilling (Thompson, 2005; Heinemann et al., 2012). This results in a reduction in the porosity of trabecular bone which can be employed to quantify thermal and mechanical damage. To perform these measurements, a region of interest (ROI) was plotted on axial sections corresponding to a 0.4 – 0.5 mm margin around the hole (Figure 1a). This ROI was plotted on every 10th section from the surface of the hole to a point 2 mm below the surface (Figure 1b). The percent porosity was calculated on 10 × 20 μm sections along a 0.4 – 0.5 mm thick margin at the edge of each hole (Figures 1a, b) using CTAn software (Vers 1.8.1.4, Skyscan), and the means and standard deviations were calculated using GraphPad Prism version 5.00 for Windows (GraphPad Software, California, USA). Multiple pairwise comparisons within datasets were analyzed using one-way ANOVA followed by Dunnett’s post-test. 
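The porosity-based damage metric described above is simple to reproduce computationally. The following is an illustrative Python sketch, not the authors' code (the study used CTAn and GraphPad Prism), showing how percent porosity could be computed from a binarized ROI section and how groups of section measurements could be compared with a one-way ANOVA; all numeric values are invented for illustration.

```python
# Illustrative sketch only -- the study itself used CTAn and GraphPad Prism.
# Percent porosity from binarized ROI sections, then a one-way ANOVA.
import numpy as np
from scipy import stats

def percent_porosity(binary_roi):
    """binary_roi: 2-D array within the ROI mask, 1 = bone, 0 = pore."""
    return 100.0 * (binary_roi == 0).mean()

# Hypothetical porosity values (percent) for 10 sections per condition,
# loosely mimicking the reported pattern (uncut ~56%, drill much lower).
rng = np.random.default_rng(0)
uncut = rng.normal(56, 3, 10)   # negative control
drill = rng.normal(30, 3, 10)   # conventional dental drill
laser = rng.normal(54, 3, 10)   # one laser condition

f_stat, p_value = stats.f_oneway(uncut, drill, laser)
print(f"one-way ANOVA: F={f_stat:.1f}, p={p_value:.2e}")
```

A Dunnett post-test against a designated control group, as used in the paper, is available in recent SciPy releases as `scipy.stats.dunnett` (SciPy ≥ 1.11).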
P-values below 0.05 were designated statistically significant in all cases. Statistical tests and data plotting were performed using GraphPad.\n\na) The ROI plotted on each of the 10 axial sections encompasses a 0.4 – 0.5 mm margin around the circumference of each hole. b) Measurements are taken on every 10th section from the surface of the hole to a depth of 2 mm, resulting in 10 values. c) Negative (uncut) control demonstrating distinct trabecular architecture. d) Left panel: demonstrating a dense layer along the circumference of the hole caused by thermal damage due to friction (arrowed). Right panel: magnified image illustrating a dense compacted layer (arrowed). e) Left panel: hole cut with conventional dental drill, with similar dense layer. Right panel: magnified image illustrating a compacted layer (arrowed). f) Left panel: hole cut with laser at 2.5 Watts, demonstrating undamaged trabecular structures at the circumference of the hole. Right panel: magnified image. g) Laser condition 2 (6 Watts). h) Laser condition 3, (8 Watts).\n\nFollowing μCT measurements, bone blocks were washed with fresh saline and decalcified in 1M dibasic ethylene-diamine tetra-acetic acid at pH 8.0 for 4 weeks, then with 8% (v/v) formic acid for a further 5 days (Sigma, St Louis, MO, USA) until radiolucency. The tissue was chemically dehydrated through an ascending gradient of alcohols and was then cleared with Sub-X clearing agent (Surgipath Medical Industries Inc., Richmond, IL). Paraffin-embedded blocks (paraffin wax type 6, Richard-Allan Scientific; Kalamazoo, MI) were cut in 10 μm thick sections and floated onto Superfrost plus microscope slides (Fisher Scientific). Sections were baked at 60°C for one hour before clearing with citrus clearing agent (Richard-Allan Scientific) and rehydration with distilled water. Masson’s trichrome staining was performed using a commercially acquired kit (American Mastertech Scientific Inc., Lodi, CA). 
Permount with toluene (Fisher Scientific) was used as a mounting medium. Micrographs were generated using an upright microscope (Nikon Eclipse 80i fitted with a Retiga 2000 camera) running digital imaging software (Elements Vers 4.20, Nikon).\n\n\nResults\n\nMicroCT scanning of bone blocks revealed a classical trabecular bone structure that was readily visualized in axial reconstructions (Figures 1c–h). Upon inspection of the margin of the hole drilled with the abrasive bit (positive control), a distinct layer of compacted trabecular bone was evident, suggestive of heat damage (Figure 1d). This layer was also evident, albeit to a lesser extent, along the edges of the hole generated by the conventional dental drill (Figure 1e). Conversely, the trabecular structures were preserved along the edges of the holes generated by all 3 laser conditions (Figures 1f–h).\n\nTo quantify the extent of the damage caused by cutting, the percentage porosity was measured on 10 × 20 μm sections along a 0.4 – 0.5 mm thick margin at the edge of each hole (Figures 1a, b). The percentage porosity is reduced in compacted trabecular bone, providing a surrogate measure of heat and abrasive damage. Under the conditions of measurement described in the methods section, the negative control (uncut) bone sample ROI had a mean porosity of 56% (Figure 2a). In contrast, the abrasive diamond bit (positive control) sample ROI had approximately half the porosity seen in the uncut control (Figure 2a). When the experimental samples were measured, it was apparent that the hole generated by the conventional dental drill had a porosity significantly lower than the uncut control (p<0.01 indicated by ++ on the histogram in Figure 2a), but statistically indistinct from the specimen cut with the abrasive test bit. 
In contrast, those holes generated by laser had a trabecular porosity that was statistically similar to uncut bone, with conditions 1 and 2 exhibiting the highest porosities (p<0.005 compared to positive control, indicated by *** on the histogram in Figure 2a) and condition 3 showing slightly lower porosity but still statistically distinct from the abrasive control (p<0.05, indicated by * on the histogram in Figure 2a). Collectively, these data demonstrate that the laser preserves the trabecular structure at the margin of cuts, whereas conventional drilling causes trabecular compaction, probably due to thermal or abrasive damage.\n\na) Plot of porosity measurements with statistical analyses. Values represent mean porosity for the 10 measured sections per sample with error bars representing standard deviations (n=10). Statistics are one-way ANOVA with Dunnett’s post-test. Asterisks refer to comparison with abrasive bit (p<0.005=***, p<0.05=*). Crosses refer to comparison with negative (uncut) control (p<0.01=++). Panels b–g represent trichrome-stained 10 μm sections of cut margins. b) Uncut control bone. c–d) Bone cut with abrasive bit and dental drill respectively, demonstrating areas of destroyed trabecular bone with severe carbonization (arrowed). e–g) Bone cut with laser parameters 1–3 respectively, demonstrating a lack of trabecular compaction and clean margins. h) 100× original magnification of charred cell mass (arrowed) present extensively in dental drill and abrasive bit samples. i) 60× original magnification of sporadic areas of slight carbonization that occur with the laser.\n\nThe bone samples were then decalcified, paraffin embedded and subjected to histological analysis. Uncut bone (Figure 2b) had a distinct trabecular appearance when stained with Masson’s Trichrome, demonstrating areas of mature (blue) and remodeling (red) osteoid, typical of homeostatic bone tissue. 
Conversely, holes cut with the abrasive bit indicated distinct signs of trabecular collapse at the margin of the hole, with clear signs of severe carbonization on the bone tissue (Figure 2c, arrowed) and also in the marrow cavities adjacent to the cutting site. Localized carbonization was also detected on the sample cut with the conventional dental drill, but to a lesser degree than the abrasive bit (Figure 2d, arrowed). When visualized at high power, clusters of carbonized cells and charred debris were evident (Figure 2h). All holes cut with the laser lacked significant signs of carbonization and, where evident, it was minor and sporadic (Figures 2e–g). Qualitatively, the carbonization appeared to increase with increasing laser power (Figure 2i), but even at the highest setting, the carbonization was not as severe as the abrasive bit or the conventional dental drill.\n\n\nDiscussion\n\nLaser technology is potentially an attractive alternative to mechanical and electrosurgical approaches for dental osteotomy, but there is a lack of comparative preclinical and clinical studies (Ishikawa et al., 2008; Moslemi et al., 2017). Nevertheless, it has been suggested that the Er:YAG laser is particularly suited to dental applications because the wavelength of the light employed has the capacity to cut hydroxyapatite, but the energy is readily absorbed by water, thus minimizing the risk of soft tissue damage (Bornstein, 2003). A recent study compared the Er:YAG laser to standard mechanical cutting techniques on porcine rib explants, and demonstrated that the laser generated a cut with well-defined trabecular spaces at the margin. In contrast, drilling resulted in what was described by the investigators as a “smear-like surface” with no clear trabecular patterning (Panduric et al., 2014). The investigators also reported virtually no carbonization at energies in excess of those employed in this study (1000 mJ versus 250–400 mJ). Later, Baek et al. 
reported the same qualitative differences in bone micro-architecture at the cut margin, when targeting the mandibular ridge of live porcine subjects (Baek et al., 2015). The Baek study further proposed that the open architecture of the cut margin could facilitate bleeding, which in turn could facilitate healing. While highly informative, the Panduric and Baek studies employed electron microscopy to evaluate the cut margin and data were limited to qualitative evaluation. The results presented here corroborate the findings of both reports, but we also offer the novel contribution of a quantitative appraisal of bone architecture.\n\nWe reasoned that extreme exposure to thermal and abrasive energy would result in local trabecular collapse that could be measurable as a function of reduced porosity. Indeed, the compaction of trabecular structure is a well-known forensic indicator of bones subjected to excessive heat (Thompson, 2005). Using high resolution μCT scans, it was possible to define an ROI (Figure 1a, b) that corresponded to the cut margin in cylindrical osteotomies performed with 3 standard laser parameters, a conventional dental drill and a highly abrasive diamond bit. We then employed standard histomorphometric software to measure the porosity in 10 virtual axial cross sections for each condition. In support of the rationale, we found that the highly abrasive diamond bit caused significantly reduced trabecular porosity as compared to uncut bone (Figure 2a). Furthermore, we found that conventional drilling caused more trabecular compaction than all of the laser conditions (Figure 2a). There were no statistically significant differences in trabecular porosity between the laser energies employed.\n\nAnother sign of heat damage is carbonization. Examination of the histological sections showed localized carbonization present on the sample cut with the conventional dental drill (Figure 2d, h) and extensively carbonized tissue with the abrasive bit (Figure 2c). 
All experimental samples cut with the laser lacked significant signs of carbonization (Figures 2e–g), but at high laser energy, a thin carbonized layer was evident on some surfaces (Figure 2i).\n\nWhile the data presented here and the work of the aforementioned groups suggest that the Er:YAG laser results in minimized deformation of bone tissue and accelerated healing, a contrasting study suggests that Er:YAG cuts could slow healing through thermal damage of a thin layer of tissue (Martins et al., 2011). While surprising, the reason for these contrasting observations probably arises from distinctions between the structure of the bone tissues analyzed. In the Martins study, a qualitative appraisal was made on cortical bone of rodents, whereas the Baek study and the data presented here, focus on the structure of trabecular bone, which is more typical of the structure of the mandible in larger animals, including humans. We suggest that cortical bone offers a flat, uninterrupted surface for accumulation of thermal damage whereas the complex surface of trabecular bone would be expected to mask a significant area from the direct effects of the electromagnetic radiation.\n\nThe evidence presented in this study suggests that the use of the Er:YAG laser preserves trabecular architecture at the cut margin and is therefore likely more suitable for osteotomy than the conventional dental hand-piece. We also propose that a combination of μCT scanning and measurement of cut margin porosity represents a useful quantitative measure of thermal and mechanical destruction caused by bone-cutting tools. 
Further studies are needed to confirm these predictions in live animal subjects.\n\n\nData availability\n\nAvailable raw datasets on Open Science Framework, DOI, 10.17605/OSF.IO/PB8V9 (Gregory, 2017):\n\n‘Abrasive drill’: Raw scans of the bone blocks, cut with the abrasive tool\n\n‘Uncut’: Raw scans of uncut bone\n\nDental drill: Raw scans of the bone blocks, cut with the dental drill\n\nLaser condition 1: Er:YAG laser condition 1\n\nLaser condition 2: Er:YAG laser condition 2\n\nLaser condition 3: Er:YAG laser condition 3\n\nUncropped images of Figure 2h and Figure 2i.",
"appendix": "Competing interests\n\n\n\nJZ and MS are Co-directors of the International Course in Laser Dentistry (Cyprus) and Diplomates of the American Board of Laser Surgery. The remaining authors declare no potential conflicts of interest.\n\n\nGrant information\n\nThis work was funded in part by Institute for Regenerative Medicine Research Support Funds provided by Texas A&M Health Science Center (CAG).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nThe data presented in this manuscript has also been submitted to the University of Genoa (Department of Laser Surgery and Laser Therapy, University of Genoa, Largo Rosanna Benzi 10, Pad. IV 16132, Genoa, Italy) to satisfy in part the requirements of a Master’s Degree in Laser Dentistry that has since been awarded to JZ.\n\n\nSupplementary material\n\nSupplementary File 1: Percent porosity calculated from the raw data used to generate Figure 2a.\n\nClick here to access the data.\n\n\nReferences\n\nBaek KW, Deibel W, Marinov D, et al.: A comparative investigation of bone surface after cutting with mechanical tools and Er:YAG laser. Lasers Surg Med. 2015; 47(5): 426–432. PubMed Abstract | Publisher Full Text\n\nBornstein ES: Why wavelength and delivery systems are the most important factors in using a dental hard-tissue laser: a literature review. Compend Contin Educ Dent. 2003; 24(11): 837–838, 841, 843 passim; quiz 848. PubMed Abstract\n\nColuzzi DJ, Convissar RA: Lasers in clinical dentistry. Dent Clin North Am. 2004; 48(4): xi–xii. PubMed Abstract | Publisher Full Text\n\nFeatherstone JD, Nelson DG: Laser effects on dental hard tissues. Adv Dent Res. 1987; 1(1): 21–26. PubMed Abstract | Publisher Full Text\n\nGholami A, Baradaran-Ghahfarokhi M, Ebrahimi M: Thermal Effects of Laser-osteotomy on Bone: Mathematical Computation Using Maple. J Med Signals Sens. 2013; 3(4): 262–268. 
PubMed Abstract | Free Full Text\n\nGregory C: Zeitouni et al. (2017) F1000. 2017. Data Source\n\nHeinemann F, Hasan I, Kunert-Keil C, et al.: Experimental and histological investigations of the bone using two different oscillating osteotomy techniques compared with conventional rotary osteotomy. Ann Anat. 2012; 194(2): 165–170. PubMed Abstract | Publisher Full Text\n\nHibst R, Keller U: Experimental studies of the application of the Er:YAG laser on dental hard substances: I. Measurement of the ablation rate. Lasers Surg Med. 1989; 9(4): 338–344. PubMed Abstract | Publisher Full Text\n\nIshikawa I, Aoki A, Takasaki AA: Clinical application of erbium:YAG laser in periodontology. J Int Acad Periodontol. 2008; 10(1): 22–30. PubMed Abstract\n\nMartins GL, Puricelli E, Baraldi CE, et al.: Bone healing after bur and Er:YAG laser ostectomies. J Oral Maxillofac Surg. 2011; 69(4): 1214–1220. PubMed Abstract | Publisher Full Text\n\nMoslemi N, Shahnaz A, Masoumi S, et al.: Laser-Assisted Osteotomy for Implant Site Preparation: A Literature Review. Implant Dent. 2017; 26(6): 129–136. PubMed Abstract | Publisher Full Text\n\nPanduric DG, Juric IB, Music S, et al.: Morphological and ultrastructural comparative analysis of bone tissue after Er:YAG laser and surgical drill osteotomy. Photomed Laser Surg. 2014; 32(7): 401–408. PubMed Abstract | Publisher Full Text\n\nRomanos G: Current concepts in the use of lasers in periodontal and implant dentistry. J Indian Soc Periodontol. 2015; 19(5): 490–494. PubMed Abstract | Publisher Full Text | Free Full Text\n\nThompson TJ: Heat-induced dimensional changes in bone and their consequences for forensic anthropology. J Forensic Sci. 2005; 50(5): 1008–1015. PubMed Abstract | Publisher Full Text\n\nYoshino T, Aoki A, Oda S, et al.: Long-term histologic analysis of bone tissue alteration and healing following Er:YAG laser irradiation compared to electrosurgery. J Periodontol. 2009; 80(1): 82–92. PubMed Abstract | Publisher Full Text"
}
|
[
{
"id": "24950",
"date": "22 Aug 2017",
"name": "Robert A. Convissar",
"expertise": [],
"suggestion": "Approved",
    "report": "Approved\n\nExcellent paper. Well written. Well researched. Excellent methodology and scientific protocol. The authors should be proud of their work.\nI would like to see a comparison of this to ErCrYSGG as a next step. My only comment is the age of the lambs-was the age appropriate and comparative to a human adult? If the age of the lamb was young-comparative to a child or teen, the results might be quite different with a more mature specimen.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
},
{
"id": "29182",
"date": "19 Jan 2018",
"name": "Antonio C. Scarano",
"expertise": [],
"suggestion": "Approved",
    "report": "Approved\n\nThis manuscript reports an interesting study evaluating the Er:YAG laser compared with conventional dental drilling by micro-computed tomographic and histological techniques in an ovine femoral head model.\nIntroduction The introduction has been well written.\nMaterials and Methods: Specific Comments: Why did you choose to prepare osteotomies in an ovine femoral head model? How many samples were used? Did you perform a power analysis to determine the required number of specimens per group?\nDiscussion The discussion has been well written.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-1133
|
https://f1000research.com/articles/6-1131/v1
|
17 Jul 17
|
{
"type": "Research Article",
"title": "Serum complexed and free prostate specific antigen levels are lower in female elite athletes in comparison to control women",
"authors": [
"Emma Eklund",
"Eleftherios P. Diamandis",
"Carla Muytjens",
"Sarah Wheeler",
"Anu Mathew",
"Martin Stengelin",
"Eli Glezer",
"Galina Nikolenko",
"Marshall D. Brown",
"Yingye Zheng",
"Angelica Lindén Hirschberg",
"Emma Eklund",
"Carla Muytjens",
"Sarah Wheeler",
"Anu Mathew",
"Martin Stengelin",
"Eli Glezer",
"Galina Nikolenko",
"Marshall D. Brown",
"Yingye Zheng"
],
"abstract": "Background: We hypothesize that prostate specific antigen (PSA), a protein that it is under regulation by androgens, may be differentially expressed in female elite athletes in comparison to control women. Methods: We conducted a cross-sectional study of 106 female athletes and 114 sedentary age-matched controls. Serum from these women was analyzed for complexed prostate specific antigen (cPSA) and free prostate specific antigen (fPSA), by fifth generation assays with limits of detection of around 6 and 140 fg/mL, respectively. A panel of estrogens, androgens and progesterone in the same serum was also quantified by tandem mass spectrometry. Results: Both components of serum PSA (cPSA and fPSA) were lower in the elite athletes vs the control group (P=0.033 and 0.013, respectively). Furthermore, estrone (p=0.003) and estradiol (p=0.004) were significantly lower, and dehydroepiandrosterone (p=0.095) and 5-androstene-3β, 17β-diol (p=0.084) tended to be higher in the athletes vs controls. Oral contraceptive use was similar between groups and significantly associated with increased cPSA and fPSA in athletes (p= 0.046 and 0.009, respectively). PSA fractions were not significantly associated with progesterone changes. The Spearman correlation between cPSA and fPSA in both athletes and controls was 0.75 (P < 0.0001) and 0.64 (P < 0.0001), respectively. Conclusions: Elite athletes have lower complexed and free PSA, higher levels of androgen precursors and lower levels of estrogen in their serum than sedentary control women. Abbreviations: cPSA, complexed PSA; fPSA, free PSA; PCOS, polycystic ovarian syndrome; E1, estrone; E2, estradiol; DHEA, dehydroepiandrosterone, Testo, testosterone; DHT, dihydrotestosterone; PROG, progesterone; Delta 4, androstenedione; Delta 5, androst-5-ene-3β, 17β-diol; BMD, body mineral density; LLOQ, lower limit of quantification; ULOQ, upper limit of quantification; LOD, limit of detection; ACT, α1-antichymotrypsin",
"keywords": [
"prostate specific antigen",
"elite female athletes",
"hyperandrogenism",
"fifth-generation PSA assays",
"serum PSA in women",
"Olympic teams"
],
"content": "Introduction\n\nProstate specific antigen (PSA) is a well-known and clinically useful biomarker of prostate adenocarcinoma1. PSA circulates in blood of males as a complex with alpha 1 antichymotrypsin (cPSA) (approx. 80% of total) or as free, non-complexed PSA (fPSA) (approx. 20% of total)2,3. It has now been well-documented that PSA is also produced by many female tissues, including breast, periurethral, salivary and thyroid tissues, and by many tumors4,5. The PSA gene is up-regulated by androgens and progestins in breast and other female tissues, as well as in model systems such as breast carcinoma cell lines6–12. Serum PSA in women fluctuates during the menstrual cycle, and these changes are attributed to up-regulation by progesterone during the luteal phase13–15.\n\nPSA also circulates in female serum as complexed and free PSA16, but its concentrations are exceedingly low (around 1pg/mL), precluding accurate determination with third generation PSA assays17,18. In some circumstances, such as in women with hyperandrogenic syndromes, including polycystic ovarian syndrome (PCOS) and hirsutism, it has been shown that total PSA in female serum is elevated and this finding may be used as an aid to disease diagnosis19–25.\n\nRecently, fifth generation PSA assays have been developed by many groups, allowing accurate PSA determinations in the low fg/mL range26–31. These assays can quantify both complexed and free PSA in serum and urine of females and allow examination of the possible role of female PSA in healthy and disease states. 
We recently confirmed the diagnostic value of cPSA and fPSA in women with PCOS by using one of these assays32.\n\nIt has previously been speculated that female elite athletes may have higher circulating androgen levels than sedentary age-matched females, and that hyperandrogenic syndromes such as PCOS, congenital adrenal hyperplasia and 46XY disorders of sex development are more common in elite athletes33–37.\n\nIn this paper we speculated that serum cPSA and fPSA, due to their in-vivo up-regulation by androgens and progestins, as seen with female-to-male transsexuals treated with testosterone38,39, may represent an integrated index of total androgenic/progestational activity in female tissues. We thus examine here the levels of serum complexed and free PSA in 106 female elite athletes and 114 age-matched sedentary controls, along with levels of estrogens, androgens and progesterone. The observed differences in serum cPSA and fPSA between athletes and controls were examined in the context of oral contraceptive use and ovulatory cycles.\n\n\nMaterials and methods\n\nThis project was approved by the Regional Ethics Committee of the Karolinska Institutet, Stockholm, Sweden (EPN 2011-1426-3). Serum samples were collected from 106 Swedish Olympic female athletes, representative of the Swedish participation in the summer and winter Olympic games, who were recruited in connection with pre-Olympic training camps. Participants were at least 18 years of age. Samples were also collected from 114 age-matched, healthy non-athletic female controls (maximum of 2 hours endurance and/or strength training per week and no prior participation in elite level competition). Recruitment started in November, 2011 and was completed by April, 2015.\n\nThe subjects were investigated at the Women’s Health Research Unit, Karolinska University Hospital or in connection with training camps. 
Data on ethnicity, past and present health problems, injuries, medications, gynecological history (bleeding pattern, date of last menstruation, pregnancies, hormonal contraceptive use) and symptoms of hyperandrogenism (hirsutism, acne), were collected by a general health questionnaire. Furthermore, data on sport discipline, training hours per week, achieved sport performance and goals were collected from the Olympic athletes. A fasting blood sample was collected by a standard venipuncture between 7–9AM. Body composition (body fat, muscle mass, BMD) was investigated by dual X-ray absorptiometry (DXA), at the Department of Radiology, Karolinska University Hospital, Solna, Stockholm.\n\nSerum from both athletes and controls was analyzed by tandem mass spectrometry for the following steroid hormones and metabolites: estrone (E1), estradiol (E2), testosterone (Testo), dehydroepiandrosterone (DHEA), androstenedione (Delta 4), androst-5-ene-3β, 17β-diol (Delta 5), dihydrotestosterone (DHT), and progesterone (PROG).\n\nSerum samples were kept frozen at -80°C until thawed for analysis. Details of the tandem mass spectrometry analysis have been described elsewhere40. A progesterone concentration ≥ 5.3 ng/mL in the luteal phase of the menstrual cycle was taken as an indication of successful ovulation.\n\nAmong the athletes, 65 were not using contraceptives and 41 were using hormonal contraceptives (39%). Among the control group, 69 were not using contraceptives and 45 were using contraceptives (39%). The type of oral contraceptive used was quite variable and included the following combinations: Ethinylestradiol + Levonorgestrel, Ethinylestradiol + Etonogestrel, Ethinylestradiol + Drospirenon, Ethinylestradiol + Dienogest, Ethinylestradiol + Norgestimate, Ethinylestradiol + Cyproterone, Ethinylestradiol + Nomegestrol, Ethinylestradiol + Noretisterone, or the following single progestins: Desogestrel, Etonogestrel, Levonorgestrel. 
Due to the small number of participants, we did not analyze the data according to type of contraceptive used.\n\nOne vial of 200µL serum per sample was provided blinded to Meso Scale Diagnostics (MSD) for cPSA and fPSA measurement using MSD’s MULTI-ARRAY® electrochemiluminescence technology in the S-PLEXTM format, which allows quantitation of previously unmeasurable levels of biomarkers with fg/mL sensitivity27,41. The samples were thawed and centrifuged at 10,000g for 10 minutes at 4˚C before being aliquoted into low retention 96-well round bottom plates for subsequent testing. Plates were immediately frozen on dry ice and stored at -80˚C until testing. cPSA and fPSA assays were calibrated to the WHO International Standard for prostate-specific antigen, with 90% bound to alpha1-antichymotrypsin and 10% in the free form (National Institute for Biological Standards and Control, [NIBSC], code 96/670, Hertfordshire, England) and the WHO International Standard for prostate-specific antigen free (NIBSC, code 96/668, Hertfordshire, England), respectively. Assay characteristics were determined prior to sample testing.\n\nFor each assay, 8-point calibration curves were included on each plate, and the data were fitted with a weighted 4-parameter logistic curve fit. Limit of detection (LOD) is a calculated concentration corresponding to the average signal 2.5 standard deviations above the background (zero calibrator). Lower limit of quantitation (LLOQ) and upper limit of quantitation (ULOQ) were established for the plate lot by measuring multiple levels of calibrator near the expected LLOQ and ULOQ. LLOQ and ULOQ are, respectively, the lowest and highest concentration of calibrator tested which has a %CV of 20% or less, with recovered concentration within 70–130%. The LOD was 5.7 and 140 fg/mL for cPSA and fPSA assays, respectively. The LLOQ was 17 and 480 fg/mL for cPSA and fPSA assays, respectively. 
The ULOQ was 72,000 and 2,400,000 fg/mL for cPSA and fPSA assays, respectively. Precision was determined from testing of three internal quality control samples that span the detectable range and is expressed as the % CV from 16 specimen assay runs, with 2 operators over 3 testing days. % CVs were between 11–13% for cPSA and 6–23% for fPSA.\n\nSerum, EDTA plasma, and heparin plasma samples (7–8 samples total) were spiked with calibrator at two or three concentrations. The non-complexing form of PSA (Scripps Laboratories, San Diego, CA; #90024) that does not bind to alpha 1-antichymotrypsin (ACT) was used in spike recovery experiments for the fPSA assay. Average spike recoveries for the fPSA and cPSA assays were 88% and 90%, respectively. Serum, EDTA plasma, and heparin plasma samples (7–8 samples total) were diluted 2, 4 and 8-fold. Average dilution linearities for the fPSA and cPSA assays were 114% and 109%, respectively.\n\nThe samples and calibrator dilutions were assayed in duplicate and all samples were measured for cPSA and fPSA. Measurement of cPSA was performed with a 2-fold dilution of the samples; fPSA measurement was performed on neat samples. Concentrations of biomarkers in each sample were calculated from the calibrator curves taking into account sample dilutions. The mean of two measurements was derived for each analyte in each sample and reported in fg/mL.\n\nWe first categorized the athletes and control subjects by several clinical and demographic variables and hormonal measurements. When comparing variables among athletes and controls, the Wilcoxon rank sum test was used to determine if there were significant differences. Correlations of parameters between athletes and controls were examined using the Spearman correlation coefficient.\n\nLower limits of detection (LOD) for cPSA and fPSA assays are 6 fg/mL and 140 fg/mL, respectively. 
Since measurements that fall below these values are unreliable, we set marker measurements that fall below the LOD to LOD/2. Among all samples, 102/206 serum fPSA values but none of the serum cPSA values were below the LOD of the method used. Similarly, 110 out of 220 progesterone values were below the detection limit of the method (0.10 ng/mL) and these values were adjusted to 0.05 ng/mL. Rank-based non-parametric methods were employed so that the results would be robust to these transformations.\n\n\nResults\n\nThe included Excel file contains all anthropometric and biochemical data used to perform the statistical analyses. Significant differences between athletes and controls, irrespective of oral contraceptive use, were noted for the following variables: total BMD, spine BMD, lean mass (all parameters were elevated in athletes; p<0.001) and fat percentage (decreased in athletes; p<0.001).\n\nInitial examination of our data revealed a significant effect of oral contraceptives on hormonal and PSA measurements. For this reason, we stratified the hormonal and PSA measurements in both the athlete and control groups according to oral contraceptive use (data shown in Table 1 and Table 2). E1 and E2 levels were significantly reduced in athletes (p=0.003 and 0.004, respectively). Other measurements were not significantly different between athletes and controls that were not taking oral contraceptives, but there was a trend for athletes to have higher DHEA (p=0.095) and Delta 5 (p=0.084). Table 2 shows that both cPSA and fPSA were significantly lower in athletes in comparison to control subjects, irrespective of hormonal contraceptive use. Median cPSA was 776 fg/mL in athletes and 1249 fg/mL in controls (p=0.003). 
Median fPSA was 70 fg/mL in athletes and 169.5 fg/mL in controls (p=0.013).\n\n1See abbreviation list for full details\n\n2Number of participants\n\n1P-values calculated using Wilcoxon signed rank test.\n\nTable 3 shows that in athletes, the use of oral contraceptives significantly increased both cPSA (from 776 to 1812 fg/mL; p=0.046) and fPSA (from 70 to 216 fg/mL; p=0.009). In control subjects, we noticed similar trends (from 1249 fg/mL to 2002 fg/mL for cPSA and from 169.5 fg/mL to 220 fg/mL for fPSA) but the differences were not significant. The elevation of serum PSA with oral contraceptive use in women has been reported before9.\n\nNumbers represent medians (25th, 75th percentiles).\n\n1P-values calculated using Wilcoxon signed rank test.\n\n2Numbers represent medians (25th–75th percentiles).\n\nThe distribution of all clinical and demographic variables between athletes and controls, stratified by oral contraceptive use, is shown in Supplementary Figure 1. The distribution of all hormonal and PSA measurements in controls and athletes, stratified by oral contraceptive use, is shown in Supplementary Figure 2 and Supplementary Figure 3.\n\nWe then examined the correlation between cPSA and fPSA in serum of both athletes and controls. The Spearman correlation coefficients (rs) were highly significant for both groups of subjects (Figure 1). For athletes, rs=0.75 (p < 0.001) and for controls, rs=0.64 (p < 0.001). We also examined the correlation between the two PSA variables and all other clinical, demographic and hormonal variables in subjects not taking oral contraceptives (Supplementary Table 1). The only statistically significant correlations (p < 0.01; all positive) were between cPSA and fPSA and age, and cPSA with Testo, Delta4 and Delta5 in the athletes group, and between cPSA and fPSA with DHEA, Testo, Delta4 and Delta5 in the control group. 
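As a concrete illustration of the rank-based handling described in the Methods — below-LOD measurements set to LOD/2, then rank statistics such as the Spearman correlation — the following self-contained Python sketch uses invented measurement values; only the 140 fg/mL fPSA LOD is taken from the text, and the function names are hypothetical:

```python
# Hypothetical sketch of the censored-value handling described in the Methods:
# values below the assay limit of detection (LOD) are replaced with LOD/2, and
# rank-based statistics are then used so results stay robust to that choice
# (all censored values simply tie at the bottom of the ranking).

def impute_below_lod(values, lod):
    """Replace measurements below the limit of detection with LOD/2."""
    return [v if v >= lod else lod / 2.0 for v in values]

def ranks(xs):
    """1-based midranks (ties receive the average of their ranks)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0  # average 1-based rank of the tied run
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rank correlation = Pearson correlation of the midranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

fpsa_lod = 140.0  # fg/mL, as reported for the fPSA assay
fpsa = impute_below_lod([90.0, 210.0, 45.0, 480.0], fpsa_lod)
# the two below-LOD values are replaced by 70 fg/mL (LOD/2)
```

The LOD/2 substitution preserves the ordering between censored and detected values, which is why rank-based tests (Wilcoxon, Spearman) are insensitive to the exact substituted number.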
The correlations between all of the available parameters in the control and athlete groups not taking oral contraceptives are further depicted in Figure 2.\n\nThe pairs for which ovals are displayed indicate a Spearman correlation that is significantly different from zero, with p-value < 0.01. See also Table 1 for parameter description.\n\nWe also examined whether cPSA and fPSA correlated with progesterone. We found no correlation, in either the control or athlete groups, as stratified by hormonal contraceptive use (Supplementary Figure 4). We further dichotomously classified progesterone as being ≥ 5.3 ng/mL (indicative of luteal phase) or < 5.3 ng/mL in athletes and controls not using oral contraceptives, and examined any association with cPSA or fPSA. We found no association (Supplementary Figure 5).\n\n\nDiscussion\n\nPSA is used widely for diagnosis and monitoring of prostatic adenocarcinoma in males1. Quantification of PSA in female serum is problematic, due to its very low concentrations4,5. Newly developed fifth generation PSA assays demonstrate enough sensitivity for quantifying both complexed PSA and free PSA in serum of normal women27. We have recently shown that median cPSA concentration in normal women is around 900 fg/mL, while fPSA is about ten times lower (70 fg/mL). In women with PCOS, serum PSA concentration is elevated by about 3-fold32.\n\nIt is well-known that the PSA gene is up-regulated by androgens7–12. Previous studies have shown that women who are taking androgenic steroids to change sex (e.g. young female-to-male transsexuals) demonstrate high elevations of serum PSA and urine PSA37,38. We have also shown previously that women who are taking oral contraceptives upregulate their PSA at both the tissue and the serum level9. 
Based on this data, we hypothesized that serum PSA concentration may represent a novel integrated index of androgenic stimulation in normal women.\n\nIt has been suggested that elite athletes more frequently suffer from various hyperandrogenic syndromes such as PCOS, congenital adrenal hyperplasia and 46XY disorders of sex development33–37. In this paper, we examine the possible differences in serum PSA between Olympic elite athletes and sedentary control women. For this, we measured both cPSA and fPSA, as well as a panel of androgens, estrogens, and progesterone in serum.\n\nWe have shown that in athletes, E1 and E2 concentrations are significantly reduced, whereas levels of DHEA and Delta 5 tend to be slightly elevated. Moreover, we identified for the first time a significant decrease in serum cPSA and fPSA in elite athletes, in comparison to the control group. We have also confirmed the previous finding9 that oral contraceptive use increases both cPSA and fPSA in serum of athletes (but the differences were not significant in the control group, although the same trend was observed).\n\nRecently, we speculated that serum and urine PSA in female athletes could be used to test for doping with androgenic steroids42. In this work, we did not examine this possibility due to lack of samples from provenly doping athletes. However, now that serum PSA in women can be reliably quantified, this parameter could be considered for inclusion into the athlete’s biological passport, as suggested by Sottas et al43.\n\nOur finding that serum E1 and E2 are lower in athletes or exercising individuals in comparison to controls is supported by previous literature43–45. An explanation for this phenomenon may be related to menstrual cycle disturbances induced by strenuous exercise and stress during the competition. Such findings have prompted some to speculate that the lower incidence of breast cancer in individuals who exercise may be due to reduced estrogen and androgen levels43–45. 
Previously, we also established a connection between tumoral or nipple aspirate PSA and breast cancer prognosis46–49. Clearly, the interconnections between exercise, androgen, estrogen and serum PSA levels and breast cancer need to be better defined.\n\nIt has been reported before that serum PSA fluctuates with the menstrual cycle, likely due to up-regulation by progesterone in the luteal phase13–15. We examined the correlation and the association of cPSA and fPSA with serum progesterone, as a continuous and dichotomous variable, respectively. Our results showed that there was no correlation or association between either cPSA or fPSA and progesterone or with the frequency of ovulatory cycles (ovulation being defined as progesterone ≥ 5.3 ng/mL in the luteal phase).\n\nWhile we found that many anthropometric measurements are different between athletes and controls (Table 1), among steroid measurements, only E1, E2, and to a lesser extent DHEA and Delta5 concentrations were different between the two groups. However, we identified a significant difference in both cPSA and fPSA between elite athletes and controls. The differences in cPSA and fPSA levels between these two groups do not seem to be associated with hyperandrogenism or the menstrual cycle. It will be interesting to examine, in the future, what other parameters are responsible for the differential concentrations of cPSA and fPSA in serum of these women. Since PSA in women originates mostly from the breast, differences in breast size, which was one parameter we did not study, could be one possible cause for the different levels of these proteins.\n\n\nData availability\n\nDataset 1: Anthropometric and biochemical data for athletes and controls. The data file contains all available anthropometric and biochemical data for all athletes and controls included in this study, and includes free PSA and complexed PSA. It was used to do statistics and to derive the results of this paper. 
DOI: 10.5256/f1000research.11821.d16803950\n\n\nConsent\n\nThis study was approved by the Regional Ethics Committee of the Karolinska Institutet, Stockholm, Sweden (EPN 2011-1426-3). Written informed consent was obtained from all participants.",
"appendix": "Competing interests\n\nAuthors SW, AM, MS, EG and GN are employees of Meso Scale Diagnostics. The other authors have no conflicts of interest to declare.\n\n\nGrant information\n\nThis work was funded by the Partnership for Clean Competition (grant to EPD).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nSupplementary material\n\nSupplementary Figure 1: Distribution of age, weight and body composition variables between athletes and controls, stratified by oral contraceptive use.\n\nSupplementary Figure 2: Distributions of steroid hormone variables among athletes and controls, stratified by oral contraceptive use.\n\nSupplementary Figure 3: Distributions of serum PSA measurements among athletes and controls, stratified by oral contraceptive use.\n\nSupplementary Figure 4: Correlation between PSA values and progesterone in athletes and controls, stratified by hormonal contraceptive use. No correlation was found.\n\nSupplementary Figure 5: PSA values stratified by progesterone < 5.3 vs. ≥ 5.3 ng/mL in athletes and controls, for women who do not take hormonal contraceptives. P-values are calculated using Wilcoxon tests.\n\nSupplementary Table 1: Spearman correlation between PSA and other variables among athletes and controls for women not taking hormonal contraceptives. The coloured cells indicate a Spearman correlation that is significantly different from zero with p-value < 0.01.\n\n\nReferences\n\nDiamandis EP: Prostate-specific Antigen: Its Usefulness in Clinical Medicine. Trends Endocrinol Metab. 1998; 9(8): 310–6. 
PubMed Abstract | Publisher Full Text\n\nStenman UH, Leinonen J, Alfthan H, et al.: A complex between prostate-specific antigen and alpha 1-antichymotrypsin is the major form of prostate-specific antigen in serum of patients with prostatic cancer: assay of the complex improves clinical sensitivity for cancer. Cancer Res. 1991; 51(1): 222–6. PubMed Abstract\n\nLilja H, Christensson A, Dahlén U, et al.: Prostate-specific antigen in serum occurs predominantly in complex with alpha 1-antichymotrypsin. Clin Chem. 1991; 37(9): 1618–25. PubMed Abstract\n\nDiamandis EP, Yu H: Nonprostatic sources of prostate-specific antigen. Urol Clin North Am. 1997; 24(2): 275–82. PubMed Abstract | Publisher Full Text\n\nBlack MH, Diamandis EP: The diagnostic and prognostic utility of prostate-specific antigen for diseases of the breast. Breast Cancer Res Treat. 2000; 59(1): 1–14. PubMed Abstract | Publisher Full Text\n\nYu H, Diamandis EP, Zarghami N, et al.: Induction of prostate specific antigen production by steroids and tamoxifen in breast cancer cell lines. Breast Cancer Res Treat. 1994; 32(3): 291–300. PubMed Abstract | Publisher Full Text\n\nCleutjens KB, van der Korput HA, van Eekelen CC, et al.: An androgen response element in a far upstream enhancer region is essential for high, androgen-regulated activity of the prostate-specific antigen promoter. Mol Endocrinol. 1997; 11(2): 148–61. PubMed Abstract | Publisher Full Text\n\nCleutjens KB, van Eekelen CC, van der Korput HA, et al.: Two androgen response regions cooperate in steroid hormone regulated activity of the prostate-specific antigen promoter. J Biol Chem. 1996; 271(11): 6379–88. PubMed Abstract | Publisher Full Text\n\nYu H, Diamandis EP, Monne M, et al.: Oral contraceptive-induced expression of prostate-specific antigen in the female breast. J Biol Chem. 1995; 270(12): 6615–8. 
PubMed Abstract | Publisher Full Text\n\nCleutjens KB, van der Korput HA, Ehren-van Eekelen CC, et al.: A 6-kb promoter fragment mimics in transgenic mice the prostate-specific and androgen-regulated expression of the endogenous prostate-specific antigen gene in humans. Mol Endrocrinol. 1997; 11(9): 1256–65. PubMed Abstract | Publisher Full Text\n\nZhang J, Zhang S, Murtha PE, et al.: Identification of two novel cis-elements in the promoter of the prostate-specific antigen gene that are required to enhance androgen receptor-mediated transactivation. Nucleic Acids Res. 1997; 25(15): 3143–50. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSchuur ER, Henderson GA, Kmetec LA, et al.: Prostate-specific antigen expression is regulated by an upstream enhancer. J Biol Chem. 1996; 271(12): 7043–51. PubMed Abstract | Publisher Full Text\n\nZarghami N, Grass L, Sauter ER, et al.: Prostate-specific antigen in serum during the menstrual cycle. Clin Chem. 1997; 43(10): 1862–7. PubMed Abstract\n\nAksoy H, Akçay F, Umudum Z, et al.: Changes of PSA concentrations in serum and saliva of healthy women during the menstrual cycle. Ann Clin Lab Sci. 2002; 32(1): 31–6. PubMed Abstract\n\nNagar R, Msalati AA: Changes in Serum PSA During Normal Menstrual Cycle. Indian J Clin Biochem. 2013; 28(1): 84–9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGiai M, Yu H, Roagna R, et al.: Prostate-specific antigen in serum of women with breast cancer. Br J Cancer. 1995; 72(3): 728–31. PubMed Abstract | Publisher Full Text | Free Full Text\n\nYu H, Diamandis EP, Wong PY, et al.: Detection of prostate cancer relapse with prostate specific antigen monitoring at levels of 0.001 to 0.1 microG./L. J Urol. 1997; 157(3): 913–8. 
PubMed Abstract | Publisher Full Text\n\nFerguson RA, Yu H, Kalyvas M, et al.: Ultrasensitive detection of prostate-specific antigen by a time-resolved immunofluorometric assay and the Immulite immunochemiluminescent third-generation assay: potential applications in prostate and breast cancers. Clin Chem. 1996; 42(5): 675–84. PubMed Abstract\n\nObiezu CV, Scorilas A, Magklara A, et al.: Prostate-specific antigen and human glandular kallikrein 2 are markedly elevated in urine of patients with polycystic ovary syndrome. J Clin Endocrinol Metab. 2001; 86(4): 1558–61. PubMed Abstract | Publisher Full Text\n\nVural B, Ozkan S, Bodur H: Is prostate-specific antigen a potential new marker of androgen excess in polycystic ovary syndrome? J Obstet Gynaecol Res. 2007; 33(2): 166–73. PubMed Abstract | Publisher Full Text\n\nBahceci M, Bilge M, Tuzcu A, et al.: Serum prostate specific antigen levels in women with polycystic ovary syndrome and the effect of flutamide+desogestrel/ethinyl estradiol combination. J Endocrinol Invest. 2004; 27(4): 353–6. PubMed Abstract | Publisher Full Text\n\nKocak M, Tarcan A, Beydilli G, et al.: Serum levels of prostate-specific antigen and androgens after nasal administration of a gonadotropin releasing hormone-agonist in hirsute women. Gynecol Endocrinol. 2004; 18(4): 179–85. PubMed Abstract | Publisher Full Text\n\nBurelli A, Cionini R, Rinaldi E, et al.: Serum PSA levels are not affected by the menstrual cycle or the menopause, but are increased in subjects with polycystic ovary syndrome. J Endocrinol Invest. 2006; 29(4): 308–12. PubMed Abstract | Publisher Full Text\n\nMardanian F, Heidari N: Diagnostic value of prostate-specific antigen in women with polycystic ovary syndrome. J Res Med Sci. 2011; 16(8): 999–1005. 
PubMed Abstract | Free Full Text\n\nBili E, Dampala K, Iakovou I, et al.: The combination of ovarian volume and outline has better diagnostic accuracy than prostate-specific antigen (PSA) concentrations in women with polycystic ovarian syndrome (PCOs). Eur J Obstet Gynecol Reprod Biol. 2014; 179: 32–5. PubMed Abstract | Publisher Full Text\n\nLiang J, Yao C, Li X, et al.: Silver nanoprism etching-based plasmonic ELISA for the high sensitive detection of prostate-specific antigen. Biosens Bioelectron. 2015; 69: 128–34. PubMed Abstract | Publisher Full Text\n\nNikolenko GN, Stengelin MK, Sardesai L, et al.: Abstract 2012: Accurate measurement of free and complexed PSA concentrations in serum of women using a novel technology with fg/mL sensitivity.. [abstract]. In: Proceedings of the 106th Annual Meeting of the American Association for Cancer Research; 2015 Apr 18-22; Philadelphia, PA. Cancer Res. 2015; 75(15 Suppl). Publisher Full Text\n\nThaxton CS, Elghanian R, Thomas AD, et al.: Nanoparticle-based bio-barcode assay redefines \"undetectable\" PSA and biochemical recurrence after radical prostatectomy. Proc Natl Acad Sci U S A. 2009; 106(44): 18437–42. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMcDermed JE, Sanders R, Fait S, et al.: Nucleic acid detection immunoassay for prostate-specific antigen based on immuno-PCR methodology. Clin Chem. 2012; 58(4): 732–40. PubMed Abstract | Publisher Full Text\n\nWilson DH, Hanlon DW, Provuncher GK, et al.: Fifth-generation digital immunoassay for prostate-specific antigen by single molecule array technology. Clin Chem. 2011; 57(12): 1712–21. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRissin DM, Kan CW, Campbell TG, et al.: Single-molecule enzyme-linked immunosorbent assay detects serum proteins at subfemtomolar concentrations. Nat Biotechnol. 2010; 28(6): 595–9. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nDiamandis EP, Stanczyk FZ, Wheeler S, et al.: Serum complexed and free prostate-specific antigen (PSA) for the diagnosis of the polycystic ovarian syndrome (PCOS). Clin Chem Lab Med. 2017; pii: /j/cclm.ahead-of-print/cclm-2016-1124/cclm-2016-1124.xml, (In press). PubMed Abstract | Publisher Full Text\n\nRickenlund A, Carlström K, Ekblom BJ, et al.: Hyperandrogenicity is an alternative mechanism underlying oligomenorrhea or amenorrhea in female athletes and may improve physical performance. Fertil Steril. 2003; 79(4): 947–55. PubMed Abstract | Publisher Full Text\n\nRickenlund A, Thorén M, Carlström K, et al.: Diurnal profiles of testosterone and pituitary hormones suggest different mechanisms for menstrual disturbances in endurance athletes. J Clin Endocrinol Metab. 2004; 89(2): 702–707. PubMed Abstract | Publisher Full Text\n\nBermon S, Garnier PY, Hirschberg AL, et al.: Serum androgen levels in elite female athletes. J Clin Endocrinol Metab. 2014; 99(11): 4328–4335. PubMed Abstract | Publisher Full Text\n\nEnea C, Boisseau N, Fargeas-Gluck MA, et al.: Circulating androgens in women: exercise-induced changes. Sports Med. 2011; 41(1): 1–15. PubMed Abstract | Publisher Full Text\n\nHagmar M, Berglund B, Brismar K, et al.: Hyperandrogenism may explain reproductive dysfunction in olympic athletes. Med Sci Sports Exerc. 2009; 41(6): 1241–8. PubMed Abstract | Publisher Full Text\n\nObiezu CV, Giltay EJ, Magklara A, et al.: Serum and urinary prostate-specific antigen and urinary human glandular kallikrein concentrations are significantly increased after testosterone administration in female-to-male transsexuals. Clin Chem. 2000; 46(6 Pt 1): 859–62. PubMed Abstract\n\nSlagter MH, Scorilas A, Gooren LJ, et al.: Effect of testosterone administration on serum and urine kallikrein concentrations in female-to-male transsexuals. Clin Chem. 2006; 52(8): 1546–51. 
PubMed Abstract | Publisher Full Text\n\nKe Y, Bertin J, Gonthier R, et al.: A sensitive, simple and robust LC-MS/MS method for the simultaneous quantification of seven androgen- and estrogen-related steroids in postmenopausal serum. J Steroid Biochem Mol Biol. 2014; 144(Pt B): 523–34. PubMed Abstract | Publisher Full Text\n\nGlezer EN, Stengelin M, Aghvanyan A, et al.: Abstract 2014: Cytokine immunoassays with sub-fg/mL detection limits. Abstract T2065. AAPS Annual Meeting. 2014. Reference Source\n\nMusrap N, Diamandis EP: Prostate-Specific Antigen as a Marker of Hyperandrogenism in Women and Its Implications for Antidoping. Clin Chem. 2016; 62(8): 1066–74. PubMed Abstract | Publisher Full Text\n\nSottas PE, Robinson N, Rabin O, et al.: The athlete biological passport. Clin Chem. 2011; 57(7): 969–76. PubMed Abstract | Publisher Full Text\n\nMcTiernan A, Tworoger SS, Rajan KB, et al.: Effect of exercise on serum androgens in postmenopausal women: a 12-month randomized clinical trial. Cancer Epidemiol Biomarkers Prev. 2004; 13(7): 1099–105. PubMed Abstract\n\nMcTiernan A, Tworoger SS, Ulrich CM, et al.: Effect of exercise on serum estrogens in postmenopausal women: a 12-month randomized clinical trial. Cancer Res. 2004; 64(8): 2923–8. PubMed Abstract | Publisher Full Text\n\nPlinta R, Olszanecka-Glinianowicz M, Drosdzol-Cop A, et al.: [State of nutrition and diet habits versus estradiol level and its changes in the pre-season preparatory period for the league contest match in female handball and basketball players]. Ginekol Pol. 2012; 83(9): 674–80. PubMed Abstract\n\nSauter ER, Daly M, Linahan K, et al.: Prostate-specific antigen levels in nipple aspirate fluid correlate with breast cancer risk. Cancer Epidemiol Biomarkers Prev. 1996; 5(12): 967–70. PubMed Abstract\n\nYu H, Giai M, Diamandis EP, et al.: Prostate-specific antigen is a new favorable prognostic indicator for women with breast cancer. Cancer Res. 1995; 55(10): 2104–10. 
PubMed Abstract\n\nFoekens JA, Diamandis EP, Yu H, et al.: Expression of prostate-specific antigen (PSA) correlates with poor response to tamoxifen therapy in recurrent breast cancer. Br J Cancer. 1999; 79(5–6): 888–94. PubMed Abstract | Publisher Full Text | Free Full Text\n\nEklund E, Diamandis EP, Muytjens C, et al.: Dataset 1 in: Serum complexed and free prostate specific antigen levels are lower in female elite athletes in comparison to control women. F1000Research. 2017. Data Source"
}
|
[
{
"id": "24245",
"date": "26 Jul 2017",
"name": "Yves Courty",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nTo perform their research, the authors have used recognized approaches for the analysis of free and complexed PSA, androgens, estrogens and progesterone in sera of elite athletes and control women. This manuscript is scientifically sound, very well written and informative.\n\nA comment on the lack of correlation between cPSA or fPSA and progesterone would be desirable since other authors have shown that serum PSA fluctuates with the menstrual cycle.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate? Yes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
},
{
"id": "24410",
"date": "01 Aug 2017",
"name": "Geoffrey S. Baird",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThe authors report on measurements of free and complexed PSA, as well as a variety of steroid hormones, in a cohort of Olympic athletes as well as a control group of sedentary women. The PSA assays used are “5th generation” and therefore sensitive enough to detect the very low levels present in women.\n\nThe fact that nearly half of the measured fPSA values are undetectable (or rather, below the limit of detection) makes me question the appropriateness of the correlation analysis in Figure 1; perhaps two correlations (one in supplemental material) that show the correlation in those with measurable fPSA would be more statistically sound? The finding of lower PSA in athletes is, as pointed out by the authors, not what would have been expected based on prior work. The likelihood of a confounder is thus high, in my opinion. The authors reasonably mention the role of breast PSA production as a possible confounder. However, it is unclear whether or not the experimental setup here could address an effect that was not linearly related to athletic activity. For example, there are many possible differences between Olympians and sedentary people, so perhaps a control group of non-elite athletes might better be able to address the physiologic question about PSA regulation by androgens, as there could be fewer confounding issues. The athletes in the study had much higher lean body mass, and the sedentary controls had more adipose mass. 
Does adipose tissue produce or directly up-regulate PSA?\nThe athletes had nearly identical testosterone as the sedentary women. Is that expected? Would it be possible to retrospectively get testosterone-epitestosterone ratios?\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-1131
|
https://f1000research.com/articles/6-112/v1
|
07 Feb 17
|
{
"type": "Research Article",
"title": "Dose Titration Algorithm Tuning (DTAT) should supersede the Maximum Tolerated Dose (MTD) concept in oncology dose-finding trials",
"authors": [
"David C. Norris"
],
"abstract": "Background. Absent adaptive, individualized dose-finding in early-phase oncology trials, subsequent ‘confirmatory’ Phase III trials risk suboptimal dosing, with resulting loss of statistical power and reduced probability of technical success for the investigational drug. While progress has been made toward explicitly adaptive dose-finding and quantitative modeling of dose-response relationships, most such work continues to be organized around a concept of ‘the’ maximum tolerated dose (MTD). The purpose of this paper is to demonstrate concretely how the aim of early-phase trials might be conceived of, not as ‘dose-finding’, but as dosing algorithm-finding. Methods. A Phase I dosing study is simulated, for a notional cytotoxic chemotherapy drug, with neutropenia constituting the critical dose-limiting toxicity. The drug’s population pharmacokinetics and myelosuppression dynamics are simulated using published parameter estimates for docetaxel. The amenability of this model to linearization is explored empirically. The properties of a simple dose titration algorithm targeting neutrophil nadir of 500 cells/mm3 using a Newton-Raphson heuristic are explored through simulation in 25 simulated study subjects. Results. Individual-level myelosuppression dynamics in the simulation model approximately linearize under simple transformations of neutrophil concentration and drug dose. The simulated dose titration exhibits largely satisfactory convergence, with great variance in individualized optimal dosing. Some titration courses exhibit overshooting. Conclusions. The large inter-individual variability in simulated optimal dosing underscores the need to replace ‘the’ MTD with an individualized concept of MTDi . To illustrate this principle, the simplest possible dose titration algorithm capable of realizing such a concept is demonstrated. Qualitative phenomena observed in this demonstration support discussion of the notion of tuning such algorithms. 
The individual-level linearization of myelosuppression dynamics demonstrated for the simulation model used here suggests that a titration algorithm specified in the more general terms of the linear Kalman filter will be worth exploring.",
"keywords": [
"Dose-finding studies",
"oncology",
"Phase I clinical trial",
"individualized dose-finding",
"precision medicine"
],
"content": "Introduction\n\nDespite advances in Bayesian adaptive designs1,2 and model-based dose-finding3, oncology dose-finding studies remain conceptually in the thrall of the maximum tolerated dose (MTD). This concept stands opposed to the long-recognized heterogeneity of cancer patients’ pharmacokinetics and pharmacodynamics (PK/PD), and to the diversity of their individual values and goals of care. Under this conceptual yoke, these dose-finding studies constitute a significant choke-point in drug development, where a severe discount may be applied to the potential value in new molecules through the hobbling of subsequent ‘efficacy’ trials by inadequate individual-level dosing4.\n\nStrangely, Bayesian innovation in dose-finding studies has proceeded apace without issuing a meaningful challenge to the inherently frequentist conception of an MTD as determined by whole-cohort frequencies of dose-limiting toxicities (DLTs). Thus, even as Bayesianism has made progress toward the ethical imperative of efficient use of data5 in such studies, it has neglected to confront the distinct ethical dimension of individualism6. This seems a great irony, as the dynamic learning model of Bayesianism is equally suited, and indeed equally essential, to solving the latter problem.\n\nThis paper demonstrates individualized dose-finding in a simulated Phase I study of a cytotoxic chemotherapy drug for which neutropenia constitutes the critical dose-limiting toxicity. Importantly, myelosuppression is interpreted also as a monotone index of therapeutic efficacy, without a dose-response ‘plateau’7 of the type postulated for molecularly targeted agents (MTAs). This creates a problem setting where simple heuristics apply, simplifying the demonstration undertaken here. The aim of this exercise is to elaborate a concrete setting in which ‘dose-finding study’ may be seen as a misnomer. 
Under the view advanced here, early-phase studies of this kind should be conceived as dose titration algorithm tuning (DTAT) studies.\n\nThe idea that ‘dose finding studies’ should yield dose titration algorithms is not new. More than a quarter-century ago, Sheiner and colleagues8 advocated a learn-as-you-go concept for “dose-ranging studies”, addressing concerns about “parallel-dose designs” that are not far removed from the motivations for the present paper. As in the advocacy of Sheiner et al., parametric models play an important role in this paper, although in keeping with a spirit of pragmatism this dependence is relaxed to some extent by means of a semiparametric dynamic on-line learning heuristic.\n\n\nMethods\n\nA hypothetical cytotoxic drug is considered, modeled notionally after docetaxel, to be infused in multiple 3-week cycles. The pharmacokinetics are taken to follow a 2-compartment model with parameters as estimated for docetaxel in a recent population pharmacokinetic study9. Chemotherapy-induced neutropenia (CIN) is taken to follow a myelosuppression model due to Friberg et al.10. Together, these models form a population pharmacokinetic/pharmacodynamic (PK/PD) model within which dose titration algorithms may be simulated and tuned for optimality. For simulation purposes, and anticipating the future value of ready access to a variety of inference procedures in follow-on work, this PK/PD model is implemented in R package pomp11. R version 3.3.2 was used12.\n\nBasic behaviors of the models are illustrated by simulation graphics generated for 25 individuals randomly generated from the population PK/PD model. Properties with specific relevance to absolute neutrophil count nadir (ANCnadir)-targeted dose titration are then investigated, with an eye to demonstrating the predictability of nadir timing. 
In particular, an approximate linearization of neutrophil nadir level and timing is demonstrated, achieved through suitable power-law transformation of infusion doses and logarithmic transformation of neutrophil concentration. Within this transformed parameter space, a simple recursive dose titration algorithm is defined on the basic heuristic of the Newton-Raphson method for root-finding. For simplicity, monitoring of CIN is not modeled endogenously to this algorithm, but is treated as exogenous such that nadir timing and level are known precisely. A ‘DTAT’ study is simulated and visualized for 25 patients, with the tuning parameters of the recursive titration algorithm held fixed. The visualization supports a discussion of how these parameters might be tuned over the course of a Phase I study. All simulations and figures in this paper were generated by a single R script, archived on OSF13.\n\nWe take the population pharmacokinetics of our cytotoxic drug to obey a 2-compartment model, with parameters drawn notionally from estimates published for docetaxel [9, Table 2]; see Table 1.\n\nCL: clearance; Q: intercompartmental clearance; Vc: volume of central compartment; Vp: volume of peripheral compartment; CV: coefficient of variation. (*) A CV for Vc was unavailable in 9 and has been set arbitrarily to 0.1.\n\nFigure 1 shows illustrative pharmacokinetic profiles for 25 randomly-generated individuals from this population, administered a 100 mg dose.\n\nCc and Cp are drug concentrations in the central and peripheral compartments, respectively.\n\nChemotherapy-induced neutropenia is simulated using the 5-compartment model of Friberg et al. [10, Table 4], in which myelocytes (here, neutrophils) arise from progenitor cells in a proliferative compartment, mature through a series of 3 transitional states, and emerge into the systemic circulation; see Figure 2. 
Transit between successive compartments in this model is a Poisson process with time constant ktr, total mean transit time being therefore given by MTT = 4/ktr. See Table 2.\n\nProl: proliferative compartment; Transitn: maturation compartments; Circ: systemic circulation; ktr: transition rate; kprol: rate of proliferation of progenitor cells, regulated by a negative-feedback loop parametrized by γ > 0.\n\nCirc0: baseline neutrophil concentration; MTT: mean transit time between the 5 model compartments; γ: exponent of feedback loop; EC50, Emax: parameters of a model (of the standard Emax type) governing docetaxel-induced depletion in the proliferative compartment.\n\nFigure 3 shows illustrative myelosuppression profiles for 25 randomly-generated individuals from this population, administered a 100 mg dose.\n\nNote how a chemotherapeutic ‘shock’ to the proliferative compartment Prol propagates through the maturation compartments Tx1,2,3 and thence to the systemic circulation Circ. (ANC: absolute neutrophil count.)\n\nWhen parametrized by dose^(1/4), individuals’ trajectories in (log(ANCnadir)×timenadir)-space may be approximately linearized, as shown in Figure 4. A consequence of this rough linearity is that we may hold out some hope that a linear predictive model could be a suitable basis for an adaptive dose titration scheme.\n\nThe 10 doses plotted are evenly spaced on a fourth-root scale. Not only are the trajectories themselves nearly linear in (log(ANC) × time)-space, but each one is traversed at roughly ‘constant velocity’ with respect to dose^(1/4). 
The ‘tuning’ in ‘DTAT’ has itself been suggested by the practice of tuning a Kalman filter for optimal performance.\n\nFor present purposes, however, it suffices to implement a model-free recursive titration algorithm built on the Newton-Raphson method, with a numerically-estimated derivative based on the most recent infusion doses and their corresponding ANC nadirs. In this algorithm, a relaxation factor ω = 0.75 is applied to any proposed dose increase, with safety in mind. Whereas the slope of log(ANCnadir) with respect to dose^(1/4) is expected to be strictly negative at steady state, hysteresis effects arising during initial steps of dose titration do sometimes yield positive numerical estimates for this slope; so the slope estimates are constrained to be ≤ 0. The infusion dose for cycle 1 is 50 mg, and the cycle-2 dose is calculated conservatively using a slope −2.0, which is larger (in absolute terms) than for any of our simulated patients except id1 and id13; see Figure 4. For reference, these starting values for the tuning parameters of the titration algorithm are collected in Table 3.\n\nWith the illustrative purpose of this article again in mind, we treat neutropenia monitoring as an exogenous process yielding precise nadir timing and levels. This enables a demonstration of the main point without the encumbrance of additional modeling infrastructure peripheral to the main point.\n\n\nOn ‘tuning’\n\nIf one considers Figure 5 as a sequence of titration outcomes emerging in serially enrolled study subjects, it becomes clear that even quite early in the study it will seem desirable to ‘retune’ the titration algorithm. 
For example, provided that course-1 CIN monitoring is implemented with sufficient intensity to deliver advance warning of an impending severely neutropenic nadir, so that timely colony-stimulating factor may be administered prophylactically17, then upon review of the titration courses in the first 10 subjects it may well appear desirable to increase dose1 from 50mg to 100mg. Likewise, given the third-dose overshooting that occurs in 4 of the first 10 subjects, it may seem desirable to adjust the relaxation factor ω downward. Of note, at any given time any such proposed retuning may readily be subjected to a ‘dry run’ using retrospective data from all convergent titration courses theretofore collected. (Hysteresis effects would however be inaccessible to a strictly data-driven dry run absent formal modeling.) Furthermore, the ‘tuning’ idea readily generalizes to the fundamental modification or even wholesale replacement of a dose titration algorithm; the overshooting seen for subjects id10, id12 and id23, for example, inspires further thought about refining (or replacing) the admittedly very naive Newton-Raphson method employed herein.\n\nA further dimension of ‘tuning’ that must be discussed is the potential for driving the tuning parameters using statistical models built on baseline covariates. Surely, to the extent that the great heterogeneity in final dosing evident in Figure 5 could be predicted based on age, sex, weight or indeed on pharmacogenomic testing, then dose1 should be made a function of these covariates. 
The recalibration of such models as data accumulate from successive study subjects is very much a part of the full concept of ‘tuning’ I wish to advance.\n\nFinally, whereas I have discussed ‘tuning’ here largely in terms of reflective, organic decision-making such as occurs in the creative refinement of algorithms or in data-driven statistical model development, I do not mean to exclude more formal approaches to algorithm tuning. A decision-theoretic framing of the tuning problem should enable formal algorithm tuning to be specified and carried out meaningfully. Such framing would also have the salutary effect of bringing into view objectively the important matter of patients’ heterogeneity with respect to values and goals of care. It seems quite likely that the balance of benefits from aggressive titration versus harms of toxicities will generally differ from one patient to another. Dose titration algorithms should most emphatically be tuned to these factors as well.\n\n\nDiscussion\n\nIt is where pharmacometrics meets the field of optimal control that the current literature seems to make its closest point of contact with the DTAT concept I am advancing here. In optimal-control investigations of chemotherapy18–23, as in DTAT, relatively large decision spaces are explored. Indeed, the infinite-dimensional spaces of control functions posited for exploration in optimal control applications dwarf the finite-dimensional spaces of tuning parameters in DTAT as dramatically as the latter dwarf the finite sets of discrete doses trialed in now-standard Phase I studies. This intermediate ‘cardinality’ of DTAT reflects an important advantage in an era when, to almost universal chagrin, the detested 3+3 dose-finding design retains its hegemony due partly to widespread resistance to modeling24. In such an era, optimal control applications that involve detailed mathematical modeling of tumor biology and dynamics sadly seem consigned to the fringes of practice. 
Acceptance of such ambitious problem formulations, expressing as they do the spirit of a future age, must await deep cultural changes in the medical sciences and clinical practice.\n\nAs easy as it is, however, to disparage ‘resistance to modeling’ as some kind of antediluvian attitude, this resistance does rightfully assert the importance of unmodeled complexities that necessitate application of organic forms of clinical judgment25. It should be clear from the above discussion of ‘tuning’ that DTAT readily accommodates and veritably invites scrutiny, supervision and modification by clinical judgment. For example, if during the course of a DTAT study adverse effects other than neutropenia were to emerge as occasional dose-limiting toxicities, then the full concept of ‘tuning’ advanced above would invite dynamic, ‘learn-as-you-go’ modifications of the titration algorithm. Such modifications may begin with decreasing the relaxation factor ω, but might also involve efforts to classify and predict these new DLTs, and to incorporate such new understanding explicitly into the dose titration algorithm yielded by the study. Indeed, whatever philosophical challenge DTAT embodies is likely to take the form of requiring an intensified commitment to clinical judgment, in a learn-as-you-go world where the always-provisional nature of medical knowledge must frankly be acknowledged6,26.\n\n\nConclusions\n\nI have advanced a concept of dose titration algorithm tuning (DTAT), drawing connections with recursive filtering and optimal control. I have illustrated key elements of DTAT by simulating neutrophil-nadir-targeted titration of a hypothetical cytotoxic chemotherapy drug with pharmacokinetics and myelosuppressive dynamics patterned on previously estimated population models for docetaxel. I believe DTAT presents a prima facie case for discarding the outmoded concept of ‘the’ maximum tolerated dose (MTD) of a chemotherapy drug. 
This argument should be of interest to a wide range of stakeholders, from cancer patients with a stake in receiving optimal individualized ‘MTDi’ dosing, to shareholders in pharmaceutical innovation with a stake in efficient dose-finding before Phase III trials.\n\n\nData availability\n\nOpen Science Framework: Code and Figures for v1 of F1000Research submission: Dose Titration Algorithm Tuning (DTAT) should supersede the Maximum Tolerated Dose (MTD) concept in oncology dose-finding trials, doi 10.17605/osf.io/vwnqz13\n\n\nEndorsement\n\nDaniela Conrado (Associate Director, Quantitative Medicine at Critical Path Institute) confirms that the author has an appropriate level of expertise to conduct this research, and confirms that the submission is of an acceptable scientific standard. Daniela Conrado declares she has no competing interests.",
"appendix": "Competing interests\n\n\n\nThe author operates a scientific and statistical consultancy focused on precision-medicine methodologies such as those advanced in this article.\n\nAuthor Endorsement: Daniela Conrado (Associate Director, Quantitative Medicine at Critical Path Institute) confirms that the author has an appropriate level of expertise to conduct this research, and confirms that the submission is of an acceptable scientific standard. Daniela Conrado declares she has no competing interests.\n\n\nGrant information\n\nThe author declared that no grants were involved in supporting this work.\n\n\nReferences\n\nLunn D, Best N, Spiegelhalter D, et al.: Combining MCMC with 'sequential' PKPD modelling. J Pharmacokinet Pharmacodyn. 2009; 36(1): 19–38. PubMed Abstract | Publisher Full Text\n\nBailey S, Neuenschwander B, Laird G, et al.: A Bayesian case study in oncology Phase I combination dose-finding using logistic regression with covariates. J Biopharm Stat. 2009; 19(3): 469–484. PubMed Abstract | Publisher Full Text\n\nPinheiro J, Bornkamp B, Glimm E, et al.: Model-based dose finding under model uncertainty using general parametric models. Stat Med. 2014; 33(10): 1646–1661. PubMed Abstract | Publisher Full Text\n\nLisovskaja V, Burman CF: On the choice of doses for phase III clinical trials. Stat Med. 2013; 32(10): 1661–1676. PubMed Abstract | Publisher Full Text\n\nBerry DA: Bayesian Statistics and the Efficiency and Ethics of Clinical Trials. Statist Sci. 2004; 19(1): 175–187. Publisher Full Text\n\nPalmer CR: Ethics, data-dependent designs, and the strategy of clinical trials: time to start learning-as-we-go? Stat Methods Med Res. 2002; 11(5): 381–402. PubMed Abstract | Publisher Full Text\n\nZang Y, Lee JJ, Yuan Y: Adaptive designs for identifying optimal biological dose for molecularly targeted agents. Clin Trials. 2014; 11(3): 319–327. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nSheiner LB, Beal SL, Sambol NC: Study designs for dose-ranging. Clin Pharmacol Ther. 1989; 46(1): 63–77. PubMed Abstract | Publisher Full Text\n\nOnoue H, Yano I, Tanaka A, et al.: Significant effect of age on docetaxel pharmacokinetics in Japanese female breast cancer patients by using the population modeling approach. Eur J Clin Pharmacol. 2016; 72(6): 703–710. PubMed Abstract | Publisher Full Text\n\nFriberg LE, Henningsson A, Maas H, et al.: Model of chemotherapy-induced myelosuppression with parameter consistency across drugs. J Clin Oncol. 2002; 20(24): 4713–4721. PubMed Abstract | Publisher Full Text\n\nKing MD, Grech-Sollars M: A Bayesian spatial random effects model characterisation of tumour heterogeneity implemented using Markov chain Monte Carlo (MCMC) simulation [version 1; referees: 1 approved]. F1000Res. 2016; 5: 2082. Publisher Full Text\n\nR Core Team: R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria, 2016. Reference Source\n\nNorris DC: Code and Figures for v1 of F1000Research submission: Dose Titration Algorithm Tuning (DTAT) should supersede the Maximum Tolerated Dose (MTD) concept in oncology dose-finding trials. Open Science Foundation. 2017. Data Source\n\nJulier SJ, Uhlmann JK: New extension of the Kalman filter to nonlinear systems. 1997; 3068: 182–193. Publisher Full Text\n\nNorris DC, Gohh RY, Akhlaghi F, et al.: Kalman filtering for tacrolimus dose titration in the early hospital course after kidney transplant. F1000Res. 2017; 6. Publisher Full Text\n\nKalman RE: A New Approach to Linear Filtering and Prediction Problems. J Basic Eng. 1960; 82(1): 35–45. Publisher Full Text\n\nWeycker D, Li X, Edelsberg J, et al.: Risk and Consequences of Chemotherapy-Induced Febrile Neutropenia in Patients With Metastatic Solid Tumors. J Oncol Pract. 2015; 11(1): 47–54. 
PubMed Abstract | Publisher Full Text\n\nDe Pillis LG, Fister KR, Gu W, et al.: Optimal control of mixed immunotherapy and chemotherapy of tumors. J Biol Syst. 2008; 16(01): 51–80. Publisher Full Text\n\nDua P, Dua V, Pistikopoulos EN: Optimal delivery of chemotherapeutic agents in cancer. Comput Chem Eng. 2008; 32(1–2): 99–107. Publisher Full Text\n\nd’Onofrio A, Ledzewicz U, Maurer H, et al.: On optimal delivery of combination therapy for tumors. Math Biosci. 2009; 222(1): 13–26. PubMed Abstract | Publisher Full Text\n\nKrabs W, Pickl S: An optimal control problem in cancer chemotherapy. Appl Math Comput. 2010; 217(3): 1117–1124. Publisher Full Text\n\nEngelhart M, Lebiedz D, Sager S: Optimal control for selected cancer chemotherapy ODE models: a view on the potential of optimal schedules and choice of objective function. Math Biosci. 2011; 229(1): 123–134. PubMed Abstract | Publisher Full Text\n\nLedzewicz U, Schättler H, Gahrooi MR, et al.: On the MTD paradigm and optimal control for multi-drug cancer chemotherapy. Math Biosci Eng. 2013; 10(3): 803–819. PubMed Abstract | Publisher Full Text\n\nPetroni GR, Wages NA, Paux G, et al.: Implementation of adaptive methods in early-phase clinical trials. Stat Med. 2017: 36(2): 215–224. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFeinstein AR: \"Clinical Judgment\" revisited: the distraction of quantitative models. Ann Intern Med. 1994; 120(9): 799–805. PubMed Abstract | Publisher Full Text\n\nNorris DC: Casting a realist’s eye on the real world of medicine: Against Anjum’s ontological relativism. J Eval Clin Pract. 2017. Publisher Full Text"
}
|
[
{
"id": "20080",
"date": "14 Mar 2017",
"name": "Matthew E. Nielsen",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThe author presents a provocative modeling-based demonstration of an innovative alternative to the traditional one-size-fits-all maximally tolerated dose concept. The concept of individualized pharmacokinetic-based precision dosing is intuitively appealing and the analyses presented herein support potential utility from the development and application of such methods.",
"responses": []
},
{
"id": "21177",
"date": "22 Mar 2017",
"name": "Natalja Strelkowa",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nI support the main idea of the article to promote individualized dose finding for toxic drugs. The methodological framework of reinforcement learning is an interesting and promising alternative to the current practice of the population-based maximum tolerated dose (MTD) approach.\nThe following points need to be stressed and/or addressed in the paper:\nThe immediate toxic response must be reliably measurable for the reinforcement learning to work; long-term toxicity, as far as I have understood, is not included in the current framework. This needs to be addressed, in particular in light of new immuno-oncology and targeted therapies where the toxic response is not immediately observable.\n\nThe time to achieve the optimal dosing scheme according to the dose finding algorithm is also of key importance and must be much shorter than the expected survival of the patient.\n\nFor the practical implementation of the approach, hard boundaries on overshooting must be implemented. This will likely influence the time to optimal dosing for each patient.\nThe title and the conclusions are too strong in my view and need revision. This approach is very promising and I myself am also very interested in its application, but in the abstract and discussion section it needs to be clear that key practical questions are still to be addressed. Strong statements like ‘DTAT should supersede MTD’ should be removed from the title and abstract and replaced by ‘DTAT is an alternative to current practice’ or ‘Could be a better alternative to current practice for drugs with narrow therapeutic window’ etc.",
"responses": [
{
"c_id": "2578",
"date": "24 Mar 2017",
"name": "David C. Norris",
"role": "Author Response",
"response": "I thank Dr. Strelkowa greatly for her supportive and critical comments, which invite extended discussion on several points that v1 of this paper has omitted to its detriment. I use this reply to indicate changes I propose to make in v2, pursuant to Dr. Strelkowa’s comments. Unless otherwise indicated, these changes would seem to belong in the Discussion section: Dr. Strelkowa rightly points to general conditions that constrain the time-scale on which DTAT-based learning may operate. Specifically, DTAT cannot learn faster than the lag-time at which the targeted toxic response(s) develop. In the important area of immuno-oncology that Dr. Strelkowa highlights, common dose-limiting toxicities (DLTs) do admit monitoring on time scales comparable to the chemotherapy induced neutropenia (CIN) simulated in this paper. For example, the cytokine release syndrome (CRS) that accompanies chimeric antigen receptor (CAR-)T cell therapies typically arises within 1 week of administration (even earlier with concomitant high-dose IL-2) and constitutes a clinical syndrome that admits multivariate monitoring on numerous quantitative clinical and laboratory measures.[1] Regarding however the molecularly targeted agents (MTAs) also mentioned by Dr. Strelkowa, late toxicities indeed have tended to attract the lion’s share of attention.[2] One reason for this is that the early toxicities of MTAs tend to be relatively milder than those of cytotoxic and immunologic therapies.[3] Nevertheless, a DTAT principle continues to apply here—just on a longer time scale. Of course, a ‘dose titration algorithm’ (DTA) that responds to MTA toxicities that patients mainly experience and evaluate subjectively may resemble a process of ongoing shared decision making (with the oncology care team) more than it resembles the impersonal calculations we typically think of as ‘algorithmic’. 
But with a suitably broadened understanding of ‘algorithm’—one that accommodates what might typically be termed protocols—the DTAT (or perhaps, DTPT) principle continues to apply. In such applications, “supervision and modification by clinical judgment” as mentioned in the Discussion clearly comes to the fore. But even in such contexts, the development and application of scoring systems for patient-reported clinical symptoms and quality of life would enable dose titration protocols (DTPs) to be described objectively in quite ‘algorithmic’ terms that would preserve the applicability of a ‘tuning’ concept. The issue of competing risks Dr. Strelkowa raises is truly important, and brings into play the inter-individual heterogeneity in “values and goals of care” that I discussed in opening the Introduction and (especially) toward the end of the On ‘tuning’ section, where I explicitly highlight values/goals as factors that should “most emphatically” inform DTA tuning. (In the PDF, this utterly essential point tragically breaks across pages 7-9 due to an intervening page 8 of figures, a mishap I will aim to avoid in v2.) To address Dr. Strelkowa's comment here, I plan to expand that latter discussion explicitly to incorporate prognosis. For example, if a patient with more advanced disease and short expected survival has (in consultation with the oncologist and oncology care team) nevertheless decided to enroll in a Phase I DTAT study to pursue the possibility of therapeutic benefit, then this patient’s decision indicates a subjective weighting of benefits vs harms favoring a higher starting dose and more aggressive titration. I agree that any sound dose titration algorithm will incorporate fail-safe limits on dose escalation. 
To support a richer discussion in the On ‘tuning’ section, I purposely chose ‘wrong’ tuning parameters (and a naive Newton-Raphson DTA) so that Figure 5 would illustrate several problems that one would never purposely ‘design in’ to an actual study: (a) a too-low starting dose and (b) the potential for overshooting, such as occurred in id8, id10, id12 and id23—the last of whom received in fact an off-scale dose. For v2, in both the Figure 5 caption and the main text of On ‘tuning’, I propose to note the important role that fail-safe upper bounds on both absolute dose and dose multipliers will play in reducing the risk of overshooting in a practical trial. A larger point about DTAT requires clarification in v2. The technical connections with recursive filtering and optimal control, as drawn in the paper, serve much the same notional purpose as choosing docetaxel as the basis for simulation. The above discussion shows the DTAT principle applicable to oncology therapeutics beyond the cytotoxics. Likewise, notwithstanding the essential heuristic role recursive filtering has played in DTAT’s development, it should not be thought to define DTAT. (In fact, I have at this point in my further work already abandoned the linear Kalman filter approach indicated in the last sentence of v1 Abstract, in favor of full-information methods.) What does define DTAT’s essential contribution to Phase I oncology study design is that it yields a new abstraction (the DTA with its tuning parameters) capable of embodying knowledge objectively [4], to supersede a fallacious abstraction (‘the’ MTD) that almost completely lacks this capability. I must make this point explicit in v2, emphasizing that the role of the technical connections I’ve drawn is to illustrate the (DTA+tuning) abstraction which constitutes DTAT’s essential contribution. I will also try to modify the title somehow to underscore this point. 
I will however retain a withering treatment of ‘the’ MTD, an anti-precision idea whose time has passed. In support of that view, I will in the v2 Discussion or Introduction briefly discuss ‘the’ MTD specifically as a fallacy of misplaced concreteness [5]. The purpose of my strong title is to provoke long-overdue critical thought and discourse. Should any clinical trial methodologist leap now to the defense of ‘the’ MTD, I will most heartily welcome that challenge.

References

1. Weber JS, Yang JC, Atkins MB, Disis ML. Toxicities of Immunotherapy for the Practitioner. JCO. 2015;33(18):2092-2099. doi:10.1200/JCO.2014.60.0379.
2. Postel-Vinay S, Gomez-Roca C, Molife LR, et al. Phase I trials of molecularly targeted agents: should we pay more attention to late toxicities? J Clin Oncol. 2011;29(13):1728-1735. doi:10.1200/JCO.2010.31.9236.
3. Molife LR, Alam S, Olmos D, et al. Defining the risk of toxicity in phase I oncology trials of novel molecularly targeted agents: a single centre experience. Ann Oncol. 2012;23(8):1968-1973. doi:10.1093/annonc/mds030.
4. Popper KR. Objective Knowledge: An Evolutionary Approach. Rev. ed. Oxford [Eng.]: New York: Clarendon Press; Oxford University Press; 1979.
5. Whitehead AN. Science and the Modern World: Lowell Lectures, 1925. New York: The Free Press; 1997."
}
]
}
] | 1
|
https://f1000research.com/articles/6-112
|
https://f1000research.com/articles/6-36/v1
|
12 Jan 17
|
{
"type": "Research Article",
"title": "Genome-wide characterization of folate transporter proteins of eukaryotic pathogens",
"authors": [
"Mofolusho Falade",
"Benson Otarigho",
"Benson Otarigho"
],
"abstract": "Background: Medically important pathogens are responsible for the death of millions every year. For many of these pathogens, there are limited options for therapy and resistance to commonly used drugs is fast emerging. The availability of genome sequences of many eukaryotic protozoa is providing important data for understanding parasite biology and identifying new drug and vaccine targets. The folate synthesis and salvage pathways are important for eukaryotic pathogen survival and organismal biology and may present new targets for drug discovery. Methods: We applied a combination of bioinformatics methods to examine the genomes of pathogens in the EupathDB for genes encoding homologues of proteins that mediate folate salvage, in a bid to identify and assign putative functions. We also performed phylogenetic comparisons of the identified proteins. Results: We identified 234 proteins involved in folate transport in 63 strains, 28 pathogen species and 12 phyla, 60% of which were identified for the first time. Many of the genomes examined contained genes encoding transporters such as folate-binding protein YgfZ, folate/pteridine transporter, folate/biopterin transporter, reduced folate carrier family protein and folate/methotrexate transporter FT1. The plasma membrane is the predicted location of the majority of the proteins, with 15% possessing signal peptides. Phylogeny computation shows the similarity of the proteins identified. Conclusion: These findings offer new possibilities for potential drug development targeting folate-salvage proteins in eukaryotic pathogens.",
"keywords": [
"Folate transporter",
"Eukaryotic pathogens",
"Drug discovery",
"Putative homologues"
],
"content": "Introduction\n\nA diverse array of eukaryotic pathogens is responsible for the most economically important diseases of humans and animals1,2. As a result of underdevelopment, a lack of social infrastructure and insufficient funding of public health facilities, most of these pathogens are endemic to resource-poor countries in sub-Saharan Africa, South-East Asia and South America, where they are responsible for high morbidity and mortality1–3. Of these, parasitic protozoa form a major group, with the apicomplexan and kinetoplastid parasites including important members that cause diseases such as malaria, cryptosporidiosis, toxoplasmosis, babesiosis, leishmaniasis, human African trypanosomiasis and South American trypanosomiasis (Chagas’ disease), which account for most of the morbidity and mortality4,5. Other important diseases caused by protozoans include giardiasis, amoebic dysentery6,7 and trichomoniasis8. A vicious cycle of poverty and disease exists for most of these parasites, with high infection and death rates in affected populations9–11. The appreciable burden of disease caused by these parasites has been aggravated by the lack of a licensed vaccine for most of them12. Furthermore, the current drugs of choice for many of these parasites have significant side effects, compounded by the emergence of drug-resistant strains13–15. Despite the urgent demand for new therapies for control, few drugs have been developed to combat these parasites16. A major limitation to the development of new drugs is the paucity of new drug targets. There is therefore a need for the discovery of novel and alternative potential chemotherapeutic targets that can aid drug development efforts for disease control16–18. 
A possible approach to selective antimicrobial chemotherapy has been to exploit the inhibition of unique targets, vital to the pathogen and absent in mammals17,18.\n\nA metabolic pathway that has been exploited considerably for the development of drugs is the folate biosynthetic pathway19. Antifolate drugs target this pathway and are the most important and successful antimicrobial chemotherapies targeting a range of bacterial and eukaryotic pathogens. While most parasitic protozoa can synthesize folates from simple precursors, such as GTP, p-aminobenzoic acid (pABA) and glutamate, higher animals and humans cannot20. Additionally, a few of these parasites can also salvage folate as a nutrient from their host21. These folate compounds are important for synthesis of DNA, RNA and membrane lipids and are transported via receptor-mediated and/or carrier-mediated transmembrane proteins: folate transporters20–22. Importantly, antifolate chemotherapies that target the biosynthesis and processing of folate cofactors have been effective in the chemotherapy of bacterial and protozoan parasites21. Moreover, the folate pathway has also been confirmed as being essential in some eukaryotic pathogens such as Plasmodium, trypanosomes and Leishmania19.\n\nIn addition to the folate biosynthesis pathway, proteins that mediate transport of useful nutrients such as folic acid have been identified as important chemotherapeutic drug targets18,19,23. Hence, the folate pathway, metabolites and transporters continue to be extensively studied for identification of new enzymes including transporters, which may serve as new drug targets22. Recent estimates have ascribed eight different membrane transporters to eukaryotes24.\n\nProteins that mediate the transport of folates have been well studied in a few parasites such as Plasmodium falciparum, Trypanosoma brucei, Leishmania donovani and Leishmania major25,26. 
These studies have provided information on the mode of action of drugs25,27,28, in addition to studies describing mechanisms of parasite drug resistance25–32. However, folate transport proteins remain unidentified and uncharacterized in many other eukaryotic pathogens. This is despite the sequencing of the genomes of most eukaryotic pathogens, which has produced a vast wealth of data that could aid in identification of druggable pathogen-specific proteins33–39. It is therefore imperative to search and identify from these parasite genomes additional proteins such as folate transporters that may serve as novel drug targets40,41.\n\nTherefore, in an attempt to identify and characterize targets for novel therapeutics, we report herein an extensive search of folate transporters from pathogen genomes. In addition, we investigated the evolutionary relationship of these transporters in a bid to determine similarities and differences that make them attractive drug targets. The knowledge provided may assist in the design of new antifolates for protozoan parasites.\n\n\nMethods\n\nOur experimental workflow is depicted in Figure 1. We extracted the sequences of proteins that mediate the transport or salvage of folates from approximately 200 pathogen genome sequences archived at the Eukaryotic Pathogen Genome Database Resources (http://eupathdb.org/eupathdb/), and from the literature using a keyword search. The search covered all proteins that mediate the transport or salvage of folate alone, or of folate together with related compounds (such as pteridine, biopterin and methotrexate). This database gives public access to most sequenced emerging/re-emerging infectious pathogen genomes42. We used the keyword “folate” to search the gene text, and “folic acid” to confirm the hits. 
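The keyword screen described above can be sketched in a few lines. This is a toy stand-in for the EupathDB text-search machinery, not the real query engine; the function name and example annotations are illustrative:

```python
def is_folate_hit(annotation):
    """Keyword screen over gene annotation text (toy sketch of the
    database text search described above; not the real query engine)."""
    text = annotation.lower()
    # Primary keyword "folate"; "folic acid" used to confirm hits.
    return 'folate' in text or 'folic acid' in text
```

In practice the retained hits would then be inspected against the annotation categories of interest (YgfZ, folate/pteridine, folate/biopterin, and so on), as described next.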
Hit results containing proteins annotated as folate-binding protein YgfZ, folate/pteridine transporter, folate/biopterin transporter, reduced folate carrier family protein, folate/methotrexate transporter FT1, folate transporters alone and other folate-related proteins were retrieved. The complete list of proteins extracted from EupathDB is presented in Dataset 143. The folate transporters were classified based on type of transporter, number of transmembrane helices (TMH) and localization (either cell or mitochondrial membrane) of the transporter. Gene sequences were obtained in FASTA format for transporter proteins using the sequence download tool on EupathDB (http://eupathdb.org/eupathdb/).\n\nTo ensure that most of the retrieved proteins had not been previously studied, we performed a literature search on PubMed (http://www.ncbi.nlm.nih.gov/pubmed/?term=) and Google Scholar (https://scholar.google.com) using the query “folate transporter + Parasite name”. The protein sequence information (Dataset 143 and Table 1) obtained from the literature search was used for a BLAST search on EupathDB (http://eupathdb.org/eupathdb/), UniprotDB (http://www.uniprot.org) and GeneDB (http://www.genedb.org/Homepage). Sequence data were edited in TextEdit (macOS) and uploaded to the Molecular Evolutionary Genetics Analysis (MEGA) platform version 7.0 obtained from http://www.megasoftware.net44. The 234 sequences were aligned using MUSCLE, with the large-alignment option (Max iterations = 2) selected while other settings were left at defaults. Evolutionary history was inferred using the Neighbor-Joining method45. The percentage of replicate trees in which the associated taxa clustered together in the bootstrap test (500 replicates) was also analysed46. The tree was drawn to scale, with branch lengths in the same units as those of the evolutionary distances used to infer the phylogenetic tree. The evolutionary distances were computed using the number of differences method47. 
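The "number of differences" distance under complete deletion, as described above, can be sketched in pure Python. This is a toy stand-in for MEGA's implementation under stated assumptions; the function and example sequences are illustrative:

```python
def diff_distance(seqs):
    """Pairwise 'number of differences' distances with complete deletion.

    seqs: dict mapping name -> aligned sequence (equal lengths).
    Columns containing a gap '-' in any sequence are removed first
    (complete deletion); each distance is then the count of mismatched
    sites. Toy sketch of the Methods computation, not MEGA itself.
    """
    names = list(seqs)
    # Complete deletion: keep only gap-free alignment columns.
    cols = [c for c in zip(*seqs.values()) if '-' not in c]
    kept = {n: ''.join(col[i] for col in cols) for i, n in enumerate(names)}
    # Count mismatches for every unordered pair of sequences.
    return {(a, b): sum(x != y for x, y in zip(kept[a], kept[b]))
            for i, a in enumerate(names) for b in names[i + 1:]}
```

For example, with three toy sequences `{'A': 'ACGT-A', 'B': 'ACGTTA', 'C': 'AGGT-T'}`, the gapped column is dropped and distances are counted over the remaining five sites.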
Uniform rates and complete deletion were selected for substitution rates and the data subset, respectively. Other parameters were at default settings. All positions containing gaps and missing data were eliminated. The Newick format of the tree was exported and opened on the FigTree 1.4.2 platform downloaded from http://tree.bio.ed.ac.uk/software/figtree/48. The final tree was constructed using a radial tree layout. Additional analysis consisted of sub-phylogenies based on the transporter type. Since the folate-binding protein YgfZ, folate/pteridine transporter, folate/biopterin transporter (putative), reduced folate carrier family protein, folate/methotrexate transporter FT1, putative folate transporters alone and others comprised 10, 25, 132, 2, 7, 49 and 9 proteins, respectively, we reconstructed sub-phylogenies for the folate transporter and folate/biopterin transporter categories, taking into account the number of proteins identified and the species diversity in each category.\n\nA total of 234 folate transporter proteins from 63 pathogens were identified from the various classes of pathogens. √, Folate transporters found from literature search; X, Folate transporters identified from this study. FTP, Folate Transporter Protein.\n\n\nResults\n\nA methodical search for folate transporters in all the eukaryotic pathogen genomes we examined under EupathDB, with validation via GenBank, GeneDB and Uniprot, yielded a total of 234 proteins (detailed features of the proteins are presented in Dataset 143). We identified these transporters in 28 pathogen species (containing 63 strains) cutting across 12 phyla (Table 1). The parasites with the highest number of folate transporters are Phytophthora parasitica INRA-310, P. infestans T30–4 and Leptomonas pyrrhocoris H10 with 20, 16 and 16 proteins, respectively, while Aspergillus clavatus NRRL 1, A. flavus NRRL3357, A. macrogynus ATCC 38327, Crithidia fasciculata strain Cf-Cl and others have one folate transporter protein each (Table 1). 
The different proteins identified as being involved in the salvage of folate or related molecules were folate-binding protein YgfZ, folate/pteridine transporter, folate/biopterin transporter, reduced folate carrier family protein, folate/methotrexate transporter FT1 and folate transporters, representing 4%, 11%, 56%, 1%, 3% and 21% of the total, respectively. Proteins that did not belong to these groups were classified as others (4%) (Figure 2A). A good number of the proteins identified had predicted transmembrane helices, with a few having none (Figure 2B). Furthermore, a number of the transporters possess signal peptides (Dataset 143), which may be required for targeting to cellular locations. Deciphering the sequence of the targeting signal may indicate its product's destination.\n\nThis is based on [A] Transporter Type [B] Number of TMMs [C] Novel folate transporters [D] Localization [E] Presence/absence of Signal peptide. TMM, Transmembrane Helix.\n\nOur literature search for parasite folate transporters on PubMed and Google Scholar indicated that 60% (38 out of 63) of the proteins were identified for the first time, as presented in Table 1 and Figure 2C, while 40% have been previously investigated. Notably, the Leishmania folate transporters we came across in the literature were not found on the EupathDB resource. We thus performed a BLAST search of Kinetoplastida on EupathDB; the returned hits were folate/biopterin transporters for L. infantum. The only Plasmodium species with results for proteins that salvage folate was P. falciparum. Our study, however, describes for the first time the presence of these transporters in other Plasmodium species. There were no transporter proteins deposited in EupathDB for P. malariae and P. ovale. However, folate transporters I and II were retrieved from our search of GeneDB for P. malariae and P. ovale curtisi, respectively.\n\nOur analysis of folate transporters indicates the presence in Plasmodium species of two proteoforms: folate transporters I and II (Dataset 143). 
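As a consistency check, the category counts given in the Methods (10, 25, 132, 2, 7, 49 and 9) sum to 234, and their rounded shares reproduce the percentages reported for Figure 2A. A minimal sketch (category labels paraphrased from the text):

```python
# Category counts as stated in the Methods section of this article.
counts = {
    'folate-binding protein YgfZ': 10,
    'folate/pteridine transporter': 25,
    'folate/biopterin transporter': 132,
    'reduced folate carrier family protein': 2,
    'folate/methotrexate transporter FT1': 7,
    'folate transporter': 49,
    'others': 9,
}
total = sum(counts.values())  # 234 proteins in all
# Rounded percentage share of each category, e.g. 132/234 rounds to 56%.
shares = {k: round(100 * v / total) for k, v in counts.items()}
```

The resulting shares (4, 11, 56, 1, 3, 21 and 4 percent) match the proportions reported above for Figure 2A.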
All Leishmania species identified possess folate/biopterin transporters and not folate transporters. Trypanosome species have both folate/pteridine and folate transporters; T. cruzi Dm28c, T. cruzi Sylvio X10/1, T. cruzi CL Brener Esmeraldo-like, T. cruzi CL Brener Non-Esmeraldo-like and T. cruzi marinkellei strain B7 all have the folate/pteridine transporter, while T. brucei TREU927, T. brucei Lister strain 427, T. brucei gambiense DAL972 and T. congolense IL3000 possess folate transporters. Eukaryotic parasites like Eimeria acervulina Houghton, E. brunetti Houghton, E. maxima Weybridge, E. necatrix Houghton, E. praecox Houghton, E. tenella strain Houghton and Neospora caninum Liverpool all possess folate/methotrexate transporter FT1. The folate-binding protein YgfZ was found in the fungus, Allomyces macrogynus ATCC 38327, the protist C. fasciculata strain Cf-Cl, C. immitis RS, the feline protozoon, Hammondia hammondi strain H.H.34, Sarcocystis neurona SN3, S. punctatus DAOM BR117, T. gondii GT1, T. gondii ME49, T. gondii VEG and T. brucei TREU927. Parasites such as Microsporidium daphniae UGP3 and the amoeba Naegleria fowleri ATCC 30863 possess the reduced folate carrier family protein (Figure 2D). We observed that 7% of the identified proteins are localized on the mitochondrial membrane of some pathogens such as the fungi Aspergillus clavatus NRRL 1, A. flavus NRRL3357, C. immitis RS, the yeast Cryptococcus neoformans var. grubii H99, Fusarium graminearum PH-1, A. capsulatus G186AR, Leptomonas pyrrhocoris H10, the food fungus Neosartorya fischeri NRRL 181, Phytophthora parasitica INRA-310 and P. ultimum DAOM BR144. The remaining proteins are localized on the plasma membrane (Dataset 143 and Figure 2D).\n\nApproximately 15% (34/234) of the folate transporters identified possess signal peptides (Figure 2E), with trypanosomes possessing the most signal peptides. 
Deductions can be made about the probable destination within the cell of any transporter from its signal peptide sequence; thus, further work may seek to decipher the sequence of the targeting signal to determine its localization. The proteins identified all have transmembrane helices, with the exception of the alveolate Chromera velia CCMP2878, the apicomplexans P. berghei ANKA and S. neurona SN3, the kinetoplastids T. brucei TREU927 and T. grayi ANR4, and the protist Vitrella brassicaformis CCMP3155, with Gene IDs Cvel_17766, PBANKA_0713700, SN3_01500005, Tb927.8.6480, Tgr.2739.1000 and Vbra_15327, respectively (Dataset 143).\n\nThe phylogenetic tree (Figure 3) shows the evolutionary position, history and relationship of all the folate transporters identified in this work. The type of transporter or species/strain was used for constructing phylogenetic trees, with the 234 proteins identified forming two clades, a major and a minor one. The major clade lacked a sub-clade, while the minor clade possessed a sub-clade. All proteins identified were distributed between the two major clades, except for the folate/methotrexate transporter and the mitochondrial folate transporter, with the latter present exclusively on the major clade and the former exclusively on the minor clade. Most species are represented on both clades; however, V. brassicaformis CCMP3155, Plasmodium species, A. clavatus NRRL, A. flavus NRRL3357, A. macrogynus ATCC 38327, C. fasciculata strain Cf-Cl, C. immitis RS, C. muris RN66, C. neoformans var. grubii H99, Leishmania species, N. bombycis CQ1, N. caninum Liverpool, F. graminearum PH-1 and H. hammondi strain H.H.34 are exclusively on the major clade. There are some parasites that were identified once, as shown in Dataset 143; these are mostly in the large clade. Some of these pathogens include P. ultimum DAOM BR144, which has mitochondrial folate transporter/carrier proteins similar to Homo sapiens, E. 
cuniculi GB-M1, which has proteins similar to folate transporter, and S. punctatus DAOM BR117, which has folate-binding protein YgfZ. These were the only proteins of the aforementioned species identified in this work. However, M. daphniae UGP3, which had a reduced folate carrier domain-containing protein, was the only parasite found in the small clade. To refine our phylogenetic analysis, we performed sub-phylogenetic reconstructions (Figure 4–Figure 6) based on the substrate type of the transport proteins. After phylogenetic analysis, each sub-phylogeny shows a clear characterization, except for the folate-biopterin transporters (Figure 5), which fell into a different clade save for the Leptomonas species and C. velia.\n\nTree was constructed using the Neighbor-Joining method (Bootstrap test with 1000 replicates).\n\nTree was constructed using the Neighbor-Joining method (Bootstrap test with 1000 replicates).\n\nTree was constructed using the Neighbor-Joining method (Bootstrap test with 1000 replicates).\n\nTree was constructed using the Neighbor-Joining method (Bootstrap test with 1000 replicates).\n\n\nDiscussion\n\nFolate transporters are important proteins involved in the salvage of folate, cofactors and related molecules in eukaryotic pathogens, important for metabolism and survival in their respective hosts21. We identified proteins that could mediate the salvage of folates into cells and/or mitochondria from the eukaryotic pathogen genomes in EupathDB. Many of these proteins are involved in folate biosynthesis or transport and are present in many of the eukaryotic pathogens we queried. In this study, 234 genes encoding homologues of folate-salvaging proteins were identified in the genomes of 63 strains, representing 28 species of eukaryotic pathogens. Some of the pathogens include P. falciparum 3D7 and IT, P. knowlesi H, P. berghei ANKA, P. chabaudi chabaudi, T. brucei Lister 427, T. brucei TREU927, T. brucei gambiense DAL972 and Encephalitozoon cuniculi GB-M1. 
The pathogens range from fungi, through intracellular parasites such as Plasmodium and Leishmania species, to extracellular parasites such as trypanosome species. This suggests a widespread presence of these proteins across a range of pathogens that infect humans and animals.\n\nA few of the proteins we identified have previously been identified and characterized in parasites such as Plasmodium falciparum22,30, Trypanosome species26, Leishmania species and Toxoplasma gondii49. It has been estimated that over half of the drugs currently on the market target integral membrane proteins, of which membrane transporters are a part; unfortunately, these transporters have not been adequately explored as drug targets50. Folate transporters therefore represent attractive drug targets for the treatment of infectious diseases. Thus, their identification in other eukaryotic pathogens could open a window for novel chemotherapeutics for disease control51,52.\n\nIn Plasmodium, two folate transporters have been identified, namely PfFT1 and PfFT2. These transporters have been shown to mediate the salvage of folate derivatives and precursors in P. falciparum, and it has been proposed that blocking their salvage activities may improve the antimalarial efficacy of several classes of antimalarial drugs. In our work we identified folate transporters for other plasmodial species which, like those of P. falciparum, may also be chemotherapeutic targets. Transport of folate in higher eukaryotes is made possible by the high-affinity folate-biopterin transporter (FBT or BT1) family22,30. Among the trypanosomes and related kinetoplastids, members of the folate-biopterin transporter (FBT) family were identified in Leishmania28. It is thought that MFS proteins are related to the FBTs. These proteins have been characterized in a few protozoa and cyanobacteria53. 
Results from our study describing the presence of these transporters across several phyla corroborate the results of other studies, establishing the conservation of folate transport function among FBT family proteins from plants and protists22,53.\n\nMalaria parasites encode transporters belonging to the organic anion transporter (OAT), folate-biopterin transporter (FBT) and glycoside-pentoside-hexuronide:cation symporter (GPH) families, which are closely related to the major facilitator superfamily of membrane proteins54. The inhibition of these transporters by blockers of organic anion transporters, such as probenecid, has been implicated in the sensitization of resistant Plasmodium parasites to antifolates55,56. Thus, in Plasmodium chemotherapy, the identification of folate transporters could lead to screening for compounds that interfere with folate transport and salvage for antimalarial chemotherapy22,30. We identified several types of folate transporters that have been described and functionally characterized in Leishmania, with some implicated in the import of the antifolate methotrexate57,58. Thus far, only protozoan transporters in Plasmodium, Leishmania and Trypanosoma brucei have been characterized, and these are known to mediate the uptake of the vitamins folate and/or biopterin22,59,60. Thus, in parasite species of medical importance, folate transporter proteins may provide new targets for therapy.\n\nWe also identified folate-salvaging proteins from fungi such as Coccidioides immitis and A. clavatus, fungi found in soil61–63, vegetables61 and waters in tropical and subtropical areas64. These fungi are known to occasionally become pathogenic and act as opportunistic pathogens for animals and man63. Coccidioidomycosis, caused by C. immitis, has been known to be a fatal disease in association with AIDS65. 
Treatment of acute and chronic infections with antifungals such as amphotericin B has not been adequate; hence, folate transporters may present new targets in this group of pathogens. We also identified transporters in pathogens such as C. fasciculata, which parasitizes several species of insects, including mosquitoes, and has been widely used to test new therapeutic strategies against parasitic infections66. As a model organism, the folate transporters identified in C. fasciculata may be useful in research for developing new drugs against medically important kinetoplastids, as has been shown for other targets in this protozoan parasite67.\n\nWe noticed that P. parasitica INRA-310 and L. pyrrhocoris H10 had the highest numbers of folate transporters identified. Their utility as a model fungal pathogen (P. parasitica) and a model monoxenous kinetoplastid (L. pyrrhocoris) may prove instrumental for developing new antifolates for fungal and protozoan diseases. The relatedness of these proteins across the different pathogens shows that there are two major phylogenetically distinct clades in the eukaryotic pathogens examined. The clustering of these proteins suggests that these transport proteins have highly conserved regions often required for basic cellular function or stability68–85. Thus, antifolate chemotherapeutic drugs that are effective against one pathogen might have some effect on others.\n\n\nConclusion\n\nIn summary, we have identified and classified 234 proteins after an extensive search of pathogen genomes in eukaryotic pathogen resource databases, though experimental studies will be required to confirm the expression and function of these proteins in parasites. Our results show that these proteins, which mediate the transport of folate, are widely distributed across the pathogen species examined in various phyla. 
The identification of folate salvage proteins in diverse eukaryotes extends the known evolutionary diversity of these proteins and suggests that they might offer new possibilities for potential drug development targeting folate-salvaging routes in eukaryotic pathogens.\n\n\nData availability\n\nDataset 1: Complete list of proteins extracted from EupathDB and literature search, including their properties. These data are available in a .xlsx file. DOI: 10.5256/f1000research.10561.d14874243",
"appendix": "Author contributions\n\n\n\nM.O.F. and B.O. Conceptualized and Designed the study. M.O.F. and B.O. structured methodology. B.O. performed analysis. M.O.F and B.O. wrote manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nB.O. was supported by a TWAS-CNPq fellowship (FP number: 3240274297).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nReferences\n\nFadiel A, Isokpehi RD, Stambouli N, et al.: Protozoan parasite aquaporins. Expert Rev Proteomic. 2009; 6(2): 199–211. PubMed Abstract | Publisher Full Text\n\nProle DL, Taylor CW: Identification and analysis of putative homologues of mechanosensitive channels in pathogenic protozoa. PLoS One. 2013; 8(6): e66068. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWiser M: Protozoa and human disease. Garland Science. 2010. Reference Source\n\nTurrens JF: Oxidative stress and antioxidant defenses: a target for the treatment of diseases caused by parasitic protozoa. Mol Aspects Med. 2004; 25(1–2): 211–20. PubMed Abstract | Publisher Full Text\n\nShiadeh MN, Niyyati M, Fallahi S, et al.: Human parasitic protozoan infection to infertility: a systematic review. Parasitol Res. 2016; 115(2): 469–77. PubMed Abstract | Publisher Full Text\n\nKhanum H, Kadir R, Arju T, et al.: Detection of Entamoeba histolytica, Giardia lamblia and Cryptospodium sp. Infection among diarrheal patients. Bangladesh J Zoology. 2015; 43(1): 1–7. Publisher Full Text\n\nHerman ML, Surawicz CM: Intestinal Parasites. In: Textbook of Pediatric Gastroenterology, Hepatology and Nutrition. Springer International Publishing. 2016; 185–193. Publisher Full Text\n\nTon Nu PA, Nguyen VQ, Cao NT, et al.: Prevalence of Trichomonas vaginalis infection in symptomatic and asymptomatic women in Central Vietnam. J Infect Dev Ctries. 2015; 9(06): 655–60. 
PubMed Abstract | Publisher Full Text\n\nGarcia LS, Bruckner DA: Diagnostic medical parasitology. Washington, DC. 2001; 131–5.\n\nMacpherson CN: Human behaviour and the epidemiology of parasitic zoonoses. Int J Parasitol. 2005; 35(11–12): 1319–31. PubMed Abstract | Publisher Full Text\n\nAndrews KT, Fisher G, Skinner-Adams TS: Drug repurposing and human parasitic protozoan diseases. Int J Parasitol Drugs Drug Resis. 2014; 4(2): 95–111. PubMed Abstract | Publisher Full Text | Free Full Text\n\nOuattara A, Laurens MB: Vaccines against malaria. Clin Infect Dis. 2015; 60(6): 930–6. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMonzote L, Siddiq A: Drug development to protozoan diseases. Open Med Chem J. 2011; 5(1): 1–3. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPetersen I, Eastman R, Lanzer M: Drug-resistant malaria: molecular mechanisms and implications for public health. FEBS Lett. 2011; 585(11): 1551–62. PubMed Abstract | Publisher Full Text\n\nCastillo E, Dea-Ayuela MA, Bolás-Fernández F, et al.: The kinetoplastid chemotherapy revisited: current drugs, recent advances and future perspectives. Curr Med Chem. 2010; 17(33): 4027–51. PubMed Abstract | Publisher Full Text\n\nDias DA, Urban S, Roessner U: A historical overview of natural products in drug discovery. Metabolites. 2012; 2(2): 303–36. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCroft SL: The current status of antiparasite chemotherapy. Parasitology. 1997; 114(Suppl): S3–15. PubMed Abstract\n\nPink R, Hudson A, Mouriès MA, et al.: Opportunities and challenges in antiparasitic drug discovery. Nat Rev Drug Discov. 2005; 4(9): 727–40. PubMed Abstract | Publisher Full Text\n\nPaiardini A, Fiascarelli A, Rinaldo S, et al.: Screening and in vitro testing of antifolate inhibitors of human cytosolic serine hydroxymethyltransferase. ChemMedChem. 2015; 10(3): 490–7. 
PubMed Abstract | Publisher Full Text\n\nNzila A, Ward SA, Marsh K, et al.: Comparative folate metabolism in humans and malaria parasites (part I): pointers for malaria treatment from cancer chemotherapy. Trends Parasitol. 2005; 21(6): 292–8. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNzila A, Ward SA, Marsh K, et al.: Comparative folate metabolism in humans and malaria parasites (part II): activities as yet untargeted or specific to Plasmodium. Trends Parasitol. 2005; 21(7): 334–9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSalcedo-Sora JE, Ward SA: The folate metabolic network of Falciparum malaria. Mol Biochem Parasitol. 2013; 188(1): 51–62. PubMed Abstract | Publisher Full Text\n\nCosti MP, Ferrari S: Update on antifolate drugs targets. Curr Drug Targets. 2001; 2(2): 135–66. PubMed Abstract | Publisher Full Text\n\nZhao R, Diop-Bove N, Visentin M, et al.: Mechanisms of membrane transport of folates into cells and across epithelia. Annu Rev Nutr. 2011; 31: 177–201. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMüller IB, Hyde JE: Antimalarial drugs: modes of action and mechanisms of parasite resistance. Future Microbiol. 2010; 5(12): 1857–73. PubMed Abstract | Publisher Full Text\n\nEmmer BT, Nakayasu ES, Souther C, et al.: Global analysis of protein palmitoylation in African trypanosomes. Eukaryot Cell. 2011; 10(3): 455–63. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMüller IB, Hyde JE: Folate metabolism in human malaria parasites--75 years on. Mol Biochem Parasitol. 2013; 188(1): 63–77. PubMed Abstract | Publisher Full Text\n\nVickers TJ, Beverley SM: Folate metabolic pathways in Leishmania. Essays Biochem. 2011; 51: 63–80. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBorst P, Ouellette M: New mechanisms of drug resistance in parasitic protozoa. Annu Rev Microbiol. 1995; 49(1): 427–60. 
\n\nWang P, Brobey RK, Horii T, et al.: Utilization of exogenous folate in the human malaria parasite Plasmodium falciparum and its critical role in antifolate drug synergy. Mol Microbiol. 1999; 32(6): 1254–62.\n\nOuellette M, Légaré D, Papadopoulou B: Multidrug resistance and ABC transporters in parasitic protozoa. J Mol Microbiol Biotechnol. 2001; 3(2): 201–6.\n\nStein WD, Sanchez CP, Lanzer M: Virulence and drug resistance in malaria parasites. Trends Parasitol. 2009; 25(10): 441–3.\n\nAgüero F, Al-Lazikani B, Aslett M, et al.: Genomic-scale prioritization of drug targets: the TDR Targets database. Nat Rev Drug Discov. 2008; 7(11): 900–7.\n\nFranzén O, Jerlström-Hultqvist J, Castro E, et al.: Draft genome sequencing of Giardia intestinalis assemblage B isolate GS: is human giardiasis caused by two different species? PLoS Pathog. 2009; 5(8): e1000560.\n\nButt AM, Nasrullah I, Tahir S, et al.: Comparative genomics analysis of Mycobacterium ulcerans for the identification of putative essential genes and therapeutic candidates. PLoS One. 2012; 7(8): e43080.\n\nTiffin N, Adie E, Turner F, et al.: Computational disease gene identification: a concert of methods prioritizes type 2 diabetes and obesity candidate genes. Nucleic Acids Res. 2006; 34(10): 3067–81.\n\nGötz S, García-Gómez JM, Terol J, et al.: High-throughput functional annotation and data mining with the Blast2GO suite. Nucleic Acids Res. 2008; 36(10): 3420–35.\n\nNikolskaya T, Bugrim A, Nikolsky Y, et al.: Methods for identification of novel protein drug targets and biomarkers utilizing functional networks.
United States patent US 8000949 B2; 2011.\n\nPanwar B, Menon R, Eksi R, et al.: Genome-wide Functional Annotation of Human Protein-coding Splice Variants Using Multiple Instance Learning. J Proteome Res. 2016; 15(6): 1747–53.\n\nAntony AC: Folate receptors. Annu Rev Nutr. 1996; 16(1): 501–21.\n\nGirardin F: Membrane transporter proteins: a challenge for CNS drug development. Dialogues Clin Neurosci. 2006; 8(3): 311–21.\n\nAurrecoechea C, Heiges M, Wang H, et al.: ApiDB: integrated resources for the apicomplexan bioinformatics resource center. Nucleic Acids Res. 2007; 35(Database issue): D427–30.\n\nFalade M, Otarigho B: Dataset 1 in: Genome-wide characterization of folate transporter proteins of eukaryotic pathogens. F1000Research. 2017.\n\nSaitou N, Nei M: The neighbor-joining method: a new method for reconstructing phylogenetic trees. Mol Biol Evol. 1987; 4(4): 406–25.\n\nSanderson MJ: Confidence limits on phylogenies: the bootstrap revisited. Cladistics. 1989; 5(2): 113–29.\n\nNei M, Kumar S: Molecular evolution and phylogenetics. Oxford University Press; 2000.\n\nRambaut A: FigTree 1.4.2 software. Institute of Evolutionary Biology, Univ. Edinburgh.\n\nMassimine KM, Doan LT, Atreya CA, et al.: Toxoplasma gondii is capable of exogenous folate transport. A likely expansion of the BT1 family of transmembrane proteins. Mol Biochem Parasitol. 2005; 144(1): 44–54.\n\nSwaan PW: Membrane Transport Proteins and Drug Transport. Burger's Medicinal Chemistry and Drug Discovery. 2003.\n\nMansour TE: Chemotherapeutic targets in parasites: contemporary strategies. Cambridge University Press; 2002.
\n\nAlam A, Goyal M, Iqbal MS, et al.: Novel antimalarial drug targets: hope for new antimalarial drugs. Expert Rev Clin Pharmacol. 2009; 2(5): 469–89.\n\nDean P, Major P, Nakjang S, et al.: Transport proteins of parasitic protists and their role in nutrient salvage. Front Plant Sci. 2014; 5: 153.\n\nSaier MH Jr, Beatty JT, Goffeau A, et al.: The major facilitator superfamily. J Mol Microbiol Biotechnol. 1999; 1(2): 257–79.\n\nSowunmi A, Fehintola FA, Adedeji AA, et al.: Open randomized study of pyrimethamine-sulphadoxine vs. pyrimethamine-sulphadoxine plus probenecid for the treatment of uncomplicated Plasmodium falciparum malaria in children. Trop Med Int Health. 2004; 9(5): 606–14.\n\nNzila A, Mberu E, Bray P, et al.: Chemosensitization of Plasmodium falciparum by probenecid in vitro. Antimicrob Agents Chemother. 2003; 47(7): 2108–12.\n\nRichard D, Kündig C, Ouellette M: A new type of high affinity folic acid transporter in the protozoan parasite Leishmania and deletion of its gene in methotrexate-resistant cells. J Biol Chem. 2002; 277(33): 29460–7.\n\nRichard D, Leprohon P, Drummelsmith J, et al.: Growth phase regulation of the main folate transporter of Leishmania infantum and its role in methotrexate resistance. J Biol Chem. 2004; 279(52): 54494–501.\n\nKündig C, Haimeur A, Légaré D, et al.: Increased transport of pteridines compensates for mutations in the high affinity folate transporter and contributes to methotrexate resistance in the protozoan parasite Leishmania tarentolae. EMBO J. 1999; 18(9): 2342–51.
\n\nGottesdiener KM: A new VSG expression site-associated gene (ESAG) in the promoter region of Trypanosoma brucei encodes a protein with 10 potential transmembrane domains. Mol Biochem Parasitol. 1994; 63(1): 143–51.\n\nStewart RA, Meyer KF: Isolation of Coccidioides immitis (Stiles) from the soil. Exp Biol Med. 1932; 29(8): 937–8.\n\nGreene DR, Koenig G, Fisher MC, et al.: Soil isolation and molecular identification of Coccidioides immitis. Mycologia. 2000; 92(3): 406–10.\n\nLitvintseva AP, Marsden-Haug N, Hurst S, et al.: Valley fever: finding new places for an old disease: Coccidioides immitis found in Washington State soil associated with recent human infection. Clin Infect Dis. 2015; 60(1): e1–3.\n\nHajji M, Kanoun S, Nasri M, et al.: Purification and characterization of an alkaline serine-protease produced by a new isolated Aspergillus clavatus ES1. Process Biochem. 2007; 42(5): 791–7.\n\nRodríguez-Cerdeira C, Arenas R, Moreno-Coutiño G, et al.: Systemic fungal infections in patients with human inmunodeficiency virus. Actas Dermosifiliogr. 2014; 105(1): 5–17.\n\nAwadelkariem FM, Hunter KJ, Kirby GC, et al.: Crithidia fasciculata as feeder cells for malaria parasites. Exp Parasitol. 1995; 80(1): 98–106.\n\nKrungkrai J, Cerami A, Henderson GB: Pyrimidine biosynthesis in parasitic protozoa: purification of a monofunctional dihydroorotase from Plasmodium berghei and Crithidia fasciculata. Biochemistry. 1990; 29(26): 6270–5.\n\nEyal E, Najmanovich R, Mcconkey BJ, et al.: Importance of solvent accessibility and contact surfaces in modeling side-chain conformations in proteins. J Comput Chem. 2004; 25(5): 712–24.
\n\nFedorova ND, Khaldi N, Joardar VS, et al.: Genomic islands in the pathogenic filamentous fungus Aspergillus fumigatus. PLoS Genet. 2008; 4(4): e1000046.\n\nNierman WC, Yu J, Fedorova-Abrams ND, et al.: Genome sequence of Aspergillus flavus NRRL 3357, a strain that causes aflatoxin contamination of food and feed. Genome Announc. 2015; 3(2): e00168–15.\n\nKatinka MD, Duprat S, Cornillot E, et al.: Genome sequence and gene compaction of the eukaryote parasite Encephalitozoon cuniculi. Nature. 2001; 414(6862): 450–3.\n\nRogers MB, Hilley JD, Dickens NJ, et al.: Chromosome and gene copy number variation allow major structural change between species and strains of Leishmania. Genome Res. 2011; 21(12): 2129–42.\n\nSelman M, Sak B, Kváč M, et al.: Extremely reduced levels of heterozygosity in the vertebrate pathogen Encephalitozoon cuniculi. Eukaryot Cell. 2013; 12(4): 496–502.\n\nKaur K, Coons T, Emmett K, et al.: Methotrexate-resistant Leishmania donovani genetically deficient in the folate-methotrexate transporter. J Biol Chem. 1988; 263(15): 7020–8.\n\nBeck JT, Ullman B: Affinity labeling of the folate-methotrexate transporter from Leishmania donovani. Biochemistry. 1989; 28(17): 6931–7.\n\nRichard D, Leprohon P, Drummelsmith J, et al.: Growth phase regulation of the main folate transporter of Leishmania infantum and its role in methotrexate resistance. J Biol Chem. 2004; 279(52): 54494–501.\n\nOuameur AA, Girard I, Légaré D, et al.: Functional analysis and complex gene rearrangements of the folate/biopterin transporter (FBT) gene family in the protozoan parasite Leishmania.
Mol Biochem Parasitol. 2008; 162(2): 155–64.\n\nUbeda JM, Légaré D, Raymond F, et al.: Modulation of gene expression in drug resistant Leishmania is associated with gene amplification, gene deletion and chromosome aneuploidy. Genome Biol. 2008; 9(7): R115.\n\nFlegontov P, Butenko A, Firsov S, et al.: Genome of Leptomonas pyrrhocoris: a high-quality reference for monoxenous trypanosomatids and new insights into evolution of Leishmania. Sci Rep. 2016; 6: 23704.\n\nKraeva N, Butenko A, Hlaváčová J, et al.: Leptomonas seymouri: Adaptations to the Dixenous Life Cycle Analyzed by Genome Sequencing, Transcriptome Profiling and Co-infection with Leishmania donovani. PLoS Pathog. 2015; 11(8): e1005127.\n\nEl-Sayed NM, Myler PJ, Blandin G, et al.: Comparative genomics of trypanosomatid parasitic protozoa. Science. 2005; 309(5733): 404–9.\n\nEl-Sayed NM, Myler PJ, Bartholomeu DC, et al.: The genome sequence of Trypanosoma cruzi, etiologic agent of Chagas disease. Science. 2005; 309(5733): 409–15.\n\nMassimine KM, Doan LT, Atreya CA, et al.: Toxoplasma gondii is capable of exogenous folate transport. A likely expansion of the BT1 family of transmembrane proteins. Mol Biochem Parasitol. 2005; 144(1): 44–54.\n\nKelly S, Ivens A, Manna PT, et al.: A draft genome for the African crocodilian trypanosome Trypanosoma grayi. Sci Data. 2014; 1: 140024.\n\nStoco PH, Wagner G, Talavera-Lopez C, et al.: Genome of the avirulent human-infective trypanosome--Trypanosoma rangeli. PLoS Negl Trop Dis. 2014; 8(9): e3176.
\n\nKumar S, Stecher G, Tamura K: MEGA7: Molecular Evolutionary Genetics Analysis Version 7.0 for Bigger Datasets. Mol Biol Evol. 2016; 33(7): 1870–4."
}
|
[
{
"id": "20535",
"date": "27 Feb 2017",
"name": "Raphael D. Isokpehi",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nSummary of Referee’s Report The manuscript presents a strong justification for research on folate transporter proteins as drug targets for diseases caused by eukaryotic pathogens including the malaria parasite. The manuscript reports a data curation effort that involves the use of the Eukaryotic Pathogens Database (EuPathDB) Resource. Several novel results to guide future research are included such as (i) a list of 234 folate transporter proteins from 63 eukaryotic microbes including eukaryotic pathogens; and (ii) phylogenetic trees of relatedness of the protein sequences. The authors observed the clustering of the protein sequences that indicate the possibility that antifolate drugs could be effective for multiple eukaryotic pathogens.\n\nMajor concerns are: (i) The need for a clearer description of the workflow for the construction of the protein list. (ii) There is inadequate support for the statement that “60% of the proteins were identified for the first time”.\n\n(iii) Confusion between retrieval and identification of protein sequences. The workflow diagram indicates retrieval of sequences but narrative text describes identification in multiple sections. The potential drug targeting categorization of the retrieved protein sequences is a key contribution of the research study.\n\nSome minor concerns include (i) typographic errors such as spelling of abbreviations (e.g. EuPathDB, PubMed and UniProt); (ii) classification of A. 
(Ajellomyces) capsulatus G186AR as a bacteria; and (iii) need to quantify the statistics associated with observations (Examples: in abstract “Many of the genomes”; in discussion: “A few of the proteins” ).\n\nTitle and Abstract: Is the title appropriate for the content of the article? The manuscript title is “Genome-wide characterization of folate transporter proteins of eukaryotic pathogens”. “Genome-wide characterization” does not effectively describe the accomplishments of the research reported. The manuscript in the conclusion section (Page 15) states “we have identified and classified 234 proteins…”. The workflow (Figure 1) provides categorization of proteins by features such as cellular location, presence of signal peptide and number of transmembrane helices. Figure 2 has the title “Categorization of proteins identified”. A suggested revised title is “Categorization of potential drug targeting folate transporter proteins from eukaryotic pathogens”. The “potential drug targeting” is obtained from the conclusion section.\n\nDoes the abstract represent a suitable summary of the work? There are sentences in the abstract that should be revised to accurately represent a suitable summary of the research performed. Comments/suggestions are provided below.\nQuantify the observations described as many. For example “genome sequences of many”: How many?\n\n“eukaryotic protozoa” > “eukaryotic microbes”. The term “eukaryotic microbes” encompasses pathogens and non-pathogens (e.g. 
Chromera velia and Vitrella brassicaformis1).\n\n“important data” > “critical biological information”\n\n“pathway are important for” > “pathways are necessary for”\n\n“Methods: We applied a combination of bioinformatics methods to examine the genomes of pathogens in the EupathDB for genes encoding homologues of proteins that mediate folate salvage in a bid to identify and assign putative functions.” > “Methods: We developed automated search strategies in the Eukaryotic Pathogen Database Resources (EuPathDB) to construct a protein list and retrieve protein sequences of folate transporters encoded in the genomes of 200 eukaryotic microbes. The folate transporters were categorized according to features including mitochondrial localization, number of transmembrane helix, and protein sequence relatedness.\n\nProvide key result(s) of the protein list retrieval and phylogenetic comparison of the retrieved proteins. For example, We constructed a list of 234 folate transporter proteins associated with 63 eukaryotic microbes including ??? algae, ??? fungi and ??? protozoa. Seven percent of the proteins were predicted to localize on the mitochondrial membrane. Phylogenetic tree revealed major (??? proteins) and minor (??? Proteins) clades. All the folate transporter sequences from the malaria parasite, Plasmodium, belonged to the major clade.\n\n“The mitochondrion is the predicted location of the majority of the proteins”. This statement is not supported by Figure 2D, where 7% of the protein sequences are labelled as Mitochondrial folate transporters.\n\nArticle content: Have the design, methods and analysis of the results from the study been explained and are they appropriate for the topic being studied?\n\nDesign and Methods:\nFigure 1 presents a conceptual hierarchical methodology. The rectangle labelled “Protein names/ sequences verification” has arrows to PubMed, UniProt, Membranetransporters.org, NCBI, GeneDB, Google Scholar and Phylogenetic analyses. 
It appears that the integrated results from the search strategies in the databases provided the input for the phylogenetic analyses. Please clarify.\n\nWhich step of the workflow resulted in the list of 234 proteins?\n\nHow many proteins were retrieved from the initial search using EuPathDB?\n\nProtein Features Retrieved rectangle: Was the retrieval of protein features performed on only the 234 proteins?\n\nThere is adequate explanation of the methods for phylogenetic analyses. Please provide the Newick format phylogenetic tree as a supplementary dataset.\n\nAnalysis of the Results:\nTable 1 is a major curation effort presented in the manuscript. a. The title of Table 1 should be updated to “Eukaryotic microbes from which folate transporters were identified”. The list includes non-pathogens. b. The content of the table (especially the Kingdom entries) should be checked for accuracy. The Kingdom column could be updated to Eukaryotic Microbe Group with entries as algae, protozoa or fungi. c. The column entries for A. capsulatus G186AR (mislabelled as bacteria) should be updated as the organism name is for a fungus (genus Ajellomyces). This update will also affect the Phylogenetic Tree (Figure 3). The node labelled Actinobacillus clusters with Ajellomyces macrogynus. d. An updated Table 1 should be presented as Dataset 2 in a spreadsheet file. This would enable secondary data analysis by other researchers. e. A new Table 1 could consist of columns for Eukaryotic Microbe Group, Genera of Eukaryotic Microbe, List of Species/Strain and Number of Folate Transport Proteins. This will provide reader with an overview of how the 234 proteins is distributed into the genera of the eukaryotic microbes. f. References listed for confirmation searches. Page 8, Paragraph 1, Sentence 1: “Our literature search for parasite folate transporters on PubMed and Google Scholar indicated 60% (38 out 63) of the proteins were identified for the first time as presented in Table 1. 
Comment: Among the references included in Table 1, only eight references (22, 73 to 77 and 82) on the basis of the article title provide experimental assessments of the folate transporters. Reference 85 is a reference for MEGA7 software. In the sentence “proteins” should be eukaryotic microbes. The proportion of eukaryotic microbes whose folate transporter(s) have been previously investigated with functional assays should be revised.\n\nFigure 2. Categorization of proteins identified. Authors should consider representing Figure 2A and 2B as bar graphs. Figures 2C, 2D and 2E have only two categories that can be described in the Results section.\n\nDiscussion a. “genomes of 64 strains”. Table 1 has 63 eukaryotic microbes. b. Discuss Chromera velia and Vitrella brassicaformis as organismal systems for investigating folate transporter function. See Woo et al.1\n\nConclusion Consider revising “In summary, we have identified and classified 234 proteins…” to “In summary, we have retrieved information on 234 folate transporter proteins from the Eukaryotic Pathogen Database (EuPathDB) resources. The folate transporter proteins were categorized into potential drug targeting features including mitochondrial localization, number of transmembrane helix, and protein sequence relatedness.\"\n\nData (if applicable): Has enough information been provided to be able to replicate the experiment? Are the data in a usable format/structure and have all the data been provided?\nTable 1 needs to be revised and converted to a Dataset.\n\nPlease provide the Newick format phylogenetic tree as a supplementary dataset.\n\nThe Gene Identifiers [Gene ID] in Dataset 1 can be used to retrieve the protein sequences from EuPathDB.",
"responses": []
},
{
"id": "20419",
"date": "06 Mar 2017",
"name": "Gajinder Singh",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nBelow are my major concerns:\nThe abstract does not provides an adequate summary of the article.\nThe authors have claimed much higher scope of their work than actually reported. While their work is restricted to folate transporters, they have claimed to work on whole folate pathway. In the Abstract: \"We applied a combination of bioinformatics methods to examine the genomes of pathogens in the EupathDB for genes encoding homologues of proteins that mediate folate salvage in a bid to identify and assign putative functions.\" \"These findings offer new possibilities for potential drugConclusion: development targeting folate-salvage proteins.\"\n\nNo proper justification for the work is provided.\na) While folate pathway is well established as drug target, the authors have only identified folate transporters. So please give examples of drugs (with names) which are known to target folate transporters in any organism. If no sufficient information is provided, the usefulness of the work is severely reduced.\nb) How many of folate transporters are essential in species where essentiality data is available such as P. berghei and T. gondii?\n\nThe methodology to identify transporters is not comprehensive.\nSince authors have used key-word searches to identify folate transporters, they are likely to miss many transporters not labelled as such. An appropriate methodology will incorporate profile (such as HMM) based searches. 
Thus the number of transporters identified by authors is most likely to be an underestimate.\n\nThe methodology is not clear.\nAuthors write that \"we utilized the word “folate” for search on the gene text and “folic acid” was used to confirm the hits\", then how only transporters were retrieved? The Figure 1 is confusing. Where is BLAST used here?\n\nThe manuscript is written very poorly with so many scientific and grammatical mistakes that it is very difficult for the reader to follow the manuscript. Below are some examples.\na) \"The mitochondrion is the predicted location of the majority of the proteins, with 15% possessing signal peptides.\" - how can mitochondria be majority location if only 15% have signal peptides and even less with mitochondrial signal peptide? Shouldn't the majority then be cytoplasmic?\nb) \"We identified 234 proteins to be involve in folate transport\".\nc) \"Since folate-binding protein YgfZ, folate/ pteridine transporter, folate/biopterin transporter, putative, reduced folate carrier family protein, folate/methotrexate transporter FT1, putative folate transporters alone and others have 10, 25, 132, 2, 7, 49 and 9.\" What are these numbers?\nd) \"So we decided to reconstruct the phylogeny based folate transporter, folate-biopterin transporter after considering the identification number, the species diversity in each category.\"\ne) \"The different proteins identified to be involved in folate salvage or related molecules were folatebinding protein YgfZ, folate/pteridine transporter, folate/biopterin transporter, reduced folate carrier family protein, folate/methotrexate transporter FT1 and folate transporters having a 4%, 11%, 56%, 1%, 3% and 21% identity, respectively.\" What does this statement mean?\nf) Does Table 1 really need to be 4 page long?\ng) \"The only Plasmodium species with results for proteins that salvage folate was P. 
falciparum\"\nh) \"However, folate transporters I and II were retrieved from our search of GeneDB for P. malariae and P. ovale curtisi, respectively.\" What are these transporter classes?\ni) \"Some of these pathogens include P. ultimum DAOM BR144, which has mitochondrial folate transporter/carrier proteins similar to Homo sapiens, E. cuniculi GB-M1, which has proteins similar to folate transporter, and S. punctatus DAOM BR117, which has folate-binding protein YgfZ.\"\nj) \"After phylogenetic analysis each sub-phylogeny show a clear characterization except for folate-biopterin transporters\"\nh) \"In this study, 234 genes encoding homologues of folate salvaging proteins were identified in the genome of 64 strains, representing 28 species of eukaryotic pathogens. Some of the pathogens include P. falciparum 3D7 and IT, P. knowlesi H, P. berghei ANKA, P. chabaudi chabaudi, T. brucei Lister 427, T. brucei TREU927, T. brucei gambiense DAL972, Encephalitozoon cuniculi GB-M1. The pathogens range from bacteria through to fungi, intracellular parasites such as Plasmodium and leishmania species, to extracellular parasites such as trypanosome species\" Which bacteria was included in the study?\ni) \"It has been estimated that over half of the drugs currently on the market target integral membrane proteins of which membrane transporters are a part, but unfortunately, these transporters have not been adequately explored as drug targets. 
Folate transporters therefore represent attractive drug targets for treatment of infectious diseases.\" Please tell us how many drugs are available in the market which target folate transporters, which is a more relevant statistic with respect to this study.\nj) \"In the trypanosomes and related kinetoplastids, a member of these transporters, the folate biopterin transporter (FBT) family of proteins was identified in Leishmania.\"\nk) \"It is thought that MFS proteins are related to the FBT.\" What is MFS?\nl) \"Results from our study describing the presence of these transporters across several phyla corroborate results other researches, establishing the conservation of folate transport function among FBT family proteins from species from plants and protists\".\nm) \"The clustering of these proteins suggests that these transport proteins have highly conserved regions often required for basic cellular function or stability\". The clustering does not suggest that these transport proteins have highly conserved regions often required for basic cellular function or stability.\n\nn) \" We also performed phylogenetic comparisons of identified proteins. .\".",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-36
|
https://f1000research.com/articles/6-273/v1
|
15 Mar 17
|
{
"type": "Review",
"title": "General guidelines for biomedical software development",
"authors": [
"Luis Bastiao Silva",
"Rafael C. Jiménez",
"Niklas Blomberg",
"José Luis Oliveira",
"Rafael C. Jiménez",
"Niklas Blomberg"
],
"abstract": "Most bioinformatics tools available today were not written by professional software developers, but by people that wanted to solve their own problems, using computational solutions and spending the minimum time and effort possible, since these were just the means to an end. Consequently, a vast number of software applications are currently available, hindering the task of identifying the utility and quality of each. At the same time, this situation has hindered regular adoption of these tools in clinical practice. Typically, they are not sufficiently developed to be used by most clinical researchers and practitioners. To address these issues, it is necessary to re-think how biomedical applications are built and adopt new strategies that ensure quality, efficiency, robustness, correctness and reusability of software components. We also need to engage end-users during the development process to ensure that applications fit their needs. In this review, we present a set of guidelines to support biomedical software development, with an explanation of how they can be implemented and what kind of open-source tools can be used for each specific topic.",
"keywords": [
"biomedical software",
"guidelines",
"software development",
"bioinformatics",
"Agile"
],
"content": "Introduction\n\nAs an increasing number of scientific results are being generated from omics studies, new translational medicine applications and bioinformatics tools are needed to promote the flow of these results into clinical practice, i.e. the knowledge needs to be translated from the bench to the bedside, to foster development of new biotechnological products and improve patients’ health. Biomedical informatics intends to support the integration and transfer of knowledge across all major subject areas of translational medicine – from the study of individual molecules to the study of whole populations1. Translational medicine brings together many areas of informatics, including bioinformatics, imaging informatics, clinical informatics and public health informatics. Bioinformaticians, translational researchers and computational biologists identify the molecular and cellular components that can be targeted for specific clinical interventions and treatments for specific diseases. Imaging informatics also plays a significant role in understanding pathogenesis and identifying treatments at the molecular, cellular, tissue and organ level. Richer methods to visualize and analyse imaging data are already being investigated and developed2. Other techniques such as text and data mining have been applied to clinical reports. Additionally, translational research teams need to focus on decision support, natural language processing (NLP), standards, information retrieval and electronic health records.\n\nThe biomedical informatics landscape is pushing for the development of more professional and easy-to-use software applications, in order to address the pressing need to translate research outcomes into clinical practice. To accomplish this, solid software engineering approaches must be adopted. Despite being a relatively young discipline, biomedical informatics has evolved at an impressive rate, constantly creating new software solutions and tools. 
However, due to their multidisciplinary nature, it is often difficult for individual studies to gather solid knowledge in their various fields. This problem has been flagged by several authors, who have proposed general competences that undergraduate students should acquire3,4. These competences can be obtained through introducing complementary courses, such as software programming, in existing curricula, or by creating new academic degree courses5. While these strategies have resulted in many new and successful graduates, the right balance between strong expertise in a single topic and medium expertise in many topics is not always easy to find.\n\nMany researchers without training in software engineering have found themselves faced with the intricate task of building their own software solutions. Moreover, researchers and clinicians typically perceive software development as an auxiliary task to serve science, rather than a central goal6. The result is sometimes code that is difficult and costly to maintain and re-use. This software dependency is indeed a problem across all science, where concerns about the reproducibility of research have raised the need for robust, open access and open source software7,8. The development of software projects requires effective collaboration between users and software developers, and also between the users themselves.\n\nAnother common drawback of current bioinformatics applications is the lack of user-friendly interfaces, making them difficult to use and navigate. User-centered design has also been proposed as a way to minimize this problem9. The development of open source solutions has promoted software quality in the field, since it encourages public review, reuse, correction and continuous extension10.\n\nMost bioinformatics software is written by researchers who use it for their own individual purposes, a process long-identified as end-user programming11. 
However, these “new” programmers face many software engineering challenges, such as making decisions about design, reuse, integration, testing, and debugging12. Several authors have tried to introduce software engineering approaches in bioinformatics programming to address this problem. Hastings et al.13 compiled several recommendations that should be used to ensure the usability and sustainability of research software. Most of these suggestions are part of fundamental programming principles; e.g. keep it simple, avoid repetitions, avoid spaghetti code. By examining a group of software projects, Rother et al. also identified a set of techniques that facilitate the introduction of software engineering approaches in academic projects14. This work, which came from the authors’ own experience in conducting software projects, provided readers with a toolbox consisting of several steps, starting with traditional ones such as user stories and CRC cards. In a more specific study, Kamali et al. discussed several software testing methodologies that can be used in bioinformatics, such as simulators, testing in an operational environment and cloud-based software testing15. Artaza et al. proposed 10 metrics for life science software development, identified as the most relevant by a group of experts16. They include topics such as version control and software discoverability or automatic building. 
In a similar approach, Wilson et al.17,18 described a set of “good enough” principles that should be followed to better organize scientific computing projects, starting at the data gathering phase and continuing up to the writing of the manuscript.\n\nIn this paper, leveraging the experience of the MedBioinformatics project, we present a set of recommendations for biomedical software development, with an explanation of how they can be implemented and what kind of open-source tools can be used for each specific topic.\n\n\nWhy should we care about software development recommendations?\n\nMany research organizations and teams can create biomedical software, but far too often these tools are not sufficiently mature to be used by most clinical researchers and practitioners, because they are incomplete, lack user-friendly interfaces, and software maintenance is not guaranteed after project completion. So, the main question we asked ourselves was how to ensure that the biomedical software development process in research institutes remains reliable and repeatable without requiring major organizational changes.\n\nDeveloping high quality biomedical software that meets end-users’ expectations implies following a minimal set of software engineering guidelines. We propose the following:\n\n• Team and project management\n\n• Tracking the development process\n\n• Software integration and interoperability\n\n• Test-Driven Development (TDD) and continuous integration (CI)\n\n• Documentation\n\n• Software distribution\n\n• Licensing\n\nFigure 1 presents a software development process that follows this general set of key steps. The first step, team and project management, allows team members to keep track of group tasks and schedules, and be involved in development decisions. This encourages involvement of other users besides developers, who can point out missing features, give feedback and report bugs, helping communication within the whole team. 
Tracking the software development process consists of a combination of technologies and practices mostly used for source code management, but applicable to other collaborative tasks such as writing papers, product documentation, web site content, internal guidelines, and many more. Next, we have a cyclical pipeline between software integration and interoperability, which starts with the software specification phase and proceeds to the distribution phase, consisting of development, validation and deployment stages. The licensing of the software is one step that should be defined as early as possible, because during the development process it is often necessary to include third-party dependency libraries, and the licenses should be compatible.\n\nThe test-driven development process can be used throughout the entire workflow, so that each unit is tested and the components’ integration is validated. Moreover, the documentation of each software module is important, and should be updated during all development phases. Finally, after the software application is distributed, appropriate maintenance and support is needed to assure end-users can rely on someone to handle their requests and help solve any problems.\n\nTo help the reader navigate through each of the following guidelines, we have divided each one into three sub-sections:\n\n1) A summary that describes what it is intended for\n\n2) The process description that explains what benefits it provides\n\n3) Examples of tools and services that help to implement the guideline.\n\n\nTeam and project management\n\nSummary:\n\nTeam and/or project management tools are essential for many organizations, to help in planning and organizing teams, tasks, and schedules. Implementing them during software development allows teams to stay synchronized about task scheduling and milestones, and helps track individual and general progress, identifying difficulties early on so that the necessary adjustments can be made. 
There are various software applications available that manage the development process; they typically include a variety of features for planning, scheduling, controlling costs, managing budgets, allocating resources, collaborating, and making decisions.\n\nProcess description:\n\nTracking and organizing the development process typically involves the following main features:\n\n• Task management – To prioritize what functionality is developed over the different phases of a project. It is often provided as a graphical user interface tool that uses drag and drop functionality to facilitate project management, such as Kanban boards – a method to visualize and manage the workflow;\n\n• Code reviewing – This important practice is often used to support teams of multiple developers, although it is also very useful for tracking the progress of a single developer. These tools allow the code to be audited by providing differential views of code changes, normally web-based interfaces where reviewers/auditors inspect the code independently, from their own machines, as opposed to synchronous review sessions where authors and reviewers meet to discuss changes;\n\n• Source code repositories – A source code repository is a web hosting facility to store and manage source code, and normally supports version control;\n\n• Bug tracking – Keeps track of all defects and problems with the source code, using a predefined nomenclature to describe each issue.\n\nThe process typically also includes document repositories, wikis, discussion forums, time-tracking, Gantt mapping, file storage, calendars and versioning control.\n\nThe principles behind team and project management tools have been implemented in several software development methodologies, such as Lean and Agile, and are important aspects of the Scrum methodology, Kanban and extreme programming (XP)19. 
Here, team management relies on several types of meetings, such as sprint planning meetings, daily Scrum meetings, sprint review meetings, sprint retrospective meetings and backlog refinement meetings. The Scrum Master is responsible for planning what will be discussed. Developers also need to be prepared to analyse their development process, and negotiate future plans and potential deadlines. While Agile methodologies can lead to too many meetings, it is highly recommended to meet periodically to coordinate the development process.\n\nExamples:\n\nDepending on the type of financial resources available, free or open source management applications can be adopted, installed locally or used as a service in the cloud. Some examples of management applications are: Phabricator, Redmine or JIRA, GitHub and Bitbucket.\n\n\nTracking the development process\n\nSummary:\n\nA source control management system (SCM) provides coordination and management services between members of a software development team. It can be implemented in many different ways; at the most basic level, it could be a shared folder in which only the newest versions of files are available for use. In software programming, when there are several team members, the concept of branches is very important. Quite often, projects are supported by a single researcher, but branching is also very important for these small projects. To correctly support the concept of branches, more complex software is required.\n\nProcess description:\n\nThe more recent versions of SCMs allow developers to work simultaneously on the same file, merge changes with other developers’ changes and track and audit changes that were pull requested. Nowadays, SCMs often include components to assist code review and also to manage software process milestones and roadmaps.\n\nThe development process includes two branches: master and dev. Master is the most stable branch. Only bug fixing is allowed on it, and fixes should always reach master through pull requests. 
Dev contains the new features and their unstable branches. This is where the developers are creating the next releases of the software. Figure 2 shows an example of the bug fixing flow that occurs when a new branch is created from the master.\n\nThe process usually starts with an issue being reported, and after a decision has been made, it is assigned to a developer. Before going to production, it needs to pass internal tests overseen by an internal testing team. If the bug has been fixed as requested, the case is closed; otherwise, a report is sent back to the developer with a new set of issues.\n\nNew features are developed according to users’ feedback. This is a complex task that often involves re-engineering the applications. This process may break some other features already in place. Thus, the new features are implemented in a development branch, passing through several analyses, tests and user feedback stages. Finally, release management is also performed within the SCM. Generally, it uses an incremental numbering schema to tag each version. In this way, it is always possible to track older versions and roll back to a previous version, which is mainly required to compare the behaviour of different versions.\n\nThe following best practices should be applied to software version control:\n\n• Before committing, check for possible changes in the repository;\n\n• When committing a change to the repository, make sure the change reflects a single purpose (e.g. fixing a bug, adding a new feature);\n\n• If possible, try to create change sets linked to the issue tracker. Use the issue ID in the commit message;\n\n• After merging, run the unit tests to ensure that the merge was successful;\n\n• After creating a tag, do not commit to it any more. Treat the tag as read-only. 
If it is necessary to resolve an issue in that specific version, create a branch from that tag and commit the changes to it;\n\n• Try not to merge a large number of changes between the trunk and the branches. Use atomic commits;\n\n• Make at least one commit per day with all the day’s work.\n\nExamples:\n\nSeveral version control systems (VCS) can manage code development, such as Git or Mercurial. GitHub or Bitbucket are examples of ready-to-use SCM services.\n\n\nSoftware integration and interoperability\n\nSummary:\n\nSoftware integration and interoperability with external systems is a very important requirement in the biomedical domain, due to the reusability of existing repositories, services, algorithms, components and even applications. Designing an application programming interface (API) is crucial in distributed system development, so that the final solution can interconnect and interoperate with other systems.\n\nProcess description:\n\nA programming interface exposes part of a system’s behaviour, and it is sometimes difficult to implement when different platforms and programming languages are required. Since creating a new interface for each specific service could be tiresome and error-prone, it is often preferred to take a generic interface and express application-specific semantics through it. This is often a trade-off between performance, extensibility and stability of the API. 
To support the specification of new semantics and the development of systems complying with such interfaces, Interface Description Languages (IDLs) emerged as formal definition languages for describing software interfaces, often coupled with facilities for documenting the API and generating consumer and provider code stubs for multiple platforms or programming languages.\n\nTwo of the most widely used API styles are SOAP and REST:\n\n• The Simple Object Access Protocol (SOAP) is an Internet protocol for messaging and remote procedure calls, using Extensible Markup Language (XML) as the base message format and usually (although not necessarily) HTTP as the transport protocol. The Web Services Description Language (WSDL) is a commonly used IDL for describing a web service using SOAP. This protocol was very popular at its inception but is nowadays being replaced by other solutions such as REST.\n\n• Representational State Transfer (REST) is an architectural style that defines an interface as a means of accessing and manipulating well-identified resources, using HTTP as the transport protocol and a set of methods for reading and writing resource state. REST is praised for its simplicity, performance, scalability and reliability. In the scope of web applications, client modules for consuming RESTful services can be easily implemented without the need for complex external libraries.\n\nDefining an API is very important for software reusability, to ensure that developers allow their services to be integrated in third-party applications. In the biomedical domain, besides the existence of REST web services, the use of well-defined standards and vocabularies is also crucial.\n\nExamples:\n\nWeb service facilities are generally included in the software development toolkits of most programming languages.\n\n\nTest-Driven Development (TDD) and continuous integration (CI)\n\nSummary:\n\nThe Test-Driven Development methodology is a software development technique based on short cycles. 
The basic idea is that the developer writes the test cases for a specific use case before writing the code that satisfies them. A set of assertions should be established in each test, helping developers to better identify the requirements for each component of the software. As a complement to TDD, Continuous Integration (CI) is a development practice that automates the build, allowing teams to detect problems early.\n\nProcess description:\n\nIn a software development journey, there are often several strategies for bug fixing, and changing the behaviour of modules may introduce problems in other parts of the software. There are three strategies that could be used to tackle the issue:\n\n• Unit and integration tests – Tests written by the programmer to verify that a particular part of the code respects its contract, i.e. what the input and the output are. Integration tests are often built to verify that the different pieces of the system work together.\n\n• Continuous integration – A practice that incorporates automatic builds, and allows the teams to detect problems earlier.\n\n• TDD – The practice of writing the tests before writing the code.\n\nTDD can be applied not only with unit tests but also with interfaces. The tooling used to develop unit tests for the core of the application depends on the programming language. The methodology itself is simple, but applying it may be more complex. There is always a trade-off between the overhead it introduces and its benefit, so it can be adapted according to specific needs, e.g. validation of critical processes, as is common in the biomedical domain. TDD allows writing of code that automatically verifies if the produced output of an algorithm is as expected20. These tests can be run at any time, making it easier to deal with future changes in code and saving time in future updates.\n\nTDD and CI make the development process smoother, more predictable and less risky, even in advanced stages of the software lifecycle. 
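As a minimal sketch of this test-first cycle in Python, using the standard `unittest` module (the `trim_sequence` function and its contract are invented here purely for illustration): the assertions are written first to pin down the requirements, and the implementation is then written until they pass.

```python
import unittest


def trim_sequence(seq, max_len):
    """Return `seq` truncated to at most `max_len` characters.

    Hypothetical function: its contract was fixed by the tests below
    before this body was written.
    """
    if max_len < 0:
        raise ValueError("max_len must be non-negative")
    return seq[:max_len]


class TrimSequenceTest(unittest.TestCase):
    """Unit tests defining the expected input/output contract."""

    def test_truncates_long_input(self):
        self.assertEqual(trim_sequence("ACGTACGT", 4), "ACGT")

    def test_keeps_short_input_unchanged(self):
        self.assertEqual(trim_sequence("ACG", 10), "ACG")

    def test_rejects_negative_length(self):
        with self.assertRaises(ValueError):
            trim_sequence("ACGT", -1)

# Run the suite with: python -m unittest module_name
```

In a CI setup, a server such as Jenkins or Travis-CI would run this suite on every commit, so a change that silently breaks the contract fails the build immediately.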
Additionally, bugs can be traced and solved sooner, as changes are continuously integrated into the project code. CI proposes the following set of development guidelines:\n\n• Do not check in on a broken build;\n\n• Always run all commit tests locally before committing;\n\n• Commit your changes frequently (at least once a day);\n\n• Never go home with changes to commit;\n\n• Never go home on a broken build;\n\n• Always be prepared to revert to the previous revision;\n\n• Take responsibility for all breakages that result from your changes;\n\n• Fix broken builds immediately.\n\nExamples:\n\nAn example of a tool that can be used for TDD is JUnit for Java. To test web interfaces, there is the nightwatch.js tool, amongst others. For CI there are tools such as Jenkins, Travis-CI or TeamCity.\n\n\nDocumentation\n\nSummary:\n\nDocumentation is one of the most important aspects of long-term software development. Building comprehensive documentation is very important for software reusability and maintenance, helping to mitigate the arrival/departure of team members. Nevertheless, biomedical research software is often born from experiments and scripts, and researchers are often not willing to document all processes and source code.\n\nProcess description:\n\nHigh-level requirements intend to depict what the system “will be”, rather than what it “will do”. The emphasis is therefore on non-functional or business requirements. As the project evolves, these requirements will become progressively more detailed, and eventually converge with low-level requirements. Use case analysis is important for any development project, and it is a task usually shared with end-users. 
It is important to choose a simple and comprehensive use case template, and sometimes a first iteration with a key user can help refine it before distributing the template among all users.\n\nOther technical documentation needs are mostly related to the project set-up, where a wiki system can be used for storing dispersed information in a controlled environment where everyone is able to edit/comment. This repository can include use cases, architecture/database diagrams, user interface mock-ups, and any project-related documents.\n\nLast but not least, inline source code documentation is very important to define and explain the different parts of the source code, making it easier for programmers when they need to add extra features or fix bugs. Nowadays, automatic API documentation generators can create easy-to-read documentation from inline source code documentation.\n\nExamples:\n\nFor general documentation, Markdown or Sphinx (also used for Python) can be used. For the Java language, there is Javadoc, while other languages have their own documentation conventions that can be followed.\n\n\nSoftware distribution\n\nSummary:\n\nWeb-based solutions can be deployed on web servers, which makes life a lot easier for the application's end-users, who do not need to deal with local installation. It is essential to handle updates smoothly without disrupting the quality of service provided.\n\nProcess description:\n\nThe deployment of each new release must not be performed directly in the production environment. It should follow three release management steps: development, testing and production (Figure 3). These distinct stages have similar conditions and are deployed on different servers. Also, the production data is replicated in these environments to guarantee that the deployment will be safely performed. Software engineers will often perform the development deployment and test the new features in this environment. 
When this milestone is reached, the release is performed and updated in the test stage. This version will be passed to a group responsible for testing, gathering feedback and feature enhancement. Once it has passed this stage, the final release will go into production to be used by the end-users.\n\nExamples:\n\nThis is an organizational guideline, so no special tools are needed. Nevertheless, there are auxiliary tools that help the deployment and distribution process, mainly when the applications require complex setups. For example, it is possible to use software containers like Docker to distribute complex software and help deploy it, ensuring the whole community can run the software21–23.\n\n\nLicensing\n\nSummary:\n\nLicensing and copyright attribution is a subject that should be addressed from the very beginning of the project. The goal is to clarify the terms that will regulate future use of the software – e.g. commercial, free use, open source. Open source software is currently a trend, even in bigger companies, as a way to credit the authors and promote work dissemination and collaborative development. Several kinds of licenses are available to regulate these relationships, although an individual disclaimer can be written. Free and Open Source Software (FOSS) licenses are commonly used; they allow the product to be modified and redistributed without having to pay the original author.\n\nProcess description:\n\nThe license should be stated clearly on the project’s front page and in the root of the source code. The full license text can be included here in a file called COPYING or LICENSE, following the standard format.\n\nThe copyrights should be assigned together with the license. The common nomenclature adds the year and the organization owning the copyright: Copyright (C) <year> <name of organization>. The year specification may be a range, such as 2014–2016, to restrict the copyright to a period of time24. 
This line should be included in the headers of all source code files, together with a short license notice.\n\nExamples:\n\nThere are different types of open source licenses that come with different conditions and restrictions. We will list the most commonly used open source licenses:\n\n• BSD License – It is one of the most permissive FOSS licenses. Users that re-use the code can do whatever they want, except in the case of redistributing source or binary, where they must always retain the copyright notice.\n\n• Apache License 2.0 – This license is also very permissive. It allows the licensed source code to be used in open-source and also in closed-source software.\n\n• GNU GPL – This license is restrictive. The users of the licensed system are free to use the licensed system without any usage restrictions; analyze it and use the results of the analysis (the source code must be provided and cannot be hidden); redistribute unchanged copies of the licensed system, and also modify and redistribute modified copies of the licensed system.\n\n• GNU LGPL – It is a trade-off between the restrictive GNU GPL and the permissive BSD. LGPL assumes that a library licensed under LGPL can be used in a non-GPL application. All the changes applied to the LGPL library must remain under LGPL. It assumes copyleft (‘All copyrights reversed’) on source code files, and not on the whole program.\n\n\nConclusion and future directions\n\nIn the biomedical domain, many new code scripts, algorithms, tools and services are currently being developed on a worldwide scale. However, the reuse of some of these software solutions outside the research lab is hindered because they do not follow consolidated software development methodologies. Early adoption of these methodologies is important in the development of biomedical tools so that they can reach a greater number of users, not only researchers but also healthcare professionals. 
During the development and distribution processes it is very important to involve end-users, to collect as much feedback as possible and create effective solutions.\n\nWe described a set of recommendations targeted at biomedical software developers, aimed at achieving a good balance between fast prototyping, and robustness and long-term maintenance. It is important to keep in mind that these recommendations are quite general and may not fit all cases, so adaptations may be required. We hope they can help biomedical researchers to reorganize their workflow, make their tools more visible, allow reproducibility of their research, and most importantly, that the outcome of that research can be more easily translated into daily clinical practice.",
"appendix": "Author contributions\n\n\n\nAll authors participated in the discussions to achieve the software development recommendations. We believe all authors contributed equally to this work. All authors contributed to the writing and reviewing of this article. All authors read and approved the submitted manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis work has partially received funding from the European Union’s Horizon 2020 Research and Innovation programme for 2014–2020 under Grant Agreement n. 634143 (MedBioinformatics) and from the EU/EFPIA Innovative Medicines Initiative Joint Undertaking (EMIF grant n° 115372).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nReferences\n\nSarkar IN: Biomedical informatics and translational medicine. J Transl Med. 2010; 8: 22. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGehlenborg N, O'Donoghue SI, Baliga NS, et al.: Visualization of omics data for systems biology. Nat Methods. 2010; 7(3 Suppl): S56–S68. PubMed Abstract | Publisher Full Text\n\nHe B, Baird R, Butera R, et al.: Grand challenges in interfacing engineering with life sciences and medicine. IEEE Trans Biomed Eng. 2013; 60(3): 589–598. PubMed Abstract | Publisher Full Text\n\nKulikowski CA, Shortliffe EH, Currie LM, et al.: AMIA Board white paper: definition of biomedical informatics and specification of core competencies for graduate education in the discipline. J Am Med Inform Assoc. 2012; 19(6): 931–938. PubMed Abstract | Publisher Full Text | Free Full Text\n\nUmarji M, Seaman C, Koru AG, et al.: Software engineering education for bioinformatics.2009 22nd Conference on Software Engineering Education and Training. 2009; 216–223. Publisher Full Text\n\nKane DW, Hohman MM, Cerami EG, et al.: Agile methods in biomedical software development: a multi-site experience report. BMC Bioinformatics. 
2006; 7: 273. PubMed Abstract | Publisher Full Text | Free Full Text\n\nJoppa LN, McInerny G, Harper R, et al.: Computational science. Troubling trends in scientific software use. Science. 2013; 340(6134): 814–815. PubMed Abstract | Publisher Full Text\n\nGymrek M, Farjoun Y: Recommendations for open data science. Gigascience. 2016; 5(1): 22. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPavelin K, Cham JA, de Matos P, et al.: Bioinformatics meets user-centred design: a perspective. PLoS Comput Biol. 2012; 8(7): e1002554. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGentleman RC, Carey VJ, Bates DM, et al.: Bioconductor: open software development for computational biology and bioinformatics. Genome Biol. 2004; 5(10): R80. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNardi BA: A small matter of programming: perspectives on end user computing. MIT press, 1993. Reference Source\n\nKo AJ, Abraham R, Beckwith L, et al.: The state of the art in end-user software engineering. ACM Computing Surveys (CSUR). 2011; 43(3): 21. Publisher Full Text\n\nHastings J, Haug K, Steinbeck C: Ten recommendations for software engineering in research. GigaScience. 2014; 3(1): 31. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPrlić A, Procter JB: Ten simple rules for the open development of scientific software. PLoS Comput Biol. 2012; 8(12): e1002802. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKamali AH, Giannoulatou E, Chen TY, et al.: How to test bioinformatics software? Biophys Rev. 2015; 7(3): 343–352. Publisher Full Text\n\nArtaza H, Hong NC, Corpas M, et al.: Top 10 metrics for life science software good practices [version 1; referees: 2 approved]. F1000Res. 2016; 5: pii: ELIXIR-2000. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWilson G, Bryan J, Cranston K, et al.: Good Enough Practices in Scientific Computing. arXiv:1609.00037. 2016. 
Reference Source\n\nWilson G, Aruliah DA, Brown CT, et al.: Best practices for scientific computing. PLoS Biol. 2014; 12(1): e1001745. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSchwaber K: Scrum development process.Business Object Design and Implementation. ed: Springer, 1997; 117–134. Publisher Full Text\n\nBeck K: Test-driven development: by example. Addison-Wesley Professional, 2003. Reference Source\n\nBelmann P, Dröge J, Bremges A, et al.: Bioboxes: standardised containers for interchangeable bioinformatics software. GigaScience. 2015; 4(1): 47. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBoettiger C: An introduction to Docker for reproducible research.ACM SIGOPS Operating Systems Review. 2015; 49(1): 71–79. Publisher Full Text\n\nDi Tommaso P, Palumbo E, Chatzou M, et al.: The impact of Docker containers on the performance of genomic pipelines. PeerJ. 2015; 3: e1273. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFogel K: Producing open source software: How to run a successful free software project. \"O'Reilly Media, Inc.\", 2005. Reference Source"
}
|
[
{
"id": "21001",
"date": "03 Apr 2017",
"name": "João P. G. L. M. Rodrigues",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThis review focuses on an important topic in modern bioinformatics: good practices for software development. As the authors note, there is a growing body of data derived from experimental studies that requires automated analysis. The analysis is often carried out using custom software, written by first-time or inexperienced programmers, and results in unsupported, sub-optimal, or duplicated code. As the authors also mention, several groups have tried to put forward a collection of tips and guidelines to help researchers in developing ‘proper’ software. This review offers a similar set of guidelines, targeted specifically at the field of biomedical informatics, and draws on the experience of the authors on building their own tools.\nThe suggestions cover seven topics, from management to in-depth software development tips, and do a very good job at explaining their importance and their take on what constitutes a good approach. The authors also give very good examples of software tools to help readers set up a development environment. These range from the usual ‘use GitHub’ to TravisCI, Sphinx, and Docker. One suggestion would be to integrate some of these tools in Figure 1, to give readers a visual cue where these tools fit in each topic/step. The authors also provide a very nice summarized view of the release process, namely licensing and distribution (e.g. 
using Docker), and the follow-up maintenance.\nThere is one less positive aspect of this review, which in any case applies to most such attempts at ‘guidelines’ for bioinformatics software development. As the authors note, most of these tools are created to solve one very specific problem, or process a very specific dataset. These are not amenable to test-driven development, or to continuous integration. More importantly, most of the authors of these tools/scripts are biologists, not programmers, which usually translates to a lack of interest in proper programming etiquette. Thus, I believe that it is important to show and teach such users very simple programming rules, namely about how to make their code readable for others. For example, in the Python world, a simple recommendation to use ‘flake8’ to check for PEP8 coding standards and an editor (e.g. Atom, Sublime) that can do real-time code checks (typos, unused variables, indentation issues, etc). There is no need to suggest quasi-professional IDEs, as these will likely scare users away!\nAll in all, as a biologist doing bioinformatics and doing his best to follow proper software guidelines, I find reviews like this one very important to the field. They should probably feature in a ‘starting package’ for new PhD students in many labs. As an added suggestion, the authors could think of putting these guidelines in practice and follow up with a simple workshop/tutorial series, a la software carpentry, even if in webinar format.",
"responses": [
{
"c_id": "2861",
"date": "12 Jul 2017",
"name": "Luis Bastiao Silva",
"role": "Author Response",
"response": "Much obliged for your assessment and recommendations. We have redrawn Figure 1 following your suggestion. Regarding the second point, we recognised the importance of the subject and how recommendations vary according each developer/research profile and even programming language. For beginners or sporadic developers, most of the recommendation may not apply. However, this type of review creates the awareness of developing for the community, not just for ourselves. Finally, regarding last comment, indeed, guidelines for a new comers is good idea. We think this can be done at the institution level, since different methodologies may be used locally."
}
]
},
{
"id": "21000",
"date": "05 Apr 2017",
"name": "Victor Maojo",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis is a timely report, given the proliferation of all types of biomedical informatics applications (from medical apps to laboratory or even complex clinical ones) delivered by software developers that do not follow even simple criteria of solid software engineering. In fact, many of these applications are built to carry out quite simple computational tasks (even, many times, quite successfully) but without a sound rigorous computing basis, and then are prone to multiple subtle errors or they lack standardized approaches and interoperability capacities. Besides the interest of the topic, the paper is well written, with a solid analysis of the topic, useful recommendations and a selected reference section, which can be very helpful to a broad range of readers, from public health informaticians to bioinformaticians. Below are some comments.\n\nAlthough the authors are usually careful with this issue, readers outside the field may have some problems to understand the differences between medical informatics, bioinformatics, computational biologists and biomedical informatics. Sometimes the words are used in an interchangeable way in the paper, but this may lead to confusion. Some explanation might be necessary.\n\nWhen the authors refer to “focus on decision support, NLP, information retrieval and EHRs” they mix techniques and a concrete system (EHR). 
They should explicit what technique they refer for EHRs.\n\nThe authors begin to address, apparently, biomedical informatics (thus, including public health and clinical topics) but they focus later in bioinformatics, which I believe it is the best target for the paper. Differences are usually significant between clinical (for instance, EHRs, with many big software companies dedicated to this field). Some example may be useful.\n\nBesides the provided hyperlink, some reference should be added for the MedBioinformatics project and a brief description.\n\nThe software engineering guidelines suggested are of interest, but some additional brief comparison and similarities/differences with established methodologies (besides what is presented for Test-driven) may provide additional insight.\n\nAs mentioned above, some brief, real example carried out by the authors may add some information of interest, pointing out actual problems and possible approaches. In fact, the paper is quite generic, but some specific case/application in the biomedical domain can be of interest.\n\nSome differences may be pointed out when software developers work for an academic thesis or project, compared to a software company? Some comment may be of interest, too. In fact, many tools are quite simple, for a single task, and not intended for broader scenarios, where interoperability is necessary. Some recommendations could differentiate both cases.\n\nFor this reviewer, the paper may require some more explanation about design and prerequisites aspects, which are quite important, and some concrete example, but this is quite personal and the decision should be up to the authors.",
"responses": [
{
"c_id": "2860",
"date": "12 Jul 2017",
"name": "Luis Bastiao Silva",
"role": "Author Response",
"response": "Thank you for the positive assessment and helpful recommendations. We will answer point by point for your comments: 1) We agree, this discussion is important. We have included in this revision two new references where the explanation of the different fields is well addressed. 2) Done3) Indeed, this is true and we tried to make it more clear along the article. Moreover, we included new references that discuss this issue in detail.4) Thank you for highlighting this. A brief introduction to the project is now provided.5/6) Indeed, we agree with this remark. However, since these recommendations result from the experience of several software projects, where many concrete use cases were explored, we also feel that detailing those could be out of scope of the article. 7) Thank you for raising this, which is indeed a very important remark. We have now discussed this in more detail in the introduction section.8) We agree with your remark. The design and prerequisites aspects are briefly addressed in the documentation process. We changed this section to highly better these issues."
}
]
}
] | 1
|
https://f1000research.com/articles/6-273
|
https://f1000research.com/articles/6-1120/v1
|
12 Jul 17
|
{
"type": "Review",
"title": "Shifting paradigm of maternal and perinatal death review system in Bangladesh: A real time approach to address sustainable developmental goal 3 by 2030",
"authors": [
"Animesh Biswas"
],
"abstract": "Recently, Bangladesh has made remarkable progress in reducing maternal and neonatal morality, even though the millennium developmental goal to reduce maternal and neonatal mortality was not achieved. Sustainable Developmental Goal (SDG) 3 has already been set for a new target to reduce maternal and neonatal deaths by 2030. The country takes this timely initiative to introduce a maternal and perinatal death review system. This review will discuss the shifting paradigm of the maternal and perinatal death review system in Bangladesh and its challenges in reaching the SDG on time. This review uses existing literature on the maternal and perinatal death review system in Bangladesh, and other systems in similar settings, as well as reports, case studies, news, government letters and meeting minutes. Bangladesh introduced the maternal and perinatal death review system in 2010. Prior to this there was no such comprehensive death review system practiced in Bangladesh. The system was established within the government health system and has brought about positive effects and outcomes. Therefore, the Ministry of Health and Family Welfare of Bangladesh gradually scaled up the maternal and perinatal death review system nationwide in 2016 within the government health system. The present death review system highlighted real-time data use, using the district health information software(DHIS-2). Health mangers are able to take remedial action plans and implement strategies based on findings in DHIS-2. Therefore, effective utilization of data can play a pivotal role in the reduction of maternal and perinatal deaths in Bangladesh. Overall, the maternal and perinatal death review system provides a great opportunity to achieve the SDG 3 on time. 
However, the system needs continuous monitoring at different levels to ensure the quality and validity of its information, as well as effective utilization of findings for planning and implementation under a measurable accountability framework.",
"keywords": [
"Maternal death",
"neonatal death",
"death review system",
"Bangladesh"
],
"content": "Introduction\n\nDeath review systems for maternal deaths have been performed both in developed and developing countries for many years1. The majority of the death reviews focus on maternal death at the facility level2–16, but there are small number of community death review systems trialled in countries with high maternal deaths17,18. In Bangladesh, a comprehensive death review system for maternal death was not functional in the health system until 2010. In addition, adequate registration and notification of deaths was lacking until 2010. The Bangladesh maternal mortality survey in 2001 showed that the maternal mortality ratio is 320 per 100,000 live births19, and the millennium developmental goal (MDG) was set to reduce this to 143 per 100,000 live births by 201520. Likewise, neonatal deaths were also aimed to be reduced by two-thirds in the same time frame. Considering these challenges and needs, the country initiated and established a maternal and neonatal death review system in 2010 to address the MDGs21. The initial death review system ran for a year in a district of Bangladesh and showed some positive results21. In the meantime, the results of the Bangladesh maternal mortality survey 2010 were published in 2011, and clearly showed a progressive reduction in maternal mortality - 194 per 100,000 livebirths22. Although the annual death reduction rate did not achieve the target, the country started to use death review system data at a local level for the improvement of the overall situation. Subsequently, the country gradually expanded the death review into 10 districts by 2013, which covered approximately 20 million of the general population. Although the countdown2015 report mentioned that Bangladesh is one of nine countries with good progress in achieving the maternal death target by 201520, the United Nations (UNs) report estimated that the maternal mortality ratio of Bangladesh in 2015 was 176 per 100,00 livebirths23. 
Moreover, the Bangladesh demographic and health survey report of 2014 stated that the neonatal mortality rate is 28 per 1000 live births24.\n\nThe new sustainable development goal (SDG) 3 has set a universal target, which means Bangladesh has to reduce maternal mortality to less than 70 per 100,000 live births, as well as neonatal deaths to <12 per 1000 live births, by 20301. Considering the present targets, the Ministry of Health and Family Welfare (MoH&FW) of Bangladesh has put forward the maternal and perinatal death review system for national scale-up in 2016. The country has revised the maternal and perinatal death review system to be more action oriented and named this the Maternal and Perinatal Death Surveillance and Response (MPDSR).\n\nThis review discusses the shifting paradigm of the Bangladesh maternal and neonatal death review system over the last seven years, the effects of the death review system in the country, and how it facilitates reaching the SDG3 target. This review focuses on the most recent articles, news and reports published on the maternal and perinatal death review system in Bangladesh, as well as related articles from low income countries in a similar context. Google Scholar and MEDLINE/PubMed were used to find suitable articles, and full-text articles were preferred for review. A total of nine articles relating to the maternal and perinatal death review in Bangladesh were found in full text. Moreover, the study also reviewed available reports, case studies, government letters, meeting minutes, web articles and thesis papers relating to the topic.\n\n\nInception of the system in Bangladesh\n\nA maternal and perinatal death review explores medical and social causes related to maternal and neonatal deaths through a systematic process. Bangladesh initiated this type of intervention in one district, named Thakurgaon, with a population of around 1.4 million, in 201021. 
The system was run by the government through the Directorate General of Health Services (DGHS) in collaboration with the Directorate General of Family Planning (DGFP) of the MoH&FW. The initiative was under a partnership of the government and UN maternal and neonatal health initiatives. The death review was funded by UNICEF and Bangladesh, initially through DFID, UK Aid and then Global Affairs Canada; technical implementation support was given by a national non-government organization, the Centre for Injury Prevention and Research Bangladesh (CIPRB), under a partnership with UNICEF21.\n\nPreviously, there were no structured death review tools used widely in the country that provided evidence-based data for preparing action plans at different levels to reduce maternal and neonatal deaths in Bangladesh. Therefore, the country adapted its death review tool from existing tools from various sources, such as the Bangladesh maternal mortality survey, the hospital-based facility death review tool used by DGFP, the World Health Organization (WHO) verbal autopsy tool, and tools developed by the Obstetric and Gynaecological Society of Bangladesh. A national technical group was formed to review all existing tools and prepare a simpler version for the country to use. Technical persons from DGHS and DGFP, professional experts, UN agencies, researchers and public health experts worked together to finalize these tools and guidelines under a participatory process. Next, these were endorsed by the government for use in the targeted district25.\n\nThe guiding principle of the WHO ‘beyond the numbers’ approach for maternal death audit was followed, where the entire system maintained confidentiality, non-blaming, anonymity and a non-punitive approach26.\n\nThe system highlighted both community and facility death notifications of maternal deaths, neonatal deaths and stillbirths through the government health system. 
Subsequently, for community deaths, each of the maternal deaths, neonatal deaths and stillbirths was reviewed. In addition, social intervention through social interaction with the community to discuss the prevention of future deaths was also performed, called ‘social autopsy’. Maternal and perinatal death review findings were used for local level planning, and implementation of those action plans was performed by health managers to reduce such preventable deaths25. Death review findings were discussed and analysed by death review committees at the upazila and district level. A number of local action plans were made and implemented using the findings from the death review committee meetings21,25.\n\n\nScale up of the system\n\nIn the following years, the death review system gradually expanded from four districts in 2011 to ten districts by 201327. Moreover, at the beginning of 2015, Save the Children supported the government in introducing the Maternal and Perinatal Death Review (MPDR) in four districts where MaMoni Health System Strengthening Projects were running28. Overall, by 2015, 14 districts were under the coverage of MPDR, and the surveillance system covered approximately 28 million of the general population.\n\nIn 2015, the government took the initiative for countrywide national scale-up, considering its need and importance in addressing the quality of care in maternal and neonatal health, as well as reaching SDG3 in time. As part of this initiative, the Health Economics Unit of the Quality Improvement Secretariat of the MoH&FW took the initiative to update the national guideline and training manual for health care providers29–33. Three technical working groups were formed at a national level30 and followed the maternal death surveillance and response (MDSR) framework developed by the WHO in 2013, in order to fit the revised version of the death review system34. 
The previous version of the death review system did not focus on ‘response’; therefore, this component was embedded within the existing MPDR. The updated version of the death review system highlighted death ‘surveillance and response’ in the conceptual framework, and was renamed Maternal and Perinatal Death Surveillance and Response (MPDSR). The new, revised versions of the national guideline, ToT manual and tools were approved by the Ministry of Health and Family Welfare for countrywide use. The revised, simplified maternal and neonatal death review tools allow field-level government health workers to complete the review more easily. MPDSR also highlights the integration of the quality improvement committee at sub-district, district, divisional and national levels to review progress and respond to findings. In addition, facility-based MPDSR sub-committees at various levels were included to review facility deaths, in order to improve the quality of care in facilities. Furthermore, the new system has introduced a focal person for MPDSR from the health department at sub-district, district, divisional and national levels to closely monitor and supervise MPDSR activities, including reporting to the quality improvement committees34.\n\n\nUsing District Health Information Software 2\n\nThe system has been further enriched by integrating the death review data into the District Health Information Software 2 (DHIS-2) of the Health Management Information System (HMIS) of DGHS. This has strengthened the overall health system through effective online reporting of data. Therefore, all death notification data entered into DHIS-2 by a community health care provider, who works in a community clinic (a small unit providing outpatient maternal and neonatal health services at the village level, run by the government), can be seen at any time on the DHIS-2 platform by health managers at different levels for immediate planning and interventions based on the data35. 
Similarly, cause analyses of community maternal and neonatal deaths, including ICD-10 codes, are assigned at a divisional level by professional experts from medical college hospitals (obstetricians & gynaecologists and neonatologists). The Quality Improvement Secretariat of the MoH&FW organizes capacity development training for the professional experts performing manual analysis of possible medical causes, including factors associated with death, using a community verbal autopsy form36. Final causes of death are entered into DHIS-2 at the divisional level on a periodic basis. This creates a unique opportunity for upazila and district level health managers, as well as for national level policy makers, planners, researchers, developmental partners and related stakeholders, to understand and track the maternal and neonatal death situation in Bangladesh.\n\n\nImpact of the system\n\nThe development of the death review system for maternal and neonatal death has had a positive effect on the improvement of overall maternal and neonatal health, including playing a pivotal role in the reduction of a significant number of deaths. Various components of the system have been evaluated to determine their effects37. Notification of maternal and neonatal deaths at the community level allows deaths from the whole community to be captured, including under-privileged and hard-to-reach areas. Therefore, the maternal and neonatal mortality ratios that are calculated yearly in districts provide an accurate picture of the maternal and neonatal health situation in the particular areas. The notification system also highlights areas with a high number of deaths. Specific interventions taken by local managers in death-dense areas can lead to a reduction in deaths in subsequent years38. Another utilization of the death notification system is its incorporation with DHIS-2, which helps to visualise the live data online. 
In addition, local planning, such as using death spot maps at the sub-district level, helps health managers to concentrate where interventions are needed39.\n\nVerbal autopsy in the death review system at the community level is used to explore medical causes, causes of delays and social factors associated with deaths reported by the MPDSR in Bangladesh40,41. The findings of verbal autopsies are used by local health managers for effective planning and reduction of such deaths in the future, leading to improvements in the first delay (decision making) and second delay (transfer to a referral centre) and improvement in referrals to specific facilities42.\n\nFacility death review of maternal and neonatal deaths can also be used to explore the causes of deaths in facilities, as well as factors associated with the deaths, including gaps and challenges in the facility, which can then be overcome. The district uses these findings for effective planning and can intervene accordingly. For example, in one district, the provision of blood donors for an emergency supply of blood for mothers with postpartum haemorrhage (PPH) reduced the number of deaths due to PPH43.\n\nSocial autopsy in the death review system has allowed communities to engage in dialog on the causes of the deaths, explore social stigmas and barriers, and try to find possible solutions for the prevention of future deaths through an effective and harmonized dialog between the community and the health care providers who facilitate the social autopsy interviews25,44. It has been observed over the years that social autopsy is able to explore the social causes behind deaths45. Community interaction enhances the ownership of the community, allows understanding of their own problems, and empowers the community to think and take appropriate action. 
Representative publicly-elected community leaders at the village level can change social barriers to prevent unwanted deaths, minimize delays and influence pregnant mothers to take up care at government health facilities46,47.\n\n\nA unique approach\n\nThe MPDSR in Bangladesh is unique, as it not only includes death surveillance and response, but also covers community and facility maternal and newborn deaths together. In addition, the MPDSR captures stillbirth data through the death notification system. The system is a national programme of the government and its implementation is led by various sections of the MoH&FW, including DGHS, DGFP, the Management Information System (MIS) and the Health Economics Unit (HEU), along with the support of developmental partners, professional bodies and non-government organizations. The government health providers working in the health system are the key actors in the MPDSR roll-out, which is sustainable and replicable in other countries with similar settings. Similarly, the quality improvement committees at different levels are the main platform for the planning and implementation of various interventions. The data platform DHIS-2 is a live surveillance system and an effective monitoring tool for tracking the country's progress towards the maternal and neonatal death reduction targets set out in SDG3 for 203034. The initial cost of the development phase of the death review system was found to be high; however, cost estimations have shown that the cost of field implementation is much lower, allowing the system to be sustained and run within the ongoing government health system48.\n\nSouth-East Asian countries, such as India, Pakistan, Sri Lanka, Nepal and the Maldives, have already implemented death reviews for maternal deaths. India has conducted maternal and perinatal death inquiry and response in a few districts with an aim to work on community deaths. However, this system does not explore facility deaths, or how best to use the outcome data17,49. 
A study conducted in India has counted both community and facility deaths, but was restricted to maternal deaths only11. By contrast, Pakistan has performed maternal death cause identification using verbal autopsy17. A recent report by the WHO on MDSR highlighted the role of death surveillance and response in the reduction of maternal mortality in order to address the SDGs1. It also focused on both community and facility maternal death review.\n\n\nChallenges ahead\n\nThe new approach of MPDSR in Bangladesh could be an evidence-based example for other low and middle income countries in achieving the best effects at a local level using a government health system, which has achieved functional planning and implementation of decisions based on death review findings. However, many countries around the world have reported that maternal death review results showed a disparity between policy and practice; in particular, effective coordination and the planning required for a timely response remain a challenge4,50,51.\n\nExperiences of the early implementation of the Bangladesh maternal and perinatal death review from 2010 to 2015 have shown some strategic challenges. Previous studies mentioned that challenges still persist in capturing all deaths from the community, especially from pocket areas in hard-to-reach districts38,39. Ensuring the quality of data through the conduct of verbal autopsies in the community, and the best utilization of the findings at various levels, is also a challenge40–42. Moreover, social autopsy findings showed that social barriers are still a key issue in averting maternal and neonatal deaths45,46. It is recommended that social autopsy intervention in the community could play a pivotal role in changing the beliefs and practices of the community to seek appropriate care during pregnancy, delivery and the postpartum period. 
Social mobilization also influences social empowerment through engagement and social commitment; therefore, social autopsies in MPDSR offer an enormous opportunity for use in other countries44. At a facility level, limitations in proper documentation and record keeping have been identified as a key barrier to reviewing facility deaths43.\n\n\nConclusions\n\nThe shift in Bangladesh from the MPDR to the MPDSR is a timely initiative to achieve SDG3 in Bangladesh. A death notification system is able to capture all maternal deaths, neonatal deaths and stillbirths in the community and facilities, while the death review for maternal and neonatal deaths explores the causes, gaps, and challenges that need to be overcome. The quality improvement committees at different levels closely monitor progress and intervene accordingly. At a local level, action plans use live data in the HMIS, which supports health managers and planners in making decisions and implementing effective, focused interventions to achieve SDG3 in Bangladesh by 2030.",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nAcknowledgements\n\nI am grateful to CIPRB during the preparation of this manuscript. I am also thankful to Tarana Ferdous of the ARK Foundation for editing the manuscript.\n\n\nReferences\n\nWorld Health Organization: Time to respond: a report on the global implementation of maternal death surveillance and response (MDSR). World Health Organization. Maternal, newborn, child and adolescent health. 2016. Reference Source\n\nKongnyuy EJ, Mlava G, van den Broek N: Facility-based maternal death review in three districts in the central region of Malawi: an analysis of causes and characteristics of maternal deaths. Womens Health Issues. 2009; 19(1): 14–20. PubMed Abstract | Publisher Full Text\n\nDumont A, Tourigny C, Fournier P: Improving obstetric care in low-resource settings: implementation of facility-based maternal death reviews in five pilot hospitals in Senegal. Hum Resour Health. 2009; 7: 61. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKongnyuy EJ, van den Broek N: The difficulties of conducting maternal death reviews in Malawi. BMC Pregnancy Childbirth. 2008; 8: 42. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHofman JJ, Mohammed H: Experiences with facility-based maternal death reviews in northern Nigeria. Int J Gynecol Obstet. 2014; 126(2): 111–4. PubMed Abstract | Publisher Full Text\n\nOsman H, Campbell OM, Sinno D, et al.: Facility-based audit of maternal mortality in Lebanon: a feasibility study. Acta Obstet Gynecol Scand. 2009; 88(12): 1338–44. PubMed Abstract | Publisher Full Text\n\nKhanam RA, Khan M, Halim MA, et al.: Facility and Community Based Maternal Death Review in Bangladesh. Bangladesh J Obstet Gynaecol. 2009; 24(1): 18–21. Publisher Full Text\n\nPearson L, deBernis L, Shoo R: Maternal death review in Africa. Int J Gynaecol Obstet. 
2009; 106(1): 89–94. PubMed Abstract | Publisher Full Text\n\nOladapo OT, Adetoro OO, Fakeye O, et al.: National data system on near miss and maternal death: shifting from maternal risk to public health impact in Nigeria. Reprod Health. 2009; 6: 8. PubMed Abstract | Publisher Full Text | Free Full Text\n\nOlsen BE, Hinderaker SG, Bergsjø P, et al.: Causes and characteristics of maternal deaths in rural northern Tanzania. Acta Obstet Gynecol Scand. 2002; 81(12): 1101–9. PubMed Abstract | Publisher Full Text\n\nJafarey SN, Rizvi T, Koblinsky M, et al.: Verbal autopsy of maternal deaths in two districts of Pakistan--filling information gaps. J Health Popul Nutr. 2009; 27(2): 170–83. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDumont A, Gaye A, de Bernis L, et al.: Facility-based maternal death reviews: effects on maternal mortality in a district hospital in Senegal. Bull World Health Organ. 2006; 84(3): 218–24. PubMed Abstract | Free Full Text\n\nSupratikto G, Wirth ME, Achadi E, et al.: A district-based audit of the causes and circumstances of maternal deaths in South Kalimantan, Indonesia. Bull World Health Organ. 2002; 80(3): 228–34. PubMed Abstract | Free Full Text\n\nGoswami D, Rathore AM, Batra S, et al.: Facility-based review of 296 maternal deaths at a tertiary centre in India: could they be prevented? J Obstet Gynaecol Res. 2013; 39(12): 1569–79. PubMed Abstract | Publisher Full Text\n\nDumont A, Gaye A, De Bernis L, et al.: Lessons from the Field Facility-based maternal death reviews: effects on maternal mortality in a district hospital in Senegal. 2006; 023903(05). Reference Source\n\nPearson L, deBernis L, Shoo R: Maternal death review in Africa. Int J Gynaecol Obstet. 2009; 106(1): 89–94. PubMed Abstract | Publisher Full Text\n\nDikid T, Gupta M, Kaur M, et al.: Maternal and perinatal death inquiry and response project implementation review in India. J Obstet Gynaecol India. 2013; 63(2): 101–7. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nBayley O, Chapota H, Kainja E, et al.: Community-linked maternal death review (CLMDR) to measure and prevent maternal mortality: a pilot study in rural Malawi. BMJ Open. 2015; 5(4): e007753. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNational Institute of Population Research and Training (NIPORT), ORC Macro, Johns Hopkins University and ICDDR,B: Bangladesh Maternal Health Services and Maternal Mortality Survey 2001. Dhaka, Bangladesh and Calverton, Maryland (USA), 2003. Reference Source\n\nEl Arifeen S, Hill K, Ahsan KZ, et al.: Maternal mortality in Bangladesh: a Countdown to 2015 country case study. Lancet. 2014; 384(9951): 1366–74. PubMed Abstract | Publisher Full Text\n\nBiswas A, Rahman F, Halim A, et al.: Maternal and Neonatal Death Review (MNDR): A Useful Approach to Identifying Appropriate and Effective Maternal and Neonatal Health Initiatives in Bangladesh. Health. 2014; 6: 1669–79. Publisher Full Text\n\nNational Institute of Population Research and Training (NIPORT), MEASURE Evaluation, ICDDR,B: Bangladesh Maternal Mortality and Health Care Survey 2010. Summary of key findings and Implementation. Dhaka, Bangladesh, 2012. Reference Source\n\nWHO, UNICEF, UNFPA, et al.: Trends in maternal mortality: 1990 to 2015. WHO, Sexual and reproductive health, 2015. Reference Source\n\nNational Institute of Population Research and Training (NIPORT), Mitra and Associates, ICF International: Bangladesh Demographic and Health Survey 2014. Dhaka, Bangladesh, and Rockville, Maryland, USA, 2016. Reference Source\n\nBiswas A: Maternal, newborn, child and adolescent health Maternal and Perinatal Death Review (MPDR): Experiences in Bangladesh. World Health Organisation, 2015; 1–5, [cited 2017 Jan 19]. Reference Source\n\nLewis G: Beyond the numbers: reviewing maternal deaths and complications to make pregnancy safer. Br Med Bull. 2003; 67(1): 27–37. 
PubMed Abstract | Publisher Full Text\n\nDirectorate general of Health Services (DGHS): MPDR newsletter. DGHS. 2015; [Cited on 29 June 2017]. Reference Source\n\nSave the Children, Bangladesh: MaMoni Newsletter. Save the Children, Bangladesh, 2015; [Cited on 29 June 2017]. Reference Source\n\nBiswas A: Scaling-up Maternal and Perinatal Death Reviews in Bangladesh. MDSR action network. 2015; [Cited on 15 June 2017]. Reference Source\n\nMinistry of Health and Family Welfare, Bangladesh: QIS newsletter. Ministry of Health and Family Welfare, Bangladesh. 2015; [Cited on 29 June 2017]. Reference Source\n\nQuality Improvement Secretariat, MoHFW: E-newsletter. Quality Improvement Secretariat, MoHFW. 2016; [Cited on 29 June 2017]. Reference Source\n\nMahmud R: Rolling out MPDSR across the country. MDSR action network. 2016; [Cited on 15 June 2017]. Reference Source\n\nMahmud R, Biswas A: The roll out of MPDSR. Cited on 15 June 2017. Reference Source\n\nMinistry of Health and Family Welfare (MoHFW): National Guideline on MPDSR. 2016; [cited 2017 Jan 28]. Reference Source\n\nBiswas A: Using eHealth to support MPDR: Early experiences from Bangladesh. MDSR Action Network, 2015; 6–8. Reference Source\n\nQuality Improvement Secretariat, MoHFW: E-newsletter. Quality Improvement Secretariat, MoHFW, 2017; [Cited on 29 June 2017]. Reference Source\n\nBiswas A: Maternal and Neonatal Death Review System to Improve Maternal and Neonatal Health Care Services in Bangladesh. Örebro University, 2015. Reference Source\n\nBiswas A, Rahman F, Eriksson C, et al.: Community Notification of Maternal, Neonatal Deaths and Still Births in Maternal and Neonatal Death Review (MNDR) System: Experiences in Bangladesh. Health. 2014; 6(16): 2218–26. Publisher Full Text\n\nBiswas A: MDSR Action Network Mapping for action: Bangladesh. MDSR Action Network, 2015; 1–5, [cited 2017 Jan 27]. 
Reference Source\n\nHalim A, Utz B, Biswas A, et al.: Cause of and contributing factors to maternal deaths; a cross-sectional study using verbal autopsy in four districts in Bangladesh. BJOG. 2014; 121(Suppl 4): 86–94. PubMed Abstract | Publisher Full Text\n\nHalim A, Dewez JE, Biswas A, et al.: When, Where, and Why Are Babies Dying? Neonatal Death Surveillance and Review in Bangladesh. PLoS One. 2016; 11(8): e0159388. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBiswas A, Rahman F, Halim A, et al.: Experiences of Community Verbal Autopsy in Maternal and Newborn Health of Bangladesh. HealthMED. 2015; 9(8): 329–38. Reference Source\n\nBiswas A, Rahman F, Eriksson C, et al.: Facility Death Review of Maternal and Neonatal Deaths in Bangladesh. PLoS One. 2015; 10(11): e0141902. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBiswas A: Social autopsy as an intervention tool in the community to prevent maternal and neonatal deaths: experiences from Bangladesh Social autopsy. MDSR Action Network. 2016; 6–8, [cited 2017 Jan 24]. Reference Source\n\nBiswas A, Halim MA, Dalal K, et al.: Exploration of social factors associated to maternal deaths due to haemorrhage and convulsions: Analysis of 28 social autopsies in rural Bangladesh. BMC Health Serv Res. 2016; 16(1): 659. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBiswas A, Rahman F, Eriksson C, et al.: Social Autopsy of maternal, neonatal deaths and stillbirths in rural Bangladesh: qualitative exploration of its effect and community acceptance. BMJ Open. 2016; 6(8): e010490. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMahmud R: Social autopsy triggers community response for averting maternal and neonatal death in Bangladesh. WHO, 2016. Reference Source\n\nBiswas A, Halim A, Rahman F, et al.: The Economic Cost of Implementing Maternal and Neonatal Death Review in a District of Bangladesh. J Public Health Res. 2016; 5(3): 729. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nUNICEF: Maternal and perinatal death inquiry and response. Empowering communities to avert maternal deaths in India, New Delhi: UNICEF, 2009; [cited 2016 Dec 18]. Reference Source\n\nSingh S, Murthy GV, Thippaiah A, et al.: Community based maternal death review: lessons learned from ten districts in Andhra Pradesh, India. Matern Child Health J. 2015; 19(7): 1447–54. PubMed Abstract | Publisher Full Text\n\nArmstrong CE, Lange IL, Magoma M, et al.: Strengths and weaknesses in the implementation of maternal and perinatal death reviews in Tanzania: perceptions, processes and practice. Trop Med Int Heal. 2014; 19(9): 1087–95. PubMed Abstract | Publisher Full Text"
}
|
[
{
"id": "24174",
"date": "22 Aug 2017",
"name": "Mithila Faruque",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nI have gone through the review article. To me, the author has taken a timely initiative to review on maternal and perinatal death review system and also he is experienced enough to write review on this topic. He has described the total MPDR system in Bangladesh from its initiation to scale up, benefits and challenges. But he didn’t mention in the whole paper what was the purpose of this review paper. Also some more comparison with the maternal and perinatal death review system of other countries should be added. The author could include a table showing the findings of different articles (used in reference) in short regarding MPDR, so that the readers can get a clear view of the review articles at a glance. 
Also the conclusion should be modified by adding the advantage of this review paper, not only of the death review system, I mean how this review paper will bring benefits to other researchers or how can it be used further in future.\n\nAlso I have some corrections mentioned below (the correction lines are numbered as in the pdf file)\nIntroduction\nLast paragraph, line 1 - \"This reviewed\" needs to be corrected to \"This review\" Last paragraph, line 4 - \" This review\" needs to be corrected to \"The review\" Last paragraph line 9 - \"full text of articles\" needs to be corrected to \"full text articles\"\nInception of the system in Bangladesh\nFirst paragraph line 2 - \"though\" needs to be corrected to \"through\" Last paragraph line 3 - \"though\" needs to be corrected to \"through\" Last paragraph line 4 - \"neonatal death\" needs to be corrected to \"neonatal deaths\"\nScale up of the system\nSecond paragraph line 18 - \"ministry of health and family welfare\" needs to be corrected to \"Ministry of Health and Family Welfare\"\nA unique approach\nFirst paragraph line 4 - \"though\" needs to be corrected to \"through\"\n\nIs the topic of the review discussed comprehensively in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Yes\n\nIs the review written in accessible language? Yes\n\nAre the conclusions drawn appropriate in the context of the current research literature? Partly",
"responses": []
},
{
"id": "24173",
"date": "05 Sep 2017",
"name": "Olakunle Alonge",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThis is a well-written review article and highlights an intervention for a very important public health problem - maternal deaths in low and middle income countries. The article is a very fine contribution to the literature.\nIt could be further strengthened by providing specific recommendations on the way forward regarding maternal death notification, review and response in Bangladesh based on findings from other countries - perhaps, these recommendations could be presented in a table highlighting identified challenges and potential solutions. Similarly, a box or panel highlighting the essential components of the perinatal and maternal death review system in Bangladesh will also be useful for readers.\nThe abstract could be rewritten as a summary of the literature review (highlighting the reason for the literature review, and briefly summarizing each segment of the paper). The abstract and paper could use further editing for typos.\n\nIs the topic of the review discussed comprehensively in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Yes\n\nIs the review written in accessible language? Yes\n\nAre the conclusions drawn appropriate in the context of the current research literature? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-1120
|
https://f1000research.com/articles/5-2543/v1
|
20 Oct 16
|
{
"type": "Software Tool Article",
"title": "dot-app: a Graphviz-Cytoscape conversion plug-in",
"authors": [
"Braxton Fitts",
"Ziran Zhang",
"Massoud Maher",
"Barry Demchak"
],
"abstract": "dot-app is a Cytoscape 3 app that allows Cytoscape to import and export Graphviz (*.dot, *.gv) files, also known as DOT files due to the *.dot extension and their conformance to the DOT language syntax. The DOT format was originally created in the early 2000s to represent graph topologies, layouts and formatting. DOT-encoded files are produced and consumed by a number of open-source graph applications, including GraphViz, Gephi, neato, smyrna, and others. While DOT-based graph applications are popular, they emphasize general graph layout and styling over the topological and semantic analysis functions available in domain-focused applications such as Cytoscape. While domain-focused applications have easy access to large networks (10,000 to 100,000 nodes) and advanced analysis and formatting, they do not offer all of the styling options that DOT-based applications (particularly GraphViz) do. dot-app enables the interchange of networks between Cytoscape and DOT-based applications so that users can benefit from the features of both. dot-app was first deployed to the Cytoscape App Store in August 2015, has since registered more than 1,200 downloads, and has been highly rated by more than 20 users.",
"keywords": [
"Network",
"import",
"export",
"format conversion",
"attribute conversion",
"data visualization",
"Cytoscape",
"GraphViz",
"DOT"
],
"content": "Introduction\n\nCytoscape1 is a popular tool for visualizing and analyzing networks used in scientific and commercial analysis, most commonly in bioinformatics. It enables users to discover and load curated and uncurated networks representing molecular and genomic interactions, load ad-hoc or custom networks, and share networks that others have created. Once networks are loaded, users can manually annotate a network or automatically integrate annotations using a number of algorithms and databases. Users can perform a number of graph-oriented and semantic-aware analyses ranging from graph statistics to motif and cluster discovery to upstream and downstream structural and functional inferences. Users can also perform a number of complex graph filtering and layout operations to drive and focus the semantic understanding of network interactions and structure.\n\nEven beyond analysis and layout, users commonly derive and demonstrate network meaning by using visual cues to distinguish relationships and attributes. For this, Cytoscape provides a visual style system that enables users to paint nodes and edges using color, border thickness, size, fonts, arrows, and other devices.\n\nMuch of the power and functionality of Cytoscape is delivered as apps available in the Cytoscape App Store (http://apps.cytoscape.org). The store contains nearly 300 apps that provide a range of functionality from file import/export to analysis to visualization and publishing. Based on the success of the combination of Cytoscape core and downloadable apps, Cytoscape is downloaded approximately 14,000 times per month worldwide and is started approximately 3,000 times each working day. As of 2015, Cytoscape has been cited in 700 academic peer-reviewed papers per year.\n\nWhile Cytoscape is the dominant network analysis and visualization platform in bioinformatics, it is not the only platform. 
To support interoperability with a number of network-oriented workflows and applications, Cytoscape offers a number of natively supported file import/export modules, and leverages a number of them that are available as apps in the App Store. Some of these file formats2 do not include visual information, such as SIF (.sif) and NNF (.nnf), whereas others do (e.g., GraphML’s .graphml and XGMML’s .xgmml formats).\n\nGraphviz is a popular, well-established graph visualization application that produces a DOT (.dot, .gv) file containing graph structure, layout, and styling information, but for which there is no import/export module for Cytoscape. DOT files adhere to the DOT language syntax (http://www.graphviz.org/doc/info/lang.html). In comparison to XGMML and GraphML, Graphviz and its DOT language syntax define more visual attributes and support more visual features, such as edges composed of colored segments and single edges that are represented as multiple edges of different colors. Some of the DOT attributes can be used only by the Graphviz software because they specify parameters for the layout algorithms that the software uses.\n\nDOT files contain a number of visual attributes that map well to Cytoscape visualization functionality, and vice versa. However, incompatibilities do exist where some Cytoscape features cannot be represented in DOT, or where DOT represents some features that cannot be realized in Cytoscape. These incompatibilities are described in the \"Conversion details\" section.\n\nNote that GraphViz is one of several applications that produces or consumes DOT files, but it is by far the most commonly used. In this paper, we use “DOT file/network” and “GraphViz file/network” interchangeably.\n\nWe present dot-app as a Cytoscape app that implements both the import and export of graphs encoded in DOT files. 
We describe the operation of dot-app; how dot-app maps Cytoscape networks to DOT networks and vice versa; issues that arise because of incompatibilities between the Cytoscape and DOT network models; representative use cases; and prospects for future work.\n\n\nOperation\n\nDot-app requires Java 7 or above and Cytoscape v3.2 or above.\n\nA Graphviz network can be imported in three ways: from the welcome screen (via the From Network File… button), from the menu (“File->Import->Network->File…”), or from the toolbar (by clicking the “Import Network from File” button).\n\nUsers are presented with a file browser dialog titled “Network file to load” (as in Figure 1). The user is able to filter the dialog to display only Graphviz files by selecting Graphviz files (*.gv, *.dot) from the drop-down menu for “Files of Type.” Note that no difference exists between a Graphviz file with an extension of .dot and a Graphviz file with an extension of .gv. However, the .gv extension is preferred because versions of Microsoft Word also use the .dot extension (https://marc.info/?l=graphviz-devel&m=129418103126092). From this point, importing a Graphviz network is the same as importing a network from any of Cytoscape's accepted file formats. Those steps are detailed in the Cytoscape User Manual (http://manual.cytoscape.org/en/stable/).\n\nTo export a Cytoscape network as a GraphViz network, use the “Export -> Network and View” menu. (Using \"Export -> Network\" is also possible, but this will result in a Graphviz file that contains no visual information and a notification to use \"Export -> Network and View\" instead.)\n\nSelecting “GraphViz files (*.dot,*.gv)” in the Export dialog launches dot-app and prompts the user to choose from three options, as shown in Figure 2 below. The purposes of these options are explained in the following section.\n\nPick edge style. 
Cytoscape provides edge-routing capabilities that cannot be conserved during the export process, so dot-app provides three edge routing options: “Straight segments,” “Curved segments” and “Curved segments routed around nodes.” These options change the value of the “splines” attribute that appears in the exported Graphviz file. The Graphviz file for a network exported from Cytoscape is shown below, and the attribute modified by the “Pick Edge Style” option is underlined and in bold. Figure 3, Figure 4, and Figure 5 depict pictures of the network with each edge style chosen.\n\n\n\nPick node label location. Graphviz does not offer the flexible label placement that Cytoscape offers. As such, dot-app gives the options of “Center,” “Top,” “Bottom,” and “External” to allow the user to specify the label location applied to every node. In the output Graphviz file, the “Center,” “Top,” and “Bottom” options change the value of the “labelloc” that appears in the node default attribute list. The options respectively change the value to “c,” “t,” and “b”. In contrast, the “External” option causes the node labels to set the “xlabel” attribute instead of the “label” attribute in the output Graphviz file. The “xlabel” attribute causes the label to be placed in a location near its node that does not cause it to overlap with any other nodes or labels. Figure 6 shows a network exported with the “External” option.\n\nPick network label location. dot-app provides the options “No network label,” “Top,” and “Bottom” to allow the user to specify whether the network itself should be labeled and, if so, where the label is placed. The options “Top” and “Bottom” cause the “labelloc” attribute and “label” attribute for the graph to be written to the output Graphviz file. Furthermore, the “label” attribute will be set to the network’s name in Cytoscape. 
In contrast, the “No network label” option omits both the “labelloc” attribute and the “label” attribute.\n\n\nImplementation\n\nFor the import function of dot-app, we used Java-based Parser for Graphviz Documents (JPGD), a Graphviz document parser made by Alexander Merz(5). A Graphviz file typically contains information about the nodes, edges, and subgraphs (including annotations) for the layout. JPGD is a parser that transforms such a description into a data structure. We used it to create Java objects modeling the graph, its nodes, and its edges. Each model object contains related DOT attributes represented as key-value pairs. Figure 7 provides a high-level picture of the conversion of a DOT node declaration to the Node object that JPGD created. Detailed information about the JPGD objects can be found on JPGD’s website (http://www.alexander-merz.com/graphviz/doc.html).\n\nAfter JPGD creates the model objects, dot-app creates a corresponding Cytoscape model, including a CyNetwork object, CyNode objects, and CyEdge objects. These associations are stored in maps: one map for each type of graph component. When the network view is being built in Cytoscape, our Reader objects use these associations to create the Cytoscape View objects. Three Reader classes exist: NetworkReader, NodeReader, and EdgeReader. At the start of the network view creation, a VisualStyle object is created for the network. Each Reader object uses the VisualStyle to set the default attributes for its class of graph components. In addition, Reader iterates through the corresponding association map to create the View objects for the graph components and to set their VisualProperties. 
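The default-plus-override pattern the Reader classes implement can be sketched in a few lines. The following is an illustrative Python sketch, not dot-app's actual Java code; the attribute table, property names, and fallback values are hypothetical stand-ins for the real Cytoscape VisualProperties:

```python
# Illustrative sketch of the Reader pattern (hypothetical names, not the
# app's real Java/Cytoscape API): start each node view from the default
# visual style, then overwrite with any supported DOT attributes, falling
# back to the default when a DOT value has no Cytoscape equivalent.

DOT_TO_VISUAL = {"fillcolor": "NODE_FILL_COLOR", "shape": "NODE_SHAPE"}
DEFAULTS = {"NODE_FILL_COLOR": "#CCCCCC", "NODE_SHAPE": "ELLIPSE"}
SUPPORTED_SHAPES = {"ellipse": "ELLIPSE", "box": "RECTANGLE"}

def read_node(dot_attrs):
    """Map one node's DOT attribute dict to visual-property assignments."""
    view = dict(DEFAULTS)  # defaults come from the network's VisualStyle
    for key, value in dot_attrs.items():
        prop = DOT_TO_VISUAL.get(key)
        if prop is None:
            continue  # unsupported DOT attributes are ignored on import
        if prop == "NODE_SHAPE":
            # a value with no Cytoscape equivalent falls back to the default
            view[prop] = SUPPORTED_SHAPES.get(value, DEFAULTS[prop])
        else:
            view[prop] = value
    return view
```

A NodeReader iterating its association map would apply such a mapping once per node view; the EdgeReader and NetworkReader would follow the same shape with their own property tables.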
Figure 8 shows the high-level relationships among the JPGD Graph objects, the Cytoscape Graph objects, and the Cytoscape View objects.\n\nAfter the View objects are created, the DOT attributes and their assigned values are converted into their Cytoscape equivalents, and the resulting VisualProperty and VisualPropertyValue are assigned to the View. If the DOT attribute’s assigned value does not have an equivalent Cytoscape VisualPropertyValue, the VisualProperty is set to a default VisualPropertyValue.\n\nWe created three classes—NodePropertyMapper class, EdgePropertyMapper class and NetworkPropertyMapper class—to accomplish the export function of dot-app. Each Mapper class contains an ArrayList into which the Mapper classes insert the DOT attribute strings for easily convertible Cytoscape VisualProperties. In addition, each Mapper class has unique helper methods that create the DOT attribute strings for the DOT attributes that have values determined by multiple Cytoscape VisualProperties. One such attribute is the “style” DOT attribute, the value of which is determined by NODE_SHAPE, NODE_BORDER_LINE_TYPE, and NODE_VISIBLE. The NodePropertyMapper class handles the conversion of the CyNodes and their VisualProperties into their DOT string equivalents. The EdgePropertyMapper class handles the conversion of the CyEdges and their VisualProperties into their DOT string equivalents. Finally, the NetworkPropertyMapper class handles the conversion of the CyNodes' and CyEdges' default VisualProperties and the CyNetwork’s VisualProperties into their DOT string equivalents.\n\n\nConversion details\n\nSupported DOT attributes. The following DOT attributes contribute to the Cytoscape network during the import process. 
Most of the DOT attributes listed below correspond to a single Cytoscape visual property, but a few affect multiple visual properties (e.g., the “style” DOT attribute, as described below) at once due to the fact that the information is stored differently between the DOT model and the Cytoscape model. The “weight” DOT attribute is imported as an Edge table attribute (i.e., data) because no corresponding Cytoscape visual property exists. All other DOT attributes are ignored during the import process and have no effect on the visualization in Cytoscape.\n\nNode DOT attributes. Table 1 lists the DOT attributes that can apply to nodes and the specific Cytoscape visual properties to which they map. The “pos” attribute maps to both NODE_X_POSITION and NODE_Y_POSITION because the value of the “pos” attribute is a coordinate pair of the form “x, y”.\n\nEdge DOT attributes. Table 2 lists the DOT attributes that apply to edges and the specific Cytoscape visual properties to which they map.\n\nThe “style” DOT attribute. The “style” DOT attribute applies to both nodes and edges. The attribute takes a comma-separated list of keywords as its value. These keywords directly affect which Cytoscape visual properties are modified. Table 3 lists the keywords that dot-app supports, the graph components they affect and the Cytoscape visual properties to which the keywords map.\n\nThe “weight” DOT attribute. During the importing of a network using dot-app, a weight column is added to the Cytoscape network’s edge table. If the “weight” attribute is supplied for an edge, its value is assigned to the weight column entry for the edge.\n\nUnsupported DOT features. The following features of Graphviz are not supported in the import:\n\n1. Any HTML\n\n2. Subgraphs\n\n3. Clusters\n\n4. Edges that are rendered as colored parallel lines: These are made by assigning a color list without weights to the “color” attribute. Figure 9 depicts an example edge rendered in this manner.\n\n5. 
Edges that are rendered as colored segments in series. These are made by assigning a color list with weights to the “color” attribute. Figure 10 depicts an example edge rendered in this manner.\n\n6. Gradients applied to the network background\n\nUnsupported Cytoscape features. When exporting the network as a GraphViz file, some Cytoscape information is lost because it cannot be represented in DOT format. dot-app does not keep a log of the information that is not transferred to the Graphviz file. The following information is lost:\n\nVisual information\n\n1. Custom graphics\n\na. Images on nodes\n\nb. Charts on nodes\n\n2. Edge bends\n\n3. Nested network images contained in nodes\n\n4. Arrowhead colors (they will appear the same color as the edge itself)\n\n5. Certain line types\n\na. Dash dot\n\nb. Contiguous arrow\n\nc. Backward slash\n\nd. Separate arrow\n\ne. Sinewave\n\nf. Vertical slash\n\ng. Zigzag\n\nh. Forward slash\n\ni. Parallel lines\n\n6. Label positioning\n\na. Edge labels only go on the midpoint of the edges\n\nb. Node label positions are selected at export\n\n7. The V node shape\n\n8. Target arrow shape\n\na. Target arrow shape does not appear if set as a default; it only appears if set as a bypass\n\n9. All annotations\n\nNon-visual information\n\n1. Node group information (groups are treated as a single node with no additional data)\n\n2. All table data\n\n\nUse cases\n\nDetailed below are two cases for dot-app. The first use case describes how a DOT file can be imported into Cytoscape. The second use case describes how a Cytoscape network can be exported as a DOT file.\n\nOur first use case details how we would use dot-app to view a Graphviz-created network in Cytoscape. We used Graphviz’s neato utility to create a DOT file with layout information and a PNG of the resulting network. The DOT file is shown below, and Figure 11 is the created PNG.\n\n\n\nFigure 12 shows the result of the import into Cytoscape version 3.4. 
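The DOT files for this use case are not reproduced in this extraction. Purely as an illustration (the node names and coordinates below are hypothetical, not the authors' actual example), a layout-bearing file of the kind neato produces, in which each node's "pos" attribute carries the "x, y" pair that dot-app maps to NODE_X_POSITION and NODE_Y_POSITION, can be generated like this:

```python
# Hypothetical illustration: emit a minimal DOT file of the kind neato
# produces, where each node carries a pos="x,y" attribute that dot-app
# maps to NODE_X_POSITION / NODE_Y_POSITION on import. Node names and
# coordinates are made up for this sketch.
def make_dot(nodes, edges):
    lines = ["graph example {"]
    for name, (x, y) in nodes.items():
        lines.append(f'  {name} [pos="{x},{y}"];')
    for a, b in edges:
        lines.append(f"  {a} -- {b};")
    lines.append("}")
    return "\n".join(lines)

print(make_dot({"a": (0, 0), "b": (27, 36)}, [("a", "b")]))
```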
The differences that arise between the Graphviz network and the Cytoscape network stem from how the two programs handle implicit default values. If a DOT attribute is omitted from the DOT file when using a Graphviz utility, an implicit default value for that attribute is used. The list of DOT attributes and their default values can be found on the Graphviz website (http://www.graphviz.org/content/attrs.html). Moreover, if a Cytoscape-compatible DOT attribute is not specified in the DOT file during import, Cytoscape supplies default values for the Cytoscape VisualProperties to which the missing DOT attribute maps. These default values come from Cytoscape’s default visual style. This is why the nodes appear as ellipses in the PNG created using neato and as rounded rectangles in Cytoscape. These implicit defaults cause three apparent differences between the two results. The first difference is the node shapes, the second difference is the border colors for the nodes, and the third difference is the font used for the labels.\n\nIn this second use case, we export the network in Figure 12 from Cytoscape. The output file is shown below. Figure 13 shows the PNG created by using Graphviz’s neato utility on the output file. With this DOT file, we are able to use Graphviz and other programs that accept DOT files, such as NetworkX and PyGraphviz.\n\n\n\nAgain, slight differences exist in the network’s appearance: the node shape and the labels’ font. The variation in the node shape is due to how Graphviz and Cytoscape render rounded rectangles differently. The difference in the labels’ font is due to the font chosen in Cytoscape. In the output file shown above, we can see that the fontname attribute for the node default list is set to “SansSerif.plain”. This font is not an actual font family; rather, it is one of Java’s logical fonts (https://docs.oracle.com/javase/tutorial/2d/text/fonts.html#logical-fonts). 
It is a font name that the Java Runtime Environment uses and that maps to a physical font. When neato encounters the font name, it attempts to find an actual font named “SansSerif.plain”; if it cannot find one, it uses a default font.\n\n\nTesting\n\nWe verified the dot-app import and export functions separately.\n\nFor import, we downloaded DOT files from Graphviz’s gallery page (http://graphviz.org/Gallery.php) and wrote our own DOT files. We then ran Graphviz’s neato utility on these files to generate DOT files that contained layout information and PNG files to use as references. We then imported the DOT files to Cytoscape and visually compared the Graphviz-created PNG files to the Cytoscape display to validate the import process of dot-app.\n\nFor export, we loaded each of the Cytoscape test session files (https://github.com/cytoscape/cytoscape-tests/blob/master/docs/Session-Files/Session%20Files.md) into Cytoscape and exported them to DOT files, which were then loaded into GraphViz. We visually compared the GraphViz display to the Cytoscape display to determine the correctness and completeness of the Cytoscape-to-DOT translation.\n\n\nConclusion\n\nThis article describes the dot-app Cytoscape app, which enables a user to import a DOT-formatted file into Cytoscape and to export a Cytoscape network to a DOT-formatted file. We demonstrated the operation of dot-app and explained its implementation and the limitations of DOT-to-Cytoscape and Cytoscape-to-DOT translation. Finally, we explained typical use cases and how dot-app delivers value in each situation.\n\nWhereas the dot-app conversions are diligent and true to Cytoscape 3.3 (with limitations), we recognize that future versions of Cytoscape may introduce new visual effects (e.g., new arrow heads) that present opportunities to render DOT files more faithfully, as well as the risk of losing formatting information if the DOT format cannot represent them. 
These issues will be dealt with in future versions of dot-app.\n\n\nSoftware availability\n\nSoftware available from:\n\nhttp://apps.cytoscape.org/apps/dotapp\n\nLatest source code:\n\nhttps://github.com/idekerlab/dot-app\n\nArchived source code as at the time of publication:\n\nhttp://doi.org/10.5281/zenodo.1596373\n\nLicense:\n\nGNU General Public License",
"appendix": "Author contributions\n\n\n\nBF, ZZ, and MM were involved in writing the article. BF, ZZ, and BD were involved in revising the draft manuscript and have agreed on the final content. BF, ZZ, and MM were involved in designing and implementing dot-app. BD is the principal investigator for this dot-app project.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis material is based upon work supported by the National Institutes of Health under Grant P41 GM103504. The grant is assigned to Dr. Trey Ideker at the University of California, San Diego (UCSD). All work was performed at UCSD.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nKeiichiro Ono and Christian Zmasek helped with identifying the Cytoscape VisualProperties during the dot-app development.\n\n\nReferences\n\nShannon P, Markiel A, Ozier O, et al.: Cytoscape: a software environment for integrated models of biomolecular interaction networks. Genome Res. 2003; 13(11): 2498–504. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRoughan M, Tuke J: Unravelling graph-exchange file formats. arXiv preprint arXiv:1503.02781. 2015. Reference Source\n\nFitts B, Zhang Z, Maher M: dot-app. ZENODO. 2016. Data Source"
}
|
[
{
"id": "17124",
"date": "27 Oct 2016",
"name": "Eric Bonnet",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThis article describes a new Cytoscape plugin called “dot-app” designed for the import and export of Graphviz files conforming to the DOT format for representing graph topologies, layouts and formatting.\nGraphviz is a popular set of open source software tools initiated at the AT&T Labs Research that can generate or process DOT files. Many other software tools are using Graphviz to represent all kinds of graphs.\nThe DOT language support visual features and attributes that are not currently included by Cytoscape network formats. There is currently no Cytoscape plugin for the import and export of networks to the DOT language. Therefore the dot-app plugin may be useful to convert and import biological networks, and might be very interesting for the large community of Cytoscape users.\nThe article is globally clear, well-organized and logically structured. I particularly appreciated the efforts of the authors to explain the details of the conversion process and exactly what kind of visual properties are imported and exported, including the properties that are not supported.\nI would suggest minor improvements to the article that could make it more precise and valuable for the readers.\nAdd a table for at least some of the main visual features and attributes supported by the DOT format and indicate which ones are supported by XGMML/GraphML. 
The sentence mentioning the differences in the introduction section is too vague.\n\nCompile also in a small table some examples of applications (related to biology or not) that produces or consumes DOT files. Here also the paragraph discussing this in the introduction section is quite vague.\n\nMinor comments:\nThe reference to the JPGD parser do not appear in the bibliography.\n\nA few sentences on how to install the app in Cytoscape might be helpful.",
"responses": [
{
"c_id": "2258",
"date": "31 Oct 2016",
"name": "Ziran Zhang",
"role": "Author Response",
"response": "Thank you Dr. Bonnet for your valuable comments. We will look into the possible improvements and minor issues you brought up, and fix them in our next version. Again, we really appreciate your detailed observations!"
}
]
},
{
"id": "17148",
"date": "31 Oct 2016",
"name": "Matthias König",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThe article \"dot-app: a Graphviz-Cytoscape conversion plug-in\" is well written and describes background, implementation and use of the application in a comprehensible way.\n## major comments \"The differences that arise between the Graphviz network and the Cytoscape network stem from how the two programs handle implicit default values. If a DOT attribute is omitted from the DOT file when using a Graphviz utility, an implicit default value for that attribute is used. The list of DOT attributes and their default values can be found on the Graphviz website (http://www.graphviz.org/content/attrs.html).\"\n-> The implicit dot default values should be used as default values in dot-app for the Cytoscape styles. Users of the dot language rely on the default values and expect rendering tools to use them. -> Especially, in combination with dot-app writing the Cytoscape default values in the exported dot, this results in unnecessary changes of the graph rendering in the round trip (dot -> cytoscape -> dot). This round trip should introduce minimal changes to the rendering. Dot default values should be used and the respective text passage and figure be updated.\n## minor comments >>> In the implementation all visual styles are hard coded in the visual mapping via bypasses. It would be better to create node attributes for the dot attributes and subsequently use them in visual styles. 
In this way users have access to the dot attributes, can use them in other visual mappings, create derived node attributes from them, or use them in analyses.\n>>> It is unclear how dot height and width are transformed to Cytoscape node height and width. There is some scaling factor, but how this is actually handled should be described in the manuscript. What information is lost in the roundtrip due to scaling? For instance height = \"0.486111\",width = \"1.041667\" result in height > 40 in Cytoscape.\n>>> The dot node font color is not rendered correctly. See: https://github.com/idekerlab/dot-app/issues/7\n>>> Saving as *.gz always adds the .dot extension so that files are called *.gz.dot. If the user selects \"name.gz\" the file should be saved as 'name.gz', not as 'name.gz.dot'.\n>>> p5 \"For the import function of dot-app, we used Java-based Parser for Graphviz Documents (JPGD), a Graphviz document parser made by Alexander Merz(5)\" -> reference missing/wrong. There should be a reference for (5).\n>>> p5 \"After JPGD creates the model objects, dot-app creates a corresponding \" -> something wrong with first half-sentence, probably better: \"After JPGD has created the model objects, ...\n>>> p6 \"All other DOT attributes are ignored during the import process and have no effect on the visualization in Cytoscape.\" If the ignored DOT attributes can be listed, state them in the manuscript as a list. This is very helpful to see what information should not be used in dot files if one wants full Cytoscape support of the file. If the unsupported attributes are only the features listed in \"Unsupported DOT features\" then this should be clearly stated.\n>>> p2 \"The store contains nearly 300 apps that ...\" At the time of review this is already >= 300. 
Add a date to the statement: -> The store contains nearly 300 apps (October 2016) that ...\n>>> Add reference for graphviz software/url see for instance http://www.graphviz.org/content/citing-graphviz-paper The preferred citation is @ARTICLE{Gansner00anopen,\n\nauthor = {Emden R. Gansner and Stephen C. North},\n\ntitle = {An open graph visualization system and its applications to software engineering},\n\njournal = {SOFTWARE - PRACTICE AND EXPERIENCE},\n\nyear = {2000},\n\nvolume = {30},\n\nnumber = {11},\n\npages = {1203--1233} } and the URL is www.graphviz.org\n>>> p8 use cases Provide the *.gv files and Cytoscape files of the use cases in the supplement. This will provide the necessary materials to follow the provided examples. I tried to save the files from the pdf which did not work (due to line breaks in pdf); copying the examples from the HTML version of the manuscript worked. Adding the two example files to the supplementary information will improve this.\n>>> p9 testing Provide the url/repository for the test files and test results. There should be a page which shows all the comparison images with the respective dot files. In the article it is mentioned that this exists, but no resource is given for the test files and test results.\n>>> p9 conclusion \"This article describes the dot-app Cytoscape app, which enables a user to import a DOT-formatted **here** into Cytoscape\" -> word missing\n>>> p9 conclusion \"we recognize that future versions of Cytoscape may introduce new visual effects\" -> formulation. Better: 'visual shapes and styles\", visual effects are something else.\n>>> p9 conclusion \"we recognize that future versions of Cytoscape may introduce new visual effects (e.g., new arrow heads) that present opportunities for rendering DOT files more truly or for the loss of formatting information if the DOT format cannot represent them.\" -> second half of sentence does not make sense (... or for the loss of formatting). 
Please reformulate to clarify what is meant.\n>>> The github release corresponding to the zenodo code is missing. The latest github release is 0.9.1 on 11.September, the zenodo was created in February. This makes it difficult to try to build the app from the mentioned source code, by checking out a corresponding tag. Which version is packed in zenodo? 0.9.2 ? Also the current app version in the app store is 0.9.3, but there is no 0.9.3 release on github. Please create a github release which corresponds to the zenodo release and the mentioned code state of the publication.\n>>> I was unable to build the app from the latest source code. The app is working via the app store, but in addition it should be possible to build the latest source code. https://github.com/idekerlab/dot-app/issues/8\ngit clone https://github.com/idekerlab/dot-app.git mvn clean install\nresults in\n[INFO] Compiling 16 source files to /home/mkoenig/git/dot-app/target/classes [INFO] ------------------------------------------------------------------------ [INFO] BUILD FAILURE [INFO] ------------------------------------------------------------------------ [INFO] Total time: 7.250 s [INFO] Finished at: 2016-10-31T12:48:57+01:00 [INFO] Final Memory: 21M/238M [INFO] ------------------------------------------------------------------------ [ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:2.0.2:compile (default-compile) on project dot-app: Compilation failure: Compilation failure: [ERROR] /home/mkoenig/git/dot-app/src/main/java/org/cytoscape/intern/read/reader/NodeReader.java:[46,36] error: package com.alexmerz.graphviz.objects does not exist [ERROR] ...\nThis should be fixed; I would recommend setting up Continuous Integration with Travis to make sure the latest version always builds. see https://github.com/idekerlab/dot-app/issues/8",
"responses": [
{
"c_id": "2264",
"date": "01 Nov 2016",
"name": "Ziran Zhang",
"role": "Author Response",
"response": "Thank you so much Dr. Matthias König for your insightful comments. We really appreciate your efforts in carefully testing the dot-app features and thoroughly examining our article. We thank you for bringing the problems you found to our attention. While waiting for another referee's upcoming peer review report, we will take a close look at the issues and address them in our next version."
}
]
},
{
"id": "17125",
"date": "08 Nov 2016",
"name": "David J. Lynn",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ndot-app is a Cytoscape App that enables the inter-conversion of Graphviz and Cytoscape formatted files. As far as we are aware no similar application exists and this is therefore a potentially useful development.\nWe found no problems with the usage of the app; it integrates well into Cytoscape, and can convert back and forth between formats without issue aside from the loss of information which is documented in the article. The code is readable and very well commented.\nWe did have an issue when compiling from source, however. The DOT parsing jar from Alex Merz isn't being included properly when Maven builds the project.\nThe major concern we had was that the concept for the application is really very straight-forward and, following on from this, its functionality is therefore pretty limited. We wondered initially whether there was really a need for such an application; however, dot-app has been downloaded >1200 times so there is obviously a market for the tool. A remaining question, however, is whether the app justifies a 10-page paper. The paper itself reads a bit more like documentation and we would have preferred to have seen a more concise paper introducing the concepts together with more detailed documentation.\nWe think that the paper could also benefit from a better discussion in the introduction of the motivation behind the development of dot-app. Can you give examples where one would want to create a network in one format and then convert it to another? 
How hard would it be to manually re-create the visualisation in Graphviz/Cytoscape?\nThe application also appears very reliant on the parser developed by Alex Merz (this is not referenced properly). It would be good to clarify how much of the functionality is provided by this parser and how much is extended by dot-app.\nA key limitation of the app is that many of the features of either format are lost when converting between them. This is because there isn’t an equivalent feature in the other format. It is good that the authors are clear about which ones cannot be converted but this problem still means that manual intervention would likely be needed to customise the desired style.\nOne thing that could be improved is how the implicit default values are handled by dot-app. Couldn’t you set reasonable values for these if they are not specified in a particular visualisation to enable more faithful conversion of the visualisation style?\nOne final concern is that it is very likely that the visual styles in both Graphviz and Cytoscape will continue to evolve over time. How will dot-app keep pace with these changes? Have the authors considered trying to get the Graphviz and Cytoscape communities to adopt at least the equivalent visual style features in both applications? Or at least to indicate in the Cytoscape and Graphviz documentation what the equivalent feature is (when there is one).\n\nMinor comment: Under Operation it would be good to start by explaining how to install the App (for novice Cytoscape users).",
"responses": [
{
"c_id": "2274",
"date": "09 Nov 2016",
"name": "Ziran Zhang",
"role": "Author Response",
"response": "Thank you Dr. David Lynn and Dr. John Salamon for your comments. We completely agree with your suggestions, and we'll add those (e.g. network conversion examples, dot-app's future adaptation, detailed installation instructions, etc.) to our article. Please expect to see the changes you recommended in the second version of our article."
}
]
},
{
"id": "17222",
"date": "20 Dec 2016",
"name": "Giovanni Scardoni",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThe article introduces an app to import/export Graphviz (.gv-.dot) files to/from Cytoscape. Since the Graphviz format is in wide usage, the app is very useful to the community.\nThe article is well-written and the user-guide is complete and easy to read.\n\nAs a minor comment, I would ask for more details about which characteristics can be lost when passing from one format to another. I tried some networks, chosen randomly from the internet, and sometimes I lost features that I didn't expect to lose.\nA table explaining which features are preserved and which are not would be very useful in the paper. If there are problems of space, maybe the implementation details can be described in supplementary materials and not in the main paper.\nThe table should also contain short comments about the missing features that can be added in the future, and the problems that can be encountered implementing those characteristics that are not of immediate conversion.\nI think that with these updates the paper is suitable for indexing.",
"responses": [
{
"c_id": "2381",
"date": "22 Dec 2016",
"name": "Ziran Zhang",
"role": "Author Response",
"response": "Thank you Dr. Scardoni for your comment. We will take your valuable suggestions into consideration while we are refining the article. We appreciate your effort in examining the dot-app and the article for us in detail."
}
]
}
] | 1
|
https://f1000research.com/articles/5-2543
|
https://f1000research.com/articles/6-453/v1
|
10 Apr 17
|
{
"type": "Research Article",
"title": "Protein sites with more coevolutionary connections tend to evolve slower, while more variable protein families acquire higher coevolutionary connections",
"authors": [
"Sapan Mandloi",
"Saikat Chakrabarti",
"Sapan Mandloi"
],
"abstract": "Background: Correlated mutation or coevolution of positions in a protein is tightly linked with the protein’s respective evolutionary rate. It is essential to investigate the intricate relationship between the extent of coevolution and the evolutionary variability exerted at individual protein sites, as well as the whole protein. Methods: In this study, we have used a reliable set of coevolutionary connections (sites within 10Å spatial distance) and investigated their correlation with the evolutionary diversity within the respective protein sites. Results: Based on our observations, we propose an interesting hypothesis that higher numbers of coevolutionary connections are associated with less evolutionarily variable protein sites, while higher numbers of coevolutionary connections can be observed for a protein family that has higher evolutionary variability. Our findings also indicate that highly coevolved sites located in a solvent accessible state tend to be less evolutionarily variable. This relationship reverses at the whole-protein level, where cytoplasmic and extracellular proteins show moderately higher anti-correlation between the number of coevolutionary connections and the average evolutionary conservation of the whole protein. Conclusions: The observations and hypothesis presented in this study provide intriguing insights towards understanding the critical relationship between coevolutionary and evolutionary changes observed within proteins. Our observations encourage further investigation to find out the reasons behind subtle variations in the relationship between coevolutionary connectivity and evolutionary diversity for proteins located at various cellular localizations and/or involved in different molecular-biological functions.",
"keywords": [
"Coevolution",
"evolutionary diversity",
"correlated mutation",
"solvent accessibility",
"cellular localization"
],
"content": "Introduction\n\nAccording to the neutral theory of evolution, the functionality of a protein with a disadvantageous mutation can be restored by another mutation that compensates for the first to sustain the function1. Such compensating mutations, together with other factors arising due to common functional, structural and folding constraints, lead to correlations between different positions in a protein or protein family. Coordinated changes of amino acid residues are typically detected by examining covariation between two aligned positions. A large number of computational methods have been proposed2–11 to quantify the covariation between two protein sites in a given multiple sequence alignment (MSA). Most methods are based on variation of mutual information12–17, maximum likelihood approximations18, Bayesian probabilities19, and phylogenetic approaches20,21. Newer methods successfully implement direct coupling analysis22, Protein Sparse Inverse COVariance: PSICOV23 and Matrix Match Maker24 algorithms to identify coevolving sites. These previous studies demonstrate that sequence covariation is powerful in detecting protein-protein interactions, ligand receptor binding, and the folding structure of a protein. In addition to direct physical interactions, distantly located coevolving amino acid residues are reported to be energetically coupled25 or subject to similar functional constraints26. Compensated amino acid substitutions have been described in previous works in terms of their locations in structure and their physico-chemical properties3,20,21. Coevolutionary signals coming from residue charge compensating mutations have been found to be stronger than those from size compensating mutations3,21,27. 
Despite the fact that coevolution has been found to be rather weak in many cases, correlated mutations have had comparative success in predicting protein secondary and tertiary structures, and in some cases protein interaction partners28–30.\n\nCoevolution is difficult to detect for various reasons, such as the variable nature of compensatory mutations, the strong dependence of covariations on evolutionary distances, and the number of sequences in the alignment. Hence, it is crucial to understand how coevolutionary processes are related to evolutionary diversity within protein families. Despite significant efforts in this field, the relationship between evolutionary conservation and the extent of coevolution is not well understood. For example, it is not clear whether families with higher evolutionary diversity would exhibit more coevolutionary connections or not. Similarly, at the residue level, this relationship needs to be thoroughly examined. An earlier study by Fodor and Aldrich31 observed a lack of agreement between correlated mutation methods, and the resultant differences might have been caused by differing sensitivities to background conservation. In a previous study, it was also indicated that residues, which form many coevolutionary connections with other residues, are more evolutionarily conserved and are involved in specific functionally important interactions and conformational changes32.\n\nA complete understanding of protein evolution and coevolution will require a large-scale analysis of the important factors that determine the selective forces that cause different residues of a protein to coevolve. Here, we present a study that undertook a detailed analysis to investigate the relationship between evolutionary conservation and the extent of coevolution within a protein. This relationship could be dependent on the reliability of the predicted coevolved sites as there are no direct ways to validate the coevolutionary connectivity. 
Therefore, it is advisable to use multiple coevolution-detecting algorithms and filter out a reliable set of coevolved protein sites. Similarly, spatial proximity between the coevolved sites might provide additional confidence in the predicted coevolved sites33. We examined the evolutionary conservation using the popular AL2CO34 program within 19,736, 35,514, 50,217, and 56,879 coevolved site pairs (located within 10Å spatial distance), which were identified by approaches, such as mutual information (MIp program35), McLachlan amino acid similarity matrix based techniques (McBASC program27), Direct Coupling Analysis (DCA program22), and Protein Sparse Inverse COVariance method (PSICOV program23), from 753 curated protein family alignments, available from the Conserved Domain Database (CDD36). Our study suggests the hypothesis that a higher number of coevolutionary connections is likely to be observed for a particular site that is less evolutionarily variable, while a higher number of coevolutionary connections can be observed for a protein family that has higher evolutionary variability. We found that the sites with a higher number of coevolutionary connections have a much higher tendency to be conserved compared to the sites with a smaller number of connections. These sites might act as ‘hub points’, and therefore changes in these sites would affect many other connected sites. We further investigated the impact of important structural properties, like secondary structures, solvent accessibilities and hydrogen bonding of the coevolved sites, to understand the reasons behind the observed correlation between coevolution and evolutionary diversity. Our findings indicate that coevolved sites are generally preferred at a solvent accessible/hydrogen bonded/helical state compared to a solvent buried/non-hydrogen bonded/β strand state. 
However, discernible differences in evolutionary conservation between the higher and lesser coevolved sites were observed only for sites located at solvent accessible states compared to buried states. We also examined whether the observed negative (anti) correlation between coevolution and evolutionary conservation for a protein family is influenced by its cellular localization or the type of functions with which it is involved. Coevolution analysis for the whole protein suggests that the cytoplasmic and extracellular proteins possess moderately stronger negative (anti) correlation between the number of coevolutionary connections and their average evolutionary conservation.\n\n\nMethods\n\nWe collected 753 protein domain alignments from the Conserved Domain Database (CDD; https://www.ncbi.nlm.nih.gov/Structure/cdd/cdd.shtml36) version 2.13, for which at least one 3D structure entry and more than 50 protein sequences are available. An alignment length threshold (>=100) was also applied to exclude smaller proteins. A complete list of protein families is provided in Table S1.\n\nMutual information35 (Suppl. Mat. Ref.) is a widely used measure to estimate the covariation between sites in protein families. In this analysis, we used a mutual information based method to estimate coevolutionary connections between two sites of a protein family. This method (MIp) is based on information theory and accurately estimates the expected levels of background coming from random and phylogenetic signals. Removal of the phylogenetic and random background allows the identification of substantially more coevolving positions in protein families. 
Altogether we identified 19,736 (out of a total of 36,616) coevolved site pairs located within 10Å spatial distance from the 753 family alignments, with a MIp Z-score cutoff of 4.0 or higher.\n\nMcBASC27 (http://fodorwebsite.appspot.com//covariance1_1.zip) was used to calculate the simple inter-position coevolution for the 753 protein family alignments. McBASC provides high scores for non-conserved and co-varying positions from a multiple sequence alignment. The calculation of McBASC was performed as described in Fodor and Aldrich 2004, using the software provided by the authors (http://www.afodor.net/). McBASC does not use any structural or phylogenetic information in the calculation of coevolution. We identified 35,514 (out of a total of 95,866) coevolved site pairs located within 10Å spatial distance from the 753 family alignments with a McBASC Z-score cutoff of 4.0 or higher.\n\nDCA22 (Direct Coupling Analysis) aims at predicting coevolving residues based on the maximum entropy principle. DCA is also used in predicting inter- and intra-domain contacts. This method separates direct and indirect correlations between residues. DCA analysis was implemented with MATLAB code kindly provided to us by Domenico L. Gatti (Supplementary File 1). We identified 50,217 (out of a total of 161,332) coevolved site pairs located within 10Å spatial distance from the 753 family alignments with a DCA Z-score cutoff of 4.0 or higher.\n\nThe PSICOV23 (Protein Sparse Inverse COVariance) method was developed with the specific goal of separating direct from indirect coupling between residues. PSICOV takes into account the global correlations between pairs. Modified MATLAB code (without the default minimum requirement of 500 sequences), which was kindly provided to us by Domenico L. Gatti (Supplementary File 1), was used in this study. 
We identified 56,879 (out of a total of 162,336) coevolved site pairs located within 10Å spatial distance from the 753 family alignments with a PSICOV Z-score cutoff of 4.0 or higher.\n\nSite pairs other than those involved in coevolutionary connections were considered non-coevolutionary sites. We randomly selected non-coevolved sites from each protein family (Supplementary File 2). For each randomly selected non-coevolved site (i), neighboring non-coevolved sites were selected based on the structural distance (<10Å) and sequence distance filters (>i±6 positions). Similar numbers of non-coevolved site pairs were selected randomly 10 times. We performed a similar correlation analysis between the numbers of spatial neighbors and evolutionary conservation of non-coevolved sites.\n\nAnalysis of positional conservation in a sequence alignment can aid in the detection of functionally and/or structurally important residues. The AL2CO34 program performs conservation analysis in a comprehensive and systematic way. It was used to calculate the conservation index for each position of a given multiple sequence alignment. Twelve different strategies of conservation index calculation have been implemented in the AL2CO program (http://prodata.swmed.edu/al2co/al2co.php). For this analysis, we used the independent count (sequence weighting) scheme and the matrix-based sum-of-pairs37 (conservation calculation) scoring method to calculate the evolutionary conservation of each coevolved site or column in the alignment. A higher AL2CO score indicates a higher conservation index.\n\nRepresentative three-dimensional (3D) structures were collected for each family from the Protein Data Bank (PDB; http://www.rcsb.org/pdb/home/home.do)38. Spatial distances were calculated using atom coordinates supplied in the individual PDB file. 
Structural properties, such as solvent accessibility, secondary structures, and hydrogen bonds, were computed from the protein structure using the JOY package39 (http://mizuguchilab.org/joy/). Solvent accessibility was measured using the PSA program from the JOY package, and residues that had an accessible surface area <7% were treated as solvent buried or inaccessible. Similarly, secondary structures (helix, strand and coil) and hydrogen bonding patterns were estimated using the SSTRUC and HBOND programs from the JOY package40, respectively.\n\nThe Gene Ontology (http://www.geneontology.org/)41 covers three classes/domains: cellular localization, molecular function and biological process. Functional information of each CDD family was collected from the Gene Ontology database using the UNIPROT40 ID of the representative protein structure as a query. We mapped 517, 720, and 634 protein domain families into cellular localization, molecular function and biological process, respectively.\n\nMapping of evolutionary conservation and coevolutionary information onto the 3D structure was done using in-house Perl scripts (Supplementary File 3). The B-Factor column in the PDB file was substituted with the evolutionary conservation score and colored accordingly, ranging from blue (low conservation) to red (high conservation). Lines connecting C-alpha atoms of residues represent coevolutionary connections between those residues.\n\n\nResults and discussion\n\nCoevolutionary connections between protein sites were identified from multiple sequence alignments of 753 protein domain families by algorithms employing differing approaches, such as mutual information (MIp program35), McLachlan amino acid similarity matrix based techniques (McBASC program27), Direct Coupling Analysis (DCA program22), and Protein Sparse Inverse COVariance method (PSICOV program23). 
Minimal overlaps were observed for coevolved sites predicted by these programs (Figure S1), supporting previous interpretations that differences in the preferred level of background conservation may exist within each program to identify coevolved residue pairs35.\n\nThe pattern of evolutionary diversity within the coevolved sites was examined using evolutionary conservation scoring approaches (e.g., AL2CO). Figure 1 plots the average conservation scores of sites having higher or lower coevolutionary connections (A: MIp; B: McBASC; C: DCA; D: PSICOV programs, respectively). Figure 1 suggests that highly coevolved sites possess higher average AL2CO scores, depicting higher evolutionary conservation. Coevolutionary connections, even though selected based on a strong statistically significant threshold (Z-score >4), might contain background noise resulting in an unreliable relationship between coevolution and evolutionary conservation. To rule this out, we performed a similar analysis using random non-coevolved sites and found only a weaker correlation between non-coevolved sites having higher or lower structural distance based neighbors (<10Å) and their evolutionary conservation (Figure S2).\n\nX-axes show the coevolutionary connections (represented in bins of 1 and 5) per site whereas Y-axes represent the average evolutionary conservation index (CI) estimated by the AL2CO program. Each vertical panel (panel A–D) represents results obtained from coevolution predicted by various programs (Panel A: MIp; B: McBASC; C: DCA; D: PSICOV). Panels provide correlation data for the coevolved sites that are located within or equal to 10Å. 
The coefficient of determination (R2) indicates how well the data points fit the linear regression model between coevolutionary connection and evolutionary conservation.\n\nThe observation of a strong positive correlation between coevolutionary connections and evolutionary conservation within the coevolved sites selected based on structural proximity suggests that highly coevolved protein sites tend to evolve slower.\n\nInfluence of structural environment. The structural environment of a protein site is a critical factor that can influence its evolutionary diversity pattern42,43. To understand the reasons behind the observed phenomenon where higher coevolutionary connections are found for sites that are less diversified, we investigated the roles of structural environments, such as solvent accessibility state, and secondary structural content of the coevolved sites.\n\nWe have observed more coevolutionary connections for sites that are solvent accessible compared to that observed within buried sites (Figure 2A). Interestingly, solvent accessible sites that possess lower numbers (<3) of coevolutionary connections (LCC) are consistently less conserved compared to the sites that have a relatively higher number (>3) of coevolutionary connections (HCC) (Figure 2A). Although a similar trend is also observed within the solvent buried sites, the differences in conservation indices between the HCC and LCC are more prominent in the solvent accessible state than in the buried state (Figure 2B).\n\n(A) Number of coevolved sites involved in forming coevolutionary pairs, where both sites are present in solvent accessible (ACC_ACC; dark grey) and buried (BUR_BUR; light grey) environments. (B) Difference of conservation indices (CI) between higher coevolutionary connection (HCC) and lower coevolutionary connection (LCC) sites involved in ACC_ACC and BUR_BUR environments. 
LCC: less than or equal to 3 coevolutionary connections; HCC: more than 3 coevolutionary connections.\n\nA higher abundance of coevolutionary connections is also observed for sites involved in hydrogen bonding compared to those that are not. However, no discernible differences in evolutionary conservation were observed between the more and less coevolved sites involved in hydrogen bonding compared to those without hydrogen bonding (Figure S3).\n\nA slightly higher abundance of coevolutionary connections was observed for sites located in helices compared to those forming strands. No discernible differences in evolutionary conservation were observed between the more and less coevolved sites in helical environments compared to those in strands (Figure S4).\n\nInfluence of functional involvement. We also investigated the relationship between coevolutionary connections and evolutionary conservation for protein sites with respect to their functional involvement. However, functional sites (e.g., active sites, protein or ligand binding sites) do not show a significantly higher positive correlation between coevolutionary connections and evolutionary conservation, and no discernible differences were observed among the correlation coefficients obtained for the various types of functional sites (data not visualised).\n\nIt is also important to know how the evolutionary conservation profile of the whole protein or family influences the coevolutionary connections within its sites. Figure 3 and Figure S5 plot the average conservation scores of protein families (considering all gapless columns of the family alignment) with respect to the total number of coevolved sites observed within those families. Our results suggest a strong negative correlation between the number of coevolved sites found within a protein family and its average conservation score.
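The family-level relationship described here is, in effect, a linear fit between each family's number of coevolved sites and its average AL2CO conservation index. A minimal sketch of such a fit in Python, using purely illustrative toy numbers (not data from this study; `linear_fit` is a hypothetical helper, not the authors' code):

```python
# Toy sketch of the family-level trend: number of coevolved sites per family
# versus average AL2CO conservation. All data below are illustrative only.

def linear_fit(xs, ys):
    """Ordinary least-squares slope, intercept and R^2 for paired data."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return slope, intercept, 1 - ss_res / ss_tot

# Hypothetical families: (number of coevolved sites, average conservation index)
families = [(5, 1.6), (40, 1.2), (80, 0.9), (120, 0.7), (200, 0.4)]
xs, ys = zip(*families)
slope, intercept, r2 = linear_fit(xs, ys)
print(slope < 0)  # families with more coevolved sites are, on average, less conserved
```

A negative slope with a high R2, as in this toy example, is the shape of the anti-correlation reported at the family level; the slope and coefficient of determination computed this way play the role of the m and R2 values given in Table 1.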
This finding indicates that, in general, more conserved proteins/families tend to possess fewer coevolutionary connections, whereas proteins/families under less stringent evolutionary pressure might accumulate more intra-protein coevolutionary connections.\n\nX-axes show the coevolutionary connections (represented in bins of 40) of protein families whereas Y-axes represent the average evolutionary conservation score of the same families estimated by the AL2CO program. Panels show the data extracted from all 753 CDD families. Each panel (A–D) represents results obtained from coevolution predicted by the various programs (MIp, McBASC, DCA and PSICOV, respectively).\n\nWe further investigated the influence of cellular localization and biological/molecular function on the proteins that displayed a correlation between coevolutionary connections and evolutionary conservation. We categorized the representative proteins from 517, 720 and 634 families into cellular localizations, molecular functions and biological processes, respectively, using their Gene Ontology annotations. For example, 54%, 15% and 12% of the 517 families having at least one pair of coevolved sites reside within the cytoplasm, nucleus and membrane, respectively (Figure S6). Similarly, 55%, 17% and 10% of the coevolved protein families are involved in catalysis (enzyme), nucleic acid binding and ion binding functions, respectively. Coevolved proteins were also found to be abundant in various metabolic functions (Figure S6). Table 1 provides the R2 and slope (m) values for the relationship between coevolutionary connections and evolutionary conservation for proteins categorized by cellular localization. Cytoplasmic and extracellular proteins show a slightly stronger anti-correlation between the number of their coevolutionary connections and their evolutionary conservation.
Similarly, proteins involved in catalysis and nucleic acid binding molecular functions show a moderately stronger negative correlation, whereas proteins involved in miscellaneous metabolic processes, mostly generic carbohydrate and glutamine metabolism and nitrogen fixation, exhibit a stronger negative correlation between the coevolutionary connections within the protein and its average conservation (Table 1).\n\nR2: coefficient of determination; m: slope of the line for the relationship between the coevolutionary connections (predicted by the MIp, McBASC, DCA and PSICOV programs) and the evolutionary conservation of proteins with respect to their most frequently observed Gene Ontology based cellular localizations, molecular functions, and biological processes.\n\nFigure 4A provides an example in which coevolutionary connections and evolutionary conservation scores are overlaid onto the 3D structure of a representative protein (PDB code: 1DJ0) from the pseudouridine synthase domain family (CDD code: CD01291). In this protein, 8, 30, 20 and 46 coevolutionary connections were predicted by the MIp35, McBASC27, DCA22 and PSICOV23 methods, respectively. Interestingly, in this family the average conservation score across all sites is quite low (AL2CO score: 0.65; as shown by the color coding), despite the higher number of coevolutionary connections. Hence, observations in this family support the hypothesis that a higher number of coevolutionary connections can be expected for a protein family with higher evolutionary variability, or lower evolutionary conservation. Similarly, Figure 4B provides a case where coevolutionary connections are projected onto the 3D structure of a representative protein (PDB code: 1SRO) from the ribosomal protein S1 domain (CDD code: CD00164). It is evident from Figure 4B that the number of coevolutionary connections is relatively low in this family, while the overall evolutionary conservation (indicated by the color coding) is higher (AL2CO score: 1.63).
Hence, observations in this protein support the hypothesis that fewer coevolutionary connections can be expected for a less evolutionarily variable protein. Interestingly, sites within the 1SRO protein show a trend similar to that observed in the 1DJ0 protein (panels A2 and B2 of Figure 4): higher numbers of coevolutionary connections are observed for protein sites that are less evolutionarily variable.\n\nPanel A1 provides an example of higher coevolutionary connections (average >20) together with overall lower evolutionary conservation (average AL2CO score: 0.65), projected on the 3D structure of a representative protein (PDB code: 1DJ0) from the pseudouridine synthase domain family (CDD code: CD01291). Panel B1 represents a case [representative protein (PDB code: 1SRO; CDD code: CD00164) from the ribosomal protein S1 domain] where lower coevolutionary connections (average <10) are observed together with overall higher evolutionary conservation (average AL2CO score: 1.63). The lower panels (A2 and B2) show zoomed examples from the same families of higher coevolutionary connections at sites that have relatively higher evolutionary conservation.\n\n\nConclusions\n\nOver the years, it has become apparent that intra-protein coevolution is an important evolutionary phenomenon for maintaining proteins’ functional flexibility. However, the signs of coevolution are subtle and, as a consequence, hard to detect. The majority of sites in a protein coevolve to some degree, in that they contribute more or less to the structural integrity and, thus, the function of the protein. However, some sites influence each other more directly. By definition, coevolution is closely connected to the evolutionary variability of a protein. Hence, it is essential to investigate the intricate relationship between the extent of coevolution and the evolutionary variability exerted at individual protein sites, as well as across the whole protein.
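As noted earlier, the predicted coevolved pairs in this work were validated by structural proximity, retaining only pairs whose sites lie within 10Å in the representative 3D structure. A minimal Python sketch of such a filter, with hypothetical residue coordinates (the function and data names are illustrative, not the authors' code):

```python
import math

def proximity_filter(pairs, coords, cutoff=10.0):
    """Keep only residue pairs (i, j) whose 3D coordinates lie within `cutoff` angstroms."""
    return [(i, j) for i, j in pairs if math.dist(coords[i], coords[j]) <= cutoff]

# Hypothetical coordinates (e.g., C-beta atoms), keyed by residue position
coords = {1: (0.0, 0.0, 0.0), 2: (3.0, 4.0, 0.0), 3: (30.0, 0.0, 0.0)}
predicted_pairs = [(1, 2), (1, 3)]  # pairs flagged as coevolved by a predictor
validated = proximity_filter(predicted_pairs, coords)
print(validated)  # residues 1 and 2 (5 A apart) pass; 1 and 3 (30 A apart) are filtered out
```

In practice the coordinates would come from the representative PDB structure of each CDD family, and the 10Å cutoff matches the threshold used throughout the paper.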
However, it is also important to check the reliability of the predicted coevolved sites before deriving any hypothesis relating coevolution and evolutionary conservation. Therefore, we employed multiple algorithms to detect coevolutionary connections and used a structural proximity based filtering system to validate the coevolutionary connections among protein sites.\n\nTo the best of our knowledge, this is the first time such a detailed analysis has been performed to investigate the correlation between coevolution and evolutionary conservation. Based on our observations, we propose the hypothesis that a higher number of coevolutionary connections is associated with protein sites that are less evolutionarily variable, while a higher number of coevolutionary connections is observed for protein families that have higher evolutionary variability. The obvious question is why such an apparently contrasting relationship exists. One probable explanation is that these highly coevolved sites act as ‘coevolutionary hubs’, so that changes at these sites would affect many other connected sites. Conversely, the evolutionary selection pressure on the whole protein needs to be lower for more sites to be involved in covariation. Probably, sites that are critical to maintaining structural integrity and functional flexibility co-vary with many other sites, but the extent of variation at them is limited. Hence, the critical balance between covariation and evolutionary conservation is maintained via these ‘coevolutionary hub’ sites. However, to be rich in coevolutionary connections, a protein requires evolutionary flexibility so that correlated or compensatory mutations can arise in response to an initial change. Hence, higher numbers of coevolutionary connections are observed for families that are more evolutionarily variable than others.\n\n\nData availability\n\nDataset 1: Predicted data for coevolution and conservation.
Files of coevolutionary sites predicted by the four programs, with conservation scores predicted by the AL2CO program, with a 10Å filter. doi: 10.5256/f1000research.11251.d15710844",
"appendix": "Author contributions\n\n\n\nSC designed the work, analyzed and interpreted data; SM conceived the experiments, analyzed and interpreted data. SC and SM wrote the manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe work was supported by the CSIR network project fund (BSC 0121).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nSC acknowledges CSIR-IICB for infrastructural support and the Department of Biotechnology for a Ramalingaswami fellowship. SM acknowledges the Department of Science and Technology for an INSPIRE fellowship.\n\n\nSupplementary material\n\nTable S1: List of CDD families. The file contains the CDD family IDs, PDB IDs and UNIPROT IDs used in the study.\n\nClick here to access the data.\n\nSupplementary File 1: Program for coevolutionary connection prediction. MATLAB code for the DCA and PSICOV coevolutionary connection prediction programs. This code is provided by Domenico L. Gatti with permission.\n\nClick here to access the data.\n\nSupplementary File 2: Program to extract non-coevolved sites.\n\nClick here to access the data.\n\nSupplementary File 3: Program and data files to map conservation and coevolutionary information onto PDB structures.\n\nClick here to access the data.\n\nFigure S1: Number of coevolved pairs predicted (in blue, pink, green and yellow) by the different programs (MIp, McBASC, DCA and PSICOV, respectively) and common pairs (in black) between them.\n\nClick here to access the data.\n\nFigure S2: Relationship between the number of structural neighbors and evolutionary conservation for non-coevolved protein sites. X-axes show the number of structural neighbors (represented in bins of 1) per site whereas Y-axes represent the average evolutionary conservation index estimated by the AL2CO program. 
Each panel (A–D) represents results obtained from a number of non-coevolved pairs matched to the number of coevolved pairs predicted by the various coevolution programs (A: MIp; B: McBASC; C: DCA; D: PSICOV). Panels provide correlation data for the non-coevolved sites (i) that are located within or equal to 10Å (and at sequence position between i and >i±6). The coefficient of determination (R2) indicates how well the data points fit the linear regression model between coevolutionary connection and evolutionary conservation.\n\nClick here to access the data.\n\nFigure S3: Analysis for sites involved in H-bonding. (A) Number of coevolved sites involved in forming coevolutionary pairs, where both sites are either involved in H-bonding (HBY_HBY; dark grey) or not involved in H-bonding (HBX_HBX; light grey). (B) Difference in conservation indices (CI) between higher coevolutionary connection (HCC) and lower coevolutionary connection (LCC) sites involved in HBY_HBY or HBX_HBX. LCC: less than or equal to 3 coevolutionary connections; HCC: more than 3 coevolutionary connections.\n\nClick here to access the data.\n\nFigure S4: Analysis for sites involved in different secondary structures. (A) Number of coevolved sites involved in forming coevolutionary pairs, where both sites are located in helix (H_H; dark grey) and strand (E_E; light grey). (B) Difference in conservation indices (CI) between higher coevolutionary connection (HCC) and lower coevolutionary connection (LCC) sites involved in H_H and E_E. LCC: less than or equal to 3 coevolutionary connections; HCC: more than 3 coevolutionary connections.\n\nClick here to access the data.\n\nFigure S5: Relationship between coevolutionary connections and evolutionary conservation for the full-length protein. X-axes show the coevolutionary connections (represented in bins of 10) of protein families whereas Y-axes represent the average evolutionary conservation score of the same families estimated by the AL2CO program. 
Panels show the data extracted from all 753 CDD families. Each panel (A–D) represents results obtained from coevolution predicted by the various programs (MIp, McBASC, DCA and PSICOV, respectively).\n\nClick here to access the data.\n\nFigure S6: Gene Ontology distribution of the protein families used for this study. (1) Representative proteins from 517 CDD families were assigned to a cellular localization, whereas those from 720 and 624 families could be assigned to at least one (2) molecular function or (3) biological process, respectively. Details can be found in Methods.\n\nClick here to access the data.\n\n\nReferences\n\nKimura M: The Neutral Theory of Molecular Evolution. Cambridge: Cambridge University Press. 1994.\n\nTaylor WR, Hatrick K: Compensating changes in protein multiple sequence alignments. Protein Eng. 1994; 7(3): 341–8. PubMed Abstract | Publisher Full Text\n\nChelvanayagam G, Eggenschwiler A, Knecht L, et al.: An analysis of simultaneous variation in protein structures. Protein Eng. 1997; 10(4): 307–16. PubMed Abstract | Publisher Full Text\n\nPazos F, Helmer-Citterich M, Ausiello G, et al.: Correlated mutations contain information about protein-protein interaction. J Mol Biol. 1997; 271(4): 511–23. PubMed Abstract | Publisher Full Text\n\nOliveira L, Paiva AC, Vriend G: Correlated mutation analyses on very large sequence families. Chembiochem. 2002; 3(10): 1010–7. PubMed Abstract | Publisher Full Text\n\nDunn SD, Wahl LM, Gloor GB: Mutual information without the influence of phylogeny or entropy dramatically improves residue contact prediction. Bioinformatics. 2008; 24(3): 333–40. PubMed Abstract | Publisher Full Text\n\nMartin LC, Gloor GB, Dunn SD, et al.: Using information theory to search for co-evolving residues in proteins. Bioinformatics. 2005; 21(22): 4116–24. PubMed Abstract | Publisher Full Text\n\nGoh CS, Bogan AA, Joachimiak M, et al.: Co-evolution of proteins with their interaction partners. J Mol Biol. 2000; 299(2): 283–93. 
PubMed Abstract | Publisher Full Text\n\nGoh CS, Cohen FE: Co-evolutionary analysis reveals insights into protein-protein interactions. J Mol Biol. 2002; 324(1): 177–92. PubMed Abstract | Publisher Full Text\n\nFares MA, McNally D: CAPS: coevolution analysis using protein sequences. Bioinformatics. 2006; 22(22): 2821–2. PubMed Abstract | Publisher Full Text\n\nYip KY, Patel P, Kim PM, et al.: An integrated system for studying residue coevolution in proteins. Bioinformatics. 2008; 24(2): 290–2. PubMed Abstract | Publisher Full Text\n\nBuslje CM, Santos J, Delfino JM, et al.: Correction for phylogeny, small number of observations and data redundancy improves the identification of coevolving amino acid pairs using mutual information. Bioinformatics. 2009; 25(9): 1125–31. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGouveia-Oliveira R, Pedersen AG: Finding coevolving amino acid residues using row and column weighting of mutual information and multi-dimensional amino acid representation. Algorithms Mol Biol. 2007; 2: 12. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKorber BT, Farber RM, Wolpert DH, et al.: Covariation of mutations in the V3 loop of human immunodeficiency virus type 1 envelope protein: an information theoretic analysis. Proc Natl Acad Sci U S A. 1993; 90(15): 7176–80. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLittle DY, Chen L: Identification of coevolving residues and coevolution potentials emphasizing structure, bond formation and catalytic coordination in protein evolution. PLoS One. 2009; 4(3): e4762. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFatakia SN, Costanzi S, Chow CC: Computing highly correlated positions using mutual information and graph theory for G protein-coupled receptors. PLoS One. 2009; 4(3): e4681. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGao H, Dou Y, Yang J, et al.: New methods to measure residues coevolution in proteins. BMC Bioinformatics. 2011; 12: 206. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nPollock DD, Taylor WR, Goldman N: Coevolving protein residues: maximum likelihood identification and relationship to structure. J Mol Biol. 1999; 287(1): 187–98. PubMed Abstract | Publisher Full Text\n\nDimmic MW, Hubisz MJ, Bustamante CD, et al.: Detecting coevolving amino acid sites using Bayesian mutational mapping. Bioinformatics. 2005; 21(Suppl 1): i126–35. PubMed Abstract | Publisher Full Text\n\nFukami-Kobayashi K, Schreiber DR, Benner SA: Detecting compensatory covariation signals in protein evolution using reconstructed ancestral sequences. J Mol Biol. 2002; 319(3): 729–43. PubMed Abstract | Publisher Full Text\n\nChoi SS, Li W, Lahn BT: Robust signals of coevolution of interacting residues in mammalian proteomes identified by phylogeny-aided structural analysis. Nat Genet. 2005; 37(12): 1367–71. PubMed Abstract | Publisher Full Text\n\nMorcos F, Pagnani A, Lunt B, et al.: Direct-coupling analysis of residue coevolution captures native contacts across many protein families. Proc Natl Acad Sci U S A. 2011; 108(49): E1293–301. PubMed Abstract | Publisher Full Text | Free Full Text\n\nJones DT, Buchan DW, Cozzetto D, et al.: PSICOV: precise structural contact prediction using sparse inverse covariance estimation on large multiple sequence alignments. Bioinformatics. 2012; 28(2): 184–90. PubMed Abstract | Publisher Full Text\n\nRodionov A, Bezginov A, Rose J, et al.: A new, fast algorithm for detecting protein coevolution using maximum compatible cliques. Algorithms Mol Biol. 2011; 6: 17. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLockless SW, Ranganathan R: Evolutionarily conserved pathways of energetic connectivity in protein families. Science. 1999; 286(5438): 295–9. PubMed Abstract | Publisher Full Text\n\nFares MA, Travers SA: A novel method for detecting intramolecular coevolution: adding a further dimension to selective constraints analyses. Genetics. 2006; 173(1): 9–23. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nOlmea O, Rost B, Valencia A: Effective use of sequence correlation and conservation in fold recognition. J Mol Biol. 1999; 293(5): 1221–39. PubMed Abstract | Publisher Full Text\n\nGöbel U, Sander C, Schneider R, et al.: Correlated mutations and residue contacts in proteins. Proteins. 1994; 18(4): 309–17. PubMed Abstract | Publisher Full Text\n\nKann MG, Shoemaker BA, Panchenko AR, et al.: Correlated evolution of interacting proteins: looking behind the mirrortree. J Mol Biol. 2009; 385(1): 91–8. PubMed Abstract | Publisher Full Text | Free Full Text\n\nde Juan D, Pazos F, Valencia A: Emerging methods in protein co-evolution. Nat Rev Genet. 2013; 14(4): 249–61. PubMed Abstract | Publisher Full Text\n\nFodor AA, Aldrich RW: Influence of conservation on calculations of amino acid covariance in multiple sequence alignments. Proteins. 2004; 56(2): 211–21. PubMed Abstract | Publisher Full Text\n\nChakrabarti S, Panchenko AR: Coevolution in defining the functional specificity. Proteins. 2009; 75(1): 231–40. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHorner DS, Pirovano W, Pesole G: Correlated substitution analysis and the prediction of amino acid structural contacts. Brief Bioinform. 2008; 9(1): 46–56. PubMed Abstract | Publisher Full Text\n\nPei J, Grishin NV: AL2CO: calculation of positional conservation in a protein sequence alignment. Bioinformatics. 2001; 17(8): 700–12. PubMed Abstract | Publisher Full Text\n\nDunn SD, Wahl LM, Gloor GB: Mutual information without the influence of phylogeny or entropy dramatically improves residue contact prediction. Bioinformatics. 2008; 24(3): 333–40. PubMed Abstract | Publisher Full Text\n\nMarchler-Bauer A, Anderson JB, Cherukuri PF, et al.: CDD: a Conserved Domain Database for protein classification. Nucleic Acids Res. 2005; 33(Database issue): D192–6. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nWang G, Dunbrack RL Jr: Scoring profile-to-profile sequence alignments. Protein Sci. 2004; 13(6): 1612–26. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBerman HM, Westbrook J, Feng Z, et al.: The Protein Data Bank. Nucleic Acids Res. 2000; 28(1): 235–42. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMizuguchi K, Deane CM, Blundell TL, et al.: JOY: protein sequence-structure representation and analysis. Bioinformatics. 1998; 14(7): 617–23. PubMed Abstract | Publisher Full Text\n\nUniProt Consortium: The Universal Protein Resource (UniProt) in 2010. Nucleic Acids Res. 2010; 38(Database issue): D142–8. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAshburner M, Ball CA, Blake JA, et al.: Gene ontology: tool for the unification of biology. The Gene Ontology Consortium. Nat Genet. 2000; 25(1): 25–9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nOverington J, Donnelly D, Johnson MS, et al.: Environment-specific amino acid substitution tables: tertiary templates and prediction of protein folds. Protein Sci. 1992; 1(2): 216–26. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLee S, Blundell TL: Ulla: a program for calculating environment-specific amino acid substitution tables. Bioinformatics. 2009; 25(15): 1976–7. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMandloi S, Chakrabarti S: Dataset 1 in: Protein sites with more coevolutionary connections tend to evolve slower, while more variable protein families acquire higher coevolutionary connections. F1000Research. 2017. Data Source"
}
|
[
{
"id": "21733",
"date": "25 Apr 2017",
"name": "Anna Panchenko",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis paper expands on a previous study (Ref 32) and shows that the number of inter-residue coevolutionary relationships can be correlated with the evolutionary conservation of a protein site and protein family. The authors applied four different algorithms to calculate the coevolutionary relationships between sites, and an overall trend observed in this study is confirmed by different methods. Interestingly, the absolute scale of site conservation for sites with the same number of coevolutionary relationships can differ drastically between methods (for example McBASC and MIp on Figure 1). I wonder if the sets of pairwise correlated sites overlap between different methods. I would also suggest using the MISTIC server, which can provide information on conservation, coevolution and structure mapping. The relationship between coevolution and the diversity of protein families is interesting and intriguing; can it be related to the quality of alignments, one of the major factors defining the accuracy of coevolutionary detection algorithms? It is also important to discuss the difference between covariation and coevolution, as the latter is not necessarily the cause; see some recent studies (PMID: 25944916).\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? 
Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nPartly\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": [
{
"c_id": "2790",
"date": "15 Jun 2017",
"name": "Saikat Chakrabarti",
"role": "Author Response",
"response": "This paper expands on a previous study (Ref 32) and shows that the number of inter-residue coevolutionary relationships can be correlated with the evolutionary conservation of a protein site and protein family. The authors applied four different algorithms to calculate the coevolutionary relationships between sites and an overall trend observed in this study is confirmed by different methods. Interestingly, the absolute scale of site conservation for sites with the same number of coevolutionary relationships can differ drastically between methods (for example McBASC and MIp on Figure 1).\n\nWe thank the reviewer for the positive comments. We agree with the reviewer's comment: the observed scale of values obtained from the multiple coevolutionary programs varies a lot. The probable reason for this observation is the algorithm used by each individual program for the calculation of covariation/coevolution. We provide our point-by-point response in the following.\n\nI wonder if the sets of pairwise correlated sites overlap between different methods.\n\nFigure S1 in supplementary file 1 shows the number of coevolved pairs predicted (in blue, pink, green and yellow) by the different programs (MIp, McBASC, DCA and PSICOV, respectively) and the common pairs (in black) between them.\n\nI would also suggest using MISTIC server which can provide information on conservation, coevolution and structure mapping.\n\nWe thank the reviewer for the comment. MISTIC (mutual information server to infer coevolution) is an online server, hence running it for a large number of protein families is not feasible. However, as case studies, we have performed the analysis on the MISTIC server for 20 protein families (including the CD01291 and CD00164 families provided in the paper). 
Result of MISTIC server for CD01291 is available at http://mistic.leloir.org.ar/results.php?jobid=201705252335594338 and for CD00164 at http://mistic.leloir.org.ar/results.php?jobid=201705269510226.Circos representation of result for both the families is as follows:CD01291 (Pseudouridine synthases family):Link:http://mistic.leloir.org.ar/Results/job201705252335594338/circos/circos201705252335594338.png CD00164 (Ribosomal protein S1-like RNA-binding domain):Link: http://mistic.leloir.org.ar/Results/job201705269510226/circos/circos201705269510226.pngMI Circo is a sequential circular representation of the MSA and the information it contains. Coloured square boxes of the second circle indicate the MSA position conservation (highly conserved positions are in red, while less conserved ones are in blue).Lines connect pairs of positions with MI greater than 6.5 (Marino Buslje et al, 2009). Red edges represent the top 5%, black ones are between 70% and 95%, and gray edges account for the remaining 70%. Interestingly observed coevolutionary network predicted for both the families are similar to our study. Where CD01291 family has higher coevolutionary network connections for fewer variables sites whereas CD00164 family has less coevolutionary connections and overall less conserved. 
Result for other families:CD01424 (MGS_CPS_II):http://mistic.leloir.org.ar/results.php?jobid=20170611337071057CD01887 (Initiation Factor 2 (IF2):http://mistic.leloir.org.ar/results.php?jobid=20170611409092676CD03377 (Thiamine pyrophosphate (TPP family):http://mistic.leloir.org.ar/results.php?jobid=20170611947015544CD03278 (ATP-binding cassette domain of barmotin):http://mistic.leloir.org.ar/results.php?jobid=20170611948108735CD03481 (Transducer domain):http://mistic.leloir.org.ar/results.php?jobid=2017061194942951CD01357 (Aspartase):http://mistic.leloir.org.ar/results.php?jobid=20170611950419609CD00036 (Chitin/cellulose binding domains):http://mistic.leloir.org.ar/results.php?jobid=20170614134138131CD00089 (Protein kinase C-related kinase homology region 1 (HR1)):http://mistic.leloir.org.ar/results.php?jobid=20170614138375373CD04371 (DEP domain):http://mistic.leloir.org.ar/results.php?jobid=20170614139192179CD00052 (Eps15 homology domain):http://mistic.leloir.org.ar/results.php?jobid=20170614140306460CD00173 (Src homology 2 (SH2) domain):http://mistic.leloir.org.ar/results.php?jobid=20170614142492212CD01926 (cyclophilin_ABH_like domain):http://mistic.leloir.org.ar/results.php?jobid=201706111058271452CD04912 (ACT domains located C-terminal):http://mistic.leloir.org.ar/results.php?jobid=20170611105511638CD00164 (Ribosomal protein S1-like RNA-binding domain):http://mistic.leloir.org.ar/results.php?jobid=20170614204299821CD01714 (The electron transfer flavoprotein (ETF)):http://mistic.leloir.org.ar/results.php?jobid=201706111053598165CD00585 (Peptidase C1B subfamily):http://mistic.leloir.org.ar/results.php?jobid=201706111052469189CD04867 (TGS domain-containing YchF GTP-binding protein):http://mistic.leloir.org.ar/results.php?jobid=201706111051354847CD02014 (Thiamine pyrophosphate (TPP) family):http://mistic.leloir.org.ar/results.php?jobid=20170611953291860 The relationships between coevolution and diversity of protein families is interesting and intriguing, can it be 
related to the quality of alignments, one of the major factors defining the accuracy of coevolutionary detection algorithms?\n\nWe thank the reviewer for raising the very important point of alignment quality in coevolutionary analysis. The quality of the alignment is a major factor in the analysis, and for this reason we have utilized manually curated CDD alignments.\n\nIt is important also to discuss the difference between covariation and coevolution, the latter is not necessarily the cause, see some recent studies (PMID: 25944916).\n\nWe thank the reviewer for providing the information. In this study we have not compared the two concepts of covariation and coevolution. We used different programs (MIp, McBASC, DCA and PSICOV) that calculate covariation among protein sites in a tree-independent manner; it was assumed that the observed patterns of covariation are caused by molecular coevolution, and the two terms were treated synonymously."
},
{
"c_id": "2797",
"date": "16 Jun 2017",
"name": "Anna Panchenko",
"role": "Reviewer Response",
"response": "I would like thank the authors for adequately responding to my comments."
}
]
},
{
"id": "21734",
"date": "03 May 2017",
"name": "Ramanathan Sowdhamini",
"expertise": [
"Reviewer Expertise Structural bioinformatics",
"genomics",
"genome analysis",
"protein-protein interactions"
],
"suggestion": "Approved",
"report": "Approved\n\nThe evolutionary conservation of a large number of predicted co-evolving residue pairs has been investigated for possible correlation between the extent of conservation and the strength of co-evolving residue networks. Co-evolution has been predicted by four different popular algorithms. Residues with a high networking of co-evolved residues are found to be more evolutionarily conserved. However, the same trend is not true at the entire protein domain family level and evolutionarily conserved protein families appear to exhibit less co-evolved network of residues. Likewise, solvent-accessible residues were predicted to retain more co-evolutionary connections in comparison to solvent-buried residues. These are interesting observations, but the connections between these individual observations and possible implications/applications will be good to include in the paper.\n\nQueries:\n\nThe first sentence in the Abstract could be changed to \"Amino acid exchanges within proteins sometimes compensate for one another and could therefore be co-evolved.\" since this fact of tight linking is not well-known and forms one of the questions in this study.\n\nPage 4: It will be nice to explain how the conservation score (within the AL2CO program) is calculated.\n\nWas there no check for consensus in predicting the co-evolved residues? 
For instance, to see whichever are predicted by three or more methods … It will be interesting to examine the results for subset of such highly predicted co-evolved sites.\n\nPage 4: “Representative three-dimensional (3D) structures were collected for each family from the Protein Data Bank” – to provide details as to how they were selected?\n\nPage 5: This statement “Observation of strong positive correlation between coevolutionary connections and evolutionary conservation within the coevolved sites selected based on structural proximity suggests that highly coevolved protein sites tend to evolve slower.” seems to be apparently counter-intuitive. How can highly conserved sites be co-evolving also? Highly conserved sites usually imply high degree of identity (self-amino acid preservation). If so, how a co-evolutionary index can be set up for two spatially proximate residues which remain identical? Please explain for the benefit of the readers.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Partly\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": [
{
"c_id": "2789",
"date": "15 Jun 2017",
"name": "Saikat Chakrabarti",
"role": "Author Response",
"response": "The evolutionary conservation of a large number of predicted co-evolving residue pairs has been investigated for possible correlation between the extent of conservation and the strength of co-evolving residue networks. Co-evolution has been predicted by four different popular algorithms. Residues with a high networking of co-evolved residues are found to be more evolutionarily conserved. However, the same trend is not true at the entire protein domain family level and evolutionarily conserved protein families appear to exhibit less co-evolved network of residues. Likewise, solvent-accessible residues were predicted to retain more co-evolutionary connections in comparison to solvent-buried residues. These are interesting observations, but the connections between these individual observations and possible implications/applications will be good to include in the paper. We thank the reviewer for the positive comments. The current study does not provide a practical application in its current form, but it does offer insight into the underlying properties of covariation/coevolution methods and the relationship of these methods with evolutionary rate. However, knowledge regarding the intricate relationship between evolutionary variability and coevolutionary connection is very important for gaining insight into the dynamics and pattern of the evolutionary history of protein families. The variable nature of this intricate balance is perhaps crucial in determining the overall conservation and/or flexibility of functionally important sites within certain protein families. The first sentence in the Abstract could be changed to \"Amino acid exchanges within proteins sometimes compensate for one another and could therefore be co-evolved.\" since this fact of tight linking is not well-known and forms one of the questions in this study. We thank the reviewer for her thoughtful comment. 
We agree to change the line in the abstract. Page 4: It will be nice to explain how the conservation score (within the AL2CO program) is calculated. Information on the conservation score calculation is provided in the manuscript subsection “Calculation of amino acid conservation”. The AL2CO (manuscript reference: 34) program performs conservation analysis in a comprehensive and systematic way. We used the independent-count sequence weighting scheme and the matrix-based sum-of-pairs conservation measure of the AL2CO program to calculate the evolutionary conservation of each coevolved site (column) in the alignment. These scoring functions sum the products of the column frequencies for every combination of amino acids a and b, multiplying each product by the corresponding BLOSUM62 amino acid substitution matrix value. Was there no check for consensus in predicting the co-evolved residues? For instance, to see whichever are predicted by three or more methods … It will be interesting to examine the results for a subset of such highly predicted co-evolved sites. We thank the reviewer for her insightful opinion. Figure S1 in supplementary file 1 shows the number of coevolved pairs predicted (in blue, pink, green and yellow) by the different programs (MIp, McBASC, DCA and PSICOV, respectively) and the common pairs (in black) between them. We have performed the correlation analysis on consensus data (e.g., the 3,335 coevolved pairs predicted by all four programs) and observed a similar trend; however, as the number of consensus-predicted coevolved sites is not large, these results are not included. 
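As a rough illustration, the matrix-based sum-of-pairs conservation measure described above can be sketched as follows. This is an unweighted toy sketch only: the substitution-matrix fragment is an assumption for the example, whereas AL2CO uses the full BLOSUM62 matrix together with an independent-count sequence weighting scheme.

```python
from collections import Counter

# Toy fragment of a substitution matrix; the real AL2CO calculation
# uses BLOSUM62 with sequence weighting, so these values are assumptions.
SUBST = {
    ('A', 'A'): 4, ('S', 'S'): 4, ('T', 'T'): 5,
    ('A', 'S'): 1, ('S', 'A'): 1,
    ('A', 'T'): 0, ('T', 'A'): 0,
    ('S', 'T'): 1, ('T', 'S'): 1,
}

def sum_of_pairs_conservation(column):
    # Sum over all amino-acid combinations (a, b) of f(a) * f(b) * S(a, b),
    # where f(.) are the residue frequencies observed in the column.
    n = len(column)
    freqs = {aa: c / n for aa, c in Counter(column).items()}
    return sum(fa * fb * SUBST[(a, b)]
               for a, fa in freqs.items()
               for b, fb in freqs.items())

# A fully conserved column scores higher than a mixed column.
print(sum_of_pairs_conservation('AAAA'))  # 4.0
print(sum_of_pairs_conservation('AASS'))  # 2.5
```
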
Page 4: “Representative three-dimensional (3D) structures were collected for each family from the Protein Data Bank” – to provide details as to how they were selected? We selected the 3D structure of each family from the conserved domain database (CDD) alignment (represented as the first sequence in the alignment file). Page 5: This statement “Observation of strong positive correlation between coevolutionary connections and evolutionary conservation within the coevolved sites selected based on structural proximity suggests that highly coevolved protein sites tend to evolve slower.” seems to be apparently counter-intuitive. How can highly conserved sites be co-evolving also? Highly conserved sites usually imply a high degree of identity (self-amino acid preservation). If so, how can a co-evolutionary index be set up for two spatially proximate residues which remain identical? Please explain for the benefit of the readers. We thank the reviewer for these useful comments. We agree with the apparent counter-intuitiveness of the sentence. However, it is what we observed. The obvious question is why such an apparently contrasting relationship exists. Perhaps both coevolutionary and evolutionary changes are dynamic processes, and for a given protein site the highest coevolutionary connections are observed at a certain point of its evolutionary conservation status. This evolutionary conservation status of the site is perhaps selected and maintained. One probable explanation could be that these highly coevolved sites act as ‘coevolutionary hubs’, and therefore changes at these sites would affect many other connected sites. However, we must mention that in this study the higher conservation is relative to other coevolving sites and does not necessarily mean completely conserved sites."
}
]
}
] | 1
|
https://f1000research.com/articles/6-453
|
https://f1000research.com/articles/5-2676/v1
|
16 Nov 16
|
{
"type": "Method Article",
"title": "Heterogeneous ensembles for predicting survival of metastatic, castrate-resistant prostate cancer patients",
"authors": [
"Sebastian Pölsterl",
"Pankaj Gupta",
"Lichao Wang",
"Sailesh Conjeti",
"Amin Katouzian",
"Nassir Navab",
"Pankaj Gupta",
"Lichao Wang",
"Sailesh Conjeti",
"Amin Katouzian",
"Nassir Navab"
],
"abstract": "Ensemble methods have been successfully applied in a wide range of scenarios, including survival analysis. However, most ensemble models for survival analysis consist of models that all optimize the same loss function and do not fully utilize the diversity in available models. We propose heterogeneous survival ensembles that combine several survival models, each optimizing a different loss during training. We evaluated our proposed technique in the context of the Prostate Cancer DREAM Challenge, where the objective was to predict survival of patients with metastatic, castrate-resistant prostate cancer from patient records of four phase III clinical trials. Results demonstrate that a diverse set of survival models were preferred over a single model and that our heterogeneous ensemble of survival models outperformed all competing methods with respect to predicting the exact time of death in the Prostate Cancer DREAM Challenge.",
"keywords": [
"survival analysis",
"censoring",
"prostate cancer",
"ensemble learning",
"heterogeneous ensemble"
],
"content": "Introduction\n\nToday, Cox’s proportional hazards model1 is the most popular survival model because of its strong theoretical foundation. However, it only accounts for linear effects of the features and is not applicable to data with multicollinearities or high-dimensional feature vectors. In addition to Cox’s proportional hazards model, many alternative survival models exist: the accelerated failure time model, random survival forest2, gradient boosting3,4, or support vector machine5–9. It is often difficult to choose the best survival model because each model has its own advantages and disadvantages; choosing among them requires extensive knowledge of each model. Ensemble techniques leverage multiple decorrelated models – called base learners – by aggregating their predictions, which often provides an improvement over a single base learner if the base learners’ predictions are accurate and diverse10,11. The first requirement states that a base learner must be better than random guessing, and the second requirement states that the predictions of any two base learners must be uncorrelated. The base learners in most ensemble methods for survival analysis are of the same type, such as survival trees in a random survival forest2.\n\nCaruana et al.12 proposed heterogeneous ensembles for classification, where base learners are selected from a library of many different types of learning algorithms: support vector machines, decision trees, k nearest neighbor classifiers, and so forth. In particular, the library itself can contain other (homogeneous) ensemble models such that the overall model is an ensemble of ensembles. The ensemble is constructed by estimating the performance of models in the library from a separate validation set and iteratively selecting the model that increases ensemble performance the most, thus satisfying the first requirement with respect to the accuracy of base learners. 
To ensure that models are diverse, which is the second requirement, Margineantu and Dietterich13 proposed to use Cohen’s kappa14 to estimate the degree of disagreement between any pair of classifiers. The S pairs with the lowest kappa statistic formed the final ensemble. In addition, Rooney et al.15 proposed a method to construct a heterogeneous ensemble of regression models by ensuring that residuals on a validation set are uncorrelated.\n\nWe present heterogeneous survival ensembles to build an ensemble from a wide range of survival models. The main advantage of this approach is that it is not necessary to rely on a single survival model and any assumptions or limitations that model may imply. Although predictions are real-valued, a per-sample error measurement, similar to residuals in regression, generally does not exist. Instead, the prediction of a survival model consists of a risk score of arbitrary scale, and a direct comparison of these values, e.g., by computing the squared error, is not meaningful. Therefore, we propose an algorithm for pruning an ensemble of survival models based on the correlation between predicted risk scores on an independent test set. We demonstrate the advantage of heterogeneous survival ensembles in the context of the Prostate Cancer DREAM Challenge16, which asked participants to build a prognostic model to predict overall survival of patients with metastatic, castrate-resistant prostate cancer (mCRPC). In the early stages of therapy, prostate cancer patients are usually treated with androgen deprivation therapy, but for 10–20% of patients the cancer will inevitably progress from castrate-sensitive to castrate-resistant within 5 years17. The median survival time for patients with mCRPC is typically less than 2 years17. 
To improve our understanding of mCRPC, the Prostate Cancer DREAM Challenge exposed the community to a large and curated set of patient records and asked participants to 1) predict patients’ overall survival, and 2) predict treatment discontinuation due to adverse events. In this paper, we focus on the first sub challenge, i.e., the prediction of survival. To the best of our knowledge, this is the first scientific work that uses heterogeneous ensembles for survival analysis. The paper is organized as follows. In the methods section, we briefly describe the framework of heterogeneous ensembles proposed by Caruana et al.12 and Rooney et al.15 and propose an extension to construct a heterogeneous ensemble of survival models. Next, we present results of three experiments on data of the Prostate Cancer DREAM Challenge, including our final submission. Finally, we discuss our results and close with concluding remarks.\n\n\nMethods\n\nCaruana et al.12 formulated four basic steps to construct a heterogeneous ensemble:\n\n1. Initialize an empty ensemble.\n\n2. Update the ensemble by adding a model from the library that maximizes the (extended) ensemble’s performance on an independent validation (hillclimb) set.\n\n3. Repeat step 2 until the desired size of the ensemble is reached or all models in the library have been added to the ensemble.\n\n4. Prune the ensemble by reducing it to the subset of base learners that together maximize the performance on a validation (hillclimb) set.\n\nBy populating the library with a wide range of algorithms, the requirement of having a diverse set of base learners is trivially satisfied. In addition, each model can be trained on a separate bootstrap sample of the training data. The second step ensures that only accurate base learners are added to the ensemble, and the fourth step is necessary to avoid overfitting on the validation set and to ensure that the ensemble comprises a diverse group of base learners. 
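Steps 1–3 above can be illustrated with a minimal greedy forward-selection sketch. Everything here is hypothetical toy data: the model names, the prediction vectors and the score function are assumptions for the example; the actual method selects cross-validated survival models using a survival metric such as the concordance index.

```python
import numpy as np

def greedy_ensemble_selection(preds, score_fn, max_size):
    # Start from an empty ensemble and repeatedly add the library model
    # whose inclusion maximizes the score of the averaged prediction
    # on a hillclimb (validation) set.
    ensemble = []
    while len(ensemble) < max_size:
        best_name, best_score = None, -np.inf
        for name, p in preds.items():
            if name in ensemble:
                continue
            candidate = np.mean([preds[m] for m in ensemble] + [p], axis=0)
            score = score_fn(candidate)
            if score > best_score:
                best_name, best_score = name, score
        ensemble.append(best_name)
    return ensemble

# Hypothetical library: predictions of three models on a hillclimb set.
target = np.array([1.0, 2.0, 3.0])
library = {
    'cox':  np.array([1.0, 2.0, 3.0]),   # accurate
    'rsf':  np.array([1.1, 1.9, 3.1]),   # accurate, slightly different
    'ssvm': np.array([3.0, 0.0, 1.0]),   # poor
}
score = lambda p: -float(np.sum((p - target) ** 2))
print(greedy_ensemble_selection(library, score, max_size=2))  # ['cox', 'rsf']
```

The poor model is never chosen because adding it would lower the ensemble score, which is exactly how step 2 keeps only accurate base learners.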
These two steps are referred to as ensemble selection and ensemble pruning and are explained in more detail below.\n\nThe algorithm by Caruana et al.12 has the advantage that models in the library can be evaluated with respect to any performance measure. The final heterogeneous ensemble maximizes the selected performance measure by iteratively choosing the best model from the library. Therefore, the training data 𝒟 needs to be split into two non-overlapping parts: one part (𝒟train) used to train base learners from the library, and the other part (𝒟val) used as the validation set to estimate model performances. Data in the biomedical domain is usually characterized by small sample sizes, which would lead to an even smaller training set if a separate validation set is used. Caruana et al.18 observed that if the validation set is small, the ensemble tends to overfit more easily, which is especially concerning when the library contains many models. To remedy this problem, Caruana et al. [18, p. 3] proposed a solution that “embed[ded] cross-validation within ensemble selection so that all of the training data can be used for the critical ensemble hillclimbing step.” Instead of setting aside a separate validation set, they proposed to use cross-validated models to determine the performance of models in the library (see Algorithm 1).\n\nAlgorithm 1. 
Ensemble selection for survival analysis\n\nInput: Library of N base survival models, training data 𝒟,\n\nnumber of folds K, minimum desired performance cmin.\n\nOutput: Ensemble of base survival models exceeding\n\nminimum performance.\n\n1 𝓜 ← ∅\n\n2 for i ← 1 to N do\n\n3 𝒞i ← ∅\n\n4 for k ← 1 to K do\n\n5 𝒟ktrain ← k-th training set\n\n6 𝒟ktest ← k-th test set\n\n7 Mik ← Train k-th sibling of i-th survival model on 𝒟 ktrain\n\n8 ck ← Prediction of survival model Mik on 𝒟ktest\n\n9 𝒞i ← 𝒞i ∪ {(𝒟ktest, ck)} /* Store prediction and\n\nassociated ground truth */\n\n10 end\n\n11 c̄i ← Performance of i-th survival model based on\n\npredictions and ground truth in 𝒞i\n\n12 if c̄i ≥ cmin then\n\n13 𝓜 ← 𝓜 ∪ {(Mi1, …, MiK , c̄i)} /* Store K\n\nsiblings and performance of i-th model */\n\n14 end\n\n15 end\n\n16 return Base models in 𝓜\n\nA cross-validated model is itself an ensemble of identical models, termed siblings, each trained on a different subset of the training data. It is constructed by splitting the training data into K equally sized folds and training one identically parametrized model on data from each of the K combinations of K – 1 folds. Together, the resulting K siblings form a cross-validated model.\n\nTo estimate the performance of a cross-validated model, the complete training data can be used, because the prediction of a sample i in the training data 𝒟 only comes from the sibling that did not see that particular sample during training, i.e., for which i ∉ 𝒟train. Therefore, estimating the performance using cross-validated models has the same properties as if one would use a separate validation set, but without reducing the size of the ensemble training data. If a truly new data point is to be predicted, the prediction of a cross-validated model is the average of the predictions of its siblings. 
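The cross-validated-model construction can be sketched as follows. This is a simplified sketch: MeanModel is a hypothetical stand-in for any identically parametrized base survival model, and the fit/predict interface is an assumption for the example.

```python
import numpy as np

class MeanModel:
    # Hypothetical toy base learner: always predicts the mean of its
    # training targets. Stands in for any identically parametrized model.
    def fit(self, X, y):
        self.mu = float(np.mean(y))
    def predict(self, X):
        return np.full(len(X), self.mu)

def cross_validated_model(make_model, X, y, k=5):
    # Split the training data into k folds and train one sibling per
    # combination of k-1 folds. Each sample is predicted by the single
    # sibling that never saw it, so performance can be estimated on the
    # full training data without a separate validation set.
    idx_folds = np.array_split(np.arange(len(X)), k)
    siblings, oof = [], np.empty(len(X))
    for test_idx in idx_folds:
        train_idx = np.setdiff1d(np.arange(len(X)), test_idx)
        m = make_model()
        m.fit(X[train_idx], y[train_idx])
        oof[test_idx] = m.predict(X[test_idx])  # out-of-fold prediction
        siblings.append(m)
    return siblings, oof

def predict_new(siblings, X_new):
    # For truly new data, average the predictions of all siblings.
    return np.mean([m.predict(X_new) for m in siblings], axis=0)

X = np.arange(10).reshape(-1, 1)
y = np.arange(10, dtype=float)
siblings, oof = cross_validated_model(MeanModel, X, y, k=5)
print(len(siblings))                 # 5 siblings form one cross-validated model
print(predict_new(siblings, X[:1]))  # average of the siblings' predictions
```
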
Algorithm 1 summarizes the steps in building a heterogeneous ensemble from cross-validated survival models.\n\nNote that if a cross-validated survival model is added to the ensemble, the ensemble actually grows by K identically parametrized models of the same type – the siblings (see line 13 in Algorithm 1). Therefore, the prediction of an ensemble consisting of S cross-validated models is in fact an ensemble of K × S models.\n\nEnsemble selection only ensures that base learners are better than random guessing, but does not guarantee that predictions of base learners are diverse, which is the second important requirement for ensemble methods10,11.\n\nIn survival analysis, predictions are real-valued, because they either correspond to a risk score or to the time of an event. Therefore, we adapted a method for pruning an ensemble of regression models that accounts for a base learner’s accuracy and correlation to other base learners15, as illustrated below.\n\nPruning regression ensembles. Given a library of base learners, first, the performance of each base learner is estimated either from a separate validation set or via cross-validated models following Algorithm 1. To estimate the diversity of a pair of regression models, Rooney et al.15 considered a model’s residuals as a per-sample error measurement. Given the residuals of two models on the same data, it is straightforward to obtain a measure of diversity by computing Pearson’s correlation coefficient. They defined the diversity of a single model based on the correlation of its residuals to the residuals of all other models in the ensemble and by counting how many correlation coefficients exceeded a user-supplied threshold τcorr. The diversity score can be computed by subtracting the number of correlated models from the total number of models in the ensemble and normalizing it by the ensemble size. 
If a model is sufficiently correlated with all other models, its diversity is zero, while if it is completely uncorrelated, its diversity is one. Moreover, they defined the accuracy of the i-th model relative to the root mean squared error (RMSE) of the best performing model as accuracy(i) = (minj = 1,…,S RMSE(j))/RMSE(i). Finally, Rooney et al.15 added the diversity score of each model to its accuracy score and selected the top S base learners according to the combined accuracy-diversity score. Algorithm 2 summarizes the algorithm by Rooney et al.15, where the correlation function would compute Pearson’s correlation coefficient between residuals of the i-th and j-th model.\n\nAlgorithm 2. Ensemble pruning algorithm of Rooney et al.15\n\nInput: Set of base survival models 𝓜 and their average\n\ncross-validation performance, validation set 𝒟val,\n\ndesired size S of ensemble, correlation threshold τcorr.\n\nOutput: Aggregated predictions of S base survival models.\n\n1 cmax ← Highest performance score of any model in 𝓜\n\n2 if |𝓜| > S then\n\n3 𝒞 ← ∅\n\n4 for i ← 1 to |𝓜| do\n\n5 pi ← Prediction of data 𝒟val using i-th base survival\n\nmodel in 𝓜\n\n6 count ← 0\n\n7 for j ← 1 to |𝓜| do\n\n8 pj ← Prediction of data 𝒟val using j-th base\n\nsurvival model in 𝓜\n\n9 if i ≠ j ∧ correlation(pi, pj, 𝒟val) ≥ τcorr then\n\n10 count ← count + 1\n\n11 end\n\n12 end\n\n13 di ← (|𝓜| – count)/|𝓜|\n\n14 c̄i ← Average cross-validation performance of i-th\n\nsurvival model in 𝓜\n\n15 𝒞 ← 𝒞 ∪ {(i, c̄i/cmax + di)}\n\n16 end\n\n17 𝓜* ← Top S survival models with highest score according\n\nto 𝒞\n\n18 else\n\n19 𝓜* ← 𝓜\n\n20 end\n\n21 return Prediction of 𝒟val by aggregating predictions of base\n\nlearners in survival ensemble 𝓜*\n\nIf the library consists of survival models rather than regression models, a per-sample error, similar to residuals in regression, is difficult to define. 
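The diversity computation in lines 7–13 of Algorithm 2 can be sketched as below. One simplification to note: the sketch correlates raw prediction vectors directly rather than residuals, and the threshold and toy prediction vectors are assumptions for the example.

```python
import numpy as np

def diversity_scores(predictions, tau_corr=0.8):
    # For each model i, count how many other models j satisfy
    # correlation(p_i, p_j) >= tau_corr, then set
    # d_i = (|M| - count) / |M|, following Algorithm 2.
    S = len(predictions)
    scores = []
    for i in range(S):
        count = 0
        for j in range(S):
            if i != j:
                r = np.corrcoef(predictions[i], predictions[j])[0, 1]
                if r >= tau_corr:
                    count += 1
        scores.append((S - count) / S)
    return scores

# Toy predictions: models 0 and 1 are perfectly correlated
# (model 1 is a linear transform of model 0), model 2 disagrees with both.
preds = [
    np.array([0.1, 0.4, 0.9, 1.3]),
    np.array([0.25, 0.85, 1.85, 2.65]),
    np.array([1.2, 0.3, 0.8, 0.1]),
]
print(diversity_scores(preds, tau_corr=0.8))
```

The two redundant models each receive a lower diversity score than the disagreeing one, so the combined accuracy-diversity score in line 15 would favor keeping only one of them.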
Instead, predictions are risk scores of arbitrary scales and the ground truth is the time of an event or the time of censoring. Hence, a direct comparison of a predicted risk score to the observed time of an event or the time of censoring, for instance via the squared error, is not meaningful. We propose to measure the diversity in an ensemble based on the correlation between predicted risk scores, i.e., independent of the ground truth. Here, we consider two correlation measures:\n\n1. Pearson’s correlation coefficient, and\n\n2. Kendall’s rank correlation coefficient (Kendall’s τ).\n\nHence, we measure the diversity of a heterogeneous ensemble of survival models without requiring ground truth or a separate validation set. We believe this is not a disadvantage, because the combined score in line 15 of Algorithm 2 already accounts for model accuracy, which could be estimated by the concordance index19 or integrated area under the time-dependent ROC curve20,21 on a validation set or using Algorithm 1. In fact, since the diversity score for survival models does not depend on ground truth, the pruning step can be postponed until the prediction phase – under the assumption that prediction is always performed for a set of samples and not a single sample alone. Consequently, the ensemble will not be static anymore and is allowed to change if new test data is provided, resulting in a dynamic ensemble.\n\nIn summary, for pruning an ensemble of survival models, Algorithm 2 is applied during prediction with the following modifications:\n\n1. Replace validation data 𝒟val by the feature vectors of the test data Xnew.\n\n2. Compute the performance score using the concordance index19, integrated area under the time-dependent, cumulative-dynamic ROC curve20,21 or any other performance measure for censored outcomes.\n\n3. 
Measure the correlation among predicted risk scores using Pearson’s correlation coefficient or Kendall’s rank correlation coefficient.\n\nThe prediction of the final ensemble is the average predicted risk score of all its members after pruning.\n\n\nExperiments\n\nThe Prostate Cancer DREAM Challenge16 provided access to 1,600 health records from three separate phase III clinical trials for training22–24, and data from an independent clinical trial of 470 men for testing (values of dependent variables were held back and not revealed to participants)25. Figure 1 illustrates the distribution of censoring and survival times of the respective trials. The median follow-up time for the MAINSAIL trial23, the ASCENT-2 trial22, and VENICE trial24 was 279, 357, and 642.5 days, respectively. For the test data from the ENTHUSE-33 trial25, the median follow-up was 463 days.\n\nNumbers in brackets denote the total number of patients in the respective trial, and the dashed line is the median follow-up time in the ENTHUSE-33 trial, which was used as independent test data.\n\nWe partitioned the training data into 7 sets by considering all possible combinations of the three trials constituting the training data (see Table 1). Each partition was characterized by a different set of features, ranging between 383 features for data from the MAINSAIL trial to 217 features when combining data of all three trials. Features were derived from recorded information with respect to medications, comorbidities, laboratory measurements, tumor measurements, and vital signs (see Supplementary material for details). Finally, we used a random survival forest2 to impute missing values in the data.\n\nWe performed a total of three experiments, two based on cross-validation using the challenge training data, and one using the challenge test data from the ENTHUSE-33 trial as hold-out data. 
In the first experiment, we randomly split each of the datasets in Table 1 into separate training and test data and performed 5-fold cross-validation. Thus, test and training data comprised different individuals from the same trial(s). We refer to this scenario as within-trial validation. In the second experiment, referred to as between-trials validation, we used data from one trial as hold-out data for testing and data from one or both of the remaining trials for training. This setup resembles the challenge more closely, where the test data corresponded to a separate trial too. We only considered features that were part of both the training and test data. In each experiment above, the following six survival models were evaluated:\n\n1. Cox’s proportional hazards model1 with ridge (ℓ2) penalty,\n\n2. Linear survival support vector machine (SSVM)9,\n\n3. SSVM with the clinical kernel26,\n\n4. Gradient boosting of the negative log partial likelihood of Cox’s proportional hazards model3 with randomized regression trees as base learners27,28,\n\n5. Gradient boosting of the negative log partial likelihood of Cox’s proportional hazards model3 with componentwise least squares as base learners29,\n\n6. Random survival forest2.\n\nIn addition, the training of each survival model was wrapped in a grid search optimization to find optimal hyper-parameters. The complete training data was randomly split into 80% for training and 20% for testing to estimate a model’s performance with respect to a particular hyper-parameter configuration. The process was repeated for ten different splits of the training data. Finally, a model was trained on the complete training data using the hyper-parameters that on average performed best across all ten repetitions. Performance was estimated by Harrell’s concordance index (c index)19. 
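Harrell’s concordance index just mentioned can be sketched as follows. This is a simple O(n²) sketch on toy data; tied risk scores count as 0.5, a common convention, and the data values are made up for illustration.

```python
import numpy as np

def harrell_c_index(time, event, risk):
    # A pair (i, j) is comparable when subject i has an observed event
    # and dies before time[j]. The pair is concordant when the model
    # assigns the higher risk to the subject with the shorter time.
    concordant, comparable = 0.0, 0
    n = len(time)
    for i in range(n):
        for j in range(n):
            if event[i] and time[i] < time[j]:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0
                elif risk[i] == risk[j]:
                    concordant += 0.5   # ties in risk count as one half
    return concordant / comparable

time = np.array([2.0, 4.0, 6.0, 8.0])
event = np.array([1, 1, 0, 1])           # third subject is censored
risk = np.array([0.9, 0.7, 0.5, 0.1])    # perfectly anti-ordered with time
print(harrell_c_index(time, event, risk))  # 1.0
```

Censored subjects contribute only as the longer-lived member of a pair, which is also why the index becomes biased when the amount of censoring is large, as discussed later for the VENICE data.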
All continuous features were normalized to zero mean and unit variance, and nominal and ordinal features were dummy coded.\n\nFor the Prostate Cancer DREAM Challenge’s final evaluation, we built a heterogeneous ensemble from a wide range of survival models. In sub challenge 1a, models were evaluated using the integrated area under the time-dependent, cumulative-dynamic ROC curve (iAUC)20,21 – integrated over time points every 6 months up to 30 months after the first day of treatment – and in sub challenge 1b, using the root mean squared error (RMSE) with respect to deceased patients in the test data. Organizers of the Prostate Cancer DREAM Challenge estimated the performance of submitted models based on 1,000 bootstrap samples of the ENTHUSE-33 trial data and the Bayes factor to the top performing model and a baseline model by Halabi et al.30 (only for sub challenge 1a). The Bayes factor provides an alternative to traditional hypothesis testing, which relies on p-values to determine which of two models is preferred (see e.g. 31). According to Jeffreys32, a Bayes factor in the interval [3; 10] indicates moderate evidence that the first model outperformed the second model, and a Bayes factor greater than 10 indicates strong evidence; otherwise, the evidence is insufficient.\n\n\nResults\n\nFigure 2 summarizes the average cross-validation performance across all five test sets for all seven datasets in Table 1. Overall, the average concordance index ranged between 0.629 and 0.713 with a mean of 0.668. It is noteworthy that all models except the SSVMs performed best on data of the MAINSAIL trial, which comprised 526 subjects and the highest number of features among all trials (383 features). An SSVM was likely at a disadvantage due to the high number of features and because feature selection is not embedded into its training as it is for the remaining models. 
In fact, SSVM models performed worst on data from the MAINSAIL and VENICE trials, which were the datasets with the most features. SVM-based models performed best if data from at least two trials were combined, which increased the number of samples and decreased the number of features. Moreover, the results show that linear survival support vector machines performed poorly. A considerable improvement could be achieved when using kernel-based survival support vector machines with the clinical kernel, which is especially useful if the data is a mix of continuous, categorical and ordinal features. For low-dimensional data, the kernel SSVM could perform as well as or better than gradient boosting models, but was always outperformed by a random survival forest.\n\nWhen considering the performance of models across all datasets (last row in Figure 2), random survival forests and Cox’s proportional hazards models stood out with an average c index of 0.681, outperforming the third best: gradient boosting with componentwise least squares base learners. Random survival forests performed better than Cox’s proportional hazards models on 4 out of 7 datasets and were tied on one dataset. The results seem to indicate that a few datasets contain non-linearities, which were captured by random survival forests, but not by gradient boosting with componentwise least squares and Cox’s proportional hazards models. Nevertheless, Cox models performed as well as random survival forests when averaging results over all datasets.\n\nFinally, we would like to mention that 5 out of 6 survival models performed worst on the VENICE data. Although it contained the largest number of patients, its median follow-up time was almost twice as long as that of the ASCENT-2 trial, and the overlap in the distribution of censoring and survival times was rather small (see Figure 1). 
Thus, the difference between observed time points in the training and test data based on the VENICE trial is likely more pronounced than for the data from the MAINSAIL or ASCENT-2 trials, which means a survival model has to generalize to a much larger time period. Moreover, the amount of censoring in the VENICE trial is relatively low compared to the other trials. Therefore, the observed drop in performance might stem from the fact that the bias of Harrell’s concordance index usually increases as the amount of censoring increases33. As an alternative, we considered the integrated area under the time-dependent, cumulative-dynamic ROC curve20,21, which was the main evaluation measure in the Prostate Cancer DREAM Challenge. However, comparing the estimated integrated area under the ROC curve across multiple datasets is not straightforward when follow-up times differ greatly among trials (see Figure 1). If the integral is estimated from time points that exceed the follow-up time of almost all patients, the inverse probability of censoring weights used in the estimator of the integrated area under the curve cannot be computed, because the estimated probability of remaining uncensored at that time point becomes zero. On the other hand, if time points are defined too conservatively, the follow-up period of most patients will end after the last time point and the estimator would ignore a large portion of the follow-up period. Hence, defining time points that lead to adequate estimates of performance in all three datasets is challenging due to large differences in the duration of follow-up periods.\n\nThe last column (mean) denotes the average performance of all models on a particular dataset and the last row (mean) denotes the average performance of a particular model across all datasets. 
Numbers indicate the average of Harrell’s concordance index across five cross-validation folds.\n\nIn the second experiment, training and test data were from separate trials, which resembled the setup of the Prostate Cancer DREAM Challenge. Figure 3 summarizes the results.\n\nOne trial was used as hold-out data (indicated by the name to the right of the arrow) and one or two of the remaining trials as training data. Numbers indicate Harrell’s concordance index on the hold-out data.\n\nOverall, the performance of models was in a similar range as in the previous experiment, except if VENICE data was used for testing. If performance was estimated on the VENICE data, all models performed considerably worse compared to performance estimated on the other datasets. We believe the reasons for these results are similar to the cross-validation results on the VENICE data described in the previous section. The bias of Harrell’s concordance index due to vastly different amounts of censoring among trials could be one factor, while the other could be that the follow-up times differed drastically between training and testing. If the follow-up period is much shorter in the training data than in the testing data, it is likely that models generalize badly for time points that were never observed in the training data, which is only the case if the VENICE data is used for testing, but not if data from the MAINSAIL or the ASCENT-2 trial is used (cf. Figure 1).\n\nThe experiments also confirmed observations discussed in the previous section: 1) on average, random survival forests performed better than gradient boosting models and SSVMs, and 2) using SSVM with the clinical kernel was preferred over the linear model. Interestingly, all models, except linear SSVM, performed best when trained on the maximum number of available patient records, which is different from results in the previous section, where models trained on data with more features performed better. 
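Since Harrell’s concordance index is the yardstick in both experiments, it may help to make its definition concrete. The following is a minimal, illustrative sketch (not the implementation used in the paper); for simplicity, pairs with tied survival times are skipped:

```python
from itertools import combinations

def harrell_cindex(event_time, event_observed, risk_score):
    """Harrell's concordance index: the fraction of comparable patient
    pairs whose predicted risk ordering agrees with the observed
    survival ordering.

    A pair is comparable if the patient with the shorter observed time
    actually died (otherwise censoring hides the true ordering).
    Ties in the risk score count as 0.5; pairs with tied survival
    times are skipped in this simplified version.
    """
    concordant, comparable = 0.0, 0
    for i, j in combinations(range(len(event_time)), 2):
        if event_time[j] < event_time[i]:
            i, j = j, i  # ensure i has the shorter observed time
        if event_time[i] == event_time[j] or not event_observed[i]:
            continue  # pair not comparable under censoring
        comparable += 1
        if risk_score[i] > risk_score[j]:
            concordant += 1.0  # higher risk, shorter survival: concordant
        elif risk_score[i] == risk_score[j]:
            concordant += 0.5
    return concordant / comparable

# toy example: four patients, one censored
print(harrell_cindex([5, 10, 12, 20], [True, True, False, True],
                     [0.9, 0.4, 0.5, 0.1]))  # → 0.8
```

A value of 0.5 corresponds to random ordering and 1.0 to perfect ordering, which puts the 0.629–0.713 range reported above into perspective.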
Moreover, an unexpected result is that Cox’s proportional hazards model was able to outperform many of the machine learning methods, including random survival forest, which is able to implicitly model non-linear relationships that are not considered by Cox’s proportional hazards model. We investigated whether this difference is significant by performing a Nemenyi post-hoc test34 based on the results of all train-test-set combinations in Figure 3. Figure 4 shows that models with embedded feature selection (gradient boosting and random survival forest) were not significantly better than models that take into account all features (Cox model and SSVM with the clinical kernel). Moreover, we observed that the linear SSVM was significantly outperformed by Cox’s proportional hazards model and random survival forest. A possible explanation why models with embedded feature selection, in particular gradient boosting with componentwise least squares base learner, performed worse in this experiment than in our first experiment might be slight differences in the importance of features between training and test data. Consider an example with two datasets consisting of 3 features: in the first dataset only features 1 and 2 are associated with survival, and in the second dataset only features 2 and 3. A parsimonious model trained on the first dataset and applied to the second dataset would likely perform worse than a model considering all features, because the data generation processes of dataset one and two only share a single feature and a model considering all features can potentially compensate for that.\n\nMethods are sorted by average rank (left to right) and groups of methods that are not significantly different are connected (p-value >0.05).\n\nTo summarize, results presented in the previous two sections demonstrate that\n\n1. SSVM should be used in combination with the clinical kernel.\n\n2. 
Increasing the number of samples is preferred over increasing the number of features, especially if follow-up periods are large.\n\n3. There is no single survival model that is clearly superior to all other survival models.\n\nFrom these observations, we concluded that employing heterogeneous survival models, trained on all 1,600 patient records in the training data, would be most reliable. We built two ensembles using Algorithm 1 and Algorithm 2: one maximizing Harrell’s concordance index19, and one minimizing the RMSE. The former was constructed from a library of 1,801 survival models for sub challenge 1a (K = 5, cmin = 0.66, τcorr = 0.6, S = 90) and the latter from a library of 1,842 regression models for sub challenge 1b (K = 5, cmin = 0.85, τcorr = 0.6, S = 92).\n\nSub challenge 1a. Four of the six survival models evaluated in the cross-validation experiments formed the basis of the ensemble. Linear SSVM was excluded because it performed poorly when not combined with the clinical kernel, and Cox’s proportional hazards model had to be excluded because we encountered numerical problems in its optimization routine. Because all survival models have one or more hyper-parameters, we populated the library of models with multiple copies of each survival model, but each with a different hyper-parameter configuration (see Table 2).\n\nAll denotes the initial size of the ensemble, Pruned the size after pruning models with Harrell’s concordance index below 0.66, and Top 5% the final size of the ensemble corresponding to the top 5% according to the combined accuracy and diversity score in Algorithm 2.\n\nFigure 5 depicts scatter plots comparing models’ performance and diversity. Most of the gradient boosting models with regression trees as base learners were pruned because their predictions were redundant to other models in the ensemble (Figure 5A). In contrast, all random survival forest models remained in the ensemble throughout (Figure 5C). 
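The two-stage selection summarized in the table caption — prune models below a performance threshold, then keep the top fraction by a combined accuracy and diversity score — can be sketched as follows. This is an illustrative reading of the procedure, not the paper’s exact Algorithm 1/2; the function name, the `1 - mean |correlation|` diversity measure, and the additive combination of the two scores are our assumptions:

```python
import numpy as np

def select_ensemble(predictions, accuracy, c_min=0.66, top_frac=0.05):
    """Two-stage ensemble selection (illustrative sketch).

    predictions: (n_models, n_samples) predicted risk scores on a
                 validation set
    accuracy:    (n_models,) e.g. cross-validated concordance index
    Returns the indices of the selected models.
    """
    preds = np.asarray(predictions, dtype=float)
    acc = np.asarray(accuracy, dtype=float)

    # Stage 1: prune models below the accuracy threshold
    keep = acc >= c_min
    preds, acc, idx = preds[keep], acc[keep], np.flatnonzero(keep)

    # Diversity of a model: 1 - mean |Pearson correlation| of its
    # predictions with those of every other surviving model
    corr = np.corrcoef(preds)
    np.fill_diagonal(corr, np.nan)
    diversity = 1.0 - np.nanmean(np.abs(corr), axis=1)

    # Stage 2: keep the top fraction by combined score
    score = acc + diversity
    n_keep = max(1, int(round(top_frac * len(score))))
    top = np.argsort(score)[::-1][:n_keep]
    return idx[top]
```

Under this reading, the pruning of redundant gradient boosting models in Figure 5A corresponds to low diversity (high correlation with other models) dragging down the combined score.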
We observed the highest diversity for gradient boosting models (mean = 0.279) and the highest accuracy for random survival forests (mean = 0.679). The final ensemble comprised all types of survival models in the library, strengthening our conclusion that a diverse set of survival models is preferred over a single model.\n\nThe concordance index was evaluated by cross-validated models on the training data from the ASCENT-2, VENICE, and MAINSAIL trials. Diversity was computed based on Pearson’s correlation coefficient between predicted risk scores for 313 patients of the ENTHUSE-33 trial (final scoring set).\n\nIn the challenge’s final evaluation based on 313 patients of the ENTHUSE-33 trial, 30 out of 51 submitted models outperformed the baseline model by Halabi et al.30 by achieving a Bayes factor greater than 316. There was a clear winner in team FIMM-UTU and the performance of the remaining models was very close to each other; there was merely a difference of 0.0171 points in integrated area under the ROC curve (iAUC) between ranks 2 and 2516.\n\nThe proposed heterogeneous ensemble of survival models by team CAMP achieved an iAUC score of 0.7646 on the test data and was ranked 23rd according to iAUC and 20th according to Bayes factor with respect to the best model (FIMM-UTU). When considering the Bayes factor of the proposed ensemble method to all other models, there is only sufficient evidence (Bayes factor greater than 3) that five models performed better (FIMM-UTU, Team Cornfield, TeamX, jls, and KUstat). The Bayes factor to the top two models was 20.3 and 6.6 and ranged between 3 and 4 for the remaining three models. With respect to the model by Halabi et al.30, there was strong evidence (Bayes factor 12.2; iAUC 0.7432) that heterogeneous ensembles of survival models could predict survival of mCRPC patients more accurately.\n\nSub challenge 1b. 
In sub challenge 1b, participants were tasked with predicting the exact time of death rather than ranking patients according to their survival time. Similar to sub challenge 1a, our final model was a heterogeneous ensemble, but based on a different library of models (see Table 3).\n\nAll denotes the initial size of the ensemble, Pruned the size after pruning models with a root mean squared error more than 15% above the error of the best performing model, and Top 5% the final size of the ensemble corresponding to the top 5% according to the combined accuracy and diversity score in Algorithm 2. AFT: Accelerated Failure Time.\n\nFigure 6 illustrates the RMSE and diversity of all 1,281 models after the first pruning step (cf. Table 3). In contrast to the ensemble of survival models used in sub challenge 1a, the ensemble in this sub challenge was characterized by very little diversity: the highest diversity was 0.064. In fact, all 92 models included in the final ensemble had a diversity score below 0.001, which means that pruning was almost exclusively based on the RMSE. Gradient boosting models with componentwise least squares base learners were completely absent from the final ensemble and only two hybrid survival support vector machine models had a sufficiently low RMSE to be among the top 5%.\n\nThe RMSE was evaluated by cross-validated models on the training data from the ASCENT-2, VENICE, and MAINSAIL trials. Diversity was computed based on Pearson’s correlation coefficient between residuals on the training data.\n\nThe evaluation of all submitted models on the challenge’s final test data from the ENTHUSE-33 trial revealed that our proposed heterogeneous ensemble of regression models achieved the lowest root mean squared error (194.4) among all submissions16. The difference in RMSE between the 1st placed model and the 25th placed model was less than 25. 
With respect to our proposed winning model, there was insufficient evidence to state it outperformed all other models, because the comparison to five other models yielded a Bayes factor less than three (Team Cornfield, M S, JayHawks, Bmore Dream Team, and A Bavarian dream).\n\n\nDiscussion\n\nFrom experiments on the challenge training data, we concluded that it would be best to combine data from all three clinical trials to train a heterogeneous ensemble, because maximizing the number of distinct time points was preferred. Interestingly, the winning team of sub challenge 1a completely excluded data from the ASCENT-2 trial in their solution. They argued that it was too dissimilar to data of the remaining three trials, including the test data35. Therefore, it would be interesting to investigate unsupervised approaches that could deduce a similarity or distance measure between patients, which can be used to decrease the influence of outlying patients during training.\n\nThe second important conclusion from our experiments is that no survival model clearly outperformed all other models in all the evaluated scenarios. Our statistical analysis based on results of the between trials validation revealed that Cox’s proportional hazards model performed significantly better than the linear survival support vector machine and gradient boosting with regression trees as base learners, and that the random survival forest performed significantly better than linear survival support vector machines; the remaining differences were deemed statistically insignificant. Therefore, we constructed a heterogeneous ensemble of several survival models with different hyper-parameter configurations and thereby avoided relying only on a single survival model with a single hyper-parameter configuration. 
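For reference, the Nemenyi post-hoc test used in this statistical analysis compares methods via their average ranks across datasets: two methods differ significantly if their average ranks differ by at least the critical difference CD = q_α · sqrt(k(k+1)/(6N)), where k is the number of methods and N the number of datasets (Demšar34). A small illustrative sketch follows; q_α must be looked up in Demšar’s table (the 2.850 used below for k = 6 at α = 0.05 should be checked against that table), and ties are given an arbitrary order here rather than the average rank of the tie group:

```python
import math
import numpy as np

def average_ranks(results):
    """results: (n_datasets, n_methods) matrix of performances, where
    higher is better (e.g. concordance index). Returns each method's
    average rank (rank 1 = best). Ties get an arbitrary order here;
    a full implementation would assign the mean rank of a tie group."""
    results = np.asarray(results, dtype=float)
    # argsort of the negated scores twice yields 0-based ranks per row
    ranks = (-results).argsort(axis=1).argsort(axis=1) + 1
    return ranks.mean(axis=0)

def nemenyi_cd(k, n, q_alpha):
    """Critical difference of the Nemenyi test:
    CD = q_alpha * sqrt(k * (k + 1) / (6 * n)),
    with k methods compared on n datasets."""
    return q_alpha * math.sqrt(k * (k + 1) / (6.0 * n))

# e.g. 6 methods compared on 12 train/test combinations
print(nemenyi_cd(6, 12, 2.850))
```

Two methods whose average ranks differ by less than this critical difference are connected in a critical-difference diagram such as Figure 4.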
In total, we considered two libraries, each consisting of over 1,800 different models, which were pruned to ensure accuracy and diversity of models – we observed only minor differences when substituting Pearson’s correlation for Kendall’s rank correlation during ensemble pruning.\n\nThe proposed ensemble approach was able to outperform all competing models in sub challenge 1b, where the task was to predict the exact time of death. In sub challenge 1a, participants had to provide a relative risk score and our ensemble approach was significantly outperformed by five competing models16. Due to large differences in teams’ overall solutions it is difficult to pinpoint the reason for the observed performance difference: it could be attributed to the choice of base learners, or to choices made during pre-processing or filtering the data. From our experience of the three intermediate scoring rounds before the final submission, we would argue that identifying the correct subset of patients in the training data that is most similar to the test data is more important than choosing a predictive model. By training a survival model on data combined from three trials and applying it to patients from a fourth trial, inconsistencies between trials inevitably lead to outliers with respect to the test data, which in turn diminishes the performance of a model – if not addressed explicitly during training.\n\nA possible explanation why the heterogeneous ensemble worked better for survival time prediction (sub challenge 1b) than for risk score prediction (sub challenge 1a) might be that we maximized the concordance index during ensemble construction and not the area under the time-dependent ROC curve, which was used in the challenge’s final evaluation. In addition, we aggregated predictions of survival models by averaging, although predictions of survival models are not necessarily on the same scale. 
In regression, the prediction is a continuous value that directly corresponds to the time of death, which allows simple averaging of individual predictions. In survival analysis, the semantics are slightly different. Although predictions are real-valued as well, the prediction of a survival model does not generally correspond to the time of death, but is a risk score on an arbitrary scale. A homogeneous ensemble only consists of models of the same type, therefore predictions can be aggregated by simply computing the average. A problem arises for heterogeneous ensembles if the scale of predicted risk scores differs among models. To illustrate the problem, consider an ensemble consisting of survival trees as used in a random survival forest2 and ranking-based linear survival support vector machines9. The prediction of the former is based on the cumulative hazard function estimated from samples residing in the leaf node a new sample was assigned to. Thus, predictions are always positive due to the definition of the cumulative hazard function (see e.g. 36). In contrast, the prediction of a linear SSVM is the inner product between a model’s vector of coefficients and a sample’s feature vector, which can take on negative as well as positive values. It is easy to see that, depending on the scale difference, simply averaging predicted risk scores favors models with generally larger risk scores (in terms of absolute value), or that positive and negative predicted risk scores cancel each other out. Instead of simply averaging risk scores, the problem could be alleviated if model risk scores were first transformed into ranks, thereby putting them on a common scale, before averaging the resulting ranks. 
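This rank transform can be sketched in a few lines; the helper names are ours, and ties are resolved by assigning the mean rank of the tie group:

```python
def to_ranks(scores):
    """Map risk scores to 1-based ranks (ties share the mean rank)."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    ranks = [0.0] * len(scores)
    i = 0
    while i < len(order):
        j = i
        # extend the tie group of equal scores
        while j + 1 < len(order) and scores[order[j + 1]] == scores[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0  # mean 1-based rank of the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def aggregate_by_rank(model_predictions):
    """model_predictions: list of per-model risk-score lists over the
    same patients. Each model's scores are replaced by their ranks,
    then ranks are averaged per patient."""
    rank_lists = [to_ranks(p) for p in model_predictions]
    n = len(model_predictions[0])
    return [sum(r[i] for r in rank_lists) / len(rank_lists) for i in range(n)]

# cumulative-hazard-based scores (positive) and SVM scores (signed)
# that rank the same three patients consistently:
print(aggregate_by_rank([[2.5, 0.8, 1.7], [1.2, -0.9, 0.3]]))  # → [3.0, 1.0, 2.0]
```

Because both toy models order the patients identically, the scale mismatch between the positive hazard-based scores and the signed SVM scores no longer distorts the average.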
We evaluated this approach after the Prostate Cancer DREAM Challenge ended: averaging ranks instead of raw predicted risk scores increased the iAUC value from 0.7644 to 0.7705 on a random subsample of the ENTHUSE-33 trial.\n\n\nConclusions\n\nWe proposed heterogeneous survival ensembles that are able to aggregate predictions from a wide variety of survival models. We evaluated our method using data from an independent fourth trial from the Prostate Cancer DREAM Challenge. Our proposed ensemble approach could predict the exact time of death more accurately than any competing model in sub challenge 1b and was significantly outperformed by 5 out of 50 competing solutions in sub challenge 1a. We believe this result is encouraging and warrants further research in using heterogeneous ensembles for survival analysis. The source code is available online at https://www.synapse.org/#!Synapse:syn3647478.\n\n\nData availability\n\nThe Challenge datasets can be accessed at: https://www.projectdatasphere.org/projectdatasphere/html/pcdc\n\nChallenge documentation, including the detailed description of the Challenge design, overall results, scoring scripts, and the clinical trials data dictionary can be found at: https://www.synapse.org/ProstateCancerChallenge\n\nThe code and documentation underlying the method presented in this paper can be found at: http://dx.doi.org/10.7303/syn3647478",
"appendix": "Author contributions\n\n\n\nSP prepared the raw datasets, implemented the survival models, and wrote the manuscript. PG and SP performed analyses to establish the final models. LW and SC contributed to establishing the final models. AK and NN supervised the analysis.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis work was supported by the German Research Foundation (DFG) and the Technische Universität München within the funding program Open Access Publishing.\n\n\nAcknowledgements\n\nWe thank Sage Bionetworks, the DREAM organization, and Project Data Sphere for developing and supplying data for the Prostate Cancer DREAM Challenge. We thank the Leibniz Supercomputing Centre (LRZ, www.lrz.de) for providing the computational resources for our experiments. This publication is based on research using information obtained from www.projectdatasphere.org, which is maintained by Project Data Sphere, LLC. Neither Project Data Sphere, LLC nor the owner(s) of any information from the web site have contributed to, approved or are in any way responsible for the contents of this publication.\n\n\nSupplementary material\n\nData pre-processing, missing imputations, and hyper-parameter configurations.\n\nClick here to access the data\n\n\nReferences\n\nCox DR: Regression models and life tables. J R Stat Soc Series B. 1972; 34(2): 187–220. Reference Source\n\nIshwaran H, Kogalur UB, Blackstone EH, et al.: Random survival forests. Ann Appl Stat. 2008; 2(3): 841–860. Publisher Full Text\n\nRidgeway G: The state of boosting. Comput Sci Stat. 1999; 31: 172–181. Reference Source\n\nHothorn T, Bühlmann P, Dudoit S, et al.: Survival ensembles. Biostatistics. 2006; 7(3): 355–373. Publisher Full Text\n\nVan Belle V, Pelckmans K, Suykens JAK, et al.: Support vector machines for survival analysis. In Proc of the 3rd International Conference on Computational Intelligence in Medicine and Healthcare. 2007; 1–8. 
Reference Source\n\nShivaswamy PK, Chu W, Jansche M: A support vector approach to censored targets. In 7th IEEE International Conference on Data Mining. 2007; 655–660. Publisher Full Text\n\nKhan FM, Zubek VB: Support vector regression for censored data (SVRc): A novel tool for survival analysis. In 8th IEEE International Conference on Data Mining. 2008; 863–868. Publisher Full Text\n\nEleuteri A: Support vector survival regression. In 4th IET International Conference on Advances in Medical, Signal and Information Processing. 2008; 1–4. Publisher Full Text\n\nPölsterl S, Navab N, Katouzian A: Fast training of support vector machines for survival analysis. In Machine Learning and Knowledge Discovery in Databases, volume 9285 of Lecture Notes in Computer Science. 2015; 243–259. Publisher Full Text\n\nHansen LK, Salamon P: Neural network ensembles. IEEE Transactions on Pattern Analysis and Machine Intelligence. 1990; 12(10): 993–1001. Publisher Full Text\n\nDietterich TG: Ensemble methods in machine learning. In 1st International Workshop on Multiple Classifier Systems. 2000; 1857: 1–15. Publisher Full Text\n\nCaruana R, Niculescu-Mizil A, Crew G, et al.: Ensemble selection from libraries of models. In 22nd International Conference on Machine Learning. 2004.\n\nMargineantu DD, Dietterich TG: Pruning adaptive boosting. In 14th International Conference on Machine Learning. 1997; 211–218. Reference Source\n\nCohen J: A coefficient of agreement for nominal scales. Educ Psychol Meas. 1960; 20(1): 37–46. Publisher Full Text\n\nRooney N, Patterson D, Anand S, et al.: Dynamic integration of regression models. In Proc of the 5th International Workshop on Multiple Classifier Systems. 2004; 3077: 164–173. Publisher Full Text\n\nCostello J, Guinney J, Zhou L, et al.: Prostate Cancer DREAM Challenge. Publisher Full Text\n\nKirby M, Hirst C, Crawford ED: Characterising the castration-resistant prostate cancer population: a systematic review. Int J Clin Pract. 2011; 65(11): 1180–1192. 
PubMed Abstract | Publisher Full Text\n\nCaruana R, Munson A, Niculescu-Mizil A: Getting the most out of ensemble selection. In 6th IEEE International Conference on Data Mining. 2006; 828–833. Publisher Full Text\n\nHarrell FE Jr, Califf RM, Pryor DB, et al.: Evaluating the yield of medical tests. JAMA. 1982; 247(18): 2543–2546. PubMed Abstract | Publisher Full Text\n\nUno H, Cai T, Tian L, et al.: Evaluating prediction rules for t-year survivors with censored regression models. J Am Stat Assoc. 2007; 102(478): 527–537. Publisher Full Text\n\nHung H, Chiang CT: Estimation methods for time-dependent AUC models with survival data. Can J Stat. 2010; 38(1): 8–26. Publisher Full Text\n\nScher HI, Jia X, Chi K, et al.: Randomized, open-label phase III trial of docetaxel plus high-dose calcitriol versus docetaxel plus prednisone for patients with castration-resistant prostate cancer. J Clin Oncol. 2011; 29(16): 2191–2198. PubMed Abstract | Publisher Full Text\n\nPetrylak DP, Vogelzang NJ, Budnik N, et al.: Docetaxel and prednisone with or without lenalidomide in chemotherapy-naive patients with metastatic castration-resistant prostate cancer (MAINSAIL): a randomised, double-blind, placebo-controlled phase 3 trial. Lancet Oncol. 2015; 16(4): 417–425. PubMed Abstract | Publisher Full Text\n\nTannock IF, Fizazi K, Ivanov S, et al.: Aflibercept versus placebo in combination with docetaxel and prednisone for treatment of men with metastatic castration-resistant prostate cancer (VENICE): a phase 3, double-blind randomised trial. Lancet Oncol. 2013; 14(8): 760–768. PubMed Abstract | Publisher Full Text\n\nFizazi K, Higano CS, Nelson JB, et al.: Phase III, randomized, placebo-controlled study of docetaxel in combination with zibotentan in patients with metastatic castration-resistant prostate cancer. J Clin Oncol. 2013; 31(14): 1740–1747. PubMed Abstract | Publisher Full Text\n\nDaemen A, Timmerman D, Van den Bosch T, et al.: Improved modeling of clinical data with kernel methods. 
Artif Intell Med. 2012; 54(2): 103–114. PubMed Abstract | Publisher Full Text\n\nBreiman L, Friedman JH, Stone CJ, et al.: Classification and Regression Trees. Wadsworth International Group. 1984.\n\nBreiman L: Random forests. Mach Learn. 2001; 45(1): 5–32. Publisher Full Text\n\nBühlmann P, Yu B: Boosting with the L2 loss. J Am Stat Assoc. 2003; 98(462): 324–339. Publisher Full Text\n\nHalabi S, Lin CY, Kelly WK, et al.: Updated prognostic model for predicting overall survival in first-line chemotherapy for patients with metastatic castration-resistant prostate cancer. J Clin Oncol. 2014; 32(7): 671–677. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWasserman L: Bayesian Model Selection and Model Averaging. J Math Psychol. 2000; 44(1): 92–107. PubMed Abstract | Publisher Full Text\n\nJeffreys H: The Theory of Probability. Oxford University Press. 1961. Reference Source\n\nAntolini L, Boracchi P, Biganzoli E: A time-dependent discrimination index for survival data. Stat Med. 2005; 24(24): 3927–3944. PubMed Abstract | Publisher Full Text\n\nDemšar J: Statistical comparisons of classifiers over multiple data sets. J Mach Learn Res. 2006; 7: 1–30. Reference Source\n\nLaajala TD, Khan S, Airola A, et al.: Predicting patient survival and treatment discontinuation in DREAM 9.5 mCRPC challenge. Reference Source\n\nKlein JP, Moeschberger ML: Survival Analysis: Techniques for Censored and Truncated Data. Springer, 2nd edition. 2003. Publisher Full Text\n\nPölsterl S, Gupta P, Wang L, et al.: Team CAMP – DREAM Prostate Cancer Challenge. Synapse Storage. 2016. Publisher Full Text"
}
|
[
{
"id": "18609",
"date": "19 Dec 2016",
"name": "Donna P. Ankerst",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThe authors are to be congratulated for landing among the circle of winners of the Prostate Cancer DREAM Challenge and for clearly describing their innovative methods in this paper. An informative discussion critically appraises their approach, providing suggestions for advancing the field of clinical risk prediction. Instead of relying on one survival model, their approach hinges on heterogeneous ensembles that invoke a variety of model types, including gradient boosting (least squares versus trees), random survival forests, and survival support vector machines (linear versus clinical kernels), thereby hedging against sub-optimality of any single model for any single test set. I have only minor comments.\nIt is argued throughout that heterogeneous ensembles have been shown to be optimal compared to single models for this challenge, but I did not see a head-to-head comparison illustrating this. For example, could one not add ensemble methods as an extra column to the within- and between-trial validations in Figures 2 and 3, respectively?\n\nI greatly appreciated Figure 4 that showed which of the multiple comparisons in Figure 3 (the between-trial validation) were actually critically different, as many of the iAUCs only differed out to the second decimal (which is by the way a clinically meaningless difference). 
It would be nice to also have such a comparison for Figure 2 (the within-trial validation) that could definitely show whether or not the Cox model was statistically indistinguishable from random forests, and to temper the Results section concerning the comparison of the methods. One method only beats another if the confidence intervals of the respective AUCs do not overlap. Given their similar performance, the comparison among the different individual survival models might not be as relevant as whether or not the ensemble outperformed any one of them.\n\nAs nicely pointed out in the Discussion, it is a surprise and a great pity that the concordance statistic c was used for the training of the models instead of the iAUC, the criterion used for evaluation for the challenge. While easy to compute, the concordance statistic suffers greatly from censored observations: they are essentially discarded in the evaluation. This means that only a minority of the data in the ASCENT and MAINSAIL trials were used (71% and 82.5% of the data censored). The iAUC, however, also suffers from censored data, but from what I understand, to a lesser extent. Is it possible to redo Figures 2 and 3 using the iAUC instead of the concordance statistic, to see if similar conclusions held?\n\nIn the discussion of the within-trial internal cross-validation of Figure 2 it is mentioned that some of the methods may have performed poorly because of a difference in follow-up between the random partitions of the trial into training and test sets. In medical studies, this is often controlled using stratified randomization, which ensures the proportion of observed events (deaths in this case) or follow-up remains equal across the sets. Would it be possible to implement this to see if it improved the outcome for VENICE, in order to help explain the poor behavior there? 
It, of course, does not help the between-trial validation, the subject of the next point.\n\nThe problem of recalibration to different trials is becoming more and more recognized in medicine; searching for “recalibration risk score” or “recalibration risk model” in PubMed reveals hundreds of suggestions and applications. The authors do a nice job of illustrating the particular difficulties with survival data – a look at Figure 1 shows that median follow-up in the held-out ENTHUSE-33 trial was longer than in two of the trials used for training. In our analysis for the challenge we showed that recalibration made a big difference for the root-mean-squared-error in Subchallenge 1b but not the iAUC in Subchallenge 1a, matching previous results we have obtained in proposals to dynamically update risk models (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4532612/). Recalibration means any method to tweak an existing risk model to conform to a particular target population, but has the problem that it requires data from the intended target population, something not generally possible for clinical practice. I agree with the authors that this could have improved their models and would like to see more discussion of the literature on recalibration of survival models.\n\nIn the Discussion of the between-trials validation, the authors try to explain the surprising result that the simpler Cox model with its stringent proportional hazards and linear assumption performs as well as some of the other models that incorporate non-linearity. I think lack of statistical power, i.e., small sample size, may be another culprit here. The effective information size for survival data (defined as the size of the information matrix) is only proportional to the number of observed events and not the total sample size; this is an issue that clinical trial statisticians who design trials understand well, but unfortunately not the rest of the community. 
It was a point I tried to raise at the first Challenge webinar, foreseeing that there would be many ties among winners due to the high censoring. While for training it seemed like there were trials of size 476, 526, and 598 patients for the respective trials in Figure 1, with a total of 1600 patients, the effective information content was only 138, 92, and 432, respectively, for a total of 662 patients. Simulation studies would reveal what sample size would be needed to detect nonlinearities of different magnitudes. My point is not to suggest doing these, but rather to modify the discussion that the high performance of Cox’s simpler model may be due to the Occam’s Razor principle: if there exist two explanations for data, the simpler is preferred.\n\nIn light of Point 6.), it is a pity that the well-performing Cox’s proportional hazards model was eventually dropped because of numerical problems. Our team used this model without much difficulty. Can the authors elaborate here or propose suggestions for overcoming the numerical difficulties? For example, could it be that the input data contained a lot of features with anomalies that should have been cleaned out?\n\nI realize it was not the point of this paper, but it is a pity that there is no discussion of the specifics of the 90 features that ultimately made it into the prediction models. Were they the same as the ones found by Halabi et al.? 90 features are a lot and not generally implementable in online risk tools designed to help patients – would there be a way to summarize the features that are most important in order to help clinicians understand the important indicators?\n\nLooking back at the Halabi paper, which has a simple Cox model with a handful of predictors that is immediately interpretable, the AUC obtained there on the test set (0.76) seems close to those obtained in this challenge. 
The AUC is a rank-based discrimination measure that reflects the probability that, for a randomly selected pair of patients, the patient who died later had a lower risk score; differences have to be interpreted relative to this meaning. I would like to hear the authors’ reflection as to whether the DREAM Challenge has proven the case for the large-scale methods used in the Challenge or against them. What future directions are needed to improve prediction? Some, like myself, would argue that new markers need to be discovered rather than bigger models.",
"responses": [
{
"c_id": "2826",
"date": "27 Jun 2017",
"name": "Sebastian Pölsterl",
"role": "Author Response",
"response": "The authors are to be congratulated for landing among the circle of winners of the Prostate Cancer DREAM Challenge and for clearly describing their innovative methods in this paper. An informative discussion critically appraises their approach, providing suggestions for advancing the field of clinical risk prediction. Instead of relying on one survival model, their approach hinges on heterogeneous ensembles that invoke a variety of model types, including gradient boosting (least squares versus trees), random survival forests, and survival support vector machines (linear versus clinical kernels), thereby hedging against sub-optimality of any single model for any single test set. I have only minor comments. It is argued throughout that heterogeneous ensembles have been shown to be optimal compared to single models for this challenge, but I did not see a head-to-head comparison illustrating this. For example, could one not add ensemble methods as an extra column to the within- and between-trial validations in Figures 2 and 3, respectively? Response: We included heterogeneous ensembles in the between trials validation (see figures 3 and 5) and in our discussion of the results. I greatly appreciated Figure 4 that showed which of the multiple comparisons in Figure 3 (the between-trial validation) were actually critically different, as many of the iAUCs only differed out to the second decimal (which is by the way a clinically meaningless difference). It would be nice to also have such a comparison for Figure 2 (the within-trial validation) that could definitely show whether or not the Cox model was statistically indistinguishable from random forests, and to temper the Results section concerning the comparison of the methods. One method only beats another if the confidence intervals of the respective AUCs do not overlap. 
Given their similar performance, the comparison among the different individual survival models might not be as relevant as whether or not the ensemble outperformed any one of them. Response: As suggested, we added a plot for the results presented in figure 2. It shows that the Cox model and random survival forest only significantly outperform linear SVM, whereas the performance of the other methods lies within the critical difference interval. As nicely pointed out in the Discussion, it is a surprise and a great pity that the concordance statistic c was used for the training of the models instead of the iAUC, the criterion used for evaluation for the challenge. While easy to compute, the concordance statistic suffers greatly from censored observations; they are essentially discarded in the evaluation. This means that only a minority of the data in the ASCENT and MAINSAIL trials were used (71% and 82.5% of the data censored). The iAUC, however, also suffers from censored data, but from what I understand, to a lesser extent. Is it possible to redo Figures 2 and 3 using the iAUC instead of the concordance statistic, to see if similar conclusions held? Response: We did perform the same analyses as depicted in figures 2 and 3 using iAUC as the evaluation criterion. When ranking methods according to average iAUC, one arrives at the same conclusion as when ranking according to average c-index. However, the average performance with respect to the test datasets is quite different. As we pointed out in the main text, this is due to the definition of the iAUC used in the Prostate Cancer DREAM Challenge, which is the integral over time points every 6 months up to 30 months after the first day of treatment. This would cover most time points in ASCENT-2 and MAINSAIL, but would miss many events occurring after 30 months for VENICE (cf. figure 1). Consequently, it appears that all methods perform considerably better when tested on the VENICE data. 
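A minimal sketch of Harrell's concordance statistic makes the censoring weakness discussed above concrete; the toy cohort below is illustrative only, not challenge data.

```python
def concordance_index(time, event, risk):
    """Harrell's c: fraction of comparable pairs whose predicted risks are
    ordered consistently with their survival times. A pair (i, j) is
    comparable only if the earlier time is an observed event, so censored
    observations never anchor a comparison, which is the weakness noted above."""
    concordant, comparable = 0.0, 0
    for i in range(len(time)):
        for j in range(len(time)):
            if event[i] and time[i] < time[j]:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / comparable

# toy cohort: times in months, event=0 means censored
time = [2, 4, 5, 7, 9]
event = [1, 0, 1, 1, 0]
risk = [0.9, 0.6, 0.3, 0.4, 0.2]
print(concordance_index(time, event, risk))  # 6 of 7 comparable pairs agree, ~0.857
```

With heavy censoring, few pairs are comparable, so the estimate rests on a small effective sample, matching the reviewer's observation about ASCENT and MAINSAIL.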
Usually, it is recommended to choose the limits of the interval to integrate over from the data, e.g. the 5% to 90% percentile of observed time points. However, the iAUC would be based on a different interval for each study, making inter-study comparisons difficult to interpret. Therefore, we believe that the c-index is easier to interpret when considering the inter-study comparison. In addition, we re-trained our heterogeneous ensemble using the iAUC metric in algorithm 2 and submitted its prediction after the challenge concluded. We obtained an iAUC of 0.7636 compared to 0.7644 when using c-index, and 0.7537 for the Halabi model. In the discussion of the within-trial internal cross-validation of Figure 2 it is mentioned that some of the methods may have performed poorly because of a difference in follow-up between the random partitions of the trial into training and test sets. In medical studies, this is often controlled using stratified randomization, which ensures the proportion of observed events (deaths in this case) or follow-up remains equal across the sets. Would it be possible to implement this to see if it improved the outcome for VENICE, in order to help explain the poor behavior there? It, of course, does not help the between-trial validation, the subject of the next point. Response: We implemented the suggested modification to perform stratified cross-validation and repeated the experiment. The results are very similar to figure 2: the average performance of all methods is still worst when trained and tested on data from the VENICE study. The problem of recalibration to different trials is becoming more and more recognized in medicine; searching for “recalibration risk score” or “recalibration risk model” in PubMed reveals hundreds of suggestions and applications. 
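The stratified cross-validation discussed in that exchange can be sketched in pure Python; the fold assignment below stratifies on the event indicator so every fold keeps the overall death rate (toy cohort with a hypothetical 30% event rate, not the trial data).

```python
import random

def stratified_folds(event, n_folds=5, seed=0):
    """Assign sample indices to folds so that each fold preserves the
    overall proportion of observed events (deaths) vs. censored cases."""
    rng = random.Random(seed)
    folds = [[] for _ in range(n_folds)]
    for label in (0, 1):  # 0 = censored, 1 = observed event
        idx = [i for i, e in enumerate(event) if e == label]
        rng.shuffle(idx)
        for k, i in enumerate(idx):
            folds[k % n_folds].append(i)
    return folds

# toy cohort: 30 observed deaths, 70 censored
event = [1] * 30 + [0] * 70
folds = stratified_folds(event)
print([sum(event[i] for i in f) for f in folds])  # [6, 6, 6, 6, 6]
```

Each fold ends up with 20 patients and exactly 6 events, so the event rate is identical across training and test partitions.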
The authors do a nice job of illustrating the particular difficulties with survival data – a look at Figure 1 shows that median follow-up in the held-out ENTHUSE-33 trial was longer than two of the trials used for training. In our analysis for the challenge we showed that recalibration made a big difference for the root-mean-squared-error in Subchallenge 1b but not the iAUC in Subchallenge 1a, matching previous results we have obtained in proposals to dynamically update risk models (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4532612/). Recalibration means any method to tweak an existing risk model to conform to a particular target population, but has the problem that it requires data from the intended target population, something not generally possible for clinical practice. I agree with the authors that this could have improved their models and would like to see more discussion of the literature on recalibration of survival models. Response: We agree with the referee that calibration is an important aspect besides discrimination that should be considered for prognostic models. The focus of the referenced article is on calibration of binary classification models, which is very different from calibration with respect to survival models. In contrast to binary classification, predicted risk scores of a survival model represent only relative risk, not absolute risk (Royston and Altman. BMC Medical Research Methodology, 13(1), 2013. http://doi.org/10.1186/1471-2288-13-33). Although a measure of absolute risk can be derived for the Cox model by estimating the baseline hazard function, to the best of our knowledge, there is no standard approach to statistically assess calibration for arbitrary survival models. We could only visually assess calibration by constructing low, medium, and high risk groups corresponding to cut-offs at the 33% and 66% percentiles of predicted risk scores. 
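The tertile-based grouping just described, together with a bare-bones Kaplan-Meier estimator for plotting each group, can be sketched as follows (toy inputs; the percentile cut points are approximate and plotting is omitted):

```python
def risk_tertiles(scores):
    """Assign low (0), medium (1), and high (2) risk groups using the
    approximate 33% and 66% percentiles of the predicted risk scores."""
    cut = sorted(scores)
    lo, hi = cut[len(cut) // 3], cut[2 * len(cut) // 3]
    return [0 if s <= lo else 1 if s <= hi else 2 for s in scores]

def kaplan_meier(times, events):
    """Kaplan-Meier curve for one risk group: the survival estimate steps
    down at each observed event; censored subjects only shrink the risk set."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    s, at_risk, curve = 1.0, len(times), []
    for i in order:
        if events[i]:
            s *= 1 - 1 / at_risk
            curve.append((times[i], s))
        at_risk -= 1
    return curve

print(risk_tertiles([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]))
# [0, 0, 0, 0, 1, 1, 1, 2, 2]
print(kaplan_meier([1, 2, 3, 4], [1, 0, 1, 1]))
# [(1, 0.75), (3, 0.375), (4, 0.0)]
```

Cutting the training and hold-out scores at the same thresholds and overlaying the per-group curves gives exactly the visual agreement check the response describes.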
Using the same cut-offs on predicted risk scores from the hold-out dataset, we constructed two Kaplan-Meier plots, each stratified by risk group. For a well-calibrated model, we would expect the Kaplan-Meier curves derived from the training and hold-out data to agree with each other. The disadvantage of this approach is that we cannot precisely quantify the lack of calibration. In the Discussion of the between-trials validation, the authors try to explain the surprising result that the simpler Cox model with its stringent proportional hazards and linearity assumptions performs as well as some of the other models that incorporate non-linearity. I think lack of statistical power, i.e., small sample size, may be another culprit here. The effective information size for survival data (defined as the size of the information matrix) is only proportional to the number of observed events and not the total sample size; this is an issue that clinical trial statisticians who design trials understand well, but unfortunately not the rest of the community. It was a point I tried to raise at the first Challenge webinar, foreseeing that there would be many ties among winners due to the high censoring. While for training it seemed like there were trials of size 476, 526 and 598 patients for the respective trials in Figure 1, with a total of 1600 patients, the effective information content was only 138, 92, and 432, respectively, for a total of 662 patients. Simulation studies would reveal what sample size would be needed to detect nonlinearities of different magnitudes. My point is not to suggest doing these, but rather to modify the discussion that the high performance of Cox’s simpler model may be due to the Occam’s Razor principle, that if there exist two explanations for the data, the simpler is preferred. Response: Thank you for pointing out that effective sample size could be limiting the performance of more complicated models. 
We added a paragraph discussing this issue to the “Between trials validation” section. In light of Point 6), it is a pity that the well-performing Cox’s proportional hazards model was eventually dropped because of numerical problems. Our team used this model without much difficulty. Can the authors elaborate here or propose suggestions for overcoming the numerical difficulties? For example, could it be that the input data contained a lot of features with anomalies that should have been cleaned out? Response: We fit Cox’s proportional hazards model using a Newton-Raphson algorithm with constant step size, i.e., without a line search. We observed that optimization sometimes diverged, which can be attributed to choosing a constant step size. If the chosen step size is too large, it can lead to oscillation around the minimum due to overshooting, and the minimum is never reached. A crude solution would be to increase the tolerance that determines convergence or choose a different starting point. A better solution would be to determine the optimal step size in each iteration of Newton’s method via line search or by employing a trust-region method (see e.g. Boyd and Vandenberghe, Convex Optimization, Cambridge University Press, 2009). Unfortunately, we were unable to implement this modification before the challenge was closed. I realize it was not the point of this paper, but it is a pity that there is no discussion of the specifics of the 90 features that ultimately made it into the prediction models. Were they the same as the ones found by Halabi et al.? 90 features are a lot and not generally implementable in online risk tools designed to help patients – would there be a way to summarize the features that are most important in order to help clinicians understand the important indicators? Response: Our final ensemble considered 217 features (see table 1), which included those found by Halabi et al., and was comprised of 90 different base learners (see table 3). 
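Returning to the numerical point above: the line-search remedy can be illustrated in one dimension. The objective below is an arbitrary smooth convex stand-in (not the Cox partial likelihood); far from its minimum the curvature is nearly flat, so the full Newton step is enormous and a constant-step iteration would overshoot, while backtracking tames it.

```python
import math

def sigmoid(z):
    """Overflow-safe logistic function."""
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    ez = math.exp(z)
    return ez / (1.0 + ez)

def softplus(z):
    """Overflow-safe log(1 + exp(z))."""
    return z + math.log1p(math.exp(-z)) if z > 0 else math.log1p(math.exp(z))

def newton_backtracking(f, grad, hess, x0, tol=1e-8, beta=0.5, max_iter=100):
    """Newton's method with a backtracking line search: halve the step until
    the objective actually decreases, preventing the oscillation around the
    minimum that a constant step size can cause."""
    x = x0
    for _ in range(max_iter):
        g = grad(x)
        if abs(g) < tol:
            break
        step = -g / hess(x)              # full Newton step
        t = 1.0
        while f(x + t * step) > f(x):    # backtrack on overshoot
            t *= beta
        x += t * step
    return x

# stand-in convex objective f(x) = softplus(4x) - x, minimised where
# 4*sigmoid(4x) = 1, i.e. at x* = -ln(3)/4; its Hessian is tiny far from x*
f = lambda x: softplus(4 * x) - x
grad = lambda x: 4 * sigmoid(4 * x) - 1
hess = lambda x: 16 * sigmoid(4 * x) * (1 - sigmoid(4 * x))

x_star = newton_backtracking(f, grad, hess, x0=3.0)
print(round(x_star, 6))  # -0.274653, matching -ln(3)/4
```

From the same start, the undamped update (always t = 1) jumps tens of thousands of units away on the first iteration, which mirrors the divergence the response describes.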
We agree that the raw predictive performance of a model often provides insufficient information in medical research. Clearly, there is a trade-off between model complexity and how well a model can be interpreted. As we mentioned in the introduction of our manuscript, an ensemble approach is only beneficial if base learners perform better than random prediction and are diverse. The latter can be achieved by using base learners that are based on different loss functions (heterogeneous ensemble) or by using the same loss for all base learners and forcing each base learner to consider different subsets of features (homogeneous ensemble). We chose to utilize both types by selecting base learners from a large library of different models and identical, but differently parametrized, models. Consequently, we encourage base learners to weight features differently, which makes creating a universal ranking of features challenging. Although feature importances are not directly available from our final ensemble, there are several alternative ways to obtain insight. For instance, Breiman (Machine Learning, 45:1, 2001. http://dx.doi.org/10.1023/A:1010933404324) suggested a variable importance measure for random forests that could be adapted. The j-th feature is randomly permuted for all out-of-bag samples and run down the corresponding tree. The output is the relative increase in prediction error as compared to if the j-th feature is intact. Features with a larger increase in prediction error are considered more important to the ensemble. If one wants to infer which interactions among features the ensemble considers, more sophisticated methods are available (e.g. Henelius et al., SLDS 2015. http://dx.doi.org/10.1007/978-3-319-17091-6_5). Looking back at the Halabi paper, which has a simple Cox model with a handful of predictors that is immediately interpretable, the AUC obtained there on the test set (0.76) seems close to those obtained in this challenge. 
The AUC is a rank-based discrimination measure that reflects the probability that, for a randomly selected pair of patients, the patient who died later had a lower risk score; differences have to be interpreted relative to this meaning. I would like to hear the authors’ reflection as to whether the DREAM Challenge has proven the case for the large-scale methods used in the Challenge or against them. What future directions are needed to improve prediction? Some, like myself, would argue that new markers need to be discovered rather than bigger models. Response: The Prostate Cancer DREAM challenge provided high-quality data to a large group of researchers, which led to improved prediction performance compared to the model by Halabi et al. and highlighted interesting problems for future research. In particular, we believe that future research should focus on how to best utilize data from multiple clinical trials and how to adapt a model to new patient cohorts. As described in our manuscript, the Prostate Cancer DREAM challenge compiled data from four clinical trials, with each trial having its own characteristics, ranging from different follow-up periods to different clinical information collected. In light of these differences, just combining all the data and learning a model on top of it is likely to lead to a poor model, despite an increase in sample size. Multiple teams identified this problem and tried to address it. Most importantly, the winning team (FIMM-UTU) selected a subset of the provided patient data so as to obtain a coherent patient sample for training their model. By identifying and omitting patients that appear considerably different from the remaining patients, they successfully lessened the effect of study-specific batch-effects. Another interesting approach has been proposed by team A Bavarian dream (as pointed out by the reviewer above). They used recalibration methods to adapt their model to the target study, which was used for final evaluation. 
In conclusion, we believe that the biggest improvements in risk prediction were not due to identifying new risk markers, but to choosing methods that account for sub-structures in the data. More research is needed to reliably detect such sub-structures and to overcome the problems they entail."
}
]
},
{
"id": "20214",
"date": "06 Mar 2017",
"name": "Jinfeng Xiao",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThis paper is written by CAMP, a winning team of the 2015 Prostate Cancer DREAM Challenge (“the PCDC”, or “the challenge”), to introduce their winning method. The authors built heterogeneous ensembles with the training data (Trials ASCENT-2, MAINSAIL, VENICE) and the unlabeled part of the validation data (Trial ENTHUSE-33). The high performance of their method, especially in predicting patients’ days to death, was confirmed by the challenge organizers on the validation data. This manuscript contains sufficient details about the actual method they used in the PCDC. Achilles’ heels for this paper include: 1) To show the necessity of using ensembles; 2) To establish the generalizability of the proposed ensemble models to new data sets. See the major issues below for more detailed comments.\n\nMajor issues:\nThe power of averaging over the base learners was taken for granted in the paper without experimental evidence. Training an ensemble costs much more effort than training a single model, and therefore it has to be shown that such effort is worth it. Direct comparison in performance between the ensemble and base learners is needed to make this point clear. The training of the ensembles, in particular the ensemble pruning step, used information from the validation set. Although only the features, but not the outcomes, of the validation data were seen by the model, this practice is still not encouraged. 
A generalizable model should not use the validation data in any way during training. Therefore, whether the proposed method is generalizable to new data sets is in doubt. I would suggest that the authors prune the ensemble on the training set and check the performance on the validation set. There is no instruction in the code documentation about how to apply the code to new data sets. Adding such information can greatly increase the chance that the code will be used by other researchers. Can the authors mine some knowledge from the trained model? For example, what are the most important features? Where are the baseline (i.e. Halabi’s model1) features in the ranked list? Such analysis of the model can be helpful to biomedical researchers and doctors.\n\nMinor issues:\nIn Algorithms 1 & 2, how did the authors choose the minimum desired performance cmin and the desired set of ensemble S? Page 6, paragraph 2, line 3: “Median” should be changed to “standard deviation” or some other measures of variance, because in a within-trial validation the “median” is not directly related to “the difference between observed time points in the training and test data” (lines 5-6). Page 8, paragraph 2, the last 8 lines: This example is not very convincing. A model considering all features trained on the first dataset will assign a very small (if not zero) weight to feature 3, which will compensate little for the fact that feature 3 is important in the second dataset. Page 8, paragraph 5: What numerical difficulties did the authors encounter so that they could not include the Cox regression in the ensembles? Is there anything special about the Cox model that makes it harder to train than other base learners? It is not explicitly stated in the paper that the authors are from Team CAMP.\n\nGrammar: Page 4, last paragraph: “within-in trial validation” should be “within-trial validation”; “between trials validation” should be “between-trial validation”.",
"responses": [
{
"c_id": "2827",
"date": "27 Jun 2017",
"name": "Sebastian Pölsterl",
"role": "Author Response",
"response": "This paper is written by CAMP, a winning team of the 2015 Prostate Cancer DREAM Challenge (“the PCDC”, or “the challenge”), to introduce their winning method. The authors built heterogeneous ensembles with the training data (Trials ASCENT-2, MAINSAIL, VENICE) and the unlabeled part of the validation data (Trial ENTHUSE-33). The high performance of their method, especially in predicting patients’ days to death, was confirmed by the challenge organizers on the validation data. This manuscript contains sufficient details about the actual method they used in the PCDC. Achilles’ heels for this paper include: 1) To show the necessity of using ensembles; 2) To establish the generalizability of the proposed ensemble models to new data sets. See the major issues below for more detailed comments. Major issues: The power of averaging over the base learners was taken for granted in the paper without experimental evidence. Training an ensemble costs much more effort than training a single model, and therefore it has to be shown that such effort is worth it. Direct comparison in performance between the ensemble and base learners is needed to make this point clear. Response: We included heterogeneous ensembles in the between-trial validation (see figures 3 and 5) and in our discussion of the results. The training of the ensembles, in particular the ensemble pruning step, used information from the validation set. Although only the features, but not the outcomes, of the validation data were seen by the model, this practice is still not encouraged. A generalizable model should not use the validation data in any way during training. Therefore, whether the proposed method is generalizable to new data sets is in doubt. I would suggest that the authors prune the ensemble on the training set and check the performance on the validation set. 
Response: For survival data, the pruning step is delayed until prediction is performed, because predictions are risk scores on an arbitrary scale for which a per-sample error measure is not readily available. This is in contrast to ensemble pruning for regression problems, where a per-sample error can be easily computed and models having highly correlated errors are pruned. As referee 2 suggests, the pruning step could be performed via cross-validation on the training data. However, our pruning step does not take survival times or censoring status into account; therefore, we prefer to delay the pruning step as long as possible so as to avoid overfitting on the training data. If the additional costs associated with storing the ensemble before pruning are prohibitive, we recommend that pruning should be performed via cross-validation on the training data. There is no instruction in the code documentation about how to apply the code to new data sets. Adding such information can greatly increase the chance that the code will be used by other researchers. Response: Our source code is accompanied by a README file explaining all steps necessary to reproduce our results. The source code can be downloaded from Synapse (http://dx.doi.org/10.7303/syn3647478) as well as GitHub (https://github.com/tum-camp/dream-prostate-cancer-challenge). Can the authors mine some knowledge from the trained model? For example, what are the most important features? Where are the baseline (i.e. Halabi’s model1) features in the ranked list? Such analysis of the model can be helpful to biomedical researchers and doctors. Response: Please see our response to question 8 of referee 1. Minor issues: In Algorithms 1 & 2, how did the authors choose the minimum desired performance cmin and the desired set of ensemble S? Response: We chose cmin = 0.66 based on results of the within-trial validation (figure 2): approximately 30% of the experiments performed worse. 
The final ensemble consists of all base learners in the top 5% according to the combined accuracy and diversity score (see table 2 and algorithm 2). Both cmin and S remained fixed throughout our experiments and were not optimised. Page 6, paragraph 2, line 3: “Median” should be changed to “standard deviation” or some other measures of variance, because in a within-trial validation the “median” is not directly related to “the difference between observed time points in the training and test data” (lines 5-6). Response: Thank you for the suggestion; we replaced median by standard deviation in the manuscript. Page 8, paragraph 2, the last 8 lines: This example is not very convincing. A model considering all features trained on the first dataset will assign a very small (if not zero) weight to feature 3, which will compensate little for the fact that feature 3 is important in the second dataset. Response: We agree that the example was inadequate to explain this observation. We replaced it by referencing the work by Meinshausen and Bühlmann, who showed that models with embedded feature selection suffer from false positive selections in high dimensions. Page 8, paragraph 5: What numerical difficulties did the authors encounter so that they could not include the Cox regression in the ensembles? Is there anything special about the Cox model that makes it harder to train than other base learners? Response: Please see our response to question 7 of referee 1. It is not explicitly stated in the paper that the authors are from Team CAMP. Response: We mentioned that we participated under the name “Team CAMP” at the end of the introduction and in the section “Challenge hold-out data”. Grammar: Page 4, last paragraph: “within-in trial validation” should be “within-trial validation”; “between trials validation” should be “between-trial validation”. Response: Thank you, we corrected these errors in the manuscript."
}
]
},
{
"id": "19237",
"date": "14 Mar 2017",
"name": "Amber L Simpson",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThe authors describe an ensemble approach for predicting survival in prostate cancer patients as part of the 2015 Prostate Cancer DREAM (Dialogue for Reverse Engineering Assessments and Methods) Challenge. Patients included in the study had metastatic, castrate-resistant prostate cancer, an advanced cancer with poor prognosis. The authors are commended for undertaking a difficult problem and providing an elegant solution incorporating Cox, gradient boosting, random survival forest, and SVM for time-dependent analysis.\n\nAddressing these points would improve the manuscript:\n1) The luxury of a challenge is that authors are positioned to use knowledge gained from the challenge to improve their prediction model. The intent of sharing these datasets is to develop the best biomarker that can be used to change patient selection for therapy. Can the authors comment on what they would do differently now that they have considered methods proposed by other groups in the challenge? How can others use the lessons learned in this challenge to make the best biomarker possible?\n\n2) The authors should also comment on the generalizability of their methods to other problems.\n\n3) The paper is a good technical companion paper to the overview paper that was recently released, which should be cited 1.\n\n4) For those unfamiliar with the challenge, it is important to note that the challenge organizers confirmed performance on the validation data as noted by a reviewer above. 
This information should be incorporated into the manuscript, as it is not readily apparent.\n\n5) How does the model perform relative to published clinical nomograms? For example, the Armstrong nomogram achieved a concordance index of 0.69. Can the authors comment on the improvement over existing methods? One could argue that the slight improvement is not worth the overhead of employing ensemble methods 2-4\n\n6) How does predicting survival change the management of these patients? For example, would bad actors be selected for a different treatment or spared from treatment? If so, it may be appropriate to calculate positive and negative predictive value for specific time points. Maximizing positive and negative predictive value may also make sense. The proposed method could aid in chemoprevention, as an example.\n\n7) Is it possible to make the model publicly available as a nomogram (see nomograms.org)? Clinicians will not have the ability to download and install the code, but they may be interested in the results for individual patients.\n\n8) How does the ensemble method compensate for highly correlated variables?\n\n9) How was feature selection performed?\n\n10) Listing the features would be helpful for clinicians looking to refine/improve existing nomograms.",
"responses": [
{
"c_id": "2828",
"date": "27 Jun 2017",
"name": "Sebastian Pölsterl",
"role": "Author Response",
"response": "The authors describe an ensemble approach for predicting survival in prostate cancer patients as part of the 2015 Prostate Cancer DREAM (Dialogue for Reverse Engineering Assessments and Methods) Challenge. Patients included in the study had metastatic, castrate-resistant prostate cancer, an advanced cancer with poor prognosis. The authors are commended for undertaking a difficult problem and providing an elegant solution incorporating Cox, gradient boosting, random survival forest, and SVM for time-dependent analysis. Addressing these points would improve the manuscript: The luxury of a challenge is that authors are positioned to use knowledge gained from the challenge to improve their prediction model. The intent of sharing these datasets is to develop the best biomarker that can be used to change patient selection for therapy. Can the authors comment on what they would do differently now that they have considered methods proposed by other groups in the challenge? How can others use the lessons learned in this challenge to make the best biomarker possible? Response: Several teams, including the winning team (FIMM-UTU), implemented methods to carefully select patients from the three studies constituting the training data such that they are not too different from the target study, which was used for the final evaluation. We believe that a considerable improvement can be gained by discarding outliers from the training data. The authors should also comment on the generalizability of their methods to other problems. Response: Our proposed solution relies on heterogeneous ensembles, which are comprised of survival models to predict the risk of death. Hence, our approach is directly applicable to any data with right censored survival times. 
For other problems, such as classification or regression, the ensemble selection and ensemble pruning need to be adapted by choosing an appropriate performance measure (see line 11 of algorithm 1 and line 14 of algorithm 2). In fact, the original authors of heterogeneous ensembles investigated ten performance metrics for classification and Rooney et al. proposed using the mean squared error for regression problems. Therefore, heterogeneous ensembles are applicable to a wide range of learning problems. The paper is a good technical companion paper to the overview paper that was recently released, which should be cited 1. Response: We updated reference 16 to refer to the paper in The Lancet Oncology. For those unfamiliar with the challenge, it is important to note that the challenge organizers confirmed performance on the validation data as noted by a reviewer above. This information should be incorporated into the manuscript, as it is not readily apparent. Response: We updated the last paragraph of the “Validation scheme” section and the first paragraph of the “Challenge hold-out data” to emphasize that validation was carried out by the challenge organizers. How does the model perform relative to published clinical nomograms? For example, the Armstrong nomogram achieved a concordance index of 0.69. Can the authors comment on the improvement over existing methods? One could argue that the slight improvement is not worth the overhead of employing ensemble methods 2-4. Response: In subchallenge 1a, submissions of all participating teams were compared to the model by Halabi et al., which was considered the state-of-the-art risk prediction model prior to the challenge. Only submissions with a statistically better performance than the model by Halabi et al. were considered for the final evaluation (see section Validation scheme in our manuscript for further details). 
Our proposed model achieved an iAUC score of 0.7646 on the challenge’s hold-out data, whereas the model by Halabi et al. achieved a score of 0.7432, which is significantly worse: the Bayes factor of the proposed model vs. the Halabi et al. model is 12.2, which indicates strong evidence. How does predicting survival change the management of these patients? For example, would bad actors be selected for a different treatment or spared from treatment? If so, it may be appropriate to calculate positive and negative predictive value for specific time points. Maximizing positive and negative predictive value may also make sense. The proposed method could aid in chemoprevention, as an example. Response: We agree that ultimately the focus should be on improving patient treatment, but at the same time computational methods can only hint at potentially interesting biomarkers or patient subgroups; whether this information is useful in the clinic requires additional research, e.g., to rule out harmful side effects. Data in the Prostate Cancer DREAM Challenge are collated based on comparator arm data sets of Phase III prostate cancer clinical trials, where all patients received docetaxel and prednisone in the comparator arm. Therefore, we could not determine whether differences in survival can be attributed to different treatment types. If outcome information from multiple treatments were available, it would indeed be very interesting to infer the optimal treatment by maximizing positive and negative predictive value over time instead of specificity and sensitivity as the iAUC metric used in the challenge does. Is it possible to make the model publicly available as a nomogram (see nomograms.org)? Clinicians will not have the ability to download and install the code, but they may be interested in the results for individual patients. 
Response: Unfortunately, it is often difficult to understand how an ensemble method relates the input variables to each other in order to form a prediction, which is especially true for heterogeneous ensembles, because of their non-linear nature. A nomogram describes a non-linear model only inadequately, because it gives each variable only a single weight and usually lacks high-order interactions. However, there are several alternative ways to obtain insight. For instance, Breiman (Machine Learning, 45:1, 2001. http://dx.doi.org/10.1023/A:1010933404324) suggested a variable importance measure for random forests that could be adapted. The j-th feature is randomly permuted for all out-of-bag samples and run down the corresponding tree. The output is the relative increase in prediction error as compared to when the j-th feature is left intact. Features with a larger increase in prediction error are considered more important to the ensemble. If one wants to infer which interactions among features the ensemble considers, more sophisticated methods are available (e.g. Henelius et al., SLDS 2015. http://dx.doi.org/10.1007/978-3-319-17091-6_5). How does the ensemble method compensate for highly correlated variables? Response: Whether the ensemble compensates for highly correlated variables depends on the choice of base learners. Here, all base learners account for multicollinearities. The penalized Cox model and survival support vector machine use a ridge (L2) penalty, gradient boosting with regression trees and random survival forests recursively split the data based on a single feature, and gradient boosting with componentwise least squares selects only one feature in each iteration such that the error is maximally reduced. Hence, all models can be trained despite highly correlated variables in the data. How was feature selection performed? 
Response: We did not perform feature selection prior to constructing the heterogeneous ensemble; however, the ensemble comprised base learners that implicitly perform feature selection when trained on high-dimensional data, namely random survival forest and gradient boosting models. The remaining models (penalized Cox model and survival support vector machine) do not perform feature selection and only account for multicollinearities. Listing the features would be helpful for clinicians looking to refine/improve existing nomograms. Response: We trained models on different subsets of the data, ranging from 383 features for data from the MAINSAIL trial to 217 features when combining data of all three trials (see table 1). More details on the extracted features are available from the supplementary material and at https://www.synapse.org/#!Synapse:syn4650470."
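The permutation-based variable importance measure discussed in the responses above (after Breiman, 2001) can be sketched as follows. This is a minimal, hypothetical illustration with a toy model and made-up data; it omits the out-of-bag bookkeeping of a real random forest and is not the authors' ensemble code:

```python
import random

def toy_model(x):
    # Hypothetical predictor that depends on feature 0 only.
    return 1 if x[0] > 0.5 else 0

def error_rate(model, X, y):
    return sum(model(x) != yi for x, yi in zip(X, y)) / len(y)

def permutation_importance(model, X, y, seed=0):
    """Importance of feature j = increase in error after permuting column j."""
    rng = random.Random(seed)
    baseline = error_rate(model, X, y)
    importances = []
    for j in range(len(X[0])):
        col = [x[j] for x in X]
        rng.shuffle(col)  # scramble feature j across samples
        X_perm = [list(x) for x in X]
        for i, v in enumerate(col):
            X_perm[i][j] = v
        importances.append(error_rate(model, X_perm, y) - baseline)
    return importances

X = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.1, 0.9]] * 25
y = [1, 0, 1, 0] * 25
imp = permutation_importance(toy_model, X, y)
# Feature 0 (used by the model) scores higher than feature 1 (unused).
```

Permuting an unused feature leaves predictions unchanged, so its importance is exactly zero; this is why the measure ranks features by how much the ensemble actually relies on them.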
}
]
}
] | 1
|
https://f1000research.com/articles/5-2676
|
https://f1000research.com/articles/5-2340/v1
|
19 Sep 16
|
{
"type": "Software Tool Article",
"title": "GeneBreak: detection of recurrent DNA copy number aberration-associated chromosomal breakpoints within genes",
"authors": [
"Evert van den Broek",
"Stef van Lieshout",
"Christian Rausch",
"Bauke Ylstra",
"Mark A. van de Wiel",
"Gerrit A. Meijer",
"Remond J.A. Fijneman",
"Sanne Abeln",
"Evert van den Broek",
"Stef van Lieshout",
"Christian Rausch",
"Bauke Ylstra",
"Mark A. van de Wiel",
"Gerrit A. Meijer"
],
"abstract": "Development of cancer is driven by somatic alterations, including numerical and structural chromosomal aberrations. Currently, several computational methods are available and are widely applied to detect numerical copy number aberrations (CNAs) of chromosomal segments in tumor genomes. However, there is lack of computational methods that systematically detect structural chromosomal aberrations by virtue of the genomic location of CNA-associated chromosomal breaks and identify genes that appear non-randomly affected by chromosomal breakpoints across (large) series of tumor samples. ‘GeneBreak’ is developed to systematically identify genes recurrently affected by the genomic location of chromosomal CNA-associated breaks by a genome-wide approach, which can be applied to DNA copy number data obtained by array-Comparative Genomic Hybridization (CGH) or by (low-pass) whole genome sequencing (WGS). First, ‘GeneBreak’ collects the genomic locations of chromosomal CNA-associated breaks that were previously pinpointed by the segmentation algorithm that was applied to obtain CNA profiles. Next, a tailored annotation approach for breakpoint-to-gene mapping is implemented. Finally, dedicated cohort-based statistics is incorporated with correction for covariates that influence the probability to be a breakpoint gene. In addition, multiple testing correction is integrated to reveal recurrent breakpoint events. This easy-to-use algorithm, ‘GeneBreak’, is implemented in R (www.cran.r-project.org) and is available from Bioconductor (www.bioconductor.org/packages/release/bioc/html/GeneBreak.html).",
"keywords": [
"structural chromosomal aberrations",
"recurrent breakpoint genes",
"molecular characterization",
"cancer genome",
"copy number aberration profile",
"computational method"
],
"content": "Introduction\n\nTumor development is driven by irreversible somatic genomic aberrations such as small nucleotide variants (SNVs) and chromosomal aberrations including numerical as well as structural changes1,2. Genome-wide somatic DNA copy number aberrations (CNA) profiling is a widely established approach to characterize chromosomal aberrations in cancer genomes. At present, application of computational methods has mainly been focused on the analysis of numerical aberrations of chromosomal segments. Recently, evidence is emerging that genes affected by structural chromosomal aberrations, i.e. genes affected by chromosomal breaks, represent a biologically and clinically relevant class of mutations in many cancer types including solid tumors3–6. Importantly, the actual locations of chromosomal CNA-associated breakpoints, which are the points of copy number level shift in somatic CNA profiles, indicate underlying chromosomal breaks and thereby genomic locations affected by somatic structural aberrations5–12. Hence, the wide availability of large series of high-resolution DNA copy number data by for instance array-Comparative Genomic Hybridization (CGH) or by (low-pass) whole genome sequencing (WGS) approaches enables to systematically search for regions and genes that are affected by CNA-associated structural chromosomal changes. Computational methods determining numerical CNAs, consequently, also yield CNA-associated breakpoint locations. However, it is not trivial to identify genes that are recurrently affected by CNA-associated chromosomal breakpoints across (large) series of cancer samples since this methodology also requires dedicated computational methods including comprehensive statistical evaluation.\n\nWe here provide a computational method, ‘GeneBreak’, that identifies chromosomal breakpoint locations using DNA copy number profiles. A tailored annotation approach maps breakpoint locations to genes for each individual profile. 
Moreover, dedicated cohort-based statistical analysis, including correction for covariates that influence the probability of being a breakpoint gene and correction for multiple testing, pinpoints genes that are non-randomly and recurrently affected by chromosomal breaks across multiple tumor samples5. ‘GeneBreak’ is implemented in R (www.cran.r-project.org) and is available from Bioconductor (www.bioconductor.org/packages/release/bioc/html/GeneBreak.html). The Bioconductor vignette describes a detailed example workflow of CNA data obtained by analysis of 200 array-CGH samples. A schematic overview of computational methods is depicted in Figure 1.\n\n‘GeneBreak’ requires already segmented DNA copy number data from array-CGH or WGS approaches. The first step involves detection of breakpoint locations. Next, breakpoint locations will be mapped to gene annotations in order to identify genes affected by DNA breakpoints. The final step performs comprehensive cohort-based statistical analyses including correction for multiple testing to reveal both recurrent breakpoint locations and breakpoint genes. The breakpoint frequencies can be visualized with a built-in plot function. This example visualizes the breakpoint locations (vertical black bars) and breakpoint genes (horizontal red bars) on the p-arm of chromosome 20 identified in a cohort of 352 advanced colorectal cancers. The genes labeled with a name are statistically significant recurrent breakpoint genes (FDR<0.1).\n\n\nMethods\n\nThe breakpoint detection method we provide is amenable to data from any DNA copy number discovery platform, e.g. array-CGH and (low-pass) WGS, and any copy number detection algorithm. For optimal results, ‘GeneBreak’ takes DNA copy number data that are pre-processed by the R-package ‘CGHcall’13 or ‘QDNAseq’14, both based on the Circular Binary Segmentation algorithm15, as input. Alternatively, segmented values (log2-ratios) from a different copy number detection algorithm can be used. 
In addition, it is recommended to provide discrete DNA copy number states (e.g. loss, neutral, gain) that can be used for breakpoint selection. The Bioconductor vignette and manual describe commands and workflows in detail (see Supplementary material).\n\nBreakpoints are defined by the chromosomal locations that separate the contiguous DNA copy number segments pinpointed by a segmentation algorithm. ‘GeneBreak’ identifies chromosomal breakpoint locations for each individual DNA copy number profile. Instead of taking all detected breakpoints, users may want to define more precisely which breakpoints to take into account, based on the characteristics of the two flanking DNA copy number segments. One of the following three selection options can be applied. A) Copy number-deviation: this selects breakpoints where the shift in log2-ratio between two consecutive DNA copy number segments exceeds the user-defined threshold; B) CNA-associated breakpoints: this selects all breakpoints between consecutive DNA copy number segments, except for breakpoints flanked by two copy number neutral segments; C) CNA-breakpoints: this selects only those breakpoints flanked by segments with dissimilar discrete DNA copy number states.\n\nBecause DNA copy number data have a finite resolution (the distance between adjacent microarray probes, or the bin size of WGS copy number data), each detected breakpoint, although reported as the genomic start position of a copy number segment, in fact represents a chromosomal interval.\n\nFor identification of genes affected by chromosomal breakpoints, the built-in gene annotations can be used. Alternatively, a user-defined gene annotation file can be provided (see the Bioconductor vignette and manual for further details). The implemented mapping approach identifies genes that are associated with one or multiple chromosomal breakpoint intervals.\n\nCohort-based identification of recurrent breakpoint events can be performed on both the genome location- and gene-level. 
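The three breakpoint-selection options (A–C) described above can be sketched as follows. This is a minimal Python illustration under an assumed segment layout of (chromosome, start, end, log2-ratio, discrete call) tuples, with calls -1/0/1 for loss/neutral/gain; it is not GeneBreak's actual R/Bioconductor interface, and the example segments are made up:

```python
def select_breakpoints(segments, mode="CNA-associated", deviation=0.2):
    """Return (chromosome, position) for boundaries between consecutive
    segments that pass the chosen filter. Segments are assumed sorted."""
    breakpoints = []
    for left, right in zip(segments, segments[1:]):
        if left[0] != right[0]:
            continue  # different chromosomes: no breakpoint between them
        if mode == "deviation":
            # Option A: log2-ratio shift exceeds a user-defined threshold.
            keep = abs(right[3] - left[3]) > deviation
        elif mode == "CNA-associated":
            # Option B: everything except neutral-to-neutral boundaries.
            keep = not (left[4] == 0 and right[4] == 0)
        else:  # "CNA-breakpoints"
            # Option C: only boundaries between dissimilar discrete states.
            keep = left[4] != right[4]
        if keep:
            # Report the genomic start position of the right-hand segment.
            breakpoints.append((right[0], right[1]))
    return breakpoints

segs = [
    ("chr20", 1, 1_000_000, 0.02, 0),          # neutral
    ("chr20", 1_000_001, 2_000_000, 0.05, 0),  # neutral (no CNA boundary)
    ("chr20", 2_000_001, 3_000_000, 0.60, 1),  # gain (CNA boundary)
]
bp = select_breakpoints(segs, mode="CNA-breakpoints")
```

With these toy segments, only the neutral-to-gain boundary survives options B and C, while option A keeps it because the 0.55 log2-ratio shift exceeds the 0.2 threshold.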
The default statistical analysis includes the standard Benjamini-Hochberg false discovery rate (FDR) correction for multiple testing. For the analysis of breakpoints at the level of genomic location, this method assumes the same permutation null-distribution for all candidate breakpoint events. For the gene level, however, we recommend applying the built-in regression-based correction for covariates that may influence the breakpoint probability, including the number of breakpoints in a tumor profile, the number of gene-associated features, and the gene length (via gene-associated feature coverage). In addition, a more comprehensive and powerful Benjamini-Hochberg-type FDR correction, Gilbert’s method, which accounts for discreteness in the null-distribution, is supplied16. Commands and an example workflow can be found in the Bioconductor vignette and manual.\n\n\nUse case\n\nWe applied our method to 352 high-resolution array-CGH samples from a series of advanced colorectal cancers17 following CNA detection using ‘CGHcall’13. Array-CGH data are available in the Gene Expression Omnibus database under accession number GSE63216 (www.ncbi.nlm.nih.gov/projects/geo/). We selected the CNA-associated breakpoints (setting: ‘CNA-associated’), used gene annotations from Ensembl (human genome NCBI build36/hg18, release 54) and applied the dedicated Benjamini-Hochberg-type FDR correction (setting: ‘Gilbert’) for recurrent breakpoint gene identification. A total of 748 genes appeared to be recurrently affected by chromosomal breaks (FDR<0.1)5. Breakpoint frequencies of chromosome 20p are visualized with the built-in plot function (Figure 1; see the Bioconductor vignette and manual for further details about this function). 
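The standard Benjamini-Hochberg step-up procedure mentioned in the statistics paragraph above can be sketched as follows. The p-values are made-up illustrative numbers, and the discrete-null variant of Gilbert (2005) used by the package is more involved and not reproduced here:

```python
def benjamini_hochberg(pvalues, fdr=0.1):
    """Return indices of hypotheses rejected at the given FDR level."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        # Step-up rule: find the largest rank k with p_(k) <= (k/m) * q.
        if pvalues[i] <= rank / m * fdr:
            k_max = rank
    return sorted(order[:k_max])

# Hypothetical per-gene breakpoint p-values (illustrative only).
pvals = [0.001, 0.008, 0.039, 0.041, 0.20, 0.74]
hits = benjamini_hochberg(pvals, fdr=0.1)
```

Here the first four p-values fall below their rank-scaled thresholds (k/6 × 0.1), so the corresponding genes would be reported as recurrent breakpoint genes at FDR < 0.1.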
Interestingly, patient stratification based on recurrent gene breakpoints and well-known point mutations by propagation to the predefined STRING human protein interaction network revealed one CRC subtype with very poor prognosis, which supported the clinical relevance of this class of somatic aberrations in advanced colorectal cancers5.\n\n\nConclusion\n\nGenome instability, including numerical and structural somatic chromosomal aberrations, is a hallmark of cancer. Several tools are available that focus on detection of numerical aberrations of large chromosome segments. The R-package ‘GeneBreak’ extracts additional information from CNA data. ‘GeneBreak’ provides an easy-to-use algorithm, which handles identification of genomic breakpoint locations and mapping of breakpoints to genes, and includes a comprehensive statistical approach to reveal recurrent breakpoint genes from series of tumor samples. Therefore, ‘GeneBreak’ can be applied to detect CNA-associated chromosomal breaks in individual tumor samples and facilitates detection of recurrent breakpoint genes across multiple tumor samples.\n\n\nData and software availability\n\nPublicly available copy number data used for the use case are deposited in the Gene Expression Omnibus database under accession number GSE63216 (https://protect-eu.mimecast.com/s/6LQhBmNGvCG).\n\nSoftware available from: www.bioconductor.org/packages/release/bioc/html/GeneBreak.html and https://protect-eu.mimecast.com/s/aLGhBqmpgF2\n\nLatest source code: https://github.com/F1000Research/GeneBreak/releases/tag/v1.0\n\nArchived source code as at the time of publication: F1000Research/Genebreak, doi: 10.5281/zenodo.15393718\n\nLicense: GPL 2",
"appendix": "Author contributions\n\n\n\nEvdB, GM, RF and SA conceived the study. EvdB, SvL, MvdW, GM, RF and SA designed the workflow and EvdB, SvL and MvdW developed and tested the code. MvdW provided expertise in biostatistics. CR and BY provided expertise in analysis of CNA data obtained by array-CGH and WGS. EvdB, RF and SA prepared the first draft of the manuscript. All authors were involved in the revision of the draft manuscript and have agreed to the final content.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis work was supported by the VUmc-Cancer Center Amsterdam [to E.vd.B.]; performed within the framework of the Center for Translational Molecular Medicine, DeCoDe project [03O-101]; and CTMM-TraIT [05T-401 to EvdB, SvL, BY, GM, RF and SA].\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nSupplementary material\n\nGeneBreak vignette.\n\nClick here to access the data.\n\nGeneBreak Manual\n\nClick here to access the data.\n\n\nReferences\n\nStratton MR, Campbell PJ, Futreal PA: The cancer genome. Nature. 2009; 458(7239): 719–724. PubMed Abstract | Publisher Full Text | Free Full Text\n\nForbes SA, Bindal N, Bamford S, et al.: COSMIC: mining complete cancer genomes in the Catalogue of Somatic Mutations in Cancer. Nucleic Acids Res. 2011; 39(Database issue): D945–D950. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMitelman F, Johansson B, Mertens F: The impact of translocations and gene fusions on cancer causation. Nat Rev Cancer. 2007; 7(4): 233–245. PubMed Abstract | Publisher Full Text\n\nInaki K, Liu ET: Structural mutations in cancer: mechanistic and functional insights. Trends Genet. 2012; 28(11): 550–559. PubMed Abstract | Publisher Full Text\n\nvan den Broek E, Dijkstra MJ, Krijgsman O, et al.: High Prevalence and Clinical Relevance of Genes Affected by Chromosomal Breaks in Colorectal Cancer. 
PLoS One. 2015; 10(9): e0138141. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMalhotra A, Lindberg M, Faust GG, et al.: Breakpoint profiling of 64 cancer genomes reveals numerous complex rearrangements spawned by homology-independent mechanisms. Genome Res. 2013; 23(5): 762–776. PubMed Abstract | Publisher Full Text | Free Full Text\n\nEdwards PA: Fusion genes and chromosome translocations in the common epithelial cancers. J Pathol. 2010; 220(2): 244–254. PubMed Abstract | Publisher Full Text\n\nHermsen M, Snijders A, Guervós MA, et al.: Centromeric chromosomal translocations show tissue-specific differences between squamous cell carcinomas and adenocarcinomas. Oncogene. 2005; 24(9): 1571–1579. PubMed Abstract | Publisher Full Text\n\nMuggeo VM, Adelfio G: Efficient change point detection for genomic sequences of continuous measurements. Bioinformatics. 2011; 27(2): 161–166. PubMed Abstract | Publisher Full Text\n\nRitz A, Paris PL, Ittmann MM, et al.: Detection of recurrent rearrangement breakpoints from copy number data. BMC Bioinformatics. 2011; 12: 114. PubMed Abstract | Publisher Full Text | Free Full Text\n\nToloşi L, Theißen J, Halachev K, et al.: A method for finding consensus breakpoints in the cancer genome from copy number data. Bioinformatics. 2013; 29(14): 1793–1800. PubMed Abstract | Publisher Full Text\n\nLiu H, Zilberstein A, Pannier P, et al.: Evaluating translocation gene fusions by SNP array data. Cancer Inform. 2012; 11: 15–27. PubMed Abstract | Publisher Full Text | Free Full Text\n\nvan de Wiel MA, Kim KI, Vosse SJ, et al.: CGHcall: calling aberrations for array CGH tumor profiles. Bioinformatics. 2007; 23(7): 892–894. PubMed Abstract | Publisher Full Text\n\nScheinin I, Sie D, Bengtsson H, et al.: DNA copy number analysis of fresh and formalin-fixed specimens by shallow whole-genome sequencing with identification and exclusion of problematic regions in the genome assembly. Genome Res. 2014; 24(12): 2022–2032. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nOlshen AB, Venkatraman ES, Lucito R, et al.: Circular binary segmentation for the analysis of array-based DNA copy number data. Biostatistics. 2004; 5(4): 557–572. PubMed Abstract | Publisher Full Text\n\nGilbert PB: A modified false discovery rate multiple-comparisons procedure for discrete data, applied to human immunodeficiency virus genetics. Appl Statist. 2005; 54(1): 143–158. Publisher Full Text\n\nHaan JC, Labots M, Rausch C, et al.: Genomic landscape of metastatic colorectal cancer. Nat Commun. 2014; 5: 5457. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBroek E: F1000Research/GeneBreak. Zenodo. 2016. Data Source"
}
|
[
{
"id": "16416",
"date": "09 Jan 2017",
"name": "Tobias Marschall",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nGeneBreak is an R package to help identifying recurrent breakpoints of copy number variants (CNVs). While the offered analyses are straightforward from a methodological point of view, this package can be valuable in practice, providing an easy and reproducible way to conduct such analyses. I appreciate that it is available from bioconductor (and hence easily installable), openly developed on github, and archived on zenodo.\nI merely have some minor suggestions for improvements:\nWhen trying to use the package, I couldn't open the example data (got a \"data set ‘copynumber.data.chr20’ not found\" error). Could you verify that it's available?\n\nFirst sentence: SNV commonly means \"single nucleotide variation\" (not \"small\").\n\nP1, L9: \"Recently, ...\" Of course what you consider \"recent\" is a matter of taste, but here you are citing a review paper from 2007. I wouldn't call this recent.\n\nP1, Methods, L3: \"... 
and copy number detection algorithm\" Either explain what exactly you mean here, or remove.\n\nP1, paragraph \"Due to the typical granularity [...], in fact represent a chromosomal interval.\" I can guess what you mean here, but writing this more clearly would be good.\n\nP1, \"This method assumes the same permutation null- distribution for all candidate breakpoint events for the analysis of breakpoints at the level of genomic location.\" Could you describe in more detail how the null distribution is obtained?\n\nP1, \"In addition, a more comprehensive and powerful dedicated Benjamini-Hochberg FDR correction that accounts for discreteness in the null-distribution is supplied.\" The Benjamini-Hochberg procedure is a well defined statistical method. I would rephrase the respective sentence(s) to explicitly say that you are talking about Gilbert's method.",
"responses": [
{
"c_id": "2739",
"date": "06 Jul 2017",
"name": "Evert van den Broek",
"role": "Reader Comment",
"response": "We thank the reviewer for careful evaluation of our work and providing helpful recommendations. As suggested by the reviewer, we rephrased some sentences and provided a more detailed description of the used statistics. With respect to the error by loading the example data of GeneBreak, we verified availability of the example data on different computers with different operating systems (MacOS and Linux) on which we installed the GeneBreak package from Bioconductor and CGHcall with all dependencies. The data (‘copynumber.data.chr20.rda’) was also available in the ‘data’ directory that was retrieved from Bioconductor. The exact code we used is provided by https://www.bioconductor.org/packages/release/bioc/vignettes/GeneBreak/inst/doc/GeneBreak.R."
}
]
},
{
"id": "18598",
"date": "13 Feb 2017",
"name": "Angel Rubio",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe paper shows an inspiring vision of the copy number changes in the genome focusing on the \"changes\" more than on the levels of change. The underlying reasoning is that a copy number change, if occurs within the loci occupied by a gene, implies an alteration in the coding sequence of the gene.\nIn addition, it is shown that these changes occur recurrently, i.e. the loci where the copy number changes tend to be similar in different samples with the same type of cancer. The methodology has been uploaded to Bioconductor. The stringent quality checks of Bioconductor guarantees the availability for different platforms and, in fact, the vignette is easy to follow and use.\nMy main concern with this paper is the (lack of) description of the statistical method to state the recurrence of the copy number changes. Within the methods section is only stated that there are two methods (genome location and gene-level) but the differences between them or the underlying statistical model is missing.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-2340
|
https://f1000research.com/articles/6-1061/v1
|
05 Jul 17
|
{
"type": "Method Article",
"title": "Surgical placement of a wireless telemetry device for cardiovascular studies of bovine calves",
"authors": [
"Joseph M. Neary",
"Vincent Mendenhall",
"Dixon Santana",
"Vincent Mendenhall",
"Dixon Santana"
],
"abstract": "Background: Domestic cattle (Bos taurus) are naturally susceptible to hypoxia-induced pulmonary arterial hypertension; consequently, the bovine calf has been used with considerable success as an animal model of the analogous human condition. Studies to date, however, have relied on instantaneous measurements of pressure and cardiac output. Here, we describe the surgical technique for placement of a fully implantable wireless biotelemetry device in a bovine calf for measurement of pulmonary arterial and left ventricular pressures, right ventricular output, and electrocardiogram. Methods: Three, 2-month old bovine calves underwent left-sided thoracotomies. A transit-time flow probe was placed around the pulmonary artery and solid-state pressure catheters inserted into the pulmonary artery and left ventricle. Biopotential leads were secured to the epicardium. The implant body was secured subcutaneously, dorso-caudal to the incision. Results: The implant and sensors were successfully placed in two of the three calves. One calf died from ventricular fibrillation following left ventricular puncture prior to pressure sensor insertion. Anatomical discrepancies meant that either 4th or 5th rib was removed. The calves recovered quickly with minimal complications that included moderate dyspnea and subcutaneous edema. Conclusions: Left thoracotomy is a viable surgical approach for wireless biotelemetry studies of bovine calf cardiovascular function. The real-time, contemporaneous collection of cardiovascular pressures and output, permits pathophysiological studies in a naturally susceptible, large animal model of pulmonary arterial hypertension.",
"keywords": [
"biotelemetry",
"large animal",
"hemodynamics",
"cardiopulmonary",
"waveform"
],
"content": "Introduction\n\nThe bovine calf (Bos taurus) has provided invaluable insight into the structural, mechanical, and hemodynamic changes that occur during the onset and progression of pulmonary arterial hypertension (PAH)1–4. Hemodynamic studies of calves conducted to date, however, have typically required the calf to be manually restrained while laterally recumbent2,4.\n\nFully implantable biotelemetry devices with transit-time volumetric flow measurement capability are available for use in animals. This technology permits real-time, contemporaneous collection of hemodynamic variables without the risks, or potentially confounding effects, associated with the catheterization of calves with compromised cardiac function. Animal welfare is also improved for two major reasons: first, cardiac function and vascular pressures can be closely monitored so that experimental and humane endpoints can be established using cardiac output and pressure variables, and second, invasive procedures are limited to the initial surgery performed prior to the onset of pulmonary hypertension. In this report, we describe a surgical technique for the placement of biotelemetry equipment in a calf model of PAH.\n\n\nMethods\n\nAll procedures described in this study were pre-approved by the Texas Tech University Animal Care and Use Committee (Protocol 16024-03). All efforts were made to ameliorate any suffering experienced by the animals in the study by monitoring the effectiveness of the analgesics used throughout the study and combining analgesics that had complementary mechanisms of action.\n\nThree male, castrated Jersey calves, weighing between 47 and 54 kg, were obtained from a dairy in West Texas. Calves were collected at 2-months of age. 
At this age, calves are less susceptible to gastrointestinal pathogens than neonatal calves and less susceptible to respiratory pathogens than older calves with waning colostrum-derived maternal antibodies5,6.\n\nCalves were fed 4 to 5 liters of colostrum within 24 hours of birth. They were subsequently fed 12–14% of their body weight per day with calf milk replacer reconstituted in warm water (0.15 kg at ≥ 22% crude protein on a dry matter basis per 1 L of water). Calves were fed milk replacer twice per day until 6 weeks of age, when they were fed once per day. From 2 weeks of age they were provided with ad libitum access to a pelleted complete ration calf starter (≥ 20% crude protein, dry matter basis). Calves were individually housed on a raised slatted floor to reduce pathogen transmission and soiling of the calves’ coats. Halter training was performed over several days commencing one day after arrival from the dairy farm. In brief, training involved accustoming calves to human touch, the feel of an adjustable nylon-rope head halter, being led, and standing still.\n\nCalves were fasted for 12 hours prior to surgery. One hour prior to surgery, a 16-gauge, 2” (5 cm) catheter was placed in the jugular vein. Intradermal lidocaine (2%) provided local analgesia. Pre-operative medications were then given. These included the broad-spectrum antibiotic ceftiofur sodium (2 mg/kg iv; Naxcel, Zoetis, Parsippany, NJ, USA) and the non-steroidal anti-inflammatory drug meloxicam (0.1 mg/kg iv; Metacam, Boehringer Ingelheim Vetmedica, Inc., Duluth, GA, USA). The area from the pre-scapular region to the 10th rib, and from the dorsum to the sternum, was clipped on both sides of the chest.\n\nPerivascular cuffs with 3 cm internal diameters were used for flow measurements. 
Correct sizing was determined prior to starting this study from echocardiographic measurements of mainstem pulmonary arterial diameters of 2-month-old Jersey calves (n = 5) (Vivid i and 3S-RS 2.0 to 3.6 MHz phased array transducer probe; General Electric, Fairfield, CT, USA).\n\nCalves were induced using a combination of diazepam (0.25 mg/kg iv), ketamine (5 mg/kg iv), and buprenorphine (0.005 mg/kg iv). Following induction, calves were intubated with a cuffed silicone Murphy eye endotracheal tube (10 mm ID) and maintained on isoflurane for the duration of surgery. The isoflurane vapor was set at 5% at induction with oxygen flow at 20 mL/kg/min. This was subsequently reduced to between 0.5–3% with oxygen flow at 10 mL/kg/min. Flow rates and vapor settings were adjusted according to the plane of anesthesia. Calves were ventilated to effect using positive-pressure ventilation (approximately 400–600 mL tidal volume at 8–12 breaths/min, 18–20 cm H2O; SAV 2500, SurgiVet, Smiths Medical, Dublin, OH, USA). Lactated Ringer’s solution containing 5% dextrose was provided at 300 to 400 mL/h (9 mL/kg/h iv).\n\nPhysiologic measurements collected every 5 minutes included body temperature (rectal thermometer), end-tidal CO2 and breathing rate, heart rate, oxyhemoglobin saturation (pulse oximeter on tongue), capillary refill time, mucous membrane color, and non-invasive arterial blood pressures (antebrachial cuff). Arterial blood was collected from a 20-gauge 1” (2.5 cm) catheter in either the auricular or brachial artery every 60 minutes and analyzed on a portable blood-gas analyzer (VetScan i-STAT 1, Abaxis, Union City, CA, USA). The paralytic agent atracurium (0.4 mg/kg iv) was administered once the calf was connected to the positive-pressure ventilator and prior to the initiation of surgery. 
A reversal agent, neostigmine, was available, but not used.\n\nPrior to the initiation of surgery, intercostal nerve blocks were performed by injecting 3 mL of lidocaine (2%) in the dorsal aspect of the third, fourth, and fifth intercostal spaces. Lidocaine was injected subcutaneously along the fourth intercostal space. The skin was incised along the fourth intercostal space (#15 blade). A left anterolateral thoracotomy through the fourth intercostal space was performed using electrosurgery (Bovie 2350-V, Bovie Medical Corporation, Purchase, NY, USA). Power settings of 30 W for both cut and coagulation modes were used. With a periosteal elevator, the periosteum was elevated along the entire length of the rib, which was then transected anteriorly and posteriorly with rib cutters. The 5th rib was removed from the first two calves and the 4th rib was removed from the third calf. Finochietto retractors (33 cm spread, blades: 19 cm long and 5 cm deep) were placed to improve cardiac access. Next, the pericardium was opened and stay sutures placed to retract it away from the surgical field. Care was taken to retract the phrenic and vagal nerves with stay sutures. These nerves cross the left lateral aspect of the mainstem pulmonary artery.\n\nFirst, a solid-state pressure sensor was placed in the apex of the left ventricle. To do this, the apex of the heart was manually elevated and a purse-string suture placed using 1.5 or 2.0 Metric polypropylene suture on a double-armed, taper point needle. The cardiac apex was manually elevated according to hemodynamic tolerance. The heart was intermittently lowered back into the chest to avoid severe hypotension and bradycardia. In the first two surgeries, a 16-gauge, 5 cm needle served as a stylet inside a splittable introducer (EG-ACC-PID7F, Transonic EndoGear Inc, Ithaca, NY, USA). A 14-gauge 5 cm needle was used as a stylet in the third surgery (Figure 1A).
After placement of the introducer in the apical wall, the stylet was removed and the pressure sensor inserted. The introducer was then peeled apart to leave the pressure sensor embedded within the myocardium. The purse-string was tightened and the sensor secured using a Chinese finger-trap suture pattern (1.5 or 2.0 Metric polypropylene suture). A 2% lidocaine infusion was given (1 mg/kg slow iv) during the third surgery prior to manipulation of the heart or pulmonary artery to reduce the risk of dysrhythmias.\n\n(A) In the first two surgeries, a 16-gauge, 5 cm needle served as a stylet inside a splittable introducer (left). A 14-gauge 5 cm needle was used as a stylet inside a beveled introducer in the third surgery (right). (B) 3.5 Metric braided silk suture was inserted through two eyelets located on the lateral aspects of the flow probe perivascular cuff, creating two loops of suture.\n\nNext, one of the two biopotential leads was placed adjacent to the left ventricular pressure sensor. A 20-gauge, 2.5 cm needle served as a channel in the myocardium, through which 2 to 3 cm of the lead was fed. The needle was removed leaving approximately 1 cm of the biopotential lead embedded in the myocardial wall. The free end of the lead was secured in 2 to 3 places using 1.5 or 2.0 Metric polypropylene suture in a simple interrupted pattern. Arterial blood pressure was closely monitored while the cardiac apex was manually elevated.\n\nThe mainstem pulmonary artery was freed from connective tissue using a combination of sharp dissection and electrosurgery. Prior to placement of the perivascular flow probe cuff, threads of 3.5 Metric braided silk suture were placed through two eyelets located on the lateral aspects of the cuff base (as it appears in Figure 1B).
This created two separate loops of suture so that following placement of the cuff around the base of the mainstem pulmonary artery, one loop passed on the cranial side of the pulmonary artery and the other passed on the caudal side (Figure 2A). Next, a purse-string suture (2.0 Metric polypropylene) was placed in the pulmonary artery wall distal to the flow probe and 2 to 3 cm proximal to the main trunk arterial bifurcation. The arterial wall in the center of the purse string was punctured with a scalpel blade (#11) and a second solid-state pressure sensor inserted. The sensor was secured using a Chinese finger-trap pattern, as previously described (Figure 2A). The flow probe (EG-32QAU-X, Transonic EndoGear Inc) was then secured to the perivascular cuff using the loops of suture previously placed through eyelets in the cuff (Figure 2B). The second biopotential lead was secured to the right auricle in the first calf. To avoid the minor bleeding experienced in the first calf, the lead was embedded within the atrioventricular myocardium of the third calf.\n\nTo facilitate implant removal, telemetry leads were bundled together and secured with 3.5 Metric braided silk suture (Figure 2C). A subcutaneous pocket was created dorso-caudal to the thoracotomy incision where the telemetry implant (EG2-Q1S3tM25, Transonic EndoGear Inc), enclosed in a polypropylene mesh, was placed so that the antenna was dorsally located. The battery was placed in a subcutaneous pocket, ventro-caudal to the thoracotomy incision (Figure 2D).\n\n(A) Loops of silk suture, originating from the lateral aspect of the perivascular cuff, pass on each side of the mainstem pulmonary artery. A pressure sensor (green) has been inserted into the pulmonary artery distal to the perivascular flow probe cuff. (B) The flow probe was then secured to the perivascular cuff using loops of suture previously placed through cuff eyelets.
(C) Telemetry leads were bundled together and secured with 3.5 Metric braided silk suture. (D) The implant body and battery were placed in subcutaneous pockets dorso-caudal and ventro-caudal to the thoracotomy incision, respectively.\n\nTwo to four cable ties were used to approximate the thoracotomy incision and suture the muscles in an anatomical fashion. Hemostats were used to pull cable ties through the intercostal muscles. Once all cable ties were placed, towel clamps were used to grasp the ribs and minimize the thoracotomy incision, while the second surgeon tightened each cable in an alternating pattern until adequate closure was obtained.\n\nThe chest wall was closed in two layers using 3.5 Metric polydioxanone suture in a simple continuous pattern. The skin was closed with a running subcuticular pattern (3.5 Metric polydioxanone suture). Time from first incision to skin closure was 3.5 hours. Anesthesia recovery was uneventful. The calves walked back to the pen 30-minutes after turning the isoflurane vapor to 0%.\n\nA 42 cm long 30 Fr chest tube (7 mm ID, 10 mm OD) with side holes was used to remove air from the chest cavity following skin closure. The tube was placed within the chest cavity prior to thoracotomy incision reduction with cable ties. A Valsalva maneuver was performed to decompress the free intrathoracic air, expand the lung, and avoid a tension pneumothorax. The external end of the tube was submerged in saline throughout this procedure.\n\nMeloxicam (0.1 mg/kg sid iv) and buprenorphine (0.005 mg/kg q 8h iv) were provided for post-operative pain relief. Ceftiofur sodium (1 mg/kg sid iv) was continued for 2-days post-surgery.\n\n\nResults\n\nSurgeries were performed on August 23rd and 24th (JMN and VM) and December 5th, 2016 (JMN and DS). The first and third surgeries were performed with minimal complications.
The second calf, however, died from ventricular fibrillation following puncture of the left ventricular apex prior to pressure sensor placement. The splittable introducer sheathing the 16-gauge needle appeared to snag on the myocardium as it was inserted through the ventricular wall. This may have been attributable to the loose fit (Figure 1A). A 14-gauge needle was subsequently found to be a more appropriate fit for the introducer and was used as a stylet in the third surgery. The introducer used in the third surgery was also beveled at the tip to ease tissue passage (Figure 1A). A cardiac defibrillator was not available; consequently, intracardiac and intravenous epinephrine were administered to correct the dysrhythmia, but to no effect. Lidocaine (2%) was intravenously infused prior to cardiac manipulation in the third calf to reduce the risk of dysrhythmias.\n\nCalves were otherwise stable throughout the surgeries. The greatest variations in heart rate (70 to 250 beats per minute) and arterial blood pressure (100/77 to 154/125 mmHg, systolic/diastolic) were observed during the first surgery. Arterial oxyhemoglobin saturations were ≥ 88% and typically > 95%. Body temperatures decreased by 2°C by the end of the surgeries.\n\nThe 5th rib was the most appropriate rib to remove during the first surgery. This optimized access to the pulmonary artery and cardiac apex. The 5th rib was also removed during the second surgery, but, in hindsight, removal of the 4th rib would have been preferable. Because of these anatomic discrepancies, access to the left ventricle and pulmonary artery was carefully appraised during the third surgery so that the most appropriate rib – in this case, the 4th rib – was removed.\n\nThe first calf presented with mildly labored breathing and diminished lung sounds over the left chest the morning after surgery. Normal lung sounds and breathing were restored following 2-days of furosemide (4 mg/kg bid iv).
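The intraoperative heart rates reported above come from the telemetered ECG channel. As an illustration of how a beat rate can be recovered from such a trace, here is a minimal sketch; the sampling rate, R-wave threshold, and the `heart_rate_bpm` helper are illustrative assumptions, not part of the device's software.

```python
# Hypothetical sketch: estimate heart rate from a sampled ECG trace (mV).
# The sampling rate and R-wave threshold are illustrative assumptions;
# they are not taken from the telemetry device's documentation.

def heart_rate_bpm(ecg_mv, fs_hz, threshold_mv):
    """Count rising-edge threshold crossings as R waves, return beats/min."""
    beats = sum(
        1 for prev, cur in zip(ecg_mv, ecg_mv[1:])
        if prev < threshold_mv <= cur
    )
    duration_s = len(ecg_mv) / fs_hz
    return 60.0 * beats / duration_s

# Synthetic 5-second trace at 100 Hz with one R wave per second:
trace = [0.0] * 500
for i in range(50, 500, 100):
    trace[i] = 1.5  # R-wave peak, mV
rate = heart_rate_bpm(trace, fs_hz=100, threshold_mv=1.0)  # 60.0 bpm
```

A fixed threshold is the simplest possible detector; real traces with baseline wander or noise would need filtering and adaptive peak detection.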
The third calf developed moderate subcutaneous edema around the implant body and battery 3-days after surgery, which resolved 2-days later without treatment. Calves resumed a normal appetite within 3-days of surgery. Calves remained healthy until the end of the study 18-days (Calf 1) and 17-days (Calf 3) post-surgery.\n\nAlthough the perivascular cuff appeared loose during surgery, transit-time volume flow measurements stabilized approximately 6-hours (calf 1; see data file: Calf 1 – Day of surgery7) and 3-hours (calf 3; see data file: Calf 3 – Day of surgery7) following the completion of surgery (Figure 3). However, it was not until approximately 4 days post-surgery that flow measurements became consistent between readings (Data files: Calf 1 – 4-days post-surgery, and Calf 3 – 4-days post-surgery7). Electrocardiogram recordings became sporadic on the 4th-day (Calf 1) and 10th-day (Calf 3) post-surgery and ceased altogether the following day in both calves.\n\nPulmonary arterial flow (L/min, top channel), pulmonary arterial pressure (mmHg, second channel), left ventricular pressure (mmHg, third channel), and electrocardiogram (ECG) (mV, bottom channel) over a 5-second period from a 2-month old Jersey calf one-day post-surgery.\n\n\nDiscussion\n\nThe results of this study indicate that a left thoracotomy is a viable surgical approach for wireless biotelemetry studies of bovine calf cardiovascular function. The contemporaneous, real-time collection of cardiovascular pressures and transit-time volume flow in a bovine model permits detailed pathophysiological studies of hemodynamically complex disease processes such as pulmonary arterial hypertension.\n\nAn important consideration prior to the initiation of a biotelemetry study involving transit-time volume flow measurement is selection of an appropriately sized perivascular cuff. In our study, the mainstem pulmonary artery of 2-month old calves was measured by echocardiography.
Vessel diameter measurement from cadavers underestimates the true in vivo vessel size as the effects of intravascular and intrathoracic pressures on vessel distension are ignored.\n\nThe calves in our study quickly recovered from surgery and showed minimal evidence of post-operative pain; consequently, only a short recovery period (≤ 5 days) was necessary prior to the initiation of the subsequent study. The calves were also halter-trained prior to surgery to minimize the stress associated with animal handling. Stress is deleterious to wound healing and surgical recovery8; consequently, the recovery period may have been longer had the calves not been halter-trained. Another benefit of halter training is that procedures, such as echocardiography, can be performed on a relaxed, standing animal. This minimizes any stress-induced perturbations of the acquired data.\n\nThis study was limited by the small number of animals studied. We have, however, successfully described a surgical technique for the placement of a fully implantable wireless biotelemetry device in a bovine calf for measurement of pulmonary arterial and left ventricular pressures, right ventricular output, and electrocardiogram. The techniques described in this study will likely be refined in future studies.\n\nIn conclusion, left thoracotomy is a viable surgical approach for wireless biotelemetry studies of bovine calf cardiovascular function. Successful surgical implantation and application of this technology has considerable potential to advance our understanding of the hemodynamic changes that occur during the onset and progression of pulmonary arterial hypertension in a naturally susceptible bovine animal model.\n\n\nData availability\n\nTelemetry data files for calves 1 and 3 are available at http://dx.doi.org/10.7910/DVN/N2US1Y7.",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nResearch funding was provided by Texas Tech University.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nReferences\n\nStenmark KR, Meyrick B, Galie N, et al.: Animal models of pulmonary arterial hypertension: the hope for etiological discovery and pharmacological cure. Am J Physiol Lung Cell Mol Physiol. 2009; 297(6): L1013–32. PubMed Abstract | Publisher Full Text\n\nStenmark KR, Fasules J, Hyde DM, et al.: Severe pulmonary hypertension and arterial adventitial changes in newborn calves at 4,300 m. J Appl Physiol (1985). 1987; 62(2): 821–830. PubMed Abstract\n\nLammers SR, Kao PH, Qi HJ, et al.: Changes in the structure-function relationship of elastin and its impact on the proximal pulmonary arterial mechanics of hypertensive calves. Am J Physiol Heart Circ Physiol. 2008; 295(4): H1451–H1459. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHunter KS, Albietz JA, Lee PF, et al.: In vivo measurement of proximal pulmonary artery elastic modulus in the neonatal calf model of pulmonary hypertension: development and ex vivo validation. J Appl Physiol (1985). 2010; 108(4): 968–975. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFoster DM, Smith GW: Pathophysiology of diarrhea in calves. Vet Clin North Am Food Anim Pract. 2009; 25(1): 13–36. PubMed Abstract | Publisher Full Text\n\nChase CC, Hurley DJ, Reber AJ, et al.: Neonatal immune development in the calf and its impact on vaccine response. Vet Clin North Am Food Anim Pract. 2008; 24(1): 87–104. PubMed Abstract | Publisher Full Text\n\nJoseph N: Data for publication: Surgical placement of a wireless telemetry device for cardiovascular studies of bovine calves. Harvard Dataverse, V1. 2017. Data Source\n\nGouin JP, Kiecolt-Glaser JK: The impact of psychological stress on wound healing: methods and mechanisms. 
Immunol Allergy Clin North Am. 2011; 31(1): 81–93. PubMed Abstract | Publisher Full Text | Free Full Text"
}
|
[
{
"id": "24047",
"date": "12 Jul 2017",
"name": "Guillaume Chanoit",
"expertise": [
"Veterinary surgery",
"cardiovascular research",
"clinical veterinary research",
"large animal models of disease"
],
"suggestion": "Approved",
"report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis article is nicely written and the methodology is clearly explained. There is a clear rationale to develop the technique that the authors are presenting. The telemetry techniques used in cardiovascular research on animals are contributing to animal welfare (reduction and refinement alternatives), and are reducing overall animal research costs. Some articles describe the technique in small animal models but very few examples of its use in large animals are available in the literature.\n\nAs the paper only includes 3 cases, one of which died before data acquisition, this report should be considered as the very early learning curve of the technique presented.\n\nThe surgical approach described included a rib resection to allow sufficient exposure to the structures of interest. I wonder if performing two thoracotomies (in the 3rd and 4th or 4th and 5th intercostal spaces) would maybe alleviate the need for a rib resection, which is known to be a potentially painful procedure. It may also be possible to consider a hybrid approach that would allow a device to be implanted in the pulmonary artery (PA) using right heart catheterization for PA pressure measurement combined with minimally invasive surgical placement of the biopotential leads and pressure sensor.\n\nThe animals were given buprenorphine postoperatively. Have they been pain scored to determine that buprenorphine was sufficient and that stronger opioids were not needed?
Calves are considered prey animals and therefore express pain in a more concealed manner, which may give an initial false impression that their pain level is low.\n\nIt would be interesting to know how long the authors think the devices surgically placed will last in a growing calf. Working with other animal models (such as the pig), the growth of the animal has to be taken into consideration for any chronic study - I presume it would be the same for a study in calves?\n\nIs the rationale for developing the new method (or application) clearly explained? Yes\n\nIs the description of the method technically sound? Yes\n\nAre sufficient details provided to allow replication of the method development and its use by others? Yes\n\nIf any results are presented, are all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions about the method and its performance adequately supported by the findings presented in the article? Partly",
"responses": []
},
{
"id": "28615",
"date": "29 Jan 2018",
"name": "Susana Astiz",
"expertise": [
"Animal Production (ruminants)",
"Large Animal Models"
],
"suggestion": "Approved",
"report": "Approved\n\nIt is an innovative work, well written and accurately described, that presents a (short-term) successful surgical technique to implant a wireless telemetry device in calves.\nI just miss a bit more “general” information about arterial hypertension studies, to have an idea of how long such studies should be and to justify the choice of large animal models. On the other hand, this study has to be understood as preliminary work and results, with several issues that could be improved. Specific suggestions would be reducing the size of the devices to be implanted, in order to make minimally invasive surgery possible and, above all, to avoid any rib resection.\nOther more specific comments are the following:\nPreoperative care, first paragraph: the age of the calves (2 months old) is justified with a reference (the 6th). I suggest taking the following reference instead of, or in addition to, the 6th: the median age stated in this very recent study for respiratory problems in dairy calves is 29-60 days of age.1\nRegarding the following paragraph of this section, a very detailed description is given of how the calves were managed during the time before arriving at the hospital. However, it would be more useful to know how long the calves were at the hospital before the surgeries were performed. Days? Weeks? And information about how they were managed at the hospital the days before and after the surgery (feeding, m2/animal, individual housing or not…).
How long did the halter training take?\nParagraph “Postoperative recovery”: it is stated that one administration of meloxicam was given. I would rather advise repeating the meloxicam treatment (every two or three days, at least twice), so that analgesia is assured for a longer time, especially when a rib had to be resected.\nResults section: in this section, under the Postoperative recovery subsection, it is stated that “Calves remained healthy until the end of the study 18-days (Calf 1) and 17-days (Calf 3) post-surgery.” Did the calves die after those time periods? Or were they still alive? Could we have this information? I do not know how long a cardiovascular study of hypoxia-induced pulmonary arterial hypertension should ideally last, but knowing how long those calves lived after the surgery would be relevant information for other research groups considering performing this surgery.\nFinally, a more detailed description of how the calves recovered (not only intake, but feeding and social behavior, vocalizations, pain score, etc.) is desirable.\n\nIs the rationale for developing the new method (or application) clearly explained? Partly\n\nIs the description of the method technically sound? Yes\n\nAre sufficient details provided to allow replication of the method development and its use by others? Partly\n\nIf any results are presented, are all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions about the method and its performance adequately supported by the findings presented in the article? Partly",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-1061
|
https://f1000research.com/articles/6-1040/v1
|
03 Jul 17
|
{
"type": "Opinion Article",
"title": "Developing a strategy for computational lab skills training through Software and Data Carpentry: Experiences from the ELIXIR Pilot action",
"authors": [
"Aleksandra Pawlik",
"Celia W.G. van Gelder",
"Aleksandra Nenadic",
"Patricia M. Palagi",
"Eija Korpelainen",
"Philip Lijnzaad",
"Diana Marek",
"Susanna-Assunta Sansone",
"John Hancock",
"Carole Goble",
"Patricia M. Palagi",
"Eija Korpelainen",
"Philip Lijnzaad",
"Diana Marek",
"Susanna-Assunta Sansone",
"John Hancock",
"Carole Goble"
],
"abstract": "Quality training in computational skills for life scientists is essential to allow them to deliver robust, reproducible and cutting-edge research. A pan-European bioinformatics programme, ELIXIR, has adopted a well-established and progressive programme of computational lab and data skills training from Software and Data Carpentry, aimed at increasing the number of skilled life scientists and building a sustainable training community in this field. This article describes the Pilot action, which introduced the Carpentry training model to the ELIXIR community.",
"keywords": [
"Life sciences",
"bioinformatics",
"training",
"IT and computational skills",
"data analysis",
"capacity building",
"software carpentry",
"data carpentry"
],
"content": "Introduction\n\nAs research in the life sciences develops at a fast pace, the need for federated resources and infrastructure, and for coordinating activities supporting researchers, is becoming increasingly important. ELIXIR is a European research infrastructure with a mission to manage and safeguard the increasing volume of data generated by life science research. It coordinates and sustains bioinformatics resources across its member states and helps researchers to more easily find, analyse, share and exchange biological data. ELIXIR follows a Hub and Nodes model, with a single Hub based in Hinxton, United Kingdom, and a growing number of Nodes located at centres of excellence throughout Europe. At the time of writing, ELIXIR has 20 national Nodes, with the European Bioinformatics Institute (EMBL-EBI; co-located with the Hub) working as a separate Node.\n\nProviding the necessary training to researchers to tackle emerging research data manipulation and computing issues is a key priority and one of ELIXIR’s main missions (van Gelder et al., 2016).\n\nThe ELIXIR Hub strongly encourages interactions and collaborations between the Nodes. Such interactions are supported by short-term Pilot actions1 that are funded by ELIXIR. The goal of these projects is to “tackle major European challenges in life science data access, high-performance computing and the interoperability of public biological and biomedical data resources”. Pilot actions usually involve several ELIXIR Nodes to build, test and demonstrate the value of the distributed infrastructure.\n\nOne of these collaborative activities is providing training in essential skills for life science researchers throughout ELIXIR. In part, these skills encompass computational and digital data manipulation and analysis skills. Even though there are a lot of materials available to teach these topics, they are scattered, hard to discover and access, or assume too much prior knowledge.
In contrast, there is a lack of training material in biocuration and for the development of content standards (terminologies, minimum information checklists, exchange formats) and their use (e.g. in annotation tools). Nevertheless, funders and researchers increasingly call for enhanced, standards-driven experimental annotation at the source, and for data sharing to maximise data reproducibility and reuse, in order to drive science and scientific discoveries (Wilkinson et al., 2016).\n\nTraining activities are one of the main focus areas of the ELIXIR UK Node, and the Training Theme that it leads. In 2014, the UK Node, in collaboration with and support from ELIXIR Finland, ELIXIR Netherlands, ELIXIR Switzerland and ELIXIR Sweden, proposed a Pilot action for “Working up and building the foundation for Data Carpentry and Software Carpentry within ELIXIR” to tackle this evident training gap.\n\nThe Pilot project had several overarching objectives. The first was to launch the Carpentry initiatives in the ELIXIR community and leverage the work of the Software Carpentry (SWC) and Data Carpentry (DC) initiatives from the US and Canada by reusing their existing training materials and well-tested, successful and popular model of hands-on teaching. Secondly, we wanted to tap into the international community built over the years around the Carpentries, which included experienced researchers and trainers in life sciences who have been delivering and developing training on a regular basis. In the long run, the goal was to develop a self-sustainable pool of professional Carpentry instructors within ELIXIR who would deliver training across the Nodes, by incorporating the Instructor Training course developed by the Carpentries.
Finally, we wanted to join and help grow this international community by contributing the newly trained instructors, as well as further developing and adapting materials in the life sciences domain.\n\nSWC is an international collaboration aimed at teaching researchers (without prior knowledge or training in IT) basic software development skills. This initiative is very well established internationally and has been running since 1998.\n\nSWC training courses are highly-interactive two-day workshops that give researchers training in essential software development skills, but in the context of how they contribute to improving research productivity and help produce robust and reproducible science. In other words, they look beyond just teaching people to code (i.e. the syntax of a language or commands they can run) - teaching people how to code and coding best practices is just as important (Wilson, 2016).\n\nThe materials used for both Carpentry workshops are openly available and developed by a community of experts and experienced instructors. The way workshops are delivered has been devised and perfected over years of practice and is based on pedagogical approaches, taking into account learners’ backgrounds, reducing cognitive load, and giving and receiving constructive feedback. They employ techniques such as live coding, working in pairs and peer learning to make the training as effective as possible.\n\nSWC training courses cover four core topics:\n\n• task automation using the command line to help with repeating common tasks,\n\n• structured code development, so that scientists produce code that is readable, testable, sustainable and reusable (demonstrated using either Python or R),\n\n• version control for backing up, collaborating and sharing code, and\n\n• introduction to structuring data using SQL and preparing it for further processing.\n\nDC started in 2015 as a separate programme inspired by SWC.
Both programmes maintain close ties, which help to build and share a community of practice among the instructors and expand the base of teaching materials and available trainers. DC aims to teach the skills that will enable researchers to be more effective and productive in working with data. As in SWC, teaching is delivered through intensive two-day workshops.\n\nIn contrast to SWC, DC designs the workshops to fit the needs of particular domains (e.g. life sciences, social sciences, digital humanities and library carpentries, etc.). The core DC workshop curriculum covers topics such as:\n\n• caveats and best practices of working with spreadsheets for data organisation,\n\n• data reading/processing/manipulation/visualisation with R or Python,\n\n• introduction to structuring data using SQL and preparing it for further processing, and\n\n• introduction to OpenRefine for data cleaning.\n\nThe instructors delivering the material are members of the SWC/DC instructor network and have completed training designed to prepare them to use “the Carpentry way” of teaching the skills for effective and productive work with research data. All instructors are volunteers and do not receive remuneration for the workshops they teach - they do it because of their love of teaching or because they want to give back to the community. For that reason, growing a large pool of instructors (primarily peer researchers) makes it possible to respond to the massive demand for the workshops.\n\nBoth Carpentries are aimed at providing researchers with the essential lab skills for computational science. They do this by focusing on the core skills and making sure that best practices are passed on in a useful and effective way. The Carpentries do not aim to teach audiences specific technical aspects of research - such an audience requires different training (within ELIXIR this type of training is covered by the Train the Developers Programme (van Gelder et al., 2016)).
However, SWC/DC can act as an important connector between researchers and service providers, allowing both sides to communicate better and work more effectively.\n\n\nPilot action overview and goals\n\nThe Pilot action was delivered between the end of March 2015 and January 2016, and its goals were as follows:\n\n1. introduce the SWC/DC workshops model in ELIXIR Nodes and expand the number of organisations in the ELIXIR community capable of organizing SWC/DC events, as well as expand the SWC/DC global training network;\n\n2. introduce the SWC/DC material development model in ELIXIR Nodes and improve existing materials with ELIXIR-relevant training (during the dedicated material creation hackathons);\n\n3. build a pool of certified SWC/DC instructors within ELIXIR Nodes to create a self-sustainable training community.\n\nWe aimed to train as many researchers as possible within the budget and to familiarise ELIXIR training coordinators and the training community from the Nodes with the Carpentry model of teaching. In addition, we wanted to introduce them to ways of expanding existing, and developing new, SWC/DC life sciences training materials so that they could carry on this practice in their Nodes.\n\nELIXIR Nodes would become empowered to run Carpentry events, and be able to contribute to both initiatives by becoming a part of the vibrant international SWC/DC community and expanding the SWC/DC training network. One of their key contributions would be the collaborative development and improvement of training materials. The contents need to be updated and expanded depending on the community needs, and ELIXIR member organisations are important representatives of the life sciences community.\n\nThe first step in the material development component was identifying the SWC/DC materials that needed further work and development. The new materials for life sciences would then be assessed through test runs in consecutive Carpentry workshops piloted by the Nodes.
The assessment of the outcomes was planned through follow-up surveys and interviews to determine what people would actually adopt and the impact it would have on their research. Finally, this would lead to adopting the new content for regular teaching.\n\nThe long-term goal was focused on capacity building in ELIXIR - not just training researchers, but growing a pool of certified instructors and a self-sustainable training community. In order to ensure the quality of teaching at the SWC/DC workshops, at least one of the instructors needs to be an officially certified Carpentry instructor. Certification is obtained through completing the Carpentry Instructor Training course. By training instructors at different ELIXIR Nodes, the Pilot action helped these Nodes to evolve towards being able to run the workshops independently. The teaching methods and techniques discussed at the instructor training are also applicable to training in other topics. Therefore, this event contributed to the overall training capacity building of the Nodes.\n\n\nPilot action delivery\n\nWith some SWC/DC workshops already happening, mainly within the ELIXIR UK Node, we had an opportunity to demonstrate the relevance of this type of training for the life sciences. In the UK, the Carpentry workshops were coordinated by the Software Sustainability Institute (SSI). The close collaboration between the UK Node and the SSI substantially facilitated the delivery of the Pilot - one of the key people involved in writing the proposal and delivering the Pilot was the training lead at the SSI at the time, and the deputy head of the UK Node was a co-investigator at the SSI. The Pilot action was planned to last for 18 months, but we managed to deliver all tasks within the first 12 months.\n\nThe delivery of the Pilot action was coordinated by the ELIXIR UK Node and started in late 2014 with outreach and engagement to ensure broad participation of the Nodes in the planned events. 
In order to streamline communication with other Nodes, we sent a request for volunteers (one per Node) to step in and become a SWC/DC Coordinator for their Nodes. The information about the planned Pilot activities would be passed on to the Coordinators, who would then disseminate it within their Node. This was particularly important at the beginning of the Pilot, as SWC/DC workshops were relatively unknown among ELIXIR research organisations in 2014.\n\nReaching out to find the Coordinators was an iterative process. A number of people in ELIXIR were actively involved. We posted the call for Coordinators on the ELIXIR UK website, as well as on the main ELIXIR website. The ELIXIR Training Coordinator Group (TrCG) was engaged from the very beginning and helped circulate announcements and facilitate communication with the Node members.\n\nAs a result, volunteers from 10 Nodes (out of 17 at the time) stepped in. Most of them did not know much about the SWC/DC workshops, but they all had a strong interest in teaching. The call for Carpentry Coordinators specified that ideally their work responsibilities should be related to training, so that the effort related to coordination would align with their regular responsibilities.\n\nThe Pilot was delivered as follows. Firstly, we ran four-day events that combined a training material creation hackathon (two days) with a regular train-the-researcher workshop (two days). We ran two of these combined events - one in Finland and one in the Netherlands. 
These were then followed by an Instructor Training event, aimed at selected participants from the first two events who showed the most interest and enthusiasm about the programme and were willing to become instructors themselves.\n\nThe material creation hackathons consisted of two parts: the first introduced the idea and running of SWC/DC workshops, the curriculum and the model of training; the second focused on improving existing and developing new training materials specifically for the life sciences and its various sub-domains. Through these hackathons, we wanted to demonstrate another distinctive feature of SWC/DC - collaborative material development. All SWC and DC training materials, websites and other documents are developed and shared by the community via GitHub. This approach has proven to be very successful, allowing for maximum inclusivity and the production of high-quality training materials while avoiding redundancy in content (Wilson et al., 2014; Wilson et al., 2016). As part of the hackathons, we wanted to train people to become proficient in this method of working, as well as create more materials.\n\nCombined hackathon and workshop in Finland. The first combined event was held in Helsinki, Finland, in March 2015, hosted by ELIXIR Finland at the CSC IT Centre for Science.\n\nThe training material creation hackathon was advertised among the Nodes with the idea of bringing together 15 participants representing as many Nodes as possible to ensure wide representation and dissemination. 
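The collaborative, GitHub-based material development that the hackathons trained participants in follows the standard branch-and-pull-request workflow. A minimal sketch on a throwaway local repository — all repository, branch and file names here are invented, not actual Carpentry lesson repositories, where real contributions would target a fork on GitHub:

```shell
# Sketch of the Carpentry-style contribution workflow: work on a topic
# branch, commit a lesson change, then (in the real workflow) push the
# branch and open a pull request for community review. Names invented.
set -e
git init -q example-lesson
git -C example-lesson config user.email "you@example.org"
git -C example-lesson config user.name "You"

# Topic branch for the change, as one would before opening a pull request.
git -C example-lesson checkout -q -b improve-spreadsheet-episode
echo "Avoid storing dates as free text." > example-lesson/01-spreadsheets.md
git -C example-lesson add 01-spreadsheets.md
git -C example-lesson commit -q -m "Clarify the date-formatting caveats"

# In the real workflow the branch would now be pushed to the fork and
# proposed upstream:  git push origin improve-spreadsheet-episode
git -C example-lesson log --oneline
```

The pull request is where the community review happens; maintainers and other contributors comment on the proposed lesson change before it is merged.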
Eventually 12 participants from 11 Nodes were present at the material creation hackathon (the first two days of the event).\n\nThe material development covered during the hackathon included:\n\n• development of the dplyr module for the R lessons;\n\n• development of next-generation sequencing data analysis lessons;\n\n• development of RNAseq data analysis using BRB Digital Gene Expression;\n\n• improvements to the shell lessons;\n\n• improvements to the spreadsheet lessons.\n\nThe facilitators at the hackathon were two experienced Carpentry instructors - one of whom was also among the Pilot proposers and a member of the DC Steering Committee at the time. After leading the hackathon, they also taught at the DC workshop that immediately followed (the last two days of the event). The workshop itself was attended by 29 researchers from the life sciences domain. The workshop was widely advertised across ELIXIR, so the attendees were not only local researchers based in Finland, but also a few from other Nodes.\n\nThe attendees of the hackathon were strongly encouraged to stay for the last two days at the DC workshop as helpers and observers. We did not make it a requirement, since it would have made the whole event almost a week long (including the time needed for travel), and some of them might not have been able to commit to that.\n\nCombined hackathon and workshop in the Netherlands. The second event of the Pilot action, hosted by the ELIXIR Netherlands Node, was carried out similarly to the one in Finland. The hackathon and the workshop were co-located and run at the University Medical Centre Utrecht in June 2015. This time 19 participants represented 10 Nodes at the hackathon. 
To increase the outreach, we encouraged the Nodes to delegate a different representative from the one who was present in Finland - so we had an overlap of only five participants.\n\nThe material development took place in three groups:\n\n• Group 1 worked on creating training materials on using ELIXIR Cloud resources;\n\n• Group 2 worked on a decision tree for using cloud computing;\n\n• Group 3 worked on different aspects of understanding how to use one's data for genomics. In particular, the group worked on describing file formats, file manipulation, pipeline integration, post-assembly de novo RNA transcriptome analysis, handling BLAST annotation output and verifying data.\n\nOne of the facilitators (and a Pilot proposer) was the same as in Finland. They were joined by another experienced instructor from the US, who was also a member of the DC Steering Committee. This helped with providing background information to the participants about the workshops and both initiatives. After the hackathon, these instructors taught at the co-located DC workshop that followed, which was attended by 30 researchers. Again, the workshop was advertised widely to allow representatives from all ELIXIR Nodes to receive the training. Similarly to the workshop in Finland, the one held at the University Medical Centre Utrecht was also mainly attended by local researchers.\n\nThe workshop received a lot of interest, not only from researchers based at universities in the Netherlands and nearby Nodes, but also from industry. One of the companies based in the Netherlands and focusing on bioinformatics contacted the organizers and asked if two of the company's researchers could participate in the workshop. As the collaboration with this company was potentially beneficial for both the SWC/DC Foundations and the company, the researchers were welcomed to attend. 
They had a very positive experience and, as a result, the company requested an internal workshop, which was delivered in December 2015 by two newly trained instructors from the Netherlands eScience Centre.\n\nInstructor Training. The final event of the Pilot action was the SWC/DC Instructor Training. This covered the process and pedagogy of learning, as well as best practices in teaching, and was not limited to teaching computational skills. The workshop was delivered over two days and was hosted by the ELIXIR Switzerland Node in Lausanne, Switzerland, in January 2016. Using the communication network we had developed during the organisation of the preceding events, we reached out to the ELIXIR Nodes to fill the places at the workshop. Eight Nodes were represented at the event and 20 participants were trained as SWC/DC instructors (see Figure 1 below for the spread of participants per Node). As with the previous events, participants received financial support from the Pilot action budget.\n\nThis Instructor Training was delivered by two trainers. One of them was the same instructor and facilitator who ran the events in Finland and the Netherlands, which helped with the organisation and coordination of the workshop. The second trainer was the Executive Director of the Data Carpentry Foundation, which gave the attendees an excellent opportunity to discuss various details of the planned implementation of DC at their Nodes.\n\nWe explicitly advertised the Instructor Training as an event aimed at those representatives of the Nodes who were already interested in training and who aimed to become active Carpentry instructors, engaging with the community and running workshops in the future. 
By the end of 2016, 17 attendees of the Instructor Training had completed the final “Carpentry Instructor Checkout Procedure” and become certified as SWC and/or DC Instructors (out of 20 trained in total - a high rate of 85%).\n\n\nMain outcomes\n\nOwing to the Pilot action, we trained around 300 researchers and increased the understanding of the SWC/DC training programme, curriculum and model of delivery among the ELIXIR Nodes (see the summary in Table 1). At the beginning of 2015, the Carpentry workshops were known and run primarily in the US, Canada and the UK. By the end of the Pilot, the Carpentry programmes were far better known in Europe, and were starting to be endorsed and implemented by an increasing number of Nodes. In total, participants from the following 13 Nodes took part in the Pilot action: Norway, Finland, Italy, Belgium, Switzerland, Netherlands, Slovenia, Estonia, Czech Republic, France, United Kingdom, Israel and Portugal.\n\nYellow: material creation hackathons, green: workshops, blue: instructor trainings.\n\nThe training material creation hackathons run during the Pilot allowed the participants to familiarise themselves with the particularities of collaborative material development (one of the main features of SWC/DC). The new materials that were developed (i.e. not contributions to the existing materials) still need to be improved and reviewed. As training develops within ELIXIR, these may become part of the official curriculum, according to the needs of audiences.\n\nThe Pilot provided solid foundations for setting up a regular training programme across ELIXIR in computational skills for the life sciences. Apart from the growing number of workshops in the UK, Carpentry training events started taking place in other Nodes. Following the Pilot workshops, ELIXIR Slovenia immediately hosted a workshop in July 2015, during which 29 researchers were trained. ELIXIR Belgium organised a workshop in November 2015 and trained 35 researchers. 
ELIXIR Switzerland organised three workshops: (1) a SWC workshop in Lausanne in June 2016, where 30 researchers were trained; (2) a SWC workshop in Basel in June 2016, where 40 researchers were trained; and (3) a DC workshop in Zurich in July 2016, where 35 researchers were trained. In the Netherlands, 50 researchers were trained in two further workshops, one in January 2016 for Life Scientists and one in April 2016 for the Netherlands Institute for Space Research.\n\nThe Pilot action helped in growing the certified SWC/DC instructor pool within the ELIXIR Nodes. It turned out that the demand for Instructor Training from the ELIXIR community was so high that the UK Node, which was the main coordinator of the Pilot, arranged and financed two more Instructor Training events:\n\n• Instructor Training hosted at the University College London in October 2015, during which 19 new instructors were trained;\n\n• Instructor Training hosted at the University of Manchester in November 2015, during which 24 new instructors were trained.\n\nIn total, 59 researchers were trained in the Pilot workshops and 221 at follow-up events inspired by the Pilot and funded locally by Nodes. Also, 20 new SWC/DC instructors were trained as part of the Pilot, and 43 at follow-up instructor training events.\n\n\nFollow-up\n\nThe Pilot action received a lot of interest and positive feedback among the ELIXIR Nodes. We have seen further workshops being hosted within institutions in ELIXIR nodes. The follow-up actions included several wider-scope goals.\n\nELIXIR and SWC/DC Foundations are finalising (as of summer 2017) the work on a new partnership agreement. SWC/DC run a partnership programme that offers various benefits to organisations choosing different levels of partnering agreements. Due to the size and scope of ELIXIR, the partnership is tailored to the specific needs of the Nodes. 
The agreement will not only include support for running workshops and instructor trainings, but will also assist with developing a sustainable training network and possibly with the incubation of bioinformatics-specific teaching materials.\n\nThe ELIXIR Training Coordinators group is looking into integrating Instructor Training into the ELIXIR Train the Trainer programme. Most Nodes already have trainers delivering courses; however, there is still room for growing that pool. SWC/DC Instructor Training may be used as part of professional development - depending on the needs of the specific organisations within ELIXIR.\n\n\nConclusions\n\nThe Pilot action was an exercise not only in delivering training, but also in outreach and in disseminating SWC/DC principles within a large-scale, international, multi-partner project such as ELIXIR. The goal was to train researchers in IT skills and introduce SWC/DC workshops across the Nodes. The challenge we faced was to clarify misconceptions about how the workshops are delivered, their purpose and the related infrastructure, as well as the application of this training in the life sciences context. We are confident that we mitigated these risks by bringing the representatives of the Nodes directly into workshops and hands-on sessions on material development. The participants had a close-up experience of how the workshops operate.\n\nThe Pilot helped to form some suggestions and ideas for possible improvements in developing training programmes.\n\nThe engagement of the TrCG was essential. In particular, the Coordinators from the Netherlands, Finland and Switzerland (i.e. the Pilot collaborating Nodes) were very helpful. Maintaining this group within ELIXIR is vital for the successful growth of its training activities.\n\nHowever, despite communicating with the TrCG and reaching out to the Nodes through different communication channels during the Pilot (2015), not all Nodes responded. 
We tried to find SWC/DC Coordinators in each Node, but it was possibly a bit too early; we will revisit this effort in 2017. Most Nodes did not know enough about the Carpentry trainings and/or did not have a training infrastructure in place (being too small). Furthermore, many Nodes already have their own training programme in place, and it has to be made clear to them how the SWC/DC training programme can complement that programme.\n\nIn 2016, a survey of the ELIXIR Nodes was undertaken, and the majority of the Nodes showed interest in learning more about SWC/DC and in hosting workshops and hackathons. At this moment, ELIXIR is setting up a collaboration agreement with the SWC/DC Foundations to make it possible to roll out workshops across the Nodes and, most importantly, to run instructor trainings. In this way, ELIXIR is working towards building a sustainable and self-expanding Carpentry network in Europe.\n\n\nNotes\n\n1. As of November 2015, the ELIXIR Pilot actions have been renamed as Implementation Studies.",
"appendix": "Author contributions\n\n\n\nAP, CWGvG, PMP, EK, SAS and CG co-wrote the Pilot proposal. AP was also the key person in organising and delivery of the workshops and training, having taught at all three Pilot events. CG was the deputy-head of the UK Node at the time, as well as co-investigator for the SSI, and provided the key link between ELIXIR and SWC/DC Foundations. CG also secured funds for additional workshops and training (in particular two follow-up instructor trainings). CWGvG, EK, PL, DM and PMP were also local hosts and organisers of the three Pilot events. JH is the UK Node coordinator, and provided support for running workshops. AN helped with the delivery of the workshops, and co-wrote the manuscript with AP and CWGvG, with all authors being involved in its various revisions. AN prepared figures and tables based on numbers collected by AP, CWGvG and AN. CWGvG and AN co-lead the negotiations on partnership agreement between SWC/DC Foundations and ELIXIR.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis work was funded by ELIXIR, the research infrastructure for life-science data. ELIXIR-UK is funded by the Biotechnology and Biological Sciences Research Council, the Medical Research Council and the Natural Environment Research Council (grants BB/L005069/1 and BB/P017193/1). 
ELIXIR and ELIXIR-UK have also received funding from the European Union’s Horizon 2020 research and innovation programme (agreement no 676559).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nThe authors want to thank all SWC/DC instructors and helpers who were involved in running the SWC/DC workshops and hackathons in the Pilot, as well as Greg Wilson (Software Carpentry’s Executive Director at the time of the Pilot) and Tracy Teal (Data Carpentry’s Executive Director) for their help and discussions.\n\n\nReferences\n\nvan Gelder C, Morgan S, Via A, et al.: Report on the training needs identified across the ELIXIR community. 2016.\n\nWilkinson MD, Dumontier M, Aalbersberg IJ, et al.: The FAIR Guiding Principles for scientific data management and stewardship. Sci Data. 2016; 3: 160018.\n\nWilson G: Software Carpentry: lessons learned [version 2; referees: 3 approved]. F1000Research. 2016; 3: 62.\n\nWilson G, Aruliah DA, Brown CT, et al.: Best Practices for Scientific Computing. PLoS Biol. 2014; 12(1): e1001745.\n\nWilson G, Bryan J, Cranston K, et al.: Good Enough Practices in Scientific Computing. arXiv preprint arXiv:1609.00037. 2016; retrieved March 27, 2017."
}
|
[
{
"id": "24015",
"date": "17 Jul 2017",
"name": "Erin A. Becker",
"expertise": [
"Instructor training",
"evidence-based teaching practices"
],
"suggestion": "Approved",
"report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved: The paper is scientifically sound in its current form and only minor, if any, improvements are suggested.\n\nApproved with reservations: A number of small changes, sometimes more significant revisions, are required to address specific details and improve the paper's academic merit.\n\nNot approved: Fundamental flaws in the paper seriously undermine the findings and conclusions.\n\nThis is an interesting article presenting a process for implementing the SWC/DC model of instructor training to a large, pan-European research community.\n\nOne statement should be corrected. The description of DC says that \"DC designs the workshops to fit into needs of particular domains (e.g. life sciences, social sciences, digital humanities and library carpentries, etc.).\" This is inaccurate, as DC materials do not currently exist for digital humanities or social sciences (although social sciences materials are in progress and plans for digital humanities curricula are in discussions). Additionally, library materials are managed under Library Carpentry as an independent organization.\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Partly\n\nAre arguments sufficiently supported by evidence from the published literature? Yes\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Yes",
"responses": []
},
{
"id": "24013",
"date": "17 Jul 2017",
"name": "Anelda P. van der Walt",
"expertise": [
"Bioinformatics",
"eResearch"
],
"suggestion": "Approved",
"report": "Approved\n\nThe article provides a good overview of a pilot project that was run through ELIXIR to develop a training program to teach foundation computing and data skills to researchers.\nThe authors give context of ELIXIR's mission and aims and how the pilot was put together and rolled out. It also provides information about the pilot's impact, specifically mentioning subsequent activities that took place because of the project.\nGiven that this is an opinion article, it would be interesting to learn more about what ELIXIR did before employing the Carpentries model in terms of computational and data training. What other training platforms or methodologies have they tried and how do the Carpentries compare to other models they've employed. It would also be very useful if the authors could explain exactly what aspects of the Carpentries appealed to ELIXIR specifically and caused them to adopt it during the pilot. They did describe both Software and Data Carpentry, but it would be useful to explicitly describe why the Carpentries had such an appealing fit for ELIXIR’s need. There are many organisations these days that resemble the ELIXIR structure of hub and nodes, not only in the life sciences but also in nanoscience, mathematics and statistics, and other areas. 
This kind of information may help those who are not familiar with the Carpentries to evaluate whether a similar approach for developing computational capacity in their organisations may be relevant to their specific contexts.\nIn some cases the authors use words such as \"community of experts\", \"experienced instructor\" and \"perfected\". Two of the core philosophies of the Carpentries are openness and collaboration. Although the Carpentries include a large number of experts and experienced instructors, the community embraces anyone who can contribute in any way and is by no means an elitist club of experts. In many cases lesson contributions are made by learners or newly qualified instructors. The Carpentries are specifically open and welcoming to anyone, not only the experts. The quality of the lesson material could equally be attributed to the collaborative development process, peer review, and repetitive use in workshop settings, and to leadership in its development by experts who often also play the role of lesson maintainers. The Carpentries are ever-evolving organisations that constantly learn from their community of instructors and learners, as well as from research about learning and teaching, in order to offer more relevant training, to improve the lesson material, the format of the workshops, and the teaching methodologies. 
Instead of describing workshops as having been \"perfected\", they could rather be described as \"constantly evolving to incorporate the latest research on teaching and learning\" or something to this effect.\nThe authors could reference the original Data Carpentry publication where it indicates that Data Carpentry started running workshops in 2014 (http://ijdc.net/index.php/ijdc/article/view/10.1.135/386).\nThe authors should note that SQL was removed as a core topic from the Software Carpentry curriculum in May 2015 (https://software-carpentry.org/blog/2015/03/and-now-we-are-three.html).\nAnother interesting datapoint might be to show how many of the newly trained instructors delivered the training that followed after completion of the pilot action. Were ELIXIR members teaching ELIXIR workshops? Were they teaching at non-ELIXIR workshops, as one of the outcomes of the pilot was to help build instructors that could serve the broader community? The authors specifically stated that this was one of the intentions of the pilot as well.\nIn the section under “Outreach”, the authors mention that the Pilot commenced in late 2014 but under the section “Pilot action overview and goals” it is stated that the pilot ran from March 2015. Could the authors please clarify?\nIn a study done by Beth Duckles in 2015, Carpentry instructors were asked what personal and professional benefits were associated with being involved in the Carpentries. This report is a valuable reference to provide more clarity on the reason for instructors volunteering to teach. A number of reasons for teaching were listed, including but not limited to (1) having the opportunity to travel, (2) building their resumé (teaching experience), (3) enjoying being part of a community, (4) improving their teaching skills and learning from other instructors, (5) learning to communicate and present more efficiently, and more. 
The authors may want to provide more concrete reasons for why instructors teach and reference the report available at https://software-carpentry.org/files/bib/duckles-instructor-engagement-2016.pdf\nMoving the section on Software and Data Carpentry before the section about ELIXIR and the Carpentries may make more sense logically.\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? Partly\n\nAre all factual statements correct and adequately supported by citations? Partly\n\nAre arguments sufficiently supported by evidence from the published literature? Partly\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Yes",
"responses": []
},
{
"id": "24016",
"date": "21 Jul 2017",
"name": "Laurent Gatto",
"expertise": [
"Computational biology",
"proteomics",
"data science",
"research software development",
"reproducible research."
],
"suggestion": "Approved",
"report": "Approved\n\nThe manuscript describes the efforts within the ELIXIR project to improve teaching of essential data analysis skills that are currently lacking in the research community. To do so, the Pilot action for \"Working up and building the foundation for Data Carpentry (DC) and Software Carpentry (SC) within ELIXIR\" aims to:\n\n• launch DC and SC workshops under the umbrella of ELIXIR;\n\n• benefit from the existing qualified instructor pool and support the training of a pool of instructors within the ELIXIR community;\n\n• contribute to the DC/SC community in general by growing the number of instructors (point above); and\n\n• develop or adapt new material dedicated to life sciences.\n\nThe manuscript reports on this pilot action. The article also provides a good overview of the interaction between ELIXIR and the Carpentries, and how the latter can contribute to the research community through specific organisations.\nI have two specific suggestions that could possibly further improve the paper.\nFirstly, there are several references to roles without ever specifying who these people were. Some clues are given in the author contributions section, but it would be useful to name those contributors directly. 
For example in\n\"The close collaboration between the UK Node and the SSI substantially facilitated the delivery of the Pilot - one of the key people involved in writing the proposal and delivering the Pilot was the training lead at the SSI at the time, and the deputy head of the UK Node was a co-investigator at the SSI.\"\nit would be helpful to know who the person was that facilitated the collaboration between the UK Node and the SSI.\nSimilarly, in\n\"We sent a request for volunteers (one per Node) to step in and become a SWC/DC Coordinator for their Nodes.\"\nit would be useful to name those volunteers.\nGetting credit for such efforts is important to support the kind of activities that are promoted by the pilot action, as it relies, as do Carpentries workshops themselves, on volunteering. In addition, I feel that being explicit as to who did what would help consolidate a wider community around the effort.\nMy second point relates to the assessment of success, both in terms of new instructors and participants. Do the authors have data on whether the newly trained instructors have been helpers or (lead) instructors, or have organised workshops themselves? And have the authors been able to follow up with workshop participants to assess to what extent they apply what they have learnt?\nI spotted one typo in the introduction: \"It coordinates and sustains bioinformatics resources across its member states and help[s] researchers ...\".\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Partly\n\nAre arguments sufficiently supported by evidence from the published literature? Yes\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Partly",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-1040
|
https://f1000research.com/articles/6-577/v1
|
26 Apr 17
|
{
"type": "Antibody Validation Article",
"title": "Validation of commercial ERK antibodies against the ERK orthologue of the scleractinian coral Stylophora pistillata",
"authors": [
"Lucile Courtial",
"Vincent Picco",
"Gilles Pagès",
"Christine Ferrier-Pagès",
"Lucile Courtial",
"Gilles Pagès"
],
"abstract": "The extracellular signal-regulated protein kinase (ERK) signalling pathway controls key cellular processes, such as cell cycle regulation, cell fate determination and the response to external stressors. Although ERK functions are well studied in a variety of living organisms ranging from yeast to mammals, its functions in corals are still poorly known. The present work aims to give practical tools to study the expression level of ERK protein and the activity of the ERK signalling pathway in corals. The antibody characterisation experiment was performed five times and identical results were obtained. The present study validated the immune-reactivity of commercially available antibodies directed against ERK and its phosphorylated/activated forms on protein extracts of the reef-building coral Stylophora pistillata.",
"keywords": [
"Antibody validation",
"ERK",
"Corals",
"MAPK"
],
"content": "Introduction\n\nMitogen-activated protein kinases (MAPKs) are highly conserved proteins involved in signalling pathways and control key cellular processes such as proliferation, differentiation, migration, survival and apoptosis (Dhillon et al., 2007). The MAPK gene family encompasses three major subfamilies: the extracellular signal-regulated kinase (ERK), p38/HOG and c-Jun N-terminal kinase (JNK) groups. The ERK family is the most studied in mammals (Boulton et al., 1990; Dhillon et al., 2007) because it is involved in meiosis, mitosis and post-mitotic functions in differentiated cells, as well as in the oxidative stress response and wound healing (Johnson & Lapadat, 2002; Matsubayashi et al., 2004; Runchel et al., 2011). The ERK gene family is evolutionarily conserved and is found in all eukaryotes, including yeasts, plants, vertebrates and anthozoans (Chen et al., 2001; Widmann et al., 1999). Although recent molecular studies have shown the existence of ERK genes in different coral species (Mayfield et al., 2010; Siboni et al., 2012; van de Water et al., 2015), ERK activity and specific functions are not yet clearly defined. ERK activation occurs through phosphorylation of the Threonine and Tyrosine residues of an ERK-specific TEY motif by the upstream kinases of ERK, the mitogen-activated protein kinase kinase (MAPKK or MEK). ERK phosphorylation on these residues is classically considered the most appropriate readout for the activity of the ERK signalling pathway. However, it has never been monitored in corals. Overall, MAPK activities in corals have only been investigated once, in a study focusing on the JNK subfamily (Courtial et al., 2017).\n\nIn this work, we used the scleractinian coral Stylophora pistillata, a very abundant species in most tropical reefs (Veron & Stafford-Smith, 2000). We applied the same protocol as in Courtial et al.
(2017) to demonstrate the efficiency of antibodies directed against the mammalian phosphorylated forms of ERK (pERK) and total ERK to detect the ERK orthologs in S. pistillata (Table 1). According to the manufacturer’s instructions, the antibody used in this study and directed against the Thr2020/Tyr204 di-phosphorylated active ERK (Thermo Scientific Pierce; MA5-15174) showed reactivity with fruit fly, human, mink, mouse, non-human primate, pig, rat and zebrafish. The immunogen used to generate this rabbit IgG monoclonal antibody was a synthetic phosphopeptide corresponding to residues surrounding the phospho-Thr202/Tyr204 of the human p44/ERK1 MAP kinase. This antibody is not cross-reactive with the corresponding phosphorylated residues of either JNK/SAPK or p38. The ERK1/ERK2 antibody (Thermo Scientific Pierce; MA5-15605) used in the study previously showed reactivity with human and mouse samples. The immunogen used to generate this mouse IgG2b monoclonal antibody was a purified recombinant fragment of human MAPK.\n\n\nMethods\n\nNubbins of Stylophora pistillata were collected from five mother colonies maintained in the aquaria facilities of the Centre Scientifique de Monaco. Two small nubbins were cut from each mother colony, and were allowed to heal for four weeks in 15 L open system tanks before the experiments. Corals were maintained in the same conditions as the mother colonies, i.e. at 25°C, under a photosynthetic active radiation of 200 µmol photon.m-2.s-1 provided by 400 W metal halide lamps (HPIT, Philips) and were fed twice a week with Artemia salina nauplii. Seawater in the tanks was continuously renewed at a rate of 10 L.h-1.\n\nImmortalized skin fibroblasts (BJ-EHLT cells) were kindly provided by E. 
Gilson’s lab (IRCAN) and cultured in Dulbecco's Modified Eagle's Medium (Invitrogen, Villebon-sur-Yvette, France) supplemented with 10% heat-inactivated fetal calf serum (Dutscher, Brumath, France) at 37°C in an atmosphere of 5% CO2, as previously described (Biroccio et al., 2013).\n\nIncubations were performed in 100 mL beakers containing one coral nubbin each, and filled with 40 mL of 0.45 μm filtered seawater. They were placed in the dark for one hour in either a control condition containing 0.005% DMSO (vehicle) or a condition with 5 μmol.L-1 U0126 (Selleck Chemicals), a MEK inhibitor (Tang et al., 2003). The incubation medium was continuously stirred using magnetic stirrers at a constant temperature of 25°C. At the end of the incubation, nubbins were frozen and kept at –80°C prior to western blot analysis.\n\nImmuno-detections were performed as in Courtial et al. (2017; Table 2 and Table 3). Briefly, coral tissue was removed from the skeleton in 1 mL Laemmli buffer (1.5 X, Laemmli, 1970) using an air-pick. Samples were then sonicated for 30 seconds, and centrifuged (3 × 5 minutes at 15 000 g) to remove the lipid supernatant and debris. Fibroblasts were washed twice in phosphate buffered saline solution (PBS), lysed in Laemmli buffer directly in the dishes and sonicated for 30 seconds. The total protein concentration of all samples was determined using a BCA Protein Assay Kit (Thermo Fisher Scientific), according to the manufacturer’s recommendation. 1,4-Dithiothreitol (1 mM) and bromophenol blue (0.1%) were added to the samples, which were then heated for 5 minutes at 95°C.\n\n60 μg of proteins were separated on 10% polyacrylamide gels at 300 mA and 110 V at room temperature. Proteins were then transferred onto a PVDF membrane at 4°C overnight in Dunn’s transfer buffer at 200 mA.
After a rinse in distilled water, membranes were saturated for 30 minutes in PBS - 3% low fat milk, rinsed in PBS and incubated with primary antibodies diluted in PBS - 1% low fat milk at 4°C overnight. The antibody directed against Thr2020/Tyr204 di-phosphorylated active ERK was from Thermo Scientific Pierce (rabbit monoclonal; MA5-15174; batch no. OC1680806); the anti-ERK1/2 antibody was from Thermo Scientific Pierce (mouse monoclonal; MA5-15605; batch no. PH1895491). After extensive washing in PBS – 0.1% Tween 20, membranes were incubated for 2 hours at room temperature in the simultaneous presence of IRDye 680RD goat anti-mouse (925-68070) and IRDye 800CW goat anti-rabbit (925-32211; Li-COR Biotechnology GmbH, Bad Homburg, Germany) secondary antibodies, or with anti-mouse and anti-rabbit HRP-conjugated antibody. Another set of extensive rinsing in PBS – 0.1% Tween 20 was performed before membranes were imaged with an Odyssey device (LI-COR Biosciences, Lincoln, Nebraska) to detect fluorescence and HRP activity using Millipore ECL.\n\nDensitometric analysis of the western blots was performed using Image Studio v2.1 software (Li-COR Biosciences). Intensity of the pERK signal was normalized to the intensity of ERK signal. The relative intensities between control and inhibitor conditions were compared using a t-test. Statistical analysis was done using the software Graphpad Prism v5.03.\n\n\nResults and discussion\n\nIn order to confirm the presence of an ERK ortholog in corals, the human protein sequence of ERK1 (NP_001035145) was compared to the transcriptome database of Stylophora pistillata using the BLAST software (Altschul et al., 1990; Liew et al., 2014). An open reading frame was retrieved from the best hit sequence with a predicted molecular weight of 42 kDa (Spi_isotig05348). This sequence (hereafter referred to as Spi-ERK for S. 
pistillata ERK) is the only one that shows a homology as high as 81%, 80% and 78% with the protein sequences of the cnidarians Nematostella vectensis ERK (Nv-ERK; XP_001629498.1), Hydra vulgaris ERK (Hv-ERK; XP_002154499.3) and the human MAPK3/ERK1 (Hs-ERK1), respectively (Figure 1) (Krishna et al., 2013; Putnam et al., 2007). These sequences all contain both the conserved kinase domains (Hanks & Hunter, 1995) and the TEY motif of the catalytic domain, which is unique for ERK orthologs (Davis, 2000; Figure 1). An interesting point to note is that a unique sequence showing these features is present in the N. vectensis and H. vulgaris genomes, as well as in the S. pistillata transcriptome database. This result suggests that a single ortholog of ERK is present in these cnidarians, as opposed to the two genes encoding ERKs in most mammalian genomes (Ip & Davis, 1998). Furthermore, based on the high level of sequence conservation between distant species (Hanks & Hunter, 1995), antibodies directed against portions of the human ERK proteins may recognize ERKs from other species. Accordingly, we detected a single immune-reactive band with the total-ERK antibody by western blot on S. pistillata extracts (Figure 2A and Figure S1). Spi-ERK should retain the mechanism of activation by phosphorylation of the Threonine and the Tyrosine residues of the ERK-specific TEY motif. Hence, the MA5-15174 antibody directed against the phosphorylated Thr202 and the Tyr204 (i.e. the phosphorylated TEY motif) should detect a phosphorylated TEY motif of Spi-ERK (phospho-ERK). This is consistent with what we observed, as we detected a unique immune-reactive band of approximately 40 kDa with both antibodies (Figure 2A).\n\nThe ERK orthologs of Stylophora pistillata (Spi-ERK), Nematostella vectensis (Nv-ERK), Hydra vulgaris (Hv-ERK), and the human MAPK3/ERK1 (Hs-ERK1) protein sequences are shown. The ERK-specific TEY motif is highlighted in red.
The eleven conserved kinase domains are underlined.\n\n(A) Fluorescent immunoblot revealing activated (pERK) and total forms of ERK (ERK) present in Stylophora pistillata nubbins. Molecular weight standards in kilodaltons (kDa) are indicated on the left side of the figure. (B) Immunoblot performed with ERK and pERK antibodies on protein extracts from coral nubbins incubated in the absence (Control) or presence of the MEK inhibitor U0126. Densitometric analysis of activated ERK intensities is presented on the right of the figure. The medians and standard deviations of three independent experiments are presented (***, p<0.01, t-test).\n\nInterestingly, the fluorescent immunoblot technique showed that the bands detected with the phosphorylated- and the total-ERK antibodies mostly co-migrate, suggesting that the same protein is detected (Figure 2A). The slight electrophoretic migration shift of the band detected with the anti-phosphorylated ERK antibody would be consistent with the phosphorylation of the threonine and tyrosine residues of the TEY motif as previously described (Aoki et al., 2011). These results suggest that ERK and its phosphorylated form are correctly recognized by the antibodies.\n\nRNA interference (RNAi) techniques are not available in coral, and the confirmation that the immune reactive bands observed here specifically correspond to ERK could not be obtained through this medium. In order to test the specificity of the antibodies, we therefore used U0126, a very potent and selective inhibitor of MEK (Bain et al., 2007). U0126 was previously shown to efficiently block MEK activity in a wide variety of organisms, including cnidarians (Hasse et al., 2014; Picco et al., 2007; Röttinger et al., 2004). When the inhibitor was added to the seawater, the intensity of the band detected by the anti-total ERK antibody did not vary, while the intensity of the band detected with the anti-phosphorylated ERK antibody was significantly reduced (Figure 2B and Figure S1).
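The densitometric read-out described above (pERK signal normalized to the total-ERK signal, then U0126-treated samples compared with controls by t-test) can be sketched in Python. This is a minimal illustration, not the authors' analysis: the intensity values are hypothetical placeholders for Image Studio band measurements, the helper functions are ours, and the paper itself used GraphPad Prism for the statistics (a Welch's t statistic is implemented here as a stand-in).

```python
# Minimal sketch of the normalization and comparison step, with made-up
# band intensities (arbitrary fluorescence units) standing in for the
# Image Studio densitometry values; the real data are in Figure S1.
from statistics import mean, stdev


def normalized_ratios(perk, erk):
    """Normalize each pERK band intensity to its matching total-ERK band."""
    return [p / e for p, e in zip(perk, erk)]


def welch_t(a, b):
    """Welch's two-sample t statistic (unequal variances assumed)."""
    na, nb = len(a), len(b)
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / ((va / na + vb / nb) ** 0.5)


# Three independent replicates per condition, as in Figure 2B.
control = normalized_ratios(perk=[1.10, 0.95, 1.05], erk=[1.00, 1.00, 1.00])
u0126 = normalized_ratios(perk=[0.22, 0.18, 0.25], erk=[1.00, 1.00, 1.00])

# Express treated ratios as a percentage of the control mean ("% of control").
pct_of_control = [100 * r / mean(control) for r in u0126]
t = welch_t(control, u0126)
```

With a strong reduction of the normalized pERK ratio under the inhibitor, as the article reports, the resulting t statistic is large and the treated samples fall to a small fraction of the control mean.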
Altogether, our results strongly suggest that the proteins detected with the two antibodies were ERK and pERK.\n\nTo assess the performance of these antibodies, we compared the signal obtained on S. pistillata and human fibroblast protein extracts (Figure 3 and Figure S2). We loaded on the same gel 10 µg of fibroblast total protein extract and different amounts of S. pistillata extracts (ranging from 80 to 10 µg). A signal comparable to the one obtained with the fibroblast extract was observed using 40 µg of coral proteins for both antibodies. This suggests that the affinity of the antibodies towards the coral proteins may be lower than for their human counterparts.\n\nImmunoblot performed with anti-ERK and anti-phospho-ERK on total protein extracts of human fibroblasts (BJ) and Stylophora pistillata. The amount of protein loaded in each lane is indicated on the figure.\n\n\nConclusion\n\nThis work showed that MA5-15174 and MA5-15605 are two specific antibodies that can be used to quantitatively assess Stylophora pistillata ERK phosphorylation/activity in different experimental or environmental conditions. We demonstrated the specificity of these antibodies and their good affinity towards their coral targets. This work therefore provides the coral research community with a potent tool for the analysis of the activity of a signalling pathway involved in a wide variety of biological processes.\n\n\nData availability\n\nFigure S1. Uncropped blot images for Figure 2 and supplementary replicates. (A) Biological replicates of fluorescent immunoblots performed in control conditions (Ct) are shown (Replicates 1 and 2). The portions of the images used in the main text are outlined. (B) Biological replicates of immunoblots performed on protein extracts from coral nubbins incubated in the absence (Control) or presence of the MEK inhibitor U0126 (UO) (Replicates 1 to 5). The portions of the images used in the main text are outlined.
doi: 10.5256/f1000research.11365.d159188 (Courtial et al., 2017a)\n\nFigure S2. Uncropped blot images for Figure 3. The portions of the images used in the main text are outlined. doi: 10.5256/f1000research.11365.d159189 (Courtial et al., 2017b)",
"appendix": "Author contributions\n\n\n\nCFP, GP, LC and VP conceived and designed the experiments. LC and VP performed the experiments. CFP, GP, LC and VP analyzed the data. CFP and GP contributed reagents/materials/analysis tools. CFP, GP, LC and VP wrote the paper.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nFinancial support to CFP, GP, LC and VP was provided by the Centre Scientifique de Monaco and Pierre and Marie Curie University.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nThe authors thank Y. Cormerais for his help in the experimental setup, N. Caminiti-Seconds for the first coral extracts and assays as well as Pr. Denis Allemand, Director of the Centre Scientifique de Monaco for scientific support.\n\n\nReferences\n\nAltschul SF, Gish W, Miller W, et al.: Basic local alignment search tool. J Mol Biol. 1990; 215(3): 403–410. PubMed Abstract | Publisher Full Text\n\nAoki K, Yamada M, Kunida K, et al.: Processive phosphorylation of ERK MAP kinase in mammalian cells. Proc Natl Acad Sci U S A. 2011; 108(31): 12675–12680. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBain J, Plater L, Elliott M, et al.: The selectivity of protein kinase inhibitors: a further update. Biochem J. 2007; 408(3): 297–315. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBiroccio A, Cherfils-Vicini J, Augereau A, et al.: TRF2 inhibits a cell-extrinsic pathway through which natural killer cells eliminate cancer cells. Nat Cell Biol. 2013; 15(7): 818–828. PubMed Abstract | Publisher Full Text\n\nBoulton TG, Yancopoulos GD, Gregory JS, et al.: An insulin-stimulated protein kinase similar to yeast kinases involved in cell cycle control. Science. 1990; 249(4964): 64–67. PubMed Abstract | Publisher Full Text\n\nChen Z, Gibson TB, Robinson F, et al.: MAP kinases. Chem Rev. 2001; 101(8): 2449–2476. 
PubMed Abstract | Publisher Full Text\n\nCourtial L, Picco V, Gorver R, et al.: The c-Jun N-terminal kinase prevents oxidative stress induced by UV and thermal stresses in corals and human cells. Sci Rep. 2017; 7: 45713. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCourtial L, Picco V, Pagès G, et al.: Dataset 1 in: Validation of commercial ERK antibodies against the ERK orthologue of the scleractinian coral Stylophora pistillata. F1000Research. 2017a. Data Source\n\nCourtial L, Picco V, Pagès G, et al.: Dataset 2 in: Validation of commercial ERK antibodies against the ERK orthologue of the scleractinian coral Stylophora pistillata. F1000Research. 2017b. Data Source\n\nDavis RJ: Signal transduction by the JNK group of MAP kinases. Cell. 2000; 103(2): 239–252. PubMed Abstract | Publisher Full Text\n\nDhillon AS, Hagan S, Rath O, et al.: MAP kinase signalling pathways in cancer. Oncogene. 2007; 26(22): 3279–90. PubMed Abstract | Publisher Full Text\n\nHanks SK, Hunter T: Protein kinases 6. The eukaryotic protein kinase superfamily: kinase (catalytic) domain structure and classification. FASEB J. 1995; 9(8): 576–596. PubMed Abstract\n\nHasse C, Holz O, Lange E, et al.: FGFR-ERK signaling is an essential component of tissue separation. Dev Biol. 2014; 395(1): 154–166. PubMed Abstract | Publisher Full Text\n\nIp YT, Davis RJ: Signal transduction by the c-Jun N-terminal kinase (JNK)--from inflammation to development. Curr Opin Cell Biol. 1998; 10(2): 205–219. PubMed Abstract | Publisher Full Text\n\nJohnson GL, Lapadat R: Mitogen-activated protein kinase pathways mediated by ERK, JNK, and p38 protein kinases. Science. 2002; 298(5600): 1911–1912. PubMed Abstract | Publisher Full Text\n\nKrishna S, Nair A, Cheedipudi S, et al.: Deep sequencing reveals unique small RNA repertoire that is regulated during head regeneration in Hydra magnipapillata. Nucleic Acids Res. 2013; 41(1): 599–616. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nLaemmli UK: Cleavage of structural proteins during the assembly of the head of bacteriophage T4. Nature. 1970; 227(5259): 680–685. PubMed Abstract | Publisher Full Text\n\nLiew YJ, Aranda M, Carr A, et al.: Identification of microRNAs in the coral Stylophora pistillata. PLoS One. 2014; 9(3): e91101. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMatsubayashi Y, Ebisuya M, Honjoh S, et al.: ERK activation propagates in epithelial cell sheets and regulates their migration during wound healing. Curr Biol. 2004; 14(8): 731–735. PubMed Abstract | Publisher Full Text\n\nMayfield AB, Hsiao YY, Fan TY, et al.: Evaluating the temporal stability of stress-activated protein kinase and cytoskeleton gene expression in the Pacific reef corals Pocillopora damicornis and Seriatopora hystrix. J Exp Mar Bio Ecol. 2010; 395(1–2): 215–222. Publisher Full Text\n\nPicco V, Hudson C, Yasuo H: Ephrin-Eph signalling drives the asymmetric division of notochord/neural precursors in Ciona embryos. Development. 2007; 134(8): 1491–1497. PubMed Abstract | Publisher Full Text\n\nPutnam NH, Srivastava M, Hellsten U, et al.: Sea Anemone Genome Reveals Ancestral Eumetazoan Gene Repertoire and Genomic Organization. Science. 2007; 317(5834): 86–94. PubMed Abstract | Publisher Full Text\n\nRöttinger E, Besnardeau L, Lepage T: A Raf/MEK/ERK signaling pathway is required for development of the sea urchin embryo micromere lineage through phosphorylation of the transcription factor Ets. Development. 2004; 131(5): 1075–1087. PubMed Abstract | Publisher Full Text\n\nRunchel C, Matsuzawa A, Ichijo H: Mitogen-activated protein kinases in mammalian oxidative stress responses. Antioxid Redox Signal. 2011; 15(1): 205–218. 
PubMed Abstract | Publisher Full Text\n\nSiboni N, Abrego D, Seneca F, et al.: Using bacterial extract along with differential gene expression in Acropora millepora Larvae to decouple the processes of attachment and metamorphosis. PLoS One. 2012; 7(5): e37774. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTang QQ, Otto TC, Lane MD: Mitotic clonal expansion: A synchronous process required for adipogenesis. Proc Natl Acad Sci U S A. 2003; 100(1): 44–49. PubMed Abstract | Publisher Full Text | Free Full Text\n\nvan de Water JA, Leggat W, Bourne DG, et al.: Elevated seawater temperatures have a limited impact on the coral immune response following physical damage. Hydrobiologia. 2015; 759(1): 201–214. Publisher Full Text\n\nVeron JE, Stafford-Smith M: Corals of the World. Volumes 1–3. Aust Inst Mar Sci. Townsville, Aust. 2000. Reference Source\n\nWidmann C, Gibson S, Jarpe MB, et al.: Mitogen-activated protein kinase: conservation of a three-kinase module from yeast to human. Physiol Rev. 1999; 79(1): 143–180. PubMed Abstract"
}
|
[
{
"id": "22707",
"date": "30 May 2017",
"name": "Andrea Pitzschke",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThe manuscript provides first insight into a putative MAPK in the coral Stylophora pistillata.\n\nExperiments include protein extraction and immunoblot analysis.\n\nOverall: The experiments that had been performed are properly designed. However, the manuscript lacks sufficiently-detailed information, as well as controls (protein loading). Conclusions are premature or should be re-phrased.\n\nDetailed points of criticism:\n\nTitle “orthologue” is inappropriate. Should be “homologue”.\n\nMethods\nP3 “small rubbins selected” – please be more specific about size and sampling: “tissue removed from coral”: be more specific. Tissue primarily from the surface, how deep was the cut into the material? (I suggest to include a schematic figure incl. scale-bar). This information is important because inhibitors (e.g. UO126) will only diffuse over a short distance, i.e. not reach deeper layers.\nP4: “extensive washing”: duration and number of solution changes missing\nFig.2B: “% or control” rather OF control. The error bar in the control sample is irrelevant, as it is defined as strictly 100%. There is no documentation of protein loading (e.g. Coomassie-stained membrane after immunodetection). The U126-independent intensity of the ERK-Signal is insufficient as control.\n\nConclusions: “…antibody can be used…in different experimental or environmental conditions” This conclusion is premature. As a minimum, the authors should perform an induction experiment.
The inhibitory approach (U126) only evidences that a MAPKK is the upstream regulator. Coral research community will only benefit from the antibody and the current study if dynamic ERK activity responses can be monitored.\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nAre sufficient details of materials, methods and analysis provided to allow replication by others? Partly\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": [
{
"c_id": "2844",
"date": "03 Jul 2017",
"name": "Lucile Courtial",
"role": "Author Response",
"response": "Overall: The experiments that had been performed are properly designed. However, the manuscript lacks sufficiently-detailed information, as well as controls (protein loading). Conclusions are premature or should be re-phrased. We thank Dr. Pitzschke for her comments and suggestions. We have added information to the manuscript and performed an additional experiment to answer her concerns as detailed below. Detailed points of criticism: Title “orthologue” is inappropriate. Should be “homologue”. We have used the term “orthologue” in our manuscript in its definition notably given by Walter Fitch (Fitch 1970, 2000), that is: “Orthology is that relationship where sequence divergence follows speciation, that is, where the common ancestor of the two genes lies in the common ancestor of the taxa from which the two sequences were obtained”. Dr. Pitzschke is right, the Spi- and Hs-ERK proteins are homologues but, on top of that, they also are orthologues. We therefore still think that the term “orthologue” is more accurate in our case. Methods P3 “small nubbins selected” – please be more specific about size and sampling: “tissue removed from coral”: be more specific. Tissue primarily from the surface, how deep was the cut into the material? To be more specific, we added the following sentences in the revised manuscript: P2: “Two small nubbins (3-5 cm long) were cut off from each mother colony and were allowed to heal for four weeks in 15 L open system tanks before the experiments.” P3: “Briefly, nubbins were airbrushed in 1 mL Laemmli buffer (i.e. lysing buffer, 1.5 X, Laemmli 1970) using an air-pick (5 bars) to remove the totality of the tissues surrounding the skeleton.” This information is important because inhibitors (e.g. UO126) will only diffuse over a short distance, i.e. not reach deeper layers. We added a sentence to prevent further doubts concerning the bioavailability of U0126.
P6: “In order to test the specificity of the antibodies, we therefore used U0126, a very potent and selective inhibitor of MEK (Bain et al. 2007). The limited thickness of the animal tissue covering the skeleton and the very large surface of contact of both ectoderm and endoderm with the seawater render S. pistillata suitable for treatment with drugs directly diluted in the seawater as we previously showed (Courtial et al. 2017) …” P4: “extensive washing”: duration and number of solution changes missing We added precisions in the materials and methods: P4 : “4 x 30 minutes” Fig.2B: “% or control” rather OF control. The error bar in the control sample is irrelevant, as it is defined as strictly 100%. There is no documentation of protein loading (e.g. Coomassie-stained membrane after immunodetection). The U126-independent intensity of the ERK-Signal is insufficient as control. We added the amido black colored membranes in Figure 2 and Figure S1 as a loading control. Conclusions: “…antibody can be used…in different experimental or environmental conditions” This conclusion is premature. As a minimum, the authors should perform an induction experiment. The inhibitory approach (U126) only evidences that a MAPKK is the upstream regulator. Coral research community will only benefit from the antibody and the current study if dynamic ERK activity responses can be monitored. We performed an additional experiment and added a figure and related text in the manuscript to justify our statement (P9)."
}
]
},
{
"id": "23532",
"date": "16 Jun 2017",
"name": "Immacolata Castellano",
"expertise": [
"Reviewer Expertise Biochemistry"
],
"suggestion": "Approved",
"report": "Approved\n\nThe paper by Courtial et al. describes the cross-reactivity of two commercial antibodies produced against the mammalian forms of ERK for the scleractinian coral Stylophora pistillata. This should open new perspectives for the study of ERK signalling in response to different environmental cues.\n\nThe paper is clear and well written; however, it lacks some references.\nIn the Introduction, the authors should cite other invertebrates where ERK signalling is known to be conserved and regulated by environmental cues, for example Ciona intestinalis (Castellano et al, PLoS One 2014; Castellano et al, Open Biology 2015). Similarly, in Results and Discussion, when the authors say that .. \"a single orthologue of ERK is present in these cnidarians, as opposed to the two genes encoding ERKs in most mammalian genomes”, they should specify that also in other invertebrates, only one ERK form was found (Russo et al, JBC 2004; Castellano et al, PLoS One 2014). Also the use of the MEK inhibitor U0126 was assessed in C. intestinalis (Castellano et al, Open Biology 2015).\n\nFinally, the authors should correct some errors in the text, i.e. change “Thr2020/204” to “Thr202/204”, and “through this medium” to “through this method”.\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nAre sufficient details of materials, methods and analysis provided to allow replication by others?
Yes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": [
{
"c_id": "2843",
"date": "03 Jul 2017",
"name": "Lucile Courtial",
"role": "Author Response",
"response": "The paper by Courtial et al. describes the cross-reactivity of two commercial antibodies produced against the mammalian forms of ERK for the scleractinian coral Stylophora pistillata. This should open new perspectives for the study of ERK signalling in response to different environmental cues. The paper is clear and well written, however it lacks some references. We thank Dr. Castellano for her comments and suggestions. We have added information to the manuscript to answer her concerns as detailed below. In the Introduction, the authors should cite other invertebrates where ERK signalling is known to be conserved and regulated by environmental cues, for example Ciona intestinalis (Castellano et al, PLoS One 2014, Castellano et al, Open Biology 2015). We added a clarification and the reference in the text: “The ERK gene family is evolutionarily conserved and is found in all eukaryotes, including yeasts, plants, vertebrates and invertebrates (Widmann et al. 1999; Chen et al. 2001; Castellano et al. 2014).” Similarly, in Results and Discussion, when the authors say that .. \"a single orthologue of ERK is present in these cnidarians, as opposed to the two genes encoding ERKs in most mammalian genomes”, they should specify that also in other invertebrates, only one ERK form was found (Russo et al, JBC 2004; Castellano et al, PLoS One 2014). We added the clarification in the text: P4 “This result suggests that a single ortholog of ERK is present in these cnidarians, consistent with previous work where only one ERK ortholog was found (Russo et al. 2004; Castellano et al. 2014), but as opposed to the two genes encoding ERKs in most mammalian genomes (Ip and Davis 1998).” Also the use of the MEK inhibitor U0126 was assessed in C. intestinalis (Castellano et al, Open Biology 2015). Despite an extensive search of the reference cited by Dr. Castellano, we could not find any experiment using U0126 in this paper.
The aforementioned reference instead reports the use of a dual specificity phosphatase inhibitor. Moreover, the work of Picco et al. (2007) cited in the manuscript already reports the use of U0126 in Ciona embryos. We therefore did not include the suggested citation in the text. Finally the authors pay attention along the text to some errors, i.e. change “Thr2020/204” with Thr202/204, and “through this medium” with “through this method”. We changed the errors in the revised manuscript."
}
]
},
{
"id": "23149",
"date": "22 Jun 2017",
"name": "María L. Parages",
"expertise": [
"Reviewer Expertise Molecular Ecology"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nIn this manuscript the authors Courtial et al. validate the use of commercial ERK antibodies for the detection of MAPK-like proteins in coral. Although the paper is well written and has the quality to be indexed, a few issues should be figured out before its final acceptance.\n\nIn terms of sample preparation, and to allow future replication by other researchers, it would be appropriate to give more information about how to prepare the samples. For example, “Briefly, coral tissue was removed from the skeleton in 1 mL Laemmli buffer” so, how much coral tissue is dissolved and resuspended in 1 mL of Laemmli Buffer? Less than 0.5 gr? More?\nWhile it is true that the authors note that immune-detection were performed as in Courtial et al. (2017), and that DTT (Dithiothreitol) and BFB (Bromophenol blue) were added to the samples and heated (5 minutes at 95°) before loading the gels, it is not specified how tissue extraction was performed. I must assume that the Lysis Buffer used was Laemmli Buffer? And in this case, how they have been unable to detect phosphorylated ERK? As far as I know, lysis buffer for phosphorylated proteins usually contains EDTA or EGTA to chelate Mg2+/Ca2+, DTT for reduction of disulfide bonds, serine protease inhibitors (Aprotinin/Leupeptine), and phosphatase inhibitors to block dephosphorylation, like Na orthovanadate or Beta-glycerophosphate (a false substrate for phosphatases), among others… Did they also keep everything ice cold?
Also, it surprises me that they used 3% low-fat milk for membrane blocking, since milk also interferes with phospho-tyrosine detection.\nRegarding the presence of an ERK ortholog in coral, the authors show that the sequence with code Spi_isotig05348 (Spi_ERK) has the best hit with the human ERK protein (NP_0011035145), and they make reference to Liew et al., 2014. I looked for this sequence in that paper, and I could not find it. The authors should state in which database the transcriptome is held, as well as provide the Spi_ERK sequence itself.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nAre sufficient details of materials, methods and analysis provided to allow replication by others? Partly\n\nAre all the source data underlying the results available to ensure full reproducibility? Partly\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": [
{
"c_id": "2842",
"date": "03 Jul 2017",
"name": "Lucile Courtial",
"role": "Author Response",
"response": "In this manuscript the authors Courtial et al. validate the use of commercial ERK antibodies for the detection of MAPK-like proteins in Coral. Although the paper is well written and has the quality to be indexed, a few issues should be figure out before its final acceptance. We thank Dr. Parages for her comments and suggestions. To answer her concerns, we have added information and replaced a citation in the manuscript as detailed bellow. In terms of samples preparation and to allow future replication by others researchers would be appropriate to give more information about how to prepare the samples. For example, “Briefly, coral tissue was removed from the skeleton in 1 mL Laemmli buffer” so, how much coral tissue will be dissolved and resupended it in 1 mL of Laemmli Buffer? Less than 0.5 gr? More? All of the tissue from 3-5cm long S. pistillata nubbins was used. We added the precision in the materials and methods: P4: “Two small nubbins (3-5 cm long) were cut off from each mother colony and were allowed to heal for four weeks in 15 L open system tanks before the experiments.” P5: “Briefly, nubbins were airbrushed in 1 mL Laemmli buffer (2% SDS, 10% glycerol, 50mM Tris HCL pH7) ( Laemmli 1970) using an air-pick (5 bars) to remove the totality of the tissues surrounding the skeleton was removed from coral.” While it is true that the authors note that immune-detection were performed as in Courtial et al. (2017), and that DTT (Dithiothreitol) and BFB (Bromophenol blue) were added to the samples and heated (5 minutes at 95°) before loading the gels, it is not specified how tissue extraction was performed. I must assume that the Lysis Buffer used was Laemmli Buffer? And in this case, how they have been unable to detect phosphorylated ERK? 
As far as I know, a lysis buffer for phosphorylated proteins usually contains EDTA or EGTA to chelate Mg2+/Ca2+, DTT for reduction of disulfide bonds, a serine protease inhibitor (aprotinin/leupeptin), and phosphatase inhibitors to block dephosphorylation, like Na orthovanadate or beta-glycerophosphate (a false substrate for phosphatases), among others… Coral tissues were indeed lysed in Laemmli buffer. To prevent any doubt when reading the methods, we added a precision in the text: P3: “Briefly, nubbins were airbrushed in 1 mL Laemmli buffer (2% SDS, 10% glycerol, 50mM Tris HCl pH7) (Laemmli 1970) using an air-pick (5 bars) to remove the totality of the tissues surrounding the skeleton.” To answer Dr. Parages' concern about possible phosphatase activity in the samples, we would like to point out that, due to its high concentration of SDS, Laemmli buffer is strongly denaturing for proteins, including phosphatases. Therefore, this buffer, commonly used at room temperature, preserves the phosphorylation of proteins without the need to add phosphatase inhibitors (see Picco et al. 2016 for example). Did they also keep everything ice cold? Also, it surprises me that they used 3% low-fat milk for membrane blocking, since milk also interferes with phospho-tyrosine detection. We fully agree with Dr. Parages; the use of milk as a blocking buffer is usually not recommended for the detection of phospho-proteins, as it may contain phospho-proteins that can interact with anti-phospho primary antibodies. However, we have successfully used this blocking buffer in diverse experimental setups, including human cultured cells, ascidian embryos, as well as coral lysates. We used this buffer in the course of this study because it gives far less background noise than any other blocking buffer tested. 
Regarding the presence of an ERK ortholog in coral, the authors show that the sequence with code Spi_isotig05348 (Spi_ERK) has the best hit with the human ERK protein (NP_0011035145), and they make reference to Liew et al., 2014. I looked for this sequence in that paper, and I could not find it. The authors should state in which database the transcriptome is held, as well as provide the Spi_ERK sequence itself. We are grateful to the reviewer for questioning this particular point, as it allowed us to uncover a significant error in the citation we used. The reference for the Spi EST containing the ERK orthologue open reading frame was obtained from the database generated during the study by Karako-Lampert, Zoccola et al. (PLoS One 2014) and not the one by Liew et al. (p.7). The database containing the sequence can be downloaded from this address: http://data.centrescientifique.mc/CSMdata-stylodata.html. As mentioned in the manuscript, the human ERK1 protein sequence was blasted (tblastn) against the Karako-Lampert database using the BLAST tool hosted on a publicly accessible local server of the Centre Scientifique de Monaco (http://data.centrescientifique.mc/blast/blast.php). The correct reference has been added to the manuscript. Due to current major security concerns for our servers, we have chosen not to include the aforementioned URL in the present manuscript. However, this URL is openly disclosed in the Karako-Lampert et al. paper, which should allow readers to access the database without needing to contact the corresponding authors."
}
]
}
] | 1
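The ortholog search discussed in the exchange above (a tblastn of the human ERK1 protein sequence against a coral EST database, keeping the best hit) can be sketched as follows. This is a minimal sketch, assuming standard NCBI BLAST+ with tabular output (-outfmt 6); the file and database names are hypothetical placeholders, not the ones used by the authors.

```python
# Sketch of a tblastn ortholog search: protein query vs. a translated
# nucleotide (EST) database, keeping the highest-scoring subject.
# "human_ERK1.faa" and the database name are hypothetical placeholders.

def tblastn_command(query_faa, est_db, evalue="1e-10"):
    """Build an NCBI BLAST+ tblastn invocation with tabular output."""
    return ["tblastn", "-query", query_faa, "-db", est_db,
            "-evalue", evalue, "-outfmt", "6"]

def best_hit(tabular_lines):
    """Return the subject id of the highest-scoring hit.

    -outfmt 6 columns: qseqid sseqid pident length mismatch gapopen
    qstart qend sstart send evalue bitscore
    """
    rows = [line.split("\t") for line in tabular_lines if line.strip()]
    return max(rows, key=lambda r: float(r[11]))[1]
```

A call such as `subprocess.run(tblastn_command("human_ERK1.faa", "coral_ests"), capture_output=True, text=True, check=True)` would yield the tab-separated hit table on stdout, which `best_hit` can then rank by bitscore.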
|
https://f1000research.com/articles/6-577
|
https://f1000research.com/articles/6-861/v1
|
09 Jun 17
|
{
"type": "Data Note",
"title": "Draft genome sequencing of the sugarcane hybrid SP80-3280",
"authors": [
"Diego Mauricio Riaño-Pachón",
"Lucia Mattiello",
"Lucia Mattiello"
],
"abstract": "Sugarcane commercial cultivar SP80-3280 has been used as a model for genomic analyses in Brazil. Here we present a draft genome sequence employing Illumina TruSeq Synthetic Long reads. The dataset is available from NCBI BioProject with accession PRJNA272769.",
"keywords": [
"sugarcane",
"long reads",
"polyploid",
"genomics"
],
"content": "Introduction\n\nSugarcane is an economically important crop used as source of sugar, ethanol and electricity generation1. Sugarcane has a haploid genome of ~1Gpb, however, modern sugarcane cultivars are polyploids derived from interspecific hybridization between S. officinarum L. and S. spontaneum L., reaching up to 130 chromosomes distributed among ~12 homo(eo)logous groups2,3, with a total genome size reaching 10Gpb4. Its complex genome structure has hampered genome sequencing, assembly and annotation. Partial genomic sequences are available5–8, as well as transcriptome sequences9–11, but there are not whole genome assemblies available to date. Here we used the Illumina TruSeq Synthetic Long Read sequencing technology to survey the genome of cultivar SP80-3280. The generated long reads and their assembly have been made public and will provide useful information for functional genomics studies.\n\n\nMaterials and methods\n\nThe leaf rolls of greenhouse grown, two-month old plants of sugarcane cultivar SP80-3280 (provided by Centro de Tecnologia Canavieira, Piracicaba, São Paulo), were collected and immediately frozen in liquid nitrogen. The plant tissue was ground up to become fine powder, and high molecular weight DNA was extracted from 100 mg of fresh frozen tissue using CTAB (Sigma-Aldrich, USA) and chloroform:isoamyl alcohol (Sigma-Aldrich, USA) as previously described12. 6µg of DNA were sent to Illumina (CA, USA) for DNA sequencing using TruSeq Synthetic long read technology13, through their FastTrack Sequencing Service. Sequencing was performed on an Illumina HiSeq2000 system using paired-end chemistry. Nine long read libraries, each generating approx. 600Mbps, were generated, giving an estimated coverage between 4 and 5 of the monoploid genome. A total of 1,378,917 reads longer than 1.5Kbp, or 5,642,855,018 bases, were generated. 
The underlying 1,966,604,928 short reads amount to 393,320,985,600bp, which would translate to an estimated coverage of 393x of the haploid genome. The maximum read length was 20,918bp, with 36% of the reads being longer than 4.5Kbp. Possible contaminants were removed by comparison against the NCBI's nucleotide database using BLAST14, keeping only the long reads with best hits against Viridiplantae, resulting in 1,224,061 reads useful for assembly. Prior to assembly, reads originating from mitochondria (NC_008360.1) and chloroplast (NC_005878.2) were excluded using mirabait (http://mira-assembler.sourceforge.net/). Reads longer than 1.5Kbp were assembled using the Celera WGS Assembler v8.2, with similar parameters as previously described13 (i.e., 'unitigger=bogart, merSize=31, ovlMinLen=100'), except that the error parameters ovlErrorRate, cnsErrorRate, cgwErrorRate, utgGraphErrorRate, utgGraphErrorLimit, utgMergeErrorRate and utgMergeErrorLimit were left at their default settings. A non-redundant assembly was created using CD-HIT15, merging 100% identical sequences and sub-sequences.\n\n\nData availability\n\nRaw sequencing data are available at NCBI SRA; the long reads with accession number SRX845504, and the underlying short reads with accessions SRX853961 to SRX853969. The SP80-3280 assembly is available with accession number GCA_002018215.1. All data can be found under BioProject PRJNA272769.",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis work was supported by institutional funds from CTBE/CNPEM to DMRP and a Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP) grant to LM (2012/23345-0). The research was developed with support from CENAPAD-SP (Centro Nacional de Processamento de Alto Desempenho em São Paulo), project UNICAMP/FINEP-MCT.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nThe authors are grateful to Larissa Prado da Cruz (CTBE/CNPEM) for assistance with molecular biology procedures.\n\n\nReferences\n\nLong SP, Karp A, Buckeridge MS, et al.: Feedstocks for Biofuels and Bioenergy. In Bioenergy & Sustainability: bridging the gaps. (eds. Souza GM, Victoria RL, Joly CA & Verdade LM), UNESCO. 2015; 302–347. Reference Source\n\nGrivet L, Arruda P: Sugarcane genomics: depicting the complex genome of an important tropical crop. Curr Opin Plant Biol. 2002; 5(2): 122–127. PubMed Abstract | Publisher Full Text\n\nD’Hont A: Unraveling the genome structure of polyploids using FISH and GISH; examples of sugarcane and banana. Cytogenet Genome Res. 2005; 109(1–3): 27–33. PubMed Abstract | Publisher Full Text\n\nLe Cunff L, Garsmeur O, Raboin LM, et al.: Diploid/polyploid syntenic shuttle mapping and haplotype-specific chromosome walking toward a rust resistance gene (Bru1) in highly polyploid sugarcane (2n approximately 12x approximately 115). Genetics. 2008; 180(1): 649–660. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMiller JR, Dilley KA, Harkins DM, et al.: Initial genome sequencing of the sugarcane CP 96-1252 complex hybrid [version 1; referees: 1 approved]. F1000Res. 2017; 6: 688. Publisher Full Text\n\nGrativol C, Regulski M, Bertalan M, et al.: Sugarcane genome sequencing by methylation filtration provides tools for genomic research in the genus Saccharum. 
Plant J. 2014; 79(1): 162–172. PubMed Abstract | Publisher Full Text | Free Full Text\n\nOkura VK, de Souza RS, de Siqueira Tada SF, et al.: BAC-Pool Sequencing and Assembly of 19 Mb of the Complex Sugarcane Genome. Front Plant Sci. 2016; 7: 342. PubMed Abstract | Publisher Full Text | Free Full Text\n\nde Setta N, Monteiro-Vitorello CB, Metcalfe CJ, et al.: Building the sugarcane genome for biotechnology and identifying evolutionary trends. BMC Genomics. 2014; 15(1): 540. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMattiello L, Riaño-Pachón DM, Martins MC, et al.: Physiological and transcriptional analyses of developmental stages along sugarcane leaf. BMC Plant Biol. 2015; 15: 300. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHoang NV, Furtado A, Mason PJ, et al.: A survey of the complex transcriptome from the highly polyploid sugarcane genome using full-length isoform sequencing and de novo assembly from short read sequencing. BMC Genomics. 2017; 18(1): 395. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBelesini AA, Carvalho FMS, Telles BR, et al.: De novo transcriptome assembly of sugarcane leaves submitted to prolonged water-deficit stress. Genet Mol Res. 2017; 16(2). PubMed Abstract | Publisher Full Text\n\nPorebski S, Bailey LG, Baum BR: Modification of a CTAB DNA extraction protocol for plants containing high polysaccharide and polyphenol components. Plant Mol Biol Rep. 1997; 15(1): 8–15. Publisher Full Text\n\nMcCoy RC, Taylor RW, Blauwkamp TA, et al.: Illumina TruSeq synthetic long-reads empower de novo assembly and resolve complex, highly-repetitive transposable elements. PLoS One. 2014; 9(9): e106689. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAltschul SF, Gish W, Miller W, et al.: Basic local alignment search tool. J Mol Biol. 1990; 215(3): 403–410. PubMed Abstract | Publisher Full Text\n\nFu L, Niu B, Zhu Z, et al.: CD-HIT: accelerated for clustering the next-generation sequencing data. 
Bioinformatics. 2012; 28(23): 3150–3152. PubMed Abstract | Publisher Full Text | Free Full Text"
}
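The read and coverage figures quoted in the Data Note can be cross-checked with simple arithmetic. A minimal sketch, assuming the ~1 Gbp monoploid genome size stated in the text:

```python
# Back-of-envelope check of the sequencing statistics reported in the
# Data Note. The ~1 Gbp monoploid genome size is taken from the text.
MONOPLOID_GENOME_BP = 1_000_000_000

long_reads, long_bases = 1_378_917, 5_642_855_018
short_reads, short_bases = 1_966_604_928, 393_320_985_600

mean_long = long_bases / long_reads            # mean synthetic long-read length
mean_short = short_bases / short_reads         # mean underlying short-read length
cov_long = long_bases / MONOPLOID_GENOME_BP    # long-read coverage of monoploid genome
cov_short = short_bases / MONOPLOID_GENOME_BP  # short-read coverage, as reported

print(round(mean_long), mean_short, round(cov_long, 1), round(cov_short))
# → 4092 200.0 5.6 393
```

Note that the long-read total works out to roughly 5.6x of a 1 Gbp monoploid genome, slightly above the 4x to 5x range stated in the note, while the 393x short-read coverage and the 200 bp underlying read length match the reported figures exactly.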
|
[
{
"id": "23398",
"date": "15 Jun 2017",
"name": "Jason Miller",
"expertise": [
"Reviewer Expertise Genome assembly"
],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nSummary:\nThe Data Note, \"Draft genome sequencing of the sugarcane hybrid SP80-3280\", describes a sugarcane genome assembly that is available at NCBI. The TruSeq method was applied to a monoploid sugarcane cultivar to generate a 1.2 gigabase assembly with a 8433 contig N50 according to GenBank. This is the first sugarcane genome assembly so it will be of interest to the field. This data note is especially useful because it describes the sequence filtering by size, blast, mirabit, and cd-hit prior to release.\n\nSuggestions:\n\nThe sentence, “there are not whole genome assemblies available”, probably should say “there are no whole genome assemblies available”. The text could be made clearer by presenting all the statics for underlying short reads before getting to the synthetic long read stats, and by specifying that the blast filter was applied to the long reads. I would appreciate a reference for Celera Assembler, but that is just me.\n\nIs the rationale for creating the dataset(s) clearly described? Yes\n\nAre the protocols appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and materials provided to allow replication by others? Yes\n\nAre the datasets clearly presented in a useable and accessible format? Yes",
"responses": [
{
"c_id": "2832",
"date": "03 Jul 2017",
"name": "Diego Mauricio Riaño-Pachón",
"role": "Author Response",
"response": "Dear Dr. Miller, thank you very much for your review of our data note. We have followed your main suggestions, and they are available as version 2 of the data note. Best regards, Diego"
}
]
},
{
"id": "23667",
"date": "21 Jun 2017",
"name": "Chakravarthi Mohan",
"expertise": [
"Reviewer Expertise Sugarcane genetic engineering",
"transcriptomics"
],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe data note entitled 'Draft genome sequencing of the sugarcane hybrid SP80-3280' is perhaps the first report describing the whole genome of sugarcane, a complex polyploid and its availability in NCBI will be a boon to sugarcane researchers.\nThe study is well planned, executed and well drafted. The data presented here would be particularly useful for functional genomic studies in sugarcane.\n\nIs the rationale for creating the dataset(s) clearly described? Yes\n\nAre the protocols appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and materials provided to allow replication by others? Yes\n\nAre the datasets clearly presented in a useable and accessible format? Yes",
"responses": [
{
"c_id": "2833",
"date": "03 Jul 2017",
"name": "Diego Mauricio Riaño-Pachón",
"role": "Author Response",
"response": "Dear Dr. Mohan, thanks you for your review of our data note. In version 2 of the note we have added links for the genome annotation in addition to the genome assembly. Best regards, Diego"
}
]
}
] | 1
|
https://f1000research.com/articles/6-861
|
https://f1000research.com/articles/6-1034/v1
|
30 Jun 17
|
{
"type": "Research Article",
"title": "Te Waka Kuaka, Rasch analysis of a cultural assessment tool in traumatic brain injury in Māori",
"authors": [
"Hinemoa Elder",
"Karol Czuba",
"Paula Kersten",
"Alfonso Caracuel",
"Kathryn McPherson",
"Karol Czuba",
"Paula Kersten",
"Alfonso Caracuel",
"Kathryn McPherson"
],
"abstract": "Background: The aim was to examine the validity of a new measure, Te Waka Kuaka, in assessing the cultural needs of Māori with traumatic brain injury (TBI). Methods: Māori from around Aotearoa, New Zealand were recruited. 319 people with a history of TBI, their whānau (extended family members), friends, work associates, and interested community members participated. All completed the 46-item measure. Rasch analysis of the data was undertaken. Results: All four subscales; Wā (time), Wāhi (place), Tangata (people) and Wairua practices (activities that strengthen spiritual connection) were unidimensional. Ten items were deleted because they did not fit the model, due to statistically significant disordered thresholds, non-uniform differential item functioning (DIF) and local dependence. Five items were re-scored in the fourth subscale resulting in ordered thresholds. Conclusions: Rasch analysis facilitated a robust validation process of Te Waka Kuaka.",
"keywords": [
"traumatic brain injury",
"Māori",
"Rasch analysis",
"measurement"
],
"content": "Introduction\n\nTraumatic brain injury (TBI) in Māori is a significant health problem. Recent population data shows that Māori youth are three times more likely to sustain clinically significant TBI compared to non-Māori (Feigin et al., 2013). A complicating factor in responding to Māori with TBI has been the lack of understanding of the cultural importance of injury to the brain and head to Māori, given the primacy placed on the head in Māori culture. For instance, ‘he tapu te upoko’ is a well-known saying from Te Ao Māori (the Māori world) which means, the head is sacred, clearly indicating the important ‘place’ of brain injury from a cultural perspective (28th Māori Battalion, 2011). Recent work has explored these concepts and developed a Māori theory and praxis of TBI (Elder, 2013a; Elder, 2013b). What that research found was that the concepts of wā (time), wahi (place), tangata (people) and wairua practices (activities that strengthen the unique connection between Māori people and the universe) were central to Māori in navigating recovery. Indeed, how much time is taken, where assessment and treatment takes place, who is present at assessments, and what culturally salient activities are embedded in these assessments and treatment are well understood by practitioners as being critical to the engagement of Māori whānau, although formal research in these areas has not been conducted. Practice-based evidence also shows that without these factors being implemented, Māori whānau disengage from services and therefore miss out on rehabilitation interventions, leading to compromised outcomes. Allocating enough time when working with Māori has recently been identified as vital to ensuring cultural practices are undertaken and therefore more accurate assessment and recommendations are provided (Elder et al., 2016). 
These aspects of comprehensive assessment of Māori may be in tension with clinical imperatives that emphasize efficiencies of time and prioritize brevity of assessment and treatment.\n\nWhile some needs of patients and relatives after a TBI are held trans-culturally, others depend on the specific social and cultural context in which people live. As tools for the assessment of these needs are influenced by culture, measures adapted from other cultures have shown substantial differences between countries, even if they share historical roots and language (Norup et al., 2015).\n\nDespite some, albeit variable, awareness by health practitioners and researchers of these cultural issues (Harwood, 2010; Harwood et al., 2012), no measures have been developed that might help conceptualize the magnitude and nature of the cultural needs associated with Māori TBI. Such measures should enable tailored responses to these needs and thereby improve recovery outcomes and communication between whānau and clinicians, and therefore improve the quality of assessments. The lack of such measures means that Māori cultural needs in the context of TBI lack recognition and attention; or, where clinicians have some awareness of these needs, the approach is not systematically provided or monitored (Elder, 2012).\n\nMeasures used to monitor recovery and needs post-TBI, such as neuropsychological tests, have been developed elsewhere, and Māori cultural norming and validation in the Māori community have not been carried out, although such work is now underway in the context of the ageing brain (Dudley, 2016). This issue is well recognized as contributing to difficulties in interpreting scores for Māori (Ogden & McFarlane-Nathan, 1997). 
Experts in cross-cultural neuropsychology warn that adaptations of tools across cultures have serious drawbacks that affect all stages of the assessment: review of records; interviews; neuropsychological testing; and interpretation of results (Puente et al., 2013). Having measures developed by Māori for Māori is therefore a critical issue in ensuring cultural validity. Indeed, there continues to be some debate about what can be measured and how this could occur in a culturally authentic way, given the experience of historical measures being used as a means of cultural marginalization of Māori (Durie, 2004). Developing such measures aligns with the literature on Patient Reported Outcome Measures (PROMs), which recognizes these measures as a central component in improving multiple facets of care and support, raising the quality of outcomes from illness and injury, including in TBI (Friedly et al., 2014; Reeve et al., 2013). The need for a dual-purpose tool that serves both to assess cultural needs and to measure outcomes with cultural salience for Māori is apparent from clinical experience, and such a tool is frequently requested by Māori whānau seeking tools they feel reflect their realities. The lack of such measures in the literature indicates this is a significant gap that needs to be addressed.\n\nThis study aimed to examine the internal construct validity of a post-TBI assessment of Māori cultural needs and outcome measures by Rasch analysis.\n\n\nMethods\n\nA 46-item draft scale was developed from verbatim quotes taken from transcripts of an earlier phase of the study (see Supplementary File 1), and refined using a culturally responsive method (Elder & Kersten, 2015). Rangahau Kaupapa Māori (Māori research approaches determined and conducted by Māori, with the goal of supporting Māori health advancement) were utilized.\n\nThe statements used in the first iteration of the tool came from Māori participants in marae wānanga (traditional learning fora). 
The items were then refined via four focus groups, with the final group of participants having experienced TBI. This was to ensure the items were acceptable to those with direct experience, and that the items had face validity in addressing the sub-scale areas and were easily understood. The measure was then completed by 319 participants from a range of settings in the North Island of Aotearoa, New Zealand, between June and November 2015. They included attendees at Kura Reo, a week-long total-immersion Te Reo Māori wānanga (Māori language learning environment). The attendees had a range of proficiencies in speaking Te Reo Māori, from beginner to expert level.\n\nPeople were invited to participate in two ways. First, via Māori health service providers, appointments were set up with the first author. Second, wānanga groups were offered participation, and the first author provided a presentation about the project, answered questions and provided oversight of completion of the tool. Inclusion criteria were Māori with TBI, or non-Māori who were part of Māori whānau (extended families), for example by marriage; whānau members; friends of Māori with TBI; those with work connections with Māori with TBI; and Māori community members concerned about TBI. TBI was defined by self-reporting, as either confirmed, possible or unknown. Information was collected about TBI severity and placed into mild, moderate, severe and unknown categories; however, given the questionable accuracy of self-reporting, these data were not included in our analysis. The emphasis here was on offering participation to whānau as well as to individuals affected by TBI. This reflects the centrality of whānau as a health and wellbeing construct, which is well recognised in Māori scholarship (Durie, 2001) and tikanga (cultural lore) (Moko-Mead, 2003). Indeed, the theoretical basis of this tool proposes that TBI affects the whole whānau and that the whole whānau needs to be considered as “the patient” (Elder, 2013a). 
All 319 participants provided written informed consent. The research was approved by the Health and Disability Ethics Committee of NZ (14/CEN/17) and by Te Whare Wānanga o Awanuiārangi, the first author’s institution (EC14 034HE). Participants were supervised by the first author or a research assistant when completing the draft 46-item outcome measure. These data were then entered into the Rasch analysis software programme, RUMM2030 (Andrich et al., 2010).\n\nThe instrument resulting from the earlier research (Elder, 2013a) contained four subscales and 46 items. The four subscales were labeled Wā (time), Wāhi (place), Tangata (people) and Wairua practices (Wairua is defined here as an aspect of health and well-being characterized as a unique connection between Māori people and all aspects of the universe). The participants were invited to score each of the items as strongly agree, agree, disagree or strongly disagree. While debate continues around whether or not to include a neutral response option in surveys or assessment tools, the rationale used here aligns with others who have shown that the absence of a neutral option encourages mental effort to engage with the item and negates the effect of social desirability bias (Krosnick et al., 2002). Other demographic information was collected about each participant, as presented in Table 1.\n\nAll analyses of each of the subscales were carried out using RUMM2030 (Andrich et al., 2010) in order to determine the fit of the data to the Rasch model. Rasch analysis is a probabilistic mathematical model that draws on item response theory, with the advantage of estimating the item difficulty and the person ability separately, which is not possible using measures based on classical test theory (Hays et al., 2000). The 1-parameter logistic function enables item difficulty to vary but assumes all items discriminate equally. 
Before Rasch analysis is used to transform ordinal observation data into linear measures, the Rasch fit statistics are examined to enable assessment of any threats to linear measurement (Haigh et al., 2001; Whiteneck et al., 2011).\n\nRasch analysis is used to assess the measurement properties of existing measures and to guide the development of new ones (Czuba et al., 2016). The Rasch model states that the outcome of an encounter between a person and an item is governed by the product of the person’s level of the construct of interest and the easiness of the item (Bond & Fox, 2001). The person’s estimate of cultural needs is derived by dividing the percentage of items scored highly by the percentage of items scored in the low range, and then taking the natural log.\n\nScalable items are important because they capture difficulty, and make the measurement useful in a practical sense. For instance, an item with high difficulty means it more urgently needs to be acted upon, and fluctuations can be monitored. Likewise, items which capture low and intermediate levels of need are important in a measure, so that both lower and intermediate levels of need can be identified, and changes over time can be monitored and responded to. In the Rasch model, the item difficulty is estimated by calculating the odds of success in identifying those who scored highly and those who scored in the low range.\n\nEach item within the scale has its own level of difficulty on the trait (item parameter), and every person has his or her own level of “ability/trait”. Item parameters are estimated independently from the person parameters, and once they are identified they can be placed along the same interval-scaled ruler. The item and person performance probabilities determine the interval sizes on the “ruler” of the measure.\n\nA number of tests were performed to assess the fit of the subscales to the Rasch model. 
Fit to the assumptions of the model can have a number of contributing factors, which are explained in detail elsewhere (Andrich, 1988; Bond & Fox, 2001; Kersten & Kayes, 2011; Tennant & Conaghan, 2007). It is important to note that ‘misfit’ should not be taken to mean that the item has no merit or is of no interest, but rather that it does not fit the unidimensional structure of a measure (or in this case domain). If this is the case, collapsing scores or moving the item to a different domain is considered for items that do not fit but add discriminatory information. Table 2 presents a brief overview of the central Rasch analytical concepts and the actions that can be taken in the case of conditions not being met for the transfer from ordinal to linear scores.\n\nKey:\n\nA: Thresholds represent points where the probability of scoring either of the two adjacent categories is 50%. If this is not the case, one would observe disordered thresholds, where the individual score cannot be reliably interpreted.\n\nB: Extreme scores (much lower than -2.5, or much higher than 2.5) indicate issues with response pattern, which may include: responding according to a socially desired norm, carelessness in responding or low motivation in responding. As such data would not add any meaningful information to the calibration process, it has been suggested to consider excluding extreme persons from the sample (Bond & Fox, 2001; Tennant & Conaghan, 2007).\n\nC: Local dependency occurs when a person’s response to one item is reflected in their response to another item.\n\nD: Two subsets of items are identified by PCA: one with positively loading items and one with negatively loading items. Two estimates derived from these subtests are then tested by using an independent t-test. 
If the result is non-significant at p≤0.05, unidimensionality is supported.\n\nE: Targeting of the scale to the latent trait allows identification of floor and ceiling effects.\n\nF: DIF occurs when people from different groups (for example, males and females) with equal amounts of the underlying trait do not respond to items in a similar manner.\n\nG: A testlet is a bundle of items that share a common stimulus.\n\nFor Rasch analyses, reasonably well-targeted samples of 150 are reported to have 99% confidence that the estimated item difficulty is within ± ½ logit, and n=243 is required for poorly targeted samples (Linacre, 1994). Our sample of 319 was therefore optimal for the purpose of this analysis.\n\n\nResults\n\nThis section reports the analysis of results for each Te Waka Kuaka subscale separately. There were no missing data in the dataset. Please see Supplementary File 2 for the complete final version of Te Waka Kuaka, and Supplementary File 1 for the draft version, from which items were deleted.\n\nThe proposed Wā subscale had 9 initial items, all concerned with the broad concept of time. These items were not specifically linked to issues such as time to access treatment or time since injury. Rather, time in this subscale is concerned with what needs to happen first in time, the role of time in facilitating healing, taking time for a range of purposes and flexibility of time schedules.\n\nThe initial analysis of the Wā subscale showed that there were no items with statistically significant disordered thresholds, and the scale was unidimensional. However, the scale did not fit the Rasch model because of a significant (p=0.0005) item-trait interaction chi-square, and a particularly high mean persons location (2.8; SD=1.5). 23% (n=60) of the sample had extreme scores, and so were deleted from the analysis; the remaining n=259 provided a robust sample to analyse. 
Deletion of the subgroup improved the mean persons location (2.3; SD=1.2), but did not result in an improvement in item-trait interaction.\n\nFurther examination of the items revealed three (items 3, 5 and 9) that were misfitting the model. Item 3, “whakawhanaungatanga (the process of making connections with others) at the beginning sets the scene for the journey”, functioned differently according to iwi (tribe), with the “other” group being an outlier. Also, the item did not fit the Rasch model, with an item fit residual of -2.825, and chi-square probability of 0.006. Importantly, the item seemed to identify issues already captured by items 1 (Starting the process of wairua healing is the first thing that needs to happen for our whānau), 2 (The journey of wairua healing is enhanced with time), and 8 (whakawhanaungatanga time builds, to keep hope and dreams alive). Hence, it was deleted from this subscale.\n\nItem 5, “It is important that kaimahi (health workers) are flexible in their schedules of work”, had a high fit residual (2.722; p=0.0006), indicating that the item did not fit the scale. It was also deleted from the subscale.\n\nLastly, item 9, “Whānau unity and strength builds healing”, showed local dependency problems with item 3, “whakawhanaungatanga at the beginning sets the scene for healing”. It also displayed non-uniform differential item functioning (DIF) for relationship (see Table 1). A number of possible solutions described in Table 3 were tested; however, only deletion of the item resolved the local dependence with item 3.\n\nPSI: Person Separation Index; Alpha: Cronbach’s alpha. ‘First’ refers to the analysis of results of the raw ordinal data; ‘Final’ refers to the analysis of results of the Rasch-transformed data.\n\nKey:\n\n1: Ideally, mean fit residual statistics should be close to a mean of zero with a standard deviation of one.\n\nThese modifications improved the fit of the Wā subscale and provided the final solution (see Table 3). 
The resulting 6-item scale was unidimensional and the item-trait interaction was non-significant (p=0.1237). The reliability of the subscale is relatively low (PSI=0.56). The targeting of the subscale Wā was skewed, suggesting people on average scored towards the upper end of the scale (Figure 1).\n\nThe proposed Wāhi subscale included 10 items concerned with aspects to do with places, such as those of cultural significance as well as clinics and hospitals.\n\nUpon initial examination of the Wāhi subscale, it was found that the item-trait interaction chi-square was highly significant (p<0.00001) and the scale was not unidimensional. None of the items showed disordered response category thresholds. Further analysis of DIF and fit statistics revealed four items that required specific attention: items 10, 11, 16 and 17.\n\nItem 10, “The use of pepeha within treatment would support the healing”, and item 17, “Whānau from home are an essential link with home”, had uniform DIF by TBI severity. These items were combined into a testlet with item 13, “Whakaairo (carvings) teach important lessons that help with healing”, which showed non-significant DIF in the opposite direction. This resulted in these opposing directional DIF cancelling each other out.\n\nItem 11, “Being inside buildings like hospitals does not help me”, had a very high fit residual of 7.785 (p<0.00001), demonstrating it did not fit the subscale. This item was therefore removed.\n\nItem 16, “Gathering, preparing and eating food from home is an important part of healing”, showed uniform DIF by location and TBI. This item was combined into a testlet with item 19, “being on the marae is a good place to start to feel strong again”, which visually appeared to have DIF in the opposite direction (non-significant), and therefore these DIF in opposite directions cancelled each other out.\n\nThese modifications improved the fit of the subscales to the Rasch model and provided the final solution (Table 3). 
The final subscale had 9 items and was unidimensional, the item-trait interaction was non-significant, and no DIF was observed. The reliability of subscale Wāhi was good (PSI=0.78) and the targeting acceptable (Figure 2).\n\nThis subscale is concerned with people involved with the person with TBI and their whānau, and had a total of 15 statements.\n\nThe initial analysis of the Tangata subscale showed that the scale was unidimensional and none of the items had statistically significant disordering of response category thresholds. However, the scale did not fit the Rasch model, with a statistically significant item-trait interaction chi-square (p=0.0002).\n\nFurther examination revealed three pairs of items with high residual correlations. Item 22, “Within whānau there are a lot of resources”, was locally dependent (residual correlation = 0.25) on item 23, “within the whānau is the rongoā” (rongoā is the Māori word for medicine). From a theoretical point of view, these two items consider two very similar concepts. However, item 23 is focused more specifically on the healing process, whereas item 22 (Within whānau there are a lot of resources) is much less specific as to what sort of resources might be available, when and for what purpose. Furthermore, item 23 showed a better spread on the latent trait of interest (3.8 versus 2.8 logits). Therefore, item 22 was deleted from this subscale.\n\nItem 26, “Māori have a different point of view from Pākehā (non-Māori of European ancestry)”, was locally dependent (residual correlation = 0.405) on item 27, “Māori cultural needs are different from Pākehā”. Theoretically, cultural needs secondary to the culturally determined injury to wairua are critical to the functioning of this tool, in order to best understand how whānau conceptualise these needs. While asking about similar issues, item 27 more specifically asks about cultural needs, whereas item 26 was more vague, referring only to a different point of view. 
Hence, the decision was made to delete item 26.\n\nItem 28, “When health workers relate to the culture of the whānau outcomes are improved”, was locally dependent (residual correlation = 0.444) on item 29, “When health workers support whānau to address wairua outcomes are improved”. Item 29 was deemed to be theoretically more important, because it more directly measures the issue of wairua, which is central to the theory of the cultural aspect of injury. Therefore, item 28 was deleted from the subscale.\n\nDeletion of these three items improved the fit of the data to the model and provided the final solution (Table 3). The item-trait interaction chi-square was non-significant, the scale was unidimensional and no DIF was observed. The reliability of the subscale Tangata was good (PSI=0.740) and the targeting was acceptable (Figure 3).\n\nWairua practices is a phrase used to describe activities that strengthen wairua. Wairua is an area of hauora (health and wellbeing) that conveys the unique connection between Māori and all aspects of the universe. While wairua is mentioned in other subscales, wairua is the primary focus of this subscale. This subscale consisted of 12 items.\n\nThe initial analysis of the Wairua subscale found that the scale was unidimensional, but it did not fit the Rasch model (p<0.0001). Moreover, there was one misfitting item, one item showed non-uniform DIF, two items were locally dependent, and a number of items had statistically significantly disordered response category thresholds.\n\nItem 35, “Practices that strengthen wairua are as important as clinical interventions”, was found to be misfitting, with a chi-square p=0.00014. The item was deleted and fit to the model improved.\n\nExamination of item 46, “Use of Te Reo Māori means wairua is being strengthened”, identified non-uniform DIF by location and statistically significant disordering of response category thresholds. 
The decision was made to delete this item and this improved fit to the model.\n\nItems 43, “Romiromi (type of massage) can be a powerful healing tool”, and 42, “Mirimiri (type of massage) can be a powerful healing tool”, were found to be locally dependent (residual correlation = 0.638). Because these types of massage are very similar and mirimiri (massage) is more commonly known, item 42 was retained and item 43 was deleted.\n\nFive items (36, 38, 39, 44 and 45) showed statistically significant disordered thresholds. The lower two response categories (“strongly disagree” and “disagree”) of these items were collapsed into one category. This modification further improved the fit of the data to the model and provided the final solution for the Wairua subscale. The scale fit the model with non-significant item-trait interaction and was unidimensional. The reliability of the scale was good (PSI=0.733) and the targeting was acceptable (Figure 4). Scoring was modified accordingly.\n\nTable 4 presents the relative difficulty of each item of the Te Waka Kuaka subscales. The easier the item, the higher the expected scores are for people with high levels of the investigated construct.\n\nKey:\n\n*: Indicates a testlet made of two or more original items. A testlet is scored by summing the scores from the included items.\n\n\nDiscussion\n\nThis study presents the Rasch analysis of a new measure, Te Waka Kuaka, for use in assessment of Māori cultural needs following traumatic brain injury. Given the over-representation of Māori with TBI (Feigin et al., 2012) alongside Māori beliefs about the sacred quality of the head, ‘he tapu te upoko’ (Moko-Mead, 2003), this scale is much needed. This investigation was done to examine the validity of Te Waka Kuaka. Our analysis identified ten items that did not fit the Rasch model and were deleted. The resulting four subscales fit the Rasch model and were unidimensional.\n\nVery few measures developed to assess Māori-specific aspects of health exist. 
One that has been used in the area of mental health and addictions is called “Hua Oranga” (Durie & Kingi, 1997). The Hua Oranga operationalises a well-known framework called “Te Whare Tapa Whā” (the four-walled house). This framework does not have an underpinning theory. It presents four constructs: whānau (extended family), wairua (spirituality), hinengaro (mind) and tinana (body). While some analyses of the psychometric properties of this measure have been made, we are not aware of any previous measure being developed using Rasch analysis (Harwood et al., 2012; McClintock et al., 2011). Overall, the Hua Oranga measure was developed in a different manner to Te Waka Kuaka and measures a construct of hauora, without theoretical basis, rather than four subscales based on a theory of brain injury.\n\nFrom a clinical perspective, responses to a number of the items were interesting. Item 3 highlighted that there were a range of groups for whom the item functioned differently by iwi (tribal group), with the “other” group being an outlier. One interpretation of this is that the small non-Māori “other” group had a different understanding of whakawhanaungatanga. This is not unexpected, given that the concept is Māori-specific. Also, it is possible that differing iwi (tribal) groups conceptualise this activity in different ways. This finding added to a richer understanding of whanaungatanga itself. Item 11, “being inside buildings like hospitals does not help me”, was a statement that came from the preliminary research. While this statement may have assisted in considerations about the location of rehabilitation processes, the item did not have explicit theoretical salience regarding the wairua aspects of the injury. These were considered better assessed by item 19, “being on the marae is a good place to start to feel strong again”. 
The negative frame of the statement (“does not help me”) was thought to contribute to a different perception of the item by participants, compared to the positively framed items.\n\nThe lower PSI (0.56) of the Wā subscale indicates that most of the participants scored those items highly. The heterogeneity of the participants resulted from the wide range of iwi (tribal) affiliations represented, arising from different parts of Aotearoa, New Zealand. This also meant a range of competencies in Te Reo Māori were represented. Despite this heterogeneity, the importance of the concept of time was evident from the likelihood that these items would be highly endorsed. Recognition of responses to items in this subscale, especially those most strongly endorsed, is of clinical importance and can directly inform priorities in subsequent management strategies.\n\nThe spread of difficulty of Te Waka Kuaka items was relatively narrow: between -1 and 1 (see Table 4). Including deleted items did not affect the spread. Similarly, it is possible that because the method of deriving the items was culturally conservative, that is, developed on marae (traditional meeting houses), albeit urban, rural and remote, the items do not address Māori cultural needs that are either very easy or very difficult to endorse. Given the positive skew in this sample, further testing could be undertaken with people who are less in touch with their Māori cultural identity. We hypothesise that such a sample would score more towards the lower end of the Te Waka Kuaka subscales.\n\nOne of the limitations of the study was that the wider sample of possible participants is unknown, so no response rate can be calculated. 
However, given the large sample size the analysis itself remains robust.\n\nDissemination of the findings of the analysis to research partners, namely health and education providers in the Māori community, has led to widespread requests for use of Te Waka Kuaka in settings outside of TBI rehabilitation. This is an unexpected development. One approach being considered is to develop a further study protocol to collect this data. Analysis would then enable better understanding of the scope of the tool’s application.\n\nClinical implications of the use of the tool are significant. Being able to clearly and quickly identify the immediate needs of the whānau means that the whānau themselves and the health workers can focus on addressing those needs without delay. How these needs change can be easily reviewed, and this can in turn guide further tailoring of supports. Given the theoretical importance of addressing the cultural aspect of TBI, namely the injury to wairua, it is vital to ensure these cultural needs are thoroughly monitored and responded to. In this way, healing the cultural injury is likely to improve the recovery process, as well as outcomes for the whānau.\n\n\nConclusions\n\nTe Waka Kuaka is a new measure that has been in development to assess the cultural needs of Māori with TBI. This paper reports the Rasch analysis phase. Our findings show that the revised subscales are unidimensional and fit the Rasch model. Te Waka Kuaka can now enable valid and accurate measurement of Māori cultural needs following TBI. Future research examining the responsiveness of Te Waka Kuaka would be a useful addition to better understanding the applicability of this measure.\n\n\nData availability\n\nDataset 1. Data file containing responses from all participants that completed Te Waka Kuaka. DOI: 10.5256/f1000research.11500.d166175 (Elder et al., 2017).",
"appendix": "Author contributions\n\n\n\nHE devised and carried out research, analysis and writing of paper. KC assisted with analysis and writing up of paper. PK assisted in design, analysis support and writing of paper. AC assisted in analysis, support and writing up of paper. KM assisted in writing of paper.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis research was funded by the Health Research Council of NZ [14-060].\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgments\n\nThe authors thank Te Hiku Hauora, Whakawhiti Ora Pai, Te Kura Reo ki Ōtaki, Te Kura Reo ki Kahungunu, Te Pīnakitangi ki te Reo Kairangi, Kearoa Mokaraka, Raumāhoe Ani Morris, Whaea Moe Milne, Health Research Council of NZ.\n\n\nSupplementary material\n\nSupplementary File 1: Te Waka Kuaka: original 46-item draft measure.\n\nClick here to access the data.\n\nSupplementary File 2: Te Waka Kuaka: final 36-item bilingual version.\n\nClick here to access the data.\n\n\nReferences\n\n28th Māori Battalion: C Company haka.http://www.28maoribattalion.org.nz/. 2011. Reference Source\n\nAndrich D: Rasch Models for Measurement Series: Quantitative Applications in the Social Sciences No. 68. London, England: Sage Publications. 1988. Reference Source\n\nAndrich D, Sheridan B, Luo G: RUMM2030 (Comupter Software and Manual). Perth, Australia: RUMM Laboratory 2030. 2010. Reference Source\n\nBond TG, Fox CM: Applying the Rasch Model. Fundamental Measurement in the Human Sciences. London, England: Lawrence Erlbaum Associates. 2001. Reference Source\n\nCzuba KJ, Kersten P, Kayes NM, et al.: Measuring Neurobehavioral Functioning in People With Traumatic Brain Injury: Rasch Analysis of Neurobehavioral Functioning Inventory. J Head Trauma Rehabil. 2016; 31(4): E59–68. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nDudley M: [Developing a Māori theory of dementia as the basis of culturally robust neuropsychological testing for ageing Māori]. 2016.\n\nDurie M: Mauri ora, dynamics of Māori health. Auckland: Oxford University Press. 2001. Reference Source\n\nDurie M: Understanding health and illness: research at the interface between science and indigenous knowledge. Int J Epidemiol. 2004; 33: 1138–1143. PubMed Abstract | Publisher Full Text\n\nDurie M, Kingi TK: A framework for measuring Māori mental health outcomes. Retrieved from Wellington; 1997. Reference Source\n\nElder H: An examination of Maori tamariki (child) and taiohi (adolescent) traumatic brain injury within a global cultural context. Australas Psychiatry. 2012; 20(1): 20–23. PubMed Abstract | Publisher Full Text\n\nElder H: Indigenous theory building for Māori children and adolescents with traumatic brain injury and their extended family. Brain Impairment. 2013a; 14(3): 406–414. Publisher Full Text\n\nElder H: Te Waka Oranga. An indigenous intervention for working with Māori children and adolescents with traumatic brain injury. Brain Impairment. 2013b; 14(3): 415–424. Publisher Full Text\n\nElder H, Kersten P: Whakawhiti Kōrero, a Method for the Development of a Cultural Assessment Tool, Te Waka Kuaka, in Māori Traumatic Brain Injury. Behav Neurol. 2015; 2015: 8, 137402. PubMed Abstract | Publisher Full Text | Free Full Text\n\nElder H, Czuba K, Kersten P, et al.: Dataset 1 in: Te Waka Kuaka, Rasch analysis of a cultural assessment tool in traumatic brain injury in Māori. F1000Research. 2017. Data Source\n\nElder H, Kersten P, McPherson K, et al.: Making time: deeper connection, fuller stories, best practice. Annals of Psychiatry and Mental Health. 2016; 4(6): 1079–1082. Reference Source\n\nFeigin VL, Theadom A, Barker-Collo SL, et al.: Incidence of traumatic brain injury in New Zealand: a population-based study. Lancet Neurol. 2013; 12(1): 53–64. 
PubMed Abstract | Publisher Full Text\n\nFriedly J, Akuthota V, Amtmann D, et al.: Why disability and rehabilitation specialists should lead the way in patient-reported outcomes. Arch Phys Med Rehabil. 2014; 95(8): 1419–1422. PubMed Abstract | Publisher Full Text\n\nHaigh R, Tennant A, Biering-Sorensen F, et al.: The use of outcome measures in physical medicine and rehabilitation within Europe. J Rehabil Med. 2001; 33(6): 273–278. PubMed Abstract | Publisher Full Text\n\nHarwood M: Rehabilitation and indigenous peoples: the Māori experience. Disabil Rehabil. 2010; 32(12): 972–977. PubMed Abstract | Publisher Full Text\n\nHarwood M, Weatherall M, Talemaitoga A, et al.: An assessment of the Hua Oranga outcome instrument and comparison to other outcome measures in an intervention study with Maori and Pacific people following stroke. N Z Med J. 2012; 125(1364): 57–67. PubMed Abstract\n\nHays RD, Morales LS, Reise SP: Item response theory and health outcomes measurement in the 21st century. Med Care. 2000; 38(9 Suppl): II28–42. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKersten P, Kayes NM: Outcome measurement and the use of Rasch analysis, a statistics-free introduction. New Zealand Journal of Physiotherapy. 2011; 39(3): 92–99. Reference Source\n\nKrosnick JA, Holbrook AL, Berent MK, et al.: The impact of \"no opinion\" response options on data quality: Non-attitude reduction or an invitation to satisfice? Public Opin Quart. 2002; 66(3): 371–403. Publisher Full Text\n\nLinacre JM: Sample size and item calibration [or Person Measure] stability. Rasch Measurement Transactions. 1994; 7(4): 328. Reference Source\n\nMcClintock K, Mellsop G, Kingi T: Development of a culturally attuned psychiatric outcome measure for an indigenous population. International journal of Culture and Mental Health. 2011; 4(2): 128–143. Publisher Full Text\n\nMoko-Mead H: Tikanga Māori, living by Māori values. Wellington: Huia. 2003. 
Reference Source\n\nNorup A, Perrin PB, Cuberos-Urbano G, et al.: Family needs after brain injury: A cross cultural study. NeuroRehabilitation. 2015; 36(2): 203–214. PubMed Abstract | Publisher Full Text\n\nOgden JA, McFarlane-Nathan G: Cultural bias in the neuropsychological assessment of young Māori men. New Zealand Journal of Psychology. 1997; 26: 2–12. Reference Source\n\nPuente AE, Perez-Garcia M, Vilar-Lopez R, et al.: Neuropsychological Assessment of Culturally and Educationally Dissimilar Individuals. In F. En Paniagua & AM. Yamada (Eds.), Handbook of Multicultural Mental Health: Assessment and Treatment of Diverse Populations. (Second ed). San Diego CA, USA Academic Press, 2013; 225–241. Publisher Full Text\n\nReeve BB, Wyrwich KW, Wu AW, et al.: ISOQOL recommends minimum standards for patient-reported outcome measures used in patient-centered outcomes and comparative effectiveness research. Qual Life Res. 2013; 22(8): 1889–1905. PubMed Abstract | Publisher Full Text\n\nTennant A, Conaghan PG: The Rasch measurement model in rheumatology: what is it and why use it? When should it be applied, and what should one look for in a Rasch paper? Arthritis Rheum. 2007; 57(8): 1358–1362. PubMed Abstract | Publisher Full Text\n\nWhiteneck GG, Dijkers MP, Heinemann AW, et al.: Development of the participation assessment with recombined tools-objective for use after traumatic brain injury. Arch Phys Med Rehabil. 2011; 92(4): 542–551. PubMed Abstract | Publisher Full Text"
}
|
[
{
"id": "23969",
"date": "05 Jul 2017",
"name": "Stephen McKenna",
"expertise": [],
"suggestion": "Not Approved",
"report": "Not Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nI like the approach of generating statements from local people, rather than adapting measures developed in other cultures and assuming they are meaningful to the indigenous population. Also, the authors are open about the weaknesses in the scales identified by the Rasch analyses.\nHowever, the article is very unclear about the nature of the measure they are testing or what its purpose is. Reading the literature about the instrument development the same lack of clarity is found. Is it intended for children and adolescents with TBI or for all people with TBI? The items represent health beliefs so this is not an outcomes measure. If it is measuring health beliefs do these differ dependent on the condition? As the items were not generated from people who had experienced TBI, why should the measure be relevant to a TBI population? This lack of relevance appears to explain why the four scales are poorly targeted to the study population. Clinicians and family members are notoriously poorly informed on the impact of disease on others.\nAs some of the study population did have a TBI it would be important to test for DIF by the presence of the condition.\nTo produce an instrument specific to TBI, fundamental work would need to be done with a representative sample of people who had experienced TBI.\nAs well as being poorly targeted, the reliability of the scales is average to poor. 
The poor targeting suggests that the instrument would not be responsive to changes in health beliefs, if this was an intention of the instrument.\nThe article covers Rasch analysis of the data. While fit to the Rasch model is fundamental to the validity of an instrument, additional analyses are required to show that the measure works as expected. For example, data should be presented showing that scores are related to perceived severity of TBI.\nMuch of the information in the Methods section covers work previously conducted, rather than that conducted in the present study. This should be included in the Introduction if it is deemed relevant.\n\nIs the work clearly and accurately presented and does it cite the current literature? No\n\nIs the study design appropriate and is the work technically sound? Partly\n\nAre sufficient details of methods and analysis provided to allow replication by others? No\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nPartly\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? No",
"responses": []
},
{
"id": "24646",
"date": "06 Oct 2017",
"name": "Sara Simblett",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis paper reports a psychometric validation of a new measure to assess cultural needs of Māori with traumatic brain injury. It is an important piece of research that has embedded engagement with local Māori communities and their specific social and cultural context at the heart of designing the new measure. It draws on qualitative methods to generate items that are then rigorously tested using Rach analysis techniques, presenting a very thoughtful and through piece of research, which has clear practical application. This type of translational research could make a significant contribution to closing gaps in service provision and shaping new developments for clinical services that meet the needs of their service users including minority groups. There are a number of other specific strengths to this work including a very clear introduction of the Rasch analysis techniques that are described well for an audience less familiar with these approaches; and the inclusion of the raw data file as well as copies of the questionnaire items in an open access format for readers to view. Additionally, the comprehensive dataset without any missing data, which must have taken a considerable effort to collect.\nBelow are a few aspects that could be addressed as means of improving the paper further:\nThe characteristics of the sample could be expanded upon. 
More specifically, it is stated that participants self-reported whether or not they had experienced a traumatic brain injury (TBI). The severity of these injuries is not clear and, for some, the TBI was only ‘possible’. I think more clarity on how this was assessed would be useful and I wonder if ‘head injury’ might be more accurate than ‘TBI’, especially if there is no medical evidence to support a head injury leading to a TBI in all cases. If this further data could be gathered and TBI diagnoses established, this would considerably enhance the validity of the results reported in this paper in relation to TBI. Variables such as time post-injury may moderate responses to the measure and so would also be useful contextual information.\n\nThe sample size would allow for a secondary analysis on only participants with a confirmed diagnosis of TBI. Alternatively, a clinical sample of people using TBI health services could be recruited to test the reliability of the results.\n\nAlthough the supplementary file provides some further details about the qualitative component of the initial development of the items, some more information would be useful within the paper itself. For example, the content of the initial interviews and the characteristics of the people included in the focus groups.\n\nSimilarly, further details about the methods of recruiting participants to answer the questionnaire would help the reader to understand the context relating to the sample selected to validate the measure.\n\nDetails about the four subscales are described in the results section, but I would have liked to have read this information earlier, in the methods section, when the questionnaire is introduced. I wonder if the reader may benefit from a glossary so that they can more easily understand words that they are unfamiliar with. For example, in Table 1.\n\nI could not find reference to a Rasch analysis of the questionnaire as a whole. 
It might be worth adding that the psychometric properties of the subscales were explored individually because responses to the questionnaire as a whole did not meet the requirements of the Rasch model.\n\nIn the analysis of the ‘time’ subscale a relatively large subgroup were removed from the analysis. I would be interested in understanding more about the characteristics of this subgroup.\n\nThe low internal consistency (PSI) for the ‘time’ subscale may indicate that it is measuring a number of underlying constructs. Did you perform a Rasch factor analysis to explore whether it is best divided into two dimensions? In Figure 1 there seem to be two clusters of items, only one of which is well targeted to the sample.\n\nAs a general point, the questionnaire as it stands is very long and the exploration of psychometric properties of the items individually provides an opportunity to select the ‘best’ items to contribute to the measure of several underlying constructs. From looking at Figures 1 and 3, for example, there seem to be a number of items where there is a consensus of endorsement – are these items best removed because they do not contribute to separating out any of the abilities of the persons in the sample?\n\nIn the discussion the clinical implications are rightly addressed. This measure could be very useful for establishing people’s cultural preferences and values. However, I question whether there is likely to be change on the constructs measured, as values are more likely to be dispositional and not so strongly related to state, unless moderated by factors such as outlook on life more generally. There is, however, scope for future research.\nThank you for the opportunity to review this paper, which describes a mixed methods approach that has enormous potential for creating valid, reliable and appropriate measures for use in rehabilitation after head injury/TBI.\n\nIs the work clearly and accurately presented and does it cite the current literature? 
Yes\n\nIs the study design appropriate and is the work technically sound? Partly\n\nAre sufficient details of methods and analysis provided to allow replication by others? Partly\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-1034
|
https://f1000research.com/articles/6-664/v1
|
10 May 17
|
{
"type": "Software Tool Article",
"title": "TIDDIT, an efficient and comprehensive structural variant caller for massive parallel sequencing data",
"authors": [
"Jesper Eisfeldt",
"Francesco Vezzi",
"Pall Olason",
"Daniel Nilsson",
"Anna Lindstrand",
"Jesper Eisfeldt",
"Francesco Vezzi",
"Pall Olason",
"Daniel Nilsson"
],
"abstract": "Reliable detection of large structural variation ( > 1000 bp) is important in both rare and common genetic disorders. Whole genome sequencing (WGS) is a technology that may be used to identify a large proportion of the genomic structural variants (SVs) in an individual in a single experiment. Even though SV callers have been extensively used in research to detect mutations, the potential usage of SV callers within routine clinical diagnostics is hindered by high computational costs, usage of non-standard output format, and limited support for the various sequencing platforms and libraries. Another well known, but not well-addressed problem is the large number of benign variants and reference errors present in the human genome that further complicates analysis. Here we present TIDDIT, a time efficient variant caller, that uses discordant read pairs as well as the depth of coverage and split reads to detect and classify a large spectrum of SVs. As part of the software suite, TIDDIT also includes a database functionality that enables filtering for rare variants and reduces the number of false positive calls and background noise. Benchmarked against five state-of-the-art SV callers, TIDDIT performs at an equal/superior level while using only 2 CPU hours per sample. Thanks to its speed, sensitivity, flexibility and ability to easily detect variants on a wide range of WGS library types, TIDDIT solves many of the problems that are currently hindering the utilization of WGS for SV calling in clinical settings.",
"keywords": [
"Variant calling",
"Whole genome sequencing",
"Structural variation"
],
"content": "Introduction\n\nGenomic structural variants (SVs) are defined as large genomic rearrangements and consist of inversion and translocation events, as well as deletions and duplications1. SVs have been shown to be both the direct cause and a contributing factor in many different human genetic disorders, as well as in more common phenotypic traits2–4.\n\nIn genetic diagnostics, current techniques such as FISH5 and microarray studies6 have limited resolution, and the information obtained often needs to be complemented by additional methods for a correct interpretation5,6. Previous studies have shown that whole genome sequencing can be used to successfully identify and characterize structural variants in a single experiment7.\n\nThe advent of systems like the Illumina HiSeqX allows researchers to sequence Whole Human Genomes (WHG) at high (i.e. 30X) coverage and at a relatively low cost (i.e. 1000$ per WHG)8. The ability to produce large amounts of data at an unprecedented speed has initiated a flourishing of computational tools that are able to identify (i.e. call) structural variants and/or chromosomal breakpoints (i.e. the exact position in the chromosome at which SV takes place). These tools are commonly called variant callers, or simply, callers. Variant callers generally require an alignment file (in BAM format) as input and try to identify differences between the reference genome and the donor/patient genome. To detect structural variants, callers generally use heuristics based on different signals in the WGS data. These signals include discordant pairs9,10, read-depth11, and split-reads12. Some callers try to reconstruct the patient sequence by applying either local13 or global14 de novo assembly techniques. 
Depending on the size, variant type, and characteristics of the sequencing data, the most suitable method for detecting a variant will differ15.\n\nThanks to the ability to produce high quality sequencing data at a relatively low cost, as well as the potential to detect any variant from a single experiment, whole exome sequencing16,17 and whole genome sequencing7,18 could be highly useful in clinical diagnostics, especially to study rare disease causing variants. However, to avoid high validation costs, highly precise, yet sensitive callers are needed. To further complicate the situation, an abundance of sequencing platforms and library preparation methods are available19. Sequencing data generated from these different sources have different properties, such as read length and coverage20. As an example, it has been shown that large insert mate pair libraries are well suited to detect SVs21, mainly due to the ability to span repetitive regions and complex regions that act as drivers of structural variation22 and due to the sensitivity derived from a large physical span coverage compared to small insert size sequencing coverage.\n\nHere we present a new variant caller, TIDDIT. The name highlights the ability to detect many different types of SVs; including but not restricted to translocations, inversions, deletions, interspersed duplications, insertions and tandem duplications. TIDDIT utilizes discordant pairs and split reads to detect the genomic location of structural variants, as well as the read depth information for classification and quality assessment of the variants. By integrating these WGS signals, TIDDIT is able to detect both balanced and unbalanced variants. Finally, TIDDIT supports multiple paired-end sequencing library types, characterized by different insert-sizes and pair orientations.\n\nTo simplify the search for rare disease causing variants, TIDDIT is distributed with a database functionality dubbed SVDB (Structural Variant DataBase). 
SVDB is used to create structural variant frequency databases. These databases are queried for rare disease-causing variants, as well as variants following certain inheritance patterns. Utilizing the database functionality, the analysis of rare variants may be prioritized, thus speeding up the diagnosis of rare disease-causing variants. To our knowledge, no available caller provides such an extensive framework to call and evaluate rare disease-causing structural variants.\n\n\nMethods\n\nDetection of structural variants. TIDDIT requires a coordinate-sorted BAM file as input. There are two phases: in the first phase, coverage and insert size distribution are computed from the BAM file. These data will be used in the subsequent phase. In the second phase, TIDDIT scans the BAM file for discordant pairs and split reads and uses these signals to detect and classify structural variants. These two signals are pooled together by redefining each signal as a pair of genomic coordinates Si = (pi1, pi2). For a split read, the pi1 position is given to the position of the primary alignment, and the pi2 position is given to the position of the supplementary alignment of that read. For discordant pairs, on the other hand, the pi1 position is given to the position of the read having the smallest genomic coordinate, and the pi2 position is given to the position of the read that has the largest genomic coordinate. A read pair is deemed to be discordant if the reads map to different contigs, or if the distance between them exceeds a threshold distance, Td. By default, Td is set to the average value of the insert size distribution plus three times its standard deviation. Every time an SV signal is identified, TIDDIT switches from reading-mode to discovery-mode. As soon as the signal Si = (pi1, pi2) is identified, the set D is initialized:\n\nD = {Si}\n\nAt this point TIDDIT searches for other evidence in the neighborhood. Every time a new signal is identified it is added to the set D. 
The construction of this set is halted only when no other signal is identified in W consecutive positions. In more detail, if p11 = cj (i.e., p11 is a position j on chromosome c), then D will contain the following signals:\n\n\n\n\n\nThe first condition guarantees that D will contain only signals found after position cj, i.e. the position of the first signal within the set D. The existential clause guarantees that the set D will not contain any signal that is too far away from the signals within the set D. D is obtained by reading the ordered BAM file and populating a data structure with all the detected discordant pairs and split reads. If no signal is identified after reading W positions from the last position containing a read added to D, the discovery phase is closed. TIDDIT also records information about local coverage and read orientation while constructing D. Once D is computed, TIDDIT partitions it into distinct sets D1, D2, …, Dk such that:\n\n\n\n\n\n\n\n\n\nIn other words, D is divided into non-overlapping partitions Dk, requiring that all p2 positions form a cluster with properties analogous to the p1 positions. Once D is divided into partitions, TIDDIT checks if any partition represents a structural variant or if it is only noise. In the case of it being a structural variant, TIDDIT tries to associate the identified signal with a specific type of variation. A set Dk is discarded (i.e. the SV is not reported) if the number of pairs forming the set (i.e. the cardinality of the set) is below a given threshold. This threshold is used to control the sensitivity and specificity of TIDDIT. In general, the number of discordant pairs is dependent on multiple factors, and may vary considerably throughout the genome. Therefore, the user may need to fine-tune the required number of discordant pairs based on the downstream analysis. All callable structural variants in D are reported, and thereafter, D is discarded. 
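The collect-and-partition loop above can be sketched in a few lines (an illustrative Python rendering, not TIDDIT's actual C++ implementation; the window W, the minimum-support cutoff, and the plain signal tuples are simplified assumptions):

```python
def collect_signal_sets(signals, W=1000, min_support=3):
    """Group position-sorted SV signals (p1, p2) into candidate sets D:
    a set is closed when no new signal arrives within W positions of the
    last signal added, and sets with too few supporting pairs/split reads
    are discarded as noise."""
    sets, current = [], []
    for p1, p2 in sorted(signals):
        if current and p1 - current[-1][0] > W:
            sets.append(current)
            current = []
        current.append((p1, p2))
    if current:
        sets.append(current)
    # discard candidate sets whose cardinality is below the threshold
    return [d for d in sets if len(d) >= min_support]
```

In TIDDIT the same pass also tracks local coverage and read orientation, and each surviving set is further partitioned on the p2 coordinates before classification.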
TIDDIT will then return to read mode, starting with the next available read pair.\n\nClassification of structural variants. TIDDIT identifies candidate variations using discordant pairs and split reads. To determine the variant type, TIDDIT analyses read orientation, as well as the coverage across the region of the first reads, second reads, and the region in between. TIDDIT characterizes three levels of coverage: low, normal and high. If C˜ is the average coverage computed over the entire genome sequence, then the coverage across a region, C, is deemed normal if it satisfies the following condition:\n\nP ← The ploidy of the organism\n\n\n\nIf the coverage across a region is lower than normal, it is classified as low coverage. Likewise, if the coverage is higher than normal coverage, it is classified as a high coverage region. The patterns of the variants detected by TIDDIT are represented in Figure 1.\n\nThereafter, the read orientation and coverage are used for classification of the SV. Deletions (A) are characterized by low coverage across the deleted region (shown in red), and normal orientation. Interspersed duplications (B) are characterized by high coverage across the duplicated region (shown in red), but normal coverage across the other regions affected by the SV. Tandem duplications (C) are characterized by high coverage across the duplicated region (shown in red); the entire duplicated region is found between the discordant pairs/split reads, and the read pair orientation is inverted compared to the expected library orientation. Inversions (D) are characterized by the orientation of the read pairs (red-purple and red-blue) bridging the inverted region (shown in red). When aligning these discordant pairs to the reference, the reads of the discordant pairs will have the same orientation, forward/forward or reverse/reverse. For split reads, the orientation of the secondary alignment will differ from the primary alignment. 
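The three coverage classes might be sketched as follows (the tolerance band standing in for the condition above is an assumed parameter; TIDDIT's exact criterion involves the ploidy P, as noted in the text):

```python
def coverage_class(region_cov, genome_cov, ploidy=2, tol=0.5):
    """Classify coverage across a region as 'low', 'normal' or 'high'.
    Copy number is estimated relative to the genome-wide average C~ and
    the ploidy P; the +/- tol band around P is an illustrative assumption."""
    copies = ploidy * region_cov / genome_cov   # estimated copy number
    if copies < ploidy - tol:
        return "low"
    if copies > ploidy + tol:
        return "high"
    return "normal"
```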
Lastly, translocations (E) are characterized by the read pairs bridging the translocated region (red) and its surroundings (blue and purple). Either these regions belong to another contig than the translocated region, or to the same contig but, in contrast to interspersed duplications, the coverage across the translocated region is normal.\n\nIn a deletion event (Figure 1A) a region is absent in the patient genome but present in the reference. When aligned to the reference, the read pairs flanking the deleted region will have a larger insert size than what is expected based on the library insert size distribution. Moreover, split reads will be formed in such a way that one part of the read is aligned to one side of the breakpoint, and another part of the read will be aligned to the other side of the deletion. Furthermore, the coverage of the lost region will be lower than expected. Therefore, TIDDIT will classify a variant as a deletion if the region flanked by the discordant pairs/split reads is classified as a low coverage region.\n\nIn an interspersed copy number event (Figure 1B), an extra copy of a region within the genome (the red sequence in Figure 1B) is positioned distant from the original copy. In this case, there will be read pairs and reads bridging the duplicate copy and the distant region. When aligned to the reference, the read pairs will appear to have unexpectedly large insert size, and the reads will appear split across the reference. Thus, TIDDIT will detect these signals. Furthermore, the coverage of the copied region will be higher than expected. By scanning the coverage of the regions where the reads of the discordant pairs are located, TIDDIT will find which region has an extra copy. 
The interspersed duplication is reported as two events: an intrachromosomal translocation between the duplicated region and the position of the copy, and a duplication across the duplicated region.\n\nIn a tandem duplication event (Figure 1C), the extra copy is positioned adjacent to the original copy. Since the distance between the segments is small, there will be pairs where one read is located at the end of the original copy, and the other read is located at the front of the duplicate copy (Figure 1C). When aligned to the reference, the insert size of these read pairs will be as large as the tandem copy itself (Figure 1C). Similarly, there will be reads bridging the two copies. These reads will appear split when mapped to the reference. Furthermore, the orientation of these read pairs and split reads will be inverted. Moreover, the coverage across the entire genomic region will be higher than expected. Thus, to classify tandem duplications, TIDDIT will search for sets of discordant pairs/split reads that have inverted read pair orientation as well as high coverage between the read pairs.\n\nIn an inversion event (Figure 1D), the sequence of a genomic region is inverted. When aligning the sequencing data, the insert size of the read pairs bridging the breakpoints will appear to be larger than expected. Furthermore, both reads of a pair will get the same orientation, such as reverse/reverse or forward/forward. TIDDIT employs the read pair orientation in order to identify inversions. Given that the inversion is large enough, TIDDIT will find the pairs bridging the breakpoints of the inversion, and will classify the variant as an inversion if the orientation of the discordant pairs is forward/forward or reverse/reverse, and if the orientation of the primary/secondary alignments is forward/reverse or reverse/forward.\n\nIn an interchromosomal translocation event (Figure 1E), the first read and second read will map to different contigs (Figure 1E). 
Reads bridging the translocated segment will appear split between these two contigs. Any read pair mapping to two different contigs is counted as a discordant pair, and any set of signals mapping to different contigs will be classified as an interchromosomal translocation. Intrachromosomal translocation events are similar. They are balanced events, where a genomic region has been translocated to another location within the same chromosome. When aligned to the reference region, these variants will give rise to signals where one read is mapping to the translocated region, and the other read mapping to the region where the translocated region is positioned. This will give rise to pairs having larger insert size than expected. However, since there is no change in copy number, the coverage will be normal across the discordant pairs. The orientation of the reads forming the discordant pairs will depend on whether the translocated region is inserted in its original orientation, or if it is inverted relative to its original orientation. Thus the read pairs may retain the standard library orientation, but the orientation could also be inverted. Therefore, intrachromosomal translocations are classified by scanning for discordant pairs having either forward/reverse or reverse/forward orientation and normal coverage in the region between those reads.\n\nFiltering of structural variant calls. For each called variation, several statistical values are calculated. They serve two purposes: to provide more information and to filter out noise. In the former case, statistics are employed to understand the structure of the variant and to relate it to the rest of the genome. In the latter case, filters are employed to improve the precision of TIDDIT. TIDDIT utilizes four complementary filters: Expected_links, Few_links, Unexpected_coverage, and Smear. 
These heuristics are used to set the FILTER column of the VCF generated by TIDDIT.\n\nThe main goal of Expected_links is to filter variants caused by random events such as contamination or sequencing errors. It uses a statistical model to compute the expected number of discordant pairs23 using the library insert size, read length, ploidy of the organism and coverage across the region affected by the structural variant. A variant that is defined by less than 40% of the expected number of pairs will fail the Expected_links quality test, and is set to Expected_links in the FILTER column of the VCF. The statistical model supports variants that are called using discordant pairs; hence, for calls based on split reads exclusively, the number of split reads divided by coverage and ploidy is used as an estimate of the expected number of split reads.\n\nFew_links aims to filter out calls that are caused by reference errors and misalignment of repetitive regions. As mentioned previously, a variant is defined as a set of positions, Dk=(pz1,pz2). In order to compute the Few_links filter, for each Dk, TIDDIT creates another set called DSpurr, containing spurious read pairs. Spurious read pairs are pairs that belong to the interval identified by Dk, but whose mates align to a different chromosome than the pairs forming Dk. In other words, TIDDIT checks if a genomic location where an SV can be called is linked to multiple other events. In this case, the suspicion is that the called SV is the consequence of a repetitive element. If the fraction of spurious read pairs is too high, the variant within Dk is considered unreliable, and thus its filter flag is set to Few_links. The fraction of spurious read pairs is considered too high if the following formula holds true:\n\nP ← The ploidy of the organism\n\n\n\nSmear is a filter designed to remove variants called due to large repetitive regions. 
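The Expected_links cutoff can be sketched like this (the formula for the expected number of discordant pairs is an illustrative assumption; TIDDIT's actual statistical model follows reference 23, while the 40% cutoff is taken from the text):

```python
def expected_discordant_pairs(coverage, insert_size, read_len, ploidy=2):
    """Rough expectation for pairs spanning a breakpoint on one allele:
    the physical span (insert size minus read length) times the per-base
    pair count, divided by ploidy. An illustrative model, not TIDDIT's."""
    pairs_per_base = coverage / (2 * read_len)   # base coverage -> pair count
    return (insert_size - read_len) * pairs_per_base / ploidy

def expected_links_filter(observed, coverage, insert_size, read_len, ploidy=2):
    """Flag a call supported by fewer than 40% of the expected pairs."""
    expected = expected_discordant_pairs(coverage, insert_size, read_len, ploidy)
    return "PASS" if observed >= 0.4 * expected else "Expected_links"
```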
In these regions, the split reads and discordant pairs will map in a chaotic manner, hence these regions may appear to be affected by large structural variation. These calls are recognized by searching for variants where the regions of the first read and its mate overlap, or where the regions of the primary and secondary alignment overlap. If this is the case, the variants will fail the Smear test. The filter flag of variants that fail this test is set to Smear.\n\nLastly, Unexpected_coverage also aims to filter out calls caused by reference errors and misalignments, but unlike Few_links, it employs coverage information. The Unexpected_coverage filter uses the coverage across the region of the p1 signals as well as the region of p2 to determine the quality of a variant call. If the coverage of any of these regions is 10 or more times higher than the average library coverage, the variant will fail the Unexpected_coverage test, and its FILTER column is set to Unexpected_coverage.\n\nAny variant that passes these filters is set to PASS; those that fail are set according to the filter that rejected the variant. By removing all variants that did not pass this quality control, the precision of TIDDIT improves considerably.\n\nUsing the structural variant frequency database (SVDB) software, the user may compare variants found in different samples and annotate the VCF files with the frequency of each variant. The frequency database is built with multiple VCF files containing structural variant information. The VCF files may be generated using any caller that reports structural variants according to the VCF standard. By removing high-frequency variants from the VCF file, rare variants may be detected. The database could also perform trio analysis and filter out variants following a certain frequency pattern within a family.\n\nThe database is an SQLite database. It contains one entry per input variant. 
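A minimal sketch of such a variant table (column names, types, and the distance-based query are assumptions for illustration; SVDB's real schema may differ):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE variants (
    idx       INTEGER PRIMARY KEY,  -- unique index per variant
    chrom_a   TEXT, pos_a INTEGER,  -- first breakpoint
    chrom_b   TEXT, pos_b INTEGER,  -- second breakpoint
    var_type  TEXT,                 -- DEL, DUP, INV, BND, ...
    sample_id TEXT)""")
con.execute("INSERT INTO variants VALUES (1, '1', 10000, '1', 25000, 'DEL', 'sampleA')")
con.execute("INSERT INTO variants VALUES (2, '1', 10050, '1', 24900, 'DEL', 'sampleB')")

# number of distinct samples carrying a DEL near a query breakpoint;
# dividing by the total sample count would give the frequency annotation
(n,) = con.execute("""SELECT COUNT(DISTINCT sample_id) FROM variants
                      WHERE var_type='DEL' AND ABS(pos_a - 10020) < 5000""").fetchone()
```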
These entries describe the genomic position, variant type, and sample id of the variant. Moreover, each variant is given a unique index number. These SQLite databases may then be used either directly to provide frequency annotation, or they may be exported to a VCF file.\n\nExport. The variants of the SQLite structural variant database may be exported to a VCF file. The VCF file is a standard structural variant VCF file, generated by clustering similar variants within the SQLite database, and reporting each cluster as a single VCF entry. The INFO field of each VCF entry contains four custom tags. The FRQ tag describes the frequency of the variant, the OCC tag describes the total number of samples carrying the variant, the NSAMPLES tag describes the total number of samples within the database, and lastly, the variant tag describes the position and id of each clustered variant.\n\nThe clustering of the variants is performed using one out of two methods, either an overlap based clustering method or DBSCAN24.\n\nAnnotation. The main purpose of the structural variant frequency database is to query it and use it for frequency annotation. The frequency database is queried using a VCF file. All variants within the VCF file will be annotated with the frequency of that variant within the frequency database. The database used for querying may either be an SQLite database, an exported VCF file, or a multi-sample VCF file such as the thousand genome structural variant VCF.\n\nIf an SQLite database is chosen, two separate algorithms are available, overlap or DBSCAN. If DBSCAN is chosen, all variants of the SQLITE database and the query VCF are clustered using DBSCAN. Thereafter, the frequency of each query variant is set to the number of separate samples represented in the cluster of that query variant.\n\nOverlap based clustering. The most critical part when building and querying the database is to determine if two SVs represent the same event or not. 
When using the overlap-based clustering algorithm, two interchromosomal variants are considered equal if the distance between their breakpoints does not exceed a certain distance threshold. This distance is set to 5 kilobases (kb) by default. However, the user may change it to suit any kind of data. Secondly, for intrachromosomal variants fulfilling the breakpoint distance criteria, the overlap O is computed and used to determine if the variants are similar enough. For a given chromosome, to compute O each variant var is regarded as an ordered set of genomic coordinates:\n\nvar = {i, i + 1,..., j − 1, j}\n\nThe overlap parameter O is defined as the cardinality of the intersection of two variants, divided by the cardinality of the union of the same overlapping variants:\n\nO = |var1 ∩ var2| / |var1 ∪ var2|\n\nwhere var1 and var2 are two overlapping variants (O equals 0 if the variants are not overlapping). The default threshold value of the overlap parameter is 0.6.\n\nThe database software extracts the variant type from the ALT field. By default, variants of different type will not be considered equal even if the overlap is high enough. When constructing the database, the variants of the VCF file are transferred into an SQLite database. This database can then be queried directly using the previously described procedure. The database could also be exported into a VCF file.\n\nWhen exporting the database, variants from different patients overlap in complex patterns and form chains where not all variants in one chain are overlapping according to the user-set parameters. Consider three variants, varA, varB, and varC as an example:\n\nThe overlap between variants varA and varB as well as varA and varC satisfies the overlap settings, O. 
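The overlap criterion above amounts to a Jaccard index on the two coordinate intervals; a small sketch (interval arithmetic replaces the explicit coordinate sets for efficiency, and the default thresholds follow the text):

```python
def overlap(var1, var2):
    """Jaccard overlap O of two inclusive intervals (start, end):
    |intersection| / |union|, or 0 when the intervals are disjoint."""
    (s1, e1), (s2, e2) = var1, var2
    inter = min(e1, e2) - max(s1, s2) + 1
    if inter <= 0:
        return 0.0
    union = (e1 - s1 + 1) + (e2 - s2 + 1) - inter
    return inter / union

def same_event(var1, var2, bnd_dist=5000, o_threshold=0.6):
    """Two intrachromosomal variants represent the same event when both
    breakpoints lie within bnd_dist and O exceeds the threshold."""
    (s1, e1), (s2, e2) = var1, var2
    if abs(s1 - s2) > bnd_dist or abs(e1 - e2) > bnd_dist:
        return False
    return overlap(var1, var2) >= o_threshold
```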
However, the overlap between varC and varB is not big enough:\n\n\n\nTo resolve these cases, all the variants V found in all patients are divided into sets χ based on the overlap threshold, O such that\n\n\n\n\n\nEach Vi in χi may not have a high enough overlap with all Vj in χi; instead, the variants in χi may form complex chain-like patterns of overlap. To resolve these chains of variants, each χi is divided further into subsets of variants, χji, such that\n\n\n\nEach set χji is represented by one single variant, Vji, and for each variant Vk in χji, the overlap threshold of Vji is satisfied.\n\nThe subclusters χji are computed by defining two sets, A and B.\n\nA = B = χi\n\nVji is picked at random from A, and added to χji. Thereafter, Vji is tested for overlap against each variant within B. A variant Vk in B will be added to χji if it satisfies the overlap threshold. Once the overlap is calculated between Vji and each Vk in B, A is redefined as A = A \\ χji; thereafter, Vji is printed as a database entry, and the frequency of Vji is set to the number of patients carrying a variant found in χji. If the set A is non-empty, a new χji is initiated and another Vji is sampled from A, and the described process is repeated.\n\nDBSCAN clustering. The second clustering algorithm used by SVDB is DBSCAN. DBSCAN requires two parameters, epsilon and minPTS24. By default, epsilon is set to 500 bases, and minPTS is set to 2 samples. However, these parameters may be changed by the user.\n\nThe DBSCAN clustering is performed by dividing each chromosome and variant type into separate sub-databases; thereafter, a 2-dimensional coordinate system is defined for each sub-database. 
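The chain-resolution procedure above can be sketched as a greedy loop (illustrative only; `same_event` stands for the user-set overlap criterion, and the seed is picked arbitrarily rather than at random):

```python
def resolve_chains(cluster, same_event):
    """Greedy resolution of an overlap chain chi_i: repeatedly pick a seed
    variant V_ji and group every remaining variant satisfying the overlap
    criterion with it; each group chi_ji becomes one database entry whose
    frequency is the number of grouped variants."""
    remaining = list(cluster)            # the set A in the text
    groups = []
    while remaining:
        seed = remaining[0]              # V_ji (arbitrary pick here)
        group = [v for v in remaining if v == seed or same_event(seed, v)]
        groups.append((seed, group))
        remaining = [v for v in remaining if v not in group]  # A = A \ chi_ji
    return groups
```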
For intrachromosomal variants, the x coordinate of this plane corresponds to the start position of the variant, and the y coordinate within the plane corresponds to the end position of the variant.\n\nInterchromosomal translocations involve two chromosomes, so when clustering these variants, the contig ids are sorted according to lexicographic order. Out of the two contigs involved in the rearrangement, the contig ordered first is set to the x axis, and the contig last in order is set to be the y axis. Thereafter, each variant is added to the plane as described for intrachromosomal variants. This procedure is repeated for any possible chromosome pair, and each variant type on each chromosome pair.\n\nOnce each plane is defined, the variants within each separate plane are clustered using DBSCAN.\n\nOperation. TIDDIT requires a coordinate-sorted BAM file as input, and may be run in two separate modes: variant calling or coverage computation. The coverage computation mode is used to compute the coverage across the entire genome, and returns a BED file as output. The variant calling mode is run to analyse SVs across the entire genome. All the detected SVs are returned in a single VCF file.\n\nSystem requirements. TIDDIT has been tested on a large number of datasets; in general, TIDDIT will perform variant calling in less than 2 hours using a single CPU core, and 2 gigabytes of RAM or less. The time consumption and memory usage are mainly dependent on the coverage and quality of the input data.\n\nTIDDIT has been tested on Linux as well as Apple macOS. TIDDIT is easy to install and requires only standard C++ libraries. TIDDIT is installed using cmake (https://cmake.org/) and make (https://www.gnu.org/software/make/).\n\nDownstream analysis. TIDDIT generates a VCF file containing structural variant calls across the entire genome. 
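The plane construction and clustering step can be sketched as follows (a toy DBSCAN using Chebyshev distance for brevity; SVDB uses the standard DBSCAN algorithm with epsilon = 500 and minPts = 2 by default, and its distance measure is an assumption here):

```python
def dbscan(points, eps=500, min_pts=2):
    """Minimal DBSCAN over (start, end) points in one sub-database plane;
    returns one cluster label per point, with -1 marking noise."""
    dist = lambda a, b: max(abs(a[0] - b[0]), abs(a[1] - b[1]))
    labels = [None] * len(points)
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = [j for j in range(len(points)) if dist(points[i], points[j]) <= eps]
        if len(seeds) < min_pts:
            labels[i] = -1               # noise (may later join as a border point)
            continue
        cluster += 1
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster      # re-label noise as border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nbrs = [k for k in range(len(points)) if dist(points[j], points[k]) <= eps]
            if len(nbrs) >= min_pts:     # core point: expand the cluster
                queue.extend(k for k in nbrs if labels[k] is None)
    return labels

# intrachromosomal deletions as (start, end) points on one chromosome's plane
points = [(10000, 25000), (10100, 24950), (80000, 95000)]
labels = dbscan(points)
```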
Initially, these calls may be filtered based on the quality filters described in the implementation section, as well as the SVDB software, using either internal samples or an external dataset such as thousand genomes structural variants25 (ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/phase3/integrated_sv_map/). The TIDDIT output VCF is compatible with any tool which supports VCF, hence there’s a large number of tools available for further analysis of the variant calls. Typically, the VCF file is annotated using software such as VEP26 or snpEFF27. Tools such as VCFtools28 and BEDtools29 may be used to remove low quality calls, or filter calls within specific genomic regions.\n\n\nResults\n\nThe performance of TIDDIT was evaluated using simulated, as well as public datasets containing a large number of validated variants of different size and type. The performance of TIDDIT was compared to current top of the line callers, including Lumpy30, Delly10, CNVnator11, Manta9, and Fermikit14. These are well known callers that have been tested throughout numerous projects.\n\nThree separate datasets were used to evaluate TIDDIT. First, a simulated dataset generated using Simseq31 and SVsim (https://github.com/gregoryfaust/svsim) was used. Next, two public large scale sequencing datasets were used, namely the NA12878 sample, and the HG002 sample32. Truth sets of validated variants were found for each of the public datasets32,33. A detailed description of the benchmarking of TIDDIT is given in Supplementary File 1.\n\nTo test the performance of the callers, the tools were run on the simulated dataset. The dataset consists of four separate samples, one per variant type. The coverage of these simulated samples was set to 25X, and the read length and insert size was set to 100 and 350 bp, respectively. For each variant type (deletions, duplications, inversions, translocations), 6000 variants were simulated. Hence, the simulated dataset contains 24000 known variants of different type. 
The results are presented in Table 1. For each caller, the sensitivity and precision were computed. The sensitivity differs between different callers and variant types. Table 1 shows that Fermikit is the caller with the lowest sensitivity. TIDDIT, however, is consistently the caller with the highest sensitivity. It is important to notice how the high sensitivity of TIDDIT is coupled with extremely high precision. In other words, TIDDIT is not only able to call all the real variation in the dataset, but it is also one of the tools least affected by false positives. For instance, looking more closely at the translocations in Table 1, we can see that TIDDIT has the best trade-off between sensitivity and precision, revealing the highest sensitivity out of all callers, and precision that was only 0.02 lower than that of Delly and Manta.\n\nThe variants were simulated using SVsim and Simseq.\n\nLastly, to complement the benchmarking on the simulated dataset, and to test TIDDIT on a more diverse set of sequencing data, the same callers were run on the NA12878 and HG002 samples32. The results are presented in Table 2. We computed both sensitivity and precision, in three size intervals. Events 0–100 bp in size are usually not considered structural variation, but they were kept as a size interval anyway since some callers also detect these variants.\n\nThese public datasets contain validated deletions of various sizes.\n\nWith the NA12878 sample, the performance differed widely between callers and variant sizes (Table 2). With variants larger than 1000 bp, TIDDIT had the highest detection rate and second-best precision. FermiKit was the most precise tool, but it must be noted that such a result was achieved at the expense of sensitivity, since Fermikit had almost half the sensitivity of TIDDIT. In other words, FermiKit was not able to call many true variants, but the variants that were called were more likely to be correct. 
On the other hand, TIDDIT was able to call almost all the validated large (i.e., ≥ 1000 bp) variants, but many calls made by TIDDIT did not overlap with the validated ones. However, since the truth set only contains high-quality deletion calls, these non-overlapping calls are not necessarily incorrect. With medium (i.e., 100–1000 bp) and small (i.e., ≤ 100 bp) variants the performance of the callers differed greatly. For these variants, Manta had the highest sensitivity, while TIDDIT had the highest precision.\n\nMoving on to the HG002 sample, it was found that most variant callers performed worse than with the NA12878 sample. No variant caller produced any significant number of true positive calls in the range of 0–100 bp. TIDDIT had the highest sensitivity on variants larger than 100 bases, and was one of the most precise callers (Table 2).\n\nThe computational performance of the six tools was also determined. While not the most important parameter to consider, CPU time is rising in importance, especially if one needs to run the analysis on a compute-infrastructure-as-a-service platform (e.g., Google Cloud or Amazon). The CPU hour consumption of the callers was measured while analyzing each sample. The results of the measurements are presented in Table 3. During the analysis, each caller except Fermikit was run on a single core of an Intel Xeon E5-2660 CPU, while Fermikit was run using 16 Intel Xeon E5-2660 CPU cores.\n\nIt was found that CNVnator and TIDDIT are the most efficient callers, while FermiKit is by far the most expensive caller to run.\n\nEach caller except Fermikit was run on a single core of an Intel Xeon E5-2660 CPU. Fermikit was run on 16 CPU cores. The CPU hour consumption of the Simseq data is reported as the median time consumption across the four Simseq samples.\n\nEvaluation of the database functionality. The performance of the database functionality was evaluated by building databases of different sizes. 
These databases were built by randomly sampling individuals from a set of 209 that were sequenced through the thousand genomes project25. These individuals are listed in Supplementary File 1.\n\nFigure 2 shows how the fraction of unique hits within the database decreases as the size of the database increases, thereby improving the ability to find unique variants in new patients. A unique hit is defined as a variant that has only been found in the query itself. Even a relatively small database filtered out a significant number of variants: on average, a sample queried against a database consisting of only 10 samples contains about 25% unique variants. Still, a larger database filters out more variants. Each query sample was found to contain 7.5% (i.e., ∼250) unique structural variants when filtered against a database containing 200 samples. Since each caller reports a relatively large fraction of false positives (Table 2), the frequency database is necessary to reduce the number of variants. Moreover, the frequency database can be used to filter out recurring technical errors connected to the library preparation, sequencing chemistry and alignment of the sequencing data.\n\nThe structural variant databases may also be used to benchmark different tools and settings, as well as to compare the SVs within a family or population. As an example, the database functionality was used to study differences between populations sequenced through the thousand genomes project25. Three different populations were selected: Han Chinese from Beijing (CHB), Japanese from Tokyo (JPT), and Yoruba from Ibadan (YRI). In total, 25 samples were analysed; 10, 5, and 10 samples from the CHB, JPT, and YRI populations, respectively. All of these samples were analysed using TIDDIT. The similarity of the samples was determined by creating one database per sample, and querying each sample against each database. 
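The all-against-all querying just described reduces, for each pair of samples, to the fraction of one sample's SVs found in the other. A toy sketch with hypothetical variant keys (a real comparison, as done by SVDB, would also allow approximate breakpoint matches rather than exact-key equality):

```python
def similarity(query, database):
    """Fraction of the query sample's SVs also present in the database
    sample; note the measure is asymmetric (normalised by the query)."""
    query, database = set(query), set(database)
    return len(query & database) / len(query)

# Hypothetical SVs keyed by (type, chromosome, position).
sample_a = {("DEL", "1", 10_000), ("DUP", "2", 50_000), ("INV", "3", 700)}
sample_b = {("DEL", "1", 10_000), ("DUP", "2", 50_000), ("DEL", "7", 42)}

# All-against-all matrix, as used for the population heatmap.
matrix = {
    (a, b): similarity(sa, sb)
    for (a, sa) in [("A", sample_a), ("B", sample_b)]
    for (b, sb) in [("A", sample_a), ("B", sample_b)]
}
# matrix[("A", "B")]: two of A's three SVs are shared with B.
```

Clustering such a matrix (e.g. with a heatmap) is what separates the populations in the analysis described here.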
The fraction of similar SVs was determined by computing the number of common/similar SVs divided by the total number of SVs in that sample. A more detailed description of the analysis and the selected samples is given in Supplementary File 1. It was found that the three populations are distinct based on their SVs. Furthermore, the CHB and JPT populations are relatively similar compared to the YRI population (Figure 3). The CHB population appears to be more homogeneous than the other two populations, with the exception of one individual, who appears to be similar to the JPT individuals. On the other hand, the YRI population appears to be the most diverse population, and is divided into two clear subpopulations.\n\nThese individuals were sequenced through the thousand genomes project. The heatmap is coloured based on the similarity between individuals, and shows that the populations can be distinguished based on their SVs.\n\n\nDiscussion\n\nSix structural variant callers were benchmarked on simulated data generated using SimSeq (Table 1), and on two public datasets, NA12878 and HG002 (Table 2). Compared to the other callers, TIDDIT performs well on SVs larger than 1000 bp (Table 1, Table 2). TIDDIT is able to identify large structural variants in many experimental setups: low or high coverage, short or long fragment size (i.e., paired-end or mate-pair) (Table 1, Table 2). Furthermore, TIDDIT has a good balance between sensitivity and precision: while being one of the most sensitive tools, it is also one of the most precise. These two characteristics, in conjunction with a low computational demand, make TIDDIT one of the most capable tools available today for the identification of large SVs (greater than 1 kbp) from WGS data. TIDDIT does not perform well on small variants (Table 2); however, it performs very well on large variants, especially balanced variants (Table 1). 
Since TIDDIT is efficient, produces high-quality variant calls, and performs well in multiple settings, TIDDIT could be a valuable addition to WGS pipelines.\n\nEven though the presented benchmarking is extensive, it is not exhaustive. The public datasets lack sufficiently large truth sets for balanced variants and duplications. Since the callers perform differently on different variants (Table 1), it would be of value to benchmark the callers against a more varied set of variants. Moreover, the variants of these truth sets are generally small. For instance, the median size of the deletions of NA12878 is about 250 bases, which is smaller than the traditional size of structural variation3.\n\nThe human genome contains a large number of repetitive regions, and each individual carries a large number of structural variants34,35. For these reasons, the number of detected structural variants per sample is generally high. Thus, finding a rare disease-causing variant among such a large number of common variants is difficult and time consuming. TIDDIT is distributed together with structural variant database software. This software package uses structural variant VCF files to construct variant frequency databases. The annotation provided by these databases is then used to filter out common variants as well as reference errors.\n\nBy filtering out high-frequency variants from the VCF file, rare disease-causing variants can more easily be detected (Figure 2). Moreover, the database could be constructed to follow variants that all samples have in common, such as an inherited disease variant within a family, or known disease-causing variants, as well as to search for variants following a certain inheritance pattern, or to compare the SVs of different populations (Figure 3). The structural variant databases could also be used to benchmark different software tools, settings, and library preparation methods. 
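The frequency-based filtering described above can be sketched as follows. The data structures are illustrative and much simpler than the actual database software's matching, which tolerates imprecise breakpoints:

```python
from collections import Counter

def build_frequency_db(samples):
    """Count, per SV key, how many samples carry that SV."""
    db = Counter()
    for sample in samples:
        db.update(set(sample))  # count each SV at most once per sample
    return db

def rare_variants(query, db, max_count=1):
    """Keep only the query's SVs seen in at most `max_count` samples
    (the query itself contributes one count if it is in the database)."""
    return {sv for sv in query if db[sv] <= max_count}

# Hypothetical SVs keyed by (type, chromosome, position).
s1 = {("DEL", "1", 1000), ("DUP", "2", 5000)}
s2 = {("DEL", "1", 1000), ("INV", "3", 700)}
db = build_frequency_db([s1, s2])

# The shared deletion is filtered out; the duplication is unique to s1.
rare = rare_variants(s1, db)
```

In the evaluation reported here, filtering each query against a database of 200 samples in this spirit left roughly 7.5% of calls as unique (Figure 2).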
The database functionality can be used with results obtained from any tool that generates a valid VCF file.\n\n\nConclusions\n\nTIDDIT is an efficient and comprehensive structural variant caller, supporting a wide range of popular sequencing libraries. Not only does TIDDIT have the functionality of a structural variant caller, it also has a set of functions that help the user perform further analysis of the BAM file. These functions include depth-of-coverage analysis and structural variant database functionality. By utilizing these functions, TIDDIT can either perform advanced analysis on its own or be used to perform a wide range of tasks within a variant analysis pipeline. TIDDIT has already been employed in many studies and demonstrated its potential not only with the commonly used Nextera mate pair libraries from Illumina7,18,36 but also with the TruSeq Nano and PCR-free paired-end libraries37.\n\n\nSoftware availability\n\nLatest source code: https://github.com/J35P312/TIDDIT\n\nArchived source code as at the time of publication: http://doi.org/10.5281/zenodo.43951738\n\nLicense: GNU General Public License version 3.0 (GPLv3)",
"appendix": "Author contributions\n\n\n\nJE, FV and DN wrote the code. PO investigated callers and datasets used for benchmarking. AL, DN, FV, and JE designed the TIDDIT algorithm and the functional specification of TIDDIT. JE wrote the benchmarking scripts and performed the benchmarking. All authors participated in the writing of the article.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis work was supported by the Swedish Research Council [2012-1526 to AL]; the Marianne and Marcus Wallenberg foundation [2014.0084 to AL]; the Swedish Society for Medical Research [S14-0210 to AL]; the Stockholm County Council; the Harald and Greta Jeanssons Foundation; the Ulf Lundahl memory fund through the Swedish Brain Foundation; the Nilsson Ehle donations and the Erik Rönnberg Foundation.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgments\n\nThe authors would like to acknowledge support from Science for Life Laboratory, the National Genomics Infrastructure (NGI), and Uppmax for providing assistance in massive parallel sequencing and computational infrastructure.\n\n\nSupplementary material\n\nSupplementary File 1: Supplementary methods. A description of the benchmarking of the SV callers.\n\nSupplementary File 2: SV simulation pipeline. The pipeline used to generate the simulated SV datasets.\n\n\nReferences\n\nAlkan C, Coe BP, Eichler EE: Genome structural variation discovery and genotyping. Nat Rev Genet. 2011; 12(5): 363–376. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLindstrand A, Davis EE, Carvalho CM, et al.: Recurrent CNVs and SNVs at the NPHP1 locus contribute pathogenic alleles to Bardet-Biedl syndrome. Am J Hum Genet. 2014; 94(5): 745–754. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nStankiewicz P, Lupski JR: Structural variation in the human genome and its role in disease. Annu Rev Med. 2010; 61: 437–455. PubMed Abstract | Publisher Full Text\n\nViljakainen H, Andersson-Assarsson JC, Armenio M, et al.: Low Copy Number of the AMY1 Locus Is Associated with Early-Onset Female Obesity in Finland. PLoS One. 2015; 10(7): e0131883. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBishop R: Applications of fluorescence in situ hybridization (fish) in detecting genetic aberrations of medical significance. Bioscience Horizons. 2010; 3(1): 85–95. Publisher Full Text\n\nBejjani BA, Shaffer LG: Application of array-based comparative genomic hybridization to clinical diagnostics. J Mol Diagn. 2006; 8(5): 528–533. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHofmeister W, Nilsson D, Topa A, et al.: CTNND2-a candidate gene for reading problems and mild intellectual disability. J Med Genet. 2015; 52(2): 111–122. PubMed Abstract | Publisher Full Text\n\nHayden EC: Technology: The $1,000 genome. Nature. 2014; 507(7492): 294–5. PubMed Abstract | Publisher Full Text\n\nChen X, Schulz-Trieglaff O, Shaw R, et al.: Manta: rapid detection of structural variants and indels for germline and cancer sequencing applications. Bioinformatics. 2016; 32(8): 1220–2. PubMed Abstract | Publisher Full Text\n\nRausch T, Zichner T, Schlattl A, et al.: DELLY: structural variant discovery by integrated paired-end and split-read analysis. Bioinformatics. 2012; 28(18): i333–i339. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAbyzov A, Urban AE, Snyder M, et al.: CNVnator: an approach to discover, genotype, and characterize typical and atypical CNVs from family and population genome sequencing. Genome Res. 2011; 21(6): 974–984. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nYe K, Schulz MH, Long Q, et al.: Pindel: a pattern growth approach to detect break points of large deletions and medium sized insertions from paired-end short reads. Bioinformatics. 2009; 25(21): 2865–2871. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNarzisi G, O’Rawe JA, Iossifov I, et al.: Accurate de novo and transmitted indel detection in exome-capture data using microassembly. Nat Methods. 2014; 11(10): 1033–1036. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLi H: FermiKit: assembly-based variant calling for Illumina resequencing data. Bioinformatics. 2015; 31(22): 3694–6. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTattini L, D’Aurizio R, Magi A: Detection of Genomic Structural Variants from Next-Generation Sequencing Data. Front Bioeng Biotechnol. 2015; 3: 92. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTham E, Lindstrand A, Santani A, et al.: Dominant mutations in KAT6A cause intellectual disability with recognizable syndromic features. Am J Hum Genet. 2015; 96(3): 507–513. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLindstrand A, Grigelioniene G, Nilsson D, et al.: Different mutations in PDE4D associated with developmental disorders with mirror phenotypes. J Med Genet. 2014; 51(1): 45–54. PubMed Abstract | Publisher Full Text\n\nNilsson D, Pettersson M, Gustavsson P, et al.: Whole-Genome Sequencing of Cytogenetically Balanced Chromosome Translocations Identifies Potentially Pathological Gene Disruptions and Highlights the Importance of Microhomology in the Mechanism of Formation. Hum Mutat. 2017; 38(2): 180–192. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMardis ER: Next-generation sequencing platforms. Ann Rev Anal Chem (Palo Alto Calif). 2013; 6: 287–303. 
PubMed Abstract | Publisher Full Text\n\nQuail MA, Smith M, Coupland P, et al.: A tale of three next generation sequencing platforms: comparison of Ion Torrent, Pacific Biosciences and Illumina MiSeq sequencers. BMC Genomics. 2012; 13(1): 341. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKloosterman WP, Guryev V, van Roosmalen M, et al.: Chromothripsis as a mechanism driving complex de novo structural rearrangements in the germline. Hum Mol Genet. 2011; 20(10): 1916–1924. PubMed Abstract | Publisher Full Text\n\nMedvedev P, Stanciu M, Brudno M: Computational methods for discovering structural variation with next-generation sequencing. Nat Methods. 2009; 6(11 Suppl): S13–S20. PubMed Abstract | Publisher Full Text\n\nSahlin K, Vezzi F, Nystedt B, et al.: BESST--efficient scaffolding of large fragmented assemblies. BMC Bioinformatics. 2014; 15(1): 281. PubMed Abstract | Publisher Full Text | Free Full Text\n\nEster M, Kriegel HP, Sander J, et al.: A density-based algorithm for discovering clusters in large spatial databases with noise. In Kdd. 1996; 96: 226–231. Reference Source\n\n1000 Genomes Project Consortium, Auton A, Brooks LD, et al.: A global reference for human genetic variation. Nature. 2015; 526(7571): 68–74. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMcLaren W, Gil L, Hunt SE, et al.: The Ensembl Variant Effect Predictor. Genome Biol. 2016; 17(1): 122. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCingolani P, Platts A, Wang le L, et al.: A program for annotating and predicting the effects of single nucleotide polymorphisms, SnpEff: SNPs in the genome of Drosophila melanogaster strain w1118; iso-2; iso-3. Fly (Austin). 2012; 6(2): 80–92. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDanecek P, Auton A, Abecasis G, et al.: The variant call format and VCFtools. Bioinformatics. 2011; 27(15): 2156–2158. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nQuinlan AR, Hall IM: BEDTools: a flexible suite of utilities for comparing genomic features. Bioinformatics. 2010; 26(6): 841–842. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLayer RM, Chiang C, Quinlan AR, et al.: LUMPY: a probabilistic framework for structural variant discovery. Genome Biol. 2014; 15(6): R84. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBenidt S, Nettleton D: SimSeq: a nonparametric approach to simulation of RNA-sequence datasets. Bioinformatics. 2015; 31(13): 2131–2140. PubMed Abstract | Publisher Full Text | Free Full Text\n\nZook JM, Catoe D, McDaniel J, et al.: Extensive sequencing of seven human genomes to characterize benchmark reference materials. Sci Data. 2016; 3: 160025. PubMed Abstract | Publisher Full Text | Free Full Text\n\nParikh H, Mohiyuddin M, Lam HY, et al.: svclassify: a method to establish benchmark structural variant calls. BMC Genomics. 2016; 17: 64. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFujimoto A, Nakagawa H, Hosono N, et al.: Whole-genome sequencing and comprehensive variant analysis of a Japanese individual using massively parallel sequencing. Nat Genet. 2010; 42(11): 931–936. PubMed Abstract | Publisher Full Text\n\nSimon-Sanchez J, Scholz S, Fung HC, et al.: Genome-wide SNP assay reveals structural genomic variation, extended homozygosity and cell-line induced alterations in normal individuals. Hum Mol Genet. 2007; 16(1): 1–14. PubMed Abstract | Publisher Full Text\n\nNord KH, Lilljebjörn H, Vezzi F, et al.: GRM1 is upregulated through gene fusion and promoter swapping in chondromyxoid fibroma. Nat Genet. 2014; 46(5): 474–477. PubMed Abstract | Publisher Full Text\n\nBramswig NC, Lüdecke HJ, Pettersson M, et al.: Identification of new TRIP12 variants and detailed clinical evaluation of individuals with non-syndromic intellectual disability with or without autism. Hum Genet. 2017; 136(2): 179–192. 
PubMed Abstract | Publisher Full Text\n\nFrancesco JE, Nilsson D: J35P312/TIDDIT: TIDDIT-1.0.3 [Data set]. Zenodo. 2017. Data Source"
}
|
[
{
"id": "22619",
"date": "01 Jun 2017",
"name": "Giuseppe Narzisi",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThis paper describes the implementation of a new structural variant caller, called TIDDIT, that uses multiple forms of evidence to call structural variants (deletions, duplications, inversions, and translocations) in whole genome sequencing experiments. In addition to structural variant calling, TIDDIT includes database functionality that helps reduce errors and makes it adaptable to diverse applications (e.g. rare and de novo variant detection).\n\nStructural variant calling is an area of research that is in strong need for more reliable and sensitive bioinformatics methods. TIDDIT has the potential to be a valuable open source tool and, overall, we think that the method warrants publication but some further refinement is necessary. We hope that the authors will address the following major and minor comments:\n\nMajor Comments:\n\nThe authors list the current problems in the field as computational costs, non-standard output formats, and limited support for different sequencing platforms/library types. We don’t believe they have adequately described the tools they compare against with respect to these limitations. For example, both CNVnator and Manta have reasonable computational costs (2 and 3 core hours respectively), while Manta outputs VCF and does not list any platform/library type as a known limitation of their tool. A better description of how their work solves these limitations compared to other tools is required. 
Alternatively, they could shift focus towards the novelty of the database functionality.\n\nGenerally, there is sufficient information provided for interpretation. However, there are some cases where further explanations are required. First, Manta is compared against for both the simulated data and NA12878 but is subsequently dropped from analysis of HG002 with no explanation. The authors should provide an explanation for this either in the text or as a footnote to the figures. Second, there is no description of the clustering method selected for the evaluation of database functionality (overlap based and DBSCAN clustering). The two clustering methods are described only from a technical stand point without providing details on how selection would affect the output or what use cases might be appropriate for each method.\n\nWhile the results generally support the conclusions made, the language used tends to overstate the differences between the tested methods: “TIDDIT is consistently the caller with the highest sensitivity”, “The high sensitivity of TIDDIT is coupled with extremely high precision”, “Despite being one of the most sensitive tools, it is also one of the most precise tools”. For example, while it is true that TIDDIT is the caller with highest sensitivity in the simulated datasets (table 1), the margin of victory is often quite low (only 0.01 in deletions and duplications – with lower precision than Manta, the next most sensitive caller). The authors should adjust the language used in the paper to provide a more truthful and honest description of the real performance of the tool as reported in the tables. Significant better performance is indeed achieved for simulated translocations but the improvements on the other classes of variants seem to be limited.\n\nWe found the description for resolving the chain-like pattern of overlaps in the “overlap based clustering” to be quite confusing and hard to follow as currently described in the paper. 
The authors should state more clearly the problem. Is this a specific issue that has not been properly addressed by the community so far? Also, it is usually helpful to explain some of the complexity in interval analysis by including figures that elucidate the details of the process.\n\nMinor Comments:\nThe methods used to generate the simulated data is described well enough, however, in this case making the simulated data sets available would also have been practical and would facilitate reproducing the results independently. The code is easily installed and provides sufficient documentation via the help options to get started with using the tool.\n\nSimulated structural variants seem to be created at random locations, however real variants tend to happen in a non-random fashion along the genome, in particular around repetitive sequences. It is important to emphasize in the paper the limitations of the simulated data, which may also explain partially why sensitivity on real data is significantly worse than in the simulated experiment.\n\nThe caption for Figure 2 seems particularly long for such a simple figure and many of the details given in the figure caption would be better placed in the main text.\n\nEquations in the paper should be numbered.\n\nOn the equation on page 3, the variable W is used to indicate the number of consecutive (base pair) positions used to halt the construction of a set. This value seems to play a significant role in partitioning the genome to identify structural variants. However, there is no information on what value is used and whether it is a user parameter.\n\nOn page 6: “These calls are recognized by searching for variants [were -> where] the regions of the first read and its mate overlap, or where the regions of the primary and secondary alignment overlap.”\n\nThe authors report TIDDIT’s system requirements to be only 2 hours using a single CPU and 2 GB of RAM. 
However, these numbers are uninformative without reporting also the amount of data (e.g., sequence coverage, number of reads, etc.) that was used to test the tool.\n\nIs the rationale for developing the new software tool clearly explained? Partly\n\nIs the description of the software tool technically sound? Yes\n\nAre sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? Yes\n\nIs sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? Partly\n\nAre the conclusions about the tool and its performance adequately supported by the findings presented in the article? Partly",
"responses": [
{
"c_id": "2823",
"date": "30 Jun 2017",
"name": "Jesper Eisfeldt",
"role": "Author Response",
"response": "Hello there! Thanks for taking the time to review, and sorry for taking such a long time to respond! We agree with your comments, and we have incorporated them in the manuscript: Major Comments: 1. We now focus more on the database functionality, but we also mention that the number of callers that combine low computational cost and the ability to detect all types of SV is limited. 2. Manta does not support variant calling on data having read orientation reverse-forward, hence it was excluded from the HG002 dataset, which is sequenced using a mate-pair library. We have now added a short description of the two samples, and also a statement on why Manta was dropped from the analysis of HG002. We wanted to use the mate-pair data of the HG002 sample mainly to show that TIDDIT performs well on both standard paired-end data and mate-pair data. 3. Thanks, we have adjusted the language to make it more neutral. 4. We have rewritten the overlap based clustering section completely; now it contains an example of such a complex cluster of variants, as well as a more practical explanation of the algorithm. Minor Comments: 1. Due to the large size of the files we decided not to upload the simulated data. Potentially, we could generate the data and make it available on request during a limited time. We are happy to hear that the code is easily installed! 2. That’s true, the simulated variants were randomly positioned throughout the genome. We have added some comments to the benchmark subsection of the discussion section where we comment on this. The main reason we simulated the variants that way was to reduce the risk of any selection bias, as well as to simplify the simulation. We agree that the most truthful simulation would include more realistically positioned variants, and perhaps even a large number of known disease causing SV. 3. We summarized the caption. 4. Thanks, we added numbering to the equations! 5. 
We added some details on how W is set; in short, we found it through benchmarking on about 350 WGS samples of patients carrying known SV of clinical relevance, which were sequenced using a variety of libraries. However, since we cannot make the patient data publicly available we cannot present our internal benchmarking in the paper. 7. We got the statistics from running TIDDIT on the NA12878 sample; we have now added some more detail to that statement. Thanks again for reviewing! Regards Jesper on behalf of the authors"
}
]
},
{
"id": "23324",
"date": "26 Jun 2017",
"name": "Bud Mishra",
"expertise": [
"Reviewer Expertise Genomics and Computational Biology"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThe paper already has two reviewers and they have addressed many of the issues that I would have raised. My main concern however is that the interpretation of the results depends highly on the accuracy of the reference used, and that the human reference is still incomplete (non-haplotypic), error-ridden, and evolving. It is unclear how the tool will adapt to the future when a large number of accurate haplotypic whole genome references become available. What makes it even worse is that the software is built on a group of intuitively justifiable heuristics with many hyper-parameters hard-coded. However, all the other tools it competes against also suffer from these problems, and the genomics community has been strangely oblivious of these foundational problems.\n\nIs the rationale for developing the new software tool clearly explained? Yes\n\nIs the description of the software tool technically sound? Yes\n\nAre sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? Yes\n\nIs sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? Yes\n\nAre the conclusions about the tool and its performance adequately supported by the findings presented in the article? Partly",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-664
|
https://f1000research.com/articles/6-565/v1
|
25 Apr 17
|
{
"type": "Research Article",
"title": "Lumboperitoneal shunt insertion without fluoroscopy guidance: Accuracy of placement in a series of 107 procedures",
"authors": [
"Sabah Al-Rashed",
"Haider Kareem",
"Neeraj Kalra",
"Linda D’Antona",
"Mouness Obeidat",
"Bhavesh Patel",
"Ahmed Toma",
"Haider Kareem",
"Neeraj Kalra",
"Linda D’Antona",
"Mouness Obeidat",
"Bhavesh Patel",
"Ahmed Toma"
],
"abstract": "Background: Lumboperitoneal (LP) shunts were the mainstay of cerebrospinal fluid diversion therapy for idiopathic intracranial hypertension (IIH). The traditionally cited advantage of LP shunts over ventriculoperitoneal (VP) shunts is the ease of insertion in IIH. The proximal catheter needs to be placed at the level of L3/4 to be below the level of the spinal cord. The objective of this study was to analyse the position of LP shunts inserted without portable fluoroscopy guidance. Methods: A retrospective analysis of radiology was performed for patients who underwent lumboperitoneal shunt insertion between 2006 and 2016 at the National Hospital for Neurology and Neurosurgery. Patients who had insertion of an LP shunt without fluoroscopy guidance were selected. Patients without post-procedural imaging were excluded. A retrospective analysis of the clinical notes was also performed. Results: Between 2006 and 2016, 163 lumboperitoneal shunts were inserted in 105 patients. A total of 56 cases were excluded due to lack of post-procedural imaging; therefore, 107 post-procedural x-rays were reviewed. In 17 (15.8%) cases the proximal end of the LP shunt was placed at L1/L2 level or above. Conclusions: Insertion of LP shunts without portable fluoroscopy guidance carries a 15.8% risk of incorrect positioning of the proximal end of the catheter. We recommend x-ray confirmation to avoid incorrect level placement. Further investigation could compare a control group operated with fluoroscopy guidance against patients operated without it.",
"keywords": [
"Lumboperitoneal shunt",
"LP shunt",
"fluoroscopy"
],
"content": "Introduction\n\nHistorically, lumboperitoneal (LP) shunts were the mainstay of cerebrospinal fluid (CSF) diversion therapy for idiopathic intracranial hypertension (IIH). The traditionally cited advantage of LP shunts over ventriculoperitoneal (VP) shunts is the ease of insertion in IIH patients who usually have small and sometimes difficult to catheterise ventricles1–4.\n\nMultiple studies have shown that when functional, LP shunts are effective in alleviating headaches and improving or stabilising visual symptoms in patients with IIH5–8. Studies have shown that IIH patients who underwent LP shunting had improvement in both visual acuity and visual fields with patients also reporting an improvement in headache symptoms post LP shunting6,9. In these previous studies, and in many others, the most common complication was shunt obstruction, with up to 65% of cases requiring revision in one study9. Other less frequent yet significant complications of LP shunts include infection, radiculopathy, shunt migration, syrinx, low pressure headaches, tonsillar herniation, subdural haematomas and potential damage of the distal end of the spinal cord.\n\nThe conus medullaris is the tapered, lower end of the spinal cord. Multiple cadaveric studies have demonstrated the level of the conus medullaris to be between T12 and L310. Other studies report that the conus reaches the adult level by two years of age and lies at an average position of L1 to L211. This position was also confirmed by a large radiological study performed in 199810. Due to the proximity of the distal end of the spinal cord, it is best practice to avoid the insertion of LP shunts higher than L2/3 level. 
The ideal position for this procedure is considered to be at L3/L4 level or below.\n\nThe primary advantage of an LP over a VP shunt is the ability to cannulate the CSF space, in this case the thecal sac, as opposed to having to cannulate the very commonly found slit ventricles associated with IIH when considering a VP shunt1–4. However, there are also a series of challenges associated with this procedure.\n\nGenerally, LP shunt patients are positioned in the lateral position to provide simultaneous access to the lumbar spine and flank. Percutaneous cannulation of the thecal sac can be very challenging, often requiring specific long needles. Additionally, it is often difficult to get the flexion (“foetal position”) required in these patients to open the interlaminar space and allow for the needle to access the thecal sac. Once the proximal catheter enters the thecal sac it needs to be threaded cranially into position, which is at times challenging as the catheter often kinks within the significant tissue volume. Following placement of the proximal catheter, the remainder then needs to be tunnelled through the subcutaneous tissue into the flank region. At this point, while in the lateral position and with the significant amount of adipose tissue, the surgeon then needs to identify the peritoneum. This can prove to be quite challenging given the patient’s non-anatomical position as well as the fact that gravity is working against the surgeon. At this point the catheter is then passed into the flank within the peritoneum.\n\nThe insertion of the catheter into the lumbar CSF space determines the success or failure of the LP shunt. Since this involves a manual manoeuvre with a “blind” tap, the catheter may be inadvertently misplaced, and migration of the shunt catheters is a common occurrence. The lumbar catheter can migrate relative to the thecal sac (usually into the subcutaneous space), and the peritoneal catheter can likewise come out of the peritoneum. 
The incidence ranges from 3% to 20%. Migration complications have been noted to be more common in the paediatric population4,12,13. When a catheter migrates out of the thecal sac, a subcutaneous collection of spinal fluid can be observed.\n\nNew-onset radicular pain has been noted to occur with LP shunts. This may result from catheter migration or localised inflammation leading to arachnoiditis. The onset of symptoms may necessitate shunt revision. The incidence of developing new-onset radicular pain ranges from 5% to 6%13,14.\n\nThe efficacy of fluoroscopic guidance in the placement of a lumbar catheter in patients treated with an LP shunt has been reported9. The method includes using intraoperative portable fluoroscopy with contrast medium. The direction of the inserted catheter can be confirmed, and loop formation or absence thereof can be detected intraoperatively. It is possible to confirm that the catheter has not migrated into the extra-CSF space or the intervertebral foramen containing the spinal nerve roots. Improved visibility of the catheter in the spine, by filling it with contrast medium, is the key to the success of this procedure15. Intraoperative fluoroscopic guidance has become widely available in the last two decades. Despite its efficacy, it exposes the patients and the staff to radiation. Moreover, there are possible side effects and restrictions related to the use of contrast medium, such as allergy, anaphylactic shock and acute renal failure. For these and other reasons, there is still a significant number of LP shunt operations that are performed without fluoroscopic guidance.\n\nThe National Hospital for Neurology and Neurosurgery offers dedicated hydrocephalus services and receives quaternary referrals from centres situated in the UK and abroad. 
The present study describes the accuracy of LP shunt placement without intraoperative fluoroscopic guidance over a 10-year period.\n\n\nMethods\n\nAn analysis of the hospital electronic records identified 163 LP shunt procedures performed without fluoroscopic guidance between 2006 and 2016. Cases with no post-procedural imaging (56) were excluded, as the level of the proximal catheter could not be identified.\n\nPost-procedural imaging was reviewed and reported by two independent operators who were blinded to each other’s results; the location of the proximal end of the catheter was verified using visible anatomical landmarks on the x-ray and recorded. In all cases, the imaging used was lumbar x-ray. Clinical notes were also reviewed for those patients who had incorrect positioning of the proximal catheter to identify potentially related signs and symptoms, such as lumbar radiculopathy.\n\nThe data collection and analysis were carried out using Microsoft Excel 2010.\n\n\nResults\n\nBetween 2006 and 2016, 163 LP shunt procedures were performed without intraoperative fluoroscopic guidance. After exclusion of the cases without post-procedural imaging (56), a total of 107 procedures performed on 73 patients were selected. A total of 57 patients were female and 16 were male (M:F, 1:3.5); the mean age was 41 years (±13 SD), range 16–69 years.\n\nThe review of all post-procedural imaging showed that in 17 cases (15.8%) patients had the proximal catheter placed at the level of L1/L2 or above (T12/L1, 1.8%; L1/2, 14.0%) (Figure 1). On the other hand, in 94 cases (84.2%), patients had the proximal tip of the catheter placed at the level of L2/L3 or below (L2/3, 33.0%; L3/4, 37.4%; L4/5, 12.0%; L5/S1, 1.8%) (Table 1).\n\nAn analysis of the clinical notes of the patients who had mispositioned LP shunts was carried out over a minimum post-operative period of one year. 
None of the patients complained of signs or symptoms related to possible distal spinal cord damage.\n\n\nDiscussion\n\nThis study demonstrates that, without intraoperative fluoroscopic guidance, an LP shunt insertion procedure can lead to a mispositioned proximal catheter in 15.8% of cases. Despite none of our patients presenting with any signs or symptoms of spinal cord damage, this risk needs to be considered when performing this procedure “blindly”.\n\nOne of the biggest challenges in performing LP shunts in IIH patients is often related to their habitus. It is in fact recognised that a strong association between IIH and obesity exists16. Approximately 70–80% of IIH patients are obese and over 90% are overweight16. In this group of patients, finding the anatomical landmarks, maintaining them and inserting the lumbar catheter at the correct level can represent a technical challenge; this is especially true when the procedure is performed without fluoroscopy guidance.\n\nWe suggest that the use of intraoperative imaging guidance should be adopted: this practice could reduce the incidence of mispositioned LP shunts and therefore decrease the risk of significant spinal cord damage, which may have serious, irreversible consequences.\n\nThe results of this series must be interpreted in light of the limitations inherent to any retrospective study. It could be argued that results achieved by our unit could vary markedly from those achieved at other units. We also did not account for operator experience, which may be partially responsible for differences in success rate and may vary from individual to individual. 
Ultimately, to prove the efficacy and benefits of intraoperative imaging for LP shunt insertion, large, prospective, randomised controlled studies should be performed.\n\n\nConclusions\n\nWhile this series is too small to conclude whether intraoperative imaging should be used to minimise the risk of misplaced proximal LP shunt catheters, it lays the groundwork for further prospective studies. Our results suggest that LP shunt insertion without fluoroscopic guidance has a 15.8% risk of misplacement of the proximal catheter, and for this reason the use of intraoperative image guidance is suggested to reduce the risk of spinal cord damage and its potentially catastrophic consequences.\n\n\nEthical statement\n\nEthical approval and registration were obtained from the National Hospital for Neurology and Neurosurgery.\n\nThis study was performed as part of an audit to analyse the current practice against the department policy standards.\n\n\nData availability\n\nDataset 1: X-rays showing the final position of the lumboperitoneal (LP) shunt in patients who underwent LP shunt insertion without fluoroscopic guidance. Each page of the dataset indicates a different procedure. doi: 10.5256/f1000research.11089.d15468617",
"appendix": "Author contributions\n\n\n\nConcept and design of study: SA-R, HK; acquisition of data: BP, LD’A; analysis and/or interpretation of data: NK, AT; drafting the manuscript: SA-R, MO, NK.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nIngraham FD, Sears RA, Woods RP, et al.: Further studies on the treatment of experimental hydrocephalus; attempts to drain the cerebrospinal fluid into the pleural cavity and the thoracic duct. J Neurosurg. 1949; 6(3): 207–15. PubMed Abstract | Publisher Full Text\n\nMatson DD: A new operation for the treatment of communicating hydrocephalus; report of a case secondary to generalized meningitis. J Neurosurg. 1949; 6(3): 238–47. PubMed Abstract | Publisher Full Text\n\nVander Ark GD, Kempe LG, Smith DR: Pseudotumor cerebri treated with Lumbar-peritoneal shunt. JAMA. 1971; 217(13): 1832–1834. PubMed Abstract | Publisher Full Text\n\nSpetzler RF, Wilson CB, Grollmus JM: Percutaneous lumboperitoneal shunt. Technical note. J Neurosurg. 1975; 43(6): 770–3. PubMed Abstract | Publisher Full Text\n\nBurgett RA, Purvin VA, Kawasaki A: Lumboperitoneal shunting for pseudotumor cerebri. Neurology. 1997; 49(3): 734–9. PubMed Abstract | Publisher Full Text\n\nEl-Saadany WF, Farhoud A, Zidan I: Lumboperitoneal shunt for idiopathic intracranial hypertension: patients’ selection and outcome. Neurosurg Rev. 2012; 35(2): 239–43; discussion 243–4. PubMed Abstract | Publisher Full Text\n\nBinder DK, Horton JC, Lawton MT, et al.: Idiopathic intracranial hypertension. Neurosurgery. 2004; 54(3): 538–51; discussion 551–2. PubMed Abstract | Publisher Full Text\n\nAbubaker K, Ali Z, Raza K, et al.: Idiopathic intracranial hypertension: lumboperitoneal shunts versus ventriculoperitoneal shunts–case series and literature review. Br J Neurosurg. 2011; 25(1): 94–9. 
PubMed Abstract | Publisher Full Text\n\nBurgett RA, Purvin VA, Kawasaki A: Lumboperitoneal shunting for pseudotumor cerebri. Neurology. 1997; 49(3): 734–9. PubMed Abstract | Publisher Full Text\n\nSaiffudin A, Burnett SJ, White J: The variation of position of the conus medullaris in an adult population. A magnetic resonance imaging study. Spine (Phila Pa 1976). 1998; 23(13): 1452–1456. PubMed Abstract | Publisher Full Text\n\nWilson DA, Prince JR: John Caffey award. MR imaging determination of the location of the normal conus medullaris throughout childhood. AJR Am J Roentgenol. 1989; 152(5): 1029–32. PubMed Abstract | Publisher Full Text\n\nChumas PD, Kulkarni AV, Drake JM, et al.: Lumboperitoneal shunting: a retrospective study in the pediatric population. Neurosurgery. 1993; 32(3): 376–83; discussion 383. PubMed Abstract | Publisher Full Text\n\nEisenberg HM, Davidson RI, Shillito J Jr: Lumboperitoneal shunts. Review of 34 cases. J Neurosurg. 1971; 35(4): 427–31. PubMed Abstract | Publisher Full Text\n\nAoki N: Lumboperitoneal shunt: clinical applications, complications, and comparison with ventriculoperitoneal shunt. Neurosurgery. 1990; 26(6): 998–1003; discussion 1003–4. PubMed Abstract | Publisher Full Text\n\nSato K, Shimizu S, Oka H, et al.: Intraoperative fluoroscopy with contrast medium for correct lumbar catheter placement in lumboperitoneal shunts. Kitasato Med J. 2013; 43: 155–158. Reference Source\n\nSubramaniam S, Fletcher WA: Obesity and Weight Loss in Idiopathic Intracranial Hypertension: A Narrative Review. J Neuroophthalmol. 2016; 1–9. PubMed Abstract | Publisher Full Text\n\nAl-Rashed S, Kareem H, Kalra N, et al.: Dataset 1 in: Lumboperitoneal shunt insertion without fluoroscopy guidance: Accuracy of placement in a series of 107 procedures. F1000Research. 2017. Data Source"
}
|
[
{
"id": "22457",
"date": "03 May 2017",
"name": "Uygur Er",
"expertise": [
"Reviewer Expertise Spine surgery"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nAuthors claimed that using a fluoroscopic system may reduce the risk of misplacement of the proximal catheter. However, they did not see any spinal cord injuries or other complications due to misplacement. In conclusion section this needs clarifying. Misplacement rate was given but this was emphasized as a technical complication, not a harmful cause.\n\nThe second vital point is that it is unclear the meaning of misplacement according to this article. It should be clarified “misplacement”. Did they use this term as the inserting vertebral level or the end position of the proximal catheter? Figure 1 showed wrong insertion points, but in the introduction section this was defined as “misplaced inside thecal sac due to migration”. Besides, position of the patient doesn’t change vertebral levels due to anatomical landmarks. For example, in lateral position, superior iliac line passes L-4/5 level like in anatomical position. Authors should give some strictly defined methods using fluoroscopy, positioning and evaluation to allow replication by others.\n\nFollowing clarifying these points, this article may accepted for indexing after reviewing of the revised version.\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? 
Partly\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": []
},
{
"id": "22202",
"date": "08 May 2017",
"name": "Rafid Al-mahfoudh",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe population studied needs to be accurately defined - do all the LP (Lumboperitoneal) shunt patients in this study have a diagnosis of IIH? Have the authors used fluoroscopic guidance for LP shunts at their institution? This should be mentioned and if yes, then a comparison of proximal catheter placement could be made with and without fluoroscopy.\n\nSome of the introduction would be best moved to the discussion (probably paragraph 5 onwards) as this mostly elaborates on possible causes which may explain the results of the study.\nResult section - clarify the indication for LP shunt (do all patients have a diagnosis of IIH)?\nThe drive for quality in healthcare in general and a reduction in revision surgery specifically, continues to gain momentum worldwide. The authors discuss the possibility that intraoperative fluoroscopy can improve accurate LP shunt placement and should be praised for providing an honest appraisal of their results / misplaced proximal catheter rates.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Partly\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nNot applicable\n\nAre all the source data underlying the results available to ensure full reproducibility? 
No source data required\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
},
{
"id": "22200",
"date": "26 May 2017",
"name": "Rossana Romani",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe article is interesting and it can be indexed after revision. Suggestions for revision as follows:\nThe introduction is too long and it needs to be shortened. The majority of the concepts in the introduction should be in the paragraph of the discussion.\n\nA further paragraph regarding the surgical techniques and the insertion of the lumbar catheter should be added in the Results section. Details of the surgical technique are in the Introduction section.\n\n17 patients had post-operative misplacement of the lumbar catheter with no symptoms nevertheless the authors conclude that intraoperative image guidance is suggested in all procedures to avoid misplacement of the lumbar catheter. The authors need to report the literature data regarding the misplacement of the lumbar catheter and clinical symptoms related.\n\nI agree that further randomised clinical trial with a large number of patients can clarify the utility of intraoperative image guidance during lumboperitoneal shunt insertion. The conclusions need revision.\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nNot applicable\n\nAre all the source data underlying the results available to ensure full reproducibility? 
Partly\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-565
|
https://f1000research.com/articles/5-2416/v1
|
28 Sep 16
|
{
"type": "Research Article",
"title": "Breeding novel solutions in the brain: a model of Darwinian neurodynamics",
"authors": [
"András Szilágyi",
"István Zachar",
"Anna Fedor",
"Harold P. de Vladar",
"Eörs Szathmáry",
"István Zachar",
"Anna Fedor",
"Harold P. de Vladar",
"Eörs Szathmáry"
],
"abstract": "Background: The fact that surplus connections and neurons are pruned during development is well established. We complement this selectionist picture by a proof-of-principle model of evolutionary search in the brain, that accounts for new variations in theory space. We present a model for Darwinian evolutionary search for candidate solutions in the brain. Methods: We combine known components of the brain – recurrent neural networks (acting as attractors), the action selection loop and implicit working memory – to provide the appropriate Darwinian architecture. We employ a population of attractor networks with palimpsest memory. The action selection loop is employed with winners-share-all dynamics to select for candidate solutions that are transiently stored in implicit working memory. Results: We document two processes: selection of stored solutions and evolutionary search for novel solutions. During the replication of candidate solutions attractor networks occasionally produce recombinant patterns, increasing variation on which selection can act. Combinatorial search acts on multiplying units (activity patterns) with hereditary variation and novel variants appear due to (i) noisy recall of patterns from the attractor networks, (ii) noise during transmission of candidate solutions as messages between networks, and, (iii) spontaneously generated, untrained patterns in spurious attractors. Conclusions: Attractor dynamics of recurrent neural networks can be used to model Darwinian search. The proposed architecture can be used for fast search among stored solutions (by selection) and for evolutionary search when novel candidate solutions are generated in successive iterations. Since all the suggested components are present in advanced nervous systems, we hypothesize that the brain could implement a truly evolutionary combinatorial search system, capable of generating novel variants.",
"keywords": [
"attractor network",
"autoassociative neural network",
"evolutionary search",
"Darwinian dynamics",
"neurodynamics",
"learning",
"problem solving"
],
"content": "Introduction\n\nThe idea that functional selection on a large set of neurons and their connections takes place in the brain during development1–3 is now experimentally validated4–7. As originally portrayed, this process is only one round of variation generation and selection, even if it requires several years. Evolution by natural selection works differently: variants are generated and then selected in iterative rounds. The field of “Neural Darwinism”1–3 fails to include generation of variants and thus could justifiably be regarded as a misnomer because the process that it describes is not evolutionary in the strict sense8. Evidence indicated that the development of the brain is more “constructivist”9 than pictured by the original selectionist accounts: for example, repeated rounds of neuron addition and loss happen during development10. Structural plasticity (synaptic remodelling) is now known to be a lifelong process with implications for memory and learning (e.g. 11,12). The addition and deletion of synapses and neurons takes several hours or days13. Our main goal here is to present a proof of principle that bona fide evolutionary dynamics could happen in the brain on a much faster time scale.\n\nMaynard Smith14 identified multiplication, inheritance and variability as necessary features of evolution. In genetic evolution the variability operators are mutation and recombination. If there are hereditary traits that affect the survival and/or the fecundity of the units, then in a population of these units, evolution by natural selection can take place. While this characterization qualitatively outlines the algorithmic aspect of evolution15, concrete realizations require also quantitative conditions: population size cannot be too small (if it is too small, neutral drift dominates over selection16) and replication accuracy cannot be too low (if it is too low, hereditary information is lost17). 
Note that this description says nothing about the nature of the units: they could be genes, organisms, linguistic constructions or anything else.\n\nThe proper implementation of an evolutionary process within the nervous system could have major implications for neuroscience and cognition8,18–25. A main benefit of neuro-evolutionary dynamics would be that it could harness the parallelism inherent in the nervous system and the redistribution of resources at the same time. The latter process means that hopeless variants are thrown away and are replaced in the “breeding space” by more promising ones8. Another important aspect of the process is that it is generative: it could explain where new hypotheses and new policies come from in Bayesian approaches to cognition26,27 and reinforcement learning28–30, respectively. Bayesian inference and natural selection are analogous31,32 in that candidate hypotheses in the brain (the prior distribution) represent a population of evolutionary units, which are evaluated, or selected, based on the evidence. There is a mathematical isomorphism between the discrete-time replicator equation and Bayesian update31. The likelihood function is analogous to the fitness function and the posterior distribution to the selected population. Relations like this suggest that Bayesian update could be one of the pillars of “universal Darwinism”33. We believe that convincing models for neuro-evolution could empower Bayesian approaches by providing a mechanism to generate candidate hypotheses.\n\nAttractor networks have been used, among other purposes, as models of long-term memory that are able to complete partial input34,35. These networks consist of one layer of units that recurrently connect back to the same layer. The recurrent connections can learn (store) a set of patterns with a Hebbian learning rule. 
Later, if these patterns or their noisy versions are used to provoke the network, it settles on the original patterns after several rounds of activation updates through the recurrent weights (recall); thus stored patterns act as attractors. Importantly, the existence of such networks has been experimentally validated in the visual cortex of awake mice by optogenetic methods36. Some versions of the learning rule allow for iterative learning without catastrophic forgetting and enable palimpsest memory. A network with palimpsest memory is able to learn new patterns one-by-one, while sequentially forgetting earlier patterns.\n\nIn this paper we describe a model that implements evolution of activation patterns in the brain with the help of attractor networks. We see it as a model of problem solving, which is able to generate new candidate solutions to a problem based on past experiences. Any cognitive problem of the brain is encoded by the activity pattern of neurons. We represent neurons as binary units, being able to continuously maintain firing in one state or the other. A group of neurons at any time therefore has a binary activation pattern. In our model, the units of evolution are these activation patterns, represented as bitstrings. Attractor neural networks can store activation patterns stably for a considerable time in the form of corresponding attractors and are able to recall them given the appropriate trigger (Figure 1A). This memory allows for heredity, which is indispensable for Darwinian dynamics (in genetic populations memory is the genotype pool). Attractor neural networks can generate new pattern variants in different ways (corresponding to mutation in a genetic system); see below under Discussion. Owing to memory and pattern generation, the possibility of iterated selection over a population of activation patterns becomes feasible. 
Our approach thus offers a more natural way to incorporate hereditary dynamics in models of cognitive problem solving at a faster time scale than could be provided by, say, structural plasticity (cf. 37). This fast-scale dynamics is missing from Edelmanian Neural Darwinism.\n\nA) Architecture of multiple attractor networks performing Darwinian search. Boxed units are attractor networks. Each network consists of N neurons (N = 5 in the figure, represented as black dots). Each neuron receives input from the top (1) and generates output at the bottom (2). Each neuron projects recurrent collaterals to all other neurons (but not to itself), thus forming N × (N – 1) synapses. The weight matrix of the synapses is represented here as a checkerboard-like matrix, where different shades indicate different weights on the connections. Selection and replication at the population level are as follows: 1) Each network receives a different noisy copy of the input pattern. 2) According to its internal attractor dynamics, each network returns an output pattern. 3) All output patterns are pooled in the implicit working memory (grey box with dashed outline), where they are evaluated and a fitness w_i is assigned to the i-th pattern. 4) The best pattern(s) are selected based on fitness. 5) One of the networks is randomly chosen to learn the pattern that was selected, with additional noise (dashed arrow). 6) The selected pattern is copied back to the networks as input to provoke them to generate the next generation of output patterns. B) Lifecycle of candidate solution patterns during a cognitive task. Patterns are stored in the long-term memory as attractors of autoassociative neural networks. When provoked, networks produce output patterns, which are stored in implicit working memory. These patterns are evaluated and selected. 
Patterns that are a good fit to the given cognitive problem can increase their chance of appearing in future generations in two possible, non-exclusive ways: 1) selected patterns are retrained to some networks (learning) and 2) selected patterns are used as inputs for the networks (provoking). The double dynamics of learning and provoking ensures that superior solutions will dominate the system. Erroneous copying of patterns back to the networks for provoking and learning and noisy recall are the sources of variation (like mutations).\n\nThe patterns represent candidate hypotheses or candidate solutions to a problem, which are evaluated based on a fitness function that measures their goodness as a solution. The best patterns are selected and copied (with variation) back to the networks, which in turn generate the next generation of patterns (Figure 1B). Stored patterns constitute the long-term memory; output patterns constitute the working memory (Figure 1B). While pattern generation is a simple recall task, which is only able to reproduce previously learnt patterns, the whole system is able to generate new variants due to noisy recall, spurious patterns (see later), noisy copying of patterns, and iterative learning, thus enabling the evolution of novel solutions.\n\n\nMethods\n\nRecurrent attractor networks. The basic units in our model are attractor networks. Attractor networks are recurrent neural networks consisting of one layer of units that are potentially fully connected. An attractor neural network produces the same (or highly correlated) output whenever the same input is provided (omitting retraining). The pattern that was learned becomes the attractor point of a new basin of attraction, i.e. it is the prototype pattern that the attractor network should return. Consequently, an attractor with a non-zero-sized basin should also return the same output to different input patterns. 
However, the amount and type of correlation of input patterns that retrieve the same prototype, i.e., the actual structure of the basin of attraction, is hard to assess, let alone visualize. Still, it is safe to assume that most input patterns correlated with the prototype produce the same output – the prototype itself.\n\nThe Hopfield network is a recurrent artificial neural network with binary neurons at nodes and weighted connectivity between nodes, excluding self-connections. According to the usual convention, the two states of binary neurons are +1 and -1. In our model, a neuron fires (state +1) if the total sum of incoming collaterals is greater than 0. Accordingly, the update rule has the following form:\n\ns_i(t+1) = sgn( Σ_{j≠i} w_ij s_j(t) ),\n\nwhere sgn(x) = +1 if x > 0 and -1 otherwise. The original Hebbian (covariance) learning rule has the following form (where m is the index of the patterns):\n\nw_ij(m) = w_ij(m-1) + (1/N) ξ_i(m) ξ_j(m).\n\nThe Hebb rule is both local and incremental. A rule is local if the update of a connection depends only on the information available on either side of the connection (including information coming from other neurons via weighted connections). A rule is incremental if the system does not need information from the previously learnt patterns when learning a new one, thus the update process uses the present values of the weights and the new pattern. The above update rule performs immediate update of the connection weights (“one shot” process; not a limit process requiring multiple update rounds). The covariance rule has a capacity of 0.14 N58. 
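A minimal sketch of the update and covariance rules described above (NumPy; the network size and noise level are arbitrary, and this is not the authors' implementation):

```python
import numpy as np

def train_hebbian(patterns: np.ndarray) -> np.ndarray:
    """Covariance (Hebb) rule: w_ij = (1/N) * sum_m xi_i(m) xi_j(m),
    with the diagonal zeroed (no self-connections)."""
    n = patterns.shape[1]
    w = patterns.T.astype(float) @ patterns / n
    np.fill_diagonal(w, 0.0)
    return w

def recall(w: np.ndarray, state: np.ndarray, steps: int = 10) -> np.ndarray:
    """Synchronous update: a neuron fires (+1) iff its summed input exceeds 0."""
    s = state.copy()
    for _ in range(steps):
        s = np.where(w @ s > 0, 1, -1)
    return s

# Store one random pattern in a 50-neuron network.
rng = np.random.default_rng(0)
xi = rng.choice([-1, 1], size=(1, 50))
w = train_hebbian(xi)

# Corrupt 5 of the 50 bits; recall restores the stored prototype.
noisy = xi[0].copy()
noisy[:5] *= -1
denoised = recall(w, noisy)
print(np.array_equal(denoised, xi[0]))  # prints True
```

With a single stored pattern the recall is exact even from the corrupted cue, because each neuron's summed input keeps the sign of the prototype bit.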
However, if during learning the system reaches its capacity and further patterns are presented, catastrophic forgetting ensues and the network will be unable to retrieve any of the previously stored patterns, forgetting all it has learnt.\n\nTo overcome this problem and to preserve the favorable properties of the covariance rule (one-shot, local and incremental updating), Storkey has introduced a palimpsest learning scheme41 as follows:\n\n\[ w_{ij}^{m} = w_{ij}^{m-1} + \frac{1}{N}\, \xi_i^{m} \xi_j^{m} - \frac{1}{N}\, \xi_i^{m} h_{ji}^{m} - \frac{1}{N}\, h_{ij}^{m} \xi_j^{m} \]\n\nand\n\n\[ h_{ij}^{m} = \sum_{k \neq i,j} w_{ik}^{m-1} \xi_k^{m} \]\n\nUsing the above rule, the memory becomes palimpsest (i.e. new patterns successively replace earlier ones during learning) with a capacity of C = 0.25 N (for details and a proper definition of palimpsest capacity, see 41).\n\nAn interesting feature of some autoassociative neural networks is the appearance of spurious patterns. In some cases, the network converges to a pattern different from any of the patterns learnt previously. These spurious patterns can be linear combinations of an odd number of stored patterns:\n\n\[ \xi_i^{\mathrm{spur}} = \operatorname{sgn}\left( \pm \xi_i^{m_1} \pm \xi_i^{m_2} \pm \dots \pm \xi_i^{m_{2k+1}} \right), \qquad m_1, \dots, m_{2k+1} \in \{1, \dots, S\}, \]\n\nwhere S is the number of the stored patterns58. This effect can be thought of as an effective implementation of a neuronal recombination operator.\n\nSelection. For the selection experiment, we used NA structurally identical attractor networks, each consisting of N neurons, implementing Storkey’s palimpsest learning rule. NA = 20 networks (N = 200) were initially trained with random patterns plus a special pattern for each. The 20 special training patterns were as follows. The worst special pattern was the uniform -1, the best special pattern was the uniform +1. Intermediate special patterns had an increasing number of +1s from the left. Fitness was measured as the relative Hamming similarity to the globally best target Otarget (i.e. the proportion of +1s in the pattern). The worst special pattern was trained only to network #1, the second worst to #2, etc., while the best special pattern (which was the target pattern) was trained to network #20.
In this scenario, no further training occurred (i.e., the dashed arrows on Figure 1 are not there). Assuming that the attractor basins of these patterns overlap among networks (Figure 2A), the output of one network will be the cue to trigger one or more close special patterns in other networks. The special patterns ensure that there exists a search trajectory leading from the worst to the best pattern. Starting from any arbitrary initial pattern, if any of the special patterns gets triggered at any time, the system can quickly converge to the optimum.\n\nA) Four time steps of selection, from top to bottom. At each step, we only show the network that produces the best output (numbered); the rest of the networks are not depicted. In each time step the networks are provoked by a new pattern that was selected from the previous generation of patterns. Different attractor networks partition the pattern-space differently: blobs inside networks represent basins of attraction. At start, the topmost network (#3) is provoked with an input pattern. It then returns the center of the attractor basin that is triggered by the input. When the output of this network is forwarded as input to the next network (#11), there is a chance that the new attractor basin has a center that is closer to the global optimum. If there is a continuity of overlapping attractor basins through the networks from the initial pattern (top) to the global optimum (bottom), then the system can find the global optimum even without learning. B) Learning in attractor networks. Network #5, when provoked, returns an output pattern that is used to train network #9 (blue arrow). As the network learns the new pattern, the palimpsest memory discards an earlier attractor (with the gray basin), a new basin (purple shape) forms around the new prototype (purple ×) and possibly many other basins are modified (basins with dotted outlines). Black dots indicate attractor prototypes (i.e. learnt patterns).
With learning, successful patterns could spread in the population of networks. Furthermore, if learning is noisy, so that a network may learn a slightly different version of the pattern, new variation is introduced into the system on top of the standing variation. This allows the system to find the global optimum even if it was not pre-trained to any network. The gray arrow in the background indicates the timeline of network #9.\n\nAfter initial training, each network received the same random input and generated an output according to its internal attractor dynamics. The output population was evaluated and the best output Obest was selected based on its fitness. Noisy copies of Obest (with per-bit mutation probability μI) were redistributed to each network as new input for the next generation. These steps were iterated until fitness reached the theoretical optimum (i.e. the system found special pattern #20). The crucial assumption for selection to work is continuity, namely the possibility that the output of one attractor of one network could fall in a different attractor basin of another network, which then returns an output that is closer to the global optimum than the input was (see Figure 1 and Figure 2).\n\nEvolutionary optimization on a single-peak landscape. In contrast to purely selective dynamics, in the evolutionary experiment, networks could learn new patterns during the search process. At start, each network was trained with a different set of random patterns. The fitness of a pattern is defined as the relative (per bit) Hamming similarity between the given pattern and an arbitrarily set globally best target pattern Otarget. The selection process for the actual best output Obest and the redistribution of its noisy copies (with μI = 0.005) as input was the same as before.
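The provoke–select–redistribute loop of the selection experiment can be sketched as follows. For brevity, the attractor networks are replaced by a basin-limited recall rule (a "network" returns its stored special pattern only if the cue falls within a fixed Hamming radius, otherwise it echoes the cue back); the chain of prefix patterns, the radius and the sizes are illustrative simplifications, not the paper's exact setup:

```python
import numpy as np

N, NA, R = 60, 20, 12                    # pattern length, networks, basin radius
target = np.ones(N, dtype=int)           # global optimum: uniform +1

def fitness(p):
    return (p == target).mean()          # relative Hamming similarity

# Network k stores one special pattern with 3*(k+1) leading +1s;
# network #19 therefore stores the target itself.
specials = []
for k in range(NA):
    s = -np.ones(N, dtype=int)
    s[: 3 * (k + 1)] = 1
    specials.append(s)

def provoke(prototype, cue):
    """Basin-limited recall: return the stored prototype if the cue falls
    inside its basin (Hamming radius R), otherwise echo the cue back."""
    if (prototype != cue).sum() <= R:
        return prototype.copy()
    return cue.copy()

cue = specials[0].copy()                 # start from the worst special pattern
steps = 0
while fitness(cue) < 1.0 and steps < 50:
    outputs = [provoke(s, cue) for s in specials]
    cue = max(outputs, key=fitness)      # select the best output as the next input
    steps += 1
```

With these toy settings the search is saltatory, as in the Results: it jumps from network #0 to #4, #8, #12, #16 and finally #19, skipping the intermediate networks whose basins overlap with the current cue.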
Most importantly, the mutated versions (with μT = 0.01) of Obest were also used for retraining NT different networks in each generation (see Figure 1): this forms the basis for the Darwinian evolutionary search over attractor networks, as it allows for replication with variation of (learnt) patterns over networks (thin lines in Figure 3).\n\nLines represent the evolution in four different populations, where a different number of networks were retrained. Each population consisted of 10 networks (see the rest of the parameters under the Methods section). Thin lines: stochastic attractor dynamics; thick lines: simulated attractor dynamics (abstract networks always return the stored attractor prototype that is closest to the actual input, with 0.001 per bit probability noise; capacity to store Cfix = 30 patterns, μO = 0.002). Parameters: N = 200, NA = 20, μT = 0.01, μI = 0.005, elitist selection, keeping the best one only from each output generation; retraining selects random networks (never the same in a given generation). Fitness is the relative Hamming similarity to the global optimum.\n\nWe have compared the search behavior of our system of attractor networks with a simpler model. In this model networks were represented as abstract storage units, which could store exactly Cfix patterns (Cfix was set to be close to the actual capacity of networks). When such a storage unit receives an input pattern it simply returns the closest (in Hamming distance) of its stored patterns as output, with additional noise (μO = 0.001). The units simulate the almost perfect recall property of attractor networks and effectively approximate attractor behavior. We compared evolution in this simple model with evolution in the system of attractor networks (thick and thin lines in Figure 3).\n\nOptimization in a changing environment. 
In order to test the effect of memory on successive search, we have implemented a periodically changing selective environment, i.e., we periodically changed the fitness function. The environment alternated between E1 and E2, with a stable period length of TE = 2000. Each environmental change reset the global optimum: for this scenario, we assumed a uniform +1 sequence for E1 and its inverse, uniform -1, for E2 as global optima, and used the relative Hamming similarity as a fitness measure.\n\nIn the first phase of the simulation, networks were allowed to learn in each environment for a total of Tnolearn = 12000 generations (three periods per environment). Afterwards, learning was turned off to test the effect of memory. To make sure that the optimal pattern was not simply carried over as an output pattern from the previous environment but was recalled from memory, the input patterns were set to random patterns (instead of inheriting the previous output population) at the start of each new environmental period after Tnolearn. This ensures that the population could only maintain high fitness afterwards in an environment if the optimum was stored and could be successfully recalled (see Figure 4). In order to assess the memory of a network, we also measured the distance between the actual best output of the population and the closest one of the set of previously learned patterns within the same network (as different networks have different training histories). A small distance indicates that the network outputs a learned pattern from memory (i.e. recalls it) instead of a spurious pattern.\n\nBlue: average fitness; green: best fitness; purple: distance of the best output of the population from the closest one stored in memory (for details, see main text). Grey and white backgrounds represent the changing environment: we alternated between two global optima every 2000th generation.
After the 12000th generation, we turned off learning (thick vertical line) and set the input to random patterns after each change of the environment. Parameters: NA = 100, N = 100, NT = 40, fitness is the relative Hamming similarity to the actual optimum.\n\nFor this scenario, we introduced a different selection method (also used in the next section). Each network in the population produces an output according to its internal attractor dynamics and the input it received from the previous generation. From all output sequences one was randomly chosen and mutated (µR = 1/N per bit mutation rate). If the mutant had a higher fitness than the worst of the output pool, the worst pattern was replaced by it (elimination of the worst). Furthermore, a superior mutant was also trained to NT different networks. Lastly, the resulting output population is shuffled and fed to the networks as input in the next generation (except when the environment changes and input is reset externally).\n\nOptimization on a difficult landscape. To investigate the applicability of this optimization process, we adopted a complex, deceptive landscape with scalable correlation, and also modified the selection algorithm introduced above. We used the general building-block fitness (GBBF) function of Watson and Jansen38. According to the GBBF function, each sequence of length N is partitioned into blocks of uniform length P, so that N = P B (P, B ∈ Z+) where B is the number of blocks. For each block, L arbitrarily chosen subsequences are designated as local optima, with randomly chosen but higher-than-average subfitness values. The overall fitness F(G) of a pattern G (“genotype”) is as follows:\n\n\[ F(G) = \sum_{i=1}^{B} f(g_i), \]\n\n\[ f(g_i) = \sum_{j=1}^{L} w_j\, \delta\bigl(d(g_i, t_j)\bigr), \qquad \delta(x) = \begin{cases} 1 & \text{if } x = 0, \\ 0 & \text{otherwise,} \end{cases} \]\n\nwhere f(gi) is the fitness contribution of the ith block in the pattern, tj is the jth local optimum of length P (all L different optima are the same for each block in our experiments) with subfitness value wj > 1, and d is the Hamming distance.
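A minimal sketch of a block-structured fitness of this kind is given below. It keeps only the exact-match subfitness term (the below-optimum gradient terms of the published GBBF function are omitted), and the block size, local optima and weights are illustrative assumptions, not the published test case:

```python
import numpy as np

P, B = 4, 3                       # block length and number of blocks (N = P*B = 12)
# Two local optima per block with subfitness values w_j > 1 (illustrative values):
t = [np.ones(P, dtype=int),                  # T1: uniform +1
     np.tile([-1, 1], P // 2)]               # T2: alternating -1, +1
w = [2.0, 1.5]

def block_fitness(g):
    """Subfitness of one block: sum of w_j over the local optima it matches exactly."""
    return sum(wj for tj, wj in zip(t, w) if np.array_equal(g, tj))

def gbbf(G):
    """Blocks contribute independently: F(G) = sum_i f(g_i)."""
    return sum(block_fitness(G[i * P:(i + 1) * P]) for i in range(B))
```

With these weights, the uniform +1 sequence scores B × 2.0 = 6.0 while the fully alternating sequence scores B × 1.5 = 4.5, so every block must be moved off the inferior local optimum T2 to reach the global optimum.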
Consequently, this landscape has many local optima, a single global optimum and a highly structured topology. Furthermore, since there are no nonlocal effects of blocks, each block can be optimized independently, favoring a metapopulation search.\n\nAccordingly, in this experiment, we introduced multiple populations of attractor networks. Each population of NA attractor neural networks forms a deme and ND demes are arranged in a 2D square lattice with Moore neighborhood (all of the eight surrounding demes are considered neighbors). Demes might accept output sequences from neighboring demes with a low probability pmigr per selection event; this slow exchange of patterns can provide the necessary extra variability for recombination. These demes correspond to groups of columns in the brain.\n\nNetworks in the deme are the same as those used in previous experiments. However, selection is modeled in a different way, similar to the selective dynamics outlined in 38; here we only give a brief description. Given a deme, each network produces an output according to its internal attractor dynamics and the input it received from the previous generation. Output sequences are pooled and either one or two are randomly chosen, for mutation or recombination, respectively (i.e. no elitist selection). With probability prec, two-point recombination is performed on the two selected partners; with probability 1-prec, a single selected sequence is mutated, with per-bit mutation rate µR = 1/N. With probability pmigr, the recombination partner is chosen from a neighboring deme instead of the focal one. Next, the output(s) of recombination or mutation are evaluated: if the resulting sequence (either of the two recombinants or the mutant) has a higher fitness than the worst of the output pool, the worst pattern is replaced by it (elimination of the worst). Furthermore, a superior mutant or recombinant is also trained to NT different networks within the deme.
Lastly, the resulting output population is shuffled and fed to the networks as input in the next generation. Each deme is updated in turn according to the outlined method; a full update of all networks in all demes constitutes a generation (i.e. a single time step).\n\nThe GBBF landscape was set up identically to the test case in 38, as follows. For each block uniformly, two target sequences of length P, T1 and T2, were appointed. T1 is the uniform plus-one sequence T1 = {+1}P and T2 alternates between -1 and +1 (T2 = {-1, +1}P/2). According to the fitness rule (Equation 5–Equation 6 in 38 and Equation 1–Equation 3 above), the best subfitness of each block in a sequence can be calculated, and the sum of all the subfitness values is the fitness of the global optimum sequence. Thus, for the sake of simplicity, we used relative fitness values, with the global optimum (the uniform +1 sequence) having maximal fitness 1. The sequence(s) with the lowest fitness always have a nonzero value.\n\nThe source code of all models and data presented in this paper is freely available as a supplement to this paper.\n\n\nResults\n\nSelection. We should distinguish between two processes: (i) search without learning among the stored patterns to find the best available solution (i.e., selection without step 5 on Figure 1A), and (ii) search with learning: retraining one or more networks with the selected and mutated patterns (Figure 1A with step 5). The first is a purely selectionist approach because it cannot generate heritable variants, while the second implements Darwinian evolution because learning changes the output behavior of the networks, so that they generate new patterns. First, we analyze the strictly selectionist version, and then the evolutionary version of the model.\n\nIn the selectionist version we pre-trained each network with a random set of patterns (excluding the target pattern) and started by provoking them with a different random input.
Each network produced an output pattern according to its own attractors and then the best pattern was selected. This pattern was used in turn to provoke the networks in the next generation, and so on. This search found, among all the available stored (pre-trained) patterns, the one with the highest fitness; it could not find the global optimum, as the networks were not pre-trained with it and there was no way for new variants to appear in this simulation.\n\nNext, we specifically composed the sets of pre-training patterns: each network was pre-trained with random patterns as before, but also with one special pattern. This set of special patterns (in which individual patterns can be ordered according to gradually increasing fitness values) delineates a route to the optimum through overlapping basins of attractors in different networks (see Figure 2A), so that we can test whether in this simplified case the algorithm converges quickly to the optimum. The first population was initiated with the special pattern that was farthest from the optimum. We have found that the selected output gets closer to the optimum in each generation, but the optimization process is saltatory: it skips over many intermediate neighboring special patterns (and thus networks). This is due to the fact that the attractor basins of neighboring special patterns were highly overlapping. For example, in Figure 2A, the stored special pattern of network #3 is in the basins of the stored special patterns of networks #4–#11, and since the stored pattern of network #11 is closest to the optimum, networks #4–#10 were skipped. A typical sequence of networks generating the actual best output is: #3, #11, #17 and #20 (of 20 networks; for actual parameters, see Figure 2A).\n\nEvolution. Learning new patterns as attractors (Figure 2B) allows networks to adapt to the problem and perform evolutionary search.
The results of the evolutionary experiments clearly show that a population of attractor networks can implement evolutionary search in problem spaces of different complexity (i.e. different levels of correlation and deceptiveness).\n\nEvolution on a simple fitness landscape. In this scenario, neither the global optimum nor a route toward it is assumed to pre-exist in the system as in the selectionist experiments: networks are pre-trained only with random patterns. Even under these stringent conditions, we have found that the system can converge to the global optimum, and this convergence is robust against a wide range of mutation rates. Our simplified abstract model, which always returns the stored prototype that is closest to an input, behaves qualitatively the same way (see Figure 3). The speed of convergence to the optimum is mainly affected by the number of retrained networks (Figure 3): as we increase the number of networks that are retrained, we find a faster fitness increase, albeit with diminishing returns. Mutation has an optimal range in terms of the speed of evolution. On the one hand, if the mutation rate is too low, evolution slows down, because there is not enough variation among patterns. On the other hand, if the mutation rate is too high, it hinders evolution, as the offspring is too dissimilar to the parent and cannot exploit the attractor property of the system. When the mutation rate is zero, the only sources of variation are the probabilistic input-output behavior of the networks due to their asynchronous update and the appearance of spurious patterns when the input is too far from the stored patterns.\n\nWhile the attractor networks have memory, due to the monotonic, single-peak nature of the fitness landscape there is no need to use it: the system works almost equally well if the networks only store the last trained pattern (i.e., weights are deleted before each learning event).
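The evolutionary loop with retraining can be sketched as follows, again abstracting networks as fixed-capacity nearest-prototype stores with round-robin (palimpsest-like) overwriting; all sizes, rates and seeds here are illustrative choices, not the paper's parameters:

```python
import numpy as np

rng = np.random.default_rng(7)
N, NA, C_FIX, N_T = 40, 10, 6, 5          # neurons, networks, store capacity, retrained nets
MU_I, MU_T = 0.01, 1 / 40                 # input noise and training (copying) noise
target = np.ones(N, dtype=int)

def fitness(p):
    return (p == target).mean()           # relative Hamming similarity

def mutate(p, mu):
    q = p.copy()
    q[rng.random(N) < mu] *= -1           # flip each bit with probability mu
    return q

stores = [rng.choice([-1, 1], size=(C_FIX, N)) for _ in range(NA)]
slots = [0] * NA                          # next slot to overwrite, per store

def provoke(store, cue):
    """Nearest-prototype stand-in for attractor recall."""
    return store[(store != cue).sum(axis=1).argmin()].copy()

cue = rng.choice([-1, 1], size=N)
f0 = None
best = cue
for gen in range(3000):
    outputs = [provoke(s, cue) for s in stores]
    best = max(outputs, key=fitness)                   # elitist selection of the best output
    if f0 is None:
        f0 = fitness(best)                             # remember the starting fitness
    if fitness(best) == 1.0:
        break
    for i in rng.choice(NA, size=N_T, replace=False):  # retrain: overwrite the oldest slot
        stores[i][slots[i] % C_FIX] = mutate(best, MU_T)
        slots[i] += 1
    cue = mutate(best, MU_I)                           # redistribute a noisy copy as input
```

The mutated retraining copies play the role of heritable variation: selection over outputs plus repeated overwriting of old prototypes lets the stored population climb the single-peak landscape.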
Next, we present experiments where both the attractor property and the palimpsest memory of the networks are used.\n\nEvolution in a changing environment. In this experiment we alternated two environments: every 2000th generation the target pattern (the optimum), against which fitness was measured, was changed. From an evolutionary point of view, this can be perceived as a changing environment, whereas from a cognitive point of view, this procedure simulates changing task demands. Figure 4 shows that the system found and learnt the optima of each of the two environments separately. Then, after generation 12000, we switched off learning. The fact that networks are nevertheless able to recall the target pattern right after the environmental change proves that they use previously stored memories. After we switched off learning, we used random patterns to provoke the networks at the first generation of each new environment. A single network that can recall the optimum from the random input is enough to produce a correct output that is amplified by selection for the next generational input, ultimately saturating the population with optimal patterns. This experiment effectively proves that a system of attractor networks can reliably recall earlier stored solution patterns, and therefore solves the problem faster in an alternating environment than a system without long-term memory.\n\nEvolution on a difficult fitness landscape. The previous evolutionary experiment (where search was on a single-peak fitness landscape with a single population of networks) is a proof of principle of the effectiveness of our evolutionary algorithm. In order to assess the capacity of population search of attractor networks, we introduce a considerably harder fitness landscape with higher dimensionality, where the deceptiveness of the problem can be tuned. The GBBF fitness landscape of 38 provides a method to easily generate scalable and complex landscapes with many deceptive local optima.
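The two-point recombination step used in the deme algorithm (see Methods) can be sketched as follows; the pattern length and the two partners are illustrative:

```python
import numpy as np

def two_point_recombine(a, b, rng):
    """Two-point crossover: swap the segment between two random cut points."""
    i, j = sorted(rng.choice(len(a) + 1, size=2, replace=False))
    c1, c2 = a.copy(), b.copy()
    c1[i:j] = b[i:j]
    c2[i:j] = a[i:j]
    return c1, c2

rng = np.random.default_rng(0)
a = np.ones(12, dtype=int)        # one parent: uniform +1
b = -np.ones(12, dtype=int)       # the other parent: uniform -1
c1, c2 = two_point_recombine(a, b, rng)
```

Because the blocks of the GBBF landscape contribute independently to fitness, such a swap can combine correctly solved blocks from two partners into a single offspring, which is why recombination (and migration between demes) is essential on this landscape.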
The complexity of the problem requires the introduction of multiple interacting populations of networks. Though an explicit spatial arrangement of the networks is not required to solve the problem, we have nevertheless included it in our implementation to imitate the real spatial arrangement of neurons in the brain. Locality allows the exchange of information among neighboring populations (i.e. recombination) that is essential to solve the GBBF problem (or similar deceptive problems) in a reasonable time.\n\nWe have investigated the performance of search in a metapopulation with different problem sizes (pattern lengths; see Figure 5). Results indicate that despite the vastness of the search space, the metapopulation is always able to converge to the global optimum, given enough time. The most complex landscape of 100-bit patterns is of size 2^100 with one global optimum and a huge number of local optima. The metapopulation consists of 10^5 neurons (100 populations of 10 networks each, with 100 neurons per network) and can find the single global optimum in ~10^4 time steps. The limit on further increasing the problem size is the computational capacity of our resources.\n\nEach curve is an average of 10 independent iterations. ND = 10 × 10 demes, NA = 10 networks per deme, N neurons per network, patterns of length N are partitioned into blocks of size P = 10 (B = N/P blocks per pattern), prec = 0.1, µR = 1/N, pmigr = 0.004, NT = 5. Note the logarithmic x axis. Inset: single simulation at N = 80, B = 8 (other parameters are the same). Plateaus roughly correspond to more and more blocks being optimized by finding the best subsequence on the building-block fitness landscape.\n\n\nDiscussion\n\nSummary of results. Attractor networks can be used to implement selection, replication and evolutionary search. As a proof of principle, we showed that attractor networks find the global optimum in a purely selectionist model (i.e.
without learning) if they are pre-trained with attractors that overlap in their basins and lead to the optimum. The population can effectively select over the standing variation of all stored patterns and find the trajectory to the (single) peak of the fitness landscape (see Figure 2B). Furthermore, if learning is allowed during search, the relative frequency of good patterns (those closer to the optimum) can be increased by re-training networks with such patterns, so that they are stored in the long-term memory in more copies (Figure 3). Overwriting an older memory trace with a new pattern corresponds to a copying operation, potentially with mutations. A particularly interesting aspect of a population of attractor networks in the given coupling is that even if learning is not erroneous, the Lamarckian nature of the inheritance of patterns (as output → memory → output; see Figure 1B) means that there is room for heritable variation to emerge at other stages of the cycle, thus implementing Darwinian dynamics.\n\nThe explicit benefit of memory manifests itself in a periodically changing environment. In a single, stable environment, memory is not very useful because the attractor property acts against exploring variation, and networks might be even slower than gradient hill-climbers (i.e. searchers who generate mutant output blindly and only take a step on the landscape if the output is better than the input). However, in a periodically changing environment, attractor networks with memory are able to recall the actual global optimum if they have already encountered the given environment in the past and stored its optimum; hill-climbers or naïve attractor networks lacking the necessary memory would have to perform the search all over again. An attractor network can recall the appropriate pattern even if there is no initial cue for the population to know which environment it is in. A network with memory can complete a partial cue and thus can recall the global optimum.
After learning has ceased, it is enough to only have a few networks in the population that can recall the optimum from random cues that are accidentally close to the actual optimum (maximal fitness is immediately 1 in the population, see green curve in Figure 4). Selection then simply amplifies these optimal output patterns in the population (as output is fed back as input in the next generation) until all networks receive optimal patterns as input. At that point, average fitness also reaches the maximum (Figure 4, blue curve). A network without memory would have to search for the new optima in each environment over and over again, finding the whole uphill path on the landscape.\n\nWe also proved that a metapopulation of attractor networks can successfully optimize on a complex, correlated landscape of higher dimensionality. This is a notoriously hard optimization problem (cf. 38) as hill-climbers can easily get stuck in deceptive local optima. The spatial organization of attractor networks resembles the spatial organization of neural structures of the cortex and it allows parallel optimization of subproblems. By this independent optimization of solution parts, local populations can exchange successful partial solutions with each other and form recombinants that are necessary to solve such complex problems in reasonable time.\n\nEvolutionary context. We have chosen attractor networks to demonstrate Darwinian neurodynamics because (i) the search for existing solutions uses the same architecture for generating, testing and storing novel ones; (ii) stored patterns help evolutionary search by readily employing related past (learnt) experience, and (iii) the particular choice of Storkey’s model naturally results in some new recombinant patterns. 
This is an important point because, as we know from population genetics39, recombination speeds up the response to selection by creating new variants40.\n\nOur choice of implementation of attractor networks, following Storkey’s model41, is based on three important aspects: (i) it has palimpsest memory, so that it can learn new and forget old patterns without catastrophic forgetting, as happens in Hopfield and other networks; (ii) its attractors are roughly of equal size and are well-structured according to a Hamming distance measure, and (iii) unlike most other neural networks, it is able to store correlated patterns. The downside is that these networks require full connectivity, which is neuronally unrealistic. However, their functional properties reflect well what we know of long-term memory in the brain, which is enough for a proof of principle of an evolutionary implementation of neuronal function. To our knowledge no model exists in the literature that satisfies all the requirements above and, at the same time, works with diluted connectivity42.\n\nIt is important to clarify that the units of evolution in our proposed Darwinian neurodynamics system are the activation patterns: they are copied potentially with errors, selected and amplified. However, patterns live in two stages: in the working memory and in the long-term memory (cf. Figure 1). This implies different inheritance methods (routes to pass on information) from what is expected in a purely genetic system. Changed attractor prototypes imply changed outputs, just like a mutated genotype implies a different phenotype. However, in our proposed system, changes made to output patterns (by mutation) can also be “inherited” by the stored attractor prototypes via learning. Furthermore, there is another difference in the dynamics, explained in turn.\n\nDarwinian evolution is often described as a parallel, distributed search on a fitness landscape8.
The population, like an amoeba, climbs the peaks of the landscape via iterative replication and selection in successive generations. The attractors, however, impose a different mode of evolution, because they simply return the prototype pattern closest to the input, even if it is less fit than the input pattern itself. Consequently, attractor networks work against variability and slow down hill-climbing. However, attractor networks resist fitness increase only half of the time on average; the other half of the time they effectively push inferior patterns uphill on the fitness landscape at a speed much higher than that expected for the same (reduced) amount of genetic variation. Consequently, attractor networks can facilitate evolution21.\n\nWe stress the importance of evolutionary combinatorial search. In cases where ab initio calculation of molecular functionality is impossible, artificial selection on genetic variants has proven to be an extremely useful method to generate molecular solutions to functional problems, as experiments on the generation of catalytic RNA molecules (ribozymes) illustrate (see 43 for a recent review). By the same token, when a brute-force numerical calculation of a combinatorial functionality problem is impossible for the brain, given the adequate architecture it could (and we suggest it does) use an evolutionary search mechanism, as shown in this paper.\n\nImplementation in the brain. It is of primary importance that all the components in our ‘algorithmic diagram’ (Figure 1) can be implemented by mechanisms known in the brain. It is likely that the cortical hypercolumn44 behaves like (at least one) attractor network. The reciprocal connections between the long-term and working memory networks are assumed to be like those in Figure 3 in 45.
We propose that, first, the reinforcing signal from the basal ganglia via the thalamus keeps the good candidate pattern solutions active in the rewarded auto-associative network and, second, that the latter sends a copy of the active pattern to (unconscious) working memory for eventual redistribution. When there is an increase in the quality of a solution (a fitness increase) or when a local or global optimum is reached, the central executive elicits the transmission of a copy of the solution to the conscious memory46.\n\nOur proposed mechanism relies on the transmission of patterns, with variation, between cortical groups and relevant subcortical structures, and in this way it differs from all previous models. A discussion of timescales is in order. Without learning newly generated variants, the selective mode would require about the same time as suggested by 47 for the “cognitive cycle” (based on data), but without perception at the beginning of the cycle, i.e. it would be in the 160–310 ms range. In the evolutionary mode, learning of new variants is required, which would take more time. A conservative estimate for the reliable expression of changed synaptic weights is between seconds and minutes48,49. The second scale would allow several dozen cycles/generations per minute, a very good number in artificial selection experiments. A more accurate estimate of timing will require a fully neuronal model of our proposed mechanism.\n\nCopying of information is necessary for evolutionary processes: this is where previous approaches22–24,50 have been too artificial. There are four well known instances where scholars invoke copying of information in the brain: (i) the transfer of information from short to long-term memory35,51,52, (ii) the transfer of information from specialized cortical areas to working memory53, (iii) the transfer of information to the global workspace54,55 and, finally, (iv) the possible copying of patterns from working memory to conscious processing56.
Undoubtedly, all these approaches require accurate information transfer between different brain components57.\n\nThere are three sources of variation in our system: (i) due to the finite size of our networks and asynchronous update of the neurons, the output patterns show some variation even if a network is repeatedly provoked by the same input pattern, (ii) acknowledging the noisiness of neuronal transmission we introduce variation akin to mutations when patterns are transmitted among the blocks of the model, and finally (iii) we have realized that “spurious patterns”58 emerge as by-products of previously trained patterns and they might act as (new) recombinants of learnt patterns that facilitate the evolutionary search.\n\nThe non-conscious or implicit working memory, which has received considerable attention lately46,56, is crucial for our proposed mechanism. Irrespective of whether working memory overlaps with the conscious domain59 or not56 (in the latter case a ‘conscious copy’ must be sent from working memory to conscious access), the important factor is that the bound on the number of patterns that can be held in the unconscious part of the working memory is larger than that of the conscious working memory59. In other words, our mechanism suggests that the total storage capacity of the unconscious network population is much higher than that of the conscious one. Crucially, there is support for this requirement: there is evidence that the central executive function of working memory is not restricted to the conscious domain either46. The relatively large capacity of (unconscious) working memory can hold not one, but several patterns selected by the cortex-striatum-basal ganglia loop. This type of selection can be realized by a winner-share-all (WSA) mechanism60.\n\nThe latter point requires special attention. The reader is referred to the recent review by 61 on models of action selection and reinforcement learning. 
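The winner-share-all selection invoked above can be contrasted with winner-take-all in a minimal sketch; the value vector and the tolerance parameter below are illustrative assumptions, not quantities from the cited circuit model.

```python
# Illustrative sketch: winner-share-all (WSA) keeps every candidate whose
# evaluated value lies within a tolerance of the best, whereas
# winner-take-all (WTA) keeps exactly one. Shrinking the tolerance
# gradually turns WSA into WTA.
import numpy as np

def winner_share_all(values, tolerance=0.2):
    """Return indices of all candidates close enough to the best value."""
    values = np.asarray(values, dtype=float)
    return np.flatnonzero(values >= values.max() - tolerance)

def winner_take_all(values):
    """Return the index of the single best candidate."""
    return int(np.argmax(values))

values = [0.9, 0.85, 0.4, 0.88, 0.2]   # evaluated candidate patterns
shared = winner_share_all(values)      # several candidates survive
single = winner_take_all(values)       # only one candidate survives
```

With `tolerance=0.0` the WSA rule returns only the single best index, mirroring the gradual WSA-to-WTA reduction discussed below.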
We wish to make a few critical points in this regard. First, as we are considering problem solving that unfolds its capacity online, there is no reason to select one pattern, since the interim patterns are not alternative actions but only candidate solutions. They can be turned into actions sometime during, or only at the very end of, the evolutionary search. Weak lateral inhibition within the evaluation mechanism enhances value differences in selection, but a single winner is not selected60,61. Second, parallelism of the evolutionary approach loses considerable power if the evaluations are not done in parallel, and if poor solutions cannot be replaced by better solutions in the storage. (In a subsequent study we shall show that the number of parallel evaluations is allowed to be considerably smaller than the population size, but also that purely serial evaluation of candidates is a killer.) Third, it is perfectly possible that the WSA part is implemented by the cortex rather than the striatum (cf. 61): we are agnostic on this point for the time being. (Admittedly, that option would require a different version of the full model.) Fourth, we maintain that parallel survival of a number of candidates should happen, and the mechanism for this might have evolved with selection for complex (offline) problem solving. Fifth, it is quite possible that WSA is gradually reduced towards WTA (one winner takes all) during evolutionary search in the brain: this would guard against premature convergence early on while allowing fast convergence towards the end of the search.\n\nTo sum up, we have seen that a process analogous to natural selection can be rendered into a neuronal model employing known neurophysiological mechanisms. Now we discuss relations to some other publications and outline future work.\n\nRelated work. Several examples show that evolution with neurodynamics can be more powerful than either of the components alone. 
Fernando et al.25 proved that the path evolution algorithm – which includes both elements of structural plasticity62,63 and iterative generation of variation – is more powerful in several regards than classical genetic algorithms. Fernando et al. have also shown that replication combined with Hebbian learning is more powerful than classical natural selection in a model of mechanistic copying of binary activity22. De Vladar and Szathmáry provided proof that the synergy between selection and learning results in increased evolvability; they also pointed out that synaptic plasticity helps escape impasses and build circuits that are tailored to the problem21. Finally, in a recent model Fernando and his colleagues have used autoencoders for the generation of “neuronal genotypes”64. Since autoencoders produce compressed representations of the input, we expect them to successfully replace the identity function (i.e. bit-by-bit copying, as in DNA replication). Indeed, applying this neural component within the context of a genetic algorithm turned out to be rewarding.\n\nUnless the envisaged information transfer is accurate enough in space and time, the evolutionary dynamics breaks down. Similar to genetic evolution, where the error threshold17 had to be raised before long informative genomes could arise by evolving adaptations for more accurate copying65, in the neuronal context the element of accuracy was raised by Adams18. In his “Hebb and Darwin” paper Adams talks about synaptic replication and synaptic mutation as important ingredients for a Darwinian view of the brain. Synaptic replication means either the strengthening of an existing synapse, or the making of a new synapse between two neurons that already have one synapse between them. Adams’ is an important insight: evolutionary dynamics does not need copying for selection (scoring or strengthening is enough), but it needs copying with errors to test the new variants against the others. 
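The requirement just stated, copying with errors, can be sketched as noisy transmission of a binary activity pattern between networks; the per-unit error rate below is an illustrative assumption, not a value from the model.

```python
# Illustrative sketch: transmit a +/-1 activity pattern with mutation-like
# errors, flipping each unit independently with a small probability.
import numpy as np

rng = np.random.default_rng(1)

def transmit(pattern, error_rate=0.02):
    """Copy a +/-1 pattern, independently flipping each unit with error_rate."""
    flips = rng.random(pattern.size) < error_rate
    child = pattern.copy()
    child[flips] *= -1
    return child

parent = rng.choice([-1, 1], size=200)
child = transmit(parent)
n_mutations = int((child != parent).sum())  # typically a handful of flipped units
```

With `error_rate=0.0` the copy is exact, which supports selection only; a small positive rate supplies the variants that the evolutionary search needs.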
Synaptic mutation happens when a neuron grows a synapse towards a neighboring neuron with which previously it had no contact. Interestingly, these thoughts preceded the burst of interest in structural synaptic plasticity (SSP)62,63. Following his expansion-renormalization model for SSP66, Kilgard observes that when SSP is used for learning something new, this could be regarded as a Darwinian mechanism37, as it generates and tests variations in successive rounds, based on what is already there (unlike the original models of “neural Darwinism”). Kilgard’s mechanism has not been formalized yet (although see 21), but the path evolution model25 bears some relationship to it.\n\nWe share the view of Eliasmith53 that the cortex/basal ganglia/thalamus/cortex loop plays a crucial role not only in elementary action selection but also in symbolic reasoning. We conjecture that non-primate animals (in particular mammals and birds) employ the same (or at least an analogous) loop in order to retrieve old solutions and to innovate new ones, in a similar way to what we have shown using our elementary model.\n\nAnother view to which we feel strongly related is that of Bayesian models that advocate “theory learning as stochastic search in a language of thought”27. We are reasonably confident that we have found a candidate mechanism for the search process. If true, the rugged learning landscape in Figure 3 of 27 can be directly interpreted as the fitness landscape of our neuro-evolutionary model. A task for the future is to work out the explicit relations in detail. We note again the formal link between Bayesian inference and evolutionary selection31,32 mentioned in the Introduction. Our mechanism (Figure 1) could in principle implement, with appropriate modifications, an estimation of distribution algorithm (EDA). 
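As a minimal illustration of how an EDA searches without copying individual patterns, the following sketch implements population-based incremental learning (PBIL) with a Hamming-distance fitness to a fixed target; the target, fitness function and all parameter values are illustrative assumptions standing in for the model's evaluation mechanism.

```python
# Illustrative PBIL sketch: evolve a probability vector rather than a
# population of copied patterns. Fitness = negative Hamming distance
# to a fixed target (an illustrative stand-in for pattern evaluation).
import numpy as np

rng = np.random.default_rng(2)
L, pop, lr, mut = 32, 20, 0.1, 0.02
target = rng.integers(0, 2, size=L)
p = np.full(L, 0.5)                          # probability vector

for generation in range(200):
    # (i) generate a population from the probability vector
    popn = (rng.random((pop, L)) < p).astype(int)
    # (ii) evaluate and rank fitness (negative Hamming distance)
    fitness = -(popn != target).sum(axis=1)
    elite = popn[np.argmax(fitness)]
    # (iii) shift the probability vector toward the elite individual
    p = (1 - lr) * p + lr * elite
    # (iv) mutate the probability vector slightly
    noise = rng.random(L) < mut
    p[noise] = (1 - lr) * p[noise] + lr * rng.integers(0, 2, noise.sum())

best = (p > 0.5).astype(int)                 # most probable pattern found
```

Only the probability vector is transmitted between generations, which is why an EDA of this kind relaxes the accuracy demands that bit-by-bit copying would impose.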
The population-based incremental learning (PBIL) algorithm consists of the following steps67: (i) Generate a population from the probability vector; (ii) Evaluate and rank the fitness of each member; (iii) Update the probability vector based on the elite individual; (iv) Mutate the probability vector; (v) Repeat steps (i)–(iv) until a stopping criterion is met. EDAs can work better than copying algorithms, making them an interesting line to pursue.\n\nFuture work will be to link our recurrent model with the feedforward autoencoder model of Fernando et al.64, since the latter can generate interesting genotypes (better substrates for selection) due to the emerging compressed representations of the inputs.\n\nAs two experts aptly remark: “The Bayesian brain falls short in explaining how the brain creates new knowledge” (68, p. 9). We suggest that neuronal evolutionary dynamics might serve as a remedy.\n\n\nData and software availability\n\nZenodo: IstvanZachar/Neurodynamics: Publication release, doi: 10.5281/zenodo.15411369.\n\nThe algorithm described in the paper is also available on GitHub at https://github.com/IstvanZachar/Neurodynamics.",
"appendix": "Author contributions\n\n\n\nESz conceived the model. ASz and IZ coded the model. ASz, IZ, AF and HPdV contributed to the model and designed the experiments. ASz and IZ ran the experiments and analyzed the data. All authors contributed to writing the manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe research leading to these results has received funding from the European Union Seventh Framework Program (FP7/2007-2013) under grant agreement numbers 308943 (INSIGHT project) and 294332 (EvoEvo project).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nWe are thankful to Luc Steels, Chrisantha Fernando, Mauro Santos, and Thomas Filk for useful comments and discussions.\n\n\nReferences\n\nChangeux JP: Neuronal man: The biology of mind. Princeton, NJ: Princeton University Press; Translated by Garey, L. 1985. Reference Source\n\nChangeux JP, Courrége P, Danchin A: A theory of the epigenesis of neuronal networks by selective stabilization of synapses. Proc Natl Acad Sci U S A. 1973; 70(10): 2974–2978. PubMed Abstract | Free Full Text\n\nEdelman GM: Neural Darwinism. The theory of neuronal group selection. New York: Basic Books; 1987. Reference Source\n\nWilliams RW, Rakic P: Elimination of neurons from the rhesus monkey’s lateral geniculate nucleus during development. J Comp Neurol. 1988; 272(3): 424–436. PubMed Abstract | Publisher Full Text\n\nO’Leary DD: Development of connectional diversity and specificity in the mammalian brain by the pruning of collateral projections. Curr Opin Neurobiol. 1992; 2(1): 70–77. PubMed Abstract | Publisher Full Text\n\nRabinowicz T, de Courten-Myers GM, Petetot JM, et al.: Human cortex development: estimates of neuronal numbers indicate major loss late during gestation. J Neuropathol Exp Neurol. 1996; 55(3): 320–328. 
PubMed Abstract | Publisher Full Text\n\nMiller-Fleming TW, Petersen SC, Manning L, et al.: The DEG/ENaC cation channel protein UNC-8 drives activity-dependent synapse removal in remodeling GABAergic neurons. eLife. 2016; 5: pii: e14599. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFernando C, Szathmáry E, Husbands P: Selectionist and evolutionary approaches to brain function: a critical appraisal. Front Comput Neurosci. 2012; 6: 24. PubMed Abstract | Publisher Full Text | Free Full Text\n\nQuartz SR, Sejnowski TJ: The neural basis of cognitive development: a constructivist manifesto. Behav Brain Sci. 1997; 20(4): 537–556; discussion 556-96. PubMed Abstract | Publisher Full Text\n\nBandeira F, Lent R, Herculano-Houzel S: Changing numbers of neuronal and non-neuronal cells underlie postnatal brain growth in the rat. Proc Natl Acad Sci U S A. 2009; 106(33): 14108–14113. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCaroni P, Donato F, Muller D: Structural plasticity upon learning: regulation and functions. Nat Rev Neurosci. 2012; 13(7): 478–490. PubMed Abstract | Publisher Full Text\n\nBernardinelli Y, Nikonenko I, Muller D: Structural plasticity: mechanisms and contribution to developmental psychiatric disorders. Front Neuroanat. 2014; 8: 123. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTetzlaff C, Kolodziejski C, Markelic I, et al.: Time scales of memory, learning, and plasticity. Biol Cybern. 2012; 106(11–12): 715–726. PubMed Abstract | Publisher Full Text\n\nMaynard Smith J: The problems of biology. USA: Oxford University Press; 1986. Reference Source\n\nMaynard Smith J: Genes, memes, and minds. The New York Review of Books. 1995; 42(19): 46–48. Reference Source\n\nKimura M: The Neutral Theory of Molecular Evolution. Cambridge: Cambridge University Press; 1983. Publisher Full Text\n\nEigen M: Selforganization of matter and the evolution of biological macromolecules. Naturwissenschaften. 1971; 58(10): 465–523. 
PubMed Abstract | Publisher Full Text\n\nAdams P: Hebb and Darwin. J Theor Biol. 1998; 195(4): 419–438. Publisher Full Text\n\nCalvin WH: The brain as a Darwin Machine. Nature. 1987; 330(6143): 33–34. PubMed Abstract | Publisher Full Text\n\nCalvin WH: The cerebral code: thinking a thought in the mosaics of the mind. Cambridge, MA: MIT Press. 1996. Reference Source\n\nde Vladar HP, Szathmáry E: Neuronal boost to evolutionary dynamics. Interface Focus. 2015; 5(6): 20150074. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFernando C, Goldstein R, Szathmáry E: The neuronal replicator hypothesis. Neural Comput. 2010; 22(11): 2809–2857. PubMed Abstract | Publisher Full Text\n\nFernando C, Karishma KK, Szathmáry E: Copying and evolution of neuronal topology. PLoS One. 2008; 3(11): e3775. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFernando C, Szathmáry E: Natural selection in the brain. In: Glatzeder B, Goel V, von Müller A, editors. Towards a theory of thinking. vol. 5 of On thinking. Berlin/Heidelberg: Springer-Verlag; 2010; 291–322. Publisher Full Text\n\nFernando C, Vasas V, Szathmáry E, et al.: Evolvable neuronal paths: a novel basis for information and search in the brain. PLoS One. 2011; 6(8): e23534. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKemp C, Tenenbaum JB: The discovery of structural form. Proc Natl Acad Sci U S A. 2008; 105(31): 10687–10692. PubMed Abstract | Publisher Full Text | Free Full Text\n\nUllman TD, Goodman ND, Tenenbaum JB: Theory learning as stochastic search in the language of thought. Cognitive Dev. The Potential Contribution of Computational Modeling to the Study of Cognitive Development: When, and for What Topics? 2012; 27(4): 455–480. Publisher Full Text\n\nBörgers T, Sarin R: Learning through reinforcement and replicator dynamics. J Econ Theory. 1997; 77(1): 1–14. Publisher Full Text\n\nNiekum S, Barto AG, Spector L: Genetic programming for reward function search. IEEE Trans Auton Ment Dev. 
2010; 2(2): 83–90. Publisher Full Text\n\nSutton RS, Barto AG: Introduction to reinforcement learning. 1st ed. Cambridge, MA, USA: MIT Press; 1998. Reference Source\n\nHarper M: The replicator equation as an inference dynamic. ArXiv e-prints. 2009. Reference Source\n\nShalizi CR: Dynamics of Bayesian updating with dependent data and misspecified models. Electron J Statist. 2009; 3: 1039–1074. Publisher Full Text\n\nCampbell JO: Universal Darwinism As a Process of Bayesian Inference. Front Syst Neurosci. 2016; 10: 49. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHopfield JJ: Neural networks and physical systems with emergent collective computational abilities. Proc Natl Acad Sci U S A. 1982; 79(8): 2554–2558. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRolls ET, Treves A: Neural networks and brain function. Oxford, New York: Oxford University Press; 1998. Publisher Full Text\n\nCarrillo-Reid L, Yang W, Bando Y, et al.: Imprinting and recalling cortical ensembles. Science. 2016; 353(6300): 691–694. PubMed Abstract | Publisher Full Text\n\nKilgard MP: Harnessing plasticity to understand learning and treat disease. Trends Neurosci. 2012; 35(12): 715–722. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWatson RA, Jansen T: A building-block royal road where crossover is provably essential. In: Proceedings of the 9th Annual Conference on Genetic and Evolutionary Computation. GECCO ’ 07. New York, NY, USA: ACM; 2007; 1452–1459. Publisher Full Text\n\nMaynard Smith J: The Evolution of Sex. Cambridge University Press; 1978. Reference Source\n\nMaynard Smith J: The units of selection. Novartis Found Symp. 1998; 213: 203–11; discussion 211-7. PubMed Abstract\n\nStorkey AJ: Efficient covariance matrix methods for Bayesian gaussian processes and Hopfield neural networks. Imperial College, Department of Electrical Engineering, Neural System Group; 1999. Reference Source\n\nSompolinsky H: Computational neuroscience: beyond the local circuit. 
Curr Opin Neurobiol. Theoretical and computational neuroscience. 2014; 25: xiii–xviii. PubMed Abstract | Publisher Full Text\n\nMüller S, Appel B, Balke D, et al.: Thirty-five years of research into ribozymes and nucleic acid catalysis: where do we stand today? [version 1; referees: 2 approved]. F1000Res. 2016; 5: pii: F1000 Faculty Rev-1511. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMountcastle VB: Modality and topographic properties of single neurons of cat’s somatic sensory cortex. J Neurophysiol. 1957; 20(4): 408–434. PubMed Abstract\n\nRolls ET: Attractor networks. Wiley Interdiscip Rev Cogn Sci. 2010; 1(1): 119–134. PubMed Abstract | Publisher Full Text\n\nSoto D, Silvanto J: Reappraising the relationship between working memory and conscious awareness. Trends Cogn Sci. 2014; 18(10): 520–525. PubMed Abstract | Publisher Full Text\n\nMadl T, Baars BJ, Franklin S: The timing of the cognitive cycle. PLoS One. 2011; 6(4): e14803. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGustafsson B, Asztely F, Hanse E, et al.: Onset Characteristics of Long-Term Potentiation in the Guinea-Pig Hippocampal CA1 Region in Vitro. Eur J Neurosci. 1989; 1(4): 382–394. PubMed Abstract | Publisher Full Text\n\nHirsch JC, Crepel F: Use-dependent changes in synaptic efficacy in rat prefrontal neurons in vitro. J Physiol. 1990; 427(1): 31–49. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFernando C, Szathmáry E: Chemical, neuronal, and linguistic replicators. In Pigliucci M, Müller G, editors. Cambridge, MA: MIT Press; 2010; 209–250. Publisher Full Text\n\nNadel L, Land C: Commentary-Reconsolidation: Memory traces revisited. Nat Rev Neurosci. 2000; 1(3): 209–212. Publisher Full Text\n\nNadel L, Moscovitch M: Memory consolidation, retrograde amnesia and the hippocampal complex. Curr Opin Neurobiol. 1997; 7(2): 217–227. 
PubMed Abstract | Publisher Full Text\n\nStewart TC, Choo X, Eliasmith C: Symbolic Reasoning in Spiking Neurons: A Model of the Cortex/Basal Ganglia/Thalamus Loop. In: 32nd Annual Meeting of the Cognitive Science Society; 2010. Reference Source\n\nDehaene S, Kerszberg M, Changeux JP: A neuronal model of a global workspace in effortful cognitive tasks. Proc Natl Acad Sci U S A. 1998; 95(24): 14529–14534. PubMed Abstract | Publisher Full Text | Free Full Text\n\nShanahan M: A spiking neuron model of cortical broadcast and competition. Conscious Cogn. 2008; 17(1): 288–303. PubMed Abstract | Publisher Full Text\n\nJacobs C, Silvanto J: How is working memory content consciously experienced? The ‘conscious copy’ model of WM introspection. Neurosci Biobehav Rev. 2015; 55: 510–519. PubMed Abstract | Publisher Full Text\n\nKumar A, Rotter S, Aertsen A: Spiking activity propagation in neuronal networks: reconciling different perspectives on neural coding. Nat Rev Neurosci. 2010; 11(9): 615–627. PubMed Abstract | Publisher Full Text\n\nHertz J, Palmer RG, Krogh AS: Introduction to the Theory of Neural Computation. 1st ed. Perseus Publishing; 1991. Reference Source\n\nOberauer K: Access to information in working memory: exploring the focus of attention. J Exp Psychol Learn Mem Cogn. 2002; 28(3): 411–421. PubMed Abstract | Publisher Full Text\n\nFukai T, Tanaka S: A simple neural network exhibiting selective activation of neuronal ensembles: from winner-take-all to winners-share-all. Neural Comput. 1997; 9(1): 77–97. PubMed Abstract | Publisher Full Text\n\nMorita K, Jitsev J, Morrison A: Corticostriatal circuit mechanisms of value-based action selection: Implementation of reinforcement learning algorithms and beyond. Behav Brain Res. 2016; 311: 110–121. PubMed Abstract | Publisher Full Text\n\nChklovskii DB, Mel BW, Svoboda K: Cortical rewiring and information storage. Nature. 2004; 431(7010): 782–788. 
PubMed Abstract | Publisher Full Text\n\nHoltmaat A, Svoboda K: Experience-dependent structural synaptic plasticity in the mammalian brain. Nat Rev Neurosci. 2009; 10(9): 647–658. PubMed Abstract | Publisher Full Text\n\nChurchill AW, Sigtia S, Fernando C: Learning to generate genotypes with neural networks. Evolutionary Computation. 2015. Reference Source\n\nMaynard Smith J, Szathmáry E: The major transitions in evolution. Oxford: Freeman & Co. 1995. Reference Source\n\nReed A, Riley J, Carraway R, et al.: Cortical map plasticity improves learning but is not necessary for improved performance. Neuron. 2011; 70(1): 121–131. PubMed Abstract | Publisher Full Text\n\nBaluja S, Caruana R: Removing the genetics from the standard genetic algorithm. Morgan Kaufmann Publishers; 1995. 38–46. Publisher Full Text\n\nFriston K, Buzsáki G: The Functional Anatomy of Time: What and When in the Brain. Trends Cogn Sci. 2016; 20(7): 500–511. PubMed Abstract | Publisher Full Text\n\nZachar I: IstvanZachar/Neurodynamics: Publication release. 2016. Data Source"
}
|
[
{
"id": "16922",
"date": "11 Oct 2016",
"name": "Karl Friston",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis is a thought provoking simulation study of Darwinian neurodynamics. It uses populations of attractor networks to illustrate the distinction between purely selectionist and evolutionary optimisation. This demonstration rests upon the dynamical instability of the neuronal networks considered – and the explicit introduction of variation or mutations in graduating from a selectionist to an evolutionary scheme. The paper is rather dense and I do not pretend to follow all the subtleties and nuances; however, the basic ideas are compelling and are described with sufficient clarity and detail for the interested reader to understand. There are a few points of clarification that I think you could attend to. In addition, there are some minor grammatical improvements you could consider.\n\nMajor points\nI think you need to overview your simulations so that the reader knows where you are going. I would recommend something like:\n“We will present a series of simulations graduating from purely selectionist schemes to evolutionary schemes in the face of a changing environment. In the first set of simulations we preclude variation in transmission over generations to examine the sufficiency of dynamical instabilities in supporting a selective process. This involves taking the outputs of one neural network and using them (after selection) to train another network. In the second set of simulations, we consider evolution proper and the transmission of patterns from generation to generation. 
Here, the patterns that are transmitted are subject to mild mutations (and selection) to illustrate the efficiency with which optimal (high adaptive fitness) patterns emerge.”\n\nI think you need to explain simply what is being optimised in your simulations. I would suggest something like:\n\n“In what follows, we will treat a pattern of activations over binary neurons as the unit of selection. In the general case, the adaptive fitness of this pattern may be some complicated function that is contextualised by the current inputs and may or may not be a function of the history of inputs and outputs. To keep things simple, we will just consider the fitness of a pattern in terms of its hamming distance to some target pattern. This means, we are effectively using selectionist and evolutionary schemes to optimise the connections (and ensuing dynamics) to recover a target pattern.”\n\nWhen talking about the utility of dynamical instability in providing a basis for selection, you might want to refer to the work of Ivan Tyukin and colleagues1,2. These authors have studied chaotic systems in the context optimisation – and their neuronal counterparts.\n\nAt the end of your discussion, I think you can usefully pursue the Bayesian brain hypothesis. I would suggest something like:\n“In fact, there may be a deep connection between the selectionist dynamics illustrated in this paper and the Bayesian brain. This follows from the fact that the Bayesian brain can use Bayesian model selection to identify its most plausible hypotheses about the world. In this sense, the selective mechanisms we have demonstrated become Bayesian model selection, if we use marginal likelihood or variational free energy as adaptive fitness. See for example Sella and Hirsh, 20053 and Friston, 20134. 
Crucially, the evolutionary role of mutations and variations provides the extra ingredient required for structure learning; namely the elaboration of a model or hypothesis space.\"\nMinor points\n\nPage 3:\nReplace \"Bayesian update\" with \"Bayesian updates\". You might want to add a footnote about reentry and neural Darwinism when you say that “this fast scale dynamics is missing from Edelmanian neural Darwinism.” I suspect that Edelman considered variation an important aspect of neural Darwinism and that this was mediated by reentrant dynamics that shows the dynamical and structural instabilities that you refer to.\n\nPage 5 and throughout:\nReplace \"was trained only to\" with \"was presented only to\".\n\nPage 5:\nReplace \"this allows finding the global\" with \"this allows the system to find\". Replace \"system above the\", with \"system above and beyond the\". I would say “… can quickly converge to the optimum – providing the output of each network is delivered to the appropriate network that successively converges on the global optimum.”.\nPage 6:\nReplace \"at start\" with \"at the start\". I would remove the simulations based upon the simpler model (i.e. the thin lines in Figure 3). These simulations and their description do not add much to the text – or any insight. It would be less distracting if these simulations and their discussion were removed.\nPage 8 and throughout:\nReplace \"provoking them\" with \"perturbing them\". Replace \"performed of\" with \"performed in\".\n\nPage 8:\nIt would be useful to mention the (Stuart Kauffman) notion of second order selection and selection for selectability (or evolvibility). In other words, you should discuss the optimisation of the mutation rates in relation to the volatility of the environment.\n\nPage 9:\nReplace \"solves the problem\" with \"solving the problem\".\n\nPage 10:\nReplace \"networks in the given\" with \"networks under the given\". 
Later replace \"attractor network\" with \"attractors networks\". Later replace \"works with diluted\" with \"work with diluted\". Finally, replace \"explained in turn\" with \"explained next\".\n\nI hope that these comments help should any revision be required.",
"responses": [
{
"c_id": "2802",
"date": "29 Jun 2017",
"name": "István Zachar",
"role": "Author Response",
"response": "We are thankful to Karl Friston for his invaluable comments and suggestions. Please also see our answers to the other Reviewers for some relevant answers.\n\nAbout reentry and neural Darwinism: We previously discussed the limitations of Edelman’s so-called “neural Darwinism” [1] in the light of standard evolutionary dynamics [2]. Here we consider two important aspects, the roles of variation and selection and of reentry [1,3], and how these relate to our model. In general, there are no units of evolution in the Edelman model, only units of selection. Edelman considered three successive phases of brain development and function: (i) developmental variation and selection (supported by production and elimination of neurons and their connections), (ii) experiential selection (resting on changes in the strength of a population of synapses) and (iii) reentry (where some connections between two or more maps become stronger due to changes in synaptic weights). Note that beyond stage (i) there is no real novelty emerging. The situation is more like that in combinatorial chemistry, where functional molecules are selected from a large pool of generated variants. Crucially, there is no replication with variation in this picture. Reentry has two different functions in Edelman’s picture: the linking of different maps and a critical contribution to consciousness.\n\nWe further direct Friston’s attention to the answer we have provided for László Acsády about the realization of reentry in the cortex-basal ganglia-thalamus-cortex loop.\n\nPoints 1, 2. We have opted to borrow your expressions of Points 1 and 2, which perfectly summarize our original intention, and included them within the Methods section to introduce our selectionist and evolutionary experiments.\n\nPoint 3. Chaotic itinerancy is very different from our model. Our search uses the overlaps between attractors in different networks and is guided by evaluation of interim findings. 
Darwinian neurodynamics harnesses neuronal parallelism for search more profoundly than chaotic itinerancy.\n\nPoint 4. We have added an explanation highlighting the connection between selectionist dynamics and the Bayesian brain. We additionally cite Iwasa, who pioneered the concept, as well as Barton & de Vladar, who refined it (and it would be odd not to cite some of the work of de Vladar, a co-author of this paper, who merged and developed further the ideas of Iwasa and Sella & Hirsh).\n\nConsidering Figure 3 and the suggestion to remove the simpler model, we decided not to modify this section. Actually, thin lines represent the behaviour of the complex attractor-network model introduced in the paper and thick lines represent the simpler model that superficially approximates attractor dynamics without actually using networks. The whole point of this image is the comparison between our proof-of-principle attractor network model and a baseline, abstracted model.\n\nCertainly, Stuart Kauffman’s work is an important aspect that is present in our studies. We added some explanations in this direction. However, note that the notion of evolvability is not Kauffman’s (although he certainly did good work on it and on its popularisation), but is attributed, under that name, to Dawkins, and goes back to Riedl (1975), who did not use the term.\n\nMost of the minor points were corrected, except in a few cases where they went explicitly against our intended meaning. For example, we would rather stick with the word \"provocation\", as \"perturbation\" has a different meaning from what we intended here.\n\nReferences:\n\n1. Edelman GM. Neural Darwinism. The theory of neuronal group selection. New York: Basic Books; 1987.\n\n2. Fernando C, Szathmáry E, Husbands P. Selectionist and evolutionary approaches to brain function: a critical appraisal. Frontiers in Computational Neuroscience. 2012;6(24). http://dx.doi.org/10.3389/fncom.2012.00024.\n\n3. Edelman GM. 
Neural Darwinism: Selection and reentrant signaling in higher brain function. Neuron. 1993;10(2):115–125. http://www.sciencedirect.com/science/article/pii/089662739390304A."
}
]
},
{
"id": "16965",
"date": "25 Oct 2016",
"name": "Stuart J. Edelstein",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis article continues the important effort of the authors and their colleagues to advance a line of research capable of fleshing out the somewhat vague concept subsumed under the heading Neural Darwinism. While the general idea that subtle functions of the brain involve Darwinian features has persisted over the last 40 years (beginning with Changeux et al.1) an up-to-date synthesis has been lacking and the authors are to be congratulated for their sustained effort to achieve this goal. Part of the difficulty encountered is related to ambiguities concerning exactly what features of neo-Darwinism so successfully applied to population genetics and molecular biology, can be imported into concepts of neuroscience. In the present effort, many terms from the classical Darwinian literature are invoked, but how they are translated from population genetics to computational neuroscience is not always clear. In this respect a glossary could be very helpful, which would include the classical definition and the neural application, as applied for example to terms such as: replication, hereditary variation, breeding, fitness, mutation, deme, generation, Lamarckian, etc. In particular, the hallmark of fitness in evolution is expansion of population size, but whether and how this criterion is applicable in neuroscience should be addressed.\nA second challenge is to define the level at which Darwinian principles are applied. 
More explicit attention could be given to dendritic spines (an obvious target for LTP, STDP), axonal pruning (see for example Kolodkin and Tessier-Lavigne2), or neural circuitry. Cortical columns are invoked in passing, but should be evaluated in more detail. Each of these levels is stochastic in some respects, but to what extent does the variation implicit in neural Darwinism require additional mechanisms? Concerning the results presented for the various simulations performed, several valuable points were raised in the comments by Karl Friston. In addition, it would be helpful to make clearer connections with respect to the putative neuronal structures simulated. For comparison, the recent success of deep learning algorithms should be considered, insofar as they do or do not mimic Darwinian mechanisms of the brain. Moreover, since a global optimum is set for the simulations, what criteria would establish optimality in a natural Darwinian system within the brain? Finally, what is the relationship of the high number of “generations” (20,000 in Figure 4) to putative neuronal processes?\nOverall, describing neuronal activity using Darwinian terminology is a double-edged sword. On the one hand, a thorough application of Darwinian principles to brain science involves many one-to-one correspondences that must be clearly articulated. On the other hand, since the words are familiar, their usage carries immediate associations that can obscure understanding and become ambiguous jargon. It would be hard to overestimate the difficulty of finding the right balance, especially for scholars already immersed in the quest and possessing their own specific understanding of the terms employed. Therefore, navigating through these troubled waters requires extreme vigilance of language, and an additional effort by the authors in preparing their final version would be extremely helpful.",
"responses": [
{
"c_id": "2801",
"date": "29 Jun 2017",
"name": "István Zachar",
"role": "Author Response",
"response": "We are extremely thankful for Stuart J. Edelstein for his comments and suggestions. Concerning the terms we used here to relate objects and processes from neuroscience and classical Darwinism, we have included a glossary that summarizes the important evolutionary and neurobiological concepts that might be useful to clarify across disciplines. We are certain that this is not an absolute and perfect mapping of concepts, but should be enough to support our understanding of how the brain might work under Darwinian principles. We further direct our Reviewer’s attention to the answers we have provided for László Acsády on the grounding of neural Darwinism within the brain (as we understand it), especially our answers to points #1, #2 about the Discussion, as those also concern the questions raised here. We simply did not want to replicate our words unnecessarily."
}
]
},
{
"id": "17462",
"date": "25 Nov 2016",
"name": "László Acsády",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis is a truly enlightening and thought provoking paper which utilizes Darwinian logic to explain neuronal network dynamics. The manuscript is a nice example of how approaches in a different discipline (population genetics) may yield fresh insights into age old problems of neuroscience. I have the following notes, suggestions and questions to the authors.\nIntroduction:\nOne drawback of the paper is that the parallelism between genetic and neuronal Darwinism is not made clear right at the onset. We should be aware of what the authors mean by parent, offspring, multiplication, mutation, selection in case of neuronal activity right from the beginning in order to follow the logic of the paper.\nResults:\nI miss the clear demonstration of the “Selection” experiments showing that it is not able to find the global optimum (e.g as an additional panel to Fig 2). I also think we need some form of quantification here, how many simulations were run, how significant the result was…etc.etc\n\nI also miss the formal demonstration of the effect of mutation rate on the speed of evolution. Since this is a crucial concept, I would dedicate a separate figure for that.\n\nI would like to see, how implementing palimpsest memory affects the performance of the model and how this depends on whether the system use dense or sparse coding. 
Presently this is only briefly mentioned in the Method section, but since this may have important implications it may be good to provide some more details. Intuitively, more sparse coding may tolerate the lack of palimpsest memory.\n\nNeuronal noise considered as “mutation” in the model is enlightening. Still, I feel there are some basic differences here. During evolution, genetic mutations can get stabilized when they reach the global optimum, whereas neuronal noise is inherent to the system and does not necessarily change with evolution. Can it be demonstrated, or is there any evidence, that neuronal noise decreases as the system approaches the optimum state?\n\nWhat were the connection weights of the recurrent (i.e. new input) patterns relative to the weights of the local, autoassociative connections? Can the model perform better/worse by changing the relative weights of these connections? Note that many original autoassociative models worked with a “detonator” synapse as an input and weaker local connections1.\nDiscussion:\nI would not necessarily constrain the model to implicit memories. I think “implicit” here refers to the unconscious effort to recall the best target pattern, not to the type of memory item to be recalled. The term “implicit memory” evokes mainly procedural memories, and indeed the authors place the model in the cortex-basal ganglia loop. I don’t see why recall of an episodic memory trace by the CA3 recurrent network cannot follow the same evolutionary logic even though hippocampal memories are not considered as “implicit”.\n\nIn the cortex-basal ganglia-thalamus loop, it is not really known how exactly the cortical output will affect the return signal from the thalamus but, in any case, the signal goes through significant dimension reduction2 and the final output of the basal ganglia may also affect thalamic firing in different ways3 (i.e. it is “mutated” a lot). 
The question is how the properties of the model network change if the final output is not directly fed back to the system but undergoes various (but consistent) signal transformations.\nMinors:\nI miss the definition of “best pattern”. Can this term be equated with “pattern with highest fitness”?\n\nI would also support a short glossary with the neurobiological relevance of the crucial ecological concepts (fitness, generation, landscape, mutation).\n\nI guess the subheadings in the Results section are not appropriate. “Selection” and “Evolution” are the two main subheadings, and all the others are subsections of the “Evolution” section.",
"responses": [
{
"c_id": "2800",
"date": "29 Jun 2017",
"name": "István Zachar",
"role": "Author Response",
"response": "We are grateful for the review of László Acsády, and his suggestions to improve our paper. We refer to the points raised by our Reviewer by a hashmark (#). Please also see our answers to the other Reviewers for some relevant answers. Introduction #1. There is limited similarity only between neural Darwinism and the model and thoughts presented in this paper. In both cases “neuronal groups” are important in terms function, although the term “neuronal assembly” is more traditional and functionally relevant. Synapses are important inasmuch as they contribute to the functionality of the assembly. There is a form of reentry in both models but they play very different roles: in our case it means that the same or variant information is fed back, after evaluation, to the same population of assemblies and get stored in multiple copies. It is this multiplication component that supports “neuronal replication” of candidate solutions. A note is in order about the search mechanism in the space of candidate solutions. In the classic Deheane-Changeux model for prefrontal cortex functionality in terms of action production and selection on the example of the Wisconsin card sorting test [1], search for new candidate solutions is random and their evaluation is strictly serial. A non-random but still serial search process, for which there is neural evidence, is chaotic itinerancy [2-4]. Except for the learning phase our attractors are locally stable, classical ones. In contrast, chaotic itinerancy rests on so-called “attractor ruins” from which there is always the possibility of dynamical escape in certain directions via transient chaotic bridges. Since these attractor ruins are deterministic entities, escape directions are not random. The pertinent question is whether these directions are random relative to those leading to better candidate solutions. 
One can imagine that synaptic plasticity combined with reinforcement learning might strengthen certain escape routes with experience. An explanatory table and an accompanying figure were added to the text to guide the reader among the terms we use and to explain the relationship we imply between genetic and neuronal Darwinism. Results #1. In the case of the Selection experiments, the system can always find the global optimum, the best special pattern (it was trained to network #20). The overlapping basins of the series of special patterns among networks guarantee that there is a trajectory from the basin of any one of the special patterns to the best special pattern. All simulations were started from noisy copies of a random input pattern similar (in Hamming distance) to the worst special pattern. We ran 1000 independent simulations; the average number and variance of the rounds needed to converge to the optimum were 3.5 ± 0.64. Results #2. A new figure and a few sentences were added to demonstrate the effect of mutation rate on the speed of evolution. Results #3. Palimpsest memory is essential to our model. Because of the repeated retraining of the networks with the selected patterns, without palimpsest memory all networks – even if the patterns are sparse – would sooner or later reach catastrophic forgetting. Consequently, palimpsest memory is necessary for evolutionary processes. We used dense coding for reasons of computational time. Results #4. In the present state of the model, the two types of noise (ml and mT) are considered to be constant. Using decreasing ml with increasing fitness does not change the qualitative outcome of the evolutionary process (results not shown). Changes in mT do not affect the convergence too much (see the answer to question #2 above). Results #5. To be in line with Storkey’s original model, we used similar weights for the recurrent patterns and the autoassociative connections. 
In the present state of the model, we have not tested the palimpsest behavior and the retrieval ability with other than 1:1 relative weights. Discussion #1, #2. We have presented our model as potentially realisable by the cortex-basal ganglia-thalamus-cortex loop. Actually, noting the massively parallel processing in the brain in general, added to the evaluative function of the mentioned loop, we would be surprised if no true evolutionary behaviour could be found during complicated problem-solving tasks. But we agree that it is not necessary to restrict the proposed cognitive architecture to this concrete loop: recall of an episodic memory trace by the CA3 recurrent network [5] can in principle follow the same evolutionary logic, even though hippocampal memories are not considered as “implicit”. A deeper question is how “informational reentry” in the Darwinian sense can be realized by the cortex-basal ganglia-thalamus-cortex loop. In this loop it is not really known how exactly the cortical output will affect the return signal from the thalamus but, in any case, the signal goes through significant dimension reduction [6] and the final output of the basal ganglia may also affect thalamic firing in different ways ([7], i.e. it is “mutated” a lot). The question is how the properties of the model network change if the final output is not directly fed back to the system but undergoes various (but consistent) signal transformations. We think that the key for our kind of mechanism to work is exactly the consistency of the signal transformation. A variant mechanism might look like this: the thalamus pointedly activates those cortical networks where the best (few) activity pattern(s) came from (as before, the number of candidate patterns “kept alive” depends on the stringency of selection in the striatum). These could in turn spread in the cortex through the horizontal connections among local attractors. 
Passing of information from one attractor network to another is a common element in many relevant dynamical models [8,9]. Minor points were all corrected according to the Reviewer’s suggestions. References: 1 Dehaene S, Changeux JP. The Wisconsin Card Sorting Test: Theoretical Analysis and Modeling in a Neuronal Network. Cerebral Cortex. 1991;1(1):62. http://dx.doi.org/10.1093/cercor/1.1.62. 2 Kaneko K, Tsuda I. Chaotic itinerancy. Chaos: An Interdisciplinary Journal of Nonlinear Science. 2003;13(3):926–936. http://dx.doi.org/10.1063/1.1607783. 3 van Leeuwen C. Chaos breeds autonomy: connectionist design between bias and baby-sitting. Cognitive Processing. 2008;9(2):83–92. http://dx.doi.org/10.1007/s10339-007-0193-8. 4 Tyukin I, Tyukina T, van Leeuwen C. Invariant template matching in systems with spatiotemporal coding: A matter of instability. Neural Networks. 2009;22(4):425–449. http://www.sciencedirect.com/science/article/pii/S089360800900015X. 5 Treves A, Rolls ET. Computational constraints suggest the need for two distinct input systems to the hippocampal CA3 network. Hippocampus. 1992;2(2):189–199. http://dx.doi.org/10.1002/hipo.450020209. 6 Bar-Gad I, Bergman H. Stepping out of the box: information processing in the neural networks of the basal ganglia. Current Opinion in Neurobiology. 2001;11(6):689–695. http://www.sciencedirect.com/science/article/pii/S0959438801002707. 7 Goldberg JH, Farries MA, Fee MS. Basal ganglia output to the thalamus: still a paradox. Trends in Neurosciences. 2013;36(12):695–705. http://www.sciencedirect.com/science/article/pii/S0166223613001574. 8 Rolls ET, Treves A. Neural networks and brain function. Oxford, New York: Oxford University Press; 1998. http://opac.inria.fr/record=b1094909. 9 Rolls ET. Attractor networks. Wiley Interdisciplinary Reviews: Cognitive Science. 2010;1(1):119–134. http://dx.doi.org/10.1002/wcs.1."
}
]
}
] | 1
|
https://f1000research.com/articles/5-2416
|
https://f1000research.com/articles/5-2271/v1
|
08 Sep 16
|
{
"type": "Method Article",
"title": "Method-centered digital communities on protocols.io for fast-paced scientific innovation",
"authors": [
"Lori Kindler",
"Alexei Stoliartchouk",
"Leonid Teytelman",
"Bonnie L. Hurwitz",
"Lori Kindler",
"Alexei Stoliartchouk",
"Leonid Teytelman"
],
"abstract": "The Internet has enabled online social interaction for scientists that previously happened only in physical meetings and conferences. Yet despite these innovations in communication, dissemination of methods is often relegated to the slow process of academic publishing. Further, these methods remain static, with subsequent advances published elsewhere and unlinked. For communities undergoing fast-paced innovation, researchers need new capabilities to share, obtain feedback, and publish methods at the forefront of scientific development. For example, a renaissance in virology is now underway given the new metagenomic methods to sequence viral DNA directly from an environment. Metagenomics makes it possible to “see” natural viral communities that could not be previously studied through culturing methods. Yet, the knowledge of specialized techniques for the production and analysis of viral metagenomes remains in a subset of labs. This problem is common to any community using and developing emerging technologies and techniques. We developed new capabilities to create virtual communities in protocols.io, an open access platform, for disseminating protocols and knowledge at the forefront of scientific development. To demonstrate these capabilities, we present a virology community forum called VERVENet. These new features allow virology researchers to share protocols and their annotations and optimizations, connect with the broader virtual community to share knowledge, job postings, conference announcements through a common online forum, and discover the current literature through personalized recommendations to promote discussion of cutting edge research. Virtual communities in protocols.io enhance a researcher’s ability to: discuss and share protocols, connect with fellow community members, and learn about new and innovative research in the field. The web-based software for developing virtual communities is free to use on protocols.io. 
Data are available through public APIs at protocols.io.",
"keywords": [
"forum",
"virtual community",
"protocols",
"metagenomics",
"bioinformatics",
"virus",
"phage",
"bacteriophage"
],
"content": "Introduction\n\nThe Internet has enabled online social interaction for scientists that previously happened only in physical meetings and conferences. Twitter, Facebook, and ResearchGate1–3 provide valuable online forums that many researchers use to share knowledge. At the same time, academic publishing remains slow and particularly inefficient for communicating methodology. Protocols are often relegated to supplementary information, if shared at all. There is no good mechanism for easily discussing, troubleshooting, and improving published or unpublished techniques.\n\nThis need is even more apparent in emerging fields such as viral ecology where laboratory, field, and bioinformatics methods are being actively developed4. In particular, new metagenomic techniques to sequence viral DNA directly from environmental samples has led to rapid advances in both molecular and bioinformatic protocols5. These protocols, however, are highly specialized and generally only used in a few highly proficient labs because: (i) viral metagenomes (viromes) are difficult to produce due to low quantities of DNA and refined isolation and purification methods, (ii) the vast majority of viral sequences are unknown (usually >90%6) complicating bioinformatics analyses, and (iii) newly emerging comparative and functional metagenomic analyses exist but require on-going community refinement and development.\n\nGiven the experimental nature of these methods, the virology community has expressed a need to foster discussions about these protocols towards improved methodologies and increasing connectivity and collaboration among researchers7. 
The challenge is to develop a method-centered collaborative platform that recapitulates the functionality of a scientific meeting - a digital community for connecting with fellow researchers to share and discover state-of-the-art knowledge.\n\nHere we describe new capabilities in protocols.io (http://www.protocols.io), an open access platform, to create virtual communities to disseminate protocols and knowledge at the forefront of scientific development. To demonstrate these capabilities, we describe a viral ecology community forum called VERVENet (https://www.protocols.io/groups/verve-net) that strives to increase connectivity and knowledge dissemination in viral ecology research at all levels, from undergraduates to accomplished viral ecologists. These new community features enhance a researcher’s ability to discuss and share protocols, connect with fellow community members, and learn about new and innovative research in the field. The web-based software for developing virtual communities is free for use on protocols.io, and is further described here.\n\n\nProtocols.io: a platform to enable methods discussion and dissemination\n\nProtocols.io is a free service for industry and academic scientists to share or maintain private protocols for research8. The driving force behind software development is to provide a mechanism for scientists to share improvements and corrections to protocols, so that others are not continuously re-discovering knowledge that scientists have not had the time or wherewithal to publish. Protocols.io provides a free, up-to-date, crowd-sourced protocol repository called protocols.io (http://www.protocols.io) for the life science community. This software is available as a web-based platform or smart phone App9,10 to enable mobile solutions for research and bench-work. Per best practices in mobile computing, these Apps offer extensive options and control of push notifications. 
In fall 2014, protocols.io offered a well-developed platform for users to share molecular methods; however, no capabilities were in place to share methods among groups or to share bioinformatics methods. To this end, the viral ecology community teamed up with protocols.io to create new group capabilities, develop bioinformatics protocols, and enhance discussion forums for news, methods, and literature.\n\nIntroducing VERVENet: the Viral Ecology Research and Virtual Exchange Network: The Viral Ecology Research and Virtual Exchange Network (VERVENet) is a collaboration between the University of Arizona and protocols.io to deliver an online forum for the virology community. To enable this forum, new group functionality was built into protocols.io to promote scientific communication and collaboration. Specifically, group features were developed on top of existing capabilities to share molecular methods in order to (i) share protocols and their annotations and optimizations, (ii) fuel connectivity among viral ecology researchers for sharing data sets, knowledge, job postings, and conference announcements through a common online forum called VERVENet, and (iii) facilitate literature discovery through personalized recommendations to promote discussion on cutting-edge viral ecology research. Through developing these interconnected resources in protocols.io for virtual communities, we developed a “go-to” site for viral ecology research11. Moreover, these tools are broadly useful to any community or individual lab for promoting scientific inquiry, reproduction of results, dissemination of protocols, and re-use. Specifically, new forums can be created in a matter of minutes to enable connectivity among groups of any size, with tools described here under use cases.\n\n\nMethods\n\nCreating a user profile in protocols.io: Users can view protocols and all public content anonymously, but to interact with the platform, registration is necessary. 
Registration is quick, as only an e-mail address and password are required to create an account; however, users are encouraged to create profiles containing their name, website, affiliation, and research interests. Others can search and find a user based on name or keywords. Moreover, user profiles are attached to any material on protocols.io that the user posts publicly. User profiles also contain a field for ORCID12, so that researchers can tie their profile back to a common identifier and highlight their work in the field. Researchers can also include a biography that describes how they got into the field and what intrigues them. Thus, profiles allow users to add in their own content, rather than simply browse existing content.\n\nAdding protocols in protocols.io: After registration, new protocols can be entered (Figure 1). By default, all protocols are private and can be shared with individual collaborators or any of the groups. The protocols are structured with tabs for the “steps”, “description”, “guidelines”, and “comments”. When entering the steps, a list of components that can be added to the steps is located on the far right and allows a clear detailing of wetlab or computational portions of the method. Related steps of the protocol can also be easily grouped together into “sections” such as “preparation”, “DNA extraction”, “analysis”, etc. Steps may be entered one by one by typing into the text box or by pasting steps from another file, facilitating import of existing protocols. For each step, annotations can be added to make notes. Once complete, the protocol can be “run” in a step-by-step format.\n\nProtocols are entered by providing a broad description, information about authors, any prior materials or background required, and detailed step-by-step methods to implement the protocol. 
Protocols can remain private to an individual or group, or be released to the public.\n\nOnce a protocol has been created, there are several options for sharing it with collaborators or a group. To make the protocol publicly viewable, one will need to click the “publish” button. A protocol can be reassigned to another individual with a protocols.io account. For ongoing development and changes to adding and using protocols, see tutorials (http://www.protocols.io/help) in protocols.io13.\n\nDeveloping groups in protocols.io: To create a group, one must have an account and be logged in. For example, here we describe the VERVE Net group; however, it is possible to create any group. To create a group, users can click on their personal icon in the upper right-hand corner and select “+ new group.” They will be prompted to enter a group name, image, description of the group, research interests, external website address, physical location of the group, and an affiliation. The user will also decide if the group is open to anyone, by invitation only, or open to membership requests. In addition, the user can choose if the group is visible to others or private. Users are able to invite members into their group and control the privileges of their members. Moreover, as the owner of a group, the user is able to invite other subgroups; in the VERVE Net example, individual labs are subgroups.\n\n\nUse case: VERVE Net: Virus Ecology Research and Virtual Exchange Network\n\nMolecular and bioinformatics protocols: Often, detailed “tricks of the trade” associated with lab, field, and bioinformatics protocols are not well-described in publications, and at best are stashed in supplemental materials. Practical information associated with running these protocols under varied conditions cannot be curated, documented, or discussed among students, postdocs, technicians, and faculty working in virology. 
Moreover, knowledge on when to use a particular version of a given protocol is not easily captured. Protocols.io provides a flexible mechanism wherein protocols can be documented in a stepwise fashion to easily pivot between molecular and bioinformatics methodologies, link to useful websites or code in GitHub14, or reference manuals or original source materials for protocols, as exemplified in the VERVENet forum.\n\nThe user entering the protocol may not necessarily be the author of the original method. However, by providing links to the primary work, users can attribute credit to the original author while at the same time adding their own updates to the method either while they enter it or at a later time. Further, other users have the capability to add notes and warnings to existing protocols in protocols.io. This functionality includes a mechanism to email the protocol author for protocol troubleshooting. Corrections and updates made by the protocol authors and users automatically trigger notifications e-mailed to researchers who use that protocol. Lastly, users can “fork” or copy existing protocols for further refinement or alternate uses while still maintaining links back to the original for credit and reference. As such, the protocol is a living document for the community to reuse and continually refine.\n\nFor publication, authors have the option to enter detailed methods into protocols.io, issue a digital object identifier (DOI15), and link to the protocols.io record from the Methods section. This practice is now being encouraged in journal submissions and by funding agencies.\n\nProtocol collections: Because protocols are often used in conjunction with other protocols, protocols.io has the capability to link protocols into user-defined workflows. This is particularly important for publications that may use a collection of varied protocols (field, lab, and bioinformatics) that are derived from many sources (protocols from the user or other users). 
In providing a collection of protocols associated with a publication, the authors enable their work to be replicated, easy to follow, and transparent to other members of the community in a way that can be referenced and cited. For example, a collection of protocols derived from a recent publication on the double-stranded DNA human skin virome is available in VERVENet16,17. Thus, collections provide a mechanism for furthering open-science efforts.\n\nProtocol collections also provide a mechanism to “learn by example” for early career scientists or those branching into a new area of scientific inquiry. In particular, detailed protocols can be associated with a toolkit or workshop, where multimedia options such as slides, video, or links to virtual machines with example datasets and code can be included18,19. This is particularly important for bioinformatics protocols that often include multiple programs and steps in an analysis for a given publication. Further, individual tools may have a collection of protocols that describe specific use-cases, example datasets, and varied options that tool developers may wish to convey to their users.\n\nGroups and sharing: Individual members can form groups, where the owner has the ability to choose the level of accessibility for fellow members. The groups can share literature recommendations, discussions, protocols, news, events, and job opportunities (Figure 2). Subgroups can form under the umbrella of a larger group with a common interest. This subgroup/supergroup relationship allows smaller group activities to be shared with a larger virtual community with common interests. In the case of VERVENet, this supergroup links the broader research in virology with the subgroups of individual labs and more specific research interests such as plant viruses.\n\nGroups in protocols.io display information about the group objectives, members, subgroups, the group library and literature recommendations, group discussions, news, jobs, and events. 
Groups have the capacity to control access, from making groups and content public and allowing anyone to join, to restricted content and invitation-only membership. VERVENet is an example of a public forum for virology.\n\nLiterature recommendations: Each of the groups includes a literature recommendation system. This algorithm provides personalized publication recommendations based on a library from a user or group. This algorithm is used to develop “libraries” for viral ecology user groups that will continually recommend new publications based on growing reading lists from individual users that are part of the group. This functionality allows virologists to make their reading lists public, thereby helping scientists who are new to a topic area find relevant literature. The libraries from “sub-groups” also fuel the shared public reading list within the VERVENet group, thereby creating enhanced fluidity and cross-posted content between the groups.\n\nLive online discussion forum: Each of the groups in protocols.io contains a live online discussion. Discussions can be generated directly on the discussion tab, or are cross-posted from discussions on specific protocols, news, or literature. Each of the discussions can reference outside websites, manuals, or online resources. This discussion forum enables users to discuss tips and tricks for specific protocols, review reagents linked to particular protocols, and reference outside resources that were not included in the original protocol.\n\nProtocols.io also includes “journal-club” capabilities to enable online discussions of published research by researchers and authors. Other unique features in protocols.io include a career advice forum with a panel of mentors20 and a “behind the article” essay forum21. 
These communication forums allow researchers to share stories about how papers, protocols, or research efforts came about, stories that are both interesting to the community and informative for early-career scientists.\n\nPlatform infrastructure and interoperability: Computers, tablets, and smart phones are becoming fundamental tools for scientists today. Furthermore, social networking and shared cyberinfrastructures are offering powerful new mechanisms to connect communities and science from across the world. Protocols.io leverages these powerful new tools and software capabilities to provide an online forum for viral ecology research to connect and share knowledge and resources. All components of protocols.io and the VERVENet forum are mobile-friendly and interoperable for use on diverse devices in the lab, on the desktop, or on the go.\n\nContent and adoption: The VERVENet group currently contains 359 live protocols, 121 news articles, and 50 job opportunities. There is an event calendar that contains workshops and conferences specific to virology through the fall of 2016. We have 183 members and 22 subgroups. Examples of subgroups include: the Plant Virus Ecology Network, which originally formed in 200722, the Chlorovirus Group, ECOGEO23, and 18 individual labs. The International Society for Viruses of Microorganisms has listed VERVE Net on its website24 as a resource.\n\n\nDiscussion and conclusions\n\nThe primary goal of the new group functionality in protocols.io is to provide a robust web-application for sharing up-to-date protocols, literature, and community features (news, jobs, discussions). This work is exemplified in VERVE Net, a virtual community forum for virology. 
Fundamental to this goal is the ability for researchers to establish groups based on similar interests and share knowledge, without a priori knowledge of key members in a given field.\n\nWe have designed an infrastructure that has multiple entry points for establishing relationships among users, ranging from self-proclaimed groups or areas of interest, to options to join groups maintained by others in an area of interest to the user, fueled by related protocols or reading lists. Moreover, news feeds about funding opportunities, job postings, or collaborative research opportunities can be fine-tuned according to interest. These connections will allow the forum to evolve naturally given rapidly developing trends and new protocols. Protocols.io is open access and is both free to read and free to publish. The revenue and sustainability model is based on the sale of data services to reagent vendors (most popular protocols, protocol improvements, and reagent-protocol links).\n\nProtocols.io is a central resource to connect, collaborate, share and innovate within virtual communities. The VERVENet forum demonstrates how this new group functionality allows researchers to promote scientific inquiry, reproduction of results, and dissemination and optimization of both molecular and bioinformatics protocols, as a virtual community.\n\n\nData and software availability\n\nProtocols.io and the VERVENet community forum are committed to open access for data content and interoperability. To that end, the content in protocols.io is available through an Application Programming Interface (API) for advanced data mining, and no registration is required to view protocols, comments or annotations. Details on the API and its use are documented on the protocols.io website25. Users can also access digitally archived data through the Center for Open Science and CLOCKSS.",
"appendix": "Author contributions\n\n\n\nLK wrote the manuscript, tested the platform, added content and provided feedback on features and functionality. AS developed the platform, tested and designed features and functionality. LT and BLH designed VERVENet, tested the system, provided feedback on features and functionality, and wrote the manuscript. All authors read and approved the final manuscript.\n\n\nCompeting interests\n\n\n\nLeonid Teytelman and Alexei Stoliartchouk are employees of protocols.io and both own equity in the company.\n\n\nGrant information\n\nThis work was funded by a grant to B.L.H. and L.T. from the Gordon and Betty Moore Foundation (GBMF4733).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nWe would like to thank Celina Gomez and James Thornton for adding “seed” protocols into VERVENet, and Vladimir Frolov for development of the interface and group functionality.\n\n\nReferences\n\nEllison NB, Steinfield C, Lampe C: The benefits of Facebook “friends:” Social capital and college students’ use of online social network sites. J Comput Mediat Commun. 2007; 12(4): 1143–68. Publisher Full Text\n\nKwak H, Lee C, Park H, et al.: What is Twitter, a social network or a news media? In: Proceedings of the 19th international conference on World wide web. ACM, 2010; 591–600. Publisher Full Text\n\nThelwall M, Kousha K: ResearchGate: Disseminating, communicating, and measuring Scholarship? J Assn Inf Sci Tec. 2015; 66(5): 876–89. Publisher Full Text\n\nWeinbauer MG, Rowe JM, Wilhelm SW, et al.: Manual of Aquatic Viral Ecology. 2010. Publisher Full Text\n\nBrum JR, Sullivan MB: Rising to the challenge: accelerated pace of discovery transforms marine virology. Nat Rev Microbiol. 2015; 13(3): 147–59. 
PubMed Abstract | Publisher Full Text\n\nHurwitz BL, Sullivan MB: The Pacific Ocean virome (POV): a marine viral metagenomic dataset and associated protein clusters for quantitative viral ecology. PLoS One. 2013; 8(2): e57355. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAquatic Viruses. Timeline | Facebook. [cited 2016 Aug 9]. Reference Source\n\nTeytelman L, Stoliartchouk A: Protocols.io: Reducing the knowledge that perishes because we do not publish it. Inf Serv Use. 2015; 35(1–2): 109–15. Publisher Full Text\n\nZappyLab I: protocols.io on the App Store. App Store. 2016; [cited 2016 Mar 21]. Reference Source\n\nprotocols.io - Android Apps on Google Play. protocols.io. 2016; [cited 2016 Mar 21]. Reference Source\n\nVERVE Net - research group on protocols.io. protocols.io. 2016; [cited 2016 Mar 21]. Reference Source\n\nHaak LL, Fenner M, Paglione L, et al.: ORCID: a system to uniquely identify researchers. Learn Publ. 2012; 25(4): 259–64. Publisher Full Text\n\nprotocols.io - Life Sciences Protocol Repository. protocols.io. 2016; [cited 2016 Mar 21]. Reference Source\n\nDabbish L, Stuart C, Tsay J, et al.: Social Coding in GitHub: Transparency and Collaboration in an Open Software Repository. In: Proceedings of the ACM 2012 Conference on Computer Supported Cooperative Work. New York, NY, USA: ACM; 2012; 1277–86. (CSCW ’12). Publisher Full Text\n\nPaskin N: Digital object identifier (doi®) system. Encyclopedia of library and information sciences. 2008; 3: 1586–92. Publisher Full Text\n\nHannigan GD, Meisel JS, Tyldsley AS, et al.: The human skin double-stranded DNA virome: topographical and temporal diversity, genetic enrichment, and dynamic associations with the host microbiome. MBio. 2015; 6(5): e01578–15. PubMed Abstract | Publisher Full Text | Free Full Text\n\nThe Human Skin dsDNA Virome: Topographical and Temporal Diversity, Genetic Enrichment, and its Dynamic Associations with the Host Microbiome protocol by Geoffrey Hannigan on protocols.io. 
protocols.io. 2016; [cited 2016 Mar 21]. Reference Source\n\nQIIME:Moving Pictures of the human microbiome protocol by Bonnie Hurwitz on protocols.io. protocols.io. 2016; [cited 2016 Mar 21]. Reference Source\n\nCaporaso JG, Kuczynski J, Stombaugh J, et al.: QIIME allows analysis of high-throughput community sequencing data. Nat Methods. 2010; 7(5): 335–6. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAcademic Career Forum - research group on protocols.io. protocols.io. 2016; [cited 2016 Mar 21]. Reference Source\n\nAuthor essays on published articles. pubchase. 2016; [cited 2016 Mar 21]. Reference Source\n\nMalmstrom CM, Melcher U, Bosque-Pérez NA: The expanding field of plant virus ecology: historical foundations, knowledge gaps, and research directions. Virus Res. 2011; 159(2): 84–94. PubMed Abstract | Publisher Full Text\n\nECOGEO 2015 Workshop I Final Report | EarthCube. ECOGEO. 2016; [cited 2016 Mar 21]. Reference Source\n\nInternational Society for Viruses of Microorganisms. International Society for Viruses of Microorganisms. 2016; [cited 2016 Mar 21]. Reference Source\n\nprotocols.io for developers. protocols.io. 2016; [cited 2016 Mar 21]. Reference Source"
}
|
[
{
"id": "16177",
"date": "16 Sep 2016",
"name": "David D. Dunigan",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThe paper describes a relatively new on-line forum referred to as VERVENet that was developed with the private company protocols.io. VERVENet seeks to provide a virtual gathering place for viral ecologists where information on experimental protocols, literature, upcoming conferences and job listings are centralized and made available, openly. The paper describes most of these features and provides some examples to help encourage the reader to explore VERVE Net online.\n\nThere are some concerns that may need further clarification in this publication relating to 1) navigation, 2) types of data being supported, and 3) intellectual property rights.\n\nAt a practical level, VERVENet is still in its infancy and one would expect it to continue to develop in response to the community input. Thus, the concerns we have with navigating in VERVENet at this time will likely be corrected/adjusted with continued use and input by users. The authors should address how they intend to respond to the intended community when suggestions are proposed. Who has the “authority” to make the changes and how will those changes be made available? Given the potential (and presumed intended) for a highly dynamic forum, will there be an archive of information maintained?\n\nIt is not quite clear from the manuscript how navigation by particular topics works. 
Is it possible to find a specific protocol without having to scroll through all entries, as well as to find easily a collection of protocols related to a particular field, e.g., nucleic acid isolation and manipulation; proteomics; bioinformatics, etc? If these functions exist in the current version of VERVENet, it would help to describe this more fully.\n\nThe authors indicate that VERVENet was initiated, in part, due to “…a renaissance in virology..given the new metagenomic methods...” Although it is true that there is a large influx of metagenomic data in recent years, DNA sequence data is not the only type of data of interest to viral ecologists. The authors should discuss what types of data are being supported. For example, how are image data sets being handled, given the relatively large file size? How will VERVENet adapt as other types of ‘omics’ data become common within the viral ecology community? Clearly, DNA sequence is not the only data of interest, RNA sequence is equally important to virologists, so the authors may wish to indicate the range of data intended for VERVENet.\n\nThe authors have taken on the responsibility of creating an online forum with an emphasis on methods. These methods are created by individuals and generally made available to anyone whether they register with protocols.io or not, at varying levels depending on the authors’ decisions. It is not clear who holds the intellectual property rights at that point. The authors should be transparent about the rights of the protocols.io contributing authors, where the legal authority begins and ends, and how contributors can both share and protect their intellectual property should they choose. Another concern that should be addressed is what is the fate of VERVENet should protocols.io change ownership. Are there contingency plans in the company statutes that would ensure that groups like VERVENet be supported? What happens if they are not supported? Who has legal ownership of the information? 
Also, given that the current funding is through the Gordon and Betty Moore Foundation, what happens to the content of VERVENet should that funding be removed or reduced? These things need some clarification.\n\nThe title of the paper does not give a full reflection of the contents of the paper.\n\nThere are several statements in the manuscript where the authors tend to use emphatic language that is not easily supported. For example, the first sentence of the Abstract implies that social interactions of scientists ONLY occur at physical meetings and conferences. This statement is not only inaccurate, it undermines the credibility of the information that follows. The authors should re-read their paper and consider how some of their statements may be perceived. There are several examples.",
"responses": [
{
"c_id": "2728",
"date": "29 Jun 2017",
"name": "Bonnie Hurwitz",
"role": "Author Response",
"response": "Thank you for your time in reviewing our manuscript. Please see our comments on your suggested revisions below:\n\nComment 1: At a practical level, VERVENet is still in its infancy and one would expect it to continue to develop in response to the community input. Thus, the concerns we have with navigating in VERVENet at this time will likely be corrected/adjusted with continued use and input by users. The authors should address how they intend to respond to the intended community when suggestions are proposed. Who has the “authority” to make the changes and how will those changes be made available? Given the potential (and presumed intended) for a highly dynamic forum, will there be an archive of information maintained?\n\nResponse 1: In terms of the software, protocols.io offers three methods for feedback directly from users: twitter, email to protocols developers (info@protocols.io), and through a feedback forum where users and developers alike can respond. These comments are then used to fuel future development. Further, protocols.io recently initiated an ambassadors program where power users (usually graduate students or postdocs) that are directly connected to diverse communities provide feedback from a user-perspective. VERVE Net is one such community and currently has an ambassador to further develop the community. Thus, future development is guided by community input from these sources. We have amended the manuscript to clarify this important point. Information from the forum is maintained through the CLOCKSS digital archiving project (https://www.clockss.org/clockss/Home) as described in the manuscript under “Data and software availability”.\n\nComment 2: It is not quite clear from the manuscript how navigation by particular topics works. Is it possible to find a specific protocol without having to scroll through all entries, as well as to find easily a collection of protocols related to a particular field, e.g., 
nucleic acid isolation and manipulation; proteomics; bioinformatics, etc? If these functions exist in the current version of VERVENet, it would help to describe this more fully.\n\nResponse 2: Protocols on protocols.io can be “tagged” to allow users to quickly find protocols or collections of protocols in a particular area of interest. Users can also find protocols or other content using the global search at the top of each page that allows users to search within the entire forum, or specific sections of the forum. We have updated the text to clarify the search features.\n\nComment 3: The authors indicate that VERVENet was initiated, in part, due to “…a renaissance in virology..given the new metagenomic methods...” Although it is true that there is a large influx of metagenomic data in recent years, DNA sequence data is not the only type of data of interest to viral ecologists. The authors should discuss what types of data are being supported. For example, how are image data sets being handled, given the relatively large file size? How will VERVENet adapt as other types of ‘omics’ data become common within the viral ecology community? Clearly, DNA sequence is not the only data of interest, RNA sequence is equally important to virologists, so the authors may wish to indicate the range of data intended for VERVENet.\n\nResponse 3: Indeed, VERVENet is not just a forum to discuss ‘omics datasets but is inclusive of all types of data on studying viruses. The forum is meant to be a place to discuss newly emerging methods in viral ecology that includes but is not limited to ‘omics datasets. Also, while images, videos and tables can be added to protocols/steps, the protocols.io platform is not a data storage site like dropbox, GitHub, figshare, datadryad or CyVerse. To clarify, we added several sentences in the introduction.\n\nComment 4: The authors have taken on the responsibility of creating an online forum with an emphasis on methods. 
These methods are created by individuals and generally made available to anyone whether they register with protocols.io or not, at varying levels depending on the authors’ decisions. It is not clear who holds the intellectual property rights at that point. The authors should be transparent about the rights of the protocols.io contributing authors, where the legal authority begins and ends, and how contributors can both share and protect their intellectual property should they choose. Another concern that should be addressed is what is the fate of VERVENet should protocols.io change ownership. Are there contingency plans in the company statutes that would ensure that groups like VERVENet be supported? What happens if they are not supported? Who has legal ownership of the information? Also, given that the current funding is through the Gordon and Betty Moore Foundation, what happens to the content of VERVENet should that funding be removed or reduced? These things need some clarification.\n\nResponse 4: All public content on protocols.io is open access and is clearly labeled as CC-BY at the footer of each page. Moreover, the Terms of Service (https://www.protocols.io/terms#tos1) are explicit: \"We claim no intellectual property rights over the material you provide to the Sites/Apps. Your profile and materials uploaded remain yours. However, by publishing your protocols to be viewed publicly, you agree to allow others to view your Content. By setting your protocols to be viewed publicly, you agree to allow others to view and fork your protocols.\" Content generated by our users belongs to the users and is licensed under the Creative Commons Attribution License (CC-BY). As discussed above, the archiving via CLOCKSS and the mirroring with COS are meant to ensure long-term digital preservation of all knowledge on protocols.io, even in the case of bankruptcy or change of control.\n\nVERVENet does not rely on continued funding from GBMF. 
The support for this and other groups comes from the revenue streams at protocols.io. Ensuring long-term sustainability, independent of grants from funding agencies, was an important component of the original VERVE grant application.\n\nComment 5: The title of the paper does not give a full reflection of the contents of the paper.\n\nResponse 5: The title reflects the general application of developing digital communities in protocols.io for which VERVENet is an example. We kept the title more general to reflect the broad utility of the forum. While VERVENet was the first group on protocols.io and the grant for it enabled the creation of the functionality described here, it is meant for wide use and has already grown to more than 250 public and private groups and communities such as PROT-G, Open Plant, MinION, and others.\n\nComment 6: There are several statements in the manuscript where the authors tend to use emphatic language that is not easily supported. For example, the first sentence of the Abstract implies that social interactions of scientists ONLY occur at physical meetings and conferences. This statement is not only inaccurate, it undermines the credibility of the information that follows. The authors should re-read their paper and consider how some of their statements may be perceived. There are several examples.\n\nResponse 6: Thank you for pointing this out. Social interactions at conferences were only meant to be an example of the many interactions scientists have daily. We updated the manuscript to omit/rephrase any emphatic language per your suggestion."
}
]
},
{
"id": "16953",
"date": "18 Oct 2016",
"name": "Lawrence Patrick Kane",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThis well-written report describes the creation of a new platform for facilitating collaborative work among and between research groups. This platform takes advantage of the existing protocols.io site and App, and so is focused on methodological sharing, although other types of information are also covered. This approach fits well with the recent emphasis (from NIH and others) on addressing the issue of reproducibility in the biomedical literature. Of course, the ultimate utility of this (or any) platform will depend on what those in the field make of it.\nOne item that the authors may wish to clarify is the protocols.io business model, which is touched upon near the end of this piece, but is likely not familiar to many readers. This is not meant to impugn the data services model, but just to ensure its transparency.",
"responses": [
{
"c_id": "2401",
"date": "29 Jun 2017",
"name": "Bonnie Hurwitz",
"role": "Author Response",
"response": "Thank you for reviewing our manuscript. We have updated the text to clarify the business model for protocols.io per your suggestion."
}
]
}
] | 1
|
https://f1000research.com/articles/5-2271
|
https://f1000research.com/articles/6-1025/v1
|
29 Jun 17
|
{
"type": "Software Tool Article",
"title": "valr: Reproducible genome interval analysis in R",
"authors": [
"Kent A. Riemondy",
"Ryan M. Sheridan",
"Austin Gillen",
"Yinni Yu",
"Christopher G. Bennett",
"Jay R. Hesselberth",
"Kent A. Riemondy",
"Ryan M. Sheridan",
"Austin Gillen",
"Yinni Yu",
"Christopher G. Bennett"
],
"abstract": "New tools for reproducible exploratory data analysis of large datasets are important to address the rising size and complexity of genomic data. We developed the valr R package to enable flexible and efficient genomic interval analysis. valr leverages new tools available in the “tidyverse”, including dplyr. Benchmarks of valr show it performs similarly to BEDtools and can be used for interactive analyses and incorporated into existing analysis pipelines.",
"keywords": [
"Genomics",
"Intervals",
"BEDtools",
"reproducibility",
"R",
"RStudio"
],
"content": "Introduction\n\nA routine bioinformatic task is the analysis of the relationships between sets of genomic intervals, including the identification of DNA variants within protein coding regions, annotation of regions enriched for nucleic acid binding proteins, and computation of read density within a set of exons. Command-line tools for interval analysis such as BEDtools1 and BEDOPS2 enable analyses of genome-wide datasets and are key components of analysis pipelines. Analyses with these tools commonly combine processing intervals on the command-line with visualization and statistical analysis in R. However, the need to master both the command-line and R hinders exploratory data analysis, and the development of reproducible research workflows built in the RMarkdown framework.\n\nExisting R packages developed for interval analysis include IRanges3, bedr4, and GenometriCorr5. IRanges is a Bioconductor package that provides interval classes and methods to perform interval arithmetic, and is used by many Bioconductor packages. bedr is a CRAN-distributed package that provides wrapper R functions to call the BEDtools, BEDOPS, and tabix command-line utilities, providing out-of-memory support for interval analysis. Finally, GenometriCorr provides a set of statistical tests to determine the relationships between interval sets using IRanges data structures. These packages provide functionality for processing and statistical inference of interval data; however, they require a detailed understanding of S4 classes (IRanges) or the installation of external command-line dependencies (bedr). Additionally, these packages do not easily integrate with the recent advances provided by the popular tidyverse suite of data processing and visualization tools (e.g. dplyr, purrr, broom, and ggplot2)6. 
We therefore sought to develop a flexible R package for genomic interval arithmetic built to incorporate new R programming, visualization, and interactivity features.\n\n\nMethods\n\nvalr is an R package that makes extensive use of dplyr, a flexible and high-performance framework for data manipulation in R7. Additionally, compute-intensive functions in valr are written in C++ using Rcpp to enable fluid interactive analysis of large datasets8. Interval intersections and related operations use an interval tree algorithm to efficiently search for overlapping intervals9. BED files are imported and handled in R as data_frame objects, requiring minimal pre- or post-processing to integrate with additional R packages or command-line tools.\n\nvalr is distributed as part of the CRAN R package repository and is compatible with Mac OS X, Windows, and major Linux operating systems. Package dependencies and system requirements are documented in the valr CRAN repository.\n\n\nUse cases\n\nTo demonstrate the functionality and utility of valr, we present a basic tutorial for using valr and additional common use cases for genomic interval analysis.\n\nInput data. valr provides a set of functions to read BED, BEDgraph, and VCF formats into R as convenient tibble (tbl) data_frame objects. All tbls have chrom, start, and end columns, and tbls from multi-column formats have additional pre-determined column names. Standard methods for importing data (e.g. read.table, readr::read_tsv) are also supported, provided the constructed dataframes contain the requisite column names (chrom, start, end). Additionally, valr supports connections to remote databases to access the UCSC and Ensembl databases via the db_ucsc and db_ensembl functions.\n\n\n\n\n\nExample of combining valr tools. The functions in valr have similar names to their BEDtools counterparts, and so will be familiar to users of the BEDtools suite. 
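The interval intersection described above (valr's bed_intersect(), backed by a C++ interval-tree search) reduces to a simple half-open overlap test on (chrom, start, end) records. The following sketch is in Python for illustration only; it is not valr's code, the function and variable names are hypothetical, and it uses a per-chromosome sorted sweep rather than an interval tree:

```python
from collections import defaultdict

def intersect(x, y):
    """Report overlapping interval pairs between two sets of
    (chrom, start, end) tuples in BED-style half-open coordinates.
    Simplified sweep for illustration; valr uses an interval tree."""
    by_chrom = defaultdict(lambda: ([], []))
    for iv in x:
        by_chrom[iv[0]][0].append(iv)
    for iv in y:
        by_chrom[iv[0]][1].append(iv)
    out = []
    for chrom, (xs, ys) in by_chrom.items():
        xs.sort()
        ys.sort()
        for cx, sx, ex in xs:
            for cy, sy, ey in ys:
                if sy >= ex:
                    break          # ys sorted by start: no later y can overlap
                if ey > sx:        # half-open overlap test
                    out.append(((cx, sx, ex), (cy, sy, ey)))
    return out

# Two toy interval sets on chr1
x = [("chr1", 100, 200), ("chr1", 300, 400)]
y = [("chr1", 150, 250), ("chr1", 500, 600)]
print(intersect(x, y))  # one overlapping pair: (100, 200) with (150, 250)
```

Because both lists are sorted by start, the inner loop stops at the first y interval starting at or beyond the current x interval's end; tree- and sweep-based indexes exploit the same ordering to stay fast on genome-scale inputs.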
Also, similar to pybedtools10, a Python wrapper for BEDtools, valr has a terse syntax. For example, shown below is a demonstration of how to find all intergenic SNPs within 1 kilobase of genes using valr. The BED files used in the following examples are described in the Data Availability section.\n\n\n\nVisual documentation. By conducting interval arithmetic entirely in R, valr is also an effective teaching tool for introducing interval analysis to early-stage analysts without requiring familiarity with both command-line tools and R. To aid in demonstrating the interval operations available in valr, we developed the bed_glyph() tool, which produces plots demonstrating the input and output of operations in valr in a manner similar to those found in the BEDtools documentation. Shown below is the code required to produce glyphs displaying the results of intersecting x and y intervals with bed_intersect(), and the result of merging x intervals with bed_merge() (Figure 1).\n\n\n\nAnd this glyph illustrates bed_merge():\n\n\n\nGrouping data. The group_by function in dplyr can be used to execute functions on subsets of single and multiple data_frames. Functions in valr leverage grouping to enable a variety of comparisons. For example, intervals can be grouped by strand to perform comparisons among intervals on the same strand.\n\n\n\nComparisons between intervals on opposite strands are done using the flip_strands() function:\n\n\n\nBoth single-set (e.g. bed_merge()) and multi-set operations will respect groupings in the input intervals.\n\nColumn specification. Columns in BEDtools are referred to by position:\n\n\n\nIn valr, columns are referred to by name and can be used in multiple name/value expressions for summaries.\n\n\n\nAPI. The major functions available in valr are shown in Table 1.\n\nThis demonstration illustrates how to use valr tools to perform a “meta-analysis” of signals relative to genomic features. 
Here we analyze the distribution of histone marks surrounding transcription start sites, using H3K4Me3 ChIP-seq data from the ENCODE project.\n\nFirst we load packages and relevant data.\n\n\n\nThen, we generate 1 bp intervals to represent transcription start sites (TSSs). We focus on + strand genes, but - strand genes are easily accommodated by filtering them and using bed_makewindows() with reversed window numbers.\n\n\n\nNow we use the .win_id group with bed_map() to calculate a sum by mapping y signals onto the intervals in x. These data are regrouped by .win_id and a summary with mean and sd values is calculated.\n\n\n\nFinally, these summary statistics are used to construct a plot that illustrates histone density surrounding TSSs (Figure 2).\n\n\n\n(A) Summarized coverage of human H3K4Me3 ChIP-seq coverage across positive strand transcription start sites on chromosome 22. Data presented +/- SD.\n\nEstimates of significance for interval overlaps can be obtained by combining bed_shuffle(), bed_random() and the sample_ functions from dplyr with interval statistics in valr.\n\nHere, we examine the extent of overlap of repeat classes (RepeatMasker track obtained from the UCSC genome browser) with exons in the human genome (hg19 build, on chr22 only, for simplicity) using the Jaccard similarity index. bed_jaccard() implements the Jaccard test to examine the similarity between two sets of genomic intervals. Using bed_shuffle() and replicate() we generate a data_frame containing 100 sets of randomly selected intervals, then calculate the Jaccard index for each set against the repeat intervals to generate a null distribution of Jaccard scores. Finally, an empirical p-value is calculated from the null distribution.\n\n\n\n\n\nIn order to ensure that valr performs fast enough to enable interactive analysis, key functionality is implemented in C++. 
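The Jaccard statistic reported by bed_jaccard() is the number of bases in the intersection of two interval sets divided by the number of bases in their union. A minimal Python sketch of the statistic (illustration only, not valr's implementation; it assumes (start, end) half-open intervals on a single chromosome, non-overlapping within each set):

```python
def total_overlap(a, b):
    """Total bases shared between two lists of (start, end) intervals
    (single chromosome, half-open, non-overlapping within each list)."""
    shared = 0
    for s1, e1 in a:
        for s2, e2 in b:
            # overlap of two half-open intervals, clamped at zero
            shared += max(0, min(e1, e2) - max(s1, s2))
    return shared

def jaccard(a, b):
    """Jaccard index: |A intersect B| / |A union B|, in base pairs."""
    inter = total_overlap(a, b)
    len_a = sum(e - s for s, e in a)
    len_b = sum(e - s for s, e in b)
    return inter / (len_a + len_b - inter)

a = [(100, 200), (400, 500)]
b = [(150, 250)]
print(jaccard(a, b))  # 50 shared bases / 250 union bases = 0.2
```

The empirical p-value then follows the recipe in the text: recompute the score for, say, 100 shuffled interval sets and report the fraction of shuffled scores at least as large as the observed one.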
To test the speed of major valr functions we generated two data_frames containing 1 million randomly selected 1 kilobase intervals derived from the human genome (hg19). Most of the major valr functions complete execution in less than 1 second, demonstrating that valr can process large interval datasets efficiently (Figure 3A).\n\n(A) Timings were calculated by performing 10 repetitions of indicated functions on data frames preloaded in R containing 1 million random 1 kilobase x/y intervals generated using bed_random(). (B) Timings for executing functions in BEDtools v2.25.0 or equivalent functions in valr using the same interval sets as in (A) written to files. All BEDtools function outputs were written to /dev/null, and were timed using GNU time. Timings for valr functions in (B) include times for reading files using read_bed() functions and were timed using the microbenchmark package.\n\nWe also benchmarked major valr functions against corresponding commands in BEDtools. valr operates on data_frames already loaded into RAM, whereas BEDtools performs file-reading, processing, and writing. To compare valr against BEDtools we generated two BED files containing 1 million randomly selected 1 kilobase intervals derived from the human genome (hg19). For valr functions, we timed reading the table into R (e.g. with read_bed()) and performing the respective function. For BEDtools commands we timed executing the command with the output written to /dev/null. valr functions performed similarly to or faster than BEDtools commands, with the exception of bed_map() and bed_fisher() (Figure 3B).\n\nCommand-line tools like BEDtools and BEDOPS can be incorporated into reproducible workflows (e.g., with snakemake11), but it is cumbersome to transition from command-line tools to exploratory analysis and plotting software. 
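The benchmarking protocol above (repeated timings of in-memory operations, summarized by a robust statistic in the spirit of the microbenchmark package) can be sketched generically:

```python
import time
import statistics

def time_repeats(fn, repeats=10):
    """Run fn `repeats` times and return the median wall-clock time in seconds."""
    runs = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn()
        runs.append(time.perf_counter() - t0)
    # the median resists outliers from warm-up runs or background load
    return statistics.median(runs)
```

The median (rather than the mean) is the conventional summary here because a single cold-cache or garbage-collection run can otherwise dominate the result.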
RMarkdown documents are plain text files, amenable to version control, which provide an interface to generate feature rich PDF and HTML reports that combine text, executable code, and figures in a single document. valr can be used in RMarkdown documents to provide rapid documentation of exploratory data analyses and generate reproducible workflows for data processing. Moreover, new features in RStudio, such as notebook viewing and multiple-language support, enable functionality similar to another popular notebook platform, Jupyter notebooks.\n\nAdditionally, valr seamlessly integrates into R shiny12 applications, allowing for complex interactive visualizations relating to genomic interval analyses. We have developed a shiny application (available on GitHub) that explores ChIP-seq signal density surrounding transcription start sites and demonstrates the ease of implementing valr to power dynamic visualizations.\n\n\nSummary\n\nvalr provides a flexible framework for interval arithmetic in R/RStudio. valr functions are written with a simple and terse syntax that promotes flexible interactive analysis. Additionally, by providing an easy-to-use interface for interval arithmetic in R, valr is also a useful teaching tool to introduce the analyses necessary to investigate correlations between genomic intervals, without requiring familiarity with the command-line. We envision that valr will help researchers quickly and reproducibly analyze genome interval datasets.\n\n\nData and software availability\n\nThe valr package includes external datasets stored in the inst/extdata/ directory that were used in this manuscript. These datasets were obtained from the ENCODE Project13 or the UCSC genome browser14. BED files were generated by converting the UCSC tables into BED format. BED and BEDgraph data were kept only for chromosome 22, and were subsampled to produce file sizes suitable for submission to the CRAN repository. 
The original raw data is available from the following sources:\n\nhela.h3k4.chip.bg.gz SRA record: SRR227441, ENCODE identifier: ENCSR000AOF\n\nhg19.refGene.chr22.bed.gz ftp://hgdownload.soe.ucsc.edu/goldenPath/hg19/database/refGene.txt.gz\n\nhg19.rmsk.chr22.bed.gz ftp://hgdownload.soe.ucsc.edu/goldenPath/hg19/database/rmsk.txt.gz\n\nhg19.chrom.sizes.gz ftp://hgdownload.soe.ucsc.edu/goldenPath/hg19/database/chromInfo.txt.gz\n\ngenes.hg19.chr22.bed.gz ftp://hgdownload.soe.ucsc.edu/goldenPath/hg19/database/refGene.txt.gz\n\nhg19.snps147.chr22.bed.gz ftp://hgdownload.soe.ucsc.edu/goldenPath/hg19/database/snp147.txt.gz\n\nvalr can be installed via CRAN using install.packages(\"valr\").\n\nvalr is maintained at http://github.com/rnabioco/valr.\n\nLatest valr source code is available at http://github.com/rnabioco/valr.\n\nThe latest stable version of source code is at: https://github.com/rnabioco/valr/archive/v0.3.0.tar.gz\n\nArchived source code at the time of publication: http://doi.org/10.5281/zenodo.81540315\n\nLicense: MIT license.",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis work was supported by the RNA Bioscience Initiative (funded by a Transformational Research Award from the University of Colorado School of Medicine), a grant from the National Institutes of Health (R35 GM119550 to J.H.), the Colorado Office of Economic Development and International Trade (CTGGI 2016-2096), the BioFrontiers Computing Core at the BioFrontiers Institute, University of Colorado at Boulder and the Intramural Research Program of the National Library of Medicine.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgments\n\nThis work was in part completed during an NIH sponsored Hackathon hosted by the Biofrontiers Department at the University of Colorado at Boulder.\n\n\nReferences\n\nQuinlan AR, Hall IM: BEDTools: a flexible suite of utilities for comparing genomic features. Bioinformatics. 2010; 26(6): 841–842. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNeph S, Kuehn MS, Reynolds AP, et al.: BEDOPS: high-performance genomic feature operations. Bioinformatics. 2012; 28(14): 1919–1920. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLawrence M, Huber W, Pagès H, et al.: Software for computing and annotating genomic ranges. PLoS Comput Biol. 2013; 9(8): e1003118. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHaider S, Waggott D, Lalonde E, et al.: A bedr way of genomic interval processing. Source Code Biol Med. 2016; 11: 14. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFavorov A, Mularoni L, Cope LM, et al.: Exploring massive, genome scale datasets with the GenometriCorr package. PLoS Comput Biol. 2012; 8(5): e1002529. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWickham H: tidyverse: Easily Install and Load ’Tidyverse’ Packages. R package version 1.1.1, 2017. 
Reference Source\n\nWickham H, Francois R: dplyr: A Grammar of Data Manipulation. R package version 0.5.0, 2016. Reference Source\n\nEddelbuettel D, François R: Rcpp: Seamless R and C++ integration. J Stat Softw. 2011; 40(8): 1–18. Publisher Full Text\n\nCormen TH, Leiserson CE, Rivest RL, et al.: Introduction to Algorithms. 2nd Ed. Cambridge (Massachusetts): MIT Press; 2001. Reference Source\n\nDale RK, Pedersen BS, Quinlan AR: Pybedtools: a flexible Python library for manipulating genomic datasets and annotations. Bioinformatics. 2011; 27(24): 3423–3424. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKöster J, Rahmann S: Snakemake--a scalable bioinformatics workflow engine. Bioinformatics 2012; 28(19): 2520–2522. PubMed Abstract | Publisher Full Text\n\nChang W, Cheng J, Allaire JJ, et al.: shiny: Web Application Framework for R. R package version 1.0.3. 2017. Reference Source\n\nENCODE Project Consortium: An integrated encyclopedia of DNA elements in the human genome. Nature. 2012; 489(7414): 57–74. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRosenbloom KR, Armstrong J, Barber GP, et al.: The UCSC Genome Browser database: 2015 update. Nucleic Acids Res. 2015; 43(Database issue): D670–D681. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHesselberth J, kriemo, sheridar, et al.: rnabioco/valr: Zenodo release. Zenodo. 2017. Data Source"
}
|
[
{
"id": "23916",
"date": "10 Jul 2017",
"name": "Robert A. Amezquita",
"expertise": [
"Reviewer Expertise Computational immunology"
],
"suggestion": "Approved",
"report": "Approved\n\nRiemondy and colleagues have made a valuable tool, `valr`, available to the general public, one which deserves much merit for bringing modern R idioms from the `tidyverse` into the world of genomic research. As a fellow bioinformatician who has struggled with the idiosyncrasies of the aforementioned tools for interval manipulation in R, `valr` addresses many of the usability issues associated with these legacy methods by fundamentally altering the user experience. Consequently, there are two main advantages to adopting `valr` for interval manipulation in R: ease of writing, and ease of reading, code. Thus, this referee wholeheartedly endorses `valr`, and hopes to see more work that brings many of the `tidyverse` philosophies over to working with genomics in R.\n\nNonetheless, there is some room for improvement of the associated manuscript to better help explain the philosophy and usage of `valr`, and its place amongst the many tools for the manipulation of genomic intervals.\nFirstly, in the introduction, it is mentioned that there exist `IRanges` methods that utilize the S4 convention, whereas `valr` utilizes a less formal schema where 3 columns, `chrom`, `start`, `end`, are present in the `data_frame` object. Indeed, it may be of use to expand upon such design choices that were made, and what advantages/disadvantages come with using this less formal schema, and any other highly pertinent choices that affect user experience. 
In addition, one line mentions integration with other `tidyverse` tools, and should expand upon this with either one to a few specific examples or explain this point in more detail. Additionally, it should be pointed out how `valr` builds upon these existing toolkits, and either expands upon or adopts their conventions. One way might be to create a table comparing functions between `valr`/bedtools/GenomicRanges, which would help a reader see that the toolkit will be easily adoptable. Indeed, it's mentioned that the syntax is similar to bedtools in the use cases, and this might be good to mention in the introduction as well. Thus, an expanded introduction/additional section explaining the uniqueness of `valr` would help to better \"sell\" when one should use `valr` and why.\nIn performing benchmarking, it would be useful to include one or two leading R tools, such as GenomicRanges, in the calculations, as this is likely how many R programmers currently perform interval manipulations natively in R, and I suspect this would show an impressive relative performance improvement.\n`valr` presents an exciting new development in the R+Genomics realm, and this referee is hopeful that this sort of development helps fuel further `tidyomics` tools for R bound together by a cohesive philosophy, great user experience, and pointed utility.\n\nIs the rationale for developing the new software tool clearly explained? Partly\n\nIs the description of the software tool technically sound? Yes\n\nAre sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? Yes\n\nIs sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? Yes\n\nAre the conclusions about the tool and its performance adequately supported by the findings presented in the article? Yes",
"responses": []
},
{
"id": "24101",
"date": "10 Jul 2017",
"name": "Ryan K. Dale",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThe authors describe a package for manipulating genomic interval data in R using principles from the \"tidyverse\" for the data structures and API. This sets it apart from existing tools such as GenomicRanges or bedr which have their own ways of storing and manipulating data. As a result, valr should be easier to pick up and integrate with the rest of the R ecosystem, and the “tidyverse” in particular. Illustrative examples give the reader a taste for the package while highlighting the novel features.\n\nIn general, this looks to be a very useful tool. The code quality is excellent and it is great to see so many tests including the addition of regression tests as issues are identified.\n\nMy comments are very minor:\n\nGroup-by code listing: comment \"# intersect tbls by strand\" should be \"# group tbls by strand\"\n\nBioconductor might be a more appropriate repository than CRAN\n\nDescription of in-memory usage: I see from the software documentation that BAM and VCF will be supported in the future, and the documentation explicitly mentions that valr operates on data in-memory. The section comparing with BEDTools briefly mentions the in-memory aspect, but it would be helpful to be clearer about memory usage in the manuscript, especially as users attempting to use large BAM files may run out of memory.\n\nThis is just a suggestion for improvement: Over the years, numerous bugs from corner cases have been found and handled in BEDTools. 
It would greatly increase confidence in the underlying algorithms you have written if there is input/output parity between valr and BEDTools, at least for the tools that overlap the two packages. For example I see some test cases that use input from the BEDTools test suite (e.g., test_cluster.r), but don't check the output. It should be straightforward to check the output against that provided by the BEDTools test suite. Correspondingly it would be good for BEDTools to use valr input/expected output in its test suite.\n\nIs the rationale for developing the new software tool clearly explained? Yes\n\nIs the description of the software tool technically sound? Yes\n\nAre sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? Yes\n\nIs sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? Yes\n\nAre the conclusions about the tool and its performance adequately supported by the findings presented in the article? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-1025
|
https://f1000research.com/articles/6-125/v1
|
10 Feb 17
|
{
"type": "Research Article",
"title": "Variation in rice root traits assessed by phenotyping under drip irrigation",
"authors": [
"T. Parthasarathi",
"K. Vanitha",
"S. Mohandass",
"Eli Vered",
"V. Meenakshi"
],
"abstract": "Background: Roots are the key elements in water saving rice cultivation. Therefore, the response of rice roots is to be phenotyped under varied drip irrigation treatments. Methods: This study describes an investigation on rice root phenotyping under drip irrigation treatments in a split-split plot design. Two lateral spacing levels (0.8 and 1.2m) and two depths of irrigation (5-10 and 15-20cm), under solar powered and well operated irrigation, were tested using TNRH 180, JKRH 3333 and ADT(R)45 rice genotypes during the summer season (2013 & 2014) in Coimbatore, India. Conventional aerobic irrigation was considered as control. Results and Discussion: Increased root length, root density (length and weight), root adenosine triphosphatase (ATPase) enzyme activity, root volume and filled grain percentage were favored in aerobic rice under the conditions of 0.8m lateral distance with 5-10cm depth of subsurface drip irrigation (SDI). Improved root characteristics were observed in the JKRH 3333 rice hybrid, and root density and thickness favored the filled grains and yield increment in rice by drip irrigation. The 0.8m lateral distance laid out at 5-10cm depth SDI proliferated more roots at the subsurface soil layer with a significant yield increment in rice.",
"keywords": [
"Drip",
"rice",
"phenotyping",
"root",
"yield"
],
"content": "Introduction\n\nAerobic rice refers to rice grown under non-water stagnation. The growth of the root system in rice is restricted under aerobic conditions, which is the reason for poor yield (Kato et al., 2010). In aerobic soil conditions, the soil’s interaction with rice is primarily focused on the root system. Therefore, the root system is the first barrier to face the stresses found in aerobic soil conditions. Water management of rice crops is a sensitive tool in aerobic rice cultivation practice, which has been demonstrated by alteration in rice root anatomy (Mostajeran & Rahimi-Eichi, 2008).\n\nA deeper root system of rice eases water stress and improves the uptake of nutrients and water in deep soil layers (Lilley et al., 1994). Rice cultivars with deep rooting and higher root density are much better adapted to aerobic conditions than to lowland conditions (Matsuo et al., 2010).\n\nAdaptation of root traits to aerobic conditions could be attained by phenotyping for root traits and by rhizosphere management through application of water and nutrients to the root zone (Zhang et al., 2010). Drip irrigation stimulates fibrous root production with specific changes in root system architecture (Raj et al., 2013). Similarly, drip irrigation with humigation favors root growth and grain yield in rice (Vanitha & Mohandass, 2013).\n\nThis study focuses on phenotyping of root traits, such as root length, density, and distribution, under various drip treatments, in relation to the root response of rice genotypes. Consequently, the phenotyping of root traits and grain yield under a drip environment was analyzed in the present study.\n\n\nMethods\n\nA root phenotyping experiment was conducted during the summer seasons of 2013 and 2014, using JKRH 3333, TNRH 180 and ADT(R)45 as the test rice varieties at Tamil Nadu Agricultural University (Coimbatore, Tamil Nadu, India). Seeds were manually sown in the field at 20×10cm spacing. 
The open pan evaporation (PE) values (125% pan evaporation) were used to calibrate irrigation scheduling, and drip irrigation was supplied via a pipe of 40mm outside diameter (OD) by a 7.5HP motor at a pressure of 1.5kg/cm2 from a bore well. Solar powered and well-operated drip irrigation sources, 0.8 and 1.2m lateral distances, and 5–10 and 15–20cm depth sub-surface drip (SDI) were the treatments adopted at field level. The conventional aerobic practice was scheduled at a 1.25 irrigation water (IW)/cumulative pan evaporation (CPE) ratio to 3.0cm. This was termed conventional aerobic rice. The recommended dose of 150:50:50 kg ha-1 NPK water soluble fertilizers was used to fertigate the crops via a Venturi flume at weekly intervals. Further information on genotypes, experimental set up and fertigation schedule are given in Supplementary File 1.\n\nRoot length was estimated during the flowering phase (80 days after sowing) from core samples (Kato et al., 2006). Rice roots were removed carefully from the soil by a root auger without damaging the roots. After the samples were oven-dried at 80°C for 72h, root lengths and weights were measured. Root length (m hill-1) = sample root length (cm) × total root weight (g)/sample root weight (g). Root dry weight was expressed as g/hill. The specific root length was estimated as the ratio of root length to root dry weight. Four soil cores (50mm diameter, 35cm depth) per plot were taken next to the plant and between the rows (20cm) with a soil sampler. Cores of soil were separated into 0–15 and 15–35cm, then washed using water and sieved through a 0.5mm mesh sieve. The root length density (RLD), as well as the root mass density (RMD), was determined using the formula of Pantuwan et al. (1997), and the values were expressed as cm/cm3 and mg/cm3 of the soil, for RLD and RMD respectively. 
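As a quick arithmetic illustration of the scaling formula above (the cm-to-m conversion is an assumption implied by the stated m hill-1 units; the function names are ours, not from the paper):

```python
def total_root_length_m_per_hill(sample_len_cm, total_root_wt_g, sample_root_wt_g):
    """Scale the measured sample length by the dry-weight ratio, then convert cm -> m."""
    total_len_cm = sample_len_cm * total_root_wt_g / sample_root_wt_g
    return total_len_cm / 100.0  # cm -> m (assumed conversion)

def root_length_density(root_len_cm, core_volume_cm3):
    """RLD in cm of root per cm^3 of soil, from a washed soil-core sample."""
    return root_len_cm / core_volume_cm3
```

For example, a 500cm root sample weighing 0.5g from a hill with 2.0g total root dry weight scales to 2000cm, i.e. 20m/hill.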
Root volume was recorded using the water displacement technique (Bridgit & Potty, 2002) and expressed as cm3/hill.\n\nCore sampled roots were washed thoroughly and dehydrated using 80, 90 and 100% alcohol. Dehydrated roots were embedded in slides using paraffin. Slides were kept for imaging the root system using a camera (Sony, 12.1 megapixel) mounted on a Leica D1000 microscope at 10X magnification (Guo et al., 2008).\n\nAdenosine triphosphatase (ATPase) activity of the root was assayed according to Wayne (1955) at 32°C using ATP (sodium salt) as a substrate, and the reactions were terminated by the addition of 2.0 mL cold 10% trichloroacetic acid. The ATPase enzyme activity was expressed as µg Pi g-1 h-1.\n\nHarvesting of the crop (grain) was performed from the net plot level (2.4×7.0m) at 120 days after sowing. The yield of rice grain was calculated to hectares at 14% moisture level and expressed as kg ha-1. The filled grain percentage (%) was calculated as the ratio of total filled grains to total spikelet numbers in panicles. The ratio of grain dry weight to total plant dry weight per hill was used to measure the harvest index (HI; %) at the harvest stage of the crop (Yoshida et al., 1971).\n\nThe recorded mean data were analyzed with AgRes software (version 7.01) ANOVA (Analysis of Variance) package for researchers 1994, Pascal Intl software solutions. Significance was assessed at 95% (p<0.05) and 99% (p<0.01) confidence levels (Gomez & Gomez, 1984). F values were calculated using the method described at http://www.biokin.com/tools/fcrit.html.\n\n\nResults\n\nTotal root length (TRL) represents the size of the total root system, which is the major determinant of water and nutrient uptake. The drip irrigation system used in the present study in aerobic rice showed significant variation among root traits. 
Regarding the TRL for the different rice genotypes, a longer length was observed with JKRH 3333 (6.2m/hill), followed by TNRH 180 (50.7m/hill) and ADT(R) 45 (38.6m/hill) (Table 1).\n\nAmong the genotypes, a significantly higher root volume was observed in JKRH 3333 (66cm3/hill), followed by TNRH 180 (61.7cm3/hill) and ADT(R)45 (53.2cm3/hill). Among the main plot treatments, solar drip irrigation recorded a 43.4% increase in root volume compared with the well-operated drip irrigation treatment.\n\nLD: lateral distance; S.Ed: standard error difference; CD: critical difference. *significant at p<0.05; **significant at p<0.01; NS: not significant.\n\nThe various genotypes of rice had varying RLD values: 1.513cm/cm3 (JKRH 3333); 1.267 cm/cm3 (TNRH 180); and 1.077cm/cm3 [ADT(R)45]. Conventional aerobic rice recorded a decreased RLD value of 1.133cm/cm3. Among the genotypes, JKRH 3333 had a higher RMD value (1.214mg/cm3) with statistical significance over TNRH 180 (1.109 mg/cm3) and ADT(R)45 (0.996 mg/cm3). The root density changed in the drip system, and was higher for the JKRH 3333 genotype than the other genotypes (Figure 1). The root dry weight (2.56 g/hill) and specific root length (0.160) were found to be higher in JKRH 3333 than in the rest (Figure 2). Comparing the drip treatments, increased root dry weight was observed in 0.8 m LD (2.5g/hill) laid out at 5–10 cm SDI over conventional rice (1.9g/hill).\n\nThe root ATPase activity of JKRH 3333 (33.1µg Pi/g/h) showed statistically significant supremacy over TNRH 180 (29.5µg Pi/g/h) and ADT(R)45 (23.8µg Pi/g/h). Among the drip irrigation treatments, increased root activity was obvious in the 0.8m LD SDI laid at the soil depth of 5–10cm treatment (31.2µg Pi/g/h) and lesser activity was evident in conventional aerobic rice (26.4µg Pi/g/h).\n\nGenotypic variation of rice showed an increased filled grain percentage in JKRH 3333 (88.2%) followed by TNRH 180 (85.0%) and ADT(R) 45 (76.4%). 
The SDI system at 0.8m lateral distance recorded a higher filled grain percentage (85.4%). Higher HI values were observed in JKRH 3333 (39.2%) followed by TNRH 180 (38.5%) and ADT(R) 45 (37.8%). The solar operated drip irrigation treatments were significantly superior, with a value of 39.3% compared with the well-operated drip irrigation system (37.7%).\n\nThe JKRH 3333 genotype was statistically superior among all the genotypes in grain yield. The grain yield was observed to be significantly higher in the solar operated drip irrigation treatment (4817kg/ha) compared with well-operated drip irrigation (4313kg/ha) (Table 2). Among the genotypes under drip irrigated aerobic rice, JKRH 3333 was statistically superior in mean grain yield (4831kg/ha) followed by TNRH 180 (4639kg/ha) and ADT(R)45 (4224kg/ha).\n\nLD: lateral distance; S.Ed: standard error difference; CD: critical difference. *significant at p<0.05; **significant at p<0.01; NS: not significant.\n\n\nDiscussion\n\nRoots are the main component in the absorption of water and minerals, which are essential in plant physiological processes. Fageria (2007) observed that root length followed a significant quadratic response with the advancement of plant age from 19 to 120 days after sowing, and a linear increase in root length during flowering.\n\nThe favored root length under the SDI at 5–10cm treatment is due to deep rooting of rice to combat water-limited conditions. Genotypic variation in TNRH 180 revealed deep rooting to reduce the limited water application effect. The increased root growth and development of the root system help the rice to explore the wider area of soil and the deeper soil layers for water and nutrients. These results were corroborated by Henry et al. (2011) in rice under drought.\n\nThe genotype JKRH 3333 registered an increased root length and specific root length of 34.9% and 3.9% over conventional aerobic rice (Figure 2). 
Specific root length is an indicator of environmental change. The genetic potential of this rice genotype for maintenance of increased root length favors lateral root branching (Figure 3). This effect was in accordance with Kato & Okami (2011) in rice.\n\nA greater root volume allows plants to cover large soil volumes and improves water uptake from the soil in water-limited conditions (Kanbar, 2004). Altered root volumes were observed in the present study under SDI with a 5–10cm drip laid out at 0.8m LD, due to greater assimilation allocation in rice roots by drip irrigation. Similar results were observed by Parthasarathi et al. (2012) under drip irrigation.\n\nThe root length density (RLD), the length of roots per unit volume of soil, is an important parameter required to understand plant performance. In the present experiment, the SDI at 5–10cm depth using JKRH 3333 increased the RLD and RMD, due to the root zone of rice being exposed to frequent wetting, slight drying and nutrient accessibility. The dry weight of roots was 36.8% higher in the JKRH 3333 hybrid under drip irrigation. Similar variation in rice was observed by Vanitha (2011), supporting the present results. This unique response of root length and mass density under drip irrigation to improve nutrient and water accessibility was due to more root proliferation in the topsoil. Comparing the root images of the genotypes (Figure 3) revealed that, even though the appearance of white roots was common, root numbers and density were higher under drip irrigation.\n\nLight energy absorbed by chlorophyll is converted into stable chemical energy and drives ATP formation via ATPase in the plastids of roots. ATPase is widely present in plant tissues and is involved in the active transport of ions across cell membranes (Martínez-Ballesta et al., 2003). 
In the present study, higher levels of ATPases were observed in SDI + 0.8m LD at 5–10cm depth with the JKRH 3333 hybrid.\n\nThe grain filling percentage is an important contributory factor to grain yield. The SDI laid out at 5–10cm depth with 0.8m LD treatment registered more grain production and filling percentage. Among the genotypes, the hybrid (JKRH 3333) exceeded the variety in filled grain percentage by 15.4% (Figure 4). The increase in the water supply to the spikelets might reduce the floret abortion during flowering, and may be the reason behind higher filled grains in SDI. These results are indirectly supported by Kato et al. (2008) in aerobic rice.\n\nHarvest index (HI) reflects the proportion of assimilate distribution between economic and total biomass (Donald & Hamblin, 1976). Among the genotypes, a higher HI was recorded in JKRH 3333, with a 1.6 and 4.5% increment over the TNRH 180 and ADT(R)45 genotypes, respectively (Figure 3). This might be attributed to a larger sink size and efficient transport of assimilates from leaves and stems (‘source’) into developing spikelets (‘sinks’), resulting in the increased grain yield (Guan et al., 2010).\n\nThe higher grain yield of JKRH 3333 represented a 21.4% increase under drip over conventional aerobic rice cultivation. Comparing SDI depths, the 5–10cm soil depth achieved an 18.9 and 13.0% increased yield over the 15–20cm soil depth and the conventional aerobic irrigation method, respectively. The SDI system maintained even soil wetting and reduced evaporation through direct point application of water to the roots, which improves the grain yield of rice. A previous study supports this argument (Douh et al., 2013).\n\n\nConclusions\n\nThis drip-irrigated aerobic rice study concluded that there is an increase in grain yield along with increased root parameters. 
The data on lateral spacing, discharge variations and the root characters of rice under drip irrigation showed that there was characteristic flexibility in the roots of the rice plant. The root length, root density, root hairs and root ATPase activity exhibited a significant association with filled grain percentage and grain yield. The genotype JKRH 3333 showed 14.3% increased grain yield with favorable root density and root dry weight over ADT(R)45. It could be recommended that 0.8m lateral distance laid out at 5–10cm depth SDI may proliferate more roots at the subsurface soil layer with a yield increment in rice.\n\n\nData availability\n\nDataset 1: Response of root traits phenotyped under different drip irrigation treatments. doi: 10.5256/f1000research.9938.d151043 (Parthasarathi et al., 2017).\n\nSource of irrigation: S1, solar powered; S2, well operated.\n\nDrip treatments: T1, 0.8m LD; T2, 1.2m LD; T3, 5–10cm; T4, 15–20cm; T5, conventional aerobic rice.\n\nGenotypes: V1, TNRH 180; V2, JKRH 3333; V3, ADT(R)45.",
"appendix": "Author contributions\n\n\n\nTP, SM, EV, and KV designed the experiments. TP performed the experiments in the field. TP and VM analyzed the data using statistics. TP, VM, and KV contributed reagents/materials/analysis tools. TP wrote the manuscript. KV and VM corrected the manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests disclosed.\n\n\nGrant information\n\nThe project was funded by Netafim Irrigation Ltd., Israel.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nSupplementary materials\n\nSupplementary File 1: Detailed information on experimental set up (Table S1), genotypes (Table S2) and fertigation schedule (Table S3) of drip irrigated rice study.\n\nClick here to access the data.\n\n\nReferences\n\nBridgit AJ, Potty NN: Influence of root characters on rice productivity in iron soils of Kerala. Int Rice Res News. 2002; 27(1): 45–46.\n\nDonald CM, Hamblin J: The biological yield and harvest index of cereals as agronomic and plant breeding criteria. Adv Agron. 1976; 28: 361–405. Publisher Full Text\n\nDouh B, Boujelben A, Khila S, et al.: Effect of subsurface drip irrigation system depth on soil water content distribution at different depths and different times after irrigation. Larhyss J. 2013; 13: 7–16. Reference Source\n\nGomez KA, Gomez AA: Analysis of data from a series of experiments. In: Statistical Procedures for Agricultural Research. second ed. John Wiley & Sons, New York, 1984; 316–356. Reference Source\n\nGuan YS, Serraj R, Liu SH, et al.: Simultaneously improving yield under drought stress and non-stress conditions: a case study of rice (Oryza sativa L.). J Exp Bot. 2010; 61(15): 4145–56. PubMed Abstract | Publisher Full Text\n\nGuo D, Xia M, Wei X, et al.: Anatomical traits associated with absorption and mycorrhizal colonization are linked to root branch order in twenty-three Chinese temperate tree species. New Phytol. 
2008; 180(3): 673–683. PubMed Abstract | Publisher Full Text\n\nHenry A, Gowda VR, Torres RO, et al.: Variation in root system architecture and drought response in rice (Oryza sativa): phenotyping of the OryzaSNP panel in rainfed lowland fields. Field Crops Res. 2011; 120(2): 205–214. Publisher Full Text\n\nKato Y, Kamoshita A, Yamagishi J: Growth of Three Rice Cultivars (Oryza sativa L.) under Upland Conditions with Different Levels of Water Supply. Plant Prod Sci. 2006; 9(4): 435–445. Publisher Full Text\n\nKato Y, Kamoshita A, Yamagishi J: Preflowering Abortion Reduces Spikelet Number in Upland Rice (Oryza sativa L.) under Water Stress. Crop Sci. 2008; 48(6): 2389–2395. Publisher Full Text\n\nKato Y, Okami M, Tajima R, et al.: Root response to aerobic conditions in rice, estimated by Comair root length scanner and scanner-based image analysis. Field Crops Res. 2010; 118(2): 194–198. Publisher Full Text\n\nKato Y, Okami M: Root morphology, hydraulic conductivity and plant water relations of high-yielding rice grown under aerobic conditions. Ann Bot. 2011; 108(3): 575–583. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLilley JM, Fukai S: Effect of timing and severity of water deficit on four diverse rice cultivars I. Rooting pattern and soil water extraction. Field Crops Res. 1994; 37(3): 205–213. Publisher Full Text\n\nMartínez-Ballesta MC, Martínez V, Carvajal M: Aquaporin functionality in relation to H+-ATPase activity in root cells of Capsicum annuum grown under salinity. Physiol Plant. 2003; 117(3): 413–420. PubMed Abstract | Publisher Full Text\n\nMatsuo N, Ozawa K, Mochizuki T: Physiological and morphological traits related to water use by three rice (Oryza sativa L.) genotypes grown under aerobic rice systems. Plant Soil. 2010; 335(1): 349–361. Publisher Full Text\n\nMostajeran A, Rahimi-Eichi V: Drought stress effects on root anatomical characteristics of rice cultivars (Oryza sativa L.). Pak J Biol Sci. 2008; 11(18): 2173–2183. 
PubMed Abstract | Publisher Full Text\n\nPantuwan G, Fukai S, Cooper M, et al.: Root traits to increase drought resistance in rainfed lowland rice. In: Breeding strategies for rainfed lowland rice in drought-prone environments. Proceedings of International Workshop held at Ubon Ratchatani, Thailand, 5–8 November, 1996. ACIAR, Canberra, Australia. 1997; 170–179. Reference Source\n\nParthasarathi T, Vanitha K, Lakshamanakumar P, et al.: Aerobic rice-mitigating water stress for the future climate change. Int J Agron Plant Prod. 2012; 3(7): 241–254. Reference Source\n\nParthasarathi T, Vanitha K, Mohandass S, et al.: Dataset 1 in: Variation in rice root traits assessed by phenotyping under drip irrigation. F1000Research. 2017. Data Source\n\nRaj A, Muthukrishnan P, Ayyadurai P: Root Characters of Maize as Influenced by Drip Fertigation Levels. Am J Plant Sci. 2013; 4(2): 340–348. Publisher Full Text\n\nVanitha K: Physiological comparison of surface and sub-surface drip system in aerobic rice (Oryza sativa L.). P.hD. (Ag.) Thesis submitted to Tamil Nadu Agricultural University, Coimbatore 641 003. India. 2011; 400.\n\nVanitha K, Mohandass S: Effect of Azophosmet biofertigation on microbial population in aerobic rice (Oryza sativa L.). Adv in Environ Biol. 2013; 7(13): 3895–3898. Reference Source\n\nWayne K: Mitochondrial ATPase. In: (eds.) Colowick, S.P. and N.O. Kaplan. Methods in Enzymology. Academic Press, New York, USA. 1955; 2: 593–595. Publisher Full Text\n\nYoshida S, Foron DA, Cock JH: Laboratory manual for physiological studies of rice. International Rice Research Institute, Los Baños, Philippines. 1971; 70.\n\nZhang F, Shen J, Zhang J, et al.: Rhizosphere processes and management for improving nutrient use efficiency and crop productivity: implications for China. Adv Agron. 2010; 107: 1–32. Publisher Full Text"
}
|
[
{
"id": "20988",
"date": "15 Mar 2017",
"name": "Kazuki Saito",
"expertise": [],
"suggestion": "Not Approved",
"report": "Not Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe experimental set up was not properly written. In the abstract, the authors indicated a split-split plot design. In M&M, there were four factors including (i) lateral spacing levels (0.8 and 1.2m), (2) depth of irrigation (5-10 and 15-20 cm), (3) solar powered and well operated irrigation, and (4) variety (TNRH 180, JKRH 3333 and ADT(R)45). However, there was no info on experimental design, plot size, and no. replications used for this study.\n\nFurthermore, it is not clear how these factors were analyzed. The authors showed the main factor only. There were no information on interaction effects among source of drip irrigation, variety, and drip treatments.\n\nWithout examining such interactions, the authors could not draw the conclusion that JKRH3333 is best suitable material under drip irrigated conditions, and 0.8m lateral distance laid out at 5–10cm depth sub-surface drip may proliferated more roots at subsurface soil layer with a yield increment in rice. There might have been interaction among variety x drip irrigation on traits measured in this study.\n\nThe authors could report the effect of each factor on traits in the results section rather than describing the results in each trait for different factors. Then, the interaction effect should be highlighted. If there is significant interaction, detailed results should be shown (e.g. 
if there is variety x drip treatment on yield, yield of each variety in different drip treatment should be shown).\n\nSome of the results were not described in the results section but the discussion section (e.g. figures 2 and 3). These should be reported in the results section.",
"responses": [
{
"c_id": "2814",
"date": "28 Jun 2017",
"name": "PARTHASARATHI THEIVASIGAMANI",
"role": "Author Response",
"response": "Dear Kazuki Saito I accept your comments on including the interaction effect of treatments. My co-authors also accepted your view. In the version one, interactive effects of treatments are missing. As per your comments, I have included the main, sub and sub-sub plot, interaction treatment results in the paper. I have mentioned the replications, experimental design, plot dimension in the paper. Also, please view the supplementary file for additional details such as experiment layout, drip layout. Thank you for your peer review to made the manuscript better. Please review the revised version (Version 2) and give your comments. Thanking you. Kind regards Parthasarathi"
}
]
}
] | 1
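The interaction effect the reviewer asks for can be illustrated with a minimal numerical sketch. The cell-mean yields below are hypothetical placeholders (not data from the study), used only to show what a genotype x lateral-distance interaction contrast measures:

```python
# Back-of-envelope check for a genotype x drip-treatment interaction,
# using HYPOTHETICAL cell-mean yields (t/ha) -- not data from the study.
# Keys: (lateral distance, genotype).
means = {
    ("0.8m", "JKRH 3333"): 5.6, ("0.8m", "ADT(R)45"): 4.9,
    ("1.2m", "JKRH 3333"): 5.1, ("1.2m", "ADT(R)45"): 4.8,
}

def interaction_contrast(m):
    """(Genotype difference at 0.8m) minus (the same difference at 1.2m).
    Zero means the genotype effect is identical under both lateral spacings,
    i.e. no interaction; a non-zero value is what the reviewer asks to report."""
    d_08 = m[("0.8m", "JKRH 3333")] - m[("0.8m", "ADT(R)45")]
    d_12 = m[("1.2m", "JKRH 3333")] - m[("1.2m", "ADT(R)45")]
    return d_08 - d_12

print(round(interaction_contrast(means), 2))  # 0.4
```

With replicate-level data, this same contrast is what the interaction term of a split-split plot ANOVA tests formally against the appropriate error stratum.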
|
https://f1000research.com/articles/6-125
|
https://f1000research.com/articles/6-1016/v1
|
28 Jun 17
|
{
"type": "Opinion Article",
"title": "Sanger sequencing as a first-line approach for molecular diagnosis of Andersen-Tawil syndrome",
"authors": [
"Armando Totomoch-Serra",
"Manlio F. Marquez",
"David E. Cervantes-Barragán",
"Armando Totomoch-Serra",
"David E. Cervantes-Barragán"
],
"abstract": "In 1977, Frederick Sanger developed a new method for DNA sequencing based on the chain termination method, now known as the Sanger sequencing method (SSM). Recently, massive parallel sequencing, better known as next-generation sequencing (NGS), is replacing the SSM for detecting mutations in cardiovascular diseases with a genetic background. The present opinion article wants to remark that “targeted” SSM is still effective as a first-line approach for the molecular diagnosis of some specific conditions, as is the case for Andersen-Tawil syndrome (ATS). ATS is described as a rare multisystemic autosomal dominant channelopathy syndrome caused mainly by a heterozygous mutation in the KCNJ2 gene. KCJN2 has particular characteristics that make it attractive for “directed” SSM. KCNJ2 has a sequence of 17,510 base pairs (bp), and a short coding region with two exons (exon 1=166 bp and exon 2=5220 bp), half of the mutations are located in the C-terminal cytosolic domain, a mutational hotspot has been described in residue Arg218, and this gene explains the phenotype in 60% of ATS cases that fulfill all the clinical criteria of the disease. In order to increase the diagnosis of ATS we urge cardiologists to search for facial and muscular abnormalities in subjects with frequent ventricular arrhythmias (especially bigeminy) and prominent U waves on the electrocardiogram.",
"keywords": [
"Sanger sequencing",
"Andersen-Tawil",
"KCNJ2",
"genetic testing",
"Next Generation Sequencing",
"clinical diagnosis",
"mutation"
],
"content": "Introduction\n\nIn 1977, Frederick Sanger developed a new method for DNA sequencing based on the chain termination method, where nucleotides in a single-stranded DNA molecules are determined by complementary synthesis of polynucleotide chains, based on the selective incorporation of chain-terminating dideoxynucleotides driven by the DNA polymerase enzyme1. For this method, Sanger was awarded in 1980 with a second Nobel Prize in Chemistry, and nowadays this method is still known as the Sanger method of DNA sequencing, becoming a standard method in clinical genetics. The present opinion article wants to remark that, targeted SSM is still effective in specific clinical scenarios at a lower cost as a diagnostic method compared to new technologies for sequencing, one example is the detection of Andersen-Tawil syndrome (ATS).\n\n\nNext-generation sequencing: available for everyone?\n\nNext-generation sequencing (NGS) technology, also known as massive parallel, high throughput or deep sequencing, is gradually replacing the traditional SSM as the first choice method for screening mutations in genetic cardiovascular diseases2.\n\nThe genetic heterogeneity in long QT3 and Brugada syndromes4 has made this new genetic testing approach mandatory. The advantages of NGS versus the SSM in cases of genetic heterogeneity are undeniable, but NGS is still expensive and unaffordable for developing countries. The SSM remains the gold standard for sequencing short fragments of DNA (<1000 bases), previously amplified by PCR.\n\n\nAndersen-Tawil Syndrome: a rare disease\n\nATS, also named Long QT syndrome type 7, is described in the Online Mendelian Inheritance in Man database (OMIM) as a multisystem autosomal dominant channelopathy syndrome caused by a heterozygous mutation in the KCNJ2 gene (OMIM Entry - *600681) on chromosome 17q24.3. Periodic paralysis, ventricular arrhythmia, and distinctive dysmorphic features characterize it. 
Until 2015, the only gene thought to be affected was the potassium voltage-gated channel subfamily J member 2 gene (KCNJ2)5, which encodes the alpha subunit protein of the Kir2.1 channel, composed of tetramers6. Mutations in this gene have been reported in 60% of clinically suspected cases (which are classified as ATS type 1)7. Fewer than 200 cases with the KCNJ2 gene affected have been described worldwide since the discovery of the first mutations in 20018,9. In 2014, a novel variant (c.472A>G; p.Thr158Ala) in a second gene, KCNJ5, was associated with ATS in one Japanese patient from a cohort of 21 patients who had previously screened negative for mutations in KCNJ2. The KCNJ5 gene protein (potassium channel Kir 3.4 protein) interacts with the KCNJ2 protein, leading to a dominant negative effect in the channel formed, related to the ATS phenotype10. No additional cases of KCNJ5 mutations in independent series of ATS patients have been reported; the frequency of KCNJ5 mutations in ATS has to be determined in the future. With the widespread use of NGS, it is possible that in the next years we could discover new genes that explain part of the genetic heterogeneity observed in ATS, clarifying some of the 40% of clinically suspected negative cases that do not have a mutation in KCNJ2 (nowadays classified as ATS type 2)11.\n\n\nA special gene: KCNJ2\n\nThe KCNJ2 gene has particular structural characteristics that make it attractive for direct SSM, such as a relatively short sequence of 17,510 base pairs (bp), and a coding region of close to 5,400 bp with two exons (exon 1=166 bp and exon 2=5220 bp). 
Also, half of the mutations are located in the C-terminal cytosolic domain, with a mutational hotspot at residue Arg218; as we have addressed before, this gene explains the phenotype in 60% of ATS cases fulfilling the clinical criteria.\n\n\nATS in mestizo populations: the first description of ATS in the Mexican population\n\nFifteen years have passed since the first family with ATS in a Mexican population was reported by Canun et al.12 Recently, a second proband was diagnosed in a different Mexican family, finding the mutation p.Arg218Trp in KCNJ213.\n\n\nMultidisciplinary approach: a productive collaboration\n\nA multidisciplinary approach is extremely useful to study suspected cases of hereditary sudden death syndrome. For ATS, the team must include a cardiologist, a neurologist and a clinical geneticist. It is very important that each of these physicians has expertise in the evaluation of subjects with sudden cardiac death syndrome. After a common agreement on suspicion of ATS, the whole coding region and intron boundaries of the non-coding region of KCNJ2 could be sequenced with the SSM14.\n\n\nClinical ATS data that needs to be considered\n\nPhenotypically, Canun et al11 suggested that recognition of the facial and limb dysmorphism (broad forehead, bushy eyebrows, small eyes, bulbous nose, malar and mandibular hypoplasia, crowded teeth, clinodactyly of the 5th finger and cutaneous syndactyly of the 2–3 toes) associated with ATS could help establish a correct ATS diagnosis. We believe it is important that all cardiologists dealing with subjects with ventricular arrhythmias, specifically frequent ventricular premature beats in bigeminy, are aware of such distinctive phenotypic characteristics and also search for muscular disorders (weakness in limbs or periodic paralysis).\n\n\nSanger sequencing is still a useful method\n\nThe SSM is nearly 40 years old, and it remains a useful molecular tool for genetic testing. 
It has limitations: it is time-consuming, has limited use for long DNA fragments, and is unable to detect sequences outside the targeted region. We propose using “directed” SSM as a first-line approach for the diagnosis of suspected cases of ATS in places where NGS is not an option for genetic testing (due to low availability or high cost).",
"appendix": "Author contributions\n\n\n\nMFM and ATS conceptualized the article. ATS and MFM drafted the first and second version of the article. MFM, ATS and DECB review and approved the article.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nAcknowledgments\n\nThe authors are grateful to their patients.\n\n\nReferences\n\nSanger F, Nicklen S, Coulson AR: DNA sequencing with chain-terminating inhibitors. Proc Natl Acad Sci U S A. 1997; 74(12): 5463–5467. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSikkema-Raddatz B, Johansson LF, de Boer EN, et al.: Targeted next-generation sequencing can replace Sanger sequencing in clinical diagnostics. Hum mutat. 2013; 34(7): 1035–1042. PubMed Abstract | Publisher Full Text\n\nAlSenaidi KS, Wang G, Zhang L, et al.: Long QT syndrome, cardiovascular anomaly, and findings in ECG-guided genetic testing. IJC Heart & Vessels. 2014; 4: 122–128. Publisher Full Text\n\nHu D, Barajas-Martínez H, Terzic A, et al.: ABCC9 is a novel Brugada and early repolarization syndrome susceptibility gene. Int J Cardiol. 2014; 171(3): 431–442. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLu CW, Lin JH, Rajawat YS, et al.: Functional and clinical characterization of a mutation in KCNJ2 associated with Andersen-Tawil syndrome. J Med Genet. 2006; 43(8): 653–59. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLopes CM, Zhang H, Rohacs T, et al.: Alterations in conserved Kir channel-PIP2 interactions underlie channelopathies. Neuron. 2002; 34(6): 933–44. PubMed Abstract | Publisher Full Text\n\nMárquez MF, Totomoch-Serra A, Vargas-Alarcón G, et al.: Andersen-Tawil syndrome: a review of its clinical and genetic diagnosis with emphasis on cardiac manifestations. Arch Cardiol Mex. 2014; 84(4): 278–285. 
PubMed Abstract | Publisher Full Text\n\nPlaster NM, Tawil R, Tristani-Firouzi M, et al.: Mutations in Kir2.1 cause the developmental and episodic electrical phenotypes of Andersen’s syndrome. Cell. 2001; 105(4): 511–519. PubMed Abstract | Publisher Full Text\n\nTristani-Firouzi M, Jensen JL, Donaldson MR, et al.: Functional and clinical characterization of KCNJ2 mutations associated with LQT7 (Andersen syndrome). J Clin Invest. 2002; 110(3): 381–388. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKokunai Y, Nakata T, Furuta M, et al.: A Kir3.4 mutation causes Andersen-Tawil syndrome by an inhibitory effect on Kir2.1. Neurology. 2014; 82(12): 1058–1064. PubMed Abstract | Publisher Full Text\n\nNguyen HL, Pieper GH, Wilders R: Andersen-Tawil syndrome: clinical and molecular aspects. Int J Cardiol. 2013; 170(1): 1–16. PubMed Abstract | Publisher Full Text\n\nCanún S, Pérez N, Beirana LG: Andersen syndrome autosomal dominant in three generations. Am J Med Genet. 1999; 85(2): 147–56. PubMed Abstract | Publisher Full Text\n\nMárquez MF, Totomoch-Serra A, Burgoa JA, et al.: Abnormal electroencephalogram, epileptic seizures, structural congenital heart disease and aborted sudden cardiac death in Andersen-Tawil syndrome. Int J Cardiol. 2015; 180: 206–209. PubMed Abstract | Publisher Full Text\n\nWard LD, Kellis M: Interpreting noncoding genetic variation in complex traits and human disease. Nat Biotechnol. 2012; 30(11): 1095–1106. PubMed Abstract | Publisher Full Text | Free Full Text"
}
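As a rough illustration of why the short KCNJ2 coding region suits PCR-amplified Sanger reads, the sketch below tiles the two exon lengths quoted in the article into amplicons. The 700 bp usable read length and 50 bp overlap are assumed values chosen for illustration, not figures from the article:

```python
import math

# Rough amplicon count for "directed" Sanger coverage of the KCNJ2 coding
# exons cited in the article (exon 1 = 166 bp, exon 2 = 5,220 bp).
# USABLE_READ_BP and OVERLAP_BP are ASSUMED illustration values.
USABLE_READ_BP = 700   # well under the <1000-base limit quoted for the SSM
OVERLAP_BP = 50        # overlap between adjacent tiled amplicons

def amplicons_needed(exon_bp, read_bp=USABLE_READ_BP, overlap_bp=OVERLAP_BP):
    """Number of tiled PCR amplicons needed to cover one exon."""
    if exon_bp <= read_bp:
        return 1
    step = read_bp - overlap_bp  # new bases covered by each extra amplicon
    return 1 + math.ceil((exon_bp - read_bp) / step)

total = amplicons_needed(166) + amplicons_needed(5220)
print(f"{amplicons_needed(166)} + {amplicons_needed(5220)} = {total} amplicons")
```

Under these assumptions the whole coding region fits in fewer than ten Sanger reactions, which is the practical sense in which the gene is "attractive for directed SSM".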
|
[
{
"id": "24341",
"date": "19 Jul 2017",
"name": "Oscar Campuzano",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nIt is a well written manuscript focused on use of Sanger technology in genetic diagnosis. Currently, despite to NGS technology allows a cost-effective analysis of hundreds genes in a reduced time, Sanger sequencing remains as gold standard for validation of variants identified using NGS, segregation of variants in family members, and analysis of small genes, such as KCNJ2.\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Yes\n\nAre arguments sufficiently supported by evidence from the published literature? Yes\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Yes",
"responses": []
},
{
"id": "26267",
"date": "06 Oct 2017",
"name": "Estelle Gandjbakhch",
"expertise": [
"Reviewer Expertise Genetics of cardiomyopathies and channelopathies"
],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis opinion article by Totomoch-Serra et al.refers to the usefulness of directed sequencing of KCNJ2 by Sanger method for molecular diagnosis of Andersen-Tawil syndrome (ATS), a rare QT long syndrome associating ventricular arrhythmias, prominent U wave, dysmorphic facial or skeletal features and periodic paralysis.\n\nSanger sequencing remains a useful method for directed sequencing of specific clinical syndromes with low genetic heterogeneity as ATS, especially in places where NGS is not accessible. In another hand, NGS is particularly useful and cost effective in inherited cardiomyopathies or channelopathies associated with high genetic heterogeneity (like dilated cardiomyopathy) or caused by mutations in larges genes (as RYR2 or TTN) where Sanger sequencing remains a long and expensive process. NGS can also be useful at second level when directed molecular diagnosis with Sanger method is negative.\nIn case where ATS is suspected, the approach described by the authors with directed sanger sequencing at first line followed by NGS in case of negative screening is a cost-effective approach for ATS molecular diagnosis.\n\nI would only suggest describing in more details the clinical features (including symptoms or family history) suggestive of ATS diagnosis that should direct to KCNJ2 sequencing at first line. Addition of a typical ECG could also be useful for the reader. 
Comparison of costs between NGS and Sanger sequencing could also be interesting.\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Yes\n\nAre arguments sufficiently supported by evidence from the published literature? Yes\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Yes",
"responses": []
},
{
"id": "26270",
"date": "16 Oct 2017",
"name": "Coeli M. Lopes",
"expertise": [
"Reviewer Expertise Electrophysiology of ion channels"
],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe opinion article by Totomoch-Serra and colleagues suggests that a “targeted” Sanger Sequencing Method (SSM) is an appropriate first-line approach to diagnose Andersen-Tawil syndrome (ATS). Although Next-Generation Sequencing (NGS) technology, especially in cases of genetic heterogeneity, is gradually replacing the traditional SSM as the method of choice for genetic testing, its high cost and unavailability often preclude its routine use, particularly in developing countries. The authors maintain that for conditions that require sequencing of short fragments of DNA (<1000 bases), previously amplified by PCR, SSM remains the gold standard. In ATS, where 60% of the reported cases involve mutations of the KCNJ2 gene (that codes for the Kir2.1 channel protein) exhibiting autosomal dominant Mendelian inheritance, half the mutations are located in the C-terminal cytosolic domain with a hotspot at Arg218 (ATS type 1). Characteristic developmental abnormalities coupled to specific ventricular arrhythmias and/or muscular disorders can identify ATS patients who could then undergo SSM to identify the putative causative mutations in KCNJ2 with a potential 60% success. NGS could then be applied if the patient falls within the 40% of the cases that are not due to KCNJ2 (ATS type 2) with the hope to discover additional genes responsible for the disease and determine their frequency of occurrence. 
The opinion expressed by the authors is reasonable and advisable to physicians especially in cases where cost and access to NGS are problematic. Yet, it is likely a matter of time before cost and access to NGS improve and limit the choice of SSM for genetic testing. SSM has certainly served us well for the past 40 years and continues to serve us well in cases like ATS type 1 but it is inevitable that NGS will eventually make it obsolete. Until then, the authors make their case convincingly through this opinion article.\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Yes\n\nAre arguments sufficiently supported by evidence from the published literature? Yes\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-1016
|
https://f1000research.com/articles/6-542/v1
|
21 Apr 17
|
{
"type": "Opinion Article",
"title": "What do hypnotics cost hospitals and healthcare?",
"authors": [
"Daniel F. Kripke"
],
"abstract": "Hypnotics (sleeping pills) are prescribed widely, but the economic costs of the harm they have caused have been largely unrecognized. Randomized clinical trials have proven that hypnotics increase the incidence of infections. Likewise, hypnotics increase the incidence of major depression and cause emergency admissions for overdoses and deaths. Epidemiologically, hypnotic use is associated with cancer, falls, automobile accidents, and markedly increased overall mortality. This article considers the costs to hospitals and healthcare payers of hypnotic-induced infections and other severe consequences of hypnotic use. These are a probable cause of excessive hospital admissions, prolonged lengths of stay at increased costs, and increased readmissions. Accurate information is scanty, for in-hospital hypnotic benefits and risks have scarcely been studied -- certainly not the economic costs of inpatient adverse effects. Healthcare costs of outpatient adverse effects likewise need evaluation. In one example, use of hypnotics among depressed patients was strongly associated with higher healthcare costs and more short-term disability. A best estimate is that U.S. costs of hypnotic harms to healthcare systems are on the order of $55 billion, but conceivably as low as $10 billion or as high as $100 billion. More research is needed to more accurately assess unnecessary and excessive hypnotics costs to providers and insurers, as well as financial and health damages to the patients themselves.",
"keywords": [
"hospital costs",
"health care costs",
"hypnotics and sedatives",
"infection",
"depressive disorders",
"overdose",
"mortality",
"epidemiology"
],
"content": "Briefly summarizing the risks of hypnotics\n\nEvidence of hypnotic harms is growing - the American Geriatrics Society has recommended that the popular hypnotic drugs be avoided for older patients, who are almost half of hospital patients1. Similarly, the American College of Physicians (ACP) recommended that cognitive behavioral therapy should be the first-choice treatment for insomnia, and the ACP guideline expressed doubt on whether hypnotics were worth the risks, even as secondary choices for short-term use2,3.\n\nThe more severe risks of hypnotic drugs are rarely recognized.\n\nRandomized controlled trials prove:\n\na) Hypnotics increase incidence of infections, with a mean 44% increase in controlled trials4. Moreover, infections have a causal role in depression and suicide5.\n\nb) Hypnotics increase the incidence of major depression by more than two-fold6.\n\nDeath certificates prove hypnotics and other benzodiazepine agonists are involved in about 1 out of 3 U.S. opiate overdose deaths and may be present in about half of suicides5.\n\nEpidemiologic studies demonstrate more risks associated with hypnotics:\n\na) In-hospital falls, e.g. over 3 times as many falls have been observed among patients receiving zolpidem7. Outpatient falls are also increased.\n\nb) Hypnotic use is associated with up to double the motor vehicle crash rate8.\n\nc) Emergency room visits related to hypnotic ingestions have been increasing9.\n\nd) Rates of specific cancers, especially lung and esophagus, have multiplied among hypnotic users5.\n\ne) According to electronic records systems in 5 countries, overall mortality has increased 2-fold to 4-fold among patients receiving hypnotics, after adjustments for comorbid risk factors and confounders5.\n\nHypnotic harms to patients have been documented in more detail elsewhere, with critiques on the strengths and limitations of the evidence5,10. 
Here the focus is on financial harms to health systems.\n\nWhat has been missing from current documentation is a detailed report on the cost of hypnotics to hospitals, insurers, and managed care, where the minimal benefits are weighed against the severe harms. Factual economic data have been so sparse that we must use fragmentary evidence and some speculation to estimate how much hypnotics cost the U.S. medical systems. Additional studies are needed before precise cost estimates can be made.\n\n\nThe benefits of hypnotics are trivial or absent, without documented cost savings\n\nAn authoritative systematic review biased towards hypnotics, limited to subjective outcomes of outpatient insomnia patients, restricted to published controlled trials, and including studies of greater-than-recommended doses, found low-strength evidence for weak benefits for two “Z” hypnotics used mainly in doses higher than recommended3,5. Insufficient evidence of any benefit was found for the other benzodiazepine agonists3. Moreover, the authors stated that, “it is not known how many minutes’ change in SOL, TST, or WASO indicate clinically meaningful improvement3.” In other words, it is not known if the weak benefits reported were clinically meaningful even at high doses that are considered unsafe. A definitive review of objective polysomnographic data that included data from unpublished trials concluded that hypnotics produced little or no objective improvements in total sleep in recommended doses and no verified overall health benefits5.\n\n\nAvailable information about the cost of hypnotic harms to hospitals and healthcare\n\nUp to now, medical literature has projected costs of insomnia harms but has hardly mentioned what the harms from treating with hypnotics may cost. The presence of insomnia is obviously confounded by association with prescription of hypnotics, though not as closely as one might expect5,10. 
In a study of 55,000,000 managed care patients, only 31% of those receiving hypnotic medication had a diagnosis of insomnia11. The fraction of insomnia patients receiving hypnotics is quite variable depending on the patient samples and definitions of insomnia. Another complication is that using hypnotics may actually cause insomnia12, at least following hypnotic withdrawal. Consequences of insomnia such as absenteeism, automobile crashes, and increased medical costs were estimated to be costing the U.S. over $15 billion in 199313. Several more recent cost estimates have been far higher. These studies generally made little attempt to differentiate costs caused by insomnia itself from costs of confounding comorbidities and correlated hypnotic harms14. Several insomnia studies were sponsored by hypnotic manufacturers or others with interests in attributing the costs to insomnia.\n\nSome studies have attributed costs associated with hypnotic prescribing to insomnia, ignoring that less than half of the prescriptions are given to patients with diagnosed insomnia11,15. One study used a prescription claim for a hypnotic as an explicit marker for insomnia, in order to compare cohorts with and without insomnia among 87,461 depressed patients. Hypnotic use was associated with more comorbidity-adjusted hospitalization, more frequent ER visits, 12-month healthcare costs that were $3,918 higher, and more short-term disability16. For the authors to attribute these cost correlates of hypnotic prescription claims to insomnia (or underlying depression) and not to the hypnotics themselves seemed illogical. A similarly flawed study of a national sample of insured workers found yearly health costs were $936 higher among those with insomnia, but 2/3 of the insomnia cohort were defined by receiving hypnotics without having received a recorded diagnosis of insomnia17. 
Another nationwide study found that insomnia was correlated with prolonged hospital stay, but lacked the data to determine whether length of stay was more closely correlated with hypnotic use than with insomnia diagnoses18. A study comparing insomnia patients before and after treatment with controls found that health costs were 85% higher among insomnia patients treated with sedatives/hypnotics than among insomnia patients who were not treated with hypnotics19. The authors attributed this difference to more serious underlying conditions among those treated, without considering the possibility that the treatment itself was increasing costs. A study in Taiwan found that, compared with a cohort without insomnia that used no sedatives or hypnotics, a comorbidity-matched cohort with a diagnosis of insomnia suffered more acute myocardial infarctions and strokes, but only among those taking hypnotics or other sedatives20. This may suggest that, after control for insomnia, it was the sedatives/hypnotics causing the myocardial infarctions and strokes. Other studies relating insomnia to health care costs have explicitly found greater healthcare costs among those given prescription treatment for insomnia21,22. A Mayo Clinic study found that hospital patients who received zolpidem had a 2% longer length of stay (not statistically significant), possibly due to their tripled hazard of falls7. Only 32% of these patients had diagnosed insomnia.\n\nAlthough a 2% average increase in length of stay might appear small, such a mean increase would cost billions of dollars if extended throughout the United States. Another study found that in-hospital benzodiazepine prescriptions were associated with 23% higher readmission rates23. 
We must recognize that without randomized placebo controls, none of these studies can offer definitive proof of whether hypnotics or insomnia cause increased health costs.\n\nIt is ironic that several of the studies that document hypnotic prescriptions as being associated with increased healthcare costs were sponsored by hypnotic manufacturers, who had intended to attribute these costs to insomnia.\n\n\nIn-hospital hypnotic cost benefits and risks\n\nFifty years ago, it was routine to prescribe an “as-needed” hypnotic with almost every hospital admission. In 1982, Perry and Wu reviewed 331 charts of a distinguished teaching hospital and reported that, “Most surgical patients (96%) and a large number of medical patients (46%) had hypnotic agents prescribed on admission without a recorded reason, without the patient’s request or knowledge, and without a statement in the medical chart indicating whether the therapeutic objectives were met24.” Personal communications indicate that routine hypnotic prescribing without evidence of benefit is still a common practice in many of the most renowned academic medical centers. For example, the recent Mayo Clinic report noted that only 32% of patients given zolpidem had an insomnia diagnosis7.\n\nAn up-to-date systematic review of 15 in-hospital controlled trials of hypnotic drugs going back to 1983 found that only one of the included trials (of intravenous dexmedetomidine) showed a convincing advantage for sleep efficiency25, even though several of the studies involved such intravenous drugs. Out of the 15 studies, 5 showed some evidence that oral benzodiazepines reduced sleep latency, but most of the treated patients still had abnormal sleep latencies exceeding 30 minutes. 
The review concluded with “insufficient evidence to suggest that pharmacotherapy improves the quality or quantity of sleep in hospitalized patients suffering from poor sleep25,” and no other health or cost benefits were documented.\n\nThe controlled hospital trials were not designed to assess the costs of hypnotic harms; indeed, I know of no formal studies on the health cost of harms produced by in-hospital administration of hypnotics. It is hard to imagine how drugs that are known to increase the incidence of infection and depression and are strongly associated with in-hospital falls could fail to increase hospital costs.\n\n\nHypnotic risks cannot be justified when the drugs are given to patients without diagnosed insomnia\n\nAs previously mentioned, most prescriptions for hypnotics are given to patients without diagnosed insomnia, even though insomnia is the sole approved indication for most hypnotics. Zolpidem accounts for over 70% of the contemporary U.S. hypnotic market. Most zolpidem prescriptions have been given to patients who had one or more hazardous contraindications, such as concomitant use of opiates or other sedatives, age over 60, alcoholism, history of depression, or use of antidepressants15,26,27. Most outpatient hypnotic prescriptions have been renewals beyond recommended durations at above-recommended doses15,27,28. This lack of indication or documented benefit is characteristic of hypnotic prescribing, and it is hard to understand what could justify the risks and costs of supplying the benzodiazepine-agonist hypnotics.\n\n\nDetailed estimates of hypnotic harm costs\n\nExcess mortality is the most expensive harm caused by hypnotics. It is possible to loosely estimate the related costs. The 2006–2008 estimate from the Geisinger Health Study supplement indicated that hypnotics cause roughly 18% of all adult deaths29. Considering that about 27% of Medicare costs (U.S. 
government payments for healthcare of people aged mainly over 65) are incurred in the last year of life, mainly shortly before death, the costs of hypnotics to Medicare in 2015 caused by increased mortality could be roughly $30 billion: 0.18 × 0.27 × $618.7 billion (the expenditure figure pulled from a Google search with the terms “$618.7 billion Medicare”). Current U.S. hypnotic prescriptions may be about as frequent as they were in 2006–2008, but Medicare expenditures in 2017 would be a bit higher. Not all Medicare expenditures in the year before death would be related to damage caused by hypnotics, but these would be counterbalanced by hypnotic-related expenditures incurred before the final year of life. Moreover, a substantial portion of the medical costs would have fallen on payers other than Medicare, such as Medicaid, a government health provider for indigent people of all ages. The number of deaths statistically associated with hypnotic use may greatly overestimate the deaths causally attributable to hypnotics, but the attributable deaths may likewise be underestimated10. This $30 billion yearly Medicare cost estimate is quite possibly inaccurate, but it represents, in my opinion, the best approximation of the cost magnitude of hypnotic-caused mortality, based on current evidence.\n\nThe costs of hypnotic-induced infections cannot be accurately estimated. We can gain perspective on hospital infection costs from 2013 data on U.S. hospital costs for the treatment of septicemia and pneumonia alone, which together were estimated to reach about $33 billion30. Between 5% and 10% of these costs came from readmissions. Hypnotics have been shown to cause infections; for example, benzodiazepines were associated with 54% higher rates of pneumonia31. Of course, not all infections treated in hospitals are caused by hypnotics or arose in-hospital. I might imagine that hypnotic-caused, inpatient-treated infections could cost anywhere up to $20 billion per year. 
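The rough Medicare mortality arithmetic above (0.18 × 0.27 × $618.7 billion) can be written out explicitly. This is only a sketch of the article's own back-of-envelope inputs, not verified data:

```python
# Sketch of the back-of-envelope Medicare mortality-cost estimate described above.
# All three inputs are the article's own rough figures, not verified data.
hypnotic_death_share = 0.18   # estimated share of adult deaths associated with hypnotics (Geisinger)
last_year_cost_share = 0.27   # share of Medicare spending incurred in the last year of life
medicare_total_bn = 618.7     # 2015 Medicare expenditure, in billions of dollars

mortality_cost_bn = hypnotic_death_share * last_year_cost_share * medicare_total_bn
print(f"~${mortality_cost_bn:.1f} billion per year")  # ~$30.1 billion per year
```

The combined estimate later in this section simply adds rough component figures of this kind, which is why its stated uncertainty range is so wide.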
Also, hospital-acquired infections would not be included in Medicare payments but would fall on other funding sources.\n\nHypnotics increase the incidence of depression5. Estimating that about 14.8 million people in the U.S. suffer from major depressive disorders each year, that 5.8% of them take hypnotics16,32, and that the added healthcare cost is around $3,918 per person, the medical costs of depression attributable to hypnotics would add up to about $3.4 billion per year.\n\nThe medical costs of falls among U.S. adults aged 65 or older were estimated at about $32 billion for 201533. Of these costs, around 63% covered hospitalizations, 21% emergency department visits, and 16% outpatient visits. The average cost per fall was about $30,000. Unfortunately, no data are available to estimate the total numbers of outpatient or inpatient falls attributable to hypnotics in the U.S. However, among patients hospitalized at the Mayo Clinic in 2010, I infer from the number of falls among patients who received zolpidem, their adjusted hazard ratio, and the total falls, that 29% of all inpatient falls could be attributable to zolpidem, even though only 11.8% of patients received the drug7. Presumably, the costs of in-hospital falls were not included in Medicare charges.\n\nAutomobile crashes in 2010 were estimated to generate $23 billion in U.S. medical costs34. It is known that people who take sedatives such as zolpidem have higher crash rates. In Washington State health plan data8, sedative users had around twice the crash rate of non-users after control for comorbidities. Nationally, between 3% and 10% of adults take a hypnotic each year, so we might infer that 3–10% of crashes could be caused by hypnotics, costing roughly $0.6–2.0 billion per year in medical costs.\n\nNational U.S. costs of cancer medical care are projected to reach $158 billion in 202035. 
If we use the Geisinger Health Study29 as a model, $1 to $3 billion of cancer care costs could be associated with hypnotic use each year.\n\nCombining costs of excess mortality, infections, depression, falls, automobile crashes, and cancer, my best estimate is that hypnotics cost hospitals and medical payers somewhere around $55 billion per year, acknowledging an uncertainty range that falls between $10 billion and $100 billion. Similarly, assuming that of about 250 million adults in the U.S., 3% to 10% take hypnotics in a given year, and estimating yearly costs related to the hypnotics to range between $936 and $3,918 per user16,17, we can estimate the costs to fall between $6.3 and $95.4 billion, consistent with the cumulative cost estimate taken from harm components. Wherever the true costs may fall within that $10 to $100 billion range, they are great enough that studies are needed to assess the costs more reliably.\n\n\nWhat more can be learned?\n\nWith the recent expansion of electronic medical records, many hospital systems and insurance systems already have sufficient data in their existing electronic records to estimate the outcomes and costs associated with hypnotic prescribing, including hospital admissions, infections, falls, incident delirium and dementia, lengths of stay, and readmissions. Such available data could give us a much clearer idea of costs associated with in-hospital hypnotic prescribing, but control for comorbidities and other confounders could not assure an accurate estimate of the causal component of associated costs.\n\nFortunately, it is becoming increasingly possible to utilize genetic data and “Mendelian randomization” to effectively compare groups who received hypnotics due to random genetic propensities with those who did not36. 
As personalized medicine becomes more widespread, with its genotyping and whole-genome analyses, an increasing number of hospital systems will have accumulated sufficient genetic data to isolate the causal contribution of hypnotics to infection, hospitalization, depression, hospital readmissions, cancer, and mortality.\n\nFor ethical, practical, and liability reasons, it appears unlikely that randomized hypnotic-versus-placebo trials large enough to demonstrate the costs of hypnotic harms accurately will ever be carried out. Fortunately, an alternative randomizing strategy relying on patient choice after education and patient empowerment has been suggested: such studies might be integrated into the wellness-promotion and cost-reduction programs of managed care organizations37.\n\nUntil more reliable data are assembled, managed care and insurance administrators would be wise to conclude from the available evidence that the costs of hypnotic harms exceed any benefits.",
"appendix": "Competing interests\n\n\n\nThe author has no financial interests or conflicts to declare. The author was the Co-Director of Research at the Scripps Clinic Viterbi Family Sleep Center until May, 2016. Since the 1979 publication of hypnotics’ epidemiology from the American Cancer Society CPSI study, the author has been a frequent critic of hypnotics’ risks and benefits, especially through his non-profit internet web site, www.DarkSideOfSleepingPills.com. He has advised the USA Food and Drug Administration to take certain actions to reduce hypnotic risks (Petition available at https://www.regulations.gov/docket?D=FDA-2015-P-3959), and related litigation has arisen to encourage FDA action.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nBy the American Geriatrics Society 2015 Beers Criteria Update Expert Panel: American Geriatrics Society 2015 Updated Beers Criteria for Potentially Inappropriate Medication Use in Older Adults. J Am Geriatr Soc. 2015; 63(11): 2227–46.\n\nQaseem A, Kansagara D, Forciea MA, et al.: Management of Chronic Insomnia Disorder in Adults: A Clinical Practice Guideline From the American College of Physicians. Ann Intern Med. 2016; 165(2): 125–33.\n\nWilt TJ, MacDonald R, Brasure M, et al.: Pharmacologic Treatment of Insomnia Disorder: An Evidence Report for a Clinical Practice Guideline by the American College of Physicians. Ann Intern Med. 2016; 165(2): 103–12.\n\nJoya FL, Kripke DF, Loving RT, et al.: Meta-analyses of hypnotics and infections: eszopiclone, ramelteon, zaleplon, and zolpidem. J Clin Sleep Med. 2009; 5(4): 377–83.\n\nKripke DF: Hypnotic drug risks of mortality, infection, depression, and cancer: but lack of benefit [version 2; referees: 2 approved]. F1000Res. 2017; 5: 918. 
\n\nKripke DF: Greater incidence of depression with hypnotic use than with placebo. BMC Psychiatry. 2007; 7: 42.\n\nKolla BP, Lovely JK, Mansukhani MP, et al.: Zolpidem is independently associated with increased risk of inpatient falls. J Hosp Med. 2013; 8(1): 1–6.\n\nHansen RN, Boudreau DM, Ebel BE, et al.: Sedative hypnotic medication use and the risk of motor vehicle crash. Am J Public Health. 2015; 105(8): e64–e69.\n\nBush DM: Emergency Department Visits for Adverse Reactions Involving the Insomnia Medication Zolpidem. The CBHSQ Report. Rockville (MD): Substance Abuse and Mental Health Services Administration (US); 2013.\n\nKripke DF: Mortality risk of hypnotics: strengths and limits of evidence. Drug Saf. 2016; 39(2): 93–107.\n\nJhaveri M, Seal B, Pollack M, et al.: Will insomnia treatments produce overall cost savings to commercial managed-care plans? A predictive analysis in the United States. Curr Med Res Opin. 2007; 23(6): 1431–43.\n\nKripke DF: Hypnotics cause insomnia: evidence from clinical trials. Sleep Med. 2014; 15(9): 1168–9.\n\nRoth T: An overview of the report of the national commission on sleep disorders research. Eur Psychiatry. 1995; 10(Suppl 3): 109s–13s.\n\nLéger D, Bayon V: Societal costs of insomnia. Sleep Med Rev. 2010; 14(6): 379–89.\n\nFord ES, Wheaton AG, Cunningham TJ, et al.: Trends in outpatient visits for insomnia, sleep apnea, and prescriptions for sleep medications among US adults: Findings from the National Ambulatory Medical Care Survey 1999–2010. Sleep. 2014; 37(8): 1283–93. 
\n\nTian H, Abouzaid S, Gabriel S, et al.: Resource utilization and costs associated with insomnia treatment in patients with major depressive disorder. Prim Care Companion CNS Disord. 2012; 14(5): pii: PCC.12m01374.\n\nPollack M, Seal B, Joish VN, et al.: Insomnia-related comorbidities and economic costs among a commercially insured population in the United States. Curr Med Res Opin. 2009; 25(8): 1901–11.\n\nGamaldo AA, Beydoun MA, Beydoun HA, et al.: Sleep Disturbances among Older Adults in the United States, 2002–2012: Nationwide Inpatient Rates, Predictors, and Outcomes. Front Aging Neurosci. 2016; 8: 266.\n\nAnderson LH, Whitebird RR, Schultz J, et al.: Healthcare utilization and costs in persons with insomnia in a managed care population. Am J Manag Care. 2014; 20(5): e157–e165.\n\nHsu CY, Chen YT, Chen MH, et al.: The Association Between Insomnia and Increased Future Cardiovascular Events: A Nationwide Population-Based Study. Psychosom Med. 2015; 77(7): 743–51.\n\nSarsour K, Kalsekar A, Swindle R, et al.: The association between insomnia severity and healthcare and productivity costs in a health plan sample. Sleep. 2011; 34(4): 443–50.\n\nByles JE, Mishra GD, Harris MA, et al.: The problems of sleep for older women: changes in health outcomes. Age Ageing. 2003; 32(2): 154–63.\n\nPavon JM, Zhao Y, McConnell E, et al.: Identifying risk of readmission in hospitalized elderly adults through inpatient medication exposure. J Am Geriatr Soc. 2014; 62(6): 1116–21.\n\nPerry SW, Wu A: Rationale for the use of hypnotic agents in a general hospital. Ann Intern Med. 1984; 100(3): 441–6. 
\n\nKanji S, Mera A, Hutton B, et al.: Pharmacological interventions to improve sleep in hospitalised adults: a systematic review. BMJ Open. 2016; 6(7): e012108.\n\nMoore TJ: ISMP Quarter Watch: Monitoring FDA MedWatch Reports. Philadelphia PA, ISMP Quarter Watch; Accessed 5-6-2015.\n\nBertisch SM, Herzig SJ, Winkelman JW, et al.: National use of prescription medications for insomnia: NHANES 1999–2010. Sleep. 2014; 37(2): 343–9.\n\nKaufmann CN, Spira AP, Depp CA, et al.: Continuing Versus New Prescriptions for Sedative-Hypnotic Medications: United States, 2005–2012. Am J Public Health. 2016; 106(11): 2019–25.\n\nKripke DF, Langer RD, Kline LE: Hypnotics' association with mortality or cancer: a matched cohort study. BMJ Open. 2012; 2(1): e000850.\n\nTorio CM, Moore BJ: National Inpatient Hospital Costs: The Most Expensive Conditions by Payer, 2013: Statistical Brief #204. Rockville MD, AHRQ Healthcare Cost and Utilization Project (HCUP). Accessed 2006.\n\nObiora E, Hubbard R, Sanders RD, et al.: The impact of benzodiazepines on occurrence of pneumonia and mortality from pneumonia: a nested case-control and survival analysis in a population-based cohort. Thorax. 2012; 68(2): 163–70.\n\nBrower KJ, McCammon RJ, Wojnar M, et al.: Prescription sleeping pills, insomnia, and suicidality in the National Comorbidity Survey Replication. J Clin Psychiatry. 2011; 72(4): 515–21.\n\nBurns ER, Stevens JA, Lee R: The direct costs of fatal and non-fatal falls among older adults - United States. J Safety Res. 2016; 58: 99–103. 
\n\nBlincoe LJ, Miller TR, Zoloshnja E, et al.: The economic and societal impact of motor vehicle crashes, 2010 (Revised). Washington, D.C.: National Highway Traffic Safety Administration; 2015.\n\nMariotto AB, Yabroff KR, Shao Y, et al.: Projections of the cost of cancer care in the United States: 2010–2020. J Natl Cancer Inst. 2011; 103(2): 117–128.\n\nBurgess S, Malarstig A: Using Mendelian randomization to assess and develop clinical interventions: limitations and benefits. J Comp Eff Res. 2013; 2(3): 209–12.\n\nTannenbaum C, Martin P, Tamblyn R, et al.: Reduction of inappropriate benzodiazepine prescriptions among older adults through direct patient education: the EMPOWER cluster randomized trial. JAMA Intern Med. 2014; 174(6): 890–8."
}
|
[
{
"id": "22745",
"date": "25 May 2017",
"name": "Michael A. Grandner",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nOverall this is an important contribution. A few relatively minor concerns:\n1. The term \"proven\" is used quite a bit. It may be better to use the more qualified \"shown\" or \"demonstrated.\"\n2. The paper by Moloney and colleagues (2011) demonstrates quite clearly that rates of hypnotic use are high, not really tied to insomnia diagnosis, and often chronic. This may aid in making the case in this article.\n3. In addition to mentioning the ACP guideline, even the recent AASM guidelines for prescription hypnotic use state that evidence for using hypnotics is overall low.\n4. There is often causal language that may be more strong than the evidence warrants. For example, it is not clear that the role of hypnotics in suicides is causal. Certainly, anecdotal evidence suggests that people overdose on sleeping pills, but the cited report has a whole paragraph about suicides that makes mostly circumstantial connections. Clearly there is a relationship, but the causal role of hypnotics is not so clearly demonstrated.\n5. If hypnotics don't produce benefits, then why do people keep taking them? For example, even if the medications do not directly impact on sleep, there may be evidence that the effects on suppressing memory may outweigh the effects on sleep. So they may not actually improve sleep much but reduce the memory or perception of sleeplessness. 
If this reduces the discomfort associated with insomnia (i.e., it removes your memory for the events) then it could potentially have some sort of reinforcing benefit in that regard. Just saying that there are no benefits may be a bit oversold.\n6. The \"Detailed estimates\" section does not actually include detailed estimates. Rather, it provides several general estimates for a number of specific pathways by which hypnotics could induce harms. Perhaps this section could be better labeled.\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Yes\n\nAre arguments sufficiently supported by evidence from the published literature? Yes\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Partly",
"responses": [
{
"c_id": "2727",
"date": "26 May 2017",
"name": "Daniel F. Kripke",
"role": "Author Response",
"response": "Dr. Grandner has provided several very helpful suggestions, which will be incorporated in a revision of the manuscript after the initial reviews. He mentioned two points of interest deserving comment here. First, there is the question of whether there is evidence that hypnotics cause suicide, going beyond the extensive evidence for association. This is discussed in my recent, more detailed review of the evidence for hypnotic harms: doi: 10.12688/f1000research.8729.2. There are several kinds of evidence. Controlled trials and meta-analysis have shown that hypnotics cause incident depressions and cause incident infections. Controlled trials are our best tests of causality. There is much new evidence that infection causes depression and suicides, and depression itself is widely believed to be the main cause of suicide. Additionally, in a narrower sense, when a medical examiner reports a hypnotic drug as a cause of intentional overdose death (and there are thousands of such examples), that is an expert finding of causality. Since no specific costs of suicides and suicide attempts could be included in the hypnotic cost estimates offered, causality of suicide is not a key issue for the manuscript being reviewed. More study is certainly needed. Incidentally, my petition to the FDA specifically mentioned suicides as one endpoint which should be incorporated in expanded Phase IV randomized safety trials of hypnotics. Second, there is the question of why people keep taking hypnotics. This is a question for the ages, since throughout the history of magical beliefs and shamanism, people have consumed countless remedies including sleep remedies that are scientifically considered non-beneficial. Moreover, benzodiazepine-agonist hypnotics are known to be addicting drugs that are sometimes sought for pleasure, but equally important, these hypnotics cause withdrawal insomnia. 
There is much evidence that people persist in using hypnotics because of the withdrawal insomnia and anxiety that occur when consumers try to stop. As the reviewer suggested, people may take hypnotics seeking the amnesia that hypnotics produce, just as alcoholics drown their sorrows. That amnesia has not been recognized and advertised as a medical benefit. I am not aware of evidence that hypnotic-induced amnesia is medically beneficial, though it does often elicit a favorable subjective report. In hospitals, there might be an argument for helping patients to forget their anxieties, pains, and discomforts, but if the same amnesic effects contribute to falls, confusion, and dementia, it may be a costly strategy."
}
]
},
{
"id": "23573",
"date": "19 Jun 2017",
"name": "Leon C. Lack",
"expertise": [
"Reviewer Expertise Circadian and sleep research",
"clinical research into treatment of insomnia",
"bright light therapy",
"napping research"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThe author has done considerable work in assembling data relating the use of sedatives and hypnotic drugs to harmful outcomes and increased healthcare costs. The main thrust of this article is, assuming that hypnotic use causes harmful health outcomes, what would be the economic costs of this harm?\nHowever, the question about hypnotic use by itself causing harm and health morbidity is still frustratingly not completely resolved. The author amasses a wealth of evidence associating the two, lending strength to the notion of causality but not confirming it. There is good experimental evidence showing negative effects of hypnotics including slowed reactions, cognitive and memory impairments, impaired balance arising from their sedating effects both before sleep and, for longer acting hypnotics, after sleep. However, the evidence for the direct effects of hypnotics on physical health measures is not as strong since these are largely correlation studies.\nThe greater increase in health morbidity and mortality in those prescribed hypnotics may arise from the possibility that hypnotics are more likely to be prescribed by health professionals in cases with greater and more serious co-morbidity and those likely to have increased mortality risk. This self selection of hypnotic prescriptions to those likely to have higher morbidity and mortality may account for all or some of the subsequent elevated risks. 
The author himself admits that this may not be resolved without randomized controlled trials (with some diagnosed insomniacs not receiving hypnotics or receiving non-drug therapies instead). However, it may be difficult to obtain ethics approval for a study allocating insomnia sufferers to a non-treatment control over a long period of time (e.g., 2–3 years, to measure long-term health outcomes). The only opportunity might be to compare pharmacotherapy alone with cognitive/behavior therapy, with long-term follow-ups of sleep and many health outcomes. This should be ethically acceptable.\nThe difficulty of drawing causal conclusions from epidemiological studies is nicely illustrated by one of the cited references with some of the most compelling evidence, the study by Anderson et al.1. In their study they reported on an insomnia subset of 5,773, of which 75% were treated with a prescription medication and the other 25% were not, and all were followed up after a 12-month period. The treated subset was more likely to have a mental health diagnosis and antidepressant medications. These differences between the groups were significant at baseline, which supports the notion that higher-morbidity patients are more likely to receive hypnotic medication. These initial morbidity differences were then amplified at follow-up, and the medicated group showed greater increases in health costs ($4,276 vs. $2,309) over that time. This is strong circumstantial evidence that the hypnotic medications were causing an increase in medical costs. At least one could say with greater certainty that the medications were not curing their insomnia and reducing health costs. The only lingering doubt, however, is not knowing the outcome had they not been medicated. Would they have been even worse off, no different, or, as Dr. Kripke is suggesting, better off? 
We simply cannot be certain until randomized controlled trials are conducted.\nA crude but good analogy to the point I am making is the effect of hospitalization. Should a 70-year-old with a severe chest infection be hospitalized? Those admitted to hospital certainly have a higher morbidity and mortality risk. Does that mean someone with these symptoms would be better advised to stay out of the hospital?\nIn the meantime, the weight of the evidence presented by the author is compelling. My only suggestion is for the author not to slip into terminology implying that the causal connection is proven. By doing so he is in jeopardy of undercutting the strength and consistency of the correlational data. Let the data be presented and just percolate in the reader's mind. It will thus be more effectively taken up than if the author goes one step too far and insists upon the causal link. Already there is too much research being presented implying causal links between sleep variables (e.g., reported total sleep time) and health outcomes from epidemiological studies. Let's not contribute to this inappropriate trend.\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Yes\n\nAre arguments sufficiently supported by evidence from the published literature? Partly\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Yes",
"responses": [
{
"c_id": "2816",
"date": "28 Jun 2017",
"name": "Daniel F. Kripke",
"role": "Author Response",
"response": "Dr. Lack’s very thoughtful emphasis on the problem of causality highlights a crucial issue for readers of this manuscript about hypnotic costs. His review contributes to the discussion already in the manuscript calling for more studies including studies of causality. Likewise, the manuscript specifically acknowledged the problem of associational studies potentially being confounded by comorbidities. The problem of comorbidities is emphasized a bit more in Version 2. Until more reliable data become available, the manuscript argues that managed care and insurance administrators should weigh the best available evidence of whether the costs of hypnotic harms are likely to exceed the value of hypnotic benefits. As in most areas of medicine, timely decisions about hypnotic treatments and reimbursements must be made based on incomplete and imperfect evidence. Decisions need to be made daily and cannot be deferred for years more study. We still do not have definitive randomized controlled trials about whether cigarettes cause cancer in humans. After millions of people had died from cancers “associated with” cigarette smoking, it became time to advise against cigarette smoking based on available evidence that association probably indicates a large element of causation. The reader may be sure that this manuscript on costs never implies causation without causal evidence, but uses the word “association” whenever there is insufficient evidence of an element of causation. Some added emphasis on this distinction is included in the Version 2 revision. 
This response to both reviews is a welcome opportunity to elaborate on the issues of causation versus association that were not presented at length in a manuscript about costs, since that would repeat much detail previously published.1,2 However, just as it would be an error to slip into language implying causation when only association has been demonstrated, it would be an error not to recognize existing human trials and meta-analyses confirming evidence of causation. Moreover, in the case of hypnotics, as with cigarettes, there is much animal causal data and in vitro data supporting the evidence of association studies. The sleep disorders community has been slow to recognize and replicate the existing findings as the causal evidence of serious hypnotic risks has accumulated. Joya et al.3 performed a formal meta-analysis of 38 randomized controlled trials of 8828 participants randomized to 4 hypnotics and 4383 participants randomized to placebos, in whom the overall infection causation hazard ratio was 1.44 (confidence interval 1.25–1.64, P<0.00001). If any manufacturer could do a meta-analysis showing that this analysis was wrong, would they not have done so? How much more evidence is needed? As a matter of fact, Sanofi subsequently did an independent analysis of their Ambien data and informed the FDA that they could confirm that Ambien (zolpidem CR) causation of infections such as influenza was “probably real.”4 The FDA expert agreed. Would skeptics nevertheless question that hypnotic causation of infections such as influenza creates medical costs and sometimes deaths? For an invited lecture, Kripke did an informal analysis of incident depressions in randomized trials of 4 hypnotics (including data available through the FDA that manufacturers did not publish). 
There were 5535 participants randomized to hypnotics with 2% incidence of depression and 2318 participants randomized to placebos with 0.9% incidence, yielding a depression causation risk ratio of 2.1 (P<0.002).5 If any manufacturer could do a larger, more formal meta-analysis showing that hypnotics do not cause depression, would one not think they would have published? In an important study of 593 participants randomized to eszopiclone and 195 randomized to placebo, the rate of discontinuation for depression was 2.0% in the eszopiclone group and 0.0% in the placebo group, a result that appears statistically significant. In their modesty, the authors did not even mention that the study had proven that eszopiclone causes depression.6 Note that these were new, incident depressions, not exacerbations, since the study excluded patients with any DSM-IV Axis I psychiatric diagnosis at baseline.6 Would skeptics nevertheless question that depression elevates medical costs and sometimes causes suicides? The previous detailed review also discussed at length the many death certificates that mention hypnotics as one of the causes of death and likewise discussed why these are probably underestimates.1 For 2015, the U.S. CDC’s WONDER data base listed over 10,000 deaths with causes of death that included a barbiturate, benzodiazepine, or a Z-drug (the Z-drug category included anti-epileptics), but for many reasons, the 10,000 reported deaths is believed to be an underestimate. Would skeptics nevertheless argue that death certificates listing a hypnotic as a cause of death are evidence for association, not causality? To summarize, there are certainly adequate data confirming that hypnotics cause some of the harms discussed in this manuscript and some of the costs resulting. 
There are also controlled trials showing that hypnotics cause impaired balance and poor on-road driving performance, though controlled trials with automobile accidents or falls as endpoints have not yet reached adequate size. Further research is needed to better define the magnitude of costs from hypnotic harms, but some causal role of hypnotics in several of the resulting harms has already been confirmed. Dr. Lack’s comments are very helpful in reiterating the need for expanded studies. The problems of ethical approval are not insurmountable. If it is ethical to administer hypnotics to patients in short-term hospitalization, would it not be ethical to perform randomized trials focusing on whether hypnotic prescribing increases or decreases hospital costs? Short-term hospital studies might also provide useful evidence regarding causality of serious infections and depression. Even for early death as an endpoint, short-term randomized clinical trials could be ethical and have adequate power. Since the epidemiology suggests that the greatest risk-ratio of hypnotics may be within the first 30 administrations, mainly among frail and elderly patients, randomized placebo-controlled clinical trials might only need to last 30 days among frail and vulnerable participants (who currently receive much of the hypnotics prescribed). To be sure, large numbers of participants would be needed for adequate power in relatively brief randomization designs. Another alternative would be the randomized education-patient-empowerment technique initiated by Dr. Tannenbaum.7 Another alternative would be Mendelian randomization studies. Randomized studies of causality are feasible if those who advocate and provide hypnotics recognize their responsibilities to assess the risks and costs. 1. Kripke DF. Hypnotic drug risks of mortality, infection, depression, and cancer: but lack of benefit [version 2]. F1000Research. 2017;5: 918 ( http://dx.doi.org/10.12688/f1000research.8729.2 ). 2. 
Kripke DF. Mortality risk of hypnotics: strengths and limits of evidence. Drug Saf. 2016;39: 93-107. 3. Joya FL, Kripke DF, Loving RT, Dawson A, Kline LE. Meta-analyses of hypnotics and infections: eszopiclone, ramelteon, zaleplon, and zolpidem. J Clin Sleep Med. 2009;5(4): 377-83. 4. Farkas R. Center for Drug Evaluation and Research Approval Package for: Application Number: 019908Orig1s032s034 021774Orig1s013s015. Silver Spring, MD, FDA. Accessed 2013: accessible through https://www.accessdata.fda.gov/scripts/cder/daf/ under zolpidem. 5. Kripke DF. Greater incidence of depression with hypnotics than with placebo. BMC Psychiatry. 2007;7:42. 6. Krystal AD, Walsh JK, Laska E, Caron J, Amato DA, Wessel TC, Roth T. Sustained efficacy of eszopiclone over 6 months of nightly treatment: results of a randomized, double-blind, placebo-controlled study in adults with chronic insomnia. Sleep. 2003;26(7): 793-9. 7. Tannenbaum C, Martin P, Tamblyn R, Benedetti A, Ahmed S. Reduction of inappropriate benzodiazepine prescriptions among older adults through direct patient education: the EMPOWER cluster randomized trial. JAMA Intern Med. 2014;174(6): 890-8."
}
]
}
] | 1
|
https://f1000research.com/articles/6-542
|
https://f1000research.com/articles/6-520/v1
|
20 Apr 17
|
{
"type": "Opinion Article",
"title": "The internet trade of counterfeit spirits in Russia – an emerging problem undermining alcohol, public health and youth protection policies?",
"authors": [
"Maria Neufeld",
"Dirk W. Lachenmeier",
"Stephan G. Walch",
"Jürgen Rehm",
"Maria Neufeld",
"Stephan G. Walch",
"Jürgen Rehm"
],
"abstract": "Counterfeit alcohol belongs to the category of unrecorded alcohol not reflected in official statistics. The internet trade of alcoholic beverages has been prohibited in the Russian Federation since 2007, but various sellers still offer counterfeit spirits (i.e., forged brand spirits) over the internet to Russian consumers, mostly in a non-deceptive fashion, at prices up to 15 times lower than in regular sale. The public health issues arising from this unregulated trade include potential harm to underage drinkers and hazards due to toxic ingredients such as methanol, but most importantly alcohol harms due to potentially increased drinking volumes driven by low prices and high availability on the internet. Internet sale also undermines existing alcohol policies such as restrictions on sale locations, sale times and minimum pricing. Measures against the counterfeiting of spirits, and specifically against their internet trade, should be enforced as key elements of alcohol policies to reduce unrecorded alcohol consumption, which currently accounts for about 33% of total consumption in Russia.",
"keywords": [
"alcoholic beverages",
"internet sales",
"counterfeit",
"unrecorded alcohol",
"methanol poisoning"
],
"content": "Unrecorded consumption in Russia\n\nUnrecorded alcohol is alcohol, not reflected in official statistics, but consumed as a beverage1. Since it is untaxed, it is usually much cheaper than regular alcohol2. For Russia, unrecorded consumption is estimated at about 33.4% (5.3 litres of pure alcohol per capita per year) of the consumed alcohol (3; for an overview see 4).\n\nPrevious studies in Russia have focused on the consumption of alcohol surrogates5–9 or homemade alcohol6,10,11, but comparatively little research has been done on other forms of unrecorded products, such as counterfeit alcohol12,13. In the course of an ongoing longitudinal study on unrecorded alcohol consumption in Western Siberia14, we have observed various internet sellers of counterfeit spirits, ranging from individual offers on social networks and micro-blogs to specialized online shops offering forged expensive spirits brands.\n\n\nThe structure and role of internet shops: legal issues\n\nDespite the fact that internet trade of alcoholic beverages was prohibited in 2007 as part of measures to reduce counterfeit sales15, we recorded more than 25 online sellers of counterfeit alcohol, all of which appeared to be maintained and operational (unpublished report). All of the observed platforms were structured similarly, typically featuring a product catalogue, a section about the delivery process and payment, a FAQ section and a contact page; they offered a delivery service against payment, typically for bulk orders only. The prices of the offered products were considerably lower than the regular market prices for retail sale, for example the prices of international vodka brands were 6 times lower than in regular sale, and prices of international spirits such as rum and whiskey were 10–15 times lower. 
Qualitative interviews revealed that consumers were well aware that the beverages offered were not originals16.\n\n\nExperiences from the research project\n\nOur first attempt to order online failed. The seller kept the 100% advance payment and never answered our e-mails. The failure to deliver, however, may have been the result of a common internet scam rather than a legal issue connected with the official ban on the online trade of alcohol. For the second order a seller with a 50% advance payment scheme was chosen; over 100 bottles of counterfeit spirits were delivered within the following four weeks. One of the ordered international vodkas was not delivered; instead we received a cheaper Russian brand. Although all of the delivered bottles had Russian excise stamps attached to them, we suspected these were counterfeits because of the low price of the product, and this was later confirmed by chemical analyses. However, the appearance of the bottles closely matched the original products. The accompanying documents of the delivery package indicated payment of freight costs, but not payment for the alcohol itself. Processing payments for the delivery of alcohol is legal according to Russian law, and this seems to be an important loophole in the existing legislation frequently used by various online sellers, regardless of whether their products are counterfeits or not17.\n\n\nPotential health issues of counterfeit alcohol\n\nThere are several health issues arising from this unregulated online trade of counterfeit spirits. One problem is the potential harm to the health of underage drinkers, a vulnerable group, particularly as the developing brain may be affected18. 
The undermining of youth protection laws by internet trade, owing to a lack of regulation or of its enforcement, has been observed in European Union countries as well19.\n\nSince the manufacturing of counterfeits does not follow unified guidelines on product composition and safety, counterfeits might contain harmful and toxic ingredients besides ethanol itself. For example, methanol poisonings with counterfeit branded spirits have occurred regularly in Russia over the last two years, resulting in several deaths14,20. These cases corroborate the observed lack of enforcement of food safety standards in internet trade21. However, methanol intoxications are only isolated cases in relation to the general problem of ethanol’s adverse effects. Hence, the main problem with counterfeit spirits lies in their cheap price, high availability and high demand among certain population groups, leading to heavy consumption. Studies suggest that consumption of non-deceptive counterfeits (products that consumers know to be counterfeits, usually because of their cheap price) is associated with consumers of lower socio-economic strata, heavy drinking and consumption of further types of unrecorded alcohol, including alcohol surrogates13,16. 
Cheaper alcohol products have been linked to the riskiest patterns of consumption, which are associated with premature mortality22–24.\n\nThe illegal internet sale of counterfeit alcohol not only evades tax payments and undermines youth protection policies; it also undermines various other alcohol policy measures introduced in Russia over recent years to reduce the harmful use of alcohol25, such as restrictions on sale locations and sale times and the fixed minimum price on alcoholic beverages25,26.\n\n\nConclusions\n\nResearch in Russia suggests that the internet has become an important trade channel for counterfeits12–14, with the observed online sellers apparently operating as an intermediate link in the distribution chain of counterfeit alcohol in Russia, meeting consumers’ stable demand for cheap alcohol. Therefore, measures against the counterfeiting of alcohol should be part of specific policies to reduce unrecorded alcohol consumption and alcohol-related harms in Russia. Consistent monitoring of the entire production and supply chain of alcoholic beverages, as well as effective denaturing of alcohol not intended for human consumption, should be considered the two key elements to achieve this goal27.",
"appendix": "Author contributions\n\n\n\nDWL and JR conceptualized the article. MN drafted a first version of the text, DWL, SGW and JR revised the text and all authors approved of the final manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nLachenmeier DW, Gmel G, Rehm J: Unrecorded alcohol consumption. In: Boyle P, Boffetta P, Lowenfels AB, Burns H, Brawley O, Zatonski W, et al., editors. Alcohol: Science, Policy, and Public Health. Oxford, U.K.: Oxford University Press; 2013; 132–42. Publisher Full Text\n\nRehm J, Kailasapillai S, Larsen E, et al.: A systematic review of the epidemiology of unrecorded alcohol consumption and the chemical composition of unrecorded alcohol. Addiction. 2014; 109(6): 880–93. PubMed Abstract | Publisher Full Text\n\nProbst C, Merey A, Rylett M, et al.: Unrecorded alcohol use: A global modelling study based on Delphi assessments and survey data. Toronto, ON: Centre for Addiction and Mental Health, 2017.\n\nRehm J, Poznyak V: On monitoring unrecorded alcohol consumption. Alcoholism and Drug Addiction. 2015; 28(2): 79–89. Publisher Full Text\n\nMcKee M, Suzcs S, Sárváry A, et al.: The composition of surrogate alcohols consumed in Russia. Alcohol Clin Exp Res. 2005; 29(10): 1884–8. PubMed Abstract | Publisher Full Text\n\nSolodun YV, Monakhova YB, Kuballa T, et al.: Unrecorded alcohol consumption in Russia: toxic denaturants and disinfectants pose additional risks. Interdiscip Toxicol. 2011; 4(4): 198–205. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTomkins S, Saburova L, Kiryanov N, et al.: Prevalence and socio-economic distribution of hazardous patterns of alcohol drinking: study of alcohol consumption in men aged 25–54 years in Izhevsk, Russia. Addiction. 2007; 102(4): 544–53. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nBobrova N, Kurilevitch S, Malyutina D, et al.: Drinking non-beverage (surrogate) alcohol: a qualitative study in Novosibirsk, Russia. Eur J Public Health. 2007; 17(2): 137–138. Reference Source\n\nGil A, Polikina O, Koroleva N, et al.: Availability and characteristics of nonbeverage alcohols sold in 17 Russian cities in 2007. Alcohol Clin Exp Res. 2009; 33(1): 79–85. PubMed Abstract | Publisher Full Text\n\nZaigraev G: The Russian Model of Noncommercial Alcohol Consumption. In: Haworth A, Simpson R, editors. Moonshine Markets: Issues in Unrecorded Alcohol Beverage Production and Consumption. New York and Hove: Brunner-Routledge; 2004; 31–40.\n\nRadaev V: Impact of a new alcohol policy on homemade alcohol consumption and sales in Russia. Alcohol Alcohol. 2015; 50(3): 365–72. PubMed Abstract | Publisher Full Text\n\nKotelnikova Z: Explaining counterfeit alcohol purchases in Russia. Alcohol Clin Exp Res. 2017; 41(4): 810–9. PubMed Abstract | Publisher Full Text\n\nKotelnikova Z: Consumption of counterfeit alcohol in contemporary Russia: The role of cultural and structural factors. 2014; [cited 02/07/2017]. Reference Source\n\nNeufeld M, Lachenmeier D, Hausler T, et al.: Surrogate alcohol containing methanol, social deprivation and public health in Novosibirsk, Russia. Int J Drug Policy. 2016; 37: 107–10. PubMed Abstract | Publisher Full Text\n\nRg.ru. Postanovlenie Pravitel'stva Rossijskoj Federacii ot 27 sentjabrja 2007 g. N 612 g. Moskva \"Ob utverzhdenii Pravil prodazhi tovarov distancionnym sposobom\". [Resolution of the Government of the Russian Federation Government September 27th 2007 N 612, Moscow \"On approving the rules of distance sales\"]. 2007; [cited 04/05/2017]. Reference Source\n\nNeufeld M, Wittchen HU, Rehm J: Drinking patterns and harm of unrecorded alcohol in Russia: a qualitative interview study. Addict Res Theory. 2017; 25(4): 310–7. Publisher Full Text\n\nPravoved.ru. 
Juridicheskaja konsul'tacija onlajn. [Online legal advice]. 2017; [cited 04/05/2017]. Reference Source\n\nEwing SW, Sakhardande A, Blakemore SJ: The effect of alcohol consumption on the adolescent brain: A systematic review of MRI and fMRI studies of alcohol-using youth. Neuroimage Clin. 2014; 5: 420–37. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBöse W, Löbell-Behrends S, Marx G, et al.: Internet mail-order sale of alcohol: Do youth protection laws contain double standards? Sucht. 2010; 56(6): 397–8. Publisher Full Text\n\nVinogradova P: Poslednjaja rjumka [The last glass]. 2015; [cited 04/05/2017]. Reference Source\n\nLachenmeier DW, Löbell-Behrends S, Böse W, et al.: Does European Union food policy privilege the internet market? Suggestions for a specialized regulatory framework. Food Control. 2013; 30(2): 705–13. Publisher Full Text\n\nLeon DA, Saburova L, Tomkins S, et al.: Hazardous alcohol drinking and premature mortality in Russia: a population based case-control study. Lancet. 2007; 369(9578): 2001–9. PubMed Abstract | Publisher Full Text\n\nZaridze D, Lewington S, Boroda A, et al.: Alcohol and mortality in Russia: prospective observational study of 151,000 adults. Lancet. 2014; 383(9927): 1465–73. PubMed Abstract | Publisher Full Text | Free Full Text\n\nShield KD, Rylett M, Rehm J: Public health successes and missed opportunities. Trends in alcohol consumption and attributable mortality in the WHO European Region, 1990–2014. Copenhagen, Denmark: WHO European Region; 2016. Reference Source\n\nNeufeld M, Rehm J: Alcohol consumption and mortality in Russia since 2000: are there any changes following the alcohol policy changes starting in 2006? Alcohol Alcohol. 2013; 48(2): 222–30. PubMed Abstract | Publisher Full Text\n\nKhaltourina D, Korotayev A: Effects of Specific Alcohol Control Policy Measures on Alcohol-Related Mortality in Russia from 1998 to 2013. Alcohol Alcohol. 2015; 50(5): 588–601. 
PubMed Abstract | Publisher Full Text\n\nWorld Health Organization: Global strategy to reduce the harmful use of alcohol. Geneva, Switzerland: World Health Organization; 2010. Reference Source"
}
|
[
{
"id": "22029",
"date": "03 May 2017",
"name": "Andrey V. Korotayev",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions, are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nNotwithstanding the impressive successes achieved by the Russian government and civil society since 2005 in reducing excessive alcohol consumption and the associated harm, alcohol consumption in Russia still remains very high and constitutes the single most important cause of excess mortality in this country1. That is why it is so important to identify new measures that could contribute to a further reduction of alcohol consumption in Russia, and that is why the reviewed article appears so timely and significant. Indeed, the authors demonstrate quite convincingly that the illegal internet sale of counterfeit alcohol undermines various alcohol policy measures introduced in Russia over the last years to reduce the harmful use of alcohol, such as restrictions of sale locations and sale times and the fixed minimum price on alcoholic beverages.\n\nThe only problem with the article is that its authors do not offer any effective recommendations whose implementation could result in the elimination of the illegal internet sale of alcohol. In fact, they recommend two measures: \"consistent monitoring of the entire production and supply chain of alcoholic beverages as well as effective denaturing of alcohol not intended for human consumption\". However, I do not think that the authors themselves believe that the implementation of these two measures will result in the elimination of the illegal internet sale of alcohol. 
Thus, I would suggest that they should think about some additional policy measures. In this respect, it would be very useful if they could study how other countries have managed to solve this problem. In addition, the recommendation to establish \"consistent monitoring of the entire production and supply chain of alcoholic beverages\" does not appear to be sufficiently specific - the authors should specify what measures should be taken in order to establish such a monitoring.\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Yes\n\nAre arguments sufficiently supported by evidence from the published literature? Yes\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Yes",
"responses": [
{
"c_id": "2808",
"date": "27 Jun 2017",
"name": "Dirk W. Lachenmeier",
"role": "Author Response",
"response": "Thank you for providing some further background into the issue of alcohol-related harm in Russia. As requested, we have added some further aspects on potential measures into the conclusion section. The problem appears to be rather on the enforcement side than on the policy side. For example, higher penalties were recently introduced into the general laws against counterfeiting. Otherwise, we agree with the reviewer that control of internet trade beyond borders is an extremely challenging task and that adequate responses by authorities need to be developed with priority (but such research would be beyond the scope of this 1000-word opinion piece)."
}
]
},
{
"id": "23405",
"date": "12 Jun 2017",
"name": "Yury Evgeny Razvodovsky",
"expertise": [
"Reviewer Expertise Alcohol-related problems"
],
"suggestion": "Approved",
"report": "Approved\n\nThe illegal trade in counterfeit alcohol is a growing problem in many European countries1. Especially worrying is the recent growth in the infiltration of counterfeit alcoholic beverages into online markets and social media platforms1.\nThe paper written by Neufeld and coauthors addresses a relatively new phenomenon that has so far received little attention in the scientific literature: the illegal internet trade in counterfeit alcohol in Russia2. The topic itself is extremely important from a public health perspective, because consumption of alcohol surrogates and counterfeit spirits has been identified as one of the major contributors to the alcohol-related death toll in Russia3,4. Therefore, this paper is a helpful contribution towards filling this gap. The authors reasonably argue that the internet market in counterfeit spirits has become an important trade channel for unrecorded alcohol in Russia, which poses risks to human health due to toxic chemicals such as methanol, and also undermines the alcohol control policy measures introduced in this country over the last decade.\nThe scale of this problem is well illustrated by the case of the mass poisoning in the city of Krasnoyarsk (Siberia) in 2015, due to consumption of counterfeit whisky bought on the internet, which killed almost 30 people. 
According to newspaper reports, over 6,000 bottles of Jack Daniels were seized in this city; the bottles contained a mixture of methanol, ethanol and water.\nIn relation to this, the Russian government should consider a number of potentially effective approaches to the problem of counterfeit spirits, including raising public awareness of the life-threatening danger posed by these products, and also taking legal action against the people offering counterfeit alcoholic beverages for sale on websites and social media platforms.\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Yes\n\nAre arguments sufficiently supported by evidence from the published literature? Yes\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Yes",
"responses": [
{
"c_id": "2807",
"date": "27 Jun 2017",
"name": "Dirk W. Lachenmeier",
"role": "Author Response",
"response": "We thank the reviewer for his insight and providing a further example that strengthens our arguments. Regarding approaches, we have added some further suggestions in the revision of our article (see also response to the second reviewer below)."
}
]
}
] | 1
|
https://f1000research.com/articles/6-520
|
https://f1000research.com/articles/5-1987/v1
|
15 Aug 16
|
{
"type": "Research Article",
"title": "Protein domain architectures provide a fast, efficient and scalable alternative to sequence-based methods for comparative functional genomics",
"authors": [
"Jasper J. Koehorst",
"Edoardo Saccenti",
"Peter J. Schaap",
"Vitor A. P. Martins dos Santos",
"Maria Suarez-Diez",
"Edoardo Saccenti",
"Peter J. Schaap",
"Vitor A. P. Martins dos Santos",
"Maria Suarez-Diez"
],
"abstract": "A functional comparative genome analysis is essential to understand the mechanisms underlying bacterial evolution and adaptation. Detection of functional orthologs using standard global sequence similarity methods faces several problems: the need for defining arbitrary acceptance thresholds for similarity and alignment length, lateral gene acquisition, and the high computational cost of finding bi-directional best matches at a large scale.\nWe investigated the use of protein domain architectures for large scale functional comparative analysis as an alternative method. The performance of both approaches was assessed through functional comparison of 446 bacterial genomes sampled at different taxonomic levels.\nWe show that protein domain architectures provide a fast and efficient alternative to methods based on sequence similarity to identify groups of functionally equivalent proteins within and across taxonomic boundaries. As the computational cost scales linearly, and not quadratically, with the number of genomes, it is suitable for large scale comparative analysis. Running both methods in parallel pinpoints potential functional adaptations that may add to bacterial fitness.",
"keywords": [
"Bacterial genomics",
"Bacterial functionome",
"Orthology",
"Horizontal gene transfer",
"clustering",
"semantic annotation"
],
"content": "Introduction\n\nComparative analysis of genome sequences has been pivotal in unravelling mechanisms shaping bacterial evolution, such as gene duplication, loss and acquisition1,2, and has helped shed light on pathogenesis and genotype-phenotype associations3,4.\n\nComparative analysis relies on the identification of sets of orthologous and paralogous genes and the subsequent transfer of function to the encoded proteins. Technically, orthologs are defined as best bi-directional hits (BBH) obtained via pairwise sequence comparison among multiple species; this approach thus exploits sequence similarity for functional grouping. Sequence similarity-based (SB) methods present a number of shortcomings. First, a generalized minimal alignment length and similarity cut-off need to be arbitrarily selected for all comparisons, which may hamper proper functional grouping. Second, sequence and function might diverge across evolutionary scales. Protein sequences change faster than protein structures, and proteins with the same function but low sequence similarity have been identified5,6. SB methods may fail to group them, hampering a functional comparison. This limitation becomes even more critical when comparing phylogenetically distant genomes or gene sequences that were acquired through horizontal gene transfer events.\n\nThird, time and memory requirements scale quadratically with the number of genomes to be compared. Recent technological advancements are resulting in thousands of organisms and billions of proteins being sequenced7, rendering SB approaches of limited applicability for comparisons at larger scales.\n\nTo overcome these bottlenecks, protein domains have been suggested as an alternative for defining groups of functionally equivalent proteins8–10 and have been used to perform comparative analyses of Escherichia coli9, Pseudomonas10 and Streptococcus11, and for protein functional annotation12,13. 
A protein domain architecture describes the arrangement of domains contained in a protein and is exemplified in Figure 1. As protein domains capture key structural and functional features, protein domain architectures may be considered better proxies for functional equivalence than global sequence similarity14. The concept of using the domain architecture to precisely describe the extent of functional equivalence is exemplified in Figure 2. Moreover, once the probabilistic domain models have been defined, mining large sets of individual genome sequences for their occurrences is a considerably less demanding computational task than an exploration of all possible bi-directional hits between them15,16.\n\nAlthough the proteins obviously share a common core, four distinct domain architectures involving six protein domains were observed in (1) Enterobacteriaceae, (2) H. pylori, (3) Pseudomonas and (4) Cyanobacteria.\n\nDomains are probabilistic models of amino acid sequences obtained by hidden Markov modeling (HMM), built from (structure-based) multiple sequence alignments. Domain architectures are linear combinations of these domains representing the functional potential of a given protein sequence and constitute the input for DAB clustering. SB-orthology clusters inherit functional annotations via best bi-directional hits above a predefined sequence similarity cut-off score. The information content decreases when moving from the overall function to the sequence level.\n\nBuilding on these observations, we aim to explore the potential of domain architecture-based (DAB) methods for large-scale functional comparative analysis by comparing functionally equivalent sets of proteins, defined using domain architectures, with standard clusters of orthologous proteins obtained with SB methods. We compared the SB and DAB approaches by analysing i) the retrieved number of singletons (i.e. 
clusters containing only one protein) and ii) the characteristics of the inferred pan- and core-genome sizes, considering a selection of bacterial genomes (both gram-positive and gram-negative) sampled at different taxonomic levels (species, genus, family, order and phylum). We show that the DAB approach provides a fast and efficient alternative to SB methods for identifying groups of functionally equivalent/related proteins for comparative genome analysis, and that the functional pan-genome is more closed than the sequence-based pan-genome. DAB approaches can complement standardly applied sequence similarity methods and can pinpoint potential functional adaptations.\n\n\nMethods\n\nBacterial species were chosen on the basis of the availability of fully sequenced genomes in the public domain: two species (Listeria monocytogenes and Helicobacter pylori), three genera (Streptococcus, Pseudomonas, Bacillus), one family (Enterobacteriaceae), one order (Corynebacteriales) and one phylum (Cyanobacteria) were selected. For each, a set of 60 genome sequences was considered, except for L. monocytogenes, for which only 26 complete genome sequences were available. Maximal diversity among genome sequences was ensured by sampling divergent species (when possible) at each taxonomic level. Genome sequences were retrieved from the European Nucleotide Archive database (www.ebi.ac.uk/ena). A full list of the genomes analyzed is available in the Data availability section.\n\nTo avoid bias due to the different algorithms used for the annotation of the originally deposited genome sequences, all genomes were de novo re-annotated using the SAPP framework (1.0.0)10. In particular, the FASTA2RDF, GeneCaller (a semantic wrapper for Prodigal (2.6.2)17) and InterPro (interproscan-5.17-56.0) (a semantic wrapper for InterProScan18) modules were used to handle and re-annotate the genome sequences. This resulted in 446 annotated genomes (7 × 60 genomes and 1 × 26 genomes) with provenance. 
For each annotation step the provenance information (E-value cut-off, score, originating tool or database) was stored together with the annotation information in a graph database (RDF model) and can be reproduced through the SAPP framework (http://semantics.systemsbiology.nl).\n\nThe positions (start and end on the protein sequence) of domains with Pfam19, TIGRFAMs20 and InterPro21 identifiers were extracted through SPARQL querying of the graph database, and domain architectures were retrieved for each protein individually. The domain starting position was used to assess relative position in the case of overlapping domains; alphabetic ordering was used for domains with the same starting position or when the distance between the starting positions of overlapping domains was < 3 amino acids. Labels indicating the N-C terminal order of the identified domains were assigned to each protein in such a way that the same label was assigned to proteins sharing the same domain architecture.\n\nTo make a direct comparison possible, only protein sequences containing at least one protein domain signature were considered for analysis. BBH were obtained using Blastp (2.2.28+) with an E-value cutoff of 10^-5 and -max_target_seqs of 10^5. orthAgogue (1.0.3)22 combined with MCL (14-137)23 was used to identify protein clusters on the basis of sequence similarity.\n\nDomain architecture-based clusters were built by clustering proteins with the same labels using bash terminal commands (sort, awk). 
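The labeling rule described above (order domains by start position; fall back to alphabetic ordering for coincident or near-coincident starts) can be sketched as follows. This is a simplified Python illustration with hypothetical inputs; the actual pipeline extracted positions via SPARQL and clustered the labels with bash sort/awk:

```python
# Simplified sketch of domain-architecture labeling and DAB clustering.
# Domains are ordered by start position; domains whose starts coincide
# or lie < 3 amino acids apart are re-ordered alphabetically.

def architecture_label(domains):
    """domains: list of (domain_id, start_position) for one protein."""
    ordered = sorted(domains, key=lambda d: d[1])
    i = 0
    while i < len(ordered) - 1:  # alphabetic tie-break for close starts
        if abs(ordered[i + 1][1] - ordered[i][1]) < 3 and \
                ordered[i + 1][0] < ordered[i][0]:
            ordered[i], ordered[i + 1] = ordered[i + 1], ordered[i]
            i = max(i - 1, 0)
        else:
            i += 1
    return ";".join(d[0] for d in ordered)

def dab_clusters(proteins):
    """Group protein ids by identical domain-architecture label."""
    clusters = {}
    for pid, domains in proteins.items():
        clusters.setdefault(architecture_label(domains), []).append(pid)
    return clusters

# overlapping starts 9 and 10 (< 3 aa apart) are ordered alphabetically
print(architecture_label([("PF07690", 9), ("PF00005", 10)]))  # PF00005;PF07690
```

Because clustering reduces to grouping identical labels, each genome is processed independently, which is what makes the DAB step scale linearly with the number of genomes.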
The number of proteins sharing a given domain architecture in each genome was stored in a 446 × 21054 (genomes × domain architectures) matrix; from this, a binarized presence-absence matrix was obtained and used for multivariate analysis.\n\nA Heaps’ law model was fit to the abundance matrices using 5 × 10^3 random genome-ordering permutations and the micropan R package24.\n\nSAPP, a Semantic Annotation Pipeline with Provenance which stores results in a graph database10, used for genome handling and annotation, is available at semantics.systemsbiology.nl. Matrix manipulations and multivariate analysis were performed using the R software (3.2.2).\n\n\nResults\n\nSB and DAB approaches were compared by considering eight sets of genome sequences sampled at different taxonomic levels, from species to phylum, preserving phylogenetic diversity (see Table 1). Each set contained 60 genome sequences, except for Listeria monocytogenes, for which only 26 complete genomes were publicly available. To facilitate the comparison between DAB and SB clusters, only protein sequences that contained at least one domain were considered. On average, 85% of the protein sequences contain at least one domain from the InterPro database (see Table 1). Values range from 77 ± 4% for Cyanobacteria to 91 ± 4% for Enterobacteriaceae (which include E. coli). Since the overall results were the same for gram-negative and gram-positive bacteria, we will show and comment only on results for the latter. Results obtained for gram-negative bacteria are shown in the Supplementary material section.\n\nDAB clustering was performed using HMMs from Pfam (29.0) and InterPro (interproscan-5.17-56.0). Fraction refers to the fraction of proteins with at least one protein domain. Core- and pan- indicate the sizes of the core- and pan-genomes (based on the sample) and singletons refers to the number of clusters with only one protein.\n\nA standard BBH workflow was used to obtain SB protein clusters for the eight sets. 
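The Heaps' law fit used above models the number of new clusters contributed by the N-th added genome as n(N) = κ·N^(-α), with α < 1 indicating an open pan-genome. A minimal sketch of such a fit, on noise-free synthetic counts and with a plain log-log least-squares regression instead of the micropan implementation and its 5 × 10^3 permutations:

```python
# Sketch of a Heaps'-law fit: n(N) = kappa * N**(-alpha), where n(N) is
# the number of new clusters added by the N-th genome. On a log-log
# scale this is a straight line with slope -alpha, so alpha is recovered
# by ordinary least squares. Synthetic, noise-free data is used here.
import math

def fit_heaps(new_per_genome):
    """new_per_genome[i]: new clusters added by the (i+2)-th genome."""
    xs = [math.log(i + 2) for i in range(len(new_per_genome))]
    ys = [math.log(v) for v in new_per_genome]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return -slope  # estimated alpha

# synthetic decay generated with alpha = 0.7 (kappa = 1000)
data = [1000 * N ** -0.7 for N in range(2, 60)]
print(round(fit_heaps(data), 3))  # 0.7
```

On real data the counts are noisy and depend on genome order, which is why the estimate is averaged over many random genome orderings.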
We started by calculating the total number of clusters, corresponding to the pan-genome size, as shown in Table 1. Then we considered protein cluster persistence, that is, the number of genomes in which at least one member of the cluster is present, divided by the total number of genomes considered. Results are shown in Figure 3.\n\nCluster persistence is defined as the relative number of genomes with at least one protein assigned to the cluster. The frequency of SB clusters according to their persistence is shown.\n\nThe ratio between the size of the core-genome (clusters with a persistence of 1, i.e. present in all genomes) and the number of singletons decreased with evolutionary distance (see Table 1). It ranged from 3.51 and 3.07 at the species level (H. pylori and L. monocytogenes, respectively) to 0.05 and 0.06 when considering members of the same order (Corynebacteriales) and phylum (Cyanobacteria), respectively. A similar pattern is observed when directly comparing the sizes of the pan- and core-genomes of the sampled genomes. Within the gram-negative bacteria this ratio ranges from 0.69 for members of the same species (H. pylori) to 0.05 for members of the same phylum (Cyanobacteria), with intermediate values (0.12) for sequences from the same genus (Pseudomonas).\n\nDomain architectures directly rely on the definition of protein domain models; these were retrieved from the Pfam, InterPro and TIGRFAMs databases. However, TIGRFAMs results were not considered further because of their lower coverage. As shown in Table 1, and as expected, partly overlapping results were obtained when different domain databases were used. The number of singletons was larger when using InterPro rather than Pfam, and for the latter we also observed a larger core-genome size. These discrepancies can be due to the fact that InterPro aggregates different resources (including Pfam and TIGRFAMs) and that domain signatures arising from different databases are integrated with different identifiers in InterPro. 
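Persistence, the core-genome and singletons can all be read directly off the presence/absence matrix. A minimal sketch with a toy matrix (names are illustrative; note that a column occurring once is only a proxy for a singleton, since strictly a singleton is a cluster with a single protein and one genome could hold several paralogs):

```python
# Sketch: cluster persistence, core-genome and singletons from a binary
# presence/absence matrix (rows = genomes, columns = clusters).

def persistence(matrix):
    """Per-cluster fraction of genomes containing the cluster."""
    n_genomes = len(matrix)
    return [sum(row[j] for row in matrix) / n_genomes
            for j in range(len(matrix[0]))]

m = [[1, 1, 0],   # 4 toy genomes x 3 clusters
     [1, 0, 0],
     [1, 1, 0],
     [1, 0, 1]]
p = persistence(m)
core = [j for j, v in enumerate(p) if v == 1.0]        # present everywhere
singletons = [j for j, v in enumerate(p) if sum(r[j] for r in m) == 1]
print(p, core, singletons)  # [1.0, 0.5, 0.25] [0] [2]
```

The pan-genome size is simply the number of columns, so the core/pan and core/singleton ratios discussed in the text follow immediately from this matrix view.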
In light of this, we focused on results obtained using Pfam, whose current release (30.0) contains hidden Markov models for over 16,300 domain families. The size and persistence of groups of functionally equivalent proteins obtained using Pfam domains are presented in Figure 4.\n\nThe frequency of DAB clusters according to their persistence is shown.\n\nSimilar to what was observed in the SB case, we observed a decrease of the ratio between the size of the core genome and the number of singletons when higher taxonomic levels are considered. For organisms of the same species (H. pylori and L. monocytogenes) the ratio was 5.09 and 4.30, respectively, while for members of the same order (Corynebacteriales) and phylum (Cyanobacteria) it was 0.55 and 0.009, respectively. Similarly, the ratio between the sizes of the core- and pan-genome also decreases as higher taxonomic levels are considered, ranging from 0.54 for H. pylori to 0.04 for Cyanobacteria.\n\nWe compared the clusters obtained using both approaches and the proteins assigned to them. The number of one-to-one relationships (indicating complete agreement) between SB and DAB clusters is indicated in Table 2 and ranges from 648 (for H. pylori) to 1680 (for Pseudomonas), corresponding to 50% and 25% of the pan-genome, respectively. This indicates that the results of SB and DAB clustering tend to be more similar at closer phylogenetic distances. However, more complicated cases occur when proteins in a single SB cluster are assigned to various DAB clusters, including singletons, and vice versa. An overview of the possible mismatches between SB and DAB clusters is given in Figure 5. The observed frequencies of the different types of cluster mismatches are given in Figure 6 (for SB clusters per DAB cluster) and Figure 7 (for DAB clusters per SB cluster). For L. 
monocytogenes we found 378 1d → 1s DAB cluster mismatches (Figure 5, panel A, top case), meaning that in those cases the sequences in a DAB cluster are a subset of the sequences in the corresponding SB cluster. This lower number of sequences in the DAB cluster could be due to, for instance, an insertion or expansion of a domain, leading to SB-clustered sequences with partly overlapping but distinct domain architectures, as depicted in Figure 1. Similarly, there are 399 1s → 1d clusters, which could be caused by a sequence similarity score that is too low, due either to horizontal acquisition of the gene1 or to fast protein evolution at the sequence level. Moving from species to order, the number of SB clusters assigned to six or more DAB clusters increases (and the converse is also true). In general, DAB clusters tend to be assigned to more than one SB cluster more often than the reverse. This is because, as the phylogenetic distance increases, sequence similarity decreases and detection of true orthology relationships becomes more difficult, while in many cases the domain architectures are preserved.\n\nMismatches of SB- and DAB-derived clusters (marked by S and D, respectively) can occur in two directions. Panel A: possible cases of mismatch when counting the number of SB clusters the sequences in a DAB cluster are assigned to. 1d → 1s denotes that all sequences from the D cluster are assigned to the same S cluster. 1d → Ns denotes that sequences in a single D cluster are assigned to N distinct S clusters with N ≥ 1. Similarly (panel B), 1s → Nd denotes that sequences in a single S cluster are assigned to N distinct D clusters with N ≥ 1.\n\nEach bar represents the relative frequency of one DAB cluster containing sequences assigned to {1, 2, ... , 5} and 6 or more SB clusters. 
Axis labels follow the notation in Figure 5.\n\nEach bar represents the relative frequency of one SB cluster containing sequences assigned to {1, 2, · · · , 5} and 6 or more DAB clusters. Axis labels follow the notation in Figure 5.\n\nProteins contained in a single DAB cluster but assigned to multiple SB clusters mostly contain ABC transporter-like (PF00005) or Major Facilitator Superfamily (MFS, PF07690) domains. This is not surprising considering that such generic functions are usually associated with high sequence diversity. Conversely, ABC transporters are found in multiple DAB clusters. However, many of them are grouped into a single SB cluster with ATPase domain-containing proteins (the 1s → Nd case).\n\nWe observed distinct architectures with one of two very similar domains, the GDSL-like Lipase/Acylhydrolase and the GDSL-like Lipase/Acylhydrolase family domains (PF00657 and PF13472, respectively), and those architectures were often clustered together by the SB approach. However, architectures containing both domains were also identified, pointing to a degree of functional difference as a result of convergent or divergent evolution. Still, the corresponding sequences remain similar enough to be indistinguishable when an SB approach is used.\n\nFor SB clustering we also observed cases of identical protein sequences not clustered together, probably because of the tie-breaking implementation when BBH are scored.\n\nIn all cases we found the sizes of both the pan- and the core-genome to be larger when an SB approach is used to identify gene clusters, and SB approaches lead to a larger number of singletons than DAB ones. This indicates that DAB clusters are assigned to several SB clusters, many of them consisting of just one protein.\n\nWhen going from the species to the phylum level, the ratio between the number of DAB and SB singletons changes from 0.48 and 0.41 (for H. pylori and L. 
monocytogenes, respectively) to 0.19 and 0.40 when considering organisms of a higher taxonomic level (Corynebacteriales and Cyanobacteria, respectively).\n\nWe investigated the predicted size of the pan-genome upon addition of new sequences. Heaps’ law regression can be used to estimate whether the pan-genome is open or closed25 through the fitting of the decay parameter α; α < 1 indicates openness of the pan-genome (meaning that many clusters possibly remain to be identified within the considered set of sequences), while α > 1 indicates a closed one; the α values are given in Table 3. In all cases the pan-genome is predicted to be open; however, the α values obtained using DAB clusters (αDAB) are systematically closer to one than the αSB values obtained with the standard sequence similarity approach.\n\nα < 1 indicates an open pan-genome\n\nDAB clusters can be labeled by their domain architecture, and since this is a formal description of functional equivalence, the results of independently obtained analyses can be combined. Figure 8 shows the results of a principal component analysis of the combined DAB clusters for selected genomes from seven taxa. The first two components account for a relatively low explained variance (29%); still, grouping of genomes from the same taxa is apparent. High functional similarity among genomes of the same species (H. pylori and L. monocytogenes) is reflected by the compact clustering, while phylogenetically more distant genomes appear scattered in the functional space defined by the principal components.\n\nPrincipal component analysis of functional similarities of 446 genomes based on the presence/absence of domain architectures in the corresponding genomes. 
The variance explained by the first two components is indicated on the axis labels.\n\n\nDiscussion\n\nWe have shown that domain architecture-based methods can be used as an effective approach to identify clusters of functionally equivalent proteins, leading to results similar to those obtained by classical methods based on sequence similarity. The DAB approach takes advantage of the large computational effort that has already been devoted to the identification and definition of protein domains in dedicated databases such as Pfam. Protein domain models are built using large-scale sequence comparisons, which is an extremely computationally intensive task. However, once the domain models are defined, mining a sequence for domain occurrences is a much less demanding task. Indeed, the task with the higher computational load (the definition of the domains) is performed only once, and the results can be stored and re-used for further analyses. This provides an effective, scalable approach for large-scale functional comparisons which is by and large independent of the phylogenetic distances between species. The chosen set of domain models and the database used as a reference greatly impact the results. Highly redundant databases, such as InterPro, increase the granularity of the obtained clusters, biasing the pan- and core-genome size estimates upwards and downwards, respectively, as shown in Table 1. The changes in the sizes of the core- and pan-genomes and their relationship with evolutionary distance do not depend on the particular method considered, and a decrease of the ratio between core- and pan-genome sizes is observed when phylogenetic distance increases.\n\nIn many cases a one-to-one correspondence could be established between DAB and SB clusters, indicating that the sequence can often be used as a proxy for function. 
At first this may seem a trivial result but it has a profound implication: domain model databases (in this case Pfam) contain enough information, encoded by known domain models, to represent the quasi-totality of the biological function encoded in the bacterial sequences analyzed here. However, it is important to stress that the comparisons were performed considering sequences with known domains, representing currently around 85% of the genome coding content, a number that will only increase in the future.\n\nA significant advantage of the DAB method over the SB method is that the domain architecture captured within a cluster can be used as a formal description of the function. Currently, more than 20% of all separable domains in the Pfam database are so-called domains of unknown function (DUFs). Despite this, in bacterial species they are often essential26. With the DAB method they are formally included and often semantically linked to one or more domains of known function.\n\nA content-wise formal labeling of DAB clusters makes seamless integration of multiple independently performed DAB analyses possible. This allows for a comparison of potential functionomes across taxonomic boundaries, as presented in Figure 8, while new genomes can be added at a computational cost of O(n), with n the number of genomes to be analyzed. On the other hand, the addition of a new genome using an SB approach requires a new set of all-against-all sequence comparisons, which comes at an O(n^2) computational cost.\n\nThe bimodal shape of the distributions presented in Figure 3 and Figure 4 indicates the relative roles of horizontal gene transfer and vertical descent in shaping bacterial genomes: the first peak accounts for sequences (or functions) present in only a small number of genome sequences, which were likely acquired by horizontal gene transfer. 
The second peak accounts for high-persistence genetic regions representing genes (or functions) belonging to the taxon core, which were likely acquired by vertical descent.\n\nA measure of the impact of vertical descent and horizontal gene transfer is provided by the ratio between the core- and pan-genome sizes. The number of singletons provides a measure of the number of genes horizontally acquired from species outside the considered group.\n\nTwo of the most prominent differences between the approaches are the number of retrieved singletons and the core- to pan-genome size ratio. Multiple members of the same taxon might acquire the same function through horizontal gene transfer27. This is likely to occur given that they would have similar physiological characteristics; hence they would tend to occupy a similar niche or, at least, one more similar than that of species from different taxa. As the origin of the horizontally acquired genes may vary for each organism, an SB approach will correctly recognize the heterologous origin of the corresponding sequences, and those will be assigned to singletons. However, the probabilistic hidden Markov models used for domain recognition are better at recognizing the functional similarity of the considered sequences and cluster them together.\n\nAnother indication of the relative impact of horizontal and vertical gene acquisition events is provided by the openness or closedness of the pan-genome. Values for the decay parameter α in Table 3 indicate a relatively large impact of horizontal gene transfer. 
Within the considered taxa we observed αDAB > αSB, meaning that the sequence diversity is larger than the functional diversity: upon addition of new genomes to the sample, the rate of addition of new sequence clusters appears higher than the rate of addition of new functions.\n\n\nConclusions\n\nAs protein domain databases have evolved to the point where DAB and SB approaches produce similar results for closely related organisms, the DAB approach provides a fast and efficient alternative to SB methods for identifying groups of functionally equivalent/related proteins for comparative genome analysis. The lower computational cost of DAB approaches makes them the better choice for large-scale comparisons involving hundreds of genomes.\n\nHighly redundant databases, such as InterPro, are best suited for domain-based protein annotation, but are not effective for DAB clustering if the goal is to identify clusters of functionally equivalent proteins. Currently, Pfam is the better alternative for this task.\n\nDifferences between DAB and SB approaches increase when the goal is to study bacterial groups spanning wider evolutionary distances. The functional pan-genome is more closed than the sequence-based pan-genome. The two methods treat horizontally transferred genes differently, and the DAB approach has the potential to detect functional equivalence even when sequence similarities are low.\n\nComplementing the standardly applied sequence similarity methods with a DAB approach pinpoints potential functional protein adaptations that may add to the overall fitness.\n\n\nData availability\n\nList of genomes used for the analysis at different phylogenetic levels. The genomes are grouped per taxonomic lineage used in this study.\n\nBacillus\n\nGCA_000523045 Bacillus subtilis BEST7003\n\nGCA_000782835 Bacillus subtilis\n\nGCA_000832885 Bacillus thuringiensis str. 
Al Hakam\n\nGCA_000473245 Bacillus infantis NRRL B-14911\n\nGCA_000832585 Bacillus anthracis\n\nGCA_000590455 Bacillus pumilus\n\nGCA_000831065 Bacillus bombysepticus\n\nGCA_000833275 Bacillus anthracis str. Turkey32\n\nGCA_000952895 Bacillus sp.\n\nGCA_000259365 Bacillus sp. JS\n\nGCA_000143605 Bacillus cereus biovar anthracis str. CI\n\nGCA_000186745 Bacillus subtilis BSn5\n\nGCA_000987825 Bacillus methylotrophicus\n\nGCA_000706725 Bacillus lehensis G1\n\nGCA_000815145 Bacillus sp. Pc3\n\nGCA_000496285 Bacillus toyonensis BCT-7112\n\nGCA_000742855 Bacillus mycoides\n\nGCA_000169195 Bacillus coagulans 36D1\n\nGCA_000835145 Bacillus amyloliquefaciens KHG19\n\nGCA_000321395 Bacillus subtilis subsp. subtilis str. BSP1\n\nGCA_000009045 Bacillus subtilis subsp. subtilis str. 168\n\nGCA_000293765 Bacillus subtilis QB928\n\nGCA_000025805 Bacillus megaterium DSM 319\n\nGCA_000747345 Bacillus sp. X1(2014)\n\nGCA_000833005 Bacillus amyloliquefaciens\n\nGCA_000408885 Bacillus paralicheniformis ATCC 9945a\n\nGCA_000742895 Bacillus anthracis str. Vollum\n\nGCA_000829195 Bacillus sp. OxB-1\n\nGCA_000800825 Bacillus sp. WP8\n\nGCA_000706705 Bacillus subtilis subsp. subtilis str. OH 131.1\n\nGCA_000338735 Bacillus subtilis XF-1\n\nGCA_000832445 Bacillus anthracis\n\nGCA_000747335 Bacillus anthracis\n\nGCA_000008505 Bacillus thuringiensis serovar konkukian str. 97-27\n\nGCA_000195515 Bacillus amyloliquefaciens TA208\n\nGCA_000209795 Bacillus subtilis subsp. natto BEST195\n\nGCA_000017425 Bacillus cytotoxicus NVH 391-98\n\nGCA_000877815 Bacillus sp. YP1\n\nGCA_000177235 Bacillus cellulosilyticus DSM 2522\n\nGCA_000344745 Bacillus subtilis subsp. subtilis 6051- HGW\n\nGCA_000227485 Bacillus subtilis subsp. subtilis str. 
RO-NN-1\n\nGCA_000494835 Bacillus amyloliquefaciens CC178\n\nGCA_000011145 Bacillus halodurans C-125\n\nGCA_000724485 Bacillus methanolicus MGA3\n\nGCA_000018825 Bacillus weihenstephanensis KBAB4\n\nGCA_000005825 Bacillus pseudofirmus OF4\n\nGCA_000017885 Bacillus pumilus SAFR-032\n\nGCA_000583065 Bacillus methylotrophicus Trigo-Cor1448\n\nGCA_000349795 Bacillus subtilis subsp. subtilis str. BAB-1\n\nGCA_000306745 Bacillus thuringiensis Bt407\n\nGCA_000011645 Bacillus licheniformis DSM 13 = ATCC 14580\n\nGCA_000497485 Bacillus subtilis PY79\n\nGCA_000009825 Bacillus clausii KSM-K16\n\nGCA_000227465 Bacillus subtilis subsp. spizizenii TU-B-10\n\nGCA_000971925 Bacillus subtilis KCTC 1028\n\nGCA_000972245 Bacillus endophyticus\n\nGCA_000242895 Bacillus sp. 1NLA3E\n\nGCA_000832485 Bacillus thuringiensis\n\nGCA_000830075 Bacillus atrophaeus\n\nGCA_000146565 Bacillus subtilis subsp. spizizenii str. W23\n\nCorynebacteriales\n\nGCA_000016005 Mycobacterium sp. JLS\n\nGCA_000758405 Mycobacterium abscessus subsp. bolletii\n\nGCA_000283295 Mycobacterium smegmatis str. MC2 155\n\nGCA_001021045 Corynebacterium testudinoris\n\nGCA_000341345 Corynebacterium halotolerans YIM 70093 = DSM 44683\n\nGCA_000525655 Corynebacterium falsenii DSM 44353\n\nGCA_000255195 Corynebacterium diphtheriae HC04\n\nGCA_000523235 Nocardia nova SH22a\n\nGCA_000026685 Mycobacterium leprae Br4923\n\nGCA_000980815 Corynebacterium camporealensis\n\nGCA_000328565 Mycobacterium sp. JS623\n\nGCA_000015405 Mycobacterium sp. KMS\n\nGCA_000987865 [Brevibacterium] flavum\n\nGCA_001020985 Corynebacterium mustelae\n\nGCA_001021065 Corynebacterium uterequi\n\nGCA_000177535 Corynebacterium resistens DSM 45100\n\nGCA_000011305 Corynebacterium efficiens YS-314\n\nGCA_000835265 Mycobacterium avium subsp. paratuberculosis\n\nGCA_000739455 Corynebacterium imitans\n\nGCA_000831265 Mycobacterium kansasii 662\n\nGCA_000819445 Corynebacterium humireducens NBRC 106098 = DSM 45392\n\nGCA_000770235 Mycobacterium avium subsp. 
avium\n\nGCA_000980835 Corynebacterium kutscheri\n\nGCA_000010225 Corynebacterium glutamicum R\n\nGCA_000590555 Corynebacterium argentoratense DSM 44202\n\nGCA_000247715 Gordonia polyisoprenivorans VH2\n\nGCA_000416365 Mycobacterium sp. VKM Ac-1817D\n\nGCA_000418365 Corynebacterium terpenotabidum Y-11\n\nGCA_000092225 Tsukamurella paurometabola DSM 20162\n\nGCA_000442645 Corynebacterium maris DSM 45190\n\nGCA_000277125 Mycobacterium intracellulare ATCC 13950\n\nGCA_000196695 Rhodococcus equi 103S\n\nGCA_000828995 Mycobacterium tuberculosis str. Kurono\n\nGCA_000006605 Corynebacterium jeikeium K411\n\nGCA_000022905 Corynebacterium aurimucosum\n\nGCA_001021025 Corynebacterium epidermidicanis\n\nGCA_000010105 Rhodococcus erythropolis PR4\n\nGCA_000092825 Segniliparus rotundus DSM 44985\n\nGCA_000758245 Mycobacterium bovis\n\nGCA_000184435 Mycobacterium gilvum Spyr1\n\nGCA_000829075 Mycobacterium avium subsp. hominissuis TH135\n\nGCA_000214175 Amycolicicoccus subflavus DQS3-9A1\n\nGCA_000769635 Corynebacterium ulcerans\n\nGCA_000626675 Corynebacterium glyciniphilum AJ 3170\n\nGCA_001026945 Corynebacterium pseudotuberculosis\n\nGCA_000026445 Mycobacterium liflandii 128FXT\n\nGCA_000013925 Mycobacterium ulcerans Agy99\n\nGCA_000954115 Rhodococcus sp. B7740\n\nGCA_000143885 Gordonia sp. KTR9\n\nGCA_000014565 Rhodococcus jostii RHA1\n\nGCA_000179395 Corynebacterium variabile DSM 44702\n\nGCA_000732945 Corynebacterium atypicum\n\nGCA_000723425 Mycobacterium marinum E11\n\nGCA_000230895 Mycobacterium rhodesiae NBB3\n\nGCA_000344785 Corynebacterium callunae DSM 20147\n\nGCA_000010805 Rhodococcus opacus B4\n\nGCA_000982715 Rhodococcus aetherivorans\n\nGCA_000298095 Mycobacterium indicus pranii MTCC 9506\n\nGCA_000833575 Corynebacterium singulare\n\nGCA_000023145 Corynebacterium kroppenstedtii DSM 44385\n\nCyanobacteria\n\nGCA_000317085 Synechococcus sp. PCC 7502\n\nGCA_000011385 Gloeobacter violaceus PCC 7421\n\nGCA_000014585 Synechococcus sp. 
CC9311\n\nGCA_000012465 Prochlorococcus marinus str. NATL2A\n\nGCA_000737535 Synechococcus sp. KORDI-100\n\nGCA_000013205 Synechococcus sp. JA-3-3Ab\n\nGCA_000021825 Cyanothece sp. PCC 7424\n\nGCA_000063505 Synechococcus sp. WH 7803\n\nGCA_000022045 Cyanothece sp. PCC 7425\n\nGCA_000316575 Calothrix sp. PCC 7507\n\nGCA_000316685 Synechococcus sp. PCC 6312\n\nGCA_000012505 Synechococcus sp. CC9902\n\nGCA_000317475 Oscillatoria nigro-viridis PCC 7112\n\nGCA_000063525 Synechococcus sp. RCC307\n\nGCA_000317695 Anabaena cylindrica PCC 7122\n\nGCA_000014265 Trichodesmium erythraeum IMS101\n\nGCA_000817325 Synechococcus sp. UTEX 2973\n\nGCA_000737575 Synechococcus sp. KORDI-49\n\nGCA_000317125 Chroococcidiopsis thermalis PCC 7203\n\nGCA_000017845 Cyanothece sp. ATCC 51142\n\nGCA_000020025 Nostoc punctiforme PCC 73102\n\nGCA_000018105 Acaryochloris marina MBIC11017\n\nGCA_000757865 Prochlorococcus sp. MIT 0801\n\nGCA_000317045 Geitlerinema sp. PCC 7407\n\nGCA_000012625 Synechococcus sp. CC9605\n\nGCA_000737595 Synechococcus sp. KORDI-52\n\nGCA_000317635 Halothece sp. PCC 7418\n\nGCA_000025125 Candidatus Atelocyanobacterium thalassa isolate ALOHA\n\nGCA_000010625 Microcystis aeruginosa NIES-843\n\nGCA_000317065 Pseudanabaena sp. PCC 7367\n\nGCA_000312705 Anabaena sp. 90\n\nGCA_000316515 Cyanobium gracile PCC 6307\n\nGCA_000316605 Leptolyngbya sp. PCC 7376\n\nGCA_000317025 Pleurocapsa sp. PCC 7327\n\nGCA_000009705 Nostoc sp. PCC 7120\n\nGCA_000013225 Synechococcus sp. JA-2-3B’a(2-13)\n\nGCA_000757845 Prochlorococcus sp. MIT 0604\n\nGCA_000317515 Microcoleus sp. PCC 7113\n\nGCA_000734895 Calothrix sp. 336/3\n\nGCA_000007925 Prochlorococcus marinus subsp. marinus str. CCMP1375\n\nGCA_000021805 Cyanothece sp. PCC 8801\n\nGCA_000019485 Synechococcus sp. PCC 7002\n\nGCA_000317655 Cyanobacterium stanieri PCC 7202\n\nGCA_000316625 Nostoc sp. PCC 7107\n\nGCA_000011465 Prochlorococcus marinus subsp. pastoris str. CCMP1986\n\nGCA_000316665 Rivularia sp. 
PCC 7116\n\nGCA_000317105 Oscillatoria acuminata PCC 6304\n\nGCA_000317435 Calothrix sp. PCC 6303\n\nGCA_000317555 Gloeocapsa sp. PCC 7428\n\nGCA_000478825 Synechocystis sp. PCC 6714\n\nGCA_000204075 Anabaena variabilis ATCC 29413\n\nGCA_000317575 Stanieria cyanosphaera PCC 7437\n\nGCA_000161795 Synechococcus sp. WH 8109\n\nGCA_000011345 Thermosynechococcus elongatus BP-1\n\nGCA_000317615 Dactylococcopsis salina PCC 8305\n\nGCA_000284135 Synechocystis sp. PCC 6803 substr. GT-I\n\nGCA_000024045 Cyanothece sp. PCC 8802\n\nGCA_000317495 Crinalium epipsammum PCC 9333\n\nGCA_000317675 Cyanobacterium aponinum PCC 10605\n\nGCA_000012525 Synechococcus elongatus PCC 7942\n\nEnterobacteriaceae\n\nGCA_000259175 Providencia stuartii MRSN 2154\n\nGCA_000214805 Serratia sp. AS13\n\nGCA_000330865 Serratia marcescens FGI94\n\nGCA_001010285 Photorhabdus temperata subsp. thracensis\n\nGCA_000364725 Candidatus Moranella endobia PCVAL\n\nGCA_000521525 Buchnera aphidicola str. USDA (Myzus persicae)\n\nGCA_000517405 Candidatus Sodalis pierantonius str. SOPE\n\nGCA_000012005 Shigella dysenteriae Sd197\n\nGCA_000196475 Photorhabdus asymbiotica\n\nGCA_000750295 Salmonella enterica subsp. enterica serovar Enteritidis\n\nGCA_000007885 Yersinia pestis biovar Microtus str. 91001\n\nGCA_000739495 Klebsiella pneumoniae\n\nGCA_000252995 Salmonella bongori NCTC 12419\n\nGCA_000270125 Pantoea ananatis AJ13355\n\nGCA_000215745 Enterobacter aerogenes KCTC 2190\n\nGCA_000092525 Shigella sonnei Ss046\n\nGCA_000020865 Edwardsiella tarda EIB202\n\nGCA_000023545 Dickeya dadantii Ech703\n\nGCA_000238975 Serratia symbiotica str. ’Cinara cedri’\n\nGCA_000975245 Serratia liquefaciens\n\nGCA_000006645 Yersinia pestis KIM10+\n\nGCA_000224675 Enterobacter asburiae LF7a\n\nGCA_000007405 Shigella flexneri 2a str. 
2457T\n\nGCA_001022275 Citrobacter freundii\n\nGCA_000963575 Klebsiella michiganensis\n\nGCA_000504545 Cronobacter sakazakii CMCC 45402\n\nGCA_000012025 Shigella boydii Sb227\n\nGCA_000814125 Enterobacter cloacae\n\nGCA_000987925 Yersinia enterocolitica\n\nGCA_000011745 Candidatus Blochmannia pennsylvanicus str. BPEN\n\nGCA_000255535 Rahnella aquatilis HX2\n\nGCA_000952955 Escherichia coli\n\nGCA_000695995 Serratia sp. FS14\n\nGCA_000648515 Citrobacter freundii CFNIH1\n\nGCA_001022295 Klebsiella oxytoca\n\nGCA_000147055 Dickeya dadantii 3937\n\nGCA_000348565 Edwardsiella piscicida C07-087\n\nGCA_000742755 Klebsiella pneumoniae subsp. pneumoniae\n\nGCA_000027225 Xenorhabdus bovienii SS-2004\n\nGCA_000247565 Wigglesworthia glossinidia endosymbiont of Glossina morsitans morsitans (Yale colony)\n\nGCA_000828815 Candidatus Tachikawaea gelatinosa\n\nGCA_000022805 Yersinia pestis D106004\n\nGCA_001006005 Serratia fonticola\n\nGCA_000018625 Salmonella enterica subsp. arizonae serovar 62:z4,z23:-\n\nGCA_000478905 Candidatus Pantoea carbekii\n\nGCA_000410515 Enterobacter sp. R4-368\n\nGCA_000148935 Pantoea vagans C9-1\n\nGCA_000444425 Proteus mirabilis BB2000\n\nGCA_000747565 Serratia sp. SCBI\n\nGCA_001022135 Kluyvera intermedia\n\nGCA_000757825 Cedecea neteri\n\nGCA_000294535 Pectobacterium carotovorum subsp. 
carotovorum PCC21\n\nGCA_000834375 Yersinia pseudotuberculosis YPIII\n\nGCA_000043285 Candidatus Blochmannia floridanus\n\nGCA_000093065 Candidatus Riesia pediculicola USDA\n\nGCA_000834515 Yersinia intermedia\n\nGCA_000759475 Pantoea rwandensis\n\nGCA_000027065 Siccibacter turicensis z3032\n\nGCA_000582515 Yersinia similis\n\nGCA_000300455 Kosakonia sacchari SP1\n\nHelicobacter pylori\n\nGCA_000148855 Helicobacter pylori SJM180\n\nGCA_000021165 Helicobacter pylori G27\n\nGCA_000185245 Helicobacter pylori SouthAfrica7\n\nGCA_000093185 Helicobacter pylori v225d\n\nGCA_000277365 Helicobacter pylori Shi417\n\nGCA_000498315 Helicobacter pylori BM012A\n\nGCA_000270065 Helicobacter pylori F57\n\nGCA_000392455 Helicobacter pylori UM032\n\nGCA_000277385 Helicobacter pylori Shi169\n\nGCA_000008525 Helicobacter pylori 26695\n\nGCA_000270045 Helicobacter pylori F32\n\nGCA_000148915 Helicobacter pylori Sat464\n\nGCA_000185225 Helicobacter pylori Lithuania75\n\nGCA_000600045 Helicobacter pylori oki102\n\nGCA_000600205 Helicobacter pylori oki828\n\nGCA_000192335 Helicobacter pylori 2018\n\nGCA_000827025 Helicobacter pylori\n\nGCA_000590775 Helicobacter pylori SouthAfrica20\n\nGCA_000270025 Helicobacter pylori F30\n\nGCA_000148665 Helicobacter pylori 908\n\nGCA_000392515 Helicobacter pylori UM037\n\nGCA_000392475 Helicobacter pylori UM299\n\nGCA_000262655 Helicobacter pylori XZ274\n\nGCA_000008785 Helicobacter pylori J99\n\nGCA_000685745 Helicobacter pylori\n\nGCA_000185205 Helicobacter pylori Gambia94/24\n\nGCA_000826985 Helicobacter pylori 26695-1\n\nGCA_000315955 Helicobacter pylori Aklavik117\n\nGCA_000498335 Helicobacter pylori BM012S\n\nGCA_000277405 Helicobacter pylori Shi112\n\nGCA_000224535 Helicobacter pylori Puno120\n\nGCA_000317875 Helicobacter pylori Aklavik86\n\nGCA_000600185 Helicobacter pylori oki673\n\nGCA_000196755 Helicobacter pylori B8\n\nGCA_000439295 Helicobacter pylori UM298\n\nGCA_000348885 Helicobacter pylori OK310\n\nGCA_000307795 Helicobacter pylori 
26695\n\nGCA_000013245 Helicobacter pylori HPAG1\n\nGCA_000392535 Helicobacter pylori UM066\n\nGCA_000185185 Helicobacter pylori India7\n\nGCA_000213135 Helicobacter pylori 83\n\nGCA_000685705 Helicobacter pylori\n\nGCA_000224575 Helicobacter pylori SNT49\n\nGCA_000600085 Helicobacter pylori oki112\n\nGCA_000023805 Helicobacter pylori 52\n\nGCA_000348865 Helicobacter pylori OK113\n\nGCA_000259235 Helicobacter pylori HUP-B14\n\nGCA_000020245 Helicobacter pylori Shi470\n\nGCA_000270005 Helicobacter pylori F16\n\nGCA_000192315 Helicobacter pylori 2017\n\nGCA_000685665 Helicobacter pylori\n\nGCA_000600165 Helicobacter pylori oki422\n\nGCA_000255955 Helicobacter pylori ELS37\n\nGCA_000021465 Helicobacter pylori P12\n\nGCA_000600145 Helicobacter pylori oki154\n\nGCA_000224555 Helicobacter pylori Puno135\n\nGCA_000011725 Helicobacter pylori 51\n\nGCA_000148895 Helicobacter pylori Cuz20\n\nGCA_000817025 Helicobacter pylori\n\nGCA_000178935 Helicobacter pylori 35A\n\nListeria monocytogenes\n\nGCA_000438745 Listeria monocytogenes\n\nGCA_000438705 Listeria monocytogenes\n\nGCA_001027125 Listeria monocytogenes\n\nGCA_000438725 Listeria monocytogenes\n\nGCA_000197755 Listeria monocytogenes\n\nGCA_001027245 Listeria monocytogenes\n\nGCA_001027085 Listeria monocytogenes\n\nGCA_001005925 Listeria monocytogenes\n\nGCA_000746625 Listeria monocytogenes\n\nGCA_000382925 Listeria monocytogenes\n\nGCA_000438665 Listeria monocytogenes\n\nGCA_000800335 Listeria monocytogenes\n\nGCA_001027165 Listeria monocytogenes\n\nGCA_000438605 Listeria monocytogenes\n\nGCA_000438585 Listeria monocytogenes\n\nGCA_000808055 Listeria monocytogenes\n\nGCA_000950775 Listeria monocytogenes\n\nGCA_001027065 Listeria monocytogenes\n\nGCA_000600015 Listeria monocytogenes\n\nGCA_001027205 Listeria monocytogenes\n\nGCA_000438685 Listeria monocytogenes\n\nGCA_001005985 Listeria monocytogenes\n\nGCA_000438625 Listeria monocytogenes\n\nGCA_000681515 Listeria monocytogenes\n\nGCA_000438645 Listeria 
monocytogenes\n\nGCA_000210815 Listeria monocytogenes\n\nPseudomonas\n\nGCA_000829885 Pseudomonas aeruginosa\n\nGCA_000510285 Pseudomonas monteilii SB3078\n\nGCA_000988485 Pseudomonas syringae pv. syringae B301D\n\nGCA_000013785 Pseudomonas stutzeri A1501\n\nGCA_000759535 Pseudomonas cremoricolorata\n\nGCA_000953455 Pseudomonas pseudoalcaligenes\n\nGCA_000981825 Pseudomonas aeruginosa\n\nGCA_000661915 Pseudomonas stutzeri\n\nGCA_000508205 Pseudomonas sp. TKP\n\nGCA_000014625 Pseudomonas aeruginosa UCBPP-PA14\n\nGCA_000019445 Pseudomonas putida W619\n\nGCA_000316175 Pseudomonas sp. UW4\n\nGCA_000498975 Pseudomonas mosselii SJ10\n\nGCA_000473745 Pseudomonas aeruginosa VRFPA04\n\nGCA_000691565 Pseudomonas putida\n\nGCA_000730425 Pseudomonas fluorescens\n\nGCA_000007805 Pseudomonas syringae pv. tomato str. DC3000\n\nGCA_000349845 Pseudomonas denitrificans ATCC 13867\n\nGCA_000026105 Pseudomonas entomophila L48\n\nGCA_000689415 Pseudomonas knackmussii B13\n\nGCA_000325725 Pseudomonas putida HB3267\n\nGCA_000412695 Pseudomonas resinovorans NBRC 106553\n\nGCA_000831585 Pseudomonas plecoglossicida\n\nGCA_000756775 Pseudomonas sp. 20_BN\n\nGCA_000590475 Pseudomonas stutzeri\n\nGCA_000829255 Pseudomonas aeruginosa\n\nGCA_000761155 Pseudomonas rhizosphaerae\n\nGCA_001038645 Pseudomonas stutzeri\n\nGCA_000264665 Pseudomonas putida ND6\n\nGCA_000007565 Pseudomonas putida KT2440\n\nGCA_000494915 Pseudomonas sp. VLB120\n\nGCA_000226155 Pseudomonas aeruginosa M18\n\nGCA_000213805 Pseudomonas fulva 12-X\n\nGCA_000194805 Pseudomonas brassicacearum subsp. brassicacearum NFM421\n\nGCA_000336465 Pseudomonas poae RE*1-1-14\n\nGCA_000828695 Pseudomonas protegens Cab57\n\nGCA_000800255 Pseudomonas parafulva\n\nGCA_000257545 Pseudomonas mandelii JR-1\n\nGCA_000012205 Pseudomonas savastanoi pv. 
phaseolicola 1448A\n\nGCA_000816985 Pseudomonas aeruginosa\n\nGCA_000746525 Pseudomonas alkylphenolia\n\nGCA_000496605 Pseudomonas aeruginosa PA1\n\nGCA_000204295 Pseudomonas mendocina NK-01\n\nGCA_000829415 Pseudomonas sp. StFLB209\n\nGCA_000012265 Pseudomonas protegens Pf-5\n\nGCA_000412675 Pseudomonas putida NBRC 14164\n\nGCA_000397205 Pseudomonas protegens CHA0\n\nGCA_000648735 Pseudomonas syringae pv. actinidiae ICMP 18884\n\nGCA_000012245 Pseudomonas syringae pv. syringae B728a\n\nGCA_000761195 Pseudomonas chlororaphis subsp. aurantiaca\n\nGCA_000818015 Pseudomonas balearica DSM 6083\n\nGCA_000219605 Pseudomonas stutzeri ATCC 17588 = LMG 11199\n\nGCA_000219705 Pseudomonas putida S16\n\nGCA_000511325 Pseudomonas sp. FGI182\n\nGCA_000508765 Pseudomonas aeruginosa LES431\n\nGCA_000297075 Pseudomonas pseudoalcaligenes CECT 5344\n\nGCA_000517305 Pseudomonas cichorii JBC1\n\nGCA_000963835 Pseudomonas chlororaphis\n\nGCA_000327065 Pseudomonas stutzeri RCH2\n\nGCA_000271365 Pseudomonas aeruginosa DK2\n\nStreptococcus\n\nGCA_000211015 Streptococcus pneumoniae SPN034183\n\nGCA_000210975 Streptococcus pneumoniae INV104\n\nGCA_000203195 Streptococcus gallolyticus subsp. gallolyticus ATCC BAA-2069\n\nGCA_001020185 Streptococcus pyogenes\n\nGCA_000253155 Streptococcus oralis Uo5\n\nGCA_000696505 Streptococcus equi subsp. zooepidemicus CY\n\nGCA_000463355 Streptococcus intermedius B196\n\nGCA_000698885 Streptococcus thermophilus ASCC 1275\n\nGCA_000014205 Streptococcus sanguinis SK36\n\nGCA_000007045 Streptococcus pneumoniae R6\n\nGCA_000306805 Streptococcus intermedius JTH08\n\nGCA_000196595 Streptococcus pneumoniae TCH8431/19A\n\nGCA_000262145 Streptococcus parasanguinis FW213\n\nGCA_001026925 Streptococcus agalactiae\n\nGCA_000251085 Streptococcus pneumoniae ST556\n\nGCA_000019025 Streptococcus pneumoniae Taiwan19F-14\n\nGCA_000211055 Streptococcus pneumoniae SPN994039\n\nGCA_000688775 Streptococcus sp. 
VT 162\n\nGCA_000231905 Streptococcus suis D12\n\nGCA_000026665 Streptococcus pneumoniae ATCC 700669\n\nGCA_000283635 Streptococcus macedonicus ACA-DC 198\n\nGCA_000014365 Streptococcus pneumoniae D39\n\nGCA_000019265 Streptococcus pneumoniae Hungary19A-6\n\nGCA_000299015 Streptococcus pneumoniae gamPNI0373\n\nGCA_000019985 Streptococcus pneumoniae CGSP14\n\nGCA_000463395 Streptococcus constellatus subsp. pharyngis C232\n\nGCA_000187935 Streptococcus parauberis NCFD 2020\n\nGCA_000253315 Streptococcus salivarius JIM8777\n\nGCA_000427055 Streptococcus agalactiae ILRI112\n\nGCA_000246835 Streptococcus infantarius subsp. infantarius CJ18\n\nGCA_000427075 Streptococcus agalactiae ILRI005\n\nGCA_000007465 Streptococcus mutans UA159\n\nGCA_000831165 Streptococcus anginosus\n\nGCA_000147095 Streptococcus pneumoniae 670-6B\n\nGCA_000817005 Streptococcus pneumoniae\n\nGCA_000180515 Streptococcus pneumoniae SPNA45\n\nGCA_000441535 Streptococcus lutetiensis 033\n\nGCA_000210955 Streptococcus pneumoniae OXC141\n\nGCA_000009545 Streptococcus uberis 0140J\n\nGCA_000648555 Streptococcus iniae\n\nGCA_000027165 Streptococcus mitis B6\n\nGCA_000018985 Streptococcus pneumoniae JJA\n\nGCA_000270165 Streptococcus pasteurianus ATCC 43144\n\nGCA_000479315 Streptococcus sp. I-P16\n\nGCA_000478925 Streptococcus anginosus subsp. whileyi MAS624\n\nGCA_000019825 Streptococcus pneumoniae G54\n\nGCA_000017005 Streptococcus gordonii str. Challis substr. CH1\n\nGCA_000479335 Streptococcus sp. 
I-G2\n\nGCA_000385925 Streptococcus oligofermentans AS 1.3089\n\nGCA_000210935 Streptococcus pneumoniae INV200\n\nGCA_000211035 Streptococcus pneumoniae SPN994038\n\nGCA_000221985 Streptococcus pseudopneumoniae IS7493\n\nGCA_000006885 Streptococcus pneumoniae TIGR4\n\nGCA_000018965 Streptococcus pneumoniae 70585\n\nGCA_000348705 Streptococcus pneumoniae PCS8235\n\nGCA_000210995 Streptococcus pneumoniae SPN034156\n\nGCA_000231925 Streptococcus suis ST1\n\nGCA_000019005 Streptococcus pneumoniae P1031\n\nGCA_000188715 Streptococcus dysgalactiae subsp. equisimilis ATCC 12394\n\nGCA_000026585 Streptococcus equi subsp. equi 4047",
"appendix": "Author contributions\n\n\n\nJJK, MSD, ES, PJS participated in the set-up of the research. JJK and MSD were responsible for the analysis. JJK, ES, PJS, MSD and VdS wrote the manuscript. All authors critically revised the manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis work was partly supported by the European Union’s Horizon 2020 research and innovation programme (EmPowerPutida, Contract No. 635536, granted to Vitor A P Martins dos Santos).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nSupplementary material\n\nCluster persistence is defined as the relative number of genomes with at least one protein assigned to the cluster. The plots show frequency of SSB clusters according to their persistence. Publicly available and complete genome sequences assigned to each taxon were selected so that phylogenetic diversity within the taxon was preserved, as described in materials and methods. 60 distinct genome sequences were considered for each taxon shown.\n\nThe plots show frequency of DAB clusters according to their persistence.\n\nEach bar represents the relative frequency of one SSB cluster containing sequences assigned to 1, 2, ... , 5 and 6 or more DAB clusters.\n\nEach bar represents the relative frequency of one DAB cluster containing sequences assigned to 1, 2, ... , 5 and 6 or more SSB clusters.\n\n\nReferences\n\nPuigbò P, Lobkovsky AE, Kristensen DM, et al.: Genomes in turmoil: quantification of genome dynamics in prokaryote supergenomes. BMC Biol. 2014; 12(1): 66. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGogarten JP, Doolittle WF, Lawrence JG: Prokaryotic evolution in light of gene transfer. Mol Biol Evol. 2002; 19(12): 2226–2238. PubMed Abstract | Publisher Full Text\n\nDutilh BE, Backus L, Edwards RA, et al.: Explaining microbial phenotypes on a genomic scale: GWAS for microbes. 
Brief Funct Genomics. 2013; 12(4): 366–380. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPallen MJ, Wren BW: Bacterial pathogenomics. Nature. 2007; 449(7164): 835–842. PubMed Abstract | Publisher Full Text\n\nJoshi T, Xu D: Quantitative assessment of relationship between sequence similarity and function similarity. BMC Genomics. 2007; 8(1): 222. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKuipers RK, Joosten HJ, Verwiel E, et al.: Correlated mutation analyses on super-family alignments reveal functionally important residues. Proteins. 2009; 76(3): 608–616. PubMed Abstract | Publisher Full Text\n\nGoodwin S, McPherson JD, McCombie WR: Coming of age: ten years of next-generation sequencing technologies. Nat Rev Genet. 2016; 17(6): 333–351. PubMed Abstract | Publisher Full Text\n\nYang S, Doolittle RF, Bourne PE: Phylogeny determined by protein domain content. Proc Natl Acad Sci U S A. 2005; 102(2): 373–378. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSnipen LG, Ussery DW: A domain sequence approach to pangenomics: applications to Escherichia coli [version 2; referees: 2 approved]. F1000Res. 2013; 1: 19. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKoehorst JJ: High throughput functional comparison of 432 genome sequences of pseudomonas using a semantic data framework. Submitted, 2016.\n\nSaccenti E, Nieuwenhuijse D, Koehorst JJ, et al.: Assessing the Metabolic Diversity of Streptococcus from a Protein Domain Point of View. PLoS One. 2015; 10(9): e0137908. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAddou S, Rentzsch R, Lee D, et al.: Domain-based and family-specific sequence identity thresholds increase the levels of reliable protein function transfer. J Mol Biol. 2009; 387(2): 416–430. PubMed Abstract | Publisher Full Text\n\nThakur S, Guttman DS: A De-Novo Genome Analysis Pipeline (DeNoGAP) for large-scale comparative prokaryotic genomics studies. BMC Bioinformatics. 2016; 17(1): 260. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nPonting CP, Russell RR: The natural history of protein domains. Annu Rev Biophys Biomol Struct. 2002; 31: 45–71. PubMed Abstract | Publisher Full Text\n\nEddy SR: Profile hidden Markov models. Bioinformatics. 1998; 14(9): 755–763. PubMed Abstract | Publisher Full Text\n\nVan Domselaar GH, Stothard P, Shrivastava S, et al.: BASys: a web server for automated bacterial genome annotation. Nucleic Acids Res. 2005; 33(Web Server issue): W455–W459. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHyatt D, Chen GL, Locascio PF, et al.: Prodigal: prokaryotic gene recognition and translation initiation site identification. BMC Bioinformatics. 2010; 11: 119. PubMed Abstract | Publisher Full Text | Free Full Text\n\nJones P, Binns D, Chang HY, et al.: InterProScan 5: genome-scale protein function classification. Bioinformatics. 2014; 30(9): 1236–40. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFinn RD, Coggill P, Eberhardt RY, et al.: The Pfam protein families database: towards a more sustainable future. Nucleic Acids Res. 2016; 44(D1): D279–D285. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHaft DH, Selengut JD, White O: The TIGRFAMs database of protein families. Nucleic Acids Res. 2003; 31(1): 371–373. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMitchell A, Chang HY, Daugherty L, et al.: The InterPro protein families database: the classification resource after 15 years. Nucleic Acids Res. 2015; 43(Database issue): D213–D221. PubMed Abstract | Publisher Full Text | Free Full Text\n\nEkseth OK, Kuiper M, Mironov V: orthAgogue: an agile tool for the rapid prediction of orthology relations. Bioinformatics. 2014; 30(5): 734–736. PubMed Abstract | Publisher Full Text\n\nvan Dongen S: Graph clustering by flow simulation. PHD Thesis, University of Utrecht, 2000. Reference Source\n\nSnipen L, Liland KH: micropan: an R-package for microbial pan-genomics. BMC Bioinformatics. 
2015; 16(1): 79. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTettelin H, Masignani V, Cieslewicz MJ, et al.: Genome analysis of multiple pathogenic isolates of Streptococcus agalactiae: implications for the microbial \"pan-genome\". Proc Natl Acad Sci U S A. 2005; 102(39): 13950–13955. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGoodacre NF, Gerloff DL, Uetz P: Protein domains of unknown function are essential in bacteria. MBio. 2014; 5(1): e00744–13. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSoucy SM, Huang J, Gogarten JP: Horizontal gene transfer: building the web of life. Nat Rev Genet. 2015; 16(8): 472–482. PubMed Abstract | Publisher Full Text"
}
|
[
{
"id": "15679",
"date": "01 Sep 2016",
"name": "Antonio Rosato",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe authors present a very detailed and insightful analysis of the performance of alignment-based vs domain-based methods for comparative genomics. For the two methods, the proteins encoded by a selection of genomes are clustered based on pairwise sequence alignments or on their domain architectures, respectively. The first method is in principle more accurate and has higher coverage, whereas the second method is significantly faster and thus more suitable to cope with the explosion of genome information.\nThe authors demonstrate that domain-based methods provide results that are well in line with alignment-based methods. Consequently, their speed advantage does not compromise accuracy. In addition, the authors suggest that the Pfam database works better than InterPro for the present clustering purpose.\nThis article can benefit from some improvements:\nIt is not clear to me why the labels within the colored boxes representing domains of Figure 1 differ in the top panel (Domain architectures) and the bottom panel (Domains)\n\nThe new genome annotations generated by the authors should be made available to allow others to reproduce their calculations. It would be useful to have some data on the overall difference with respect to the original annotation\n\nThere are no details on the parameters used for domain identification such as E-value cut-offs. The latter has a strong impact on the number of singletons (1). 
It would be even more useful if the authors provided VMs with the complete setup for the entire procedure (from reannotation to clustering)\n\nThe header SB is misaligned in Table 1. Why did the authors report the fraction of proteins containing at least one InterPro domain when the rest of the analysis is based on Pfam domains?\n\nI find the section \"Comparison of DAB and SB clusters\" difficult to read. In part this is due to the fact that the authors in the text describe actual numbers while Figures 6 and 7 report percentages. In particular, why should the \"horizontal acquisition of the gene\" reduce the sequence similarity score (i.e. increase the E-value of the blastp alignment)? Furthermore, preservation of domain architecture at high phylogenetic distances has been extensively analyzed in the literature. References should be added\n\nIt could be useful to combine Figures 6 and 7 to have a synoptic view\n\nTable 1 shows that InterPro domains provide pangenomes that are not only always larger than the pangenomes obtained from Pfam domains but sometimes even larger than SB-derived pangenomes (e.g. H. pylori or Cyanobacteria). How is this possible?\n\nThe low value of alpha in the Heaps regression for L. monocytogenes afforded by the DAB is striking and should be analyzed in more detail\n\nThe line break after \"transfer events\" in the second paragraph of the introduction is not needed\n\nIn the Supplementary material, SSB should be SB",
"responses": [
{
"c_id": "2276",
"date": "24 Nov 2016",
"name": "Jasper Koehorst",
"role": "Author Response",
"response": "It is not clear to me why the labels within the colored boxes representing domains of Figure 1 differ in the top panel (Domain architectures) and the bottom panel (Domains) In the older version the labels in the top referred to the domain names whereas the labels on the bottom contained the PFAM identifiers. The figure has been changed so that only one set of labels is presented. The new genome annotations generated by the authors should be made available to allow others to reproduce their calculations. It would be useful to have some data on the overall difference with respect to the original annotation. The reviewer raises a very interesting topic that has been the focus of a different study. We have performed a detailed analysis of the differences between the original and the de novo annotation in a set of 432 Pseudomonas genomes. In that case, an average difference of 153 genes per genome was detected. Differences in annotations were observed at all functional levels (EC numbers, GO terms and protein domains). The magnitude of the differences correlated with the date the original annotation. The manuscript is currently under review and we will include the reference as soon as it is published. The SAPP annotation framework used to generate these files can be found at http://semantics.systemsbiology.nl/. Extensive documentation is available at http://sapp.readthedocs.io. A section (reproducibility) has been added indicating the workflow to reproduce the analysis here presented. We have included how annotations are compared. There are no details on the parameters used for domain identification such as E-value cut-offs. The latter has a strong impact on the number of singletons (1). We agree that the choice of the E-value cut off plays a critical role on domain detection and greatly impacts the size of the core-genome. 
However, as reported in InterPro: “The signatures contained within InterPro are produced in different ways by different member databases, so their E-values and/or scoring systems cannot be meaningfully compared”; we have therefore selected the intrinsic cut-off within InterPro [Mitchell et al. 2015]. This has been mentioned in the Material and Methods section: Identification of domains was done using the intrinsic InterPro cut-off that represents in each case the E-values and the scoring systems of the member databases (Mitchell 2015).\n\nIt would be even more useful if the authors provided VMs with the complete setup for the entire procedure (from reannotation to clustering)\n\nThe SAPP annotation framework used to generate these files can be found at http://semantics.systemsbiology.nl/. Extensive documentation is available at: http://sapp.readthedocs.io. A section has been added indicating the workflow to reproduce the analysis here presented.\n\nThe header SB is misaligned in Table 1. Why did the authors report the fraction of proteins containing at least one InterPro domain when the rest of the analysis is based on Pfam domains?\n\nWe have modified Table 1 and included an additional column with the fraction of proteins containing at least one Pfam domain.\n\nI find the section \"Comparison of DAB and SB clusters\" difficult to read. In part this is due to the fact that the authors in the text describe actual numbers while Figures 6 and 7 report percentages. In particular, why should the \"horizontal acquisition of the gene\" reduce the sequence similarity score (i.e. increase the E-value of the blastp alignment)?\n\nWe have rephrased the sentence on horizontal gene acquisition; it now reads: Similarly, there are 399 1s → 1d clusters. Each of these cases represents a sequence cluster where all the sequences share the same domain architecture, but other sequences exist with the same architecture that have not been included in the cluster due to a too low similarity score. 
The low similarity between sequences with the same domain architecture could be due to a horizontal acquisition of the gene or to a fast protein evolution at the sequence level. Genes acquired from high phylogenetic distances could greatly vary in sequence while presenting the same domain architecture.\n\nFurthermore, preservation of domain architecture at high phylogenetic distances has been extensively analyzed in the literature. References should be added.\n\nThe following paragraph has been added to the introduction: Domain architectures have been shown to be preserved at large phylogenetic distances both in prokaryotes and eukaryotes (Koonin 2002, Kummerfeld 2009). This led to the use of protein domain architectures to classify and identify evolutionarily related proteins and to detect homologs even across evolutionarily distant species (Bjorklund 2005, Fong 2007, Song 2007, Lee 2009). Structural information encoded in domain architectures has also been deployed to accelerate sequence search methods and to provide better homology detection. Examples are CDART (Geer 2002), which finds homologous proteins across significant evolutionary distances using domain profiles rather than direct sequence similarity, or DeltaBlast (Boratyn 2012), where a database of pre-constructed position-specific score matrices is queried before searching a protein-sequence database. Considering protein domain content, order, recurrence and position has been shown to increase the accuracy of protein function prediction (Messih 2012) and has led to the development of tools for protein functional annotation, such as UniProt-DAAC (Dougan 2016), which uses domain architecture comparison and classification for the automatic functional annotation of large protein sets. 
The systematic assessment and use of domain architectures is enabled by databases containing protein domain information such as UniProt (Uniprot Consortium 2015), Pfam (Finn 2016), TIGRFAMs (Haft 2003) and InterPro (Mitchell 2015), SMART (Letunic 2015) and PROSITE (Sigrist 2012), which also provide a graphical view of domain architectures.\n\nIt could be useful to combine Figures 6 and 7 to have a synoptic view\n\nFigures 6 and 7 (and supplementary figures) have been combined.\n\nTable 1 shows that InterPro domains provide pangenomes that are not only always larger than the pangenomes obtained from Pfam domains but sometimes even larger than SB-derived pangenomes (e.g. H. pylori or Cyanobacteria). How is this possible?\n\nInterPro aggregates protein domain signatures from different databases, which leads to redundancy of the domain models. This redundancy causes overlaps between the entries and an increase of the granularity of the clusters retrieved: this can bias downwards the size of the pan-genome and upwards the size of the core-genome, as shown in Table 1.\n\nThe low value of alpha in the Heaps regression for L. monocytogenes afforded by the DAB is striking and should be analyzed in more detail\n\nWe thank the reviewer for this very interesting observation. We have investigated the low value of alpha in this case and the following paragraph has been added: The alpha DAB value retrieved for L. monocytogenes is strikingly low. Heaps law regression relies on the selected genomes providing a uniform sampling of the selected taxon, here the species. Analysis of the domain content of the selected genomes shows a divergent behaviour of strain LA111 (genome id GCA\\_000382925-1). This behaviour is clear in Figure 7 (PCA), where GCA\\_000382925-1 appears as an outlier of the L. monocytogenes group. Removal of this outlier leads to alpha DAB=1.04 and alpha SB=0.64, which emphasizes the need for uniform sampling prior to Heaps regression analysis. 
The line break after \"transfer events\" in the second paragraph of the introduction is not needed\n\nThe line break has been removed.\n\nIn the Supplementary material, SSB should be SB\n\nThis typo has been fixed."
}
]
},
{
"id": "15680",
"date": "06 Sep 2016",
"name": "Robert D. Finn",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe article by Koehorst et al. describes a comparison of two approaches for clustering genomes sequences for the purpose of performing comparative genomics. The principle behind the two approaches, sequenced based clustering and domain based clustering, is described well in the introduction. The motivation of the article is clear and well founded. However, the details provided about how domain assignments are actually performed and handled throughout the experiment generated so many questions, these have clouded the validity of any conclusions.\n\nHow was InterPro used to assign a domain architecture? As the database presents a hierarchy of protein families and domains, unlike Pfam and TIGRFAM, there are numerous overlaps between the entries. Some of these are trivial C-terminal to N-terminal overlaps, while others are complex arrangements that cannot be simply represented as described. If three overlapping domains from InterPro are in the same hierarchy, which domain is used? If all member databases are used, this will account for the explosion of clusters in the InterPro based-clustering seen in Table 1. If InterPro accessions are used (e.g. as seen in the condensed view of a sequence on the InterPro website) then numbers are surprising.\n\nHow were Family vs Domain “types” handled from InterPro or Pfam? In InterPro, type families tend to be near full length protein families. 
In Pfam, they represent a more heterogeneous bag of entries that are yet to be established as a ‘domain’.\n\nPfam has a notion of related families, termed clans. Here the entries may not be intended to represent functionally distinct domains, but rather can represent a collection of families representing a continuum of evolution. How are entries belonging to a clan handled? How would the results differ if entries in one clan were treated as a single entity, for example, all P-loop NTPases as CL0023? How does this influence the sequence cluster to domain architecture relationships (schematically shown in Figure 5)?\n\nWhy was the N-terminal starting position used to assess position of the domain? What is the effect of choosing the mid-point?\n\nBoth Pfam and TIGRFAM use HMMER version 3, which uses a local-local alignment algorithm. How are partial hits to an HMM handled? Would two partial domain matches that occur due to an insertion between two halves of a domain be treated differently (see Triant and Pearson, 2015)?\n\nOther comments:\nThe use of domain architectures as an approach for accelerating sequence searching is not that novel; for example, CD-ART has been available for many years. Domain architecture views have been present in most domain databases (e.g. Pfam, SMART, Prosite) for over a decade, and used in genomic contexts. A more extensive overview of the use of domain architectures in the field is desirable.\n\nThe composite graphs presented in Figures 6, 7 and supplementary figures use different scales, which makes the graphs hard to compare.\n\nWhen the domain-based clusters are compared to the sequence-based clusters, it would be interesting to understand whether the number of domains that make up the domain architecture influences the correlations to the sequence-based clusters. 
Do single domain architectures predominate in the 1:1 clusters?\n\nMany readers may be unaware of how the thresholds employed in InterProScan relate to the individual databases, so greater clarity is required.\n\nWhy is the versioned InterProScan described as a semantic wrapper?",
"responses": [
{
"c_id": "2277",
"date": "24 Nov 2016",
"name": "Jasper Koehorst",
"role": "Author Response",
"response": "Thank you for the review, we have responded to your comments below: How was InterPro used to assign a domain architecture? As the database presents a hierarchy of protein families and domains, unlike Pfam and TIGRFAM, there are numerous overlaps between the entries. Some of these are trivial C-terminal to N-terminal overlaps, while others are complex arrangements that cannot be simply represented as described. If three overlapping domains from InterPro are in the same hierarchy, which domain is used? If all member databases are used, this will account for the explosion of clusters in the InterPro based-clustering seen in Table 1. If InterPro accessions are used (e.g. as seen in the condensed view of a sequence on the InterPro website) then numbers are surprising. All member databases in InterPro were used. We partly took into account trivial N- terminal overlaps by alphabetically ordering the domains when distances between starting position were <3 amino acids. After analysing the results, we agree that this was not enough and this is the most likely cause of the explosion of this of clusters. As the reviewer suggests, taking the the full hierarchy of protein families and domains within InterPro would be required for comparative genome analysis based on domain architectures. We have now better explained the selection criteria in the Materials and Methods section: The positions (start and end on the protein sequence) of domains having Pfam, TIGRFAMs and InterPro identifiers were extracted through SPARQL querying of the graph database and domain architectures were retrieved for each protein individually. InterPro aggregates protein domain signatures from different databases. Here no pruning for redundancies has been done. Identification of domains was done using the intrinsic InterPro cut-off that represents in each case the e-values and the scoring systems of the member databases. 
The domain starting position was used to assess relative position in the case of overlapping domains; alphabetic ordering was used to order domains with the same starting position or when the distance between the starting position of overlapping domains was <3 amino acids. Labels indicating N-C terminal order of identified domains were assigned to each protein in such a way that the same labels were assigned to proteins sharing the same domain architecture. We have commented on this point in the discussion, where the use of InterPro is addressed. This paragraph now reads: The chosen set of domain models and the database used as a reference greatly impact the results. InterPro aggregates protein domain signatures from different databases, which leads to redundancy of the domain models. This redundancy causes overlaps between the entries and an increase of the granularity of the clusters retrieved: this can bias downwards the size of the pan-genome and upwards the size of the core-genome, as shown in Table 1. In InterPro this redundancy is taken into account by implementing a hierarchy of protein families and domains. The entries at the top of these hierarchies correspond to broad families or domains that share higher level structure and/or function; the entries at the bottom correspond to specific functional subfamilies or structural/functional subclasses of domains \\cite{mitchell_interpro_2015}. Using InterPro for DAB clustering would require taking into account the hierarchy of protein families and domains; however, this would pose challenges of its own and would require discrimination of the functional equivalence of different signatures within the same hierarchy. We have also added the following to the conclusion: To enable DAB approaches for highly structured databases, such as InterPro, the hierarchy of protein families and domains within has to be explicitly considered. How were Family vs Domain “types” handled from InterPro or Pfam? 
In InterPro, type families tend to be near full length protein families. In Pfam, they represent a more heterogeneous bag of entries that are yet to be established as a ‘domain’. No distinction has been introduced as there don’t seem to be general rules that apply to all cases. In the discussion section a paragraph has been added on the effects of the structure of the databases. Pfam has a notion of related families, termed clans. Here the entries may not be intended to represent functionally distinct domains, but rather can represent a collection of families representing a continuum of evolution. How are entries belonging to a clan handled? How would the results differ if entries in one clan were treated as a single entity, for example, all P-loop NTPases as CL0023? How does this influence the sequence cluster to domain architecture relationships (schematically shown in Figure 5)? The reviewer raises here an interesting point that we have now discussed. The following has been added to the first paragraph of the discussion section. Another source of redundancy is functionally equivalent domains from distantly related sequences. Pfam represents this notion through related families, termed clans, where the relationship may be defined by similarity of sequence, structure or profile-HMM. Clans might contain functionally equivalent domains, however it is not clear whether this is always the case as the criteria for clan definition include functional similarity but not functional equivalence. Members of a clan have diverging sequences and very often SB approaches would recognize the evolutionary distance between the sequences and group them in different clusters. If we were to assume that members of a clan are functionally equivalent and collect them in the same DA cluster, we would have a higher number of cases where a single DA cluster is split in multiple sequence clusters 1d→Ns. 
Also there would be a higher number of cases of sequence clusters with the same DA but not exactly matching the DA clusters (1s→1d cases). Why was the N-terminal starting position used to assess position of the domain? The following line has been rewritten in the Methods section: Labels indicating N-C terminal order of identified domains were assigned to each protein using the starting position of the domains: the same labels were assigned to proteins sharing the same domain architecture. What is the effect of choosing the mid-point? We have commented on this in Results and Discussion. The following paragraph has been added: The starting position of the domains was used to generate labels indicating N-C terminal order of identified domains. The labels were used only for clustering as proteins sharing the same labels were assigned to the same clusters. Choosing instead the mid-point or the C-terminal position could affect the labeling but not the obtained clusters. Both Pfam and TIGRFAM use HMMER version 3, which uses a local-local alignment algorithm. How are partial hits to an HMM handled? Would two partial domain matches that occur due to an insertion between two halves of a domain be treated differently (see Triant and Pearson, 2015)? In the discussion we have added a subsection on the limitations of DAB approaches. There we have added the following: Partial domain hits might arise as a result of alignment, annotation and sequence assembly artifacts (cite Triant et al.). To reduce the number of partial domain hits additional pruning could be implemented to distinguish these cases. However, this is an open problem that requires caution as it could influence the functional capacity of an organism and clustering approaches using DA. The use of domain architectures as an approach for accelerating sequence searching is not that novel, for example, CD-ART has been available for many years. Domain architecture views have been present in most domain databases (e.g. 
Pfam, SMART, Prosite) for over a decade, and used in genomic contexts. A more extensive overview of the use of domain architectures in the field is desirable. We have added a paragraph in the introduction regarding domain architectures, comparison of domain architectures and their use for sequence search. We have also discussed how these have been included in domain databases and, as also suggested by the first reviewer, the preservation of domain architectures at high phylogenetic distances. The following paragraph has been added to the introduction: Domain architectures have been shown to be preserved at large phylogenetic distances both in prokaryotes and eukaryotes (Koonin 2002, Kummerfeld 2009). This led to the use of protein domain architectures to classify and identify evolutionarily related proteins and to detect homologs even across evolutionarily distant species (Bjorklund 2005, Fong 2007, Song 2007, Lee 2009). Structural information encoded in domain architectures has also been deployed to accelerate sequence search methods and to provide better homology detection. Examples are CDART (Geer 2002), which finds homologous proteins across significant evolutionary distances using domain profiles rather than direct sequence similarity, or DeltaBlast (Boratyn 2012), where a database of pre-constructed position-specific score matrices is queried before searching a protein-sequence database. Considering protein domain content, order, recurrence and position has been shown to increase the accuracy of protein function prediction (Messih 2012) and has led to the development of tools for protein functional annotation, such as UniProt-DAAC (Doğan 2016) which uses domain architecture comparison and classification for the automatic functional annotation of large protein sets. 
The systematic assessment and use of domain architectures is enabled by databases containing protein domain information such as UniProt (Uniprot Consortium 2015), Pfam (Finn 2016), TIGRFAMs (Haft 2003) and InterPro (Mitchell 2015), SMART (Letunic 2015) and PROSITE (Sigrist 2012), that also provide graphical views of domain architectures. The composite graphs presented in Figures 6, 7 and supplementary figures use different scales, which makes the graphs hard to compare. Figures 6 and 7 have been combined (as have the supplementary figures). When the domain based clusters are compared to the sequence based clusters, it would be interesting to understand whether the number of domains that makes up the domain architecture influences the correlations to the sequence based clusters. Do single domain architectures predominate in the 1:1 clusters? We have looked into this and single domain architectures do predominate in the 1:1 clusters. A table has been added to the text (Table 3). Many readers may be unaware of how the thresholds employed in InterProScan relate to the individual databases, so greater clarity is required. This point was also raised by A. Rosato. We have further explained the selected thresholds in the materials and methods. Identification of domains was done using the intrinsic InterPro cut-off that represents in each case the e-values and the scoring systems of the member databases. Why is the versioned InterProScan described as a semantic wrapper? This line has been re-written; it now explains that the versioned InterProScan stores the output in the RDF data model."
}
]
},
{
"id": "15678",
"date": "15 Sep 2016",
"name": "David M. Kristensen",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe limitations of global sequence similarity based methods to identify proteins that perform similar functions are well-known. Thus, the approach described in this manuscript of using domain-based clustering of orthologous groups (DAB) represents an exciting and very welcome addition to the field. Or at least it will when it is fully developed, although this manuscript has not convinced me that it outperforms other methods at its current level of development, and I have several substantial reservations about some of its content:\nAs the first reviewer also mentioned, methods such as CDART and DELTA-BLAST (published in 2002 and 2012, respectively) have been available for many years. The latter even seems to aim to perform the exact same function as DAB, by considering domain architectures. How is DAB different or better? I suspect that DAB may have greater accuracy since it uses HMMs rather than PSSMs, but this remains to be shown, and DELTA-BLAST is far easier for a user to run, since it is available as a webserver.\n\nThe comparison performed in this manuscript appears to fall prey to a straw man argument. 
In some cases, but not all, re-writing the relevant sections of the manuscript would help to avoid any misconceptions in this regard.\na) The issue of replacing an O(n²) cost with an O(n) one upon addition of a new genome was dealt with over 15 years ago, so the statement \"On the other hand, addition of a new genome using an SB approach require a new set of all-against-all sequence comparisons which come at a O(n²) computational cost\" is false - at least as it is currently written. It is true that building groups of orthologs does require an initial O(n²) computational cost, but once those orthologous groups are formed, methods such as COGNITOR (first published in the year 2000) work extremely quickly and efficiently to assign genes in newly-sequenced genomes to existing groups. In fact, COGNITOR works in the exact same manner in which DAB uses pre-computed domain databases to achieve the much lower O(n) cost, although in COGNITOR's case it searches against a pre-computed database of orthologous groups (of which there are far fewer than domains, so with a smaller \"n\" it would actually be faster than DAB).\nIt should be noted that despite DAB's somewhat higher cost, it has the theoretical potential to achieve better accuracy than COGNITOR (at least in some cases) since as a global sequence similarity approach, the latter does not explicitly consider domain architecture. At least not in an automated fashion - doing so would require manual curation of its results, which is often done by careful researchers, but is not a process that is scalable to handle the ever-decreasing cost and ever-increasing amounts of genomic data. 
Although since a comparison with COGNITOR was not included in the manuscript, either in terms of speed or accuracy, it is unknown how much more useful DAB would be in practice.\nb) Even the initial O(n²) cost does not have to be terribly burdensome, since the SIMAP method pre-computes and stores BLAST results between all pairs of sequenced genomes anyway, and then uses efficient database retrieval methods to report the stored results. When a new genome is added, O(n) new comparisons have to be made - for a total accumulated cost of O(n²), although with the work spread out over many years - and these in turn are useful for many other purposes, thus mitigating the construction costs. For instance, the EGGNOG database uses this method to build groups of orthologs.\nc) Why was only a single SB method chosen to be a representative for this entire class of approaches? Multiple forms of DAB were tested, whereas the only SB method used for comparison was one that uses a strict e-value cutoff of 1e-5, in the form of OrthaGogue and the OrthoMCL method. Also, why was the latter chosen to be this single representative? The latter approach was designed (nearly a decade and a half ago) for eukaryotic organisms, and while it has been applied more recently to bacteria as well, it is by no means the only - or even necessarily the best - approach for prokaryotic genomes. One advantage that it has is that it is completely automated, and thus is \"easy\" for people to use (even if, as this manuscript points out, horribly slow due to the O(n²) procedure that it uses). 
On the other hand, methods like CDART and COGTRIANGLES are all also automated (the latter of which uses no arbitrary e-value cutoff - that is, the results are robust to e-values over an immense range such as 1e-5, 1, 10, or even well beyond that on up to 100, or even 1000), and some pre-computed databases (such as COGs, representing the protein families present in the last common ancestor of all cellular life several billions of years ago) even take advantage of further manual validation, and from which pre-computed groups can be identified in newly-sequenced genomes by the fully automated and even easier approaches such as DELTA-BLAST and COGNITOR. Is it at least possible that the poorer performance of SB methods in comparison to DAB as shown in the current manuscript is due to the choice of this particular SB method? I for one would have loved to see a comparison against the new release of the COGs database last year, since due to its being manually curated it acts as a sort of \"Gold Standard\" that can be tested against, with perhaps the EGGNOG groups being used as a more realistic measure of what a purely automated method can do without human supervision. Likely, DAB would fall somewhere in-between, and which would benefit the community of researchers who want to do comparative genomics of prokaryotic organisms to have a fully automated method that was demonstrated to surpass the existing fully automated methods. As it now stands though, DAB has only been shown to surpass OrthoMCL, which is not hard to do at all. 
Indeed, as the seventh paragraph of the Discussion section (starting \"Two of the most prominent...\") states, unlike DAB, the SB methods were not able to cluster together the proteins with functional similarity but little sequence identity, especially across wider taxonomic ranges - which of course is what would be expected from a SB method that uses an e-value cutoff of 1e-5.\nd) Above and beyond the choice of SB method, it also seems that there may have been a bug in its implementation. The manuscript states: \"For SB clustering we also observed the case of identical protein sequences not clustered together, probably because of the tie breaking implementation when BBH are scored.\" However, this was not supposed to happen, due to the within-species reciprocal BBH procedure that is used. In contrast, the tie breaking refers to between-species comparisons, but as shown in Figure 1 of the OrthoMCL paper (http://www.ncbi.nlm.nih.gov/pubmed/12952885), these two sources of information were supposed to have been combined together to form the final orthologous groups. If the proteins were highly similar (e.g., 99%) then perhaps a tie-breaking could be explained, but for 100% identical proteins - e.g., produced by a tandem duplication event - then they should have been collected into the group. One possibility is that this particular SB method simply was not designed to handle the large numbers of extremely closely-related genome assemblies that are available today, since at the time, very few instances of multiple genomic assemblies were available for the same species. If this explanation was demonstrated to be the reason why these identical proteins were not clustered together, that would be another reason for a user to choose to use DAB over this particular SB method. 
In any case (bug, design flaw, or something else), this event could greatly contribute to explaining some of the results that were observed whereby this single SB method found so many more singletons than DAB with Pfam - i.e., fixing the bug, or using some other SB method, may move many of those singletons into clusters. Although it would not explain why DAB with InterPro found even more singletons than this SB method?\n\nDAB has a lot of potential, but its limitations need to be made more clear:\na) Why and how is the matrix of domain architecture binarized? Specifically, what if multiple copies of a domain are present? And does order matter - such as the architectures shown in Figure 2 of \"A+B\" and \"B+A\"? So, would \"B+A+A\" be a different architecture? And, as another reviewer also pointed out, what about \"complicated\" domain topologies where domains are interrupted by the insertion of another domain? Another major aspect of partial topologies is if DAB only recognizes some but not all of a newly-discovered architecture. E.g., a protein with architecture A+B+C+D, where A is known but B, C and D domains are not yet known. How would this be handled by DAB? Would it be reduced to appear merely as a single-domain \"A\" architecture? If so, how could that be distinguished from an architecture such as A+Z, which would also be reduced to appear just as a single-domain A? It seems like global sequence similarity methods might be more useful in those particular scenarios? i.e., if all the above domains were the same length, and a coverage threshold was used, then A+B+C+D could not be put into the same group as A+Z and A. 
Therefore, DAB seems primarily useful to quickly extend known architectural types into a newly sequenced genome, but at the cost of not being able to work with unknown types.\nb) For newly sequenced genomes that are not yet well-characterized enough to have all of their domains present in the domain databases, DAB can be severely handicapped in comparison to global sequence similarity methods that do not have this limitation. In particular, Table 1 shows that up to nearly a fifth of the H. pylori and Corynebacteriales genomes are not able to be assigned to domain families. Even these numbers are merely lower-bound estimates, since brand-new architectures are expected to be discovered constantly, and yet these may incorporate at least one element that is known - such as the aforementioned A+B+C+D architecture, where only the A domain is represented in Pfam, but B and C and D are unknown. And yet it seems likely that even the fact that these domains are unknown would go unrecognized by the DAB approach - unless a factor is added to look for large segments of a gene that do not have matches in the databases of known domains. Therefore, the cost of DAB not being able to work with unknown architectural types might be quite high indeed. Worse, the exact value of that cost is also likewise unknown, and yet it would seem to be the single crucial piece of information that is most sorely needed in order to answer the question: do the benefits of DAB outweigh its costs?\n\nIf the goal is to bring together groups of proteins that have functional equivalence, then why was the only comparison that was done performed against the presence/absence membership of SB orthology approaches? Would it not have been better to actually measure the functional consistency observed within the SB groups, and within the DAB groups, in order to show that the latter was higher than the former? 
Many other methods that purport to improve upon the state-of-the-art orthology prediction process do just that - for instance Figure 4 of http://www.ncbi.nlm.nih.gov/pubmed/19148271 shows several comparisons with similarity of GO terms, enzyme nomenclature (EC), gene expression, and syntenic local neighborhood tests, with 12 different methods of orthology prediction. While neighborhood conservation is irrelevant for the issue of functional equivalence, the former three (or at least GO terms) would help to answer whether DAB is truly better than SB at the task of measuring functional equivalence. It would also help to answer whether this improved functional equivalence would be outweighed by the costs of being unable to handle unknown domain architectures, especially for highly divergent new genomes. If not, DAB may still be useful to check the consistency of existing orthologous groups in terms of their architecture, at least when domain architectures are expected to be completely known in advance - e.g., microevolutionary variations within a species where mutational events may disrupt a protein's function - but for other tasks such as the discovery of a new phylum of cellular life that contains radically different domain architectures, global similarity methods may be preferable instead.\n\nFinally, some minor points concerning Figure 2:\nthe vertical arrows seem to be pointing in the wrong direction - a gene sequence undoubtedly contains more information content than a mere functional description. e.g., if I were to give you a GO code for molecular function, or biological process, then I could not tell you whether the original gene sequence is closer to one type of bacteria vs another type; but if I had the original gene sequence, then I could answer that question as well as many more.\n\nI did not see a description of how amino acid coordinates are used anywhere else in the manuscript, either in DAB itself or in the comparison? 
In short, what does \"Structure\" have to do with anything, other than the general theoretical flow of \"sequence begets structure which begets function\"? If the purpose of Figure 2 is to describe the flowchart of DAB specifically though, it should focus only on the relevant elements. I suppose Structure could have meant how the sequence alignment was made, but if that were true, then DAB would only work for domain families for which a structure is available, instead of those for which only genomic or individual gene sequence has been provided.\n\nThe ordering also seems unclear - wouldn't BBHs inform HMM domains, which then in turn inform domain architectures? Or if starting with BBHs, then how could architectures possibly be known prior to knowing the domains themselves? Or if it should be read from top to bottom as shown, how exactly does one start with Function (e.g., a GO term) and then, somehow via Structure, thereby arrive at a Sequence alignment? Specifically, is a Pfam entry a \"Function\", from which the Sequence alignment is downloaded? Or are Function and the Sequence alignment both part of the starting Pfam entry (and then again, what does any of that have to do with Structure)? From which domains are found (but aren't Pfam entries domains to begin with?), and then BBHs are made from the domain architectures? (an extremely different way of doing the BBH procedure, which is normally done via Sequence alignments). In any case, as pointed out by other reviewers, the methodology used by DAB is not clearly explained in this figure, nor in the manuscript text.\n\nAlso, the last paragraph of the Discussion uses the word \"closeness\", but I think \"closedness\" was intended.",
"responses": [
{
"c_id": "2278",
"date": "24 Nov 2016",
"name": "Jasper Koehorst",
"role": "Author Response",
"response": "As the first reviewer also mentioned, methods such as CDART and DELTA-BLAST (published in 2002 and 2012, respectively) have been available for many years. The latter even seems to aim to perform the exact same function as DAB, by considering domain architectures. How is DAB different or better? I suspect that DAB may have greater accuracy since it uses HMMs rather than PSSMs, but this remains to be shown, and DELTA-BLAST is far easier for a user to run, since it is available as a webserver. Following the suggestions made by the other reviewers we have added a paragraph in the Introduction regarding domain architectures, comparison of domain architectures and their use for sequence search. We have also discussed on how these have been included in domain databases and on the preservation of domain architectures at high phylogenetic distances. We agree that most likely HMMs outperform PSSMs, however as the reviewer says, that is a topic that would required a dedicated investigation. Here our goal was to used domain architectures for functional comparative genomics, and we agree that a similar approach could be implemented using PSSM. Regarding usability, we have used SAPP (semantic annotation platform with provenance) for genome analysis and annotation. SAPP is able to store the results in the RDF data model, that can be then queried using SPARQL. This tool is available with a web interface and is available at http://semantics.systemsbiology.nl/ The comparison performed in this manuscript appears to fall prey to a straw man argument. In some cases, but not all, re-writing the relevant sections of the manuscript would help to avoid any misconceptions in this regard. 
The issue of replacing an O(n²) cost with an O(n) one upon addition of a new genome was dealt with over 15 years ago, so the statement \"On the other hand, addition of a new genome using an SB approach require a new set of all-against-all sequence comparisons which come at a O(n²) computational cost\" is false - at least as it is currently written. We have amended the above mentioned sentence to: On the other hand, addition of a new genome using an SB approach requires a new set of all-against-all sequence comparisons, which comes at an O(n²) computational cost. However, approaches have been proposed to overcome these shortcomings of SB methods, such as COGNITOR, which reduces the computational cost to O(n) by using pre-computed databases. It is true that building groups of orthologs does require an initial O(n²) computational cost, but once those orthologous groups are formed, methods such as COGNITOR (first published in the year 2000) work extremely quickly and efficiently to assign genes in newly-sequenced genomes to existing groups. In fact, COGNITOR works in the exact same manner in which DAB uses pre-computed domain databases to achieve the much lower O(n) cost, although in COGNITOR's case it searches against a pre-computed database of orthologous groups (of which there are far fewer than domains, so with a smaller \"n\" it would actually be faster than DAB). It should be noted that despite DAB's somewhat higher cost, it has the theoretical potential to achieve better accuracy than COGNITOR (at least in some cases) since as a global sequence similarity approach, the latter does not explicitly consider domain architecture. At least not in an automated fashion - doing so would require manual curation of its results, which is often done by careful researchers, but is not a process that is scalable to handle the ever-decreasing cost and ever-increasing amounts of genomic data. We have commented on the analogy between DAB and COGNITOR. 
In this respect, the DAB approach is similar to the approach implemented in COGNITOR, by searching against existing databases of domain architectures. Although since a comparison with COGNITOR was not included in the manuscript, either in terms of speed or accuracy, it is unknown how much more useful DAB would be in practice. The focus of the paper was not to propose a comparative analysis of different methods but rather to present and contextualize the use of domain architecture for comparative genomics. However, we want to stress that we are not claiming that DA methods are superior to SB but that they are an efficient and scalable alternative. Even the initial O(n²) cost does not have to be terribly burdensome, since the SIMAP method pre-computes and stores BLAST results between all pairs of sequenced genomes anyway, and then uses efficient database retrieval methods to report the stored results. When a new genome is added, O(n) new comparisons have to be made - for a total accumulated cost of O(n²), although with the work spread out over many years - and these in turn are useful for many other purposes, thus mitigating the construction costs. For instance, the EGGNOG database uses this method to build groups of orthologs. Why was only a single SB method chosen to be a representative for this entire class of approaches? Multiple forms of DAB were tested, whereas the only SB method used for comparison was one that uses a strict e-value cutoff of 1e-5, in the form of OrthaGogue and the OrthoMCL method. Also, why was the latter chosen to be this single representative? We have added the following to the discussion: To assess whether DAB results were consistent with those of SB methods we chose OrthaGogue as a representative of the latter class. 
Several tools such as COGNITOR and MultiPARANOID are available that implement different algorithmic solutions to the task of identifying homologous sequences; however, despite different implementations, they all rely on sequence similarity as a proxy for functional equivalence. Here we considered SB methods as a gold standard for functional comparative genomics, especially when organisms within close evolutionary proximity are considered. Our aim was to investigate whether using HMMs instead of sequence similarity would yield similar results, thereby justifying their use for large-scale functional genome comparisons. Regarding domain architectures, we have explored different alternatives, as we have seen that the chosen database or set of reference domains plays a critical role; for example, the low coverage of TIGRFAM prevented us from obtaining reasonable clusters. The latter approach was designed (nearly a decade and a half ago) for eukaryotic organisms, and while it has been applied more recently to bacteria as well, it is by no means the only - or even necessarily the best - approach for prokaryotic genomes. One advantage that it has is that it is completely automated, and thus is \"easy\" for people to use (even if, as this manuscript points out, horribly slow due to the O(n2) procedure that it uses). On the other hand, methods like CDART and COGTRIANGLES are also automated (the latter of which uses no arbitrary e-value cutoff - that is, the results are robust to e-values over an immense range such as 1e-5, 1, 10, or even well beyond that on up to 100, or even 1000), and some pre-computed databases (such as COGs, representing the protein families present in the last common ancestor of all cellular life several billion years ago) even take advantage of further manual validation, and from these, pre-computed groups can be identified in newly-sequenced genomes by fully automated and even easier approaches such as DELTA-BLAST and COGNITOR. 
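The complexity argument above (a one-off O(n2) group-construction step versus O(n) assignment of each new genome against precomputed groups) can be sketched as follows. This is a toy illustration with made-up sequences and a shared-k-mer score standing in for BLAST/HMM scoring; it is not the actual COGNITOR or DAB code, and all names and data are hypothetical:

```python
# Toy contrast: all-against-all comparison (quadratic in the number of
# proteins) versus assignment against a fixed set of precomputed group
# profiles (linear in the number of new proteins).

def all_against_all(proteins, score):
    """O(n^2): every protein is compared with every other protein."""
    hits = {}
    for i, a in enumerate(proteins):
        for b in proteins[i + 1:]:
            hits[(a, b)] = score(a, b)
    return hits

def assign_to_groups(new_proteins, group_profiles, score):
    """O(n * G): each new protein is scored only against the fixed set of
    precomputed group profiles (orthologous groups, or domain models)."""
    assignment = {}
    for p in new_proteins:
        best = max(group_profiles, key=lambda g: score(p, group_profiles[g]))
        assignment[p] = best
    return assignment

def kmer_score(a, b, k=3):
    """Number of shared 3-mers; a stand-in for a real similarity score."""
    ka = {a[i:i + k] for i in range(len(a) - k + 1)}
    kb = {b[i:i + k] for i in range(len(b) - k + 1)}
    return len(ka & kb)

# Hypothetical precomputed groups and a hypothetical new genome.
groups = {"COG0001": "MKTAYIAKQR", "COG0002": "GGHLLAQQPW"}
new_genome = ["MKTAYIAKQQ", "GGHLLAQQPF"]
print(assign_to_groups(new_genome, groups, kmer_score))
```

The point of the sketch is only the shape of the loops: once the groups exist, adding a genome touches each of its proteins once per group, never once per previously seen protein.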
Is it at least possible that the poorer performance of SB methods in comparison to DAB as shown in the current manuscript is due to the choice of this particular SB method? I for one would have loved to see a comparison against the new release of the COGs database last year, since due to its being manually curated it acts as a sort of \"Gold Standard\" that can be tested against, with perhaps the EGGNOG groups being used as a more realistic measure of what a purely automated method can do without human supervision. Likely, DAB would fall somewhere in between, which would benefit the community of researchers who want to do comparative genomics of prokaryotic organisms by providing a fully automated method demonstrated to surpass the existing fully automated methods. As it now stands though, DAB has only been shown to surpass OrthoMCL, which is not hard to do at all. Indeed, as the seventh paragraph of the Discussion section (starting \"Two of the most prominent...\") states, unlike DAB, the SB methods were not able to cluster together the proteins with functional similarity but little sequence identity, especially across wider taxonomic ranges - which of course is what would be expected from an SB method that uses an e-value cutoff of 1e-5. Above and beyond the choice of SB method, it also seems that there may have been a bug in its implementation. Consider the statement: \"For SB clustering we also observed the case of identical protein sequences not clustered together, probably because of the tie breaking implementation when BBH are scored.\" This was not supposed to happen, due to the within-species reciprocal BBH procedure that is used. In contrast, the tie breaking refers to between-species comparisons, but as shown in Figure 1 of the OrthoMCL paper (http://www.ncbi.nlm.nih.gov/pubmed/12952885), these two sources of information were supposed to have been combined together to form the final orthologous groups. 
If the proteins were highly similar (e.g., 99%) then perhaps a tie-breaking could be explained, but for 100% identical proteins - e.g., produced by a tandem duplication event - they should have been collected into the group. One possibility is that this particular SB method simply was not designed to handle the large numbers of extremely closely-related genome assemblies that are available today, since at the time, very few instances of multiple genomic assemblies were available for the same species. If this explanation was demonstrated to be the reason why these identical proteins were not clustered together, that would be another reason for a user to choose DAB over this particular SB method. In any case (bug, design flaw, or something else), this event could greatly contribute to explaining some of the results that were observed whereby this single SB method found so many more singletons than DAB with Pfam - i.e., fixing the bug, or using some other SB method, may move many of those singletons into clusters. Although, it would not explain why DAB with InterPro found even more singletons than this SB method. We have added a paragraph in the discussion regarding why the InterPro hierarchy has to be taken into account; we also mention this in the conclusion section. The hierarchical structure produces an increase in domain multiplicity, as many domains are related to each other. As a result, artificial variability in the DA is introduced, leading to a higher number of singletons. DAB has a lot of potential, but its limitations need to be made clearer. We have added a new section to the Discussion: Limitations of DAB approaches. Why and how is the matrix of domain architecture binarized? Specifically, what if multiple copies of a domain are present? We understand that our phrasing may have caused some confusion and we apologize for the lack of clarity. The matrix of domain architectures is only binarized (presence/absence) to compute the PCA shown in Fig. 
8, not to compare DAB and SB clustering. We have rephrased this in the Materials and Methods section: ...a binarized presence-absence matrix was obtained and used solely for principal component analysis. [..] does order matter - such as the architectures shown in Figure 2 of \"A+B\" and \"B+A\"? So, would \"B+A+A\" be a different architecture? And, as another reviewer also pointed out, what about \"complicated\" domain topologies where domains are interrupted by the insertion of another domain? Another major aspect of partial topologies is if DAB only recognizes some but not all of a newly-discovered architecture, e.g., a protein with architecture A+B+C+D, where the A domain is known but the B, C and D domains are not yet known. How would this be handled by DAB? Would it be reduced to appear merely as a single-domain \"A\" architecture? If so, how could that be distinguished from an architecture such as A+Z, which would also be reduced to appear just as a single-domain A? It seems like global sequence similarity methods might be more useful in those particular scenarios; i.e., if all the above domains were the same length, and a coverage threshold was used, then A+B+C+D could not be put into the same group as A+Z and A. Therefore, DAB seems primarily useful to quickly extend known architectural types into a newly sequenced genome, but at the cost of not being able to work with unknown types. For newly sequenced genomes that are not yet well-characterized enough to have all of their domains present in the domain databases, DAB can be severely handicapped in comparison to global sequence similarity methods that do not have this limitation. In particular, Table 1 shows that up to nearly a fifth of the H. pylori and Corynebacteriales genomes cannot be assigned to domain families. 
Even these numbers are merely lower-bound estimates, since brand-new architectures are expected to be discovered constantly, and yet these may incorporate at least one element that is known - such as the aforementioned A+B+C+D architecture, where only the A domain is represented in Pfam, but B, C and D are unknown. And yet it seems likely that even the fact that these domains are unknown would go unrecognized by the DAB approach - unless a factor is added to look for large segments of a gene that do not have matches in the databases of known domains. Therefore, the cost of DAB not being able to work with unknown architectural types might be quite high indeed. Worse, the exact value of that cost is likewise unknown, and yet it would seem to be the single crucial piece of information that is most sorely needed in order to answer the question: do the benefits of DAB outweigh its costs? The reviewer raises a very interesting point regarding how extensive available knowledge on protein domains is. The high agreement between the results of DAB and SB methods is only possible because databases of protein domains contain enough information. Still, we believe many domains remain to be identified, and in the scenarios the reviewer mentions DAB methods will be limited. We have added the following to the Discussion section, under the “Limitations of DAB approaches” header. Still, around 15% of the genome coding content corresponds to sequences with no identified protein domains. DAB approaches can be complemented with SB methods to consider these sequences, or even protein sequences with low domain coverage, possibly indicating the location of protein domains yet to be identified. We have extended the paragraph in the Materials and Methods where domain architectures are defined to further emphasize that N- to C-terminal domain order is an inherent part of the domain architecture definition. 
Labels indicating the N- to C-terminal order of identified domains were assigned to each protein using the starting position of the domains; the same labels were assigned to proteins sharing the same domain architecture. In the Introduction we have added a paragraph regarding the use of protein domain architecture in protein annotations, and we have included references to previous works showing that domain order is often key for the function of the protein and that domain duplications/insertions can also alter the function of the protein. Moreover, a similar point on how domain architectures were defined, and on the hierarchical relationships between protein domains, families and clans, has been raised by R. Finn, and a paragraph has been added in the Discussion (see answer to R. Finn’s comments). If the goal is to bring together groups of proteins that have functional equivalence, then why was the only comparison performed against the presence/absence membership of SB orthology approaches? Would it not have been better to actually measure the functional consistency observed within the SB groups, and within the DAB groups, in order to show that the latter was higher than the former? Many other methods that purport to improve upon the state-of-the-art orthology prediction process do just that - for instance, Figure 4 of http://www.ncbi.nlm.nih.gov/pubmed/19148271 shows several comparisons with similarity of GO terms, enzyme nomenclature (EC), gene expression, and syntenic local neighborhood tests, with 12 different methods of orthology prediction. While neighborhood conservation is irrelevant for the issue of functional equivalence, the former three (or at least GO terms) would help to answer whether DAB is truly better than SB at the task of measuring functional equivalence. 
It would also help to answer whether this improved functional equivalence would be outweighed by the costs of being unable to handle unknown domain architectures, especially for highly divergent new genomes. If not, DAB may still be useful to check the consistency of existing orthologous groups in terms of their architecture, at least when domain architectures are expected to be completely known in advance - e.g., microevolutionary variations within a species where mutational events may disrupt a protein's function - but for other tasks such as the discovery of a new phylum of cellular life that contains radically different domain architectures, global similarity methods may be preferable instead. We have added the following section dedicated to limitations of DAB methods: We have shown that domain architecture-based methods can be used as an effective approach to identify clusters of functionally equivalent proteins, leading to results similar to those obtained by classical methods based on sequence similarity. However, whether DAB methods are more accurate than SB methods at assessing functional equivalence will require further analysis. In this light, results of functional conservation for both approaches could be compared in terms of GO similarity and/or EC number. The performance of DAB methods may be sub-optimal when dealing with newly sequenced genomes that are not yet well-characterized enough to have all of their domains present in domain databases, since DAB methods will be unable to handle unknown architectural types. Around 15% of the genome coding content corresponds to sequences with no identified protein domains. DAB approaches can be complemented with SB methods to consider these sequences, or even protein sequences with low domain coverage, possibly indicating the location of protein domains yet to be identified. 
Since DAB methods rely on the constant upgrading of public resources like the UniProt and Pfam databases, an initial assessment of domain coverage appears to be a sine qua non condition for the application of these methods. DAB approaches could be used to assess the consistency of existing orthologous groups in terms of their domain architectures, at least when domain architectures are expected to be completely known in advance (for instance in the case of micro-evolutionary variations within a species where mutational events may disrupt a protein's function). For other purposes, such as the discovery of a new phylum of cellular life that contains radically different domain architectures, global similarity methods may be preferred. Finally, some minor points concerning Figure 2: The vertical arrows seem to be pointing in the wrong direction - a gene sequence undoubtedly contains more information content than a mere functional description; e.g., if I were to give you a GO code for molecular function, or biological process, then I could not tell you whether the original gene sequence is closer to one type of bacteria vs another type; but if I had the original gene sequence, then I could answer that question as well as many more. I did not see a description of how amino acid coordinates are used anywhere else in the manuscript, either in DAB itself or in the comparison. In short, what does \"Structure\" have to do with anything, other than the general theoretical flow of \"sequence begets structure which begets function\"? If the purpose of Figure 2 is to describe the flowchart of DAB specifically though, it should focus only on the relevant elements. I suppose Structure could have meant how the sequence alignment was made, but if that were true, then DAB would only work for domain families for which a structure is available, instead of those for which only a genomic or individual gene sequence has been provided. 
The ordering also seems unclear - wouldn't BBHs inform HMM domains, which then in turn inform domain architectures? Or if starting with BBHs, then how could architectures possibly be known prior to knowing the domains themselves? Or if it should be read from top to bottom as shown, how exactly does one start with Function (e.g., a GO term) and then, somehow via Structure, thereby arrive at a Sequence alignment? Specifically, is a Pfam entry a \"Function\", from which the Sequence alignment is downloaded? Or are Function and the Sequence alignment both part of the starting Pfam entry (and then again, what does any of that have to do with Structure)? From which domains are found (but aren't Pfam entries domains to begin with?), and then BBHs are made from the domain architectures? (an extremely different way of doing the BBH procedure, which is normally done via Sequence alignments). In any case, as pointed out by other reviewers, the methodology used by DAB is not clearly explained in this figure, nor in the manuscript text. We have edited the Figure for clarity incorporating the reviewer’s suggestions. Also, the last paragraph of the Discussion uses the word \"closeness\", but I think \"closedness\" was intended. The typo has been amended."
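As an aside on the N- to C-terminal ordering discussed in this exchange, a minimal sketch of how order-sensitive architecture labels can be built from domain hits sorted by start position. The hit tuples and function name here are hypothetical illustrations, not the manuscript's actual pipeline (which parses HMM scan output):

```python
def architecture(domain_hits):
    """Build a domain-architecture label from (domain_name, start_position)
    hits by sorting on start position, so that N-to-C order is preserved:
    'A+B' stays distinct from 'B+A', and repeats such as 'B+A+A' are kept."""
    ordered = sorted(domain_hits, key=lambda hit: hit[1])
    return "+".join(name for name, _start in ordered)

# Proteins with the same domain content but different order (or multiplicity)
# receive different architecture labels.
assert architecture([("A", 10), ("B", 120)]) == "A+B"
assert architecture([("B", 5), ("A", 90)]) == "B+A"
assert architecture([("B", 5), ("A", 90), ("A", 200)]) == "B+A+A"
```

Sorting on the hit's start coordinate is what makes the label order-sensitive; collapsing the label to a set instead would merge "A+B" and "B+A" and lose multiplicity.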
}
]
}
] | 1
|
https://f1000research.com/articles/5-1987
|
https://f1000research.com/articles/6-735/v1
|
22 May 17
|
{
"type": "Research Article",
"title": "Monitoring respiration and oxygen saturation in patients during the first night after elective bariatric surgery: A cohort study",
"authors": [
"Liselott Wickerts",
"Sune Forsberg",
"Frederic Bouvier",
"Jan G. Jakobsson",
"Liselott Wickerts",
"Sune Forsberg",
"Frederic Bouvier"
],
"abstract": "Background: Obstructive sleep apnoea and obesity hypoventilation are not uncommon in patients with obesity. Residual effects from surgery/anaesthesia and opioid analgesics may worsen respiration during the first nights after bariatric surgery. The aim of this observational study was to monitor respiration on the first postoperative night following elective bariatric surgery. Methods: This observational study aimed to determine the incidence and severity of hypo/apnoea. Oxygen desaturation was analysed by continuous respiratory monitoring. Results: 45 patients were monitored with portable polygraphy equipment (Embletta, ResMed) during the first postoperative night at the general ward following elective laparoscopic bariatric surgery. Mean SpO2 was 93%; 10 patients had a mean SpO2 of less than 92% and 4 of less than 90%. The lowest mean SpO2 was 87%. There were 16 patients with a nadir SpO2 of less than 85%, the lowest nadir SpO2 being 63%. An Apnoea-Hypopnoea Index (AHI) > 5 was found in 2 patients only (AHI 10 and 6), and an Oxygen Desaturation Index (ODI) > 5 was found in 3 patients (24, 10 and 6, respectively). 3 patients had more prolonged (> 30 seconds) apnoea with nadir SpO2 81%, 83% and 86%. Conclusions: A low mean SpO2 and short episodes of desaturation were not uncommon during the first postoperative night following elective bariatric surgery in patients without a history of night-time breathing disturbance. An AHI and/or ODI of more than 5 was only rarely seen. Night-time respiration monitoring provided sparse additional information. Thus, it seems reasonable to care for low-risk patients in the general ward already during their first night after bariatric surgery.",
"keywords": [
"obesity",
"bariatric surgery",
"general anaesthesia",
"postoperative polygraphy"
],
"content": "Introduction\n\nObesity is on the increase in the western world and is associated with the development of several diseases. It is a major risk factor for cardiovascular disease and diabetes, two of the leading causes of death globally. Many efforts have been made in trying to treat the condition (http://www.who.int/mediacentre/factsheets/fs311/en/). The adverse effects on pulmonary function are also well documented1. With an increasing BMI, the risk for chronic daytime hypoventilation escalates, characterized by an arterial carbon dioxide pressure (PCO2) exceeding 45 mmHg1. Complications include atelectasis, hypoxemia, pulmonary embolism and subsequent acute ventilation failure, and may evolve during the perioperative and postoperative phases1. Adverse effects are not confined to daytime; obesity is the most frequent predisposing factor for obstructive sleep apnoea syndrome (OSAS)1. Changes in breathing pattern and pulmonary function may indeed compromise oxygenation and increase the risk of oxygen desaturation.\n\nThe early postoperative period, with residual anaesthetic and analgesic effects, may put an obese patient who has just undergone a laparoscopic bariatric procedure at risk of respiratory compromise. It has been debated whether early care should be carried out in a high dependency ward or could safely be done in a general ward.\n\nThe present study aimed to monitor first postoperative night respiration, breathing patterns and oxygenation with sleep breathing equipment, in patients having undergone elective laparoscopic bariatric surgery. We wanted to assess whether we could define risk factors associated with hypo/apnoea and desaturation episodes.\n\n\nMethods\n\nThis is an explorative cohort study; the study protocol was approved by the ethical committee at Karolinska Institutet [Dnr 2015/118 – 31/1 La]. Patients were included in the study after having provided verbal and written informed consent. 
Each patient filled out a questionnaire to determine if there was a suspicion of OSAS preoperatively; the questionnaire included the Epworth Sleepiness Scale (ESS). Patients were monitored after surgery from 10 pm during the first postoperative night, until 6 am the next morning, with a portable OSAS breathing pattern monitor, Embletta (ResMed). The registration included information about airflow from a nasal cannula, thoracic respiratory movements from an elastic band around the thorax, and percutaneous O2 saturation and heart rate from a pulse oximeter.\n\nAll patients had anaesthesia and postoperative pain management in accordance with the routines of the department. All patients received premedication with 2 tablets of slow-release 655 mg paracetamol and 10 mg slow-release oral oxycodone prior to surgery. Patients were preoxygenated with FiO2 1.0 and CPAP of 6 cm H2O in the anaesthetic machine Aisys (GE Healthcare). Anaesthesia was induced with remifentanil target control infusion (TCI), set at a target of 6.0 ng/ml. After 90 seconds the patient was put to sleep with a bolus injection of propofol 2–3 mg/kg. When the patient became apnoeic, the ventilation mode was changed to pressure control ventilation-volume guaranteed (PCV-VG), and the patient received the neuromuscular blocker rocuronium, followed by endotracheal intubation. Anaesthesia was maintained with sevoflurane and remifentanil titrated to clinical signs of adequate anaesthesia, and to a BIS (Medtronic, Covidien BIS LoC 2 Channel) value between 25 and 50. All patients received postoperative nausea and vomiting (PONV) prophylaxis with betamethasone, ondansetron and droperidol. 10–15 mg of morphine was administered at the beginning of surgery, as a start dose for the postoperative pain relief regime. Patients had laparoscopic surgery, gastric bypass or sleeve gastrectomy, as decided by the surgeon. 
Postoperative care was provided in accordance with routines; fentanyl 25–50 micrograms was used as rescue analgesia and a further 1–5 mg morphine administered as needed. Postoperative respiratory care included oxygen supplementation to satisfactory saturation and, once per hour, blowing in a T-piece with a one-way valve mouthpiece (Intersurgical). Patients were transferred to the general ward when fully awake and with stable vital signs for 30 minutes. No intervention was done apart from the night breathing monitoring.\n\nAll data are presented as mean and standard deviation. The breathing data were evaluated in accordance with standard assessment:\n\nthe AHI was calculated as the number of hypopnoeas and/or apnoeas per hour\n\nthe ODI was calculated as the number of oxygen saturation decreases of > 4% for 30 seconds per hour\n\n\nResults\n\nThere were 52 patients initially included in the study, but 6 were excluded as they had a diagnosis of OSAS and 1 further patient was excluded as the procedure became merely a diagnostic laparoscopy. Forty-five patients were included in the study.\n\nThe mean age of the 45 patients studied was 39 years (range 19–68 years), and the mean BMI was 37 (range 32–53). Surgery and anaesthesia were uneventful; mean duration of surgery was 54 minutes (range 27–97 minutes) and mean duration of anaesthesia was 108 minutes (range 65–217 minutes). Most patients (n=35) had a sleeve gastrectomy and 10 had a gastric bypass procedure.\n\nMean time between end of anaesthesia and start of the polygraphy was 611 ± 122 minutes and mean duration of polygraphy monitoring was 463 ± 51 minutes.\n\nMean saturation (SpO2) during the polygraphy was 93% (range 87–97%). There were 10 patients with a mean SpO2 of less than 92% and 4 with a mean of less than 90%, the lowest mean SpO2 being 87%. There were 16 patients with a nadir SpO2 of less than 85%, the lowest nadir SpO2 being 63%.\n\nOnly 2 patients had an AHI > 5 (AHI 10 and 6). Both underwent sleeve gastrectomy. 
They also had an ODI > 5 (10 and 24, respectively). These patients had a mean saturation of 88% and 91% during the registration and SpO2 nadirs of 79% and 81%.\n\nIn total, 3 patients had an ODI > 5 (24, 10 and 6, respectively). Additionally, 3 patients had more prolonged (> 30 second) apnoea with nadir SpO2 81%, 83% and 86%.\n\nApart from a shorter duration of anaesthesia, we could not see any difference between the surgical procedures (see Table 1 and Table 2).\n\nGBP: gastric bypass; Sleeve: sleeve gastrectomy; f: female; m: male.\n\nGBP: gastric bypass; Sleeve: sleeve gastrectomy.\n\n\nDiscussion\n\nWe found, somewhat surprisingly, only very minor respiratory disturbances in this cohort of patients having undergone elective bariatric surgery. No patient had a hypopnoea index above thirty - only 2 had an AHI above 5. A majority of patients had a low oxygen saturation of around 93%, and short episodes of saturation below 85% were not uncommon. Thus, the main finding is mild hypoxia and episodes of desaturation, while hypo/apnoea monitoring does not provide much additional information. Signs of more pronounced airway compromise causing hypo/apnoea were only rarely seen. Low mean saturation and desaturation episodes were, however, not uncommon. None of our patients had further complications.\n\nThe risk of postoperative hypoxia has long been known. Jones et al. published a review in 1990 addressing the risk of low oxygenation during the early postoperative period, and its multifactorial etiology2. Low saturation and mild hypoxia have also been reported in previous studies.\n\nWe are not aware of any previous study explicitly monitoring hypo/apnoea during the first postoperative night after bariatric surgery. Zaremba et al. studied polysomnography in patients during the early postoperative course, while patients were still in the PACU3. 
They found that 64% of the 33 patients with complete postoperative polysomnography data had signs of sleep-disordered breathing, with an AHI greater than 5/h early after recovery from anaesthesia. The respiratory response to hypoxia and hypercapnia caused by airway obstruction is compromised following major surgery4. Chung et al. studied respiration during the preoperative stage and the first, second and third postoperative nights5. Our results for the first night after surgery are in line with these findings. Female patients and patients with no or mildly compromised nocturnal breathing showed only minor changes during the first postoperative night in Chung et al.'s study as well as in ours. Age, preoperative respiration disturbance and smoking were found to be risk factors for hypo/apnoea6.\n\nWe used a standard portable breathing and saturation monitor, the Embletta system. Portable systems have been shown to be an accurate tool for the assessment of sleep apnoea7. We did not include EEG monitoring and we did not attach the abdominal movement tracing; only nasal flow, thoracic movements and saturation were recorded. AHI scores were > 5 for mild, > 15 for intermediate and > 30 for significant airway compromise, or “sleep apnoea”. We assessed SpO2 < 94% as mild hypoxia. The monitoring was initiated during the first evening, an average of ten hours after the end of anaesthesia. It should also be acknowledged that the mean BMI in our cohort was 37, versus a mean BMI of 44 in the US studies3.\n\nNocturnal oxygen desaturations are not uncommon during the first postoperative night, also in patients undergoing other surgical procedures. Shirakami et al. found that oxygen desaturation was frequent in patients having undergone breast surgery8. Bowdle looked at a mixed group of ambulatory surgery patients and also found an increase in hypo/apnoea and desaturation in the first night after surgery9.\n\nThere are recent guidelines from the US association for sleep apnoea10. 
Germany has also addressed the importance of adequate perioperative care of obese patients at risk for sleep apnoea11. Preoperative evaluation by registration of hypo/apnoea is strongly recommended; however, the advice given around the early postoperative period is sparse.\n\nTo put our findings into perspective, our patients had a mean BMI of merely 37. Additionally, we had a majority (39 out of 45) of female patients, and BMI-associated AHI abnormalities are more commonly seen in males12. We did not use the abdominal movement monitoring band, to avoid additional abdominal pain. All patients had multi-modal analgesia, and opioids were administered as restrictively as possible while maintaining adequate pain control.\n\nIn conclusion, we have found that elective “low risk” obese patients who have had uncomplicated laparoscopic bariatric surgery have low saturation during the first postoperative night and may experience short episodes of oxygen saturation of less than 85%, but hypo/apnoea is rare and monitoring of obstruction seems not to be of major value. The potential clinical impact of the mild hypoxia and short episodes of desaturation requires further study.\n\nWe have followed the STROBE guidelines.\n\n\nData availability\n\nDataset 1. Raw sleep monitoring data from 45 patients that was used as a basis for the findings in this study. DOI, 10.5256/f1000research.11519.d16177513",
"appendix": "Author contributions\n\n\n\nJJ and SF were in charge of the study design protocol and processed applications. LW and SF had major roles in patient recruitment and collection of study data. All authors contributed equally to data analysis and manuscript preparation.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nFinancial support was obtained from the research department at TioHundra Norrtälje.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nReferences\n\nKoenig SM: Pulmonary complications of obesity. Am J Med Sci. 2001; 321(4): 249–79. PubMed Abstract | Publisher Full Text\n\nJones JG, Sapsford DJ, Wheatley RG: Postoperative hypoxaemia: mechanisms and time course. Anaesthesia. 1990; 45(7): 566–73. PubMed Abstract | Publisher Full Text\n\nZaremba S, Shin CH, Hutter MM, et al.: Continuous Positive Airway Pressure Mitigates Opioid-induced Worsening of Sleep-disordered Breathing Early after Bariatric Surgery. Anesthesiology. 2016; 125(1): 92–104. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNieuwenhuijs D, Bruce J, Drummond GB, et al.: Ventilatory responses after major surgery and high dependency care. Br J Anaesth. 2012; 108(5): 864–71. PubMed Abstract | Free Full Text\n\nChung F, Liao P, Elsaid H, et al.: Factors associated with postoperative exacerbation of sleep-disordered breathing. Anesthesiology. 2014; 120(2): 299–311. PubMed Abstract | Publisher Full Text\n\nChung F, Liao P, Yang Y, et al.: Postoperative sleep-disordered breathing in patients without preoperative sleep apnea. Anesth Analg. 2015; 120(6): 1214–24. PubMed Abstract | Publisher Full Text\n\nGjevre JA, Taylor-Gjevre RM, Skomro R, et al.: Comparison of polysomnographic and portable home monitoring assessments of obstructive sleep apnea in Saskatchewan women. Can Respir J. 2011; 18(5): 271–4. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nShirakami G, Teratani Y, Fukuda K: Nocturnal episodic hypoxemia after ambulatory breast cancer surgery: comparison of sevoflurane and propofol-fentanyl anesthesia. J Anesth. 2006; 20(2): 78–85. PubMed Abstract | Publisher Full Text\n\nBowdle TA: Nocturnal arterial oxygen desaturation and episodic airway obstruction after ambulatory surgery. Anesth Analg. 2004; 99(1): 70–6. PubMed Abstract | Publisher Full Text\n\nChung F, Memtsoudis SG, Ramachandran SK, et al.: Society of Anesthesia and Sleep Medicine Guidelines on Preoperative Screening and Assessment of Adult Patients With Obstructive Sleep Apnea. Anesth Analg. 2016; 123(2): 452–73. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFassbender P, Herbstreit F, Eikermann M, et al.: Obstructive Sleep Apnea-a Perioperative Risk Factor. Dtsch Arztebl Int. 2016; 113(27–28): 463–9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nErnst G, Bosio M, Salvado A, et al.: Difference between apnea-hypopnea index (AHI) and oxygen desaturation index (ODI): proportional increase associated with degree of obesity. Sleep Breath. 2016; 20(4): 1175–1183. PubMed Abstract | Publisher Full Text\n\nWickerts L, Forsberg S, Bouvier F, et al.: Dataset 1 in: Monitoring respiration and oxygen saturation in patients during the first night after elective bariatric surgery: A cohort study. F1000Research. 2017. Data Source"
}
|
[
{
"id": "23609",
"date": "20 Jun 2017",
"name": "Peter N. Benotti",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nHypoxemia in the immediate postoperative period is a well-known and feared complication after bariatric surgery. Risk factors for hypoxemia include patient age (likely because of the increase in closing volume with aging), loss of lung volume during induction and maintenance of general anesthesia, alterations in respiratory drive related to general anaesthetics and use of opiates, as well as the presence of obstructive sleep apnea.1,2 The exact incidence of hypoxemia and hypercarbia after bariatric surgery has not been well studied, but several studies in relatively small numbers of patients have documented worrisome levels of hypoxemia.2,3\nWickerts and co-workers4 have studied 45 good-risk bariatric surgery patients. The mean age of the cohort was 39, the mean BMI was 37, and patients with known obstructive sleep apnea were excluded. They all underwent uncomplicated and brief laparoscopic foregut procedures for obesity and were studied for 8 hours with pulse oximetry during the first postoperative night. Study findings included 22% with a mean SaO2 < 92%, 9% with mean SaO2 < 90% (lowest mean SaO2 was 87%), and 36% with nadir SaO2 < 85% (lowest nadir SaO2 was 63%). In addition, sleep disordered breathing events were documented in several patients. 
No clinical events related to hypoxia or sleep disordered breathing were recorded.\nThese findings are important because they document a small but significant risk of potentially dangerous postoperative hypoxia even in low-risk patients. Recently published guidelines recommend continuous monitoring with pulse oximetry in the early postoperative period.4 Guidelines for perioperative management of low-risk patients as studied here are lacking.\nAdditional studies of the epidemiology of perioperative hypoxemia and hypercarbia complicating bariatric surgery are needed in order to begin to identify potentially modifiable patient-specific risk factors and interventions, such as inspiratory muscle training3, which might reduce the risks of this potentially dangerous problem. The authors are encouraged to continue this survey and attempt to relate hypoxemia to risk factors. In addition, continued studies to include EKG monitoring with pulse oximetry might add to the clinical significance of these findings. The authors comment that the observed episodes of desaturation were brief, but they do not share any data regarding the duration of desaturations. I would be concerned about the possible clinical impact of 30 seconds of apnea in an unmonitored setting on the initial postoperative night following bariatric surgery.\nIn this era of cost constraints and limited patient access to bariatric surgery, cost reduction strategies designed to encourage same-day bariatric surgery procedures are popular. Improved ability to recognize and address risk factors for postoperative hypoxemia should be an important component of the selection process for accelerated care.\nThe phrase that “hypoxia not only stops the engine, but destroys the machinery” has been attributed to JBS Haldane. 
Case reports of respiratory arrest and sudden unexplained fatality after bariatric surgery suggest that he was correct.\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nIs the study design appropriate and is the work technically sound? Partly\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nNot applicable\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": [
{
"c_id": "2812",
"date": "20 Jun 2017",
"name": "Jan Jakobsson",
"role": "Author Response F1000Research Advisory Board Member",
"response": "Dear Referee,\n\nThank you for your important and constructive comments on our paper. Bariatric surgery is increasing in volume, and enhanced recovery pathways are not uncommonly used. The aim of our study was to assess respiration/oxygenation in \"low risk patients\" undergoing elective laparoscopic \"low risk bariatric surgery\", and my expectation was indeed that we would observe more. Still, we agree that even the observations found should be interpreted in context: avoiding patient risk. Unfortunately, we did not monitor the duration of desaturation; the portable device used did not log it. Hypopnea time was, however, registered and was prolonged in two patients. We did not monitor ECG and/or signs of myocardial ischemia, but there were no complaints of chest pain or any other signs/symptoms alerting us to suspect myocardial or other ischemic events. We were not able to identify explicit risk factors in this group of patients. The literature is extensive and we are more than happy to add the suggested papers. We are planning to continue our efforts looking at first-night respiratory events and will add ECG in these studies. I hope this response is acceptable, and further suggestions for future studies are most welcome.\n\nBest regards,\nJan Jakobsson, on behalf of the authors"
}
]
},
{
"id": "23655",
"date": "20 Jun 2017",
"name": "Frances Chung",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThis observational study sought to determine the incidence and severity of hypopnea or apnea on the first night after surgery, in patients who did not have a diagnosis of OSA undergoing elective laparoscopic bariatric surgery. They monitored 45 patients with portable polygraphy during the first postoperative night. They report that oxygen desaturation and short episodes of desaturation occurred in some patients, but there were few patients with an AHI or ODI greater than five. They conclude that respiratory monitoring for low risk patients on the general ward provides little additional information.\n\nAlthough this is an interesting study, this manuscript can be improved by providing more details in the Methods and Results. In the Introduction, the authors state that they wished to assess risk factors associated with hypopnea or apnea and desaturation. However, the presence of co-morbidities is not mentioned and there is no analysis of the risk factors associated with the patients who experienced desaturation/apnea/hypopnea. The questionnaire that was used to determine the risk of OSAS is indicated to be the Epworth Sleepiness Scale. The validation of this scale to screen for sleep apnea is relatively modest. It is not clear whether patients with obstructive sleep apnea were excluded from the study after use of the questionnaire. Is sleep apnea one of the exclusion criteria of the study? 
The results of the questionnaire to determine risk of OSAS should be reported; it is unclear whether the patients who participated in the study were at low risk, may be at risk, or have undiagnosed OSAS.\n\nThe definition of apnea/hypopnea should be stated. The explanation for the sample size should be added. The number of patients receiving oxygen on the ward should be added, especially as it is mentioned in the methods section that patients received oxygen supplementation postoperatively. In Tables 1 and 2, the authors compare gastric bypass vs. sleeve gastrectomy; however, they have not stated whether they expected a difference between these 2 groups. There are some grammatical errors in the manuscript.\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nIs the study design appropriate and is the work technically sound? Partly\n\nAre sufficient details of methods and analysis provided to allow replication by others? Partly\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nPartly\n\nAre all the source data underlying the results available to ensure full reproducibility? Partly\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": [
{
"c_id": "2815",
"date": "21 Jun 2017",
"name": "Jan Jakobsson",
"role": "Author Response F1000Research Advisory Board Member",
"response": "Dear Referee,\n\nThank you for your review and most appreciated/important comments.\n\nPatients with known sleep-related respiratory disturbances and diagnosed or suspected obstructive sleep apnea were excluded. We used the ESS scale for screening: 22 of our patients had an ESS of 0-5, 14 had a score of 6-10 and 6 patients had an ESS of more than 10. The highest ESS was 16, in one 26-year-old female patient with a BMI of 41. She had a mean SpO2 of 92% and a nadir SpO2 of 79%. She did not show any hypopneas or apneas during the observation period. ESS is missing for 3 patients.\n\nApnea was classified in accordance with the American Academy of Sleep Medicine (AASM) as a drop in the peak signal excursion by ≥ 90% of the pre-event baseline air flow signal; the duration of the ≥ 90% drop in sensor signal must be ≥ 10 seconds. Hypopnea was classified as a drop in the peak signal excursion by ≥ 30% of the pre-event baseline; the duration of the ≥ 30% drop in signal excursions must be ≥ 10 seconds (Berry et al. 2012).\n\nBerry RB, Budhiraja R, Gottlieb DJ, Gozal D, Iber C, Kapur VK, et al. Rules for scoring respiratory events in sleep: update of the 2007 AASM Manual for the Scoring of Sleep and Associated Events. Deliberations of the Sleep Apnea Definitions Task Force of the American Academy of Sleep Medicine. J Clin Sleep Med. American Academy of Sleep Medicine; 2012 Oct 15;8(5):597–619.\n\nWe have analysed the results also taking the ESS into account, but we failed to see any clear association between ESS and respiration/saturation; see Table 3.\n\nTable 3\n | ESS 0-5 (n=22) | ESS 6-10 (n=14) | ESS > 10 (n=6)\nAge (years) | 36 ± 13 | 40 ± 10 | 43 ± 18\nBMI | 35 ± 5 | 36 ± 4 | 38 ± 3\nMean SpO2 | 92 ± 2.4 | 93 ± 2.1 | 93 ± 1.9\nNadir SpO2 | 85 ± 6.1 | 84 ± 9.5 | 84 ± 5.1\nAHI | 0.7 ± 2.3 | 0.1 ± 0.4 | 0.2 ± 0.4\nODI | 1.7 ± 5.3 | 0.4 ± 1.1 | 0\n\nWe did not know whether the surgical procedure had an impact and therefore also tested for surgical technique. We hope that these responses are in line with your expectations; an updated manuscript is on the way.\n\nBest regards,\nJan Jakobsson, on behalf of the authors"
}
]
}
] | 1
|
https://f1000research.com/articles/6-735
|
https://f1000research.com/articles/6-456/v1
|
10 Apr 17
|
{
"type": "Review",
"title": "Recent insights into the implications of metabolism in plasmacytoid dendritic cell innate functions: Potential ways to control these functions",
"authors": [
"Philippe Saas",
"Alexis Varin",
"Sylvain Perruche",
"Adam Ceroi",
"Alexis Varin",
"Sylvain Perruche",
"Adam Ceroi"
],
"abstract": "There are more and more data concerning the role of cellular metabolism in innate immune cells, such as macrophages or conventional dendritic cells. However, few data are available currently concerning plasmacytoid dendritic cells (PDC), another type of innate immune cells. These cells are the main type I interferon (IFN) producing cells, but they also secrete other pro-inflammatory cytokines (e.g., tumor necrosis factor or interleukin [IL]-6) or immunomodulatory factors (e.g., IL-10 or transforming growth factor-β). Through these functions, PDC participate in antimicrobial responses or maintenance of immune tolerance, and have been implicated in the pathophysiology of several autoimmune diseases. Recent data support the idea that the glycolytic pathway (or glycolysis), as well as lipid metabolism (including both cholesterol and fatty acid metabolism) may impact some innate immune functions of PDC or may be involved in these functions after Toll-like receptor (TLR) 7/9 triggering. Some differences may be related to the origin of PDC (human versus mouse PDC or blood-sorted versus FLT3 ligand stimulated-bone marrow-sorted PDC). The kinetics of glycolysis may differ between human and murine PDC. In mouse PDC, metabolism changes promoted by TLR7/9 activation may depend on an autocrine/paracrine loop, implicating type I IFN and its receptor IFNAR, explaining a delayed glycolysis. Moreover, PDC functions can be modulated by the metabolism of cholesterol and fatty acids. This may occur via the production of lipid ligands that activate nuclear receptors (e.g., liver X receptor [LXR]) in PDC or through limiting intracellular cholesterol pool size (by statins or LXR agonists) in these cells. Finally, lipid-activated nuclear receptors (i.e., LXR or peroxisome proliferator activated receptor) may also directly interact with pro-inflammatory transcription factors, such as NF-κB. 
Here, we discuss how glycolysis and lipid metabolism may modulate PDC functions and how this may be harnessed in pathological situations where PDC play a detrimental role.",
"keywords": [
"plasmacytoid dendritic cells",
"immunometabolism",
"cholesterol",
"fatty acid",
"LXR",
"PPAR",
"type I interferon",
"glycolysis"
],
"content": "1. Introduction\n\nMore and more data are available concerning the role of cellular metabolism in innate immune cells, such as macrophages or conventional dendritic cells (cDC)1–3. However, few data are available currently concerning plasmacytoid dendritic cells (PDC). PDC belong to the family of dendritic cells (DC) and possess specific features that distinguish them from cDC. PDC represent the main type I interferon (IFN) secreting cells and play a critical role in antimicrobial immune responses. The involvement of PDC through IFN-α secretion has also been reported in several autoimmune diseases (see section 2.4). Furthermore, PDC release other pro-inflammatory cytokines, as well as immunoregulatory factors. In these ways, they may exert pro-inflammatory functions or, on the contrary, participate in tolerance mechanisms. In this Review article, based on recent publications4–6, we will discuss how the innate immune functions of PDC may be modulated by or dependent on the glycolytic pathway (also known as glycolysis) and lipid metabolism, including the metabolism of cholesterol and fatty acids. Before that, we will describe the innate functions of PDC. PDC also have the capacity to present antigens to T cells, to polarize CD4+ helper T cell responses, as well as to interact with B cells7. These functions will not be discussed in this article, since insufficient data are available concerning the impact of metabolism on the capacity of PDC to interact with the adaptive immune system. Although the “immunometabolism” (as defined in Ref#2) includes six metabolic pathways (glycolysis, the tricarboxylic acid [TCA] cycle, the pentose phosphate pathway, fatty acid oxidation, fatty acid synthesis and amino acid metabolism) that influence immune cell effector functions2, this article will focus on glycolysis and lipid metabolism (extended to the cholesterol metabolism). 
Concerning amino acid metabolism, PDC innate functions have been shown to be modulated by mammalian target of rapamycin (mTOR) signaling. This central metabolic regulator, mTOR can sense amino acid sufficiency in lysosomes, and promotes mRNA translation and lipid synthesis to support cell growth and proliferation2. Furthermore, mTOR, through its association with regulatory-associated protein of mTOR (RAPTOR), constitutes the mTOR complex 1 (mTORC1), which is connected with other metabolic pathways (see section 3.1). This will be discussed briefly, since the role of mTOR in innate immune cell functions has been reviewed recently3,8. Finally, concerning amino acid metabolism, PDC are able to sense amino acid deficiency through their expression of GCN2 (general control nonderepressible 2) serine/threonine kinase. Indeed, the suppression of interleukin (IL)-6 production in PDC by indoleamine 2,3-dioxygenase (IDO) involves GCN2 kinase9 (see section 2.3).\n\nWhat are the main roles of the immunometabolism in immune cells? First of all, this is a way to provide energy. Cells need energy to execute cellular functions, such as survival, proliferation or cytokine secretion. This energy is provided as adenosine triphosphate (ATP) by several pathways. The first is glycolysis, which involves the conversion of glucose to pyruvate in the cytosol. The second pathway is the TCA cycle (also called the Krebs cycle), which donates electrons to the electron transport chain located in the mitochondria to fuel oxidative phosphorylation or respiration (OXPHOS). This OXPHOS process generates ATP in the mitochondria. Other substrates, such as fatty acids via β-oxidation (also called fatty acid oxidation [FAO]), can replenish the TCA cycle to fuel OXPHOS10. In addition to substrates used for energy production and de novo biosynthesis, mitochondrial metabolic pathways (such as the TCA, FAO, or OXPHOS) provide substrates for epigenetic modifications of DNA and histones11,12. 
This is the case, for instance, of acetyl-CoA for histone acetylation, which is associated with active transcription11. This connects mitochondrial metabolism to epigenetic regulation12. This specific aspect will be briefly discussed in the Conclusions of this article.\n\n\n2. The innate immune functions of plasmacytoid dendritic cells\n\nPDC belong to the DC family and possess specific features that distinguish them from cDC13. These features include: the capacity to rapidly and massively produce type I IFN (i.e., IFN-α/β), the expression of a particular set of pathogen-recognition receptors (PRR), leading to the recognition of specific pathogen-associated molecular pattern (PAMP) and damage-associated molecular pattern (DAMP) molecules, as well as a preferential localization in lymphoid organs7.\n\nPDC were first identified in humans as the major IFN-α producing cells, and thus initially called IPC (IFN-α producing cells)14,15. Following this characterization in humans, the murine PDC counterpart was isolated16–19. Human PDC are usually identified as CD4+, CD303+ (previously known as BDCA-2), and CD123high, whereas mouse PDC are CD11cint, B220+, SIGLEC-H+, and CD317+7. Despite the difference in phenotypes of human and mouse PDC, PDC from both species exhibit a conserved genetic signature with some common genes (e.g., tlr7)20. Moreover, PDC exhibit specific transcription factors, such as the transcription factor E2-2 (also known as TCF4) or SPIB7. A differentiation/ontogeny process distinct from that of cDC has been reported7.\n\nDevelopment of PDC from hematopoietic stem cells occurs in the bone marrow. As mentioned above, PDC-defining transcription factors have been identified, such as E2-2 (TCF4) or SPIB7. After differentiation in the bone marrow, PDC are released into the bloodstream for homing to different lymphoid tissues21. Thus, PDC isolated from the blood of healthy donors or patients consist of PDC migrating to these tissues7. 
In steady state, PDC reside mainly in T cell-rich areas in lymph nodes and secondary lymphoid organs7. Localizations of PDC in other lymphoid organs, such as Peyer’s patches of the gut22, and tonsils23, have been reported. PDC residing in non-lymphoid tissue – such as the airways24 and the liver25 – exert a critical role in steady state by regulating mucosal immunity and maintaining tolerance to inhaled or ingested antigens26. Finally, PDC are also present in the thymus during homeostatic conditions, where they play a role in central tolerance27–29. In contrast, during infections or autoimmune diseases, PDC migrate to inflamed lymph nodes14 or inflamed epithelia14,23,30. The microenvironment in which PDC are present may influence oxygen and nutrient availability, which impacts on metabolic pathways (see section 3).\n\nPDC constitute a DC subset, specialized in antimicrobial immune responses. This occurs mainly through a rapid type I IFN (IFN-α/β) production7,31,32. Selective depletion of PDC by genetic approaches supports the critical role of PDC for early IFN-α production after microbial infections33–35. The type I IFN response is triggered by Toll-like receptor (TLR) signaling after PAMP recognition.\n\nCompared to cDC, PDC express a limited number of PRR. TLR 7/9 are expressed by both human and mouse PDC7,36. These two endoplasmic TLRs allow PDC to recognize cytosine-phosphate guanosine (CpG)-rich unmethylated DNA from bacteria and DNA viruses7,31,37, as well as viral single-stranded RNA (ssRNA)31,38,39, respectively. In addition, PDC are able to recognize via TLR7/9 mammalian nucleic acids40, in particular when these nucleic acids are complexed or associated with antimicrobial peptides (e.g., LL37)41,42. 
Once PDC have sensed pathogens or DAMP through endoplasmic TLR, signaling is mediated via MyD88 (myeloid differentiation factor 88), a docking protein for IRAK1/4 (IL-1R-associated kinase 1/4), and the ubiquitin ligase TRAF6 (tumor necrosis factor [TNF] receptor-associated factor 6). IFN-regulatory factor 7 (IRF7) is then phosphorylated and translocates into the nucleus to induce type I IFN gene and IFN-inducible gene transcription (Figure 1)43. This is true for human PDC36. Concerning mouse PDC, other intermediates may participate in type I IFN mRNA transcription in the TLR-dependent IRF7 signaling pathway. This involves a complex, associating TRAF3, IRAK1, osteopontin, PI3K (phosphatidylinositol 3-kinase) and IKKα (IκB kinase-α)36. A critical role of PI3K has also been reported for type I IFN production by human PDC44. In addition, in TLR7- or TLR9-activated human PDC, TRAF6 can also recruit TAK1 (transforming growth factor [TGF]-β-activating kinase; also known as MAP3K7 for mitogen-activated protein kinase kinase kinase 7) to trigger the synthesis of pro-inflammatory cytokines via NF-κB activation36. In mouse PDC, TAK-1/MAP3K7 activates the mitogen-activated protein kinase (MAPK) pathway that upregulates costimulatory molecule expression (e.g., CD40, CD80 or CD86)36. Both human and mouse PDC have been shown to secrete TNF-α4–6,45,46, IL-64–6,45,46, IL-845–47 or granulocyte macrophage colony-stimulating factor (GM-CSF)45. This TLR-induced cytokine synthesis is regulated in PDC by the translocation of NF-κB, p38 MAPK and c-Jun N-terminal kinase (JNK) into the nucleus. In human PDC, the association of NF-κB p65 and p50 subunits with IRF5 appears to be the master inducer of IL-6 mRNA transcription36. Depending on the TLR9 ligand used, the cytokine response can be different. 
For instance, type A CpG-containing oligonucleotide (CpG-ODN) (CpGA) induces mainly type I IFN production, whereas type B CpG-ODN (CpGB) induces pro-inflammatory cytokine secretion and upregulation of co-stimulatory molecules48.\n\nThis figure summarizes different signaling pathways described in the literature to promote metabolic changes or to be modulated by immunometabolism in plasmacytoid dendritic cells. This includes: endosomal TLR 7 and TLR9, membrane IL-3 receptor (associating CD131 to CD123), GM-CSF receptor (associating CD131 to CD116), and IFN-α receptor (IFNAR associating IFNAR1 and IFNAR2). Only the main pathways with main effector molecules are depicted. For more details, please refer to the main text. Abbreviations (not defined in the main text): IFIG, IFN-I-induced genes; nfκB1, NF-κB gene.\n\nIn addition to virus or pathogen-expressing TLR7/TLR9 ligands or synthetic ligands, PDC can be activated to release cytokines by other stimuli, including DAMP. PDC express RAGE (receptor for advanced glycation end products), a PRR that recognizes high-mobility group box-1 (HMGB1)49,50, a nuclear DNA-binding protein released from necrotic cells51. PDC can also be activated by neutrophil extracellular traps (NET), released by dying or activated neutrophils. These NET contain DNA fibers, histones, as well as a large amount of LL37 and HMGB152,53. Human PDC can also be activated by CD154 from activated platelets54 or endothelial-derived microvesicles6,46. Other PRR are present in the cytoplasm of PDC and dedicated to RNA virus recognition. This is the case of retinoic acid-inducible gene (RIG)-I-like receptors, DHX9 or DHX3655.\n\nIn addition to type I IFN and pro-inflammatory cytokine production, PDC have regulatory and immunosuppressive functions26. This has been demonstrated by in vivo PDC depletion studies24,56. For instance, PDC exert immunoregulatory functions in the lung, preventing deleterious asthmatic reactions24. 
Moreover, PDC may express immunosuppressive factors that confer tolerogenic properties26. One major factor is the enzyme IDO26,57,58. This enzyme is involved in the catabolism of the essential amino acid, tryptophan, and the synthesis of kynurenines. Tryptophan is required for T cell proliferation and kynurenines have immunosuppressive properties. Engagement of several receptors, including CD80/CD86, or TLR959, participates in active IDO induction. Amino acid withdrawal resulting from IDO enzymatic activity, stimulates the GCN2 kinase in PDC and then prevents IL-6 secretion by PDC9. Moreover, IDO exerts a regulatory function independently of its catabolic activity by participating in TGF-β-signaling pathway60. Receptors expressed by PDC, such as immunoglobulin-like transcript 761 or CD30362 may also inhibit TLR7/9-mediated type I IFN production. In addition, PDC can produce, under certain circumstances, high levels of immunosuppressive cytokines, such as IL-1019,25 or TGF-β56,63. In addition, PDC participate in the maintenance of immune tolerance via the induction and/or expansion of regulatory T cells (please refer to Refs#7,63; this is out of the scope of this review).\n\nWhen one evokes cellular metabolism, cell survival has to be discussed. IL-3 has been identified as a critical factor for the development and survival of PDC64,65. This cytokine interacts with the IL-3 receptor associating two chains, the common chain CD131 and the IL-3Rα chain (CD123) that is highly expressed by PDC. Signaling through this receptor involves Janus kinase 2 (JAK2), Src kinases, transcription factors STAT3/STAT5 (signal transducer and activator of transcription 3/5) and Akt (Figure 1)66. Another cytokine that shares the common chain CD131 and influences PDC survival with nearly the same signaling pathway is GM-CSF (Figure 1)65. Finally, PDC express IFNAR, the membrane receptor for type I IFN (IFN-α or IFN-β) that consists of two subunits, IFNAR1 and IFNAR2. 
Engagement of this receptor by its ligand activates JAK1 (associated with IFNAR1) and tyrosine kinase 2 (associated with IFNAR2) that phosphorylate and activate STAT1 and STAT2, respectively (Figure 1)66,67.\n\nBefore discussing the role of immunometabolism, the role of PDC in beneficial and detrimental immune responses will be briefly detailed. As type I IFN producing cells, PDC play a major beneficial role in antimicrobial immune responses7. However, uncontrolled IFN-α production in acute viral infection may be detrimental to the host. Moreover, high levels of type I IFN released by PDC may be detrimental in chronic inflammatory or autoimmune diseases7,54,68–71. This is the case of systemic lupus erythematosus (SLE)52–54, type 1 diabetes69, and psoriasis41,71. Furthermore, PDC may participate in inflammatory autoimmune disorders (i.e., systemic sclerosis70 or autoimmune vasculitis72) via the secretion of pro-inflammatory cytokines other than type I IFN72 or chemokines70. Since PDC infiltrate inflamed tissues, they may release pro-inflammatory factors participating in the amplification of diseases. Accordingly, PDC have been reported to infiltrate acute graft-versus-host disease lesions, including gastro-intestinal73 and cutaneous74 lesions. PDC emerge as cells present in atherosclerotic plaques and may play a role in atherosclerosis66,75. Atherosclerotic plaques are enriched in lipids. This may modify the lipid metabolism of infiltrated PDC by the uptake of lipid-enriched lipoproteins or oxidized lipoproteins, and subsequently PDC functions (see section 3.3.3). Finally, insufficient or exhausted production of IFN-α during chronic viral infections (e.g., chronic hepatitis C virus or HIV) has also been reported76,77. Thus, the metabolism of PDC may be pharmacologically modified in order to restore type I IFN production.\n\n\n3. 
The influence of the metabolism on innate immune functions of plasmacytoid dendritic cells\n\nThe kinase mTOR is a key regulator of different biological processes, including metabolism8. This is a serine/threonine protein kinase that senses and integrates signals (such as nutrients and oxygen) originating from the extracellular milieu, as well as intracellular signals78. In fact, mTOR is the catalytic subunit of two different complexes, mTORC1 and mTORC2. As mentioned briefly in the Introduction, mTORC1 is connected with several metabolic pathways. Indeed, mTORC1 promotes glycolysis through hypoxia-inducible factor 1α (HIF-1α)8, as well as cholesterol and fatty acid synthesis using TCA cycle intermediates through a pathway involving sterol regulatory element-binding proteins (SREBP) and the nuclear receptor, peroxisome proliferator-activated receptor (PPAR) γ8. Cholesterol and fatty acids are used as “building blocks”8 for complete maturation of endoplasmic reticulum (ER) and Golgi apparatus. Both organelles can promote the transport of pro-inflammatory cytokines within the cell that precedes their secretion8. In addition, mTORC1 can have a negative effect on mitochondrial OXPHOS by inducing the expression of type I IFN and production of nitric oxide, which subsequently promotes aerobic glycolysis8. This has been well described in macrophages8.\n\nConcerning PDC innate immune functions, mTOR plays an important role in type I IFN production79. The TLR9 ligand, CpGA, stimulates the rapid phosphorylation of mTOR and its downstream targets, the p70 ribosomal S6 kinase 1 and the eukaryotic translation initiation factor 4E-binding protein (4E-BP)79. Thus, mTOR is involved in the TLR9-induced type I IFN signaling pathway (Figure 2). Using the mTORC1 inhibitor rapamycin, the same authors demonstrated that TLR7/9-mediated production of type I IFN is inhibited in both human and mouse PDC79. 
These data have been confirmed in vivo using live attenuated viral yellow fever vaccine79. The virus responsible for yellow fever is an ssRNA virus and TLR7 recognizes ssRNA38. Rapamycin may act at two levels: i) by preventing, via the inhibition of 4E-BP phosphorylation, the nuclear translocation of IRF7 required for type I IFN gene transcription; and ii) by blocking the formation of the TLR9-MyD88 complex via the p70 ribosomal S6 kinase 178. In addition to type I IFN production, the inhibition of mTORC1 reduces TLR7/9-induced TNF and IL-6 production by human and mouse PDC79. Overall, mTOR signaling is involved in the TLR9-induced type I IFN signaling pathway and blocking mTORC1 inhibits TLR7/9-stimulated secretion of pro-inflammatory cytokines (i.e., type I IFN, TNF and IL-6) by PDC. Thus, the use of mTOR inhibitors may block the detrimental pro-inflammatory functions of PDC in inflammatory disorders, but may also prevent the beneficial antimicrobial response of PDC.\n\n(A) The endosomal TLR7 pathway in human blood-derived PDC promotes early glycolysis (within minutes following TLR7 triggering; green font and green arrows), as attested by increased ECAR (extracellular acidification rate; a reflection of lactate excretion). This implicates the HIF-1α molecule that increases the GLUT1 glucose transporter allowing extracellular glucose entry and stimulates some enzymes involved in glycolysis (HK or PFK). Glycolysis in human PDC is required for TLR7-induced type I IFN production. A potential link with the activation of the mTORC1 complex can be seen, since this complex is activated by both endosomal TLR7 and TLR9 pathway in mouse and human PDC. Inhibitors of mTORC1 (RAP), of TLR7 signaling (chlor.), and of glycolysis (2-DG) are written in blue font. (B) The TLR9 pathway in mouse bone marrow-derived PDC promotes late glycolysis (after 24 hours) (grey font and grey arrows) via a type I IFN/IFNAR loop. 
This pathway also promotes fatty acid synthesis, FAO coupled with OXPHOS to generate ATP in a PPARα-dependent mechanism (violet font and violet arrows). This TLR9 pathway implicates the activation of mTORC1 in both mouse and human PDC. Specific inhibitors of fatty acid synthesis (C75), pyruvate entry in the mitochondrion (UK5099), TCA cycle (TOFA) or FAO (etoxomir) are written in blue font and have been used to demonstrate the promotion of fatty acid synthesis, FAO and OXPHOS inTLR9-induced type I IFN production. For more details, please refer to the main text. Abbreviations (not defined in the main text): HK, hexokinase; PFK, phosphofructokinase; RAP., rapamycin; chlor., chloroquine.\n\nLastly, mTORC1 stimulates glycolysis in innate immune cells, at least via HIF-1α induction, but also through the increased expression of the glucose transporter 1 (GLUT1) at the cell surface (Figure 2)8. GLUT1 enhances glucose uptake from the extracellular milieu8. Thus, the implication of mTOR signaling pathway in the response of PDC to TLR9 ligand79 suggests that glycolysis may be triggered in the cells. This will be discussed in the next section.\n\nThe glycolytic pathway (also known as glycolysis) involves the uptake of extracellular glucose (i.e., present in the microenvironment) and its conversion in the cytosol to generate pyruvate2. Most pyruvate is then excreted from the cells as lactate after a process called aerobic glycolysis. This is a relatively inefficient pathway for the generation of energy (i.e., cellular ATP) compared to the TCA cycle coupled to OXPHOS80. The preferential use of glycolysis versus the TCA cycle and OXPHOS depends on oxygen availability2,80. Glycolysis is favored during hypoxia, a situation encountered, for instance, in joints during rheumatoid arthritis or in the inflamed colon during Crohn’s disease80 - PDC can be present in both inflamed tissues (i.e., the synovial fluid81–83 and the colon73,84). 
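The efficiency gap between these two strategies can be made concrete with a back-of-the-envelope calculation. A minimal sketch, using commonly cited textbook approximations (not measurements from the PDC studies discussed here): aerobic glycolysis nets about 2 ATP per glucose, whereas full oxidation through the TCA cycle coupled to OXPHOS yields on the order of 30 ATP per glucose.

```python
# Rough, textbook-level comparison of ATP yield per glucose molecule.
# The constants are widely cited approximations, not data from the
# cited PDC experiments.

GLYCOLYSIS_ATP_PER_GLUCOSE = 2   # glucose -> 2 lactate (net ATP)
OXPHOS_ATP_PER_GLUCOSE = 30      # full oxidation, glycolysis + TCA + OXPHOS (approx.)

def atp_yield(n_glucose: int, aerobic_glycolysis_only: bool) -> int:
    """Approximate ATP generated from n_glucose molecules of glucose."""
    per_glucose = (GLYCOLYSIS_ATP_PER_GLUCOSE if aerobic_glycolysis_only
                   else OXPHOS_ATP_PER_GLUCOSE)
    return n_glucose * per_glucose

# The ~15-fold gap illustrates why glycolytic cells must sharply increase
# glucose uptake (e.g., via GLUT1) and why lactate excretion (measured as
# ECAR) is a convenient real-time proxy for glycolytic flux.
print(atp_yield(100, aerobic_glycolysis_only=True))   # 200
print(atp_yield(100, aerobic_glycolysis_only=False))  # 3000
```

This arithmetic is why a hypoxic, glycolytic cell consumes far more glucose for the same energy budget, linking oxygen availability to the GLUT1 upregulation and lactate excretion described above.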
The energy provided by glycolysis may supply the resources necessary for cytokine secretion: PDC do not proliferate in the same way as adaptive cells (i.e., T cells), but they become highly secretory upon activation7. Finally, growth factors whose signaling pathways trigger PI3K and MAPK promote, in theory, the cellular use of glycolysis2. This may be the case when PDC are stimulated by TLR7/9 ligands (see previous section).

The exposure of human PDC to two different ssRNA viruses, influenza virus (Flu) and rhinovirus (RV), both triggering the TLR7 pathway via ssRNA recognition38, activates HIF-1α4, a major regulator of metabolism (Figure 2A). Indeed, HIF-1α is critical for glycolysis to generate ATP, since it induces the expression of different glycolytic enzymes, such as hexokinase and phosphofructokinase80. In addition to HIF-1α activation, the TLR7 agonist gardiquimod, as well as Flu and RV, induces early glycolysis (within minutes) in human PDC, as attested by an elevated extracellular acidification rate (ECAR; a reflection of lactate secretion into the extracellular milieu and a real-time indicator of glycolysis) and elevated rates of lactate production4. Moreover, the inhibition of glycolysis by 2-deoxyglucose (2-DG; a glycolytic inhibitor) impairs ssRNA virus- or TLR7 ligand-induced type I IFN production by PDC, as well as the upregulation of HLA-DR, CD80 and CD86 at the PDC cell surface4. Furthermore, 2-DG inhibits the increase of IFNA, CD80 and CD86 mRNA induced by exposure to Flu4, suggesting that glycolysis induced by the TLR7 pathway regulates these genes at the transcriptional level. The involvement of the TLR7 pathway in glycolysis was supported by the use of chloroquine, known to disrupt the endosomal acidification required for TLR7 signaling85: chloroquine treatment inhibits lactate production by PDC in response to Flu or gardiquimod4. Overall, ssRNA viruses enhance glycolysis in human PDC via the TLR7 pathway (Figure 2A).
This finding was confirmed in vivo, since viral infection using the live attenuated influenza vaccine increases glycolysis in ex vivo isolated human PDC, and this increase correlates with IFN-α production by these cells4.

These data contrast with a recent report showing that mouse PDC activation in response to the TLR9 ligand CpGA is not accompanied by a rapid change in ECAR (i.e., during the first 150 minutes), whereas cDC used as a control in the same experiments and stimulated with either lipopolysaccharide, Poly(I:C) or CpGA do show such a change5. However, glycolysis is detected late (24 hours) in TLR9-stimulated mouse PDC5. The authors also studied the role of the TLR7 pathway in mouse PDC, although not extensively; again, they found a delayed activation of glycolysis after stimulation of mouse PDC with the TLR7 agonist imiquimod5. Thus, whether the origin of the PDC (human4 versus mouse5) or their source (sorted from peripheral blood mononuclear cells4 versus sorted from FLT3 ligand-stimulated bone marrow cultures5) explains this discrepancy remains to be determined. Another difference between these two studies lies in the direct effect of IFN-α on PDC metabolism. Treatment of human PDC with IFN-α is not sufficient to induce lactate efflux (i.e., glycolysis), and IFN-α/β receptor (IFNAR) blockade does not affect the lactate efflux induced by Flu infection4. This suggests that type I IFN does not regulate Flu-induced early glycolysis in human PDC in an autocrine/paracrine manner. In contrast, this autocrine/paracrine loop involving type I IFN and its receptor may play a significant role in TLR-induced glycolysis in mouse PDC (Figure 2B)5. Nevertheless, all these data support a role for glycolysis as a source of energy for type I IFN production, pro-inflammatory cytokine (IL-6 and TNF) production and costimulatory molecule upregulation by PDC in response to TLR7/9 activation.
The modulation of this metabolic pathway may limit uncontrolled pro-inflammatory cytokine production by PDC in pathological situations, or may restore type I IFN production in chronic infectious diseases.

In this section, we will discuss fatty acid metabolism, including fatty acid oxidation (FAO; also known as mitochondrial β-oxidation) and fatty acid synthesis, as well as cholesterol metabolism. Lipid metabolism is regulated by many key enzymes. Some of the enzymes involved in de novo lipid synthesis are controlled by lipid-activated nuclear receptors, such as liver X receptor (LXR) or PPAR; the genes coding for these enzymes are thus called LXR or PPAR target genes, respectively. Among these target genes, one may cite FASN, coding for fatty acid synthase86–88. LXR or PPAR target genes code not only for enzymes involved in lipid metabolism, but also for transcription factors and transporters, and some relate to glucose or amino acid metabolism. This is the case for the SREBP transcription factors, and for glucose or cholesterol transporters involved in nutrient entry (e.g., the GLUT1 glucose transporter) or efflux, such as the ATP binding cassette (ABC) transporters A1 and G1 (ABCA1 and ABCG1, respectively), which are involved in cholesterol efflux2,86,87,89,90. PPAR and LXR are both mainly found associated with the retinoid X receptor (RXR) as heterodimers. Considered permissive heterodimer receptors, they can be activated by the ligands of either partner (e.g., PPAR or RXR ligands; LXR or RXR ligands)88,90. While the involvement of these nuclear receptors in innate immune responses is well described for macrophages and cDC90, few data are available for PDC. Here, we will discuss how lipid metabolism modulates PDC innate immune functions and how PDC activation by TLR ligands or other stimuli modifies lipid metabolism.

3.3.1. Fatty acid oxidation. The FAO pathway converts fatty acids into numerous products in the mitochondria.
These products, such as acetyl-CoA, NADH (the reduced form of nicotinamide adenine dinucleotide) and FADH2 (the fully reduced form of flavin adenine dinucleotide [FAD]), can be used in the TCA cycle and the electron transport chain to generate energy2. As discussed before, normoxia supports the TCA cycle and OXPHOS, while hypoxia, via HIF-1α activation followed by the induction of glycolytic enzymes, leads to glycolysis2,80. The TCA cycle coupled to OXPHOS is the major metabolic pathway used by most quiescent or non-proliferating cells2.

A recent, well-received study reports that FAO and mitochondrial OXPHOS play a critical role in murine PDC activation by the TLR9 pathway (Figure 2B)5, a finding particularly well demonstrated for type I IFN production by these cells5. Mouse PDC stimulated by the TLR9 ligand CpGA exhibit increases in basal oxygen consumption rate (OCR) and spare respiratory capacity (SRC)5, both of which are indicators of FAO. To demonstrate directly that CpGA increases FAO, the authors used etomoxir, an irreversible inhibitor of carnitine palmitoyl transferase I2. This enzyme is responsible for the entry of activated fatty acids (i.e., long-chain fatty acids conjugated with carnitine) into mitochondria for FAO2. Etomoxir inhibits both the increase of basal OCR and SRC induced by TLR9 ligand stimulation. Moreover, etomoxir limits the production of IFN-α and pro-inflammatory cytokines (TNF-α and IL-6) by PDC in response to CpGA stimulation, and also prevents the upregulation of CD86 expression at the PDC cell surface5. Overall, TLR9-induced mouse PDC activation is accompanied by increased FAO, and stimulation of this metabolic pathway is required for pro-inflammatory cytokine secretion and PDC maturation (i.e., CD86 upregulation).

An increase of basal OCR and SRC attesting to FAO has also been observed after the activation of mouse PDC by the TLR7 agonist imiquimod.
Treatment of mouse PDC with etomoxir inhibits imiquimod-induced IFN-α production5, suggesting that FAO is also required for type I IFN production in response to TLR7 ligands. The in vivo implication of FAO was assessed by treating mice with etomoxir before infection with the ssRNA lymphocytic choriomeningitis virus (LCMV). Etomoxir-treated, LCMV-infected mice exhibit reduced circulating IFN-α 3 days after infection, and significantly more LCMV is detected in the liver and spleen of etomoxir-treated versus untreated infected mice5, demonstrating the in vivo relevance of these data.

Changes in basal OCR are not detected at early time points after murine PDC activation by CpGA, indicating that they require new gene transcription. This suggests that IFN-α production and IFNAR signaling induced by CpGA stimulation may be responsible for these changes in PDC metabolism. Indeed, treatment with IFN-α alone is sufficient to increase FAO in mouse PDC5. Thus, the increased FAO induced by CpGA stimulation in mouse PDC is the result of an autocrine or paracrine loop involving the type I IFN signaling pathway.

One goal of the FAO pathway is to generate energy through the production of a large number of ATP molecules2, which occurs by fueling OXPHOS2. To demonstrate definitively the implication of the energy provided by FAO coupled to OXPHOS in type I IFN-induced mouse PDC activation, ATP was quantified in response to IFN-α and different inhibitors were used5. The metabolic reprogramming of mouse PDC induced by IFN-α leads to enhanced ATP availability5. The quantity of ATP produced in response to CpGA activation is significantly reduced by the inhibition of FAO (using etomoxir), of pyruvate import into mitochondria required for the TCA cycle (using UK5099) or of fatty acid synthesis (using 5-(tetradecyloxy)-2-furoic acid [TOFA])5. This confirms that type I IFN stimulation of mouse PDC generates significant amounts of ATP via the FAO pathway.
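Because the argument in these studies rests on matching each pharmacological inhibitor to the metabolic step it blocks, a compact bookkeeping of the tools cited in this review can help. The mapping below simply restates what the main text says about each compound; it is an illustrative summary, not a pharmacology reference.

```python
# Inhibitors discussed in this review, mapped to the step each one blocks
# as described in the cited PDC studies (illustrative bookkeeping only).

INHIBITORS = {
    "2-DG":        ("glycolytic enzymes",                 "glycolysis"),
    "chloroquine": ("endosomal acidification",            "TLR7 signaling"),
    "rapamycin":   ("mTORC1",                             "mTOR signaling"),
    "etomoxir":    ("carnitine palmitoyl transferase I",  "FAO"),
    "UK5099":      ("mitochondrial pyruvate import",      "TCA cycle fueling"),
    "TOFA":        ("acetyl-CoA carboxylase",             "fatty acid synthesis"),
    "C75":         ("fatty acid synthase (FASN)",         "fatty acid synthesis"),
}

def probes(pathway: str) -> list[str]:
    """Return the inhibitors used to probe a given metabolic pathway."""
    return sorted(name for name, (_, p) in INHIBITORS.items() if p == pathway)

print(probes("fatty acid synthesis"))  # ['C75', 'TOFA']
```

Reading the experiments through this table: etomoxir, UK5099 and TOFA each reduce CpGA-induced ATP, which is what places fatty acid synthesis upstream of FAO coupled to OXPHOS in TLR9-activated mouse PDC.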
This pathway fuels OXPHOS in this setting and is itself fueled by fatty acid synthesis, as demonstrated by the use of the inhibitor TOFA (Figure 2B).

The pathways responsible for the changes in metabolism induced by type I IFN in mouse PDC were studied using an unbiased RNA-seq-based approach5. This analysis shows that OXPHOS is the major network induced by type I IFN, with FAO connected to this network. Furthermore, and surprisingly, the analysis also reveals a PPARα gene signature5. While PPARγ is expressed in macrophages and cDC90, the PPARα isoform is mainly and highly expressed in metabolically active tissues, such as the liver or brown adipose tissue88. After confirming that the PPARα isoform is expressed by bone marrow-derived and splenic mouse PDC, the authors used a PPARα antagonist, GW64715. GW6471 blocks both IFN-α production and the increase of basal OCR in response to CpGA activation. Furthermore, increased basal OCR and SRC in mouse PDC are observed after incubation with the PPARα agonist gemfibrozil, as well as with the dual PPARα/PPARγ agonist muraglitazar5. Overall, this indicates that PPARα is involved in the FAO and OXPHOS induced by CpGA activation (Figure 2B). The role of this PPARα pathway in PDC functions will be discussed later in this review together with the other lipid-activated nuclear receptor, LXR (see the following two sections).

3.3.2. Fatty acid synthesis. The fatty acid synthesis pathway allows cells to generate lipids that are necessary for cellular growth and proliferation2. Fatty acid synthesis takes place in the cytosol, using citrate generated by the TCA cycle and exported from the mitochondria (Figure 2B). As mentioned above, de novo fatty acid synthesis depends on key enzymes, such as FASN, that are controlled by the mTOR signaling pathway8, LXR86,87 or SREBP-1c2,91.
Fatty acids generated by this synthesis can be used to fuel mitochondrial FAO2 or to activate PPAR nuclear receptors89, or can be condensed with glycolysis-derived glycerol to produce triacylglycerol and phospholipids2. The latter are key components of many cellular structures, such as the cell membrane, the ER and the Golgi apparatus2,8.

Inhibition of fatty acid synthesis using two different inhibitors (TOFA, an inhibitor of acetyl-CoA carboxylase, and C75, an inhibitor of FASN)2 prevents the increase of IFN-α, TNF-α and IL-6 production by mouse PDC in response to CpGA stimulation5. As previously demonstrated for FAO and OXPHOS, fatty acid synthesis induced by the TLR9 pathway implicates a type I IFN/IFNAR loop5. Moreover, by applying TOFA before measuring both the quantity of ATP produced in response to CpGA activation and the expression of the PPARα target genes Acadl and Pltp in CpGA-activated mouse PDC5, the authors were able to conclude that the fatty acid synthesis induced in mouse PDC fuels FAO and OXPHOS rather than providing lipid ligands for PPARα activation. It is well described that the natural ligands of PPARα include different fatty acids, as well as numerous fatty acid derivatives and compounds with structural resemblance to fatty acids, such as acyl-CoAs or oxidized fatty acids89. In PDC, however, fatty acids synthesized in response to TLR9 signaling do not seem to stimulate PPARα directly by providing PPARα ligands. One may hypothesize that, as reported for lipid-activated nuclear receptors86,88, PPARα activated downstream of TLR9 ligation interferes with transcription factors3,88 that are critical for pro-inflammatory cytokine secretion by PDC (see section below).

3.3.3. Cholesterol metabolism. Cholesterol is one of the major constituents of lipid rafts, together with glycosphingolipids92.
Cellular cholesterol content results from cholesterol uptake and from biosynthesis through the mevalonate pathway, while its elimination from cells is mediated by cholesterol efflux. Cholesterol uptake involves plasma lipoproteins (mainly low density lipoprotein [LDL] and very low density lipoprotein [VLDL]) after interaction with their specific receptors, LDLR and VLDLR, respectively. Cholesterol efflux is mainly mediated by the specific transporters ABCA1 and ABCG1, in association with extracellular cholesterol acceptors, including the apolipoproteins (APO) APOA1 and APOE, or lipoprotein particles (e.g., nascent high density lipoprotein [HDL] or HDL2) (Figure 3)93. ABCG1 is rather localized within the cell and seems to move sterols between intracellular compartments88, whereas ABCA1 is more dedicated to extruding cholesterol derivatives out of the cell88. Cholesterol regulates critical cellular functions, including plasma membrane formation and fluidity, and allows the clustering of receptors into lipid rafts for efficient signaling92; the localization of signaling complexes within lipid rafts is critical for certain receptors.

Figure 3. Activation of the LXR pathway by “physiological” oxidized cholesterol derivatives (oxysterols) or synthetic LXR agonists decreases the intracellular cholesterol content of PDC by stimulating cholesterol efflux through the ABCA1 cholesterol transporter, inhibiting cholesterol entry by decreasing LDL and VLDL receptor (LDLR and VLDLR, respectively) expression, and inhibiting de novo cholesterol biosynthesis (also known as the mevalonate pathway). Cholesterol efflux via ABCA1 requires cholesterol acceptors, such as APOA1 and immature HDL (HDL2/3), to generate mature HDL that transports cholesterol towards the liver. Activation of this LXR pathway inhibits TLR7-induced NF-κB activation and the phosphorylation of STAT5 and Akt in response to IL-3 stimulation (green font and green arrows). Inhibition of cholesterol biosynthesis by statins (violet font and arrows) inhibits TLR7/9-induced IRF7 translocation into the nucleus, as well as the phosphorylation of p38 kinase, and consequently the production of type I IFN. For more details and abbreviations, please refer to the main text.

Cholesterol homeostasis is regulated at least in part by LXR. These nuclear receptors are expressed as two isoforms: the ubiquitous LXRβ and LXRα, whose expression is restricted to cells with a high cholesterol turnover (e.g., macrophages)86,90,94. The LXR pathway is activated by intermediates of cholesterol biosynthesis (e.g., desmosterol), endogenous oxidized cholesterol derivatives (called oxysterols, such as 22(R)-hydroxycholesterol [22RHC]), and synthetic agonists (e.g., T0901317 or GW3965)86. LXR activation upregulates the expression of several genes involved in cholesterol homeostasis (i.e., LXR target genes), including ABCA1, ABCG187 and APOE (related to cholesterol efflux)88,95, as well as the ‘inducible degrader of the low-density lipoprotein receptor’ (IDOL), which prevents cholesterol uptake through LDLR/VLDLR degradation96. Overall, these mechanisms triggered by LXR activation contribute to decreasing the intracellular cholesterol content. Again, the LXR pathway has been studied extensively in the regulation of macrophage functions, including cholesterol homeostasis, inflammatory responses and apoptotic cell uptake (a process called efferocytosis)90; in contrast, the roles of cholesterol and LXR in PDC innate functions are only beginning to be deciphered.

Inhibition of the mevalonate pathway (i.e., de novo cholesterol biosynthesis) by statins (simvastatin and pitavastatin) blocks TLR7- and TLR9-induced type I IFN production by human PDC (Figure 3)97. This has been demonstrated with blood-derived PDC obtained from healthy donors, as well as from patients with SLE97.
Statin treatment also inhibits TNF secretion by human PDC in response to either TLR7 (loxoribine) or TLR9 (CpGA) ligands97, and these data have been confirmed in vivo using the ssRNA Sendai virus97. Inhibition of de novo cholesterol synthesis by statins in PDC interferes with the p38 MAPK pathway, Akt, and the nuclear translocation of IRF7 (Figure 3)97. Finally, the inhibitory effect of statins has been tested in vivo and on mouse PDC: treatment of C57BL/6 mice with statins before triggering type I IFN production by ssRNA poly(U) injection decreases circulating IFN-α97. Thus, the inhibition of cholesterol synthesis in PDC is associated with an anti-inflammatory response, in line with the global anti-inflammatory effects of statins98.

Recently, we reported that the LXRβ isoform is expressed in human PDC6, as well as in mouse PDC (unpublished study, Ceroi A, Bonnefoy F, Angelot-Delettre F and Saas P), and that this LXR pathway is fully functional6. Using freshly isolated human blood PDC and the PDC cell line CAL-1, a functional LXR pathway was demonstrated, as attested by increased LXR target gene expression in response to three different LXR agonists (two synthetic agonists and the oxysterol 22RHC, which represents a more physiological LXR ligand). Activation of the LXR pathway in PDC reduces the pro-inflammatory cytokine secretion (IL-6 and TNF-α) induced by TLR7 triggering6. Moreover, data obtained in human PDC from healthy donors6 and in leukemic PDC99 demonstrate that the LXR pathway interferes with TLR7-induced NF-κB activation at different levels, including transcriptional repression of the p65 NF-κB subunit and reduced phosphorylation of this subunit (Figure 3)6,99. Pretreatment of leukemic PDC with synthetic LXR agonists also reduces Akt and STAT5 phosphorylation in response to IL-3 (Figure 3)99. LXR stimulation in the PDC cell line CAL-1 increases cholesterol efflux via the upregulation of cholesterol transporters, such as ABCA1 (Figure 3)99.
Although cholesterol efflux was only tested using the CAL-1 PDC cell line99, upregulation of cholesterol transporters at the mRNA and protein levels was also observed in human blood-derived PDC treated with LXR agonists6. Stimulation of cholesterol efflux by the addition of a cholesterol acceptor, APOA1, amplifies the effects of LXR activation in leukemic PDC, including inhibition of the IL-3 signaling pathway (Akt and STAT5 phosphorylation) and of cell survival (Figure 3)99. This confirms the previous data obtained using statins97, and suggests that modifying cholesterol homeostasis in PDC may be useful to limit their detrimental role in pathological situations. Of note, alteration of PDC survival after modification of the intracellular cholesterol content using either statins97 or LXR agonists99 is only detected at the highest concentrations (100 µM). In addition, LXR activation in human PDC increases microparticle internalization via the phosphatidylserine receptor (PtdSerR) BAI-16. This contrasts with data obtained in macrophages, in which LXR stimulation induces another PtdSerR, called Mer-TK (Mer tyrosine kinase)100. Nevertheless, this suggests that stimulation of PDC via the LXR pathway may improve their capacity to eliminate circulating pro-inflammatory microparticles. Indeed, triggering the LXR pathway with LXR agonists before exposure to pro-inflammatory endothelial-derived microparticles prevents NF-κB activation and pro-inflammatory cytokine production by human PDC6, supporting an anti-inflammatory role for LXR agonists.

A discrepancy exists concerning type I IFN production after inhibition of cholesterol biosynthesis (using statins97) versus the massive decrease of intracellular cholesterol content (using LXR agonists6,99): inhibition of type I IFN has been reported after statin treatment97, but not after cholesterol deprivation6.
This may be due to a compensatory mechanism, since massively decreasing the pool of synthesized cholesterol is by itself sufficient to induce spontaneous type I IFN production associated with antiviral immunity in bone marrow-derived macrophages101; this response occurs via the cGAS/STING/TBK1/IRF3 pathway101. Thus, any inhibition of TLR7/9-mediated type I IFN production by LXR agonists may be counterbalanced by this pathway triggered by the massive decrease of the intracellular cholesterol pool, which could explain why IFN-α production is unaffected after LXR agonist treatment of PDC. Another difference between the effects of statins and LXR agonists lies in the signaling steps targeted: statins inhibit IRF7, but not NF-κB phosphorylation97, whereas activation of the LXR pathway in PDC blocks NF-κB activation via several mechanisms6,99 (see above). Overall, the manipulation of cholesterol metabolism in PDC can be proposed as a means to limit their pro-inflammatory functions.

4. Conclusions

Here, we have summarized the currently available data showing that several metabolic pathways are triggered in PDC by different stimuli, including pro-inflammatory signals (e.g., TLR7/9 ligands or endothelial-derived microparticles), as well as anti-inflammatory signals (e.g., platelet-derived microparticles). These pathways comprise the mTOR signaling pathway, glycolysis, FAO coupled to OXPHOS, fatty acid synthesis and cholesterol metabolism. All of these pathways are interconnected, and they are globally necessary for efficient type I IFN production by PDC in response to TLR7/9-mediated activation. Although few data are available, it appears that the induction of pro-inflammatory cytokines (e.g., TNF, IL-6 and IL-8) and costimulatory molecules (i.e., CD80 or CD86) also requires most of these metabolic pathways.
In contrast, alteration of cholesterol metabolism associated with a decreased intracellular cholesterol content, either after inhibition of de novo cholesterol synthesis or after LXR activation, inhibits the pro-inflammatory functions of PDC. These data suggest that pharmacological manipulation of the host metabolism may be useful to reprogram altered PDC immune functions.

Since cellular metabolism is highly dependent on the microenvironment (oxygen availability and nutrients), changes in the local tissue microenvironment may modulate PDC innate immune functions. This modulation of metabolism may result from exogenous metabolites that diffuse passively, or enter through transporters, into the PDC. Among these metabolites, one may find ligands of lipid-activated nuclear receptors, such as LXR or PPAR3,90.

The microenvironment and metabolic pathways may also be modulated or controlled by the microbiota. A recent study analyzed germ-free mice mono-colonized with each of 53 human-resident bacterial species and the consequences of each bacterium for different immune cell subsets102. Some of these bacteria were identified as modifying PDC frequencies in the colon and the small intestine102. Among the genes whose expression correlated with PDC frequencies, the authors identified IFN-inducible signature transcripts, but also transcripts involved in lipid and protein metabolic pathways. Moreover, the Hif1a transcript, coding for the metabolic regulator HIF-1α, was also associated with PDC frequencies102. This suggests a connection between PDC frequencies in the gastro-intestinal tract, metabolic pathways, and the nutrients/metabolites provided by the microbiota.

We discussed above the connection between metabolism and epigenetic regulation.
While few data are available concerning the epigenetic regulation of PDC innate immune functions, it has been shown that the histone deacetylase inhibitor valproic acid alters human PDC functions, including the production of pro-inflammatory cytokines (IFN-α, TNF and IL-6) in response to the TLR9 ligand CpGA103. It therefore remains to be determined how metabolism may regulate epigenetic modifications of DNA and histones in PDC.

Lastly, acute perturbations of the intracellular lipid content, for instance via LXR activation, may also influence cell proliferation and survival by inducing significant ER stress88. This organelle is responsible for protein folding, and the result of ER stress is an accumulation of unfolded proteins. This pathway also leads to the expression of the transcription factor X-box binding protein 1 (XBP1), which induces lipid synthesis. XBP1 has been shown to regulate cDC infiltrating ovarian tumors: the accumulation of lipids in tumor-infiltrating cDC following ER stress and XBP1 activation reduces their ability to present antigens, and thus impairs anti-tumor T-cell responses104. Whether this also occurs in tumor-infiltrating PDC remains to be analyzed. Nevertheless, PDC have been shown to express XBP1105, and this factor can be targeted with bortezomib, which maintains ER homeostasis105. This may represent an additional, metabolism-related means of blocking pro-inflammatory PDC functions. In conclusion, a better understanding of PDC immunometabolism may help to limit the detrimental effects of these cells and to enhance their beneficial roles in the future.
"appendix": "Author contributions\n\n\n\nPS, AV, SP and AC critically read, analysed and discussed the literature and conceived the outline of the manuscript. PS wrote the manuscript. SP, AV and AC edited the manuscript and provided valuable discussions and criticism.\n\n\nCompeting interests\n\n\n\nThe authors declare that they have no competing interests.\n\n\nGrant information\n\nThe work in this field is supported by the Agence Nationale de la Recherche (LabEx LipSTIC; ANR-11-LABX-0021), and the Conseil Régional de Bourgogne Franche-Comté (“soutien au LabEx LipSTIC” 2016).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgments\n\nWe would like to thank Sarah Odrion for her help in editing our manuscript, and the members of our laboratory for their work.\n\n\nReferences\n\nO'Neill LA, Pearce EJ: Immunometabolism governs dendritic cell and macrophage function. J Exp Med. 2016; 213(1): 15–23. PubMed Abstract | Publisher Full Text | Free Full Text\n\nO'Neill LA, Kishton RJ, Rathmell J: A guide to immunometabolism for immunologists. Nat Rev Immunol. 2016; 16(9): 553–565. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPearce EJ, Everts B: Dendritic cell metabolism. Nat Rev Immunol. 2015; 15(1): 18–29. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBajwa G, DeBerardinis RJ, Shao B, et al.: Cutting Edge: Critical Role of Glycolysis in Human Plasmacytoid Dendritic Cell Antiviral Responses. J Immunol. 2016; 196(5): 2004–2009. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWu D, Sanin DE, Everts B, et al.: Type 1 Interferons Induce Changes in Core Metabolism that Are Critical for Immune Function. Immunity. 2016; 44(6): 1325–1336. 
PubMed Abstract | Publisher Full Text\n\nCeroi A, Delettre FA, Marotel C, et al.: The anti-inflammatory effects of platelet-derived microparticles in human plasmacytoid dendritic cells involve liver X receptor activation. Haematologica. 2016; 101(3): e72–76. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSwiecki M, Colonna M: The multifaceted biology of plasmacytoid dendritic cells. Nat Rev Immunol. 2015; 15(8): 471–485. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWeichhart T, Hengstschläger M, Linke M: Regulation of innate immune cell function by mTOR. Nat Rev Immunol. 2015; 15(10): 599–614. PubMed Abstract | Publisher Full Text\n\nSharma MD, Hou DY, Liu Y, et al.: Indoleamine 2,3-dioxygenase controls conversion of Foxp3+ Tregs to TH17-like cells in tumor-draining lymph nodes. Blood. 2009; 113(24): 6102–6111. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPearce EL, Pearce EJ: Metabolic pathways in immune cell activation and quiescence. Immunity. 2013; 38(4): 633–643. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAudano M, Ferrari A, Fiorino E, et al.: Energizing Genetics and Epi-genetics: Role in the Regulation of Mitochondrial Function. Curr Genomics. 2014; 15(6): 436–456. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLi W, Sivakumar R, Titov AA, et al.: Metabolic Factors that Contribute to Lupus Pathogenesis. Crit Rev Immunol. 2016; 36(1): 75–98. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGuilliams M, Ginhoux F, Jakubzick C, et al.: Dendritic cells, monocytes and macrophages: a unified nomenclature based on ontogeny. Nat Rev Immunol. 2014; 14(8): 571–578. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCella M, Jarrossay D, Facchetti F, et al.: Plasmacytoid monocytes migrate to inflamed lymph nodes and produce large amounts of type I interferon. Nat Med. 1999; 5(8): 919–923. 
PubMed Abstract | Publisher Full Text\n\nSiegal FP, Kadowaki N, Shodell M, et al.: The nature of the principal type 1 interferon-producing cells in human blood. Science. 1999; 284(5421): 1835–1837. PubMed Abstract | Publisher Full Text\n\nAsselin-Paturel C, Boonstra A, Dalod M, et al.: Mouse type I IFN-producing cells are immature APCs with plasmacytoid morphology. Nat Immunol. 2001; 2(12): 1144–1150. PubMed Abstract | Publisher Full Text\n\nNakano H, Yanagita M, Gunn MD: CD11c+B220+Gr-1+ cells in mouse lymph nodes and spleen display characteristics of plasmacytoid dendritic cells. J Exp Med. 2001; 194(8): 1171–1178. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFerrero I, Held W, Wilson A, et al.: Mouse CD11c+ B220+ Gr1+ plasmacytoid dendritic cells develop independently of the T-cell lineage. Blood. 2002; 100(8): 2852–2857. PubMed Abstract | Publisher Full Text\n\nMartin P, Del Hoyo GM, Anjuere F, et al.: Characterization of a new subpopulation of mouse CD8α+ B220+ dendritic cells endowed with type 1 interferon production capacity and tolerogenic potential. Blood. 2002; 100(2): 383–390. PubMed Abstract | Publisher Full Text\n\nRobbins SH, Walzer T, Dembélé D, et al.: Novel insights into the relationships between dendritic cell subsets in human and mouse revealed by genome-wide expression profiling. Genome Biol. 2008; 9(1): R17. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLiu K, Victora GD, Schwickert TA, et al.: In vivo analysis of dendritic cell development and homeostasis. Science. 2009; 324(5925): 392–397. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLi HS, Gelbard A, Martinez GJ, et al.: Cell-intrinsic role for IFN-α-STAT1 signals in regulating murine Peyer patch plasmacytoid dendritic cells and conditioning an inflammatory response. Blood. 2011; 118(14): 3879–3889. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nSisirak V, Vey N, Vanbervliet B, et al.: CCR6/CCR10-mediated plasmacytoid dendritic cell recruitment to inflamed epithelia after instruction in lymphoid tissues. Blood. 2011; 118(19): 5130–5140. PubMed Abstract | Publisher Full Text | Free Full Text\n\nde Heer HJ, Hammad H, Soullié T, et al.: Essential role of lung plasmacytoid dendritic cells in preventing asthmatic reactions to harmless inhaled antigen. J Exp Med. 2004; 200(1): 89–98. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTokita D, Sumpter TL, Raimondi G, et al.: Poor allostimulatory function of liver plasmacytoid DC is associated with pro-apoptotic activity, dependent on regulatory T cells. J Hepatol. 2008; 49(6): 1008–1018. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMatta BM, Castellaneta A, Thomson AW: Tolerogenic plasmacytoid DC. Eur J Immunol. 2010; 40(10): 2667–2676. PubMed Abstract | Publisher Full Text | Free Full Text\n\nOkada T, Lian ZX, Naiki M, et al.: Murine thymic plasmacytoid dendritic cells. Eur J Immunol. 2003; 33(4): 1012–1019. PubMed Abstract | Publisher Full Text\n\nLi J, Park J, Foss D, et al.: Thymus-homing peripheral dendritic cells constitute two of the three major subsets of dendritic cells in the steady-state thymus. J Exp Med. 2009; 206(3): 607–622. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHadeiba H, Lahl K, Edalati A, et al.: Plasmacytoid dendritic cells transport peripheral antigens to the thymus to promote central tolerance. Immunity. 2012; 36(3): 438–450. PubMed Abstract | Publisher Full Text | Free Full Text\n\nVermi W, Riboldi E, Wittamer V, et al.: Role of ChemR23 in directing the migration of myeloid and plasmacytoid dendritic cells to lymphoid organs and inflamed skin. J Exp Med. 2005; 201(4): 509–515. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nGilliet M, Cao W, Liu YJ: Plasmacytoid dendritic cells: sensing nucleic acids in viral infection and autoimmune diseases. Nat Rev Immunol. 2008; 8(8): 594–606. PubMed Abstract | Publisher Full Text\n\nReizis B, Bunin A, Ghosh HS, et al.: Plasmacytoid dendritic cells: recent progress and open questions. Annu Rev Immunol. 2011; 29: 163–183. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCervantes-Barragan L, Lewis KL, Firner S, et al.: Plasmacytoid dendritic cells control T-cell response to chronic viral infection. Proc Natl Acad Sci U S A. 2012; 109(8): 3012–3017. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSwiecki M, Gilfillan S, Vermi W, et al.: Plasmacytoid dendritic cell ablation impacts early interferon responses and antiviral NK and CD8+ T cell accrual. Immunity. 2010; 33(6): 955–966. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTakagi H, Fukaya T, Eizumi K, et al.: Plasmacytoid dendritic cells are crucial for the initiation of inflammation and T cell immunity in vivo. Immunity. 2011; 35(6): 958–971. PubMed Abstract | Publisher Full Text\n\nPelka K, Latz E: IRF5, IRF8, and IRF7 in human pDCs - the good, the bad, and the insignificant? Eur J Immunol. 2013; 43(7): 1693–1697. PubMed Abstract | Publisher Full Text\n\nKadowaki N, Ho S, Antonenko S, et al.: Subsets of human dendritic cell precursors express different toll-like receptors and respond to different microbial antigens. J Exp Med. 2001; 194(6): 863–869. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDiebold SS, Kaisho T, Hemmi H, et al.: Innate antiviral responses by means of TLR7-mediated recognition of single-stranded RNA. Science. 2004; 303(5663): 1529–1531. PubMed Abstract | Publisher Full Text\n\nMoynagh PN: TLR signalling and activation of IRFs: revisiting old friends from the NF-kappaB pathway. Trends Immunol. 2005; 26(9): 469–476. 
PubMed Abstract | Publisher Full Text\n\nBarrat FJ, Meeker T, Gregorio J, et al.: Nucleic acids of mammalian origin can act as endogenous ligands for Toll-like receptors and may promote systemic lupus erythematosus. J Exp Med. 2005; 202(8): 1131–1139. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLande R, Gregorio J, Facchinetti V, et al.: Plasmacytoid dendritic cells sense self-DNA coupled with antimicrobial peptide. Nature. 2007; 449(7162): 564–569. PubMed Abstract | Publisher Full Text\n\nGanguly D, Chamilos G, Lande R, et al.: Self-RNA-antimicrobial peptide complexes activate human dendritic cells through TLR7 and TLR8. J Exp Med. 2009; 206(9): 1983–1994. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHonda K, Yanai H, Negishi H, et al.: IRF-7 is the master regulator of type-I interferon-dependent immune responses. Nature. 2005; 434(7034): 772–777. PubMed Abstract | Publisher Full Text\n\nGuiducci C, Ghirelli C, Marloie-Provost MA, et al.: PI3K is critical for the nuclear translocation of IRF-7 and type I IFN production by human plasmacytoid predendritic cells in response to TLR activation. J Exp Med. 2008; 205(2): 315–322. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBauer M, Redecke V, Ellwart JW, et al.: Bacterial CpG-DNA triggers activation and maturation of human CD11c-, CD123+ dendritic cells. J Immunol. 2001; 166(8): 5000–5007. PubMed Abstract | Publisher Full Text\n\nAngelot F, Seillès E, Biichlé S, et al.: Endothelial cell-derived microparticles induce plasmacytoid dendritic cell maturation: potential implications in inflammatory diseases. Haematologica. 2009; 94(11): 1502–1512. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPiqueras B, Connolly J, Freitas H, et al.: Upon viral exposure, myeloid and plasmacytoid dendritic cells produce 3 waves of distinct chemokines to recruit immune effectors. Blood. 2006; 107(7): 2613–2618. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nKerkmann M, Rothenfusser S, Hornung V, et al.: Activation with CpG-A and CpG-B oligonucleotides reveals two distinct regulatory pathways of type I IFN synthesis in human plasmacytoid dendritic cells. J Immunol. 2003; 170(9): 4465–4474. PubMed Abstract | Publisher Full Text\n\nTian J, Avalos AM, Mao SY, et al.: Toll-like receptor 9-dependent activation by DNA-containing immune complexes is mediated by HMGB1 and RAGE. Nat Immunol. 2007; 8(5): 487–496. PubMed Abstract | Publisher Full Text\n\nDumitriu IE, Baruah P, Bianchi ME, et al.: Requirement of HMGB1 and RAGE for the maturation of human plasmacytoid dendritic cells. Eur J Immunol. 2005; 35(7): 2184–2190. PubMed Abstract | Publisher Full Text\n\nRovere-Querini P, Capobianco A, Scaffidi P, et al.: HMGB1 is an endogenous immune adjuvant released by necrotic cells. EMBO Rep. 2004; 5(8): 825–830. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGarcia-Romo GS, Caielli S, Vega B, et al.: Netting neutrophils are major inducers of type I IFN production in pediatric systemic lupus erythematosus. Sci Transl Med. 2011; 3(73): 73ra20. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLande R, Ganguly D, Facchinetti V, et al.: Neutrophils activate plasmacytoid dendritic cells by releasing self-DNA-peptide complexes in systemic lupus erythematosus. Sci Transl Med. 2011; 3(73): 73ra19. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDuffau P, Seneschal J, Nicco C, et al.: Platelet CD154 potentiates interferon-alpha secretion by plasmacytoid dendritic cells in systemic lupus erythematosus. Sci Transl Med. 2010; 2(47): 47ra63. PubMed Abstract | Publisher Full Text\n\nBroz P, Monack DM: Newly described pattern recognition receptors team up against intracellular pathogens. Nat Rev Immunol. 2013; 13(8): 551–565. 
PubMed Abstract | Publisher Full Text\n\nBonnefoy F, Perruche S, Couturier M, et al.: Plasmacytoid dendritic cells play a major role in apoptotic leukocyte-induced immune modulation. J Immunol. 2011; 186(10): 5696–5705. PubMed Abstract | Publisher Full Text\n\nMunn DH, Sharma MD, Hou D, et al.: Expression of indoleamine 2,3-dioxygenase by plasmacytoid dendritic cells in tumor-draining lymph nodes. J Clin Invest. 2004; 114(2): 280–290. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSharma MD, Baban B, Chandler P, et al.: Plasmacytoid dendritic cells from mouse tumor-draining lymph nodes directly activate mature Tregs via indoleamine 2,3-dioxygenase. J Clin Invest. 2007; 117(9): 2570–2582. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMellor AL, Baban B, Chandler PR, et al.: Cutting edge: CpG oligonucleotides induce splenic CD19+ dendritic cells to acquire potent indoleamine 2,3-dioxygenase-dependent T cell regulatory functions via IFN Type 1 signaling. J Immunol. 2005; 175(9): 5601–5605. PubMed Abstract | Publisher Full Text\n\nPallotta MT, Orabona C, Volpi C, et al.: Indoleamine 2,3-dioxygenase is a signaling protein in long-term tolerance by dendritic cells. Nat Immunol. 2011; 12(9): 870–878. PubMed Abstract | Publisher Full Text\n\nCao W, Rosen DB, Ito T, et al.: Plasmacytoid dendritic cell-specific receptor ILT7-Fc epsilonRI gamma inhibits Toll-like receptor-induced interferon production. J Exp Med. 2006; 203(6): 1399–1405. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDzionek A, Sohma Y, Nagafune J, et al.: BDCA-2, a novel plasmacytoid dendritic cell-specific type II C-type lectin, mediates antigen capture and is a potent inhibitor of interferon alpha/beta induction. J Exp Med. 2001; 194(12): 1823–1834. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSaas P, Perruche S: Functions of TGF-β-exposed plasmacytoid dendritic cells. Crit Rev Immunol. 2012; 32(6): 529–553. 
PubMed Abstract | Publisher Full Text\n\nGrouard G, Rissoan MC, Filgueira L, et al.: The enigmatic plasmacytoid T cells develop into dendritic cells with interleukin (IL)-3 and CD40-ligand. J Exp Med. 1997; 185(6): 1101–1111. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGhirelli C, Zollinger R, Soumelis V: Systematic cytokine receptor profiling reveals GM-CSF as a novel TLR-independent activator of human plasmacytoid predendritic cells. Blood. 2010; 115(24): 5037–5040. PubMed Abstract | Publisher Full Text\n\nChistiakov DA, Orekhov AN, Sobenin IA, et al.: Plasmacytoid dendritic cells: development, functions, and role in atherosclerotic inflammation. Front Physiol. 2014; 5: 279. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFritsch SD, Weichhart T: Effects of Interferons and Viruses on Metabolism. Front Immunol. 2016; 7: 630. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBanchereau J, Pascual V: Type I interferon in systemic lupus erythematosus and other autoimmune diseases. Immunity. 2006; 25(3): 383–392. PubMed Abstract | Publisher Full Text\n\nDiana J, Simoni Y, Furio L, et al.: Crosstalk between neutrophils, B-1a cells and plasmacytoid dendritic cells initiates autoimmune diabetes. Nat Med. 2013; 19(1): 65–73. PubMed Abstract | Publisher Full Text\n\nvan Bon L, Affandi AJ, Broen J, et al.: Proteome-wide analysis and CXCL4 as a biomarker in systemic sclerosis. N Engl J Med. 2014; 370(5): 433–443. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNestle FO, Conrad C, Tun-Kyi A, et al.: Plasmacytoid predendritic cells initiate psoriasis through interferon-alpha production. J Exp Med. 2005; 202(1): 135–143. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMillet A, Martin KR, Bonnefoy F, et al.: Proteinase 3 on apoptotic cells disrupts immune silencing in autoimmune vasculitis. J Clin Invest. 2015; 125(11): 4107–4121. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nBossard C, Malard F, Arbez J, et al.: Plasmacytoid dendritic cells and Th17 immune response contribution in gastrointestinal acute graft-versus-host disease. Leukemia. 2012; 26(7): 1471–1474. PubMed Abstract | Publisher Full Text\n\nMalard F, Bossard C, Brissot E, et al.: Increased plasmacytoid dendritic cells and RORγt-expressing immune effectors in cutaneous acute graft-versus-host disease. J Leukoc Biol. 2013; 94(6): 1337–1343. PubMed Abstract | Publisher Full Text\n\nDöring Y, Zernecke A: Plasmacytoid dendritic cells in atherosclerosis. Front Physiol. 2012; 3: 230. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSoumelis V, Scott I, Gheyas F, et al.: Depletion of circulating natural type 1 interferon-producing cells in HIV-infected AIDS patients. Blood. 2001; 98(4): 906–912. PubMed Abstract | Publisher Full Text\n\nSzabo G, Dolganiuc A: The role of plasmacytoid dendritic cell-derived IFN alpha in antiviral immunity. Crit Rev Immunol. 2008; 28(1): 61–94. PubMed Abstract | Publisher Full Text\n\nCosta-Mattioli M, Sonenberg N: RAPping production of type I interferon in pDCs through mTOR. Nat Immunol. 2008; 9(10): 1097–1099. PubMed Abstract | Publisher Full Text\n\nCao W, Manicassamy S, Tang H, et al.: Toll-like receptor-mediated induction of type I interferon in plasmacytoid dendritic cells requires the rapamycin-sensitive PI(3)K-mTOR-p70S6K pathway. Nat Immunol. 2008; 9(10): 1157–1164. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCorcoran SE, O'Neill LA: HIF1α and metabolic reprogramming in inflammation. J Clin Invest. 2016; 126(10): 3699–3707. PubMed Abstract | Publisher Full Text | Free Full Text\n\nVan Krinks CH, Matyszak MK, Gaston JS: Characterization of plasmacytoid dendritic cells in inflammatory arthritis synovial fluid. Rheumatology (Oxford). 2004; 43(4): 453–460. 
PubMed Abstract | Publisher Full Text\n\nLande R, Giacomini E, Serafini B, et al.: Characterization and recruitment of plasmacytoid dendritic cells in synovial fluid and tissue of patients with chronic inflammatory arthritis. J Immunol. 2004; 173(4): 2815–2824. PubMed Abstract | Publisher Full Text\n\nCavanagh LL, Boyce A, Smith L, et al.: Rheumatoid arthritis synovium contains plasmacytoid dendritic cells. Arthritis Res Ther. 2005; 7(2): R230–240. PubMed Abstract | Publisher Full Text | Free Full Text\n\nArimura K, Takagi H, Uto T, et al.: Crucial role of plasmacytoid dendritic cells in the development of acute colitis through the regulation of intestinal inflammation. Mucosal Immunol. 2017; In press. PubMed Abstract | Publisher Full Text\n\nLund JM, Alexopoulou L, Sato A, et al.: Recognition of single-stranded RNA viruses by Toll-like receptor 7. Proc Natl Acad Sci U S A. 2004; 101(15): 5598–5603. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHong C, Tontonoz P: Liver X receptors in lipid metabolism: opportunities for drug discovery. Nat Rev Drug Discov. 2014; 13(6): 433–444. PubMed Abstract | Publisher Full Text\n\nWójcicka G, Jamroz-Wisniewska A, Horoszewicz K, et al.: Liver X receptors (LXRs). Part I: structure, function, regulation of activity, and role in lipid metabolism. Postepy Hig Med Dosw (Online). 2007; 61: 736–759. PubMed Abstract\n\nKidani Y, Bensinger SJ: Liver X receptor and peroxisome proliferator-activated receptor as integrators of lipid homeostasis and immunity. Immunol Rev. 2012; 249(1): 72–83. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRakhshandehroo M, Knoch B, Muller M, et al.: Peroxisome proliferator-activated receptor alpha target genes. PPAR Res. 2010; 2010; pii: 612089. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKiss M, Czimmerer Z, Nagy L: The role of lipid-activated nuclear receptors in shaping macrophage and dendritic cell function: From physiology to pathology. J Allergy Clin Immunol. 
2013; 132(2): 264–286. PubMed Abstract | Publisher Full Text\n\nHorton JD, Goldstein JL, Brown MS: SREBPs: activators of the complete program of cholesterol and fatty acid synthesis in the liver. J Clin Invest. 2002; 109(9): 1125–1131. PubMed Abstract | Publisher Full Text | Free Full Text\n\nIkonen E: Cellular cholesterol trafficking and compartmentalization. Nat Rev Mol Cell Biol. 2008; 9(2): 125–138. PubMed Abstract | Publisher Full Text\n\nOram JF, Vaughan AM: ATP-Binding cassette cholesterol transporters and cardiovascular disease. Circ Res. 2006; 99(10): 1031–1043. PubMed Abstract | Publisher Full Text\n\nIm SS, Osborne TF: Liver x receptors in atherosclerosis and inflammation. Circ Res. 2011; 108(8): 996–1001. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDove DE, Linton MF, Fazio S: ApoE-mediated cholesterol efflux from macrophages: separation of autocrine and paracrine effects. Am J Physiol Cell Physiol. 2005; 288(3): C586–592. PubMed Abstract | Publisher Full Text\n\nHong C, Duit S, Jalonen P, et al.: The E3 ubiquitin ligase IDOL induces the degradation of the low density lipoprotein receptor family members VLDLR and ApoER2. J Biol Chem. 2010; 285(26): 19720–19726. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAmuro H, Ito T, Miyamoto R, et al.: Statins, inhibitors of 3-hydroxy-3-methylglutaryl-coenzyme A reductase, function as inhibitors of cellular and molecular components involved in type I interferon production. Arthritis Rheum. 2010; 62(7): 2073–2085. PubMed Abstract | Publisher Full Text\n\nBu DX, Griffin G, Lichtman AH: Mechanisms for the anti-inflammatory effects of statins. Curr Opin Lipidol. 2011; 22(3): 165–170. PubMed Abstract | Publisher Full Text\n\nCeroi A, Masson D, Roggy A, et al.: LXR agonist treatment of blastic plasmacytoid dendritic cell neoplasm restores cholesterol efflux and triggers apoptosis. Blood. 2016; 128(23): 2694–2707. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nA-Gonzalez N, Bensinger SJ, Hong C, et al.: Apoptotic cells promote their own clearance and immune tolerance through activation of the nuclear receptor LXR. Immunity. 2009; 31(2): 245–258. PubMed Abstract | Publisher Full Text | Free Full Text\n\nYork AG, Williams KJ, Argus JP, et al.: Limiting Cholesterol Biosynthetic Flux Spontaneously Engages Type I IFN Signaling. Cell. 2015; 163(7): 1716–1729. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGeva-Zatorsky N, Sefik E, Kua L, et al.: Mining the Human Gut Microbiota for Immunomodulatory Organisms. Cell. 2017; 168(5): 928–943.e911. PubMed Abstract | Publisher Full Text\n\nArbez J, Lamarthee B, Gaugler B, et al.: Histone deacetylase inhibitor valproic acid affects plasmacytoid dendritic cells phenotype and function. Immunobiology. 2014; 219(8): 637–643. PubMed Abstract | Publisher Full Text\n\nCubillos-Ruiz JR, Bettigole SE, Glimcher LH: Molecular Pathways: Immunosuppressive Roles of IRE1α-XBP1 Signaling in Dendritic Cells of the Tumor Microenvironment. Clin Cancer Res. 2016; 22(9): 2121–2126. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHirai M, Kadowaki N, Kitawaki T, et al.: Bortezomib suppresses function and survival of plasmacytoid dendritic cells by targeting intracellular trafficking of Toll-like receptors and endoplasmic reticulum homeostasis. Blood. 2011; 117(2): 500–509. PubMed Abstract | Publisher Full Text"
}
|
[
{
"id": "22503",
"date": "05 May 2017",
"name": "Vassili Soumelis",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis review by Saas et al. reports on an original question, rarely addressed in reviews: the role of metabolism on the innate functions of plasmacytoid dendritic cells. Authors have performed a very good job summarizing current knowledge in the field. They very extensively present the molecular and functional pathways involved in PDC biology, and the possible interactions with various metabolic pathways. This way represents the current state-of-the-art. A large number of references are cited, in the field of general PDC biology, including important founder papers, as well as in the field of metabolism, and the role of metabolic pathways in PDC.\nWe only have a few comments on the manuscript:\nThe discussion of PDC implication in disease could be expanded. In particular, authors could discuss the role of PDC in the tumor microenvironment. It is known that metabolism plays an important role in tumor development and antitumor immunity. Can this also be through affecting PDCs? Is there a crosstalk between PDC biology, metabolism, and tumor progression? Several articles in the past few years have addressed the function of PDC in cancer and could serve as a basis for a discussion. Other diseases could also be discussed in more details, in particular autoimmunity. 
Indeed, the pathways involved in PDC activation in disease context may be slightly different from exogenous purified TLR ligands, due to the diversity of stimuli and the complexity of the inflammatory microenvironments.\n\nFigure 2 is a bit too dense and confusing. We suggest clarifying it by removing or reorganizing some of the information.\n\nThe article should also be edited for better English.\n\nIs the topic of the review discussed comprehensively in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Yes\n\nIs the review written in accessible language? Partly\n\nAre the conclusions drawn appropriate in the context of the current research literature? Yes",
"responses": [
{
"c_id": "2794",
"date": "22 Jun 2017",
"name": "Philippe Saas",
"role": "Author Response F1000Research Advisory Board Member",
"response": "We thank the reviewers for their constructive comments. We appreciate greatly these comments and we have tried to answer to them. Please find below a point by point response to the reviewers’ comments.1. The discussion of PDC implication in disease could be expanded. In particular, authors could discuss the role of PDC in the tumor microenvironment. It is known that metabolism plays an important role in tumor development and antitumor immunity. Can this also be through affecting PDCs? Is there a crosstalk between PDC biology, metabolism, and tumor progression? Several articles in the past few years have addressed the function of PDC in cancer and could serve as a basis for a discussion. Other diseases could also be discussed in more details, in particular autoimmunity. Indeed, the pathways involved in PDC activation in disease context may be slightly different from exogenous purified TLR ligands, due to the diversity of stimuli and the complexity of the inflammatory microenvironments.This is a very interesting suggestion (please also refer to comment #5 of reviewer #2). Indeed, metabolism may play an important role in both tumor progression and antitumor immunity. Thus, a sentence has been added in the paragraph dealing with PDC ontogeny and localization, and different references describing PDC infiltrates in tumors have been quoted. We have quoted original works as advised by reviewers #3. Two small paragraphs have been added in the section called “2.4 Implications of PDC in diseases”. Again, references have been added in this section, including original works and reviews on metabolism and cancer. Moreover, we have also evoked quickly in different parts of the revised version that the tumor microenvironment may impact on PDC metabolism. However, to the best of our knowledge, no specific data are available today. We do not want to speculate too much on the impact of tumor microenvironment on PDC functions. 
We have made no further comments on autoimmunity and the diversity of stimuli.\n\n2. Figure 2 is a bit too dense and confusing. We suggest clarifying it by removing or reorganizing some of the information.\nWe agree with this comment. We have simplified both panels A and B of Figure 2. In addition, we have slightly modified the figure legend."
}
]
},
{
"id": "21755",
"date": "11 May 2017",
"name": "Nathalie Bendriss-Vermare",
"expertise": [
"Reviewer Expertise Oncoimmunology"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis review provides a balanced and comprehensive overview of the latest discoveries about the role of metabolism in plasmacytoid dendritic cell innate functions. This is a very original topic based on recent data supporting the idea that the glycolytic pathway as well as lipid metabolism may modulate the production of type I IFN by pDC or may be involved in this function after TLR7/9 triggering. In addition, this review provides new clues on how these metabolic pathways may be harnessed in pathophysiological contexts where pDC play a detrimental role.\nThis review has a comprehensive view of all relevant literature in the field and summarized very well the fundamental concepts, but there are some minor concerns that need to be addressed as below.\nAbstract is clearly written, but the text could be reduced by omitting the section about the differences related to the origin of pDC.\n\nPage 3: PRR stands for Pattern (but not Pathogen) Recognition Receptors\n\nPage 3: I would suggest to add that i) human pDC are usually identified as CD4+ CD303+ CD123high and CD11cnegative (to highlight the difference with mouse pDC that are CD11clow), ii) irf7 is also a master genes that is shared between human and mouse pDC, iii) CD317 (known as BST2, PDCA1).\n\nPage 5: the list of references related to the regulatory functions of pDC is too short. 
Some major articles should be added (Ochando 2006, Goubier 2008, Hadeiba 2008, Irla 2010, etc…).\n\nPage 5 paragraph 2.4: when the authors discuss the role of pDC in diseases, they focus on autoimmune disorders, GVHD, atherosclerosis, and chronic viral infections but cancer was omitted. Yet, metabolic pathways are deeply altered in the context of cancer and I would assume that the innate functions of pDC would be modulated by the metabolic changes occurring in the tumor microenvironment. This needs to be discussed.\n\nPage 7: when the authors discuss the role of glycolysis on pDC functions upon TLR7 triggering, it is not clear whether this is a global regulation of the transcription or whether this is specific to genes related to type I IFN pathway.\n\nPage 10: it is very interesting but not clear to me how the massive decrease of intracellular cholesterol content is connected to the activation of the STING/cGAS pathway leading to spontaneous type I IFN production in macrophages. This should be clarified.\n\nThe figures nicely support and illustrate the main text of the review. Nevertheless, I would suggest adding a table summarizing the main effects of each metabolic pathway on pDC biology by comparing human versus mouse and TLR7 versus TLR9 stimulation.\n\nIs the topic of the review discussed comprehensively in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Yes\n\nIs the review written in accessible language? Yes\n\nAre the conclusions drawn appropriate in the context of the current research literature? Yes",
"responses": [
{
"c_id": "2793",
"date": "22 Jun 2017",
"name": "Philippe Saas",
"role": "Author Response F1000Research Advisory Board Member",
"response": "We thank the reviewer for her constructive comments. We appreciate greatly these comments and we have tried to answer to them. Please find below a point by point response to the reviewer’s comments.1. Abstract is clearly written, but the text could be reduced by omitting the section about the differences related to the origin of pDC.We have modified the abstract accordingly.2. Page 3: PRR stands for Pattern (but not Pathogen) Recognition ReceptorsThis is true. We have corrected this abbreviation.3. Page 3: I would suggest to add that i) human pDC are usually identified as CD4+ CD303+ CD123high and CD11cnegative (to highlight the difference with mouse pDC that are CD11clow), ii) irf7 is also a master genes that is shared between human and mouse pDC, iii) CD317 (known as BST2, PDCA1).We have added these features according to the reviewer’s suggestions.4. Page 5: the list of references related to the regulatory functions of pDC is too short. Some major articles should be added (Ochando 2006, Goubier 2008, Hadeiba 2008, Irla 2010, etc…).We have not expanded too much this part of the text. We have quoted the original work of Ochando 2006, Goubier 2008, and Hadeiba 2008, since it allows us to evoke briefly the role of PDC in transplantation and oral tolerance. However, we have not quoted the work of Irla (2010), since it deals with regulatory T cells and we focus our review on the innate immune functions of PDC.5. Page 5 paragraph 2.4: when the authors discuss the role of pDC in diseases, they focus on autoimmune disorders, GVHD, atherosclerosis, and chronic viral infections but cancer was omitted. Yet, metabolic pathways are deeply altered in the context of cancer and I would assume that the innate functions of pDC would be modulated by the metabolic changes occurring in tumor microenvironment. This needs to be discussed.This is a very interesting suggestion (please also refer to comment #1 of reviewer #1). 
Indeed, metabolism may play an important role in both tumor progression and antitumor immunity. Thus, a sentence has been added in the paragraph dealing with PDC ontogeny and localization, and different references describing PDC infiltrates in tumors have been quoted. We have quoted original works as advised by reviewer #3. Two small paragraphs have been added in the section called “2.4 Implications of PDC in diseases”. Again, references have been added in this section, including original works and reviews on metabolism and cancer. Moreover, we have also briefly noted in different parts of the revised version that the tumor microenvironment may impact PDC metabolism.\n\n6. Page 7: when the authors discuss the role of glycolysis on pDC functions upon TLR7 triggering, it is not clear whether this is a global regulation of the transcription or whether this is specific to genes related to type I IFN pathway.\nGlycolysis after TLR7 triggering on human PDC seems necessary for most of the PDC innate immune functions (e.g., type I IFN, costimulatory molecules). However, the authors concentrated most of their work on the type I IFN pathway. In the text, we wrote “2-DG inhibits the increase of IFNA, CD80 and CD86 mRNA induced by exposure to Flu 4 . This suggests that glycolysis induced by the TLR7 pathway regulates these genes at the transcriptional level.”\n\n7. Page 10: it is very interesting but not clear to me how the massive decrease of intracellular cholesterol content is connected to the activation of the STING/cGAS pathway leading to spontaneous type I IFN production in macrophages. This should be clarified.\nThis is true. The explanation provided in the first version of our manuscript was not clear. We have modified the sentence concerning this point.\n\n8. The figures nicely support and illustrate the main text of the review. 
Nevertheless, I would suggest to add a table summarizing the main effects of each metabolic pathway on pDC biology by comparing human versus mouse and TLR7 versus TLR9 stimulation.This is a very interesting suggestion. We have added a table summarizing the impact of pharmacological agents targeting the different metabolic pathways on TLR7/9 stimulation. Again, we focus on innate immune functions, including: type I IFN, IL-6 and TNF production as well as costimulatory molecule expression."
}
]
},
{
"id": "22647",
"date": "12 May 2017",
"name": "Edward J. Pearce",
"expertise": [
"Reviewer Expertise Immunometabolism"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nIn the review by Saas et al., the authors discussed the collective knowledge of the metabolic status of pDC. Despite the fact that data on this subject are rather scarce, the authors prepared an interesting manuscript. They also included a section about the role and function of pDC which created a more comprehensive source of information.\n\nComments:\nAbstract needs more clarity, i.e. “Recent data support the idea that the glycolytic pathway (or glycolysis), as well as lipid metabolism (including both cholesterol and fatty acid metabolism) may impact some innate immune functions of PDC or may be involved in these functions after Toll-like receptor (TLR) 7/9 triggering. Some differences may be related to the origin of PDC (human versus mouse PDC or blood-sorted versus FLT3 ligand stimulated-bone marrow-sorted PDC).” If the authors want to keep this in the abstract they should probably mention what the differences are.\n\nPlease consider rearranging and shortening the Introduction to avoid repetition.\n\nWhen talking about the role of mTOR in pDCs, I think there is no need for a detailed introduction to mTOR – there are many recent authoratitive reviews on this subject. The authors should refer to one of these. For example, a review by David Sabatini.\n\nThe language of the review can be confusing at time. 
For example, the following sentence should probably be deleted for the sake of clarity: “After differentiation in the bone marrow, PDC are released into the bloodstream for homing to different lymphoid tissues. Thus, PDC isolated from blood of healthy donors or patients consist in PDC migrating to these tissues.”\n\nThe authors should make an attempt to reference original work rather than other reviews. For example, “Localizations of PDC in other lymphoid organs, such as Peyer’s patches of the gut”. Instead of Li et al. (2011)1, please cite the paper by Contractor et al.(2007)2.\n\nIs the topic of the review discussed comprehensively in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Yes\n\nIs the review written in accessible language? Partly\n\nAre the conclusions drawn appropriate in the context of the current research literature? Yes",
"responses": [
{
"c_id": "2792",
"date": "22 Jun 2017",
"name": "Philippe Saas",
"role": "Author Response F1000Research Advisory Board Member",
"response": "We thank the reviewers for their constructive comments. We appreciate greatly these comments and we have tried to answer to them. Please find below a point by point response to the reviewers’ comments.1. Abstract needs more clarity, i.e. “Recent data support the idea that the glycolytic pathway (or glycolysis), as well as lipid metabolism (including both cholesterol and fatty acid metabolism) may impact some innate immune functions of PDC or may be involved in these functions after Toll-like receptor (TLR) 7/9 triggering. Some differences may be related to the origin of PDC (human versus mouse PDC or blood-sorted versus FLT3 ligand stimulated-bone marrow-sorted PDC).” If the authors want to keep this in the abstract they should probably mention what the differences are.We agree with this comment. It is similar to the first comment of reviewer #2. We have deleted the following sentence: “..Some differences may be related to the origin of PDC (human versus mouse PDC or blood-sorted versus FLT3 ligand stimulated-bone marrow-sorted PDC).“2. Please consider rearranging and shortening the Introduction to avoid repetition.We prefer to keep the repetition for readers not involved in the fields of either metabolism or PDC. We also want to emphasize that few data are available for PDC compared to macrophages or conventional dendritic cells.3. When talking about the role of mTOR in pDCs, I think there is no need for a detailed introduction to mTOR – there are many recent authoratitive reviews on this subject. The authors should refer to one of these. For example, a review by David Sabatini.We have quoted a review by David Sabatini (Ref# 8).4. The language of the review can be confusing at time. For example, the following sentence should probably be deleted for the sake of clarity: “After differentiation in the bone marrow, PDC are released into the bloodstream for homing to different lymphoid tissues. 
Thus, PDC isolated from blood of healthy donors or patients consist in PDC migrating to these tissues.”We have changed the beginning of this paragraph. We have deleted the second sentence since it corresponded to a repetition of the previous paragraph and we have modified a little bit the mentioned sentences.5. The authors should make an attempt to reference original work rather than other reviews. For example, “Localizations of PDC in other lymphoid organs, such as Peyer’s patches of the gut”. Instead of Li et al. (2011)1, please cite the paper by Contractor et al. (2007)2.Now, we have quoted the paper by Contractor et al. (2007) in addition to those by Li et al. (2011). Both papers are original works. We have also followed the recommendation to quote original works when references on PDC infiltrating tumors have been added."
}
]
}
] | 1
|
https://f1000research.com/articles/6-456
|
https://f1000research.com/articles/6-972/v1
|
22 Jun 17
|
{
"type": "Research Article",
"title": "Accessibility to health services among migrant workers in the Northeast of Thailand",
"authors": [
"Suprawee Khongthanachayopit",
"Wongsa Laohasiriwong",
"Suprawee Khongthanachayopit"
],
"abstract": "Background. There is an increasing trend of trans-border migration from neighboring countries to Thailand. According to human rights laws, everyone must have access to health services, even if they are from other nationalities. However, a small minority of health personnel in Thailand discriminate against immigrant workers, as they are from a lower financial bracket. Methods. This cross-sectional study aims to determine the prevalence of accessibility to health services and factors associated with access to health services among migrant workers who work along the Northeast border of Thailand. A total of 621 legal migrant workers were randomly selected to respond to a structured questionnaire about the satisfaction of health services, using the 5As of health services: availability; accessibility; accommodation; affordability; acceptability. Associations between independent variables and access to health services were analysed\n\nusing multiple logistic regression analysis. Results. The results indicated that the majority of these registered migrant workers were female (63.9%) with an average age of 29± 8.61 years old, and were married (54.3%). Most of the workers worked at restaurants (80%), whereas only 20% were in agricultural sectors. Only 14% (95% CI: 11-17%) of migrant workers had access to health services. The factors that were significantly associated with accessibility to health service experienced ill health during the past one year (OR = 2.48; 95%CI; 1.54–3.97; p-value<0.001); have been married (OR = 2.32; 95% CI: 1.40 – 3.90; p-value <0.001). Conclusions. Most of the migrant workers could not access health services. The ones who did access health services were married or ill.",
"keywords": [
"accessibility",
"health service",
"migrant workers",
"curative"
],
"content": "Introduction\n\nMobilization of people across borders is widely spread around the world. There has been an increasing trend of migrant workers in Thailand, who are allowed to work all over the country. These individuals have increased by 13.18% since 2013, to comprise 87.99% of workers in 2014, totaling over 3 million individuals. These migrant workers are mostly from three nationalities: Burmese, Laotian and Cambodian1. The workers’ physical appearance, language, and culture are quite similar to the Thai population, which causes the numbers of migrant workers and patients from neighbouring countries to increase annually2. The country is in need of migrant workers for jobs that are mostly labour intensive both in agricultural and industrial sectors, which can be of a risky nature with lower wages. The number of migrants from many countries has rapidly increased as a result of economic development activities, trade and tourism along Thai borders. The growth of immigration is clearly seen, especially in the special economic zones, and Thailand is also an ASEAN Member State since December 2015.\n\nThe migrant workers mainly work as unskilled labour in dirty, dangerous and degrading conditions that leaves them exposed to a higher risk of communicable diseases, such as tuberculosis3. From the literature it is noted that 40% of migrant workers do not have a health insurance card, which results in lower access to healthcare services compared to those with a health insurance card4. It is mandatory that government healthcare services in the border provinces should serve these foreign patients, whether they can afford the medical expense or not. Several government healthcare institutes have used the budget allocated for Thai patients to support foreign patients5. 
However, in 2015 the Thai government attempted to solve these problems by allowing foreigners and migrant workers to purchase a health insurance card with different coverage periods and extended the coverage to foreign workers. Even a migrant worker who is legally registered with the Ministry of Labour has numerous difficulties in using a government health insurance card; for example, the employer may confiscate the health card from the workers, or the workers may prefer private clinics due to inadequate attention in public hospitals6. This obstructs migrant workers from having access to good healthcare. In addition, there are other factors, such as communication barriers, frustrations in contacting the government officers at the hospital, and the distance from their residential areas or work place to the public hospital, that have hindered their access to health services, even though, under human rights principles, migrants must have equitable access to health care.\n\nThe concept of accessibility is a central objective of many health care systems. Nevertheless, there are substantial challenges to achieving this goal of health security for migrants. Access, and how migrants experience their access to health services, is important for policy makers. Studies on the accessibility of health services for migrant workers are limited, especially in Thailand. Data on accessibility to health services are not consistent and there are not enough studies on the given factors7. The literature on the health and access to care of migrants is limited and varies in focus and quality8. A previous study found that migrant workers experienced alienation and inequality when they were treated at healthcare services9. Therefore, there is still ambiguity in the knowledge regarding the current situation of migrant workers in the Northeast and associated factors during their work in Thailand. 
This study examines the factors associated with access to health services among legal migrant workers in the Northeast of Thailand.\n\n\nMethods\n\nThis cross-sectional study aims to examine the prevalence of accessibility to health services and factors associated with access for legal migrant workers in the Northeast of Thailand. The study applied the concept of access developed by Penchansky & Thomas in 198110. The accessibility to health services in this study focused on satisfaction with health services in terms of availability, accessibility, accommodation, affordability and acceptability (the 5As). To avoid recall bias, we trained the interviewers and carefully asked the questions in the migrants’ language (Lao).\n\nThe inclusion criteria were legal migrant workers who were not of Thai nationality but from Laos, had registered as migrant workers with the Department of Employment, Ministry of Labor, and had been working in Nakhon Phanom, Mukdahan or Nong Khai province. The participants were migrant workers who had stayed in Thailand and had work permits that expired on 31 March 2016. Migrants were then selected randomly from a list once they re-registered.\n\nThe required sample size was estimated by using a formula for multiple logistic regression11, to identify relationships between multiple independent variables and a dichotomous dependent variable. Hence, the sample size was 547, increased by 15% to allow for potential non-responders. Therefore, the total number of samples was 629 individuals. Due to incomplete questionnaires, only 621 samples were included in this study.\n\nThe participants were selected by systematic random sampling from the name list of re-registered migrant workers from three provinces located in the northeastern part of Thailand.\n\nWhen investigating access, we classified the dichotomous dependent variable into two groups: access and non-access. 
The questionnaire tool was developed from a review of the literature10,12,13 and was also pretested among 30 workers in Loei province, which is a different area from the data collection site. Most of these workers worked in factories. The feedback from these workers was that the questionnaire was complex and required simpler language to be understood. Hence the questionnaire was simplified in language and redistributed. Reliability was assessed using Cronbach’s alpha, yielding a score of 0.80, which was judged acceptable. Three experts (Prevention of HIV/AIDS Among Migrant Workers in Thailand [PHAMIT Project] Thailand; NaKhon Phanom University, Thailand; Mahasarakham University, Thailand) inspected and commented on the draft questionnaire, after which revisions were made to improve its validity. It was also reviewed by the Khon Kaen University Ethics Committee. The study used a structured questionnaire. The questionnaire applied the concept of access developed by Penchansky & Thomas in 198110, which stated that access is a fit between patient need and actual outcome.\n\nThe data collection process was conducted by approaching a migrant at either their home or work place. Subsequently, the migrant workers were asked to respond to a structured questionnaire interview. All participants were interviewed face-to-face by trained bilingual interviewers. After data collection, the data were validated, coded and analysed using STATA® (ver. 13; College Station, TX, USA: Stata Corp).\n\nIn part 2 of the questionnaire (“Knowledge of rights and benefits in health insurance of migrant workers”), answers were coded 0, correct; 1, wrong. In part 3 (“Expectation and satisfaction from health service”) and part 4 (“Access to health service”), three choices were offered; however, in STATA (multiple logistic regression), there was provision only for two choices, 0 and 1. Hence the choices 1, 2, 3 had to be collapsed to 0 and 1: 1, high or moderate; 0, low (in the dataset: 1, low; 2, moderate; 3, high).\n\nDescriptive statistics were used to examine the characteristics of migrant workers and the prevalence of access. Associations between independent variables and access to health services were calculated by using multiple logistic regression.\n\nThe researcher submitted the approval request to the Office of the Khon Kaen University Ethics Committee in Human Research, which was approved (approval number, HE 592096). A coding scheme was used for data collection, and every document relating to the participants, such as the questionnaire, was destroyed on completion of the research.\n\nOral consent, rather than written consent, was obtained from all participants prior to participation, in order to protect the rights of the participants, since they wanted their information to be confidential (participants were worried that if they provided written consent, they would be vulnerable to government checks, as they are from Laos and not citizens of Thailand).\n\n\nResults\n\nThe characteristics of the migrant workers are shown in Table 1. The results indicated that of the total of 621 legal migrant workers, the majority were female (63.9%) and married (54.3%), with an average age of 29±8.61 years. Most of the workers worked at restaurants (80.0%), whereas only 20.0% were in agricultural sectors. The majority had a monthly income < 9,000 Baht. About one-third of the migrant workers had been ill (37.2%) in the past year.\n\nEven though 37.2% of the migrant workers were ill during the past year, only 14% (95% CI: 11–17%) of migrant workers had access to health services (Table 2). The common illnesses found among migrant workers were musculoskeletal disorders (7.57%), diabetes mellitus (5.61%), antenatal care (4.76%), hypertension (2.21%) and allergy (1.76%). 
The average distance from their residence to the public hospital was 4.82±4.30 km, with 73.1% at a distance <5 km.\n\nFactors that had a relationship with access to health care services were age, income, marital status, occupation, the experience of illness during the past year, knowledge of the health insurance card, and place of residence, and these underwent simple logistic regression. Only the factors with p<0.25 in the simple logistic regression were selected for further multivariate analysis using multiple logistic regression (Table 2).\n\nThe multivariable analysis identified only two factors that were associated with migrant workers’ access to health services. These factors were being married (adj. OR = 2.73; 95%CI: 1.39 – 3.90) and being ill during the past year (adj. OR = 2.48; 95% CI: 1.55 – 3.97). The results are shown in Table 3.\n\n\nDiscussion\n\nAbout one-third of the migrant workers who participated in the current study were ill during the past year (37.2%). However, the most common illnesses were musculoskeletal disorders and general illness. This may be related to the work that the migrants performed, since most of them work at restaurants, in factories and in agricultural fields. The results were similar to those for migrant farmworkers in the Northern Shenandoah Valley, in whom the most common health problems reported were musculoskeletal pain14.\n\nThe migrant workers seldom had severe health problems, perhaps because they were mostly of an age that is usually healthy. In addition, all legal migrant workers had to have a physical examination before being allowed to register with the Ministry of Labor. This study is in accordance with another study in Thailand15, which stated that even though many Myanmar workers had access to the health service, around half of the migrants would not go to the health centers until their conditions worsened. 
This study found very poor access to health services (14%), which differs from a study among immigrants in Portugal, which stated that 77% of immigrants reported having used health services16.\n\nIn a study of health care utilization amongst Shenzhen migrant workers who reported illness, 62.15% did not visit a doctor because of inability to pay17, which is the same reason why immigrants in Thailand in this study did not visit health services (72.1%), as they had a low income, less than 9,000 baht per month. Therefore, the main barriers to health access for the urban poor relate to the interacting effects of poverty18. Migrants did not use the health service despite having a health insurance card and despite the distance from home to the health center not being too far. This contrasts with another study, which found that the most common reasons for non-utilization of a medical card were a lack of transportation and a lack of knowledge of where to go for care19.\n\nThe multivariate analysis indicated that only two factors were associated with access to health services among migrant workers when controlling for other covariates. The first factor was that they experienced illness during the past year (adj. OR = 2.48; 95%CI: 1.54 – 3.97; p-value <0.001). Those with chronic illnesses faced a high cost of health services, so these migrant workers used the hospital service, whereas those with mild musculoskeletal disorders seldom used the health service card. The card was used mainly for chronic illnesses, as their treatment was expensive. In nearly all cases, poorer physical and mental health was a significant predictor of increased utilization. Perceived need and self-rated health were also associated with health service use in some studies20.\n\nThe second factor was marital status (adj. OR = 2.32; 95%CI: 1.40 – 3.90; p-value <0.001): those who were married might have better support from their partners to access the health service, and migrants could share news and information about the health services within their families. Moreover, they could get more social support from others when they had health problems. According to Babitsch et al. (2012)20, a systematic review of studies from 1998–2011, married individuals use health services more than single individuals. In addition, Australian women who were separated, divorced, or living with children used a general practitioner more compared to their counterparts.\n\n\nConclusion\n\nThe overall prevalence of access to health services among migrant workers was 14%, which was rather low when compared to the prevalence of illness at 37.2%. The findings support that personal factors were statistically associated with access to health services. Those who had experienced illness during the past year would seek health services to cure their health problems, especially those with severe illness and those who received support from family.\n\n\nData availability\n\nDataset 1: Raw data gathered from the questionnaire. doi, 10.5256/f1000research.11651.d16535721",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nAcknowledgements\n\nThe authors would like to express sincere thanks and appreciation to all migrant workers who participated in this study.\n\n\nSupplementary material\n\nSupplementary File 1: Questionnaire asked to migrants workers, relating to accessibility of health services.\n\nClick here to access the data.\n\n\nReferences\n\nOffice of Foreign Affairs Administration, Department of Employment: Report on the results of the alien work permit application for 2014. Nonthaburi: The Office; 2014.\n\nChamchan C, Apipornchaisakul K: A situation analysis on health System strengthening for migrants in Thailand. Nakhon Pathom: Institute for Population and Social Research, Mahidol University; 2012. Reference Source\n\nNaing T, Geater A, Pungrassami P: Migrant workers’ occupation and healthcare-seeking preferences for TB-suspicious symptoms and other health problems: a survey among immigrant workers in Songkhla province, southern Thailand. BMC Int Health Hum Rights. 2012; 12: 22. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGuendelman S, Wier M, Angulo V, et al.: The effects of child-only insurance coverage and family coverage on health care access and use: recent findings among low-income children in California. Health Serv Res. 2006; 41(1): 125–47. PubMed Abstract | Publisher Full Text | Free Full Text\n\nArchavanikul K: Migrant workers and Thailand’s health security system. Nakhon Pathom: Institute for Population and Social Research, Mahidol University; 2013. Reference Source\n\nWebber G, Spitzer D, Somrongthong R, et al.: Facilitators and barriers to accessing reproductive health care for migrant beer promoters in Cambodia, Laos, Thailand and Vietnam: a mixed methods study. Global Health. 2012; 8: 21. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nFernández-Mayoralas G, Rodríguez V, Rojo F: Health services accessibility among Spanish elderly. Soc Sci Med. 2000; 50(1): 17–26. PubMed Abstract | Publisher Full Text\n\nWoodward A, Howard N, Wolffers I: Health and access to care for undocumented migrants living in the European Union: a scoping review. Health Policy Plan. 2014; 29(7): 818–30. PubMed Abstract | Publisher Full Text | Free Full Text\n\nYimyam S: Health behavior and access to reproductive health in Thai Yai female migrant worke. Public Health J. 2012; 42(3): 68–52.\n\nPenchansky R, Thomas JW: The concept of access: definition and relationship to consumer satisfaction. Med Care. 1981; 19(2): 127–40. PubMed Abstract | Publisher Full Text\n\nHsieh FY, Bloch DA, Larsen MD: A simple method of sample size calculation for linear and logistic regression. Stat Med. 1998; 17(14): 1623–34. PubMed Abstract | Publisher Full Text\n\nJantara B: Access to health services under the universal coverage policy among elderly in Khon Kaen municipal area. Khon Kaen: Graduate School, Khon Kaen University; 2006.\n\nWongkongdech A, Laohasiriwong W: Movement disability: situations and factors influencing access to health services in the Northeast of Thailand. Kathmandu Univ Med J. 2014; 12(47): 168–74. PubMed Abstract | Publisher Full Text\n\nKelly NJ: Migrant farmworkers in the Northern Shenandoah Valley: health status and access to care. Charlottesville (VA): Department of Nursing, University of Virginia; 2001.\n\nRakprasit J, Nakamura K, Seino K, et al.: Healthcare use for communicable diseases among migrant workers in comparison with Thai workers. Ind Health. 2017; 55(1): 67–75. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDias S, Gama A, Cortes M, et al.: Healthcare-seeking patterns among immigrants in Portugal. Heal Soc Care Community. 2011; 19(5): 514–21. 
PubMed Abstract | Publisher Full Text\n\nMou J, Cheng J, Zhang D, et al.: Health care utilisation amongst Shenzhen migrant workers: does being insured make a difference? BMC Health Serv Res. 2009; 9: 214. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAndermann A, CLEAR Collaboration: Taking action on the social determinants of health in clinical practice: a framework for health professionals. CMAJ. 2016; 188(17–18): E474–83. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWeathers A, Minkovitz C, O’Campo P, et al.: Access to care for children of migratory agricultural workers: factors associated with unmet need for medical care. Pediatrics. 2004; 113(4): e276–282. PubMed Abstract | Publisher Full Text\n\nBabitsch B, Gohl D, von Lengerke T: Re-revisiting Andersen’s Behavioral Model of Health Services Use: a systematic review of studies from 1998–2011. Psycho-Soc Med. 2012; 9: Doc11. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKhongthanachayopit S, Laohasiriwong W: Dataset 1 in: Accessibility to health services among migrant workers in the Northeast of Thailand. F1000Research. 2017. Data Source"
}
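The methods of the article above describe collapsing three-level questionnaire responses into a binary access variable and reporting odds ratios with 95% confidence intervals. A minimal Python sketch of that recoding and a crude (univariable) odds ratio is given below; the counts are illustrative only, not the study's data, and the helper names `dichotomize` and `crude_or` are hypothetical.

```python
import math

def dichotomize(score):
    # Collapse a 3-level response (1 low, 2 moderate, 3 high) to binary,
    # as the methods describe: 1 = high or moderate, 0 = low.
    return 1 if score >= 2 else 0

def crude_or(a, b, c, d):
    # Crude odds ratio with a Woolf-type 95% CI from a 2x2 table:
    # a = exposed with access, b = exposed without access,
    # c = unexposed with access, d = unexposed without access.
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lo, hi)

# Illustrative counts only (not taken from the article's tables):
or_, (lo, hi) = crude_or(60, 250, 27, 284)
print(round(or_, 2), round(lo, 2), round(hi, 2))  # OR ≈ 2.52, CI ≈ (1.55, 4.10)
```

The study itself reports adjusted ORs from multiple logistic regression fitted in Stata; the crude OR here is only the univariable analogue used for the initial screening step.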
|
[
{
"id": "23741",
"date": "23 Aug 2017",
"name": "Songkramchai Leethongdee",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nIn my view as a referee, this is a good article and suitable for publication. I would give some recommendations as follow:\nPlease review much more on migrants policy and the state direction to arrange this problem.\n\nPlease fill in the gap between the state policy and the situation of this problem.\n\nWith regards to research findings please state and recommend to policy suggestions.\n\nDue to the results of his/her finding indicated that two factors were associated with access to health service among the cases, the first that they experienced illness during the past year and the second was marital status which related the previous research and evidenced. So I would suggest to author to contribute his/her own idea to respond or support the two findings as causes of problems in this article.\nI do agree in this article and feel acceptable after correcting as I have recommended.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
},
{
"id": "23738",
"date": "26 Sep 2017",
"name": "Bhunyabhadh Chaimay",
"expertise": [
"Reviewer Expertise Epidemiology",
"public health"
],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis is a very good article related to health services among migrant workers in Thailand, which is a hot issues. However, there are 4 issues to discuss in this article;\nThe objectives of the study mentioned in abstract, introduction and method are not relevance. Regarding the end of introduction mentioned only factors associated ….. but not mentioned about the prevalence of ….\n\nRegarding the method, authors mentioned that to avoid recall bias…. In my opinion, this should be information bias.\n\nAbout discussion, please check the accuracy of the effect size of factors marital status and experience of illness between table 3 and the discussion column 2, paragraph 3 and 4. These are not relevant to the results of the study.\n\nIn conclusion, the factors associated with access to health service mentioned are incomplete, which marital status factor is not mentioned yet.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-972
|
https://f1000research.com/articles/5-1573/v1
|
05 Jul 16
|
{
"type": "Study Protocol",
"title": "Automated analysis of retinal imaging using machine learning techniques for computer vision",
"authors": [
"Jeffrey De Fauw",
"Pearse Keane",
"Nenad Tomasev",
"Daniel Visentin",
"George van den Driessche",
"Mike Johnson",
"Cian O Hughes",
"Carlton Chu",
"Joseph Ledsam",
"Trevor Back",
"Tunde Peto",
"Geraint Rees",
"Hugh Montgomery",
"Rosalind Raine",
"Olaf Ronneberger",
"Julien Cornebise"
],
"abstract": "There are almost two million people in the United Kingdom living with sight loss, including around 360,000 people who are registered as blind or partially sighted. Sight threatening diseases, such as diabetic retinopathy and age related macular degeneration, have contributed to the 40% increase in outpatient attendances in the last decade but are amenable to early detection and monitoring. With early and appropriate intervention, blindness may be prevented in many cases.\n\nOphthalmic imaging provides a way to diagnose and objectively assess the progression of a number of pathologies including neovascular (“wet”) age-related macular degeneration (wet AMD) and diabetic retinopathy. Two methods of imaging are commonly used: digital photographs of the fundus (the ‘back’ of the eye) and Optical Coherence Tomography (OCT, a modality that uses light waves in a similar way to how ultrasound uses sound waves). Changes in population demographics and expectations and the changing pattern of chronic diseases create a rising demand for such imaging. Meanwhile, interrogation of such images is time consuming, costly, and prone to human error. The application of novel analysis methods may provide a solution to these challenges.\n\nThis research will focus on applying novel machine learning algorithms to automatic analysis of both digital fundus photographs and OCT in Moorfields Eye Hospital NHS Foundation Trust patients.\n\nThrough analysis of the images used in ophthalmology, along with relevant clinical and demographic information, Google DeepMind Health will investigate the feasibility of automated grading of digital fundus photographs and OCT and provide novel quantitative measures for specific disease features and for monitoring the therapeutic success.",
"keywords": [
"Optical Coherence Tomography",
"machine learning",
"artificial intelligence",
"diabetic retinopathy",
"neovascular age-related macular degeneration",
"ophthalmology",
"retina"
],
"content": "Background\n\nAge-related macular degeneration (AMD) is a degenerative retinal disease that can cause irreversible visual loss (Bressler, 2004). It is the leading cause of blindness in Europe and North America and accounts for over half of partially sighted or legally blind certifications in the UK (Bunce, 2010). Neovascular (“wet”) AMD is an advanced form of macular degeneration that historically has accounted for the majority of vision loss related to AMD. It is characterised by abnormal blood vessel growth that can result in hemorrhage, fluid exudation and fibrosis, leading to local macular damage and ultimately vision loss (Owen, 2012).\n\nDiabetic retinopathy (DR) is the leading cause of blindness in working age populations in the developed world (Cheung, 2010). It is estimated that up to 50% of people with proliferative DR (characterised by neovascularisation) who do not receive timely treatment will become legally blind within 5 years (Shaw, 2010). Although up to 98% of severe visual loss due to DR can be prevented with early detection and treatment, once it has progressed vision loss is often permanent (Kollias, 2010). Indeed, every year in England 4,200 people are at risk of blindness caused by diabetic retinopathy and there are 1,280 new cases of such blindness (Scanlon, 2008).\n\nTo diagnose these conditions and monitor their progression (and response to treatment) the presence and precise location of the lesions must be determined. Two imaging modalities are commonly used for this purpose: digital photographs of the fundus (the ‘back’ of the eye) and Optical Coherence Tomography (OCT, a modality that uses light waves in a similar way to how ultrasound uses sound waves) (Huang, 1991).\n\nClinics such as these have generated very large datasets of both digital fundus and OCT images. 
They are also very busy; ophthalmology clinics often have long waits and average times to urgent first treatments can be greater than two weeks in the busiest clinics.\n\nMachine learning algorithms make use of the rich, varied datasets to find high dimensional interactions between multiple data points (Murphy, 2012). Most machine learning methods can be thought of as a form of statistics. Some algorithms look for patterns in the data, identifying subgroups within the full sample (known as clustering). Others rely on specific clinical information against which the algorithm compares its predictions and adjusts accordingly (supervised learning).\n\nThere have been significant recent advances in the field of machine learning demonstrating algorithms able to learn how to accomplish tasks without instruction (Mnih, 2015; Silver, 2016). Recent healthcare applications of such algorithms have shed light on complex genetic interactions in autism (Uddin, 2014) and monitoring of physiological observations in intensive care (Clifton, 2013).\n\nThis study aims to combine traditional statistical methodology and machine learning algorithms to achieve automatic grading and quantitative analysis of both digital fundus photograph and OCT in Moorfields Eye Hospital NHS Foundation Trust patients (London, UK). Moorfields Eye Hospital is the leading provider of eye health services in the UK and a world-class centre of excellence for ophthalmic research and education. 
Should the research be successful, implementation of the outcomes would improve patient access to treatment and ease pressures on time and resources in ophthalmology clinics.\n\n\nAims and objectives\n\nExploratory study: investigate whether computer algorithms can detect and classify pathological features on eye imaging, including fundus digital photographs and OCT.\n\nIf the exploratory study is successful:\n\nTo provide novel image analysis algorithms to identify and quantify specific pathological features in eye imaging, using validated methods and expert clinical consensus.\n\nTo provide quantitative measurements of disease progression and severity, and to monitor therapeutic success over time.\n\n\nStudy design\n\nThis is a retrospective, non-interventional exploratory study. Analyses performed in the study will be on fully anonymised (to achieve the primary objective) and pseudonymised (to achieve the secondary objective 2.2) retinal images (including fundus images and OCT). These images will contain no patient identifiable information.\n\nAll patients attending Moorfields Eye Hospital NHS Foundation Trust sites between 01/01/2007 and 29/02/2016, and who had digital retinal imaging (including fundus digital photographs and OCT) as part of their routine clinical care, will be eligible for inclusion in this study.\n\nHard copy examinations (i.e. physical photograph copies prior to digital storage) will be ineligible, and will be excluded by the nature of the original service evaluation requests from the Moorfields team. 
Data from patients who have previously manually requested that their data should not be shared, even for research purposes in anonymised form, and have informed Moorfields Eye Hospital of this, will be ineligible and removed by Moorfields Eye Hospital staff before research begins.\n\nApproximately 1 million examinations meet the above criteria.\n\nMost recent machine learning algorithms benefit from large datasets on which to train (tens to hundreds of thousands of data instances (Silver, 2016)). Across all machine learning applications the predictive power (as percentage of data instances correctly classified) of the algorithm depends on the size and quality of the dataset.\n\nThe sample size is informed by the existing literature and by DeepMind’s previous work in the field of machine learning (Mnih, 2015; Silver, 2016). Today’s most powerful deep neural networks can have millions or billions of parameters, so large amounts of data are needed to automatically infer those parameters during learning. Most problems in the medical domain are highly complex as they arise as an interplay of many clinical, demographic, behavioural and environmental factors that are correlated in non-trivial ways. This is even more true for state-of-the-art deep learning methodologies that are expected to give the best results (Szegedy, 2014).\n\nFor all patients meeting inclusion/exclusion criteria the following electronic health record data will be required to complete this project successfully:\n\n(1) Digital Fundus Photographs\n\n(2) Digital OCT images\n\nIn addition to image data, the anonymised dataset will contain additional information required to train an algorithm (objective 1.1):\n\n• Demographic information shown to be associated with eye disease. 
This is because the retina differs by features such as age, as do the likelihoods, manifestations and progression of specific disease states.\n\n• Primary and secondary diagnostic labels describing what pathology is in the image (e.g., wet AMD, diabetic retinopathy) and the associated severity (e.g., grade of retinopathy/maculopathy in diabetic retinopathy).\n\n• Treatment information describing aspects of management alongside pathology information and clinical data such as visual acuity.\n\n• Model of imaging device.\n\nA second dataset will be pseudonymised to allow further investigation of disease progression and treatment effects. This will include the above information, and add temporal information allowing pseudonymised data to be joined over time with knowledge of the elapsed time between each visit:\n\n• Time elapsed between each data point and the first visit in weeks (Sunday to Sunday).\n\n• The dates of each scan will be removed so it will be impossible to identify an individual date or year of a patient’s birth.\n\nThe anonymisation and pseudonymisation procedures adopted will remove any information not specified to further avoid transfer of patient identifiable information. All anonymisation and pseudonymisation will be formally checked by Moorfields Eye Hospital staff before transfer.\n\nIn order to develop the algorithm, Google DeepMind will work with the eye images, split by the known diagnoses, using machine learning and Artificial Intelligence techniques including but not limited to: supervised and semi-supervised convolutional neural networks, recurrent neural networks, unsupervised clustering, reinforcement learning (Murphy, 2012).\n\nFor a selection of images additional manual labels will be produced by experts to allow investigation of the clinical benefit that can be achieved through the use of machine learning in eye imaging. 
Pathological and anatomical features will be annotated by trained graders at Moorfields Eye Hospital and overseen by a consultant ophthalmologist with over 10 years of experience.\n\nDescriptive statistics will be used to describe specific pathological features extracted by the model on retinal images, both continuous and categorical (e.g. presence and size of an abnormality), and to compare them to expert references.\n\nIn addition we will analyse the outcomes against recent Moorfields audit data on human performance to further understand the potential impacts of the model in clinical practice.\n\n\nData protection\n\nThis study requires existing retrospective data only; no prospective data are needed nor will be collected from patients, hospitals or healthcare workers. No direct patient contact will occur and necessary data will be anonymised or pseudonymised from this source dataset.\n\nAnonymisation and pseudonymisation of all image files and clinical information is performed and validated by Moorfields Eye Hospital staff at Moorfields Eye Hospital. No patient identifiable data will be transferred to Google DeepMind.\n\nDuring validation of the anonymisation procedure it was noted that the current anonymisation tool (TopCon IDConvert) in use at Moorfields had the potential to leave information that may identify the patient in a small number of datasets. DeepMind collaborators worked with the Moorfields Eye Hospital IT team to develop an anonymisation script that reliably deletes all of this information in a second step. This new script will be used in the project and will be run by Moorfields Eye Hospital NHS Foundation Trust staff at Moorfields Eye Hospital. 
Google DeepMind will not have access to patient identifiable information at any time.\n\nGoogle DeepMind Health has developed and established a state-of-the-art secure patient information handling service utilising Common Criteria EAL4 compliant firewalls and on-disk encryption (using Advanced Encryption Standard with a 256-bit key) of all research data, all housed within an ISO 27001 compliant data centre. After anonymisation data will be transferred to our London, UK data centre. This data handling facility conforms to NHS HSCIC Information Governance Statement of Compliance Toolkit (formally assessed at level 3).\n\nAccess will be granted by the data custodian and by no other member of the team. Only those working directly on the data in a research capacity will have access.\n\nThe data sharing agreement between Google DeepMind and Moorfields Eye Hospital NHS Foundation Trust lasts for 5 years. After this period the agreement will be reviewed should future work seek to build on this project. After the data sharing agreement expires all data used in the study will be destroyed. No modification will be made based on the data after destruction.\n\nData destruction will involve the deletion of the encryption/decryption keys for all project volumes, and 8-way random data write to all physical disks within the Google DeepMind Health data infrastructure. A certificate of destruction will be provided to the Trust.\n\nThe algorithms developed during the study will not be destroyed. Google DeepMind Health knows of no way to recreate, from the algorithms developed, the patient images transferred. 
No patient identifiable data will be included in the algorithms.\n\n\nEthical considerations\n\nThe research on anonymised data received formal approval from the Moorfields Eye Hospital research and development office, responsible for approving studies working with anonymised data, on 16 Oct 2015 (reference 15/050).\n\nThe research on the pseudonymised dataset received formal Research Ethics Committee approval on 03 Jun 2016 (reference 16/EE/0253). Health Research Authority approval for this is in progress and no research on pseudonymised data will begin until this is approved.\n\nNo patient will be approached directly and this work will include no direct patient contact. Only anonymised or pseudonymised retrospective data collected as part of routine clinical care are included. No patient identifiable data will be collected as part of this study. In such cases the ICO code of practice states that explicit consent is not generally required (ICO, 2012).\n\nThe project is non-interventional and does not involve any direct patient contact. We do not anticipate any change to patient management. For pseudonymised data, should expert assessment during model validation highlight a clinical error, this will be raised to the appropriate clinical team. The primary point of contact for adverse events, should they arise, will be the Trust information governance lead, and the responsible clinical team at Moorfields Eye Hospital will be notified.\n\nThe study will be monitored both internally and externally. 
Internally Google DeepMind managers (TB, JL) will oversee and monitor progress on a day-to-day basis, ensuring the protocol is adhered to and no compliance issues arise.\n\nClinical and methodological experts (PK, GR, RR) are working with Google DeepMind to further oversee the ethical, clinical and methodological considerations of the project and will advise on at least a weekly basis to ensure no deviation from the described protocol.\n\nExternally the information governance team at the Moorfields Eye Hospital will be consulted before commencing data collection, and weekly thereafter to ensure no deviation from the described protocol.\n\nGoogle DeepMind has access to the required data to support the research aims of this study. To ensure compliance with the common law principle of data confidentiality, Google DeepMind will only receive anonymised or pseudonymised data from the Trust. DeepMind works with Moorfields Eye Hospital to ensure accuracy and clarity in the data to allow useful and consistent interpretation at all times.\n\n\nDissemination\n\nThe results will be disseminated through normal academic channels, initially focusing on conference proceedings and the indexed peer reviewed literature relevant to the fields of machine learning, artificial intelligence, ophthalmology and clinical research. Patient and Public Involvement representatives will be involved at each stage of the research.\n\n\nConclusion\n\nWe propose an exploratory study and initial testing of machine learning algorithms to analyse and quantify pathological and anatomical features in eye images. The results will be compared to expert annotations.\n\nShould development be successful we plan to submit additional applications for permission to refine the model if required and ultimately investigate the performance in a clinical implementation setting.",
"appendix": "Author contributions\n\n\n\nAll authors contributed to study design and methodology. JDF, OR, JC, NT, DV, GD, MJ, CC and COH contributed to lay out the machine learning approaches. MJ helped setup the infrastructure for the work. PK and TP contributed expertise in ophthalmology. TB and JL contributed to project steering and information governance. GR, RR and HM contributed to methodological oversight.\n\n\nCompeting interests\n\n\n\nMoorfields Eye Hospital NHS Foundation Trust administration time spent on this work will be paid to the trust.\n\nThe Chief Investigator and some co-investigators are paid employees of Google DeepMind. Several co-investigators (PK, JL, GR, RR) are paid contractors for Google DeepMind.\n\n\nGrant information\n\nGoogle DeepMind is funding the research.\n\n\nAcknowledgments\n\nMr Will Kay is the Google DeepMind data custodian and is responsible for protection and security of the dataset described in this protocol.\n\nAcknowledgement is made to Moorfields Reading Center and Moorfields Information Governance team who provided input in the IG aspects of study design.\n\n\nReferences\n\nBressler NM: Age-related macular degeneration is the leading cause of blindness. JAMA. 2004; 291(15): 1900–1. PubMed Abstract\n\nBunce C, Xing W, Wormald R: Causes of blind and partial sight certifications in England and Wales: April 2007-March 2008. Eye (Lond). 2010; 24(11): 1692–9. PubMed Abstract | Publisher Full Text\n\nCheung N, Mitchell P, Wong TY: Diabetic retinopathy. Lancet. 2010; 376(9735): 124–136. PubMed Abstract | Publisher Full Text\n\nClifton L, Clifton DA, Pimentel MA, et al.: Gaussian processes for personalized e-health monitoring with wearable sensors. IEEE Trans Biomed Eng. 2013; 60(1): 193–197. PubMed Abstract | Publisher Full Text\n\nHuang D, Swanson EA, Lin CP, et al.: Optical coherence tomography. Science. 1991; 254(5035): 1178–81. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nInformation Commissioner's Office: Anonymisation: managing data protection risk code of practice. 2012; Accessed 2016. Reference Source\n\nKollias AN, Ulbig MW: Diabetic retinopathy: Early diagnosis and effective treatment. Dtsch Arztebl Int. 2010; 107(5): 75–84. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMnih V, Hinton G: Learning to Detect Roads in High-Resolution Aerial Images. European Conference on Computer Vision. 2010; 6316: 210–223. Publisher Full Text\n\nMurphy KP: Machine Learning: A Probabilistic Perspective. Adaptive Computation and Machine Learning Series. Cambridge, Mass.: MIT Press. 2012. Reference Source\n\nOwen CG, Jarrar Z, Wormald R, et al.: The estimated prevalence and incidence of late stage age related macular degeneration in the UK. Br J Ophthalmol. 2012; 96(5): 752–56. PubMed Abstract | Publisher Full Text | Free Full Text\n\nScanlon PH: The English national screening programme for sight-threatening diabetic retinopathy. J Med Screen. 2008; 15(1): 1–4. PubMed Abstract\n\nShaw JE, Sicree RA, Zimmet PZ: Global estimates of the prevalence of diabetes for 2010 and 2030. Diabetes Res Clin Pract. 2010; 87(1): 4–14. PubMed Abstract | Publisher Full Text\n\nSilver D, Huang A, Maddison CJ, et al.: Mastering the game of Go with deep neural networks and tree search. Nature. 2016; 529(7587): 484–489. PubMed Abstract | Publisher Full Text\n\nSzegedy C, Liu W, Jia Y, et al.: Going deeper with convolutions. arXiv:1409.4842v1. 2014. Reference Source\n\nUddin M, Tammimies K, Pellecchia G, et al.: Brain-expressed exons under purifying selection are enriched for de novo mutations in autism spectrum disorder. Nature Genetics. 2014; 46(7): 742–747. PubMed Abstract | Publisher Full Text"
}
|
[
{
"id": "15056",
"date": "19 Jul 2016",
"name": "Yit Yang",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThere is a good introduction to the two main conditions, AMD and DR, and their burden on public health.\n\nVery little information is given on machine learning to convince the reader that the fundamental processes and pieces of the jigsaw are already available to apply to AMD and DR, or in simpler healthcare scenarios.\n\nHow does MEH plan to locate the IDs of those who have given instructions for their images NOT to be used? Is there a database for this specific parameter?\n\nGood attention to data security and data protection.\n\nNot very much information on specific expectations for data outcomes, e.g. how detailed will the grading of a wet AMD lesion be?\n\nOverall, a novel concept and worth exploring, as it will be able to replace human workforce if successful.",
"responses": []
},
{
"id": "14781",
"date": "10 Oct 2016",
"name": "Sandrine Zweifel",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThank you very much for the opportunity to review the manuscript by De Fauw and co-workers. The topic is of great interest. Moorfields recently announced their research partnership with DeepMind Health. In this manuscript the project plan is detailed. One million examinations (digital fundus photographs and OCT scans) which met the inclusion criteria will be analysed using machine learning and artificial intelligence techniques.\nThe authors did not specify why they did not include angiography as an additional imaging modality, which is usually used at baseline or at follow-up exams in patients with AMD and diabetic retinopathy. Please add this information.\nSince data security is an important issue to be discussed, especially when evaluating such large data sets, the authors need to provide more information regarding patient consent. Does everyone who is examined at Moorfields Eye Hospital give general consent for evaluation of their data? Patients are not specifically informed about the project with DeepMind Health, are they? So data are only excluded if patients previously (independent of this project) requested that their data should not be shared.\nAlthough there might be some risk regarding \"data security\", it is outweighed by the potential for earlier detection and treatment of millions of patients.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-1573
|
https://f1000research.com/articles/6-667/v1
|
12 May 17
|
{
"type": "Research Article",
"title": "Immunoprofiling of human uterine mast cells identifies three phenotypes and expression of ERβ and glucocorticoid receptor",
"authors": [
"Bianca De Leo",
"Arantza Esnal-Zufiaurre",
"Frances Collins",
"Hilary O.D. Critchley",
"Philippa T.K. Saunders"
],
"abstract": "Background: Human mast cells (MCs) are long-lived tissue-resident immune cells characterised by granules containing the proteases chymase and/or tryptase. Their phenotype is modulated by their tissue microenvironment. The human uterus has an outer muscular layer (the myometrium) surrounding the endometrium, both of which play an important role in supporting a pregnancy. The endometrium is a sex steroid target tissue consisting of epithelial cells (luminal, glandular) surrounded by a multicellular stroma, with the latter containing an extensive vascular compartment as well as fluctuating populations of immune cells that play an important role in regulating tissue function. The role of MCs in the human uterus is poorly understood with little known about their regulation or the impact of steroids on their differentiation status. The current study had two aims: 1) To investigate the spatial and temporal location of uterine MCs and determine their phenotype; 2) To determine whether MCs express receptors for steroids implicated in uterine function, including oestrogen (ERα, ERβ), progesterone (PR) and glucocorticoids (GR). Methods: Tissue samples from women (n=46) were used for RNA extraction or fixed for immunohistochemistry. Results: Messenger RNAs encoded by TPSAB1 (tryptase) and CMA1 (chymase) were detected in endometrial tissue homogenates. Immunohistochemistry revealed the relative abundance of tryptase MCs was myometrium>basal endometrium>functional endometrium. We show for the first time that uterine MCs are predominantly of the classical MC subtypes: (positive, +; negative, -) tryptase+/chymase- and tryptase+/chymase+, but a third subtype was also identified (tryptase-/chymase+). Tryptase+ MCs were of an ERβ+/ERα-/PR-/GR+ phenotype mirroring other uterine immune cell populations, including natural killer cells. Conclusions: Endometrial tissue resident immune MCs have three protease-specific phenotypes. 
Expression of both ERβ and GR in MCs mirrors that of other immune cells in the endometrium and suggests that MC function may be altered by the local steroid microenvironment.",
"keywords": [
"chymase",
"tryptase",
"oestrogen",
"steroid receptor",
"ERb",
"GR"
],
"content": "Introduction\n\nMast cells (MCs) are long-lived tissue resident immune cells, derived from CD34+/c-kit+ pluripotent progenitors that reside in the bone marrow (Kirshenbaum et al., 1999). MC progenitors are recruited into peripheral tissues by chemokines secreted by stromal cells, which together with stem cell factor, a complex array of cytokines, and a range of micro-environmental factors, are reported to stimulate the development of tissue resident mature MCs (Valent et al., 1992). Mature MCs are usually classified according to the presence of one or more serine proteases (tryptase and/or chymase) in prominent cytoplasmic granules.\n\nMCs are typically phenotyped as MCTC, with granules containing both tryptase and chymase, or MCT, with granules containing tryptase alone (Collington et al., 2011; Wernersson & Pejler, 2014). It has been reported that MCs maturing in different tissue microenvironments can vary widely in the amount of tryptase and chymase they contain (Caughey, 2007). When MCs are activated they de-granulate by exocytosis, releasing these serine proteases together with other inflammatory mediators (Lorentz et al., 2012; Tiwari et al., 2008). The female sex hormones, oestradiol and progesterone, are thought to have an impact on MCs because many pathophysiological conditions attributed to MC activity have a higher prevalence in females than males (Narita et al., 2007). Studies in non-reproductive tissue systems and those using the HMC-1 cell line (human MC line, (Butterfield et al., 1988)) have reported that MCs express the oestrogen receptor α isoform (ERα) and progesterone receptor (PR) (Jensen et al., 2010; Nicovani & Rudolph, 2002; Zaitsu et al., 2007). Some authors found evidence that MCs can be rapidly stimulated to degranulate by oestradiol via ERα (Zaitsu et al., 2007). 
Glucocorticoid treatments are reported to reduce the number of tissue resident MCs by reducing concentrations of stem cell factor that are required for MC survival (Finotto et al., 1997). Glucocorticoids are also reported to prevent MC activation via their high-affinity IgE receptor (Smith et al., 2002).\n\nThe human endometrium undergoes physiological cycles of cellular proliferation, differentiation and secretory activity during each menstrual cycle (Johannisson et al., 1987). In the absence of embryo implantation, the upper functional layer of the endometrium breaks down and is shed at menstruation, which is considered to bear the hallmarks of an inflammatory process (Jabbour et al., 2006). Endometrial tissue adjacent to the myometrium (basal compartment) is not shed during menstruation and is implicated in the rapid re-epithelialisation, cessation of bleeding and restoration of tissue homeostasis facilitating regeneration of the functional layer. This monthly tissue remodelling is regulated by changes in cyclical ovarian steroid hormones with oestrogen increasing cell proliferation, progesterone inducing functional maturation of stromal cells in preparation for implantation, and progesterone withdrawal precipitating a cascade of changes culminating in tissue breakdown and menstruation (Kelly et al., 2001; Maybin & Critchley, 2015; Salamonsen & Lathbury, 2000). Endometrial tissue contains stromal, epithelial, and endothelial cells, as well as a diverse population of immune cells, the most abundant of which are uterine natural killer cells (uNK) and macrophages (Evans & Salamonsen, 2012; Thiruchelvam et al., 2013). We, and others, have investigated the impact of steroids on uNK cells and macrophages and shown that they contain both receptors that can bind oestrogens (ERβ isoform) and glucocorticoids (Bombail et al., 2008; Henderson et al., 2003), but are immuno-negative for ERα and PR. 
The concentrations of steroids in endometrial tissue are subject to local modulation by enzymes that metabolise sex steroids (androgens, oestrogens), as well as glucocorticoids (Bamberger et al., 2001; Gibson et al., 2013; Gibson et al., 2016; McDonald et al., 2006). The creation of a steroid rich microenvironment has an impact on the function of immune cells and the vasculature. For example, exposure of uNK to oestrogens increases their secretion of CCL2, which has an impact on vascular endothelial cells (Gibson et al., 2015), and likewise exposure of macrophages to cortisol results in the release of factors that induce altered endometrial endothelial cell expression of angiogenic genes (Thiruchelvam et al., 2016).\n\nThe basal portion of the endometrium sits directly on the myometrium, which is made up of three layers consisting of smooth muscle fibres and associated vasculature and stroma. The inner layer, adjacent to the endometrium, is also known as the junctional zone. This zone has circular muscle fibres and like the endometrium it is derived from the Mullerian duct (Uduwela et al., 2000), whilst the other layers develop from non-Mullerian tissue. The smooth muscle cells of the myometrium (myocytes) are active during the non-pregnant menstrual cycle, with uterine peristalsis constituting one of their fundamental functions (Kunz & Leyendecker, 2002). Like the endometrium, the myometrium is a steroid target organ with myocytes expressing receptors for oestrogens, progestagens and androgens, all of which can induce changes in gene expression (Chandran et al., 2016; Makieva et al., 2016).\n\nMCs have been identified in the human uterus, with reports that their phenotype is similar to lung MCs in terms of a response to secretagogues and release of prostaglandins, but a granule phenotype distinct to that of gut MCs (Massey et al., 1991). 
There has also been interest in the role played by MCs in myometrial contractions and in fibroids, although whether they play an important role in either has been questioned (Garfield et al., 2006; Menzies et al., 2011; Protic et al., 2016). A detailed study on uterine MCs was published by Jeziorska et al. (1995), who used immunohistochemistry to identify tryptase and chymase positive cells in 107 uterine samples taken across the menstrual cycle. They reported similar MC numbers in the functional and basal layers of the endometrium and in the myometrium throughout the menstrual cycle.\n\nIn summary, the role of MCs in the human uterus is poorly understood and little is known about their regulation, or the impact of steroids within the uterine microenvironment on their differentiation status. The current study used tissue sections of human uterus to define the spatial and temporal location of MCs in the myometrium and endometrium, and explored their phenotype by examining the pattern of expression of the proteases tryptase and chymase using fluorescent co-staining. We also examined expression of receptors for oestrogen (ERα, ERβ), progesterone (PR) or glucocorticoids (GR) to determine their ability to respond directly to steroids.\n\n\nMethods\n\nEthical committee approval was obtained from the Lothian Research Ethics Committee (LREC; approval numbers, 10/S1402/59 and 16/ES/0007). Patients were recruited by dedicated research nurses from clinics treating women for benign gynaecological conditions, including heavy menstrual bleeding and fibroids. In all cases written patient consent was obtained prior to tissue collection. Full details of patients are provided in Supplementary Table 1. Patients were aged 25–50 years (average 39.8 years), reported regular menstrual cycles and had not taken any exogenous hormones in the three months prior to surgery.
Stage of the menstrual cycle was evaluated by analysis of circulating steroid concentrations (P4, E2) using blood samples obtained at the time of surgery. Assays were performed by the Specialist Assay Service (SuRF Facility, University of Edinburgh) and cycle stage was further confirmed by examination of tissue sections by an expert pathologist, Professor A.R.W. Williams (NHS, Royal Infirmary, Edinburgh). Critical inclusion criteria were the absence of pelvic pain (such as dysmenorrhea) and either the absence of fibroids or the presence of only small fibroids (<3 cm). Samples from a total of n=46 women were used in the course of this study.\n\nTotal RNA was extracted using the RNeasy Mini Kit (Qiagen, UK), according to the manufacturer’s instructions. RNA concentration and purity were measured using the Nanodrop (LabTech International, UK) and standardised to 100 ng/µl for all samples. Reverse transcription was performed using 100 ng of RNA with 0.125× Superscript Enzyme in 1× VILO reaction mix (Thermo Fisher Scientific, UK) at 25°C for 10 minutes, followed by 42°C for 60 minutes and finally 85°C for 5 minutes. Quantitative PCR was performed using FAM labelled probes for TPSAB1 (number 20) and CMA1 (number 81) from the Universal Probe Library (Roche Diagnostics, UK) and a VIC labelled human PPIA (Cyclophilin A) endogenous control (Thermo Fisher Scientific), with specific primers for TPSAB1 (forward 5’-cctgcctcagagaccttcc-3’; reverse 5’-acctgcttcagaggaaatgg-3’) and CMA1 (forward 5’-ttcacccgaatctcccatta-3’; reverse 5’-tcaggatccaggattaatttgc-3’) (Eurofins Genomics, UK). Primers directed against human cyclophilin A (CYC, PPIA), which served as the endogenous control, were supplied in a premade kit purchased from Thermo Fisher Scientific (catalogue number 4310883E).
Each 15 μl reaction consisted of 1 μl of cDNA in 1× Express qPCR Supermix (Thermo Fisher Scientific) with 200 nM of forward and reverse primers and 100 nM probe, and was amplified for 40 cycles of 95°C for 15 s followed by 60°C for 1 minute using the ABI Prism 7900HT Fast Real-Time PCR System (Applied Biosystems, UK). Analysis was by relative standard curve using tonsil mRNA (ASD-0088; Applied StemCell, USA), a positive control for mRNA expression of MC proteases (Irani et al., 1986).\n\nStatistical analysis was carried out using GraphPad Prism 6.0 (GraphPad Software, USA). Data are presented as medians. One-way ANOVA was used, with the non-parametric Kruskal-Wallis test performed as a secondary test, followed by Dunn’s multiple comparisons test. The criterion for significance was p<0.05.\n\nImmunohistochemistry was carried out on “full thickness” (uterine lumen to endometrial-myometrial junction) human uterine sections to localize MCs to the different tissue layers: myometrium, basal endometrium and functional (luminal) endometrium. Uterine biopsies were fixed in 4% neutral buffered formalin, embedded in paraffin wax and cut into 5 μm sections. Following dewaxing and rehydration, sections were blocked in methanol peroxide for 30 minutes on a rocker at room temperature (RT), followed by 30 minutes blocking in normal goat serum (Sigma Aldrich, Dorset, UK), before primary antibody incubation at 4°C overnight: tryptase, rabbit monoclonal, Abcam, UK; chymase, mouse monoclonal, AbSerotec, UK; ERα, mouse monoclonal, Vector Laboratories, UK; ERβ, mouse monoclonal, AbSerotec, UK. After washing in 1× Tris Buffered Saline + 0.05% Tween, slides were incubated with secondary antibody for 1 hour at RT, followed by 1:50 tyramide signal amplification (TSA Fluorescein Tyramide Reagent Pack, PerkinElmer, USA) for 10 minutes. For the second staining round, antigen retrieval was performed at pH 6 in citrate buffer, and sections were then further blocked with serum to avoid cross-reactivity. The second primary antibody was then added and incubated overnight at 4°C.
Incubation with an appropriate secondary antibody at 4°C for 16 hours and a further TSA amplification step were carried out before counterstaining the sections with DAPI (1:500 dilution in TBS) and mounting with Permafluor (Thermo Fisher Scientific). Fluorescent images were acquired with a Zeiss Axioscan Z1 or a Zeiss 710 confocal microscope, and analysed with Zen Blue or Black software (version 2; Zeiss, Jena, Germany). Full antibody details can be found in Supplementary Table 2.\n\n\nResults\n\nIn tissue homogenates of endometrium, total concentrations of messenger RNAs encoded by TPSAB1 (gene for tryptase α and β isoforms; Figure 1A) and CMA1 (chymase; Figure 1B) did not change significantly according to stage of the menstrual cycle (TPSAB1, p=0.254; CMA1, p=0.867).\n\n(A) TPSAB1 mRNA and (B) CMA1 mRNA. Single dots represent different patient samples, and data are expressed as the median. Proliferative phase n=14, early secretory phase n=7, mid secretory phase n=6, and menstrual phase n=3.\n\nCells immunopositive for tryptase and for chymase were identified in all three layers of the human uterus examined in this study. In line with a previous report (Jeziorska et al., 1995), the numbers of tryptase immunopositive cells appeared higher in the myometrium and adjacent basal layer of the endometrium than in the functional layer (green cells) in all phases of the cycle (Figure 2 and Figure 3, Supplementary Figure 1–Supplementary Figure 3). Notably, some of the chymase positive cells (red staining) in the myometrium appeared to be ‘activated’, with immunopositive staining being intense and diffuse within the tissue during the early (Supplementary Figure 2), mid (Figure 3, arrow) and late (Supplementary Figure 3) secretory phases and the menstrual phase (Figure 4, arrows).\n\nNote that mast cells were less abundant in the functional layer and appeared to be exclusively tryptase+/chymase-.
(A–C) Functional, basal endometrium and myometrium during proliferative phase (P); (D–F) Early secretory phase (ES); (G–I) Mid secretory phase (MS); (J–L) Late secretory phase (LS); (M–O) Menstrual phase (M); (P–R) Negative control (omission of primary antibody). Double immunofluorescence revealed the presence of three uterine mast cell subtypes: single tryptase, single chymase and double tryptase-chymase positive cells. (P n=4, ES n=4, MS n=2, LS n=3, M n=2): negative controls were included on all sections.\n\nThe endometrial compartment shows three different mast cell (MC) subtypes: tryptase single positive, chymase single positive, and tryptase and chymase double positive. Basal endometrial MCs are chymase+/tryptase- single positive and double positive, whereas functional endometrial MCs are fewer in number and show a chymase-/tryptase+ phenotype. MCs during the secretory phase appeared to be activated in the myometrium, releasing both proteases from the cytoplasm. (n=3) (Arrowheads: MCTC cells; Vs: MCT cells; arrows: MCC cells).\n\nExamination of tissues from 20 of the patients obtained at different stages of the cycle, including the proliferative (Supplementary Figure 1), early (Supplementary Figure 2), mid (Figure 3) and late (Supplementary Figure 3) secretory and menstrual (Figure 4) phases, also identified a population of MCs that were chymase positive, but without co-incident expression of tryptase (arrows). These cells appeared less abundant than those that were immunopositive for both tryptase and chymase (arrowheads), and were confined to the basal compartment of the endometrium and the myometrium.\n\nUterine mast cells (MCs) appear to be tryptase and chymase double positive in both myometrial and basal endometrial layers, with a small proportion of chymase+ MCs in the myometrium. MCs were not detectable in the functional layer.
MCs during the menstrual phase appeared to be activated only in the myometrial compartment, releasing both tryptase and chymase from the cytoplasm. In the basal endometrium, in contrast, MCs appeared to be in a steady (resting) state. (n=2) (Arrowheads: MCTC cells; Vs: MCT cells; arrows: MCC cells).\n\nThe data obtained from immunohistochemical analysis of the 20 patients are summarized in Table 1.\n\nThe uterus is an oestrogen target organ and detailed immunohistochemical studies conducted on menstrual cycle staged sections of endometrial tissue by ourselves (Bombail et al., 2008; Critchley et al., 2001) and others (Mylonas et al., 2004; Snijders et al., 1992) have documented cell and phase-dependent expression of both isoforms of the oestrogen receptor (ERα, ERβ). In the current study, in line with expectation, we identified ERα positive stromal and epithelial cells in the endometrium and stromal fibroblasts in the myometrium (Supplementary Figure 4); however, although tryptase-positive cells were readily detected in the basal endometrium and myometrium, none of these had detectable ERα protein in their nuclei (Supplementary Figure 4). In contrast, immunopositive staining for ERβ protein was present in multiple cell types, including stromal fibroblasts and endometrial epithelial cells, as well as tryptase-positive (green cytoplasm) MCs in both the functional and basal regions of the endometrium and throughout the myometrium of the uterus (Figure 5, arrows). The results obtained with an antibody directed against the progesterone receptor mirrored those of ERα, with no evidence of PR-positive MCs (Supplementary Figure 5).\n\nImmunohistochemistry showed co-localization of ERβ in uterine mast cells (MCs). Nuclear expression of ERβ (red staining) was detected in MCs across the tissue compartments of the uterus (myometrium, basal and functional endometrium), and during both the proliferative and secretory phases of the menstrual cycle.
(Proliferative n=5, Secretory n=5).\n\nWe have previously identified GR in multiple cells within the endometrium, including endothelial cells and immune cells (Rae et al., 2009; Thiruchelvam et al., 2016), complemented by evidence that enzymes capable of the biosynthesis of cortisol, the natural ligand for GR, are present in the tissue (Thiruchelvam et al., 2016). In the present study, immunostaining for GR showed it was expressed within the stromal fibroblasts and other cells (putative immune cells), as well as being present in the nucleus of tryptase-positive cells in both endometrium and myometrium (arrowheads, Figure 6).\n\nNuclear glucocorticoid receptor (GR; red staining) was detected in mast cells during the proliferative and secretory phases, throughout the myometrium and the functional and basal endometrial layers. The images are representative of results in proliferative (n=5) and secretory (n=5) phase samples.\n\nIn summary, we detected co-immunoexpression of both the beta isoform of ER and GR in the nuclei of uterine MCs (tryptase positive staining in their cytoplasm), but no evidence of immunoexpression of ERα or PR. A photomontage of representative sections stained for each of the receptors is provided in Figure 7.\n\nUterine mast cells are immunopositive for ERβ and GR, and immunonegative for ERα and PR. (Red staining: ERα, ERβ, PR and GR; green staining: tryptase). Arrowheads point to nuclei that have immunopositive (red) staining for ERβ and GR.\n\n\nDiscussion\n\nThis study has shed new light on the phenotype of endometrial and myometrial MCs, as well as revealing the potential that they might respond in situ to both oestrogens and glucocorticoids. MCs are known to arise from progenitors in the bone marrow, but adapt to their mature phenotype depending upon the tissue microenvironment in which they mature.
To date, uterine MCs have received little attention compared to other immune cell populations, such as uNK cells and macrophages (Gibson et al., 2015; Henderson et al., 2003; Thiruchelvam et al., 2013). For example, detailed analyses of immune cell populations in the endometrium show cyclical variations in their numbers, with a notable rise in uNK cells during the secretory phase, a rise in numbers of neutrophils at the start of menstrual tissue breakdown and the largest numbers of macrophages detected during the menstrual phase (reviewed in Maybin & Critchley, 2015).\n\nAnalysis of mRNAs encoding proteases expressed by MCs has not previously been reported in endometrial tissue homogenates. We found that the concentrations of tryptase and chymase mRNAs in our samples did not vary significantly between different phases of the menstrual cycle. These results are consistent with previous reports that MC numbers vary little throughout the menstrual cycle (Salamonsen et al., 2002). We speculate that these results may reflect the long life span of tissue resident MCs, with some studies reporting that MCs have a lifespan of weeks to months (Kiernan, 1979; Padawer, 1974). As tryptase and chymase are constitutively expressed by MCs (Pejler et al., 2007), it is unsurprising that their mRNA concentrations remain unchanged if cell numbers are fairly constant.\n\nTraditionally, human MCs are classified according to different phenotypes depending on their expression of tryptase and/or chymase in cellular granules (Irani et al., 1986). In line with expectations based on MC phenotype in multiple tissues, we readily detected both tryptase positive and chymase positive cells within both the endometrium and myometrium. Using immunofluorescence, we were able to co-stain for tryptase and chymase in the same cells, revealing populations of cells of the MCT and MCTC phenotypes.
Weidner & Austin (1993) were the first to report the existence of a chymase positive/tryptase negative (MCC) population of MCs in skin, lung and bowel. MCC have been detected by immunofluorescence in the airway and gastrointestinal tract, reported as constituting 12% of the MC population in human bronchi and 16.8% in bowel submucosa. The current study provides the first evidence for the presence in the human uterus of a chymase positive subpopulation of MCs that did not contain tryptase. This complements and extends a previous study that stained parallel tissue sections with antibodies directed against tryptase and chymase (Jeziorska et al., 1995), identifying both MCT and MCTC, and increases our understanding of both the location and phenotypic heterogeneity of MCs in the uterus.\n\nResults in this study showed that the phenotype of the uterine MCs varied between the different tissue layers of the uterus. Confirming previous studies, MCTC were predominantly resident in the basal endometrium and in the myometrium, and MCT were found in the functional endometrium. The rare MCC type was detected in the basal endometrium and myometrium and was completely absent from the functional layer. These findings reinforce the principle that tissue specific phenotypes of MCs can exist within different regions of the same organ. It is already well known that the functional and basal compartments of the endometrium and the myometrium vary with regard to cellular composition and cytokine/chemokine concentrations. Interestingly, the region of the tissue where MCs appeared most abundant was close to the endometrial-myometrial junction. Previous studies have demonstrated that a large number of CD34+ MC progenitor cells reside in this area of the tissue and that their numbers are independent of phase of the menstrual cycle (Cho et al., 2004; Mai et al., 2008).
Finding MCs in close proximity to smooth muscle fibres would be consistent with this cell type being a key source of stem cell factor, a vital mediator of MC maturation and survival (Zhang et al., 1996).\n\nThe activation state of uterine MCs was also explored during the present study. Previously, endometrial MCs were reported to degranulate during the secretory phase, at a time when the tissue is in an oedematous state. This observation was based on detection of extracellular tryptase during oedema and weak intracellular tryptase staining detectable during the proliferative phase (Jeziorska et al., 1995). In this study, activation and degranulation of endometrial MCs were documented during the early and mid-secretory stages in normal uterine tissue, with detection of tryptase and chymase in the extracellular matrix. A ‘recovery’ state, characterised by weak immunostaining, was observed in tissue collected from patients during the proliferative (Supplementary Figure 1), late secretory (Supplementary Figure 3) and menstrual (Figure 4) stages. Within the myometrial compartment, MCs appeared to be in a ‘resting’ state during the proliferative phase (Supplementary Figure 1). Interestingly, in the myometrium, MCs appeared to be ‘activated’, with immunopositive staining for chymase diffuse and spread beyond the margins of the individual cells, during both the mid secretory and menstrual phases (Figure 3 and Figure 4). We speculate that this might suggest a potential role for MC derived proteases in regulation of arteriole sprouting during the early/mid secretory phases.
A previous study also suggested the release of granules may play a role in smooth muscle contraction during menses (Sivridis et al., 2001), and our findings would be consistent with this suggestion.\n\nAlthough other authors have previously demonstrated a direct effect of female sex hormones on MC behaviour, activation and migration, those studies attributed activation of MCs to ERα and PR (Jensen et al., 2010; Zaitsu et al., 2007). In this study, based on detailed immunohistochemical analysis using previously validated antibodies directed against oestrogen receptor subtypes (Critchley et al., 2002; Henderson et al., 2003), we found novel evidence for immunoexpression of ERβ, but no evidence of expression of ERα. These results mirror the oestrogen receptor phenotype of both uNK (Gibson et al., 2015; Henderson et al., 2003) and macrophages (Thiruchelvam et al., 2013). This observation is also supported by the activation of uterine MCs during the secretory phase, a time in the menstrual cycle when intracrine biosynthesis of oestradiol has been shown to activate ERβ positive uNKs (Gibson et al., 2015).\n\nIn other studies, we have shown that, in endometrial tissue, expression of ERβ often parallels that of GR, with co-expression in both uNK (Henderson et al., 2003) and endothelial cells (Critchley et al., 2002). In the current study, nuclear GR was detected in MCs in the functional and basal endometrium and the myometrium. In the same samples, GR protein was also detected in endometrial stroma and smooth muscle fibres during the proliferative phase, but its expression was reduced during the progesterone-dominant secretory phase, a result in agreement with previous reports (Bamberger et al., 2001; Henderson et al., 2003). Several studies have reported that glucocorticoids may have an indirect anti-inflammatory impact on MCs, with the postulated mechanism being a reduction of stem cell factor production by fibroblasts (Da Silva et al., 2002).
Alternatively, they may also have direct impacts by reducing IgE binding to the FcεRI receptors, thereby down regulating the expression of these receptors on the cell membrane of the MCs and inhibiting MC degranulation in vitro (Finotto et al., 1997; Yamaguchi et al., 2001; Zhou et al., 2008). Prior to the current study, the only report of expression of GR in MCs was from Oppong et al. (2014); in their study, they localized GR to the plasma membrane in the RBL-2H3 MC line. The current study is the first to demonstrate that uterine MCs are immunopositive for nuclear GR. A glucocorticoid rich environment would favour activation of GR, with shuttling of ligand activated receptor from the cytoplasm towards the nucleus (Phuc Le et al., 2005). This observation would be consistent with expression of 11β-hydroxysteroid dehydrogenase enzymes within the uterus, resulting in a cortisol rich microenvironment (Gibson et al., 2013; McDonald et al., 2006).\n\nIn summary, our study confirms that MCs are members of the leukocyte population of the human uterus, and that they are most abundant in the myometrial and basal endometrial compartments. Whilst uterine MCs predominantly belong to the classic MC subtypes, tryptase positive/chymase negative (MCT) and tryptase/chymase positive (MCTC), a rare third subtype (MCC) was also identified in the uterus for the first time. We demonstrated that endometrial and myometrial MCs are immunopositive for both ERβ and GR, indicating that, like other immune cells present in the endometrium (uNK, macrophages), they may be a target for the direct actions of oestrogens and glucocorticoids, which are both synthesised within the endometrial tissue microenvironment. This study provides a framework for future studies on the role of MCs in endometrial and myometrial disorders, including conditions associated with increased pain, such as endometriosis.\n\n\nData availability\n\nDataset 1: TPSAB1 and CMA1 CT values for qRT-PCR.
doi: 10.5256/f1000research.11432.d160468 (De Leo et al., 2017).\n\n\nEthical approval\n\nLothian Research Ethics Committee (LREC) approval was granted and written patient consent was obtained prior to tissue collection by dedicated research nurses (approval numbers, 10/S1402/59 and 16/ES/0007).
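To illustrate the computational side of the Methods (relative standard curve quantification against the PPIA endogenous control, followed by a Kruskal-Wallis comparison across cycle phases), the sketch below reimplements both steps in plain Python. All Ct values and group data are invented for demonstration and are not taken from Dataset 1; the helper names are ours, and the Kruskal-Wallis statistic omits the tie correction applied by GraphPad Prism.

```python
# Sketch of the two computational steps described in the Methods.
# All Ct values below are INVENTED for illustration; real values are in Dataset 1.

def fit_standard_curve(log10_qty, ct):
    """Least-squares fit of Ct = slope * log10(quantity) + intercept."""
    n = len(ct)
    mx, my = sum(log10_qty) / n, sum(ct) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(log10_qty, ct))
    sxx = sum((x - mx) ** 2 for x in log10_qty)
    slope = sxy / sxx
    return slope, my - slope * mx

def quantity(ct, slope, intercept):
    """Invert the standard curve to read off a relative input quantity."""
    return 10 ** ((ct - intercept) / slope)

# Hypothetical 10-fold dilution series of the tonsil cDNA positive control.
dilutions = [0, -1, -2, -3]             # log10 of relative input
tpsab1_std = [22.1, 25.4, 28.8, 32.2]   # invented Ct values, target gene
ppia_std = [18.0, 21.3, 24.7, 28.1]     # invented Ct values, endogenous control

s1, i1 = fit_standard_curve(dilutions, tpsab1_std)
s2, i2 = fit_standard_curve(dilutions, ppia_std)

def normalised(ct_target, ct_ppia):
    """Target quantity normalised to the PPIA endogenous control."""
    return quantity(ct_target, s1, i1) / quantity(ct_ppia, s2, i2)

# Invented normalised values grouped by cycle phase (no tied values,
# since this simple ranking assigns one rank per distinct value).
groups = {
    "proliferative": [0.8, 1.1, 0.9, 1.3],
    "early_secretory": [1.0, 0.7, 1.2],
    "mid_secretory": [0.95, 1.4, 1.15],
}

def kruskal_h(groups):
    """Kruskal-Wallis H statistic, without tie correction."""
    pooled = sorted(v for g in groups.values() for v in g)
    rank = {v: r + 1 for r, v in enumerate(pooled)}
    n = len(pooled)
    s = sum(sum(rank[v] for v in g) ** 2 / len(g) for g in groups.values())
    return 12.0 / (n * (n + 1)) * s - 3 * (n + 1)

h = kruskal_h(groups)
print(round(h, 2))  # 1.07: below the chi-square critical value 5.991 (df=2, alpha=0.05)
```

With the invented data above, H is well below the df=2 critical value, so no post hoc (Dunn's) comparison would be warranted; in practice the H statistic is referred to the chi-square distribution with k-1 degrees of freedom for k groups.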
"appendix": "Author contributions\n\n\n\nConceptualization: BD, HODC, PTKS; Investigation and analysis: BD, AE-Z, FC; Tissue Resources: HODC; Writing original draft: PTKS; Writing – reviewing and editing: BD, FC, HODC, PTKS; Supervision, management and funding: PTKS, HODC\n\n\nCompeting interests\n\n\n\nThe authors have no competing interests.\n\n\nGrant information\n\nPTKS, AZ-E and FC were funded by MRC Programme (grant G1100356) to PTKS, which also paid for the costs of consumables used in this study. BD was in receipt of an MRC PhD studentship funded by the Centre Grant for Reproductive Health (G1002033). Salaries of nurses who recruited patients to the study were paid by MRC grants G1100356 (PI. PTKS) and G0500047 (PI. HODC).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nThe authors are grateful to Research Nurses Sharon McPherson and Catherine Murray for patient recruitment, the staff of the SuRF histology facility (QMRI), who performed routine tissue processing for immunohistochemistry, and Dr Douglas Gibson for expert advice.\n\n\nSupplementary material\n\nSupplementary Table 1. Details of patients, cycle stage, diagnosis and use of individual samples in different experimental protocols.\n\nClick here to access the data.\n\nSupplementary Table 2. Details of antibodies used for immunofluorescence.\n\nClick here to access the data.\n\nSupplementary Figure 1. Immunolocalisation of tryptase and chymase in proliferative phase endometrium. Single fluorescence channels show the different mast cell subtypes in a uterine “full thickness” section during proliferative phase of the menstrual cycle. The myometrial compartment shows three different mast cell subtypes: tryptase single positive, chymase single positive and tryptase and chymase double positive. 
Basal endometrial mast cells are tryptase single positive and double positive, whereas functional endometrial MCs are fewer in number and show a tryptase only phenotype. The MC activation profile during the proliferative phase appears quiescent; proteases are retained in the cytoplasm. (n=4) (White triangles: MCTC cells; white Vs: MCT cells; white arrows: MCC cells).\n\nClick here to access the data.\n\nSupplementary Figure 2. Mast cell subtypes and activation state during the early secretory phase. Single fluorescence channels show the different mast cell subtypes in a uterine full thickness section during the early secretory phase. The myometrial compartment shows three different mast cell subtypes: tryptase single positive, chymase single positive and tryptase and chymase double positive. Basal endometrial mast cells are tryptase single positive and double positive, whereas functional endometrial MCs are fewer in number and show a chymase negative phenotype. MCs during the early secretory phase appeared to be activated, releasing both proteases from the cytoplasm. (n=4) (White triangles: MCTC cells; white Vs: MCT cells; white arrows: MCC cells).\n\nClick here to access the data.\n\nSupplementary Figure 3. Mast cell subtypes and activation state during the late secretory phase. During the late secretory phase, MCs are identified as tryptase single positive and weakly double positive (MCTC). In both endometrial layers MCs showed a strong tryptase only phenotype. MCs during the late secretory phase appeared to be activated only in the myometrial compartment, releasing tryptase and retaining chymase in the cytoplasm. (n=3) (White triangles: MCTC cells; white Vs: MCT cells).\n\nClick here to access the data.\n\nSupplementary Figure 4. Immunoexpression of ERalpha (ERα) was detected in multiple cell types within the endometrium and myometrium but not in the nuclei of tryptase-positive mast cells (MCs).
Double immunofluorescence showed no ERα immunoexpression (red staining) in the nuclei of uterine MCs (green staining). MCs were noted to be immunonegative in all uterine layers and across the phases of the menstrual cycle. Myometrial, stromal and epithelial cells showed expression of ERα, as expected (Proliferative n=5, Secretory n=5).\n\nClick here to access the data.\n\nSupplementary Figure 5. Mast cells and progesterone receptor (PR) immunoexpression in the human uterus. Mast cells were demonstrated to be immunonegative for PR expression (red staining), across “full thickness” uterine sections and during both the proliferative and secretory phases. (Proliferative n=5, Secretory n=5).\n\nClick here to access the data.\n\n\nReferences\n\nBamberger AM, Milde-Langosch K, Löning T, et al.: The glucocorticoid receptor is specifically expressed in the stromal compartment of the human endometrium. J Clin Endocrinol Metab. 2001; 86(10): 5071–4. PubMed Abstract | Publisher Full Text\n\nBombail V, Macpherson S, Critchley HO, et al.: Estrogen receptor related beta is expressed in human endometrium throughout the normal menstrual cycle. Hum Reprod. 2008; 23(12): 2782–90. PubMed Abstract | Publisher Full Text | Free Full Text\n\nButterfield JH, Weiler D, Dewald G, et al.: Establishment of an immature mast cell line from a patient with mast cell leukemia. Leuk Res. 1988; 12(4): 345–55. PubMed Abstract | Publisher Full Text\n\nCaughey GH: Mast cell tryptases and chymases in inflammation and host defense. Immunol Rev. 2007; 217(1): 141–54. PubMed Abstract | Publisher Full Text | Free Full Text\n\nChandran S, Cairns MT, O'Brien M, et al.: Effects of combined progesterone and 17β-estradiol treatment on the transcriptome of cultured human myometrial smooth muscle cells. Physiol Genomics. 2016; 48(1): 50–61. PubMed Abstract | Publisher Full Text\n\nCho NH, Park YK, Kim YT, et al.: Lifetime expression of stem cell markers in the uterine endometrium. Fertil Steril. 2004; 81(2): 403–7.
PubMed Abstract | Publisher Full Text\n\nCollington SJ, Williams TJ, Weller CL: Mechanisms underlying the localisation of mast cells in tissues. Trends Immunol. 2011; 32(10): 478–485. PubMed Abstract | Publisher Full Text\n\nCritchley HO, Brenner RM, Henderson TA, et al.: Estrogen receptor beta, but not estrogen receptor alpha, is present in the vascular endothelium of the human and nonhuman primate endometrium. J Clin Endocrinol Metab. 2001; 86(3): 1370–1378. PubMed Abstract | Publisher Full Text\n\nCritchley HO, Henderson TA, Kelly RW, et al.: Wild-type estrogen receptor (ERbeta1) and the splice variant (ERbetacx/beta2) are both expressed within the human endometrium throughout the normal menstrual cycle. J Clin Endocrinol Metab. 2002; 87(11): 5265–73. PubMed Abstract | Publisher Full Text\n\nDa Silva CA, Kassel O, Mathieu E, et al.: Inhibition by glucocorticoids of the interleukin-1beta-enhanced expression of the mast cell growth factor SCF. Br J Pharmacol. 2002; 135(7): 1634–40. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDe Leo B, Esnal-Zufiaurre A, Collins F, et al.: Dataset 1 in: Immunoprofiling of human uterine mast cells identifies three phenotypes and expression of ERb and glucocorticoid receptor. F1000Research. 2017. Data Source\n\nEvans J, Salamonsen LA: Inflammation, leukocytes and menstruation. Rev Endocr Metab Disord. 2012; 13(4): 277–88. PubMed Abstract | Publisher Full Text\n\nFinotto S, Mekori YA, Metcalfe DD: Glucocorticoids decrease tissue mast cell number by reducing the production of the c-kit ligand, stem cell factor, by resident cells: in vitro and in vivo evidence in murine systems. J Clin Invest. 1997; 99(7): 1721–8. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGarfield RE, Irani AM, Schwartz LB, et al.: Structural and functional comparison of mast cells in the pregnant versus nonpregnant human uterus. Am J Obstet Gynecol. 2006; 194(1): 261–267. 
PubMed Abstract | Publisher Full Text\n\nGibson DA, Greaves E, Critchley HO, et al.: Estrogen-dependent regulation of human uterine natural killer cells promotes vascular remodelling via secretion of CCL2. Hum Reprod. 2015; 30(6): 1290–1301. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGibson DA, McInnes KJ, Critchley HO, et al.: Endometrial Intracrinology--generation of an estrogen-dominated microenvironment in the secretory phase of women. J Clin Endocrinol Metab. 2013; 98(11): E1802–6. PubMed Abstract | Publisher Full Text\n\nGibson DA, Simitsidellis I, Cousins FL, et al.: Intracrine Androgens Enhance Decidualization and Modulate Expression of Human Endometrial Receptivity Genes. Sci Rep. 2016; 6: 19970. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHenderson TA, Saunders PT, Moffett-King A, et al.: Steroid receptor expression in uterine natural killer cells. J Clin Endocrinol Metab. 2003; 88(1): 440–449. PubMed Abstract | Publisher Full Text\n\nIrani AA, Schechter NM, Craig SS, et al.: Two types of human mast cells that have distinct neutral protease compositions. Proc Natl Acad Sci U S A. 1986; 83(12): 4464–4468. PubMed Abstract | Publisher Full Text | Free Full Text\n\nJabbour HN, Kelly RW, Fraser HM, et al.: Endocrine regulation of menstruation. Endocr Rev. 2006; 27(1): 17–46. PubMed Abstract | Publisher Full Text\n\nJensen F, Woudwyk M, Teles A, et al.: Estradiol and progesterone regulate the migration of mast cells from the periphery to the uterus and induce their maturation and degranulation. PLoS One. 2010; 5(12): e14409. PubMed Abstract | Publisher Full Text | Free Full Text\n\nJeziorska M, Salamonsen L, Woolley DE: Mast cell and eosinophil distribution and activation in human endometrium throughout the menstrual cycle. Biol Reprod. 1995; 53(2): 312–20. PubMed Abstract | Publisher Full Text\n\nJohannisson E, Landgren BM, Rohr HP, et al.: Endometrial morphology and peripheral hormone levels in women with regular menstrual cycles. 
Fertil Steril. 1987; 48(3): 401–408.\n\nKelly RW, King AE, Critchley HO: Cytokine control in human endometrium. Reproduction. 2001; 121(1): 3–19.\n\nKiernan JA: Production and life span of cutaneous mast cells in young rats. J Anat. 1979; 128(Pt 2): 225–238.\n\nKirshenbaum AS, Goff JP, Semere T, et al.: Demonstration that human mast cells arise from a progenitor cell population that is CD34+, c-kit+, and expresses aminopeptidase N (CD13). Blood. 1999; 94(7): 2333–42.\n\nKunz G, Leyendecker G: Uterine peristaltic activity during the menstrual cycle: characterization, regulation, function and dysfunction. Reprod Biomed Online. 2002; 4(Suppl 3): 5–9.\n\nLorentz A, Baumann A, Vitte J, et al.: The SNARE Machinery in Mast Cell Secretion. Front Immunol. 2012; 3: 143.\n\nMai KT, Teo I, Al Moghrabi H, et al.: Calretinin and CD34 immunoreactivity of the endometrial stroma in normal endometrium and change of the immunoreactivity in dysfunctional uterine bleeding with evidence of 'disordered endometrial stroma'. Pathology. 2008; 40(5): 493–9.\n\nMakieva S, Hutchinson LJ, Rajagopal SP, et al.: Androgen-Induced Relaxation of Uterine Myocytes Is Mediated by Blockade of Both Ca(2+) Flux and MLC Phosphorylation. J Clin Endocrinol Metab. 2016; 101(3): 1055–65.\n\nMassey WA, Guo CB, Dvorak AM, et al.: Human uterine mast cells. Isolation, purification, characterization, ultrastructure, and pharmacology. J Immunol. 1991; 147(5): 1621–7.\n\nMaybin JA, Critchley HO: Menstrual physiology: implications for endometrial pathology and beyond. Hum Reprod Update. 2015; 21(6): 748–761. 
\n\nMcDonald SE, Henderson TA, Gomez-Sanchez CE, et al.: 11Beta-hydroxysteroid dehydrogenases in human endometrium. Mol Cell Endocrinol. 2006; 248(1–2): 72–78.\n\nMenzies FM, Shepherd MC, Nibbs RJ, et al.: The role of mast cells and their mediators in reproduction, pregnancy and labour. Hum Reprod Update. 2011; 17(3): 383–96.\n\nMylonas I, Jeschke U, Shabani N, et al.: Immunohistochemical analysis of estrogen receptor alpha, estrogen receptor beta and progesterone receptor in normal human endometrium. Acta Histochem. 2004; 106(3): 245–252.\n\nNarita SI, Goldblum RM, Watson CS, et al.: Environmental estrogens induce mast cell degranulation and enhance IgE-mediated release of allergic mediators. Environ Health Perspect. 2007; 115(1): 48–52.\n\nNicovani S, Rudolph MI: Estrogen receptors in mast cells from arterial walls. Biocell. 2002; 26(1): 15–24.\n\nOppong E, Hedde PN, Sekula-Neuner S, et al.: Localization and dynamics of glucocorticoid receptor at the plasma membrane of activated mast cells. Small. 2014; 10(10): 1991–8.\n\nPadawer J: Mast cells: extended lifespan and lack of granule turnover under normal in vivo conditions. Exp Mol Pathol. 1974; 20(2): 269–280.\n\nPejler G, Abrink M, Ringvall M, et al.: Mast Cell Proteases. Adv Immunol. 2007; 95: 167–255.\n\nPhuc Le P, Friedman JR, Schug J, et al.: Glucocorticoid receptor-dependent gene regulatory networks. PLoS Genet. 2005; 1(2): e16.\n\nProtic O, Toti P, Islam MS, et al.: Possible involvement of inflammatory/reparative processes in the development of uterine fibroids. Cell Tissue Res. 2016; 364(2): 415–27. 
\n\nRae M, Mohamad A, Price D, et al.: Cortisol inactivation by 11beta-hydroxysteroid dehydrogenase-2 may enhance endometrial angiogenesis via reduced thrombospondin-1 in heavy menstruation. J Clin Endocrinol Metab. 2009; 94(4): 1443–50.\n\nSalamonsen LA, Lathbury LJ: Endometrial leukocytes and menstruation. Hum Reprod Update. 2000; 6(1): 16–27.\n\nSalamonsen LA, Zhang J, Brasted M: Leukocyte networks and human endometrial remodelling. J Reprod Immunol. 2002; 57(1–2): 95–108.\n\nSivridis E, Giatromanolaki A, Agnantis N, et al.: Mast cell distribution and density in the normal uterus: metachromatic staining using lectins. Eur J Obstet Gynecol Reprod Biol. 2001; 98(1): 109–113.\n\nSmith SJ, Piliponsky AM, Rosenhead F, et al.: Dexamethasone inhibits maturation, cytokine production and Fc epsilon RI expression of human cord blood-derived mast cells. Clin Exp Allergy. 2002; 32(6): 906–13.\n\nSnijders MP, de Goeij AF, Debets-Te Baerts MJ, et al.: Immunocytochemical analysis of oestrogen receptors and progesterone receptors in the human uterus throughout the menstrual cycle and after the menopause. J Reprod Fertil. 1992; 94(2): 363–71.\n\nThiruchelvam U, Dransfield I, Saunders PT, et al.: The importance of the macrophage within the human endometrium. J Leukoc Biol. 2013; 93(2): 217–25.\n\nThiruchelvam U, Maybin JA, Armstrong GM, et al.: Cortisol regulates the paracrine action of macrophages by inducing vasoactive gene expression in endometrial cells. J Leukoc Biol. 2016; 99(6): 1165–71.\n\nTiwari N, Wang CC, Brochetta C, et al.: VAMP-8 segregates mast cell-preformed mediator exocytosis from cytokine trafficking pathways. 
Blood. 2008; 111(7): 3665–3674.\n\nUduwela AS, Perera MA, Aiqing L, et al.: Endometrial-myometrial interface: relationship to adenomyosis and changes in pregnancy. Obstet Gynecol Surv. 2000; 55(6): 390–400.\n\nValent P, Spanblöchl E, Sperr WR, et al.: Induction of differentiation of human mast cells from bone marrow and peripheral blood mononuclear cells by recombinant human stem cell factor/kit-ligand in long-term culture. Blood. 1992; 80(9): 2237–45.\n\nWeidner N, Austen KF: Heterogeneity of mast cells at multiple body sites. Fluorescent determination of avidin binding and immunofluorescent determination of chymase, tryptase, and carboxypeptidase content. Pathol Res Pract. 1993; 189(2): 156–62.\n\nWernersson S, Pejler G: Mast cell secretory granules: armed for battle. Nat Rev Immunol. 2014; 14(7): 478–94.\n\nYamaguchi M, Hirai K, Komiya A, et al.: Regulation of mouse mast cell surface Fc epsilon RI expression by dexamethasone. Int Immunol. 2001; 13(7): 843–51.\n\nZaitsu M, Narita SI, Lambert KC, et al.: Estradiol activates mast cells via a non-genomic estrogen receptor-alpha and calcium influx. Mol Immunol. 2007; 44(8): 1977–85.\n\nZhang S, Howarth PH, Roche WR: Cytokine production by cell cultures from bronchial subepithelial myofibroblasts. J Pathol. 1996; 180(1): 95–101.\n\nZhou J, Liu DF, Liu C, et al.: Glucocorticoids inhibit degranulation of mast cells in allergic asthma via nongenomic mechanism. Allergy. 2008; 63(9): 1177–85."
}
|
[
{
"id": "22759",
"date": "16 May 2017",
"name": "Lois A. Salamonsen",
"expertise": [
"Uterine biology and endometrial remodelling",
"including leukocyte populations in endometrium"
],
"suggestion": "Approved",
"report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved: The paper is scientifically sound in its current form and only minor, if any, improvements are suggested.\n\nApproved with reservations: A number of small changes, sometimes more significant revisions, are required to address specific details and improve the paper's academic merit.\n\nNot approved: Fundamental flaws in the paper seriously undermine the findings and conclusions.\n\nThis article examines in detail the phenotypes of mast cells in the uterus – depending on their tryptase/chymase content and their steroid hormone receptor status. The work is solid, and well presented. The methods are well described so that others can validate the data. While tryptase/chymase phenotypes have previously been described across the menstrual cycle (referenced), the information regarding a tryptase-/chymase+ phenotype is new, as is the steroid receptor phenotype of ERβ+/GR+/ERα-/PR- in tryptase-positive cells. Where the data particularly varies from that previously published is in the activation status of the mast cells. This variation could be explained by different fixation protocols or different antibodies. Activation is very important, as mast cells are only functionally relevant following release of the potent molecular contents of their granules.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nNot applicable\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": [
{
"c_id": "2804",
"date": "19 Jun 2017",
"name": "Philippa Saunders",
"role": "Author Response",
"response": "We thank Professor Salamonsen for her positive comments: we agree different methods of fixation can affect results in different studies. All the samples in this study were processed according to standard methods approved by our pathologist."
}
]
},
{
"id": "23114",
"date": "31 May 2017",
"name": "Areege M. Kamal",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThis is an observational, descriptive study describing the enzymatic and hormonal phenotype of the mast cells in full thickness normal human endometrium and the adjacent myometrial layers. The data from the modest sample size presented in the manuscript confirm previous reports of the spatio-temporal distribution of MCs and their phenotype throughout the menstrual cycle, highlighting the novel finding of glucocorticoid receptor expression in these cells. The authors have also examined GR expression in the context of the expression of a selected panel of other steroid receptors. The manuscript is well written and presented; however, there are some points detailed below which warrant further consideration/clarification.\nGeneral comments:\nThe conventional histopathological description of the endometrial layers as endometrial basalis layer and functionalis layer, or stratum basalis and stratum functionalis, throughout the manuscript may provide more clarity than the inconsistent use of less conventional descriptive terms such as basal compartment etc. 
in the present MS.\nReferral to some pivotal previous references could make the manuscript more comprehensive, such as Engemise SL et al., Eur J Obstet Gynecol Reprod Biol., 2011; Mori A et al., Hum Reprod., 1997; Drudy L et al., Eur J Obstet Gynecol Reprod Biol., 1991.\n\nThe description of the genes TPSAB1 and CMA1 first appears in the Results section; they should be defined at their initial mention for clarity.\n\nMethods:\nThe authors have analysed TPSAB1 and CMA1 mRNA expression by the relative standard curve method using tonsil mRNA as a reference. What is the reason for this, and why was this method utilised in place of the usual comparative Ct (delta delta Ct) method? The reasons need to be explained with reference to the more widely known method.\n\nIn dual IHC, was the 2RT antibody incubation for 16h? This appears to be relatively longer than the usual protocols. Did the authors use a quantification method to assess the immunolocalisation of the proteins of interest?\n\nResults:\nThe total number of samples mentioned in the Methods is 46, yet there are inconsistent numbers used at different stages of the analysis. Particularly, in the paragraph describing steroid receptors co-stained with tryptase, the number of samples stated in the legends of the referenced figures does not add up to 20 as mentioned in the text.\n\nThe authors present data on steroid receptor expression in tryptase-positive uterine mast cells; why did they not present the same data for the chymase-positive cells? It is noticeable that not all the MCs were ERβ+; might this be associated with chymase expression?\n\nData in Table 1 could be more informative, with quantification of the expression and a statement of whether this refers to the exact endometrial layer, basalis or functionalis.\n\nIn Figure 6, GR+ MCs seem to be activated. Is this pertinent to all the samples? 
Or is it a phase-specific phenomenon?\n\nDiscussion:\nPage 10, “We found that the concentrations of tryptase and chymase mRNA”, should be replaced with “levels”.\n\nThe statement “As tryptase and chymase are constitutively expressed by MCs (Pejler et al., 2007), it is unsurprising they may remain unchanged if cell numbers are fairly constant.” needs to be updated with reference to activated MCs releasing tryptases into the extracellular compartment; however, the extracellular compartment is part of the whole endometrial lysate that was examined, so overall levels are not expected to change anyway.\n\nIn addition to referencing some other papers that agree and disagree with their work (mentioned above), the authors may also acknowledge the small sample size as a limitation of their study.\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Partly\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": [
{
"c_id": "2803",
"date": "22 Jun 2017",
"name": "Philippa Saunders",
"role": "Author Response",
"response": "We thank Dr Kamal for her comments on our paper, which we have edited to address some of her concerns. Specific responses: We have now referenced the small study by Mori et al. (1997) in the Introduction and the study by Engemise et al. in the Discussion. We have changed the text of the Introduction so that the abbreviations for TPSAB1 and CMA1 are introduced during the first mention of their importance in defining the mast cell phenotype. Methods: The text describing the samples has been changed to highlight the patient information provided in Suppl Table 1 and clarify that although 46 patients were recruited during this study, their samples were split between RNA (n=29) and fixation for immunostaining (n=26). Analysis methods for RT-PCR data: Although the delta delta CT method is one of the most popular, it is based on comparable amplification efficiencies of the endogenous control and gene of interest. As determined by the standard curves, the efficiencies of TPSAB1 and CMA1 were not suitable for analysis by this method, and therefore a more appropriate method was the relative standard curve method. 
The text in M&M has been revised to make this clearer. See: Bustin SA, Benes V, Garson JA, et al., The MIQE guidelines: Minimum Information for Publication of Quantitative Real-Time PCR Experiments, Clinical Chemistry 55:4, 611-622 (2009). These methods are also described in detail in the Applied Biosystems manual online: http://www6.appliedbiosystems.com/support/tutorials/pdf/performing_rq_gene_exp_rtpcr.pdf. The tonsil sample was chosen as a positive control as this tissue has mast cells which contain tryptase and chymase; the RNA sample was purchased from a commercial supplier, and we chose this tissue because we were able to access fixed material from our hospital to use for control experiments to validate the antibodies. For dual immunostaining, primary antibodies were applied to sections overnight (16h) and incubated in a fridge; secondary antibodies were applied for 1h at room temperature. This is a standard method used in our laboratory, and in our hands it leads to lower rates of non-specific background staining than application of primary antibody at room temperature. We did not quantify the fluorescent staining. Results: The limitations on the amount of individual fixed samples meant that not all of the fixed samples (n=20) were stained with all antibodies, and this is reflected in the numbers stated in the figure legends. MCs were considered ERbeta-positive only if the entire nucleus was identified in the section (as highlighted with white arrowheads). 
In this study, the low number of chymase-only MCs meant we were not able to conduct comprehensive staining for ERbeta. In the summary Table 1 we decided to combine the results from the functional and basal layers because the mast cells were very rare in the former, and we wanted to provide an overview to complement the more comprehensive data shown in the individual photographs. Our interpretation of the data related to GR-positive cells (as illustrated in Figure 3, 6 panels) is that the tryptase-positive mast cells that were GR+ appeared to be in a resting state, i.e. the tryptase appeared to be confined to the cytoplasmic area around the nucleus. Discussion: We used a standard curve method, so 'concentration' is more appropriate than 'levels', which is a common term when the delta CT method is used. We have changed the text to make it clearer that this paragraph is discussing mRNAs and hence protein release is not relevant to the argument about constitutive expression."
}
]
}
] | 1
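The relative standard curve method discussed in the review exchange above (chosen because the TPSAB1/CMA1 amplification efficiencies made the delta delta Ct method unsuitable) can be illustrated with a minimal NumPy sketch. The dilution series and Ct values below are hypothetical, not data from the study:

```python
import numpy as np

# Hypothetical standard curve: serial dilutions of the reference (tonsil)
# RNA with measured Ct values (~3.4 Ct per 10-fold dilution).
std_qty = np.array([1.0, 0.1, 0.01, 0.001])
std_ct = np.array([20.1, 23.5, 26.9, 30.3])

# Fit Ct = slope * log10(quantity) + intercept.
slope, intercept = np.polyfit(np.log10(std_qty), std_ct, 1)
efficiency = 10 ** (-1 / slope) - 1  # ~1.0 corresponds to ~100% efficiency

def relative_quantity(ct):
    """Read a sample's quantity (relative to the reference) off the curve."""
    return 10 ** ((ct - intercept) / slope)

# Normalise the gene of interest to an endogenous control, with each gene
# quantified against its own standard curve; Ct values here are invented.
normalised = relative_quantity(25.2) / relative_quantity(22.0)
```

Because each gene is read off its own curve, the method does not assume equal amplification efficiencies, unlike delta delta Ct.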
|
https://f1000research.com/articles/6-667
|
https://f1000research.com/articles/3-306/v1
|
12 Dec 14
|
{
"type": "Research Article",
"title": "L-arginine supplementation and risk factors of cardiovascular diseases in healthy men: a double-blind randomized clinical trial",
"authors": [
"Naseh Pahlavani",
"Mostafa Jafari",
"Masoud Rezaei",
"Hamid Rasad",
"Omid Sadeghi",
"Hossein Ali Rahdar",
"Mohammad Hasan Entezari",
"Naseh Pahlavani",
"Mostafa Jafari",
"Masoud Rezaei",
"Hamid Rasad",
"Omid Sadeghi",
"Hossein Ali Rahdar"
],
"abstract": "Context: Research on the effect of L-arginine on risk factors of cardiovascular diseases (CVD) has mostly focused on Western countries. Since cardiovascular disease is the second leading cause of death in Iran and, as far as we are aware, there have been no studies about the effect of L-arginine on CVD risk factors, the aim of this trial was to assess the effects of L-arginine supplementation on CVD risk factors in healthy men. Objective: The purpose of this study was to evaluate the effect of low-dose L-arginine supplementation on CVD risk factors (lipid profile, blood sugar and blood pressure) in healthy Iranian men. Design, setting, participants: We conducted a double-blind randomized controlled trial in 56 participants recruited from sports clubs at Isfahan University of Medical Sciences between November 2013 and December 2013. Interventions: Healthy men received L-arginine supplementation (2000 mg daily) in the intervention group or placebo (2000 mg maltodextrin daily) in the control group for 45 days. Main outcome measures: The primary outcome measures were fasting blood sugar, blood pressure and lipid profile, including triglyceride (TG), cholesterol, LDL and HDL, measured at the beginning and end of the study. It was hypothesized that these measures would be significantly improved in those receiving L-arginine supplementation. Results: In this trial, we had complete data for 52 healthy participants with a mean age of 20.85±4.29 years. At the end of the study, fasting blood sugar (P=0.001), triglyceride (TG) (P<0.001), cholesterol (P<0.001) and LDL (P=0.04) decreased and HDL (P=0.015) increased in the L-arginine group, whereas we found no significant change in the placebo group. In addition, the changes in fasting blood sugar and lipid profile in the L-arginine group were significant compared with the placebo group. No significant changes were found in systolic (P=0.81) or diastolic (P=0.532) blood pressure in either the L-arginine or the placebo group. 
Conclusion: L-arginine supplementation significantly improved fasting blood sugar and lipid profile, but not blood pressure, compared with placebo.",
"keywords": [],
"content": "Introduction\n\nHigh blood pressure is now considered one of the main challenges facing human health and is one of the most important risk factors for cardiovascular disease1. It is predicted that cardiovascular disease and hypertension will affect about 30% of the world population by the year 2025. Iran ranks fifth in the world in terms of diseases related to high blood pressure. Approximately 6.6 million Iranians aged 25–64 years have high blood pressure, and an estimated 12 million people in the same age range are at increased risk of hypertension and cardiovascular disease2,3. One of the major mechanisms of cardiovascular disease is endothelial dysfunction. This dysfunction, which can increase the permeability of the endothelium to plasma components, especially low-density lipoproteins (LDL), and their deposition in the subendothelial space, can be considered one of the earliest events in atherosclerosis4,5. Given the high prevalence of hypertension and cardiovascular diseases, their complications, and the high costs they impose on society, new strategies for the prevention and control of these diseases, as well as efficient and effective complementary therapies with few complications, are very important6.\n\nL-arginine is a semi-essential amino acid that is used by all cells7. This amino acid, on average, constitutes 5–7% of the total amino acids in the normal human diet and is absorbed in the jejunum and ileum of the small intestine. L-arginine is used by the body in protein synthesis, the urea cycle, tissue repair and immune cell function8,9. Arginine is converted to nitric oxide, which acts as a vasodilator, and citrulline. There are three isoforms of nitric oxide synthase (NOS); these isoforms require oxygen, arginine, tetrahydrobiopterin (BH4) and NADPH (nicotinamide adenine dinucleotide phosphate) for the synthesis of nitric oxide10. 
In a trial conducted on Zucker rats, L-arginine reduced adipose tissue11.\n\nRecently, arginine-rich foods were shown to be inversely associated with endothelial dysfunction in hypercholesterolemic patients12. It has also been shown that long-term administration of L-arginine reduces cardiovascular complications13. It is still not entirely clear whether a low dose of L-arginine has a positive effect. Nitric oxide has an important function in fat metabolism14. Physiological levels of nitric oxide (25 to 35 µmol) increased oxidation of glucose and fat and prevented the synthesis of glucose and triglycerides15. Several amino acids, particularly arginine, glutamine, leucine and phenylalanine, directly stimulate the production of insulin from pancreatic beta cells16. Other possible actions associated with L-arginine include lowering blood pressure and homocysteine levels, increasing lean body mass, and decreasing fat mass, adiponectin and endothelin17. In one study, conducted by Sato et al., infusion of L-arginine reduced blood pressure in patients with essential hypertension but was not effective in patients with a history of dangerously high blood pressure18.\n\nIn a study conducted on healthy volunteers, supplementing with L-arginine for 3 days a week improved glucose metabolism19,20. Lucotti et al. demonstrated that prolonged treatment with L-arginine in patients with type 2 diabetes caused a significant decrease in blood sugar21. In general, a number of studies have shown beneficial effects of L-arginine in reducing blood pressure22. However, in some other studies, L-arginine had no effect on blood pressure23,24. In previous studies, the effects of long-term, low-dose L-arginine intake have not been examined. 
Therefore, in this study, we examined the effect of L-arginine supplementation on lipid profile, blood pressure and fasting blood sugar (glucose; FBS) in healthy men.\n\n\nMaterial and methods\n\nThis double-blind randomized clinical trial (IRCT2013060411763N9) was conducted on 56 healthy male sports club members of Isfahan University of Medical Sciences, Isfahan, Iran, from November to December 2013.\n\nMale participants aged 18 to 35 years, with no history of smoking or alcohol consumption during the past year, not taking nutritional sports supplements during the last 2 months, and with no acute or chronic illness (including mental disorders, untreated hypothyroidism, heart and kidney disease, hepatitis, infectious and inflammatory diseases) were included in this trial. Participants with any of the aforementioned diseases were excluded from this study.\n\nParticipants were invited to participate in the study by advertising at sports clubs at Isfahan University of Medical Sciences, Isfahan, Iran. A total of 70 men volunteered, of whom 56 fitted the inclusion criteria. The required sample size was determined by the following formula, considering a study power of 80%, a type I error of 5% (α = 0.05) and a type II error of 20% (β = 0.20).\n\n\n\nWe held five meetings with participants. In the first meeting, we obtained basic information using a general questionnaire. For dietary assessment, three-day dietary records of subjects (sessions 2, 3 and 4) were completed, and the nutrient content of foods was determined by the Nutritionist 4 software (version 7.0; N-Squared Computing, Salem, OR), which was designed for evaluation of Iranian foods. Participants were instructed to record, as accurately as possible, everything they consumed during the day, including supplements and between-meal and late-evening snacks. 
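The sample-size formula itself was not reproduced in the source. As an illustration only, a minimal sketch of the standard two-sample means formula under the stated power (80%) and error (α = 0.05, β = 0.20) assumptions; the SD and difference values are hypothetical:

```python
import math
from scipy.stats import norm

def sample_size_two_means(sigma, d, alpha=0.05, beta=0.20):
    """Per-group n for detecting a difference d between two group means,
    assuming a common SD sigma:
    n = 2 * sigma^2 * (z_{1-alpha/2} + z_{1-beta})^2 / d^2."""
    z_alpha = norm.ppf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = norm.ppf(1 - beta)        # ~0.84 for 80% power
    return math.ceil(2 * sigma**2 * (z_alpha + z_beta) ** 2 / d**2)

# e.g. a hypothetical SD of 10 mg/dl and a relevant difference of 8 mg/dl
n_per_group = sample_size_two_means(sigma=10, d=8)
```

With these illustrative inputs, the formula returns 25 participants per group, in the same order of magnitude as the trial's 28 per group.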
Physical activity level was assessed at baseline (session 1) and at the end of the study (session 5) using the IPAQ questionnaire, which is both a reliable (Tang K Hong et al.) and valid (Coral et al.) measure25,26. Weight was measured without shoes, with the participants wearing underwear, and was recorded to the nearest 0.5 kg. Height was measured without shoes with the shoulders in a normal position. BMI was calculated as weight in kilograms divided by height in meters squared.\n\nFasting blood samples were collected at day 0 (session 1) and day 45 (session 5) of this trial. The blood samples were centrifuged at 4000 rpm for 10 min at 4°C, and the serum was frozen at -80°C until analysis. FBS levels and lipid profile, including total cholesterol (TC), triglyceride (TG), LDL and HDL, were measured using an Auto Analyser Biosystems A25 (BioSystems S.A., Barcelona, Spain). Blood pressure was measured three times in every session, after a 15-minute seated rest, using a mercury sphygmomanometer, and the average blood pressure was recorded at each stage.\n\nAfter obtaining informed consent and with the approval of the ethics committee of Isfahan University of Medical Sciences, 56 healthy men participated in this study and were randomly assigned to consume an L-arginine supplement (n = 28) or placebo (n = 28) for 45 days, using envelopes containing numbers from a table of random numbers. Pure L-arginine supplements and placebo (maltodextrin) were purchased from a pharmaceutical company (Karen Pharmaceutical Co, Yazd, Iran). Participants were instructed to take one tablet per day (2000 mg of L-arginine in the L-arginine group, 2000 mg of maltodextrin in the placebo group). When the participants were given packets of L-arginine or placebo, they were asked not to change their lifestyle, physical activity or diet during the study. 
For blinding, the L-arginine and placebo packets were coded by someone outside the research team, and the research team was unaware of the type of supplement. The L-arginine and placebo packets were delivered to participants at sessions 1 and 3. They were asked to bring back the empty packets at session 3 and the final session. The statistician was also not aware of the type of intervention. At the end of the project the final report revealed the type of intervention. This trial was approved by the Isfahan University of Medical Sciences (number 392435) and was registered on the Iranian Registry of Clinical Trials website (www.irct.ir) (code: IRCT2013121515807N1).\n\nThe incidence of adverse events was evaluated by recording all observed or volunteered adverse events. For this purpose, any study-related adverse events during the intervention were monitored by daily evaluation. For participants who withdrew or were lost to follow-up, adverse events were ascertained by telephone.\n\nAll statistical analyses were done with SPSS software version 18 (SPSS, Inc., Chicago, IL, USA). We applied the Kolmogorov–Smirnov test to ensure the normal distribution of variables. To determine the differences in general characteristics and dietary intakes between the L-arginine and placebo groups, we used an independent-samples t-test. We used paired-samples t-tests to determine the effects of L-arginine and placebo on FBS, lipid profile and blood pressure. A P-value < 0.05 was considered the level of significance.\n\n\nResults\n\nFifty-six subjects fulfilled the inclusion criteria and participated in the study, but four dropped out of the study: three in the intervention group (two due to dermatitis, one due to digestive problems) and one in the placebo group (personal problems). Therefore, 52 participants [L-arginine (n = 25) and placebo (n = 27)] completed the trial (Figure 1). Final statistical analyses were performed on the 52 participants. 
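The statistical workflow described above (normality check, then independent-samples and paired-samples t-tests) can be sketched outside SPSS, for example with SciPy. The FBS values below are simulated, not the trial's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical FBS values (mg/dl) for illustration only.
arg_before = rng.normal(90, 8, size=25)
arg_after = arg_before - rng.normal(5, 2, size=25)   # simulated decrease
plc_before = rng.normal(90, 8, size=27)
plc_after = plc_before + rng.normal(0, 2, size=27)   # no simulated effect

# Normality check (the paper used the Kolmogorov-Smirnov test).
z = (arg_before - arg_before.mean()) / arg_before.std(ddof=1)
ks_stat, ks_p = stats.kstest(z, "norm")

# Between-group comparison of the changes: independent-samples t-test.
t_ind, p_ind = stats.ttest_ind(arg_before - arg_after, plc_before - plc_after)

# Within-group change: paired-samples t-test.
t_pair, p_pair = stats.ttest_rel(arg_before, arg_after)
```

A paired test is used within each group because the day-0 and day-45 measurements come from the same participants, while the independent test compares the two randomized groups.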
Compliance in our study was high: approximately 100% of capsules were taken throughout the study in both groups. General characteristics of participants who received either L-arginine supplements or placebo are presented in Table 1. No significant differences were found in weight, BMI, physical activity, or energy or protein intake between the two groups (Table 1).\n\n1 All values are means ± SDs\n\n2 Received placebo 2000 mg per day during the study\n\n3 Received L-arginine supplement 2000 mg per day during the study\n\n4 Obtained from independent-samples t-test\n\nThe differences between the two groups in dietary intake during the trial are presented in Table 2. Dietary intakes of energy, protein, carbohydrate, fat and arginine during the study did not differ between the L-arginine and placebo groups.\n\nData are presented as mean and standard deviation\n\n†Obtained from independent-samples t-test\n\nData are presented as mean and standard deviation\n\nAbbreviations: FBS, fasting blood sugar; TG, triglyceride; LDL, low density lipoprotein; HDL, high density lipoprotein; SBP, systolic blood pressure; DBP, diastolic blood pressure\n\n1 Received L-arginine 2000 mg per day during the study\n\n2 Received placebo 2000 mg per day during the study\n\n3 Obtained from paired-samples t-test\n\n4 Obtained from independent-samples t-test\n\nBaseline and post-intervention values of FBS, blood pressure and lipid profile are presented in Table 2. In this study, participants received either 2000 mg of L-arginine or placebo (2000 mg maltodextrin) per day for 45 days. Levels of FBS, triglycerides, total cholesterol, LDL-c and HDL-c differed significantly between the L-arginine and placebo groups (P<0.05). 
Systolic and diastolic blood pressure before and after the intervention showed no significant difference compared to the control group (P>0.05) (Table 2).\n\n\nDiscussion\n\nIn this study, the effects of L-arginine supplementation on blood glucose, blood pressure and lipid profile were examined in healthy male subjects between 18 and 35 years of age. Subjects in the intervention group received 2000 mg of L-arginine daily as pills for 45 days, while participants in the control group consumed 2000 mg of placebo pills (maltodextrin) daily. The results showed that L-arginine supplementation significantly decreased the levels of FBS, triglycerides, LDL and cholesterol, and significantly increased HDL levels, compared to the control group (P<0.05). However, there was no significant effect on systolic or diastolic blood pressure (P>0.05) (Table 2).\n\nIn some studies, healthy individuals exhibited improved glucose metabolism after three to seven days of L-arginine supplementation19,27, a result that is in line with our study. Lucotti et al. demonstrated that L-arginine supplementation reduces blood sugar in patients with diabetes, which also parallels our findings, although our study was conducted on healthy people21. There is evidence that long-term L-arginine intake can increase insulin sensitivity and improve glycemic indices28. It seems that an acute dose of L-arginine affects the levels of nitric oxide. In a study conducted by Natarajan et al., supplementation with L-arginine improved glycemic control in patients with diabetes29. In another study, Mohamadin and colleagues demonstrated that nitric oxide precursors can improve blood glucose and glycosylated hemoglobin levels in Wistar rats with diabetes through antioxidant activity30.\n\nIn a study conducted on people with type 2 diabetes, L-arginine supplementation reduced systolic and diastolic blood pressure, results that are inconsistent with ours31. 
However, a study by Lerman et al. found that L-arginine supplementation over 6 months had no significant effect on systolic and diastolic blood pressure in humans, which is in line with our study24. Lekakis and colleagues found similar results, in that a daily 6 g oral dose of L-arginine did not have a significant effect on the blood pressure of patients with essential hypertension23.\n\nSiani et al. reported lower blood sugar, lower blood pressure, increased HDL, and decreased cholesterol and triglycerides following administration of L-arginine supplementation, results that agree with our study19. In a study conducted in 2008 by Boger et al., supplementation with L-arginine improved the function of the cardiovascular system in patients on hemodialysis31. Several studies carried out on humans and on male C57BL/6 mice have shown that L-arginine supplementation may be considered a new treatment for metabolic disorders, and also lowers blood pressure, adipose tissue and weight, and improves insulin sensitivity32–34. In a study conducted by Martina et al., L-arginine supplementation of 2.1 g per day, in combination with N-acetylcysteine at a dose of 2.1 g per day for 6 months, was shown to improve endothelial function35.\n\nIn one study, M.A. Nascimento et al. showed that L-arginine supplementation in overweight men for 7 days reduced LDL and increased HDL, results that agree with ours. However, unlike our findings, they did not see any effect on the levels of triglycerides and total cholesterol. These differing results may be due to the short duration of their study, as well as the high BMI and average age of their participants (46 ± 5)36.\n\nThe mechanisms activated by L-arginine supplementation are still not fully understood. L-arginine can act at the molecular and cellular levels via complex mechanisms. 
Studies in multiple animal models and in a limited number of human subjects have shown that L-arginine can stimulate the development of brown adipose tissue mitochondria and induce the regulation of gene expression37. Another study reported that L-arginine supplementation decreased blood pressure and total homocysteine levels38. Indeed, nitric oxide is produced from arginine as an endothelium-derived relaxation factor, which activates guanylyl cyclase. Guanylyl cyclase converts guanosine triphosphate to cyclic guanosine monophosphate, which relaxes the smooth muscles and can cause a decrease in blood pressure39. Although the results of many studies are in line with our study, more are needed to determine the effects of differing L-arginine doses on CVD risk factors.\n\nSome limitations of this study should be considered. First, we could not examine the effects of L-arginine supplementation on inflammatory factors, including tumor necrosis factor alpha (TNF-α), C-reactive protein (CRP) and interleukin-6, as CVD risk factors. Second, this study was conducted on males, and the effects of L-arginine supplementation on CVD risk factors in females remain unclear. Third, we enrolled only healthy subjects and did not examine the effects of L-arginine on patients with CVD.\n\n\nData availability\n\nFigshare: http://dx.doi.org/10.6084/m9.figshare.126504740",
"appendix": "Author contributions\n\n\n\nAll authors contributed equally to this work. N.P, M.J, M.R and H.R performed conception, design, data collection and clinical studies. H.A.R did the literature search. O.S conducted all statistical analysis. N.P, O.S and M.H.E performed the manuscript preparation, editing and review. Funding and overall responsibility was assumed by M.H.E.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nFunding for this study was provided by the Department of Clinical Nutrition, School of Nutrition and Food Sciences, Isfahan, Iran, grant number: 392435.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nThis study was supported by the Department of Clinical Nutrition, School of Nutrition and Food Sciences, Isfahan, Iran. The authors thank the athletes who participated in the research project.\n\n\nReferences\n\nBurke BE, Neuenschwander R, Olson RD: Randomized, double-blind, placebo-controlled trial of coenzyme Q10 in isolated systolic hypertension. South Med J. 2001; 94(11): 1112–7. PubMed Abstract\n\nKapil V, Milsom AB, Okorie M, et al.: Inorganic nitrate supplementation lowers blood pressure in humans: role for nitrite-derived NO. Hypertension. 2010; 56(2): 274–81. PubMed Abstract | Publisher Full Text\n\nEsteghamati A, Abbasi M, Alikhani S, et al.: Prevalence, awareness, treatment, and risk factors associated with hypertension in the Iranian population: the national survey of risk factors for noncommunicable diseases of Iran. Am J Hypertens. 2008; 21(6): 620–6. PubMed Abstract | Publisher Full Text\n\nMarx N, Grant PJ: Endothelial dysfunction and cardiovascular disease--the lull before the storm. Diab Vasc Dis Res. 2007; 4(2): 82–3. 
PubMed Abstract | Publisher Full Text\n\nHadi HA, Carr CS, Al Suwaidi J: Endothelial dysfunction: cardiovascular risk factors, therapy, and outcome. Vasc Health Risk Manag. 2005; 1(3): 183–98. PubMed Abstract | Free Full Text\n\nRosenfeldt FL, Haas SJ, Krum H, et al.: Coenzyme Q10 in the treatment of hypertension: a meta-analysis of the clinical trials. J Hum Hypertens. 2007; 21(4): 297–306. PubMed Abstract | Publisher Full Text\n\nWu G, Morris SM Jr: Arginine metabolism: nitric oxide and beyond. Biochem J. 1998; 336(Pt 1): 1–17. PubMed Abstract | Free Full Text\n\nWhite MF: The transport of cationic amino acids across the plasma membrane of mammalian cells. Biochim Biophys Acta. 1985; 822(3–4): 355–74. PubMed Abstract | Publisher Full Text\n\nHendler SS, Rorvik D: PDR for nutritional supplements. 1st ed. Montvale, NJ; PDR Thompson. 2001. Reference Source\n\nWu G, Bazer FW, Davis TA, et al.: Arginine metabolism and nutrition in growth, health and disease. Amino Acids. 2009; 37(1): 153–68. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFu WJ, Haynes TE, Kohli R, et al.: Dietary L-arginine supplementation reduces fat mass in Zucker diabetic fatty rats. J Nutr. 2005; 135(4): 714–21. PubMed Abstract\n\nMaxwell AJ, Anderson B, Zapien MP, et al.: Endothelial dysfunction in hypercholesterolemia is reversed by a nutritional product designed to enhance nitric oxide activity. Cardiovasc Drugs Ther. 2000; 14(3): 309–16. PubMed Abstract | Publisher Full Text\n\nSozykin AV, Noeva EA, Balakhonova TV, et al.: [Effect of L-arginine on platelet aggregation, endothelial function and exercise tolerance in patients with stable angina pectoris]. Ter Arkh. 2000; 72(8): 24–27. PubMed Abstract\n\nGarcia-Villafranca J, Guillen A, Castro J: Involvement of nitric oxide/cyclic GMP signaling pathway in the regulation of fatty acid metabolism in rat hepatocytes. Biochem Pharmacol. 2003; 65(5): 807–12. 
PubMed Abstract | Publisher Full Text\n\nJobgen WS, Fried SK, Fu WJ, et al.: Regulatory role for the arginine-nitric oxide pathway in metabolism of energy substrates. J Nutr Biochem. 2006; 17(9): 571–88. PubMed Abstract\n\nMenge BA, Schrader H, Ritter PR, et al.: Selective amino acid deficiency in patients with impaired glucose tolerance and type 2 diabetes. Regul Pept. 2010; 160(1–3): 75–80. PubMed Abstract | Publisher Full Text\n\nCassone Faldetta M, Laurenti O, Desideri G, et al.: L-arginine infusion decreases plasma total homocysteine concentrations through increased nitric oxide production and decreased oxidative status in Type II diabetic patients. Diabetologia. 2002; 45(8): 1120–7. PubMed Abstract | Publisher Full Text\n\nSato K, Kinoshita M, Kojima M, et al.: Failure of L-arginine to induce hypotension in patients with a history of accelerated-malignant hypertension. J Hum Hypertens. 2000; 14(8): 485–8. PubMed Abstract | Publisher Full Text\n\nSiani A, Pagano E, Iacone R, et al.: Blood pressure and metabolic changes during dietary L-arginine supplementation in humans. Am J Hypertens. 2000; 13(5 Pt 1): 547–51. PubMed Abstract | Publisher Full Text\n\nApostol AT, Tayek JA: A decrease in glucose production is associated with an increase in plasma citrulline response to oral arginine in normal volunteers. Metabolism. 2003; 52(11): 1512–6. PubMed Abstract | Publisher Full Text\n\nLucotti P, Setola E, Monti LD, et al.: Beneficial effects of a long-term oral L-arginine treatment added to a hypocaloric diet and exercise training program in obese, insulin-resistant type 2 diabetic patients. Am J Physiol Endocrinol Metab. 2006; 291(5): E906–12. PubMed Abstract | Publisher Full Text\n\nRector TS, Bank AJ, Mullen KA, et al.: Randomized, double-blind, placebo-controlled study of supplemental oral L-arginine in patients with heart failure. Circulation. 1996; 93(12): 2135–41. 
PubMed Abstract | Publisher Full Text\n\nLekakis JP, Papathanassiou S, Papaioannou TG, et al.: Oral L-arginine improves endothelial dysfunction in patients with essential hypertension. Int J Cardiol. 2002; 86(2–3): 317–23. PubMed Abstract | Publisher Full Text\n\nLerman A, Burnett JC Jr, Higano ST, et al.: Long-term L-arginine supplementation improves small-vessel coronary endothelial function in humans. Circulation. 1998; 97(21): 2123–8. PubMed Abstract | Publisher Full Text\n\nCraig CL, Marshall AL, Sjöström M, et al.: International physical activity questionnaire: 12-country reliability and validity. Med Sci Sports Exerc. 2003; 35(8): 1381–95. PubMed Abstract\n\nHong TK, Trang NH, van der Ploeg HP, et al.: Validity and reliability of a physical activity questionnaire for Vietnamese adolescents. Int J Behav Nutr Phys Act. 2012; 9: 93. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGannon MC, Nuttall JA, Nuttall FQ: Oral arginine does not stimulate an increase in insulin concentration but delays glucose disposal. Am J Clin Nutr. 2002; 76(5): 1016–22. PubMed Abstract\n\nWascher TC, Graier WF, Dittrich P, et al.: Effects of low-dose L-arginine on insulin-mediated vasodilatation and insulin sensitivity. Eur J Clin Invest. 1997; 27(8): 690–5. PubMed Abstract\n\nNatarajan Sulochana K, Lakshmi S, Punitham R, et al.: Effect of oral supplementation of free amino acids in type 2 diabetic patients-- a pilot clinical trial. Med Sci Monit. 2002; 8(3): CR131–7. PubMed Abstract\n\nMohamadin AM, Hammad LN, El-Bab MF, et al.: Can nitric oxide-generating compounds improve the oxidative stress response in experimentally diabetic rats? Clin Exp Pharmacol Physiol. 2007; 34(7): 586–93. PubMed Abstract | Publisher Full Text\n\nBoger RH: L-Arginine therapy in cardiovascular pathologies: beneficial or dangerous? Curr Opin Clin Nutr Metab Care. 2008; 11(1): 55–61. 
PubMed Abstract | Publisher Full Text\n\nMcKnight JR, Satterfield MC, Jobgen WS, et al.: Beneficial effects of L-arginine on reducing obesity: potential mechanisms and important implications for human health. Amino Acids. 2010; 39(2): 349–57. PubMed Abstract | Publisher Full Text\n\nClemmensen C, Madsen AN, Smajilovic S, et al.: L-Arginine improves multiple physiological parameters in mice exposed to diet-induced metabolic disturbances. Amino Acids. 2012; 43(3): 1265–75. PubMed Abstract | Publisher Full Text\n\nMartina V, Masha A, Gigliardi VR, et al.: Long-term N-acetylcysteine and L-arginine administration reduces endothelial activation and systolic blood pressure in hypertensive patients with type 2 diabetes. Diabetes Care. 2008; 31(5): 940–4. PubMed Abstract | Publisher Full Text\n\nNascimento MA, Higa EMS, de Mello MT, et al.: “Effects of short-term L-arginine supplementation on lipid profile and inflammatory proteins after acute resistance exercise in overweight men”. e-SPEN J. 2014; 9(3): e141–e145. Publisher Full Text\n\nMcKnight JR, Satterfield MC, Jobgen WS, et al.: Beneficial effects of L-arginine on reducing obesity: potential mechanisms and important implications for human health. Amino Acids. 2010; 39(2): 349–57. PubMed Abstract | Publisher Full Text\n\nCassone Faldetta M, Laurenti O, Desideri G, et al.: L-arginine infusion decreases plasma total homocysteine concentrations through increased nitric oxide production and decreased oxidative status in Type II diabetic patients. Diabetologia. 2002; 45(8): 1120–7. PubMed Abstract | Publisher Full Text\n\nGruetter CA, Barry BK, McNamara DB, et al.: Relaxation of bovine coronary artery and activation of coronary arterial guanylate cyclase by nitric oxide, nitroprusside and a carcinogenic nitrosoamine. J Cyclic Nucleotide Res. 1979; 5(3): 211–224. PubMed Abstract\n\nHuynh NT, Tayek JA: Oral arginine reduces systemic blood pressure in type 2 diabetes: its potential role in nitric oxide generation. 
J Am Coll Nutr. 2002; 21(5): 422–7. PubMed Abstract | Publisher Full Text\n\nPahlavani N, Jafari M, Rezaei M, et al.: Clinical data on the effect of L-arginine supplementation on cardiovascular risk factors in healthy men. Figshare. 2014. Data Source"
}
|
[
{
"id": "7756",
"date": "03 Mar 2015",
"name": "Roman Leischik",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved: The paper is scientifically sound in its current form and only minor, if any, improvements are suggested.\n\nApproved with reservations: A number of small changes, sometimes more significant revisions, are required to address specific details and improve the paper's academic merit.\n\nNot approved: Fundamental flaws in the paper seriously undermine the findings and conclusions.\n\nThe study is well conducted. The data support the conclusions. We don't know anything about the long-term results; for this issue it is necessary to perform long-term studies. L-Arginine supplementation has been used in sport, but its positive effects there are not certain1. Plasma nitrite levels were elevated compared to placebo during acute supplementation in 5 km runners. We don't know the long-term effects in patients with diabetes or patients with CAD over the years, but some studies cited in this well-conducted study are promising. I think, because of financial problems, we will not have a well-performed, double-blinded long-term study in the future. It is therefore quite beneficial to have a well-conducted study, especially regarding possible adverse effects. It is interesting that the lowering of blood pressure showed negative results, but maybe the underlying mechanism in patients with metabolic syndrome might lead to other effects. If possible, it would be worthwhile to obtain a grant for a long-term study, because the adverse effects are low and the possible positive effects might be significant. The problem is that physical activity and a healthy lifestyle in general are more important than any supplement. There is a great desire in patients to take supplements or pills as a solution for their problems and diseases, because then no individual effort by the patient is needed. I think it is important to mention this. 
Further studies should be carried out comparing the effects of L-Arginine supplementation and physical activity in patients. But we have to start somewhere with research, and this study is one of the first well-conducted and transparent studies. It would be nice if the authors could discuss the issue of physical activity2/lifestyle3 and supplements in the discussion section. Socioeconomic differences might play a greater role than supplementation with L-Arginine4.",
"responses": []
},
{
"id": "8320",
"date": "14 Apr 2015",
"name": "Majid Ghayour-Mobarhan",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThe title and abstract are appropriate, as are the methods and analysis. The conclusions presented are justified based on the results, and the data provided is sufficient. Please make the following modifications in your article: The statistical correlations between the arginine and the age, BMI and Physical activity should be assessed and reported. On what basis was the arginine dosage selected? The conclusion should be present in a separate section.",
"responses": [
{
"c_id": "2710",
"date": "22 Jun 2017",
"name": "Mostafa Jafari",
"role": "Author Response",
"response": "Dear Referee, your comments on my article, titled (L-arginine supplementation and risk factors of cardiovascular diseases in healthy men: a double-blind randomized clinical trial), have been addressed in the new version. Regarding one comment (The statistical correlations between the arginine and the age, BMI and Physical activity should be assessed and reported): because serum L-arginine levels were not measured during the study, it was not possible to assess correlations between arginine and age, BMI and physical activity. Thanks"
}
]
}
] | 1
|
https://f1000research.com/articles/3-306
|
https://f1000research.com/articles/6-967/v1
|
22 Jun 17
|
{
"type": "Research Article",
"title": "Analytical challenges of untargeted GC-MS-based metabolomics and the critical issues in selecting the data processing strategy",
"authors": [
"Ting-Li Han",
"Yang Yang",
"Hua Zhang",
"Kai P. Law",
"Ting-Li Han",
"Yang Yang",
"Hua Zhang"
],
"abstract": "Background: A challenge of metabolomics is data processing the enormous amount of information generated by sophisticated analytical techniques. The raw data of an untargeted metabolomic experiment are composited with unwanted biological and technical variations that confound the biological variations of interest. The art of data normalisation to offset these variations and/or eliminate experimental or biological biases has made significant progress recently. However, published comparative studies are often biased or have omissions. Methods: We investigated the issues with our own data set, using five different representative methods of internal standard-based, model-based, and pooled quality control-based approaches, and examined the performance of these methods against each other in an epidemiological study of gestational diabetes using plasma. Results: Our results demonstrated that the quality control-based approaches gave the highest data precision in all methods tested, and would be the method of choice for controlled experimental conditions. But for our epidemiological study, the model-based approaches were able to classify the clinical groups more effectively than the quality control-based approaches because of their ability to minimise not only technical variations, but also biological biases from the raw data. Conclusions: We suggest that metabolomic researchers should optimise and justify the method they have chosen for their experimental condition in order to obtain an optimal biological outcome.",
"keywords": [
"Normalisation method",
"Biomarker discovery",
"Gas chromatography-mass spectrometry",
"Metabolomics",
"Gestational diabetes"
],
"content": "Introduction\n\nMetabolomics is the large-scale study of small molecules in biological systems. It combines strategies to identify and quantify cellular metabolites using sophisticated analytical techniques with the application of multivariate statistics for data mining and interpretation1. Metabolomics, particularly mass spectrometry (MS)-based approaches, is increasingly being used in population-based or epidemiological studies, since the technology offers a high level of reliability and sensitivity over conventional biochemical techniques, and multiple metabolites can be simultaneously monitored2. Furthermore, the technology can be used to examine biological matrices in a holistic, non-biased manner, with the goal of bringing a global understanding of these complex systems and creating new hypotheses on how they function. However, even if clinical and pre-analytical procedures (e.g., specimen collection, storage and handling, and preparation of the samples) have been standardised and conducted appropriately, there are inevitably still unwanted variations2. These variations are introduced by (1) the natural biological variations among the individual subjects and samples (the cohort); (2) the fluctuations in experimental conditions; and (3) the effects of instrumental drifts, which confound the biological variations of interest. Instrumental drifts range from changes in column condition and ageing, through progressive contamination of the ion source and optics, to deterioration of the detector response. Changes in column condition result in retention time shifts and increased column bleeding, which lead to erroneous data extraction. Progressive contamination of the ion source and optics lowers the absolute instrument response, which makes compound quantification profoundly difficult. 
These variations can be detrimental to epidemiological studies, which typically involve a population of subjects with a diverse range of biological characteristics and large numbers of samples analysed over weeks in multiple analytical batches. These unwanted variations in the raw data are minimised through a processing step called normalisation3,4. The removal of unwanted variation is by no means a trivial matter, yet it remains a grey area in which there is a distinct need to develop a greater understanding of when, why, and how to normalise in order to achieve optimal biological outcomes5. Since every metabolomics experiment is exposed to multiple sources of unwanted variation, the results obtained in the subsequent data analysis can vary depending on the normalisation method used to remove them6.\n\nIn our previous work, we discussed the fundamental issues surrounding the data pre-processing and normalisation of an untargeted gas chromatography-mass spectrometry (GC-MS)-based environmental study7. In this research article, we extend our discussion to a longitudinal cohort study of Chinese pregnant women8–10, and share some of our experience in handling the analytical challenges of an untargeted GC-MS-based epidemiological study. 
The structure of this manuscript is as follows: the current state-of-the-art data normalisation methods are reviewed, and the challenges of data extraction and its effect on downstream data processing are discussed; representative normalisation methods, including IS-based, QC-based, and model-based data normalisation approaches, are used to process the data set, and the performance of these methods is evaluated by principal component analysis (PCA), relative log abundance (RLA) plots, relative standard deviation (RSD), and receiver operating characteristic (ROC); logistic regression is then used to adjust the significance for the biological confounders; and the implications of the findings are discussed.\n\n\nMethods\n\nThe full experimental design, procedures, and statistical methods are described in the Supplementary Methods (Supplementary File 1). The clinical characteristics of the participants have been described previously8.\n\nIn brief, the longitudinal cohort of this study comprised 61 Chinese pregnant women who completed their antenatal care at the First Affiliated Hospital of Chongqing Medical University. Of the 61 participants, 34 had normal glucose tolerance (controls), and 27 met the diagnostic criteria for gestational diabetes (GDM) based on the International Association of Diabetes and Pregnancy Study Groups recommendations11. Blood samples were collected at the scheduled antenatal visits, one in each trimester. Samples were stored at -80°C until analysis.\n\nAn enhanced GC-MS method12 was employed to investigate the longitudinal change of non-esterified fatty acids (NEFAs) and other aromatic metabolites in the maternal plasma of women who developed GDM and of women with healthy pregnancies (controls). To enhance the separation of cis- and trans-isomers of mono- and polyunsaturated fatty acid methyl esters, a 100 m long biscyanopropyl/phenylcyanopropyl polysiloxane column was used. 
EDTA-treated plasma samples were thawed on ice and extracted with methanol/toluene pre-mixed with internal standards. The extracts were derivatized with acetyl chloride solution in round-bottom glass tubes with screw caps and sealed. The tubes were then heated and stirred at 100°C for 1 h. NEFAs were derivatized to their fatty acid methyl esters (FAMEs). The organic layer was recovered and analysed directly by GC-MS after neutralisation with aqueous potassium carbonate solution. GC-MS data were acquired with an Agilent GC-MS system in splitless mode. A RESTEK Rtx®-2330 column (90% biscyanopropyl/10% phenylcyanopropyl polysiloxane) was installed in the system. The column temperature was computer controlled and was ramped from 45°C to 215°C over 65 min. Data pre-processing was performed in the Agilent MassHunter suite (version 8 of Qualitative Workflows and Profinder), Metabolite Detector13 (version 2.5), and AMDIS (Automated Mass Spectral Deconvolution and Identification System) (version 2.72), and the accuracy of data extraction of these software tools was compared. Data were further processed and analysed with five different normalisation methods (CRMN, EigenMS, PQN, SVR and LOWESS). The performance of the normalisation methods and the marker candidates identified were investigated. PCA was performed with EZinfo (version 3.0.3). Multilevel PCA14 was performed using mixOmics (version 6.1.3). Pareto scaling was used in PCA and mPCA modelling. RLA plots were drawn with the RlaPlots function of the package metabolomics15 (version 0.1.4). ROC was calculated with the colAUC function of caTools (version 1.17.1). Binomial logistic regression was performed with the glm function of R (version 3.3.3).\n\n\nAn overview of the state-of-the-art data normalisation methods\n\nNormalisation is typically performed post-analytically (i.e., data normalisation). 
Data normalisation can be categorised as (1) internal standard (IS)-based (especially with the use of isotopic internal standards); (2) quality control (QC)-based, such as pooled samples; and (3) statistical- or model-based. The IS-based approach is the standard technique for targeted analysis of metabolites and peptides. Methods using multiple internal standards, such as NOMIS (Normalisation using Optimal selection of Multiple Internal Standards)16, CCSC (Comprehensive Combinatory Standard Correction)17,18 and CRMN (Cross-contribution Robust Multiple standard Normalisation), have been proposed for untargeted analysis. The latter methods address the specific issue of cross-contribution. Nevertheless, there is a practical limit to the number of internal standards that can be added to the samples, and hence to the coverage of the different classes of compound in a complex biological extract. Despite the numerous drawbacks, IS-based approaches are still used in untargeted epidemiological metabolomics, particularly with GC-MS19,20. However, the reported results of these studies are, in our view, dubious at best.\n\nAn alternative approach is the use of a pooled QC sample to calibrate the systematic biases. Pooled QC was originally designed to monitor system and sample stability over the course of an analysis21, but was later adopted to enable signal correction22. A common method uses locally weighted scatterplot smoothing (LOWESS) for signal correction23. Several regression models have been proposed in this regard, but these algorithms have different susceptibility/tolerance to outliers. One method models the data by a set of local polynomials, which avoids the constraint that the data follow any one global model and is less sensitive to errant data points24. An improved version uses cubic spline interpolation to determine the coefficient values between QC samples25,26. 
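QC-based LOWESS correction, as described above, fits a smooth curve through the pooled-QC responses along the run order and divides it out of every injection. A minimal illustrative sketch for a single metabolite feature (not the authors' implementation; the feature values, run order and QC positions are assumed):

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess


def qc_lowess_correct(intensities, run_order, qc_mask, frac=0.6):
    """Drift-correct one metabolite feature using the pooled-QC injections.

    intensities : raw peak areas in injection order
    run_order   : injection index of each sample
    qc_mask     : boolean array marking the pooled-QC injections
    A LOWESS curve is fitted through the QC responses only, interpolated
    to every injection position, and divided out (QC median preserved).
    """
    intensities = np.asarray(intensities, float)
    run_order = np.asarray(run_order, float)
    qc_mask = np.asarray(qc_mask, bool)
    # Fit the drift curve through the QC injections; lowess returns sorted (x, yhat) pairs
    fit = lowess(intensities[qc_mask], run_order[qc_mask], frac=frac)
    # Interpolate the drift estimate to every injection position
    drift = np.interp(run_order, fit[:, 0], fit[:, 1])
    return intensities / drift * np.median(intensities[qc_mask])
```

The RSD of the QC injections before and after correction is the precision metric this kind of sketch would be judged by, mirroring the RSD evaluation used in this paper.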
Recently, a single-value regression model using the total abundance information (Batch Normalizer)27, support vector regression (SVR) normalisation (MetNormalizer)28 and mixture model normalisation (mixnorm)29 have also been proposed. While QC-based methods have been shown to provide an effective means of performance monitoring and signal correction, the unwanted variation seen in metabolomic data can be both experimental and biological in origin5. QC-based methods are limited to removing signal drift over time and batch effects, and their applicability can also be limited by practical considerations.\n\nIn contrast, statistical- or model-based approaches are able to remove both experimental and biological variation. Probabilistic quotient normalisation (PQN) is one of the most commonly used model-based methods, particularly in nuclear magnetic resonance (NMR)-based metabolomics. The method assumes that biologically interesting concentration changes influence only parts of the NMR spectrum, while dilution effects affect all metabolite signals30. The mean or median of the QC data is typically used as the reference spectrum3. EigenMS is an adaptation of surrogate variable analysis for microarrays; it uses a combination of ANOVA and singular value decomposition (SVD) to capture and remove biases from metabolomic peak intensity measurements while preserving the variation of interest31,32. The number of bias trends is determined by a permutation test, and the effects of the bias trends are then removed from the data. This approach has the advantage of permitting researchers to remove unwanted systematic variation without knowing the sources of bias.\n\nConcurrent pre-analytical normalisation, which equalises the concentration of the samples prior to analysis, is also desirable. For freeze-dried samples, for example, this can be achieved by weight. 
For urine, applying an appropriate dilution factor after a measurement of specific gravity33, osmolality34, or creatinine concentration35 reportedly reduces the analytical variability.\n\n\nResults and Discussion\n\nThe GC-MS data were first pre-processed with AMDIS and Metabolite Detector. As reported in our previous work7, despite carefully adjusted software parameters, data deconvolution with AMDIS was error prone. In particular, a single peak could be assigned to multiple components (insert in Supplementary Figure 1a). Some researchers use peak height instead of peak area to allow a manual removal of incorrectly assigned components from the data matrix. However, many components detected in our experiment were unsymmetrical and/or had tailing. Accordingly, we consider the use of peak height inappropriate. In comparison, data deconvolution with Metabolite Detector was considerably better than with AMDIS (Supplementary Figure 1b), and the problems encountered with AMDIS were not observed with Metabolite Detector (insert in Supplementary Figure 1b). Given our current and previous observations, we do not recommend using AMDIS (or a workflow based on AMDIS) for untargeted GC-MS data deconvolution7.\n\nAnother challenge was the relatively large non-linear retention time shift over the course of the two-week analysis. For example, the retention time of cholest-3,5-diene varied by nearly 50 s (Supplementary Figure S2). Such shifts would normally be corrected by retention time alignment, which was performed with Metabolite Detector. However, many of the compounds detected were structurally similar or isomeric, closely eluted, and had identical or very similar electron impact mass spectra (Supplementary Table 1). We found that the retention alignment did not have the expected accuracy. As a result, the data extracted by the automatic/batch process of the software contained non-zero errors. 
These non-zero errors were poorly tolerated in the downstream data processing by the QC-based normalisations (especially by the LOWESS normalisation). Although the errors also affected the IS-based and model-based normalisations, those approaches tolerated them to some extent. However, to make an accurate and impartial comparison, an alternative data pre-processing method was used.\n\nData pre-processing was also performed with the most recent release of the Agilent MassHunter suite. Data deconvolution and compound identification with the Qualitative Workflows and the Agilent NIST14 database were relatively easy, fast and accurate (Figure 1a). 385 components were detected above the user-defined threshold value in a typical QC sample, of which 62 components were confidently annotated. The compound identification and the retention time information were then exported to Profinder. The automatic/batch data extraction process of Profinder was, however, far from perfect. Nevertheless, the interface of Profinder permitted a user-friendly visual inspection and manual correction that other similar software tools (including MS-DIAL, eRah, ADAP-GC, metaMS and MassOmics) did not provide. By manually correcting the inconsistencies of data extraction (carefully selecting the exact region of the corresponding peak), an error-free data extraction was achieved (Figure 1b).\n\nAgilent MassHunter (a) Qualitative Workflows and (b) Profinder interface. 385 components were extracted from a typical QC sample from 14.5 to 56 min, of which 62 were confidently annotated with match factor ≥ 80. Data were then exported to a CEF file. The file was then used by Profinder for batch data extraction. The Profinder tool was designed with the use of reference spectra and retention time windows to assist data extraction.\n\nA common problem with most GC-MS studies is the progressive deterioration of the instrumental performance caused by ion source and optics contamination. 
The unadjusted (raw) data (left panel, Supplementary Figure 3) showed the extent of loss of absolute signal intensity of the two internal standards and a background compound over the course of the analysis. The signal of 1,3-dimethyl-benzene from both QC and analytical samples (Supplementary Figure 3a) showed that the loss of absolute intensity was faster in the first batch and then recovered after the system was set to idle. Thereafter, the loss of absolute signal stabilised. The overall trend of the two internal standards, tridecanoic acid and nonadecanoic acid, was similar (Supplementary Figures 3b and c), but batches 4 and 5 had a higher absolute signal relative to batch 3. These changes might be caused by fluctuations in other experimental conditions, i.e., batch-to-batch variation. The systematic biases, whether due to loss of absolute intensity or to other fluctuations, were removed by normalisation (right panel, Supplementary Figure 3). However, not every normalisation method performed equally well, and the normalisation employed had a significant influence on the determination of significant metabolites.\n\nThe pre-processed data were processed with the five selected normalisation methods. The outputs from the CRMN, EigenMS and MetNormalizer packages are shown in Supplementary Methods, Figures M1-M3. The performance of these normalisation methods was evaluated in three ways. The PCA score plots are shown in Supplementary Figure 4. The within-group RLA plots are shown in Supplementary Figure 5. The RSDs of the QC and analytical samples are shown in Table 1 and Supplementary Table 1.\n\nThe PCA score plot of the unadjusted data revealed a transition from red to green and blue, representing the first-, second-, and third-trimester samples (Supplementary Figure 4a). The RLA plot showed a relatively large within-group variation (Supplementary Figure 5a). 
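The QC RSD criterion used below is simply the per-metabolite relative standard deviation across the pooled-QC injections, with 30% as the reliability cut-off. A minimal sketch (illustrative Python; the function names are ours, and the study itself used R):

```python
from statistics import mean, stdev

def rsd(values):
    """Relative standard deviation (%) of one metabolite's intensities
    across repeated injections (e.g., the pooled-QC samples)."""
    return 100.0 * stdev(values) / mean(values)

def flag_unreliable(qc_table, threshold=30.0):
    """Return the metabolites whose QC RSD meets or exceeds the threshold.

    qc_table: dict {metabolite name -> list of QC intensities}
    """
    return sorted(m for m, v in qc_table.items() if rsd(v) >= threshold)
```

A lower pooled-QC RSD after normalisation indicates better technical precision, which is why the RSD values quoted below are reported alongside the PCA and RLA diagnostics.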
The RSD of the QC samples was relatively high (19.34%) (Table 1) and four metabolites had QC RSD values ≥ 30% (Supplementary Table 1). After normalisation with CRMN, the classification was improved, and the QC samples clustered together in the PCA score plot (Supplementary Figure 4b). However, the RSD of the QC samples remained higher than 10% (Table 1) and four metabolites had QC RSD values ≥ 30% (Supplementary Table 1). The within-group RLA plot suggested that the CRMN normalisation performance was relatively modest compared to the other normalisation methods (Supplementary Figure 5b). These observations were partly a consequence of the small number of ISs used in this experiment. As a result, we did not find CRMN or other IS-based normalisation methods useful for this data set.\n\nThe data processed with EigenMS, on the other hand, showed significantly improved classification (Supplementary Figure 4c); EigenMS was the only method among all those tested that was able to distinctly separate the clinical groups in the PCA plot. The RSD of the QC samples was reduced to 9.77% (Table 1) and two metabolites had QC RSD values ≥ 30% (Supplementary Table 1). The data processed with PQN improved slightly further, with the RSD of the QC samples reduced to 8.92% (Table 1), although the classification in the PCA score plot was less clear (Supplementary Figure 4d). Only one metabolite had a QC RSD value ≥ 30% (Supplementary Table 1).\n\nFinally, the data set was processed with the two QC-based normalisation methods. Under the default settings of the two normalisation tools, the SVR normalisation was found to have a higher tolerance to outliers than the LOWESS normalisation (Supplementary Figure 6). In contrast, the LOWESS algorithm merely adjusted the analytical data according to the QC data after smoothing (data not visualised). These observations suggested that the algorithms of the SVR and LOWESS normalisation handle outliers quite differently. 
This observation has implications for the selection of the analytical platform and of QC-based data normalisation methods. The RLA plots suggested that the performance of the EigenMS, PQN and SVR normalisations was similar (Supplementary Figures 5c-e), but the data processed with LOWESS normalisation were the most precise (Supplementary Figure 5f). The RSD of the QC samples was 5.73% and 4.79% for the data processed with the SVR and LOWESS normalisations, respectively (Table 1), and no metabolite was found to have QC RSD ≥ 30% in the LOWESS-processed data set (Supplementary Table 1).\n\nTo account for the repeated measurements of the same subject at different stages of pregnancy (the longitudinal data set), multilevel statistics14 were used8,9. The three most promising normalisation methods were further interrogated with multilevel analysis. The multilevel PCA score plots of the data processed with the EigenMS, PQN and LOWESS normalisations are shown in Figure 2. In all cases, a clear separation between the early, middle, and late pregnancies was seen in the multilevel PCA score plots. This was a significant improvement over single-level PCA (Supplementary Figure 4). Still, little or no separation between the GDM cases and the controls was observed. The corresponding loading plots of the models were compared. As shown in Figure 3, these models produced completely different sets of metabolites that changed significantly over the course of pregnancy. On further inspection, the PQN-processed model was rejected, as the basic assumption of the PQN model (i.e., that the majority of variables do not show “significant” differences between the studied groups) was not met. Comparing the EigenMS- and LOWESS-processed models, one might reasonably assume that the data set processed with the LOWESS normalisation was superior based on the RSD values (Table 1)29. However, we argue that QC-based normalisations can only remove technical variation, not unwanted biological variation5. 
QC-based normalisations would be expected to outperform other normalisation approaches in cell culture or animal studies, where experimental conditions permit a high degree of control over the experimental subjects and thus over the condition of the samples. This hardly holds true for epidemiological studies of human subjects (patients). Although the precision of the data processed with EigenMS was suboptimal, the EigenMS-processed model unequivocally gave the best classification of all methods tested and minimised both technical and unwanted biological variability.\n\nMultilevel principal component analysis score plots produced by the data processed with the (a) EigenMS, (b) PQN, and (c) LOWESS normalisation.\n\nA heat map of the area under the ROC curve (AUC) of the data processed with four of the selected data normalisation methods is shown in Figure 4. In the data processed with the LOWESS or SVR normalisation, no metabolite had an AUC ≥ 0.7. In the data processed with EigenMS, only one metabolite, hexadecanoic acid, was found to be significantly different between the GDM cases and the controls in the first trimester. The data set was also analysed by logistic regression (Supplementary Table 2). Age, BMI, and parity were considered as confounding factors. The results are presented in the same format as reported by Enquobahrie et al.36 (which did not report odds ratios). The results of the logistic regression analysis were consistent with the results of the ROC analysis.\n\nOverall, the increase in NEFAs over the course of pregnancy reflected the progressive change in hepatic and adipose metabolism that occurs as part of the natural process of pregnancy, which facilitates the maternal utilisation of free fatty acids as an energy source, sparing other substrates for placental-foetal transport and foetal growth. 
However, the majority of individual NEFAs were not significantly different between the GDM cases and controls. We concluded that the differences in maternal plasma NEFA composition between the GDM cases and the healthy controls were very subtle37, and that our analysis had reached a limit of untargeted GC-MS analysis with the selected data normalisation methods. Using targeted GC-MS analysis, Chen et al. reported that the concentrations of NEFAs in maternal serum had a “graded” (or incremental) relationship with the severity of maternal hyperglycaemia38. The observed differences in the maternal plasma of our cohort may provide an insight into the development of GDM in a homogeneous population in China consuming an oriental diet, as opposed to populations in western countries.\n\n\nConclusions\n\nThe choice of data normalisation method has a significant influence on biomarker discovery. Accordingly, researchers should justify that their selected methods are appropriate for their experimental conditions. Where a study is conducted under a controlled experimental environment and the specimens are biological equivalents (e.g., serum samples in an animal study, dried tissues, or cell cultures), we recommend QC-based normalisations. These methods effectively eliminate technical variation, and the resulting data have the highest precision. The selection of a particular QC-based method depends on the instrumental platform and the data (i.e., tolerance to outliers and/or missing values). Where the data are generated by an epidemiological study of human subjects, model-based normalisations are recommended. PQN normalisation is the preferred choice when the basic assumption of the model is met; otherwise, we propose EigenMS. Although EigenMS still requires further development, we believe that the principles of its unique bias capture and removal approach have great potential to confront the analytical challenges of epidemiological metabolomics. 
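Of the model-based methods recommended above, PQN is simple enough to state in full: each sample is divided by the median of its feature-wise quotients against a reference spectrum (typically the median QC spectrum), on the assumption that most features reflect only a dilution effect. An illustrative Python sketch (our own naming; the study used R implementations):

```python
from statistics import median

def pqn_normalise(sample, reference):
    """Probabilistic quotient normalisation of a single sample.

    sample, reference: lists of feature intensities on the same feature
    index; the reference is typically the median spectrum of the QCs.
    """
    # Feature-wise quotients against the reference (skip empty features)
    quotients = [s / r for s, r in zip(sample, reference) if r > 0]
    # The median quotient estimates the sample's overall dilution factor
    dilution = median(quotients)
    return [s / dilution for s in sample]
```

When only a minority of features differ between groups, the median quotient tracks dilution rather than biology; when that assumption fails, as in the PQN-processed model rejected above, the correction distorts the data.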
Although IS-based normalisation is a common approach in GC-MS-based metabolomics, it has been demonstrated that the method is outperformed by other approaches. This is because batch effects can vary substantially with chemical class and chromatographic retention. The use of a few selected ISs is therefore not justified for untargeted analysis of complex biological mixtures. It is frequently mentioned in the review literature that targeted analysis is limited by the scope of an analysis, but untargeted analysis is likewise limited by the analytical precision. The current state-of-the-art data normalisation methods are not immune to these challenges. Nevertheless, by understanding the limitations of the popular data normalisation methods, a new approach capable of effectively eliminating both technical and irrelevant biological variation without compromising the integrity of the data may be developed. Moreover, a major challenge in GC-MS-based analysis is the lack of suitable informatic tools specific for untargeted metabolomics. Many authors still rely on AMDIS, notwithstanding its known problems. It is worth stressing that errors in data extraction have an equal or greater effect on the downstream data analysis. We performed our data processing locally using R. Those not familiar with the R platform may consider the NOREVA server (http://server.idrb.cqu.edu.cn/noreva/), which offers a variety of data normalisation methods, including those used in this study, to streamline the analysis.\n\nAll the participants gave informed consent to participate in the current study. The study was approved by the Ethics Committee of the First Affiliated Hospital of Chongqing Medical University (University Hospital). 
More information can be found in the previous study8.\n\n\nData and software availability\n\nDataset 1: Pre-processed raw data and the data further processed with the data normalisation methods used in this study are available in Excel files: Raw is the unadjusted data; CRMN.norm, EigenMS.norm, PQN.norm, SVR.norm and LOWESS.norm are the data further processed with the corresponding normalisation methods; Injection sequence describes the injection order of the GC-MS experiment. This information is used for QC-based normalisation. http://dx.doi.org/10.5256/f1000research.11823.d16412139\n\nAgilent MassHunter suite version 8 is available to licensed subscribers of Agilent SubscribeNet (https://agilent.subscribenet.com/). Agilent Profinder version 8 is available free of charge to all Agilent customers.",
"appendix": "Competing interests\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nAcknowledgements\n\nThe authors would like to thank Steve Madden and Qingping Ma of Agilent Technologies for the early access to the new version of Agilent MassHunter Profinder (version 8). Ting-Li Han would like to thank Kai Pong Law for his invaluable support and guidance, the sharing of knowledge and the credit for his works.\n\n\nSupplementary material\n\nSupplementary File 1: Supplementary Methods.\n\nTable S1. Metabolites confidently (match factor ≥ 80) identified by MassHunter software with the NIST14 library, and the relative standard deviation of individual metabolites in QC and analytical samples before (raw) and after data normalisation. IS denotes internal standard.\n\nTable S2. Binomial logistic regression analysis of the data set processed with (a) EigenMS, (b) PQN, (c) SVR, (d) LOWESS. Variables with significance (p-values) ≤ 0.05 are highlighted in bold.\n\nFigure S1. A comparison of (a) AMDIS and (b) Metabolite Detector deconvolution performance. AMDIS extracted a total of 277 components from 14.5 to 56 min, whereas Metabolite Detector extracted 264 components from 14.5 to 56 min (274 components up to 65 min) in a typical QC sample. Manual inspection revealed that a small number of peaks had been assigned to multiple components by AMDIS (insert in (a)) when the peaks were unsymmetrical. This problem was not observed with Metabolite Detector (insert in (b)). Many of the peaks, highlighted in blue triangles in (b), were low-intensity background components. 65 components were confidently annotated with match factor ≥ 80.\n\nFigure S2. Retention time shift of cholest-3,5-diene. 
Its retention time varied from 54.65 to 55.17 min.\n\nFigure S3. The left panel shows the raw intensity of (a) 1,3-dimethyl-benzene, (b) tridecanoic acid, methyl ester, and (c) nonadecanoic acid, methyl ester over the course of a 10-batch experiment. Their signal intensity deteriorated progressively as a result of continual ion source/optics contamination. The right panel shows their intensity after SVR normalisation.\n\nFigure S4. Principal component analysis score plots of the (a) raw (unadjusted) data, and the data normalised with (b) CRMN, (c) EigenMS, (d) PQN, (e) SVR and (f) cubic spline-LOWESS.\n\nFigure S5. Within-group relative log abundance plots of the (a) raw (unadjusted) data, and the data normalised with (b) CRMN, (c) EigenMS, (d) PQN, (e) SVR and (f) cubic spline-LOWESS.\n\nFigure S6. The raw and SVR-adjusted intensity of (a) hexanoic acid, methyl ester and (b) 1-ethyl-3,5-dimethyl-benzene. The SVR algorithm disregards unexpected or non-systemic signal intensity drift, thereby tolerating some level of error present in the data set.\n\n\nReferences\n\nMizuno H, Ueda K, Kobayashi Y, et al.: The great importance of normalization of LC-MS data for highly-accurate non-targeted metabolomics. Biomed Chromatogr. 2017; 31(1): e3864. PubMed Abstract | Publisher Full Text\n\nLind MV, Savolainen OI, Ross AB: The use of mass spectrometry for analysing metabolite biomarkers in epidemiology: methodological and statistical considerations for application to large numbers of biological samples. Eur J Epidemiol. 2016; 31(8): 717–33. PubMed Abstract | Publisher Full Text\n\nFilzmoser P, Walczak B: What can go wrong at the data normalization step for identification of biomarkers? J Chromatogr A. 2014; 1362: 194–205. 
PubMed Abstract | Publisher Full Text\n\nWu Y, Li L: Sample normalization methods in quantitative metabolomics. J Chromatogr A. 2016; 1430: 80–95. PubMed Abstract | Publisher Full Text\n\nDe Livera AM, Sysi-Aho M, Jacob L, et al.: Statistical methods for handling unwanted variation in metabolomics data. Anal Chem. 2015; 87(7): 3606–15. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDe Livera AM, Olshansky M, Speed TP: Statistical analysis of metabolomics data. Methods Mol Biol. 2013; 1055: 291–307. PubMed Abstract | Publisher Full Text\n\nLaw KP, Han TL: The importance of GC-MS date processing and analysis strategies suitable for plant and environmental metabolomics : with references to Changes in the abundance of sugars and sugar-like compounds in tall fescue (Festuca arundinacea) due to growth in naphthalene-treated sand. Environ Sci Pollut Res Int. 2016; 23(10): 10276–85. PubMed Abstract | Publisher Full Text\n\nLaw KP, Mao X, Han TL, et al.: Unsaturated plasma phospholipids are consistently lower in the patients diagnosed with gestational diabetes mellitus throughout pregnancy: A longitudinal metabolomics study of Chinese pregnant women part 1. Clin Chim Acta. 2017; 465: 53–71. PubMed Abstract | Publisher Full Text\n\nLaw KP, Han TL, Mao X, et al.: Tryptophan and purine metabolites are consistently upregulated in the urinary metabolome of patients diagnosed with gestational diabetes mellitus throughout pregnancy: A longitudinal metabolomics study of Chinese pregnant women part 2. Clin Chim Acta. 2017; 468: 126–39. PubMed Abstract | Publisher Full Text\n\nLaw KP, Zhang H: The pathogenesis and pathophysiology of gestational diabetes mellitus: Deductions from a three-part longitudinal metabolomics study in China. Clin Chim Acta. 2017; 468: 60–70. 
PubMed Abstract | Publisher Full Text\n\nInternational Association of Diabetes and Pregnancy Study Groups Consensus Panel, Metzger BE, Gabbe SG, et al.: International association of diabetes and pregnancy study groups recommendations on the diagnosis and classification of hyperglycemia in pregnancy. Diabetes Care. 2010; 33(3): 676–82. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKramer JK, Hernandez M, Cruz-Hernandez C, et al.: Combining results of two GC separations partly achieves determination of all cis and trans 16:1, 18:1, 18:2 and 18:3 except CLA isomers of milk fat as demonstrated using Ag-ion SPE fractionation. Lipids. 2008; 43(3): 259–73. PubMed Abstract | Publisher Full Text\n\nHiller K, Hangebrauk J, Jäger C, et al.: MetaboliteDetector: comprehensive analysis tool for targeted and nontargeted GC/MS based metabolome analysis. Anal Chem. 2009; 81(9): 3429–39. PubMed Abstract | Publisher Full Text\n\nSautron V, Terenina E, Gress L, et al.: Time course of the response to ACTH in pig: biological and transcriptomic study. BMC Genomics. 2015; 16(1): 961. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDe Livera AM, Dias DA, De Souza D, et al.: Normalizing and integrating metabolomics data. Anal Chem. 2012; 84(24): 10768–76. PubMed Abstract | Publisher Full Text\n\nSysi-Aho M, Katajamaa M, Yetukuri L, et al.: Normalization method for metabolomics data using optimal selection of multiple internal standards. BMC Bioinformatics. 2007; 8(1): 93. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDeport C, Ratel J, Berdagué JL, et al.: Comprehensive combinatory standard correction: a calibration method for handling instrumental drifts of gas chromatography-mass spectrometry systems. J Chromatogr A. 2006; 1116(1– 2): 248–58. PubMed Abstract | Publisher Full Text\n\nEngel E, Ratel J: Correction of the data generated by mass spectrometry analyses of biological tissues: application to food authentication. J Chromatogr A. 
2007; 1154(1–2): 331–41. PubMed Abstract | Publisher Full Text\n\nChorell E, Hall UA, Gustavsson C, et al.: Pregnancy to postpartum transition of serum metabolites in women with gestational diabetes. Metabolism. 2017; 72: 27–36. Publisher Full Text\n\nDudzik D, Zorawski M, Skotnicki M, et al.: GC-MS based Gestational Diabetes Mellitus longitudinal study: Identification of 2-and 3-hydroxybutyrate as potential prognostic biomarkers. J Pharm Biomed Anal. 2017; pii: S0731-7085(17)30511-3. PubMed Abstract | Publisher Full Text\n\nGika HG, Theodoridis GA, Wingate JE, et al.: Within-day reproducibility of an HPLC-MS-based method for metabonomic analysis: application to human urine. J Proteome Res. 2007; 6(8): 3291–303. PubMed Abstract | Publisher Full Text\n\nChen M, Rao RS, Zhang Y, et al.: A modified data normalization method for GC-MS-based metabolomics to minimize batch variation. Springerplus. 2014; 3(1): 439. PubMed Abstract | Publisher Full Text | Free Full Text\n\nvan der Kloet FM, Bobeldijk I, Verheij ER, et al.: Analytical error reduction using single point calibration for accurate and precise metabolomic phenotyping. J Proteome Res. 2009; 8(11): 5132–41. PubMed Abstract | Publisher Full Text\n\nDunn WB, Broadhurst D, Begley P, et al.: Procedures for large-scale metabolic profiling of serum and plasma using gas chromatography and liquid chromatography coupled to mass spectrometry. Nat Protoc. 2011; 6(7): 1060–83. PubMed Abstract | Publisher Full Text\n\nEjigu BA, Valkenborg D, Baggerman G, et al.: Evaluation of normalization methods to pave the way towards large-scale LC-MS-based metabolomics profiling experiments. OMICS. 2013; 17(9): 473–85. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTsugawa H, Kanazawa M, Ogiwara A, et al.: MRMPROBS suite for metabolomics using large-scale MRM assays. Bioinformatics. 2014; 30(16): 2379–80. 
PubMed Abstract | Publisher Full Text\n\nWang SY, Kuo CH, Tseng YJ: Batch Normalizer: a fast total abundance regression calibration method to simultaneously adjust batch and injection order effects in liquid chromatography/time-of-flight mass spectrometry-based metabolomics data and comparison with current calibration methods. Anal Chem. 2013; 85(2): 1037–46. PubMed Abstract | Publisher Full Text\n\nShen X, Gong X, Cai Y, et al.: Normalization and integration of large-scale metabolomics data using support vector regression. Metabolomics. 2016; 12(5): 89. Publisher Full Text\n\nReisetter AC, Muehlbauer MJ, Bain JR, et al.: Mixture model normalization for non-targeted gas chromatography/mass spectrometry metabolomics data. BMC Bioinformatics. 2017; 18(1): 84. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKohl SM, Klein MS, Hochrein J, et al.: State-of-the art data normalization methods improve NMR-based metabolomic analysis. Metabolomics. 2012; 8(Suppl 1): 146–60. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKarpievitch YV, Nikolic SB, Wilson R, et al.: Metabolomics data normalization with EigenMS. PLoS One. 2014; 9(12): e116221. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKarpievitch YV, Dabney AR, Smith RD: Normalization and missing value imputation for label-free LC-MS analysis. BMC Bioinformatics. 2012; 13(Suppl 16): S5. PubMed Abstract | Publisher Full Text | Free Full Text\n\nEdmands WM, Ferrari P, Scalbert A: Normalization to specific gravity prior to analysis improves information recovery from high resolution mass spectrometry metabolomic profiles of human urine. Anal Chem. 2014; 86(21): 10925–31. PubMed Abstract | Publisher Full Text\n\nGagnebin Y, Tonoli D, Lescuyer P, et al.: Metabolomic analysis of urine samples by UHPLC-QTOF-MS: Impact of normalization strategies. Anal Chim Acta. 2017; 955: 27–35. 
PubMed Abstract | Publisher Full Text\n\nChen Y, Shen G, Zhang R, et al.: Combination of injection volume calibration by creatinine and MS signals' normalization to overcome urine variability in LC-MS-based metabolomics studies. Anal Chem. 2013; 85(16): 7659–65. PubMed Abstract | Publisher Full Text\n\nEnquobahrie DA, Denis M, Tadesse MG, et al.: Maternal Early Pregnancy Serum Metabolites and Risk of Gestational Diabetes Mellitus. J Clin Endocrinol Metab. 2015; 100(11): 4348–56. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAgakidou E, Diamanti E, Papoulidis I, et al.: Effect of Gestational Diabetes on Circulating Levels of Maternal and Neonatal Carnitine. J Diabetes Metab. 2013; 4(3): 250. Publisher Full Text\n\nChen X, Scholl TO, Leskiw M, et al.: Differences in maternal circulating fatty acid composition and dietary fat intake in women with gestational diabetes mellitus or mild gestational hyperglycemia. Diabetes Care. 2010; 33(9): 2049–54. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHan TL, Yang Y, Zhang H, et al.: Dataset 1 in: Analytical challenges of untargeted GC-MS-based metabolomics and the critical issues in selecting the data processing strategy. F1000Research. 2017. Data Source"
}
|
[
{
"id": "23714",
"date": "10 Jul 2017",
"name": "Feng Zhu",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThe authors investigated five different representative methods and examined the performance of these methods against each other in an epidemiological study of gestational diabetes using plasma. Normalization is an important step in the analysis of metabolomics data, and hence, evaluating different normalization methods is of great importance. However, I have some comments, which are summarized below.\nComment 1: Liquid chromatography coupled with mass spectrometry (LC-MS) and nuclear magnetic resonance (NMR) spectroscopy are also among the most commonly applied tools in metabolomics studies. The authors should discuss what renders GC-MS datasets different from LC-MS or NMR datasets.\nComment 2: Normalization is an important step in the analysis of metabolomics data, and a variety of normalization methods have been developed for addressing the complex datasets generated. But their performances vary greatly and depend heavily on the nature of the studied data. Hence, choosing the most appropriate method can be challenging for those without a background in bioinformatics. A recently published paper addressed identifying well-performing normalization methods by taking multiple criteria into consideration (Nucleic Acids Res, 45(W1): W162-W170 (2017)). 
So, to clarify: the reader should be alerted when to use any of the best-performing methods, and also when not to use them.\nComment 3: Sparsity of data: in many cases metabolomics datasets contain zero values. Discuss in the manuscript how zero values affect normalization; the sections referred to in the paper (Sci Rep, 6:38881 (2016)) could serve as discussion points.\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Partly\n\nIf applicable, is the statistical analysis and its interpretation appropriate? Partly\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": [
{
"c_id": "2872",
"date": "13 Jul 2017",
"name": "Kai Law",
"role": "Author Response",
"response": "I would like to thank Dr Zhu for his detailed review of our article and his approval. Herewith my response to the reviewer’s comments.Comment 1 and 2:With respect to data normalisation for UPLC-MS data sets, please refer to my previously published articles (Ref. [8-10]). In brief, I have developed a two-step data normalisation approach specific for our cohort study of GDM (ref. [10]). The first step involves a normalisation method available in Progenesis QI (normalise to all compounds), which primarily deals with changes in the sample concentration. The second step involves a data equalisation with EigenMS, which further captures and removes residual biases, such as instrumental drift and batch effects. Comparison with other common normalisation methods has been described in the supplementary data in ref. [9]. I am not an expert of NMR, but to the best of my knowledge, the data normalisation methods (and so the data analysis methods) described in my works equally apply to NMR data sets.I agree with the reviewer that the application of data normalisation strategy is dependent on many factors, from study design, instrumentation, and software platform, to the nature/structure of the data set. Many authors have conducted their studies without considering this question seriously or applied the most common methods to avoid questions from the peer-reviewers. The NOREVA server the reviewer developed provides resources and tools for analysts, who may have limited bioinformatic background, to optimise their data normalisation strategy. However, no software tools can justify which normalisation method is the most appropriate in each situation for the user. For example, normalise to internal standards or pooled QC samples are common methods for data normalisation of GC-MS data set. However, I do not find normalising to a few internal standards can be justified. 
Although QC-based normalisation may give the highest analytical precision, model-based approaches are the preferred methods for a cohort study of a human population, because QC-based normalisation only deals with analytical drifts, and the consistency of the QC samples may present an analytical challenge in a large-scale study. Data normalisation remains a challenge in metabolomics and is a grey area that needs further development. It is up to the readers to decide when a method is more appropriate than the others in their study. It is beyond the scope of this study to generalise or set rules as the reviewer suggested.\n\nComment 3:\n“Zero values” are not a problem in our study. I have applied our software platform to avoid this problem completely in this study, and likewise in my previous UPLC-MS works with the use of Progenesis QI. I also want to stress that I do not recommend imputation for zero or missing values, since the choice of imputation method, as the reviewer has implied, “affect[s] the normalization” and so the biological outcome. In this article, I used Agilent’s Profinder to eliminate the problem of zero values; the data were manually checked and peaks were re-integrated to ensure accurate data extraction. This has been discussed in the article already. The raw data matrix provided with the article shows that our data do not have the problem of zero or missing values. Furthermore, the Profinder software has a unique function called Recursive Feature Extraction that re-integrates peaks with intensity lower than the background value input by the user. This is useful when certain peaks have low intensity in some of the samples but are detected above background in the other samples. We used the same method with Metabolite Detector in our previous work (ref. [7]) to eliminate zero values. We have stressed in this work that methods based on AMDIS (and indeed ChromaTOF), which require imputation, are not recommended."
}
]
},
{
"id": "24566",
"date": "08 Aug 2017",
"name": "Jianguo Xia",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nData processing and normalization are critical in large-scale metabolomics studies. Different choice tends to have a significant impact on the downstream analysis. It is well-known that optimal normalization is study-specific, as most normalization methods have been developed with certain data distributions in mind. However, research on this issue is challenging due to the lack of well-accepted benchmark metabolomics datasets and evaluation criteria. Common approaches include using simulated data in combination with a well-studied data, or using multiple datasets in order to generate a less biased conclusion.\nIn this paper, the authors reported their experience using 5 different normalization methods on an epidemiological metabolomics dataset generated from GC-MS. Therefore, the conclusion may not be directly applicable to data from other platforms such as LC-MS or NMR. Nevertheless, the authors described the pitfalls and challenges in processing such data, and shared their insights which may be useful for other researchers under similar experimental setup.\n\nMy comments are on two aspects:\n\n1) Although I have no problem understanding the content, I think the authors need to invest more time and efforts improving the readability of the paper. I have noticed many grammar issues. Almost all sentences in the Background section in Abstract needs to be carefully checked.\n\nFigures: Figure 2 - Including result based on raw data will be very helpful; Figure 3 legend - mPCA or msPCA? 
Figure 4 legend - \"are under the curve\" => area?\n\n2) Some normalization methods are rather complementary. For instance, some adjust for technical variations and some for biological variations. It would be interesting to test whether combining two different normalization methods gives better results.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate? Partly\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-967
|
https://f1000research.com/articles/6-952/v1
|
21 Jun 17
|
{
"type": "Opinion Article",
"title": "ELIXIR-UK role in bioinformatics training at the national level and across ELIXIR",
"authors": [
"L. Larcombe",
"R. Hendricusdottir",
"T.K. Attwood",
"F. Bacall",
"N. Beard",
"L.J. Bellis",
"W.B. Dunn",
"J.M. Hancock",
"A. Nenadic",
"C. Orengo",
"B. Overduin",
"S-A Sansone",
"M. Thurston",
"M.R. Viant",
"C.L. Winder",
"C.A. Goble",
"C.P. Ponting",
"G. Rustici",
"L. Larcombe",
"R. Hendricusdottir",
"T.K. Attwood",
"F. Bacall",
"N. Beard",
"L.J. Bellis",
"W.B. Dunn",
"J.M. Hancock",
"A. Nenadic",
"C. Orengo",
"B. Overduin",
"S-A Sansone",
"M. Thurston",
"M.R. Viant",
"C.L. Winder",
"C.A. Goble",
"C.P. Ponting"
],
"abstract": "ELIXIR-UK is the UK node of ELIXIR, the European infrastructure for life science data. Since its foundation in 2014, ELIXIR-UK has played a leading role in training both within the UK and in the ELIXIR Training Platform, which coordinates and delivers training across all ELIXIR members. ELIXIR-UK contributes to the Training Platform’s coordination and supports the development of training to address key skill gaps amongst UK scientists. As part of this work it acts as a conduit for nationally-important bioinformatics training resources to promote their activities to the ELIXIR community. ELIXIR-UK also leads ELIXIR’s flagship Training Portal, TeSS, which collects information about a diverse range of training and makes it easily accessible to the community. ELIXIR-UK also works with others to provide key digital skills training, partnering with the Software Sustainability Institute to provide Software Carpentry training to the ELIXIR community and to establish the Data Carpentry initiative, and taking a lead role amongst national stakeholders to deliver the StaTS project – a coordinated effort to drive engagement with training in statistics.",
"keywords": [
"ELIXIR-UK",
"Training",
"Bioinformatics",
"Genomics",
"Metabolomics",
"Structural Bioinformatics",
"Impact",
"TeSS"
],
"content": "Introduction\n\nELIXIR-UK is at the forefront of bioinformatics training provision, across the UK and in the wider ELIXIR training programme. At the national level, ELIXIR-UK has surveyed bioinformatics training needs, has identified training gaps and developed strategies to address them. This resulted in the establishment of successful national training initiatives (see “ELIXIR UK training in the UK” section), focusing on up-skilling the UK research community, and developing a coherent bioinformatics training programme across the UK, coordinating the activities of a growing network of bioinformatics training centres. Three of these centres have also been recognised as ELIXIR-UK resources, through an open and transparent mechanism for identifying resources suitable for contribution to ELIXIR1: these are the Bioinformatics Training programme of the University of Cambridge; the Birmingham Metabolomics Training Centre; and the Edinburgh Genomics training programme. Together, these centres offer in excess of 150 high-quality practical bioinformatics training courses each year, focusing on a wide range of topics, from basic bioinformatics skills to advanced data analysis; over 2,500 people (largely UK-based scientists) are trained every year by these three centres combined. 
Alongside ELIXIR’s flagship Training Portal, TeSS, which is also developed and run by ELIXIR-UK, these three training centres are a major contribution from the Node to the ELIXIR-wide training programme, and are committed to adhering to ELIXIR’s standards for quality, discoverability and interoperability.\n\nAdditionally, ELIXIR-UK has played an ELIXIR-wide strategic role in training by co-leading and coordinating, together with ELIXIR-NL and ELIXIR-CH, the ELIXIR Training Platform [Prof Ponting (Lead) and Dr Hendricusdottir (Coordinator), from 2014 to 2017; and currently Dr Bellis (Coordinator)], which aims to build a sustainable training infrastructure across Europe, and deliver training based on the evolving needs and requirements of the scientific community (see “ELIXIR-UK’s role in the ELIXIR Training Platform” section). ELIXIR-UK has also been involved in establishing and chairing the ELIXIR Training Coordinators Group (TrCG, Dr Hendricusdottir). The TrCG comprises training representatives from each of the ELIXIR Nodes, and plays an important role in coordinating ELIXIR-wide training efforts, leading the implementation of ELIXIR’s training strategy across Europe. This report provides an overview of ELIXIR-UK’s role in bioinformatics training at the national level and across ELIXIR.\n\n\nELIXIR-UK training in the UK\n\nPlease note that the activities listed in this section have been mapped to the original timetable of activities from the BB/L005069/1 grant (Supplementary File 1).\n\nAt the start of the ELIXIR-UK activities, a scoping exercise was undertaken (June to October 2014, Dr Larcombe), which involved working with UK sector leads to consider the training needs of their communities, and carrying out an industry survey of skills needs (Supplementary File 1 | Year 1 - T2, T3). 
The result of this activity was the identification of 5 priority areas (Supplementary File 1 | Year 1 - T2, T3, T5): Core Bioinformatics Skills, Metabolomics, Structural Bioinformatics, Clinical Genomics, and Applied Genomics. The rationale for prioritising these 5 thematic areas came from community input and training-gap analysis. Additionally, the emergence of Core Bioinformatics Skills, representing a broad base of quantitative and computational skills, also arose from responses to the ELIXIR-UK industry survey, and trends emerging from the BBSRC/MRC vulnerable skills report and the Global Organisation for Bioinformatics Learning, Education & Training (GOBLET) survey.\n\nThe complete results of the industry survey are available from the ELIXIR-UK website, but some key indicators are presented here:\n\n\n\nAround 70% of bioinformaticians would like training in statistics and data-analysis methods, with a specific focus on sequencing and genomics.\n\nSimilarly, around 60% of wet-lab biologists would like to acquire skills in data visualisation, data manipulation and general statistics.\n\nInterestingly, the skills desired by wet-lab scientists are also those that bioinformaticians (70–80%) consider to be the most crucial competencies.\n\nThe majority of wet-lab scientists (74%) have no programming experience, and 60% perform their data analysis in Excel.\n\nWhen considering the use of statistical methods, 65% are not confident with statistics, with a small percentage of respondents not even sure what statistical knowledge they should have (6%).\n\nAlthough most respondents do have a bioinformatician with whom they can collaborate, 34% do not have a bioinformatician/statistician to whom they can turn for support.\n\nELIXIR-UK has been addressing the 5 priority training areas listed above through:\n\n\n\nCoordinating a combination of named ELIXIR-UK training centres;\n\nDeveloping the ELIXIR training portal, TeSS, which provides a snapshot of ELIXIR’s training 
landscape by making training (events, materials, etc.) discoverable and accessible;\n\nCollaborating with other institutes and training initiatives;\n\nWorking with other infrastructure projects; and\n\nDeveloping new projects to fill existing training gaps.\n\nHow this has developed is outlined in the following sections for each of the 5 priority areas.\n\nThis area of the ELIXIR-UK training plan closely aligns with skills needs established by the BBSRC/MRC vulnerable skills report, and the skills gaps observed across ELIXIR and beyond. Increasing training in fundamental computational and quantitative skills is an area in which ELIXIR-UK has been very successful, with established partnerships with both Software Carpentry (SWC) and Data Carpentry (DC), and taking a leading role in the development of a national training strategy in statistics.\n\nSWC/DC training. Through its partnership with the Software Sustainability Institute (SSI), ELIXIR-UK has initiated and coordinated the provision of SWC/DC training across ELIXIR. These courses teach the fundamental skills necessary for data manipulation and best practices for reproducible research. Over the past 3 years, more than 50 workshops have been organised in the UK, aiming to teach researchers with minimal or no computational skills some basic data-manipulation and software-development techniques to help them improve or speed up their research. Feedback from course participants is extremely positive, and the topics covered by SWC/DC training are considered fundamental for all researchers. ELIXIR-UK has had a pivotal role in establishing SWC/DC training programmes within ELIXIR during the 2015 Pilot project, and helping to build the capacity within Nodes for sustaining these programmes independently. 
In 2016, ELIXIR began drafting a collaboration agreement with the SWC/DC Foundations, which will cover the development of common training materials, the coordination of training workshops for researchers, and the construction of a community of certified ELIXIR trainers. ELIXIR-UK will continue to provide support to this coordination effort, and the SSI is now working on establishing an SSI in Europe.\n\nIn the future, ELIXIR-UK wishes to expand the delivery of SWC/DC courses across the UK, by organising courses in locations where these have not been run before, to increase training capability in the UK. The collaboration agreement between ELIXIR and the SWC/DC Foundations will only allow for the delivery of two workshops per year at each ELIXIR Node, which would not be sufficient to meet the UK Institutions’ demand for such training. Some of the remaining funds from the original ELIXIR-UK BBSRC grant (BB/L005069/1) will enable this, while additional funding opportunities are being explored to continue sustaining this training provision in the long term.\n\nStatistics training. Over the last few years, several professional bodies, societies, industry bodies and the Research Councils have surveyed skills requirements, with statistics consistently being highlighted as an acute need. Specifically, from the ELIXIR-UK skills survey distributed to industry wet-lab biologists, 63% of respondents were not confident in their use of statistics. Because of this trend, ELIXIR-UK contributed to the development of the Statistics Training Signposting project, and has taken the lead in promoting statistics skill schools, helping researchers to: (i) realise that statistics is something that everyone should be able to do, and (ii) acquire a basic skill-set to enable them to learn “practical” statistics. BBSRC is currently funding the development of statistics skill schools, in partnership with ELIXIR-UK, and the dissemination of training materials through online resources. 
This activity is led by the University of Cambridge (Dr Rustici). The first course from this project was run in 2016, with financial support from Cancer Research UK, and resulted in extremely positive feedback. Additional courses will run in Cambridge between 2017 and 2019, with support from a BBSRC STARS award (Dr Rustici). Course materials will be disseminated online, made discoverable through TeSS, and linked to follow-up training; this project also relies on additional financial contribution from the MRC, plus training materials from AstraZeneca. Statistics courses will also be added to the DC training provision, to link these initiatives and increase sustainability.\n\nThe University of Oxford (Dr Sansone) will contribute to statistics training with the community-driven STATistics Ontology (STATO), which is being developed also as a didactic tool. STATO aims to cover processes such as statistical tests, their conditions of application, and information needed or resulting from statistical methods, such as probability distributions, variables, spread and variation metrics. STATO also covers aspects of experimental design, and description of plots and graphical representations commonly used to provide visual cues of data distribution or layout, and to assist the review of results. Funds will be sought to enhance the didactic aspect of STATO under the UK Node activities.\n\nStatistics has also been identified as an area of need in other ELIXIR Nodes, so the statistics training developed by ELIXIR-UK will also contribute to and inform the wider ELIXIR training activities in this area.\n\nMetabolomics was quickly identified as an area of growing need: ELIXIR-UK (University of Birmingham: Prof Viant, Dr Dunn, Dr Weber and Dr Winder) carried out a metabolomics community survey to assess resource usage, researcher base, and skills needs in partnership with the international Metabolomics Society. 
This activity highlighted several important community needs, and specifically demonstrated the sector's rapid growth, and the resulting large number of researchers new to this subject, who therefore need training. A review of this survey is published2, and a summary of all responses is available online.\n\nAn early reaction to this was to facilitate funding and knowledge exchange (Dr Larcombe) between ELIXIR-UK partners (QMUL, Prof Bessant; and Birmingham) to maximise the impact of BBSRC MTP funds for developing proteomics and metabolomics bioinformatics training, with resulting courses being developed collaboratively. To extend its reach, the metabolomics bioinformatics course is being operated online as a Small Private Online Course (SPOC). The course was first run in February 2017, and 51 trainees completed the course from the UK, Europe and Asia. Additionally, observations from this survey formed part of the rationale for the proposal and development of the Birmingham Metabolomics Training Centre, alongside the existing operation of two UK metabolomics cores: the Phenome Centre in Birmingham (through a £5M MRC-funded grant) and the NERC Metabolomics facility (NBAF-B). This was a significant development, and the Birmingham Metabolomics Training Centre is now a named ELIXIR-UK resource, offering several annual face-to-face, hands-on courses, a well-subscribed MOOC, through FutureLearn, and a SPOC focused on computational processing and analysis of metabolomics data. Industry (Thermo Fisher Scientific and Waters Ltd.) has recognised the importance of this training centre, and has provided funds (£1M) for scientific instrumentation and software to enhance training activities and capabilities for the trainees.\n\nThe centre has taken an important lead in the development of the European metabolomics training initiative (EmTraG), which was formed in 2016 with the support of ELIXIR-UK. 
EmTraG was created to address a pressing need to harmonise the rapidly expanding portfolio of metabolomics training courses in Europe, and improve its scientific coverage, geographical reach, quality and impact; ultimately, to empower the next generation of analytical, computational and applied metabolomics scientists. EmTraG seeks close ties with the ELIXIR Training Platform at a European level, and the Metabolomics Society at an international level. It is intended that www.EmTraG.eu will serve as the principal European portal for metabolomics training with close links to other portals, including TeSS.\n\nWhether ELIXIR-UK will continue to play a role in delivering further skills training in this area is unclear, given the resources currently in place in Birmingham. The UK Node will continue to seek funding in order to support the Birmingham Training Centre as a Node resource, and play a key advocacy role to provide routes to training at the European level. Some of the remaining funds from the original ELIXIR-UK BBSRC grant (BB/L005069/1) will be used to: (i) fund bursaries to enable BBSRC and MRC PhD students to attend face-to-face training courses, covering all of the course fee, and (ii) continue to operate an online course focused on metabolomics data processing and statistical analysis.\n\nThe field of structural bioinformatics, including both macromolecular structure and small chemical structure-related bioinformatics, is critical to translational medicine and the development of new therapeutics/medicines. The UK has a strong and historic community in this field, covering a great deal of expertise. 
Correspondingly, there are some key tools that have been developed, and for which the availability of training would be beneficial.\n\nThe early stages of ELIXIR-UK activity in this sector brought together members of this community to define the scope of training need, and to discuss training approaches for tackling complex concepts and methodologies (UCL, Prof Orengo). This process identified workflow-based approaches as an ideal way to present complex procedural tasks in structural bioinformatics, both as a high-level depiction of methodologies, and as a signpost to available training materials for distinct experimental/software stages. Unfortunately, the lack of funding made it impossible to implement these training workflows or develop face-to-face courses, with the exception of a small number of training activities (two events per year) in the analysis of protein structure data, which are run at UCL/EMBL-EBI and University of Cambridge.\n\nThe promotion of workflows as a high-level training “map” has nevertheless influenced the development of the ELIXIR-UK training portal, TeSS (see below); however, additional funding is needed to consolidate these activities and implement structural bioinformatics workflows in TeSS. Some of the remaining funds from the original ELIXIR-UK BBSRC grant (BB/L005069/1) will be used to implement these workflows, to provide training in the use of 3D structures to predict the impacts of genetic variations.\n\nIn 2014, a need for training in the field of clinical genomics became clear, given the aspirations of various groups to further the implementation of genomic medicine at a large scale. Principal amongst those groups was Genomics England, as it rapidly moved towards the start of the UK national 100,000 genomes project. 
As such, ELIXIR-UK felt that the development of training in this area was of critical importance to facilitate the clinical use, and interpretation, of genomics data.\n\nSeveral universities across the UK are playing a strategic role in developing Clinical Bioinformatics training, particularly through collaboration with Health Education England (HEE) and Genomics England, focusing on training the healthcare workforce to make use of genomic data for patient care. Several UK Universities have recently set up MSc courses in Genomics Medicine, and have developed online learning resources in Clinical Bioinformatics, such as the MOOC from the University of Manchester. In addition, HEE is improving its Scientific Training (STP) and Higher Specialist Scientific Training (HSST) programmes in clinical bioinformatics and clinical genomics by incorporating SWC/DC courses as part of the official curriculum, to ensure scientists have the right training in data management and analysis, and computational skills.\n\nIt is unclear how these MSc programmes will continue to be funded once the initial HEE sponsorship finishes, but discussions are taking place to ensure that clinical bioinformatics training provision continues in the long term. ELIXIR-UK (Dr Rustici) is working in partnership with HEE to establish best practices in clinical bioinformatics training, and to delineate a core clinical bioinformatics curriculum.\n\nTwo training centres (University of Cambridge and Edinburgh Genomics) heavily contribute to the provision of training in applied genomics. Courses on the analysis of High-Throughput Sequencing (HTS) data represent the largest subgroup of training events run every year at these two sites, and cover a wide spectrum of sequencing applications (including whole genome sequencing, RNA-seq, ChIP-seq, methylation, DNA-seq, single cell RNA-seq, small-RNA-seq, etc.). 
The commitment from both centres to continue providing training in this area remains very high.\n\nIn this context, ELIXIR-UK has been collaborating with the Global Organisation for Bioinformatics Learning, Education & Training (GOBLET)3 on developing guidelines to enable training material sharing, dissemination and reuse. Common standards for describing training materials have been proposed and applied to the curation of existing HTS training materials4, during two face-to-face meetings of trainers from the HTS and metagenomics communities. The first such workshop was sponsored by ELIXIR-UK. Additional curation events are planned in the near future; see the ‘External liaisons’ section for more details.\n\n\nELIXIR-UK’s role in the ELIXIR Training Platform\n\nELIXIR-UK is currently responsible for three core activities within the ELIXIR Training Platform: (i) development of measures of training impact and quality assessment, (ii) development of the Training portal, TeSS, and (iii) coordination of the ELIXIR Training Platform. These activities are dependent on ELIXIR-EXCELERATE funding, which will terminate in 2019. Between 2014 and 2017, the Training Platform leadership and coordination (Prof Ponting and Dr Hendricusdottir) were funded by both ELIXIR-UK and ELIXIR-EXCELERATE.\n\nThe training impact and quality assessment work aims to develop a framework for capturing and reporting on the impact of the ELIXIR training programme as a whole, and to demonstrate the value gained from the time and money invested in training initiatives. As part of this work, a core set of Key Performance Indicators (KPIs) has been identified, and all ELIXIR training providers are in the process of implementing mechanisms (primarily through short- and long-term surveys) for collecting these KPIs for the training events taking place in their Nodes. Since February 2017, data from over 40 training events has been captured, with contributions from nine ELIXIR Nodes. 
This process will be iterated until an appropriate set of KPIs is identified that allows us to “measure” how participating in an ELIXIR training event has influenced how trainees work. This activity is co-led by the University of Cambridge (Dr Rustici, Dr Bellis) and EMBL-EBI (Dr Morgan). Collection of data through short-term surveys is being complemented with data collected through face-to-face interviews with course participants; these will be used to capture qualitative information, such as the improvement of understanding of a topic, or a subsequent change in professional development. Selected participants will be interviewed at the time of the event, and then at a defined point in the future, normally after 6 months to 1 year. Some of the remaining funds from the original ELIXIR-UK BBSRC grant (BB/L005069/1) will be used to run interviews in Cambridge, Birmingham, Edinburgh and at another ELIXIR Node.\n\nImpact assessment is currently a high-priority within ELIXIR, not just for training, but for all of its services and resources; the approach being developed for measuring the impact of training events is now being used to assess the impact of other event types, such as BYOD workshops and Industry events. It is unclear how impact assessment will be supported after the termination of the ELIXIR-EXCELERATE grant.\n\nThe ELIXIR Training Portal, TeSS5, is ELIXIR’s flagship training service and one of its three portals. It provides a snapshot of ELIXIR’s training landscape by making training (events, materials, etc.) from all ELIXIR Nodes searchable and more easily discoverable in a single, central location. 
Significant effort has been put into providing information in ways that support user decisions and choices, allowing organisation of materials and events into training packages (groups of resources that address a particular training topic), and training workflows (navigational tools that visualise learning steps and link to resources specific to the training tasks). Work is ongoing to develop links with ELIXIR’s e-learning resources. To date, TeSS includes more than 6,000 training resources (including upcoming and past training events, and training materials) automatically aggregated from 30 content providers, including ELIXIR Nodes and third-party organisations, such as SWC/DC, GOBLET, Coursera, FutureLearn, etc.; TeSS content also feeds into other dissemination services, such as iAnn widgets6 and Biocider7. Training resources within TeSS are linked to content from other ELIXIR registries, such as tools in the bio.tools registry8, and databases, standards and policies from Biosharing9. TeSS is jointly developed by the University of Manchester (Prof Goble and Prof Attwood) and the University of Oxford’s e-Research Centre (Dr Sansone). TeSS is aligning its efforts with comparable initiatives worldwide (see the “External liaisons” section), and the TeSS platform is to be adopted by EMBL-Australia.\n\nFuture of TeSS. Work on TeSS was initiated in the BBSRC Training award, and continues in the ELIXIR-EXCELERATE project. TeSS is a Node Service provided by ELIXIR-UK to ELIXIR, and will be subject to a Service Delivery Plan (in preparation). It is one of the three major, flagship Portals of ELIXIR. After 2019, the ELIXIR-EXCELERATE award will conclude; however, TeSS will still need to be supported. 
The expectation of ELIXIR is that this is an obligation to be shouldered by the UK Node, as other National Nodes support key resources, but national funding has yet to be secured.\n\nELIXIR-UK has played an ELIXIR-wide strategic role in training by co-leading and coordinating, together with ELIXIR-NL (Dr van Gelder) and ELIXIR-CH (Dr Palagi), the ELIXIR Training Platform [Prof Ponting (Lead) and Dr Hendricusdottir (Coordinator), from 2014 to 2017; and currently Dr Bellis (Coordinator)]. Dr Rustici is currently contributing to the Training Platform leadership until ELIXIR establishes a procedure for the election of Platform leaders and new leaders are elected.\n\nELIXIR-UK’s leading role in this Platform has had an impact on the UK training strategy as well as the ELIXIR-wide strategy, and has helped shape the ELIXIR training programme, establishing its aims and priorities, and developing activities/resources that address the training needs of the ELIXIR scientific community.\n\nAlso, ELIXIR-UK (Dr Larcombe and Dr Hendricusdottir), supported by the Swiss (Dr Palagi) and Dutch (Dr van Gelder) Nodes, established the ELIXIR Training Coordinator Group (TrCG), initially representing 15 ELIXIR Nodes, which grew to 21 members, representing all of the ELIXIR Nodes, in 2017. Dr Hendricusdottir was elected Chair of the TrCG and coordinated/co-led this group until 2017. The ELIXIR TrCG was established to coordinate training efforts across ELIXIR, sharing best practices in training and representing the interests of national Nodes in pan-European activities. The TrCG made an important contribution to the H2020 ELIXIR-EXCELERATE grant proposal, in which Dr Hendricusdottir coordinated the training work package (WP11). 
The TrCG is now an integral part of the ELIXIR Training Platform.\n\nIn addition, ELIXIR-UK (Dr Hendricusdottir) co-wrote the ‘Coordinator guideline’ and ensured that the TrCG worked alongside the ELIXIR Technical Coordinator Group (TeCG) and had as much weight in the governance as the TeCG. Dr Hendricusdottir also contributed to and co-wrote many ELIXIR documents, including annual reports, the industry strategy, the Training Platform Road map and the ELIXIR website.\n\n\nExternal liaisons\n\nForming close ties with third-party organisations is an important outreach activity for ELIXIR as a whole, as it helps to bring wider perspectives and to avoid costly duplication of effort. This has been particularly important for the Training Platform, which has developed joint training strategies with several organisations, including (i) GOBLET, a foundation dedicated to providing a global, sustainable support and networking infrastructure for bioinformatics trainers and trainees, and (ii) the NIH-funded Big Data to Knowledge (BD2K) Training Coordinating Centre (TCC). Both of these agreements were initiated and coordinated by ELIXIR-UK (Dr Hendricusdottir). ELIXIR-UK has also pump-primed the Bioschemas initiative for training material specifications (in collaboration with the Pistoia Alliance), which has now developed into a flagship project for the ELIXIR Interoperability, Data and Tools platforms, and spawned an international community activity under the W3C.\n\nThe GOBLET-ELIXIR joint training strategy sets out four key areas for collaboration:\n\n1. work together on ‘train-the-trainer’ and ‘train-the-researcher’ activities;\n\n2. jointly explore training 'accreditation' mechanisms;\n\n3. share best practices and expertise on professionalising bioinformatics training; and\n\n4. 
form close links between ELIXIR's TeSS and GOBLET's training portal10.\n\nIn particular, GOBLET and ELIXIR-UK have been collaborating on developing best practices and guidelines to enable training material sharing, dissemination and reuse. As mentioned above, common standards for describing training materials have been proposed and applied to the curation of existing HTS training materials during two face-to-face meetings of trainers from the HTS and metagenomics communities. The first workshop, sponsored by ELIXIR-UK, resulted in the creation of a Git repository for sharing annotated materials, which can now be reused, modified or incorporated into new courses4. All curated materials are discoverable through the TeSS and GOBLET portals. This work has helped shape the Bioschemas specifications for training materials.\n\nThe establishment and adoption of best practices for training materials is of fundamental importance, as it ensures that materials are properly described and easily comparable, for the benefit of users. Therefore, ELIXIR-UK will continue to collaboratively refine standards for training material dissemination, and apply them to a growing body of materials, starting with the EXCELERATE/GOBLET/GTN hackathon on Galaxy training material re-use that took place at the University of Cambridge in May 2017. This work will ultimately be beneficial to TeSS, enhancing material browsability and discoverability.\n\nBioschemas aims to leverage off-the-shelf approaches to structured Web mark-up so that providers can easily publish metadata that search engines and metadata harvesters can extract.\n\nBioschemas defines sets of properties for life-science training entities, such as training materials and events, and data entities, such as data repositories, data-sets, beacons (infrastructure to allow genomic data centres to make data discoverable), samples, phenotypes and protein annotations. 
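As a rough illustration of the kind of mark-up Bioschemas promotes, the sketch below builds schema.org-style JSON-LD for a training event in Python. The property names are taken from the generic schema.org "Event" vocabulary and the event details are hypothetical; the official Bioschemas profiles specify additional, community-agreed properties beyond this minimal set.

```python
import json

def event_markup(name, start_date, location, keywords):
    """Build a JSON-LD dictionary describing a training event.

    Property names come from the generic schema.org "Event" type;
    the official Bioschemas training profiles refine this vocabulary.
    """
    return {
        "@context": "https://schema.org",
        "@type": "Event",
        "name": name,
        "startDate": start_date,  # ISO 8601 date string
        "location": location,
        "keywords": ", ".join(keywords),
    }

# Hypothetical course, used purely for illustration.
markup = event_markup(
    "Introduction to Metabolomics",
    "2018-03-01",
    "Birmingham, UK",
    ["metabolomics", "data analysis"],
)

# A provider would embed this in the event's web page inside a
# <script type="application/ld+json"> element, where metadata
# harvesters (such as TeSS) and search engines can pick it up.
print(json.dumps(markup, indent=2))
```

Because the mark-up lives in the page itself, providers publish metadata once, in place, rather than pushing it to each registry separately.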
The Bioschemas project is a high-profile activity for the ELIXIR Interoperability platform, with major input from ELIXIR-UK institutions, including the University of Manchester (Prof Goble), the University of Oxford (Dr Sansone), Heriot-Watt University (Dr Gray) and the Earlham Institute (Mr Horro and Miss Artaza). The work crosses over to the Data and Tools platform, and is part of a wider initiative drawing in major search-engine projects (e.g., Google) and the NIH BD2K BioCADDIE Centre.\n\nBioschemas was conceived by ELIXIR-UK and ELIXIR-Hub, and launched at the ELIXIR-UK 2015 All-Hands meeting, to overcome difficulties in feeding metadata from third-party training resources into TeSS.\n\nIn collaboration with the BD2K TCC, the Training Platform has developed the ELIXIR Training Platform and BD2K TCC Training Collaboration, which outlines five areas for collaboration between these two initiatives:\n\n\n\n1. synergistic development of training portals: ELIXIR’s TeSS and BD2K’s BigDataU;\n\n2. development of core competencies in bioinformatics;\n\n3. organisation of data science summer schools in collaboration with RDA-CODATA;\n\n4. collaboration with GOBLET; and\n\n5. 
fostering international interactions and frameworks for developing standards for curating and disseminating educational materials.\n\nThe interaction of ELIXIR, GOBLET and BD2K TCC will be of direct benefit to the overall data science communities, as well as investigators seeking a broad basis for training in large-scale biomedicine around the world.\n\nFuture collaborations between the ELIXIR Training Platform and BD2K TCC were discussed at the “International Workshop on Data Science Training: Standards, Schemas, and Successes” (May 24–26), which brought together training experts from Europe, North America, South Africa and Australia to join forces in key areas involving (i) training metadata standards, (ii) sharing content, resources and tools, (iii) training workflows, and (iv) sharing software, APIs and technical know-how/expertise.\n\nAdditional funds\n\nELIXIR-EXCELERATE is a four-year project (due to end in 2019) that was awarded to accelerate the implementation, and support early operation, of the ELIXIR research infrastructure (see the first periodic report for more in-depth information). The Training Work Package (WP11) is co-led by ELIXIR-CH, ELIXIR-NL and ELIXIR-UK (Prof Ponting, from 2014 to 2017, and currently Dr Rustici). As already mentioned, several ELIXIR-UK training resources and initiatives are dependent on ELIXIR-EXCELERATE funding, including Bioschemas, TeSS and the impact/quality assessment work. It is currently unclear what funding will be available after 2019 to continue supporting these activities and ensure their sustainability.\n\nFuture plans\n\nThe training landscape in the UK has changed significantly since ELIXIR and ELIXIR-UK began their activities. We now have a wide network of universities, training centres and training initiatives across the UK that deliver impactful training programmes, addressing the ever-changing training needs of the UK scientific community. 
Three of these centres have been named ELIXIR-UK Node resources, and contribute to the ELIXIR-wide training programme. Running these programmes requires training providers to be rooted in the scientific community, and to run training-gap analyses on a regular basis to ensure that training is developed and delivered in a timely way. Training centres need to work together and in a coordinated fashion to ensure the provision of a national training programme that is coherent and comprehensive. In many cases, the demand for a particular type of training (SWC, DC and analysis of HTS data, to name just a few) greatly exceeds the capability of a single training centre to meet such demand, in terms of both personnel and financial resources. Although this type of training is recognised as fundamental, funding to support training activities is extremely limited, often sacrificed to support research activities, and training centres have to operate under cost-recovery models. The situation is exacerbated by the fact that demand is growing, and more training providers – universities in particular – are turning to bioinformatics training centres to fill the bioinformatics training gaps in their undergraduate- and graduate-training programmes11. Coordination at a national level is therefore of paramount importance to ensure that demand is met, the offering is diversified, and the resources needed to sustain bioinformatics training at the national level are identified and financially secured.\n\nThe ELIXIR-UK Node is currently seeking funding to hire a new Training Coordinator, to continue providing both training coordination at the national level and training outreach to external training initiatives. 
Appropriate funding is required not only for coordinating training activities across the UK, but also to (i) consolidate the work already done, (ii) enable the development of new training activities, (iii) support existing training centres and their initiatives, (iv) maintain our international commitments to support the ELIXIR Training Portal, and (v) participate in an influential way in the ELIXIR Training Platform.\n\nSpecifically, appropriate funding (supporting key personnel) needs to be secured to sustain the following fundamentally important activities:\n\n\n\n1. Sustain the development and maintenance of TeSS: as previously indicated, this currently relies on the ELIXIR-EXCELERATE award, which terminates in 2019. National funding needs to be secured to sustain this flagship ELIXIR portal.\n\n2. Sustain the activities of the ELIXIR-UK Training centres (Cambridge, Birmingham and Edinburgh). As presented above, these deliver crucial training in priority areas, reaching a significant number of users (>2,500) every year, but they are operating under extremely limited budgets and cost-recovery models, which are insufficient to ensure long-term sustainability. Moreover, as they are Node contributions to the ELIXIR-wide training programme, they must adhere to ELIXIR’s standards for quality, discoverability and interoperability, fulfilling their obligations to ELIXIR, without receiving any support from it. In some cases, the centres are the sole providers of training on a particular topic. For example, metabolomics training provision is extremely limited across Europe, and the Birmingham Metabolomics Training Centre has developed several successful face-to-face courses. These have high running costs and, in order to make them affordable, NERC is funding some bursaries, allowing NERC PhD students or early-career scientists to attend a course for free, while the running costs are recovered from NERC. 
This model has been extremely efficient and appropriate, as the training becomes free to those in need of it, and the metabolomics community's knowledge grows. We would recommend that other Research Councils consider implementing the same approach for \"their\" early-career researchers. Additional funding solutions need to be identified to sustain the provision of metabolomics training.\n\n3. Continue playing an influential role in the ELIXIR-wide training programme. ELIXIR-UK contributes key services to the ELIXIR training programme, leads the training impact assessment work, and established the ELIXIR Training Coordinator Group, to ensure the provision of a coherent bioinformatics programme across ELIXIR Nodes. The TrCG is a unique feature of the ELIXIR Training platform, providing great support in running its activities; it represents a vital forum for Training Coordinators across ELIXIR to discuss collaborations, align initiatives and leverage each other’s expertise. ELIXIR-UK needs to be represented on the TrCG; otherwise, it will miss out on this crucial networking opportunity. This role is currently fulfilled by Dr Bellis and Dr Rustici (University of Cambridge), but this is not a sustainable solution.\n\n4. Coordinate the provision of bioinformatics training across existing, and new, ELIXIR-UK training centres and initiatives. ELIXIR-UK will launch another ‘Expression of Interest’ exercise for new resources to become part of ELIXIR-UK in 2018, and we foresee that new training centres will apply to join. Consequently, the coordination effort will need to expand to include such centres, as well as training initiatives like the BBSRC Bioinformatics and Biomathematics Training Hub, a collaborative project to coordinate the development and sharing of training materials and expertise across the UK’s National Institutes of Bioscience (NIB), which is now seeking partnership with ELIXIR-UK to sustain its activities and make them discoverable through TeSS.\n\n5. 
Ensure that appropriate training solutions (including training materials and face-to-face training events) are put in place for showcasing ELIXIR-UK resources. With the recent increase in the number of recognised ELIXIR-UK resources (including data, tools and interoperability resources), we want to ensure that these are included and represented in training activities. In some cases, this might involve facilitating activities in which ELIXIR-UK resources are represented; in other cases, it could mean providing guidance on developing new training solutions.\n\n6. Facilitate and support the development of training materials/solutions in areas where ELIXIR-UK has strong expertise, including but not limited to:\n\n• Data management and stewardship: this has been identified as a high priority area by the ELIXIR Training Platform, and significant expertise is available in the UK Node in this domain: e.g., FAIRDOM (led by Prof Goble), which supports ISA-based project data and model management, and is partnering with ELIXIR-NO, ELIXIR-DE and ELIXIR-NL. A proposal to develop collaborative training solutions in this area has been drafted between the Oxford e-Research Centre (BioSharing, Dr Sansone), TeSS and the University of Cambridge (Dr Rustici), but no funds are currently available to enable this;\n\n• Structural bioinformatics training workflows: structural bioinformatics expertise in the UK Node is significant, and the desire to contribute to training activities is high; so far, however, this has not been possible owing to lack of funds and dedicated resources. Collaborative work is planned to implement structural bioinformatics workflows in TeSS (with Prof Orengo, UCL), but additional funding is needed to consolidate these activities. These workflows are just one example of those that could be implemented in TeSS; TeSS is actively pursuing the inclusion of a growing number of workflows, and the means to link them to external resources.\n\n7. 
Run SWC/DC courses at an increasing number of UK sites, to satisfy a growing demand for this basic training. The SSI’s role in establishing the SWC/DC ELIXIR training programme has been pivotal, but the SSI is currently struggling to continue providing the coordination needed at the national level, given the extreme popularity of this training and the growing demand. Resources are needed to ensure that SWC/DC training is delivered to the UK scientists who need it, to support the organisation of courses, and to increase the pool of UK trainers who can deliver this type of training, thereby increasing capacity. In the long term, this approach might enable the inclusion of such core training in a growing number of teaching/training programmes.\n\n8. Actively engage the UK scientific community. Resources are needed to engage with the UK community at large, to ensure that training activities are tailored to its needs. Regular gap analysis must be carried out to understand the training gaps across the rapidly evolving research landscape. This is a vital process, to ensure that training programmes are kept up to date. Community-engagement activities should target a diverse range of stakeholders, including academic and industry partners, both as consumers of and contributors to training activities. The ELIXIR Industry Advisory Committee has highlighted industry engagement as a priority for the coming year. To get these activities kick-started, ELIXIR-UK will host an SME event in January 2018 (Cambridge, UK), and will sponsor the UK Bioinformatics Core Facilities meeting, a nascent initiative that is trying to coordinate the activities of core facilities across the UK, and with which ELIXIR-UK wishes to engage.\n\n9. Continue leading the development of training best practices, to increase training quality, and help users to discover training solutions that best meet their needs. 
As the number and type of training resources increase, so does the difficulty of finding the training that is most appropriate for trainees’ skill levels and expectations. Mapping existing training to bioinformatics core competencies would not only help trainees to choose the training that best fits their needs, but would also provide clear learning paths, signposting what training activities they should attend over a certain period of time to acquire the competencies needed to carry out their work effectively. The ELIXIR Industry Advisory Committee (IAC) has identified this as an area that should be developed in TeSS. This exercise would also contribute to refining existing core competencies (ISCB)12 and expanding them as needed. ELIXIR-UK is currently in the process of proposing an Implementation Study focusing on developing solutions for signposting training through core competencies and learning paths.\n\nAdditionally, ELIXIR-UK needs to play a leading role in the development of national, coherent training curricula in priority areas such as clinical bioinformatics, data management and stewardship, etc. In doing so, it should establish strategic partnerships with relevant initiatives, such as the MRC Health Data Research UK for clinical bioinformatics.\n\n\nConclusions\n\nSince 2014, a significant amount of work has been done within ELIXIR-UK to coordinate, strengthen and expand bioinformatics training across the UK and ELIXIR as a whole. Crucial partnerships and collaborations have been established with several external initiatives, fostering interactions with international partners, and aligning efforts on a global scale.\n\nTraining remains a high priority for ELIXIR-UK and significant effort will be put into securing funding and recruiting a new Training Coordinator to drive training activities, to initiate new ones, and to continue participating in an influential way in the ELIXIR Training Platform.",
"appendix": "Author contributions\n\n\n\nLL was ELIXIR-UK Training Coordinator for Research Science (06/2014–05/2017); RH was ELIXIR-UK Training Outreach Manager, the Chair of ELIXIR Training Coordinator Group, and Training Platform Coordinator (09/2014 – 04/2017); TKA is Professor of Bioinformatics and TeSS Co-investigator at the University of Manchester; FB is TeSS developer at the University of Manchester; NB is TeSS manager at the University of Manchester; LJB is ELIXIR Training Platform Coordinator and Bioinformatics Training Quality/Impact Coordinator at the University of Cambridge; WBD is Director of the Birmingham Metabolomics Training Centre; JMH is ELIXIR-UK Node Coordinator; AN is Training Lead at the Software Sustainability Institute; CO is Professor of Bioinformatics at University College London; BO is Training and Outreach Bioinformatician at Edinburgh Genomics; SAS is Associate Director of the Oxford e-Research Centre and TeSS Co-investigator at the University of Oxford; MT is TeSS developer at Oxford e-Research Centre; MRV is Professor of Metabolomics at the University of Birmingham; CLW is Operations Manager at the Birmingham Metabolomics Training Centre; CAG was ELIXIR-UK Deputy Head of Node (2012-Jan 2016), and is ELIXIR-UK Head of Node, ELIXIR Interoperability Platform Lead, ELIXIR-EXCELERATE Work Package 5 (Interoperability) Lead, TeSS Co-investigator and Professor of Computer Science at the University of Manchester; CPP was ELIXIR-UK Head of Node (2012-Jan 2016) and ELIXIR Training Platform Lead (2014–2017); and GR is ELIXIR-UK Deputy Head of Node, ELIXIR-EXCELERATE Work Package 11 (Training) Lead, ELIXIR-EXCELERATE Impact/Quality Subtask Lead and Bioinformatics Training Manager at the University of Cambridge.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nELIXIR-UK is funded by the Biotechnology and Biological Sciences Research Council, the Medical Research Council and the Natural Environment 
Research Council (grant numbers BB/L005069/1 and BB/P017193/1). The ELIXIR-EXCELERATE project is funded by the European Commission within the Research Infrastructures programme of Horizon 2020 (grant number 676559).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nSupplementary material\n\nSupplementary File 1: Table with original timetable of activities from the BB/L005069/1 grant.\n\nClick here to access the data.\n\n\nReferences\n\nHancock JM, Game A, Ponting CP, et al.: An open and transparent process to select ELIXIR Node Services as implemented by ELIXIR-UK [version 2; referees: 2 approved, 1 approved with reservations]. F1000Res. 2016; 5: pii: ELIXIR-2894. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWeber RJ, Winder CL, Larcombe LD, et al.: Training needs in metabolomics. Metabolomics. 2015; 11(4): 784–786. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAttwood TK, Bongcam-Rudloff E, Brazas ME, et al.: Correction: GOBLET: The Global Organisation for Bioinformatics Learning, Education and Training. PLoS Comput Biol. 2015; 11(5): e1004281. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSchiffthaler B, Kostadima M, NGS Trainer Consortium, et al.: Training in High-Throughput Sequencing: Common Guidelines to Enable Material Sharing, Dissemination, and Reusability. PLoS Comput Biol. 2016; 12(6): e1004937. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBeard N, Attwood T, Nenadic A: TeSS – Training Portal [version 1; not peer reviewed]. F1000Research. 2016; 5(ISCB Comm J): 1762(poster). Publisher Full Text\n\nJimenez RC, Albar JP, Bhak J, et al.: iAnn: an event sharing platform for the life sciences. Bioinformatics. 2013; 29(15): 1919–21. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHorro C, Cook M, Attwood TK, et al.: BioCIDER: a Contextualisation InDEx for biological Resources discovery. Bioinformatics. 2017. 
PubMed Abstract | Publisher Full Text\n\nIson J, Rapacki K, Ménager H, et al.: Tools and data services registry: a community effort to document bioinformatics resources. Nucleic Acids Res. 2016; 44(D1): D38–47. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMcQuilton P, Gonzalez-Beltran A, Rocca-Serra P, et al.: BioSharing: curated and crowd-sourced metadata standards, databases and data policies in the life sciences. Database (Oxford). 2016; 2016: pii: baw075. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCorpas M, Jimenez RC, Bongcam-Rudloff E, et al.: The GOBLET training portal: a global repository of bioinformatics training materials, courses and trainers. Bioinformatics. 2015; 31(1): 140–2. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBrazas MD, Blackford S, Attwood TK: Training: Plug gap in essential bioinformatics skills. Nature. 2017; 544(7649): 161. PubMed Abstract | Publisher Full Text\n\nWelch L, Brooksbank C, Schwartz R, et al.: Applying, Evaluating and Refining Bioinformatics Core Competencies (An Update from the Curriculum Task Force of ISCB's Education Committee). PLoS Comput Biol. 2016; 12(5): e1004943. PubMed Abstract | Publisher Full Text | Free Full Text"
}
|
[
{
"id": "23886",
"date": "14 Jul 2017",
"name": "Michelle D. Brazas",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThis article provides an update on the ELIXIR-UK role in setting up and providing training resources and tools for both the UK and the broader ELIXIR community. It provides a good overview of the activities conducted and work achieved since 2014. The article then also provides an opinion of the funding situation supporting training activities.\nSome general comments for improvement:\nThe title and abstract indicate that the article will be about ELIXIR-UK and its role as lead of ELIXIR Training, and the work it has accomplished both for the UK and ELIXIR in this training sphere. The introduction even ends with an excellent summary statement \"This report provides an overview of ELIXIR-UK's role in bioinformatics training at the national level.\" However, the article scope goes beyond what is indicated by the abstract, and frequently includes funding issues and statements. While I recognize the need to acknowledge funding sources (can be done under Grant Information section) and to highlight funding shortfalls, I am not convinced that the current organization of repeating funding sources and problems in each section is the ideal organization as it appears to be a complaint rather than a constructive opinion. Grouping all funding related content into one section (Future Plans section) would greatly improve the read and tone of the article. 
In fact, splitting the article in 2 would be ideal - one article would be the report, as described by the abstract; and the second article would be the opinion, and could describe the future directions/funding opinions expressed at the end of the current article and throughout each section.\n\nSome references are hyperlinks and others are superscripts to the Reference Section. It would be better to be consistent. For example, the GOBLET survey on page 3 is a hyperlink to the publication rather than being listed in the reference section.\n\nSome resources are missing hyperlinks, particularly the ELIXIR-UK website on page 3. Please add hyperlinks to text missing them. Another example on page 6, for core Key Performance Indicators identified by the group. A hyperlink (or further text on the KPIs) to the KPIs in particular would be helpful to groups looking to implement similar metrics in their own training programs.\n\nWhat is the model for resources used in the statistics training and metabolomics training? Is the training software free or commercial software? How does commercial software align with the mandate of ELIXIR-UK and ELIXIR to train broadly?\n\nPage 5, Clinical Genomics, are there not other institutes like EBI involved in Clinical Bioinformatics training? As a North American, EBI comes to mind as an important training centre in the UK, yet is not discussed in this article. In general, where do these other very visible training groups in the UK fit into ELIXIR-UK?\n\nPage 7, External Liaisons - keep the same order of relationships as presented in the opening paragraph for this section. GOBLET section - the establishment and adoption of best practices is fundamental to what or whom? Bioschemas section - awkward location of the last paragraph. Consider moving up within the section. NIH section - North America was also a training expert brought together here. 
Please add.\n\nPage 8, Future Plans - how is it known that the delivered training programs are impactful? What are the metrics of impactfulness? Is this related to the KPIs? If so, please hyperlink or share the KPIs so that other training groups may benefit from these. Why is the cost-recovery model not sustainable? The point of a cost-recovery model is to ensure sustainability, otherwise the model is wrong.\n\nPage 8-9, Future Plans - in general the future plans section focuses more on the UK activities than the ELIXIR wide activities. Only the ELIXIR-wide TeSS and Coordination activities are described as going forward. Is the training impact and quality assessment work not continuing, and why not?\n\nOverall, this article describes a significant body of work and expresses important opinions on funding its continuation, but would greatly benefit from a reorganization, and possibly a split into two articles - report and opinion.\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Partly\n\nAre arguments sufficiently supported by evidence from the published literature? Yes\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Partly",
"responses": []
},
{
"id": "23696",
"date": "17 Jul 2017",
"name": "Patricia M. Palagi",
"expertise": [
"Reviewer Expertise training",
"education",
"bioinformatics training"
],
"suggestion": "Approved",
"report": "Approved\n\nThis article describes an extensive and comprehensive report of the ELIXIR-UK training resources and the ELIXIR-UK role in defining ELIXIR’s training. It is very useful to other ELIXIR nodes and other countries willing to setup a bioinformatics training program. The manuscript is clear and well written, but it is often read as a long administrative report for funders. Hereafter you will find my suggestions, mainly to reduce redundancy and to improve clarity:\n\nGeneral remarks:\nLong term sustainability for training is a major issue, in UK and in several countries, and it is continuously repeated in this report. Instead of repeating at each session, it could have a much higher impact if it was stated only once, in the future plans only, with a full paragraph/session dedicated only for this.\n\nIt would be useful to give more details on the funding opportunities the authors are envisioning and exploring to sustain training in the long term.\n\nBesides from funding, what are the other issues that ELIXIR-UK are facing regarding training? For instance, number of well-trained trainers and for each areas of expertise?\n\nThe commitment from the expert areas is only explicitly stated in the Applied Genomics session. 
Is community commitment an issue in the other areas of training?\nIn the five thematic areas:\nThe training content in the five thematic areas is of direct interest to other countries and could be better detailed in their respective sections.\n\nIn the Statistics training section, what is taught in the statistics skill schools? Does it go beyond what is covered in the STATO didactic tool?\n\nSupplementary material:\nThe table in the supplementary materials is difficult to understand and does not support the content. There are acronyms that are not explained (TCRS, TOM, etc) and the time span is not anchored in the actual calendar. It would be worth: adding actual time periods, explaining the acronyms (or removing them if they are not useful for the content), explaining the tasks or removing them. This lighter and more self-explanatory version of the table (which could become an image) could then support the content of the manuscript and be inserted within the text, as a summary.\n\nSome minor corrections:\nSome texts are highlighted in bold (ELIXIR-UK has initiated and coordinated the provision of SWC/DC training across ELIXIR; etc), but it seems to be an error (or at least the reason why is not explicit).\n\nIn the Future of TeSS, “to a Service Delivery Plan” is written twice.\n\nIn “Future collaborations between the ELIXIR Training Platform and BD2K TCC were discussed in the upcoming “International Workshop on Data Science Training: Standards, Schemas, and Successes”, May 24–26, “, the word upcoming is outdated.\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Yes\n\nAre arguments sufficiently supported by evidence from the published literature? Yes\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Yes",
"responses": []
},
{
"id": "23698",
"date": "19 Jul 2017",
"name": "Celia W.G. van Gelder",
"expertise": [
"Reviewer Expertise Bioinformatics Training and Education",
"Training Coordination"
],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis paper gives a good overview of the various and important contributions of ELIXIR-UK both to training in the UK node and to the ELIXIR Training Platform as a whole. It also gives insight in the coordination efforts that are needed throughout the UK to achieve a coherent training infrastructure. Throughout the paper it touches several times upon a major challenge that everyone in training and training coordination is facing, which is the struggle to get sustainable funding in place to ensure continuity and sustainability for the training activities. It will be a waste of time and effort if the training infrastructure that has been built will simply stop after specific projects end. It would be interesting to learn a bit more about the ambition of and opportunities for ELIXIR-UK to achieve a national approach to this sustainability challenge. Is the ELIXIR-UK structure helping to achieve this goal?\nThe ELIXIR Training Platform activities are built by coordinating efforts in all the ELIXIR nodes and as such is a combined effort of many countries. It would be good to make this a bit clearer to the reader, e.g. in the section where the ELIXIR Training Platform is introduced. 
In addition, a reference to https://zenodo.org/record/61411 could be included, which gives detailed background information about the setup of the ELIXIR Training Platform.\n\nHere are some of my specific comments and suggestions:\n\nThere is a mention of three ELIXIR flagship portals, but it is not explained which three portals the authors are referring to. This might confuse the reader.\n\nRelated to the TeSS portal it could be useful to give some insight into visitor statistics and also make the distinction between events and materials clearer, as well as the distinction between future and past events and materials. It should be realized that a large number of the 6000 entries mentioned are related to events that happened in the past. As such TeSS represents an important training archive in bioinformatics, and to my knowledge this is the only portal that offers this kind of information.\n\nRelated to the ELIXIR-SWC/DC agreement:\n\nLesson development will definitely take place and will be taken up by the ELIXIR nodes, but this is not an explicit part of the partnership agreement that is currently being prepared for signing.\n\nThe agreement contains workshops and instructor trainings. The agreement entails 2 workshops per ELIXIR node, and not 2 workshops per year per ELIXIR node, as is currently stated.\n\nThe authors could refer to the F1000 paper about the ELIXIR SWC/DC pilot that has now been published. 1\n\nRelated to establishing the GOBLET-ELIXIR collaboration agreement it would be good to mention that Teresa Attwood, who is both Chair of GOBLET, and part of ELIXIR-UK, played a major role.\n\nRelated to the description of the process during WP11 grant writing it would be good to stress that this was a collaborative process of the WP11 leaders and several node representatives.\n\nIn the section on BioSchemas the training implications could have been described more clearly. 
Further, this section could refer to the FAIR (Findable, Accessible, Interoperable, Reusable) principles, where BioSchemas mainly addresses findability.\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Yes\n\nAre arguments sufficiently supported by evidence from the published literature? Yes\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Yes",
"responses": []
},
{
"id": "23885",
"date": "27 Jul 2017",
"name": "Nicola J. Mulder",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis paper provides a comprehensive description of the ELIXIR-UK training activities and plans, though it was unclear whether the aim was to focus on ELIXIR-UK or ELIXIR overall, as the focus changed between these in the paper. It describes a well thought out process for evaluating training needs and attempting to address these. Challenges and future plans are discussed, and the issue of funding is raised repeatedly. The paper is interesting and useful, though I have a few minor concerns raised below.\nThe paper is quite long and I think some fine details could be removed, as the reader would not necessarily be interested in every name of who did what. The continual reference to funding issues is a bit annoying for the reader, these could be discussed once off in a section under challenges.\nOn page 6 there is mention that it is unclear how impact assessment would be supported after the grant ends, but if there is funding for training then it is not much extra overhead to assess the courses. Personal interviews would not be necessary for every course if resources are limited.\nPage 7 - mention of working with BD2K on developing core competencies, is the group also working with the ISCB core competency group? Is the BD2K portal not called ERuDIte now, rather than BigDataU?\nPage 8 line 2: \"...ND2K TCC were discussed in the upcoming..\" doesn't make sense. The meeting has now already taken place. 
Note, the USA was of course also involved among the countries listed.\nFor long term sustainability and addressing the demands, there could perhaps be a stronger focus on training trainers, and potentially live streaming some courses to cover a wider audience.\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Yes\n\nAre arguments sufficiently supported by evidence from the published literature? Yes\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Yes",
"responses": []
},
{
"id": "23887",
"date": "14 Aug 2017",
"name": "Manuel Corpas",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe article \"ELIXIR-UK role in bioinformatics training at the national level and across ELIXIR\" reflects the developing training opportunities that ELIXIR-UK, as part of its training mandate, is performing throughout the UK and ELIXIR-Europe. My inside knowledge as Technical Coordinator for ELIXIR-UK from 2014-1016 may have influenced my review of this paper.\nMy main comments:\nI concur with another reviewer comments that the paper lacks a coherent structure. I do not see a clearly stated ELIXIR-UK’s training strategy and how it is being coherently being fleshed out throughout the article. Instead, the reader is presented with a chronological description of activities according to the different grants that support the training work of the node. Different sections would benefit from more cohesion between them with some kind of 'story' thread that joins them. Thus I am not sure if this is a summary technical report or a review of the training activities for the node.\n\nELIXIR-UK has more activities than training and it would add clarity to the narrative if the other activities in the node are enumerated in addition to how ELIXIR-UK’s training leadership has helped in developing them.\n\nA few explicit remarks now follow:\nYou say “ELIXIR-UK is at the forefront of bioinformatics training provision, across the UK and in the wider ELIXIR training programme” (Page 3, 1st paragraph) but you do not qualify it. 
Is there any explicit independent confirmation of this?\n\nWhat is the ELIXIR-UK training strategy? Explicitly, what does it consist of? What is the plan, achievements, goals and milestones? How do you know you are being successful?\n\nHow are the ELIXIR-UK Training operations building the community of bioinformatics in the UK? You mention this as one of its key priorities but miss the detail on how this is happening. This is related to my previous comment on the strategy development and is alluded to briefly in the community engagement paragraph of future work (page 9, bullet point 8) but I fail to clearly grasp current efforts.\n\nYou mention the Training programs in Cambridge, Birmingham and Edinburgh. They seem disconnected. What are your plans to synchronise these resources at a national level other than just being named services?\n\nEven though the complete skill surveys are referenced, it would add credibility to the summary statistics of the survey if its design and the people involved both in developing and responding to it are also included.\n\nYou say that you support the interoperability platform of ELIXIR, which is an external activity to training. What is the take of ELIXIR-UK training strategy to abide by FAIR-ness of data and materials involved in these courses/workshops? In what way are training courses/workshops committed to FAIR values?\n\nSince FAIR is a cross-cutting initiative throughout ELIXIR and has so many ramifications in the spread of best practices, I feel this crucial aspect is missing in the paper (see above).\n\nCan you substantiate/measure in what way the excellent feedback from SWC/DC workshops has been collected and summarised?\n\nPage 7 paragraph 2: “co-wrote many ELIXIR documents, including annual reports, the industry strategy, the Training Platform Road map and the ELIXIR website” - I would appreciate references for this.\nThanks for giving me the opportunity to review this paper. 
The amount of activity presented here is indicative of the many efforts carried out by all ELIXIR-UK partners. By giving this paper more shape and a story thread, I am convinced it will provide an authoritative account of your contributions to training in the UK, Europe and beyond.\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Yes\n\nAre arguments sufficiently supported by evidence from the published literature? Yes\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-952
|
https://f1000research.com/articles/6-586/v1
|
27 Apr 17
|
{
"type": "Software Tool Article",
"title": "Arkas: Rapid reproducible RNAseq analysis",
"authors": [
"Anthony R. Colombo",
"Timothy J. Triche Jr",
"Giridharan Ramsingh",
"Giridharan Ramsingh"
],
"abstract": "The recently introduced Kallisto pseudoaligner has radically simplified the quantification of transcripts in RNA-sequencing experiments. We offer cloud-scale RNAseq pipelines Arkas-Quantification, which deploys Kallisto for parallel cloud computations, and Arkas-Analysis, which annotates the Kallisto results by extracting structured information directly from source FASTA files with per-contig metadata and calculates the differential expression and gene-set enrichment analysis on both coding genes and transcripts. The biologically informative downstream gene-set analysis maintains special focus on Reactome annotations while supporting ENSEMBL transcriptomes. The Arkas cloud quantification pipeline includes support for custom user-uploaded FASTA files, selection for bias correction and pseudoBAM output. The option to retain pseudoBAM output for structural variant detection and annotation provides a middle ground between de novo transcriptome assembly and routine quantification, while consuming a fraction of the resources used by popular fusion detection pipelines. Illumina's BaseSpace cloud computing environment, where these two applications are hosted, offers a massively parallel distributive quantification step for users where investigators are better served by cloud-based computing platforms due to inherent efficiencies of scale.",
"keywords": [
"transcriptome",
"sequencing",
"RNAseq",
"automation",
"cloud computing"
],
"content": "Introduction\n\nHigh-performance computing based bioinformatic workflows have three main subfamilies: in-house computational packages, virtual-machines (VMs), and cloud based computational environments. The in-house approaches are substantially less expensive when raw hardware is in constant use and dedicated support is available, but internal dependencies can limit reproducibility of computational experiments. Specifically, “superuser’” access needed to deploy container-based, succinct code encapsulations (often referred to as \"microservices\" elsewhere) can run afoul of normal permissions, and the maintenance of broadly usable sets of libraries across nodes for users can lead to shared code dynamically linking to different libraries under various user environments. By contrast, modern cloud-based approaches and parallel computing are forced by necessity to offer a user-friendly platform with high availability to the broadest audience. Platform-as-a-service approaches take this one step further, offering controlled deployment and fault tolerance across potentially unreliable instances provided by third parties such as Amazon Web Service Elastic Compute Cloud (AWS EC2) and enforcing a standard for encapsulation of developers' services such as Docker. Within this framework, the user or developer cedes some control of the platform and interface, in exchange for the platform provider handling the details of workflow distribution and execution. This has provided the best compromise of usability and reproducibility when dealing with general audiences. In this regard, the lightweight-container approach exemplified by Docker lead to rapid development and deployment compared to VMs. 
Combined with versioning of deployments, it is feasible for users to reconstruct results from an earlier point in time, while simultaneously re-evaluating the generated data under state-of-the-art implementations.\n\nSeveral recent high impact publications used cloud-computing work flows such as CloudBio-linux, CloudMap1 and Mercury2 AWS EC23. The CloudBio-linux software is centered around comparative genomics, phylogenomics, transcriptomics, proteomics, and evolutionary genomics studies using Perl scripts3. Although offered with limited scalability, the CloudMap software allows scientists to detect genetic variations over a field of virtual machines operating in parallel3. For comparative genomic analysis, the Mercury workflow2 can be deployed within Amazon EC2 through instantiated virtual machines but is limited to BWA and produces a variant call file (VCF) without considerations of pathway analysis or comparative gene set enrichment analyses. The effectiveness for conducting genomic research is greatly influenced by the choice of computational environment. The majority of RNAseq analysis pipelines consist of read preparation steps, followed by computationally expensive alignment against a reference. Software for calculating transcript abundance and assembly can surpass 30 hours of computational time4. If known or putative transcripts of defined sequences are the primary interest, then pseudoalignment, which is defined as near-optimal RNAseq transcript quantification, is achievable in minutes on a standard laptop using Kallisto software4. After verifying these numbers on our own laptops, we became interested in a massively parallel yet easy-to-use approach that would allow us to perform the same task on arbitrary datasets, and reliably interpret the output. 
In collaboration with Illumina (San Diego, USA) we found that the available BaseSpace platform was already well-suited for this purpose, with automated ingestion of the Sequence Read Archive (SRA) datasets as well as newly produced data from core facilities using recent Illumina sequencers. The design of our framework emphasizes loose coupling of components and tight coupling of reference transcriptome annotations; nonetheless, the ease of use and massive parallelization provided by BaseSpace offers excellent default execution environment.\n\nThe BaseSpace Platform utilizes AWS cc2 8x-large instances by default, each with access to eight 64-bit CPU cores and virtual storage of over 3 terabytes. Published BaseSpace applications, which undergo rigorous review by Illumina staff scientists before deployment, can allocate up to 100 such nodes, distributing analyses simultaneously, in parallel. Direct imports of existing experiments from SRA, along with default availability of experimenters' own reads, fosters a critical environment for independent replication and reanalysis of published data.\n\nA second bottleneck in bioinformatic workflows, hinted at above, arises from the frequent transfer and copying of source data across local networks and/or the Internet. With a standardized deployment platform, it becomes easier to move executable code to the environment of the target data, rather than transferring massive datasets into the environment where the executable workflows were developed. For instance, an experiment from SRA with reads totaling 141.3GB is reduced to summary quantifications totaling 1.63GB (nearly two orders of magnitude) and a report of less than 10MB (a further two orders of magnitude), for a total reduction in size exceeding 4 orders of magnitude with little or no loss of user-visible information. 
Moreover, the untouched original data is never discarded unless the user explicitly demands it, something that can rarely be said of local computer environments. Moreover, the location of original sources is always traceable. The appropriate placement of Arkas cloud computational applications in close proximity to the origin of sequencing data removes cumbersome data relocation costs.\n\nThe scale and complexity of sequencing data in molecular biology has exploded in the 15 years following completion of the Human Genome Project5. Furthermore, as a dizzying array of sequencing protocols have been developed to open new avenues of investigation, a much broader cross-section of biologists, physicians, and computer scientists have come to work with biological sequence data. The nature of gene regulation (or, perhaps more appropriately, transcription regulation), along with its relevance to development and disease, has undergone massive shifts propelled by novel approaches, such as the discovery of evolutionarily conserved non-coding RNA by enrichment analysis of DNA and isoform-dependent switching of protein interactions6. What sometimes gets lost within this excitement, however, is the reality that biological interpretation of these results can be highly dependent upon both their extraction and annotation. A rapid, memory-efficient approach to estimate abundance of both known and putative transcripts substantially broadens the scope of experiments feasible for a non-specialized laboratory. 
Recent work on the Kallisto pseudoaligner4, amongst other k-mer based approaches, has resulted in such an approach.\n\nIn order to leverage these recent advances for large scale needs, we created a cloud computational pipeline, Arkas, which encapsulates Kallisto, automates the construction of composite transcriptomes from multiple sources, quantifies transcript abundances, and implements reproducible rapid differential expression analysis followed by a gene set enrichment analysis over Illumina's BaseSpace Platform. The Arkas workflow is versionized into Docker containers and publicly deployed within Illumina's BaseSpace cloud based computational environment.\n\n\nMethods\n\nArkas is a two-step cloud pipeline. Arkas-Quantification is the first step, which reduces the computational steps required to quantify and annotate large numbers of samples against large catalogs of transcriptomes. Arkas-Quantification calls Kallisto for on-the-fly transcriptome indexing and quantification recursively for numerous sample directories. Kallisto quantifies transcript abundance from input RNAseq reads by using pseudoalignment, which identifies the read-transcript compatibility matrix4. The compatibility matrix is formed by counting the number of reads with the matching alignment; the equivalence class matrix has a much smaller dimension compared to matrices formed by transcripts and read coverage. Computational speed is gained by performing the Expectation Maximization (EM) algorithm over a smaller matrix.\n\nFor RNAseq projects with many sequenced samples, Arkas-Quantification encapsulates expensive transcript quantification preparatory routines, while uniformly preparing Kallisto execution commands within a versionized environment encouraging reproducible protocols. The quantification step automates the index caching, annotation, and quantification associated with running the Kallisto pseudoaligner integrated within the BaseSpace environment. 
The first step in the pipeline can process raw reads into transcript and pathway collection results within Illumina’s BaseSpace cloud platform, quantifying against default transcriptomes such as ERCC spike-ins, ENSEMBL non-coding RNA, or cDNA build 88 for both Homo sapiens and Mus musculus; further, the first step supports user uploaded FASTA files for customized analyses. Arkas-Quantification is packaged into a Docker container and is publicly available as a cloud application within BaseSpace.\n\nPrevious work7 has revealed that filtering transcriptomes to exclude lowly-expressed isoforms can improve statistical power, while more-complete transcriptome assemblies improve sensitivity in detecting differential transcript usage. Based on earlier work by Bourgon et al.8, we included this type of filtering for both gene- and transcript-level analyses within Arkas-Analysis. The analysis pipeline automates annotations of quantification results, resulting in more accurate interpretation of coding and transcript sequences in both basic and clinical studies by just-in-time annotation and visualization.\n\nArkas-Analysis integrates quality control analysis for experiments that include Ambion spike-in controls, multiple normalization selections for both coding gene and transcript differential expression analysis, and differential gene-set analysis. If ERCC spike-ins, defined by the External RNA Control Consortium9, are detected then Arkas-Analysis will calculate Receiver Operator Characteristic (ROC) plots using 'erccdashboard'10. The ERCC analysis reports average ERCC Spike amount volume, comparison plots of ERCC volume amount, and normalized ERCC counts (Figure 1).\n\nA) The Receiver Operator Characteristic plot. The X-axis shows the False Positive Rate, the Y-axis shows True Positive Rate. B) and D) show the spike-in total RNA amounts with a linear model fit, and quantified ERCC transcript counts. 
C) shows a dispersion of mean transcript abundance counts and the estimated dispersion.\n\nSubsequent analyses import the data structure from SummarizedExperiment (Morgan, 2016) and create a sub-class titled KallistoExperiment that preserves the S4 structure and is convenient for handling assays, phenotypic and genomic data. KallistoExperiment includes GenomicRanges11, preserving the ability to handle genomic annotations and alignments, supporting efficient methods for analyzing high-throughput sequencing data. The KallistoExperiment sub-class serves as a general-purpose container for storing feature genomic intervals and pseudoalignment quantification results against a reference genome called by Kallisto. By default KallistoExperiment couples assay data such as the estimated counts, effective length, estimated median absolute deviation, and transcript per million count, where each assay is generated by a Kallisto run; the stored feature data is a GenomicRanges object from11, storing transcript length, GC content, and genomic intervals.\n\nGiven a KallistoExperiment containing the Kallisto sample abundances, principal component analysis (PCA) is performed12 on trimmed mean of M-value (TMM) normalized counts13 (Figure 2A). Differential expression (DE) is calculated on the library normalized transcript expression values, and the aggregated transcript bundles of corresponding coding genes, using the limma/voom linear model14 (Figure 3A). Further, an additional PCA and DE analysis of both transcripts and coding genes is performed using in-silico normalization based on factor analysis15 (Figure 2B, Figure 3B, Figure 3C). In each DE analysis the FDR filtering method defaults to 'Benjamini-Hochberg'; if there are no resultant DE genes/transcripts the FDR method is switched to 'none'. 
Arkas-Analysis consumes the Kallisto data output from Arkas-Quantification, and automates DE analysis using TMM normalization and in-silico normalization on both transcript and coding gene expression in a defaulted two group experimental design, which allows end-users to select the normalization type best suited for their needs.\n\nA) TMM normalization is performed on sample data and depicts the sample quantiles on normalized sample expression, PCA plot, and histogram of the adjusted p-values calculated from the DE analysis. Orange is the comparison group and green is the control group. B) A similar analysis is performed with RUV in-silico normalization.\n\nA) DE analysis using TMM normalization. The X-axis is the sample names (test data), the Y-axis are Gene symbols (HUGO). Expression values are plotted in log10 1+TPM. B) Similar analysis using RUV normalization. C) The design matrix with the RUV adjusted weights. The sample names are test data used in demonstrating the general analysis report output.\n\nGene set differential expression, which includes gene-gene correlation inflation corrections, is calculated using Qusage16. Qusage calculates the variance inflation factor, which corrects the inter-gene correlation that results in high type 1 errors using pooled or non-pooled variances between experimental groups. The gene set enrichment is conducted using Reactome pathways constructed using ENSEMBL transcript/gene identifiers (Figure 4 and Table 1); REACTOME gene sets are not as large as other databases, so Arkas-Analysis outputs DE analysis in formats compatible with more exhaustive databases such as Advaita. The DE files are compatible as a custom upload into Advaita iPathway guide, which offers an extensive Gene Ontology (GO) pathway analysis. 
Pathway enrichment analysis can be performed from the BaseSpace cloud system downstream from parallel differential expression analysis and can integrate with other pathway analysis software tools.\n\nGene-Set enrichment output report, each point represents the differential mean activity of each gene-set with 95% confidence intervals. The X-axis are individual gene-sets. The Y-axis is the log2 fold change.\n\nThe columns represent the Reactome pathway name corresponding to the depicted pathways in Figure 4, the log2 fold change, p-value, adjusted FDR, and an active link to the Reactome website with visual depictions of the gene/transcript pathway. Arkas-Analysis will output a similar report testing transcript-level sets.\n\nWe wished to show the importance of enforcing matching versions of Kallisto when quantifying transcripts because there is deviation of data between versions. Due to updated versions and improvements of Kallisto software, there obviously exists variation of data between algorithm versions (Figure 5, Supplementary Table 1, Supplementary Table 2). We calculated the standardized mean differences, and the variation of the differences between data output from Kallisto versions 0.43 and 0.43.1 (Supplementary Table 2), and found large variation of differences between raw values generated by differing Kallisto versions, signifying the importance of version analysis of Kallisto results.\n\nThe X-axis depicts the theoretical quantiles of the standardized mean differences. The Y-axis represents the observed quantiles of standardized mean differences.\n\nThe Dockerization of Arkas BaseSpace applications versionizes the Kallisto reference index to enforce that the Kallisto software versions are identical, and further documents the Kallisto version used in every cloud analysis. 
The enforcement of reference versions and Kallisto software versions prevents errors when comparing experiments.\n\nArkas-Quantification instructions are provided within BaseSpace (details for new users can be found here). The inputs are RNA sequencing samples, which may include SRA imported reads, and the outputs include the Kallisto data, .tar.gz files of the Kallisto sample data, and a report summary. Users may select for species type (Homo sapiens or Mus musculus), optionally correct for read length bias, and optionally select for the generation of pseudoBAMs. More significantly, users have the option to use the default transcriptome (ENSEMBL build 88) or to upload a custom FASTA of their choosing. Users who wish to perform local analysis can download the sample .tar.gz Kallisto files and analyze the data locally.\n\nThe Arkas-Analysis instructions are provided within the BaseSpace environment. The input for the analysis app is the Arkas-Quantification sample data, and the output files are separated into corresponding folders. The analysis also depicts figures for each respective analysis (Figure 1–Figure 4) and the images can be downloaded in HTML format.\n\n\nResults\n\nOne main advantage of Dockerized analysis software is that it preserves software environments. As an exercise to show the importance of enforcing matching Kallisto versions, we repeatedly ran Kallisto on the same 5 samples, quantifying transcripts (setting bootstraps=42) against two different Kallisto versions and calculating the standardized mean differences and variation of differences between each run. We ran Kallisto quantification once with Kallisto version 0.43.1, and 4 times with version 0.43.0, merging each run into a KallistoExperiment and storing the runs into a list of Kallisto experiments.\n\nWe then analyzed the standardized mean differences for each gene across all samples and calculated the variation of errors for each run quantified using version 0.43.0. 
Supplementary Table 1 shows the variation of the errors of the raw values, such as estimated counts, effective length, and estimated median absolute deviation, using the same Kallisto version 0.43.0. As expected, Kallisto data generated by the same Kallisto version had very low variation of errors within version 0.43.0 for every transcript across all samples. However, upon comparing Kallisto version 0.43.1 to version 0.43.0 using raw data such as estimated abundance counts, effective length, estimated median absolute deviation, and transcripts per million, we found, as expected, large variation of data. Supplementary Table 2 shows that there is large variation of the differences of Kallisto data calculated between versions. Figure 5 depicts the standardized mean differences, i.e. errors, between Kallisto versions fitted to a theoretical normal distribution. The quantile-quantile plots show that the errors are marginally normal, with a consistent line centered near 0 but also large outliers (Figure 5). As expected, containerizing analysis pipelines enforces versionized software, which benefits reproducible analyses.\n\nThe extraction of genomic and functional annotations directly from FASTA contig comments, eliding sometimes-unreliable dependencies on services such as BioMart, is performed rapidly. The annotations completed with a run time of 2.336 seconds (Supplementary Table 3), merging the previous Kallisto data from 5 samples and creating a KallistoExperiment class whose feature data contain a GenomicRanges11 object with 213782 ranges and 9 metadata columns. The system runtime for creating a merged KallistoExperiment class for 5 samples was 23.551 seconds (Supplementary Table 4).\n\n\nDiscussion\n\nThe choice of catalog, the type of quantification performed, and the methods used to assess differences can profoundly influence the results of sequencing analysis. 
ENSEMBL reference genomes are provided to GENCODE as a merged database of Havana's manually curated annotations with ENSEMBL's automatically curated coordinates. AceView, UCSC, RefSeq, and GENCODE each annotate approximately twenty thousand protein coding genes; however, AceView and GENCODE have a greater number of protein coding transcripts in their databases. RefSeq and UCSC references have fewer than 60,000 protein coding transcripts, whereas GENCODE has 140,066 protein coding loci. AceView has 160,000 protein coding transcripts, but this database is not manually curated. GENCODE is annotated with special attention given to long non-coding RNAs (lncRNAs) and pseudogenes, improving annotations and coupling automated labeling with manual curation. The database selected for protein coding transcripts can influence the amount of annotation information returned when querying gene/transcript level databases.\n\nAlthough previously overlooked, lncRNAs have been shown to share features and alternate splice variants with mRNA, revealing that lncRNAs play a central role in metastasis, cell growth and cell invasion17. LncRNA transcripts have been shown to be functional and are associated with cancer prognosis, demonstrating the importance of studying these transcripts, which are included by default within the Arkas pipeline.\n\nEach transcript database is curated at a different frequency, with varying numbers of RNA entries, which influences the mapping rate. GENCODE lncRNA annotations contain 9640 loci, UCSC contains 6056 and RefSeq contains 4888. GENCODE annotations have the greatest number of lncRNA, protein coding and non-coding transcripts, and the highest average number of transcripts per gene, with 91043 transcripts unique to GENCODE and absent from the UCSC and RefSeq databases. ENSEMBL and AceView annotate more genes than RefSeq and UCSC, and return higher gene and isoform expression labeling, improving differential expression analyses18. 
ENSEMBL achieves conspicuously higher mapping rates than RefSeq, and has been shown to annotate large portions of specific genes and transcripts that RefSeq leaves unannotated18. Although ENSEMBL has been shown to detect the same differentially expressed genes as AceView, ENSEMBL/GENCODE annotations are manually curated and updated more frequently than AceView18. The choice of transcriptome will influence the power of an analysis; thus the Arkas cloud analysis applications use ENSEMBL build 88 (ncRNA and cDNA) by default for Homo sapiens and Mus musculus, and also allow users to upload customized FASTA files.\n\nReproducible research should consistently link the works developed by the research community to the unique data environments, such as clinical, sequencing and other experimental data, used in the construction of the published work. The aim of transparent research methodologies is to clearly define their association with every research experiment, minimizing opaqueness between findings and methods. For clinical studies, re-generating an experimental environment has a very low success rate19, which is why non-validated preclinical experiments have spawned the development of best practices for critical experiments. Re-creating a clinical study has many challenges: for example, the difficult nature of a disease, the complexity of cell-line models in mouse and human that attempt to capture the human tumor environment, and limited power through small enrollments in clinical trials19. Experimental validation is quite difficult and depends on the skillful performance of an experiment and an earnest distribution of the analytic methodology, which should contain most, if not all, raw and resultant data sets.\n\nWith recent developments in virtualized operating systems, developing best practices for bioinformatic confirmation of experimental methodologies is much more straightforward than duplicating clinical trials' experimental data. 
Recent technology advancements such as Docker allow local software environments to be preserved within a virtual operating system. Docker allows users to build layers of read/write access files, creating a portable operating system which exhaustively controls software versions and data, and systematically preserves the complete software environment. Conserving a researcher's development environment advances analytical reproducibility if the workflow is publicly distributed. We suggest a global distributive practice for scholarly publications that regularly includes the virtualized operating system containing all raw analytical data, derived results, and computational software. Currently, Docker, software compiled through CMake, and virtual machines are being utilized, showing progress toward a global distributive practice linking written methodologies, and supplementary data, to the utilized computational environment20.\n\nAs distribution mechanisms, Docker and virtual machines appear roughly equivalent. Distributed virtual machines are easy to download, and the environment allows resultant calculations to be re-generated. However, this approach does not scale: if the research community advances the basic requirements for written methodologies and adopts large-scale virtualized distribution, converging on an archive of method environments, hosting complete virtual machines would become impractical or impossible. If an archive were constructed in which each research article linked to a distributed methods environment, an archive of virtual machines for the entire research community would be infeasible. An archive of Dockerfiles is more realistic, because a Dockerfile is only a small text file.\n\nNovel bioinformatic software is often distributed as a cross-platform, compiler-independent build process, which reaches Apple, Windows and Linux users. 
The scope of novel analytical code is not to manage or preserve computational environments, but to provide environment-independent source code as transportable executables. Docker, however, does manage operating systems, and the scope of research best practices does include gathering sets of source executables into a single collection of minimum space and maximum flexibility. Docker can provide the ability for the research community to simultaneously advance publication requirements and develop future computational frameworks in the cloud.\n\nAnother advantage of using Docker as the machine manifesting the practice of reproducible research methods is the trend of well-branded organizations such as Illumina's BaseSpace platform, Google Genomics, and SevenBridges (all of which offer bioinformatic computational software structures) to use Docker as the principal framework. Cloud computational environments offer many advantages over local high-performance in-house computer clusters: they systematically structure reproducible methodologies and democratize medical science. Cloud computational ecosystems preserve an entire development environment using the Docker infrastructure, improving bioinformatic validation. Containerized cloud applications form part of the global distributive effort and are favorable over local in-house computational pipelines because they offer rapid access to numerous public workflows, easy interfacing to archived read databases, and accelerated uploading of raw data. The Google Genomics Cloud has begun to take first steps toward integrating cloud infrastructure with the Broad Institute, whereas Illumina's BaseSpace platform has been hosting novel computational applications since its launch.\n\nScholarly publications that choose only a written methods section make passive validation gestures, which is arguably inadequate in comparison to the rising trend of well-branded organizations. 
We envision a future where published work will share conserved analytic environments, with cloud software accessed by web-distributed methodologies, and/or large databases organizing multitudes of Dockerfiles with accession numbers, strengthening the links between raw sequencing data and reproducible analytical results.\n\nCloud computational software does not only aim to crystallize research methods into a pristine pool of transparent methodologies, but also to match the rate of production of high quality analytical results to the rate of production of public data, which reaches hundreds of petabytes annually. In a talk given in December 2015, Dr. Atul Butte discussed that, with endless public data, the traditional method of practicing science has inverted; no longer does a scientist formulate a question and then experimentally measure observations that test the hypothesis. In the modern era, empirical observations are being made at an unbounded rate, and the challenge now is formulating the proper question (more details on his talk can be found here). Given a near-infinite number of observations, what are the phenomena being revealed? Cloud computational software can accelerate the production of hypotheses by increasing the flexibility and efficiency of scientific exploration.\n\nMany bioinformaticians have noted a rising trend in biotechnology, predicting that open data and open cloud centers will help democratize research efforts and create a more inclusive practice. With cloud interfacing applications such as Illumina's BaseSpace Command Line Interface, DNA-Nexus, SevenBridges, and Google Genomics becoming more popular, cloud environments pioneer the effort to achieve standardized bioinformatic protocols.\n\nDemocratization of big-data efforts has some possible negative consequences. 
Accessing, networking, and integrating software applications for distributing data as a public effort requires large numbers of specialized technicians to maintain and develop the cloud centers that many research institutions are migrating toward. Currently, it is fairly common for research centers to employ high-performance computer clusters which store laboratory software and data locally; cloud computing clusters are beginning to offer clear advantages compared to local closed computer clusters. Collaborations are becoming more common for large research efforts, and sequencing databases have been distributing data globally, making cloud storage more efficient. This implies that cloud services will most likely be offered by very few elite organizations, because the large scale of cloud services will remove incentives for smaller companies.\n\nIt is very likely that only a few elite organizations will provide cloud computing environments, acting as a gateway which directs the global research community toward a narrow set of well-established, standardized computational applications. With regard to recent changes in media consumption and e-commerce, democratization gives independent alternative selections far greater exposure, equalizing profits for lower-ranked selections \"at the tail\"; however, it is possible that the abundant data distributed over storage archives, which stimulates an economically abundant environment, could shift into a fiercely controlled economic environment of scarcity. For example, if a gold standard is reached for computational applications, the range of alternative selections could remain non-existent, which may diminish the future roles of bioinformaticians. 
This possible scenario suggests bioinformaticians could be re-directed to small garages instead of technocratic places such as Silicon Valley, motivated not by a spirit of entrepreneurialism, but by a lack of funding.\n\nAutomating downstream analyses is not without drawbacks: most computational software is highly specialized for niche groups, with a mathematical framework constructed from specialized assumptions; this may require a diverse array of computational developments, and thus a large community of developers. Nevertheless, the automation of analytical results seems almost unavoidable, and the benefits seem to outweigh the negative consequences.\n\n\nConclusion\n\nArkas integrates the Kallisto pseudoalignment algorithm into the BaseSpace cloud computing ecosystem, enabling large-scale, parallel, ultra-fast transcript abundance quantification. We reduce one computational bottleneck by employing rapid transcript abundance calculation and connecting the accelerated quantification software to the Sequence Read Archive. We remove a second bottleneck by reducing the need to download databases; instead we encourage users to download aggregated analysis results. We also expand the range of common sequencing protocols to include an improved gene-set enrichment algorithm, Qusage, and allow exporting into an exhaustive pathway analysis platform, Advaita, running in parallel on AWS EC2.\n\n\nData availability\n\nControls: SRR1544480 Immortal-1\n\nSRR1544481 Immortal-2\n\nSRR1544482 Immortal-3\n\nComparison: SRR1544501 Qui-1\n\nSRR1544502 Qui-2\n\nLatest source code:\n\nhttps://github.com/RamsinghLab/Arkas-RNASeq\n\nArchived source code as at the time of publication:\n\nDOI: 10.5281/zenodo.54565421\n\nLicense:\n\nMIT license\n\nFor Homo sapiens and Mus musculus, ENSEMBL FASTA files (release 88) were downloaded here.\n\nThe ERCC sequences are provided in a SQL database format located here",
"appendix": "Author contributions\n\n\n\nAC wrote the manuscript, and developed the web-application and related software. TJ developed software and helped with the project design. GR wrote the manuscript and contributed to the development of software.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis project was funded by grants from the Leukemia Lymphoma Society-Quest for Cures (0863-15), Illumina (San Diego), STOP Cancer and the Tower Cancer Research Foundation.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nSupplementary material\n\nSupplementary Table 1: Data variation with matching Kallisto versions. This shows the variation of mean differences between data using the matching Kallisto version 0.43.0. The rows represent the samples from the first run using version 0.43.0. The columns represent the samples from an additional run with version 0.43.0.\n\nClick here to access the data.\n\nSupplementary Table 2: Data variation with non-matching Kallisto versions. Variation of mean differences between non-matching Kallisto versions and a randomly selected run previously generated (Supplementary Table 1). The rows are samples run using version 0.43.0; the columns are runs using version 0.43.1.\n\nClick here to access the data.\n\nSupplementary Table 3: Annotation runtime. System runtime for full annotation of a merged KallistoExperiment (seconds). The columns represent system runtime; the Elapsed Time is the total runtime.\n\nClick here to access the data.\n\nSupplementary Table 4: KallistoExperiment formation runtime. System runtime for the creation of a merged KallistoExperiment (seconds). The columns are similar to Supplementary Table 3.\n\nClick here to access the data.\n\n\nReferences\n\nMinevich G, Park DS, Blankenberg D, et al.: CloudMap: a cloud-based pipeline for analysis of mutant genome sequences. Genetics. 2012; 192(4): 1249–1269. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nReid JG, Carroll A, Veeraraghavan N, et al.: Launching genomics into the cloud: deployment of Mercury, a next generation sequence analysis pipeline. BMC Bioinformatics. 2014; 15: 30. PubMed Abstract | Publisher Full Text | Free Full Text\n\nOcaña K, de Oliveira D: Parallel computing in genomic research: advances and applications. Adv Appl Bioinform Chem. 2015; 8: 23–35. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBray NL, Pimentel H, Melsted P, et al.: Near-optimal probabilistic RNA-seq quantification. Nat Biotechnol. 2016; 34(5): 525–527. PubMed Abstract | Publisher Full Text\n\nLander ES, Linton LM, Birren B, et al.: Initial sequencing and analysis of the human genome. Nature. 2001; 409(6822): 860–921. PubMed Abstract | Publisher Full Text\n\nYang X, Coulombe-Huntington J, Kang S, et al.: Widespread Expansion of Protein Interaction Capabilities by Alternative Splicing. Cell. 2016; 164(4): 805–817. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSoneson C, Matthes KL, Nowicka M, et al.: Isoform prefiltering improves performance of count-based methods for analysis of differential transcript usage. Genome Biol. 2016; 17: 12. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBourgon R, Gentleman R, Huber W: Independent filtering increases detection power for high-throughput experiments. Proc Natl Acad Sci U S A. 2010; 107(21): 9546–9551. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBaker SC, Bauer SR, Beyer RP, et al.: The External RNA Controls Consortium: a progress report. Nat Methods. 2005; 2(10): 731–734. PubMed Abstract | Publisher Full Text\n\nMunro SA, Lund SP, Pine PS, et al.: Assessing technical performance in differential gene expression experiments with external spike-in RNA control ratio mixtures. Nat Commun. 2014; 5: 5125. PubMed Abstract | Publisher Full Text\n\nLawrence M, Huber W, Pagès H, et al.: Software for computing and annotating genomic ranges. 
PLoS Comput Biol. 2013; 9(8): e1003118. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRisso D, Schwartz K, Sherlock G, et al.: GC-content normalization for RNA-Seq data. BMC Bioinformatics. 2011; 12: 480. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRobinson MD, McCarthy DJ, Smyth GK: edgeR: a Bioconductor package for differential expression analysis of digital gene expression data. Bioinformatics. 2010; 26(1): 139–140. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRitchie ME, Phipson B, Wu D, et al.: limma powers differential expression analyses for RNA-sequencing and microarray studies. Nucleic Acids Res. 2015; 43(7): e47. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRisso D, Ngai J, Speed TP, et al.: Normalization of RNA-seq data using factor analysis of control genes or samples. Nat Biotechnol. 2014; 32(9): 896–902. PubMed Abstract | Publisher Full Text | Free Full Text\n\nYaari G, Bolen CR, Thakar J, et al.: Quantitative set analysis for gene expression: a method to quantify gene set differential expression including gene-gene correlations. Nucleic Acids Res. 2013; 41(18): e170. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMitra SA, Mitra AP, Triche TJ: A central role for long non-coding RNA in cancer. Front Genet. 2012; 3: 17. PubMed Abstract | Publisher Full Text | Free Full Text\n\nChen G, Wang C, Shi L, et al.: Incorporating the human gene annotations in different databases significantly improved transcriptomic and genetic analyses. RNA. 2013; 19(4): 479–489. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBegley CG, Ellis LM: Drug development: Raise standards for preclinical cancer research. Nature. 2012; 483(7391): 531–533. PubMed Abstract | Publisher Full Text\n\nPiccolo SR, Frampton MB: Tools and techniques for computational reproducibility. Gigascience. 2016; 5(1): 30. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nColombo AR: RamsinghLab/Arkas-RNASeq: Adding data Variance package, mirror to BaseSpace software [Data set]. Zenodo. 2017. Data Source"
}
|
[
{
"id": "22282",
"date": "18 May 2017",
"name": "Harold Pimentel",
"expertise": [
"RNA-Seq analysis methods and data analysis"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nNote: I am a co-author of the kallisto tool, one of the tools that is used in this pipeline.\nColombo et al. describe Arkas, a tool that takes raw RNA-Seq data and produces several different types of downstream analyses. Arkas leverages existing analysis tools (e.g. kallisto and limma) and platforms (Illumina BaseSpace) to create an easy to use, fast, and reproducible pipeline. A very useful (unique?) feature is that it documents software versions and enforces consistent software versions allowing users to see the potential differences with different software versions. This is made explicit in the \"Results\" section.\nHaving all of these tools together greatly reduces the time to setup analyses and also reduces the complexity for RNA-Seq novices who might have no idea where to start. Arkas makes all of the typical figures one might make in a standard RNA-Seq analysis. It also provides gene-set analyses which are often excluded from other pipelines. In my experience, gluing together analyses from differential expression to gene-set analyses can often be an annoyance due to inconsistencies and annotations and versions of these annotations. Arkas nicely solves this problem.\nWhile I think the idea is very good and the tool seems comprehensive, I feel the manuscript needs a bit of work. Here are a few points:\n- There are a few areas where the scope seems too broad. In general, I feel that the manuscript can be shortened to be more clear as well as more precise. 
In particular, the Docker section in the discussion is too broad and the role of Arkas seems lost. I strongly recommend shortening this section and discussing the role of Docker in Arkas more clearly. - While the abstract and introduction provide a description of Arkas in RNA-Seq analysis, they do not provide a motivation. It is sort of hinted in several sections in the paper, but it is not explicit. The motivation of building another pipeline should be explicit. - How does this pipeline compare to other pipelines such as Galaxy, DNANexus, etc.? Should probably be noted in the introduction/discussion. - Perhaps I missed it, but the interface of Arkas does not appear to be described. There is a short subsection \"Operation\" that doesn't describe the type of interface. It appears to be available on Illumina BaseSpace, but does this make it a commandline tool or an online web form style tool? A short description of this interface and possibly supplementary figures (if it is a web form style) should be provided. This is unclear to folks who are not familiar with BaseSpace. - It should be greater emphasized how this tool can be used to reanalyze existing SRA data with relative ease. In my opinion this is a very strong argument as to why one might want a tool like this.\nAreas that can be shortened:\n- \"Data variance between software versions\" can be shortened as some of this is repeated in \"Results.\" - \"Complete transcriptomes enrich annotation information...\" Specifics of annotations can probably be removed/condensed. It is probably sufficient to say that some are 3x times larger which can change results drastically. - \"Docker as a cornerstone of reproducible research\" The role of Docker in general can probably be shortened and how Arkas leverages it should be made more clear.\nMore minor points:\n- A short sentence at the beginning of \"Methods\" should give an overview of the two-step process. 
- The Galaxy Project (https://usegalaxy.org/) should probably be cited even though the scope is a bit different. - Figure 1a: \"Receiver Operator Characteristic plot\" of what? This is stated in the main text, but should also the stated in the figure caption. - Swap Figure 1d and 1c. - It seems like BaseSpace sessions can easily be shared? If so, this is an additional strong point of using BaseSpace in Arkas.\nOverall, I'm very excited to see this comprehensive tool exist and be described in this paper.\n\nIs the rationale for developing the new software tool clearly explained? Partly\n\nIs the description of the software tool technically sound? Yes\n\nAre sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? Yes\n\nIs sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? Yes\n\nAre the conclusions about the tool and its performance adequately supported by the findings presented in the article? Yes",
"responses": [
{
"c_id": "2766",
"date": "21 Jun 2017",
"name": "Giridharan Ramsingh",
"role": "Author Response",
"response": "Thank you very much Dr. Pimentel for your thorough review. We have significantly reduced the broad discussion section, and narrowed the manuscript to the most important features. The 'Abstract' and 'Introduction' sections were reduced to explicitly state the motivations for the design of Arkas. In the revised manuscript, the 'Methods' section provides a brief overview of the applications, and the 'Operation' section describes the interface style and includes Supplementary Figures depicting both apps. The second reviewer Dr. Abel also suggested that the in-depth discussion of Docker was too broad. The revised version includes a discussion section that compares processing times between Google Genomics and another BaseSpace application. We have also now included brief points regarding Galaxy. Your helpful comments helped the manuscript become much more concise. In addition to your remarks, we have addressed important features regarding microRNAs on behalf of the second reviewer. Kallisto can process smaller FASTA sequences; we have now noted that users can analyze microRNAs, but we suggest a separate analysis for this. We thank you very much for your revisions and appreciate your thoughtful remarks. We believe that addressing your remarks has greatly elevated the manuscript. Below are point-by-point responses to your questions. \"Note: I am a co-author of the kallisto tool, one of the tools that is used in this pipeline.Colombo et al. describe Arkas, a tool that takes raw RNA-Seq data and produces several different types of downstream analyses. Arkas leverages existing analysis tools (e.g. kallisto and limma) and platforms (Illumina BaseSpace) to create an easy to use, fast, and reproducible pipeline. A very useful (unique?) feature is that it documents software versions and enforces consistent software versions allowing users to see the potential differences with different software versions. 
This is made explicit in the \"Results\" section.Having all of these tools together greatly reduces the time to setup analyses and also reduces the complexity for RNA-Seq novices who might have no idea where to start. Arkas makes all of the typical figures one might make in a standard RNA-Seq analysis. It also provides gene-set analyses which are often excluded from other pipelines. In my experience, gluing together analyses from differential expression to gene-set analyses can often be an annoyance due to inconsistencies and annotations and versions of these annotations. Arkas nicely solves this problem.While I think the idea is very good and the tool seems comprehensive, I feel the manuscript needs a bit of work. Here are a few points:- There are a few areas where the scope seems too broad. In general, I feel that the manuscript can be shortened to be more clear as well as more precise. In particular, the Docker section in the discussion is too broad and the role of Arkas seems lost. I strongly recommend shortening this section and discussing the role of Docker in Arkas more clearly.\" Thank you very much for your input. In the revised manuscript, we have narrowed the Docker discussion section to the scope of BaseSpace platform, and have described Arkas' relationship to Docker as an applied infrastructure to this platform. The previous version of the manuscript detailed the role of Docker in the broad concept of reproducible research. We have omitted these details. The revised manuscript describes the interdependent relationship between Arkas and Docker in the context of BaseSpace. For example, Arkas containerized Node.js and R to parse the BaseSpace JSON input information relating to BaseSpace’s input fields. The new manuscript explained that Docker and Arkas are not independent entities, and pertain specifically to BaseSpace. \"- While the abstract and introduction provide a description of Arkas in RNA-Seq analysis, they do not provide a motivation. 
It is sort of hinted in several sections in the paper, but it is not explicit. The motivation of building another pipeline should be explicit.\" Thank you for this suggestion. We have now explicitly provided the motivation for Arkas’ development by mentioning bottlenecks in RNA-sequencing such as sequencing importing and pre-processing steps, and how Arkas rectifies those bottlenecks. In the revised version, we illustrate how Arkas was developed downstream from BaseSpace SRA Import to greatly reduce importing and conversion steps. Also, we now explicitly state the motivation for Arkas-Quantification: Kallisto was implemented in parallel, which scales quantification speed to the Amazon AWS EC2 cluster node availability rate. In addition, the revised manuscript explicitly states the motivation for Arkas-Analysis, which provides a comprehensive analysis. \"- How does this pipeline compare to other pipelines such as Galaxy, DNANexus, etc.? Should probably be noted in the introduction/discussion.\" Thank you for this suggestion. In the revised discussion section, we now compare features of other cloud platforms, and other BaseSpace RNA-Seq applications. The revised discussion now includes processing times of a large-scale RNA-seq analysis that implemented Kallisto using the Google Genomics Platform. In addition to Google Genomics, the revised manuscript briefly compares features offered by Galaxy to BaseSpace. Further, we compare Arkas to other BaseSpace RNA-Seq applications.\"- Perhaps I missed it, but the interface of Arkas does not appear to be described. There is a short subsection \"Operation\" that doesn't describe the type of interface. It appears to be available on Illumina BaseSpace, but does this make it a commandline tool or an online web form style tool? A short description of this interface and possibly supplementary figures (if it is a web form style) should be provided. 
This is unclear to folks who are not familiar with BaseSpace.\" Thank you again for this suggestion. We have included a description explicitly stating that Arkas is a web form style tool. In addition, we included two Supplementary Figures to address the web input forms. Supplementary Figure 1 shows the input form for both web style apps, and Supplementary Figure 2 shows the output folder directory of Arkas-Quantification.\"- It should be emphasized more how this tool can be used to reanalyze existing SRA data with relative ease. In my opinion this is a very strong argument as to why one might want a tool like this.\" Thank you for addressing reanalysis of SRA data. In the updated manuscript, we now mention that Arkas' design was motivated by the BaseSpace application SRA Import. The revised introduction now explicitly states that Arkas is SRA compatible, and we have provided citations for readers interested in utilizing this SRA application. \"Areas that can be shortened:- \"Data variance between software versions\" can be shortened as some of this is repeated in \"Results.\"\" We combined the “Data variance between software versions” and “Results” sections into an appropriate concise section.- \"Complete transcriptomes enrich annotation information...\" Specifics of annotations can probably be removed/condensed. It is probably sufficient to say that some are 3 times larger which can change results drastically.\" We reduced this discussion to brief specifics of database sizes. While obvious, we believe that a brief overview provides motivation for the default transcriptomes chosen by Arkas. In the revised manuscript, we provide a very concise explanation behind the selection of default transcriptomes. - \"Docker as a cornerstone of reproducible research\" The role of Docker in general can probably be shortened and how Arkas leverages it should be made more clear.\" Thank you again for this comment. 
We agree that this broad discussion went off topic and may distract future readers. The manuscript is greatly improved with the removal of the discussion about democratization of research efforts and biotechnology. We significantly revised the discussion to a comparison of differing cloud platforms and corresponding processing times of other cloud applications. \"More minor points:- A short sentence at the beginning of \"Methods\" should give an overview of the two-step process.\" We provided an overview of Arkas in the section described.\"- The Galaxy Project (https://usegalaxy.org/) should probably be cited even though the scope is a bit different.\" Galaxy is briefly mentioned in the discussion. The revised manuscript reviewed and compared processing times of Google Genomics Platform and another RNA-Seq application within BaseSpace.\"- Figure 1a: \"Receiver Operator Characteristic plot\" of what? This is stated in the main text, but should also be stated in the figure caption.- Swap Figure 1d and 1c.\" Thank you for pointing this out. The revised Figure 1a now states that the Receiver Operator Characteristic plot is for ratios of detected and actual spiked ERCC sequences. We have swapped Figure 1d and Figure 1c.\"- It seems like BaseSpace sessions can easily be shared? If so, this is an additional strong point of using BaseSpace in Arkas.\" We now mention this brief point in the discussion.\"Overall, I'm very excited to see this comprehensive tool exist and be described in this paper.\" Thank you very much, Dr. Pimentel."
}
]
},
{
"id": "22616",
"date": "24 May 2017",
"name": "Ted Abel",
"expertise": [
"Reviewer Expertise Molecular neuroscience"
],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis paper introduces a RNA-Seq analysis pipeline, Arkas, which combines currently available tools typically used in RNA-Seq studies. The novelty of this pipeline is the encapsulation of tools needed to prepare the data, run quality control checks, analyze the data and perform secondary analyses. This is especially beneficial for investigators new to RNA-Seq analysis with little experience navigating through computational tools. The authors take care to outline the rationale behind creating an easy-to-use interface and how this will increase reproducibility and consistency across RNA-Seq studies. They emphasize the importance of consistency with versions by showing differing results between two Kallisto versions.\nHowever, there are some minor limitations also found in this study: It would be beneficial to include quality control checks at the beginning of the pipeline to generate data regarding the inputted sequencing files.\nIt would be interesting to see more processing time information to show the benefit of using this pipeline compared to similar methods.\nAs is discussed, the inclusion of lncRNAs increases the amount of potentially interesting results from this pipeline. However, the authors have chosen to ignore microRNAs, an important regulator of cellular function. The inclusion of microRNAs as a default option in this pipeline would provide even more potentially interesting results.\nThe normalization steps and Figure 2 should be discussed in more detail. 
Specifically, expand on the reasons for choosing these two methods and the differences between the methods and their outputs. In addition, a note about how a user should select a normalization type would help new users. Whilst the authors suggest that the integration of Docker will help produce reproducible research methods, the in-depth look into Docker is unnecessary, as no data has been provided to show its benefit above other options.\n\nIs the rationale for developing the new software tool clearly explained? Yes\n\nIs the description of the software tool technically sound? Yes\n\nAre sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? Yes\n\nIs sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? Yes\n\nAre the conclusions about the tool and its performance adequately supported by the findings presented in the article? Yes",
"responses": [
{
"c_id": "2767",
"date": "21 Jun 2017",
"name": "Giridharan Ramsingh",
"role": "Author Response",
"response": "Thank you very much Dr. Abel for your insightful review. The revised manuscript removed the in-depth discussion of Docker because it was too broad. The revised version included a discussion section that compares processing times between Google Genomics, and another BaseSpace application. Your comments helped address the analysis of microRNAs. For example, Kallisto can process smaller FASTA sequences, however this invokes limitations to the construction of the Target DeBruijn Graph by increasing the path ambiguity of longer read sequences. The revised manuscript now addressed this limitation, and suggested that users analyze microRNAs separately. This analysis feature is not yet a default, but would be a great future addition. We further address details in regard to normalization motivation and selection. As suggested by the first reviewer Dr. Pimentel, we have significantly reduced the broad discussion section, and explicitly described the motivation for the development of Arkas. We have additionally revised the 'Methods' section to provide a brief overview of the applications, and clearer descriptions of the interface style that included Supplementary Figures depicting both interfaces. \"This paper introduces a RNA-Seq analysis pipeline, Arkas, which combines currently available tools typically used in RNA-Seq studies. The novelty of this pipeline is the encapsulation of tools needed to prepare the data, run quality control checks, analyze the data and perform secondary analyses. This is especially beneficial for investigators new to RNA-Seq analysis with little experience navigating through computational tools. The authors take care to outline the rationale behind creating an easy-to-use interface and how this will increase reproducibility and consistency across RNA-Seq studies. 
They emphasize the importance of consistency with versions by showing differing results between two Kallisto versions. However, there are some minor limitations also found in this study: It would be beneficial to include quality control checks at the beginning of the pipeline to generate data regarding the inputted sequencing files.\" Thank you for this suggestion. Analyzing read quality will guide users into the important decision to filter low-quality reads; however, Arkas was not designed to address this. In the revised manuscript, we have now mentioned another independent BaseSpace application, FastQC, which can assess read quality. For users interested in manually uploading sequencing data to BaseSpace, each read must pass a quality filter. This quality filter will automatically reject poor-quality reads, and for this we designed Arkas with the assumption that input reads were of good quality. \"It would be interesting to see more processing time information to show the benefit of using this pipeline compared to similar methods.\" Thank you very much for addressing processing times. The revised manuscript significantly reduced the discussion section to comparisons of processing times. Your remarks inspired the addition of processing times of Arkas. We’ve included further information comparing the processing time to another BaseSpace application RNAExpress. Further, we added processing time information of a different Kallisto analysis pipeline implemented over Google Genomics Platform. The discussion section now is far more concise with greater relevance toward the functionality of our developed software. \"As is discussed, the inclusion of lncRNAs increases the amount of potentially interesting results from this pipeline. However, the authors have chosen to ignore microRNAs, an important regulator of cellular function. 
The inclusion of microRNAs as a default option in this pipeline would provide even more potentially interesting results.\" Including microRNAs is a great idea. Arkas can quantify microRNAs, but we decided not to include microRNAs as a default yet. In the revised manuscript we address that the small sequence sizes are a potential limitation to quantification of cDNAs/ncRNAs because they may increase path ambiguities during the construction of the Target DeBruijn graphs. Hence, we suggest that users analyze microRNAs separately and locally. This would be a great additional feature for the next version of Arkas.\"The normalization steps and Figure 2 should be discussed in more detail. Specifically, expand on the reasons for choosing these two methods and the differences between the methods and their outputs. In addition, a note about how a user should select a normalization type would help new users.\" Thank you for addressing this. The revised manuscript now explicitly states how end-users may select the normalization type. We further provide a brief explanation of why unsupervised normalization was selected. \"Whilst the authors suggest that the integration of Docker will help produce reproducible research methods, the in-depth look into Docker is unnecessary, as no data has been provided to show its benefit above other options.\" We agree that the discussion of Docker was too broad, and the revised discussion is focused on comparative performance of other cloud platforms."
}
]
}
] | 1
|
https://f1000research.com/articles/6-586
|
https://f1000research.com/articles/6-947/v1
|
20 Jun 17
|
{
"type": "Case Report",
"title": "Case Report: Lupoid cutaneous leishmaniasis mimicking verruca plana",
"authors": [
"Emin Ozlu",
"Aysegul Baykan",
"Ozan Yaman",
"Ragıp Ertas",
"Mustafa Atasoy",
"Kemal Ozyurt",
"Abdullah Turasan",
"Nazan Taslıdere",
"Aysegul Baykan",
"Ozan Yaman",
"Ragıp Ertas",
"Mustafa Atasoy",
"Kemal Ozyurt",
"Abdullah Turasan",
"Nazan Taslıdere"
],
"abstract": "Cutaneous leishmaniasis (CL) is an infectious disease caused by various species of leishmania protozoan parasites. Lupoid CL is a rare form of CL that has a stunning similarity to other granulomatous cutaneous conditions of infectious or inflammatory origin. Verruca plana, also known as a “flat wart”, is a benign proliferation of the skin resulting from infection with human papilloma virus (HPV). Herein, we presented a case of lupoid CL mimicking verruca plana on the face.",
"keywords": [
"diagnosis",
"leishmaniasis",
"viral disease"
],
"content": "Introduction\n\nLeishmaniasis encompasses a group of diseases caused by the protozoan parasites of the Leishmania genus1. Classical lesions of cutaneous leishmaniasis (CL) advance in the forms of papules, nodules and ulcerated lesions, and they heal with an atrophic scar over months and years2. Nevertheless, CL has been seen in atypical form, including erysipeloid, lupoid, sporotrichoid, hyperkeratotic, eczematous, verrucous and impetiginized form3.\n\nLupoid CL is one of the more rarely seen forms of CL4. The incidence of lupoid CL has been reported to be 0.5 to 6.2%5. This clinical presentation with a chronic course develops after acute CL infection. In this clinical form, papulonodular lesions of granulomatous and lupoid character are seen 1–2 years after the acute lesion is healed4. Although there is an immune response against parasites in lupoid CL, the immune system is unable to remove the parasites altogether and thus the chronic granulomatous response continues for a long time6. Many clinical presentations of lupoid CL have been reported; however, no lupoid CL mimicking verruca plana has, to our knowledge, previously been reported in the literature.\n\n\nCase report\n\nA 9-year-old male patient presented at our clinic with multiple papular lesions located in the left cheek. He had had these lesion for three months, and they had gradually enlarged. The medical history revealed that the patient had a follow-up after diagnosis of CL located in the nose two years ago, and presented with complete regression after he was started on intralesional meglumine antimonate.\n\nDermatological examination revealed multiple, coalescing, rough, slightly elevated, yellowish-brown papular lesions 3–6 mm in diameter located in the left cheek (Figure 1). In addition, he had large atrophic scar on the nose. Nothing of interest was noted in his family history or his laboratory tests. 
After staining with Giemsa, a parasitological smear showed numerous leishmania parasites in their amastigote form (Figure 2).\n\nThe patient was diagnosed with lupoid CL based on his medical history, and his clinical and microscopy findings. He was started on weekly intralesional meglumine antimonate injections. After 4 sessions of treatment, significant improvement was observed in the patient’s lesions (Figure 3).\n\n\nDiscussion\n\nCL is a parasitic disease caused by leishmania protozoa, transmitted to humans during blood sucking by infected phlebotomine sandflies3. Clinical signs of CL vary from a self-limited asymptomatic presentation to life-threatening diffuse destructive lesions, depending on the type of the leishmania and the immunological state of the host. Initial lesions are frequently erythematous papules or nodular lesions7.\n\nLupoid CL is a rare, chronic form of CL that develops following acute CL. In this clinical form, papulonodular lesions develop at the edges of the scar months and even years after the acute lesion is healed. Brownish-red or brownish-yellow papular lesions with a tendency to merge with each other, and nodules of apple-jelly consistency, compose the characteristic lupoid image in lupoid CL. The lesions are sometimes squamous, crusted, and psoriasiform and may mimic lupus vulgaris. The clinical course of lupoid CL is considered to be associated with changes in the cell-mediated immunity. A possible underlying pathogenetic mechanism involves changes in Th1 and Th2 cell responses and interleukin 4 (IL-4) production8. The altered host immune response then contributes to the high sensitivity to parasitic infections and extraordinary clinical presentations8.\n\nLupoid CL has been defined as having atypical clinical properties and a chronic recurrent course. Clinically, lupoid CL may particularly resemble lupus erythematosus and lupus vulgaris8. 
It may also resemble other granulomatous diseases of infectious or inflammatory origin and may mimic them; however, microscopic and histopathological findings are important in differentiating them from other dermatoses9.\n\nUl Bari et al.8 evaluated 16 patients with lupoid CL and reported 4 different morphological patterns, including psoriasiform lesions, ulcerated/crusted lesions and discoid lupus erythematosus. Douba et al.9 analyzed 1880 patients with chronic CL. In that study, 1.4% of 1880 patients were reported to have lesions with verrucous character9. In this case report, the patient had a progressively increasing number of grouped papular lesions, yellowish brown in colour, in the left cheek and chin region. Clinically, the lesions suggested verruca plana; however, the microscopic evaluation of the sample obtained from the lesion revealed amastigotes and a diagnosis of lupoid CL was made. To the best of our knowledge, there is no case of lupoid CL mimicking verruca plana in the literature. In this case, it is striking that no lesion was seen around the CL scar.\n\nAmastigotes are rarely seen in the parasitological smear in lupoid CL10. In our present case, amastigotes may have been observed due to the lesions having appeared in the last three months.\n\nThere is no current protocol for efficient and accurate treatment of lupoid CL. First-line treatment involves administration of pentavalent antimony compounds10. In the present case, following treatment with intralesional meglumine antimonate for 4 sessions, a significant regression in the lesions of the patient was observed, and treatment was discontinued.\n\n\nConclusion\n\nLupoid CL is a rare and chronic form of CL. Lupoid CL manifests with atypical clinical and histopathological properties. It should be considered that lupoid CL may present with lesions similar to verruca plana.\n\n\nConsent\n\nWe obtained written informed consent from the patient’s parents for the publication of the manuscript.",
"appendix": "Author contributions\n\n\n\nEO wrote the manuscript; AB prepared the manuscript; OY is the patient’s consultant from the Department of Microbiology; RE, MA and KO, AT and NT helped manage the patient’s diagnosis and therapy.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nInci R, Ozturk P, Mulayim MK, et al.: Effect of the Syrian Civil War on Prevalence of Cutaneous Leishmaniasis in Southeastern Anatolia, Turkey. Med Sci Monit. 2015; 21: 2100–2104. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPace D: Leishmaniasis. J Infect. 2014; 69(Suppl 1): S10–18. PubMed Abstract | Publisher Full Text\n\nSaab J, Fedda F, Khattab R, et al.: Cutaneous leishmaniasis mimicking inflammatory and neoplastic processes: a clinical, histopathological and molecular study of 57 cases. J Cutan Pathol. 2012; 39(2): 251–262. PubMed Abstract | Publisher Full Text\n\nBowling JC, Vega-Lopez F: Case 2. Lupoid leishmaniasis. Clin and Exp Derm. 2003; 28(6): 683–684. PubMed Abstract | Publisher Full Text\n\nNilforoushzadeh MA, Jaffray F, Reiszadeh MR, et al.: The therapeutic effect of combined cryotherapy, paramomycin, and intralesional meglumine antimoniate in treating lupoid leishmaniasis and chronic leishmaniasis. Int J Dermatol. 2006; 45(8): 989–991. PubMed Abstract | Publisher Full Text\n\nStefanidou MP, Antoniou M, Koutsopoulos AV, et al.: A rare case of leishmaniasis recidiva cutis evolving for 31 years caused by Leishmania tropica. Int J Dermatol. 2008; 47(6): 588–589. PubMed Abstract | Publisher Full Text\n\nDavid CV, Craft N: Cutaneous and mucocutaneous leishmaniasis. Dermatol Ther. 2009; 22(6): 491–502. PubMed Abstract | Publisher Full Text\n\nUl Bari A, Raza N: Lupoid cutaneous leishmaniasis: a report of 16 cases. Indian J Dermatol Venereol Leprol. 2010; 76(1): 85. 
PubMed Abstract | Publisher Full Text\n\nDouba MD, Abbas O, Wali A, et al.: Chronic cutaneous leishmaniasis, a great mimicker with various clinical presentations: 12 years experience from Aleppo. J Eur Acad Dermatol Venereol. 2012; 26(10): 1224–1229. PubMed Abstract | Publisher Full Text\n\nKhaled A, Goucha S, Trabelsi S, et al.: Lupoid cutaneous leishmaniasis: a case report. Dermatol Ther (Heidelb). 2011; 1(2): 36–41. PubMed Abstract | Publisher Full Text | Free Full Text"
}
|
[
{
"id": "23759",
"date": "23 Jun 2017",
"name": "Gulhan Gurel",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nI evaluated the case report entitled ‘Lupoid cutaneous leishmaniasis mimicking verruca plana’.\n\nLeishmania is currently endemic in 102 countries, areas or territories worldwide and 2 million new cases are recorded annually1. This sentence can be added at the introduction of the article.\n\nCutaneous leishmaniasis is still considered an important health issue in many parts of the world. Lupoid CL is a very rare form of CL. The lupoid CL mimicking the verruca plana may lead to delay in the diagnosis. It is thought that the very rare case will contribute to the literature. This case report is appropriate for publication.\n\nIs the background of the case’s history and progression described in sufficient detail? Yes\n\nAre enough details provided of any physical examination and diagnostic tests, treatment given and outcomes? Yes\n\nIs sufficient discussion included of the importance of the findings and their relevance to future understanding of disease processes, diagnosis or treatment? Yes\n\nIs the case presented with sufficient detail to be useful for other practitioners? Yes",
"responses": []
},
{
"id": "23760",
"date": "23 Jun 2017",
"name": "Altay Atalay",
"expertise": [
"Reviewer Expertise Mycology"
],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe paper titled “Case Report: Lupoid cutaneous leishmaniasis mimicking verruca plana” makes a significant contribution to the field because of its rarity. The topic is important. According to my opinion the background of the case’s history and progression described in sufficient detail and details provided of any physical examination and diagnostic tests, treatment given and outcomes are enough. In addition the case is presented with sufficient detail to be useful for other practitioners. Figures are useful.\nIn conclusion, the paper is novel and the work delivers what it promises. My overall evaluation of the paper is positive.\n\nIs the background of the case’s history and progression described in sufficient detail? Yes\n\nAre enough details provided of any physical examination and diagnostic tests, treatment given and outcomes? Yes\n\nIs sufficient discussion included of the importance of the findings and their relevance to future understanding of disease processes, diagnosis or treatment? Yes\n\nIs the case presented with sufficient detail to be useful for other practitioners? Yes",
"responses": []
},
{
"id": "23660",
"date": "03 Jul 2017",
"name": "Mehmet Akif Dundar",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nI read the case report. The differential diagnosis of lupoid cutaneous leishmaniasis is difficult and may depend on the detection of a few Leishmania amastigotes in the histologic sections. The case provided will most likely be highly favorable for management of this disease in Turkey. I think this case report is appropriate for publication and will contribute to the literature.\n\nIs the background of the case’s history and progression described in sufficient detail? Yes\n\nAre enough details provided of any physical examination and diagnostic tests, treatment given and outcomes? Yes\n\nIs sufficient discussion included of the importance of the findings and their relevance to future understanding of disease processes, diagnosis or treatment? Yes\n\nIs the case presented with sufficient detail to be useful for other practitioners? Yes",
"responses": []
},
{
"id": "24237",
"date": "24 Jul 2017",
"name": "Roderick J Hay",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis is an interesting case report of an unusual variant of cutaneous leishmaniasis. It is infrequently reported and therefore worth recording. I'm not sure that these lesions resemble plane warts - they are certainly flattish but the lateral borders appear slightly curved. I would call them large flat topped papules instead.\n\nIs the background of the case’s history and progression described in sufficient detail? Yes\n\nAre enough details provided of any physical examination and diagnostic tests, treatment given and outcomes? Yes\n\nIs sufficient discussion included of the importance of the findings and their relevance to future understanding of disease processes, diagnosis or treatment? Yes\n\nIs the case presented with sufficient detail to be useful for other practitioners? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-947
|
https://f1000research.com/articles/6-566/v1
|
25 Apr 17
|
{
"type": "Opinion Article",
"title": "Evidence of disease control: a realistic concept beyond NEDA in the treatment of multiple sclerosis",
"authors": [
"Ana C. Londoño",
"Carlos A. Mora",
"Ana C. Londoño"
],
"abstract": "Although no evidence of disease activity (NEDA) permits evaluation of response to treatment in the systematic follow-up of patients with multiple sclerosis (MS), its ability to accomplish detection of surreptitious activity of disease is limited, thus being unable to prevent patients from falling into a non-reversible progressive phase of disease. A protocol of evaluation based on the use of validated biomarkers that is conducted at an early stage of disease would permit the capture of abnormal neuroimmunological phenomena and lead towards intervention with modifying therapy before tissue damage has been reached.",
"keywords": [
"axonopathy",
"biomarkers",
"MRI",
"MS",
"NEDA",
"neurodegeneration",
"neuroinflammation"
],
"content": "Introduction\n\nImmunomodulatory therapies used in the treatment of patients with the clinical isolated syndrome (CIS) of multiple sclerosis (MS) and early relapsing remitting multiple sclerosis (RRMS), as well as the autologous hematopoietic stem cell transplant (aHSCT) used in the treatment of the catastrophic form of the disease, have accomplished a reduction in clinical relapses, a halting in progression toward neurological disability and have demonstrated a reduction of disease activity in MRI scans. This progress has led to the emergence of the ‘no evidence of disease activity’ (NEDA) composite which evaluates the response to these therapies in clinical studies, but its systematic application and utility in the clinical setting have not been established1. NEDA could be considered not only a goal of therapy, but also as an indicator of prognosis and a tool that measures the effect of the medication currently being used2. We propose a more aggressive approach that challenges the current application of NEDA.\n\n\nFragility of the NEDA composite\n\nIn a cohort of 219 patients with either CIS or RRMS, 60 of 218 (27.5%) maintained NEDA status at 2 years, whereas only 17 of 216 (7.9%) had NEDA at 7 years. NEDA status at 2 years had a positive predictive value of 78.3% for no progression of disease at 7 years, demonstrating that it may be optimal in terms of prognostic value in the long term1. This study disclosed a dissociation between clinical and MRI-followed disease activity, with a more prevalent loss of NEDA status determined by MRI changes at onset, followed by clinical relapses without presence of new lesions or changes in the previously existing lesions at later stages. The loss of NEDA status due to changes in expanded disability status scale (EDSS), which is a method of rating impairment in neurological functions excluding cognition, was infrequent1. 
These findings support a decrement in the inflammatory activity of disease as duration of disease increases, and the possibility of recruitment of additional neuronal pathways and/or a cortical remodeling that could compensate for the loss of function3. Also, cognitive impairment may affect more than 82% of the patients with MS from early stages of the disease, affecting cognitive performance and quality of life3. Damasceno et al. proposed that NEDA should also take into consideration other important measures of the patients’ neurological condition, such as their cognitive status and the volumetric analysis of the brain, converting NEDA into a completely effective tool for therapy evaluation. In their cohort of 42 patients with RRMS treated either with beta-interferon or glatiramer acetate, NEDA status was accomplished in only 30.8% of patients, with worsening of more than two cognitive domains in 58.3% of the NEDA group, and with evidence of cortical thinning and higher thalamus volume decrease in patients with MRI activity4. Studies using drugs known to produce a better therapeutic effect in multiple sclerosis, such as natalizumab and alemtuzumab, have disclosed loss of the NEDA status after 2 years of initiation of therapy in 37% and 39% of the treated patients, respectively4. Currently, aHSCT is the only therapeutic approach that has accomplished NEDA status after 3 years in 75% of patients5. Giovannoni has recently discussed the adaptation of NEDA to the type of the therapeutic regime, and has considered three scenarios. 
These include a) no treatment, b) maintenance/escalation of disease-modifying therapy, and c) use of induction therapies (such as alemtuzumab, cladribine and aHSCT), establishing a baseline according to the pharmacodynamics of each drug available2.\n\nAlthough progression of disease, which represents the neurodegenerative component of MS, is expected to be captured by the EDSS score in the current concept of NEDA, evidence has shown that the T25FW (timed 25-foot walk) test gave better documentation of clinical progression1. Considering the use of NEDA, Dadalti Fragoso discussed the difficulties encountered when using EDSS to objectively document patient functionality in different areas. Clinical manifestations such as fatigue or sensitivity to heat are not considered, there is troublesome variability among evaluators, with differences of up to 2.0 points for EDSS and 3.0 points for functional system evaluation, and there is the disadvantage of having the patient, and not the evaluator, reporting the ability to walk 500 or 300 meters6. These studies have shown that, in a high percentage of patients, activity of disease was present at baseline and could not be detected by NEDA, thus resulting in delayed therapeutic intervention and irreparable damage to the central nervous system (CNS).\n\n\nBackground activity beyond the surveillance of NEDA\n\nThe term ‘minimal evident disease activity’ (MEDA) has been applied to MS patients who have been apparently stable in comparison to patients with a higher level of activity in the short to intermediate term2. Thus, beyond documenting NEDA, we need to document the surreptitious activity of disease that persists, unnoticed, despite treatment. 
The determination of biomarkers of inflammation and neurodegeneration in body fluids, combined with the use of non-conventional MRI able to detect changes in normal appearing brain tissue, could assist in the detection of a sequence of cellular and molecular events, inside and outside the CNS, that occur before the current definition of the NEDA composite is fulfilled. Multiple biomarkers have been identified in MS, but their validation and clinical application have not been established7,8. Teunissen recently discussed the use of biomarkers for MS such as N-acetylaspartate (representing mitochondrial dysfunction and/or neuro-axonal loss), chitinase 3-like protein 1 (indicating reactive astrogliosis and microglial activity), neurofilament light chain (related to axonal loss) and glial fibrillary acidic protein (representing astrocytic cytoskeleton injury). They have been validated in at least two independent cohorts, and evaluation of their expression could be a useful tool at the time of diagnosis of MS, and during follow-up after the administration of disease modifying therapy9.\n\nBonnan et al. have recently suggested that NEDA cannot predict sustained remission or complete recovery of disease, and proposed that a ‘disease free status score’ be established, based on whether or not there is biological activity of disease, by measuring the level of biomarkers in CSF10. Taking into consideration that, at present, there are no therapeutic agents available that would be able to offer a cure for MS, a disease free status score could create confusion and prove to be an unrealistic concept. 
We support the concept that the methodology used to determine the stage of disease should be based on measuring the levels of biomarkers involved in the inflammatory and neurodegenerative events of disease, not only by CSF analysis but also with available non-invasive tools such as PET-CT and non-conventional MRI to evaluate the normal appearing brain tissue11–16.\n\n\nGoals and conclusion\n\nIn the systematic evaluation of the patient with MS, the primary goal should be the monitoring of evidence of disease control with biomarkers in order to:\n\n1. Prevent clinical relapses\n\n2. Confirm absence of changes suggestive of progression of the disease in pre-existing lesions, including checking for presence of new lesions or atrophy detected by MRI; and\n\n3. Prevent progression toward disability\n\nBy supporting proactive management of disease, preventing brain tissue injury rather than merely controlling existing inflammation and/or neurodegeneration, this approach promotes personalized management of disease17.\n\nEarly treatment of patients with CIS has led to the identification of a therapeutic window that, with current interventions, would be able to slow down progression toward higher scores in EDSS evaluation3. Taking into consideration the recent significant attention given to novel monoclonal antibody therapies (including alemtuzumab, rituximab, ocrelizumab, daclizumab)18–21 and stem cell therapies (aHSCT)22, our most immediate goal should involve searching for strategic interventions to control both inflammation and neurodegeneration, hopefully reaching a stage of prolonged remission in selected patients. Given that NEDA status is currently able to raise red flags only when tissue damage has already occurred in the CNS, we believe that ‘evidence of disease control’ will be accomplished through better defined, convincing and more realistic monitoring of validated biomarkers.",
"appendix": "Author contributions\n\n\n\nACL and CAM equally contributed to the study concept and to drafting and critically revising the manuscript. Both ACL and CAM agreed to the final content of the manuscript.\n\n\nCompeting interests\n\n\n\nCAM is a member of the Data & Safety Monitoring Board for the NINDS/NIH study NS003055-08/NS003056-08. He received no compensation for his participation in that study. ACL does not report any competing interests.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nAcknowledgements\n\nThe authors thank Dr. Robert K. Shin (Department of Neurology, MedStar Georgetown University Hospital) for review of the initial draft of this manuscript.\n\n\nReferences\n\nRotstein DL, Healy BC, Malik MT, et al.: Evaluation of no evidence of disease activity in a 7-year longitudinal multiple sclerosis cohort. JAMA Neurol. 2015; 72(2): 152–158. PubMed Abstract | Publisher Full Text\n\nGiovannoni G: Multiple sclerosis should be treated using a step-down strategy rather than a step-up strategy-YES. Mult Scler. 2016; 22(11): 1397–1400. PubMed Abstract | Publisher Full Text\n\nZiemssen T, Derfuss T, de Stefano N, et al.: Optimizing treatment success in multiple sclerosis. J Neurol. 2016; 263(6): 1053–1065. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDamasceno A, Damasceno BP, Cendes F: No evidence of disease activity in multiple sclerosis: Implications on cognition and brain atrophy. Mult Scler. 2016; 22(1): 64–72. PubMed Abstract | Publisher Full Text\n\nSormani MP, Muraro PA, Saccardi R, et al.: NEDA status in highly active MS can be more easily obtained with autologous hematopoietic stem cell transplantation than other drugs. Mult Scler. 2017; 23(2): 201–204. PubMed Abstract | Publisher Full Text\n\nDadalti Fragoso Y: Why some of us do not like the expression “no evidence of disease activity” (NEDA) in multiple sclerosis. Mult Scler Relat Disord. 2015; 4(4): 383–4. 
PubMed Abstract | Publisher Full Text\n\nComabella M, Montalban X: Body fluid biomarkers in multiple sclerosis. Lancet Neurol. 2014; 13(1): 113–26. PubMed Abstract | Publisher Full Text\n\nKatsavos S, Anagnostouli M: Biomarkers in Multiple Sclerosis: An Up-to-Date Overview. Mult Scler Int. 2013; 2013: 340508. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTeunissen CE, Malekzadeh A, Leurs C, et al.: Body fluid biomarkers for multiple sclerosis--the long road to clinical application. Nat Rev Neurol. 2015; 11(10): 585–96. PubMed Abstract | Publisher Full Text\n\nBonnan M, Marasescu R, Demasles S, et al.: No evidence of disease activity (NEDA) in MS should include CSF biology - Towards a 'Disease-Free Status Score'. Mult Scler Relat Disord. 2017; 11: 51–55. PubMed Abstract | Publisher Full Text\n\nDatta G, Violante IR, Scott G, et al.: Translocator positron-emission tomography and magnetic resonance spectroscopic imaging of brain glial cell activation in multiple sclerosis. Mult Scler. 2016; 1352458516681504. PubMed Abstract | Publisher Full Text\n\nPoutiainen P, Jaronen M, Quintana FJ, et al.: Precision Medicine in Multiple Sclerosis: Future of PET Imaging of Inflammation and Reactive Astrocytes. Front Mol Neurosci. 2016; 9: 85. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAiras L, Rissanen E, Rinne J: Imaging of microglial activation in MS using PET: Research use and potential future clinical application. Mult Scler. 2017; 23(4): 496–504. PubMed Abstract | Publisher Full Text\n\nAlbrecht DS, Granziera C, Hooker JM, et al.: In Vivo Imaging of Human Neuroinflammation. ACS Chem Neurosci. 2016; 7(4): 470–83. PubMed Abstract | Publisher Full Text\n\nEnzinger C, Barkhof F, Ciccarelli O, et al.: Nonconventional MRI and microstructural cerebral changes in multiple sclerosis. Nat Rev Neurol. 2015; 11(12): 676–86. 
PubMed Abstract | Publisher Full Text\n\nLondoño AC, Mora CA: Nonconventional MRI biomarkers for In vivo monitoring of pathogenesis in multiple sclerosis. Neurol Neuroimmunol Neuroinflamm. 2014; 1(4): e45. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHood L: Systems biology and p4 medicine: past, present, and future. Rambam Maimonides Med J. 2013; 4(2): e0012. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRiera R, Porfírio GJ, Torloni MR: Alemtuzumab for multiple sclerosis. Cochrane Database Syst Rev. 2016; 4: CD011203. PubMed Abstract | Publisher Full Text\n\nde Flon P, Laurell K, Söderström L, et al.: Improved treatment satisfaction after switching therapy to rituximab in relapsing-remitting MS. Mult Scler. 2016; 1352458516676643. PubMed Abstract | Publisher Full Text\n\nMontalban X, Hauser SL, Kappos L, et al.: Ocrelizumab versus Placebo in Primary Progressive Multiple Sclerosis. N Engl J Med. 2017; 376(3): 209–220. PubMed Abstract | Publisher Full Text\n\nHerwerth M, Hemmer B: Daclizumab for the treatment of relapsing-remitting multiple sclerosis. Expert Opin Biol Ther. 2017; 1–7. PubMed Abstract | Publisher Full Text\n\nMuraro PA, Pasquini M, Atkins HL, et al.: Long-term Outcomes After Autologous Hematopoietic Stem Cell Transplantation for Multiple Sclerosis. JAMA Neurol. 2017; 74(4): 459–469. PubMed Abstract | Publisher Full Text"
}
|
[
{
"id": "22236",
"date": "02 May 2017",
"name": "Hans Lassmann",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis opinion article deals with a highly relevant topic, the validity of the concept of no evidence for disease activity \"NEDA\" in multiple sclerosis. Prevention of disease progression may possibly be achieved in patients, when treatment completely blocks ongoing disease activity and NEDA has been suggested as a tool to check, whether this is achieved in individual patients. However, clinical detection of disease activity, as suggested in the NEDA concept is far from being complete. Thus, this opinion article suggests to supplement current NEDA criteria with additional para-clinical markers. Such potential biomarkers are non-conventional new MRI sequences, magnetic resonance spectroscopy (N-acetyl aspartate) for evaluation of mitochondrial injury and biochemical markers such as chitinase 3, neurofilament or glial fibrillary acidic protein in the cerebrospinal fluid. Whether these additional markers will increase the reliability of NEDA criteria, will have to be established in prospective clinical studies. 
However, so far none of them are perfect para-clinical predictors of disease activity, and currently no tools are available to monitor major pathological substrates of disease progression in MS, such as the dynamic development of demyelinating lesions in the cortex, the expansion of pre-existing white matter lesions or the presence of more subtle changes within the normal appearing white and grey matter of the MS brain.\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Yes\n\nAre arguments sufficiently supported by evidence from the published literature? Yes\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Yes",
"responses": []
},
{
"id": "22238",
"date": "24 May 2017",
"name": "Declan Chard",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nWith a view to preventing disability by optimising treatment decisions, there has been growing interest in the concept of ‘no evidence of disease activity’ (NEDA) outcomes. In their opinion article Londoño and Moro consider the complexities of NEDA definitions and the role of such scores in clinical practice.\n\nThey authors remind us that the definition of NEDA as a marker of treatment failure and predictor of future clinical outcomes is subject to debate. While considering definitions of NEDA, it is perhaps also worth mentioning the work of Rio and colleagues (Multiple Sclerosis. 2009 Jul;15(7):848–53.), which suggests that radiological evidence of disease activity does not necessarily translate into a significantly increased short-term risk of clinical disease activity.\n\nThe authors consider the limitations of using EDSS scores as a marker of clinical progression, to which could perhaps be added the difficulties determining if an episode of symptoms is due to inflammation (Tallantyre et al. 20151) They also highlight that MRI measures of lesion accrual or brain atrophy may still overlook clinically relevant disease activity, and that fluid biomarkers may also provide relevant indicators of evolving pathology.\n\nOn reviewing the work of Rotstein et al. 
(2015)2 the authors note that ‘loss of NEDA status due to changes in expanded disability status scale (EDSS), which is a method of rating impairment in neurological functions excluding cognition, was infrequent.’ While less frequent than relapses, the figures for people with established MS shown in Table 2 (if I have read these correctly) suggest that by 7 years about 20% of those with evidence of clinical activity had progression without relapses (74% had evidence of either progression or relapse, 59% had had a relapse, implying that 15% had progression without a relapse).\n\nThe authors ‘support the concept that the methodology used to determine the stage of disease should be based on the measuring of the level of biomarkers involved in the inflammatory and neurodegenerative events of disease’. This touches on an interesting line of thought on differentiating MS subtypes, which clinically can be difficult (and I am not aware of any biomarkers that substantially improve on this on a person-by-person basis), and how this relates to definitions of NEDA or ongoing clinically relevant disease activity. With regard to clinical outcomes, while NEDA definitions include both relapses and disability progression, the two may not be closely linked (Vukusic and Confavreux 20073). As such, predictors of the risk of future relapses and risk of non-relapse associated progression may differ, and similarly composite scores designed to predict these outcomes may not necessarily be the same, or applicable to all MS subtypes equally. 
It would be interesting to hear more of the authors’ thoughts on this.\n\nIn the abstract, the authors state that ‘A protocol of evaluation based on the use of validated biomarkers that is conducted at an early stage of disease would permit the capture of abnormal neuroimmunological phenomena and lead towards intervention with modifying therapy before tissue damage has been reached.’ However, there are perhaps some qualifications to this if a biomarker protocol is going to be useful in clinical practice, for example that the biomarkers used need to be (alone or in combination) reliable markers of disease activity at the level of individual people with MS, and that the pathological processes they reflect can be effectively targeted by treatments.\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Partly\n\nAre arguments sufficiently supported by evidence from the published literature? Yes\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Yes",
"responses": [
{
"c_id": "2775",
"date": "20 Jun 2017",
"name": "Carlos Mora",
"role": "Author Response",
"response": "We strongly appreciate the comments and input provided by the reviewers of the first version of our article. With especial interest in the comments and questions posed by reviewer 2 (Dr Chard) we acknowledge the work of Rio and colleagues (Multiple Sclerosis. 2009 Jul;15(7):848-53) and we take into consideration the fact that although the mentioned study was limited to the effect of one disease modifying agent (IFNb given in four different commercially available preparations) on patients with the relapsing-remitting form of disease [RRMS] it showed that progression in at least two of the three variables analyzed (relapses, increase of disability or MRI activity) after 12 months would correlate with risk of progression of disease in the following years. However, this conclusion cannot be extrapolated to other groups of patients with RRMS who have happened to be treated with other disease modifying agents taking into consideration the different mode of action of these immune-modulators. In relation to the comment on our citation of Rotstein and colleagues (JAMA Neurol. 2015;72(2):152–158. 25531931 10.1001/jamaneurol.2014.3537) we extended the content of the sentence in the text reflecting that the loss of NEDA status due to changes in EDSS was infrequent in comparison to the level of determination provided by the clinical relapsing and imaging-related biomarkers. As for the invitation to comment on how the different MS sub-types may correlate to definitions of NEDA or ongoing clinically relevant disease activity we concur with the fact that although NEDA definitions include both relapses and disability progression, the two may not be closely linked and, therefore, a different set of biomarkers ought to be validated to determine forthcoming inflammation and/or neurodegeneration. On the other hand, differentiation of sub-types of disease (especially in reference to inflammatory vs. 
degenerative pathologic processes) will be feasible with the forthcoming application of biomarkers in MS-related clinical practice. NEDA may not be a useful tool for the evaluation of patients with PPMS, given that this sub-type of MS is characterized by ongoing disability and minimal inflammatory activity in MRI. At present, with the development of new medications for the treatment of PPMS, it would be essential to have measurable biomarkers available that would allow objective determination of response to therapies."
}
]
}
] | 1
|
https://f1000research.com/articles/6-566
|
https://f1000research.com/articles/6-941/v1
|
19 Jun 17
|
{
"type": "Research Note",
"title": "The antimicrobial activity of plants in the vicinity of a geothermal area in Perak, Malaysia",
"authors": [
"Yuhanis Mhd Bakri",
"Saripah Salbiah Syed Abd Azizz",
"Munirah Abdul Talib",
"Fatimah Mohamed",
"Saripah Salbiah Syed Abd Azizz",
"Munirah Abdul Talib",
"Fatimah Mohamed"
],
"abstract": "We wish to report the study of the antimicrobial activity of plants collected in the vicinity of a geothermal area in Perak, Malaysia. This is the first report of these species in the vicinity of thextis geothermal area. The plants are Cleome icosandra, locally known as Maman Pasir of the family Cleomaceae, and Stachytarpheta species of the family Lamiaceae. Both are subshrubs and are believed to have specific biological activities as a result of living in such extreme areas. Methanol extracts of both plants revealed no antimicrobial activity against Escherichia coli, Staphylococcus aureus and Pseudomonas aeruginosa.",
"keywords": [
"geothermal",
"Cleomaceae",
"Lamiaceae",
"antimicrobial"
],
"content": "Introduction\n\nExtreme environments provide a new scaffold for the study of natural products for drug discovery. These environments are a challenge to live in, therefore the plant species that are found there must have specific survival mechanisms. This increases the possibility of discovering unique biologically active compounds. The polar region is an example of an extreme environment that has been explored. In McMurdo Sound, Antarctica, benthic invertebrates are exposed to significant predation by sea stars and potentially infectious water-column microorganisms, which suggests that chemically-mediated defense strategies would be an advantage. For example, discorhabdin C (Figure 1), found in the sponge Lantrunculia apicalis, is alkaloid and possesses a unique structure with antitumor and antimicrobial activity, and plays a role in the sea star tube-foot retraction assay1–3.\n\nAlkaloid discorhabdin C found in the sponge Lantrunculia apicalis.\n\nMalaysia, although the climate is tropical, is rich in various geography, including geothermal areas in the form of hot springs. The biodiversity of geothermal areas in Malaysia has yet to be fully studied, therefore, an opportunity to investigate the bioactivities of plants that grow near geothermal areas, has risen. Plants that survive here are unique because of their ability to adapt to extreme temperature. The most heat-tolerant plants are mosses and lichens, which can survive ground temperatures of 70°C. Prostrate kanuka (Kunzea ericoides var. microflorum) is a low, spreading variety of the kanuka shrub that only grows in geothermal areas. It can tolerate ground temperatures of up to 55°C4. Additionally, grasses such as Dichanthelium lanuginosum and Paspalum laeve Michx were also found in geothermally heated environments, namely in the Yellowstone National Park5 and Hot Springs National Park, Arkansas6, respectively. 
Locally, Hyptis suaveolens, located within the hot spring enclosure in the Tambun area of Perak, is used to combat fever, and soothe headaches and skin rashes7.\n\nAmong the data presented in the 2nd edition of the National Antibiotic Guideline in conjunction with the Annual Scientific Meeting on Antimicrobial Resistance in 2015 in Putrajaya, Malaysia, was resistance of Escherichia coli to Ampicillin, which had risen to more than 50 percent. Streptococcus pneumoniae resistance towards Erythromycin had increased from 18.2 percent in 2013 to 28.1 percent in 2014, and Acinetobacter baumannii resistance against Meropenem had increased from 47.7 percent to 57.3 percent within seven years8,9. Additionally, P.aeruginosa is generally resistant to a large range of antibiotics and may demonstrate additional resistance after unsuccessful treatment10. Hence, research for the discovery of new antibiotics and drugs against these bacteria is very much needed.\n\n\nMethods\n\nPlants were collected by hand on the 26th of April 2016 in the vicinity of Ulu Slim, Perak, Malaysia (Figure 2). It should be noted that these plants do not have flowers or any significant morphology that allows their identification, so identifying them was a challenge.\n\nC.icosandra and Stachytarpheta sp. whole plant samples were washed with distilled water, dried, then extracted with methanol for five days at room temperature. 0.51 g of C.icosandra and 0.45 g of Stachytarpheta sp. were used for the extraction, from which the crude extract was obtained. A total of 56.8 mg crude extract of C.icosandra and 65.6 mg crude extract of Stachytarpheta sp. were obtained after solvent removal using a rotary evaporator.\n\nThe antimicrobial assay used the agar disc diffusion (Kirby-Bauer) method11,12. The assay consists of inoculating an agar plate with the chosen microorganisms, followed by placement of paper discs impregnated with a known concentration of antibiotic. 
After incubation, the inhibition zone is measured.\n\nIn our assay, three bacterial isolates of E.coli, S.aureus and P.aeruginosa were grown in nutrient broth at 37°C overnight, using a shaking incubator. The three bacterial isolates were subcultured from an existing culture, so we are unable to state the strains of the bacterial isolates. Plates were swabbed with cotton wool impregnated with the microorganisms. Individual filter paper discs (Whatman, Cat No 1001 110; 6 mm diameter) were saturated with 10 µL of solution containing approximately 1 mg/mL of crude extract and dried for 15 minutes in the laminar flow cabinet before they were placed on top of the cultured nutrient agar. The plates were then sealed with parafilm and inverted before being incubated at 37°C overnight. After incubation, the inhibition halo was measured with a ruler (±0.5 mm); the measured distance was from the edge of the paper disc to the widest part of the inhibition halo. The average distance across the inhibition halos from the filter paper was taken as the level of inhibition of bacterial growth for each sample.\n\n\nResults and discussion\n\nBoth plants were collected in the vicinity of the geothermal site at Ulu Slim, Perak, Malaysia. Water temperature was 90°C at the time of collection. Upon collection, plants were immediately identified by a botanist and then underwent methanol extraction. One plant was identified as C.icosandra (also called C.viscosa), locally known as Maman Pasir, of the family Cleomaceae. In addition, Stachytarpheta species of the family Lamiaceae were also collected. Further species identification needs to be conducted for this particular plant; it was identified based on plant morphology and taxonomy. It should be noted that both plants are very small, subshrub-like and consist only of leaves, with no flowers. We believe that these plants may possess unique, biologically active compounds. 
We proceeded to evaluate their antimicrobial activity, but crude methanol extracts of both plants revealed no activity against the three tested bacteria, E.coli, S.aureus and P.aeruginosa (Table 1, Figure 3).\n\n(-) = no inhibition.\n\nThe figure displays petri dishes containing filter paper discs with saturated crude methanol extracts of C.icosandra (C) and Stachytarpheta sp. (D), in (i) E.coli, (ii) S. aureus and (iii) P. aeruginosa. No inhibition halo was observed.\n\n\nConclusions\n\nThis is the first record taken of plants in the vicinity of the geothermal area in Ulu Slim, Perak, Malaysia. Plants in these unexploited, extreme environments should be further studied for their potential in drug discovery.\n\n\nData availability\n\nAll data required to re-analyse the study have been provided in the main body of the text.",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nAcknowledgements\n\nThe authors thank Universiti Pendidikan Sultan Idris for providing the laboratory facilities.\n\n\nReferences\n\nAmsler CD, McClintock JB, Baker BJ: Secondary metabolites as mediators of trophic interactions among Antarctic marine organisms. American Zoologist. 2001; 41(1): 17–26. Publisher Full Text\n\nWada Y, Fujioka H, Kita Y: Synthesis of the marine pyrroloiminoquinone alkaloids, discorhabdins. Mar Drugs. 2010; 8(4): 1394–1416. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPerry NB, Blunt JW, McCombs JD, et al.: Discorhabdin C, a highly cytotoxic pigment from a sponge of the genus Latrunculia. J Org Chem. 1986; 51(26): 5476–5478. Publisher Full Text\n\nhttp://www.teara.govt.nz/en/hot-springs-mud-pools-and-geysers.\n\nStout RG, Al-Niemi TS: Heat-tolerant flowering plants of active geothermal areas in Yellowstone National Park. Ann Bot. 2002; 90(2): 259–267. PubMed Abstract | Publisher Full Text | Free Full Text\n\nScully FJ: Grasses of Hot Springs National Park and Vicinity. Rhodora Journal. 1992; 44.\n\nWiart C: Medicinal plants of the Asia-Pacific: drugs for the future? World Scientific. 2006; 527. Reference Source\n\nBernama: Malaysian hospitals at forefront in war against antimicrobial resistance: Subramaniam. The Sun Daily. 2015. Reference Source\n\nMinistry of Health, Malaysia: Garis panduan antibiotik kebangsaan – edisi kedua bersempena Annual Scientific Meeting on Antimicrobial Resistance 2015. Rintangan antibiotik (antibiotic resistance) ancaman kepada kesihatan global. [Press release]. 2015. Reference Source\n\nLister PD, Wolter DJ, Hanson ND: Antibacterial-Resistant Pseudomonas aeruginosa: Clinical Impact and Complex Regulation of Chromosomally Encoded Resistance Mechanisms. Clin Microbiol Rev. 2009; 22(4): 582–610. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nBauer AW, Kirby WM, Sherris JC, et al.: Antibiotic susceptibility testing by a standardized single disk method. Am J Clin Pathol. 1966; 45(4): 493–496. PubMed Abstract\n\nSati SC, Khulbe K, Joshi S: Antibacterial Evaluation of the Himalayan medicinal plant Valeriana wallichii DC. (Valerianaceae). Res J Microbiol. 2011; 6(3): 289–296. Publisher Full Text"
}
|
[
{
"id": "24658",
"date": "10 Aug 2017",
"name": "Mohamed A. Ghandourah",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe manuscript by Bakri et al. describes the evaluation of antimicrobial activity of two plant extracts collected from geothermal area in Malaysia. However, several points should be corrected/clarified before acceptance.\n1- The criteria for collection of those two plants; the existence in an extreme area is not a guarantee for antimicrobial activity. 2- The introduction section should survey previous work on the plants under investigation. 3- The name of the botanist with complete affiliation in addition to herbarium number should be mentioned. 4- The exp. section is not complete. 5- Please report the diameter of inhibition zones of each test. 6- No need to mention the structure of cpd isolated from marine organism.\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nIs the study design appropriate and is the work technically sound? Partly\n\nAre sufficient details of methods and analysis provided to allow replication by others? Partly\n\nIf applicable, is the statistical analysis and its interpretation appropriate? I cannot comment. A qualified statistician is required.\n\nAre all the source data underlying the results available to ensure full reproducibility? Partly\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
},
{
"id": "24657",
"date": "10 Aug 2017",
"name": "Anang W M Diah",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis paper report is very interesting as it discusses on the antimicrobial activity of the plants from a specific area in Perak, Malaysia. Although the authors tried a simple experiment as the first study to these plants extracts, further investigations need to be done to evaluate this. The extraction method used is appropriate but the authors should explain why choosing only the methanol solvent. Other researchers have different solvents to extract these families’ plants such as ethanol[ref1]-2. Different solvents should have different level of metabolite compounds. In addition, the discussion in the paper is too short to explain the results. The authors should explain Figure 3 with the notation of B and N(-) in those dishes. What are these? If B and N(-) as the controls, these should be mentioned in the experiment part and explained in discussion. These can be used to compare the inhibition zone between the methanol extracts and why they perform similar result.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate? Yes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-941
|
https://f1000research.com/articles/6-373/v1
|
28 Mar 17
|
{
"type": "Case Report",
"title": "Case Report: Diagnosis of hypogeusia after oral exposure to commercial cleaning agent and considerations for clinical taste testing",
"authors": [
"Marie Jetté",
"Catherine Anderson",
"Vijay Ramakrishnan",
"Marie Jetté",
"Catherine Anderson"
],
"abstract": "Few reports in the literature document acute taste disturbance following exposure to toxic chemicals. We describe the case of a 54-year-old man who presented with primary complaint of tongue numbness and persistent problems with taste 1.5 years following oral exposure to a commercial cleaning agent. A test of olfaction revealed normosmia for age and gender. Lingual tactile two-point discrimination testing showed reduced somatosensation. Taste threshold testing using a 3-drop method demonstrated severe hypogeusia, though the patient was able to discriminate tastants at lower concentrations with a whole mouth swish and spit test. We conclude that clinical evaluation of dysgeusia can be performed using a number of previously published testing methods, however, determining causative factors may be confounded by duration since exposure, lack of knowledge of baseline taste function, and medications. Although many testing options exist, basic taste testing can be performed with minimal expertise or specialized equipment, depending on the patient history and goals of evaluation.",
"keywords": [
"dysgeusia",
"hypogeusia",
"taste",
"tongue",
"burning mouth syndrome"
],
"content": "Introduction\n\nDisordered taste, referred to as dysgeusia, can lead to dramatic changes in weight and reduce quality of life. Causes for dysgeusia are variable and include primary medical disorders1, medication side effects2, chemicals and toxins3, local disorders of the mouth4, insufficient production of saliva5, and gastroesophageal reflux disease. Here we describe a case of hypogeusia following oral contact with a commercial cleaning agent, and discuss diagnostic considerations in determining the nature and extent of taste dysfunction.\n\n\nPatient information\n\nA 54-year-old Caucasian man presented with complaints of a dull sensation of the tongue, describing that his “taste seems off,” associated with numbness and occasional dry mouth. The patient reported that the taste disturbance began 1.5 years prior to presentation, immediately after accidental oral exposure to a cleaning agent, which he gargled with and immediately expectorated. At the time of injury he noted tongue numbness and throat irritation, and subsequently reported to the emergency department. There, examination of the oral cavity revealed no findings indicative of serious mucosal injury such as significant swelling, deep ulceration, or chemical burn. At an urgent dental visit one day following the incident, the patient reported tongue burning and tingling sensations, with increased tooth and gum sensitivity, and the dentist’s examination noted superficial irritation on the tongue and cheeks along with pharyngeal erythema. The patient was seen by an otolaryngologist approximately 3 weeks following the incident for complaints of burning and soreness of the tongue and loss of taste. His physical exam was notable for an area of mild erythema of the right lateral tongue, tender to palpation but without ulceration, and he was prescribed a supersaturated calcium phosphate rinse for oral mucositis and topical benzocaine for symptomatic relief. 
Approximately 6 weeks following the incident, the patient was seen in follow-up by the otolaryngologist for tongue soreness. At that time, examination revealed erythema and shallow erosion/ulceration of the dorsal tongue just anterior to the foramen cecum, and light erythema of the lateral tongue. The patient was diagnosed with candidiasis and prescribed nystatin. A follow-up appointment approximately 9 weeks after the incident showed resolution of physical exam findings, however, he continued to have symptoms.\n\nThe patient presented to our department 1.5 years after the injury, for evaluation of persistent subtotal loss of tongue sensation, both taste and somatosensation. During this visit, a complete head and neck examination was performed and was essentially normal, specifically with normal ear examination and intact cranial nerve exam. Intraoral examination demonstrated normal mucosa with no evidence of erythema or ulceration (Figures 1C, 1E, 1F). A shallow fissure was noted along the medial surface of the tongue (Figures 1A, 1D). An atrophic patch was noted on the posterior right tongue (Figure 1B), possibly consistent with geographic tongue. The patient’s medical history was significant for hypertension and his surgical history was unremarkable. His body mass index was 25.8. Medications at the time of our evaluation included aspirin 81mg, lisinopril, naproxen, and various vitamins and supplements including zinc.\n\nThe oral mucosa appears normal, though there is some evidence of possible geographic tongue. Panels 1A and 1D show a slight fissure (arrow). Panel 1B demonstrates an atrophic patch (arrow) consistent with possible geographic tongue. The circumvallate papillae are intact as demonstrated in 1C (arrows). 
The lateral lingual mucosa appears normal (1E, 1F).\n\nA series of tests was assembled and administered to determine the nature and extent of the patient’s dysgeusia, including: two-cup forced choice test, two-point tactile discrimination, trigeminal nerve testing, three-drop taste threshold testing, and swish and spit whole mouth taste testing (Table 1).\n\n\nDiagnostic assessment\n\nGiven that much of a patient’s subjective perception of “taste” usually involves a contribution of odor via the olfactory system6, screening of olfactory function is generally performed in a taste evaluation. The extent to which this patient’s dysgeusia was influenced by smell was assessed using the Smell Identification Test (SIT; Sensonics International, Haddon Heights, N.J.). The patient scored 34/40 correct responses on this test, putting him in the 33rd percentile for age and gender, categorized as normosmia7. This finding indicates that disordered smell is not a contributing factor in this patient’s dysgeusia.\n\nTwo-point discrimination is a common neurological test of somatosensory function, and here it was utilized to assess the tongue. Calipers were used to present either one or two tactile stimuli to the tongue and the patient was asked to indicate if he felt one point or two. An ascending threshold was established by beginning with the detection points at 2 mm and gradually moving them apart until the patient perceived two distinct points. This was completed in each of four quadrants of the tongue including left posterior, right posterior, left anterior, and right anterior until the two-point detection threshold was reached. 
Two-point detection thresholds were as follows:\n\nleft posterior = >23 mm,\n\nright posterior = 22 mm,\n\nleft anterior = 18mm,\n\nright anterior = 16 mm.\n\nThese thresholds were notably elevated relative to published means of 1.09 mm in the anterior region, 2.64 mm in the canine region, and 8.08 mm in the posterior region8.\n\nMany chemicals that stimulate taste buds also stimulate trigeminal neurons when presented at high concentrations9. Conversely, compounds such as capsaicin and mustard oil elicit responses from trigeminal fibers but not from taste cells. Therefore, to test trigeminal involvement in the patient’s dysgeusia, small boluses of mustard and chili pepper sauce were presented across the dorsal surface of the tongue using a cotton tip applicator and the patient was asked to describe his perception of each. The patient was able to detect both stimuli, describing the mustard as “sour” and guessing correctly that it was mustard, and describing the chili pepper sauce as “hot” and “spicy”. These results indicate that trigeminal responses detected by nociceptive nerve fibers of the tongue were intact.\n\nTo distinguish ageusia from malingering, the patient was presented with 6 trials of a forced choice task, whereby the patient was asked to swish 10 ml of a detectable tastant concentration (the highest concentrations of sucrose and sodium chloride as outlined in Table 2) or its diluent (distilled water) and determine which contained the tastant. Sweet taste (i.e. detection of sucrose) is considered the most robust across the lifespan, and coincidentally this tastant is readily available10. In 6/6 trials, the patient correctly indicated the cup containing the tastant. 
If the subject was truly ageusic and guessing at random, they would guess the correct cup 50% of the time, whereas scores deviating from 50 percent in either direction would indicate nonchance-level performance; scores well below chance suggest that the test taker knew the correct answer but purposely selected the wrong one.11 This quick test can be performed six times in a row if malingering is suspected, as there is only a 1 in 64 chance that an ageusic subject would guess incorrectly six times in a row.\n\n1=mild, 2=mild-moderate, 3=moderate-strong, 4=strong. g/ml indicates grams of dry tastant per milliliter of distilled water, and M indicates molarity.\n\nThree-drop taste threshold test. A forced choice taste test12,13 was used to determine detection thresholds of four tastants including sucrose (sweet), sodium chloride (salty), citric acid (sour) and quinine hydrochloride (bitter), with concentrations shown in Table 2. Each trial consisted of 3 stimuli presented in a pseudorandom order at room temperature; one stimulus was the tastant and the other two were diluent (distilled water). To present the stimuli, a single drop of liquid was squeezed from a 3 ml plastic transfer pipette onto the center of the tongue dorsum, approximately 1.5 cm from the tip. The patient was asked to close his mouth following each drop and indicate which drop (the first, second, or third) had the tastant. Testing began with the highest (4) concentration of each stimulus and, if detected, proceeded with the next lower (3) concentration. If the next lower (3) concentration was not detected, the highest (4) concentration was presented in the following trial, and this cycle was repeated until there were 3 consecutive detections of a given concentration. Between each trial, the patient rinsed his mouth with tap water.\n\nThe patient was unable to detect the highest concentration of sucrose, indicating that his threshold for sweet detection is greater than 0.4 g/ml (1.17 M). 
Salty, bitter, and sour stimuli were consistently detected at the highest (4) concentrations only. A follow-up swish and spit forced-choice test comparing the lowest (1) concentrations of sour and sweet tastants to distilled water was administered to determine if whole mouth discrimination, where posterior as well as anterior taste buds can detect the tastant, would be different from single drop discrimination. The patient immediately discriminated both low concentration solutions and correctly identified the sweet stimulus as “mildly sweet” and the sour stimulus as “sour”. These findings suggest that the patient’s hypogeusia is more severe in the region of the anterior tongue where taste buds are more exposed.\n\n\nDiscussion\n\nTaste buds in the anterior tongue are superficial receptor end organs, making them susceptible to direct chemical injury; however, reports of toxin-induced dysgeusia in the literature are rare even in surveys of specialty clinics. Smith et al.3 described a patient with severe hypogeusia after oral contact with ammonia that resulted in a chemical burn. Workers exposed to hydrocarbons have reported subjective disturbances in taste14, and taste thresholds are elevated in workers exposed to dichromate, chromic acid, and zinc chromate15. In our patient, a bathroom cleaner containing a proprietary organic salt made contact with the oral mucosa; based on comparison to similar cleaning materials, the presumed compound was likely urea sulfate, a mildly corrosive salt that rapidly breaks down into urea and sulfuric acid.\n\nThere are multiple potential reasons for the paucity of reported cases of chemical-induced taste dysfunction. First, clinical assessment of taste is not as standardized a practice as testing of other senses such as hearing and smell; therefore, patients may not be objectively evaluated to determine the degree of taste dysfunction until several months or years have passed since the initial exposure. 
This obscures identification of a particular chemical or toxin as the causative agent. Further, taste cells are continually renewed16, so a superficial mucosal injury will likely resolve over time as damaged cells are replaced. Finally, several factors known to be associated with dysgeusia may confound determination of cause and effect, including various medications5, comorbid conditions like oral candidiasis4, diabetes5, hypothyroidism5, Sjogren’s syndrome17, and age18.\n\nThe constellation of symptoms and signs reported by the patient in this case may also be consistent with some features associated with burning mouth syndrome (BMS). BMS affects the oral mucosa, lips and/or tongue and is characterized by the sensation of burning, tingling, or numbness in the absence of visible inflammation or lesions19. BMS has been associated with two factors detailed in this report: angiotensin converting enzyme (ACE) inhibitors19 and oral candidiasis19. ACE inhibitors inhibit zinc action in the salivary glands and taste receptor cells thereby reducing saliva production and affecting taste20. Brown et al.21 described two cases of BMS associated with ACE inhibitors that subsequently improved after changes in drug therapy. Oral candidiasis caused by Candida albicans is a common fungal infection and estimates indicate that anywhere from 36–60%22,23 of the population are carriers without clinical symptoms. Sakashita et al.24, reported that 70% of Candida carriers in their 50’s demonstrated dysgeusia. The patient presented herein was treated for suspected (although unconfirmed) oral candidiasis approximately 6 weeks following reported onset of taste loss, suggesting that he may be a carrier for Candida albicans.\n\nThoroughly investigating the cause of a patient’s dysgeusia allows the clinician to offer potential treatment options, but it is equally important to document and determine the extent of the taste disturbance in order to track a patient’s recovery over time. 
Quantitative taste testing is relatively easy to perform in the clinic and should be used to assess taste function of patients complaining of both acute and chronic alterations in taste. Subjective patient reports may exaggerate or minimize the nature and extent of dysgeusia, and testing can determine whether the degree of dysfunction is normal relative to age. This information can be used for patient counseling, and may also be helpful to evaluate longitudinal improvement in an objective fashion. It is also possible to detect malingering by administration of simple, forced-choice tests.\n\nGiven the chronic nature of the patient’s hypogeusia, we expect that it is unlikely that he will experience spontaneous resolution at this time point. If his taste dysfunction was truly a direct result of chemical injury to the oral cavity mucosa, the gustatory system exhibits a robust regenerative capacity, with taste cell renewal every 10–14 days (see Barlow, 201516 for a review). This suggests that function should have recovered within months of the incident. Medical treatment options that could be attempted include discontinuing lisinopril and trialing a different (nonsulfhydryl) ACE inhibitor20, or dietary supplementation with selenium methionine25.\n\nWe employed multiple tests targeting somatosensory function, trigeminal nerve response, taste thresholds for sweet, sour, bitter, and salty, and odor detection. There are additional tests reported in the literature that can also be used for measuring dysgeusia, some of which are commercially available. These include electrogustometry26 (e.g., TR-06 Rion Electrogustometer, Sensonics, Inc.), Taste Strips (Burghart), filter paper discs27, taste tablets28, and subjective health-related quality-of-life questionnaires. 
For a standardized taste intensity testing protocol, clinicians may opt to follow the technical manual and scoring and interpretation guidelines included in the National Institutes of Health (NIH) Toolbox, as established by the National Health and Nutrition Examination Survey (NHANES)29. As a very basic measure, it is also possible to create testing solutions from store-bought sugar (low concentration=3 tsp; high concentration=24 tsp) and salt (low concentration=2 tsp; high concentration=14 tsp) dissolved in 8 oz (237 ml) distilled water to perform both tongue tip and whole mouth testing.\n\n\nConclusion\n\nThis case report adds to the limited literature on toxin-induced hypogeusia following oral exposure. We promote the concept that taste testing is relatively easy to perform and should be completed as soon as possible following an incident in order to determine the extent of injury and track improvement in function over time.\n\n\nConsent\n\nWritten informed consent for publication of their clinical details and clinical images was obtained from the patient.",
"appendix": "Author contributions\n\n\n\nAll authors designed the diagnostic battery. VRR administered the testing. MJ prepared the manuscript. All authors were involved in the revision of the draft manuscript and have agreed to the final content.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nResearch reported in this publication was supported by the National Institute on Deafness and Other Communication Disorders of the National Institutes of Health under award numbers K23DC014747 and T32DC012280.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nWe acknowledge Sue Kinnamon, PhD and Tom Finger, PhD, of the Rocky Mountain Taste and Smell Center, as well as members of the Kinnamon Lab including Aurelie Vandenbeuch, PhD, Eric Larson, PhD, Courtney Wilson, and Kyndal Davis for thoughtful commentary on this case.\n\n\nReferences\n\nHeckmann JG, Lang CJ: Neurological causes of taste disorders. In: Hummel, T, Welge-Lüssen A, eds. Taste and Smell: An Update. Basel: Karger; Adv Otorhinolaryngol. 2006; 63: 255–264. PubMed Abstract | Publisher Full Text\n\nFetting JH, Wilcox PM, Sheidler VR, et al.: Tastes associated with parenteral chemotherapy for breast cancer. Cancer Treat Rep. 1985; 69(11): 1249–1251. PubMed Abstract\n\nSmith WM, Davidson TM, Murphy C: Toxin-induced chemosensory dysfunction: a case series and review. Am J Rhinol Allergy. 2009; 23(6): 578–581. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGrushka M, Ching V, Epstein J: Burning mouth syndrome. In: Hummel, T, Welge-Lussen A, eds. Taste and Smell: An Update. Basel: Karger; Adv Otorhinolaryngol. 2006; 63: 278–287. PubMed Abstract | Publisher Full Text\n\nReiter ER, DiNardo LJ, Costanzo RM: Toxic effects on gustatory function. In: Hummel T, Welge-Lussen A, eds. Taste and Smell: An Update. Basel: Karger; Adv Otorhinolaryngol. 2006; 63: 265–277. 
PubMed Abstract | Publisher Full Text\n\nSumner D: Post-traumatic ageusia. Brain. 1967; 90(1): 187–202. PubMed Abstract | Publisher Full Text\n\nDoty RL, Shaman P, Kimmelman CP, et al.: University of Pennsylvania Smell Identification Test: a rapid quantitative olfactory function test for the clinic. Laryngoscope. 1984; 94(2 Pt 1): 176–178. PubMed Abstract | Publisher Full Text\n\nMinato A, Ono T, Miyamoto JJ, et al.: Preferred chewing side-dependent two-point discrimination and cortical activation pattern of tactile tongue sensation. Behav Brain Res. 2009; 203(1): 118–126. PubMed Abstract | Publisher Full Text\n\nSimon SA, Wang Y: Chemical Responses of Lingual Nerves and Lingual Epithelia. In: Mechanisms of Taste Transduction. Ann Arbor, MI: CRC Press, Inc.; 2003; 225–252. Reference Source\n\nYamauchi Y, Endo S, Yoshimura I: A new whole-mouth gustatory test procedure. II. Effects of aging, gender and smoking. Acta Otolaryngol Suppl. 2002; (546): 49–59. PubMed Abstract | Publisher Full Text\n\nGreve KW, Bianchini KJ, Ameduri CJ: Use of a forced-choice test of tactile discrimination in the evaluation of functional sensory loss: a report of 3 cases. Arch Phys Med Rehabil. 2003; 84(8): 1233–1236. PubMed Abstract | Publisher Full Text\n\nHenkin RI, Gill JR, Bartter FC: Studies on Taste Thresholds in Normal Man and in Patients with Adrenal Cortical Insufficiency: The Role of Adrenal Cortical Steroids and of Serum Sodium Concentration. J Clin Invest. 1963; 42(5): 727–735. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMueller C, Kallert S, Renner B, et al.: Quantitative assessment of gustatory function in a clinical context using impregnated \"taste strips\". Rhinology. 2003; 41(1): 2–6. PubMed Abstract\n\nHotz P, Tschopp A, Söderström D, et al.: Smell or taste disturbances, neurological symptoms, and hydrocarbon exposure. Int Arch Occup Environ Health. 1992; 63(8): 525–530. 
PubMed Abstract | Publisher Full Text\n\nSeeber H, Fikentscher R: [Taste disorders in chromium exposed workers]. Z Gesamte Hyg. 1990; 36(1): 33–34. PubMed Abstract\n\nBarlow LA: Progress and renewal in gustation: new insights into taste bud development. Development. 2015; 142(21): 3620–3629. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHenkin RI, Talal N, Larson AL, et al.: Abnormalities of taste and smell in Sjogren's syndrome. Ann Intern Med. 1972; 76(3): 375–383. PubMed Abstract | Publisher Full Text\n\nPlattig KH, Kobal G, Thumfart W: [The chemical senses of smell and taste in the course of life - changes of smell and taste perception]. Z Gerontol. 1980; 13(2): 149–157. PubMed Abstract\n\nGrushka M, Epstein JB, Gorsky M: Burning mouth syndrome. Am Fam Physician. 2002; 65(4): 615–620. PubMed Abstract\n\nAckerman BH, Kasbekar N: Disturbances of taste and smell induced by drugs. Pharmacotherapy. 1997; 17(3): 482–496. PubMed Abstract\n\nBrown RS, Krakow AM, Douglas T, et al.: \"Scalded mouth syndrome\" caused by angiotensin converting enzyme inhibitors: two case report. Oral Surg Oral Med Oral Pathol Oral Radiol Endod. 1997; 83(6): 665–667. PubMed Abstract | Publisher Full Text\n\nWright BA, Fenwick F: Candidiasis and atrophic tongue lesions. Oral Surg Oral Med Oral Pathol. 1981; 51(1): 55–61. PubMed Abstract | Publisher Full Text\n\nFotos PG, Vincent SD, Hellstein JW: Oral candidosis. Clinical, historical, and therapeutic features of 100 cases. Oral Surg Oral Med Oral Pathol. 1992; 74(1): 41–49. PubMed Abstract | Publisher Full Text\n\nSakashita S, Takayama K, Nishioka K, et al.: Taste disorders in healthy \"carriers\" and \"non-carriers\" of Candida albicans and in patients with candidosis of the tongue. J Dermatol. 2004; 31(11): 890–897. PubMed Abstract | Publisher Full Text\n\nZazgornik J, Kaiser W, Biesenbach G: Captopril-induced dysgeusia. Lancet. 1993; 341(8859): 1542. 
PubMed Abstract | Publisher Full Text\n\nStillman JA, Morton RP, Hay KD, et al.: Electrogustometry: strengths, weaknesses, and clinical evidence of stimulus boundaries. Clin Otolaryngol Allied Sci. 2003; 28(5): 406–410. PubMed Abstract | Publisher Full Text\n\nSato K, Endo S, Tomita H: Sensitivity of three loci on the tongue and soft palate to four basic tastes in smokers and non-smokers. Acta Otolaryngol Suppl. 2002; (546): 74–82. PubMed Abstract | Publisher Full Text\n\nAhne G, Erras A, Hummel T, et al.: Assessment of gustatory function by means of tasting tablets. Laryngoscope. 2000; 110(8): 1396–1401. PubMed Abstract | Publisher Full Text\n\nTaste and Smell Examination Component Manual. National Health and Nutrition Examination Survey. 2013; Accessed August 1, 2016. Reference Source"
}
|
[
{
"id": "21955",
"date": "19 Apr 2017",
"name": "Claudio Augusto Marroni",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nFirstly, we would like to congratulate the authors for the choice of the case, an interesting subject of great scientific relevance. The structuring of the case report is well documented and with a helpful review of the scientific literature on the hypotheses of the patient's clinical condition against hypogeusia.\nWe have developed a line of argument on the subject, raised some issues that we consider important to contribute to the understanding of the case.\nThe tongue is a muscular organ located on the floor of the mouth, its root is the posterior part, the apex lies in the anterior extremity and the dorsum is divided by a middle groove in symmetrical halves. It contributes to chewing by placing food between the teeth and plays an important role in deglutition and articulation of speech sounds (1).\nThe shape of the tongue presents a characteristic appearance due to the presence of small protrusions, called lingual or gustatory papillae, that are distributed in great quantity before the terminal sulcus. The taste buds are small appendages filled with sensory cells. These cells are attached to our brain by nerve fibers (1).\nThe papillae of the tongue are composed of connective tissue covered by stratified squamous epithelium and, by their appearance, are classified into four types: filiform, fungiform, goblet or circumvalate and foliaceous or foliate (1).\nThe taste buds are responsible for the palate. There are names for disturbances involving the taste buds. 
When taste sensitivity is diminished, it is classified as hypogeusia; altered taste sensitivity is called dysgeusia and, finally, ageusia is the absence of taste (2).\nDecreased taste may be associated with clinical malnutrition in diabetes mellitus, Cushing's disease, systemic arterial hypertension, obesity, adrenalectomy, neoplasias and chronic diseases, drug consumption, radiotherapy and surgical interventions, aging, saliva composition, smoking and eating habits, mood, and feelings of hunger or satiety (2).\nThe reduction of the foliate papillae occurs significantly with age, and the dysgeusia of the evaluated patient is located in the same region as these papillae.\nDysgeusia is directly related to the characteristics of the taste buds. The taste buds contain four types of cells, types I, II, III and IV, which are responsible for the transduction of the taste signal and, depending on the disease that the patient may present, these cells may be disrupted.\n\nType I Cells\nType I cells are the most abundant in the mammalian gustatory system (3) and are responsible for the hydrolysis of a portion of extracellular ATP, which, like glutamate, is a taste neurotransmitter. Type I cells appear to be involved in terminating synaptic transmission and restricting the spread of the transmitter, a role performed in the central nervous system by glial cells (4).\nIn addition, type I cells contribute to homeostasis through K+ channels (2). During prolonged trains of action potentials evoked by intense taste stimulation, K+ accumulates in the limited interstitial spaces of the taste bud and decreases the excitability of the other cell types; type I cells appear to clear this K+. Thus, type I cells appear to function as glial cells in taste buds (3).\nType I cells may also have ionic currents implicated in taste signal transduction (4). 
Although it is the most abundant cell type in the taste bud, very little is known about type I cells.\n\nType II Cells\nType II cells function within the taste buds; the receptors that bind bitter, sweet, or umami compounds are incorporated into the plasma membrane of these cells. These taste receptors are G protein-coupled receptors (GPCRs) with seven transmembrane domains, acting through second-messenger cascades (4).\nType II cells express the Na+ and K+ ion channels essential for the production of action potentials, as well as structural subunits of the channels responsible for ATP secretion. Any given type II cell expresses GPCRs of a single taste-specific family, identifying one taste quality, such as sweet or bitter, but not both (4).\nIn recognition of their role as the main detectors of these taste qualities, type II cells were renamed \"receptor\" cells. Type II cells do not appear to be directly stimulated by sour or salty stimuli (4).\nReceptor cells do not form an ultrastructure capable of conventional synapses. Presumably, the nerve fibers contacting these cells have different forms of afferent synaptic connections. Signals transmitted from receptor cells to afferent sensory fibers or other cells within the gustatory papilla must be made by unconventional mechanisms, i.e. without the involvement of synaptic vesicles (5).\n\nType III Cells\nThe consensus is that type III cells express proteins associated with synapses and that they form synaptic junctions with nerve terminals (5). 
These cells express a number of genes, including surface cell adhesion molecules, enzymes for the synthesis of at least two neurotransmitters, and the Ca++ ion channels typically associated with the release of neurotransmitters (5).\nType III cells, which express synaptic proteins and show rapid Ca++-dependent depolarization, are characterized as \"presynaptic.\" Because they are presynaptic cells, type III cells are also excitable and express a complement of Na+ channels and K+ channels to support action potentials (5).\nThe innervation of type III cell synapses, which appear to be afferent connections, is not known. In addition to the above-mentioned neuronal properties, presynaptic cells also respond directly to sour stimuli and carbonated solutions and are presumably the cells responsible for signaling such sensations (5).\n\nType IV Cells\nThe type IV cell is characterized by a spherical or ovoid shape that does not extend taste processes, and it is likely an undifferentiated or immature taste cell (6).\nSome researchers identify type IV cells as replacements for the other cell types as they undergo apoptosis. This cell class would be the progenitor, guaranteeing the homeostasis of the other groups. Its real functions and its characterization as a basal cell remain to be clarified (7,8).\nIn the presented report we can raise the hypothesis that a deformity occurred in the structure of these rapidly proliferating cells, significantly compromising signal transmission. 
We know that zinc is required for the formation of gustin, a protein involved in taste function; an alternative would have been to offer treatment with chelated zinc for three months while awaiting recovery of the papillae.\n\nOnce again, we congratulate the authors on this work and note the importance of improving the early identification of taste dysfunction in clinical practice, since numerous diseases frequently present with this clinical picture and end up leading to other comorbidities.",
"responses": []
},
{
"id": "22595",
"date": "30 May 2017",
"name": "Christian A. Müller",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved: The paper is scientifically sound in its current form and only minor, if any, improvements are suggested.\n\nApproved with reservations: A number of small changes, sometimes more significant revisions, are required to address specific details and improve the paper's academic merit.\n\nNot approved: Fundamental flaws in the paper seriously undermine the findings and conclusions.\n\nFirst, we would like to thank the authors for illustrating the clinical importance of taste disorders and related diagnostic procedures in this interesting case report. A thorough medical history and validated assessment of sensory acuity are mandatory during the management of patients with smell and taste disorders. Consequently, we would like to suggest some revisions to the case report.\n\nAlthough important causes of taste disorders are mentioned, some other causes, e.g., surgical causes such as tonsillectomy and middle ear surgery, as well as psychogenic causes, should be added (1-3).\n\nIt would be interesting to know whether body mass index decreased within the last year or remained stable. It would also be interesting to know whether there were any changes in medication since the accident, and why and how often the patient took naproxen, as it may cause stomatitis and glossitis.\n\nIt would also be interesting to know whether the patient reported complaints regarding flavor perception.\n\nWith regard to the performance of the three-drop taste threshold test, it should be mentioned that the usual way of testing starts with the lowest concentration in order to avoid adaptation. Moreover, this test measures gustatory function of the whole mouth, similar to the swish-and-spit test.\n\nThe authors used a two-alternative forced-choice taste test and concluded that every deviation from a 50 percent result in ageusic patients might be suspicious of malingering. 
However, it has to be stated that this is not always the case and that simple and quick tests cannot guarantee the suggested confidence in detection of malingering. In some circumstances, more elaborate tests (e.g., event-related potentials) are needed.\n\nIn summary, we would like to thank the authors for this interesting case report, as further research in the field of rare causes of taste disorders is necessary.\n\nIs the background of the case’s history and progression described in sufficient detail? Yes\n\nAre enough details provided of any physical examination and diagnostic tests, treatment given and outcomes? Yes\n\nIs sufficient discussion included of the importance of the findings and their relevance to future understanding of disease processes, diagnosis or treatment? Yes\n\nIs the case presented with sufficient detail to be useful for other practitioners? Yes",
"responses": []
},
{
"id": "22193",
"date": "31 May 2017",
"name": "Basile N. Landis",
"expertise": [
"Reviewer Expertise Clinical smell and taste disorders"
],
"suggestion": "Not Approved",
"report": "Not Approved\n\nGeneral Comments:\nThe authors present a case of a patient who accidentally ingested a cleaning agent and has long-lasting oral taste and sensitivity problems.\nMajor Comments:\nAlthough interesting, this case does not add much to the understanding of the underlying mechanism.\nWe do not learn what the composition of the cleaning agent was.\nThe pictures are not really pathological. Tongues are like faces: they show a wide variety of individual differences, which are not always pathological. The images suggest that the complaint and the neural damage can be seen. I do not think that if you gave these pictures to 5 ENT specialists without any history, they would label them as pathological (unlike a clinical image where no text is needed and everybody knows the diagnosis).\nFinally, the authors mix two things that do not fit together: an average case report and a general overview of taste testing. This mix is not suitable. The overview is too short and superficial and not critically discussed. The case report is not well discussed (as the discussion is about taste tests and not about the case). No further lab tests were done for the case to rule out other diseases (vitamin deficiencies, Sjögren's syndrome, metabolic disorders, etc.). A 1.5-year follow-up is short for taste disorders; recovery of severe dysgeusia takes up to 2 years (see reference [1]).\n\nIs the background of the case’s history and progression described in sufficient detail? 
No\n\nAre enough details provided of any physical examination and diagnostic tests, treatment given and outcomes? No\n\nIs sufficient discussion included of the importance of the findings and their relevance to future understanding of disease processes, diagnosis or treatment? No\n\nIs the case presented with sufficient detail to be useful for other practitioners? No",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-373
|
https://f1000research.com/articles/6-100/v1
|
03 Feb 17
|
{
"type": "Research Article",
"title": "Comprehensive comparison of Pacific Biosciences and Oxford Nanopore Technologies and their applications to transcriptome analysis",
"authors": [
"Jason L Weirather",
"Mariateresa de Cesare",
"Yunhao Wang",
"Paolo Piazza",
"Vittorio Sebastiano",
"Xiu-Jie Wang",
"David Buck",
"Kin Fai Au",
"Jason L Weirather",
"Mariateresa de Cesare",
"Yunhao Wang",
"Paolo Piazza",
"Vittorio Sebastiano",
"Xiu-Jie Wang"
],
"abstract": "Background: Given the demonstrated utility of Third Generation Sequencing [Pacific Biosciences (PacBio) and Oxford Nanopore Technologies (ONT)] long reads in many studies, a comprehensive analysis and comparison of their data quality and applications is in high demand. Methods: Based on the transcriptome sequencing data from human embryonic stem cells, we analyzed multiple data features of PacBio and ONT, including error pattern, length, mappability and technical improvements over previous platforms. We also evaluated their application to transcriptome analyses, such as isoform identification and quantification and characterization of transcriptome complexity, by comparing the performance of PacBio, ONT and their corresponding Hybrid-Seq strategies (PacBio+Illumina and ONT+Illumina). Results: PacBio shows overall better data quality, while ONT provides a higher yield. As with data quality, PacBio performs marginally better than ONT in most aspects for both long reads only and Hybrid-Seq strategies in transcriptome analysis. In addition, Hybrid-Seq shows superior performance over long reads only in most transcriptome analyses. Conclusions: Both PacBio and ONT sequencing are suitable for full-length single-molecule transcriptome analysis. As this first use of ONT reads in a Hybrid-Seq analysis has shown, both PacBio and ONT can benefit from a combined Illumina strategy. The tools and analytical methods developed here provide a resource for future applications and evaluations of these rapidly-changing technologies.",
"keywords": [
"Third Generation Sequencing",
"PacBio",
"Oxford Nanopore Technologies",
"Transcriptome"
],
"content": "Introduction\n\nThird Generation Sequencing (TGS) emerged more than 5 years ago when Pacific Biosciences (PacBio) commercialized Single Molecule Real Time (SMRT) sequencing technologies in 20111. Although TGS platforms have significant technical differences, they all generate very long reads (1–100kb)2–5, which is distinct from Second Generation Sequencing (SGS). Considering the paired-end information, the main SGS platform Illumina provides 50–600bp information from each DNA fragment; no SGS platforms provide >1000bp, including 454 sequencing, which generates the longest SGS reads (~700bp)6,7. Therefore, the short sequencing length limits the applications of SGS to large or complex genomic events, such as gene isoform reconstruction. TGS overcomes these challenging problems via long read lengths.\n\nThe most widely used TGS platforms [PacBio and Oxford Nanopore Technologies (ONT)] developed new biochemistry/biophysics methods to directly capture the very long nucleotide sequences from single DNA molecules. Other emerging TGS platforms (Moleculo8 and 10X Genomics9) are based on the assembly of short reads from the same DNA molecules to generate synthetic long reads (SLR). Herein, we focus on data features of PacBio and ONT and their applications to transcriptome analysis.\n\nPacBio adopts a similar sequencing-by-synthesis strategy as Illumina sequencing, except PacBio captures a single DNA molecule and Illumina detects augmented signals from a clonal population of amplified DNA fragments. The error rate of raw PacBio data is 13–15%, as the signal-to-noise ratio from single DNA molecules is not high3. To increase accuracy, the PacBio platform uses a circular DNA template by ligating hairpin adaptors to both ends of target double-stranded DNA. As the polymerase repeatedly traverses and replicates the circular molecule, the DNA template is sequenced multiple times to generate a continuous long read (CLR). 
The CLR can be split into multiple reads (\"subreads\") by removing adapter sequences, and multiple subreads generate circular consensus sequence (\"CCS\") reads with higher accuracy. The average length of a CLR is >10kb and up to 60kb, which depends on the polymerase lifetime3. Thus, the length and accuracy of CCS reads depend on the fragment sizes. PacBio sequencing has been utilized for genome (e.g., de novo assembly, detection of structural variants and haplotyping)10 and transcriptome (e.g., gene isoform reconstruction and novel gene/isoform discovery)11–13 studies.\n\nONT is a nanopore-based single molecule sequencing technology, and the first prototype MinION was released in 201414. As compared to other sequencing technologies utilizing nucleotide incorporation or hybridization, ONT directly sequences a native single-stranded DNA (ssDNA) molecule by measuring characteristic current changes as the bases are threaded through the nanopore by a molecular motor protein. ONT MinION uses a hairpin library structure similar to the PacBio circular DNA template: the DNA template and its complement are bound by a hairpin adaptor. Therefore, the DNA template passes through the nanopore, followed by a hairpin and finally the complement. The raw read can be split into two “1D” reads (“template” and “complement”) by removing the adaptor. The consensus sequence of two “1D” reads is a “2D” read with a higher accuracy2. Because its data features are similar to PacBio's, many researchers have utilized or are testing ONT in applications where PacBio has been applied.\n\nPacBio and ONT platforms share the advantage of long read lengths, yet they also have the same drawback: higher sequencing error rate and lower throughput compared to SGS3,14–16. High sequencing error rates pose challenges for single-nucleotide-resolution analyses, such as accurate sequencing of transcripts, identification of splice sites and SNP calling. 
Low throughput is an obstacle for quantitative analysis, such as gene/isoform abundance estimation. Although PacBio CCS and ONT 2D consensus strategies can reduce error rates, the corresponding read lengths become shorter and throughput becomes lower. Therefore, hybrid sequencing (“Hybrid-Seq”), which integrates TGS and SGS data, has emerged as an approach that addresses the limitations of TGS data analysis with the assistance of SGS data. For example, error correction of PacBio or ONT long reads by SGS short reads improves the accuracy and mappability of long reads17–19. Hybrid-Seq can be applied to genome assembly and transcriptome characterization and improves the overall performance and resolution11–13,17.\n\nThe long read length of PacBio and ONT is very informative for transcriptome research, especially for gene isoform identification. In addition to human transcriptomes20–22, the PacBio transcript sequencing protocol, Iso-Seq, has been widely used to characterize transcriptome complexity in non-model organisms and particular genes/gene families23–31. In contrast, ONT has no standard transcript sequencing protocol and only a few pilot studies are publicly available. Using MinION, Bolisetty et al. discovered very high isoform diversity of four genes in Drosophila, which illustrates the utility of ONT in investigating complex transcriptional events32. Oikonomopoulos et al. also demonstrated the stability of ONT sequencing in quantifying the transcriptome by analyzing an artificial mixture of 92 transcripts with Spike-In RNA33. Compared to these studies using PacBio or ONT alone, Hybrid-Seq can reduce the required data size and improve the output, especially for transcriptome-wide studies. 
For example, a series of Hybrid-Seq methods (IDP, IDP-fusion, IDP-ASE) have been developed to extend transcriptome studies to the isoform level (e.g., gene isoform reconstruction, fusion genes and allele phasing) with higher sensitivity and accuracy, and to achieve a more accurate abundance estimation, which has been demonstrated in human embryonic stem cells (hESCs) and breast cancer11–13.\n\nHerein, we generated PacBio and ONT data from cDNA of hESCs. Using our tool AlignQC (http://www.healthcare.uiowa.edu/labs/au/AlignQC/), we performed a comprehensive analysis and comparison of PacBio and ONT data, including the raw data (subreads and 1D “template” reads) and their consensus (CCS and 2D reads). The comparisons analyzed included error rate and error pattern, read length, mappability and abnormal alignments, as well as technology improvements between the latest sequencing models (PacBio P6-C4 and ONT R9) and previous versions (C2 and R7). We also validated and compared the capability of PacBio and ONT alone to study a gold standard set of spike-in transcripts. Then, we applied long read only and the corresponding Hybrid-Seq approaches to human transcriptome analyses, including isoform identification, quantification and discovery of complex transcriptome events. In addition to a comprehensive evaluation of the characteristics of the two main TGS data platforms, this work serves as a guide for applications of PacBio and ONT and the corresponding Hybrid-Seq for transcriptome analysis.\n\n\nMethods\n\nHuman embryonic stem cells (H1 cell line; WiCell) were cultured as previously described11. In brief, cells were cultured in mTeSR1 (Stem Cell Technologies) on Matrigel matrix (BD). Cells were harvested between passages 50 and 55. 
Cells were fixed in 4% PFA for 10 minutes at room temperature and either incubated in blocking solution (2% FBS in PBS) or permeabilized in 0.2% Triton X-100 followed by incubation in blocking solution, where undifferentiated cells were verified by immunofluorescence (OCT4, NANOG, SSEA4, TRA-1-60, and TRA-1-81) as previously described34. Briefly, the primary antibodies used in the study were as follows: anti-OCT4 (mouse; Santa Cruz; sc-5279; 1:500), anti-h-Nanog (rabbit; Cosmo Bio; REC-RCAB0004P-F; 1:200), anti-SSEA-3 (rabbit; Millipore; MAB4303; 1:500), anti-TRA-1-60 (mouse; Millipore; MAB4360; 1:500), anti-CD31 (R&D Systems), and anti-desmin (Thermo Fisher Scientific). Primary antibodies were diluted 1:200 in blocking solution, unless otherwise stated, and incubated overnight at room temperature. Secondary antibodies (goat or donkey; Invitrogen; Alexa 488 and Alexa 594; 1:5000) were incubated for two hours at room temperature. Pluripotency was confirmed by teratoma assay where three germ layers formed in vivo35.\n\nTotal RNA was extracted using RNeasy Plus Mini Kit (QIAGEN). Agilent RNA 6000 Pico Kit (Agilent) was used to assess the RNA quality, and Qubit RNA BR Assay Kit (ThermoFisher Scientific) was used to quantify the extracted RNA. SIRV (Spike-in RNA Variant) E0 mixture (Lexogen, Batch No. 216652830) was added to the extracted total RNA (about 2.83% SIRVs in the final mixture).\n\nFor Illumina sequencing, TruSeq Stranded mRNA HT Sample Prep Kit (Illumina) was used to prepare the sequencing library by substituting the TruSeq barcoded adapter with Illumina Adapters (Multiplexing Sample Preparation Oligonucleotide Kit) and the PCR Primer Cocktail with Multiplex PCR primer 1.0 (5′-AATGATACGGCGACCACCGAGATCTACACTCTTTCCCTACACGACGCTCTTCCGATCT-3′) and custom index primer (5′-CAAGCAGAAGACGGCATACGAGAT[index]CAGTGACTGGAGTTCAGACGTGTGCTCTTCCGATCT-3’) as described previously36. 
Sequencing was performed by Illumina HiSeq4000 with 150bp paired-end reads.\n\nFor PacBio sequencing, full-length cDNA and SMRTbell templates were prepared at the Centre of Genomic Research, University of Liverpool, following the Iso-Seq sample preparation protocol (Pacific Biosciences). For size selection, the full-length cDNA was fractionated into four contiguous size ranges (0–1kb, 1–2kb, 2–3kb, >3kb) on a Sage ELF (Sage Science) before constructing SMRTbell templates. Sequencing was performed by PacBio RS II using C4/P6 chemistry. The SMRT cell counts were 1, 4, 4 and 3 for 0–1kb, 1–2kb, 2–3kb and >3kb libraries, respectively.\n\nFor ONT sequencing, full-length cDNA was generated by the Smart-seq2 protocol, as described by Picelli et al.37 using modified sequences for the TSO (5’-TTTCTGTTGGTGCTGATATTGCTGCCATTACGGCCrGrG+G-3’) and the Oligo-dT30VN (5’-ACTTGCCTGTCGCTCTATCTTCT30VN-3’) to allow amplification of the cDNA second strand with primers provided by ONT. The quality and size distribution of the cDNA were tested by a TapeStation Genomic DNA system (Agilent). For each ONT flowcell, 1 μg of double-stranded cDNA was converted into a Nanopore-compatible sequencing library using the Genomic DNA Sequencing Kit SQK-NSK007 (ONT), according to the manufacturer’s protocol with minor modifications. In detail, the ds-cDNA was subjected to a combined end repairing and dA-tailing step using the NEBNext Ultra™ II End Repair/dA-Tailing Module (New England BioLabs) and incubated for 30 min at 20°C followed by 30 min at 65°C. The reactions were purified with 0.4x volume Agencourt AMPure XP beads (Beckman Coulter), according to manufacturer’s instructions. The end-prepped cDNA was subsequently ligated to ONT leader- and HP-adapter using Blunt/TA Ligase Master Mix (New England BioLabs) with a 10 min incubation at room temperature. 
The ligated cDNA was annealed to a biotinylated tether oligo (ONT) that targets the hairpin-adapter (HP-adapter) by incubation for an additional 10 min at room temperature. Fragments with an HP-adapter ligated were selectively pulled down using Dynabeads MyOne Streptavidin C1 (Life Technologies). After washing the DNA-bound beads to remove unbound DNA, the captured cDNA library was released from the streptavidin beads by incubating the beads re-suspended in ONT Elution Buffer for 10 min at 37°C. The beads were then pelleted using a magnetic rack and the supernatant containing the library was recovered. The full-length cDNA library was sequenced on a MinION Mk 1B using a 48h sequencing protocol on R7/R9 chemistry flowcells.\n\nLong reads require special considerations when assessing their quality; they have variable error rates and they are often size selected. These attributes make careful study of the alignments of long reads necessary to understand the quality and coverage of transcriptome sequencing.\n\nImplementation: AlignQC (http://www.healthcare.uiowa.edu/labs/au/AlignQC/) is designed to provide comprehensive quality assessment for TGS long read sequencing alignment data by three layers: (1) basic statistics of the data, including read length, alignment and coverage across all chromosomes; (2) error pattern analysis if a reference genome is provided; (3) transcript-related statistics if a gene annotation is provided. AlignQC takes the standard BAM format file as the input, outputs an XHTML format file for easy visualization, and provides links to access all analysis results.\n\nFor basic statistics of the data, AlignQC parses the CIGAR string and SEQ fields from the BAM file. Multiple alignment paths can be reported for each read, but only the longest aligned path is used in error rate calculations and annotation analyses. 
For alignment statistics, if two or more alignment paths are reasonably spaced across the read, and can together generate a longer alignment, they will be combined and classified as: (a) a gapped alignment of a gene if paths occur within close proximity to each other on the same strand; (b) a trans-chimeric alignment if paths occur on different loci; (c) a self-chimeric alignment if paths align to an overlapping genomic position; otherwise, the read is defined as (d) a single alignment.\n\nFor the error pattern analysis, AlignQC compares the aligned reads to the provided reference genome. Based on the difference between aligned reads and reference genome, it estimates the total error rate and the rates of the different error types, including substitutions, insertions, and deletions. The overall error rates are calculated by sampling alignments until at least 1 million aligned bases have been included. Context-specific error patterns are analyzed by randomly sampling the best alignments until each individual context has been observed at least 10,000 times.\n\nFor transcript-related statistics, AlignQC first annotates the aligned reads according to their overlap with provided genes/transcripts. A read is assigned to a reference transcript if it can cover the first and last exons with any length, and the internal exons with ≥ 80% length. When multiple exons are present and both the read and the reference transcript have the same consecutive exons, the match is called a “full-length” match; otherwise, it is referred to as a “partial” match.\n\nOperation: AlignQC usage can be divided into report generation and report viewing. Report generation requires a Linux operating system with coreutils (version 8.6 or newer) and python (2.7 or newer); both are present in most current Linux releases. R must be installed (tested with version 3.3.0; https://www.r-project.org/). At least 16GB of RAM is recommended to run AlignQC. 
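Conceptually, the per-alignment error accounting above can be sketched as follows (a hypothetical illustration, not AlignQC's actual code; here the alignment's NM edit-distance tag stands in for a direct comparison against the reference genome):

```python
import re

# Toy sketch: derive per-alignment error counts from a BAM/SAM record's
# CIGAR string plus its NM tag (NM = substitutions + inserted + deleted bases).
# N (introns) and soft/hard clips are parsed but excluded from error counts.
CIGAR_RE = re.compile(r"(\d+)([MIDNSHP=X])")

def error_counts(cigar, nm):
    """Return (aligned, substitutions, insertions, deletions) for one alignment."""
    ops = {}
    for length, op in CIGAR_RE.findall(cigar):
        ops[op] = ops.get(op, 0) + int(length)
    aligned = ops.get("M", 0) + ops.get("=", 0) + ops.get("X", 0)
    ins, dels = ops.get("I", 0), ops.get("D", 0)
    subs = nm - ins - dels          # NM minus indel bases leaves mismatches
    return aligned, subs, ins, dels

# e.g. a 100bp read aligned as 60M 2I 38M with NM=5 -> 3 substitutions
print(error_counts("60M2I38M", nm=5))   # (98, 3, 2, 0)
```

Summing such counts over sampled alignments until roughly 1 million aligned bases are reached would reproduce the overall-rate estimation described above.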
A full analysis of an alignment from a PacBio SMRT cell containing 107,960 molecules was processed by 4 threads in 32m21.307s. A full analysis of an alignment from an ONT R9 flow cell containing 387,810 molecules required 52m22.163s.\n\nReport viewing can be done through any modern web browser and does not require any specific operating system. The primary output of AlignQC is an XHTML format report. Analysis files are embedded in the report; these include high-quality plots and the long read mappings that are compatible with the UCSC genome browser38. These reports can serve as both an analysis archive and a convenient means to share results.\n\nFor Illumina short reads, the quality was assessed by FastQC. Reads were trimmed by 9 bases on the 5’ end and adapters were removed by cutadapt39 with the parameter “-a AGATCGGAAGAG -A AGATCGGAAGAG -m 50”. Short read alignment was performed by HISAT with default parameters. For SIRV, the reference genome (SIRV_151124a.fasta; https://www.lexogen.com/wp-content/uploads/2015/11/SIRV_Sequences_151124.zip; Supplementary Table 1) was provided by Lexogen. For the hESCs analysis, the reference genome was downloaded from UCSC (hg38 assembly; GCA_000001305.2; http://hgdownload.cse.ucsc.edu/goldenPath/hg38/chromosomes/).\n\nFor PacBio, the subreads and CCS reads were extracted using SMRT Analysis software (version 2.3.0; http://www.pacb.com/products-and-services/analytical-software/smrt-analysis/).\n\nFor ONT, the template, complement and 2D reads were extracted by poretools software (version 0.5.1; https://poretools.readthedocs.io/en/latest/).\n\nFor PacBio and ONT long read alignment, GMAP40 (version 2016-06-30) was used with the parameter “-n 10”.\n\nThe SIRV (Lexogen) transcriptome, which consists of 69 transcripts, mimics 7 human model genes and includes all kinds of complex alternative splicing events. SIRV is useful to assess the performance of sequencing technology applied to studying the human transcriptome. 
This study used the SIRV E0 mix (Batch No. 216652830, in which isoform SIRV502 is missing) with 68 RNA variants. The concentration ratio is identical for each isoform. Meanwhile, Lexogen also provides three types of annotation libraries: “corrected”, with all 68 truly-expressed isoforms; “insufficient”, including 43 of 68 truly-expressed isoforms; and “over-annotated”, with 68 truly-expressed isoforms and an additional 32 falsely-expressed isoforms.\n\nTo illustrate the performance of Illumina short reads on isoform identification, the reference-guided assembly software StringTie41 (version 1.3.0) was used with default parameters, based on the three annotation libraries above. The numbers of true positive and false positive assembled isoforms were counted for each of the three libraries.\n\nTo illustrate the performance of PacBio and ONT long reads on isoform identification, an isoform was considered identified when at least one long read was uniquely aligned to this isoform.\n\nThe Gencode (version 24) gene annotation library (https://www.gencodegenes.org/; Supplementary Table 1) was used for isoform detection.\n\nAlignQC was used to identify isoforms annotated by Gencode (version 24). Briefly, for isoforms with only one exon (singleton isoform), if 90% of the isoform length could be covered by at least one long read, it was considered identified. For isoforms with multiple exons (multi-exon isoform), we required at least one long read that covered the first and last exons and ≥ 80% mutual overlap of each internal exon.\n\nNotably, for Hybrid-Seq (PacBio+Illumina and ONT+Illumina) strategies, we combined the results mentioned above and the output of IDP11 (version 0.1.9), which is a tool specifically for isoform detection and prediction by Hybrid-Seq data. 
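The multi-exon matching rule above can be sketched as a small interval check (hypothetical code, with `read_exons` and `isoform_exons` given as (start, end) intervals; the 90%-coverage rule for singleton isoforms is omitted for brevity):

```python
# Toy sketch of the multi-exon rule: a long read identifies an annotated isoform
# if it overlaps the first and last exons (any length) and covers each internal
# exon with >= 80% mutual overlap (i.e. 80% of both exon and read-block length).
def overlap(a, b):
    """Length of overlap between two (start, end) intervals."""
    return max(0, min(a[1], b[1]) - max(a[0], b[0]))

def read_matches_isoform(read_exons, isoform_exons, min_mutual=0.8):
    first, last = isoform_exons[0], isoform_exons[-1]
    if not any(overlap(r, first) for r in read_exons):
        return False
    if not any(overlap(r, last) for r in read_exons):
        return False
    for exon in isoform_exons[1:-1]:                 # internal exons
        exon_len = exon[1] - exon[0]
        covered = any(
            overlap(r, exon) >= min_mutual * max(exon_len, r[1] - r[0])
            for r in read_exons
        )
        if not covered:
            return False
    return True

# A read touching all three exons of a toy isoform:
exons = [(100, 200), (300, 400), (500, 600)]
print(read_matches_isoform([(150, 200), (300, 400), (500, 560)], exons))  # True
```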
The primary parameters of IDP were “Njun_limit=10, Niso_limit=100, and FPR=0.05”, using Gencode (version 24) as the primary reference, and a comprehensive transcript reference from the combination of Gencode (version 24), RefSeq (UCSC version 2015-06-03; http://hgdownload.cse.ucsc.edu/goldenPath/hg38/database/refFlat.txt.gz; Supplementary Table 1) and ESTs (downloaded from UCSC genome browser; http://hgdownload.cse.ucsc.edu/goldenPath/hg38/database/all_est.txt.gz; Supplementary Table 1).\n\nFor novel isoform identification, the output of IDP with the same parameters was used.\n\nWhen investigating the accuracy of splice sites/exon boundaries within the multi-exon isoforms, we calculated the relative distance between known splice sites annotated by Gencode and the splice sites detected by the four strategies.\n\nFor repetitive element analysis, the lower-case sequence marked by the RepeatMasker and Tandem Repeats Finder tools in the reference genome (UCSC hg38; http://hgdownload.cse.ucsc.edu/goldenPath/hg38/chromosomes/; Supplementary Table 1) was used. For each isoform, the proportion of repetitive element sequence was calculated.\n\nThe isoforms identified by 7 strategies (Illumina with “correct” library, Illumina with “insufficient” library and Illumina with “over-annotated” library, PacBio with “correct” library, ONT with “correct” library, PacBio+Illumina with “correct” library, and ONT+Illumina with “correct” library) were used to perform isoform abundance estimation.\n\nThe relative expression percentage (REP) of each isoform was calculated. 
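A minimal sketch of this REP calculation and the Euclidean-distance error measure described below (hypothetical code; the real inputs would be per-isoform AlignQC read counts or RSEM TPM values):

```python
import math

def rep(values):
    """Relative expression percentage: each isoform's share of the total."""
    total = sum(values)
    return [v / total for v in values]

def estimation_error(estimated, n_isoforms=68):
    """Euclidean distance between the estimated REP vector and the expected
    uniform REP of 1/68 for the SIRV E0 mix."""
    expected = 1.0 / n_isoforms
    return math.sqrt(sum((e - expected) ** 2 for e in rep(estimated)))

# A perfectly uniform abundance estimate has zero error:
print(estimation_error([10.0] * 68))  # 0.0
```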
The expected REP is 1/68.\n\nFor the three Illumina-only strategies (Illumina with “correct” library, Illumina with “insufficient” library and Illumina with “over-annotated” library), the TPM (transcripts per million) value from RSEM with default parameters was used to calculate the REP.\n\nFor the two long-read-only strategies (PacBio with “correct” library and ONT with “correct” library), the read count from AlignQC was used to calculate the REP.\n\nFor the two Hybrid-Seq strategies (PacBio+Illumina with “correct” library and ONT+Illumina with “correct” library), only the Illumina short read data was used to run RSEM with default parameters. The TPM (transcripts per million) value from RSEM was used to calculate the REP.\n\nTo compare the estimation error of the 7 strategies, the Euclidean distance between expected REP and estimated REP was calculated.\n\nFor alternative splicing analysis, LESSeq (https://github.com/gersteinlab/LESSeq) was used, following its instructions.\n\nFor the prediction of protein-coding capability of novel isoforms, GeneMarkS-T (version 5.1; http://exon.gatech.edu/GeneMark/) with default parameters was used. For gene enrichment analysis, DAVID (version 6.8)42 was used.\n\n\nResults\n\nThe mappable length is a good representation of the useful length of long reads. The median mappable lengths of PacBio data are 1,299bp and 1,464bp for subreads and CCS reads, respectively. ONT data are slightly longer, with median lengths of 1,602bp and 1,754bp for 2D and 1D reads, respectively (Table 1), although size selection was performed in PacBio, but not in ONT (Methods).\n\nThe fractions of each error type are in parentheses. The fractions of the most predominant error types in each dataset are in bold.\n\nThe overall length distributions of the raw data and consensus data for both PacBio and ONT (subreads vs. CCS and 1D vs. 2D) are similar, while the differences between PacBio and ONT are more remarkable (Figure 1). 
Compared to ONT, the length distribution of PacBio data skews to the left, with many reads <1kb, which may be caused by a short size-selected fraction (<1kb) of the cDNA library (see Methods, Figure 1 and Supplementary Figure S1). In addition, CCS reads have a large proportion of very long reads (>3.5kb), as the high quality of CCS reads guarantees alignment of the full length, while the other reads (e.g., subreads) are only partially aligned.\n\nFigure 1: The length distribution of Oxford Nanopore Technologies (ONT) 2D and 1D reads (top) and Pacific Biosciences (PacBio) CCS and subreads (bottom). Aligned reads are color-coded to indicate the fraction of reads that are: single best alignments (gray), gapped alignments consisting of multiple paths (red), self-chimeric alignments (purple) where different read segments map to overlapping sequences, and trans-chimeric alignments (blue) where read segments map to different loci; white color represents unaligned reads. The leftmost bar represents all reads, the middle portion reads from 0–4kb in length, and the rightmost are reads greater than 4kb.\n\nONT R9 and the previous sequencing platform R7 have similar length distributions (Figure 1, Supplementary Figure S2.1 and Supplementary Figure S2.2), while the yield of R9 is much higher (204,891±61,389 vs. 61,799±42,393 molecules were sequenced and mappable per flow cell for R9 and R7, respectively). Thus, R9 provides more stable and higher throughput, which will allow broader applications of ONT data (Supplementary Table 2). The length distribution of the previous PacBio C2 sequencing data skews to a shorter length, compared to P6-C4. The yield of P6-C4 increased (76,597±23,387 vs. 21,827±9,707 molecules were sequenced and mappable per SMRT cell for P6-C4 and C2, respectively). Overall, the yield per flow cell of ONT is much higher than that of PacBio, because each nanopore can sequence multiple molecules, while the wells of PacBio SMRT cells are not reusable. 
In addition, the PacBio read lengths in each SMRT cell are consistent with the selected sizes, so the size-selection protocol works well for PacBio data (Supplementary Figure S1).\n\nMappability of long reads is necessary to confirm repetitive elements, gene isoforms and gene fusions11,12,21. PacBio subreads and ONT 1D reads have similar rates of reads (80.41% and 78.24%) and bases (81.80% and 81.03%) aligned to the reference genome (Figure 2). However, a higher proportion of PacBio CCS reads (96.15%) and bases (95.07%) can be aligned than ONT 2D reads (92.05% and 87.37%), while both are higher than their corresponding raw data (subreads and 1D). Thus, generation of consensus sequences clearly improves data quality. As 2D reads sequence target molecules only twice, they are expected to have lower quality than CCS reads built from multiple subreads.\n\nThe leftmost bar represents the fraction of the mappable read length out of the total read length for all reads. The middle section shows the mappable fraction for 500bp increments ranging from 0–4kb read lengths, and the rightmost bar represents the mappable fraction of reads greater than 4kb. ONT: Oxford Nanopore Technologies; PacBio: Pacific Biosciences.\n\nFor all types of data, we consistently observe that short reads (<500bp) have low alignment rates. This is likely due to a larger portion of adapter and linker sequences in this short-length bin. In addition, although a large fraction of ONT data are classified as “fail” reads during data pre-processing and filtered out, their alignment rates are as high as 65.74% and 50.95% for 2D fail reads and 1D fail reads, respectively. These findings indicate that some fail reads are informative and should be rescued (e.g., by error correction) to increase throughput.\n\nThe mappability of PacBio data is similar between the C2 and P6-C4 chemistries, while ONT 1D reads in R9 have almost double the proportion of aligned bases relative to R7 (81.03% vs. 44.43%).
However, the alignment rate of R9 1D reads is, surprisingly, slightly worse than that of the previous R7 data (78.24% vs. 82.19%). The improvement in total bases aligned is likely attributable to improvements in raw data quality, while relaxed criteria for calling 1D reads in R9 may explain the slight drop in the overall alignment rate. This slight drop is accompanied by a greatly improved throughput of 1D reads per cell for R9 compared to R7 (181,599±54,331 vs. 55,366±26,371).\n\nLong reads generated from gene fusions or trans-splicing can be aligned to separate genomic loci, and are denoted as \"trans-chimeric\". Since hESCs contain very few fusion events or trans-splices, trans-chimeric reads are likely due to library preparation artifacts. 2D data contain 8.05% trans-chimeric reads, while 1D data surprisingly contain fewer (3.16%). Considering that they come from the same data and library preparation, the lower trans-chimeric frequency in 1D reads may be due to the very low mappability of some error-prone regions. ONT data have notably higher trans-chimeric rates among very long reads (>4kb) (Figure 2). PacBio CCS reads have far fewer trans-chimeric alignments (0.93%), while subreads and 1D reads have similar trans-chimeric fractions (3.47% vs. 3.16%). Therefore, library preparation artifacts are not negligible in TGS data, and trans-chimeric reads in non-tumor samples should be filtered out before further use. In addition, two fragments of a long read may be aligned to the same genomic locus, denoted as \"self-chimeric\", because adaptor sequences were not removed from the raw data (e.g., PacBio CLR). Overall, the self-chimeric proportion is much smaller than the trans-chimeric one. Chimeric reads may cause overestimation of DNA molecule lengths.\n\nSince some regions of long reads may be particularly error-prone, long reads may be aligned as separate fragments.
With careful analysis, these \"gapped alignments\" can be used similarly to paired-end Illumina reads. Corresponding to the high error rate, more ONT data are gapped alignments (1D: 6.10% and 2D: 2.98%) than PacBio (subreads: 3.45% and CCS: 0.48%). This rate is even higher in the previous ONT R7 chemistry, especially for 1D reads (30.82%), while the difference between PacBio C2 and P6-C4 data is much smaller.\n\nWhereas mappability is a metric of the fraction of useful reads, error rate and error pattern measure the quality of the data, which have a strong effect on single-nucleotide-resolution analysis (e.g., SNP calling and splice detection) and the design of error correction algorithms. The error rate of PacBio CCS reads is as low as 1.72%. The 14.20% error rate of subreads is consistent with previous reports (Korlach J. Understanding Accuracy in SMRT Sequencing. Pacific Biosciences; http://www.pacb.com/wp-content/uploads/2015/09/Perspective_UnderstandingAccuracySMRTSequencing1.pdf) and is similar to that of ONT 2D data (13.41%). However, 1D reads have a 20.19% error rate (Table 1). Thus, both the raw data and the consensus data of PacBio are of higher base quality than the corresponding ONT data.\n\nMoreover, the compositions of PacBio and ONT errors are different. Mismatches are the major errors in both types of ONT data (2D: 40.99% and 1D: 48.25%), and the proportions of deletions are also high (>35%) (Table 1). Thus, insertions are the least common errors in ONT. Insertions are also the least common in PacBio CCS reads, whereas mismatches predominate (75.70%), though the absolute error rate is fairly low. Conversely, the rate of insertions in subreads is the highest (41.71%), and mismatches are at a similar level (37.12%).
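Error-rate and error-composition statistics like those above can be tallied directly from alignment operations. A rough sketch, assuming extended-CIGAR-style ops ('=' match, 'X' mismatch, 'I' insertion, 'D' deletion); this is a simplification for illustration, and the exact denominator conventions used by AlignQC may differ:

```python
def error_profile(ops):
    """Tally overall error rate and per-type error composition from
    a list of (operation, length) alignment tuples.
    Operations: '=' match, 'X' mismatch, 'I' insertion, 'D' deletion."""
    counts = {"=": 0, "X": 0, "I": 0, "D": 0}
    for op, length in ops:
        counts[op] += length
    total = sum(counts.values())
    n_errors = counts["X"] + counts["I"] + counts["D"]
    rate = n_errors / total
    composition = {op: (counts[op] / n_errors if n_errors else 0.0)
                   for op in ("X", "I", "D")}
    return rate, composition

# Toy alignment: 90 matches, 4 mismatches, 3 insertions, 3 deletions
rate, comp = error_profile([("=", 90), ("X", 4), ("I", 3), ("D", 3)])
print(rate, comp)  # 0.1 {'X': 0.4, 'I': 0.3, 'D': 0.3}
```

Aggregating such profiles over all aligned reads yields per-platform statistics of the kind reported in Table 1.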
Thus, insertions and deletions together (“indels”) account for most errors, with the exception of CCS reads.\n\nPacBio base calling is based on distinguishing signals from the neighborhood background; ONT relies on the current signal change from the five upstream bases. Therefore, the errors of both may have context-specific patterns. As the predominant error type in CCS reads, mismatches mostly arise from two context-specific events: CG->CA and CG->TG (Figure 3); however, these mismatches are likely alignment errors rather than sequencing errors, as they are also observed in the alignments of high-quality Illumina data and simulated data (Supplementary Figure S3). The mismatch TAG->TGG is most striking in both ONT 2D and 1D reads, followed by TAC->TGC, while the other mismatches are far less frequent (Figure 3). In contrast, the mismatches in subreads show a clear “loose homopolymer pattern”: a base is more likely to be mis-called as either its upstream or downstream neighbor (“cross shape” in Figure 3). The same homopolymer pattern also exists in the indels in subreads: 46.07% of indels are in a homopolymer (Figure 3). Indels also tend to occur in homopolymers in CCS and 2D reads, with 85.46% and 39.40% in homopolymers, respectively. In addition, both CCS and 2D reads show the same bias of the homopolymer pattern toward specific bases: A and T in insertions and G and C in deletions. Moreover, insertions of G and C have a “tight homopolymer pattern”: both the upstream and downstream bases are the same as the inserted base (“diagonal spots” in Figure 3 and Supplementary Figure S4). Overall, the homopolymer pattern of errors is more pronounced in the raw PacBio data (subreads), but not very clear in the raw ONT data (1D reads). Regardless of the difference in sequencing platform, the overall error patterns of CCS and 2D reads both contain homopolymer indels, which may be due to the consensus sequence algorithms.
The specific mismatches in ONT data may be caused by contexts that are difficult for the basecaller.\n\nContext-specific errors are shown for Oxford Nanopore Technologies (ONT) 2D and 1D reads (top), and Pacific Biosciences (PacBio) CCS and subreads (bottom). The error types shown are insertions, deletions and mismatches. For insertions, the large base above the plot indicates the inserted base, and for deletions, the deleted base. For mismatch errors, the large base to the left indicates the expected reference base, and the large base above indicates the base observed in the read. A block of color tiles shows the error frequency within specific contexts for each error; the small base to the left of the tiles indicates the base preceding the error, and the small base above is the base following the error. Error frequency is plotted on separate scales for insertions, deletions, and mismatches. Homopolymer error patterns are highlighted with bold cross- or L-shaped outlines in the ONT 2D, PacBio CCS and PacBio subreads plots. Context-specific insertions and mismatches of interest in the ONT 1D, 2D and PacBio CCS reads are highlighted by bold outlines. For better contrast of the lower error rates in PacBio CCS reads and ONT 2D reads, Supplementary Figure S4 displays each result on its own scale.\n\nIn spite of the higher overall error rate, the error pattern of the PacBio C2 data is almost the same as that of P6-C4 data, except that C2 CCS reads have a “loose” rather than the “tight” homopolymer pattern of P6-C4 data for indels (Supplementary Figure S4). Compared to ONT R9 data, the error patterns of R7 data (both 2D and 1D reads) are mosaic, with a few predominant errors (Supplementary Figure S4). Only the “tight homopolymer pattern” of indels is observed in R7 2D reads.
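The tight/loose homopolymer distinction discussed above can be made concrete with a small helper (ours, for illustration): an indel base is compared with the two reference bases flanking the indel position.

```python
def homopolymer_context(ref, pos, base):
    """Classify an indel involving `base` between reference positions
    pos-1 and pos: 'tight' if both flanking bases equal the indel base,
    'loose' if exactly one does, 'none' otherwise."""
    left = ref[pos - 1] if pos > 0 else ""
    right = ref[pos] if pos < len(ref) else ""
    n_matching = (left == base) + (right == base)
    return ("none", "loose", "tight")[n_matching]

# ref = A C G G T (indices 0-4)
print(homopolymer_context("ACGGT", 3, "G"))  # tight: G on both flanks
print(homopolymer_context("ACGGT", 2, "G"))  # loose: G on one flank only
```

Counting indels by these categories over all alignments gives homopolymer fractions of the kind quoted above (e.g., 46.07% for subreads).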
Therefore, PacBio and ONT data have improved dramatically, except for some systematic errors at homopolymers and in specific contexts.\n\nOur next goal was to investigate the advantages of PacBio and ONT long reads over Illumina short reads for transcriptome analysis. We first compared the performance of gene isoform identification using the gold-standard Spike-In RNA Variant Control mixes (SIRVs), which contain 68 isoforms of 7 genes with varying splicing complexity and known abundances. This allows the evaluation of isoform recall by PacBio, ONT and Illumina data. We reconstructed isoforms from Illumina short reads using the reference-guided mode of StringTie41 with three types of reference annotation libraries: a \"correct library\" containing all 68 truly-expressed isoforms, an \"insufficient library\" containing 43 of the 68 truly-expressed isoforms, and an \"over-annotated library\" containing the 68 truly-expressed isoforms plus 32 additional unexpressed isoforms (see Methods). None were able to report all 68 truly expressed isoforms (44, 63 and 62, respectively; Table 2). When the reconstruction was guided by the insufficient library, only 20.00% (5 of 25) of the missing isoforms were rescued, along with 33 false positive predictions. When guided by the over-annotated library, 46.87% (15 of 32) of the unexpressed, but annotated, isoforms were incorrectly reported, with an additional 24 false positive predictions. Even when the assembly was guided by the \"correct library,\" which is rarely available in practical transcriptome analysis, short reads identified 92.65% (63 of 68) of annotated isoforms, but with 27 false positive predictions. These results demonstrate the incompleteness and high false positive rate of isoform reconstruction from short reads. In contrast, ONT directly detected all 68 expressed isoforms, and PacBio missed only one, isoform SIRV618, which is 219bp and may have been filtered out by size selection during PacBio library preparation.
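The recall and false-positive bookkeeping behind this SIRV benchmark reduces to set arithmetic; a minimal sketch (the isoform IDs in the example are placeholders, not real SIRV names):

```python
def evaluate_isoforms(detected, truth):
    """Compare a detected isoform set against the truly expressed set,
    returning recalled, missed and false-positive counts."""
    detected, truth = set(detected), set(truth)
    return {
        "recalled": len(detected & truth),
        "missed": len(truth - detected),
        "false_positives": len(detected - truth),
    }

truth = {"iso1", "iso2", "iso3"}        # truly expressed (e.g., the 68 SIRVs)
detected = {"iso1", "iso2", "novelX"}   # reported by an assembler
print(evaluate_isoforms(detected, truth))
# {'recalled': 2, 'missed': 1, 'false_positives': 1}
```

Applied with the 68 truly-expressed SIRVs as `truth`, this yields exactly the counts reported above for each strategy.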
Thus, PacBio and ONT long reads show far superior performance in isoform identification over short reads.\n\nWe further evaluated the performance of PacBio and ONT in identifying isoforms from hESCs (H1 cell line, see Methods). In total, 919,158 mappable PacBio reads and 923,671 mappable ONT reads were used. A total of 57,868 and 59,098 Gencode-annotated isoforms were detected by PacBio and ONT reads, including 23,067 and 21,196 full-length isoform detections, respectively (Figure 4A and Supplementary Figure S5). The full-length rates were 47.14% and 44.79%, respectively. For the >1kb isoforms that are difficult to detect by short reads, PacBio and ONT directly detected 15,764 and 14,669 full-length transcripts (Figure 4A). Thus, ONT shows sensitivity comparable to PacBio for full-length isoform detection.\n\n(a) Length distribution of isoforms identified at full length by long-read-only and Hybrid-Seq strategies. (b) Numbers of identified isoforms with a single exon (singleton isoforms) and multiple exons (multi-exon isoforms). (c) Overlap between isoforms identified by the two Hybrid-Seq strategies. (d) Accuracy of splice sites detected by the four strategies. Perfect means the detected splice sites exactly match known splice sites annotated by Gencode (version 24). Imperfect means the detected splice sites are shorter or longer than known splice sites annotated by Gencode (version 24). (e) Overlap between novel isoforms identified by the two Hybrid-Seq strategies. (f) Numbers of identified isoforms with different ratios of repetitive elements. ONT: Oxford Nanopore Technologies; PacBio: Pacific Biosciences.\n\nNext, we identified isoforms from two Hybrid-Seq datasets: PacBio+Illumina and ONT+Illumina. First, the long reads were corrected with Illumina reads by LSC (version 1 beta)18, and the number of mappable reads increased to 951,258 and 933,762 for PacBio and ONT, respectively (see Methods).
Furthermore, error correction greatly improved overall error rates and context-specific error patterns (Supplementary Figure S4). By inputting the corrected long reads and Illumina reads to IDP, 26,325 and 23,340 Gencode-annotated isoforms were identified at full length by PacBio+Illumina and ONT+Illumina, respectively (Figure 4A), demonstrating the superior sensitivity of Hybrid-Seq over long reads alone for identifying isoforms. For multi-exon isoforms, which are difficult to construct from short reads alone, the full-length isoform identification ratios were as high as 92.82% and 91.48% for PacBio+Illumina and ONT+Illumina, respectively (Figure 4B). Whereas 16,711 isoforms were identified by both Hybrid-Seq datasets, the overlap ratios of identified isoforms were not very high (PacBio+Illumina: 63.48% and ONT+Illumina: 71.60%; Figure 4C). That is, each of the two Hybrid-Seq datasets rescued a significant number of isoforms missed by the other (9,614 and 6,629 for PacBio+Illumina and ONT+Illumina, respectively). These discordant isoforms were mostly multi-exon isoforms (Supplementary Figure S6).\n\nImperfect alignments of error-prone long reads result in ambiguous determination of splice sites/exon boundaries within multi-exon isoforms. Using splice sites annotated by the reference library and/or detected by short reads as the gold standard, 14.72% and 30.82% of splice sites were incorrectly identified by PacBio and ONT, respectively (Figure 4D and Supplementary Figure S7). By contrast, by correcting long reads with short reads and integrating short reads in isoform identification (i.e., with the tool IDP), the incorrect identification rates decreased to 7.05% and 19.94% for PacBio+Illumina and ONT+Illumina, respectively. Thus, Hybrid-Seq provides a higher resolution of the exon-intron structure within each identified isoform.
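The perfect/imperfect splice-site accounting can be sketched as follows. This is our simplification: each detected site is matched to the nearest gold-standard site regardless of strand, so the sign of the offset (negative = truncated, positive = elongated, as in Supplementary Figure S7) is schematic:

```python
def splice_site_accuracy(detected, gold):
    """For each detected splice-site coordinate, compute the signed
    offset to the nearest gold-standard site (0 = perfect match) and
    the fraction of imperfect sites."""
    offsets = [min((d - g for g in gold), key=abs) for d in detected]
    imperfect = sum(1 for o in offsets if o != 0) / len(offsets)
    return offsets, imperfect

# Two of three detected sites deviate from the gold standard:
offsets, imperfect = splice_site_accuracy([100, 203, 295], [100, 200, 300])
print(offsets)  # [0, 3, -5]
```

Summing the nonzero-offset fraction over all splice sites gives the incorrect identification rates of the kind reported above.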
In addition, PacBio showed better splice site determination for both the long-read-only and Hybrid-Seq strategies, consistent with its lower error rates compared with ONT.\n\nWith the determination of high-resolution exon-intron structure and consistent evidence from both TGS and SGS data, we can accurately discover and annotate significant numbers of novel multi-exon isoforms: 2,712 and 2,095 by PacBio+Illumina and ONT+Illumina, respectively (Figure 4E). Compared with the overlap of annotated isoform detection (Figure 4C), only a minority of novel isoforms (467) were identified by both Hybrid-Seq strategies (Figure 4E). Besides possible technological differences, the distinct coverage of novel isoforms by our PacBio and ONT data may be attributable to sampling differences.\n\nWe also illustrate the utility of long reads for identifying isoforms containing repetitive elements (see Methods). Approximately 60% of isoforms identified by PacBio (13,830; 59.96%), ONT (12,559; 59.25%), PacBio+Illumina (17,672; 60.86%) and ONT+Illumina (15,426; 60.65%) contained repetitive elements, and in particular, a significant number of identified isoforms contained >50% repetitive elements (516, 451, 665 and 593, respectively; Figure 4F). Reconstruction of isoforms with repetitive elements is difficult for short reads43, while it is relatively easy and accurate with long reads.\n\nWe evaluated the performance of PacBio, ONT, Hybrid-Seq and Illumina data on isoform quantification, using the gold-standard SIRVs (see Methods). We first tested Illumina data, with isoform library reconstruction guided by the three aforementioned annotation libraries. The median estimation errors were 0.12, 0.18 and 0.12 for the “correct”, “insufficient” and “over-annotated” libraries, respectively (Figure 5). This suggests that isoform abundance estimation is less accurate when expressed but unannotated isoforms are missed during isoform identification (e.g., the insufficient library).
In contrast, when isoforms were identified and quantified by Hybrid-Seq, the median estimation errors were as low as 0.06 for PacBio+Illumina and 0.05 for ONT+Illumina. We also observed high median estimation errors when using long reads only (0.15 for PacBio and 0.13 for ONT). This reflects the drawbacks of TGS long reads in quantitative analysis, such as low throughput and bias, even though a better isoform library can be obtained than with short reads only. Therefore, Hybrid-Seq provides a better strategy to fully utilize PacBio and ONT long reads in transcriptome analysis.\n\nThe X axis shows the 7 strategies and the Y axis shows the Euclidean distance between the real relative expression percentage and the estimated relative expression percentage (for more details see Methods). ONT: Oxford Nanopore Technologies; PacBio: Pacific Biosciences.\n\nAlternative splicing and alternative polyadenylation produce a substantial number of isoforms with different lengths, exon usage and polyadenylation sites, greatly enriching the complexity of the human transcriptome44–46. The average lengths of identified isoforms were 1,759bp, 1,670bp, 1,848bp and 1,747bp for PacBio, ONT, PacBio+Illumina and ONT+Illumina, respectively (Supplementary Figure S8). The longest isoform (Gencode ID: ENST00000262160.10), identified simultaneously by all four strategies, was 34,537bp.\n\nFor multi-exon isoforms, an average of ~8 exons per isoform was identified by each of the four strategies (Supplementary Figure S9). However, the largest number of exons contained within a single isoform differed among the PacBio, ONT, PacBio+Illumina and ONT+Illumina datasets: 64, 49, 67 and 52, respectively. When considering isoforms with ≥30 exons, both PacBio (243) and PacBio+Illumina (367) identified more isoforms than ONT (84) and ONT+Illumina (169).
These results reveal that PacBio is superior to ONT at identifying isoforms with many exons, and that Hybrid-Seq performs better than long reads only. Although the lengths of identified isoforms are similar, the higher quality of PacBio data allows more robust long-read alignments for isoforms with many splice junctions.\n\nAlternative splicing events contribute to the diversity of isoform expression. PacBio, ONT, PacBio+Illumina and ONT+Illumina identified 2,829, 2,734, 3,617 and 3,516 alternative splicing events, respectively (Figure 6). On average, the most frequent alternative splicing events identified were intron retentions (35.90%), followed by exon skipping (28.28%), alternative 3’ splicing sites (18.20%) and alternative 5’ splicing sites (17.32%). A few mutually exclusive exon events (0.30%) were also discovered. Hybrid-Seq identified 24–37% more intron retentions, exon skipping events, alternative 3’ splicing sites and alternative 5’ splicing sites than the corresponding long-read-only strategies. PacBio+Illumina had better sensitivity than ONT+Illumina for all types of alternative splicing events.\n\nA5SS: alternative 5’ splicing site; A3SS: alternative 3’ splicing site; ES: exon skipping; RI: retained intron; MXE: mutually exclusive exons; ONT: Oxford Nanopore Technologies; PacBio: Pacific Biosciences.\n\nAs reported recently, PacBio data can identify alternative polyadenylation sites23. In our data, poly(A/T) tails were detected in 76.71% of PacBio CCS reads and 59.75% of ONT 2D reads. This shows that ONT has potential comparable to PacBio for identifying alternative polyadenylation sites.\n\nOf the Gencode-annotated isoforms identified by PacBio, ONT, PacBio+Illumina and ONT+Illumina, 42.51%, 41.87%, 44.06% and 43.78% were protein-coding, respectively (Figure 7A), and the proportions of pseudogenes were 28.38%, 29.99%, 26.38% and 28.46%.
Some isoforms were annotated as retained introns (9.48% on average), lincRNAs (4.47% on average) and antisense transcripts (3.02% on average).\n\n(a) Feature statistics of isoforms annotated by Gencode (version 24). (b) Length distribution of open reading frames (ORFs) of novel isoforms identified by the two Hybrid-Seq strategies. (c) Gene enrichment analysis of genes with at least one novel isoform identified by the two Hybrid-Seq strategies. (d) Seven novel isoforms (red tracks) of the human embryonic stem cell-relevant gene ESRG were identified by the two Hybrid-Seq strategies. The topmost isoform (blue track) is annotated by Gencode (version 24). ESRG: Embryonic Stem Cell Related Gene; ONT: Oxford Nanopore Technologies; PacBio: Pacific Biosciences.\n\nFor novel isoforms identified by Hybrid-Seq, we evaluated protein coding potential with GeneMarkS-T (see Methods). Open reading frames (ORFs) of >97 amino acids were found in 92.59% (2,511/2,712) and 89.40% (1,873/2,095) of novel isoforms identified by PacBio+Illumina and ONT+Illumina, respectively, with average lengths of 516 and 427 amino acids (Figure 7B). The longest ORFs were 2,302 and 1,980 amino acids, respectively.\n\nWe performed gene enrichment analysis for genes with ≥1 novel isoform. Most were enriched in transcription regulation, DNA binding and metal ion binding processes (Figure 7C), which are likely important for human embryonic development. Some other enriched genes have protein kinase activity and are associated with DNA damage response, cell division, cell cycle and RNA processing.\n\nFurthermore, 22 hESC-relevant genes expressed ≥1 novel isoform, each supported by PacBio or ONT full-length data (Supplementary Table 3). For example, 7 novel isoforms (red tracks in Figure 7D) were identified in ESRG (Embryonic Stem Cell Related Gene), which is required for maintenance of human embryonic stem cell pluripotency47.
These isoforms were not annotated in the existing annotation libraries (Gencode, Ensembl or RefSeq) and contained alternative 5’ splicing sites and alternative 3’ splicing sites. On average, 75.31% of the sequence of these 7 novel isoforms consisted of repetitive elements, so they were likely missed by short reads in previous studies.\n\n\nDiscussion\n\nOverall, PacBio and ONT are similar: long read length, high error rate and relatively low throughput. However, they have distinct aspects, such as homopolymer errors in PacBio and context-specific mismatches in ONT. PacBio sequences a molecule multiple times to generate high-quality consensus data, while ONT can sequence a molecule only twice. Together with the higher quality of its raw data, PacBio can thus generate extremely low-error-rate data for high-resolution studies, which is not feasible for ONT. PacBio has better data quality in most aspects, such as error rate and mappability, especially for the consensus data (CCS vs. 2D). However, ONT has a few advantages: in addition to slightly longer mappable lengths, the ONT MinION provides very high throughput, as its nanopores can sequence multiple molecules. The cost of our ONT data generation was 1,000–2,000 USD. Since sequencing cost is a significant obstacle to TGS application, the relatively high throughput and affordability make ONT promising for many applications, especially genome-wide and transcriptome-wide studies that require large amounts of data.\n\nWith a comprehensive understanding of the data features of PacBio and ONT, we can perform better data analysis and bioinformatics method development. We found a significant number of chimeric reads, which may be generated by either library preparation artifacts or failure to remove adaptors. Thus, it is important to filter these problematic long reads before further analyzing TGS data.
However, we cannot filter the data using a simple cutoff: though subreads and 1D reads are not as accurate as CCS and 2D reads, they are useful because of their reasonable mappability. In particular, error correction with short reads can improve error rates and increase mappability. Subreads and 1D reads constitute ~50–60% of the total data output by the machines, and moreover, many ONT “fail” reads are also mappable, though they are often discarded. Therefore, sophisticated data analysis and bioinformatics methods, such as error correction, are required to rescue these data or make better use of them. The specific error patterns lay the groundwork for better method development. Similarly, studies of error patterns can also benefit the development and application of both long-read-only and Hybrid-Seq approaches for single-nucleotide analysis, such as SNP calling. We note that our results are subject to a compound workflow, including library preparation, sequencing, base calling, and analysis software. However, as we used standard protocols and analyses, these results can still serve as an informative reference.\n\nIn fact, recent studies of ONT have validated its utility in genome assembly48. For transcriptome analysis, we demonstrated the capability of both ONT and PacBio to provide precise and complete isoform identification using the small gold-standard SIRV library. For complicated transcriptomes (e.g., hESCs), ONT also provided results comparable to PacBio. However, with its higher data quality, PacBio has slightly better overall performance, for example in the discovery of transcriptome complexity and sensitive identification of isoforms. Furthermore, we successfully improved the overall transcriptome analysis with ONT+Illumina, which is the first study to use ONT data in a Hybrid-Seq strategy.
A similar improvement is also observed for PacBio Hybrid-Seq over PacBio alone, as reported previously11, because short reads not only correct the errors of long reads, but also improve abundance estimation and splice site determination. Abundance estimation also benefits from the more precise isoform library produced by Hybrid-Seq. In addition, requiring consistency between TGS and SGS data can filter out many false positives, such as false gene fusion detections arising from library preparation artifacts. Notably, PacBio and ONT each make unique discoveries missed by the other, such as novel isoforms.\n\nAdditionally, we established that the technology improvements from the previous to the latest sequencing models of both PacBio and ONT are significant, including error rates and yields (especially for ONT). Therefore, the applications of both PacBio and ONT are expected to increase dramatically in the near future, and the results and comparisons above provide a reference for analyzing PacBio and ONT data. This study also provides an informative paradigm for applying PacBio and ONT to transcriptome analysis by long reads only and by the corresponding Hybrid-Seq strategies.\n\n\nSoftware and data availability\n\nThe AlignQC software described herein is freely available for use and can be downloaded from: http://www.healthcare.uiowa.edu/labs/au/AlignQC/.\n\nSource code available from: https://github.com/jason-weirather/AlignQC\n\nArchived source code at the time of publication: doi: 10.5281/zenodo.22412549 (https://zenodo.org/record/224125#.WHUFN1WLTcs)\n\nLicense: Apache 2.0\n\nReference sequence and annotation versions are described in Supplementary Table 1.
Author contributions\n\n\n\nKFA and DB designed the experiments. JLW wrote the AlignQC software. JLW and YW analyzed the data. MC and PP prepared samples for sequencing and performed all ONT sequencing. VS cultured the H1 cell line. XW contributed critical intellectual content. KFA, MC, YW, and JLW wrote the manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis work was supported by the National Human Genome Research Institute [R01HG008759 to KFA, YW and JLW]; the institutional fund of the Department of Internal Medicine, University of Iowa [to KFA, YW and JLW]; the Multidisciplinary Lung Research Career Development Program [T32HL007638 to JLW]; and the National Natural Science Foundation of China [91540204 to XW].\n\n\nAcknowledgements\n\nWe thank Kristina Thiel, Ph.D., Research Scientist at the University of Iowa Carver College of Medicine, for critical reading of the manuscript.\n\n\nSupplementary material\n\nSupplementary Figures 1–9 (in zipped file; Click here to access the data.):\n\nFigure S1. Read length distribution per cell. The read length distributions per Pacific Biosciences (PacBio) Single Molecule Real Time (SMRT) cell and Oxford Nanopore Technologies (ONT) flow cell are shown. Read counts are plotted in bins of 200bp. Panels represent each type of read generated by the platforms: circular consensus sequence (CCS) and subreads for PacBio; 2D, 1D template and 1D complement for ONT. ONT reads classified as pass or fail are plotted in different colors. Note that PacBio C2 subreads are plotted on a different y-axis scale than PacBio P6-C4 for visibility of the lower per-cell throughput of the older C2 chemistry. The number of cells (n) and number of reads (r) for each technology is listed at the top of each panel.\n\nFigure S2.1. Length distribution of reads and mappability.
The length distributions of alignments and their mapped portions shown for Oxford Nanopore Technologies (ONT) 2D and 1D (template strand) reads and Pacific Biosciences (PacBio) circular consensus sequence (CCS) and subreads in the main text are supplemented here with additional read sets of interest. High-quality reads represent ONT 2D reads and PacBio CCS reads. Raw reads represent ONT template strand reads and PacBio subreads. The 1D complement strand of ONT is now included. Columns have also been added for side-by-side comparison with LSC-corrected reads. Rows contain results for both the current PacBio P6-C4 and ONT R9 chemistries along with the older PacBio C2 and ONT R7 chemistries.\n\nFigure S2.2. Length distribution of reads and mappability for ONT ‘fail’ reads. These results show the mappability of ‘fail’-classified ONT reads.\n\nFigure S3. Mismatch error pattern regardless of platform. A ‘C’ followed by a ‘G’ followed by any base, or the reverse complement of that sequence, is a pattern observed across the low-error-rate Illumina data and is also observed even when reads perfectly match the reference sequence. PacBio: Pacific Biosciences.\n\nFigure S4. Pacific Biosciences (PacBio) and Oxford Nanopore Technologies (ONT) context-specific errors. Along with the context-specific errors plotted in the main text, this plot adds side-by-side comparisons of LSC-corrected reads. Each error type (‘insertion’, ‘deletion’, and ‘mismatch’) for each result is individually scaled for better resolution of the errors present in each result.\n\nFigure S5. Length statistics of ‘partial’ isoforms detected by the four strategies. ONT: Oxford Nanopore Technologies; PacBio: Pacific Biosciences.\n\nFigure S6. Overlap between isoforms identified by the two Hybrid-Seq strategies. (a) Overlap between multi-exon isoforms. (b) Overlap between singleton isoforms. ONT: Oxford Nanopore Technologies; PacBio: Pacific Biosciences.\n\nFigure S7. Statistics of splice site accuracy.
This figure does not include the perfectly matched splice sites (relative distance is equal to 0). The negative and positive values represent the truncated (shorter than known splice sites) and elongated (longer than known splice sites) nucleotide distance from the reference splice site, respectively. ONT: Oxford Nanopore Technologies; PacBio: Pacific Biosciences.\n\nSupplementary Figure S8. Length statistics of identified isoforms. ONT: Oxford Nanopore Technologies; PacBio: Pacific Biosciences.\n\nSupplementary Figure S9. Exon number statistics of identified isoforms. ONT: Oxford Nanopore Technologies; PacBio: Pacific Biosciences.\n\nSupplementary Table 1. Reference sequences and annotations. The source, address, version, and accession numbers are provided, when available, for reference sequences and annotations.\n\nClick here to access the data.\n\nSupplementary Table 2. Summary statistics comparing technologies. The statistics of reads outputted by each technology are organized by row. The colored columns, A–F, represent the subset of the technology being shown. These variables include the Platform (A), Chemistry (B), and Correction status (C). GeneralType (D) describes whether reads are high quality (HQ) consensus sequences, the raw nucleotides (possibly multiple per molecule), or the single-best aligned sequence representing a single molecule. The ReadType (E) more specifically defines the GeneralType based on the platform-specific outputs. Finally, column (F) specifies whether reads were called as ‘pass’ or ‘fail’ for the Oxford Nanopore Technologies (ONT) platform. The remaining columns provide yield, length, mappability, and error rate information.\n\nClick here to access the data.\n\nSupplementary Table 3. Novel human embryonic stem cell (hESC)-relevant isoforms. 
The novel isoforms of 22 hESC-relevant genes are shown along with a functional description and the number of supporting full-length Pacific Biosciences (PacBio) and Oxford Nanopore Technologies (ONT) long reads. Mapping information to the hg38 genome is also provided.\n\nClick here to access the data.\n\n\nReferences\n\nMcCarthy A: Third generation DNA sequencing: Pacific Biosciences' single molecule real time technology. Chem Biol. 2010; 17(7): 675–6. PubMed Abstract | Publisher Full Text\n\nLaver T, Harrison J, O'Neill PA, et al.: Assessing the performance of the Oxford Nanopore Technologies MinION. Biomol Detect Quantif. 2015; 3: 1–8. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRhoads A, Au KF: PacBio Sequencing and Its Applications. Genomics Proteomics Bioinformatics. 2015; 13(5): 278–89. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLu H, Giordano F, Ning Z: Oxford Nanopore MinION Sequencing and Genome Assembly. Genomics Proteomics Bioinformatics. 2016; 14(5): 265–79. PubMed Abstract | Publisher Full Text | Free Full Text\n\nReuter JA, Spacek DV, Snyder MP: High-throughput sequencing technologies. Mol Cell. 2015; 58(4): 586–97. PubMed Abstract | Publisher Full Text | Free Full Text\n\nvan Dijk EL, Auger H, Jaszczyszyn Y, et al.: Ten years of next-generation sequencing technology. Trends Genet. 2014; 30(9): 418–26. PubMed Abstract | Publisher Full Text\n\nLiu L, Li Y, Li S, et al.: Comparison of next-generation sequencing systems. J Biomed Biotechnol. 2012; 2012: 251364. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMcCoy RC, Taylor RW, Blauwkamp TA, et al.: Illumina TruSeq synthetic long-reads empower de novo assembly and resolve complex, highly-repetitive transposable elements. PLoS One. 2014; 9(9): e106689. PubMed Abstract | Publisher Full Text | Free Full Text\n\nZheng GX, Lau BT, Schnall-Levin M, et al.: Haplotyping germline and cancer genomes with high-throughput linked-read sequencing. Nat Biotechnol. 
2016; 34(3): 303–11. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPendleton M, Sebra R, Pang AW, et al.: Assembly and diploid architecture of an individual human genome via single-molecule technologies. Nat Methods. 2015; 12(8): 780–6. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAu KF, Sebastiano V, Afshar PT, et al.: Characterization of the human ESC transcriptome by hybrid sequencing. Proc Natl Acad Sci U S A. 2013; 110(50): E4821–30. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWeirather JL, Afshar PT, Clark TA, et al.: Characterization of fusion genes and the significantly expressed fusion isoforms in breast cancer by hybrid sequencing. Nucleic Acids Res. 2015; 43(18): e116. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDeonovic B, Wang Y, Weirather JL, et al.: IDP-ASE: haplotyping and quantifying allele-specific expression at the gene and gene isoform level by hybrid sequencing. Nucleic Acids Res. 2016; pii: gkw1076. PubMed Abstract | Publisher Full Text\n\nIp CL, Loose M, Tyson JR, et al.: MinION Analysis and Reference Consortium: Phase 1 data release and analysis [version 1; referees: 2 approved]. F1000Res. 2015; 4: 1075. PubMed Abstract | Publisher Full Text | Free Full Text\n\nQuick J, Quinlan AR, Loman NJ: A reference bacterial genome dataset generated on the MinIONTM portable single-molecule nanopore sequencer. Gigascience. 2014; 3: 22. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFeng Z, Fang G, Korlach J, et al.: Detecting DNA modifications from SMRT sequencing data by modeling sequence context dependence of polymerase kinetic. PLoS Comput Biol. 2013; 9(3): e1002935. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKoren S, Schatz MC, Walenz BP, et al.: Hybrid error correction and de novo assembly of single-molecule sequencing reads. Nat Biotechnol. 2012; 30(7): 693–700. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nAu KF, Underwood JG, Lee L, et al.: Improving PacBio long read accuracy by short read alignment. PLoS One. 2012; 7(10): e46679. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSalmela L, Rivals E: LoRDEC: accurate and efficient long read error correction. Bioinformatics. 2014; 30(24): 3506–14. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTevz G, McGrath S, Demeter R, et al.: Identification of a novel fusion transcript between human relaxin-1 (RLN1) and human relaxin-2 (RLN2) in prostate cancer. Mol Cell Endocrinol. 2016; 420: 159–68. PubMed Abstract | Publisher Full Text\n\nSharon D, Tilgner H, Grubert F, et al.: A single-molecule long-read survey of the human transcriptome. Nat Biotechnol. 2013; 31(11): 1009–14. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTilgner H, Grubert F, Sharon D, et al.: Defining a personal, allele-specific, and single-molecule long-read transcriptome. Proc Natl Acad Sci U S A. 2014; 111(27): 9869–74. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAbdel-Ghany SE, Hamilton M, Jacobi JL, et al.: A survey of the sorghum transcriptome using single-molecule long reads. Nat Commun. 2016; 7: 11706. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMinoche AE, Dohm JC, Schneider J, et al.: Exploiting single-molecule transcript sequencing for eukaryotic gene prediction. Genome Biol. 2015; 16: 184. PubMed Abstract | Publisher Full Text | Free Full Text\n\nThomas S, Underwood JG, Tseng E, et al.: Long-read sequencing of chicken transcripts and identification of new transcript isoforms. PLoS One. 2014; 9(4): e94650. PubMed Abstract | Publisher Full Text | Free Full Text\n\nXu Z, Peters RJ, Weirather J, et al.: Full-length transcriptome sequences and splice variants obtained by a combination of sequencing platforms applied to different root tissues of Salvia miltiorrhiza and tanshinone biosynthesis. Plant J. 2015; 82(6): 951–61. 
PubMed Abstract | Publisher Full Text\n\nShi L, Guo Y, Dong C, et al.: Long-read sequencing and de novo assembly of a Chinese genome. Nat Commun. 2016; 7: 12065. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGordon SP, Tseng E, Salamov A, et al.: Widespread Polycistronic Transcripts in Fungi Revealed by Single-Molecule mRNA Sequencing. PLoS One. 2015; 10(7): e0132628. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTreutlein B, Gokce O, Quake SR, et al.: Cartography of neurexin alternative splicing mapped by single-molecule long-read mRNA sequencing. Proc Natl Acad Sci U S A. 2014; 111(13): E1291–9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLarsen PA, Heilman AM, Yoder AD: The utility of PacBio circular consensus sequencing for characterizing complex gene families in non-model organisms. BMC Genomics. 2014; 15(1): 720. PubMed Abstract | Publisher Full Text | Free Full Text\n\nZhang W, Ciclitira P, Messing J: PacBio sequencing of gene families - a case study with wheat gluten genes. Gene. 2014; 533(2): 541–6. PubMed Abstract | Publisher Full Text\n\nBolisetty MT, Rajadinakaran G, Graveley BR: Determining exon connectivity in complex mRNAs by nanopore sequencing. Genome Biol. 2015; 16: 204. PubMed Abstract | Publisher Full Text | Free Full Text\n\nOikonomopoulos S, Wang YC, Djambazian H, et al.: Benchmarking of the Oxford Nanopore MinION sequencing for quantitative and qualitative assessment of cDNA populations. Sci Rep. 2016; 6: 31602. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSebastiano V, Zhen HH, Haddad B, et al.: Human COL7A1-corrected induced pluripotent stem cells for the treatment of recessive dystrophic epidermolysis bullosa. Sci Transl Med. 2014; 6(264): 264ra163. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nSebastiano V, Maeder ML, Angstman JF, et al.: In situ genetic correction of the sickle cell anemia mutation in human induced pluripotent stem cells using engineered zinc finger nucleases. Stem Cells. 2011; 29(11): 1717–26. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLamble S, Batty E, Attar M, et al.: Improved workflows for high throughput library preparation using the transposome-based Nextera system. BMC Biotechnol. 2013; 13: 104. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPicelli S, Faridani OR, Björklund AK, et al.: Full-length RNA-seq from single cells using Smart-seq2. Nat Protoc. 2014; 9(1): 171–81. PubMed Abstract | Publisher Full Text\n\nKent WJ, Sugnet CW, Furey TS, et al.: The human genome browser at UCSC. Genome Res. 2002; 12(6): 996–1006. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMartin M: Cutadapt removes adapter sequences from high-throughput sequencing reads. EMBnet J. 2011; 17(1): 10–12. Publisher Full Text\n\nWu TD, Watanabe CK: GMAP: a genomic mapping and alignment program for mRNA and EST sequences. Bioinformatics. 2005; 21(9): 1859–75. PubMed Abstract | Publisher Full Text\n\nPertea M, Pertea GM, Antonescu CM, et al.: StringTie enables improved reconstruction of a transcriptome from RNA-seq reads. Nat Biotechnol. 2015; 33(3): 290–5. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHuang da W, Sherman BT, Lempicki RA: Systematic and integrative analysis of large gene lists using DAVID bioinformatics resources. Nat Protoc. 2009; 4(1): 44–57. PubMed Abstract | Publisher Full Text\n\nLoomis EW, Eid JS, Peluso P, et al.: Sequencing the unsequenceable: expanded CGG-repeat alleles of the fragile X gene. Genome Res. 2013; 23(1): 121–8. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBentley DL: Coupling mRNA processing with transcription in time and space. Nat Rev Genet. 2014; 15(3): 163–75. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nKeren H, Lev-Maor G, Ast G: Alternative splicing and evolution: diversification, exon definition and function. Nat Rev Genet. 2010; 11(5): 345–55. PubMed Abstract | Publisher Full Text\n\nElkon R, Ugalde AP, Agami R: Alternative cleavage and polyadenylation: extent, regulation and function. Nat Rev Genet. 2013; 14(7): 496–506. PubMed Abstract | Publisher Full Text\n\nWang J, Xie G, Singh M, et al.: Primate-specific endogenous retrovirus-driven transcription defines naive-like stem cells. Nature. 2014; 516(7531): 405–9. PubMed Abstract | Publisher Full Text\n\nGoodwin S, Gurtowski J, Ethe-Sayers S, et al.: Oxford Nanopore sequencing, hybrid error correction, and de novo assembly of a eukaryotic genome. Genome Res. 2015; 25(11): 1750–6. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWeirather J: jason-weirather/AlignQC: Current version code accompanying publication [Data set]. Zenodo. 2016. Data Source"
}
|
[
{
"id": "19894",
"date": "27 Feb 2017",
"name": "Jingyi Jessica Li",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nIn this paper, the authors provides comprehensive analyses to compare two third-generation sequencing technologies (PacBio and ONT) for RNA sequencing. The comparison was conducted in many aspects, including read lengths, mappability, chimeric and gapped alignments, error patterns, isoform identification, and isoform abundance estimation. To my knowledge, this paper is the first comparison of PacBio and ONT and using each of them in hybrid with Illumina, and its comparison results will provide valuable information about these two third-generation technologies to the transcriptomics field. My comments/questions about some contents in this paper are summarized below.\nIn the isoform identification task, it is unclear how the authors defined \"true positive and false positive isoforms\" assembled by StringTie from Illumina reads?\n\nIn Figure 1, why does ONT 2D have more reads than ONT 1D?\n\nIn the comparison of error patterns, the definition of “homopolymer pattern” is unclear.\n\nIn Figure 2, only the percentages of mapped reads of each read category are shown. While this is important information, it would be also important to know the absolute numbers of mapped reads in each category.\n\nIn Figure 3, the row containing labels “A C G T ” above the insertion row should be better placed above the mismatch row.\n\nIn Table 2, the top row labeling is confusing. 
It would be clearer to remove \"Over-annotated library (100)\", \"Correct library (68)\", and \"Insufficient library (43)\" from the top row. Also, why does the \"Illumina+Insufficient\" row have one additional cell?\n\nIn Figure 4, it would be better to make the circles in Venn diagrams proportional to the numbers.\n\nIt is unclear why the authors included the insufficient annotation and the overannotated cases in the study of isoform identification and isoform abundance estimation. Since they are only applicable to the Illumina data using StringTie but not relevant to the PacBio or the ONT data, including them seems a deviation from the theme of the paper.\n\nIn Figure 7d, are the seven novel isoforms verified?",
"responses": [
{
"c_id": "2760",
"date": "19 Jun 2017",
"name": "Kin Fai Au",
"role": "Author Response",
"response": "We greatly appreciate your time and thoughtful questions and critiques of our manuscript “Comprehensive comparison of PacBio and Oxford Nanopore Technologies and their applications to transcriptome analysis.” These are addressed in this point by point response and in the corresponding manuscript revisions. JJL: In this paper, the authors provides comprehensive analyses to compare two third-generation sequencing technologies (PacBio and ONT) for RNA sequencing. The comparison was conducted in many aspects, including dread lengths, mappability, chimeric and gapped alignments, error patterns, isoform identification, and isoform abundance estimation. To my knowledge, this paper is the first comparison of PacBio and ONT and using each of them in hybrid with Illumina, and its comparison results will provide valuable information about these two third-generation technologies to the transcriptomics field. My comments/questions about some contents in this paper are summarized below.JJL (1): In the isoform identification task, it is unclear how the authors defined \"true positive and false positive isoforms\" assembled by StringTie from Illumina reads?Thank you for your question. Our original manuscript did not adequately describe the criteria used in this analysis and has been modified accordingly in the revision (see the second paragraph in Methods “Isoform identification in SIRVs by Illumina, PacBio, ONT”): “For all SIRV isoforms, we classified them into two groups: 1) true positive if the isoform was annotated by SIRV “correct” annotation library; and 2) false positive if not.JJL (2): In Figure 1, why does ONT 2D have more reads than ONT 1D?We apologize if the axis labeling and scaling of Figure 1 made this point unclear; raw counts associated with this figure are available in the Supplemental Table 2. As expected, ONT 2D have less reads (289430) than the ONT 1D reads (339651). 
The Figure 1 legend has been modified to appropriately refer readers to Supplementary Table 2.\n\nJJL (3): In the comparison of error patterns, the definition of “homopolymer pattern” is unclear.\n\nThank you for giving us a chance to clarify the definitions of “loose” and “tight” homopolymer patterns. If an error rate is much higher when both surrounding bases are the same as the mismatched, inserted or deleted bases, then it indicates that these errors are mostly occurring in homopolymer runs. In the “loose” homopolymer error pattern, the error rate is high if either surrounding base is the same as the mismatched, inserted or deleted bases. The context requirement is “looser” than in the “tight” homopolymer pattern. This is observed as the cross-shaped higher error rates (in the context of Figure 3).\n\nJJL (4): In Figure 2, only the percentages of mapped reads of each read category are shown. While this is important information, it would also be important to know the absolute numbers of mapped reads in each category.\n\nWe agree that the total number of aligned reads represented in each category would be a very useful addition, and have updated Figure 2 and its legend accordingly.\n\nJJL (5): In Figure 3, the row containing labels “A C G T ” above the insertion row would be better placed above the mismatch row.\n\nThank you for the suggestion. To improve the visual cues in the figure, we have filled out the labeling in Figure 3 around the mismatch patterns.\n\nJJL (6): In Table 2, the top row labeling is confusing. It would be clearer to remove \"Over-annotated library (100)\", \"Correct library (68)\", and \"Insufficient library (43)\" from the top row. Also, why does the \"Illumina+Insufficient\" row have one additional cell?\n\nThanks for your suggestion. We revised Table 2 to clearly show our results on isoform identification.\n\nJJL (7): In Figure 4, it would be better to make the circles in Venn diagrams proportional to the numbers.\n\nYes, we agree with you. 
For Figure 4e and 4f, the circles in the Venn diagrams were made proportional to the numbers.\n\nJJL (8): It is unclear why the authors included the insufficient annotation and the overannotated cases in the study of isoform identification and isoform abundance estimation. Since they are only applicable to the Illumina data using StringTie but not relevant to the PacBio or the ONT data, including them seems a deviation from the theme of the paper.\n\nFor isoform identification, a “reference-annotation-guided” mode is recommended by most short read-based methods (e.g. Cufflinks and StringTie). The performance strongly relies on the reliability and completeness of the reference annotation library. To consider different scenarios, we included three types of reference annotation libraries in the comparison. In detail, we wanted to make two points:\n\n1) First, for most non-model organisms, isoform annotation libraries are incompletely annotated and thus insufficient for transcriptome analysis. Recovering un-annotated isoforms that are expressed in the given sample is therefore challenging. As shown in Table 2, StringTie (by Illumina data and the “insufficient library”) only rescued 5 of 25 un-annotated but truly-expressed isoforms. Second, for well-studied species like human, not all annotated isoforms are expressed in a given sample. Thus, the isoform annotation libraries are usually “over-annotated” for a given sample (e.g., a specific cell line or tissue). Using the “over-annotated library” with 32 unexpressed isoforms, StringTie incorrectly assembled 15 unexpressed isoforms, which greatly increased the false positive rate. Therefore, short read-based strategies have an inherent disadvantage relative to long read-based strategies, and prediction alone is insufficient to overcome this.\n\n2) Long read sequencing technologies (PacBio or ONT) can directly detect expressed isoforms. As shown in Table 2, PacBio and ONT detected 67 and 68 expressed isoforms, respectively. 
Therefore, both long read-based strategies overcome the shortcomings of prediction through direct detection.\n\nJJL (9): In Figure 7d, are the seven novel isoforms verified?\n\nIn this study, we did not verify the identified novel isoforms. In future work, we will verify these novel isoforms, especially those associated with embryonic stem cell identity. To increase the reliability of the novel isoforms, we have updated Figure 7d and only retained those novel isoforms that are supported by both PacBio and ONT full-length long reads. The number of supporting long reads can be found in the updated Supplementary Table 3. The manuscript has also been updated to reflect this change (see Results “Functional analysis of identified isoforms in hESCs”)."
}
]
},
{
"id": "20639",
"date": "01 Mar 2017",
"name": "Hagen Tilgner",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe manuscript by Weirather and coworkers compares two third generation sequencing protocols (Pacific Biosciences – PacBio as well as Oxford Nanopore technologies – ONT) in terms of their performance for RNA sequencing. It concludes that both technologies can be used for transcriptome analysis with PacBio having advantages in terms of sequencing errors and consequently alignability while ONT gives higher sequencing throughput.\nGenerally speaking, this is an important topic, which many readers will find of interest. The manuscript has a lot of very informative information that can guide researchers in their experiments. On the flipside there are also a couple of instances where readers might be misled if they are not specialists in the field. I will detail these points of the manuscript below and what I think should be done in order to address them. The authors should be able to address these issues without many difficulties. This will then be an important contribution to the field.\n\nStrengths:\n\n1) The demonstration of the dependence of sequencing quality (or the Fraction of read aligned) on read length (figure 2) both for single pass reads (subreads for PacBio and 1D for ONT) and for multi-pass consensus reads (CCS for PacBio and 2D reads for ONT) is very useful. 
Future readers will be able to have a good estimate of what they might expect for their genes of interest.\n\n2) The comparison of the type of error (figure 3) is very useful.\n\n3) Likewise the chimera analysis is useful to understand the limitations one must be aware of when planning experiments.\n\nWeaknesses and solutions and other questions:\n\n1a) The first drawback is that the experiment for PacBio and ONT is not exactly identical. PacBio libraries underwent size selection, whereas ONT libraries did not (as the authors indicate in an upfront way), although in theory, I do not see why this could not have been done for ONT. The reason, I would guess, is that for ONT size fractions are not required (just as they were not in our 2015 synthetic long read isoform paper1). Nevertheless, this leaves us with the problem that we cannot exactly understand what are characteristic differences between ONT and PacBio and what may be linked to size selection. I think the authors should indicate in a prominent place (e.g. the abstract) that this is a comparison of a “PacBio experiment using size fractions” and a non-size-selected Oxford Nanopore experiment. This is of importance because many readers may only read the abstract and look at the figures – and the current version could cause them to miss this point. 1b) From the above drawback, it follows that for PacBio the authors need to choose how much sequencing is devoted to the four size bins (1,4,4 and 3 SMRT cells, I believe are chosen) but for ONT this is not done. Therefore the length profile in figure 1 (top) is a function of the Oxford Nanopore system and the cDNA sample only, but the distributions (bottom) for PacBio also depend on the employed size selections and SMRT cell numbers. In principle one could (if one wanted to) make the 500bp-1kb bin the most prominent bin in the PacBio length distribution, by also using 4 SMRT cells for this bin. Conversely one could give more weight to other bins. 
On the upside, this means one can zoom in on sizes of interest. On the downside, one must carefully consider the implications for the transcriptome of interest. The important point here is, again, that all of this could also have been done for ONT. I suggest to make readers aware of this in an obvious way in the legend of figure 1. 1c) Additionally, in figure 2, I would remove the leftmost boxplot for each panel (the overall Fraction of Read aligned), because in the case of PacBio this would change, if one were to use different amounts of sequencing for different bins (because these bins differ, as the authors show, in terms of alignability). The “Fraction of Read Aligned” broken up by length bins however is highly informative. Please do keep this by all means!\n\n2a) Regarding isoform abundance estimation from SIRVs (figure5): The authors employed the E0 mix of the SIRVs, in which all different isoforms are of equal abundance. This is very different from real-world situations, in which different genes but also different isoforms from the same gene can be of very different expression level. The authors note earlier that ONT has advantages in sequencing depth (at the cost of quality), which (we would hope) would lead to better isoform quantification for lowly expressed genes and minor isoforms– but using the E0 mix we cannot tell (while we could have, I think with the E1 mix). Reading the paper, I was searching for the use of the E1 and E2 mixes which could have answered these questions. It would be good to point out that lowly expressed gene and minor isoform quantification were not addressed here. 2b) Also, regarding the isoform abundance estimation, my first impression was “these are actually very small errors” when looking at the y-axis of figure 5. 
My current understanding of the situation is however different: As the authors point out, the actual expression of each isoform is 1/68~=0.015, meaning that the errors are of the same order of magnitude as the (uniformly) expressed transcripts – and a bit less for error corrected reads. If my reading of the situation is accurate, then this should be noted somewhere.\n\n3) Other points:\nPage 8 left column: fig 2 is referenced for “ONT data have particularly higher trans-chimeric rates in very long reads (>4kb)”. Shouldn’t this be fig. 1 ?\n\nPage 10, right column, end of first paragraph: when referring to table 2, it is not obvious (apologies, if I missed it) what kind of long-reads (ONT-1D vs. ONT-2D vs. ONT-errorCorrection and PacBio-CCS vs. PacBio-subread vs. PacBio-errorCorrected) are used. Earlier parts of the paper use abbreviations like ONT-1D or PacBio-subreads, but not here.\n\nA similar statement is true for figure 5 and the corresponding text (“when using long reads only”): it is not clear if PacBio-CCS or PacBio-subreads are used (and the same for ONT) when comparing to the error-corrected subreads.\n\nFor figure 5, it is somewhat difficult to understand, what was exactly done. The authors say that, they used the “Euclidean distance” between REP and estimated REP. The way I understand it, is that the authors calculated REP and estimated REP for each transcript, and then calculated the Euclidean distance for each isoform. In this case (one dimension only) the Euclidean distance reduces to the absolute value of REP minus estimated REP. If this was done, this simpler way of saying it, is advantageous, I believe. Using the word “Euclidean distance” makes me expect multidimensionality. This would suggest that the authors have a vector of isoform expression values for each gene (or maybe multiple samples)? 
That would imply that the boxplots only represent 7 dots for the 7 SIRV genes…please clarify so that there is no doubt.\n\nThe section “Isoform Identification in hESCs by PacBio, ONT and Hybrid-Seq” is difficult to read. This may stem from the terms “full length rates” and “full-length isoform identification rates”. It is not fully clear, if they mean the same or different things; What is exactly meant? Is it “fraction of discovered annotated isoforms that are seen at least once in a full length read” or “fraction of reads that are judged as full-length” or something else? Please clarify.\n\nPage 12, the third paragraph, regarding the discovery of isoforms with >=30 exons. The correct finding of isoforms with lots of exons of course depends on error-rate (which is linked to getting all splice sites correctly) and having long enough reads. In the absence of a size selection experiment for the Minion, one cannot prove that the observed difference between PacBio and Minion would not be rendered smaller (probably not totally removed though – because of the higher Minion error rate), with a size selection experiment for the Minion. I would mention that.\n\nRegarding the quantification of alternative splicing events … there are many publications that suggested exon skipping is the most frequent type of alternative splicing in humans. There are reports that have reported high occurrence of intron retention – but to my knowledge not that intron retention is more frequent than exon skipping. For example Braunschweig et al, Genome Research 20142 using short reads and our own paper Tilgner et al, Nature Biotech, 20151 using long reads. The authors could also use a minimum frequency of each single alternative event (e.g. 10% as in the papers referenced above) to distinguish splicing errors and few intermediate RNA molecules from “real” isoforms. 
This may change the relative abundance of each type of splicing event.\n\nIn the last paragraph on page 12, the word “alterative” is used. I assume this should be “alternative”. If this is a spelling mistake, there may be more.",
"responses": [
{
"c_id": "2759",
"date": "19 Jun 2017",
"name": "Kin Fai Au",
"role": "Author Response",
"response": "We greatly appreciate your time and thoughtful questions and critiques of our manuscript “Comprehensive comparison of PacBio and Oxford Nanopore Technologies and their applications to transcriptome analysis.” These are addressed in this point by point response and in the corresponding manuscript revisions. HT: The manuscript by Weirather and coworkers compares two third generation sequencing protocols (Pacific Biosciences – PacBio as well as Oxford Nanopore technologies – ONT) in terms of their performance for RNA sequencing. It concludes that both technologies can be used for transcriptome analysis with PacBio having advantages in terms of sequencing errors and consequently alignability while ONT gives higher sequencing throughput.Generally speaking, this is an important topic, which many readers will find of interest. The manuscript has a lot of very informative information that can guide researchers in their experiments.On the flipside there are also a couple of instances where readers might be misled if they are not specialists in the field. I will detail these points of the manuscript below and what I think should be done in order to address them. The authors should be able to address these issues without many difficulties. This will then be an important contribution to the field. Strengths: 1) The demonstration of the dependence of sequencing quality (or the Fraction of read aligned) on read length (figure 2) both for single pass reads (subreads for PacBio and 1D for ONT) and for multi-pass consensus reads (CCS for PacBio and 2D reads for ONT) is very useful. Future readers will be able to have a good estimate of what they might expect for their genes of interest. 2) The comparison of the type of error (figure 3) is very useful. 3) Likewise the chimera analysis is useful to understand the limitations one must be aware of when planning experiments. 
Weaknesses and solutions and other questions: HT (1a): The first drawback is that the experiment for PacBio and ONT is not exactly identical. PacBio libraries underwent size selection, whereas ONT libraries did not (as the authors indicate in an upfront way), although in theory, I do not see why this could not have been done for ONT. The reason, I would guess, is that for ONT size fractions are not required (just as they were not in our 2015 synthetic long read isoform paper1). Nevertheless, this leaves us with the problem that we cannot exactly understand what the characteristic differences between ONT and PacBio are and what may be linked to size selection. I think the authors should indicate in a prominent place (e.g. the abstract) that this is a comparison of a “PacBio experiment using size fractions” and a non-size-selected Oxford Nanopore experiment. This is of importance because many readers may only read the abstract and look at the figures – and the current version could cause them to miss this point. Thank you for strongly making this point. The fact that PacBio was size selected and ONT was not deserves discussion and consideration. In fact, we did try size selection with ONT, but unfortunately it did not work in our hands and we haven't figured out the reason. Size selection is officially recommended for the PacBio Iso-Seq protocol and has been validated by many published works, while there is so far no \"official\" protocol released by ONT. Therefore, transcriptome data collection without size selection was the only approach we could successfully perform on the ONT platform. We strongly encourage more follow-up studies to figure out an optimal protocol for generating transcriptome data on the ONT platform. Nevertheless, we agree that size selection is a critical difference between the two sequencing data collections in our work and needs prominent mention in the manuscript. 
To this end, we have modified the Abstract, Introduction, and first two figures to make specific mention of this difference. We hope this change will make readers more clearly aware of this difference. HT (1b): From the above drawback, it follows that for PacBio the authors need to choose how much sequencing is devoted to the four size bins (1, 4, 4 and 3 SMRT cells, I believe, are chosen) but for ONT this is not done. Therefore the length profile in figure 1 (top) is a function of the Oxford Nanopore system and the cDNA sample only, but the distributions (bottom) for PacBio also depend on the employed size selections and SMRT cell numbers. In principle one could (if one wanted to) make the 500bp-1kb bin the most prominent bin in the PacBio length distribution, by also using 4 SMRT cells for this bin. Conversely one could give more weight to other bins. On the upside, this means one can zoom in on sizes of interest. On the downside, one must carefully consider the implications for the transcriptome of interest. The important point here is, again, that all of this could also have been done for ONT. I suggest making readers aware of this in an obvious way in the legend of figure 1. We modified the legend of Figure 1 to point out the size selection step in PacBio data. As mentioned above, we did not have a successful experiment doing size-selection of ONT or have an official protocol recommendation. To be clear, we do not want our lack of success in incorporating size selection into the ONT protocol to be misinterpreted as a deficiency in the ONT platform. 
Rather, we would prefer to defer the topic of size selection in ONT until it has been better explored by ourselves or others in the community. HT (1c): Additionally, in figure 2, I would remove the leftmost boxplot for each panel (the overall Fraction of Read aligned), because in the case of PacBio this would change if one were to use different amounts of sequencing for different bins (because these bins differ, as the authors show, in terms of alignability). The “Fraction of Read Aligned” broken up by length bins however is highly informative. Please do keep this by all means! Thank you for this suggestion. While we agree that the most informative parts of the plot are the center and left panels, we feel the leftmost (all aligned reads) plot is somewhat useful for providing an overall view of alignability and would prefer to keep it. In response to the other reviewer's comment, this plot was supplemented with the aligned read counts, which should improve the overall readability. HT (2a): Regarding isoform abundance estimation from SIRVs (figure 5): The authors employed the E0 mix of the SIRVs, in which all different isoforms are of equal abundance. This is very different from real-world situations, in which different genes but also different isoforms from the same gene can be of very different expression level. The authors note earlier that ONT has advantages in sequencing depth (at the cost of quality), which (we would hope) would lead to better isoform quantification for lowly expressed genes and minor isoforms – but using the E0 mix we cannot tell (while we could have, I think, with the E1 mix). Reading the paper, I was searching for the use of the E1 and E2 mixes which could have answered these questions. It would be good to point out that lowly expressed gene and minor isoform quantification were not addressed here. Thank you for the suggestions. 
We elected to use the E0 mix to have as many fixed variables as we possibly could to get a simple and clear readout on performance. We aimed to evaluate how isoform identification and different types of sequencing coverage (by long reads or short reads) affect the isoform quantification. For example, hybrid sequencing strategies had better isoform identification by long reads (PacBio or ONT) and better quantitative information from short-read coverage (Illumina) in the statistical model, so they had better accuracy. We agree that including E1 and E2 would be good for exploring more issues in isoform quantification, such as the lowly-expressed ones. For example, lower sequencing coverage of lowly-expressed transcripts could contribute to the variance of abundance estimation. We could consider a separate manuscript to study all problems of isoform abundance estimation thoroughly. HT (2b): Also, regarding the isoform abundance estimation, my first impression was “these are actually very small errors” when looking at the y-axis of figure 5. My current understanding of the situation is however different: As the authors point out, the actual expression of each isoform is 1/68~=0.015, meaning that the errors are of the same order of magnitude as the (uniformly) expressed transcripts – and a bit less for error-corrected reads. If my reading of the situation is accurate, then this should be noted somewhere. We are sorry for the unclear description of Figure 5. We revised the section “Isoform abundance estimation by PacBio, ONT and Hybrid-Seq” and the legend of Figure 5 to clarify this issue. HT (3): Other points: HT (3a): Page 8 left column: fig 2 is referenced for “ONT data have particularly higher trans-chimeric rates in very long reads (>4kb)”. Shouldn’t this be fig. 1? Yes, thank you so much for pointing this out. 
We have made this correction in the manuscript. HT (3b): Page 10, right column, end of first paragraph: when referring to table 2, it is not obvious (apologies, if I missed it) what kind of long-reads (ONT-1D vs. ONT-2D vs. ONT-errorCorrection and PacBio-CCS vs. PacBio-subread vs. PacBio-errorCorrected) are used. Earlier parts of the paper use abbreviations like ONT-1D or PacBio-subreads, but not here. We are sorry for the confusing labels in Table 2 and the main text. In Table 2, “correct” means one of three SIRV annotation libraries (“correct”, “insufficient” and “over-annotated”). However, at the end of the first paragraph, right column, Page 10, the word “corrected”/”correction” means the sequencing long reads that are corrected by short reads using error-correction software (e.g., LSC). We have added some annotation for Table 2 for better understanding. HT (3c): A similar statement is true for figure 5 and the corresponding text (“when using long reads only”): it is not clear if PacBio-CCS or PacBio-subreads are used (and the same for ONT) when comparing to the error-corrected subreads. We are sorry for the unclear figure legend of Figure 5. The x-axis shows the strategy of isoform identification and quantification. Here, the words “correct”, “insufficient” and “over-annotated” inside the parentheses represent three different SIRV annotation libraries that were used in the \"reference-annotation-guided\" mode of StringTie. They do not represent the types of sequencing reads. We have modified the figure legends to clarify this issue. In addition, we have updated the section “Short read and long read data processing and alignment” to describe more details about which long reads were used in the analyses. Reads used in the technical comparisons are defined specifically throughout as being either consensus or raw reads (e.g. CCS or subreads). For the transcriptome analyses, both PacBio and ONT reads consisted of “best reads”. 
These were constructed with the goal of 1) having each molecule represented in the dataset once and only once and 2) choosing the best read of each molecule for transcriptome analysis. Below is the priority order of reads to be selected as the \"best read\" for each molecule in different analysis strategies:

PacBio (long reads only)
- The best aligned CCS read (defined by the number of bases in the read mapped to the reference genome)
- Otherwise, the best aligned subread

PacBio (long and short reads combined, Hybrid-Seq)
- The best aligned CCS read with >2 passes and accuracy greater than 95 (estimated by SMRT Analysis software). Corrected reads were not used here because the consensus already exceeds typical short-read correction.
- Otherwise, the best aligned CCS read corrected by short reads.
- Otherwise, the best aligned subread corrected by short reads.

ONT (long reads only)
- The best aligned 2D read
- Otherwise, the best aligned 1D template read
- Otherwise, the best aligned 1D complement read

ONT (long and short reads combined, Hybrid-Seq)
- The best aligned 2D read corrected by short reads
- Otherwise, the best aligned 1D template read corrected by short reads
- Otherwise, the best aligned 1D complement read corrected by short reads

So, for example, in the “long read only” analysis of ONT, if a 2D read was aligned, its best alignment would be used, and its 1D reads would not be used. HT (3d): For figure 5, it is somewhat difficult to understand what exactly was done. The authors say that they used the “Euclidean distance” between REP and estimated REP. The way I understand it is that the authors calculated REP and estimated REP for each transcript, and then calculated the Euclidean distance for each isoform. In this case (one dimension only) the Euclidean distance reduces to the absolute value of REP minus estimated REP. If this was done, saying it this simpler way would be advantageous, I believe. Using the word “Euclidean distance” makes me expect multidimensionality. 
This would suggest that the authors have a vector of isoform expression values for each gene (or maybe multiple samples)? That would imply that the boxplots only represent 7 dots for the 7 SIRV genes… Please clarify so that there is no doubt. Thank you for the question. The “Euclidean distance” is the aggregated measure of errors that are the differences between the expected relative expression percentage (REP) and the observed REP. We calculated the “Euclidean distance” with multiple dimensions, where each transcript represents one dimension. The expected REP of each transcript is 1/68. The observed REP was calculated by dividing a transcript TPM (or read counts) by the sum of all observed TPMs (or read counts) of the 68 SIRV transcripts. Below is the formula:

Total_expression = Isoform1_TPM + Isoform2_TPM + … + Isoform68_TPM
Euclidean_distance = sqrt((Isoform1_TPM/Total_expression - 1/68)^2 + (Isoform2_TPM/Total_expression - 1/68)^2 + … + (Isoform68_TPM/Total_expression - 1/68)^2)

HT (3e): The section “Isoform Identification in hESCs by PacBio, ONT and Hybrid-Seq” is difficult to read. This may stem from the terms “full length rates” and “full-length isoform identification rates”. It is not fully clear if they mean the same or different things; what exactly is meant? Is it “fraction of discovered annotated isoforms that are seen at least once in a full length read” or “fraction of reads that are judged as full-length” or something else? Please clarify. We are sorry for the unclear description. The terms “full length rates” and “full-length isoform identification rates” mean the same thing: “fraction of discovered annotated isoforms that are seen at least once in a full length read”. We changed “full length rates” to “full-length isoform identification rates” for consistency. 
Please find the detailed definition of “full-length isoform identification rates” in the Methods section (“Isoform identification in hESCs by PacBio, ONT and Hybrid-Seq”). HT (3f): Page 12, the third paragraph, regarding the discovery of isoforms with >=30 exons. The correct finding of isoforms with lots of exons of course depends on error-rate (which is linked to getting all splice sites correctly) and having long enough reads. In the absence of a size selection experiment for the Minion, one cannot prove that the observed difference between PacBio and Minion would not be rendered smaller (probably not totally removed though – because of the higher Minion error rate), with a size selection experiment for the Minion. I would mention that. We agree that it is important to mention the size-selection difference in the two sequencing experiments, since it could affect these numbers. We have adjusted the manuscript text accordingly to report the observations, and not to draw conclusions about the technologies' relative capabilities. HT (3g): Regarding the quantification of alternative splicing events … there are many publications that suggested exon skipping is the most frequent type of alternative splicing in humans. There are reports that have described a high occurrence of intron retention – but to my knowledge not that intron retention is more frequent than exon skipping. For example Braunschweig et al, Genome Research 20142 using short reads and our own paper Tilgner et al, Nature Biotech, 20151 using long reads. The authors could also use a minimum frequency of each single alternative event (e.g. 10% as in the papers referenced above) to distinguish splicing errors and few intermediate RNA molecules from “real” isoforms. This may change the relative abundance of each type of splicing event. Thanks for your suggestions. Based on this suggestion, we calculated the minimum frequency of each single alternative splicing event and took 10% as the cut-off. 
The results showed that exon skipping is the most frequent AS event, as the reviewer expected (see the updated Figure 6). We have also updated our analyses in the Results section “Complexity of the hESC transcriptome”. HT (3h): In the last paragraph on page 12, the word “alterative” is used. I assume this should be “alternative”. If this is a spelling mistake, there may be more. Thank you for pointing out this typo. We have fixed this in the manuscript."
}
]
}
] | 1
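The REP-based Euclidean distance defined in the author response above (each of the 68 SIRV transcripts is one dimension, with a uniform expected REP of 1/68 for the E0 mix) can be sketched in a few lines of Python. The helper name `rep_euclidean_distance` is hypothetical, not code from the paper:

```python
import math

def rep_euclidean_distance(tpms):
    """Euclidean distance between observed relative expression
    percentages (REP) and the uniform expectation 1/n, as in the
    authors' formula (n = 68 for the SIRV E0 mix)."""
    total = sum(tpms)          # Total_expression in the authors' notation
    n = len(tpms)
    expected = 1.0 / n         # each isoform expected at 1/68 for E0
    return math.sqrt(sum((t / total - expected) ** 2 for t in tpms))

# A perfectly uniform abundance estimate yields distance 0;
# any skew away from uniformity increases the distance.
uniform_err = rep_euclidean_distance([1.0] * 68)
skewed_err = rep_euclidean_distance([5.0] + [1.0] * 67)
```

This makes the reviewer's point (HT 3d) concrete: with 68 dimensions the measure is a genuine multidimensional Euclidean distance, not a per-isoform absolute difference.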
|
https://f1000research.com/articles/6-100
|
https://f1000research.com/articles/6-940/v1
|
19 Jun 17
|
{
"type": "Research Note",
"title": "Massive open online courses in health sciences from Latin American institutions: A need for improvement?",
"authors": [
"Carlos Culquichicón",
"Luis M. Helguero-Santin",
"L. Max Labán-Seminario",
"Jaime A. Cardona-Ospina",
"Omar A. Aboshady",
"Ricardo Correa",
"Luis M. Helguero-Santin",
"L. Max Labán-Seminario",
"Jaime A. Cardona-Ospina",
"Omar A. Aboshady",
"Ricardo Correa"
],
"abstract": "Background: Massive open online courses (MOOCs) have undergone exponential growth over the past few years, offering free and worldwide access to high-quality education. We identified the characteristics of MOOCs in the health sciences offered by Latin American institutions (LAIs). Methods: We screened the eight leading MOOCs platforms to gather their list of offerings. The MOOCs were classified by region and subject. Then, we obtained the following information: Scopus H-index for each institution and course instructor, QS World University Ranking® 2015/16 of LAI, and official language of the course. Results: Our search identified 4170 MOOCs worldwide. From them, 205 MOOCs were offered by LAIs, and six MOOCs were health sciences related. Most of these courses (n = 115) were offered through Coursera. One health science MOOC was taught by three instructors, of which only one was registered in Scopus (H-index = 0). The remaining five health science MOOCs had solely one instructor (H-index = 4 [0–17]). The Latin American country with the highest participation was Brazil (n = 11). Conclusion: The contribution of LAI to MOOCs in the health sciences is low.",
"keywords": [
"Health education",
"Education distance",
"Continuing education",
"Latin America"
],
"content": "Introduction\n\nThe 21st century technological and educational revolution has increased access to massive open online courses (MOOCs). They are internationally available online educational courses that are delivered using Web 2.0. MOOCs incorporate video conferencing supports and allow individuals worldwide to access high quality content provided by top-ranking universities1. A large number of users have participated in more than 3859 MOOCs through the most popular platforms, such as Coursera®, edX® and Udacity®2.\n\nFurthermore, MOOCs have generated interest because of their innovative educational techniques3. The international recognition of the quality of education, the flexible schedules and the absence of geographical barriers motivates students to access MOOCs4,5. In fact, they represent one strategy to reduce costs and enable continuous medical education, especially to rural physicians of developing countries6,7.\n\nDespite their proven pedagogical quality and their impact, participation in MOOCs is lower from Latin American countries compared to USA or Europe because of difficulties accessing the technology, language barriers and low offering from Latin American institutions (LAIs)3,8. This study aimed to identify the characteristics of MOOCs offered by LAIs in the health science field.\n\n\nMethods\n\nA search of MOOCs was performed using the virtual institutions catalog of eight platforms; Coursera®, edX®, FutureLearn®, Canvas.net®, MiriadaX®, iversity®, Open Education by Blackboard®, and NovoEd® from June 24 to June 30, 2016. These are the largest platforms and host more than 75% of MOOCs available worldwide9. A search was conducted to identify MOOCs (cMOOCs and xMOOCs) that had current free access. Among these, we identified MOOCs that were offered by a LAI, and then identified which are related to health sciences. 
Each MOOC was screened for the location of the educational institution (Latin America, non-Latin America), H-index of institution and instructor (provided by Scopus), QS World University Ranking® (QS) 2015/16 of the educational institution, official language of the course and subject of the course (health sciences, non-health sciences). Categorical variables were summarized using frequencies and percentages, and numeric variables were summarized using median and range.\n\n\nResults\n\nThe search identified 4170 MOOCs offered by educational institutions worldwide. LAIs offered 205 MOOCs (4.91%). Table 1 summarizes the results of each platform.\n\nOnly six (2.93%) of these courses were in health sciences; one by Coursera®, two by Miriada X®, and three by edX®. One of the health science MOOCs was taught by three instructors, only one of whom was registered on Scopus and had an H-index of zero. The other five health science MOOCs offered by a LAI had only one instructor, and the median H-index was four (range, 0–17).\n\nAccording to the number of institutions per country, Brazil contributed 24.44% (n = 11), Colombia 22.22% (n = 10), Argentina 13.33% (n = 6), Mexico 13.33% (n = 6), and Peru 8.89% (n = 4), to the platforms studied.\n\nThe top-five LAIs with the most MOOCs in the platforms were Monterrey Institute of Technology and Higher Education, Mexico 17.83% (n = 33, H-index = 71, QS = 238°), National Autonomous University of Mexico, Mexico 16.21% (n = 30, H-index = 68, QS = 160°), Universidad de los Andes, Colombia 9.18% (n = 17, H-index = 92, QS = 283°), Ministry of Health Mexico, Mexico 6.48% (n = 12, H-index and QS not available), and Technological Institute of Aeronautics, Brazil 5.94% (n = 11, H-index = 56, QS = not available).\n\n\nDiscussion\n\nThe number of MOOCs offered by LAIs was low compared with other regions. 
They represented almost 5% of the MOOCs offered by educational institutions worldwide, in contrast with US institutions, which offer most of these courses across several platforms1.\n\nBrazil and Mexico offer the most available MOOCs from Latin America. This could be due to the higher demand for MOOCs in these countries, especially in Brazil, which is related to a broad multidisciplinary research culture that can foster a high user demand among undergraduates10.\n\nAdditionally, there was a low number of health sciences MOOCs offered by LAIs. Mexico offered the largest number of MOOCs in health sciences, which may be attributed to the cutting-edge educational strategies and individuals with high academic degrees available there11. It is worth mentioning that some organizations, like the World Medical Association and the Internet Medical Society, are establishing agreements with some LAIs to develop high quality MOOCs for the benefit of the medical community that works in rural areas.\n\nEven in developed countries, educational institutions that offer MOOCs want to achieve academic and scientific excellence. The MOOCs currently offered by LAIs are provided by instructors who have low H-indices, which may indirectly influence the quality of the MOOCs12. This may be due to a low level of training of the faculties and deans of health sciences schools in LAIs, and a lack of incentives for undertaking teaching and research activities in these institutions13.\n\nThis study has some limitations, such as the lack of data concerning instructors in some platforms, and the incomplete coverage of all available platforms. However, since the covered platforms host more than 75% of MOOCs worldwide, to the best of our knowledge this is the study with the greatest coverage in the scientific community5. 
Despite its limitations, the H-index is the only indirect quality measure available for gauging the expertise of the instructors14.\n\n\nConclusion\n\nThe contribution of LAIs to health science MOOCs is low. LAIs should invest in, develop, and promote this type of educational strategy, which offers huge potential for continuing medical education in this century, and promote access to these technologies, particularly in rural and remote areas.\n\n\nData availability\n\nDataset 1: Massive open online courses in health sciences from Latin American institutions in 2016. doi: 10.5256/f1000research.11626.d16489115",
"appendix": "Author contributions\n\n\n\nStudy design: CC; Data collection: CC, LMHS, LMLS; Data analysis: All authors; Writing: All authors. Supervision of the project: JACO, OAA, RC. All authors read the final version submitted.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nLuis M. Helguero-Santin and L. Max Labán-Seminario received grant funding from Universidad Nacional de Piura (1283-2017-OPPTO-OCP-UNP) for the presentation of the abstract in TASME Spring Conference 2017, UK, and for the editorial charges.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nThe authors express their thankful to Tomas Gálvez-Olortegui for his comprehensive review of the manuscript.\n\n\nReferences\n\nSubhi Y, Andresen K, Rolskov Bojsen S, et al.: Massive open online courses are relevant for postgraduate medical training. Dan Med J. 2014; 61(10): A4923. PubMed Abstract\n\nBrahimi T, Sarirete A: Learning outside the classroom through MOOCs. Comput Human Behav. 2015; 51(Part B): 604–9. Publisher Full Text\n\nLoeckx J: Blurring Boundaries in Education: Context and Impact of MOOCs. The International Review of Research in Open and Distributed Learning. 2016; 17(3). Publisher Full Text\n\nAboshady OA, Radwan AE, Eltaweel AR, et al.: Perception and use of massive open online courses among medical students in a developing country: multicentre cross-sectional study. BMJ Open. 2015; 5(1): e006804. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLiyanagunawardena TR, Williams SA: Massive open online courses on health and medicine: review. J Med Internet Res. 2014; 16(8): e191. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKettunen J: Implementation of strategies in continuing education. International Journal of Educational Management. 2005; 19(3): 207–17. 
Publisher Full Text\n\nMayta-Tristán P, Poterico JA, Galán-Rodas E, et al.: [Mandatory requirement of social health service in Peru: discriminatory and unconstitutional]. Rev Peru Med Exp Salud Publica. 2014; 31(4): 781–7. PubMed Abstract\n\nAleman de la Garza L, Sancho Vinuesa T, Gomez Zermeño MG: Indicators of pedagogical quality for the design of a Massive Open Online Course for teacher training. RUSC Universities and Knowledge Society Journal. 2015; 12(1): 104. Publisher Full Text\n\nClass Central. [accessed: 2016-06-26]. Reference Source\n\nCulquichicón-Sánchez C, Ramos-Cedano E: [Scientific initiation scholarships: a comprehensive development model for Latin American research]. Rev Med Chil. 2016; 144(5): 683–4. PubMed Abstract | Publisher Full Text\n\nEmanuel EJ: Online education: MOOCs taken by educated few. Nature. 2013; 503(7476): 342. PubMed Abstract | Publisher Full Text\n\nPereyra-Elías R, Huaccho-Rojas JJ, Taype-Rondan Á, et al.: [Publishing and its associated factors in teachers of scientific research in schools of medicine in Peru]. Rev Peru Med Exp Salud Publica. 2014; 31(3): 424–30. PubMed Abstract\n\nRodríguez-Morales AJ, Culquichicón-Sánchez C, Gil-Restrepo AF: Baja producción científica de decanos en facultades de medicina y salud de Colombia: ¿una realidad común en Latinoamérica? Salud Publica Mex. 2016; 58(4): 402–3. PubMed Abstract | Publisher Full Text\n\nDunnick NR: The H Index in Perspective. Acad Radiol. 2017; 24(2): 117–8. PubMed Abstract | Publisher Full Text\n\nCulquichicón C, Helguero-Santin LM, Labán-Seminario LM, et al.: Dataset 1: Massive open online courses in health sciences from Latin American institutions: A need for improvement? F1000Research. 2017. Data Source"
}
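The descriptive statistics in the Results above (frequencies, percentages, median and range) can be reproduced with a short Python sketch. The counts are those reported in the article; the individual H-index values are hypothetical, since only the median (4) and range (0–17) are published:

```python
from statistics import median

def pct(part, whole):
    """Percentage share, as used for the frequency summaries."""
    return 100.0 * part / whole

# Counts reported in the Results section
total_moocs = 4170   # MOOCs found across the eight platforms
lai_moocs = 205      # offered by Latin American institutions (LAIs)
health_moocs = 6     # of those, related to health sciences

lai_share = pct(lai_moocs, total_moocs)      # ~4.9% of all MOOCs
health_share = pct(health_moocs, lai_moocs)  # ~2.9% of LAI MOOCs

# Hypothetical H-indices for the five single-instructor health MOOCs:
# only median 4 and range 0-17 are reported, so these are illustrative.
h_indices = [0, 2, 4, 9, 17]
h_summary = (median(h_indices), min(h_indices), max(h_indices))
```

Running this confirms the reported shares of roughly 4.91% and 2.93% and a median (range) H-index of 4 (0–17).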
|
[
{
"id": "23578",
"date": "29 Jun 2017",
"name": "Sri Harsha Tella",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis is an unique and excellent study that is focused on open online courses in Latin American medical education.\nMinor comments: It would be great if authors can comment on:\nWhy Latin America is having low rate of medical online courses and potential answers for these problems? - For example: Many universities in USA have free provision of internet access to medical students while in campus and students are made aware of the available courses in many possible ways. Do they have this kind of feature in Latin American countries? If not- it may be one of the contributing factors. This opens doors to many questions- like the fee charged in USA medical schools vs the fee charged by Latin American medical schools -- one of the few potential factors that may be contributing to the difference.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Partly\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate? Not applicable\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
},
{
"id": "23579",
"date": "30 Jun 2017",
"name": "Ritu Madan",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nIt is a good research idea and a thoroughly done research but that being said, I am not sure if this is relevant to the scope of this website. It will be interesting to see how the quality of MOOC in Latin America compare with the rest of the world though.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate? Yes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-940
|
https://f1000research.com/articles/6-668/v1
|
12 May 17
|
{
"type": "Research Article",
"title": "Effect of PVP on the characteristic of modified membranes made from waste PET bottles for humic acid removal",
"authors": [
"Nasrul Arahman",
"Afrillia Fahrina",
"Sastika Amalia",
"Rahmat Sunarya",
"Sri Mulyati",
"Afrillia Fahrina",
"Sastika Amalia",
"Rahmat Sunarya",
"Sri Mulyati"
],
"abstract": "Background: The aim of the present study was to evaluate the possibility of using recycled polymer (waste polyethylene terephthalate [PET] bottles) as a membrane material. Furthermore, the effect of the addition of a pore-forming agent and preparation conditions was also observed. Methods: Porous polymeric membranes were prepared via thermally induced phase separation by dissolving recycled PET in phenol. PET polymer was obtained from waste plastic bottles as a new source of polymeric material. For original PET membrane, the casting solution was prepared by dissolving of 20wt% PET in phenol solution. For PET modified membrane, a 5 wt% of polyvinylpyrrolidone (PVP) was added into polymer solution. The solution was cast onto a glass plate at room temperature followed by evaporation before the solidification process. The membranes formed were characterized in terms of morphology, chemical group, and filtration performance. A humic acid solution was used to identify the permeability and the solute rejection of the membranes. Results: The results showed that the recycled PET from waste plastic bottles was applicable to use as a membrane material for a water treatment process. The highest rejection of humic acid in a water sample, which reached up to 75.92%, was obtained using the PET/PVP membrane. Conclusions: The recycled PET from waste bottles was successfully used to prepare porous membrane. The membrane was modified by the addition of PVP as a membrane modifying agent. SEM analysis confirmed that the original PET membrane has a rough and large pore structure. The addition of PVP improved the pore density with a narrow pore structure. The PET/PVP membrane conditioned with evaporation was the best in humic acid rejection.",
"keywords": [
"polyethylene terephthalate (PET)",
"plastic bottles",
"membrane",
"humic acid"
],
"content": "Introduction\n\nClean water is one of the most vital and essential elements for sustaining human life. This is the reason why the lack of drinking water has become a serious issue for the entire world1,2. Membrane technology has been applied widely in water and wastewater treatment processes. In water purification, organic material contaminants, such as humic acid and suspended solids, are effectively removed by microfiltration or ultrafiltration membranes3. The advantages of separation using membrane technology are that it is free of chemicals or additives, uses little temperature or at least less energy compared with conventional treatments (i.e. coagulation followed by sand filter), and that it is scalable and hybrid-separated4.\n\nThe effectiveness of the ultrafiltration process using a membrane depends on the material and preparation process. Membranes are prepared from organic substances, such as polymers, or inorganic and composite materials. The polymeric materials generally used in membrane fabrication are cellulose acetate (CA), polyethersulfone, polyvinylidene fluoride, and polyethylene terephthalate (PET). PET is commonly used for membrane ultrafiltration in the separation process. Khayet et al. used PET membrane grafting with polystyrene for methanol/toluene separation through pervaporation5, while Behary et al. conducted bio-separation from surfactant using PET membrane modified with chitosan6 . Li et al. used cellulose acetate with PET as an additive for the forward osmosis process7. The effect of PET as an additive is that it increases the mechanical properties of the membrane7.\n\nPET is a polymeric material that is generally derived from commercial polymer, which can increase the production costs. However, plastic bottles as drink packaging are composed of PET); therefore, waste plastic bottles can potentially be used as a membrane material8. 
This new source of polymeric material helps to reduce the waste of plastic bottles and constitutes a green alternative that limits the consumption of virgin polymers. Additionally, the use of PET bottles as a polymeric material will reduce the cost of manufacturing membranes.\n\nThe synthesis of PET membranes from plastic bottles was investigated previously by Rajesh and Murthy9. Their results showed that PET membranes without modification with polyethylene glycol have poor mechanical properties. Another study, by Zander et al., investigated using PET from waste bottles to fabricate fiber membranes via the electrospinning technique. The obtained membrane was used for water filtration to separate latex beads. The study found that about 99% of the beads could be removed from a water sample10. In the water treatment process, the pore size of the membrane plays an important role in the rejection of water contaminants. Membranes with a small pore size but high pore density are recommended to obtain stable permeation with high selectivity for water contaminants.\n\nIn this study, the PET membrane from plastic bottles was modified by the addition of PVP to enhance its pore size and pore density. The addition of PVP as a pore-forming agent and evaporation of the casting film may affect the quality of the resulting membrane. This ultrafiltration PET membrane was then investigated for humic acid removal, and its characterization was performed using the water permeability test, scanning electron microscopy (SEM), Fourier transform infrared spectroscopy (FTIR), and ultraviolet-visible (UV-Vis) analysis.\n\n\nMethods\n\nPET was derived from waste plastic bottles. Phenol was used as a solvent (KGaA Merck, Germany). PVP (40,000 Da) was purchased from Sigma-Aldrich Co. Ltd (USA). Humic acid (HA) powder was also obtained from Sigma-Aldrich. The HA solution was prepared by dissolving HA powder in 1 L of distilled water.\n\nThe membrane was prepared via thermally induced phase separation (TIPS). 
Firstly, phenol, as the solvent, was heated at 50°C until the liquid phase was reached. Fragments of PET bottles were dissolved in molten phenol at 100°C and stirred using a magnetic stirrer for 6 hours until homogeneous. In order to improve the performance of the membrane, 5 wt% of PVP was added to the solution. Four types of membranes were prepared: the composition and the condition of each dope solution are summarized in Table 1. A minimum of four membranes were made of each type, and three membranes of each type were chosen for the filtration experiments (below).\n\nAfter obtaining a homogeneous solution, the dope temperature was maintained at 100°C without any stirring to remove air bubbles. The homogeneous solutions were cast uniformly onto a glass plate using a Baker applicator (YBA-3, Yoshimitsu, Japan) at room temperature. The thickness of the membrane was set at 700 µm. The casting film was left in the air for 7 minutes to evaporate the solvent, and the glass plate was then dipped into a coagulation bath containing water-propanol 1:12 as a non-solvent. The other casting film, for the no-evaporation treatment, was directly immersed into the non-solvent. The membrane sheets formed were washed and stored in distilled water for 1 day to remove any residual solvent.\n\nThe morphologies of the membrane surface and cross-section were analyzed using a scanning electron microscope (model JSM 6360LA; JEOL Ltd., Japan). The dried sheets of membrane were gold sputtered to provide electrical conductivity. Photomicrographs of the PET membranes were taken under vacuum at 5 kV, at 10,000× magnification for the surface and 700× for the cross-section.\n\nFunctional groups of the membrane were analyzed using a Shimadzu FTIR-8400 spectrometer (Japan). 
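The four dope-solution conditions described in the Methods (Table 1 itself is not reproduced in this extract) can be collected into a small design matrix. This is an illustrative sketch only; the 5 wt% PVP loading and 7-minute evaporation time are the values stated in the text, and the `describe` helper is hypothetical:

```python
# Design matrix for the four PET membranes, as described in the Methods
# and in the figure captions (Table 1 is not reproduced here).
membranes = {
    "PET-1": {"pvp_wt_pct": 0, "evaporation_min": 0},
    "PET-2": {"pvp_wt_pct": 0, "evaporation_min": 7},
    "PET-3": {"pvp_wt_pct": 5, "evaporation_min": 0},
    "PET-4": {"pvp_wt_pct": 5, "evaporation_min": 7},
}

def describe(name):
    """Illustrative helper: render one condition as a caption-style label."""
    m = membranes[name]
    pvp = "with PVP" if m["pvp_wt_pct"] else "no PVP"
    evap = f'{m["evaporation_min"]} min evaporation' if m["evaporation_min"] else "no evaporation"
    return f"{name}: {pvp}, {evap}"

print(describe("PET-4"))  # PET-4: with PVP, 7 min evaporation
```

Laying the conditions out this way makes the 2×2 structure of the experiment (PVP yes/no × evaporation yes/no) explicit.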
A wavenumber range of 4000–400 cm−1 was used, and the chemical groups of the membranes were identified by their peaks using IR solution 1.50 software (Shimadzu).\n\nThe ultrafiltration test was conducted using dead-end filtration pressurized with nitrogen gas. The filtration area of the membranes was 15.2 cm2. The operating condition was set at 1 bar transmembrane pressure and room temperature. Filtration was carried out for 30 minutes, and the permeate was collected three times, every 10 minutes. Three repeats for each membrane type were performed. The filtration experiment was evaluated in terms of flux and rejection of the HA solution. Flux is the total volume of permeate passing through the membrane in a determined filtration period, calculated by Equation (1). Rejection is the amount of HA particles rejected by the membrane, calculated by Equation (2).\n\nJ = V / (A × t) (1)\n\nIn which: J = Flux (L/m2.hr)\n\nV = Volume of permeate (L)\n\nA = Membrane surface area (m2)\n\nt = Filtration period (hr)\n\nThe model solution for the ultrafiltration test in this study was HA at 10 mg/L concentration. The solution was prepared by dissolving HA powder in 1 L distilled water. The rejection value of the membrane in HA filtration was calculated as follows11,12:\n\nR = (1 − Cp/Cf) × 100% (2)\n\nIn which: R = rejection (%)\n\nCf = HA concentration in feed\n\nCp = HA concentration in permeate\n\nThe HA concentrations in the feed and permeate solutions were measured using a UV-Vis spectrometer (model UV-1700; Shimadzu) at 490 nm wavelength. The HA filtration scheme using a PET membrane can be seen in Figure 1.\n\n\nResults and discussion\n\nThe PET membrane was prepared via TIPS. The constructed membranes were categorized as asymmetric ultrafiltration membranes. The top surface and cross-sectional images of the membranes are shown in Figure 2 and Figure 3, respectively. Figure 2 shows the changes in membrane morphology with the addition of PVP and evaporation of the casting film. 
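The flux and rejection definitions of Equations (1) and (2) can be checked with a short computation. The 15.2 cm2 filtration area and 10 mg/L feed are from the Methods; the permeate volume and permeate concentration below are hypothetical illustration values, not measured data:

```python
def flux(volume_l, area_m2, time_hr):
    """Permeate flux J = V / (A * t), in L/(m^2*hr) -- Equation (1)."""
    return volume_l / (area_m2 * time_hr)

def rejection(c_feed, c_permeate):
    """Solute rejection R = (1 - Cp/Cf) * 100% -- Equation (2)."""
    return (1.0 - c_permeate / c_feed) * 100.0

area = 15.2e-4                 # filtration area: 15.2 cm^2 expressed in m^2
J = flux(0.05, area, 0.5)      # hypothetical: 50 mL permeate collected in 30 min
R = rejection(10.0, 2.4)       # 10 mg/L feed; hypothetical 2.4 mg/L permeate
print(round(J, 1), round(R, 1))  # 65.8 76.0
```

Note that the area must be converted from cm2 to m2 before applying Equation (1), since flux is reported per square metre.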
PET-2 and PET-4 were evaporated for 7 minutes before being immersed in a coagulation bath. The membranes formed showed a decrease in membrane porosity. A high evaporation temperature (100°C) for 7 minutes increased the exchange rate of solvent from the surface of the casting film. This led to a higher polymer concentration near the top surface, and, therefore, the exchange of solvent and non-solvent in the coagulation bath became slower. This is called delayed demixing, which causes the membrane to have a less porous structure13.\n\nPET-1, no added PVP or evaporation; PET-2, no added PVP and 7 minutes of evaporation; PET-3, added PVP and no evaporation; PET-4, added PVP and 7 minutes of evaporation.\n\nPET-1, no added PVP or evaporation; PET-2, no added PVP and 7 minutes of evaporation; PET-3, added PVP and no evaporation; PET-4, added PVP and 7 minutes of evaporation.\n\nThe cross-sectional image in Figure 3 shows the asymmetric structure of the sub-layer membrane. The evaporated membranes had a thicker, dense top layer due to the delayed demixing. In the sub-layer of the modified membrane, the porous structure changed with the addition of PVP. The presence of PVP in the casting solution caused the formation of pores and a sponge structure in the sub-layer9. In the membrane modification process, enhancing the pore density with uniform pore size is essential. A sponge structure, like the one formed in the modified membrane, affects the filtration quality and mechanical properties of the membrane2.\n\nThe FTIR spectra were analyzed to determine the changes in the chemical groups on the membrane surface3. Regarding polymer composition in this research, FTIR analysis was carried out for the PET-1 and PET-3 membranes only. The polymer composition of the PET-2 and PET-4 membranes was similar to that of PET-1 and PET-3, respectively, so their IR spectra are equivalent to those of PET-1 and PET-3. Figure 4 shows the FTIR spectrum of the PET-1 membrane. 
A band at 3630–3300 cm-1 indicated the presence of the alcohol functional group (OH). In the ranges of 3200–3000 cm-1 and 1700 cm-1, OH and C=O bands were derived from the carboxylic acid functional group. An aromatic C=C band was located at 1650–1600 cm-1. At 1400 cm-1 and 860 cm-1, C-H bands of alkanes and aromatics were observed. A very weak peak at 1250 cm-1 indicated a C-O band of the phenol functional group; phenol is composed of an aromatic ring and OH groups14. The identification of the membrane chemical groups is presented in Table 2. The chemical structure of PET, composed of several chemical groups, is shown in Figure 5. According to the data shown in Table 2, the membrane was composed of PET material.\n\n*Bands of PET polymer cited from reference15.\n\nThe comparison of the FTIR spectra of PET-1 and the modified membrane (PET-3) can be seen in Figure 6. Generally, the FTIR spectra of PET-1 and the modified PET/PVP membrane were similar, because both membranes were made with PET as the basic material. However, lower peaks at 1820 cm-1 and 1180 cm-1 in the modified membrane (PET-3) indicated a carbonyl functional group and a C-N band. The existence of these bands showed that the membrane also contained PVP. The molecular structure of PVP is shown in Figure 7.\n\nPET-1, no added PVP or evaporation; PET-3, added PVP and no evaporation.\n\nWater filtration is related to membrane characteristics, such as hydrophilicity and pore size. Furthermore, the addition of a membrane-modifying agent and the evaporation process also affect water permeability13. In this study, a feed solution of humic acid (HA) was tested at 10 mg/L. The concentrations of the HA solution in the feed and permeate were measured using a UV-Vis spectrometer. The comparison of the original and modified PET membranes in HA flux and rejection is given in Figure 8. 
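Rearranging Equation (2), the permeate concentration implied by a measured rejection is Cp = Cf × (1 − R/100). A minimal sketch, using the 10 mg/L feed from the Methods and the rejection values reported for the four membranes (the 75.92% figure for PET-4 is taken from the abstract):

```python
# Back-calculating permeate concentrations from the reported rejections
# and the 10 mg/L humic acid feed (rearranged Equation (2)).
feed_mg_l = 10.0
rejection_pct = {"PET-1": 25.02, "PET-2": 50.07, "PET-3": 67.3, "PET-4": 75.92}

permeate = {name: feed_mg_l * (1 - r / 100.0) for name, r in rejection_pct.items()}
# e.g. PET-4: 10 * (1 - 0.7592) = 2.408 mg/L of humic acid in the permeate
best = max(rejection_pct, key=rejection_pct.get)
print(best, round(permeate[best], 2))  # PET-4 2.41
```

This makes concrete what the rejection percentages mean in practice: the PET-4 permeate retains roughly a quarter of the feed's humic acid, while the unmodified PET-1 passes about three quarters of it.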
According to Figure 8, the PET/PVP modified membrane with evaporation (PET-4) had the highest HA rejection, up to 76%, followed by PET-3, PET-2, and PET-1 at 67.3%, 50.07%, and 25.02%, respectively.\n\nPET-1, no added PVP or evaporation; PET-2, no added PVP and 7 minutes of evaporation; PET-3, added PVP and no evaporation; PET-4, added PVP and 7 minutes of evaporation. Three repeats of each membrane type were performed.\n\nThe differences in HA rejection in Figure 8 show the influence of PVP as an additive material and of the evaporation time on the performance of the membrane. Figure 8 also shows the flux of the HA filtration. The membrane with smaller and more uniform pores (PET-4) was better at HA rejection, but produced less permeate (the addition of PVP to the membrane solution increased the total polymer concentration and led the membrane to have smaller pores). Additionally, PVP improved the hydrophilic nature of the membrane surface. This prevented the hydrophobic HA molecules from approaching the membrane surface3. Therefore, the PET/PVP-modified membrane followed by evaporation (PET-4) was the best at HA rejection compared to the original PET membrane without evaporation (PET-1) or the original PET membrane with the evaporation process (PET-2).\n\n\nConclusions\n\nMembranes with a porous structure were successfully fabricated using recycled PET from waste plastic bottles. The characteristics and performance of these membranes were affected by the membrane preparation conditions. In this study, PET membranes were modified by the addition of an additive (PVP) and conditioned using evaporation during solidification. Based on the results, it can be concluded that the presence of PVP in the polymer system affects the pore structure and flux of the PET membrane. The original PET membrane (PET-1; no PVP or evaporation) had a rough pore structure, which resulted in low solute rejection. 
The addition of PVP improved pore density with a narrow pore structure, and using a high evaporation temperature resulted in a membrane surface with smaller pores. Consequently, the PET/PVP membrane conditioned with evaporation (PET-4) was the most efficient at humic acid rejection. In general, the membranes were suitable for use in a water treatment process. Modifying agents for PET membranes should be further developed to enhance the performance of PET membranes, especially for the ultrafiltration process.\n\n\nData availability\n\nDataset 1: Raw data for IR spectra (400-4000 (1/cm)) of PET-1. doi: 10.5256/f1000research.11501.d16067517\n\nDataset 2: Raw data for comparison of IR spectra (500-2254 (1/cm)) of PET-1 and PET-3. doi: 10.5256/f1000research.11501.d16067718\n\nDataset 3: Raw data for flux and rejection of humic acid. doi: 10.5256/f1000research.11501.d16067919",
"appendix": "Author contributions\n\n\n\nNA designed the research proposal, provided experimental material and apparatus, and approved the final draft of manuscript. AA, SA and RS were responsible for conducting the experiment and preparing draft paper. SM contributed to the research report as supervisor.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nArahman N, Mulyati S, Rahmah M, et al.: The removal of fluoride from water based on applied current and membrane types in electrodialyis. J Fluor Chem. 2016; 191: 97–102. Publisher Full Text\n\nArahman N, Mulyati S, Lubis MR, et al.: Modification of polyethersulfone hollow fiber membrane with different polymeric additives. Membr Water Treat. 2016; 7(4): 355–65. Publisher Full Text\n\nMehrparvar A, Rahimpour A, Jahanshahi M: Modified ultrafiltration membranes for humic acid removal. J Taiwan Inst Chem Eng. 2014; 45(1): 275–82. Publisher Full Text\n\nUlbricht M: Advanced functional polymer membranes. Polymer. 2006; 47(7): 2217–62. Publisher Full Text\n\nKhayet M, Nasef MM, Mengual JI: Application of poly(ethylene terephthalate)-graft-polystyrene membranes in pervaporation. Desalination. 2006; 193(1–3): 109–18. Publisher Full Text\n\nBehary N, Perwuelz A, Campagne C, et al.: Adsorption of surfactin produced from Bacillus subtilis using nonwoven PET (polyethylene terephthalate) fibrous membranes functionalized with chitosan. Colloids Surf B Biointerfaces. 2012; 90(1): 137–43. PubMed Abstract | Publisher Full Text\n\nLi G, Wang J, Hou D, et al.: Fabrication and performance of PET mesh enhanced cellulose acetate membranes for forward osmosis. J Environ Sci (China). 2016; 45: 7–17. PubMed Abstract | Publisher Full Text\n\nDing J, Kong Y, Yang J: Preparation of Polyimide/Polyethylene Terephthalate Composite Membrane for Li-Ion Battery by Phase Inversion. J Electrochem Soc. 
2012; 159(8): A1198–202. Publisher Full Text\n\nRajesh S, Murthy ZV: Ultrafiltration Membranes from Waste Polyethylene Terephthalate and Additives: Synthesis and Characterization. 2014; 37(4): 653–7. Publisher Full Text\n\nZander NE, Gillan M, Sweetser D: Recycled PET nanofibers for water filtration applications. Materials. 2016; 9(4): 247. Publisher Full Text\n\nSyawaliah AN, Mukramah MS: Effects of PEG Molecular Weights on PVDF Membrane for Humic Acid-fed Ultrafiltration Process. J Phys Conf Ser. 2017; 180(1): 1–7. Publisher Full Text\n\nMukramah S, Mulyati S, Arahman N: Influence of Brij58 on the Characteristic and Performance of PES Membrane for Water Treatment Process. J Phys Conf Ser. 2017; 180(1): 1–7. Publisher Full Text\n\nKoseoglu-Imer DY: The determination of performances of polysulfone (PS) ultrafiltration membranes fabricated at different evaporation temperatures for the pretreatment of textile wastewater. Desalination. 2013; 316: 110–9. Publisher Full Text\n\nPoljanšek I, Krajnc M: Characterization of phenol-formaldehyde prepolymer resins by in line FT-IR spectroscopy. Acta Chim Slov. 2005; 52(3): 238–44. Reference Source\n\nCoates J: Interpretation of Infrared Spectra, A Practical Approach. Encycl Anal Chem. 2000; 10815–37. Publisher Full Text\n\nPrasad SG, De A, De U, et al.: Structural and Optical Investigations of Radiation Damage in Transparent PET Polymer Films. Int J Spectrosc. 2011; 2011: 7, 810936. Publisher Full Text\n\nArahman N, Fahrina A, Amalia S, et al.: Dataset 1 in: Effect of PVP on the characteristic of modified membranes made from waste PET bottles for humic acid removal. F1000Research. 2017. Data Source\n\nArahman N, Fahrina A, Amalia S, et al.: Dataset 2 in: Effect of PVP on the characteristic of modified membranes made from waste PET bottles for humic acid removal. F1000Research. 2017. 
Data Source\n\nArahman N, Fahrina A, Amalia S, et al.: Dataset 3 in: Effect of PVP on the characteristic of modified membranes made from waste PET bottles for humic acid removal. F1000Research. 2017. Data Source"
}
|
[
{
"id": "22681",
"date": "22 May 2017",
"name": "Muhammad Roil Bilad",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis study assesses the possibility of using recycled polyethylene terephthalate from used bottle waste as a membrane material. A topic that highly relevant in waste-to-value perspective. The fact that the study was a success make this report worthy of publication. Moreover, the approach of using recycled PET as membrane material is rather new with only few references available. I believe that by addressing comments below, the quality of the manuscript will be improved.\nMajor comment: When discussing the membrane formation mechanism, the authors mentioned the roles of evaporation time. However, by looking into the properties of phenol (melting point of 40.5 °C and boiling point of 181.7 °C), the rate of phenol evaporation is too low. As seen in Figure 4, phenol in the presence of residue in the membrane matrices is obvious. Another aspect to be discussed is the effect of falling temperature on the solution (I assumed that the casting was done at room temperature) as well as inhibition of water from humid air. Those phenomena occur simultaneously and contribute to the formation of membrane structure.\n\nThe authors are expected to address minor comments below and fit their answer into the revised manuscript.\nAdd information about the permeability in the abstract.\n\nPlease revise typo error far description of parameters in Eq. 2: Cp and Cf instead of Cp and Cf.\n\nWhat is the solubility of phenol in a mixture of 1:12 of water:propanol. 
Is there any preliminary study on selecting this ratio? What will be the impact of the non-solvent composition on the produced PET membrane?\n\nFigure 8: A simpler scatter plot would be more informative than the overlapping 3D plots in the current version of the manuscript. Also, information on the testing pressure should be included in the caption.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Partly\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": [
{
"c_id": "2788",
"date": "16 Jun 2017",
"name": "Nasrul Arahman",
"role": "Author Response",
"response": "Comment 1. When discussing the membrane formation mechanism, the authors mentioned the roles of evaporation time. However, by looking into the properties of phenol (melting point of 40.5 °C and boiling point of 181.7 °C), the rate of phenol evaporation is too low. As seen in Figure 4, phenol in the presence of residue in the membrane matrices is obvious. Another aspect to be discussed is the effect of falling temperature on the solution (I assumed that the casting was done at room temperature) as well as inhibition of water from humid air. Those phenomena occur simultaneously and contribute to the formation of membrane structure. Answer: Thank you for your valuable discussion. We agree with your opinion. Evaporation treatment is one the affected parameter to the membrane morphology. Another parameter as you pointed was also affected to the membrane formation. The changing of solution temperature suddenly from hot plate condition to glass plate at room temperature may affect the membrane formation. However, we did not conduct such kind of this investigation in this study. Comment 2. Add information about the permeability in the abstract (revised manuscript). Answer: Flux information added in the abstract Comment 3. Please revise typo error far description of parameters in Eq. 2: Cp and Cf instead of Cp and Cf. Answer: Type error for description of parameter in Equation 2 already revised (revised manuscript). Comment 4. What is the solubility of phenol in a mixture of 1:12 of water : propanol. Is there any preliminary study on selecting this ratio? What will be the impact of non-solvent composition on the produced PET membrane? Answer: The composition of phenol-propanol (1:12) as non-solvent was determined by conducting several solidification process using a different concentration of water-propanol. The mixture was tested at ratio 1:0; 1:1; 1:5; 1:7; 1:9; and 1:12. 
The results show that non-solvent without propanol produced rigid and kinked membranes with many big holes on the surface. The addition of propanol helped to smooth the surface, but too much propanol made the mechanical properties of the membrane too fragile. So, based on the preliminary study, the 1:12 mixture of water-propanol was the best non-solvent composition to produce PET membranes with good mechanical properties and pore performance. The solubility of phenol, based on the material safety data sheet from sciencelab.com: easily soluble in methanol and diethyl ether; soluble in cold water and acetone; solubility in water: 1 g/15 ml; soluble in benzene; very soluble in alcohol, chloroform, glycerol, petroleum, carbon disulfide, volatile and fixed oils, aqueous alkali hydroxides, carbon tetrachloride, acetic acid, and liquid sulfur dioxide; almost insoluble in petroleum ether; miscible in acetone; sparingly soluble in mineral oil. So, phenol has good solubility in a mixture of water-propanol. Comment 5. Figure 8: A simpler scatter plot would be more informative than the overlapping 3D plots in the current version of the manuscript. Also, information on the testing pressure should be included in the caption. Answer: A new figure was prepared to replace Figure 8 (revised manuscript)."
}
]
},
{
"id": "23115",
"date": "05 Jun 2017",
"name": "Zuchra Helwani",
"expertise": [
"Reviewer Expertise Materials characterization"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nIn the discussion about filtration performance (rejection and flux), give more reasons regarding the best performance of PET-4. Make sure that these reasons are related to the characteristic of the membranes. The statistical analysis and its interpretation are only partly appropriate - was the filtration performance conducted on different conditions, like pressure and the temperature of the process?\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nPartly\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": [
{
"c_id": "2787",
"date": "16 Jun 2017",
"name": "Nasrul Arahman",
"role": "Author Response",
"response": "According to the SEM image in Figure 2, PET-4 has densest and smallest pore than other membranes. The pore can retain humid acid molecules which has larger pore than the membrane. So, in filtration test, PET-4 has the highest rejection of humid acid. Different pressures were conducted in preliminary permeation test. In case of PET-4 membrane, a maximum 3 atm pressure is applied for permeation test. On the other hand, PET membrane without addition of PVP (PET-1, and PET-3), the permeation test only capable at 1 atm of applied pressure. The addition of PVP brought about the enhance of membrane strength so that it is appropriate to conduct the filtration process at pressure of 2 or 3 atm. This is the reason the filtration performance test was carried out at constant pressure at 1 bar for all membrane. Addition of PVP into polymer solution also influence to increase the hydrophilicity of the membrane. Good hydrophilicity of membrane surface can prevent HA molecules attach to the membrane surface. So, we concluded that PET-4 is the best membrane in this study."
}
]
}
] | 1
|
https://f1000research.com/articles/6-668
|
https://f1000research.com/articles/6-921/v1
|
15 Jun 17
|
{
"type": "Research Article",
"title": "A bipedal mammalian model for spinal cord injury research: The tammar wallaby",
"authors": [
"Norman R. Saunders",
"Katarzyna M. Dziegielewska",
"Sophie C. Whish",
"Lyn A. Hinds",
"Benjamin J. Wheaton",
"Yifan Huang",
"Steve Henry",
"Mark D. Habgood",
"Katarzyna M. Dziegielewska",
"Sophie C. Whish",
"Lyn A. Hinds",
"Benjamin J. Wheaton",
"Yifan Huang",
"Steve Henry",
"Mark D. Habgood"
],
"abstract": "Background: Most animal studies of spinal cord injury are conducted in quadrupeds, usually rodents. It is unclear to what extent functional results from such studies can be translated to bipedal species such as humans because bipedal and quadrupedal locomotion involve very different patterns of spinal control of muscle coordination. Bipedalism requires upright trunk stability and coordinated postural muscle control; it has been suggested that peripheral sensory input is less important in humans than quadrupeds for recovery of locomotion following spinal injury. Methods: We used an Australian macropod marsupial, the tammar wallaby (Macropus eugenii), because tammars exhibit an upright trunk posture, human-like alternating hindlimb movement when swimming and bipedal over-ground locomotion. Regulation of their muscle movements is more similar to humans than quadrupeds. At different postnatal (P) days (P7–60) tammars received a complete mid-thoracic spinal cord transection. Morphological repair, as well as functional use of hind limbs, was studied up to the time of their pouch exit. Results: Growth of axons across the lesion restored supraspinal innervation in animals injured up to 3 weeks of age but not in animals injured after 6 weeks of age. At initial pouch exit (P180), the young injured at P7-21 were able to hop on their hind limbs similar to age-matched controls and to swim albeit with a different stroke. Those animals injured at P40-45 appeared to be incapable of normal use of hind limbs even while still in the pouch. Conclusions: Data indicate that the characteristic over-ground locomotion of tammars provides a model in which regrowth of supraspinal connections across the site of injury can be studied in a bipedal animal. Forelimb weight-bearing motion and peripheral sensory input appear not to compensate for lack of hindlimb control, as occurs in quadrupeds. 
Tammars may be a more appropriate model for studies of therapeutic interventions relevant to humans.",
"keywords": [
"Tammar wallaby",
"bipedal locomotion",
"spinal cord injury",
"regeneration",
"supraspinal innervation",
"marsupial"
],
"content": "Introduction\n\nConcentrated experimental efforts in adult animals, mainly rodents, have generated substantial information about spinal cord responses to injury and mechanisms behind axonal regenerative failure after trauma. However, translation into clinical outcomes for patients with spinal cord injuries (SCI) has been limited. This is partly because of difficulties in replicating promising results (Steward et al., 2012), but mostly because of the lack of an accessible model to study therapies aimed at improving bipedal locomotion characteristics in humans. Effective translation of various therapies derived from behavioural studies, mainly in quadrupeds (usually rodents) to patients has so far been unsuccessful. Possible reasons for this have been discussed (Côté et al., 2017). Most relevant for this study is the accumulating evidence that sensory feedback is important for driving locomotor output in quadrupeds, but not in humans.\n\nThere is therefore a need to establish an animal model where naturally occurring bipedal locomotion can be studied. The only eutherian bipedal models available are some species of non-human primates (see Alexander, 2004; Babu & Namasivayam, 2008); however most primates are not truly bipedal, are expensive to study and are not readily accessible. Very few studies of SCI in primate species that are truly bipedal have been published (Rangasamy, 2013). Macropodid marsupials are the only other major group of mammals that are substantially bipedal. It has been proposed that spinal cord circuits used in bipedal gait in humans and macropods are similar, but evolutionary pressures have resulted in different adaptations as the most effective forms of locomotion (Baudinette et al., 1992). Macropods employ a hopping bipedal gait and use an alternating pattern of hind limb movements when swimming (Wilson, 1974). Their locomotion has been studied with respect to mechanics and energetics (e.g. 
Cavagna et al., 1988), but the circuitry involved in locomotion and development of the spinal cord has only been studied to a limited extent in macropodid marsupials: tammar wallabies (Comans et al., 1987; Harrison & Porter, 1992), kangaroos (Watson & Freeman, 1977), quokkas (Watson, 1971) and potoroos, all of which hop (Martin et al., 1972). In tammars the forelimb pattern generator, which is important for the first journey to the pouch and is retained for different motor functions in subsequent development, has been described (Ho, 1997).\n\nA further complication in SCI studies in quadrupedal eutherian species is that the experiments are usually performed at stages of cord development where the environment is inhibitory to growth and therefore regeneration fails to occur naturally. The finding that there is a developmental stage in mammals when the immature spinal cord can naturally repair itself and develop near normally after injury provides a system in which regenerative repair can be studied, especially in marsupials where this stage is uniquely after birth (Tyndale-Biscoe & Janssens, 1988). This has been shown extensively in two marsupial opossums, Didelphis virginiana, (Martin & Xu, 1988; Wang et al., 1996; Wang et al., 1998a; Xu & Martin, 1992) and Monodelphis domestica (Fry et al., 2003; Lane et al., 2007; Saunders et al., 1995; Saunders et al., 1998; Wheaton et al., 2011). 
The fact that these results can also be demonstrated in rodents, provided the injuries occur at embryonic stages, shows that regeneration is not merely a marsupial-dependent quality but that the immature mammalian spinal cord possesses substantially greater potential for repair following injury (Migliavacca, 1930; Saunders et al., 1992).\n\nIn this project, we have exploited a unique combination of features of the tammar wallaby: (i) like all marsupials, tammars are born at a very early stage of central nervous system development (equivalent to embryonic stages in the rodent) and are therefore accessible for surgery at developmental stages where regeneration might occur without the need for in utero interventions; (ii) tammars and other macropods use alternating hind limb movements when swimming and bipedal over-ground locomotion.\n\nHere we show that the tammar, similar to the well characterized quadrupedal Monodelphis, is also able to re-grow long supraspinal connections following a complete thoracic transection, providing the injury occurs early in development (less than 3 weeks of age). Once these young grew to pouch exit age they were able to hop in a manner similar to un-operated controls; they showed rhythmic alternating swimming hind-limb movements, but with a pattern differing from controls. In contrast, animals injured after 6 weeks of age did not re-grow axons across the site of transection and tended not to survive past the period when they normally start leaving the pouch, indicating that their locomotion was affected and they lacked full hind-limb control. The importance of these findings is two-fold: (i) the study is an independent replication of the fundamentally important biological observation of recovery from spinal cord injury in immature members of a distantly related species and (ii) it establishes the tammar as a model of SCI in a bipedal species. 
This last is of particular pertinence in the field of spinal trauma research because it offers a new animal model in which interventions to promote repair after SCI can be tested. The use of a bipedal marsupial model allows for the study of behavioural responses, in the presence or absence of spinal cord regeneration, in an animal that maintains upright trunk support for normal posture and locomotion. Thus, these animals cannot so easily compensate for dysfunctional hind limbs by shifting weight support to the forelimbs, as quadrupeds can. Understanding how these animals function following SCI would make a substantial contribution to developing future therapies suitable for human patients with paraplegia.\n\n\nMethods\n\nThe tammar wallaby (Macropus eugenii) pouch young (PY) were sourced from adult females derived from the CSIRO Wallaby Colony maintained at Crace, Canberra, Australia. In this colony, animals are held in open grassy yards and provided with supplemental ewe and lamb food pellets and water ad libitum. All procedures were approved by the CSIRO Wildlife and Large Animal Animal Ethics Committee (approval number 13-05) following National Health and Medical Research Council (NHMRC; Australia) guidelines.\n\nA complete spinal transection results in initial paralysis of the hind limbs and loss of sensation from the hind limbs and lower trunk. However, at the time the operations were performed the young are at a very immature stage of development and are highly dependent on the protective environment of their mother’s pouch, in which all nutritional and thermoregulatory needs are provided for up to 180 days (Tyndale-Biscoe & Janssens, 1988). Because the pouch young is permanently attached to its specific teat at this age, its nutritional state is not disrupted. The young remain in the pouch for the duration of the experiment and so are protected from the external environment. 
Developmental changes in the electroencephalogram and responses to toe pinching have been previously monitored in tammar young. EEG activity was not recorded until after 120 days, and responses to toe pinching were shown to be minimal prior to 127 days of age (Diesch et al., 2010). In the tammar wallaby the eyes open at 140 days of age. In the present study surgery was done at or before 60 days of age and well before eye opening (140 days). Thus distress due to the spinal transection would be minimal or even non-existent; however, all operative procedures were carried out under anaesthesia.\n\nTwo types of experiments were conducted: (i) animals at four different postnatal-day (P) age groups: P7 (P4−15), P20 (P18−23), P40 (P39−45), and P60 were used for shorter time experiments that investigated re-growth of neuronal fibres across the site of transection and (ii) animals transected at P7–15 and P40–60 that were held until early pouch exit (around 200 days) in order to study their locomotor abilities. Details of numbers of animals and their ages are listed below.\n\nAdult female tammars with PY were caught, transferred into hessian sacks, and held in a holding facility. PY were removed from the pouch, and their head length and body weight recorded prior to surgery for age determination (Poole et al., 1991). PYs of both sexes were used (see below). They were anaesthetized by exposure to inhaled Isoflurane (3–4% in oxygen). A cotton ball was soaked in Isoflurane and placed in a small glass jar. The jar was placed over the PY’s snout until there was no evidence of muscle reflexes in response to a peripheral stimulus.\n\nAt each age, the available PY were divided into two groups: an experimental group (n = 4–11) that had their spinal cord transected and a control group (matched numbers), which were not injured. The sex of tammar PYs cannot be determined in the first week of life. 
In the case of older PYs, the sex of the animals was not known to the person allocating the animals to the two groups or to the person conducting the operations.\n\nUnder sterile conditions, the PY was positioned on its stomach over a roll of gauze to elevate the thoracic spine. A small skin incision was made over the dorsal aspect of the lower thoracic spine spanning two vertebrae. The spinal cord of each experimental animal was exposed at the lower thoracic level via gentle muscle dissection between two vertebrae. The spinal cord was cut completely at the approximate level of T10 using a fine ophthalmic blade, as described previously for Monodelphis domestica (Fry et al., 2003; Lane et al., 2007; Wheaton et al., 2011).\n\nThe wound was closed using fine monofilament sutures and sealed with tissue adhesive. The PY was warmed using body heat until it had visibly recovered from anaesthesia and was then placed back into the mother’s pouch. Approximately 30 minutes later, the mother was placed under light anaesthesia using inhaled Isoflurane (3% in oxygen). The pouch was held open to expose the teats and, using small forceps, the active teat was placed into the PY’s mouth. The PY was observed until it was apparent that the young was securely attached. The level of anaesthetic was turned down to 0% in oxygen and the mother placed on her side in a hessian bag and observed until recovery from anaesthesia was apparent. In each age group some PYs (randomly selected by a person not involved in surgery) were removed from the pouch, terminally anaesthetized and fixed for examination of completeness of the lesion both macroscopically and microscopically. In the P7 group, 4/4 cuts were complete. In the P20 age group, 3/4 (two males and two females) were complete, while in the P40 age group all five (two males, three females) were complete. Only one (female) PY at P60 was tested and the transection was complete. Mothers were then released into their respective yards. 
PY from the P7 and P20 age groups were observed 24 hours after the surgery to confirm that they were in good condition and still attached to the teat. They were then checked again 4–5 days later. P40 and older PY were checked after 4–5 days, but were largely left unobserved to avoid handling stress and to increase the chance of their survival.\n\nFour experimental age groups were used for retrograde labeling of axons growing through the transection site. PY at ages of P4–7 (“P7 group”, n=4), P18–21 (“P20 group”, n=4), P40–45 (“P40 group”, n=8) and P60 (n=2) received a complete spinal cord transection in the lower thoracic region (T10), as described above. Age-matched controls were also included. Numbers of animals used in these experiments are listed in Table 1. Retrograde tracers were injected into the spinal cord at different times to label axons extending through the injury site.\n\nDetails of the animals’ sex are included in the legend to Table 2. P, postnatal day.\n\nNote that animal losses were mostly due to difficulties in survival after the second surgery (injection of Fluoro-Ruby), as the animals were older at those stages.\n\nA diagram illustrating the injection protocol is shown in Figure 1A. Briefly, immediately following transection, the upper lumbar region of the spinal cord was exposed and 0.25μL of Oregon Green (Molecular Probes) tracer (25% weight/volume in 2.5% Triton X-100, 0.1M TRIS buffer; pH 7.6) was injected into each side of the cord at the approximate level of L2–L3 using a fine glass pipette and gentle pressure. Animals in the control group were not injured, but received an injection of Oregon Green in the same upper lumbar region of the spinal cord. 
This label was injected at this early time in order to test the completeness of the cut, as any green labeling rostral to the transection would indicate an incomplete transection (Fry et al., 2003).\n\nA is a diagram representing the brain and the cord: the level of transection at thoracic (T) segment 10 is indicated by lines, the site of the first injection of Oregon Green at lumbar segments L2–3 is shown in green, and the second injection of Fluoro-Ruby into L1 is shown in red. A micrograph of the spinal segment indicated by broken lines, taken under the fluorescent microscope, is shown in B.\n\nAfter a period of 1–2 months (time adjusted for the age differences between the groups), a second tracer (Fluoro-Ruby, Molecular Probes) was injected rostral to the first injection, but still caudal to the site of transection, in order to label any fibres that had re-grown through the injury site (Fry et al., 2003). For this procedure, PY were removed from the mother, as described above, and their body weight and head length recorded. Animals were anaesthetised with 3% isoflurane in oxygen and the upper lumbar vertebrae exposed at the approximate level of L1–L2. The retrograde axon labeling probe, Fluoro-Ruby (25% weight/volume in 2.5% Triton X-100, 0.1M TRIS buffer; pH 7.6), was injected into both sides of the cord (0.5μL each injection). The wound was closed and PY were returned to their mother’s pouch, as described above.\n\nAfter the Fluoro-Ruby injection, all young were held for a further 14–21 days. Young were retrieved, final body measurements recorded, and the young were terminally anaesthetized (overdose of isoflurane), perfusion-fixed with 4% paraformaldehyde (PFA), and post-fixed for an extra 24 hours in 4% PFA.\n\nProcessing of tissue samples for fluorescent analysis. 
Following fixation in PFA, brains were removed from the skull and the spinal cords were dissected out of the vertebral column (from about T1 level to L6) and individually embedded in high gel strength 4% Agar (Sigma-Aldrich). Each spinal cord was divided into blocks of about 2–3cm, including the site of injury and both injections, and cut into serial longitudinal sections, 100μm thick, on a vibrating microtome (Leica). The brainstems were serially sectioned into 100μm coronal sections. All sections were mounted on glass slides using fluorescent mounting medium (DAKO) and kept at 4°C covered with foil to restrict light exposure. All sections were viewed with an Olympus BX50 fluorescent light microscope with filters specific for Fluoro-Ruby, Oregon Green or a triple filter. Digitized photographs with embedded scale bars were taken using an Olympus DP70 camera attached to the microscope.\n\nFollowing the initial screen of the tissue, specific criteria were set for animals to be further included in the study:\n\n(a) Spinal injections needed to be successful: Oregon Green injected immediately after injury and Fluoro-Ruby injected 30–60d later both had to be visible in the lumbar spinal cord at the correct spinal level. This applied to both the injured and uninjured control animals (Figure 1B).\n\n(b) Evidence of completeness of the transection: in animals with complete spinal injuries, no Oregon Green cells could be detected rostral to the injury site (both in the rostral spinal cord and the brainstem).\n\n(c) In control animals, both labels were clearly visible in the rostral spinal cord and brainstem nuclei.\n\nAny animal that failed one or more of these criteria was removed from further analysis. 
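The three inclusion criteria amount to a simple screening rule. As an illustration only — the actual screening was done by visual inspection of sections under the fluorescent microscope, and every field name below is hypothetical — the logic can be sketched as:

```python
# Illustrative sketch of the inclusion screen; record fields are hypothetical.
def meets_criteria(animal: dict) -> bool:
    """Apply inclusion criteria (a)-(c) to one animal's screening record."""
    # (a) Both injections must be visible at the correct lumbar level.
    if not (animal["oregon_green_visible_lumbar"]
            and animal["fluoro_ruby_visible_lumbar"]):
        return False
    if animal["transected"]:
        # (b) Complete cut: no Oregon Green label rostral to the injury.
        return not animal["oregon_green_rostral"]
    # (c) Controls: both labels visible in rostral cord and brainstem nuclei.
    return animal["both_labels_rostral"]

animals = [
    {"id": "P7-1", "transected": True, "oregon_green_visible_lumbar": True,
     "fluoro_ruby_visible_lumbar": True, "oregon_green_rostral": False,
     "both_labels_rostral": False},
    {"id": "P4-2", "transected": True, "oregon_green_visible_lumbar": True,
     "fluoro_ruby_visible_lumbar": True, "oregon_green_rostral": True,
     "both_labels_rostral": False},  # green label rostral -> incomplete cut
]

included = [a["id"] for a in animals if meets_criteria(a)]
print(included)  # → ['P7-1']; the second animal fails criterion (b)
```

A transected animal is thus excluded by exactly the observation the text describes: any green labeling rostral to the lesion.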
The final animal numbers used in this part of the study are shown in Table 2.\n\nAge refers to the mean group age (described in Methods) at the time of spinal cord injury (SCI) and injection of the Oregon Green tracer (green); analysis refers to the time of termination of the experiment after injection of the Fluoro-Ruby tracer (red). n = number of animals with successful outcomes out of the total attempted following surgery and both injections in each age group. Please note that, to be counted as a successful outcome, selection criteria were applied as described in Methods. Numbers of all labeled neuronal cell bodies counted in brainstem nuclei are shown for individual animals (0 = no positively labeled cell bodies detected). In the P7, P20 and P40 groups, numbers of males and females were similar; at P60 both animals were males. P, postnatal day.\n\nAnalysis of axonal labeling in the cord and neuronal labeling in the brainstem. Spinal cord: Every longitudinal section through each cord, spanning the entire length from about T1 to L6, was examined under the fluorescent microscope and the distribution of green- and red-labeled fibres and cell bodies was noted. This was done to establish that injections were successful and in the right spinal segment. In the case of transected animals, the lack of green labeling in the rostral segment was used as a definitive criterion that the original cut was complete.\n\nBrainstem: All serial sections from brainstems of transected and control PY were investigated under a fluorescent microscope and all cell bodies containing fluorescent labeling were counted and tallied. A note was taken of how many cells were labeled with green, red, or both fluorophores (yellow).\n\nThree age groups of animals were used for long-term experiments, in which locomotor abilities of PY at pouch exit (P190–200) were analyzed: P7–15 (P7; n=8), P39–45 (P40; n=4) and P60 (n=2). Age-matched controls were also included (n=3). 
Spinal cord transection procedures and animal care were performed as described above. To assess completeness of the spinal transection, cords from randomly selected PY were collected immediately after surgery. Four out of five cords investigated immediately after transections at P7–15 had complete cuts, and five out of five collected immediately after transections at P39–45 were complete. Figure 2 illustrates two such cords collected immediately after transection at P7 and at P40.\n\nA–D are whole-mounts of the cords taken immediately after dissection from fixed tissue, while E shows changes in cross-sectional area at the injury site of P7, P40 and P60 transected cords when measured at P160. A is immediately after surgery on a pouch young at P13; B is a cord from a P12 transected animal that was allowed to grow until P160; C is immediately after surgery on a P40 animal; D is a cord from a P40 transected pouch young taken at P160. The rostral end is to the left. Scale bar is 1mm. E represents the cross-sectional area (in mm²) of H&E-stained sections. A 10mm length of cord enclosing the injury site was cut into consecutive 5μm-thick transverse sections, sections every 500μm were stained with H&E, and the spinal cord area measured (see Methods). Sections were centred on the middle of the injury (0); rostral (negative numbers) is to the left, while caudal (positive numbers) is to the right. The dashed line at the top illustrates the expected P160 cross-sectional area of the cord if the injury were not present.\n\nLong-term animal care and locomotor testing. All animals (transected and age-matched controls) were measured (weight and head length; Poole et al., 1991) and returned to the mother’s pouch; their ability to move their hind limbs was assessed from around P140, with more comprehensive testing at around P150–160. This included taking video recordings of PY efforts to stand up and move their hind limbs while supported, with special emphasis on rhythmic kicking. 
After these tests, the young were returned to the pouch and mothers were placed in individual wire-enclosed pens in the open yards, as at this stage, if the injured young fall out of the pouch, they may not be able to return and are potentially a target for bird predators. Once established in individual pens, the females were monitored every second day for the presence of the young in their pouches. At this stage (days 140–160), we observed the greatest losses of the young: most of the animals injured at P40 were found outside the pouch, had difficulty standing, and were unable to return to the pouch unaided. Therefore, experiments for all animals operated at P40 and P60 were terminated by terminal anaesthesia at around P150–160. Animals operated at earlier ages were left with their mothers until P190, when the young start to periodically leave the pouch (Tyndale-Biscoe & Janssens, 1988) and are able to hop in a mostly coordinated fashion. Further testing was performed on this group of PY, as described below. For all observations of locomotion, including hopping and swimming (see below), the observer was blinded to whether the animals were controls or operated.\n\nOver-ground hopping test. A runway (0.6m wide × 2.5m long) was erected against a solid wall, with the parallel wall (0.6m high) made of clear Perspex. Each test young was placed at one end of the runway and encouraged to hop towards the other end, where the mother was held in a hessian bag. When the young vocalized, so did the mother, and the young would generally hop towards the sound. Markings at 100cm intervals were made on the back wall and video footage was used to determine normality of movements compared to uninjured controls. In addition, the young were also placed in an open field inside the room and allowed to hop of their own accord whilst being video recorded. The animals would typically hop towards and follow the experimenter. 
Each animal was observed for its ability to hop in a straight line, weight-bearing use of fore and hind limbs, and limb coordination.\n\nSwimming test. A clear glass tank (1.8m long × 0.6m wide × 0.8m deep) was constructed and used in this test. Water temperature was kept at approximately 27°C. Young tammars at about P190–P200 were placed at one end of the tank and video recordings of their swimming were taken. Animals were not able to touch the bottom of the pool with their hind limbs. Video footage was examined for the animals’ ability to use their hind limbs (evidence of supraspinal innervation, Wheaton et al., 2011), hind limb extension, fore-hind limb coordination and tail movements. However, due to losses of young transected at P40 and older, we were only able to record swimming of animals transected at P7–15 together with age-matched controls.\n\nMorphology of the injury site. At the end of the locomotor testing, animals were terminally anaesthetised by an overdose of inhaled isoflurane and transcardially perfusion-fixed with PFA, and spinal cords and brains were collected, as described above. A segment of cord (10mm) containing the injury site was dissected out and post-fixed in Bouin’s fixative for detailed morphological analysis in paraffin sections.\n\nTissue fixed in Bouin’s fixative for 24 hours was rinsed in distilled water and washed in 70% ethanol until clear, followed by dehydration in graded alcohols up to 100% ethanol. Tissue was placed in chloroform for 24 hours, then infiltrated with paraffin wax at 60°C under vacuum before embedding in warm paraffin wax in the desired orientation.\n\nParaffin-embedded spinal cord tissue was serially sectioned (5μm) in the transverse plane on a microtome (Leica). 
Ribbons of ten consecutive sections were mounted on each gelatin-coated glass slide and left to dry over several days in a warm oven (36°C).\n\nIn order to perform histological and immunohistochemical analysis, every tenth slide (about 500μm apart) was stained for each method in order to obtain an overview of each cord. General histology sections were stained using Mayer’s Haematoxylin and Eosin Y (H&E), myelin was detected using Luxol Fast Blue (LFB), and axonal neurofilaments and glial cells were detected using specific antibodies and the PAP method (see below).\n\nH&E: Selected slides were de-waxed and re-hydrated through graded ethanol solutions and washed in tap water. They were immersed in haematoxylin (0.2%; Sigma) for 20 min, washed in tap water and stained with eosin (0.1%; Sigma) for 2 minutes, followed by another wash in tap water. Finally, sections were dehydrated through increasing concentrations of ethanol and histolene (Fronine) and coverslipped with Ultramount Mounting Medium (Fronine).\n\nLFB: Selected slides were de-waxed, but not rehydrated. Instead, slides were brought to 100% ethanol and immersed in LFB solution (0.1% w/v; Sigma; 10% acetic acid dissolved in 95% ethanol) overnight at 60°C. Tissue was rinsed in 95% ethanol and differentiated in a 0.05% solution of lithium carbonate in water followed by 70% ethanol until white and grey matter could be easily distinguished. Sections were then permanently mounted as described above.\n\nImmunohistochemistry: Following de-waxing and rehydration (see above), selected sections were washed in phosphate-buffered saline (PBS; 0.2M, pH 7.4) with 0.2% Tween20 (Sigma) and incubated with Peroxidase- and Protein-Blocking Solutions (both from DAKO) for 2 hours each. Sections were incubated with primary antibodies: rabbit anti-glial fibrillary acidic protein (DAKO, Z0334; 1:200 dilution in PBS) or mouse anti-SMI 312 (pan-axonal neurofilament marker; Sternberger, SMI312; diluted 1:200 in PBS) overnight at 4°C. 
The next day, slides were washed in PBS/Tween20 and incubated with corresponding secondary antibodies (swine anti-rabbit, Z0196 or rabbit anti-mouse, E0464, both from DAKO) at 1:200 dilution in PBS for 2 hours at room temperature. After several washes in PBS/Tween20, slides were incubated in the appropriate peroxidase-antiperoxidase (PAP) complexes: rabbit PAP (P1291, 1:200, Sigma) or mouse PAP (B650, 1:200, DAKO), again for 2 hours at room temperature. Following washes, slides were developed by a reaction with DAB (DAKO). Once the reaction had developed (about 5 minutes), slides were dehydrated and mounted as above.\n\nHistological analysis: One section from every stained slide was photographed under 10× magnification (Olympus BX50 microscope with DP70 digital camera). The area of stained cord was quantified using ImagePro Plus software (version 4.5.1.22). The perimeter of each section was outlined and the automated measurement function applied. For comparison between cords obtained from animals injured at different ages, the areas of all sections were plotted against their position along the length of the corresponding cord segment relative to the centre of the injury site (Figure 2E).\n\n\nResults\n\nIn the group of animals used for the labeling experiments, the outcomes of each surgery and animal survival varied between age groups. All animals survived the first surgery and were responsive when placed back into the pouch and re-attached to their teat; however, the likelihood of their survival in the time between the first and second surgeries, and then until final collection, appeared to decrease with age at the time of transection. The mothers of those that did not survive were found with empty pouches during checks undertaken before the second surgery was scheduled. Animals from the two older operated groups did not fare as well and their losses were much higher. 
The numbers of animals that survived and were analyzed are recorded in Table 1; not all animals had successful injections or complete cuts. Numbers of animals that conformed to the inclusion criteria listed above (successful injections, completeness of the cut) are listed in Table 2. There was no difference in survival rates between the two sexes.\n\nIn the age groups of animals that were used in the long-term experiments to assess their locomotion after SCI, all animals injured at P7–15 (n=8) survived until their final testing; of the four transected at P39–45 and two transected at P60, all survived until P150, but one from each group died shortly afterwards. However, due to the increased risk of losing these animals, short behavioural testing was conducted at about P150, and all but one of the surviving animals from the two older groups were culled at this time for morphological examination. The one retained young was re-tested at P160, followed by terminal anaesthesia and collection of tissue for morphology.\n\nSpinal cord. Every longitudinal spinal cord section for each embedded cord was examined under the fluorescent microscope and the distribution of green and red labeling was noted. As described above, these sections spanned approximately from the T1 to L6 spinal segments, encompassing both the injury and injection sites. In control cords from all age groups, green and red cell bodies and fibres were visible along the entire length of the cord, including the most rostral segments. However, no double-labeled cell bodies were observed in any of the cords. An illustration of successful labeling following both injections in the cord is presented in Figure 3A.\n\n(A) P23 control (not transected) spinal cord injected at the same level as that in operated animals and examined 30 days later; this illustrates the longitudinal appearance of the cord from an animal that was included in the study of counting labeled neurons in the brainstem. Rostral end to the left. 
(B–D) 50μm thick vibratome cut transverse sections of P23 transected spinal cord injected with Oregon Green caudal to lesion immediately after lesioning and collected 1.5h later. Note lack of Oregon Green rostral to lesion (B), but presence of the fluorophore caudal to the lesion (C and D). (C) is in close proximity and (D) is further caudally. Lack of rostral labeling indicates that the lesion was complete. Scale bar is 500μm.\n\nIn all animals with spinal transections that were included in this study, no green labeling was visible in the segment rostral to the injury (Figure 3B), but was clearly visible caudal to the injury site (Figure 3C and Figure 3D). If green label was detected in the rostral segment of the cord, it was an indication that the injury was not complete and the animal was removed from the study (see Methods). This only occurred in one animal injured at P4 and in one injured at P60. The red fluorophore (2nd injection) could be detected in spinally transected animals from the two younger groups, but not in animals injured after P40. This observation was confirmed by the results obtained from the analysis of labeled neuronal cell bodies in the brainstem regions of control and transected animals (Figure 4). Control animals all had labeled axons detected in the rostral end of their cords and cell bodies labeled in the brainstems (Figure 4). This indicates that following a complete transection in the thoracic region up to 3 weeks of age, spinal axons were able to span the injury site, but this did not occur if the transection was performed in animals older than 6 weeks of age.\n\nGigantocellular reticular nucleus of a P7 transected (A) and age-matched control (B) pouch young. Only red-labeled cells could be detected in transected animals indicating re-growth of supraspinal axons across the site of injury. Lack of green labeled cells indicated that the transection was complete (A). 
In control animals both labels could be detected with many more red-labeled than green (arrow), or double-labeled (arrowhead) neurons visible (B). Images taken under triple filter specific for fluorescein and rhodamine. Scale bar is 200μm.\n\nBrainstem. Agar-embedded brainstems were sectioned in the coronal plane on a vibrating microtome. They were examined under a fluorescent microscope fitted with filters specific to each fluorescent probe. The numbers of neuronal cell bodies positive for Oregon Green (green) and Fluoro-Ruby (red) were counted, as well as those containing both labels (yellow).\n\nIn SCI animals, the presence of Fluoro-Ruby labeling within cell bodies in the brainstem of pouch young indicated the regrowth of axons across the lesion site (Figure 4A). However if Oregon Green–positive cell bodies were also detected, this indicated that the transection was not complete and these animals were not used in further analysis (see above). Only one animal injured at P4 was removed from the study due to incomplete transection. Presence of both labels in brainstem nuclei in control animals indicated successful injections in these animals (Figure 4B).\n\nFive of the eight P40 injured young were lost soon after the second injection and are thus not included in Table 2, which shows final numbers of animals that were analyzed. As can be noted in Table 2, in control animals, the numbers of Oregon Green and Fluoro-Ruby–labeled cell bodies respectively were of similar order of magnitude in all three age groups, indicating that the volume of injected dye was relatively adjusted to the size of the growing spinal cord (Fry et al., 2003). 
In all cases, the numbers of Fluoro-Ruby-positive neurons were an order of magnitude higher than Oregon Green-positive neurons, which is most likely a reflection of older (bigger) cords at the time of the second injection.\n\nThere was a large variation in numbers of labeled neurons even within one age group; this was most likely due to difficulties in injecting small volumes of the fluorescent probe into the cords, especially in SCI animals, where cords were often narrower near the injury site. Nevertheless the data clearly show that following a complete T10 transection in animals up to 3 weeks of age (total n=4), there is substantial morphological repair across the site of injury, as demonstrated by the presence of red-labeled neurons in the brainstems of spinal animals (Figure 4A). In brains of two animals injured at one week of age and two animals injured at around three weeks of age, up to 800 Fluoro-Ruby-labeled neuronal cell bodies were counted. Additionally in the P7 and P20 age group, the numbers of Fluoro-Ruby cell bodies were between 5−20% of those detected in un-injured age-matched controls. In the three animals obtained from the group injured at around 6 weeks of age and one injured at P60, no Fluoro-Ruby-labeled cell bodies were detected in the brainstem (Table 2), indicating that there was no re-growth of axons across the site of transection in these older pouch young.\n\nDouble labeled neuronal cell bodies were also detected in brain stems of all control animals and were occasionally as high as 80 (Figure 4B). There were no double-labeled neurons in any of the brains from SCI animals, thus confirming the completeness of the spinal transections.\n\nThe difference in the age-related morphological repair between the two groups of animals observed short-term (above) was confirmed by immunohistochemical analysis of transverse sections through cords obtained from animals used in the long-term experiments described below. 
Sections through the centre of the initial spinal transection performed at either P7 or P40 (and age-matched uninjured controls) and analysed at P150-180 are illustrated in Figure 5. Measurements of the cross-sectional area of transverse sections along the length of different cords are illustrated in Figure 2E above.\n\nAll sections are from animals at around P160. A and B are immunostained with antibodies to neurofilament. C, D and E are from sections stained with Luxol Fast Blue (LFB) to detect myelin. F is stained with antibodies to GFAP. B is a section from the middle of the injury performed at P13. D, E and F are from the middle of the injury site made at P60. Note the substantial neurofilament-immunopositive tissue present in B, indicating regrowth of axons after transection at P13. Following transection at P60, myelination could not be detected at the lesion centre (D and E). Instead, very strong GFAP staining in F indicates that the site of transection performed at P60 was now filled with glial cells, forming a scar. Scale bar is 500μm in A, B and C; 200μm in D; 100μm in E and F.\n\nThe control cords at P160 (Figure 5A) show the usual well-defined structure of grey and white matter characteristic of the mammalian thoracic cord, with white matter delineated by LFB staining (Figure 5C). Following spinal transection at P7, the cords were able to repair themselves to a degree, showing strong immunoreactivity for neurofilament and indicating that neurites were able to span the site of the injury. However, the overall morphology of the cord was disturbed: grey matter was absent, although some white matter was defined (Figure 5B). In contrast to the cords of injured animals from the younger (P7–P20) permissive group, cords from the older (P40−P60) non-permissive group, when analysed at P150, showed no LFB staining (Figures 5D and E) or neurofilament immunoreactivity at the transection site. 
Instead, loosely organized fibrous connective tissue could be observed, delineated by a very strong immunopositive reaction with antibodies to GFAP (Figure 5F), indicating a likely glial scar, presumably involving mainly A1 astrocytes given the lack of neurite outgrowth (Liddelow et al., 2017).\n\nEstimations of the cross-sectional areas of cords injured at P7, P40 and P60 are illustrated in Figure 2E. Each point on the graph represents measurements from one section. As can be seen in this figure, a significant tissue deficit was detected in all three transected cords. However, the tissue that was present in the P7 injured cord was considerably more extensive than in either of the cords injured in older animals.\n\nThese data and observations confirm that following a complete transection of the spinal cord, tammars up to three weeks of age are capable of significant morphological repair (Fluoro-Ruby-labeled axons in the rostral cord and labeled neurons in the brain stem indicate neurites crossing the site of transection). This process of morphological repair did not occur in animals injured from about six weeks of age.\n\nThe next set of experiments aimed to establish whether morphological repair translated into functional recovery after SCI.\n\nFollowing complete thoracic spinal transection at different ages (see Methods), all surviving animals were tested at two ages: around P150–160 and P190–P200. The aim was to keep the animals until such time that they would begin to leave the pouch (around P200), be able to stand and hop (after P180) and swim when placed in a water tank (also after P180). However, since we had noticed in our previous experiments that most losses of PY operated on after P40 occurred around P140–P160, the first test, consisting only of recording the animals’ ability to use their hind limbs in a manner similar or dissimilar to un-operated age-matched controls, was instead conducted at this age. 
At around P150–160, the control (n=3, two females, one male) and P7–15 transected (n=8, four females, four males) PY could kick their hind-limbs in a coordinated manner while supported, but the young operated at P40 (n=2, one female, one male) and older (P60, n=1, male) were less able to do so. In addition, control and PY injured in the first three weeks of life were able to remain upright and stand using all four limbs for support, while animals operated after P40 were not able to maintain an upright posture (Figure 6). As the P40 operated animals exhibited such reduced mobility, and most in fact were lost after about 150 days of life, these animals were terminally anaesthetized and their brain and spinal cord fixed for morphological examination, together with n=3 PY from the P7–15 group, at this point. One P60 operated PY was allowed to remain in the study, but was rejected by the mother shortly afterwards and was therefore terminated for morphological examination of the cord.\n\nControl (A), P7-operated (B) and P40-operated (C) animals were removed from their mothers’ pouches and videos of their ability to remain upright were recorded. Note that controls (n=3) and P7 group transected animals (n=8) were able to remain upright using the support of all four limbs while the P40 group operated animals (n=3) were unable to do so.\n\nAll surviving PY were retested at P190 for hopping and swimming ability. Still images from video footage of hopping are included in Figure 7 and swimming in Figure 8. There was no visible difference in the ability of animals transected at P7−15 and controls to hop over ground; all were able to stand upright on their hind legs using their tail for support (see Supplementary Video S1 and Supplementary Video S2). However, differences were noted during the swimming test between the two groups. 
Control animals (see Supplementary Video S3) used alternating strong rhythmic hind-limb kicks extending out behind the body, coupled with snake-like sideways movements of the tail, to propel themselves forwards while maintaining a fairly horizontal body orientation. Measurements of the angle between the hind-limb axis at full extension (line drawn between the tip of the hind-leg and the ventral base of the tail) and the axis of the body (line drawn between the tip of the nose and the ventral base of the tail) in control animals were close to a straight line (175−180°; Figure 8). These animals consistently used their forelimbs in coordination with their hind-limbs. In contrast, animals transected at P7–15 (n=3) also used their hind-limbs in an alternating and coordinated fashion; however, the hind-limb kicks were in a more vertical plane, lacking the normal posterior extension, which resulted in a reduced ability to propel forwards. They also used their tail, but in an upwards/downwards plane rather than horizontally as controls did. They also showed less coordination of forelimbs with hind-limbs (see Supplementary Video S4) and maintained a much more vertical hind-limb angle at full extension compared to controls when swimming (105−140°; Figure 8). We observed some variation in swimming performance between injured animals. All were able to swim with rhythmical hind-limb kicks, but 2 out of 3 had difficulty propelling themselves forward in a straight line, instead moving in circles. Figure 8 (and Supplementary Video S4) shows the best-performing of the spinally injured animals.\n\nAt about P180–200, pouch young periodically start leaving the pouch. Control and P7 group transected (SCI@P7–P13) animals were placed on a hard surface in the Animal House and allowed to hop. Videos of their movements can be found in Supplementary Material S1 and Supplementary Material S2. 
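The hind-limb extension angle used here (the angle at the ventral base of the tail between the body axis and the hind-limb axis) can be reproduced from three digitized landmark points per video frame. A minimal sketch, using hypothetical frame coordinates rather than the study's actual measurements:

```python
import math

def limb_angle(nose, tail_base, toe):
    """Angle (in degrees) at the ventral base of the tail between the body
    axis (nose -> tail base) and the hind-limb axis (tail base -> toe)."""
    # Vectors pointing away from the shared vertex (ventral base of the tail)
    v_body = (nose[0] - tail_base[0], nose[1] - tail_base[1])
    v_limb = (toe[0] - tail_base[0], toe[1] - tail_base[1])
    dot = v_body[0] * v_limb[0] + v_body[1] * v_limb[1]
    norm = math.hypot(*v_body) * math.hypot(*v_limb)
    # Clamp to guard against floating-point values just outside [-1, 1]
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

# Hypothetical pixel coordinates (x toward the tail, y downward):
# a limb extended almost straight behind the body gives an angle near 180°,
# matching the control range (175−180°) reported above
control_like = limb_angle((0, 0), (10, 0), (20, 0.5))
# a limb held more vertically gives a smaller, more acute angle,
# as in the injured range (105−140°)
injured_like = limb_angle((0, 0), (10, 0), (13, 8))
```

Applied to landmarks traced from still frames, values near a straight line correspond to the control posture and more acute values to the injured posture described above.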
There was no obvious difference in the way young from either group moved; all were very active and equally fast. Their use of the tail for support was also similar (n=3).\n\nP190–200 pouch young were placed in a swimming tank (see Methods) and their swimming recorded (see Supplementary Video S3 and Supplementary Video S4). Control animals used alternating hind-limb kicks with full retraction and posterior extension of the hind legs in kicking movements. Transected animals (SCI@P13) were also able to swim, but their tails were less flexible and their hind-limbs extended more vertically relative to the body during the kicking stroke (n=3). To highlight these differences in body position while swimming, still images were captured from the videos where there was full extension of each hind-limb. The angle made between the body axis and the extended hind-limb axis was measured by drawing a line between the tip of the nose and the ventral base of the tail (body axis, red line) and between the ventral base of the tail and the tip of the toes on the extended hind limb (hind-limb axis, blue line). In control animals, this angle was close to a straight line, but in spinally injured animals the angle was more acute.\n\nThese results demonstrate that if a spinal transection is performed during the first 3 weeks of life of the tammars, their spinal cords not only repair themselves morphologically, but this repair also translates into functional recovery (see Discussion).\n\n\nDiscussion\n\nIn this study, we aimed to establish the periods of spinal cord development in the tammar during which, following a complete mid-thoracic transection, supraspinal axons are able to bridge the injury site, and when in development this ability is lost. 
Similar periods in the development of another marsupial, Monodelphis domestica, have been described, and the terms “permissive” and “non-permissive” stages of cord maturation were proposed (Noor et al., 2013), as previously suggested for the chick embryo spinal cord (Keirstead et al., 1992) (see below). Results obtained here confirm that during tammar development there is indeed a permissive period of spinal cord maturation (the first 3 weeks of life) followed by a non-permissive period (from P40 onwards).\n\nMarsupials are unique experimental animal models, especially for developmental studies, for a number of important reasons. All marsupials, including tammars, are born at a very immature stage of early development and most brain development occurs ex utero (Dziegielewska et al., 1989; Riese, 1948; Renfree et al., 1982; Reynolds et al., 1985). While there are variations in details of anatomical development between different marsupials, at birth in most cases their stage of central nervous system (CNS) development corresponds approximately to that of a rat at embryonic day 14 and a human at about six weeks (Comans et al., 1987; Nicholls et al., 1990; Reynolds et al., 1985; Saunders et al., 1989). This means that experiments performed previously in eutherians in utero can be performed ex utero in marsupials, automatically eliminating the many risks and complications involved in performing surgery on pregnant females (Saunders et al., 1989).\n\nPrevious studies have used the marsupial South American opossum (Monodelphis domestica) and North American opossum (Didelphis virginiana) to demonstrate the effects of spinal cord injuries at different stages of development (Fry & Saunders, 2000; Martin & Xu, 1988; Saunders et al., 1989; Saunders et al., 1998; Wang et al., 1996; Wang et al., 1998a; Wheaton et al., 2011; Wheaton et al., 2013; Wheaton et al., 2015; Xu & Martin, 1992). 
When pups were injured early in development, at postnatal day 7 (P7), regrowth of supraspinal neuronal axons across the lesion site was demonstrated (termed the “permissive” stage, as previously suggested for the period when functional repair occurred in the chick embryonic spinal cord following SCI; Keirstead et al., 1992). However, when injured at a later stage of development, at P28, no regeneration of axons was observed (the “non-permissive” stage). Martin and colleagues referred to “the critical period” for regenerative growth in postnatal Didelphis and a subsequent period when the local environment at the site of injury was “non-permissive” (e.g. Terman et al., 1999; Wang et al., 1998a). Several attempts have been made to identify cellular and molecular changes that could explain this shift in the ability of axons to bridge the site of injury (Noor et al., 2011; Saunders et al., 1998; Saunders et al., 2014; Terman et al., 1999; Wang et al., 1998b; Wheaton et al., 2015), and some suggestions that it could be related to the onset of myelination have also been put forward (Bandtlow, 2003; Saunders et al., 2014; Wheaton et al., 2015).\n\nIn the present study we have sought to define these “permissive” and “non-permissive” stages in the tammar. Our results demonstrate that the period of development that is permissive to spinal cord regeneration extends up to P20−25, whereas animals older than P40 at the time of injury show no regeneration. Cord development in the tammar appears to be similar to that of Monodelphis domestica (Nicholls et al., 1990) at similar ages. By P1, the tammar spinal cord shows a deep central canal surrounded by a proliferating neuroepithelium. The dorsal horn contains advanced neurons at cervical and brachial levels; however, fewer developed neurons are present in the lumbosacral cord (Harrison & Porter, 1992). 
By approximately P17, the spinal cord has reached a mature form with a small central canal, fully formed dorsal horns and distinct fasciculi gracilis and cuneatus in the dorsal column (Comans et al., 1987), and at P20 the corticospinal tracts begin to form (Ashwell, 2010). The switch to a non-permissive phase of development appears to occur soon after this stage.\n\nFollowing SCI, patients face many problems in trying to regain some degree of function and independence. Being able to use the legs for voluntary movement is only one part of this process; the ability to walk successfully also requires balance, which is critically dependent on the stability of the trunk. This is a fundamental limitation of quadrupeds as models for studying recovery of function after spinal trauma, and could complicate a treatment’s eventual transfer to humans. Tammars, like humans, require trunk stability and are a truly unique animal model, in which it is possible to answer many questions not answerable in quadrupeds (e.g., the commonly used rodent models).\n\nFor many bipedal animals (including humans), successful locomotion requires that one leg supports the body in stance while the other limb lifts off the ground and swings through to produce forward movement. To achieve this, there must be activity in the descending motor pathways to innervate and coordinate muscle contraction of the limbs, as well as activity of the trunk musculature to trigger compensatory postural movements, preventing the body from losing balance. Quadrupeds are able to use their fore limbs to compensate for balance deficits in the hind quarters, whereas bipeds cannot, at least not while maintaining unsupported bipedal locomotion. Although some rat studies have used a harness for body weight support and obliged spinal rats to walk on a treadmill using only hind limbs (e.g. 
van den Brand et al., 2012), this approach merely provides external support for the body and does not take into account autonomous trunk stability during bipedal gait. In one of the few studies of compensatory trunk control in quadrupeds, Giszter et al. (2007) showed that spinal injured rats were able to modify command of the trunk musculature, unloading the hind-limbs in order to compensate for their aberrant stepping (Giszter et al., 2007). Crucially, though, these modifications served primarily to shift weight support to the forelimbs (Giszter et al., 2008), an option that bipeds do not have.\n\nThe bipedal hopping locomotion of macropods, such as tammars, might appear significantly different from bipedal locomotion in humans; however, the underlying spinal cord circuits are likely to be similar and certainly not exclusive to macropods. There are reports that now-extinct species of macropod used a bipedal striding gait rather than hopping, and tree-kangaroos have been observed to walk bipedally along branches (Janis et al., 2014); larger species of macropod, but not the smaller tammar, use a low-speed pentapedal gait when foraging (Dawson et al., 2015). Hopping is energetically efficient compared with other forms of locomotion and perhaps evolved in Australia in response to the large distances and limited food supplies of an arid environment (Baudinette et al., 1992). Of particular relevance to the present study are the publications of Kiehn (e.g. Kiehn, 2011), who has described in detail the locomotor networks responsible for rhythmic coordinated limb movements in neonatal animals. 
This suggests that the underlying circuitry and the nervous system development involved are similar.\n\nWhen behavioural analysis of Monodelphis with complete spinal transections at either P7 (permissive) or P28 (non-permissive) was performed, both experimental groups maintained the ability to walk over ground using coordinated fore- and hind-limb movements (Wheaton et al., 2011; Wheaton et al., 2013). This was in spite of the fact that animals injured at P28 did not have any supraspinal innervation below the thoracic transection (Wheaton et al., 2011). However, when these animals were subjected to a swim test, Monodelphis injured at P28 were observed to use only their forelimbs to navigate through the water, in contrast to animals injured at P7, which could swim using all four limbs (Wheaton et al., 2011). This was evidence that the over-ground locomotion observed in the P28-injured opossums (those without supraspinal innervation to the hind limbs) was dependent upon reflex input; without it (as in the swim test), the limbs received insufficient signals to move.\n\nThere have been many studies in quadrupedal animals leading to the idea of a central pattern generator in the lumbar spinal cord, a local circuit that has the ability to generate rhythmic movement of the hind limbs when activated by peripheral sensory input (Grillner et al., 2008; Kiehn, 2011; Pearson, 2000; Rossignol & Frigon, 2011). This makes the swim test a good test of locomotion in the absence of this reflex input, and a good measure of supraspinal control (Saunders et al., 1998; Smith et al., 2006; Magnuson et al., 2009; Wheaton et al., 2011; Wheaton et al., 2013).\n\nIn the present study, we have shown that control tammar young are able to swim using the hind limbs, as well as the forelimbs and tail, from about P190. 
Young with SCI performed at P7–20 were equally able to swim using hind limbs and the tail, but in a pattern different from un-operated controls (see Figure 8 and Supplementary Video S3 and Supplementary Video S4). Thus, this swimming test confirms the tracing studies showing retrogradely labeled axons spanning the site of injury in the P7 and P20 groups of experimental animals (Figure 4 and Figure 5), and suggests the re-establishment of functional supraspinal control.\n\nWhile there has been limited research into the rhythm generators in the spinal cord that contribute to bipedal walking (Courtine et al., 2009; Rossignol & Frigon, 2011; van den Brand et al., 2012), there have been attempts to translate these findings into humans with SCI. In these trials, patients were supported in a sling while walking on a treadmill in order to improve their weight-bearing locomotion; however, over time no improvements were observed (Dobkin et al., 2006). This may be a consequence of the loss of core strength in humans following denervation of the core muscles of the trunk. Quadrupeds gain stability through their forelimbs and thus rely less on core strength in the trunk. Therefore, supraspinal input may not play such an important role in quadruped locomotion after injury (Wheaton et al., 2011). Similarly, the Australian brushtail possum possesses propriospinal mechanisms that control the release of hind limb grip when the forepaws are activated during climbing (Ashwell, 2010), a mechanism which does not benefit bipeds. This is why it is important to establish a bipedal animal model of spinal cord injury.\n\nThe fluorescent labeling undertaken here in the tammar showed definitive evidence of regrowth of supraspinal axons through the injury site (Figure 4) in animals injured in the first three weeks of life, but no regrowth after this age. 
Previous work employing labeling methods has identified similar regenerative periods in Monodelphis (regeneration up to P14; Fry et al., 2003; Lane et al., 2007) and Didelphis (regeneration up to P24; Wang et al., 1998a). The main difference between the study of Fry et al. (2003) in Monodelphis and the present one in tammars was that their injection protocol was designed to study true regeneration, as opposed to re-growth of axons after transection that would occur as part of normal development. For that purpose, injection of the first marker was done 3 days before the transection to label axons that were subsequently cut, whereas in the present study the first marker was injected immediately after the transection. This difference in design stems from the fact that Monodelphis are easier experimental animals to maintain as a colony: they breed all year round, have multiple young and are the size of a small rat (Saunders et al., 1989), while tammars breed naturally once a year and usually have one PY (Tyndale-Biscoe & Janssens, 1988). Therefore, establishing the completeness of the transection was deemed more important in the present study, as the objective was to determine the permissive versus non-permissive ages in a new species.\n\n\nLimitations of the study\n\nThis study has two main limitations: (i) the small number of animals that were available and (ii) the problems we encountered in achieving survival of older (P40 and P60) operated animals.\n\nOver many decades, studies on marsupial biology, particularly reproductive and developmental biology, were internationally recognised strengths of Australian science, much of it carried out in government research institutes, such as the former CSIRO Wildlife Research in Canberra. However, with the increasing emphasis on utilitarian research over the past 3−4 decades (Saunders, 1989), the facilities for such fundamental research have declined. 
We were fortunate to obtain access to the remnants of a once large colony of tammars at the CSIRO in Canberra. In addition to the small numbers of animals available from the colony, tammars, like many Australian marsupial species, generally produce only one young per year, and the lengthy period in the pouch makes for long experiments. This is in contrast to the South American opossums, which breed all year round in captivity, have multiple young and reach maturity in a shorter time. An experimental advantage of the tammar over the South American opossum is that tammars look after their young in a pouch, whereas the opossum is pouchless. This means that, in effect, the tammar mother provides a natural incubator for the postoperative young. This was effective for the younger (P7) operated animals, where the main reason for discarding animals from the analysis was evidence that the spinal transections were incomplete. For the older operated animals, as outlined in the Results section, young were lost before the time they would normally begin to exit the pouch. It was unclear whether these losses were due to the mother recognizing that the pouch young were abnormal, or because, having left the pouch, the young were sufficiently disabled as to be unable to return. Whatever the explanation, the result is important because it suggests that bipedal tammars are less able to cope with a complete spinal lesion after the period when regeneration naturally occurs, compared with the quadrupedal Monodelphis, which exhibited weight-bearing quadrupedal locomotion in the absence of any axon growth across the lesion (Wheaton et al., 2011; Wheaton et al., 2013). This supports the conclusion of Côté et al. (2017) that peripheral sensory inputs may be insufficient to drive locomotion in bipedal animals. 
In future studies, it should be possible to use artificial incubators and feeding systems to maintain these older SCI tammars to later stages of development, in order to confirm that their locomotor activity is indeed limited by the loss of sensory input via forelimb stretch, as appears to occur in Monodelphis with complete SCIs made at 4 weeks of age (Wheaton et al., 2011; Wheaton et al., 2013).\n\nThere is also the question of the extent to which bipedal locomotion in the tammar can accurately equate to bipedal locomotion patterns in humans. Important evidence suggesting that the spinal circuits involved in bipedal locomotion in tammars are likely to be similar to those in humans comes from the observation that tammars swim using alternating movements of their hind limbs. The evolutionary reasons why tammars and kangaroos in general have adopted a bipedal hopping gait are discussed above, but their swimming movements suggest similarity of the spinal circuits in tammars and humans. The nature of the spinal rhythm generators in the tammar is not known, but will be an important topic for future investigations.\n\n\nConclusions\n\nThe experiments reported here provide further evidence in another species (the tammar wallaby) for an early period of CNS development that is “permissive” for axon growth and functional recovery, in contrast to a later stage when axon growth does not occur (“non-permissive”). An important difference in this study of a species with bipedal gait, compared to the quadrupedal opossums, is the poor locomotor recovery in the tammar when the spinal cord is injured during the “non-permissive” period. We propose that this difference may be due to sensory feedback from the limbs being much less effective in promoting locomotion in bipedal animals, as suggested by Côté et al. (2017) for humans. 
This may not only help to explain the lack of translation to human patients of apparently effective therapies based on experiments in quadrupedal rats; it also suggests a potential solution, namely to develop tammars or other macropods as animal models for testing potential therapies for human patients.\n\n\nData availability\n\nDataset 1: Archived video and histological data. http://dx.doi.org/10.5256/f1000research.11712.d164069 (Saunders et al., 2017).\n\nVideo files:\n\n- PY2835 Control Hopping.mov, uninjured control tammar, video recorded at age P191;\n\n- PY2988 Control Hopping.mov, uninjured control tammar, video recorded at age P196;\n\n- PY2988 Control swimming.mov, uninjured control tammar, video recorded at age P196;\n\n- PY2846 Control swimming.mov, uninjured control tammar, video recorded at age P199;\n\n- PY2836 SCI@P12 Hopping.mov, spinal transection at P12, video recorded at age P195;\n\n- PY2839 SCI@P7 Hopping.mov, spinal transection at P7, video recorded at age P195;\n\n- PY2836 SCI@P12 swimming.mov, spinal transection at P12, video recorded at age P191.\n\nNote: video recordings were made for animals that could swim when placed into the tank. Animals injured at later ages (P40-P60) were not able to swim.\n\nHistology files:\n\n- Control SC sections.pdf, serial sections of a control tammar spinal cord, H&E stained;\n\n- H&E serial sections.pdf, cross-sectional area measurements of H&E stained serial sections of tammar spinal cords injured at P7, P40 and P60. Cords collected at P195-198. 
Includes H&E stained serial sections of P60 injured spinal cord.\n\n- P7-60 injury centre sections.pdf, H&E stained sections from the lesion centre of tammar spinal cords injured between P7 and P60 and collected at P195-198.\n\nRetrograde tracing and cell counts:\n\nSaunders et al Raw data files.pdf\n\n- Head length and body weight measurements of the tammar young\n\n- Position and number of green and red retrograde labelled cell bodies in the brainstem regions of the tammar cords.\n\n- Images of the injection sites for the retrograde tracers\n\nBrainstem cell counts data.xls\n\n- Head length and body weight measurements of the tammar young\n\n- Position and number of green and red retrograde labelled cell bodies in the brainstem regions of the tammar cords.
"appendix": "Author contributions\n\n\n\nThe study was conceived by KMD. It was planned by KMD, MDH and NRS. Tammar colony management and postoperative care of tammar pouch young was by LAH and SH. Animal operations were done by BJW, SCW and MDH. Photography and videoing of animals was by LAH, SH and MDH. Morphological material was prepared and analysed by KMD, BJW, SCW, MDH and YH The first draft of the manuscript was prepared by KMD, MDH and NRS. All authors contributed to the final manuscript, which they have read and approved.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nAcknowledgements\n\nWe should like to thank the CSIRO Ecosystem Sciences management for access to the tammar colony and laboratory facilities.\n\n\nSupplementary material\n\nSupplementary Videos S1 and S2: Over ground locomotion of control (S1) and operated (S2) tammars. At about P180−200, pouch young periodically start leaving the pouch. Control and P7−13 group transected animals were placed on a hard surface in the Animal House and allowed to hop. Note that there were no obvious differences in the way young from either group moved, all were active, propelled themselves equally fast and were able to sit upright on their hind-legs with front paws by their sides. Their use of the tail for support when standing was also similar. Three animals in each group were analyzed and four independent, blinded observers were asked to identify any differences. None were observed.\n\nClick here to access the data.\n\nSupplementary Videos S3 and S4: Control (S3) and operated (S4) tammars swimming. After completing the over ground locomotion test (Figure 7; Supplementary Video S1 and Supplementary Video S2), pouch young were placed in a swimming tank (27°C) and their swimming recorded. 
Control animals used alternating hind-limb kicks with full retraction and extension of the legs in kicking movements. Transected animals were also able to swim, but their tails were less flexible and their hind-limbs showed less posterior extension in the kicking stroke. In addition, their front paws were less coordinated with their hind-legs than in control young. N=3 per group.\n\nClick here to access the data.\n\n\nReferences\n\nAlexander RM: Bipedal animals, and their differences from humans. J Anat. 2004; 204(5): 321–330. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAshwell K: The Neurobiology of Australian Marsupials. Cambridge University Press; 2010. Reference Source\n\nBabu RS, Namasivayam A: Recovery of bipedal locomotion in bonnet macaques after spinal cord injury: footprint analysis. Synapse. 2008; 62(6): 432–447. PubMed Abstract | Publisher Full Text\n\nBandtlow CE: Regeneration in the central nervous system. Exp Gerontol. 2003; 38(1–2): 79–86. PubMed Abstract | Publisher Full Text\n\nBaudinette RV, Snyder GK, Frappell PB: Energetic cost of locomotion in the tammar wallaby. Am J Physiol. 1992; 262(5 Pt 2): R771–R778. PubMed Abstract\n\nCavagna GA, Franzetti P, Heglund NC, et al.: The determinants of the step frequency in running, trotting and hopping in man and other vertebrates. J Physiol. 1988; 399: 81–92. PubMed Abstract | Publisher Full Text | Free Full Text\n\nComans PE, McLennan IS, Mark RF: Mammalian motoneuron cell death: development of the lateral motor column of a wallaby (Macropus eugenii). J Comp Neurol. 1987; 260(4): 627–634. PubMed Abstract | Publisher Full Text\n\nCourtine G, Gerasimenko Y, van den Brand R, et al.: Transformation of nonfunctional spinal circuits into functional states after the loss of brain input. Nat Neurosci. 2009; 12(10): 1333–1342. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nCôté MP, Murray M, Lemay MA: Rehabilitation Strategies after Spinal Cord Injury: Inquiry into the Mechanisms of Success and Failure. J Neurotrauma. 2017; 34(10): 1841–1857. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDawson RS, Warburton NM, Richards HL, et al.: Walking on five legs: investigating tail use during slow gait in kangaroos and wallabies. Aust J Zool. 2015; 63: 192–200. Publisher Full Text\n\nDiesch TJ, Mellor DJ, Johnson CB, et al.: Developmental changes in the electroencephalogram and responses to a noxious stimulus in anaesthetized tammar wallaby joeys (Macropus eugenii eugenii). Lab Anim. 2010; 44(2): 79–87. PubMed Abstract | Publisher Full Text\n\nDobkin B, Apple D, Barbeau H, et al.: Weight-supported treadmill vs over-ground training for walking after acute incomplete SCI. Neurology. 2006; 66(4): 484–493. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDziegielewska KM, Habgood M, Jones SE, et al.: Proteins in cerebrospinal fluid and plasma of postnatal Monodelphis domestica (grey short-tailed opossum). Comp Biochem Physiol B. 1989; 92(3): 569–576. PubMed Abstract | Publisher Full Text\n\nFry EJ, Saunders NR: Spinal repair in immature animals: a novel approach using the South American opossum Monodelphis domestica. Clin Exp Pharmacol Physiol. 2000; 27(7): 542–547. PubMed Abstract | Publisher Full Text\n\nFry EJ, Stolp HB, Lane MA, et al.: Regeneration of supraspinal axons after complete transection of the thoracic spinal cord in neonatal opossums (Monodelphis domestica). J Comp Neurol. 2003; 466(3): 422–444. PubMed Abstract | Publisher Full Text\n\nGiszter SF, Davies MR, Graziani V: Motor strategies used by rats spinalized at birth to maintain stance in response to imposed perturbations. J Neurophysiol. 2007; 97(4): 2663–2675. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nGiszter SF, Davies MR, Graziani V: Coordination strategies for limb forces during weight-bearing locomotion in normal rats, and in rats spinalized as neonates. Exp Brain Res. 2008; 190(1): 53–69. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGrillner S, Wallén P, Saitoh K, et al.: Neural bases of goal-directed locomotion in vertebrates--an overview. Brain Res Rev. 2008; 57(1): 2–12. PubMed Abstract | Publisher Full Text\n\nHarrison PH, Porter M: Development of the brachial spinal cord in the marsupial Macropus eugenii (tammar wallaby). Brain Res Dev Brain Res. 1992; 70(1): 139–144. PubMed Abstract | Publisher Full Text\n\nHo SM: Rhythmic motor activity and interlimb co-ordination in the developing pouch young of a wallaby (Macropus eugenii). J Physiol. 1997; 501(Pt 3): 623–636. PubMed Abstract | Publisher Full Text | Free Full Text\n\nJanis CM, Buttrill K, Figueirido B: Locomotion in extinct giant kangaroos: were sthenurines hop-less monsters? PLoS One. 2014; 9(10): e109888. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKeirstead HS, Hasan SJ, Muir GD, et al.: Suppression of the onset of myelination extends the permissive period for the functional repair of embryonic spinal cord. Proc Natl Acad Sci U S A. 1992; 89(24): 11664–11668. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKiehn O: Development and functional organization of spinal locomotor circuits. Curr Opin Neurobiol. 2011; 21(1): 100–109. PubMed Abstract | Publisher Full Text\n\nLane MA, Truettner JS, Brunschwig JP, et al.: Age-related differences in the local cellular and molecular responses to injury in developing spinal cord of the opossum, Monodelphis domestica. Eur J Neurosci. 2007; 25(6): 1725–1742. PubMed Abstract | Publisher Full Text\n\nLiddelow SA, Guttenplan KA, Clarke LE, et al.: Neurotoxic reactive astrocytes are induced by activated microglia. Nature. 2017; 541(7638): 481–487. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nMagnuson DS, Smith RR, Brown EH, et al.: Swimming as a model of task-specific locomotor retraining after spinal cord injury in the rat. Neurorehabil Neural Repair. 2009; 23(6): 535–545. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMartin GF, Megirian D, Conner JB: The origin, course and termination of the corticospinal tracts of the Tasmanian potoroo (Potorous apicalis). J Anat. 1972; 111(Pt 2): 263–281. PubMed Abstract | Free Full Text\n\nMartin GF, Xu XM: Evidence for developmental plasticity of the rubrospinal tract. Studies using the North American opossum. Brain Res. 1988; 467(2): 303–308. PubMed Abstract | Publisher Full Text\n\nMigliavacca A: Ricerche sperimentale sulla rigenerazione del midollo spinale nei feti e ne neonati. Archivio dell Instituto biochemico italiano II. 1930; 201–236.\n\nNicholls JG, Stewart RR, Erulkar SD, et al.: Reflexes, fictive respiration and cell division in the brain and spinal cord of the newborn opossum, Monodelphis domestica, isolated and maintained in vitro. J Exp Biol. 1990; 152: 1–15. PubMed Abstract\n\nNoor NM, Møllgård K, Wheaton BJ, et al.: Expression and cellular distribution of ubiquitin in response to injury in the developing spinal cord of Monodelphis domestica. PLoS One. 2013; 8(4): e62120. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNoor NM, Steer DL, Wheaton BJ, et al.: Age-dependent changes in the proteome following complete spinal cord transection in a postnatal South American opossum (Monodelphis domestica). PLoS One. 2011; 6(11): e27465. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPearson KG: Plasticity of neuronal networks in the spinal cord: modifications in response to altered sensory input. Prog Brain Res. 2000; 128: 61–70. 
PubMed Abstract | Publisher Full Text\n\nPoole WE, Simms NG, Wood JT: CSIRO Research Publications Repository - Tables for age determination of the Kangaroo Island wallaby (tammar), Macropus eugenii, from body measurements. Memorandum; 1991; 32. Publisher Full Text\n\nRangasamy SB: Locomotor recovery after spinal cord hemisection/contusion injures in bonnet monkeys: footprint testing--a minireview. Synapse. 2013; 67(7): 427–453. PubMed Abstract | Publisher Full Text\n\nRenfree MB, Holt AB, Green SW, et al.: Ontogeny of the brain in a marsupial (Macropus eugenii) throughout pouch life. I. Brain growth. Brain Behav Evol. 1982; 20(1–2): 57–71. PubMed Abstract\n\nReynolds ML, Cavanagh ME, Dziegielewska KM, et al.: Postnatal development of the telencephalon of the tammar wallaby (Macropus eugenii). An accessible model of neocortical differentiation. Anat Embryol (Berl). 1985; 173(1): 81–94. PubMed Abstract | Publisher Full Text\n\nRiese W: The early postnatal development of the brain of the opossum, Didelphis virginiana. J Mammal. 1948; 29(2): 150–155. PubMed Abstract | Publisher Full Text\n\nRossignol S, Frigon A: Recovery of locomotion after spinal cord injury: some facts and mechanisms. Annu Rev Neurosci. 2011; 34: 413–440. PubMed Abstract | Publisher Full Text\n\nSaunders NR: CSIRO cuts. Nature. 1989; 339: 574. Reference Source\n\nSaunders NR, Adam E, Reader M, et al.: Monodelphis domestica (grey short-tailed opossum): an accessible model for studies of early neocortical development. Anat Embryol (Berl). 1989; 180(3): 227–236. PubMed Abstract | Publisher Full Text\n\nSaunders NR, Balkwill P, Knott G, et al.: Growth of axons through a lesion in the intact CNS of fetal rat maintained in long-term culture. Proc Biol Sci. 1992; 250(1329): 171–180. PubMed Abstract | Publisher Full Text\n\nSaunders NR, Deal A, Knott GW, et al.: Repair and recovery following spinal cord injury in a neonatal marsupial (Monodelphis domestica). Clin Exp Pharmacol Physiol. 1995; 22(8): 518–526. 
PubMed Abstract | Publisher Full Text\n\nSaunders NR, Dziegielewska KM, Whish SC, et al.: Dataset 1 in: A bipedal mammalian model for spinal cord injury research: The tammar wallaby. F1000Research. 2017. Data Source\n\nSaunders NR, Kitchener P, Knott GW, et al.: Development of walking, swimming and neuronal connections after complete spinal cord transection in the neonatal opossum, Monodelphis domestica. J Neurosci. 1998; 18(1): 339–355. PubMed Abstract\n\nSaunders NR, Noor NM, Dziegielewska KM, et al.: Age-dependent transcriptome and proteome following transection of neonatal spinal cord of Monodelphis domestica (South American grey short-tailed opossum). PLoS One. 2014; 9(6): e99080. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSmith RR, Burke DA, Baldini AD, et al.: The Louisville Swim Scale: a novel assessment of hindlimb function following spinal cord injury in adult rats. J Neurotrauma. 2006; 23(11): 1654–1670. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSteward O, Popovich PG, Dietrich WD, et al.: Replication and reproducibility in spinal cord injury research. Exp Neurol. 2012; 233(2): 597–605. PubMed Abstract | Publisher Full Text\n\nTerman JR, Wang XM, Martin GF: Developmental plasticity of ascending spinal axons studies using the North American opossum, Didelphis virginiana. Brain Res Dev Brain Res. 1999; 112(1): 65–77. PubMed Abstract | Publisher Full Text\n\nTyndale-Biscoe CH, Janssens PA: The Developing Marsupial. eds. Tyndale-Biscoe CH & Janssens PA. Springer-Verlag, Berlin Heidelberg; 1988. Reference Source\n\nvan den Brand R, Heutschi J, Barraud Q, et al.: Restoring voluntary control of locomotion after paralyzing spinal cord injury. Science. 2012; 336(6085): 1182–1185. PubMed Abstract | Publisher Full Text\n\nWang XM, Basso DM, Terman JR, et al.: Adult opossums (Didelphis virginiana) demonstrate near normal locomotion after spinal cord transection as neonates. Exp Neurol. 1998a; 151(1): 50–69. 
PubMed Abstract | Publisher Full Text\n\nWang XM, Terman JR, Martin GF: Evidence for growth of supraspinal axons through the lesion after transection of the thoracic spinal cord in the developing opossum Didelphis virginiana. J Comp Neurol. 1996; 371(1): 104–115. PubMed Abstract | Publisher Full Text\n\nWang XM, Terman JR, Martin GF: Regeneration of supraspinal axons after transection of the thoracic spinal cord in the developing opossum, Didelphis virginiana. J Comp Neurol. 1998b; 398(1): 83–97. PubMed Abstract | Publisher Full Text\n\nWatson CR: An experimental study of the corticospinal tract of the kangaroo. J Anat. 1971; 110(Pt 3): 501. PubMed Abstract\n\nWatson CR, Freeman BW: The corticospinal tract in the kangaroo. Brain Behav Evol. 1977; 14(5): 341–351. PubMed Abstract | Publisher Full Text\n\nWheaton BJ, Callaway JK, Ek CJ, et al.: Spontaneous development of full weight-supported stepping after complete spinal cord transection in the neonatal opossum, Monodelphis domestica. PLoS One. 2011; 6(11): e26826. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWheaton BJ, Noor NM, Dziegielewska KM, et al.: Arrested development of the dorsal column following neonatal spinal cord injury in the opossum, Monodelphis domestica. Cell Tissue Res. 2015; 359(3): 699–713. PubMed Abstract | Publisher Full Text\n\nWheaton BJ, Noor NM, Whish SC, et al.: Weight-bearing locomotion in the developing opossum, Monodelphis domestica following spinal transection: remodeling of neuronal circuits caudal to lesion. PLoS One. 2013; 8(8): e71181. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWilson GR: How kangaroos swim. Search. 1974; 5: 598–600.\n\nXu XM, Martin GF: The response of rubrospinal neurons to axotomy at different stages of development in the North American opossum. J Neurotrauma. 1992; 9(2): 93–105. PubMed Abstract | Publisher Full Text"
}
|
[
{
"id": "23512",
"date": "26 Jun 2017",
"name": "Helen Marie Bramlett",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis manuscript describes the use of the tammar wallaby as a model for use in spinal cord injury research. The uniqueness of this animal is the fact that it is a bipedal mammal similar to humans when assessing gait either through walking (hopping) or swimming. Additionally, the tammar wallaby has similar properties in regeneration as the authors’ have previously shown in Monodelphis domestica. Several outcomes measures are used including behavioral assessment as well as fluorescent tracing to demonstrate regeneration. These outcomes clearly show that positive results are only seen when the SCI is done early on in life during a “permissive stage” of regrowth and not during the “non-permissive” stage of the animal.\nSpecific Comments:\nThis is a well-written manuscript and provides compelling evidence for the use of the tammar wallaby as a model for SCI in a bipedal animal.\n\nAlthough limitations are discussed including this one, the survivability of animals in this model for assessment is a big issue for the use of the tammar wallaby. This will have to be addressed and improved in order for the use of this animal as a valid model for SCI regeneration.\n\nHowever, the use of the tammar wallaby in future studies is well-supported based on the data and conclusions in the paper.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? 
Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nNot applicable\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
},
{
"id": "23514",
"date": "27 Jun 2017",
"name": "Zoltan Varga",
"expertise": [
"Reviewer Expertise Zebrafish CNS development & patterning",
"zebrafish husbandry and cryopreservation",
"repair of connections in Monodelphis domestica CNS after injury in vitro"
],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nI really enjoyed reading this article. It explores whether the tammar wallaby is a suitable model organism to study mechanisms of spinal cord regeration after injury at the mid-thoracic region in pouch young animals. Comparable to previous studies by the Saunders group (e.g. with Mondoelphis domestica), this study establishes a critical \"permissive\" period for wallabies after birth, after which growth of neurites across the lesion and behavioral recovery of hopping and swimming can not occur any longer. Importantly - the tammar wallaby might be a more representative bipedal species to humans because it probably relies less on sensory feedback (as opposed to quadrupeds) for the recovery of locomotor functions after injury. Therefore, therapeutic discoveries in this species will be more relevant for human spinal cord injuries.\nThe article is very thoughtfully written, data are well presented and carefully discussed. The study is scientifically sound. I have no doubt that it will find the interest of a wide audience. A few (very minor) comments, suggestions.\nThe perceived shortcomings of the wallaby model at this time (problematic survival of P40/P60 operated animals), could be overcome by appropriate husbandry refinements and changes in post-operative care. 
In my opinion, the wallaby is an excellent test animal, and this article lays important groundwork for future studies.\n\nPY (pouch young) should be defined at first mention in the main text.\n\nIt would be helpful to indicate approximately how far rostral and caudal from the lesion site sections B, C, and D were taken in Figure 3.\n\nConsidering the question of the suitability of the wallaby for bipedal recovery after SCI and the role of sensory feedback in the process: what is known about the anatomy and connectivity of spinal cord sensory connections, and how do wallabies compare, e.g., to dorsal spinal cord sensory tracts in M. domestica or humans? Is the absence of Oregon Green labeling rostral to the lesion (after the recovery period) indicative of the absence of sensory feedback connections (compared to quadrupeds)?\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-921
|
https://f1000research.com/articles/6-920/v1
|
15 Jun 17
|
{
"type": "Clinical Practice Article",
"title": "Case report: Percutaneous electrical neural field stimulation in two cases of sympathetically-mediated pain",
"authors": [
"Lynn Fraser",
"Anna Woodbury",
"Lynn Fraser"
],
"abstract": "Background: Fibromyalgia and complex regional pain syndrome (CRPS) are both chronic pain syndromes with pathophysiologic mechanisms related to autonomic nervous system dysregulation and central sensitization. Both syndromes are considered difficult to treat with conventional pain therapies. Case presentations: Here we describe a female veteran with fibromyalgia and a male veteran with CRPS, both of whom failed multiple pharmacologic, physical and psychological therapies for pain, but responded to percutaneous electrical neural field stimulation (PENFS) targeted at the auricular branches of the cranial nerves. Discussion: While PENFS applied to the body has been previously described for treatment of localized pain, PENFS effects on cranial nerve branches of the ear is not well-known, particularly when used for regional and full-body pain syndromes such as those described here. PENFS of the ear is a minimally-invasive, non-pharmacologic therapy that could lead to improved quality of life and decreased reliance on medication. However, further research is needed to guide clinical application, particularly in complex pain patients.",
"keywords": [
"percutaneous electrical neural stimulation",
"vagal stimulation",
"fibromyalgia",
"complex regional pain syndrome",
"central pain"
],
"content": "Introduction\n\nChronic pain syndromes encompass a poorly defined group of symptoms, including the chief perceptive state of ongoing pain. Patients often have comorbid findings, such as sleep disturbances, fatigue, headache, memory impairment, depression, anxiety, and bowel disturbance. All of these syndromes and associated symptoms can be precipitated by or worsened by stress and stressful life events, resulting in sympathetic nervous system activation1–3. Therefore, modulation of the sympathetic nerve system, for example through vagal nerve stimulation or sympathetic nervous blockade, may help in reducing pain symptoms. Chronic pain syndromes significantly impact function and quality of life for these patients.\n\nFibromyalgia is among these syndromes that pose a challenge to pain specialists. Perhaps the difficulty in treating these patients lies in the lack of insight into the cause of fibromyalgia. Although the exact mechanism remains unknown, a central sympathetically-mediated mechanism is suspected4–6. This central sensitization and pain-induced brain changes can be visualized with neuroimaging7. Various psychological interventions for pain have therefore been employed to counter this sympathetic nervous system hyperactivity and central sensitization8.\n\nFibromyalgia, as a chronic, multi-system illness that can present with a variety of symptoms, ranging from decreased physical activity, sleep disturbances, fatigue, emotional disorders, memory loss, and the hallmark musculoskeletal pain, has generated hypotheses from initial theories of muscle inflammation towards a theory of central nervous system derangement9. In fact, using objective measures of activation of the sympathetic nervous system, Zamanuer et al. concluded that the degree of sympathetic activation was positively correlated with intensity of pain in fibromyalgia patients6. From this data, the investigators postulated that sympatholytic drugs may lessen pain intensity in fibromyalgia. 
It has even been proposed that fibromyalgia and related disease entities be referred to as “Central Sensitivity Syndrome.”10 Genetic, hormonal, psychosocial, and environmental factors may also play a role in this complex pathogenesis.\n\nSimilarly, complex regional pain syndrome (CRPS) is thought to have a mechanism of action involving dysfunction of the sympathetic nervous system, though its etiology is still poorly defined11. It is believed there is a complex interplay between inflammatory mediators and the sympathetic nervous system, including sympathetic neurons releasing norepinephrine, which acts on adrenergic receptors, causing an increase in the amount of pro-inflammatory cytokines produced and ultimately leading to peripheral pain sensitization12. Over time, maladaptive changes take place in the central nervous system, leading to central sensitization. While there is an increase in locally produced pro-inflammatory cytokines, interestingly, there is a lower level of norepinephrine measured in the affected limb13. The sympathetic overdrive seen in CRPS is believed to be due to “sympathetic sprouting” and up-regulation of alpha-adrenergic receptors, leading to increased sensitivity to sympathetic nervous system neurotransmitters13. Sympathetic blockade has been used in the treatment of CRPS, although it is unclear if this has long-term benefits14.\n\nThe impact of fibromyalgia, CRPS, and other chronic pain syndromes is widespread. The difficulty in treating these syndromes imparts a financial burden on the healthcare system, with an unsurprising increase in the utilization of healthcare services. Mean total healthcare costs are estimated to be about three times higher among fibromyalgia patients when compared to age- and sex-matched patients without fibromyalgia15.\n\nWe often find ourselves with limited tools in our armamentarium to effectively treat pain in this patient population. 
A multi-modal approach that includes non-pharmacologic therapy is desirable in addressing these elusive syndromes. Psychotherapy and physical therapy are considered fundamental in the treatment of fibromyalgia and CRPS, but are often insufficient as sole therapies to address the patient's symptomatology. Among the minimally invasive non-pharmacologic therapies employed in chronic pain therapy are acupuncture16, transcutaneous electrical neural stimulation (TENS)17, percutaneous electrical neural stimulation (PENS)18, and more recently percutaneous electrical neural field stimulation (PENFS). Peri-auricular PENFS, which can be thought of as an evolution of auricular acupuncture and PENS to allow for stimulation of the entire ear, is thought to work by targeting the auricular branches of the cranial nerves. The auricular branches of the vagal nerve have previously been targeted for treatment of chronic pain syndromes19–21. Peri-auricular PENFS can plausibly exert a modulating effect on the central nervous system and on sympathetically-mediated pain through its access to auricular branches of the vagal nerve.\n\nHere we present two cases of peri-auricular PENFS use in patients seen in our specialist pain medicine clinic at the Veterans Affairs Medical Center in Atlanta (GA, USA), with chronic pain syndromes refractory to multiple therapeutic interventions. The specific device used for PENFS in these patients was the Military Field Stimulator© (2016; Innovative Health Solutions INC.). Both patients gave written informed consent, in line with Declaration of Helsinki guidelines.\n\n\nCase presentation\n\nA 56-year-old female veteran with a long-standing history of fibromyalgia and bipolar disorder, both diagnosed in 1990 following surgery for ovarian cancer, initially presented to the clinic on September 17, 2014 for interventional therapy for chronic low back pain of >10 years' duration. 
She reported receiving epidural steroid injections and sacroiliac joint injections in the past, from which she had temporary relief of her low back pain. In addition to her chief complaint of low back pain, she reported diffuse pain, in areas including the head, neck, arms, hands, hips, and buttocks. She was under the care of a specialist for fibromyalgia and taking multiple daily prescription medications to control the pain, including meloxicam, amantadine, topiramate, hydrocodone/acetaminophen, duloxetine, cyclobenzaprine, tizanidine, and gabapentin. Her bipolar disorder was treated with lamotrigine. She was taking indomethacin and sumatriptan on an as needed basis for headache. In addition, she was taking herbs and supplements, including maca, garcinia, L-lysine, vitamin E, vitamin D, coral calcium, and niacin. She was evaluated by a rheumatologist with inconclusive results regarding active inflammatory disease; she was previously on prednisone without reported benefit. Her daily function was significantly limited secondary to pain. She was unable to perform household chores and could not do physical activity for more than five minutes. If she over-exerted herself, she required one week without any physical activity. As a result, she reported poor quality of life. Her history was complicated by social issues, including strained relationships with her mother and daughter; she suffered a history of child abuse.\n\nThe patient agreed to try a series of acupuncture treatments. Pain scores were reported using the visual analog scale (VAS). After a series of acupuncture sessions between September 2014 and January 2015 (see Table 1), pain located in her head and neck was completely relieved, but the lower back pain persisted. She subsequently underwent a series of three peri-auricular PENFS applications (Table 1: January 21, 2015; January 28, 2015, February 3, 2015), per manufacturer recommendations (see Discussion), for initial treatment of complex pain. 
Follow-up visits after the first application revealed 100% relief of the diffuse pain that she attributed to fibromyalgia. She stated that she no longer required hydrocodone/acetaminophen. However, she noted her sacroiliac joint pain remained. The second application of the peri-auricular PENFS yielded similar results. After the third and final application, she returned to the clinic stating she had ongoing complete relief of the pain attributed to fibromyalgia. Although she reported persistent sacroiliac joint pain, she noted she was able to go golfing again without issues, which was a significant functional improvement from initial presentation, and something she had not been able to do in almost a decade. The sacroiliac joint pain was successfully treated with radiofrequency ablation of the L5 dorsal ramus and S1 and S2 lateral branches, as she had previously undergone sacroiliac joint injections with significant, but unsustained, relief from outside providers (series of 2 injections 1 month apart, multiple in 2014), and a repeated fluoroscopy-guided injection in our procedure suite on January 9, 2015 yielded similar results.\n\nVAS, visual analog scale.\n\nFollow-up (February 9, 2015; March 20, 2015; July 8, 2015) revealed three months of >80% pain relief following peri-auricular PENFS treatment. The patient reported the return of diffuse pain coinciding with the death of her father, with whom she was close. She reported that her pain symptoms were exacerbated by anxiety and stress. At that time, a pain psychologist was included in her care. Acupuncture sessions and radiofrequency ablation of the bilateral sacroiliac joints were continued. PENFS application was not repeated (though recommendations from the manufacturer were that another one-time application could have alleviated her symptoms again), as the patient felt that self-obtained cannabis oil had relieved the majority of her symptoms related to PTSD and pain. 
She felt that the device was bulky and difficult to wear, with adhesive that tended to entangle her hair, and preferred the combination of auricular acupuncture and cannabis oil for symptom management, despite the decreased analgesic duration. The patient self-discontinued narcotic pain medications and pursued psychotherapy and auricular acupuncture, in combination with her own cannabis oil, for the treatment of her symptoms.\n\nA 52-year-old visually-impaired male veteran, with a history of a left toe non-union fracture, presented for evaluation of worsening left lower extremity pain on December 17, 2014. According to the patient, he had been diagnosed with CRPS >10 years prior. The pain was characterized as left foot and leg pain radiating to the left hip, accompanied by intense burning. These symptoms were previously controlled with a combination of NSAIDs, acetaminophen, and meditation, but he attributed an acute worsening of pain to an inability to meditate and the added burden of becoming a primary caregiver for his mother. He had recently tried gabapentin without benefit and with side effects of sedation and balance difficulties. He had undergone acupuncture in the past for myofascial pain in the upper neck and trapezius with good relief, and presented to the clinic requesting a series of acupuncture treatments for his uncontrolled left lower extremity pain. His medical history also included traumatic brain injury, chronic headaches, blindness, and “micro-seizures”, described as sensations of a shooting electrical and static electrical nature, all following a car accident with head impact.\n\nFollowing his initial acupuncture treatment for CRPS on December 17, 2014, he had immediate pain relief on the day of therapy. At a follow-up visit on January 27, 2015, he endorsed >50% relief of the left lower extremity pain, as well as reduced intensity of the micro-seizures. At that time, the decision was made to begin a series of three placements of a peri-auricular PENFS device. 
Following the first application (January 27, 2015), he reported five days of >50% pain relief, accompanied by a reduction of micro-seizures and an improvement in daily function. There was a noticeable increase in pain intensity when the battery life ended. He presented for a second application of the peri-auricular PENFS (February 3, 2015), after which he reported two days of 50% pain relief. After two days, he also reported the return of micro-seizures, which he noted seemed to correlate in intensity with the left lower extremity pain. The final device application (February 9, 2015) resulted in two days of pain relief, at which time the device became dislodged during visual evoked potential testing. He did state that the needles stayed in place and seemed to give some relief despite not being connected to the stimulator.\n\nAfter the final peri-auricular PENFS placement, the patient expressed a desire to continue with acupuncture sessions, which seemed to provide longer-lasting relief with each subsequent session. He felt that the device did not stay in place as well as the auricular acupuncture needles, and that it only gave him relief while it was in place. He also believed that the battery ran out quicker than it should have, so he did not wish to try it again, particularly given the expense of the device relative to auricular acupuncture. Pregabalin was added to his medications and a TENS unit was used as an adjunct in an effort to achieve satisfactory pain relief, though he self-discontinued the pregabalin due to side effects (balance difficulties), maintaining that ibuprofen and acetaminophen were better tolerated and helped to alleviate his symptoms.\n\n\nDiscussion\n\nThe Military Field Stimulator is an FDA-approved nerve stimulator that is meant to target both acute and chronic pain by creating field stimulation around the auricle, targeting peripheral auricular branches of the cranial nerves, including the vagus, trigeminal, facial, hypoglossal, and occipital nerves. 
Placement is minimally invasive and involves three stimulating electrodes and a grounding electrode. Electrodes are placed percutaneously and secured in place (Figure 1). A battery-operated generator placed behind the ear creates a current that can be varied according to provider input. Stimulation is provided over a five-day period (120 hours) at a mild frequency between one and ten Hertz and an amplitude of three Volts. A two-hour period of stimulation alternates with a two-hour period of rest for the treatment duration. Different treatment regimens exist, but for chronic pain, it is recommended that the patient have a series of three device placements, with each new placement occurring every seven days. The patient then has a rest period of seven days without the device. If pain increases during the rest period, the series of three device placements is repeated every seven days. Maximum analgesic benefit is achieved after six total placements.\n\nElectrode placement is shown in blue. Electrical field stimulation (blue arrows) of auricular branches of the trigeminal (red), vagus (yellow) and greater auricular (green) nerves is also depicted. Electrode placement can be varied to specifically target different points on the ear, though field stimulation provides broader coverage, regardless of placement.\n\nThe two patients described both obtained 50–100% relief of their chronic pain symptoms with utilization of peri-auricular PENFS. However, due to the discomfort associated with the device, both patients opted for alternate therapies (including auricular acupuncture) upon the return of their symptoms. The patient in Case 1 achieved long-term relief (3 months) following her PENFS series, until she experienced a life stressor. Manufacturer recommendations suggest that the device should be replaced for a five-day period should breakthrough pain occur after a period of analgesic effect. 
Had the patient chosen this option, it could have potentially restored her relief, but she had obtained cannabis oil from an unknown source, which she felt also gave her adequate relief of her pain. The patient in Case 2 could not complete the series, as the PENFS device became repeatedly dislodged. It is possible that he could have achieved longer-term relief, had he been able to complete the series in full. PENFS is distinct from manual acupuncture, electroacupuncture and TENS, although physiological effects may be related. In both Case 1 and Case 2, acupuncture had been performed prior to placement of peri-auricular PENFS, though results lasted for <30 days with each acupuncture treatment. It is possible that acupuncture may have primed these two patients for success with the PENFS device through neuro-modulatory mechanisms22,23.\n\nAt the time of this case report, only one study had been published using a PENFS device in patients with persistent, non-malignant, chronic pain24. In this study of 20 chronic pain patients with non-specific pain diagnoses, a significant decrease (average 65% improvement) in the VAS score was found after four treatments with the device. Though many studies have employed PENS at various body parts for localized pain, these studies did not use PENFS (which employs broader field stimulation), did not specifically target the cranial nerve branches in the ear, and did not employ PENFS/PENS for complex, widespread pain syndromes, such as CRPS or fibromyalgia. It is possible that applying PENFS to the ear creates a broader effect that alleviates not just regional, but full-body pain, perhaps through its action on the vagus nerve.\n\nStimulation of afferent nerves, particularly the vagus nerve, could modulate the autonomic nervous system, such that sympathetic or centrally-mediated chronic pain syndromes could benefit from the device6,13,19–21. 
Stimulation of the vagus nerve has been shown to inhibit spinal cord neurons below C3, but excite neurons between C1 and C3, suggesting that these areas may play a role in pain relief. Percutaneous stimulation of the auricular branches of the vagus nerve has been used to treat cervical dystonia25. There is also evidence that stimulating the vagus nerve affects both the thalamus and hypothalamus, areas where pain modulation has been shown to occur26.\n\nThe cases discussed here involve patients in whom a specific FDA-approved PENFS unit, the Military Field Stimulator, was placed in an attempt to control chronic pain secondary to fibromyalgia and CRPS. Both patients had persistent pain despite initial treatment modalities and agreed to a trial of PENFS. Both patients arguably had some element of sympathetically-mediated pain, which may explain why field stimulation of the auricular branches of the cranial nerves, including the vagus (i.e. increasing parasympathetic nervous system stimulation), may have modulated their pain. Of note, the patient in Case 1 did not experience relief of her sacroiliac joint pain using the PENFS device. This may indicate that, while PENFS could be helpful in sympathetically-mediated pain states with central sensitization, it may not be as effective in arthritic pain.\n\nAs noted previously, vagus nerve stimulation has been shown to have effects on the thalamus and hypothalamus26. In addition to the favorable impact on reported pain levels, the impact of vagal nerve stimulation on these brain regions is believed to counteract chemically induced nausea and vomiting due to the connection of the thalamus and hypothalamus to the “vomiting center” of the brain. Transcutaneous vagal nerve stimulation has also been shown to improve gastric motility27. 
While further studies are needed to validate these initial findings, there appear to be promising possibilities for the use of PENFS devices or similar means of vagal nerve stimulation as adjuncts for multiple perceptive states.\n\nThis report describes an abbreviated trial of the PENFS device in two patients with sympathetically-mediated chronic pain syndromes. Further directions include recruiting a larger sample and a prolonged series of treatments (6 placements). However, based on our experiences with the Military Field Stimulator device, patients showed poor tolerance to repeated placement of the device due to its poor wearability, and thus showed a preference for alternative treatments, such as auricular and body acupuncture, despite the short-term duration of these treatments. A more wearable and user-friendly PENFS device is likely needed for increased patient compliance. It may be interesting to conduct a comparative study of PENFS and electroacupuncture or auricular acupuncture. Neuroimaging could also be employed to explore the neural mechanisms of PENFS effects in central pain states. A cost-benefit analysis should also be performed, as the Military Field Stimulator requires multiple applications with an indeterminate duration of relief. However, the costs of pharmacologic therapy and pain-related disability are also high. Well-designed, randomized controlled trials evaluating long-term pain and functional improvements related to PENFS use are needed.\n\nAuricular PENFS is a promising therapeutic modality that requires further investigation. Further delineation of appropriate applications for peri-auricular PENFS is necessary to determine in which pain syndromes it would be most useful. It may be a useful adjunct in pain syndromes that have been otherwise difficult to control, but more studies are needed.\n\n\nConsent\n\nWritten informed consent for publication of their clinical details was obtained from the patients.",
"appendix": "Author contributions\n\n\n\nLynn Marie Fraser reviewed all notes and procedural details regarding the cases and wrote the majority of the clinical case report under the supervision of Dr. Woodbury.\n\nAnna Woodbury saw and examined the patients, documented details in progress notes, and provided guidance to Dr. Fraser. She also personally contributed to much of the discussion section regarding the mechanisms of PENFS action in these patients.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed. The authors have no conflicts of interest and received no monetary funding from any sources to perform this research. Military Field Stimulators used for the two patients in these cases were provided free of charge from the manufacturer.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nAcknowledgements\n\nWe are grateful to the patients and caregivers for their participation.\n\n\nReferences\n\nLerman I, Davis BA, Bertram TM, et al.: Posttraumatic stress disorder influences the nociceptive and intrathecal cytokine response to a painful stimulus in combat veterans. Psychoneuroendocrinology. 2016; 73: 99–108.\n\nHeidari J, Mierswa T, Kleinert J, et al.: Parameters of low back pain chronicity among athletes: Associations with physical and mental stress. Phys Ther Sport. 2016; 21: 31–37.\n\nBurke NN, Finn DP, McGuire BE, et al.: Psychological stress in early life as a predisposing factor for the development of chronic pain: Clinical and preclinical evidence and neurobiological mechanisms. J Neurosci Res. 2016; 95(6): 1257–1270.\n\nKosek E, Altawil R, Kadetoff D, et al.: Evidence of different mediators of central inflammation in dysfunctional and inflammatory pain--interleukin-8 in fibromyalgia and interleukin-1 β in rheumatoid arthritis. J Neuroimmunol. 2015; 280: 49–55. 
Kang JH, Kim JK, Hong SH, et al.: Heart Rate Variability for Quantification of Autonomic Dysfunction in Fibromyalgia. Ann Rehabil Med. 2016; 40(2): 301–309.\n\nZamunér AR, Barbic F, Dipaola F, et al.: Relationship between sympathetic activity and pain intensity in fibromyalgia. Clin Exp Rheumatol. 2015; 33(1 Suppl 88): S53–57.\n\nCagnie B, Coppieters I, Denecker S, et al.: Central sensitization in fibromyalgia? A systematic review on structural and functional brain MRI. Semin Arthritis Rheum. 2014; 44(1): 68–75.\n\nThieme K, Turk DC, Gracely RH, et al.: Differential psychophysiological effects of operant and cognitive behavioural treatments in women with fibromyalgia. Eur J Pain. 2016; 20(9): 1478–89.\n\nSluka KA, Clauw DJ: Neurobiology of fibromyalgia and chronic widespread pain. Neuroscience. 2016; 338: 114–129.\n\nYunus MB: Central sensitivity syndromes: a new paradigm and group nosology for fibromyalgia and overlapping conditions, and the related issue of disease versus illness. Semin Arthritis Rheum. 2008; 37(6): 339–352.\n\nKuttikat A, Noreika V, Shenker N, et al.: Neurocognitive and Neuroplastic Mechanisms of Novel Clinical Signs in CRPS. Front Hum Neurosci. 2016; 10: 16.\n\nLi W, Shi X, Wang L, et al.: Epidermal adrenergic signaling contributes to inflammation and pain sensitization in a rat model of complex regional pain syndrome. Pain. 2013; 154(8): 1224–1236.\n\nSchlereth T, Drummond PD, Birklein F: Inflammation in CRPS: role of the sympathetic supply. Auton Neurosci. 2014; 182: 102–107. 
O'Connell NE, Wand BM, McAuley J, et al.: Interventions for treating pain and disability in adults with complex regional pain syndrome. Cochrane Database Syst Rev. 2013; (4): CD009416.\n\nBerger A, Dukes E, Martin S, et al.: Characteristics and healthcare costs of patients with fibromyalgia syndrome. Int J Clin Pract. 2007; 61(9): 1498–1508.\n\nDeare JC, Zheng Z, Xue CC, et al.: Acupuncture for treating fibromyalgia. Cochrane Database Syst Rev. 2013; (5): CD007070.\n\nGozani SN: Fixed-site high-frequency transcutaneous electrical nerve stimulation for treatment of chronic low back and lower extremity pain. J Pain Res. 2016; 9: 469–479.\n\nRossi M, DeCarolis G, Liberatoscioli G, et al.: A Novel Mini-invasive Approach to the Treatment of Neuropathic Pain: The PENS Study. Pain Physician. 2016; 19(1): E121–128.\n\nNapadow V, Edwards RR, Cahalan CM, et al.: Evoked pain analgesia in chronic pelvic pain patients using respiratory-gated auricular vagal afferent nerve stimulation. Pain Med. 2012; 13(6): 777–789.\n\nYuan H, Silberstein SD: Vagus Nerve and Vagus Nerve Stimulation, a Comprehensive Review: Part I. Headache. 2016; 56(1): 71–78.\n\nYuan H, Silberstein SD: Vagus Nerve and Vagus Nerve Stimulation, a Comprehensive Review: Part III. Headache. 2016; 56(3): 479–490.\n\nWang SM, Kain ZN, White PF: Acupuncture analgesia: II. Clinical considerations. Anesth Analg. 2008; 106(2): 611–621, table of contents.\n\nHui KK, Marina O, Claunch JD, et al.: Acupuncture mobilizes the brain's default mode and its anti-correlated network in healthy subjects. 
Brain Res. 2009; 1287: 84–103.\n\nRoberts A, Brown C: Decrease in VAS Score Following Placement of a Percutaneous Peri-Auricular Peripheral Nerve Field Stimulator. Clinical Medicine and Diagnostics. 2015; 5(2): 17–21.\n\nKampusch S, Kaniusas E, Széles JC: Modulation of Muscle Tone and Sympathovagal Balance in Cervical Dystonia Using Percutaneous Stimulation of the Auricular Vagus Nerve. Artif Organs. 2015; 39(10): E202–212.\n\nChakravarthy K, Chaudhry H, Williams K, et al.: Review of the Uses of Vagal Nerve Stimulation in Chronic Pain Management. Curr Pain Headache Rep. 2015; 19(12): 54.\n\nFrøkjaer JB, Bergmann S, Brock C, et al.: Modulation of vagal tone enhances gastroduodenal motility and reduces somatic pain sensitivity. Neurogastroenterol Motil. 2016; 28(4): 592–598."
}
|
[
{
"id": "24043",
"date": "11 Jul 2017",
"name": "Katie Schenning",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe case series describes the use of a Food and Drug Administration-approved electroacupuncture device, the Military Field Stimulator, which is a type of percutaneous electrical neural field stimulation (PENFS), for the treatment of patients with sympathetically mediated chronic pain syndromes. This is a novel approach to treat chronic pain, by stimulating auricular branches of cranial nerves and likely modulating and normalizing the autonomic systems in the brain. Overall, this is an interesting and well-written description of 2 patient cases.\nIntroduction The introduction describes well the complexity of fibromyalgia and complex regional pain syndrome (CRPS), including the hypothesis that these syndromes are sympathetically mediated entities. However, we believe the introduction would be greatly improved by decreasing some of the details regarding fibromyalgia and CRPS, and instead focusing more on PENFS. For example, paragraphs two and three are repetitive and could be condensed into a single paragraph. On the other hand, PENFS is new technology and unfamiliar to most readers. The authors describe it later, in the Discussion section; however, the article would flow better if PENFS was introduced earlier. For example, Figure 1 could be part of the introduction.\nCase Presentation- Case 1 The first patient has fibromyalgia, as well as sacroiliac joint dysfunction and comorbid mood disorder. 
The patient in case one had a drastic, sustained improvement in her pain, as well as her functional status.\nCase Presentation- Case 2 For the patient with CRPS (case two) there is no mention of abnormal sympathetic tone in the affected extremity: swelling, edema, skin color changes, and temperature changes. This is important, as normalization of such findings (i.e. temperature difference between affected and unaffected leg during PENFS application) would support the notion that PENFS works by decreasing sympathetic tone, possibly in a similar way as a sympathetic block would act.\nDiscussion PENFS shows potential as a novel, non-pharmacological, low side-effect profile, treatment for patients with complex pain syndromes that are refractory to other therapies. The two cases responded well to this therapy, despite not using it as recommended by the manufacturer. Both patients had significant pain relief from acupuncture, before initiation of PENFS, and the authors mention that “… acupuncture might have primed these two patients for success with the PENFS device …”. The authors should also discuss the possibility that these patients responded to PENFS because they had good pain relief with acupuncture (selection bias). The authors should consider explaining the rationale for choosing these patients for PENFS trials. Was it because these patients had previously responded well to acupuncture? It may be that certain patients and possibly certain subtypes of fibromyalgia and CRPS respond to acupuncture, whereas others do not. The authors should also comment more broadly on the use of acupuncture for chronic pain syndromes in the discussion.\n\nMinor points:\nRemove the from “auricular branches of the cranial nerves” in abstract and discussion. “Zamanuer et al.” is misspelled.\n\nIs the background of the cases’ history and progression described in sufficient detail? 
Yes\n\nAre enough details provided of any physical examination and diagnostic tests, treatment given and outcomes? Yes\n\nIs sufficient discussion included of the importance of the findings and their relevance to future understanding of disease processes, diagnosis or treatment? Yes\n\nIs the conclusion balanced and justified on the basis of the findings? Yes",
"responses": []
},
{
"id": "23509",
"date": "06 Sep 2017",
"name": "Albert Leung",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThis is a very interesting case series documenting the efficacy of PENFS in treating two complex chronic pain conditions. While the clinical outcome and potential related mechanisms are well described, it will be interesting for the reader if the authors can discuss the duration of pain relief from the treatment, which can vary from days to months as noted in Table 1.\n\nIs the background of the cases’ history and progression described in sufficient detail? Yes\n\nAre enough details provided of any physical examination and diagnostic tests, treatment given and outcomes? Partly\n\nIs sufficient discussion included of the importance of the findings and their relevance to future understanding of disease processes, diagnosis or treatment? Yes\n\nIs the conclusion balanced and justified on the basis of the findings? Yes",
"responses": []
},
{
"id": "23904",
"date": "04 Oct 2017",
"name": "Dalia Elmofty",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nStimulation of the auricular branch of the vagal nerve for the treatment of chronic pain has been reported in the literature. Percutaneous electrical neural field stimulation has been approved by the FDA and is classified as a minimal-risk device for the treatment of chronic pain. The exact mechanism is unknown, yet it is believed that the external ear contains branches of cranial nerves that project to brainstem nuclei involved in pain processing. Drs. Fraser and Woodbury provide an interesting description of the use of PENFS for two cases of sympathetically-mediated pain.\n\nEditorial Comments Introduction:\nThe authors gave a detailed description of the pathophysiology of fibromyalgia and CRPS. The authors should consider providing a brief description of the mechanism of action of auricular stimulation.\nCase 1:\nThe authors should consider stating that PENFS did not provide relief for pain from sacroiliitis, with no need to further discuss other management (RFA). Do the authors have any data on objective improvement in pain after PENFS therapy for case 1 (fibromyalgia patient)?\nCase 2:\nThe authors should consider providing information regarding sympathetic-mediated symptoms and signs of CRPS for case 2 and whether PENFS treatment reduced any of the sympathetic-mediated symptoms and signs. 
The authors should consider providing a table for treatment modalities performed for case 2 and outcomes (as done for Case 1).\n\nDiscussion:\nThe authors should consider discussing any potential side-effects from PENFS (e.g. vaso-vagal response). The authors should consider providing a more detailed explanation of the mechanism of action. This is a fairly new device. There is evidence that it works on the NTS, RVM, hypothalamus, amygdala and spinal cord.\n\nIs the background of the cases’ history and progression described in sufficient detail? Yes\n\nAre enough details provided of any physical examination and diagnostic tests, treatment given and outcomes? Partly\n\nIs sufficient discussion included of the importance of the findings and their relevance to future understanding of disease processes, diagnosis or treatment? Yes\n\nIs the conclusion balanced and justified on the basis of the findings? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-920
|
https://f1000research.com/articles/6-622/v1
|
03 May 17
|
{
"type": "Research Article",
"title": "A putative antiviral role of plant cytidine deaminases",
"authors": [
"Susana Martín",
"José M. Cuevas",
"Ana Grande-Pérez",
"Santiago F Elena",
"Susana Martín",
"José M. Cuevas",
"Ana Grande-Pérez"
],
"abstract": "Background: A mechanism of innate antiviral immunity operating against viruses infecting mammalian cells has been described during the last decade. Host cytidine deaminases (e.g., APOBEC3 proteins) edit viral genomes, giving rise to hypermutated nonfunctional viruses; consequently, viral fitness is reduced through lethal mutagenesis. By contrast, sub-lethal hypermutagenesis may contribute to virus evolvability by increasing population diversity. To prevent genome editing, some viruses have evolved proteins that mediate APOBEC3 degradation. The model plant Arabidopsis thaliana genome encodes nine cytidine deaminases (AtCDAs), raising the question of whether deamination is an antiviral mechanism in plants as well. Methods: Here we tested the effects of expression of AtCDAs on the pararetrovirus Cauliflower mosaic virus (CaMV). Two different experiments were carried out. First, we transiently overexpressed each one of the nine A. thaliana AtCDA genes in Nicotiana bigelovii plants infected with CaMV, and characterized the resulting mutational spectra, comparing them with those generated under normal conditions. Secondly, we created A. thaliana transgenic plants expressing an artificial microRNA designed to knock out the expression of up to six AtCDA genes. These and control plants were then infected with CaMV. Virus accumulation and mutational spectra were characterized in both types of plants. Results: We have shown that the A. thaliana AtCDA1 gene product exerts a mutagenic activity, significantly increasing the number of G to A mutations in vivo, with a concomitant reduction in the amount of CaMV genomes accumulated. Furthermore, the magnitude of this mutagenic effect on CaMV accumulation is positively correlated with the level of AtCDA1 mRNA expression in the plant. Conclusions: Our results suggest that deamination of viral genomes may also work as an antiviral mechanism in plants.",
"keywords": [
"antiviral innate immunity",
"Cauliflower mosaic virus",
"error catastrophe",
"hypermutagenesis",
"mutational spectrum",
"plant-virus interaction",
"pararetrovirus",
"virus evolution"
],
"content": "Introduction\n\nThe human APOBEC (apolipoprotein B mRNA editing catalytic polypeptide-like) family includes enzymes that catalyze the hydrolytic deamination of cytidine to uridine or deoxycytidine to deoxyuridine. This family is composed of eleven known members: APOBEC1, APOBEC2, APOBEC3 (further classified as A3A to A3H), APOBEC4, and AID (activation induced deaminase). APOBEC proteins are associated with several functions involving editing of DNA or RNA (reviewed by Smith et al1). APOBEC1 mediates deamination of cytidine at position 6666 of apolipoprotein B mRNA, resulting in the introduction of a premature stop codon and the production of the short form of the protein2–4. APOBEC2 is essential for muscle tissue development5. APOBEC4 has no ascribed function so far6. AID deaminates genomic ssDNA of B cells, initiating immunoglobulin somatic hypermutation and class switch processes7–9. Most notably, APOBEC3 enzymes participate in innate immunity against retroviruses and endogenous retroelements10–12. Sheehy et al. demonstrated that A3G also plays a role in immunity against human immunodeficiency virus type 1 (HIV-1)13. To exert its antiviral role, A3G is packaged along with viral RNA14. Upon infection of target cells and during the reverse transcription process, A3G deaminates the cytosine residues of the nascent first retroviral DNA strand into uracils. The resulting uracil residues serve as templates for the incorporation of adenine, which ultimately results in strand-specific C/G to T/A transitions and loss of infectivity through lethal mutagenesis15–19. On the other hand, sub-lethal mutagenic activity of APOBEC3 proteins may end up being an additional source of HIV-1 genetic diversity, hence bolstering its evolvability20–22. 
APOBEC3 proteins have been shown to inhibit other retroviruses (simian immunodeficiency virus23, equine infectious anemia virus24, foamy virus25, human T-cell leukemia virus26, and murine leukemia virus27), pararetroviruses (hepatitis B virus28) and DNA viruses (herpes simplex virus 1 (HSV-1)29,30, Epstein-Barr virus (EBV)30, and human papillomavirus31). In the cases of HSV-1 and EBV, the antiviral role of deaminases has not yet been demonstrated30. Evidence also exists that A3G significantly interferes with negative-sense RNA viruses lacking a DNA replicative phase32. For example, the transcription and protein accumulation of measles virus, mumps virus and respiratory syncytial virus (RSV) were reduced 50–70%, whereas the frequency of C/G to U/A mutations increased ∼4-fold32. In contrast, A3G displays no antiviral activity against influenza A virus despite being highly induced in infected cells as part of a general IFN-β response to infection33,34.\n\nHuman APOBEC belongs to a superfamily of polynucleotide cytidine and deoxycytidine deaminases distributed throughout the biological world35. All family members contain a zinc finger domain (CDD), identifiable by the signature (H/C)-x-E-x(25–30)-P-C-x-x-C. Plants are not an exception and, for example, the Arabidopsis thaliana genome encodes nine putative cytidine deaminases (with genes named AtCDA1 to AtCDA9). Whilst the AtCDA1 gene is located on chromosome II, the other eight genes are located on chromosome IV. In the case of rice and other monocots, only one CDA has been identified35. Interestingly, expression of this CDA was highly induced as part of the general stress response of rice against infection by the fungal pathogen Magnaporthe grisea, resulting in an excess of A to G and U to C mutations in defense-related genes36. Edited dsRNAs might be retained in the nucleus and degraded, generating miRNAs and siRNAs37. 
Given the relevance of deamination as an antiviral innate response in animals, we sought first to determine whether any of the AtCDA proteins encoded by plants can participate in deaminating the genome of the pararetrovirus cauliflower mosaic virus (CaMV; genus Caulimovirus, family Caulimoviridae) and, second, to explore whether this deamination may negatively impact viral infection. We hypothesize that deamination may take place mainly at the reverse transcription step. The CaMV genome consists of a single molecule of circular double-stranded DNA of 8 kbp38. The DNA of CaMV has three discontinuities, Δ1 in the negative-sense strand (or a strand), and Δ2 and Δ3 in the positive-sense strand (yielding the b and g strands). In short, the replication cycle of CaMV is as follows38: in the nucleus of the infected cell, the a strand is transcribed into 35S RNA, with terminal repeats, which migrates to the cytoplasm. Priming of the 35S RNA occurs by the annealing of the 3’ end of tRNA^Met to the primer-binding site (PBS) sequence, leading to the synthesis of the DNA a strand by the virus’ reverse transcriptase. Then, the RNA in the heteroduplex is degraded by the virus’ RNaseH activity, leaving purine-rich regions that act as primers for the synthesis of the positive-sense DNA b and g strands.\n\nOur results show that AtCDA1 significantly increases the number of G to A mutations in vivo, and that there is a negative correlation between the amount of AtCDA1 mRNA present in the cell and the load reached by CaMV, suggesting that deamination of viral genomes may also constitute a significant antiviral mechanism in plants.\n\n\nMethods\n\nAtCDA cDNAs were cloned under the 35S promoter in a pBIN61 vector39. N. bigelovii plants were inoculated with CaMV virions purified from Brassica rapa plants40 previously infected with the clone pCaMVW26041. 
Symptomatic leaves were agroinfiltrated39 with one of the nine AtCDAs and with the empty vector pBIN61, each on one half of the leaf. Samples were collected three days post-agroinfiltration.\n\nThe design and cloning of the artificial micro-RNA (amiR) able to simultaneously suppress the expression of AtCDAs 1, 2, 3, 4, 7, and 8 was performed as described in ref. 42. The amiRNA was cloned under the control of the Aspergillus nidulans ethanol regulon43,44 and used to transform A. thaliana by the floral dip method45. By doing so, we obtained the transgenic line amiR1-6-3. One-month-old seedlings of transgenic and wild-type A. thaliana were treated with 2% ethanol (or water for the control groups) three times every four days. Three days after the third treatment, plants were inoculated with the infectious clone pCaMVW260 as described in ref. 41. Infections were established by applying 1.31×10^11 molecules of pCaMVW260 to each of three leaves per plant. Subsequently, plants were subjected to two additional treatments with 2% ethanol (or water) one and five days post-infection. Finally, samples were taken eight days after inoculation and handled as previously described46. For each genotype (transgenic or wild-type) and treatment (ethanol or water) combination, 22 plants were analyzed.\n\nCaMV genomic DNA was purified using the DNeasy Plant Mini Kit (Qiagen) according to the manufacturer’s instructions. For detection of edited genomes, 3D-PCR was performed using primers HCa8Fdeg and HCa8Rdeg. PCRs were performed in a Mastercycler® (Eppendorf) at denaturation temperatures of 82.1 °C, 82.9 °C, 83.9 °C, and 85.0 °C. PCR products obtained with the lowest denaturation temperature were cloned in the pUC19 vector (Fermentas), transformed in Escherichia coli DH5α and sent to GenoScreen (Lille, France) for sequencing.\n\nTotal RNA was extracted from A. thaliana plants using the RNeasy® Plant Mini Kit (Qiagen), according to the manufacturer’s instructions. 
AtCDA1-specific primers qCDA1-F and qCDA1-R were designed using Primer Express software (Applied Biosystems). RT-qPCR reactions were performed using the One Step SYBR PrimeScript RT-PCR Kit II (Takara). Amplification, data acquisition and analysis were carried out using an Applied Biosystems Prism 7500 sequence detection system. All quantifications were performed using the standard curve method. To quantify AtCDA1 mRNA, a full-ORF runoff transcript was synthesized with T7 RNA polymerase (Roche) using as template a PCR product obtained from cloned AtCDA1 and primers T7-CDA1F and qCDA1-R. CaMV qPCR quantitation was performed as described in ref. 46.\n\nAll primers used are listed in Supplementary Table S3.\n\n\nResults\n\nTo test the mutagenic activity of A. thaliana CDAs, nine N. bigelovii plants were inoculated with CaMV. After systemic infection was established, we performed transient AtCDA overexpression experiments. To do so, the same leaf was agroinfiltrated twice; one half of the leaf was infiltrated with one of the nine AtCDA genes and the other half of the leaf was infiltrated with the empty vector. This test was done for all nine AtCDA genes in different plants. The presence of AtCDA mRNAs was verified by RT-PCR from DNase-treated RNA extracts. DNA was extracted from agroinfiltrated areas for 3D-PCR amplification of a 229 bp fragment in ORF VII of CaMV. 3D-PCR uses a gradient of low denaturation temperatures during PCR to identify the lowest one, which potentially allows differential amplification of A/T-rich hypermutated genomes47. There were no differences in the lowest denaturation temperature that could result in differential amplification of controls and the AtCDA-agroinfiltrated samples, suggesting that hypermutated genomes should be at low frequency, if present at all.\n\nPCR products obtained at the lowest denaturation temperature were cloned and sequenced. 
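The standard curve method used for the RT-qPCR quantifications above amounts to a least-squares fit of Ct against log10 copy number for known dilutions of the standard, with unknowns read off the fitted line. A minimal sketch of that calculation; the dilutions and Ct values below are invented for illustration and are not data from this study:

```python
import math

# Hypothetical standard curve: Ct values measured for known dilutions
# of a runoff transcript (illustrative numbers only).
std_copies = [1e3, 1e4, 1e5, 1e6, 1e7]   # copies per reaction
std_ct = [30.1, 26.8, 23.4, 20.0, 16.7]  # measured Ct for each dilution

# Least-squares fit of Ct = slope * log10(copies) + intercept
x = [math.log10(c) for c in std_copies]
n = len(x)
x_mean = sum(x) / n
y_mean = sum(std_ct) / n
slope = sum((xi - x_mean) * (yi - y_mean) for xi, yi in zip(x, std_ct)) / \
    sum((xi - x_mean) ** 2 for xi in x)
intercept = y_mean - slope * x_mean

# Amplification efficiency implied by the slope; ~1.0 means the target
# doubles every cycle (slope near -3.32).
efficiency = 10 ** (-1.0 / slope) - 1

def copies_from_ct(ct):
    """Read an unknown sample's copy number off the standard curve."""
    return 10 ** ((ct - intercept) / slope)

print(round(slope, 2), round(efficiency, 3))
print(f"{copies_from_ct(22.0):.3g}")
```

The efficiency check on the slope is a routine sanity test for a standard curve before trusting interpolated copy numbers.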
In a preliminary experiment, we sequenced 25 clones from each AtCDA/negative control pair (Supplementary Table S1). At least one G to A transition was detected in clones from areas infiltrated with the AtCDA1, AtCDA2 and AtCDA9 genes. For these three genes, we further increased the number of sequenced clones up to 106. The CaMV mutant spectrum was significantly different between plants overexpressing AtCDA1 and their respective negative controls (Figure 1a: χ2 = 25.760, 7 d.f., P = 0.001). This difference was entirely driven by the 471.43% increase in G to A transitions observed in the plants overexpressing AtCDA1. A thorough inspection of alignments showed that most of the G to A mutations (65.6%) detected in the different samples were located at nucleotide position 181 (Supplementary Table S1). By contrast, no overall difference existed between the mutant spectra of CaMV populations replicating in plants overexpressing AtCDA2 (Figure 1b: χ2 = 8.944, 6 d.f., P = 0.177) or AtCDA9 (Figure 1c: χ2 = 6.539, 8 d.f., P = 0.587) and their respective controls. Consistently, the mutant spectra from the three AtCDA-overexpressed samples were significantly heterogeneous (χ2 = 41.063, 16 d.f., P = 0.001), again due to the enrichment in G to A transitions observed in the case of AtCDA1. By contrast, the three independent control inoculation experiments showed homogeneous mutant spectra for CaMV (χ2 = 14.605, 18 d.f., P = 0.689), indistinguishable from the mutant spectra previously reported for natural isolates of this virus48. The consistency of the mutant spectra observed for the three control experiments and for a natural isolate of the virus suggests that in the absence of a perturbation such as the overexpression of AtCDA1, the CaMV mutant spectrum is rather stable.\n\n(a) AtCDA1, (b) AtCDA2 and (c) AtCDA9. The pBIN61 empty vector was agroinfiltrated in the same leaves as their corresponding AtCDAs (mock). 
For each sample, 20,034 nucleotides were sequenced.\n\nWe conclude that overexpressing the AtCDA1 gene results in a significant shift in CaMV genome composition towards G to A mutations, as expected from cytidine deaminase hypermutagenic activity.\n\nTo test the effects of suppressing the expression of AtCDAs on viral accumulation, we produced a transgenic line of A. thaliana Col-0, named amiR1-6-3. This line was stably transformed with an amiR, controlled by the A. nidulans ethanol regulon, to achieve ethanol-triggered RNAi-mediated simultaneous suppression of AtCDAs 1, 2, 3, 4, 7, and 8 expression. Transgenic and wild-type plants were subjected to periodic treatment with 2% ethanol (or water for the control groups). Subsequently, plants were inoculated with the infectious clone pCaMVW260 that expresses the genome of CaMV. Samples were taken eight days after inoculation and AtCDA1 mRNA and CaMV viral load were quantified by real-time RT-qPCR and qPCR, respectively, in the same samples. For each genotype and treatment combination, 22 plants were analyzed.\n\nThe expression of AtCDA1 mRNA depended on the plant genotype (Figure 2a; GLM: χ2 = 28.085, 1 d.f., P < 0.001) as well as on the interaction of plant genotype and treatment (χ2 = 26.037, 1 d.f., P < 0.001), suggesting a differential accumulation of AtCDA1 mRNA in each plant genotype depending on the amiR1-6-3 induction state. Ethanol treatment reduced the amount of AtCDA1 mRNA by 24.01% in transgenic plants, proving that triggering the expression of amiR1-6-3 significantly and efficiently silences the expression of AtCDA1. Unexpectedly, the effect was the opposite in wild-type plants, for which we observed a 23.76% increase in AtCDA1 mRNA accumulation (Figure 2a) upon treatment with ethanol. This increase in expression of AtCDA1 in wild-type plants after ethanol treatment and the underlying mechanisms certainly deserve to be investigated further. 
However, for the purpose of this study, its relevance is that it may increase the number of G to A mutations in the CaMV genome, thus making the antiviral effect somewhat stronger.\n\n(a) Number of AtCDA1 mRNA molecules/80 ng total RNA quantified by RT-qPCR using the standard curve method for absolute quantification. (b) Number of CaMV genomes/80 ng total DNA. For each block of plants (wild-type and amiR1-6-3), values were normalized to the average number of genomes estimated in the corresponding water-treated (control) plants.\n\nMore interestingly, the relative accumulation of CaMV in ethanol-treated plants was significantly different depending on the plant genotype being infected (Figure 2b; Mann-Whitney U test, P = 0.002): silencing the AtCDA1 gene bolstered CaMV accumulation by 103.10% compared to the accumulation observed in wild-type plants. Furthermore, there was a significant negative correlation between the number of molecules of AtCDA1 mRNA and viral load (partial correlation coefficient controlling for treatment: r = –0.299, 86 d.f., P = 0.005).\n\nGiven the significant increase of viral load in plants with lower levels of AtCDA1 mRNA, we sought the molecular signature of deamination in transgenic plants. For this, we selected three biological replicates from each treatment group (ethanol or control) and sequenced 39–45 clones of the CaMV fragment from each replicate. As shown in Figure 3, silencing of the AtCDA1 gene affects the composition of the CaMV mutant spectrum by reducing the number of G to A transitions by 69.23%. Nevertheless, overall, the two mutational spectra were not significantly different (Figure 3: χ2 = 9.108, 6 d.f., P = 0.168), prompting caution against making a definitive conclusion on the role of deamination in the observed increase in CaMV accumulation.\n\nThe number of nucleotides sequenced was 23,436 for control and 24,003 for ethanol-treated plants. 
Ethanol-treated plants turn on the expression of amiR1-6-3, which was designed to silence the expression of the AtCDA1 gene.\n\nWe conclude that suppressing the expression of AtCDAs 1, 2, 3, 4, 7, and 8 significantly reduces the accumulation of CaMV. However, the characterization of the mutant spectrum of the same CaMV populations does not provide strong enough support for the cytidine deamination hypothesis.\n\n\nDiscussion\n\nLethal mutagenesis through deamination of RNA/DNA by cytidine deaminases has been proven to work as an antiviral mechanism against retroviruses16–19,23–27, and some DNA28–31 and RNA32 viruses infecting mammals. Our results show that the A. thaliana CDA1 gene has some degree of mutagenic activity on the pararetrovirus CaMV genome. Moreover, simultaneously suppressing the expression of a subset of AtCDAs, including AtCDA1, increased CaMV load, strongly suggesting an antiviral role for AtCDAs.\n\nOur data show that AtCDAs probably restrict CaMV replication through a process similar to the restriction of HIV-1 by APOBEC3. CaMV replicates in the cytoplasm by reverse transcription using the positive-sense 35S RNA as template. As for HIV-1, the first-strand negative-sense cDNA could be deaminated during reverse transcription, transforming deoxycytidine into deoxyuridine. Then, when the positive-sense strand is produced, an A is incorporated instead of a G, increasing the proportion of G to A mutations. In the case of HIV-1, this G to A mutational bias is explained by A3G and A3H specificity for single-stranded negative-sense DNA: during HIV-1 replication, C to T transitions are rare and restricted to the PBS site and U3 regions in the 5’ long terminal repeat, where positive-stranded DNA is predicted to become transiently single stranded49. Similarly, during CaMV replication the negative strand remains single stranded, while the positive strand is copied from it and remains double stranded50. 
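The strand logic just described can be illustrated with a toy example (the six-nucleotide sequence is hypothetical, not CaMV data): deaminating a C on the nascent negative strand to U (read as T) places an A opposite it when the positive strand is copied back, so the genome records a G to A change:

```python
# Toy demonstration of why deamination of the negative strand yields
# G-to-A changes in the positive-sense genome. Sequence is hypothetical.
comp = str.maketrans("ACGT", "TGCA")

plus = "ATGGCA"                       # positive-sense template
minus = plus.translate(comp)[::-1]    # first-strand cDNA (reverse complement): TGCCAT

# Deamination converts a C in the negative strand to U, read as T on copying.
minus_edited = minus.replace("C", "T", 1)
new_plus = minus_edited.translate(comp)[::-1]  # second strand copied from edited minus

print(plus, "->", new_plus)  # ATGGCA -> ATGACA: one G now reads as A
```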
Surprisingly, for AtCDA1, C to T mutations were also increased; the region studied here is close to the 5’ end of CaMV, which contains the PBS for negative-strand synthesis and the ssDNA discontinuity ∆1. The observed C to T transitions could reflect transient positive-stranded ssDNA in the 5’ terminal region during reverse transcription; nevertheless, a different substrate specificity of A. thaliana CDAs cannot be ruled out.\n\nMost of the G to A transitions detected in agroinfiltration experiments were located in the G at position 181. HIV-1 hypermutated genomes show mutational hot spots as well, which are due to the preference of A3G and A3F for deamination of the third C in 5’-CCC (negative-strand) and 5’-GGC, respectively51,52. The context of the C complementary to G181 (5’-GGC) differs from what has been described for APOBEC3, suggesting that if AtCDAs had a context preference, it would be different from the one described for A3G. However, given the low number of mutations found, we should be cautious about concluding whether AtCDAs have a possible sequence-context preference. Since our experiments were performed in vivo, negative selection is expected to purge genomes carrying deleterious mutations. This limitation could account for our failure to detect largely hypermutated genomes, and demonstrates the need for developing new selection-free assays to further characterize AtCDA-induced mutagenesis.\n\nAlthough a correlation between the expression of APOBEC3 and the mutational bias of viruses infecting mammals has not been demonstrated, caulimoviruses have an excess of G to A transitions in synonymous positions53. In A. thaliana plants, we found that silencing of AtCDA1 reduced the frequency of G to A transitions in the CaMV genome, suggesting a contribution of AtCDAs to the nucleotide bias found in caulimoviruses. The increased viral load in CDA-silenced A. 
thaliana plants strongly suggests that deamination of viral genomes may work as an antiviral mechanism in plants, leading to questions about how general this mechanism might be, and how it may contribute to viral evolution. Describing a new natural antiviral mechanism in plants opens research avenues for the development of durable control strategies.\n\n\nData availability\n\nAll datasets that support the findings in this study are available at LabArchives with DOI: 10.6070/H4TD9VD5.\n\n‘File Sequence_data_for_Figure_1.zip’ contains the FASTA files with the sequence data used to generate the mutational spectra shown in Figure 1.\n\n‘Data_for_Figure_2a.xlsx’ contains the AtCDA1 expression data used to generate Figure 2a.\n\n‘Data_for_Figure_2b.xlsx’ contains the CaMV accumulation data used to generate Figure 2b.\n\n‘Sequence_data_for_Figure_3.zip’ contains the FASTA files with sequence data used to generate the mutational spectra shown in Figure 3.",
"appendix": "Author contributions\n\n\n\nSFE conceived the study, designed the experiments and analyzed the data. SM, JMC and AG-P performed the experiments and contributed to experimental design. SM, JMC and SFE wrote the paper. All authors revised and approved the manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis work was supported by the former Spanish Ministerio de Ciencia e Innovación-FEDER grant BFU2009-06993 to SFE. JMC was supported by the CSIC JAE-doc program/Fondo Social Europeo. AG-P was supported by a grant for Scientific and Technical Activities and by grant P10-CVI-65651, both from Junta de Andalucía.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nWe thank Francisca de la Iglesia (IBMCP-CSIC), Àngels Pròsper (IBMCP-CSIC) and Ana Cuadrado (IBMCP-CSIC) for excellent technical assistance, Miguel A. Blázquez (IBMCP-CSIC) for help in designing the amiR1-6-3 and generating the transgenic plants and Rémy Froissart (BGPI-CNRS) for providing the pCaMVW260 infectious clone.\n\n\nSupplementary material\n\nSupplementary Table S1. Nucleotide substitutions detected in the overexpression experiments. For each of the nine infiltrated plants, the substitutions observed in the clonal sequences analyzed at the overexpressed (AtCDA1 to AtCDA9) and control (pBIN61-infiltrated) regions are shown. In some cases, a given substitution is present in several clonal sequences from the same sample and the number of times it appears is indicated between parentheses. Nucleotide positions are given according to CaMV isolate W260, GenBank accession JF809616.1.\n\nSupplementary Table S2. Nucleotide substitutions found in A. thaliana transgenic plants with or without inducing the expression of amiR1-6-3 that silences the expression of several AtCDAs. 
In some cases, a given substitution is present in several clonal sequences from the same sample and the number of times it appears is indicated between parentheses. Nucleotide positions are given according to CaMV isolate W260, GenBank accession JF809616.1.\n\nSupplementary Table S3. Primers used in this study.\n\n\nReferences\n\nSmith HC, Bennett RP, Kizilyer A, et al.: Functions and regulation of the APOBEC family of proteins. Semin Cell Dev Biol. 2012; 23(3): 258–268. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDriscoll DM, Zhang Q: Expression and characterization of p27, the catalytic subunit of the apolipoprotein B mRNA editing enzyme. J Biol Chem. 1994; 269(31): 19843–19847. PubMed Abstract\n\nNavaratnam N, Morrison JR, Bhattacharya S, et al.: The p27 catalytic subunit of the apolipoprotein B mRNA editing enzyme is a cytidine deaminase. J Biol Chem. 1993; 268(28): 20709–20712. PubMed Abstract\n\nTeng B, Burant CF, Davidson NO: Molecular cloning of an apolipoprotein B messenger RNA editing protein. Science. 1993; 260(5115): 1816–1819. PubMed Abstract | Publisher Full Text\n\nSato Y, Probst HC, Tatsumi R, et al.: Deficiency in APOBEC2 leads to a shift in muscle fiber type, diminished body mass, and myopathy. J Biol Chem. 2010; 285(10): 7111–7118. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRogozin IB, Basu MK, Jordan IK, et al.: APOBEC4, a new member of the AID/APOBEC family of polynucleotide (deoxy)cytidine deaminases predicted by computational analysis. Cell Cycle. 2005; 4(9): 1281–1285. PubMed Abstract | Publisher Full Text\n\nMuramatsu M, Sankaranand VS, Anant S, et al.: Specific expression of activation-induced cytidine deaminase (AID), a novel member of the RNA-editing deaminase family in germinal center B cells. J Biol Chem. 1999; 274(26): 18470–18476. 
PubMed Abstract | Publisher Full Text\n\nArakawa H, Hauschild J, Buerstedde JM: Requirement of the activation-induced deaminase (AID) gene for immunoglobulin gene conversion. Science. 2002; 295(5558): 1301–1306. PubMed Abstract | Publisher Full Text\n\nFugmann SD, Schatz DG: Immunology. One AID to unite them all. Science. 2002; 295(5558): 1244–1245. PubMed Abstract | Publisher Full Text\n\nChiu YL, Witkowska HE, Hall SC, et al.: High-molecular-mass APOBEC3G complexes restrict Alu retrotransposition. Proc Natl Acad Sci U S A. 2006; 103(42): 15588–15593. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSchumann GG: APOBEC3 proteins: major players in intracellular defence against LINE-1-mediated retrotransposition. Biochem Soc Trans. 2007; 35(Pt 3): 637–642. PubMed Abstract | Publisher Full Text\n\nEsnault C, Millet J, Schwartz O, et al.: Dual inhibitory effects of APOBEC family proteins on retrotransposition of mammalian endogenous retroviruses. Nucl Acids Res. 2006; 34(5): 1522–1531. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSheehy AM, Gaddis NC, Choi JD, et al.: Isolation of a human gene that inhibits HIV-1 infection and is suppressed by the viral Vif protein. Nature. 2002; 418(6898): 646–650. PubMed Abstract | Publisher Full Text\n\nSmith HC: APOBEC3G: a double agent in defense. Trends Biochem Sci. 2011; 36(5): 239–244. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMangeat B, Turelli P, Caron G, et al.: Broad antiretroviral defence by human APOBEC3G through lethal editing of nascent reverse transcripts. Nature. 2003; 424(6944): 99–103. PubMed Abstract | Publisher Full Text\n\nZhang H, Yang B, Pomerantz RJ, et al.: The cytidine deaminase CEM15 induces hypermutation in newly synthesized HIV-1 DNA. Nature. 2003; 424(6944): 94–98. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBrowne EP, Allers C, Landau NR: Restriction of HIV-1 by APOBEC3G is cytidine deaminase-dependent. Virology. 2009; 387(2): 313–321. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nMiyagi E, Opi S, Takeuchi H, et al.: Enzymatically active APOBEC3G is required for efficient inhibition of human immunodeficiency virus type 1. J Virol. 2007; 81(24): 13346–13353. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSchumacher AJ, Haché G, Macduff DA, et al.: The DNA deaminase activity of human APOBEC3G is required for Ty1, MusD, and human immunodeficiency virus type 1 restriction. J Virol. 2008; 82(6): 2652–2660. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSadler HA, Stenglein MD, Harris RS, et al.: APOBEC3G contributes to HIV-1 variation through sublethal mutagenesis. J Virol. 2010; 84(14): 7396–7404. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMulder LC, Harari A, Simon V: Cytidine deamination induced HIV-1 drug resistance. Proc Natl Acad Sci U S A. 2008; 105(14): 5501–5506. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRussell RA, Moore MD, Hu WS, et al.: APOBEC3G induces a hypermutation gradient: purifying selection at multiple steps during HIV-1 replication results in levels of G-to-A mutations that are high in DNA, intermediate in cellular viral RNA, and low in virion RNA. Retrovirology. 2009; 6: 16. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHultquist JF, Lengyel JA, Refsland EW, et al.: Human and rhesus APOBEC3D, APOBEC3F, APOBEC3G, and APOBEC3H demonstrate a conserved capacity to restrict Vif-deficient HIV-1. J Virol. 2011; 85(21): 11220–11234. PubMed Abstract | Publisher Full Text | Free Full Text\n\nZielonka J, Bravo IG, Marino D, et al.: Restriction of equine infectious anemia virus by equine APOBEC3 cytidine deaminases. J Virol. 2009; 83(15): 7547–7559. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDelebecque F, Suspène R, Calattini S, et al.: Restriction of foamy viruses by APOBEC cytidine deaminases. J Virol. 2006; 80(2): 605–614. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nMahieux R, Suspène R, Delebecque F, et al.: Extensive editing of a small fraction of Human T-cell leukemia virus type 1 genomes by four APOBEC3 cytidine deaminases. J Gen Virol. 2005; 86(Pt 9): 2489–2494. PubMed Abstract | Publisher Full Text\n\nDang Y, Wang X, Esselman WJ, et al.: Identification of APOBEC3DE as another antiretroviral factor from the human APOBEC family. J Virol. 2006; 80(21): 10522–10533. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBonvin M, Achermann F, Greeve I, et al.: Interferon-inducible expression of APOBEC3 editing enzymes in human hepatocytes and inhibition of hepatitis B virus replication. Hepatology. 2006; 43(6): 1364–1374. PubMed Abstract | Publisher Full Text\n\nGee P, Ando Y, Kitayama H, et al.: APOBEC1-mediated editing and attenuation of Herpes simplex virus 1 DNA indicate that neurons have an antiviral role during herpes simplex encephalitis. J Virol. 2011; 85(19): 9726–9736. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSuspène R, Aynaud MM, Koch S, et al.: Genetic editing of herpes simplex virus 1 and Epstein-Barr herpesvirus genomes by human APOBEC3 cytidine deaminases in culture and in vivo. J Virol. 2011; 85(15): 7594–7602. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWang Z, Wakae K, Kitamura K, et al.: APOBEC3 deaminases induce hypermutation in human papillomavirus 16 DNA upon beta interferon stimulation. J Virol. 2014; 88(2): 1308–1317. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFehrholz M, Kendl S, Prifert C, et al.: The innate antiviral factor APOBEC3G targets replication of measles, mumps and respiratory syncytial viruses. J Gen Virol. 2012; 93(Pt 3): 565–576. PubMed Abstract | Publisher Full Text\n\nPauli EK, Schmolke M, Hofmann H, et al.: High level expression of the anti-retroviral protein APOBEC3G is induced by influenza A virus but does not confer antiviral activity. Retrovirology. 2009; 6: 38. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nWang GF, Lin SY, Zhang H, et al.: Apobec 3F and apobec 3G have no inhibition and hypermutation effect on the human influenza A virus. Acta Virol. 2008; 52(3): 193–194. PubMed Abstract\n\nConticello SG, Thomas CJ, Petersen-Mahrt SK, et al.: Evolution of the AID/APOBEC family of polynucleotide (deoxy)cytidine deaminases. Mol Biol Evol. 2005; 22(2): 367–377. PubMed Abstract | Publisher Full Text\n\nGowda M, Venu RC, Li H, et al.: Magnaporthe grisea infection triggers RNA variation and antisense transcript expression in rice. Plant Physiol. 2007; 144(1): 524–533. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBlow MJ, Grocock RJ, van Dongen S, et al.: RNA editing of human microRNAs. Genome Biol. 2006; 7(4): R27. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHaas M, Bureau M, Geldreich A, et al.: Cauliflower mosaic virus: still in the news. Mol Plant Pathol. 2002; 3(6): 419–429. PubMed Abstract | Publisher Full Text\n\nBendahmane A, Querci M, Kanyuka K, et al.: Agrobacterium transient expression system as a tool for the isolation of disease resistance genes: application to the Rx2 locus in potato. Plant J. 2000; 21(1): 73–81. PubMed Abstract | Publisher Full Text\n\nSchoelz JE, Shepherd RJ, Daubert S: Region VI of cauliflower mosaic virus encodes a host range determinant. Mol Cell Biol. 1986; 6(7): 2632–2637. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSchoelz JE, Shepherd RJ: Host range control of cauliflower mosaic virus. Virology. 1988; 162(1): 30–37. PubMed Abstract | Publisher Full Text\n\nSchwab R, Ossowski S, Riester M, et al.: Highly specific gene silencing by artificial microRNAs in Arabidopsis. Plant Cell. 2006; 18(5): 1121–1133. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCaddick MX, Greenland AJ, Jepson I, et al.: An ethanol inducible gene switch for plants used to manipulate carbon metabolism. Nat Biotech. 1998; 16(2): 177–180. 
PubMed Abstract | Publisher Full Text\n\nRoslan HA, Salter MG, Wood CD, et al.: Characterization of the ethanol-inducible alc gene-expression system in Arabidopsis thaliana. Plant J. 2001; 28(2): 225–235. PubMed Abstract | Publisher Full Text\n\nClough SJ, Bent AF: Floral dip: a simplified method for Agrobacterium-mediated transformation of Arabidopsis thaliana. Plant J. 1998; 16(6): 735–743. PubMed Abstract | Publisher Full Text\n\nMartín S, Elena SF: Application of game theory to the interaction between plant viruses during mixed infections. J Gen Virol. 2009; 90(Pt 11): 2815–2820. PubMed Abstract | Publisher Full Text\n\nSuspène R, Henry M, Guillot S, et al.: Recovery of APOBEC3-edited human immunodeficiency virus G–>A hypermutants by differential DNA denaturation PCR. J Gen Virol. 2005; 86(Pt 1): 125–129. PubMed Abstract | Publisher Full Text\n\nChenault KD, Melcher U: Patterns of nucleotide sequence variation among cauliflower mosaic virus isolates. Biochimie. 1994; 76(1): 3–8. PubMed Abstract | Publisher Full Text\n\nYu Q, König R, Pillai S, et al.: Single-strand specificity of APOBEC3G accounts for minus-strand deamination of the HIV genome. Nat Struct Mol Biol. 2004; 11(5): 435–42. PubMed Abstract | Publisher Full Text\n\nMarco Y, Howell SH: Intracellular forms of viral DNA consistent with a model of reverse transcriptional replication of the cauliflower mosaic virus genome. Nucl Acids Res. 1984; 12(3): 1517–1528. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLiddament MT, Brown WL, Schumacher AJ, et al.: APOBEC3F properties and hypermutation preferences indicate activity against HIV-1 in vivo. Curr Biol. 2004; 14(15): 1385–91. PubMed Abstract | Publisher Full Text\n\nKohli RM, Maul RW, Guminski AF, et al.: Local sequence targeting in the AID/APOBEC family differentially impacts retroviral restriction and antibody diversification. J Biol Chem. 2010; 282(52): 40956–40964. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nMüller V, Bonhoeffer S: Guanine-adenine bias: a general property of retroid viruses that is unrelated to host-induced hypermutation. Trends Genet. 2005; 21(5): 264–268. PubMed Abstract | Publisher Full Text"
}
|
[
{
"id": "22521",
"date": "18 May 2017",
"name": "Pedro Gomez",
"expertise": [
"Viral Evolutionary Ecology"
],
"suggestion": "Approved",
"report": "Approved\n\nComments on the manuscript entitled \"A putative antiviral role of plant cytidine deaminases\".\nThis manuscript reports how plant cytidine deaminases, particularly AtCDA1, might contribute to the deamination of the Cauliflower mosaic virus (CaMV) genome and, hence, affect its accumulation in plants. The work has merit and seems to be a good contribution. Whereas this potential antiviral response has been assessed in humans and animals, virtually nothing is known about this mutagenic activity in plants.\n\nThe experimental methods are overall solid, and the manuscript is very well-written, clear and easy to follow.\n\nThe authors first examined which AtCDA proteins encoded by Arabidopsis thaliana have an effect on the CaMV mutational spectrum by performing AtCDA overexpression in Nicotiana bigelovii. While the results are consistent with the expected AtCDA mutagenic activity, I would suggest they describe the reasoning behind performing it in N. bigelovii plants, in order to clarify whether any potential host CDA background activity could affect the results of the transient expression assays. It may not matter, but I have not found the answer to this question within the text. 
If that is so, I am guessing that results from the mutant spectra analysis could even improve by providing stronger results from the AtCDA1 analysis, or some differences in the other AtCDA genes could even be found, as a consequence of buffering those effects from the negative control samples.\n\nSecondly, they sought to evaluate the effect of suppressing AtCDAs in transgenic A. thaliana plants on the accumulation and mutant spectrum of CaMV. Here, I am a bit concerned whether the general claim that the authors are making with this study is properly warranted. Considering that all results of this section are based only on AtCDA1, this seems to overstate the final conclusion and perhaps this can be slightly tempered. I would recommend either moderating this conclusion (and title) to only the AtCDA1 results or showing evidence of the CaMV load reduction when suppressing the expression of the other AtCDAs. This should then be accompanied by a full description of primers and expression patterns of the AtCDA2-8 mRNA analysis in the methods section, in addition to inclusion of statistics for the mutant spectrum in the results section. This could also increase the appeal of the manuscript.\n\nIn this sense, thinking about the general nature of the findings, it would be very interesting to read any thoughts/perspectives (in the discussion section) about this cytidine deaminase mutagenic activity in some other plant viruses (e.g. RNA viruses), which infect through different replication strategies.\n\nSpecific minor comments: - Please, double check this % … 471.43% increase in G to A transitions - Colouring of treatments in Fig. 3 is a bit confusing. Please keep it consistent with the previous figures. - Table S1: Please describe that the G to A substitutions detected here are shaded in the table.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? 
Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": [
{
"c_id": "2769",
"date": "15 Jun 2017",
"name": "Santiago F Elena",
"role": "Author Response",
"response": "Dear Dr. Gómez, Thank you very much for your time in reviewing the manuscript and also for your very constructive comments. Below we provide detailed responses to each one of them. We justify the choice of N. bigelovii for our agroinfiltration experiments. Basically, it was a practical choice: the clone of CaMV used in this study infects neither N. tabacum nor N. benthamiana efficiently, and we needed a plant with large enough leaves to be agroinfiltrated. It is true that our results only suggest that AtCDA1 may be involved in C deamination of the CaMV genome. We have edited the text to avoid making any unsubstantiated claim. We have also added a paragraph in the Discussion putting our results in the context of recent findings that suggest that only AtCDA1 may be relevant for the homeostasis of pyrimidines, while the other eight members of the gene family may be pseudogenes. We did not quantify the expression levels of AtCDAs 2–9, since we decided to focus our attention on AtCDA1 after observing that the expected bias in mutation spectrum was only found in this case. We have added a new paragraph to the Discussion on the potential antiviral role of plant CDAs for other viruses. Unfortunately, possible evidence is limited to one Potyvirus. The three specific minor comments have been considered: the percentage was correct, the coloring in Fig. 3 is right, and the legends of Supplementary Tables S1 and S2 have been modified to indicate that G to A transitions are shaded in grey."
}
]
},
{
"id": "22486",
"date": "02 Jun 2017",
"name": "Israel Pagán",
"expertise": [
"Plant virus evolution"
],
"suggestion": "Approved",
"report": "Approved\n\nMartín et al. present an interesting work on the role of plant cytidine deaminases (CDAs) as a defense mechanism against virus infection through the increase of mutational load in the viral genome during replication. CDAs are known to increase the frequency of G to A transitions. Although such mutational load has been shown to be an effective defense mechanism against some animal viruses (mainly retroviruses), this paper shows for the first time evidence supporting a similar role in plants against a plant pararetrovirus. As such, I consider the paper scientifically sound.\n\nI find the paper well written and easy to read, and I would like to acknowledge the effort made by the authors on this aspect. The methodology is well described and all the information necessary to understand the experiments is provided. In this sense, I would just suggest adding complementary information on the number of N. bigelovii leaves agroinfiltrated with each AtCDA. This would help to understand the degree of biological variation considered in the study.\n\nThe main conclusion of the manuscript is that overexpression of AtCDA leads to a decrease of viral load. I think that this conclusion is robustly supported by the data presented in the results section, and statistics are flawlessly performed and described, as is the rule in the work from Prof. Elena’s group. 
A second main conclusion of this work is that higher viral load may be associated with the trend towards reduced frequency of G to A transitions in plants with silenced AtCDA. The authors are careful in drawing conclusions from this observation, given that the observed trend is not statistically significant. I was wondering whether the effect of the bias in G to A transitions might not be quantitative but rather qualitative. In other words, it might be interesting to include some discussion about the existence of a threshold in the frequency of the G to A transition bias that may lead to the reduction in viral load.\n\nMy last suggestion relates to the observation that mutations at position 181 account for most of the G to A transitions. This makes me wonder about the spatial distribution of mutations (especially G to A transitions) across the viral genome. I think that including some information on whether mutations are mainly localized in coding or non-coding regions, and on whether mutations located in coding regions result mainly in synonymous or non-synonymous changes, may be a nice addition. Perhaps this information may help to understand the effects of G to A transitions on the genome “functionality”.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": [
{
"c_id": "2768",
"date": "15 Jun 2017",
"name": "Santiago F Elena",
"role": "Author Response",
"response": "Dear Dr. Pagán, Thank you very much for your time in reviewing the manuscript and also for your very constructive comments. Below we provide detailed responses to each one of them. We now mention the number of half-leafs per plant (3) that were agroinfiltrated with each one of the nine AtCDAs. We have added a brief text to the Discussion on the possibility of whether low a threshold number of G to A transitions needs to be reached in order to have a significant effect on CaMV accumulation. Please, recall that we have sequenced only a region within ORF VII, thus all mutations observed are in a coding sequence. Nonetheless, we have added extra text to the Discussion commenting on the synonymous/nonsynonymous nature of all the observed mutations, in particular for the G to A transitions most relevant for our study. Furthermore, Supplementary Table S1 now indicates the nonsynonymous substitutions for the case of AtCDA1 in the agroinfiltration experiments."
}
]
}
] | 1
|
https://f1000research.com/articles/6-622
|
https://f1000research.com/articles/6-124/v1
|
10 Feb 17
|
{
"type": "Method Article",
"title": "A very simple, re-executable neuroimaging publication",
"authors": [
"Satrajit S. Ghosh",
"Jean-Baptiste Poline",
"David B. Keator",
"Yaroslav O. Halchenko",
"Adam G. Thomas",
"Daniel A. Kessler",
"David N. Kennedy",
"Satrajit S. Ghosh",
"Jean-Baptiste Poline",
"David B. Keator",
"Yaroslav O. Halchenko",
"Adam G. Thomas",
"Daniel A. Kessler"
],
"abstract": "Reproducible research is a key element of the scientific process. Re-executability of neuroimaging workflows that lead to the conclusions arrived at in the literature has not yet been sufficiently addressed and adopted by the neuroimaging community. In this paper, we document a set of procedures, which include supplemental additions to a manuscript, that unambiguously define the data, workflow, execution environment and results of a neuroimaging analysis, in order to generate a verifiable re-executable publication. Re-executability provides a starting point for examination of the generalizability and reproducibility of a given finding.",
"keywords": [
"Neuroimaging analysis",
"re-executable publication",
"reproducibility"
],
"content": "Introduction\n\nThe quest for more reproducibility and replicability in neuroscience research spans many types of problems. True reproducibility requires the observation of a ‘similar result’ through the execution of a subsequent independent, yet similar, analysis on similar data. However, what constitutes ‘similar’, and how to appropriately annotate and integrate a lack of replication in specific studies remains a problem for the community and the literature that we generate.\n\nA number of studies have brought the reproducibility of science into question (Prinz et al., 2011). Numerous factors are critical to understand reproducibility, including: sample size, and its related issues of power and generalizability (Button et al., 2013; Ioannidis, 2005); P-hacking, trying various statistical approaches in order to find analyses that reach significance (Simmons et al., 2011; Simonsohn et al., 2014); completeness of methods description, the written text of a publication cannot completely describe an analytic method in its entirety. Coupled with this is the publication bias that arises from only publishing results from the positive (“significant”) tail of the distribution of findings. This contributes to a growing literature of findings that do not properly ‘self-correct’ through an equivalent publication of negative findings (that indicate a lack of replication). Such corrective aggregation is needed to balance the inevitable false positives that result from the millions of experiments that are performed each year.\n\nBut before even digging too deeply into the exceedingly complex topic of reproducibility, there already is great concern that a typical neuroimaging publication, the basic building block that our scientific knowledge enterprise is built upon, is rarely even re-executable, even by the original investigators. 
The general framework for a publication is the following: take some specified “Data”, apply a specified “Analysis”, and generate a set of “Results”. From the Results, claims are then made and discussed. In the context of this paper, we consider “Analysis” to include the software, workflow and execution environment, and use the following definitions of reproducibility:\n\nRe-executability (publication-level replication): The exact same data, operated on by the exact same analysis should yield the exact same result. This is currently a problem since publications, in order to maintain readability, do not typically provide a complete specification of the analysis method or access to the exact data.\n\nGeneralizability: We can divide generalizability into three variations:\n\nGeneralization Variation 1: Exact Same Data + Nominally ‘Similar’ Analyses should yield a ‘Similar’ Result (i.e. FreeSurfer subcortical volumes compared to FSL FIRST)\n\nGeneralization Variation 2: Nominally ‘Similar’ Data + Exact Same Analysis should yield a ‘Similar’ Result (i.e. the cohort of kids with autism I am using compared to the cohort you are using)\n\nGeneralized Reproducibility: Nominally ‘Similar’ Data + Nominally ‘Similar’ Analyses should yield a ‘Similar’ Result\n\nSince we do not really characterize data, analysis, and results very exhaustively in the current literature, the concept of ‘similar’ has lots of wiggle room for interpretation (both to enhance similarity and to discount differences, as desired by the interests of the author).\n\nIn this paper, we look more closely at the re-executability necessary for publication-level replication. The technology exists, in many cases, to make neuroimaging publications that are fully re-executable. Re-executability of an initial publication is a crucial step in the goal of overall reproducibility of a given research finding. There are already examples of re-executable individual articles (e.g. 
Waskom, 2014), as well as journals that propose to publish reproducible and open research (e.g. https://rescience.github.io). Here, we propose a formal template for a reproducible brain imaging publication and provide an example on fully open data from the NITRC Image Repository. The key elements of publication re-executability are the definition of and access to: 1) the data; 2) the processing workflow; 3) the execution environment; and 4) the complete results. In this report, we use existing technologies (i.e., NITRC (http://nitrc.org), NIDM (http://nidm.nidash.org), Nipype (http://nipy.org/nipype), NeuroDebian (http://neuro.debian.net)) to generate a re-executable publication for a very simple analysis problem, which can form an essential template to guide future progress in enhancing re-executability of workflows in neuroimaging publications. Specifically, we explore the issue of exact re-execution (identical execution environment) and re-execution of identical workflow and data in ‘similar’ execution environments (Glatard et al., 2015).\n\n\nMethods\n\nWe envision a ‘publication’ with four supplementary files: 1) a data file, 2) a workflow file, 3) an execution environment specification, and 4) the results. The task the author would like to enable, for an interested reader, is to easily use the first three specifications to run the analysis and confirm (or deny) the similarity of the results of an independent re-execution compared to those published.\n\nFor the purpose of this report, we wanted an easy-to-execute query run on completely open, publicly available data. We also wanted to use a relatively simple workflow that could be run in a standard computational environment and have it operate on a tractable number of subjects. We selected a workflow and sample size such that the overall processing could be accomplished in a few hours. 
The complete workflow and results can be found in the Github repository (doi, 10.5281/zenodo.266673; Ghosh et al., 2017).\n\nThe data. The dataset for this exercise was created by a query as an unregistered guest user of the NITRC Image Repository (NITRC-IR; RRID:SCR_004162; Kennedy et al., 2016). We queried the NITRC-IR search page (http://www.nitrc.org/ir/app/template/Index.vm; 1-Jan-2017) on the ‘MR’ tab with the following specification: age, 10–15 years old; Field Strength, 3. This query returned 24 subjects, which included subject identifier, age, handedness, gender, acquisition site, and field strength. We then selected the ‘mprage_anonymized’ scan type and ‘NIfTI’ file format in order to access the URLs (uniform resource locators) for the T1-weighted structural image data of these 24 subjects. The subjects had the following characteristics: age=13.5 +/- 1.4 years; 16 males, 8 females; 8 right handed, 1 left and 15 unknown. All of these datasets were from the 1000 Functional Connectomes project (Biswal et al., 2010), and included 9 subjects from the Ann Arbor sub-cohort, and 15 from the New York sub-cohort. We captured this data in tabular form (Supplementary File 1). Following the recommendations of the Joint Declaration of Data Citation Principles (Starr et al., 2015), we used the Image Attribution Framework (Honor et al., 2016) to create a unique identifier for this data collection (image collection: doi, 10.18116/C6C592; Kennedy, 2017). Data collection identifiers are useful in order to track and attribute future reuse of the dataset and maintain the credit and attribution connection to the constituent images of the collection which may, in general, come from heterogeneous sources. Representative images from this collection are shown in Figure 1.\n\nThe workflow. For this example, we use a simple workflow designed to generate subcortical structural volumes. 
We used the following tools from the FMRIB software library (FSL, RRID:SCR_002823; Jenkinson et al., 2012): conformation of the data to FSL standard space (fslreorient2std), brain extraction (BET), tissue classification (FAST), and subcortical segmentation (FIRST).\n\nThis workflow is represented in Nipype (RRID:SCR_002502; Gorgolewski et al., 2011) to facilitate workflow execution and provenance tracking. The workflow is available in the GitHub repository. The workflow also includes an initial step that accesses the contents of Supplementary Table 1, which are pulled from a Google Docs spreadsheet (https://docs.google.com/spreadsheets/d/11an55u9t2TAf0EV2pHN0vOd8Ww2Gie-tHp9xGULh_dA/edit?usp=sharing) to copy the specific data files to the system, and a step that extracts the volumes (in terms of number of voxels and absolute volume) of the resultant structures. In this workflow, the following regions are assessed: brain and background (as determined from the masks generated by BET, the brain extraction tool), gray matter, white matter and CSF (from the output of FAST), and left and right accumbens, amygdala, caudate, hippocampus, pallidum, putamen, and thalamus-proper (from the output of FIRST).\n\nThe sequence and dependence of processing events used in this example re-executable publication.\n\nThe execution environment. In order to utilize a computational environment that is, in principle, accessible to other users in a configuration identical to the one used to carry out this analysis, we use the NITRC Computational Environment (NITRC-CE, RRID:SCR_002171). The NITRC-CE is built upon NeuroDebian (RRID:SCR_004401; Hanke & Halchenko, 2011), and comes with FSL (version 5.0.9-3~nd14.04+1) pre-installed on an Ubuntu 12.04 operating system. We run the computational environment on the Amazon Web Services (AWS) elastic cloud computing (EC2) environment. With EC2, the user can select properties of their virtual machine (number of cores, memory, etc.) 
in order to scale the power of the system to their specific needs. For this paper, we used the NITRC-CE v0.42, with the following specific identifier (AMI ID): ami-ce11f2ae.\n\nSetting up the software environment on a different machine. To re-execute a workflow on a different machine or cluster than the one used originally, the first step is to set up the software environment. A README.md file in the GitHub repository describes how to set up this environment on GNU/Linux and MacOS systems. We assume FSL is installed and accessible on the command line. A Python 2.7.12 environment can be set up and the Nipype workflow re-executed with a few shell commands, as noted in the README.md.\n\nWe performed the analysis (the above described workflow applied to the above described data, using the described computational system) and stored these results in our GitHub repository as the ‘reference run’, representing the official result that we are publishing for this analysis.\n\nGenerating the reference run. In order to run the analysis we executed the following steps:\n\n1) Launch an instance of NITRC-CE version v0.42 from AWS (we selected a 16 core c4.8xlarge instance type)\n\n2) Execute the following commands on this system to install the workflow, configure the environment and run the workflow:\n\n\n\nThe details within the Simple_Prep.sh Script: In order to run this workflow we need both FSL tools and a Python environment to run the Nipype workflow. We achieve the specification of the Python environment using conda, a package manager that can be run without administrative privileges across different platforms. An environment specification file ensures that specific versions of Python and other libraries are installed and used. The setup script then downloads the Simple Workflow repository and creates and activates the specifically versioned Python environment for Nipype.\n\nExact re-execution. 
In principle, any user could run the analysis steps, as described above, to obtain an exact replication of the reference results. The similarity of this result and the reference result can be verified by running the following command on the computational environment:\n\n\n\nThis program will compare the new results to the archived reference results and report on any differences, allowing for a numeric tolerance of 1e-6. If differences are found, a comma separated values (CSV) file is generated that quantifies these differences.\n\nRe-execution on other systems. While the reference analysis was run using NITRC-CE (Ubuntu 12.04) running on AWS, this analysis workflow can be run, locally or remotely, on many different operating systems. In general, the exact results of this workflow depend on the exact operating system, hardware, and software versions. Execution of the above commands can be accomplished on any other Mac OS X or GNU/Linux distribution, as long as FSL is installed. In these cases, the results of the ‘python check_output.py’ command may indicate some numeric differences in the resulting volumes. In order to demonstrate these potential differences, we ran this identical workflow on Mac OS X and Ubuntu 16.04 platforms.\n\nIn addition to the reference run, the code for the project is housed in the Github repository. This allows integration with external services, such as CircleCI (http://circleci.com), which can re-execute the computation every single time a change is accepted into the code repository. Currently, the continuous integration testing runs on amd64 Debian (Wheezy) and uses FSL (5.0.9) from NeuroDebian. This re-execution generates results that are compared with the reference run, allowing us to evaluate a similar analysis automatically.\n\n\nResults\n\nThe specific versions of data used in this publication are available from NITRC. The code, environment details, and reference output are all available from the GitHub repository. 
The results of the reference run are stored in the expected_output folder of the GitHub repository at https://github.com/ReproNim/simple_workflow/tree/b0504592edafb8d4c6336a2497c216db5909ddf6/expected_output. By sharing the results of this reference run, as well as the data and workflow, and a program to compare results from different runs, we can enable others to verify that they can arrive at the exact same result (if they use the exact same execution environment), or to see how close they come to the reference results if they utilize a different computational system (that may differ in terms of operating system, software versions, etc.).\n\nWhen the reference run is re-executed in the same environment there is no observed difference in the output. We also compared the execution of the reference run and re-execution in a separate MacOS environment. Table 1 indicates the numerical differences found in these alternate system example runs.\n\nResults are shown from the reference run (AWS Ubuntu 12.04) and a comparison run executed on a Mac OS X (10.10.4) system. The mean differences between these two systems are also summarized.\n\n\nDiscussion\n\nRe-executability is an important first step in the establishment of a more comprehensive framework of reproducible computing. In order to properly compare the results of multiple papers, the underlying details of processing are essential for interpreting the causes of ‘similarity’ and ‘dissimilarity’ between findings. By explicitly including linkage between a publication and its data, workflow, execution environment and results, we can enhance the ability of the community to examine the issues related to reproducibility of specific findings.\n\nIn this publication, we are not looking at the causes of operating system dependence of neuroimaging results, but rather aim to emphasize the presence of this source of analysis variation, and examine ways to reduce this source of variance. 
Detailed results of neuroimaging analyses have been shown to be dependent on the exact details of the processing, specific computational operating system and software version (Glatard et al., 2015). In this work, we replicate the observation that, despite an exact match on the data and workflow, the results of analysis differ (even if only very slightly) between execution in different operating systems. While the volumetric differences in this case are not large, they illustrate the general nature of this overall concern.\n\nPublications can be made re-executable relatively simply by including links to the data, workflow, and execution environment. A re-executable publication with shared results is thus verifiable, by both the authors and others, increasing the trust in the results. The current example shows a simple volumetric workflow on a small dataset in order to demonstrate the way in which this could work in the real world. We felt it important to document this on a small problem (in terms of data and analysis complexity) in order to encourage others to actually verify these results, which is a practice we would like to see become more routine and feasible in the future. While this example approach is ‘simple’ in the context of what it accomplishes, it is still a rather complex and ad hoc procedure to follow. As such, it provides a roadmap for improvement, simplification, and standardization of the ways that these descriptive procedures can be handled.\n\nProgress in simplifying this simple example can be expected in the near future on many fronts. Software deployments that are coupled with specific execution environments (such as Docker, Vagrant, Singularity, or other virtual or container machine instances) are now being deployed for common neuroimaging applications. 
In addition, more standardized data representations (such as BIDS, Gorgolewski et al., 2016; NIDM, Gorgolewski et al., 2016; BDBags, http://bd2k.ini.usc.edu/tools/bdbag/) will simplify how experimental data is assembled for sharing and use in specific software applications. Data distributions with clear versioning of the data, such as DataLad (http://datalad.org), will unify versioned access to data resources and sharing of derived results. While the workflow in this case is specified using Nipype, extensions to LONI Pipeline, shell scripting, and other workflow specifications are easily envisioned. Tools necessary to capture local execution environments (such as ReproZip, http://reprozip.org) will help users to share the software environment of their workflows in conjunction with their publications more easily.\n\n\nConclusion\n\nWe have demonstrated a simple example of a fully re-executable publication to take publicly available neuroimaging data and compute some volumetric results. This is accomplished by augmenting the publication with four ‘supplementary’ files that include exact specification of 1) data, 2) workflow, 3) execution environment, and 4) results. This provides a roadmap to enhance the reproducibility of neuroimaging publications, by providing a basis for verifying the re-executability of individual publications and providing a more structured platform to examine the generalizability of the findings across changes in data, workflow details and execution environments. 
We expect these types of publication considerations to advance to a point where it can be relatively simple and routine to provide such supplementary materials for neuroimaging publications.\n\n\nSoftware and data availability\n\nWorkflow and results are available on GitHub at: https://github.com/ReproNim/simple_workflow.\n\nArchived workflow and results at the time of publication: doi, 10.5281/zenodo.266673 (Ghosh et al., 2017).\n\nLicense: Apache License, Version 2.0.\n\nThe data used in this publication are available at NITRC-IR (project, fcon_1000; image collection: doi, 10.18116/C6C592 - Kennedy, 2017) and referenced in Supplementary File 1. These data were originally gathered from the NITRC-IR, 1000 Functional Connectomes project, Ann Arbor and New York sub-projects.\n\n\nConsent\n\nThe data used are anonymized and publicly available at NITRC-IR. Consent for the data sharing was obtained by each of the sharing institutions.",
"appendix": "Author contributions\n\n\n\nDNK, SSG, YOH and JBP conceived the study. SG designed the workflow, DNK generated and executed the data query, YOH designed the execution environment, DBK designed the data model. DNK, SSG, J-BP, DAK and AGT executed the re-execution experiments. DNK prepared the first draft of the manuscript. All authors were involved in the revision of the draft manuscript and have agreed to the final content.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis work was supported by: NIH-NIBIB P41 EB019936 (ReproNim), NIH-NIBIB R01 EB020740 (Nipype), and NIH-NIMH R01 MH083320 (CANDIShare). J-BP was also partially supported by NIH NIH-NIDA 5U24 DA039832 (NIF).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgments\n\nThis work was conceived for and initially developed at OHBM Hackathon 2016 (http://brainhack.org/categories/ohbm-hackathon-2016/). We are exceedingly grateful to Cameron Craddock and the rest of the organizers of this event, and the Organization for Human Brain Mapping for support of their Open Science Special Interest Group (http://www.humanbrainmapping.org/i4a/pages/index.cfm?pageID=3712).\n\n\nSupplementary material\n\nSupplementary File 1: Data specification file. This file contains the basic demographics of the subjects (Subject, Age, Hand, Gender, and Acquisition Site) as well as the URL to the imaging data, as hosted at NITRC-IR (project, fcon_1000).\n\nClick here to access the data.\n\n\nReferences\n\nBiswal BB, Mennes M, Zuo XN, et al.: Toward discovery science of human brain function. Proc Natl Acad Sci U S A. 2010; 107(10): 4734–4739. PubMed Abstract | Publisher Full Text | Free Full Text\n\nButton KS, Ioannidis JP, Mokrysz C, et al.: Power failure: why small sample size undermines the reliability of neuroscience. Nat Rev Neurosci. 2013; 14(5): 365–376. 
PubMed Abstract | Publisher Full Text\n\nGhosh SS, Poline JB, Keator DB, et al.: ReproNim - Simple Paper v1.0.0 [Data set]. Zenodo. 2017. Data Source\n\nGlatard T, Lewis LB, Ferreira da Silva R, et al.: Reproducibility of neuroimaging analyses across operating systems. Front Neuroinform. 2015; 9: 12. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGorgolewski KJ, Auer T, Calhoun VD, et al.: The brain imaging data structure, a format for organizing and describing outputs of neuroimaging experiments. Sci Data. 2016; 3: 160044. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGorgolewski K, Burns CD, Madison C, et al.: Nipype: a flexible, lightweight and extensible neuroimaging data processing framework in python. Front Neuroinform. 2011; 5: 13. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHanke M, Halchenko YO: Neuroscience Runs on GNU/Linux. Front Neuroinform. 2011; 5: 8. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHonor LB, Haselgrove C, Frazier JA, et al.: Data Citation in Neuroimaging: Proposed Best Practices for Data Identification and Attribution. Front Neuroinform. 2016; 10: 34. PubMed Abstract | Publisher Full Text | Free Full Text\n\nIoannidis JP: Why most published research findings are false. PLoS Med. 2005; 2(8): e124. PubMed Abstract | Publisher Full Text | Free Full Text\n\nJenkinson M, Beckmann CF, Behrens TE, et al.: FSL. NeuroImage. 2012; 62(2): 782–790. PubMed Abstract | Publisher Full Text\n\nKennedy DN, Haselgrove C, Riehl J, et al.: The NITRC image repository. NeuroImage. 2016; 124(Pt B): 1069–1073. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKennedy DN: ReproNim Simple Workflow test dataset. ReproNim. 2017. Data Source\n\nPrinz F, Schlange T, Asadullah K: Believe it or not: how much can we rely on published data on potential drug targets? Nat Rev Drug Discov. 2011; 10(9): 712. 
PubMed Abstract | Publisher Full Text\n\nSimmons JP, Nelson LD, Simonsohn U: False-positive psychology: undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychol Sci. 2011; 22(11): 1359–1366. PubMed Abstract | Publisher Full Text\n\nSimonsohn U, Nelson LD, Simmons JP: P-curve: a key to the file-drawer. J Exp Psychol Gen. 2014; 143(2): 534–547. PubMed Abstract | Publisher Full Text\n\nStarr J, Castro E, Crosas M, et al.: Achieving human and machine accessibility of cited data in scholarly publications. PeerJ Comput Sci. 2015; 1: pii: e1. PubMed Abstract | Publisher Full Text | Free Full Text"
}
|
[
{
"id": "20292",
"date": "21 Feb 2017",
"name": "Konrad Hinsen",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis article aims to demonstrate how a neuroimaging study can be published in such a way that readers can re-execute the complete workflow with (relative) ease. Although the concrete study used as an example is probably of little scientific interest, it could serve as a template and guideline for other researchers using the same software tools.\nMy main issue with this paper is that I was unable to re-execute the workflow, in spite of the authors' efforts to document the process. A detailed technical explanation is provided at the end of this review. In addition to technical obstacles that could in principle be overcome, the main issue is the use of the proprietary software package FSL, whose licence can be interpreted as prohibiting its use in the context of a review for F1000Research. It is not clear to me if using FSL via NeuroDebian would somehow solve this problem, because I did not succeed in obtaining the NeuroDebian docker image.\nAnother issue of the interpretation of numerical tolerances. The article says that the authors made comparisons \"allowing for a numeric tolerance of 1e-6\", but no explanation is given for this particular choice, nor is it said if this is an absolute or a relative error criterion. Table 1 shows numerical results obtained on different platforms, but provides no guide for their interpretation. 
How big would a difference have to be to influence the scientific interpretation of the results?\nFinally, readers wishing to take this example as a starting point for preparing their own studies reproducibly would benefit from a more extensive discussion of the technical choices and of the work required for actually composing, rather than simply consulting, the authors' code repository. For example, it is not stated explicitly anywhere that the workflow takes the form of a Python script (run_demo_workflow.py). The use of conda in re-creating an environment with precise versions of each software package is also not generally known and deserves more explanation.\n\nAttempting to re-execute the example workflow\nI tried to re-execute the workflow on a MacBookPro running under macOS 10.11.6, using both procedures explained in the authors' README.md.\n1. \"Within your current environment\".\nThe first obstacle is \"Make sure FSL is available in your environment and accessible from the command line.\" Where do I get FSL? Which version(s) are acceptable? Please provide at the very least a link to the software's Web page (https://fsl.fmrib.ox.ac.uk/).\nI decided not to install FSL on my computer because I am not willing to accept the licence. It excludes commercial use, and in particular \"use of the Software to provide any service to an external organisation for which payment is received\". F1000Research offers reviewers a reduction on future article page charges, which could be interpreted as a form of payment. Beyond this specific legal issue, I also consider it unreasonable to request reviewers to register as users of proprietary software, providing personal data for marketing purposes in the process.\nI did, however, continue the setup process to check it for completeness. The instruction \"If you already have a `conda` environment, please follow the detailed steps below.\" lacks some precision: where exactly do I have to start if I already have a conda environment? 
The right answer is \"at 'conda config --add channels conda-forge'\", which I think is not obvious.\n2. \"Within Docker\"\nRunning the Simple_Prep_docker under macOS ends with the error message\nreadlink: illegal option -- f usage: readlink [-n] [file ...]\nI modified the script, replacing \"readlink\" by \"greadlink\" from Homebrew's coreutils package. The next error message then is\nsed: -i may not be used with stdin\nThere are two uses of \"sed -i\" in the script, but for neither one it is obvious under which conditions it would erroneously act on stdin. I decided to give up at this point. Considering the use of apt-get in the script, it probably requires Debian or Ubuntu Linux anyway.\nI am not sure the authors can do much to address this issue, given that writing platform-independent shell scripts is difficult to impossible, but they should at least say clearly in the installation instructions for which platforms they have actually tested the installation.",
"responses": [
{
"c_id": "2742",
"date": "15 Jun 2017",
"name": "David Kennedy",
"role": "Author Response",
"response": "We thank the reviewer for their thoughtful review and helpful comments. We have revised the manuscript and design of this manuscript to meet many of the concerns raised, and we believe that this has resulted in an improved presentation. We have also reworked the repository and the way the experiment can be reproduced and extended. Reviewer Comment: My main issue with this paper is that I was unable to re-execute the workflow, in spite of the authors' efforts to document the process. A detailed technical explanation is provided at the end of this review. Response: We are sorry that this did not work as expected in your case. As further detailed in our response to your detailed technical explaination below, and in the github ‘issue’ (https://github.com/ReproNim/simple_workflow/issues/15), we hope that we have successfully ammended the procedure to make the system even more re-executable by including the Docker re-execution option. Reviewer Comment: In addition to technical obstacles that could in principle be overcome, the main issue is the use of the proprietary software package FSL, whose licence can be interpreted as prohibiting its use in the context of a review for F1000Research. It is not clear to me if using FSL via NeuroDebian would somehow solve this problem, because I did not succeed in obtaining the NeuroDebian docker image. Response: We agree that licensing issues are very important to consider. Many of the users in the neuroimaging community have FSL locally, and have consented to the FSL licensing terms; hence re-execution of this workflow incurs no additional licensing issues. Users using a NITRC-CE AWS instance are explicitly presented a page that that details the licensing terms of the software installed on that instance and use of the instance is with the acknowledgement of the licensing terms. 
Our initial Docker did not present these licensing terms, but in our new README for the repository and Docker, we have included a notice for commercial use. Reviewer Comment: Another issue is the interpretation of numerical tolerances. The article says that the authors made comparisons \"allowing for a numeric tolerance of 1e-6\", but no explanation is given for this particular choice, nor is it said if this is an absolute or a relative error criterion. Table 1 shows numerical results obtained on different platforms, but provides no guide for their interpretation. How big would a difference have to be to influence the scientific interpretation of the results? Response: We now provide more information on this issue. The threshold we apply in the ‘check_output.py’ script is simply selected to catch the presence of any numerical difference between the test run and the reference run. The default in the version of numpy used in the present container is 1e-5. The biological interpretation of these differences is multi-faceted. On the one hand, the correlation of the volumetric results within each individual subject, and in aggregate across the population is very high (0.918 - 1.000). On the other hand, we see, per structure, a range of volumetric differences that reveals a large span of percentage of structure differences, differences that are, not surprisingly, dependent upon the overall size of the structure itself. The extremes of this distribution of the average difference (provided in Table 1) can be as large as 7.7% for the right amygdala. Sources of volumetric variance in this range can be troubling as biological changes on this order of volumetric difference can otherwise be the types of changes that studies are designed to observe. 
Reviewer Comment: Finally, readers wishing to take this example as a starting point for preparing their own studies reproducibly would benefit from a more extensive discussion of the technical choices and of the work required for actually composing, rather than simply consulting, the authors' code repository. For example, it is not stated explicitly anywhere that the workflow takes the form of a Python script (run_demo_workflow.py). The use of conda in re-creating an environment with precise versions of each software package is also not generally known and deserves more explanation. Response: We now add more discussion of the topic of approaches that others can take to generate more re-executable workflows, and the various challenges in representing the execution environment. Specifically, conda (https://conda.io/) is a cross-platform package manager and handles user installations for many packages into a controlled environment. Unlike many operating system package managers (e.g., yum, apt), Conda does not require root privileges. This allows individuals to replicate isolated virtual environments easily without requiring system administrator help. Conda uses standard PATH variables to isolate the environments. Coupled with Anaconda cloud and conda-forge, Conda is capable of installing versioned dependencies of Python and other packages. Reviewer Comment: Attempting to re-execute the example workflow. The first obstacle is \"Make sure FSL is available in your environment and accessible from the command line.\" Where do I get FSL? Which version(s) are acceptable? Please provide at the very least a link to the software's Web page (https://fsl.fmrib.ox.ac.uk/). Response: Links are now provided, and we remind the reader of this page that 5.0.9 is the version of the reference run. Again, half of the point of this exercise is to provide the opportunity to see what happens, numerically, IF the user is already using a different environment or version. 
Reviewer Comment: I decided not to install FSL on my computer because I am not willing to accept the licence. It excludes commercial use, and in particular \"use of the Software to provide any service to an external organisation for which payment is received\". F1000Research offers reviewers a reduction on future article page charges, which could be interpreted as a form of payment. Beyond this specific legal issue, I also consider it unreasonable to request reviewers to register as users of proprietary software, providing personal data for marketing purposes in the process. Response: For these various reasons, we have now elected to also include a Docker version of this workflow that precludes the need to download specific software to one's local computer. We have also included a commercial use statement from the FSL developers in the README. While most members of the neuroimaging community have access to FSL, in the future, we hope to move to FOSS versions of imaging tools to make such testing accessible to a broader community. Reviewer Comment: I did, however, continue the setup process to check it for completeness. The instruction \"If you already have a `conda` environment, please follow the detailed steps below.\" lacks some precision: where exactly do I have to start if I already have a conda environment? The right answer is \"at 'conda config --add channels conda-forge'\", which I think is not obvious. Response: We have updated the script to test for existence of software and create a standalone environment that does not interfere with any user environment directly. The user can activate this environment if the user so chooses. Since FSL download on non-Debian systems requires going to a site, we simply recommend that people do so on their own. The same script is also used in the Docker container. We have updated this in the README for the repository. Reviewer Comment: 2. \"Within Docker\" etc. 
I am not sure the authors can do much to address this issue, given that writing platform-independent shell scripts is difficult to impossible, but they should at least say clearly in the installation instructions for which platforms they have actually tested the installation. Response: We thank the reviewer for alerting us to this compatibility issue. As we have discussed in the github issue, as a workaround, we have first pre-generated a Dockerfile so that the reviewer could run the analysis and validate our findings. We have recently finalized our changes to the script to make it compatible with both Linux and OSX, so it should now run natively on the reviewer's infrastructure without issues. We very much appreciate the time and effort the reviewer took to help finalize this script."
}
]
},
{
"id": "20114",
"date": "06 Mar 2017",
"name": "Allan J. MacKenzie-Graham",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nI found the paper to be clear and straightforward, easy to read and understand. I find the use of a NITRC-CE virtual machine as an execution environment in combination with Nipype an excellent mechanism for facilitating re-executability and documentation of a series of processing steps. Combined with the use of GitHub to keep track of the elements, I believe that this is a remarkably good and fairly easily implemented solution. This encapsulation is excellent for re-execution of an analysis or for applying exactly the same analysis to a novel set of data, however, as the authors state, it only begins to address reproducibility and generalizability across multiple execution environments.\nIn this context, it is not clear to me what the processing environment was on the Mac OS comparison test, specifically what version of the OS and what version of the FSL tools were used in each case. Second, Ubuntu 12.04 is used within NITRC-CE v0.42, but the Mac OS comparison appears to be done against a machine running Ubuntu 16.04. I assume that 16.04 is a typo, but it should be corrected - and if it is not a typo, then a justification for a change in OS version should be given. In previous work by myself1 and others2, we showed that differences in software version and compilation settings can lead to measurably different results. A statement expressing the software versions used and that the software was compiled using similar settings (e.g. 
same level of optimization, same architecture was used - x86 or x64, etc.) would be reassuring. I realize that this similarity in environment is implied by the mechanism used to install the software, but it should be stated explicitly.\nOverall, this is an excellent manuscript and implementation for sharing re-executable neuroimaging results. The methods and reporting can readily be repeated and replicated, supporting the main thrust of the paper.",
"responses": [
{
"c_id": "2741",
"date": "15 Jun 2017",
"name": "David Kennedy",
"role": "Author Response",
"response": "We thank the reviewer for their thoughtful review and helpful comments. We have revised the manuscript and design of this manuscript to meet many of the concerns raised, and we believe that this has resulted in an improved presentation. We have also reworked the repository and the way the experiment can be reproduced and extended. Reviewer Comment: In this context, it is not clear to me what the processing environment was on the Mac OS comparison test, specifically what version of the OS and what version of the FSL tools were used in each case. Response: In the various ‘comparison runs’ that we present, we now take more care in providing complete descriptions of the OS and software versions that are being used. Reviewer Comment: Second, Ubuntu 12.04 is used within NITRC-CE v0.42, but the Mac OS comparison appears to be done against a machine running Ubuntu 16.04. I assume that 16.04 is a typo, but it should be corrected - and if it is not a typo, then a justification for a change in OS version should be given. In previous work by myself1 and others2, we showed that differences in software version and compilation settings can lead to measurably different results. A statement expressing the software versions used and that the software was compiled using similar settings (e.g. same level of optimization, same architecture was used - x86 or x64, etc.) would be reassuring. I realize that this similarity in environment is implied by the mechanism used to install the software, but it should be stated explicitly. Response: As above, there is clearly some ambiguity as to how we presented the ‘comparison runs’ in the original manuscript that we try to clarify in this version. In the original, the NITRC-CE AWS Ubuntu 12.04 FSL 5.0.9 run was the ‘reference’ and we compared to runs of this workflow (using FSL 5.0.9 in all cases) on local MAC OS 10.10.4, and Ubuntu 16.04 platforms. 
In the current version, we add a Docker version of the workflow (FSL 5.0.9, Debian jessie (8.7)), and enhance the description of the comparison runs. Finally, thanks for the additional references, which we now also include."
}
]
}
] | 1
|
https://f1000research.com/articles/6-124
|
https://f1000research.com/articles/6-454/v1
|
10 Apr 17
|
{
"type": "Research Article",
"title": "Patient waiting time in the outpatient clinic at a central surgical hospital of Vietnam: Implications for resource allocation",
"authors": [
"Tho Dinh Tran",
"Uy Van Nguyen",
"Vuong Minh Nong",
"Bach Xuan Tran",
"Tho Dinh Tran",
"Uy Van Nguyen",
"Vuong Minh Nong",
"Bach Xuan Tran"
],
"abstract": "Background: Patient waiting time is considered as a crucial parameter in the assessment of healthcare quality and patients’ satisfaction towards healthcare services. Data concerning this has remained limited in Vietnam. Thus, this study aims to assess patient waiting time in the outpatient clinic in Viet Duc Hospital (Hanoi, Vietnam) in order to enable stakeholders to inform evidence-based interventions to improve the quality of healthcare services. Methods: A cross-sectional study was conducted from June 2014 to June 2015 in the outpatient clinic at Viet Duc Hospital. Waiting time stratified by years (2014 and 2015), months of the year, weekdays, and hours of the day were extracted from Hospital Management software and carefully calculated. Stata 12.0 was employed to analyze data, including the average time (M± SD), frequencies and percentage (%). Results: There was a total of 137,881 patients involved in the study. The average waiting time from registration to preliminary diagnosis in 2014 was 50.41 minutes, and in 2015 was 42.05 minutes. A longer waiting time was recorded in the morning and in those having health insurance. Conclusions: Our study highlights the essential need for human resource promotion to reduce patient waiting time. Also, attention should be paid to the simplification of administrative procedures in order to reduce waiting time among insured patients.",
"keywords": [
"Patient waiting time",
"outpatient clinic",
"Viet Duc Hospital",
"health insurance"
],
"content": "Introduction\n\nAlthough patient waiting time has been defined as an important indicator in the assessment of healthcare quality1 and patients’ satisfaction towards healthcare services2,3, lengthy outpatient waiting time has posed a great challenge to maximize healthcare quality4. This issue is worse among countries with low provider-patient ratios5, and Vietnam is among highly populated countries that are fueled by patient overload, especially in the central hospitals6. Thus, extended waiting time has remained highly prevalent.\n\nPrevious study suggests that appropriate operation of medical examinations could shorten patient waiting times7. In Vietnam, attention has already been paid to the assessment of the length of medical examination. In 2015, a study in Ha Dong General Hospital by Nguyen indicated the average time of medical examination was 96.91 ± 72.16 minutes. The average waiting time was 63.05 ± 62.96 minutes8. In 2012, a study by Le et al. conducted in outpatient clinic (Trung Vuong Emergency Hospital) suggested that the average time spent from registration to doctors’ conclusions was 246.87 ± 104.55 minutes (4.11 ± 1.7 hours)9. Accordingly, patient waiting time is influenced by various factors, such as working procedure, patient overload and appointment schedule10,11.\n\nViet Duc is a central hospital, with the aim of ensuring health for Northern Vietnamese patients. The outpatient clinic welcomes hundreds of patients on a daily basis and is often overloaded. Thus, Viet Duc Hospital is always seeking evidence-based solutions to enhance the quality of healthcare services. However, data on patient waiting time in the outpatient clinic at Viet Duc Hospital remains limited. 
Thus, the aim of this study was to examine patient waiting times in the outpatient clinic, Viet Duc Hospital, thereby enabling the hospital administration to design evidence-based interventions to improve the satisfaction of patients.\n\n\nMethods\n\nA cross-sectional study was conducted from June 2014 to June 2015 in the outpatient clinic of Viet Duc Hospital (Hanoi, Vietnam). The hospital is the largest surgical center of Vietnam, with approximately 1300 beds and approximately 150,000 patients using outpatient services annually.\n\nAll patients that underwent a medical examination during this time were eligible for the research. There were no specific exclusion criteria used in this study. Data from a total of 137881 patients were extracted for final analysis.\n\nTime data was collected via Hospital Management Software, which was developed to support hospital management in Viet Duc Hospital. The waiting time for utilizing a service was computed as the time at which the patient met the physician minus the time at which the patient registered. The waiting time for health service use was analyzed regarding years (2014 and 2015), months of the year, weekdays and hours of the day.\n\nData was cleaned and entered using Epidata 3.1. Stata 12.0 was employed to analyze data: the average time (M±SD), frequencies and percentage (%). Since we extracted data from the software, there was no bias in this study.\n\nThe study was approved by the IRB of Viet Duc Hospital, Hanoi, Vietnam. Data collection procedures and the use of data for analysis were also approved by the directors of Viet Duc Hospital. No personal data concerning patients was collected in this study.\n\n\nResults\n\nTable 1 illustrates the average waiting time of patients in the outpatient clinic of Viet Duc Hospital. There was a total of 137881 patients who had a medical examination during the time of conducting the research, in which 38298 patients had health insurance, accounting for approximately 27.8%. 
The average waiting time from registration to preliminary diagnosis in 2014 was 50.41 minutes and in 2015 was 42.05 minutes.\n\nPatient waiting time regarding the hours of the day are presented in Table 2. The largest number of patients having a medical examination were in the hours 7:00–8:00 and 8:00–9:00. The lowest number of patients having medical examination were in the hours 11:00–12:00, 15:00–16:00 and 16:00–16:30 (because the hospital was closed at 16:30). The longest patient waiting time was at 6:30 to 7:00, and the time among those having health insurance was 81.54 minutes, while the longest patient waiting time among those who did not have health insurance was 70.63 minutes.\n\nTable 3 shows patient waiting time regarding weekdays. The largest number of patients having a medical examination was on Monday, Tuesday and Wednesday. There were fewer patients on Thursday and Friday. The shortest waiting time was on Thursday, while the longest waiting time was on Tuesday.\n\nTable 4 demonstrates patient waiting time regarding the month of the year. Generally, few patients had medical examinations in February, 2015. The longest waiting time was in July, August, and September for both insured and uninsured patients.\n\n\nDiscussion\n\nThe purpose of this study was to assess the patient waiting time in an outpatient clinic, Viet Duc Hospital, Hanoi, Vietnam. Our findings indicate that the average waiting time from registration to preliminary diagnosis was decreased in a period of two years from 2014 to 2015. Findings also suggest the difference regarding waiting time between the morning and the afternoon, those having health insurance compared to those that did not have health insurance.\n\nThe average waiting time was lower than previous studies at Ha Dong General Hospital (Hanoi City)8, Trung Vuong Emergency Hospital (Ho Chi Minh city)9, and Nguyen Trai Hospital (Ho Chi Minh City)12. 
However, our findings were higher than studies by Vu at the National Hospital of Tropical Diseases (Hanoi City)7, and Cole in Australia13. It could be hypothesized that, because the outpatient clinic at Viet Duc Hospital is well-qualified (with skilled physicians and advanced medical technologies), patients come directly to the Hospital without visiting healthcare facilities at the grass-roots level, leading to overload. In fact, each department at the hospital receives approximately 130,000 medical visits every year; therefore, overload frequently happens. The study in Trung Vuong Emergency Hospital was conducted in 2011, when decision 1313/QĐ-BYT related to the medical examination procedure had not yet been implemented. Therefore, patient waiting time might have been prolonged.\n\nThe higher number of patients visiting in the morning than in the afternoon observed in our study could potentially be explained by patients preferring to have health consultations in the morning, as they could receive the results of clinical tests within the day. A study by Han et al. also indicated that the number of patients that visit An Giang Cardiovascular Hospital (An Giang Province) in the morning is higher than in the afternoon14. Thus, our study highlights the essential need for human resources enhancement, especially in the morning. In addition, healthcare providers should be distributed appropriately to shorten patients’ time in medical consultations.\n\nNoticeably, those having health insurance had to wait longer for their turn than those that did not have health insurance. This may potentially reflect shortcomings regarding complicated administrative procedures that could extend waiting time8. In fact, cumbersome administrative procedures related to health insurance remain a pressing issue in the Vietnamese healthcare system15. 
Moreover, the government has planned to move toward universal health insurance, under which 80% of the total population would be covered by health insurance and out-of-pocket health expenses would be reduced to under 40% by 202015. Since this strategy may be hampered by health insurance-related procedures, stakeholders should pay attention to simplifying administrative procedures for insured patients.\n\n\nConclusions\n\nOur results provide evidence for authorities and stakeholders to create future interventions, in order to enhance patients’ satisfaction and the quality of healthcare services. Primarily, human resources promotion and distribution should be emphasized in outpatient clinics, and health insurance-related administrative procedures should be simplified.\n\n\nData availability\n\nDataset 1: Raw data used in the construction of Table 1–Table 4. Data from June 2014-June 2015 detailing waiting times of patients and if health insurance was present. doi: 10.5256/f1000research.11045.d15711216",
"appendix": "Author contributions\n\n\n\nTDT, UVN, BXT conceived, designed and conducted the experiments; TDT, UVN, VMN collected the data; TDT, UVN, BXT, VMN analyzed and interpreted the data; TDT, UVN, BXT, VMN wrote the paper. All authors read and revised the manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nAcknowledgements\n\nWe would like to express our deepest gratitude for the great contributions of the authorship and the support of the Director of Viet Duc Hospital to conduct this study.\n\n\nReferences\n\nBar-dayan Y, Leiba A, Weiss Y, et al.: Waiting time is a major predictor of patient satisfaction in a primary military clinic. Mil Med. 2002; 167(10): 842–5. PubMed Abstract\n\nThompson DA, Yarnold PR, Williams DR, et al.: Effects of actual waiting time, perceived waiting time, information delivery, and expressive quality on patient satisfaction in the emergency department. Ann Emerg Med. 1996; 28(6): 657–65. PubMed Abstract | Publisher Full Text\n\nThompson DA, Yarnold PR: Relating patient satisfaction to waiting time perceptions and expectations: the disconfirmation paradigm. Acad Emerg Med. 1995; 2(12): 1057–62. PubMed Abstract | Publisher Full Text\n\nMcCarthy K, McGee HM, O'Boyle CA: Outpatient clinic waiting times and non-attendance as indicators of quality. Psychology Health & Medicine. 2000; 5(3): 287–293. Publisher Full Text\n\nMehra P: Outpatient clinic waiting time, provider communication styles and satisfaction with healthcare in India. Int J Health Care Qual Assur. 2016; 29(7): 759–77. PubMed Abstract | Publisher Full Text\n\nReport on Healthcare Sector in Vietnam. 2012. Reference Source\n\nVu TM: Patient waiting time in outpatient clinic in National Hospital of Tropical Diseases from November 2009 to February 2010. In Bachelor Education Thesis. 
Hanoi Medical University: Hanoi, 2010.\n\nNguyen HTT: The time of using healthcare services and related factors in outpatient clinic, Ha Dong General Hospital, Hanoi. In Master Thesis. Hanoi Medical University: Hanoi, 2015.\n\nLe CT, Huynh TT, Do TC: Procedures of medical examination in outpatient clinic- Trung Vuong Emergency Hospital. The Medicine Journal of Ho Chi Minh City. 2012; 4(16).\n\nZhu Z, Heng BH, Teow KL: Analysis of factors causing long patient waiting time and clinic overtime in outpatient clinics. J Med Syst. 2012; 36(2): 707–13. PubMed Abstract | Publisher Full Text\n\nGroome LJ, Mayeaux EJ Jr: Decreasing extremes in patient waiting time. Qual Manag Health Care. 2010; 19(2): 117–28. PubMed Abstract | Publisher Full Text\n\nNguyen HT: The satisfaction of patients using health insurance towards healthcare quality in Nguyen Trai hospital, Ho Chi Minh city in 2011 and related factors. In Master Thesis. University of Medicine and Pharmacy, Ho Chi Minh City, 2001.\n\nCole FL: Determinants of patient waiting time in the general outpatient department of a tertiary health institution in Australia. Australia 2000.\n\nNguyen HNT, Nguyen VHT, Bui TMH: Patient waiting time and satisfaction of patients in outpatient clinic, An Giang Cardiovascular hospital. 2012.\n\nSomanathan A, Ajay T, Dao HL, et al.: Moving toward Universal Coverage of Social Health Insurance in Vietnam: Assessment and Options. Directions in development; human development. Washington, DC: World Bank Group, 2014. Publisher Full Text\n\nTran TD, Nguyen UV, Nong Minh V, et al.: Dataset 1 in: Patient waiting time in the outpatient clinic at a central surgical hospital of Vietnam: Implications for resource allocation. F1000Research. 2017. Publisher Full Text"
}
|
[
{
"id": "21992",
"date": "10 May 2017",
"name": "Duong Minh Duc",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe paper is short and concise but the statistical test is too simple. The authors should use some bivariate analysis.\n\nThe discussion about \"Moreover, the government has planned to move toward universal health insurance, where 80% of the total population are covered by health insurance and reduce out-of-pocket health expenses to under 40% by 2020. Since this strategy may be hampered by health insurance-related procedures, stakeholders should pay attention on simplifying administrative procedures for insured patients.\" could be not appropriate because this study has been conducted in a national-level hospital and it could not be refer to universal health coverage which should be provided at grassroot level (commune health station or district hospital).\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nPartly\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? No",
"responses": [
{
"c_id": "2702",
"date": "11 May 2017",
"name": "Vuong Nong Minh",
"role": "Author Response",
"response": "Dear Mr Duc,Thank you very much for your comments. We would very carefully consider your feedback and revise our manuscript. We hope that our newest version makes you satisfy.Sincerely,Authors"
}
]
}
] | 1
|
https://f1000research.com/articles/6-454
|
https://f1000research.com/articles/6-876/v1
|
13 Jun 17
|
{
"type": "Opinion Article",
"title": "Four simple recommendations to encourage best practices in research software",
"authors": [
"Rafael C. Jiménez",
"Mateusz Kuzak",
"Monther Alhamdoosh",
"Michelle Barker",
"Bérénice Batut",
"Mikael Borg",
"Salvador Capella-Gutierrez",
"Neil Chue Hong",
"Martin Cook",
"Manuel Corpas",
"Madison Flannery",
"Leyla Garcia",
"Josep Ll. Gelpí",
"Simon Gladman",
"Carole Goble",
"Montserrat González Ferreiro",
"Alejandra Gonzalez-Beltran",
"Philippa C. Griffin",
"Björn Grüning",
"Jonas Hagberg",
"Petr Holub",
"Rob Hooft",
"Jon Ison",
"Daniel S. Katz",
"Brane Leskošek",
"Federico López Gómez",
"Luis J. Oliveira",
"David Mellor",
"Rowland Mosbergen",
"Nicola Mulder",
"Yasset Perez-Riverol",
"Robert Pergl",
"Horst Pichler",
"Bernard Pope",
"Ferran Sanz",
"Maria V. Schneider",
"Victoria Stodden",
"Radosław Suchecki",
"Radka Svobodová Vařeková",
"Harry-Anton Talvik",
"Ilian Todorov",
"Andrew Treloar",
"Sonika Tyagi",
"Maarten van Gompel",
"Daniel Vaughan",
"Allegra Via",
"Xiaochuan Wang",
"Nathan S. Watson-Haigh",
"Steve Crouch",
"Monther Alhamdoosh",
"Michelle Barker",
"Bérénice Batut",
"Mikael Borg",
"Salvador Capella-Gutierrez",
"Neil Chue Hong",
"Martin Cook",
"Manuel Corpas",
"Madison Flannery",
"Leyla Garcia",
"Josep Ll. Gelpí",
"Simon Gladman",
"Carole Goble",
"Montserrat González Ferreiro",
"Alejandra Gonzalez-Beltran",
"Philippa C. Griffin",
"Björn Grüning",
"Jonas Hagberg",
"Petr Holub",
"Rob Hooft",
"Jon Ison",
"Daniel S. Katz",
"Brane Leskošek",
"Federico López Gómez",
"Luis J. Oliveira",
"David Mellor",
"Rowland Mosbergen",
"Nicola Mulder",
"Yasset Perez-Riverol",
"Robert Pergl",
"Horst Pichler",
"Bernard Pope",
"Ferran Sanz",
"Maria V. Schneider",
"Victoria Stodden",
"Radosław Suchecki",
"Radka Svobodová Vařeková",
"Harry-Anton Talvik",
"Ilian Todorov",
"Andrew Treloar",
"Sonika Tyagi",
"Maarten van Gompel",
"Daniel Vaughan",
"Allegra Via",
"Xiaochuan Wang",
"Nathan S. Watson-Haigh"
],
"abstract": "Scientific research relies on computer software, yet software is not always developed following practices that ensure its quality and sustainability. This manuscript does not aim to propose new software development best practices, but rather to provide simple recommendations that encourage the adoption of existing best practices. Software development best practices promote better quality software, and better quality software improves the reproducibility and reusability of research. These recommendations are designed around Open Source values, and provide practical suggestions that contribute to making research software and its source code more discoverable, reusable and transparent. This manuscript is aimed at developers, but also at organisations, projects, journals and funders that can increase the quality and sustainability of research software by encouraging the adoption of these recommendations.",
"keywords": [
"Open Source",
"code",
"software",
"guidelines",
"best practices",
"recommendations",
"Open Science",
"quality",
"sustainability",
"FAIR"
],
"content": "Introduction\n\nNew discoveries in modern science are underpinned by automated data generation, processing and analysis: in other words, they rely on software. Software, particularly in the context of research, is not only a means to an end, but is also a collective intellectual product and a fundamental asset for building scientific knowledge. More than 90% of scientists acknowledge software is important for their own research and around 70% say their research would not be feasible without it (Hannay et al., 2009; Hettrick et al., 2016).\n\nScientists are not just users of software; they are also prime producers (Goble, 2014). 90% of scientists developing software are primarily self-taught and lack exposure and incentives to adopt software development practices that are widespread in the broader field of software engineering (Wilson et al., 2014). As a result, software produced for research does not always meet the standards that would ensure its quality and sustainability, affecting the reproducibility and reusability of research (Crouch et al., 2013).\n\nOpen Source Software (OSS) is software with source code that anyone can inspect, modify and enhance. OSS development is used by organisations and projects to improve accessibility, reproduction, transparency and innovation in scientific research (Mulgan et al., 2005; Nosek et al., 2015). OSS not only increases discoverability and visibility, but it also engages developer and user communities, provides recognition for contributors, and builds trust among users (McKiernan et al., 2016). OSS development significantly contributes to the reproducibility of results generated by the software and facilitates software reusability and improvement (Ince et al., 2012; Perez-Riverol et al., 2014). Opening code to the public is also an opportunity for developers to showcase their work, so it becomes an incentive for adoption of software development best practices (Leprevost et al., 2014). 
Thus, OSS can be used as a vehicle to promote the quality and sustainability of software, leading to the delivery of better research.\n\nThis manuscript describes a core set of OSS recommendations to improve the quality and sustainability of research software. It does not propose new software development best practices, but rather provides easy-to-implement recommendations that encourage adoption of existing best practices. These recommendations do not aim to describe in detail how to develop software, but rather lay out practical suggestions on top of Open Source values that go towards making research software and its source code more discoverable, reusable and transparent.\n\nThe OSS recommendations should be applied following existing and complementary guidelines like best practices, manifestos and principles that describe more specific procedures on how to develop and manage software. Some of these complementary guidelines are related to version control, code review, automated testing, code formatting, documentation, citation and usability (Artaza et al., 2016; DagstuhlEAS, 2017; Gilb, 1988; Leprevost et al., 2014; List et al., 2017; Perez-Riverol et al., 2016; Prlić & Procter, 2012; Smith et al., 2016; Wilson et al., 2014; Wilson et al., 2016).\n\nThis manuscript also aims to encourage projects, journals, funders and organisations to both endorse the recommendations and to drive compliance through their software policies. 
The recommendations are accompanied by a list of arguments addressing common questions and fears raised by the research community when considering open sourcing software.\n\nIn this manuscript, software is broadly defined to include command line software, graphical user interfaces, desktop and mobile applications, web-based services, application program interfaces (APIs) and infrastructure scripts that help to run services.\n\n\nTarget audience\n\nOur target audience includes leaders and managers of organisations and projects, journal editorial bodies, and funding agencies concerned with the provision of products and services relying on the development of open research software. We want to provide these stakeholders with a simple approach to drive the development of better software. Though these OSS recommendations have mostly been developed within, and received feedback from, the life science community, the document and its recommendations apply to all research fields.\n\nStrategies to increase software quality usually target software developers, focusing on training and adoption of best practices (Wilson et al., 2014). This approach can yield good results, but requires a significant effort as well as personal commitment from developers (Wilson, 2014). For an organisation employing scientists and developers with different sets of programming skills and responsibilities, it is not easy to endorse specific best practices or define a broad range of training needs. It is easier to endorse a set of basic recommendations that are simple to monitor, simple to comply with, and which drive the adoption of best practices and reveal training needs. The OSS recommendations aim to create awareness, encourage developers to be more conscious of best practices, and make them more willing to collaborate and request support. 
The recommendations define broad guidelines, giving developers freedom to choose how to implement specific best practices.\n\nIn terms of the adoption of these recommendations, we see endorsement as the first step: that is, agreeing to support the OSS recommendations without a formal process for implementation. Promotion is a second step: that is, actively publicising and incentivising the OSS recommendations within the organisation as well as globally. Compliance is the third step: to formally implement them within the organisation, with ongoing monitoring and public reporting if possible. To facilitate progress, we propose that organisations, projects, journals, as well as funding agencies include these OSS recommendations as part of their policies relating to the development and publication of software.\n\nOpen Source Software is not just adopted by non-profit organisations, but also by commercial companies as a business model (Popp, 2015). Therefore, we encourage not only publicly funded projects but also for-profit entities to adopt OSS and support these recommendations.\n\n\nRecommendations\n\nDevelop source code in a publicly accessible, version controlled repository (e.g., GitHub and Bitbucket) from the beginning of the project. The longer a project is run in a closed manner, the harder it is to open it later (Fogel, 2005). 
Opening code and exposing the software development life cycle publicly from day one:\n\nPromotes trust in the software and broader project\n\nFacilitates the discovery of existing software development projects\n\nProvides a historical public record of contributions from the start of the project and helps to track recognition\n\nEncourages contributions from the community\n\nIncreases opportunities for collaboration and reuse\n\nExposes work for community evaluation, suggestions and validation\n\nIncreases transparency through community scrutiny\n\nEncourages developers to think about and showcase good coding practices\n\nFacilitates reproducibility of scientific results generated by all prior versions of the software\n\nEncourages developers to provide documentation, including a detailed user manual and clear in-code comments\n\nSome common doubts and questions about making software Open Source are discussed in the Supplementary File S1, “Fears of open sourcing and some ways to handle them”.\n\nFacilitate discoverability of the software project and its source code by registering metadata related to the software in a popular community registry. Metadata might include information like the source code location, contributors, licence, version, identifier, references and how to cite the software. 
Metadata registration:\n\nIncreases the visibility of the project, the software, its use, its successes, its references, and its contributors\n\nProvides easy access for software packagers to deploy your software, thus increasing visibility\n\nEncourages software providers to think about the metadata that describes software as well as how to expose such metadata\n\nHelps to expose the software metadata in a machine readable format via the community registry\n\nIncreases the chances of collaboration, reuse, and improvement\n\nExamples of community registries of software metadata are bio.tools (Ison et al., 2016), biojs.io (Corpas et al., 2014; Gómez et al., 2013) and Omic Tools (Henry et al., 2014) in the life sciences and DataCite (Brase, n.d.) as a generic metadata registry for software as well as data.\n\nAdopt a suitable Open Source licence to clarify how to use, modify and redistribute the source code under defined terms and conditions. Define the licence in a publicly accessible source code repository, and ensure the software complies with the licences of all third party dependencies. Providing a licence:\n\nClarifies the responsibilities and rights placed on third parties wishing to use, copy, redistribute, modify and/or reuse your source code\n\nEnables using the code in jurisdictions where “code with no licence” means it cannot be used at all\n\nProtects the software’s intellectual property\n\nProvides a model for long-term sustainability by enabling legally well-founded contributions and reuse\n\nWe advise choosing an OSI-approved Open Source Licence unless your institution or project requires a different licence. Websites like “Choose an open source license” provide guidelines to help users to select an OSI-approved Open Source Licence. Organisations like the OSS Watch also provide advice on how to keep track of the licences of software dependencies. 
For reusability reasons, we also advise authors to disclose any patents and pending patent applications known to them affecting the software.\n\nOpen sourcing your software does not mean the software has to be developed in a publicly collaborative manner. Although it is desirable, the OSS recommendations do not mandate a strategy for collaborating with the developer community. However, projects should be clear about how contributions can be made and incorporated by having a transparent governance model and communication channels. Clarity on the project structure, as well as its communication channels and ways to contribute:\n\nIncreases transparency on how the project and the software is being managed\n\nHelps to define responsibilities and how decisions are made in the software project\n\nHelps the community know how to collaborate, communicate and contribute to the project\n\nFor instance, the Galaxy project’s website describes the team’s structure, how to be part of the community, and their communication channels.\n\n\nAlignment with FAIR data principles\n\nThe FAIR Guiding Principles for scientific data management and stewardship provide recommendations on how to make research data findable, accessible, interoperable and reusable (FAIR) (Wilkinson et al., 2016). While the FAIR principles were originally designed for data, they are sufficiently general that their high level concepts can be applied to any digital object including software. Though not all the recommendations from the FAIR data principles directly apply to software, there is good alignment between the OSS recommendations and the FAIR data principles (see Table 1).\n\nThere are also distinctions between the OSS recommendations and the FAIR data principles. The FAIR data principles have a specific emphasis on enhancing machine-readability: the ability of machines to automatically find and use data. 
This emphasis is not present in the OSS recommendations, which expect machine readable software metadata to be available via software registries. The OSS recommendations are less granular and aim to enhance understanding and uptake of best practices; they were designed with measurability in mind. The FAIR data principles do not have such built-in quantification yet. FAIR metrics are a separate effort under development, led by the Dutch Techcentre for Life Sciences (Eijssen et al., 2016).\n\nThe community registries can play an important role in making software metadata FAIR by capturing, assigning and exposing software metadata following a standard knowledge representation and controlled vocabularies that are relevant for domain-specific communities. Thus we expect the community registries to provide guidelines on how to provide software metadata following the FAIR Guiding Principles (Wilkinson et al., 2016).\n\n\nConclusion\n\nThe OSS recommendations aim to encourage the adoption of best practices and thus help to develop better software for better research. These recommendations are designed as practical ways to make research software and its source code more discoverable, reusable and transparent, with the desired objective of improving its quality and sustainability. Unlike many software development best practices tailored for software developers, the OSS recommendations aim to target a wider audience, particularly research funders, research institutions, journals, group leaders, and managers of projects producing research software. The adoption of these recommendations offers a simple mechanism for these stakeholders to promote the development of better software and an opportunity for developers to improve and showcase their software development skills.",
"appendix": "Author contributions\n\n\n\nSteve Crouch, Neil Chue Hong, Mateusz Kuzak, Manuel Corpas, Jason Williams, Maria Victoria Schneider and Rafael C Jimenez also contributed organising and facilitating workshops. Federico Lopez developed the reference website to provide information and a point of contact for these recommendations. All the authors contributed providing feedback to shape this manuscript and recommendations.\n\n\nCompeting interests\n\n\n\nAt the time of writing MC is an employee of Repositive Ltd.\n\n\nGrant information\n\nThis work was partially supported by ELIXIR-EXCELERATE and CORBEL. ELIXIR-EXCELERATE and CORBEL are funded by the European Commission within the Research Infrastructures programme of Horizon 2020, grant agreement numbers 676559 and 654248. The European workshops were supported by ELIXIR and organised in collaboration with the Software Sustainability Institute and Netherlands eScience Center. The workshop in Australia was supported by EMBL-ABR via its main funders: The University of Melbourne and Bioplatforms Australia.\n\n\nAcknowledgments\n\nThe authors wish to thank all the supporters of the OSS recommendations.\n\nThe OSS recommendations presented in this manuscript have been have been open for discussion for more than a year. This allowed them to be developed by a wide range of stakeholders, including developers, managers, researchers, funders and project coordinators and anybody else concerned with the production of quality software for research. We also organised several workshops and presented this work in several meetings to engage more stakeholders, collect feedback and refine the recommendations. For further information, about the OSS recommendations please visit the following site: https://SoftDev4Research.github.io/recommendations/\n\n\nSupplementary material\n\nSupplementary File 1: ‘Fears of open sourcing and some ways to handle them’. 
In this appendix we aim to expose some of the common fear scenarios related to open sourcing, and some ways to handle them.\n\nClick here to access the data.\n\n\nReferences\n\nArtaza H, Chue Hong N, Manuel C, et al.: Top 10 metrics for life science software good practices [version 1; referees: 2 approved]. F1000Res. 2016; 5: pii: ELIXIR-2000. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBrase J: Datacite - A Global Registration Agency for Research Data. SSRN Electronic Journal. n.d. Publisher Full Text\n\nCorpas M, Jimenez R, Carbon SJ, et al.: BioJS: an open source standard for biological visualisation – its status in 2014 [version 1; referees: 2 approved]. F1000Res. 2014; 3: 55. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCrouch S, Chue Hong N, Hettrick S, et al.: The Software Sustainability Institute: Changing Research Software Attitudes and Practices. Computing in Science & Engineering. 2013; 15(6): 74–80. Publisher Full Text\n\nDagstuhlEAS: DagstuhlEAS/draft-Manifesto. GitHub. 2017. Accessed January 5. Reference Source\n\nEijssen L, Evelo C, Kok R, et al.: The Dutch Techcentre for Life Sciences: Enabling data-intensive life science research in the Netherlands [version 2; referees: 2 approved, 1 approved with reservations]. F1000Res. 2016; 4: 33. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFogel K: Producing Open Source Software: How to Run a Successful Free Software Project. O’Reilly Media, Inc. 2005. Reference Source\n\nGilb T: Principles of Software Engineering Management. Addison-Wesley Professional, 1988. Reference Source\n\nGoble C: Better Software, Better Research. IEEE Internet Computing. 2014; 18(5): 4–8. Publisher Full Text\n\nGómez J, García LJ, Salazar GA, et al.: BioJS: An Open Source JavaScript Framework for Biological Data Visualization. Bioinformatics. 2013; 29(8): 1103–4. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nHannay JE, MacLeod C, Singer J, et al.: How Do Scientists Develop and Use Scientific Software? In 2009 ICSE Workshop on Software Engineering for Computational Science and Engineering. 2009. Publisher Full Text\n\nHenry VJ, Bandrowski AE, Pepin AS, et al.: OMICtools: An Informative Directory for Multi-Omic Data Analysis. Database (Oxford). 2014; 2014: pii: bau069. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHettrick S, Antonioletti M, Carr L, et al.: UK Research Software Survey 2014. Accessed November 27. 2016. Publisher Full Text\n\nInce DC, Hatton L, Graham-Cumming J: The Case for Open Computer Programs. Nature. 2012; 482(7386): 485–88. PubMed Abstract | Publisher Full Text\n\nIson J, Rapacki K, Ménager H, et al.: Tools and Data Services Registry: A Community Effort to Document Bioinformatics Resources. Nucleic Acids Res. 2016; 44(D1): D38–47. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLeprevost Fda V, Barbosar VC, Francisco EL, et al.: On best practices in the development of bioinformatics software. Front Genet. 2014; 5: 199. PubMed Abstract | Publisher Full Text | Free Full Text\n\nList M, Ebert P, Albrecht F: Ten Simple Rules for Developing Usable Software in Computational Biology. PLoS Comput Biol. 2017; 13(1): e1005265. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMcKiernan EC, Bourne PE, Brown CT, et al.: How open science helps researchers succeed. eLife. 2016; 5: pii: e16800. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMulgan G, Steinberg T, Salem O: Wide Open: Open Source Methods and Their Future Potential. Demos Medical Publishing. 2005. Reference Source\n\nNosek BA, Alter G, Banks GC, et al.: SCIENTIFIC STANDARDS. Promoting an open research culture. Science. American Association for the Advancement of Science, 2015; 348(6242): 1422–25. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nPerez-Riverol Y, Gatto L, Wang R, et al.: Ten Simple Rules for Taking Advantage of Git and GitHub. PLoS Comput Biol. 2016; 12(7): e1004947. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPerez-Riverol Y, Wang R, Hermjakob H, et al.: Open source libraries and frameworks for mass spectrometry based proteomics: a developer's perspective. Biochim Biophys Acta. 2014; 1844(1 Pt A): 63–76. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPopp KM: Best Practices for Commercial Use of Open Source Software: Business Models, Processes and Tools for Managing Open Source Software. BoD – Books on Demand. 2015. Reference Source\n\nPrlić A, Procter JB: Ten simple rules for the open development of scientific software. PLoS Comput Biol. 2012; 8(12): e1002802. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSmith AM, Katz DS, Niemeyer KE, et al.: Software Citation Principles. PeerJ Comput Sci. 2016; 2: e86. Publisher Full Text\n\nWilkinson MD, Dumontier M, Aalbersberg IJ, et al.: The FAIR Guiding Principles for scientific data management and stewardship. Sci Data. 2016; 3: 160018. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWilson G, Aruliah DA, Brown CT, et al.: Best practices for scientific computing. PLoS Biol. 2014; 12(1): e1001745. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWilson G, Bryan J, Cranston K, et al.: Good Enough Practices in Scientific Computing. 2016. Reference Source\n\nWilson G: Software Carpentry: lessons learned [version 1; referees: 3 approved]. F1000Res. 2014; 3: 62. PubMed Abstract | Publisher Full Text | Free Full Text"
}
|
[
{
"id": "23468",
"date": "26 Jun 2017",
"name": "Roberto Di Cosmo",
"expertise": [
"Reviewer Expertise formal methods",
"software engineering",
"programming languages",
"logics",
"theoretical computer science",
"semantics",
"parallel programming",
"open source software",
"package management",
"archival"
],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis article presents four simple recommendations that may improve the overall quality and visibility of research software. This reviewer agrees with the basic principles set forth by the authors, and hopes they will be widely shared and adopted at least for software that is expected to last longer than the time it takes for the corresponding research paper to be accepted and/or presented.\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Yes\n\nAre arguments sufficiently supported by evidence from the published literature? Yes\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Yes",
"responses": []
},
{
"id": "23467",
"date": "26 Jun 2017",
"name": "Gregory V. Wilson",
"expertise": [
"Reviewer Expertise software engineering education"
],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe article presents focused, well-argued advocacy for improving software development practices in the sciences. None of the recommendations will be surprising to those already involved in open science, but as only a small minority of researchers actually do them, it is worth presenting them forcefully and succinctly. I would recommend shortening the introductory material (sections \"Introduction\" and \"Target Audience\"), but that is a minor point.\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Yes\n\nAre arguments sufficiently supported by evidence from the published literature? Yes\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Yes",
"responses": []
},
{
"id": "23464",
"date": "10 Jul 2017",
"name": "Stefanie Betz",
"expertise": [
"Reviewer Expertise Software Engineering"
],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe article advocates research software openness presenting four recommendations to improve research software visibility, re-usability, and transparency. I really like the article and I think it is important to open research software. Please find below my feedback. (Overall, I agree with the comments of Milad Miladi):\nIn my opinion, the title does not reflect the content of the article. The focus is on adopting OSS and supporting the provided four recommendations to help to develop better software for better research. Currently, only the second part (supporting the provided four recommendations to help to develop better software for better research) is reflected in the title not the part about OSS. I am not sure everybody is familiar with the open registry platforms. Thus, some more information regarding them or a link to background information would be nice. I think recommendation four should include documentation.\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Yes\n\nAre arguments sufficiently supported by evidence from the published literature? Yes\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-876
|
https://f1000research.com/articles/6-875/v1
|
13 Jun 17
|
{
"type": "Opinion Article",
"title": "A community proposal to integrate proteomics activities in ELIXIR",
"authors": [
"Juan Antonio Vizcaíno",
"Mathias Walzer",
"Rafael C. Jiménez",
"Wout Bittremieux",
"David Bouyssié",
"Christine Carapito",
"Fernando Corrales",
"Myriam Ferro",
"Albert J.R. Heck",
"Peter Horvatovich",
"Martin Hubalek",
"Lydie Lane",
"Kris Laukens",
"Fredrik Levander",
"Frederique Lisacek",
"Petr Novak",
"Magnus Palmblad",
"Damiano Piovesan",
"Alfred Pühler",
"Veit Schwämmle",
"Dirk Valkenborg",
"Merlijn van Rijswijk",
"Jiri Vondrasek",
"Martin Eisenacher",
"Lennart Martens",
"Oliver Kohlbacher",
"Mathias Walzer",
"Rafael C. Jiménez",
"Wout Bittremieux",
"David Bouyssié",
"Christine Carapito",
"Fernando Corrales",
"Myriam Ferro",
"Albert J.R. Heck",
"Peter Horvatovich",
"Martin Hubalek",
"Lydie Lane",
"Kris Laukens",
"Fredrik Levander",
"Frederique Lisacek",
"Petr Novak",
"Magnus Palmblad",
"Damiano Piovesan",
"Alfred Pühler",
"Veit Schwämmle",
"Dirk Valkenborg",
"Merlijn van Rijswijk",
"Jiri Vondrasek",
"Martin Eisenacher",
"Lennart Martens",
"Oliver Kohlbacher"
],
"abstract": "Computational approaches have been major drivers behind the progress of proteomics in recent years. The aim of this white paper is to provide a framework for integrating computational proteomics into ELIXIR in the near future, and thus to broaden the portfolio of omics technologies supported by this European distributed infrastructure. This white paper is the direct result of a strategy meeting on ‘The Future of Proteomics in ELIXIR’ that took place in March 2017 in Tübingen (Germany), and involved representatives of eleven ELIXIR nodes.\n\nThese discussions led to a list of priority areas in computational proteomics that would complement existing activities and close gaps in the portfolio of tools and services offered by ELIXIR so far. We provide some suggestions on how these activities could be integrated into ELIXIR’s existing platforms, and how it could lead to a new ELIXIR use case in proteomics. We also highlight connections to the related field of metabolomics, where similar activities are ongoing. This white paper could thus serve as a starting point for the integration of computational proteomics into ELIXIR. Over the next few months we will be working closely with all stakeholders involved, and in particular with other representatives of the proteomics community, to further refine this paper.",
"keywords": [
"proteomics",
"mass spectrometry",
"computational proteomics",
"databases",
"bioinformatics infrastructure",
"data standards",
"training",
"multi-omics approaches."
],
"content": "Introduction\n\nProteomics is generally defined as the large-scale experimental study of the proteome. High-throughput proteomics approaches have matured significantly, becoming an increasingly used tool in biological research. The rapid development of the field over the last decade has been primarily driven by technological progress in mass spectrometry instrumentation, chromatographic separation, genomics (increased availability of sequenced genomes) and bioinformatics1,2. The primary workhorse of proteomics today is mass spectrometry coupled to liquid chromatography (LC-MS), with less commonly used high-throughput proteomics approaches based on antibodies (e.g., protein arrays and other immunofluorescence-based techniques). Key applications of proteomics are the study of (differential) protein expression in time and space, characterization of protein primary structures and their post-translational modifications (PTMs), such as phosphorylation and glycosylation, elucidating protein structures, and protein-protein interactions. It is the primary technology driving progress in unravelling signalling networks (e.g. protein phosphorylation driven signalling) and protein interaction networks, and is indispensable for understanding biological function of protein isoforms and disentangling their specific functions. In complex systems biology and systems medicine studies, proteomics often complements information gained from other omics levels, such as genomics and transcriptomics (the so-called proteogenomics and proteotranscriptomics studies3), metagenomics (metaproteomics), glycomics, and metabolomics.\n\nAs already highlighted, remarkable advances in computational methods have been a key driver in the fast development of the field of proteomics. ELIXIR (https://www.elixir-europe.org/) is a European Research Infrastructure (ESFRI), which coordinates, integrates and sustains bioinformatics resources across its member states. 
Some of the most prominent research groups in proteomics are active in Europe, such as Prof. Matthias Mann (Martinsried, Munich, Germany), Prof. Ruedi Aebersold (Zurich, Switzerland), Prof. Albert Heck (Utrecht, the Netherlands) and Prof. Mathias Uhlén (Stockholm, Sweden). Europe also hosts world-renowned groups that are focused on the development and application of widely-used bioinformatics tools and resources, including MaxQuant4, the OpenMS framework5, CompOmics6 tools, such as PeptideShaker7, the PRIDE database, the world-leading proteomics repository8 (also coordinating the global ProteomeXchange Consortium of proteomics resources9), and a collection of open data standards and related software10, emphasizing Europe's leading role in the activities of the HUPO (Human Proteome Organisation) Proteomics Standards Initiative (PSI).\n\nOutside mass spectrometry proteomics, the Human Protein Atlas11, located in Sweden, is the world-leading resource for antibody-based characterization of the human proteome. In this context, it should also be highlighted that two of the three sites behind the development of UniProt12, the most widely used protein knowledgebase, are European: the Swiss Institute of Bioinformatics (SIB) and the European Bioinformatics Institute (EMBL-EBI). Furthermore, neXtProt13, which is the reference knowledgebase for human proteins in the context of the HUPO Human Proteome Project, is also developed and maintained at SIB.\n\nAdditionally, it is worth noting that a number of national proteomics-dedicated infrastructures have already prioritised structuring and developing computational proteomics among their activities. This is the case for the French proteomics infrastructure ProFI (which has, for instance, devoted a major investment to develop the Proline tool) and the Spanish infrastructure ProteoRed (which has, for example, contributed heavily to PSI activities). 
Also, proteomics is represented in other national scientific infrastructures. One example is the Netherlands DTL (Dutch Techcentre for Life Sciences), which has an active role in advocating for FAIR data management.\n\nIn the context of providing infrastructure for storing proteomics data, it is worth highlighting here that, although it was not the case just a few years ago, thanks to many of these efforts, and with the support of scientific publishers and funders, public availability of proteomics data has increased exponentially in recent years, becoming a common scientific practice, similarly to how it routinely happens in disciplines such as genomics and transcriptomics14. Figure 1 summarises the growth of the PRIDE database in recent years.\n\nNumber of datasets (A), and the total size of the PRIDE database (B), from 2005 to 2016. Data was retrieved directly from the PRIDE Archive Oracle™ database instance, which contains the file sizes and the dates when the datasets were originally submitted.\n\nAlthough the European bioinformatics community has been very active in the proteomics field (see above), proteomics activities have not been highly represented in ELIXIR so far. There was a proteomics component in two small ELIXIR pilot actions (collaborations between EMBL-EBI, now ELIXIR central node, and Bioinformatics Services to Swedish Life Science [BILS], now ELIXIR-Sweden). In addition, a selection of proteomics tools and training events have been included in the ELIXIR tool registry15 (http://bio.tools) and in TeSS, the ELIXIR training portal (https://tess.elixir-europe.org/), respectively. These platforms were recently presented to the proteomics community16. 
However, we propose that, due to the growing importance of the field and the prominence of proteomics bioinformatics activities in Europe, it is the right time to formally integrate proteomics activities into ELIXIR.\n\nIn this context, in February 2017, EMBL-EBI and ELIXIR-DE initiated the first ELIXIR ‘Implementation study’ involving proteomics approaches, as a starting point for the field. Within this implementation study, suggested by ELIXIR management, the meeting “The future of proteomics in ELIXIR” took place in Tübingen (Germany), as a general strategy meeting for future proteomics activities in ELIXIR. In this white paper, we first summarize the main conclusions of the meeting, and then explain possible future directions for the incorporation of proteomics activities into ELIXIR, taking into account the current overall ELIXIR structure, split into platforms and use cases.\n\n\nMethods\n\nThe meeting took place on March 1st–2nd 2017 in Tübingen (Germany). Attendance was widely advertised through ELIXIR dissemination channels (e.g., mailing lists, newsletter) and was open to any interested member of the community. There were 24 attendees representing eleven ELIXIR nodes: Germany (host), Belgium, Czech Republic, Denmark, France, Italy, Netherlands, Sweden, Switzerland, EMBL-EBI, and one representative from the ELIXIR Hub. The detailed minutes of the meeting are available as Supplementary File 1. The meeting started with a presentation given by Rafael Jiménez (ELIXIR Chief Technical Officer), who provided a general overview of the current ELIXIR activities. This initial talk was followed by a series of presentations where the representatives of each node summarized their ongoing activities related to proteomics. All the presentations are freely available at http://tinyurl.com/elixir-proteomics.\n\nThe remainder of the meeting was devoted to an open discussion on how to bring together activities, experience, stakeholders, and emerging needs. 
First of all, ten potential ELIXIR stakeholders in this domain were identified, namely: funding agencies, regulatory bodies, educators, infrastructures, publishers, core facilities, bioinformaticians, life scientists, industries and hospitals/patients. Second, a series of needs and challenges were outlined for each of the stakeholders, and these were then mapped to each of the existing ELIXIR platforms: Data, Tools, Interoperability, Compute and Training (see below). The output of this activity is summarized in Supplementary Table 1. On the second day of the meeting, more concrete topics, derived from the identified needs and challenges, were outlined by the attendees, and then organised into wider areas, so-called “clusters”. Finally, they were prioritised, with the idea that these could form the basis for future proteomics activities in ELIXIR.\n\nHere we describe the current status of ELIXIR platforms and use cases, as of May 2017. ELIXIR’s activities are structured around platforms and use cases. They bring together resources and expertise from the ELIXIR Nodes and form the basic unit of operation within ELIXIR. The ELIXIR platforms are responsible for the implementation of the ELIXIR programme and are organised in five key areas: Data, Tools, Compute, Interoperability and Training. The platforms are complemented by four use cases that represent four scientific communities: Human data, Rare diseases, Marine metagenomics and Plant sciences (Figure 2). The use cases drive the work of the ELIXIR platforms by defining their bioinformatics needs and requirements. The close collaboration between the ELIXIR use cases and platforms ensures that the services developed by the ELIXIR platforms are fit for purpose. Each platform and use case is led by a group of senior scientists from across the ELIXIR nodes. 
In addition to the funding available in each national ELIXIR node, the main source of financial support for ELIXIR activities comes from the ELIXIR-EXCELERATE EU H2020 project. Additional activities are funded through other complementary grants as well as ‘Implementation studies’ supported by the ELIXIR Hub.\n\nThe ELIXIR platforms. The Data platform focuses on sustaining Europe’s life science data infrastructure. This platform is working on guidelines and indicators for data resources to improve their impact and sustainability17. It also works on improving links between curated data resources and literature data.\n\nThe Tools platform is dedicated to services and connectors to drive access and exploitation of bioinformatics research software. The key activities within this platform are centred on facilitating the discovery, benchmarking and interoperability of software. The platform also focuses on software development best practices, as well as on a strategy for workflows and software containers.\n\nThe Interoperability platform supports the discovery, integration and analysis of biological data. Activities driven by this platform are organised in projects around identifiers, metadata standards and linked data. It also works on the description of interoperability services as well as specialised workshops named BYOD (Bring Your Own Data)18 to improve the “FAIRness” (Findable, Accessible, Interoperable and Re-usable) of data resources19.\n\nThe Compute platform is dedicated to the compute, storage, transfer, authentication and authorization of biological data relying on services provided by ELIXIR nodes and e-infrastructures. Finally, the Training platform aims to increase the professional skills for managing and exploiting data. 
Some of the platform's activities are meant to train researchers, trainers and service providers, but it also includes other activities related to e-learning, to improve the discovery and availability of training materials and to measure the impact of training.\n\nCurrent ELIXIR use cases. There are four use cases. First of all, the Human data use case develops long-term strategies for managing and accessing sensitive human data. The Rare diseases use case supports the development of new therapies for rare diseases. The Marine metagenomics use case works towards a sustainable metagenomics infrastructure to nurture research and innovation in the marine domain. Finally, the Plant sciences use case develops an infrastructure to support genotype-phenotype analysis for crop and tree species.\n\n\nResults and Discussion\n\nIn an attempt to assess the relative priorities of the various areas of proteomics that might steer integration into ELIXIR, attendees voted on the relative importance of the topics. The following list shows the top-ranked areas for future ELIXIR-related proteomics activities (called “Clusters” from now on), sorted in descending order by the number of votes received:\n\nCluster 1. Multi-omics approaches. It includes topics such as data integration of proteomics and other types of omics data, correlation between gene and protein expression, and development of data standards for “multi-omics” data types. A closely related group of activities was “Cancer proteomics”, comprising topics such as support for clinical proteomics data (including large patient cohorts) and cancer “multi-omics” (proteogenomics) studies.\n\nCluster 2. Proteoforms and PTMs. The term proteoform20 represents the different molecular forms in which the protein product of a single gene can be found, including changes due to genetic variations, alternatively spliced RNA transcripts and PTMs, among other events. 
This cluster of activities included topics such as the handling and validation of proteoforms, the creation of standards for their description, the improvement of the existing connection between proteoforms, genes and metabolites (a topic related to Cluster 1 above), and activities devoted to explaining unidentified spectral signals.\n\nCluster 3. Quality Control (QC) activities. In any analytical discipline, QC is essential. Because proteomics is a newer and still rapidly developing field, QC has historically not been as well developed in proteomics as in, for instance, the more established small-molecule mass spectrometry field14. This cluster is therefore focused on activities to develop automatic and reliable pipelines for QC of proteomics data at different levels.\n\nCluster 4. Data analysis workflows and cloud computing. The concrete activities outlined here could be summarized as the development of robust, reproducible, scalable, user-friendly, integrated, QC-controlled data analysis pipelines, ideally enabling the use of compatible cloud infrastructures, which, in addition, could also be used for data storage. Infrastructure supporting efficient development of such pipelines and workflows is also important, including tool repositories and documentation, workflow management systems and interfaces for accessing computational resources.\n\nCluster 5. Protein quantification and statistics. This topic encompasses the improvement of protein inference in shotgun proteomics approaches, the use of peptides that match to more than one protein precursor, and enhanced data integration/harmonisation for quantitative proteomics. It was perceived by many attendees that, although many of these issues are often considered solved, there are still many improvements possible in this area.\n\nCluster 6. Metadata, standardisation, annotation, and data management. 
This topic also includes the improvement in the annotation of proteomics datasets, in particular in data repositories like PRIDE (to facilitate public data reuse by third parties), the development and/or extension of existing Laboratory Information Management Systems (LIMS), standard data formats, and guidelines summarising best practices for data management, following FAIR principles.\n\nThe rest of the areas discussed received only one or two votes from the attendees, and included activities related to interactomics, structural proteomics, metabolomics, metaproteomics, the development of benchmarking datasets, and training efforts. Finally, it is worth highlighting an additional proposal for the creation of a repository for tool-related ideas.\n\nAs mentioned in the ‘Methods’ section, ELIXIR activities are currently structured in five platforms (Data, Tools, Interoperability, Compute and Training) and four longitudinal use cases (Human data, Rare diseases, Marine metagenomics and Plant sciences) (Figure 2). Our preferred option is that proteomics becomes the main focus of one additional ELIXIR use case in the near future. If there is no scope for proteomics to have its own use case, other options could be possible, for instance the integration of proteomics activities into the existing ELIXIR use cases. We think that the current use cases, heavily focused on genomics data, would also benefit from having a “multi-omics” perspective. Out of the existing cases, Human data would be an obvious choice. The other three could also benefit from proteomics activities: Rare diseases (clinical proteomics), Marine metagenomics (metaproteomics), and Plant sciences (plant proteomics).\n\nIn any case, and without considering specific use cases, it is clear that there are several topics that, in our opinion, would fit very well into the scope of the current five ELIXIR platforms:\n\n1- Data platform. 
Metadata, standardisation, annotation and data management activities (Cluster 6), and “multi-omics” approaches (Cluster 1), involving data integration efforts from different omics data types, would be highly relevant in this context. Moreover, QC efforts (Cluster 3) are essential for all such types of data re-use. Indeed, data re-use is a very important aspect of proteomics data, as only 30–40% of the acquired data is typically exploited14. This creates extremely exciting opportunities for proteomics data re-use with specialized tools that can lead to the discovery of new biological information21.\n\n2- Tools platform. In this platform, the overall aim would be to increase the visibility, quality and sustainability of proteomics software developed following best practices. First of all, more proteomics tools should be effectively included in the ELIXIR tool registry, and highlighted there appropriately. However, it is worth highlighting that there are around 350 proteomics tools represented in this resource already. In addition, the development of improved and user-friendly quantification algorithms and tools, along with direct coupling to dedicated and performant statistical analysis (Cluster 5), also connects directly to the ELIXIR tools platform. Other possible activities would be related to Cluster 4, and would involve the improvement of the description and sharing of proteomics workflows, facilitating the encapsulation of workflows, data and tools into proteomics software containers that could be shared across ELIXIR, taking advantage of existing resources. Finally, the idea proposed at the meeting to create a repository for tool-related ideas would also be applicable in the wider ELIXIR context.\n\n3- Interoperability platform. Obviously, the activities of this platform are very relevant in the case of “multi-omics” approaches (Cluster 1), for instance the development of data standards for these types of approaches, e.g. 
to enable better data integration and visualisation. A close connection can also be made to metabolomics with regard to QC (Cluster 3), as the main technology of choice (mass spectrometry) is shared between these two fields.\n\n4- Compute platform. Workflow analysis pipelines and activities related to the development of cloud infrastructures (Cluster 4) and QC activities (Cluster 3) should be highlighted here. This would involve different pieces of infrastructure supporting the efficient development of such workflows, e.g. the EDAM ontology and the ELIXIR tool registry15, open data formats, and workflow management systems, among others.\n\nIt is important to note that, although originally set up in the Data platform, the proteomics ELIXIR implementation study mentioned in the ‘Introduction’ section aims to make a first step forward in this direction, for the popular shotgun (MS/MS) approaches. As a proof of concept, these pipelines will be deployed first in the EMBL-EBI “Embassy Cloud”, a cloud infrastructure based on the OpenStack platform, with the idea that in the future they can be made available in other cloud systems (e.g. Amazon EC2, Google Cloud, Microsoft Azure), so that the developed pipelines can be freely reused by any interested researcher in the community. In the context of the implementation study, the pipelines are connected to the PRIDE database, bringing the analysis tools closer to the data as datasets become larger in size and complexity. This trend is ongoing for other omics technologies in the context of ELIXIR: for instance, the pipelines developed in the context of the Marine metagenomics use case22.\n\n5- Training platform. ELIXIR is already working actively on coordinating training activities across Europe (e.g. the already mentioned TeSS portal). 
Training was not specifically highlighted as one of the main areas for future development during the meeting, most likely because it is implicitly assumed to be a key need in all bioinformatics fields. While excellent training courses and workshops have already been created in the field, a higher degree of coordination across Europe should be achieved for bioinformatics training activities, and in particular for computational proteomics topics, possibly in coordination with other fields such as metabolomics (see next section).\n\nAs mentioned already, metabolomics and proteomics share a common experimental platform: mass spectrometry. A parallel effort is currently ongoing to integrate metabolomics activities in ELIXIR. There are many topics of common interest where both fields could benefit from a closer interaction: e.g. the development of common software (for QC, data visualisation, signal processing, to name but a few) and open data standards. Some initiatives are already working towards this goal, e.g. the computational mass spectrometry group (http://compms.org/), and some activities recently carried out by the PSI, but a lot of work remains to be done. One concrete proposal would be to coordinate efforts concerning training activities. ELIXIR would be an ideal platform to enable this coordination effort.\n\nComing back to the proteomics field, PRIME-XS (http://www.primexs.eu) was a four-year Infrastructure project funded by the EU FP7 Programme (2011–2015), coordinated by Prof. Albert Heck. The main aim of PRIME-XS was to provide funded access to an infrastructure of state-of-the-art proteomics technology to the European biological and biomedical research community. It is anticipated that future tailored EU H2020 infrastructure grant calls might provide an opportunity to create a second iteration of this successful project. 
The potential synergies with ELIXIR, considering the outlined activities in this white paper, would be obvious at different levels, for instance the provision of a cloud computing infrastructure to store and analyse the acquired data in a scalable and reproducible way, and the development of pipelines to allow data integration between proteomics and other omics data types.\n\n\nConclusions\n\nWe hope that this white paper acts as a guide to achieve the overall goal of integrating proteomics into ELIXIR. In our opinion, this makes sense not only from a scientific point of view (proteomics information provides an essential ingredient in a “full picture” of life), but also because it would enable computational proteomics to work in close contact with bioinformatics activities in other high-throughput fields, which will undoubtedly trigger many interactions, exchanges, and exciting novel developments.",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe meeting was funded by funds provided by the ELIXIR Central Hub (supported by EU H2020 Research Infrastructures). This activity is included in the ELIXIR Implementation Project entitled ‘Mining the Proteome: Enabling Automated Processing and Analysis of Large-Scale Proteomics Data’.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nWe want to acknowledge P. Velek (ELIXIR Hub) for his help in drafting Figure 2.\n\n\nSupplementary material\n\nSupplementary File 1: Detailed minutes of the meeting “The future of proteomics in ELIXIR”.\n\nClick here to access the data.\n\nSupplementary File 2: Table S1: Table summarising the output of the open discussion at the meeting.\n\nClick here to access the data.\n\n\nReferences\n\nMallick P, Kuster B: Proteomics: a pragmatic perspective. Nat Biotechnol. 2010; 28(7): 695–709. PubMed Abstract | Publisher Full Text\n\nAebersold R, Mann M: Mass-spectrometric exploration of proteome structure and function. Nature. 2016; 537(7620): 347–55. PubMed Abstract | Publisher Full Text\n\nMenschaert G, Fenyö D: Proteogenomics from a bioinformatics angle: A growing field. Mass Spectrom Rev. 2015. PubMed Abstract | Publisher Full Text\n\nCox J, Mann M: MaxQuant enables high peptide identification rates, individualized p.p.b.-range mass accuracies and proteome-wide protein quantification. Nat Biotechnol. 2008; 26(12): 1367–72. PubMed Abstract | Publisher Full Text\n\nRöst HL, Sachsenberg T, Aiche S, et al.: OpenMS: a flexible open-source software platform for mass spectrometry data analysis. Nat Methods. 2016; 13(9): 741–8. PubMed Abstract | Publisher Full Text\n\nBarsnes H, Vaudel M, Colaert N, et al.: compomics-utilities: an open-source Java library for computational proteomics. BMC Bioinformatics. 2011; 12: 70. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nVaudel M, Burkhart JM, Zahedi RP, et al.: PeptideShaker enables reanalysis of MS-derived proteomics data sets. Nat Biotechnol. 2015; 33(1): 22–4. PubMed Abstract | Publisher Full Text\n\nVizcaino JA, Csordas A, del-Toro N, et al.: 2016 update of the PRIDE database and its related tools. Nucleic Acids Res. 2016; 44(D1): D447–56. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDeutsch EW, Csordas A, Sun Z, et al.: The ProteomeXchange consortium in 2017: supporting the cultural change in proteomics public data deposition. Nucleic Acids Res. 2017; 45(D1): D1100–D6. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDeutsch EW, Albar JP, Binz PA, et al.: Development of data representation standards by the human proteome organization proteomics standards initiative. J Am Med Inform Assoc. 2015; 22(3): 495–506. PubMed Abstract | Publisher Full Text | Free Full Text\n\nUhlén M, Fagerberg L, Hallström BM, et al.: Proteomics. Tissue-based map of the human proteome. Science. 2015; 347(6220): 1260419. PubMed Abstract | Publisher Full Text\n\nUniProt Consortium: UniProt: a hub for protein information. Nucleic Acids Res. 2015; 43(Database issue): D204–12. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLane L, Argoud-Puy G, Britan A, et al.: neXtProt: a knowledge platform for human proteins. Nucleic Acids Res. 2012; 40(Database issue): D76–83. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMartens L, Vizcaíno JA: A Golden Age for Working with Public Proteomics Data. Trends Biochem Sci. 2017; 42(5): 333–341. PubMed Abstract | Publisher Full Text | Free Full Text\n\nIson J, Rapacki K, Ménager H, et al.: Tools and data services registry: a community effort to document bioinformatics resources. Nucleic Acids Res. 2016; 44(D1): D38–47. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWillems S, Bouyssié D, David M, et al.: Proceedings of the EuBIC Winter School 2017. J Proteomics. 
2017; 161: 78–80. PubMed Abstract | Publisher Full Text\n\nDurinx C, McEntyre J, Appel R, et al.: Identifying ELIXIR Core Data Resources [version 2; referees: 2 approved]. F1000Res. 2016; 5: pii: ELIXIR-2422. PubMed Abstract | Publisher Full Text | Free Full Text\n\nEijssen L, Evelo C, Kok R, et al.: The Dutch Techcentre for Life Sciences: Enabling data-intensive life science research in the Netherlands [version 2; referees: 2 approved, 1 approved with reservations]. F1000Res. 2015; 4: 33. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWilkinson MD, Dumontier M, Aalbersberg IJ, et al.: The FAIR Guiding Principles for scientific data management and stewardship. Sci Data. 2016; 3: 160018. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSmith LM, Kelleher NL; Consortium for Top Down Proteomics: Proteoform: a single term describing protein complexity. Nat Methods. 2013; 10(3): 186–7. PubMed Abstract | Publisher Full Text | Free Full Text\n\nVaudel M, Verheggen K, Csordas A, et al.: Exploring the potential of public proteomics data. Proteomics. 2016; 16(2): 214–25. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRobertsen EM, Denise H, Mitchell A, et al.: ELIXIR pilot action: Marine metagenomics – towards a domain specific set of sustainable services [version 1; referees: 1 approved, 2 approved with reservations]. F1000Res. 2017; 6: (ELIXIR)(70). Publisher Full Text"
}
|
[
{
"id": "23454",
"date": "28 Jun 2017",
"name": "Felix Elortza",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nProteomics has reached a point in which bioinformatics is of paramount importance for the process to extract the maximum information out of the enormous amount of data obtained in practically any study. The scientific community in general and proteomics in particular is eager to get new and effective solutions that may implement that way. In this white paper manuscript, Vizcaino et al. explain the ELIXIR initiative and the importance of integrating proteomics in this European Research Infrastructure. Authors cover in a detailed way what was covered on the meeting that took place in Tübingen (Germany) on February 2017 under the name “The future of proteomics in ELIXIR”. The paper is well written and structured. Conclusions are clearly stated.\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Yes\n\nAre arguments sufficiently supported by evidence from the published literature? Yes\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Yes",
"responses": []
},
{
"id": "23920",
"date": "05 Jul 2017",
"name": "Chuming Chen",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis white paper presents a framework for integrating computational proteomics into ELIXIR as the result of a strategy meeting on ‘The Future of Proteomics in ELIXIR’ that took place in March 2017 in Tübingen (Germany). The meeting discussions lead to six priority areas: Multi-omics approaches; Proteoforms and PTMs; Quality Control (QC) activities; Data analysis workflows and cloud computing; Protein quantification and statistics; Metadata, standardization, annotation, and data management. The priority areas are well aligned with the current ELIXIR platforms: Data platform; Tools platform; Interoperability platform; Compute platform; Training platform and can be integrated into ELIXIR. The paper is well structured and written.\nMinor change: On page 7 of PDF version of paper, paragraph \"2 - Tools platform ...\": \"it is worth highlighting that there are around 350 tools proteomics tools represented in this resource already\", the first \"tools\" should be removed.\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Yes\n\nAre arguments sufficiently supported by evidence from the published literature? Yes\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-875
|
https://f1000research.com/articles/6-871/v1
|
12 Jun 17
|
{
"type": "Research Article",
"title": "Preliminary assessment of soil contamination by trace metals in peri-urban municipal landfills in Ibadan, Nigeria",
"authors": [
"Benjamin Oyegbile",
"Brian Oyegbile",
"Brian Oyegbile"
],
"abstract": "Background: Soil contamination by trace metals as a result of improper waste management and disposal in Ibadan, Nigeria has been evaluated in this study. Several studies have shown the link between trace metal soil contamination and improper solid waste disposal. Methods: Soil samples were taken from two major landfills in Ibadan, in the south-west of Nigeria, and subjected to laboratory analysis using inductively coupled plasma-optical emission spectrometry (ICP-OES) as part of a wider study to evaluate the waste management practices in the city. Results: The results of this investigation, without taking into account the background values of the trace metals at both landfill sites, showed that the quantified levels of lead at both sites exceeded threshold levels. The quantified values of zinc and copper metals exceeded the threshold levels specified in the Finnish government decree on the assessment of soil contamination and remediation needs, at 1098 mg/kg and 233.20 mg/kg in the Aba-Eku landfill site, and 1205 mg/kg and 476.55 mg/kg in the Lapite landfill site, respectively. This calls for a comprehensive risk assessment. Conclusions: It is hoped that the results of this study will serve as a basis for a wider risk assessment of all landfill sites within the city.",
"keywords": [
"trace metals",
"remediation",
"contamination",
"leachate",
"risk-assessment"
],
"content": "Introduction\n\nLandfills offer an inexpensive disposal option for different streams of solid waste and are widely used around the world1–5. They are especially popular in developing and low-income countries, where waste management resources or the expertise necessary to operate other capital-intensive and more sophisticated waste disposal options, such as incinerators or waste-to-energy facilities, are limited. However, it is well-known that there exists a significant risk of environmental contamination from untreated landfill leachate which may infiltrate into the surrounding soil and groundwater6–8. Therefore, the design, construction, operation and de-commissioning of a modern landfill facility must ensure that the leachate from the landfill is collected and properly treated in order to prevent release of toxic substances into the environment.\n\nSolid waste as a type of waste that emanates from human activities is a potential source of environmental pollutants. Hence, the release of contaminants upon exposure and contact with the environment should be anticipated. Leaching is the major route through which soluble contaminants are released into the environment upon contact with water. This results in potential risk to human and environmental health9–13.\n\nThe environmental, ecological, and human health impacts of elevated levels of trace metals from soil samples and untreated leachate from open landfills have been well-documented in numerous scientific publications. The health risks associated with the presence of trace metals in environmental samples are mainly attributed to their carcinogenic nature when in contact with living tissues via different exposure routes such as dermal contact, inhalation or consumption of contaminated fruits and vegetables14–17. 
Therefore, soil has been identified as one of the routes through which trace metals are released into the environment18,19.\n\nThe disposal of solid waste in open landfills not only increases the risk of soil and groundwater contamination but also increases the risk of fire outbreak as there is no mechanism for the control and collection of landfill gaseous emissions. Open and uncontrolled landfilling of waste therefore constitutes a real threat to public and environmental health. However, open landfills will continue to serve as the preferred disposal option in some parts of the developing world, and to some extent in other emerging economies, at least into the foreseeable future. Consequently, public health and safety must be given top priority in the operation of these landfills.\n\nIn order to minimize the environmental burden arising from the operation of these landfills, it is very important to carry out environmental risk assessments on a regular basis. The assessment will serve as an environmental monitoring tool and help to establish if there is any need for a comprehensive risk assessment or remediation strategy, especially when the quantified values of trace metals exceed regulatory values. In addition, previous research has shown that the levels of trace metals in any soil sample is a combination of natural and anthropogenic factors20–22. Anthropogenic sources of trace metals in soils are mainly from hazardous solid wastes, combustion processes in industries and transportation, mining activities, as well as long-term and extensive use of pesticides on agricultural land23. The concentration of trace metals from anthropogenic inputs may exceed those from natural sources24.\n\nThe aim of this study is to determine the levels of trace metals in soil samples taken from open landfill sites using inductively coupled plasma-optical emission spectrometry (ICP-OES). 
While there are many studies on waste management practices within urban centres in Ibadan25,26, there are very few studies on the risk assessment of the city’s waste management system with respect to trace metals and other soil contaminants. The present study aims to fill this gap in knowledge. The results presented here are only a preliminary assessment, and they are site-specific. They do not reflect the general quality of soils around the sampling sites, as this work takes into account neither the background values of the trace metals at the sampling locations nor the effect of contaminant migration. However, it is hoped that this study will serve as a guide for a more comprehensive risk assessment of all landfill sites in Ibadan for heavy metal contamination.\n\nTwo methods are commonly used in the laboratory analysis of environmental samples for trace metals: atomic absorption spectrometry (AAS) and inductively coupled plasma optical emission spectrometry (ICP-OES), both forms of atomic spectrometry27,28. ICP-OES is an atomic emission technique which employs an inductively coupled plasma to generate excited atoms that emit electromagnetic radiation at wavelengths characteristic of a particular element. The sample atoms are subjected to a temperature of around 6727 °C29. ICP-OES exploits the atomic light emission properties of samples: when they are excited at a sufficiently high temperature, the intensity of the emitted light is directly proportional to the concentration of the analyte.\n\nBefore spectroscopic analysis for trace metals in environmental samples, acid digestion is commonly performed on the collected sample, for example in soil and sediment analysis and in the determination of total phosphorus and total nitrogen under heated conditions28. 
The digestion procedure breaks down organically bound substances and converts them to an analysable form using oxidising agents such as sulphuric acid (H2SO4), nitric acid (HNO3), perchloric acid (HClO4), or hydrochloric acid (HCl), or oxidising mixtures such as aqua regia. Sometimes, addition of bromine (Br2) or hydrogen peroxide (H2O2) to mineral acids will increase their solvent action and speed up the release of bound organic materials in the sample.\n\n\nMaterials and methods\n\nA detailed description of the study area as well as the geographical location of the sampling sites is available elsewhere25,30–32. Soil sampling, pre-treatment and preservation were carried out according to33. Several grab samples of the topsoil were collected using hand-held augers and tube samplers from a depth of about 15 cm at two sub-urban landfill sites, namely Aba-Eku and Lapite (Table 1, Figure 1), located on the outskirts of the Ibadan metropolis. The samples were immediately transferred into 50 ml airtight plastic containers (Burkle GmbH, Rheinauen, Germany) with screw caps and labelled as samples A and B according to the landfill site. The soil was dry at the time of sampling, with an ambient temperature of about 29 °C. The sample containers were stored in the laboratory at room temperature without any further pre-treatment before the ICP-OES analysis was carried out.\n\nTest solutions were prepared according to the standard method34. Weighed portions of the soil sample (1.0 g each) were placed in three separate glass beakers that had been thoroughly rinsed with dilute nitric acid and then deionized water. 20 ml of nitric acid (7 mol/l) was added to each beaker, and the beakers were then transferred into an autoclave for acid digestion, where they were kept at 121 °C for about 30 minutes. 
The contents of the beakers were carefully filtered into three 100 ml volumetric flasks after cooling to room temperature.\n\nThe mixed standard solution for the calibration step was prepared following Rezić and Steffan’s method35. Six 100 ml volumetric flasks were carefully rinsed with dilute nitric acid followed by deionized water. A calculated volume of the prepared test solution with 0.1 % nitric acid solution (Table 2) was added to each flask using a pipette, and each flask was filled to the 100 ml mark with distilled water. The test solution and the calibration solutions were subsequently transferred for ICP-OES analysis.\n\nThe samples of the test solutions were analysed for trace metals according to29. A test sample (100 ml) was placed in the measurement chamber of an iCAP™ 7000 spectrometer (Thermo Fisher Scientific GmbH, Germany) after allowing sufficient time for initial start-up of the instrument. The relevant test parameter values were entered via a PC connected to the instrument. The entire process was fully automated, and the measured quantities of trace metals were generated as printed output. Two measurements were performed on the samples from each landfill site and the result was reported as the average of all measurements (Figure 2).\n\n\nResults and discussion\n\nThe ICP-OES spectrometer generated data on the concentration of trace metals in the analysed soil samples. The reported concentration is that of the liquid digest samples in milligrams per litre (mg/l). These data were then converted to obtain the concentration in milligrams per kilogram dry weight of the soil samples (see Equation 1 and Table 4, Table 5).\n\nEquation 1 (Cd = Cw × v / w) is used to calculate the concentration of trace metals in milligrams per kilogram dry weight of the soil samples from the reported concentration in the liquid digest samples. 
Cd is the heavy metal concentration of the dry soil sample (mg/kg), Cw is the heavy metal concentration of the digested soil sample as obtained from ICP-OES analysis (mg/l), v is the volume of the digested solution (ml), and w is the dry weight of the soil sample (g).\n\nTrace metal uptake in plants has been widely reported in several studies, and many bio-remediation techniques are based on this principle. The danger to public health arises from the fact that trace metals can end up in the food chain as a result of bio-accumulation in plant species, especially those grown for human consumption3,31. The key to breaking this chain is to ensure that fruits and vegetables are not grown in areas prone to trace metal contamination, such as landfill sites. A more detailed discussion of the bioavailability of trace metals in the environment can be found elsewhere36,37.\n\nThe quantified levels of trace metals and their comparison with the regulatory limits are shown graphically in Figure 3–Figure 6. The implications of these results can be better understood by making a direct comparison with the specified values in the appropriate regulatory standard, or by comparing with quantified values from an uncontaminated sample. The test results show that the quantified levels of copper and zinc exceeded both the threshold and the lower guideline values specified by the Finnish government decree on the assessment of soil contamination and remediation needs (Table 3), while the values for lead exceeded only the threshold values at both landfill sites. The lower and higher guideline values are indicators of soil contamination under different land-use conditions38. 
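The dry-weight conversion described by the variable definitions above (Cd = Cw × v / w) can be sketched in a few lines. This is an illustrative Python sketch, not code from the study; the function name and the sample reading are hypothetical, chosen to match the 100 ml volumetric flasks and 1.0 g samples described in the methods.

```python
def dry_soil_concentration(cw_mg_per_l, v_ml, w_g):
    # Equation 1: Cd = Cw * v / w.
    # With Cw in mg/l, v in ml and w in g, the ml->l factor (1/1000)
    # and the g->kg factor (1000) cancel, so the result is already
    # in mg/kg dry weight.
    return cw_mg_per_l * v_ml / w_g

# Illustrative reading: a 1.0 g sample digested and made up to 100 ml,
# with a digest concentration of 12.05 mg/l, gives 1205 mg/kg dry weight.
print(dry_soil_concentration(12.05, 100.0, 1.0))
```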
The reference document, ‘Government Decree on the Assessment of Soil Contamination and Remediation Needs’, was adopted as a source of threshold and lower guideline values for levels of copper and zinc in the tested soil samples, as there was no applicable regulatory standard in Nigeria with respect to soil contamination and remediation.\n\nThese results also confirm findings from previous communications that reported poor leachate management at the Aba-Eku landfill site1,26. Based on these findings, and owing to the increasing risk of human exposure, there is an urgent need for a comprehensive risk assessment at the landfill sites, and if possible remediation measures need to be put in place in order to protect the rapidly expanding residential areas around these sites as well as the groundwater source. However, it is worth pointing out that this study does not take into account the background values of the trace metals at the sampling sites. Therefore, a direct comparison with a regulatory limit only serves as a guide, and a more comprehensive study of the landfill sites is needed in order to establish the full ecological and health risks, including the risk of groundwater contamination.\n\n\nConclusions\n\nThis paper reports the findings of a study that assessed soil contamination by trace metals as a result of uncontrolled disposal of solid waste in two municipal landfills in the Ibadan metropolis. The investigation was part of an environmental risk assessment of the current waste management and disposal practices in Ibadan, Nigeria. Several samples of soil collected at the landfill sites were analysed for trace metals using ICP-OES. The results of this analysis, which do not take into account the background values of the trace metals in the area, show elevated levels of lead, zinc and copper at the two landfill sites, above the limits specified in the applicable regulatory standard38. 
There is a need for a more comprehensive risk assessment of the landfill sites with respect to soil and groundwater contamination, and this study will serve as a basis for a more comprehensive evaluation of all landfill sites in the city.\n\n\nData availability\n\nDataset 1: Trace metal assessment results from the Aba-Eku landfill site using the ICP-OES technique. DOI, 10.5256/f1000research.9673.d16318839\n\nDataset 2: Trace metal assessment results from the Lapite landfill site using the ICP-OES technique. DOI, 10.5256/f1000research.9673.d16318940",
"appendix": "Author contributions\n\n\n\nA. B. and O. B. designed the study, A. B. performed the experiments, A. B. and O. B. carried out the data analysis. All authors contributed towards the preparation and submission of the manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nAluko OO, Sridhar MK: Application of constructed wetlands to the treatment of leachates from a municipal solid waste landfill in Ibadan, Nigeria. J Environ Health. 2005; 67(10): 58–62. PubMed Abstract\n\nBodzek M, Surmacz-Gorska J, Hung Y: Treatment of Landfill Leachate. In: Hazardous Industrial Waste Treatment. Boca Raton, FL: CRC Press. 2006; 441–94. Publisher Full Text\n\nIdris A, Saed K, Hung Y: Leachate Treatment Using Bioremediation. In: Advances in Hazardous Industrial Waste Treatment. Boca Raton, FL: CRC Press. 2008; 175–91. Publisher Full Text\n\nKjeldsen P, Barlaz MA, Rooker AP, et al.: Present and Long-Term Composition of MSW Landfill Leachate: A Review. Crit Rev Environ Sci Technol. 2002; 32(4): 297–336. Publisher Full Text\n\nKurniawan TA, Lo W, Chan G, et al.: Biological processes for treatment of landfill leachate. J Environ Monit. 2010; 12(11): 2032–47. PubMed Abstract | Publisher Full Text\n\nDiels L, Van der Lelie N, Bastiaens L: New Developments in Treatment of Heavy Metal Contaminated Soils. Rev Environ Sci Biotechnol. 2002; 1(1): 75–82. Publisher Full Text\n\nNduka JK, Orisakwe OE, Ezenweke LO, et al.: Metal Contamination and infiltration into the soil at refuse dump sites in Awka, Nigeria. Arch Environ Occup Health. 2006; 61(5): 197–204. PubMed Abstract | Publisher Full Text\n\nRaghab SM, Abd El Meguid AM, Hegazi HA: Treatment of Leachate from Municipal Solid Waste Landfill. HBRC J. 2013; 9(2): 187–92. Publisher Full Text\n\nPichtel J: Waste Management Practices: Municipal, Hazardous, and Industrial. 
Boca Raton, FL: CRC Press; 2005. Reference Source\n\nSvojitka J, Wintgens T, Melin T: Treatment of Landfill Leachate in a Bench-Scale MBR. Desalination Water Treat. 2009; 9(1–3): 136–41. Publisher Full Text\n\nTwardowska I: Assessment of Pollution Potential from Solid Waste. In: Solid Waste: Assessment, Monitoring and Remediation. Amsterdam: Elsevier; 2004; 173–494. Reference Source\n\nWilliams PT: Waste Treatment and Disposal. 2nd ed. Chichester: John Wiley & Sons; 2005. Publisher Full Text\n\nWorrell WA, Vesilind PA: Solid Waste Engineering. 2nd ed. Stamford: Cengage Learning; 2012. Reference Source\n\nDong X, Li C, Li J, et al.: A Novel Approach for Soil Contamination Assessment from Heavy Metal Pollution: A Linkage between Discharge and Adsorption. J Hazard Mater. 2010; 175(1–3): 1022–30. PubMed Abstract | Publisher Full Text\n\nKasassi A, Rakimbei P, Karagiannidis A, et al.: Soil Contamination by Heavy Metals: Measurements from a Closed Unlined Landfill. Bioresour Technol. 2008; 99(18): 8578–8584. PubMed Abstract | Publisher Full Text\n\nSu C, Jiang L, Zang W: A Review on Heavy Metal Contamination in the Soil Worldwide. Environ Skept Crit. 2014; 3(2): 24–38. Reference Source\n\nYao Z, Li J, Xie H, et al.: Review on Remediation Technologies of Soil Contaminated by Heavy Metals. Procedia Environ Sci. 2012; 16: 722–729. Publisher Full Text\n\nYekeen TA, Xu X, Zhang Y, et al.: Assessment of health risk of trace metal pollution in surface soil and road dust from e-waste recycling area in China. Environ Sci Pollut Res. 2016; 23(17): 17511–24. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHong AH, Law PL, Onni SS: Environmental Burden of Heavy Metal Contamination Levels in Soil from Sewage Irrigation Area of Geriyo Catchment, Nigeria. Civ Environ Res. 2014; 6(10): 118–24. Reference Source\n\nBlaser P, Zimmermann S, Luster J, et al.: Critical examination of trace element enrichments and depletions in soils: As, Cr, Cu, Ni, Pb, and Zn in Swiss forest soils. 
Sci Total Environ. 2000; 249(1–3): 257–280. PubMed Abstract | Publisher Full Text\n\nPalumbo B, Angelone M, Bellanca A, et al.: Influence of Inheritance and Pedogenesis on Heavy Metal Distribution in Soils of Sicily, Italy. Geoderma. 2000; 95(3–4): 247–66. Publisher Full Text\n\nSalonen V-P, Korkka-Niemi K: Influence of Parent Sediment on the Concentration of Heavy Metals in Urban and Sub-Urban Soils in Turku, Finland. Appl Geochem. 2007; 22(5): 906–18. Publisher Full Text\n\nParth V, Murthy NN, Saxena PR: Assessment of Heavy Metal Contamination in Soil Around Hazardous Waste Disposal Sites in Hyderabad City (India): Natural and Anthropogenic Implications. E3 J Environ Res Manag. 2011; 2(2): 27–34. Reference Source\n\nMirsal I: Soil Pollution: Origin, Monitoring & Remediation. Heidelberg: Springer; 2004. Publisher Full Text\n\nAdelekan BA, Alawode AO: Contributions of Municipal Refuse Dumps to Heavy Metals Concentrations in Soil Profile and Groundwater in Ibadan Nigeria. J Appl Biosci. 2011; 40: 2727–37. Reference Source\n\nAluko OO, Sridhar MKC, Oluwande PA: Characterization of Leachates from Municipal Solid Waste Landfill Site in Ibadan, Nigeria. J Environ Health Res. 2003; 2(1): 32–7. Reference Source\n\nBukar LI, Hati SS, Dimari GA, et al.: Study of Vertical Migration of Heavy Metals in Dumpsite Soils. ARPN J Sci Technol. 2012; 2(2): 50–5. Reference Source\n\nZhang CC: Fundamentals of Environmental Sampling and Analysis. Hoboken, NJ: John Wiley & Sons; 2007. Reference Source\n\nEN 16170: 2016: Sludge, Treated Biowaste and Soil - Determination of Elements Using Inductively Coupled Plasma Optical Emission Spectrometry (ICP-OES). [Internet]. European Committee for Standardization. [cited 2017 May 20]; 2016. Reference Source\n\nAluko OO, Sridhar M: Evaluation of effluents from bench-scale treatment combinations for landfill leachate in Ibadan, Nigeria. Waste Manag Res. 2014; 32(1): 70–8. 
PubMed Abstract | Publisher Full Text\n\nSola O, Awodoyin RO, Opadeji T: Urban Agricultural Production: Heavy Metal Contamination of Amaranthus Cruentus L. Grown on Domestic Landfill Soils in Ibadan, Nigeria. Emir J Agric Sci. 2003; 15(2): 87–94. Publisher Full Text\n\nAluko OO, Sridhar MK: Evaluation of leachate treatment by trickling filter and sequencing batch reactor processes in Ibadan, Nigeria. Waste Manag Res. 2013; 31(7): 700–5. PubMed Abstract | Publisher Full Text\n\nEN 16179: 2012: Sludge, Treated Biowaste and Soil - Guidance for Sample Pretreatment. [Internet]. European Committee for Standardization. [cited 20 May 2017]; 2012. Reference Source\n\nEN 16173: 2012: Sludge, Treated Biowaste and Soil - Digestion of Nitric Acid Soluble Fractions of Elements. [Internet]. European Committee for Standardization. [cited 20 May 2017]; 2012. Reference Source\n\nRezić I, Steffan I: ICP-OES Determination of Metals Present in Textile Materials. Microchem J. 2007; 85(1): 46–51. Publisher Full Text\n\nBradl H: Heavy Metals in the Environment Origin, Interaction and Remediation. Amsterdam: Academic Press; 2005. Reference Source\n\nNagajyoti PC, Lee KD, Sreekanth TVM: Heavy Metals, Occurrence and Toxicity for Plants: A Review. Environ Chem Lett. 2010; 8(3): 199–216. Publisher Full Text\n\nGovernment Decree on the Assessment of Soil Contamination and Remediation Needs. 214/2007 [Internet]. Finland’s Ministry of Justice. [cited 20 May 2017]. 2007. Reference Source\n\nOyegbile B, Brian O: Dataset 1 in: Preliminary assessment of soil contamination by trace metals in peri-urban municipal landfills in Ibadan, Nigeria. F1000Research. 2017. Data Source\n\nOyegbile B, Brian O: Dataset 2 in: Preliminary assessment of soil contamination by trace metals in peri-urban municipal landfills in Ibadan, Nigeria. F1000Research. 2017. Data Source"
}
|
[
{
"id": "23556",
"date": "16 Jun 2017",
"name": "Mynepalli Sridhar",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe paper is interesting as it focuses on some of the heavy metals which find their way into food via wastes disposed in the city. The authors have taken pains in bringing out the salient features of the heavy metal contamination. The analytical methods used are appropriate. They can be reproduced by other scientists. To strengthen the outcome, the study should have looked into the composition of the wastes being dumped at the landfill and also compared with age of the wastes and their heavy metal composition and leachability. The type of soils and the composition is not adequately described. The sample size of the soils should have been more than what the authors have collected, sieved through a specific particulate size before processing. This should have given a better and homogenous sample. Also, finding the levels of heavy metals at various depths of soil may have been useful. Heavy metal levels of control soil samples should have added the value to the work.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? 
Yes\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": []
},
{
"id": "27064",
"date": "06 Nov 2017",
"name": "Chidozie C Nnaji",
"expertise": [
"Reviewer Expertise Water and wastewater treatment",
"water supply",
"solid waste management",
"soil and water pollution."
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe article was well written but is deficient in terms of methodology design. Given below is a list of points which if addressed will greatly improve the quality of the paper. I have also attached an annotated pdf version of the manuscript - please download to access the comments.\n\nYour Introduction suggests that you are mistaking large urban open solid waste dumps for sanitary landfills. Best waste management practices require that material and energy recovery processes be applied to solid waste streams before disposal in landfills. What you have described fits open dumps and not landfills. I suggest you change the term landfill to open dump.\n\nWhy was the Finnish government decree for soil remediation used? You argued that Nigeria has no such standards. If that is so, you should have used USEPA guideline values. However, it may interest you to know that the Department of Petroleum Resources has published guideline values for heavy metals Nigerian soils. Please see the article published by Wuana and Okeimen (2011) which can be assessed with the following doi: http://dx.doi.org/10.5402/2011/402647. It is suggested that you also use the DPR guideline values even if you would like to retain the Finnish guideline values.\n\nYour experimental months contain too much detail which can easily be found in standard methods or equipment manufacturers’ manuals.\n\nThe major weakness of this paper is that very few samples were analysed. 
Taking four random samples from a 21 Ha waste dump is inadequate and the results can hardly be generalized. One would expect that there would be a spatial variation of these heavy metals in the soil which should have necessitated a systematically staggered/spaced sampling points in order to obtain representative results.\n\nBoth table and graphs were presented for the same dataset. You should either use tables or graphs for your data depending on which you find more suitable, but you cannot use both for the same data.\n\nDiscussion of results is shallow.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Partly\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nNot applicable\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-871
|
https://f1000research.com/articles/6-122/v1
|
09 Feb 17
|
{
"type": "Research Note",
"title": "Insights for conducting real-time focus groups online using a web conferencing service",
"authors": [
"James Kite",
"Philayrath Phongsavan",
"Philayrath Phongsavan"
],
"abstract": "Background Online focus groups have been increasing in use over the last 2 decades, including in biomedical and health-related research. However, most of this research has made use of text-based services such as email, discussion boards, and chat rooms, which do not replicate the experience of face-to-face focus groups. Web conferencing services have the potential to more closely match the face-to-face focus group experience, including important visual and aural cues. This paper provides critical reflections on using a web conferencing service to conduct online focus groups.\nMethods As part of a broader study, we conducted both online and face-to-face focus groups with participants. The online groups were conducted in real-time using the web conferencing service, Blackboard CollaborateTM. We used reflective practice to assess how the conduct and content of the groups were similar and how they differed across the two platforms.\nResults We found that further research using such services is warranted, particularly when working with hard-to-reach or geographically dispersed populations. The level of discussion and the quality of the data obtained was similar to that found in face-to-face groups. However, some issues remain, particularly in relation to managing technical issues experienced by participants and ensuring adequate recording quality to facilitate transcription and analysis.\nConclusions Our experience with using web conferencing for online focus groups suggests that they have the potential to offer a realistic and comparable alternative to face-to-face focus groups, especially for geographically dispersed populations such as rural and remote health practitioners. Further testing of these services is warranted but researchers should carefully consider the service they use to minimise the impact of technical difficulties.",
"keywords": [
"online research",
"focus groups",
"web conferencing",
"methodology",
"critical reflection"
],
"content": "Introduction\n\nFocus groups are a well-established qualitative research methodology that have become increasingly popular among social researchers over the last few decades1,2. Their popularity is tied to their ability to use group interactions to elicit detailed responses, which have been shaped as much by social cues as by the individual’s own beliefs and perceptions3–5. However, traditional face-to-face focus groups have some disadvantages, particularly when dealing with hard-to-reach or geographically dispersed populations and sensitive topics6–10. Increasingly, the Internet offers a real alternative to face-to-face groups as technology improves and connection spreads. Online focus groups therefore have the potential to address this gap while also offering researchers the opportunity to avoid the costs of finding an ideal location to host their groups7,11.\n\nUsing online platforms for focus groups has been trialled over the last 20 years with an increasing number of studies making use of asynchronous platforms (e.g. email and discussion boards) for their research8,9,11,12. This includes research with rural and remote nurses4, travelling nurses13, and gay and bisexual men with cancer10. There are a number of potential benefits for conducting research in this way, including increased speed of data collection, lower cost, and, of particular relevance to biomedical and health-related research, improved opportunity for some population groups to participate in research2,14–16. However, using text-based platforms changes the nature of focus groups, with the major criticisms being that you lose spontaneity in participant responses and also visual and aural cues, which collectively promote the expression of emotions and can be very influential in directing participant interactions2,3,8. Some researchers have made use of chat services to run synchronous (i.e. 
real-time) online focus groups [for example8] but again the visual and aural cues are lost, and participants and moderators must be skilled in reading and writing to be able to respond quickly while also being as unambiguous as possible in order to avoid misunderstandings11. Chat-based focus groups also come with a risk of returning inadequate data quality as participants and moderators take short-cuts to speed up writing2,17.\n\nAudio-visual tools, such as web conferencing services, offer a way of more closely mirroring the experience of a face-to-face focus group but this appears to be an under-used approach. Indeed, we found only two studies, both published in 2015, that report on the experience of conducting focus groups in this way11,13. This is likely because, until recently, limited bandwidth and inappropriate or inadequate platforms meant that online face-to-face groups faced significant technical barriers. This is no longer the case with Internet penetration and speed accelerating rapidly, especially in developing nations18. In addition, and importantly, there is research that suggests that social interactions are similar in both the face-to-face and online environments19, indicating that online face-to-face focus groups may be a viable alternative to traditional groups. To date, however, there is little available evidence on the experience of conducting groups in this way. To address this gap, we report on our experience of conducting focus groups using a web conferencing service, with comparisons to traditional face-to-face focus groups where relevant.\n\n\nMethods\n\nWe recently conducted an evaluation of a postgraduate subject in public health at the University of Sydney, approved by the University of Sydney Human Research Ethics Committee (Project No. 2014/1015). 
In the unit being evaluated, students have the choice of studying either face-to-face or online, with most students who complete the unit online either living a significant distance from campus (e.g. interstate) or having other commitments (e.g. full-time work) that make it difficult to attend face-to-face classes. Although the unit content is the same regardless of the delivery mode, we hypothesised that the student learning experience would be significantly different, making it important that both face-to-face students and online students had an equal opportunity to participate in this study. One part of the evaluation study involved focus groups with former and current students and so, to ensure equal opportunity to participate, we offered both face-to-face and online focus groups. The online focus groups were conducted in real time over a web conferencing service, Blackboard CollaborateTM (Version 9.1; http://www.blackboard.com/online-collaborative-learning/).\n\nParticipants were recruited via email in a two-stage process. The first stage involved seeking expressions of interest in participating in the research from all students who completed the subject in either 2013 or 2014 (n=400). Emails were sent to their official University email accounts. The second stage required those who had expressed interest (n=23) to complete a short survey (see Supplementary material) on their preferred focus group platform, as well as time and date availability. Two participants had to withdraw after this second stage as it was not possible to accommodate their schedules.\n\nIn total, we conducted 5 focus groups: 3 face-to-face (two groups with n=4 and one with n=6 participants) and 2 online (n=3 and n=4 participants respectively). The focus groups ran for approximately 90 minutes but online participants were encouraged to log on before the scheduled start of the focus group to allow time to calibrate microphones and cameras if required. 
Additionally, in the invitation to the focus group, online participants had been directed to a Help page (http://sydney.edu.au/elearning/staff/help/collaborateHelp.shtml) where they could trial Blackboard CollaborateTM and ensure that their system met the minimum requirements for running this service.\n\nModeration of all groups was conducted by the same person (JK), with an effort made to keep the style of moderation similar in both online and face-to-face groups. This included allowing multiple speakers at the same time during online groups, as opposed to setting CollaborateTM to only allow one speaker at a time. The topics for discussion, which were the same in both the online and face-to-face focus groups, focused on assessment practices within the subject but also canvassed experiences with tutorials and lectures (see Supplementary material for complete discussion guide). The only difference between the online and face-to-face focus groups was that some time was allocated at the beginning of the online focus groups to provide a brief tutorial on using some of Collaborate’sTM features. This paper does not report the findings from these focus groups; these are available elsewhere20.\n\nWe used reflective practice to assess how the conduct and content of the groups were similar and how they differed across the two platforms21. This involved reviewing the audio recordings and transcripts from the focus groups, assessing what happened, reflecting on our conduct and interactions with the groups, and considering what we would do differently next time. In particular, we reflected on our own experience (i.e. as researchers) with managing and moderating the groups. All of the reflections presented here are based on our impression of how the groups functioned and what we perceived as being the most salient issues that arose throughout. 
Although we did not directly ask the participants about their experience in either format of focus group, we were also able to glean some insights from remarks they made during the conduct of the focus group.\n\n\nParticipants’ and researchers’ experience\n\nWhen we were arranging the groups, potential participants were asked to nominate their preferred platform for the focus group. Based on responses to the survey, there was considerable interest (n=17 of 23 responses) in conducting online groups, especially among those who had work or family commitments that made attending face-to-face focus groups more difficult. Indeed, some of the online participants mentioned that they were grateful for being given the opportunity to put forward their views in such a forum, something that would not have been possible had we only conducted face-to-face groups.\n\n“Thanks for doing this. Interesting to know subjects seriously evaluating how they do things.”\n\nThe advantage of using a web conferencing service compared to a chat service to run synchronous focus groups online is that it more closely mirrors the experience of face-to-face groups: participants are able to respond to visual and aural cues that would otherwise be missed. This was certainly evident in the online focus groups; our perception was that the interaction between participants and the moderator was dynamic and similar to that experienced in the face-to-face groups. Participants were genuinely engaged and attentive, although personal issues (e.g. distractions from phones, children, and other background noise) did occasionally interrupt discussions. Further, we noted that communication was considerably slower and more time was spent discussing issues of no relevance to the research, compared to the face-to-face groups. 
In particular, online participants spent some time familiarising themselves and each other with the web conferencing service, as well as discussing the novelty of the web conferencing service and any technical difficulties they had experienced or were experiencing, as shown in the following example.\n\nParticipant 1: “[I think the] mic is bad. This is not working well.”\n\nParticipant 2: “We can hear you alright but it is just cutting out.”\n\nParticipant 1: “I’ll try logging off.”\n\nAlthough the slower and more distracted discussion did produce less data overall20, the quality of the data we did obtain was equal to that in the face-to-face groups, which is in line with the experience of Abrams et al.11. In general, the themes that emerged from both the face-to-face and online groups were similar but it was obvious that there were critical differences in experience between online and face-to-face students. By way of example, a group assignment was discussed at length in all of the focus groups but the difference in experience for face-to-face and online students could not have been starker. The face-to-face students praised it but the online students had a deeply negative experience. This contrast may have been missed had we not conducted the online focus groups.\n\nImportantly, the online groups did function as a focus group should; that is, there was genuine discussion between participants, rather than just between a participant and an interviewer, as you would find in a group interview5. The groups still provided important and valuable insights even though they did not cover as many topics as the face-to-face groups. In recognition of this, in the second group the moderator focused more on topics where we expected the experience of online and face-to-face students to be most divergent (e.g. tutorials and group work) and less on experiences that were likely to be similar (e.g. written assignments). 
This change did not involve any modification to the discussion guide, only a change in the time allocated to each section.\n\nThe experience of moderation was relatively similar across both platforms but there were some minor differences. In particular, the moderator had to be familiar with CollaborateTM in order to be able to quickly troubleshoot with participants when needed, something that was obviously not necessary for the face-to-face groups. This included having to deal with participants who were using the chat feature when their microphones were not working. Having one participant contributing to the discussions via chat did add a layer of complexity and meant that the moderator had to allow additional time for the participant to type and for all participants to read and react to the response. However, this did not appear to affect the conversation in any meaningful way, although it is possible that participants using chat features may have been taking short-cuts to speed up writing, as noted in previous research2,17. It is also worth noting that it was not necessary for the moderator to alter their style or speed of talking as participants could generally hear the conversation clearly, as they would in a face-to-face group.\n\nWe noted that online participants were more inclined to withdraw from the study. We had originally recruited 5 participants for each focus group but 3 withdrew (2 before group 1 and 1 before group 2) in the hour before their scheduled focus group was due to commence. Further, 2 more participants withdrew (1 in each group) after the groups commenced, with one having technical difficulties and the other because of constant distractions from their children. In contrast, only 1 participant withdrew from the face-to-face groups.\n\nFinally, sound quality was a significant issue, specifically for transcription purposes. Participants did experience some difficulty hearing each other during the groups but this was not a major problem. 
However, when it came time to transcribe the recordings, echoing made it extremely difficult to do so accurately.\n\n\nDiscussion\n\nWe found that the use of a web conferencing service to conduct focus groups has potential, even though there are a number of issues. It is worth highlighting, however, the thankfulness reported by participants in the online groups, and the fact that there were some critical differences in experience between online and face-to-face students that may have been missed without conducting the online groups. Although some may argue that we could have captured the views of online students through other means like a survey, such an approach would have meant losing the social interactions that are a key feature of focus groups. This highlights the importance of continuing to trial new technologies so that hard-to-reach groups are given greater opportunity to participate in all types of research.\n\nIt was not unexpected that participants spent considerable time in the online focus groups discussing technical difficulties they were experiencing or familiarising themselves with the technology. To circumvent this, we directed participants to the Help page in the participant information and encouraged them to log on early, both of which were done by at least some participants, although it was not clear whether all participants made use of these options. We also allocated time at the beginning of the focus group to providing a brief tutorial on CollaborateTM but, despite all this, significant time was still spent discussing the service itself at the expense of the research topics. This suggests that more may need to be done to address this issue, which may include, for example, scheduling more time for online focus groups than for comparable face-to-face groups. 
Alternatively, other researchers planning on using a web conferencing service could consider scheduling fewer topics for discussion.\n\nWe had selected Blackboard CollaborateTM as the web conferencing service because it is supported by the University’s online learning management system, with which participants would be familiar. Using this service also meant that participants could access it without needing to create additional online profiles and did not need to download any software. CollaborateTM also offers the ability to upload slides, which can be viewed and edited by participants during the session, a feature not offered by some other web conferencing services. It also includes an in-built recording system, which means that researchers can avoid the need to source or purchase a stand-alone recording device and the need to ensure sufficient battery life in order to record the entire discussion. The service also provided prompts to begin recording, reducing the risk of missing any of the discussion by mistake. However, a downside was that, although participants were familiar with the online learning management system, they were unfamiliar with CollaborateTM as the University had only recently begun supporting it. One participant who had had considerable trouble joining the focus group even asked why we had not used a more familiar service like Skype. Researchers planning online focus groups should consider following the lead of Tuttas13 and evaluating several available web conferencing services when designing their study.\n\nSimilar to our experience, Tuttas13 encountered issues with sound quality but resolved them by asking participants to mute their microphones when they were not speaking. This may be advisable as standard practice for other researchers, at least until the technology improves. That said, this would encourage participants to take turns to speak and therefore might reduce the dynamic nature of the discussion. 
This risks making the group less like a face-to-face focus group and more like a group interview5,11. An alternative may be to ask all participants to use headsets with a microphone, rather than relying on their computer’s in-built speakers and microphone. The use of headsets would eliminate echoing, improving the quality of recording.\n\nThe phenomenon of online participants being more likely to withdraw was also noted by Tuttas13, suggesting that there is something about the online environment that reduces the connection between participants and the research. One of the benefits of online research is that participants can feel an increased sense of anonymity and may therefore be more willing to offer their opinion14,15. It may be, however, that this feeling of anonymity reduces participants’ connection with the research and makes them more likely to disengage. Tuttas13 recommends that researchers over-sample in order to compensate for this attrition, which we echo but with the caveat that the smaller group sizes that we had were easier to manage in the online environment. Had these participants not withdrawn, we believe the larger group sizes would have further reduced the number of topics covered. Researchers may therefore find it worthwhile to plan for more online focus groups with fewer participants than they would if conducting face-to-face groups.\n\nOnline focus groups are being used within biomedical and health-related research, usually to enable increased participant anonymity when discussing sensitive topics or to bring together hard-to-reach populations4,10,13. Their potential is also recognised in advertising research16, which has implications for social researchers interested in the impact of exposure to advertising on health. 
However, these studies have recognised the limitations of text-based platforms, which include difficulty in organising and managing real-time groups, the need for motivated participants in asynchronous groups in order to maintain participation over several days or weeks, and the exclusion of participants with low literacy levels. Our experience suggests that web conferencing services offer a viable alternative to face-to-face focus groups and are worthy of further testing. Importantly, they can overcome many of the barriers inherent in text-based groups, which would strengthen the methodology.\n\nThe major limitation of this paper is that we had not put any formal reflective process in place before conducting the focus groups because we had not originally intended to explore the use of online focus groups. This has meant that we were only able to take into account our own perceptions and experience with the groups and not those of the participants. Nonetheless, we feel that these reflections are potentially valuable for researchers interested in using this methodology because, to date, very little is available to guide the implementation of online focus groups. More formal testing of this method is needed but our reflections should help to improve their design and implementation while this testing is being carried out.\n\n\nConclusions\n\nOur experience with using a web conferencing service to conduct a real-time focus group was mixed. Online services have the potential to offer a realistic and comparable alternative to face-to-face focus groups for geographically dispersed populations. Further testing of available services is certainly warranted. 
However, technical difficulties, particularly with ease of participant access and poor recording quality, mean that we strongly recommend that researchers carefully consider and test the web conferencing service that they intend to use for hosting their focus groups.\n\n\nData availability\n\nThe qualitative data underpinning this analysis is not available because it cannot be sufficiently anonymised.",
"appendix": "Author contributions\n\n\n\nJK and PH conceived, designed, and implemented the study. JK moderated the focus groups, led the analysis, and wrote the first draft of the manuscript. Both authors contributed to writing the manuscript and have agreed to the final content.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis work was supported by the Sydney School of Public Health, University of Sydney’s Research into Teaching Seed Funding, 2014.\n\n\nAcknowledgements\n\nWe would like to acknowledge and thank all of the students who participated in this study, as well as Catherine Kiernan for transcribing the focus groups.\n\n\nSupplementary material\n\nShort survey and discussion guides.\n\n\nReferences\n\nPeek L, Fothergill A: Using focus groups: lessons from studying daycare centers, 9/11, and Hurricane Katrina. Qual Res. 2009; 9(1): 31–59. Publisher Full Text\n\nCarey MA: Focus Groups--What Is the Same, What Is New, What Is Next? Qual Health Res. 2016; 26(6): 731–3. PubMed Abstract | Publisher Full Text\n\nStewart K, Williams M: Researching online populations: the use of online focus groups for social research. Qual Res. 2005; 5(4): 395–416. Publisher Full Text\n\nKenny AJ: Interaction in cyberspace: an online focus group. J Adv Nurs. 2005; 49(4): 414–22. PubMed Abstract | Publisher Full Text\n\nParker A, Tritter J: Focus group method and methodology: current practice and recent debate. International Journal of Research & Method in Education. 2006; 29(1): 23–37. Publisher Full Text\n\nBirnbaum MH: Human research and data collection via the internet. Annu Rev Psychol. 2004; 55(1): 803–32. PubMed Abstract | Publisher Full Text\n\nBurton LJ, Bruening JE: Technology and Method Intersect in the Online Focus Group. Quest. 2003; 55(4): 315–27. Publisher Full Text\n\nFox FE, Morris M, Rumsey N: Doing synchronous online focus groups with young people: methodological reflections. Qual Health Res. 
2007; 17(4): 539–47. PubMed Abstract | Publisher Full Text\n\nTabak SJ, Klettke B, Knight T: Simulated jury decision making in online focus groups. Qualitative Research Journal. 2013; 13(1): 102–13. Publisher Full Text\n\nThomas C, Wootten A, Robinson P: The experiences of gay and bisexual men diagnosed with prostate cancer: results from an online focus group. Eur J Cancer Care (Engl). 2013; 22(4): 522–9. PubMed Abstract | Publisher Full Text\n\nAbrams KM, Wang Z, Song YJ, et al.: Data Richness Trade-Offs Between Face-to-Face, Online Audiovisual, and Online Text-Only Focus Groups. Soc Sci Comput Rev. 2015; 33(1): 80–96. Publisher Full Text\n\nMurray PJ: Using virtual focus groups in qualitative research. Qual Health Res. 1997; 7(4): 542–9. Publisher Full Text\n\nTuttas CA: Lessons learned using Web conference technology for online focus group interviews. Qual Health Res. 2015; 25(1): 122–33. PubMed Abstract | Publisher Full Text\n\nAhern NR: Using the Internet to conduct research. Nurse Res. 2005; 13(2): 55–70. PubMed Abstract | Publisher Full Text\n\nTates K, Zwaanswijk M, Otten R, et al.: Online focus groups as a tool to collect data in hard-to-include populations: examples from paediatric oncology. BMC Med Res Methodol. 2009; 9(1): 15. PubMed Abstract | Publisher Full Text | Free Full Text\n\nStewart DW, Shamdasani P: Online Focus Groups. J Advert. 2016; 1–13. Publisher Full Text\n\nVicsek L: Improving data quality and avoiding pitfalls of online text-based focus groups: A practical guide. The Qualitative Report. 2016; 21(7): 1232–42. Reference Source\n\nInternational Telecommunications Union: ICT Facts and Figures 2016. 2016. Reference Source\n\nHoffman D, Novak T, Stein R: The digital consumer. In: Belk R Llamas R editors. The Routledge Companion to Digital Consumption. New York, USA, 2012; 28–38. Publisher Full Text\n\nKite J, Phongsavan P: Evaluating standards-based assessment rubrics in a postgraduate public health subject. 
Assessment & Evaluation in Higher Education. 2016; 1–13. Publisher Full Text\n\nFook J, White S, Gardner F: Critical reflection: a review of contemporary literature and understandings. Critical reflection in health and social care. 2006; 3–20. Reference Source"
}
|
[
{
"id": "21467",
"date": "04 Apr 2017",
"name": "Sarah Collard",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nTitle and Abstract: These are appropriate. There are grammatical errors within the abstract that need to be amended.\n\nArticle content: This article provided an interesting comparison of face-to-face and online focus groups. Using online conferencing is an important topic to explore and these findings can aid researchers in the positives about using online conferencing for hard to reach populations. It also will aid researchers as it provides concerns that arise in using such technology.\nIt is always good to see researchers writing about their own experiences of applying (new) research methods, this gives students and junior researcher a chance to consider any issues with these methods and helps them in their decision to select appropriate ones for their own research.\n\nWe have summarised the two major variants of internet-based focus groups: (a) written chat; and (b) audio/video conferencing. Our paper highlighted the differences and similarities of these two approaches, as well as the strengths and limitations of each internet-based FG methods1. We feel the authors could have made a little more effort to put their particular approach to conducting online focus groups in the wider field of online focus group techniques.\n\nThis article does need amendments as there are quite a lot of grammatical errors and inconsistencies, particularly in product names (e.g. 
Blackboard Collaborate and then it just says Collaborate, and then goes back to Blackboard Collaborate). In regards to the methods, this also needs more details. Reflective practice was used, but this needs more explanation of why this was chosen and the pros versus cons of this method. Was this the method to use?\n\nQuestions also arise in the explanation of how the focus groups were conducted. For example, more explanation on privacy concerns needs to be discussed. Did you lay ground rules down with the participants? Why was privacy not taken into account prior to conducting the focus groups?\n\nAlso, to help other researchers using online conferencing for focus groups, what are key concerns and positive techniques that should be used for future groups? In terms of results, less data were obtained from the online focus groups, why does this not mean there is a significant difference? Please justify why this is still a sufficient method. Regarding the technical issues, how might these be lessened for others?\n\nThe major concerns throughout are the aspects that the analysis of this project was not thought of prior to conducting the research. If it was, this needs to be stated more clearly. In addition, please be careful of word choices (e.g. “thankfulness”) and grammar errors throughout. This will decrease confusion and allow it to read smoother.\n\nConclusions: A bit more explanation about concerns and outcomes needs to be taken into consideration. Particularly after the statement that less data was obtained through the online version compared to face to face. This seems like a large difference, so please state how the online version is still beneficial besides the decrease in data obtained and technical difficulties that can create concern. The topic of this paper is of interest and can bring new insight, but it needs to be a bit more explicit in how.",
"responses": [
{
"c_id": "2680",
"date": "02 May 2017",
"name": "Juliet Bell",
"role": "Reader Comment",
"response": "I completely agree with the statement\" Technical difficulties, particularly with ease of participant access and poor recording quality, mean that we strongly recommend that researchers carefully consider and test the web conferencing service that they intend to use for hosting their focus groups.\" Hence, I would recommend using high quality web conferencing tools like R-HUB web conferencing servers for all your online web conferencing needs. It is an on premise solution which provides a simple and easy to use interface an works from behind the firewall, hence better security."
},
{
"c_id": "2730",
"date": "12 Jun 2017",
"name": "James Kite",
"role": "Author Response",
"response": "Thank you very much for your review. As well as uploading a revised version of our paper, we have outlined our responses to each of your specific queries below. Reviewer comment: Title and Abstract: These are appropriate. There are grammatical errors within the abstract that need to be amended. Authors’ response: We have amended the abstract to try and address any grammatical errors. Reviewer comment: Article content: This article provided an interesting comparison of face-to-face and online focus groups. Using online conferencing is an important topic to explore and these findings can aid researchers in the positives about using online conferencing for hard to reach populations. It also will aid researchers as it provides concerns that arise in using such technology. It is always good to see researchers writing about their own experiences of applying (new) research methods, this gives students and junior researcher a chance to consider any issues with these methods and helps them in their decision to select appropriate ones for their own research. Authors’ response: Thank you. The principal motivation for writing this paper was to share our experience as we would have found this information to be of considerable benefit had it been available at the time we were designing the study. We hope it can be of use to others, as you have suggested. Reviewer comment: We have summarised the two major variants of internet-based focus groups: (a) written chat; and (b) audio/video conferencing. Our paper highlighted the differences and similarities of these two approaches, as well as the strengths and limitations of each internet-based FG methods. We feel the authors could have made a little more effort to put their particular approach to conducting online focus groups in the wider field of online focus group techniques. Authors’ response: Thank you for sharing your paper. We have incorporated its findings into our introduction and discussion. 
Reviewer comment: This article does need amendments as there are quite a lot of grammatical errors and inconsistencies, particularly in product names (e.g. Blackboard Collaborate and then it just says Collaborate, and then goes back to Blackboard Collaborate). Authors’ response: We have tried to address any grammatical errors in the revised version. Reviewer comment: In regards to the methods, this also needs more details. Reflective practice was used, but this needs more explanation of why this was chosen and the pros versus cons of this method. Was this the method to use? Authors’ response: We have added additional discussion of the strengths and weaknesses of reflective practice to the methods and discussion sections. Reviewer comment: Questions also arise in the explanation of how the focus groups were conducted. For example, more explanation on privacy concerns needs to be discussed. Did you lay ground rules down with the participants? Why was privacy not taken into account prior to conducting the focus groups? Authors’ response: Privacy was considered as part of the ethics approval process prior to conducting the focus groups and was not raised as a concern by either the Committee or by participants. From reading your paper, we assume you are most concerned about the potential for others not involved in the study to overhear responses from participants. This was not a major concern in our context, given the discussion topics were not sensitive, but we acknowledge that it may be in other contexts. We have added a point in the discussion to this effect. Reviewer comment: Also, to help other researchers using online conferencing for focus groups, what are key concerns and positive techniques that should be used for future groups? In terms of results, less data were obtained from the online focus groups, why does this not mean there is a significant difference? Please justify why this is still a sufficient method. 
Regarding the technical issues, how might these be lessened for others? Authors’ response: All of the paragraphs contained in the discussion include our recommendations for future research using web-conferencing, including testing various platforms before deciding on the most appropriate service for a particular study, using headsets to overcome echoing in recording, and aiming to recruit a higher number of participants to allow for the higher withdrawal rates. We do acknowledge that we obtained less data overall in our online groups but highlight that the quality of data was comparable to that obtained face-to-face. We believe that quality is more important than quantity, hence why we argue that there was not a significant difference between the 2 groups. At the very least, it is not an insurmountable issue, as we suggest allowing more time for online groups then you would for face-to-face groups. Reviewer comment: The major concerns throughout are the aspects that the analysis of this project was not thought of prior to conducting the research. If it was, this needs to be stated more clearly. Authors’ response: We acknowledge this as a significant limitation for our study and have added additional information to make the implications of this clearer."
}
]
},
{
"id": "22904",
"date": "18 May 2017",
"name": "Marita Hefler",
"expertise": [
"Reviewer Expertise Qualitative research methods including focus groups",
"group and individual interviews",
"program evaluation",
"health promotion and public health."
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis is a useful article for researchers who use focus groups, particularly those for whom online approaches may assist with overcoming barriers of geography, mobility or time constraints to ensure inclusion of hard-to-reach groups. Overall, it provides a number of useful tips and helpful discussion, however there are some issues which require clarification.\n\nIt is unclear if the platform offered for focus group participation matched the mode of study that participants had completed. The second paragraph under the heading methods states that all students who completed the subject were contacted about participating in the research. Those who were interested were then asked about their preferred platform. There does not appear to have been any distinction made about whether students completed the study unit online or face to face. The first paragraph under the sub-heading ‘participants’ and researchers’ experience’ also seems to confirm this. However, the last part of the third paragraph under the same heading seems to suggest that participants were grouped for focus groups according to their mode of study – ie students who completed the unit face-to-face were assigned to face-to-face focus groups and online students to online focus groups. 
(“By way of example, a group assignment was discussed at length in all focus groups, but the difference in experience for face-to-face and online students could not have been starker…This contrast may have been missed had we not conducted the online focus groups”). Can the authors please clarify? If the mode of study was not matched to the focus group platform offered, the statement that the contrast would have been missed is not valid – it would be more a reflection of doing a sufficient number of focus groups, rather than offering participation through different platforms. Overall, I think the authors somewhat downplay the impact of several of the issues on the quality of data collected through online focus groups. The third paragraph on the second page suggests that use of web conferencing came close to approximating the experience of face-to-face groups, however the paragraph goes on to note that communication was slower, and time was needed to resolve technical issues even after encouraging participants to log on and trouble shoot prior to commencement. Not only did this produce less data, but the issue of sound quality mentioned in the last paragraph on page two seems to also be significant – both because of the difficulty participants had hearing each other, and the difficulty of accurate transcription. In addition, one participant being forced to participate via chat would also have impacted on the aural and visual cues which the authors rightly note are an important component of focus groups.\nThere is discussion about online participants being more likely to withdraw (either before or during focus groups), however given the very small sample, this should be treated with caution. 
I would suggest the most salient findings in terms of withdrawal are the challenge of technical difficulties forcing withdrawal, and also the possibility of participants having outside distractions from wherever they are participating, which obviously don’t exist in a dedicated face-to-face focus group environment.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Partly\n\nAre sufficient details of methods and analysis provided to allow replication by others? Partly\n\nIf applicable, is the statistical analysis and its interpretation appropriate? Not applicable\n\nAre all the source data underlying the results available to ensure full reproducibility? No source data required\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": [
{
"c_id": "2731",
"date": "12 Jun 2017",
"name": "James Kite",
"role": "Author Response",
"response": "Thank you very much for your review, Marita. We have outlined our responses to each of your specific queries below and revised the paper accordingly.Reviewer comment: It is unclear if the platform offered for focus group participation matched the mode of study that participants had completed. The second paragraph under the heading methods states that all students who completed the subject were contacted about participating in the research. Those who were interested were then asked about their preferred platform. There does not appear to have been any distinction made about whether students completed the study unit online or face to face. The first paragraph under the sub-heading ‘participants’ and researchers’ experience’ also seems to confirm this. However, the last part of the third paragraph under the same heading seems to suggest that participants were grouped for focus groups according to their mode of study – ie students who completed the unit face-to-face were assigned to face-to-face focus groups and online students to online focus groups. (“By way of example, a group assignment was discussed at length in all focus groups, but the difference in experience for face-to-face and online students could not have been starker…This contrast may have been missed had we not conducted the online focus groups”). Can the authors please clarify? If the mode of study was not matched to the focus group platform offered, the statement that the contrast would have been missed is not valid – it would be more a reflection of doing a sufficient number of focus groups, rather than offering participation through different platforms.Authors’ response: Participants were not forced to complete the focus groups on the platform that matched their mode of study. However, only 2 of the participants who attended the face-to-face groups completed the subject online, while all of the participants who attended the online groups completed the subject online. 
As participant group platform did almost perfectly match the subject delivery mode, we believe that contrast is valid. We have modified the paper to make this detail clear. Reviewer comment: Overall, I think the authors somewhat downplay the impact of several of the issues on the quality of data collected through online focus groups. The third paragraph on the second page suggests that use of web conferencing came close to approximating the experience of face-to-face groups, however the paragraph goes on to note that communication was slower, and time was needed to resolve technical issues even after encouraging participants to log on and trouble shoot prior to commencement. Not only did this produce less data, but the issue of sound quality mentioned in the last paragraph on page two seems to also be significant – both because of the difficulty participants had hearing each other, and the difficulty of accurate transcription. In addition, one participant being forced to participate via chat would also have impacted on the aural and visual cues which the authors rightly note are an important component of focus groups.Authors’ response: It was not our intention to downplay the significance of these issues and we have therefore tried to amend the language around the above points to make this clear. However, we have also clarified that the issue of participants being unable to hear each other during the groups was indeed minor and did not affect the flow of discussion. Reviewer comment: There is discussion about online participants being more likely to withdraw (either before or during focus groups), however given the very small sample, this should be treated with caution. 
I would suggest the most salient findings in terms of withdrawal are the challenge of technical difficulties forcing withdrawal, and also the possibility of participants having outside distractions from wherever they are participating, which obviously don’t exist in a dedicated face-to-face focus group environment.Authors’ response: We agree that the issues of technical difficulties and outside distractions that forced withdrawal are noteworthy because they would not affect face-to-face groups. We have added this point to the ‘Discussion’. We have also added a note regarding the sample size to the ‘Participants’ and researchers’ experience’ section."
}
]
}
] | 1
|
https://f1000research.com/articles/6-122
|
https://f1000research.com/articles/6-867/v1
|
12 Jun 17
|
{
"type": "Case Report",
"title": "Case Report: Nicolau syndrome due to etofenamate injection",
"authors": [
"Emin Ozlu",
"Aysegul Baykan",
"Ragıp Ertas",
"Yılmaz Ulas",
"Kemal Ozyurt",
"Atıl Avcı",
"Halit Baykan",
"Aysegul Baykan",
"Ragıp Ertas",
"Yılmaz Ulas",
"Kemal Ozyurt",
"Atıl Avcı",
"Halit Baykan"
],
"abstract": "Nicolau syndrome, also known as embolia cutis medicomentosa, is a rare complication characterized by tissue necrosis that occurs after injection of drugs. The exact pathogenesis is uncertain, but there are several hypotheses, including direct damage to the end artery and cytotoxic effects of the drug. Severe pain in the immediate postinjection period and purplish discoloration of the skin with reticulate pigmentary pattern is characteristic of this syndrome. Diagnosis is mainly clinical and there is no standard treatment for the disease. Etofenamate is a non-steroidal anti-inflammatory drug and a non-selective cyclooxygenase inhibitor. Cutaneous adverse findings caused by etofenamate are uncommon. Herein, we present a case with diagnosis of Nicolau syndrome due to etofenamate injection, which is a rare occurrence.",
"keywords": [
"Complication",
"etofenamate",
"Nicolau syndrome"
],
"content": "Introduction\n\nNicolau syndrome is a rare complication caused by intramuscular injection of various medications1. The necrosis in the injection site of skin and sometimes muscle is a characteristic feature of this syndrome1. The development of acute vasospasm following intravenous or around the vein injection is the most widely accepted hypothesis in its pathogenesis1. Etofenamate is an anti-inflammatory drug that non-selectively inhibits the cyclooxygenase (COX) pathway2. Herein, we present a rare case of Nicolau syndrome after etofenamate injection.\n\n\nCase report\n\nAn 81-year-old woman was admitted to our clinic with a painful necrotic ulcer in the left gluteal region. Her medical history, which was non-specific, except for back pain, revealed an intramuscular etofenamate injection (1000 mg), due to back pain, 15 days before. Dermatological examination revealed a painful ulcerous plaque with a black necrotic crest in the lateral part of the left gluteal region. This ulcerous plaque appeared indurated and erythematous in its surrounding (Figure 1). Her complaints started with erythematous swelling and pain in the injection site approximately ten days ago. Subsequently, the ulcer developed in the lesion area of the patient's erythematous swelling. There were not any abnormal parameters in both complete blood count and routine biochemistry tests. The patient was diagnosed with Nicolau syndrome based on her medical history and clinical signs and symptoms. Biopsy from the lesion area was not obtained, as it could develop more necrosis in the lesion. Etofenamate treatment was discontinued.\n\nLocal wound care with saline solution once a day and topical 2% mupirocin twice a day was applied to the lesion and the patient was referred to the Department of Plastic Surgery for the debridement of the necrotic tissue. 
After surgical debridement by the plastic surgeon and continuation of local wound care (as above), the ulcerous lesion completely regressed within one month, leaving an atrophic scar (Figure 2).\n\n\nDiscussion\n\nNicolau syndrome, also known as embolia cutis medicamentosa, is defined as an iatrogenic syndrome following intramuscular injections. However, cases of Nicolau syndrome after subcutaneous, intravenous, or intraarticular injection have recently been reported in the literature3–5.\n\nAlthough the pathogenesis of Nicolau syndrome is not fully understood, direct vascular damage, perivascular inflammation, and vascular contraction following an injection are thought to be responsible6. In addition, it has been suggested that the pharmacological properties of the individual drug may play a role in the pathogenesis6.\n\nEtofenamate is a non-steroidal anti-inflammatory drug (NSAID) with analgesic, antipyretic, and anti-inflammatory effects. It inhibits the COX pathway and blocks prostaglandin synthesis non-selectively2. It has been shown that NSAIDs, by inhibiting the COX enzyme and prostaglandin synthesis, play a key role in inducing vascular spasm and blocking local circulation in the pathogenesis of this syndrome7.\n\nIn Nicolau syndrome, following injection of the active agent, erythematous, ecchymotic, and reticular lesions appear at the injection site, accompanied by severe pain. Progressive ischemic necrosis with sharp edges in a livedoid pattern develops later. Lesions often heal leaving atrophic scars8.\n\nNicolau syndrome has no definitive treatment. In the early period, the main goal of therapy is to prevent the development of necrosis. Therefore, pentoxifylline, hyperbaric oxygen, intravenous alprostadil, and heparin, which support the vasculature, can be used4. 
Intralesional steroid injection can also be effective by reducing inflammation. Surgical debridement should be performed in the case of necrosis4, and systemic antibiotics should be used in case of secondary infection4. Contracture and deformity are among the late complications, and surgical treatment can be required in these cases9. Nicolau syndrome is uncommon when proper injection technique is used; aspirating just before injecting the medication has been suggested as a way of preventing this syndrome10.\n\n\nConclusion\n\nIn summary, adherence to standard drug injection technique is essential for the prevention of Nicolau syndrome. It should be kept in mind that Nicolau syndrome can also develop following the use of intramuscular etofenamate.\n\n\nConsent\n\nWritten informed consent was obtained from the patient for the publication of the patient’s clinical details and accompanying images.",
"appendix": "Author contributions\n\n\n\nEO: wrote the manuscript; AB: Prepared the manuscript; RE, YU, KO and AA: Helped manage the patient’s diagnosis and therapy; HB: patient’s consultant from the Department of Plastic and Reconstructive Surgery\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nAnıl M, Çakmak B, Bal A, et al.: Nicolau Syndrome in Two Children Following Penicillin Injection: Case Report. Turkiye Klinikleri J Pediatr. 2010; 19(2): 144–7. Reference Source\n\nOrbak Z, Yıldırım ZK, Sepetci O, et al.: Adverse reaction of topical etofenamate: petechial eruption. West Indian Med J. 2012; 61(7): 767–769. PubMed Abstract\n\nSonntag M, Hodzic-Avdagic N, Bruch-Gerharz D, et al.: [Embolia cutis medicamentosa after subcutaneous injection of pegylated interferon-alpha]. Hautarzt. 2005; 56(10): 968–9. PubMed Abstract | Publisher Full Text\n\nGeukens J, Rabe E, Bieber T: Embolia cutis medicamentosa of the foot after sclerotherapy. Eur J Dermatol. 1999; 9(2): 132–3. PubMed Abstract\n\nCherasse A, Kahn MF, Mistrih R, et al.: Nicolau's syndrome after local glucocorticoid injection. Joint Bone Spine. 2003; 70(5): 390–2. PubMed Abstract | Publisher Full Text\n\nDadacı M, Altuntas Z, Ince B, et al.: Nicolau syndrome after intramuscular injection of non-steroidal anti-inflammatory drugs (NSAID). Bosn J Basic Med Sci. 2015; 15(1): 57–60. PubMed Abstract | Publisher Full Text | Free Full Text\n\nEzzedine K, Vadoud-Seyedi J, Heenen M: Nicolau syndrome following diclofenac administration. Br J Dermatol. 2004; 150(2): 385–7. PubMed Abstract | Publisher Full Text\n\nLee MW, Kim KJ, Choi JH, et al.: A case of embolia cutis medicamentosa. J Dermatol. 2003; 30(12): 927–928. 
PubMed Abstract | Publisher Full Text\n\nCorazza M, Capozzi O, Virgilit A: Five cases of livedo-like dermatitis (Nicolau's syndrome) due to bismuth salts and various other non-steroidal anti-inflammatory drugs. J Eur Acad Dermatol Venereol. 2001; 15(6): 585–8. PubMed Abstract | Publisher Full Text\n\nNischal KC, Basavaraj HB, Swaroop MR, et al.: Nicolau syndrome: an iatrogenic cutaneous necrosis. J Cutan Aesthet Surg. 2009; 2(2): 92–95. PubMed Abstract | Publisher Full Text | Free Full Text"
}
|
[
{
"id": "23434",
"date": "03 Jul 2017",
"name": "Mahmut Sami Metin",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe article has been well designed. Pictures are good. Discussion is long enough. The quality of the research is good enough. The work has been well designed, executed and discussed. No changes are required. The authors could use this new research article \"Nicolau Syndrome due to Penicillin Injection: A Report of 3 Cases without Long-Term Complication.\"\n\nIs the background of the case’s history and progression described in sufficient detail? Yes\n\nAre enough details provided of any physical examination and diagnostic tests, treatment given and outcomes? Yes\n\nIs sufficient discussion included of the importance of the findings and their relevance to future understanding of disease processes, diagnosis or treatment? Yes\n\nIs the case presented with sufficient detail to be useful for other practitioners? Yes",
"responses": []
},
{
"id": "24881",
"date": "14 Aug 2017",
"name": "Burak Tekin",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nWell-presented case of Nicolau syndrome. Dermatologists seem to be familiar with this entity, however, all healthcare workers may encounter this reaction in their clinical practice since injectable NSAIDs are commonly used. This report may serve the purpose of increasing awareness with regard to this entity, while placing emphasis on the importance of adhering to the proper injection technique.\n\nIs the background of the case’s history and progression described in sufficient detail? Yes\n\nAre enough details provided of any physical examination and diagnostic tests, treatment given and outcomes? Yes\n\nIs sufficient discussion included of the importance of the findings and their relevance to future understanding of disease processes, diagnosis or treatment? Yes\n\nIs the case presented with sufficient detail to be useful for other practitioners? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-867
|
https://f1000research.com/articles/6-865/v1
|
12 Jun 17
|
{
"type": "Research Article",
"title": "Platelet distribution width, mean platelet volume and haematological parameters in patients with uncomplicated plasmodium falciparum and P. vivax malaria",
"authors": [
"Elrazi A. Ali",
"Tajeldin M. Abdalla",
"Ishag Adam",
"Elrazi A. Ali",
"Tajeldin M. Abdalla"
],
"abstract": "Background: The association between the haematological profile (including abnormal platelets) and malaria is not completely understood. There are few published data on haematological profiles of malaria patients in areas with unstable malaria transmission. The current study was conducted to investigate if the haematological parameters, including platelet indices, were reliable predictors for microscopically-diagnosed malaria infection. Methods: A case-control study with a total of 324 participants (162 in each arm) was conducted at the out-patient clinic of New Halfa hospital during the rainy and post rainy season (August 2014 through to January 2015). The cases were patients with uncomplicated Plasmodium falciparum (107; 66.9%) and P. vivax malaria (55, 34.0%) infections. The controls were aparasitemic individuals. The haematological parameters were investigated using an automated hemo-analyser. Results: There was no significant difference in the mean (±SD) age between the study groups; however, compared to the controls, patients with uncomplicated malaria had significantly lower haemoglobin, leucocyte and platelet counts, and significantly higher red cell distribution width (RDW), platelet distribution width (PDW) and mean platelet volume (MPV). Conclusions: The study revealed that among the haematological indices, PDW and MPV were the main predictors for uncomplicated P. falciparum and P. vivax malaria infection. Abbreviations: OR: odds ratio.",
"keywords": [
"hematological profile",
"PDW",
"MPV",
"plasmodium falciparum",
"P. vivax"
],
"content": "Introduction\n\nIn spite of the preventative measures, malaria remains a major public health concern. Malaria is responsible for 781,000 deaths in a year, the majority of which are in Sub -Saharan Africa1. A correct diagnosis is one of the most important tools in the management of malaria. It has been recommended that all persons with suspected malaria should have a parasitological confirmation of diagnosis1. Microscopic examination of malaria consists of the identification of parasite species in thin and/or thick blood films, which is the “gold standard” for malaria diagnosis1,2. Microscopy requires trained technicians, and well-maintained microscopes with a perfect quality management system. However, acceptable microscopy services are not widely available for the diagnosis of malaria in some areas where malaria is endemic e.g. in communities in Sub-Saharan Africa1.\n\nPreviously, measurement of haematological blood parameters was unreliable due to intra and inter-method variation. Nowadays, automated analysers have replaced the traditional methods. Automated analysers are available in most settings and can give reliable results within a short period of time. There is a universal trend toward using these to aid the presumptive diagnosis of malaria infection2,3.\n\nPrevious studies have reported different results levels of sensitivity and specificity of haematological parameters as predictors of malaria infection4–8. There is no published data on haematological changes in patients infected with malaria parasites in Sudan, where malaria in the major health problem9. The current study was conducted in New Halfa, eastern Sudan to investigate the haematological changes observed during malaria infection and to assess the reliability of the haematological parameters used for diagnosis.\n\n\nMethods\n\nA case-control study was conducted at the out-patient clinic of New Halfa hospital during the rainy and post rainy season (August 2014 through to January 2015). 
The cases were patients with symptoms and signs of uncomplicated malaria who were confirmed to be infected with P. falciparum or P. vivax by microscopic examination of Giemsa-stained blood smears during the study10. The controls were patients who presented to the same clinic with symptoms of malaria but were found to have negative blood films for malaria. After the participants (or their parents/legal guardians if they were minors) provided written informed consent, a clinical history was gathered using questionnaires. Weight and height were measured and body mass index was expressed as kg/m2.\n\n2 ml of blood was taken from each participant and placed in a container with EDTA, and a complete hemogram was performed using an automated hematology analyser (Sysmex XN-9000; Hyogo, Japan), following the manufacturer's instructions as previously described11–13. The hemogram included measuring the haemoglobin level, leucocyte count and platelet indices, namely platelet count, mean platelet volume (MPV), and platelet distribution width (PDW).\n\nThick and thin blood films were prepared and stained with 10% Giemsa to microscopically confirm which participants were infected. If the slide was positive, the parasite density was measured by counting the number of asexual parasites per 200 leukocytes and multiplying by the participant's own leucocyte count/μL. The blood films were considered negative if no parasites were detected in 100 oil immersion fields of a thick blood film.\n\nA minimum sample size of 162 participants for each arm of the study was calculated assuming that 10% of participants would have incomplete data. 
In this way, it would be possible to detect a significant difference (at α = 0.05) in the means of the proposed variables - mainly haemoglobin, red cell distribution width (RDW), leucocyte and platelet counts, and PDW - between the cases and the controls, at 80% power.\n\nStatistical analysis was performed using SPSS for Windows, version 20.0 (SPSS Inc., Chicago, IL, USA). Proportions of the studied groups were expressed as percentages and compared using the chi-squared test. Continuous data were checked for normality using the Shapiro-Wilk test. The means (±SD) or median (IQR) were used to describe the studied variables, depending on whether they were normally or non-normally distributed. The t-test (or Mann-Whitney U test if the data were not normally distributed) was used to evaluate the differences between the studied groups. Binary regression analysis was performed, with malaria as the dependent variable and medical and haematological indices as the independent variables. Diagnostic screening tests were used to determine the diagnostic cut-offs of various parameters (based on test sensitivity and specificity) using the receiver operating characteristic (ROC) curve. P < 0.05 was considered statistically significant.\n\n\nResults\n\n107 (66.9%) and 55 (34.0%) of the uncomplicated malaria cases were infected with P. falciparum and P. vivax, respectively. There was no significant difference in the age or BMI between the cases and the controls. Patients had significantly higher body temperature than the controls (Table 1). Ages ranged between 1.1−55 years in the cases and 1.1−42 years in the controls. Around one third of the cases (53, 32.7%) and one third of the controls (49, 30.2%) were children under five years old (p=0.665). There were 81 (50.0%) vs. 
79 (48.8%) males in the cases and controls, respectively (p=0.912).\n\nCompared with the controls, patients with uncomplicated malaria had significantly lower haemoglobin levels and lower leucocyte, lymphocyte, neutrophil, and platelet counts, but significantly higher RDW, PDW and MPV (Table 2).\n\nData are displayed as mean (±SD), and the t-test was used because the data were normally distributed.\n\nA receiver operating characteristic (ROC) curve was used to determine the cut-offs for haemoglobin levels, RDW, leucocyte and platelet counts, PDW and MPV for prediction of malaria infection. The area under the ROC curve is shown in Table 3 and Figure 1, which failed to confirm predictability of hemoglobin, RDW, leucocytes and platelet count. Poor and fair predictability of PDW and MPV for malaria infection was demonstrated; the areas under the curves were 0.637 and 0.726, respectively.\n\nKey:\n\n*: These were considered because of fair predictability of the area under the curve\n\nWhen the cut-off levels were evaluated using binary regression analysis, PDW ≥14.550% (OR = 2.9, 95% CI = 1.64−5.43, P < 0.001) and MPV ≥9.05 fL (OR = 2.25, 95% CI = 1.12−4.51, P < 0.001) were the most important predictors for malaria infection (Table 4).\n\nOR: odds ratio.\n\nThere was no significant difference in the hemoglobin, leucocyte, lymphocyte, neutrophil, and platelet counts, RDW, PDW, MPV or the parasite count (P=0.201) when the cases of P. falciparum and P. vivax were compared (Table 5).\n\nData are displayed as mean (±SD), and the t-test was used because the data were normally distributed.\n\n\n\n\nDiscussion\n\nAccording to our present findings, PDW and MPV are the two most important haematological predictors of P. falciparum and P. vivax malaria infection. This is in line with a recent finding where Al-Salahy et al. 
reported that patients in Hajjah, Northwest Yemen with malaria parasitemia had significantly lower hemoglobin, hematocrit, leucocyte, lymphocyte, and platelet counts compared to healthy subjects14. Previous studies have shown that patients with complicated malaria had reduced haematological parameters such as platelet, leucocyte, and RBC counts, which provided relatively good predictors for the diagnosis of malaria infection8,15. On the other hand, the significant differences observed in the haematological parameters between parasitemic Ugandan patients and non-parasitemic Ugandans were only observed in the monocyte and the platelet count16. No significant difference was found between the haemoglobin levels, MCV, MCH, neutrophils, lymphocyte counts or MPV16.\n\nIn the current study, a PDW ≥14.550% and MPV ≥9.05 fL were the main predictors for malaria (OR = 2.9 and 2.3). Previous studies have reported an increased MPV level in malaria15,17. Interestingly, Chandra et al. reported that an MPV > 8 fL had a sensitivity and specificity of 70.8% and 50.4% for the diagnosis of malaria, respectively8.\n\nThe higher PDW and MPV values in malaria could be explained by bone marrow formation of megakaryocytes to compensate for the low absolute platelet count during acute malaria infection8,15. A significantly higher level of the key platelet growth factor (thrombopoietin) has been reported in patients with malaria18. Furthermore, the parasitized RBCs could increase platelet sensitivity to adenosine diphosphate (ADP), prompting secretion of dense granules19,20.\n\nNutritional deficiency and haemoglobinopathies were not investigated in the current study and have to be mentioned as study limitations. Haematological parameters for malaria-infected blood may vary depending on the level of malaria endemicity, presence of haemoglobinopathies and nutritional status21,22. Another limitation of the study is that we relied on microscopy only for the malaria diagnosis. 
Some of the negative controls may have had undetected parasitemia (submicroscopic parasitemia). We have previously observed that the majority of febrile patients who were parasite negative by microscopy had P. falciparum infection according to PCR results23. Lastly, other infections that might have an effect on blood parameters were not ruled out in either the cases or the controls. In conclusion, the study revealed that a PDW ≥14.550% and MPV ≥9.05 fL were the main predictors for uncomplicated P. falciparum and P. vivax malaria infection.\n\n\nData availability\n\nDataset 1: Raw data collected as the basis for this study. Plasmf = Blood film for P. falciparum. DOI, 10.5256/f1000research.11767.d16401024\n\n\nConsent\n\nThe study was approved by the Institutional Review Board of the Medical College, University of Khartoum (3# 2015 1114).",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nAcknowledgements\n\nThe authors wish to express their sincere gratitude to Mr. Abdulla Hafaz Alla, Najah Laboratory, New Halfa, Sudan for technical assistance.\n\n\nReferences\n\nManual A: Universal access to malaria diagnostic testing. 2011; [cited 2017 Mar 5]. Reference Source\n\nJain M, Gupta S, Jain J, et al.: Usefulness of automated cell counter in detection of malaria in a cancer set up--our experience. Indian J Pathol Microbiol. 2012; 55(4): 467–73. PubMed Abstract | Publisher Full Text\n\nGeorge I, Ewelike-Ezeani C: Haematological changes in children with malaria infection in Nigeria. J Med Med. 2011; 2(4): 768–771. Reference Source\n\nMaina RN, Walsh D, Gaddy C, et al.: Impact of Plasmodium falciparum infection on haematological parameters in children living in Western Kenya. Malar J. 2010; 9(Suppl 3): S4. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCoelho HC, Lopes SC, Pimentel JP, et al.: Thrombocytopenia in Plasmodium vivax malaria is related to platelets phagocytosis. Carvalho LH, editor. PLoS One. 2013; 8(5): e63410. PubMed Abstract | Publisher Full Text | Free Full Text\n\nChandra S, Chandra H: Role of haematological parameters as an indicator of acute malarial infection in uttarakhand state of India. Mediterr J Hematol Infect Dis. 2013; 5(1): e2013009. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLeal-Santos FA, Silva SB, Crepaldi NP, et al.: Altered platelet indices as potential markers of severe and complicated malaria caused by Plasmodium vivax: a cross-sectional descriptive study. Malar J. 2013; 12: 462. PubMed Abstract | Publisher Full Text | Free Full Text\n\nChandra S, Chandra H: Role of haematological parameters as an indicator of acute malarial infection in uttarakhand state of India. Mediterr J Hematol Infect Dis. 
2013; 5(1): e2013009. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAbdalla SI, Malik EM, Ali KM: The burden of malaria in Sudan: incidence, mortality and disability--adjusted life--years. Malar J. 2007; 6: 97. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWorld Health Organization: Severe falciparum malaria. World Health Organization, Communicable Diseases Cluster. Trans R Soc Trop Med Hyg. 2000; 94(Suppl 1): S1–90. PubMed Abstract | Publisher Full Text\n\nRayis DA, Ahmed MA, Abdel-Moneim H, et al.: Trimester Pattern of Change and Reference Ranges of Hematological Profile Among Sudanese Women with Normal Pregnancy. Clin Pract. 2017; 7(1): 888. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAbdullahi H, Osman A, Rayis DA, et al.: Red blood cell distribution width is not correlated with preeclampsia among pregnant Sudanese women. Diagn Pathol. 2014; 9: 29. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAbdelrahman EG, Gasim GI, Musa IR, et al.: Red blood cell distribution width and iron deficiency anemia among pregnant Sudanese women. Diagn Pathol. 2012; 7: 168. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAl-Salahy M, Shnawa B, Abed G, et al.: Parasitaemia and Its Relation to Hematological Parameters and Liver Function among Patients Malaria in Abs, Hajjah, Northwest Yemen. Interdiscip Perspect Infect Dis. Hindawi Publishing Corporation; 2016; 2016: 5954394. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMaina RN, Walsh D, Gaddy C, et al.: Impact of Plasmodium falciparum infection on haematological parameters in children living in Western Kenya. Malar J. 2010; 9(Suppl 3): S4. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMuwonge H, Kikomeko S, Sembajjwe LF, et al.: Uncomplicated Plasmodium falciparum Malaria in an Endemic. 2014; 1–9.\n\nLadhani S, Lowe B, Cole AO, et al.: Changes in white blood cells and platelets in children with falciparum malaria: relationship to disease outcome. 
Br J Haematol. 2002; 119(3): 839–47. PubMed Abstract | Publisher Full Text\n\nKreil A, Wenisch C, Brittenham G, et al.: Thrombopoietin in Plasmodium falciparum malaria. Br J Haematol. 2000; 109: 534–6. PubMed Abstract | Publisher Full Text\n\nWickramasinghe SN, Abdalla SH: Blood and bone marrow changes in malaria. Baillieres Best Pract Res Clin Haematol. 2000; 13(2): 277–99. PubMed Abstract | Publisher Full Text\n\nPrasad R, Das BK, Pengoria R, et al.: Coagulation status and platelet functions in children with severe falciparum malaria and their correlation of outcome. J Trop Pediatr. 2009; 55(6): 374–8. PubMed Abstract | Publisher Full Text\n\nPrice RN, Simpson JA, Nosten F, et al.: Factors contributing to anemia after uncomplicated falciparum malaria. Am J Trop Med Hyg. 2001; 65(5): 614–22. PubMed Abstract | Publisher Full Text | Free Full Text\n\nErhart LM, Yingyuen K, Chuanak N, et al.: Hematologic and clinical indices of malaria in a semi-immune population of western Thailand. Am J Trop Med Hyg. 2004; 70(1): 8–14. PubMed Abstract\n\nGiha HA, A-Elbasit IE, A-Elgadir TM, et al.: Cerebral malaria is frequently associated with latent parasitemia among the semi-immune population of eastern Sudan. Microbes Infect. 2005; 7(11–12): 1196–203. PubMed Abstract | Publisher Full Text\n\nAli EA, Abdalla TM, Adam I: Dataset 1 in: Platelet distribution width, mean platelet volume and haematological parameters in patients with uncomplicated plasmodium falciparum and P. vivax malaria. F1000Research. 2017. Data Source"
}
|
[
{
"id": "23925",
"date": "07 Jul 2017",
"name": "Con J.F. Fontes",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nMr Ali and colleagues present a study that revealed the mean platelet volume and platelet distribution width as good parameters to predict uncomplicated malaria parasite infection.The manuscript is well written and presented. The analysis of the results is adequate and correct. Some minor points should be reviewed:\nAs the authors used several statistical tests to compare the results between cases and controls, it is important to make these tests explicit for each p-value presented in the tables, as a footnote.\n\nThe title of Table 5 is not clear. In fact, Table 5 compares means (SD), and non-median (IQR).\n\nIn the last paragraph of the Discussion section the sentence \"...parameters formalaria-infested blood...\" should be changed by \"... for malaria-infected blood...\".\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
},
{
"id": "23451",
"date": "11 Jul 2017",
"name": "Alberto Tobón-Castaño",
"expertise": [
"Reviewer Expertise Malaria",
"clinical and epidemiological studies"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe objective of the study should be re-examined in the introduction. The terms \"reliability\" and \"validity\" should be differentiated. Reliability refers to the degree to which the measurement procedure can be reproduced.\n\nThus, if the objective is to study the validity of the hematological parameters to make a diagnosis, the methodology must be revised since these studies require a sample size that considers the prevalence and the expected sensitivity. Since this is a case-control study the objective should be consistent with this methodology.\n\nIt must be specified what a case is; What are the signs and symptoms of uncomplicated malaria?\n\nIt should be explained why 100 oil inmmersion fields were used as criteria to consider a thick blood film as negative. This does not correspond to international recommendations.\n\nThe calculation of the sample should be presented with the elements of a case-control study i.e. case-control relation, expected O.R., frequency of outcomes in exposed and unexposed.\n\nRESULTS. If the clinical history is available, the time of evolution of the disease could be compared, which may lead to differences in hematological profiles.\n\nThe results presented in the Tables 2 and 5 are confusing. It is not clear whether the averages or medians are compared and what was the statistical test of comparison. 
It is not explicitly stated which variables presented a normal distribution.\n\nThe presumptive diagnosis of malaria with haematological analyzers does not seem to be the appropriate way of solving malaria diagnosis problems in endemic areas. In this sense, it is not clear whether the study intends to contribute to malaria diagnosis using a method that would be non-specific.\n\nThe study contributes to showing hematological differences between cases and controls and this should serve to identify early lesions, but it is clearly not a useful method for malaria diagnosis.\nThere is no relevant discussion of the findings. The reference values of the hematological parameters according to age and sex are not presented; the comparisons must take this variation into account.\nSome aspects related to platelet changes are analyzed, but superficially. Leukocyte count and cell line analysis are neglected; no mention is made of the significance of the alterations, and monocytes and eosinophils are neglected. There are no correlations with parasitemia, which is also a variable that can contribute significantly to the hematological alterations as a function of the greater or lesser inflammatory response.\nREFERENCES. Reference 1. Who are the authors? Refs 6 and 8 are the same. Ref. 16. The title is incomplete? The journal name?\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nIs the study design appropriate and is the work technically sound? No\n\nAre sufficient details of methods and analysis provided to allow replication by others? No\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nPartly\n\nAre all the source data underlying the results available to ensure full reproducibility? Partly\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": []
},
{
"id": "23927",
"date": "31 Jul 2017",
"name": "Walter R.J. Taylor",
"expertise": [
"Reviewer Expertise Malaria and tropical medicine"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis is a nice academic paper that deals with haematological parameters in malaria. Using a case control approach, the authors have identified a mean platelet distribution width and mean platelet volume as possible markers of acute symptomatic malaria in their setting.\n\nThis is interesting but of limited practical application. Therefore, this paper should emphasise the academic aspects of their findings.\n\nI think this paper can be accepted but with major revisions in two areas, in particular.\n\nSample size calculation – this has been determined using continuous variables yet we do not see any examples. This must be addressed.\n\nDiscussion – this should be more thorough comparing and contrasting the current data with previously published data. Moreover, readers will want to know important causes of increased RDW, PDW and MPV and where in haematological practice these parameters play an important role.\n\nThe authors should let us know if their research has led to any new research questions and future avenues of research.\n\nMinor comments\nThe number of malaria related deaths are higher than those reported by the WHO. The authors should obtain the latest data.\n\nNo mention of the role of malaria rapid diagnostic tests\n\nThe second line of the Discussion does not follow. 
Did the Yemeni study report PDW and MPV?\n\nPage 6, left-hand column, penultimate paragraph:\nRemove 'in' in this sentence: Furthermore, the parasitized RBCs could increase in platelet sensitivity to adenosine diphosphate (ADP), prompting secretion of dense granules.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? No\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-865
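The cutoff rule stated in the malaria paper's conclusion (PDW ≥ 14.550% and MPV ≥ 9.05 fL as the main predictors of uncomplicated infection) can be sketched as a small screening helper. This is an illustrative sketch only, not code from the study: the function and variable names are ours, and only the two threshold values come from the article.

```python
# Illustrative sketch of the platelet-index cutoffs reported in the study's
# conclusion; the thresholds are from the article, the naming is ours.
PDW_CUTOFF_PCT = 14.550  # platelet distribution width, %
MPV_CUTOFF_FL = 9.05     # mean platelet volume, fL

def platelet_index_flags(pdw_pct: float, mpv_fl: float) -> dict:
    """Flag each platelet index against its reported malaria-predicting cutoff."""
    return {
        "pdw_high": pdw_pct >= PDW_CUTOFF_PCT,
        "mpv_high": mpv_fl >= MPV_CUTOFF_FL,
    }
```

As reviewer Tobón-Castaño notes, such indices are non-specific; at best a helper like this could prioritise samples for confirmation by microscopy or rapid diagnostic test, not replace them.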
|
https://f1000research.com/articles/6-569/v1
|
26 Apr 17
|
{
"type": "Research Article",
"title": "The dynamic upper limit of human lifespan",
"authors": [
"Saul Newman",
"Simon Easteal",
"Saul Newman"
],
"abstract": "We respond to claims by Dong et al. that human lifespan is limited below 125 years. Using the log-linear increase in mortality rates with age to predict the upper limits of human survival we find, in contrast to Dong et al., that the limit to human lifespan is historically flexible and increasing. This discrepancy can be explained by Dong et al.’s use of data with variable sample sizes, age-biased rounding errors, and log(0) instead of log(1) values in linear regressions. Addressing these issues eliminates the proposed 125-year upper limit to human lifespan.",
"keywords": [
"lifespan",
"human lifespan",
"contradictory findings",
"ageing",
"life history",
"refutation"
],
"content": "\n\nRecent findings by Dong et al.1 suggested fixed upper limits to the human life span. Using the same data, we replicated their analysis to obtain an entirely different result: the upper limit of human life is rapidly increasing.\n\nDong et al. conclude that the maximum reported age at death (MRAD) is limited to 125 years in humans1 and that lifespan increases above age 110 are highly unlikely, due to the reduced rate of increase in life expectancy at advanced ages.\n\nWe repeated Dong et al.’s1 analysis using identical data (SI). Replicating these findings requires the inclusion of rounding errors, treating zero-rounded values as log(1) and the incorrect pooling of populations.\n\nThe Human Mortality Database (HMD) data provide both the age-specific probability of survival (qx) and the survival rates of a hypothetical cohort of 100,000 individuals (lx). However, lx survival rates are rounded off to the nearest integer value.\n\nThe magnitude and frequency of lx rounding errors increases as the probability of survival approaches 1 in 100,000. These rounding errors mask variation in survival rates at advanced ages: over half of lx survival data are rounded to zero above age 90 (Figure 1b).\n\n(a) Figure modified after Dong et al. Figure 1b, showing rounded survival data (red points), rounded survival data with log(0)=log(1) (black points), the resulting linear regression in Dong et al. (solid red line) and observed survival data (pink points). (b) Rounding errors in survival data (box-whisker plots; 95% CI) and the proportion of survival data rounded to zero in males (blue line) and females (red line). (c) Survival data from (a) with rounding errors removed, showing variation outside the 1900-1990 period (vertical dotted lines). (d) The rate of change in late-life mortality since 1900 with (dotted lines) and without (solid lines) rounding errors (after Dong et al. Figure 1c).\n\nDong et al. 
appear to have used these rounded-off survival data in their models1 and incorrectly treated log(0) values as log(1) in log-linear regressions (Figure 1a–d; SI).\n\nThese errors have considerable impact. Re-calculating cohort survival from raw data or excluding zero-rounded figures eliminates the proposed decline in old-age survival gains (Figure 1d; SI).\n\nLikewise, recalculating these data removed their proposed limits to the age of greatest survival gain (SI), which in 15% of cases were the result of the artificial 110-year age limit placed on HMD data2.\n\nWe also found that variation in the probability of death was masked by date censoring1. Major non-linear shifts in old-age survival occur outside the 1900-1990 period used by Dong et al. (Figure 1c). Why these data were excluded from this regression, but included elsewhere, is unclear.\n\nEvidence based on observed survival above age 110 appears to support a late-life deceleration in survival gains1. For the period 1960–2005 Dong et al. present data1 from 4 of the 15 countries in the International Database on Longevity3 (IDL). In their pooled sample of these countries, there is a non-significant (p=0.3) reduction in MRAD between 1995 and 2006 (Figure 2a).\n\n(a) Reproduction of Dong et al. Figure 2a, including 95% CI for increasing (p<0.0001) and falling (p=0.3) maximum recorded age at death (MRAD), showing data biased by the addition and removal (up and down arrows) of populations. (b) Locally weighted smoothed splines of MRAD in Japan (green), the USA (red), the UK (dark blue) and France (purple). (c) Locally weighted trends of MRAD in the USA across the oldest 5 reported ages at death (red, orange, green, blue and purple lines show rank 1–5 respectively).\n\nThe declining MRAD reported by Dong et al.1 arises from the use of falling sample sizes. According to the Gerontology Research Group (GRG), 62% of validated supercentenarians alive in 2007 resided in France and the USA. 
However, these countries are not surveyed3 by the IDL after 2003 (Figure 2a). The proposed post-1995 decline in MRAD results from this dramatic fall in sample size.\n\nViewed individually, all four countries have an upward trend in the mean reported age at death (RAD; Figure 2b) of supercentenarians (SI) and the top 5 ranked RADs (Figure 2c). All four countries achieved record lifespans since 1995, as did 80% of the countries in the IDL. Without the pooling of IDL data used by Dong et al. there is no evidence for a plateau in late-life survival gains.\n\nWe attempted to reproduce Dong et al.’s supporting analysis of GRG records. The text and Extended Data Figure 6 of Dong et al. do not match annual MRAD records from 1972 as stated1. However, they do match deaths of the world’s oldest person titleholders from 1955 (GRG Table C, Revision 9) with all deaths in May and June removed (SI).\n\nActual MRAD data from the GRG support a significant decline in the top-ranked age at death since 1995 (r = -0.47; p = 0.03, MSE = 3.2). However, this trend is not significant if only Jeanne Calment is removed (p = 0.9). Linear models fit to lower-ranked RADs have an order-of-magnitude better fit, and all indicate an increase in maximum lifespan since 1995 (N = 64; SI).\n\nCollectively, these data indicate an ongoing rebound of upper lifespan limits since 1950, with a progressive increase in the observed upper limit of human life. To estimate theoretical limits, we developed a simple approximation of the upper limit of human life.\n\nMortality rates double with age in human populations (Figure 3a and b). Log-linear models fit to this rate-increase closely approximate the observed age-specific probability of death4. 
These models also provide a simple method of predicting upper limits to human life span that is independent of population size.\n\n(a) In humans, the probability of death q at age x (qx; red line) increases at an approximately log-linear rate with age (black lines; 95% CI), shown here for the birth cohort of Jeanne Calment (d. 122.5 years; circle). Projection of this log-linear increase to log(q) = 0 provides the MSA, the upper limit of human survival, shown here for (b) observed and projected global populations5 and (c) 40 historic HMD populations 1751–2014.\n\nWe fit log-linear models to age-specific mortality rates from the HMD data used by Dong et al.1, and used these models to predict the age at which the probability of death intercepts one. This maximum survivable age (MSA) provides a simple, conservative estimate of the upper limit of human life (Figure 3c).\n\nLog-linear models closely approximate the observed probability of death in HMD populations for both period and cohort life tables (median R2 = 0.99; 4501 population-years). These models predict an MSA exceeding 125 years within observed historic periods (Figure 3b and c; SI).\n\nFurthermore, period data indicate that MSA is steadily increasing from a historic low c.1956 (Figure 3b and c) and that the MRAD is expected to rise over the next century. This result is supported by trends in global mortality data from the United Nations5, sampled across 194 nations (Figure 3b).\n\nThis analysis provides an estimate of human lifespan limits that is conservatively low. Log-linear mortality models assume no late-life deceleration in mortality rates6, which, if present, would increase the upper limits of human lifespan7. 
In addition, these models are fit to population rates and cannot provide an estimate of individual variation in the rate of mortality acceleration.\n\nGiven historical flexibility in lifespan limits and the possibility of late-life mortality deceleration in humans8, these models should, however, be treated with caution.\n\nA claim might be made for a general, higher 130-year bound to the human lifespan. However, an even higher limit is possible and should not be ruled out simply because it exceeds observed historical limits.\n\n\nMethods\n\nLife table data were downloaded from the United Nations5 (UN) and the Human Mortality Database (HMD) and lifespan records from the International Database on Longevity (IDL) and the Gerontology Research Group (GRG).\n\nLeast-squares linear models were fit to life table data on the log-transformed age-specific probability of death (qx), and projected to qx=1 to predict the maximum survivable age in each population (Figure 1b and c; SI). Maximum lifespan within GRG and IDL data was annually aggregated and fit by locally weighted smoothed splines9 (Figure 3b and c).\n\nWe reproduced the analysis of Dong et al. in R version10 3.2.1 (SI), using the code in Supplementary File 1.\n\nAn earlier version of this article can be found on bioRxiv (doi: 10.1101/124800).\n\n\nData availability\n\nThe authors declare that all data are available within the paper and its supplementary material.",
"appendix": "Author contributions\n\n\n\nS.J.N. wrote the analysis and code, and reproduced Dong et al.’s analysis. S.J.N. and S.E. developed the analysis, methods and statistical design, and co-wrote the manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nSupplementary material\n\nSupplementary File 1: Supplementary information guide. The supplementary information supplied here constitutes two sections of integrated code in the R statistical language:\n\n1. R code required to calculate the maximum survivable age or MSA from both public data and the human mortality data used by Dong et al., and required to reproduce our findings in full.\n\n2. R code required to reproduce Dong et al.1 with original errors, often including several error-corrected or partially corrected versions.\n\nBoth sections are wrapped in the same script, and require several package dependencies and datasets outlined in the annotated code.\n\nClick here to access the data.\n\n\nReferences\n\nDong X, Milholland B, Vijg J: Evidence for a limit to human lifespan. Nature. 2016; 538(7624): 257–259. PubMed Abstract | Publisher Full Text\n\nWilmoth JR, Andreev K, Jdanov D, et al.: Methods Protocol for the Human Mortality Database. Version 5.0. 83. 2007. Reference Source\n\nMaier H, Gampe J, Jeune B, et al.: Supercentenarians. Demogr Res Monogr. 2010; 7. Publisher Full Text\n\nFinch CE: Variations in senescence and longevity include the possibility of negligible senescence. J Gerontol A Biol Sci Med Sci. 1998; 53(4): B235–9. PubMed Abstract | Publisher Full Text\n\nUnited Nations: World Population Prospects: The 2015 Revision. United Nations Econ Soc Aff. 2015; XXXIII: 1–66. Reference Source\n\nRose MR, Drapeau MD, Yazdi PG, et al.: Evolution of late-life mortality in Drosophila melanogaster. Evolution. 2002; 56(10): 1982–1991. 
PubMed Abstract | Publisher Full Text\n\nThatcher AR: The long-term pattern of adult mortality and the highest attained age. J R Stat Soc Ser A Stat Soc. 1999; 162(Pt 1): 5–43. PubMed Abstract | Publisher Full Text\n\nKannisto V, Lauritsen J, Thatcher AR, et al.: Reductions in Mortality at Advanced Ages: Several Decades of Evidence from 27 Countries. Popul Dev Rev. 1994; 20(4): 793–810. Publisher Full Text\n\nCleveland WS: LOWESS: A program for smoothing scatterplots by robust locally weighted regression. Am Stat. 1981; 35(1): 54. Publisher Full Text\n\nR Core Development Team: R: A language and environment for statistical computing. R Foundation for Statistical Computing. Vienna, Austria. 2012. Reference Source"
}
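The maximum survivable age (MSA) projection described in the paper's Methods — fit a least-squares line to the log-transformed age-specific probability of death qx, then project to qx = 1 — can be sketched as follows. This is our own minimal illustration, not the authors' analysis (their actual R code is in Supplementary File 1); a synthetic Gompertz-style cohort stands in for HMD life-table data.

```python
import math

def fit_loglinear(ages, qx):
    """Ordinary least-squares fit of log(qx) = a + b * age
    (Gompertz-style log-linear mortality); returns (a, b)."""
    ys = [math.log(q) for q in qx]
    n = len(ages)
    mean_x = sum(ages) / n
    mean_y = sum(ys) / n
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(ages, ys))
         / sum((x - mean_x) ** 2 for x in ages))
    a = mean_y - b * mean_x
    return a, b

def maximum_survivable_age(ages, qx):
    """Project the fitted line to log(q) = 0, i.e. q = 1: the age at which
    death within the interval becomes certain under the log-linear model."""
    a, b = fit_loglinear(ages, qx)
    return -a / b

# Synthetic cohort: q doubles every 8 years from q = 0.001 at age 40,
# so the fitted line reaches q = 1 at 40 + 8 * log2(1000) ≈ 119.7 years.
ages = list(range(40, 101))
qx = [0.001 * 2 ** ((x - 40) / 8) for x in ages]
```

Because the synthetic data are exactly log-linear, the projection recovers the closed-form answer; on real life tables the fit quality (the paper reports median R2 = 0.99) governs how far the projection can be trusted.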
|
[
{
"id": "22231",
"date": "18 May 2017",
"name": "Jean-Michel Gaillard",
"expertise": [
"Reviewer Expertise Biodemography"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis manuscript challenges the recent findings by Dong et al. that human lifespan has an upper limit and proposes that human lifespan is rapidly increasing. While I am quite convinced by all the problems the authors identified in the analysis performed by Dong et al. and by the maximum survivable age of about 125 years the authors estimated for humans, I am not convinced at all by the claim that maximum reported age at death is expected to rise over the next century. In particular, I do not understand the rationale of removing Jeanne Calment from the analysis. This data point has been validated. Contrary to the authors' interpretation, the mere existence of a lifespan of 122 observed 20 years ago without being even approached since then seems to indicate that some saturation in the maximal age at death is occurring. It is required to estimate the probability of not observing older ages at death for 20 years under different scenarios of increasing trends in maximal age at death.\nDetailed comments: p. 3 second column 3rd paragraph l. 1: Remove \"significant\" p. 3 second column 3rd paragraph l. 1: Should be \"Calment\", not \"Clament\"!\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Partly\n\nAre sufficient details of methods and analysis provided to allow replication by others? 
Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nPartly\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": [
{
"c_id": "2758",
"date": "09 Jun 2017",
"name": "Saul Newman",
"role": "Author Response",
"response": "We thank J-MG for these comments, and welcome the opportunity to clarify our analysis and expand our rationale for this study. We wish to clarify that our projected increase in MRAD over the coming century is not based on IDL or GRG lifespan data, and these projections do not require either removal or inclusion of Jeanne Calment. Rather, these projections are based on theoretical lifespan limits calculated using HMD and WHO life table data, including the WHO population projections through 2100. Our statement that “period data indicate that MSA is steadily increasing from a historic low c.1956 (Figure 3b and c) and that the MRAD is expected to rise over the next century” is based on observed HMD and WHO data for the “historic low” in 1956 and on WHO population projections over 2015-2100 for the ‘steady rise’. We can see that we did not make this distinction sufficiently clear and we have amended the text to make it clearer. The rationale for removing Jeanne Calment from the regression of observed trends was to demonstrate her singular effect on recent MRAD trends. Jeanne Calment is a remarkable statistical outlier from historic trends in both Dong et al. and our linear models, with a Cook’s distance of 1.97 (Cook, 1982). A large Cook’s distance is not necessarily grounds for ignoring a well-validated data point. Our finding that the inferred decline in MRAD is eliminated by removal of one outlier demonstrates that the inference is tenuous rather than invalid. We find it notable that such broad conclusions on lifespan limits depend on the status of a single data point. We agree that the 20-year streak set by Jeanne Calment is remarkable, and we consider it valid. However, we disagree that this indicates a saturation of old-age survival. Jeanne Calment’s record-holding streak is not unprecedented in length. Mary Kelly held the overall lifespan record for 17 years from 1964 to 1981. Gert Adrians-Boomgaard held the male lifespan record for 68 years. 
Mathew Beard broke this record, and held it for another 21.9 years (1985-2007). Like Jeanne Calment, Mathew Beard exceeded the next-lowest record by three years for this period. We consider that these record-holding streaks do not reflect saturation of the MRAD, but the uncertain ascertainment and stochastic nature of lifespan records. Therefore, we suggest Jeanne Calment’s survival reflects a rare statistical event where survival has approached the calculated upper limit of lifespan in her cohort. We think this event is a result of stochastic variation, and is biasing the projection of short-term trends. The simulated removal of Jeanne Calment was intended to indicate this. More broadly, we feel that too much emphasis is placed on this single data point. Jeanne Calment’s predecessor Carrie White held a well-verified claim to the world’s oldest and then second-oldest woman for 24 years, before the claim was recognised as a clerical error in 2012 and retracted. In the unlikely event that Jeanne Calment’s lifespan claim is also false, demographers would be depending on conclusions about lifespan limits from a single, false data point that is an outlier from the aggregate trend across 1626 other supercentenarians in the GRG (see figure S1). Furthermore, a declining or flat trend in lifespan limits is inconsistent with our analysis of 3.3 billion lifespan records in 194 nations. We maintain that statistical trends in data for billions of deaths should not be ignored in favour of a single data point in 1997. Finally, we accept our spelling error for Jeanne Calment’s name. Yet we do not understand the request to remove the word ‘significantly’ from the text: the negative trend was significant (p=0.03)."
},
{
"c_id": "2839",
"date": "29 Jun 2017",
"name": "Xiao Dong Brandon Milholland and Jan Vijg",
"role": "Reader Comment",
"response": "Gaillard is correct to express reservations about this paper: the “problems” that Newman and Easteal claim to have identified do not actually reflect the contents of our paper; see our comment for more detail. Gaillard is “not convinced at all by the claim that maximum reported age at death is expected to rise over the next century” and neither are we. As our correctly conducted analysis shows, the MRAD is likely to remain stagnant in the future. Xiao Dong, Brandon Milholland and Jan Vijg"
}
]
},
{
"id": "22232",
"date": "30 May 2017",
"name": "Michael R. Rose",
"expertise": [
"Reviewer Expertise Aging",
"Evolution of demography"
],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nEvidently the Dong et al. Nature article1 made some major, though elementary, mistakes: (a) an incorrect rounding procedure; and (b) choosing data from an historical period (1900-1990) that generally fit a limited lifespan expectation, while neglecting to consider other data. Newman and Easteal2 correct the rounding error and look at human cohort data from a broader range of historical dates. However, they fit log-linear Gompertz models to the data, though doing so is questionable as they themselves admit, if in fact human mortality rates plateau3. It is often the case that extrapolating regression models beyond the range used to fit them is fraught with danger.\nOf most interest for us was whether the fundamental concept of inferring a species-wide maximum lifespan from one or even many cohorts of demographic data is cogent at all. With that in mind, we used mortality data from a twenty-cohort study of Drosophila melanogaster that we and our colleagues recently published4. Specifically, we used “A-type” populations to predict sex-specific “maximum lifespans” for populations of “C-type” populations. Each of the ten pairs of A and C type populations share a common ancestral population in our laboratory, though they have since evolutionarily diverged.\nWe took the initial sample size of, say, cohort CO-i, sex-j (Nij), and then computed the age at which the probability of survival in the matching ACO-i, sex-j cohort is <= 1/(10*Nij). 
We plotted the maximum lifespans predicted from the 20 “A” sex-specific cohorts versus the observed maximum lifespan in the 20 corresponding “C” cohorts (see Figure 1) using a double plateau Gompertz model4. If the concept of species-wide maximum longevity were cogent, we would expect all the observed maximum lifespans to be near or well below the y=x line. In fact, most are well above that line, and show no correlation with the predicted values. Effectively, the maximum lifespan estimation procedure is not generally reliable. In the case of the example that we give here, the maximum lifespan was altered by substantial genetic changes caused by natural selection.\nWe have long used experimental evolution to reconfigure the onset and end of periods of aging in carefully handled cohorts, as illustrated in publications 3 and 4. We do not regard maximum lifespans as characteristic of entire species, however they might be defined demographically. Rather, we view them as phenotypes that depend on both genotype and environment even in so-called “wild-type” populations, like most components of life history, as the extensive evolutionary literature on life history and aging has long suggested. More importantly, in cohorts that show a cessation of aging3 we doubt that the concept of maximum lifespan has any biological cogency.\n\nFigure 1. The predicted maximum lifespan based on 10 A-type populations4 and the observed maximum lifespan in 10 C-type populations:\nhttps://f1000researchdata.s3.amazonaws.com/supplementary/11438/8c2e3298-9ed0-4ac5-8b85-ce62336a9bbb.png\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? 
Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": [
{
"c_id": "2764",
"date": "09 Jun 2017",
"name": "Saul Newman",
"role": "Author Response",
"response": "We wish to thank MRR and LDM for their review. We agree with the ideas expressed, particularly with the risk of extrapolating log-linear models and the potential for improving estimates of lifespan limits using alternate models of old age survival. There is little about maximum human lifespan or lifespan limits can be concluded with certainty. Therefore, our focus was on implementing the simplest available methods. The appended figure S2 supported the rationale for our use of log-linear models for projecting lifespan limits. This figure reproduces the plot presented by MRR and LDM for Drosophila using human data. It plots the upper observed lifespan (MRAD) of individuals in the GRG against their calculated lifespan limit. We could match 331 validated maximum lifespan observations to a historical gender-pooled MSA estimate. Of these, 330 approached but do not exceed their predicted limit. The only exception was Jeanne Calment, who outlived the gender-pooled MSA limit by 3 years but fell short of the female-cohort limit shown in Figure 3a. Unlike Drosophila, these calculated limits seem to fit reasonably well with MRAD. However, there are many problems involved in human data that do not affect Drosophila. Human populations are not directly observable past their maximum age, individual variation in mortality acceleration is unknown, and ascertainment and validation problems abound. Therefore, we still consider these projection methods, the GRG and IDL data, and much of the discussion of MRAD, problematic.Finally, we agree that in species with a cessation of ageing or negative senescence there will be an unbounded, finite maximum lifespan with intrinsic limit. By extension, if there were a complete cessation of ageing in humans at extreme old ages there would be no intrinsic limit, but many probabilistic constraints, to human lifespan."
},
{
"c_id": "2838",
"date": "29 Jun 2017",
"name": "Xiao Dong Brandon Milholland and Jan Vijg",
"role": "Reader Comment",
"response": "In their review, Rose and Mueller briefly express their approval of Newman and Easteal’s paper before moving on to a lengthier discussion of their own experiments with Drosophila. As to the first point, we believe that even a cursory examination of our work and the material produced by Newman and Easteal will show that the accusations of errors are themselves erroneous; see our comment for an in-depth critique. As for the results in fruitflies, Rose and Mueller’s findings are potentially very interesting, but their immediate relevance to humans is unclear. If as Rose and Mueller say “extrapolating regression models…is fraught with danger” extrapolating from flies to humans must be similarly fraught. Xiao Dong, Brandon Milholland and Jan Vijg"
}
]
}
] | 1
|
https://f1000research.com/articles/6-569
|
https://f1000research.com/articles/6-858/v1
|
09 Jun 17
|
{
"type": "Research Article",
"title": "Interactions of CAF1-NOT complex components from Trypanosoma brucei",
"authors": [
"Chaitali Chakraborty",
"Abeer Fadda",
"Esteban Erben",
"Smiths Lueong",
"Jörg Hoheisel",
"Elisha Mugo",
"Christine Clayton",
"Chaitali Chakraborty",
"Abeer Fadda",
"Esteban Erben",
"Smiths Lueong",
"Jörg Hoheisel",
"Elisha Mugo"
],
"abstract": "The CAF1-NOT complex of Trypanosoma brucei, like that of other eukaryotes, contains several NOT proteins (NOT1, NOT3, NOT3/5, NOT10, and NOT11), NOT9/CAF40, and the CAF1 deadenylase, which targets 3' poly(A) tails. Again like other eukaryotes, deadenylation is the first step in the degradation of most trypanosome mRNAs. In animal cells, destruction of unstable mRNAs is accelerated by proteins that bind the RNA in a sequence-specific fashion, and also recruit the CAF1-NOT complex. However, this has not yet been demonstrated for T. brucei. To find interaction partners for the trypanosome NOT complex, we did a genome-wide yeast two-hybrid screen, using a random shotgun protein fragment library, with the subunits CAF40, NOT2, NOT10 and NOT11 as baits. To assess interaction specificity, we compared the results with those from other trypanosome proteins, including the cyclin-F-box protein CFB1. The yeast 2-hybrid screen yielded four putatively interacting proteins for NOT2, eleven for NOT11, but only one for NOT9/CAF40. Both CFB1 and NOT10 had over a hundred potential interactions, indicating a lack of specificity. Nevertheless, a detected interaction between NOT10 and NOT11 is likely to be genuine. We also identified proteins that co-purify with affinity tagged NOT9/CAF40 by mass spectrometry. The co-purifying proteins did not include the 2-hybrid partner, but the results confirmed NOT9/CAF40 association with the CAF1-NOT complex, and suggested interactions with expression-repressing RNA-binding proteins (ZC3H8, ZC3H30, and ZC3H46) and the deadenylase PARN3.",
"keywords": [
"Trypanosoma",
"mRNA decay",
"mRNA degradation",
"Crr4/Caf1/Not"
],
"content": "Introduction\n\nThe CCR4/CAF1-NOT complex (NOT complex) represses translation and deadenylates polyadenylated mRNAs, thus priming them for degradation. It has been found in all eukaryotes examined so far. CCR4 and CAF1 are deadenylases; NOT1 is a large protein upon which the whole complex is assembled, and it binds directly to most other components. NOT3 or NOT5 (two proteins in yeast), NOT2 and CAF40 assemble on one side of the complex, while CAF1, CCR4, NOT10, and NOT11 are on the other.1,2. Another component, called Not4p in yeast, is less evolutionarily conserved; it participates in protein quality control by acting as an E3 ligase3. Artificial attachment of any of the complex components to a reporter mRNA represses expression, presumably via recruitment of the whole complex4. Analysis of unstable mRNAs in animal cells has revealed that in many cases, destabilizing elements in the 3'-untranslated region are recognized by specific RNA-binding proteins (RBPs), which in turn recruit components of the NOT complex via binding to different subunits5–8.\n\nThe unicellular eukaryote Trypanosoma brucei belongs to the family Kinetoplastidae, which also includes Leishmania and Trypanosoma cruzi. All of these parasites cause serious disease in humans and other mammals, and all share the same unusual mode of gene expression, with polycistronic transcription and mRNA trans splicing. Since individual mRNAs are co-transcribed, control of gene expression is almost exclusively post-transcriptional. There are three types of deadenylation machineries: PAN2/39 PARN proteins10 and the CAF1-NOT complex11–13. The trypanosome CAF1-NOT complex, which is most important in deadenylation, consists of CAF1 (the only deadenylase12), NOT1 (Tb927.10.1510), NOT2 (Tb927.6.850), NOT3/5 (Tb927.3.1920), NOT9/CAF40 (Tb927.4.410), NOT10 (Tb927.10.8720), NOT11 (Tb927.8.1960), and the multi-purpose helicase DHH1 (Tb927.10.3990)11–13. 
Pairwise yeast–two-hybrid results indicated that trypanosome CAF1 interacted with NOT10 and the N-terminal half of NOT1, and NOT10 interacted with NOT3/513; depletion of NOT10 led to detachment of CAF1 from the NOT complex13. Later, yeast 2-hybrid screens using a \"mini-library\" of full-length proteins implicated in mRNA metabolism confirmed the expected interactions of NOT1 with NOT2 and CAF1, but also suggested that CAF1 interacts with four proteins with RNA-binding domains: RBP31, DRBD5, ZC3H5, and ZC3H15. Interactions of NOT1 with two proteins of unknown function - Tb927.4.3330 and Tb927.11.2030 - were also found14. In this paper we have investigated the interactions of NOT2, NOT10, NOT11 and NOT9/CAF40.\n\n\nResults\n\nOur trypanosome yeast two-hybrid prey library was made by random shotgun genomic cloning. The library has several million independent yeast clones, each of which expresses a different protein fragment (with roughly 1/12 being within an open reading frame and in-frame)15. NOT2, NOT10, NOT11 and CAF40 were used as baits to screen the library by mating. We also included another protein of interest, CFB1. CFB1 is expressed in bloodstream-form trypanosomes. Its function is somewhat enigmatic, though it is probably required for optimal cell growth16. All results are in Table S1. For the NOT complex components, at least one million diploid progeny were subjected to selection, resulting in between 100 and 800 surviving colonies, from which inserts were amplified and subjected to high-throughput sequencing. To find interacting proteins, we selected open reading frames for which there were at least two different in-frame sequences represented by at least 20 reads (Table S1, sheet 2). This procedure gives false positives and false negatives. Poor folding could result in a false positive due to aberrant exposure of hydrophobic surfaces, or in a false negative due to incorrect formation of a folding-dependent interaction domain. 
In addition, proteins for which the whole protein, or a (near-) complete N-terminus, are required for interaction would be excluded.\n\nWe obtained 6, 158, 15 and 3 interaction partners for NOT2, NOT10, NOT11 and CAF40 respectively; CFB1 interacted promiscuously with over 800 partners. (These did not include the only validated partner, MKT1, but we had previously found that MKT1 seems only to interact as a complete protein15.) To judge the likely specificity of the interactions, we compared them with those of MKT115, RBP10 (Mugo and Clayton; unpublished study, manuscript submitted), 4EIP and Tb927.7.2780 (unpublished studies; Terrao, ZMBH). The result for NOT10 strongly suggested a tendency for non-specific interactions, although not as severe as for CFB1. The most specific interactions for NOT2, NOT10 and NOT11 are shown in Table 1. It was notable that no interactions with either CAF1 or NOT1 were detected; this might be because the fragments encoded in our prey library are too small. NOT11 had two unique interactions, with a DNAj domain protein (Tb927.9.1560) and mitochondrial EF-Tu; the latter is unlikely to be physiological because of the protein location. There were also two interactors that were shared only with the promiscuous CFB1 - a protein phosphatase and a protein of unknown function, both of which are probably in the cytoplasm. NOT11 interacted with itself, and shared 7 other interactions with NOT10. Some of these, such as the interaction between NOT10 and NOT11, are likely to be genuine - NOT10 interacts with NOT11 in yeasts4,17. Apart from NOT11, none of these proteins interacts with bloodstream-form trypanosome mRNA14 or has any effect in the tethering assay14,18.\n\nFor details see Table S1. Only proteins that interacted in less than four screens are shown; none were positive with MKT1 or 4EIP. The columns for NOT2, NOT10, NOT11 and CAF40 show the number of different interacting protein fragments. 
The 4EIP, 7.2780 and CFB1 screens were done at different times, with different sequencing depth, so that the numbers are not strictly comparable. These are therefore designated “y” for “yes” and “0” for less than 2 interacting fragments. “Loc” indicates the subcellular location, when known, either from the TrypTag project (http://tryptag.org) or from other information. “cyt” = cytosol, “mit” = mitochondrion, “nuc” = nucleolus. POMP37 was a mitochondrial membrane protein by proteomics but the GFP-tagged protein in TrypTag has an ER-type pattern.\n\nTo supplement results from the random fragment library, we used recombinant CAF40 to screen a cDNA protein expression array and a similar array displaying proteins from full-length open reading frames of proteins involved in mRNA metabolism14,19. Two other proteins, Aurora kinase B and T. brucei polo-like kinase, were included as bait controls to exclude \"sticky\" proteins. Using the open reading frame array, no specific interaction partners were found. For the cDNA array, selected clones were sequenced, then checked in a pairwise yeast two-hybrid assay. The only confirmed interaction was with the nascent polypeptide associated complex alpha subunit-like protein (Tb927.9.8100/8130). An interaction between the NOT complex and the nascent polypeptide associated complex was previously reported in yeast; the beta subunit, Egd1p, had a weak two-hybrid interaction with Caf40p, but much stronger interactions were detected between Egd1p and some other NOT complex components20. The significance of the possible interaction is unclear since the nascent polypeptide associated complex was not found in the affinity purification (see below).\n\nTo find proteins that co-purify with CAF40, we integrated a sequence encoding a V5 tag in frame with the 5' end of the open reading frame in the genome of bloodstream form trypanosomes21.
The resulting parasites are expected to express N-terminally V5-tagged CAF40 at an approximately normal level. Eluates from two independent pull-downs - one with RNase inhibitor, and the other with RNase - were analysed by mass spectrometry, in parallel with a single control from wild-type cells (no tag) (Table S2, sheet 1). As additional controls we used the results from three other tandem affinity purifications of tagged GFP. To detect proteins that are frequent contaminants after affinity purification we compared the results with those from many other experiments from our and other labs. Since our CAF40 experiment included only two replicates, all of the results need further confirmation before definitive conclusions can be drawn. Nevertheless, V5-tagged CAF40 showed specific, reproducible RNA-independent pull down of the NOT1, NOT10, NOT11 and CAF1 subunits of the CAF1-NOT complex (Table S2, sheet 2), while co-purification of NOT2 and NOT5 was diminished by RNAse treatment (Table S2, sheet 3). The results also suggested RNA-independent association with three potential RNA-binding proteins, ZC3H8, ZC3H30 and ZC3H46. Interestingly, all three of these RNA-binding proteins were found associated with mRNA14 and each one repressed expression in tethering screens14,18. We therefore speculate that they may repress expression by recruiting the NOT complex via CAF40. Some of the proteins that were pulled down in the absence of RNase also appeared interesting; they included 7 known or potential RNA-binding proteins (CSBII, DRBD4 (PTB2), PUF6, ZC3H28 and ZC3H41, HNRNPH and Tb927.11.14090), and the 5'-3' exoribonuclease XRNA (at rather low coverage). This was, however, only a single experiment, so the results will not be discussed further.\n\n\nConclusions\n\nThe yeast 2-hybrid data by themselves provided no clues concerning functional partners of the NOT complex components investigated, beyond confirming the likely interaction between NOT11 and NOT10. 
Nevertheless, they may be useful in conjunction with other results; for example, the protein phosphatase Tb927.10.4930 might be implicated in regulation. Our mass spectrometry of CAF40, in contrast, did point to some proteins that might be involved in recruiting the NOT complex to mRNAs; this is especially likely for those proteins that also repressed expression in the tethering assay. We therefore hope that the results will be useful to others who may wish to investigate the modes of action of these proteins in detail.\n\n\nMethods\n\nThe Matchmaker GAL4 Two-Hybrid System3 (Clontech) was used, with Gateway cloning of amplified open reading frames to create bait plasmids. The trypanosome prey library, sequencing methods and bioinformatic analysis have been previously described15. Briefly, bait plasmids were transformed into AH109 yeast, and the pool of prey plasmids was transformed into the Y187 strain. The cells were allowed to mate, plated on SD agar plates lacking tryptophan, leucine, or both (double drop-out medium) to calculate mating efficiency, then plated without tryptophan, leucine, adenine and histidine (quadruple drop-out medium), and incubated for 3 to 5 days at 30°C. The resulting colonies were re-plated on quadruple drop-out SD plates with 40 μg/ml X-α-Gal and 3-amino triazole (3-AT, 0.5 to 2 mM), and incubated for 3 to 5 days. Blue colonies were then pooled and grown overnight in quadruple drop-out medium with 3-amino triazole for plasmid isolation. The plasmid inserts were amplified with bar-coded primers, pooled and sent for library preparation by David Ibbersson (BioQuant, Heidelberg, Germany), then sequenced (Illumina Hi-Seq, EMBL, Heidelberg, Germany). The numbers of reads obtained are in Table S1, Sheet 1.\n\nReads were aligned to the T. brucei 927 genome, as described15. We selected sequences that were present at least 10 times, and that had annotated open reading frames that were in frame with the DNA-binding domain (Table S1, sheets 3–5).
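The read-filtering rule just described can be sketched as a single pass over the aligned prey fragments. The tuple representation, thresholds as parameters, and function name below are illustrative assumptions, not the authors' actual pipeline; the two-distinct-fragments requirement mirrors the criterion used for the screen summary in the Results:

```python
from collections import defaultdict

def filter_prey_hits(aligned_fragments, min_reads=10, min_fragments=2):
    """Keep ORFs supported by >= min_fragments distinct in-frame prey
    fragments, each represented by >= min_reads sequencing reads.

    aligned_fragments: iterable of (orf_id, in_frame, read_count) tuples,
    one per distinct fragment (a hypothetical representation).
    """
    fragment_counts = defaultdict(int)
    for orf_id, in_frame, read_count in aligned_fragments:
        if in_frame and read_count >= min_reads:
            fragment_counts[orf_id] += 1
    return {orf for orf, n in fragment_counts.items() if n >= min_fragments}

# Example: only orfA has two qualifying in-frame fragments.
hits = filter_prey_hits([
    ("orfA", True, 12), ("orfA", True, 30),   # two in-frame fragments
    ("orfB", True, 5),                        # too few reads
    ("orfC", False, 50),                      # out of frame
    ("orfD", True, 15),                       # only one fragment
])
```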
We then chose a list of unique open reading frames with only one copy each of repeated genes22. We also restricted the selection to open reading frames that included no more than 13 codons upstream of the start codon, and included at least 6 codons before the stop codon. Individual results are in Table S1, sheets 4–7 and final results are summarized in Table S1, sheet 2. Sheet 3 includes locations found in screens with more than three different baits.\n\nAbout 10^10 cultured bloodstream-form T. brucei carrying endogenously V5-tagged CAF40 were subjected to immunoprecipitation, as previously described13. The eluate was run 1 cm into a 10% SDS-PAGE gel, and was cut into 6 pieces for mass spectrometry analysis. The pieces were transferred to a 96-well plate and reduced, alkylated and digested with trypsin, as described23, except that triethylammoniumbicarbonate buffer was used instead of ammoniumbicarbonate buffer. Following digestion, tryptic peptides were extracted from the gel pieces with 50% acetonitrile/0.1% TFA, and concentrated nearly to dryness in a SpeedVac vacuum centrifuge. Peptides were separated on an analytical column (75 µm × 150 mm) with a flow rate of 300 nl/min (nanoAcquity, Waters) using 1% formic acid and an acetonitrile gradient increasing from 3% acetonitrile to 37% acetonitrile over 50 minutes. The UPLC system was on-line coupled to an ESI LTQ Orbitrap XL MS (Thermo Fisher). One survey scan (res: 60000) was followed by 5 information dependent product ion scans in the LTQ. Only doubly and triply charged ions were selected for fragmentation.\n\nData were analyzed using Proteome Discoverer 1.4 and Mascot (Matrix Science; version 2.4). Mascot was set up to search the TriTrypDB_9.0_Tbrucei_2015Jan database (20400 entries) using trypsin as protease, a fragment ion mass tolerance of 0.50 Da and a parent mass tolerance of 100 ppm. Iodoacetamide derivative of cysteine was specified in Mascot as a fixed modification.
Deamidation of asparagine and glutamine, as well as oxidation of methionine, were specified in Mascot as variable modifications. Target decoy PSM validator was set to a target FDR (strict) of 0.01. Results are in Table S2.\n\n\nData availability\n\nDataset 1: Table S1: Yeast 2-hybrid results. A detailed legend is on the first sheet. doi, 10.5256/f1000research.11750.d163337\n\nDataset 2: Table S2: Mass spectrometry of tandem affinity purified CAF40. A detailed legend is on the first sheet. doi, 10.5256/f1000research.11750.d163338",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis work was funded by core funding to CC, and by the Deutsche Forschungsgemeinschaft (grant Cl112/24).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nWe thank David Ibbersson (BioQuant, Heidelberg) for library construction and sequencing. The pairwise interaction between CAF40 and the nascent polypeptide associated complex alpha subunit was tested in the DKFZ two-hybrid facility.\n\n\nReferences\n\nVillanyi Z, Collart MA: Building on the Ccr4-Not architecture. Bioessays. 2016; 38(10): 997–1002. PubMed Abstract | Publisher Full Text | Free Full Text\n\nUkleja M, Cuellar J, Siwaszek A, et al.: The architecture of the Schizosaccharomyces pombe CCR4-NOT complex. Nat Commun. 2016; 7: 10433. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCollart MA, Panasenko OO: The Ccr4-not complex. Gene. 2012; 492(1): 42–53. PubMed Abstract | Publisher Full Text\n\nBawankar P, Loh B, Wohlbold L, et al.: NOT10 and C2orf29/NOT11 form a conserved module of the CCR4-NOT complex that docks onto the NOT1 N-terminal domain. RNA Biol. 2013; 10(2): 228–44. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSgromo A, Raisch T, Bawankar P, et al.: A CAF40-binding motif facilitates recruitment of the CCR4-NOT complex to mRNAs targeted by Drosophila Roquin. Nat Commun. 2017; 8: 14307. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSuzuki A, Saba R, Miyoshi K, et al.: Interaction between NANOS2 and the CCR4-NOT deadenylation complex is essential for male germ cell development in mouse. PLoS One. 2012; 7(3): e33558. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSandler H, Kreth J, Timmers HT, et al.: Not1 mediates recruitment of the deadenylase Caf1 to mRNAs targeted for degradation by tristetraprolin. Nucleic Acids Res. 2011; 39(10): 4373–86. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nVan Etten J, Schagat TL, Hrit J, et al.: Human Pumilio proteins recruit multiple deadenylases to efficiently repress messenger RNAs. J Biol Chem. 2012; 287(43): 36370–83. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSchwede A, Manful T, Jha BA, et al.: The role of deadenylation in the degradation of unstable mRNAs in trypanosomes. Nucleic Acids Res. 2009; 37(6): 5511–28. PubMed Abstract | Publisher Full Text | Free Full Text\n\nUtter CJ, Garcia SA, Milone J, et al.: PolyA-specific ribonuclease (PARN-1) function in stage-specific mRNA turnover in Trypanosoma brucei. Eukaryot Cell. 2011; 10(9): 1230–40. PubMed Abstract | Publisher Full Text | Free Full Text\n\nErben E, Chakraborty C, Clayton C: The CAF1-NOT complex of trypanosomes. Front Genet. 2014; 4: 299. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSchwede A, Ellis L, Luther J, et al.: A role for Caf1 in mRNA deadenylation and decay in trypanosomes and human cells. Nucleic Acids Res. 2008; 36(10): 3374–88. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFärber V, Erben E, Sharma S, et al.: Trypanosome CNOT10 is essential for the integrity of the NOT deadenylase complex and for degradation of many mRNAs. Nucleic Acids Res. 2013; 41(2): 1211–22. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLueong S, Merce C, Fischer B, et al.: Gene expression regulatory networks in Trypanosoma brucei: insights into the role of the mRNA-binding proteome. Mol Microbiol. 2016; 100(3): 457–71. PubMed Abstract | Publisher Full Text\n\nSingh A, Minia I, Droll D, et al.: Trypanosome MKT1 and the RNA-binding protein ZC3H11: interactions and potential roles in post-transcriptional regulatory networks. Nucleic Acids Res. 2014; 42(7): 4652–68. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBenz C, Clayton CE: The F-box protein CFB2 is required for cytokinesis of bloodstream-form Trypanosoma brucei. Mol Biochem Parasitol. 
2007; 156(2): 217–24. PubMed Abstract | Publisher Full Text\n\nMauxion F, Prève B, Séraphin B: C2ORF29/CNOT11 and CNOT10 form a new module of the CCR4-NOT complex. RNA Biol. 2013; 10(2): 267–76. PubMed Abstract | Publisher Full Text | Free Full Text\n\nErben ED, Fadda A, Lueong S, et al.: A genome-wide tethering screen reveals novel potential post-transcriptional regulators in Trypanosoma brucei. PLoS Pathog. 2014; 10(6): e1004178. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHoheisel JD, Alhamdani MS, Schröder C: Affinity-based microarrays for proteomic analysis of cancer tissues. Proteomics Clin Appl. 2013; 7(1–2): 8–15. PubMed Abstract | Publisher Full Text\n\nPanasenko O, Landrieux E, Feuermann M, et al.: The yeast Ccr4-Not complex controls ubiquitination of the nascent-associated polypeptide (NAC-EGD) complex. J Biol Chem. 2006; 281(42): 31389–98. PubMed Abstract | Publisher Full Text\n\nShen S, Arhin GK, Ullu E, et al.: In vivo epitope tagging of Trypanosoma brucei genes using a one step PCR-based strategy. Mol Biochem Parasitol. 2001; 113(1): 171–3. PubMed Abstract | Publisher Full Text\n\nSiegel TN, Hekstra DR, Wang X, et al.: Genome-wide analysis of mRNA abundance in two life-cycle stages of Trypanosoma brucei and identification of splicing and polyadenylation sites. Nucleic Acids Res. 2010; 38(15): 4946–57. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTegha-Dunghu J, Neumann B, Reber S, et al.: EML3 is a nuclear microtubule-binding protein required for the correct alignment of chromosomes in metaphase. J Cell Sci. 2008; 121(Pt 10): 1718–26. PubMed Abstract | Publisher Full Text\n\nChakraborty C, Fadda A, Erben E, et al.: Dataset 1 in: Interactions of CAF1-NOT complex components from Trypanosoma brucei. F1000Research. 2017. Data Source\n\nChakraborty C, Fadda A, Erben E, et al.: Dataset 2 in: Interactions of CAF1-NOT complex components from Trypanosoma brucei. F1000Research. 2017. Data Source"
}
|
[
{
"id": "23380",
"date": "13 Jun 2017",
"name": "Susanne Kramer",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe authors were interested in identifying additional subunits of the trypanosome CAF1-NOT complex, an important macromolecular assembly involved in the first step (deadenylation) of mRNA degradation. Towards this aim, they carried out a genome-wide Y2H screen with a shotgun library using four known subunits of the complex as baits. For one of these bait proteins (CAF40), interactions were also determined by Co-IPs. While the question is of great interest, my main problem is that the experiments do not actually identify any new members of the CAF-NOT complex with confidence, contrary to what is suggested by the title.\n\nThe problem with the Y2H screen is a lack of adequate controls. With one exception, none of the previously-known interacting proteins were identified (lack of positive controls). The negative control, Y2H with CBP1, yielded over eight hundred interaction partners, but not the one that was previously known; the authors state themselves that this clearly lacks specificity. None of the identified proteins from the Y2H screen were verified, for example by reciprocal IP, or by localisation studies (members of the CAF1-NOT complex are expected in P-bodies; thus, a P-body localisation seen in TrypTag(!), could indicate, although not prove, a correct hit). I was also confused with the presentation of the data on several occasions. 
For example, on page 3 it says ‘we obtained 6, 158, 15 and 3 interaction partners with NOT2, NOT10, NOT11 and CAF40 …’, but I could not find a list of these proteins anywhere. The numbers in the abstract and in Table 1 are different. I understand that these were the most specific interaction partners, but I was confused that a mitochondrial protein was among those. Overall, I felt that the presentation of the data in the tables was a little ‘raw’. In their conclusion, the authors state that ‘The Y2H data by themselves provided no clues concerning functional partners of the Not complex’.\nThe affinity purification of Caf40 ±RNase identified several interesting proteins, although the one CAF40 interaction partner identified in the Y2H was not among these. However, these results were weakened by the authors themselves as the paragraph ends with ‘This was, however, only a single experiment, so the results will not be discussed further’. As with the Y2H data, I also felt that the data presentation was rather raw.\n\nI’m sorry not to be more positive with this manuscript. I do realise that the Y2H was a lot of work, but it is nonetheless important to increase confidence in the results using the controls/replicates suggested above. An additional approach such as BioID would also be a good control, although I realise that this is beyond the scope of this manuscript.\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nIs the study design appropriate and is the work technically sound? No\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nNot applicable\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
},
{
"id": "23381",
"date": "20 Jun 2017",
"name": "Martine A. Collart",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nIn this study the authors try to identify proteins interacting with specific components of the CAF1-NOT complex in Trypanosome brucei. They used the 2 hybrid method for this and analyzed CBF1 as a control bait to determine specificity. Since they did not identify any significant partner for one of their baits, CAF40, they used alternative approaches, one of which, the affinity purification of CAF40 from Trypanosome brucei followed by mass spectrometry, gave some results.\nGeneral comments\nThis work really does not go beyond the description of the screening and identified potential partners with a very minimal description of these potential partners.\nThe filtering used to identify which clones are more likely to be true interaction partners has its limitations. The authors use a protein of interest CBF1, find 800 partners and conclude that Cbf1 is promiscuous. We have no information about this protein, or why the identification of 800 partners means that it is promiscuous. It is also not clear why comparing partners of the CAF1-NOT proteins to those of CBF1 is useful, and the authors end up using data that is not in this manuscript, namely partners of others proteins that were screened, to determine which proteins might be specific partners of the CAF1-NOT proteins. 
It seems that in this case, it would just be more appropriate to provide all the clones isolated for the specific baits and then use a letter in the table to indicate that clones were isolated also in many other unrelated screens. The data exists in a supplementary table, but it could be put in a reader-friendly manner like Table 1 in the main text.\nIt is curious that no CAF1-NOT components were isolated. I am not sure that the size of the library fragments is a good explanation. This could be tested directly, by using the structural information available and making a very small NOT1 clone. But in any event, it seems that the more likely explanation is that the screening was not saturated.\nPrevious data has indicated that chaperones co-purify with NOT10-11, hence the finding of many partners may not be indicative of non-specific interactions but may have significance.\nIn conclusion, I do think that the results of a screen like this would be useful to the community. It could be that if we wait for the authors to try to follow up on all the clones and get relevant biological data, this may take a lot of time and hence some of the data will end up not being made visible. However, I feel that the effort the authors make to validate some clones over others is not a good idea. In any event, if they chose to do so, then there should be more description about these clones.\nMinor comment\nThe way by which Not4 acts in protein quality control is somewhat controversial; I would not write that it is by acting as an E3 ligase.\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nPartly\n\nAre all the source data underlying the results available to ensure full reproducibility?
Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": [
{
"c_id": "2811",
"date": "20 Jun 2017",
"name": "Christine Clayton",
"role": "Author Response F1000Research Advisory Board Member",
"response": "We agree entirely with both reviewers that these results need more follow-up in order to be really useful. For the yeast 2-hybrid, this would require individual verification by 2-hybrid on full-length proteins, followed by immunoprecipitations. The CAF40 mass spectrometry would need to be repeated twice more, then individual partners followed up by other means. We had obviously hoped that some CAF40 interaction partners would emerge from both screens, but that did not happen. We decided to publish as is because my lab does not currently have the resources available for further follow-up and it seemed silly to just leave the data on a private hard disk. We thought that, at least, it could help other researchers who are doing pull-downs or two-hybrid. First, it will prevent other people from wasting time doing the 2-hybrid screens again. Secondly, it might help other people to decide which results to follow up in their own interaction experiments. For example, if a lab is working on a protein X, and discovers that it pulls down a NOT complex subunit, they could look at our results and see if their protein X was in one of our datasets. If it was, that would encourage them to investigate further."
}
]
}
] | 1
|
https://f1000research.com/articles/6-858
|
https://f1000research.com/articles/6-518/v1
|
20 Apr 17
|
{
"type": "Opinion Article",
"title": "We can shift academic culture through publishing choices",
"authors": [
"Corina J Logan"
],
"abstract": "Researchers give papers for free (and often actually pay) to exploitative publishers who make millions off of our articles by locking them behind paywalls. This discriminates not only against the public (who are usually the ones that paid for the research in the first place), but also against the academics from institutions that cannot afford to pay for journal subscriptions and the ‘scholarly poor’. I explain exploitative and ethical publishing practices, highlighting choices researchers can make right now to stop exploiting ourselves and discriminating against others.",
"keywords": [
"exploitative publishing",
"ethical publishing",
"academic culture",
"discrimination"
],
"content": "The problem\n\nIn December 2016, over 100 UK universities signed away over £200 million (Gowers, 2016) to the publishing giant Elsevier so researchers at those institutions that can afford it can read their own research. However, it costs only $1.30–318 to post and preserve a PDF on the internet (Bogich et al., 2016), which is essentially all that is needed for modern publishing. How did academic publishing become dissociated from the actual cost of publishing?\n\nThe cause of the problem is multifaceted; however, I argue that researchers have played a key role because they pursue prestige, which has further distanced researchers from understanding how publishing works and how much it costs. Current incentive structures pressure researchers into pursuing prestige to advance their careers – a cultural tradition that is maladaptive because it leads to poor research methods and practices (e.g., Edwards & Roy, 2017; Lawrence, 2016; Nosek et al., 2012; Smaldino & McElreath, 2016). Much attention has been given to this topic elsewhere; therefore, I will focus on how researchers can instigate a cultural shift to change the incentive structure by valuing the improvement of research rigor through ethical publishing.\n\nThe publishing landscape has had the potential to change rapidly since the internet made communicating results cheap and easy, and many options now exist to place the focus back on increasing scientific rigor. Publishers represent a large industry in which each researcher might feel like they play a small and insignificant role. Researchers focus on their research and the myriad of other time-demanding activities needed to attempt a career in academia, leaving no time to conduct the meta-research needed to unpack how large publishers hide what they do.
I present this meta-research here by explaining two contrasting routes to publication: exploitative and ethical.\n\n\nExploitative route to publication\n\nWhen a paper is accepted at a journal that will put it behind a paywall (i.e., require a journal subscription to read), we researchers are excited and think it was free because it cost us nothing. However, academia (i.e., university libraries) pays an average $5000 per article on our behalf through subscription fees, which results in a 37% profit margin for Elsevier for example (van Noorden, 2013), whose goal is to maximize profits (Figure 1A). The goal of academia is to share research, which is in direct competition with the publisher’s goals of making profits.\n\n(A) The exploitative route exploits researchers and academia and discriminates against who can read research because only individuals at those institutions that can afford journal subscriptions can read the research. (B) The ethical route keeps profits inside academia and does not discriminate against who can read research. OA=Open Access, APC=Article Processing Charge.\n\nPublishers obtain the product (the journal article) for free, as well as many of the services involved in the peer review of the product (e.g., volunteer editor and peer reviewer time). It is estimated that the global academic community contributes £1.9 billion per year in kind so their researchers can serve as peer reviewers (Research Information Network, 2008). After obtaining these free products and services, publishers sell our research back to us at a profit.\n\nWhen the paper is published, only individuals at institutions that can afford journal subscriptions can read the research. This is a form of indirect discrimination, which is “a practice, policy or rule which applies to everyone in the same way, but it has a worse effect on some people than others” (Citizen’s Advice, 2017). 
Therefore, we not only discriminate against the public (who usually pays for our research in the first place), we also discriminate against other researchers and the ‘scholarly poor’ (e.g., medical doctors, dentists, patients, industry, politicians) when publishing behind paywalls (Murray-Rust, 2011; Tennant et al., 2016). This violates anti-discrimination policies that exist at most universities.\n\nFurther, staff at the World Health Organization (HINARI http://www.who.int/hinari/en/) and the United Nations (AGORA http://www.fao.org/agora/en/) spend valuable resources trying to get low-income countries access to our research, rather than focusing on more pressing matters, such as feeding hungry people.\n\nAdditionally, whole research fields are discriminated against because their papers do not generate as many citations as papers in other fields (e.g., Falagas & Alexiou, 2008). If a generalist journal in the sciences accepts papers from less cited fields, their journal’s impact factor would decrease. The same problem exists in the humanities only here books are the research products and publishers are the gatekeepers. Consequently, generalist science journal and humanities publisher interests influence what research is conducted because this is the only kind they will publish.\n\n\nEthical route to publication\n\nWhen a paper is accepted at a 100% open access (OA) journal, an article processing charge (APC) is incurred or there is no cost depending on which journal a researcher chooses (Figure 1B). APCs are paid by researchers, their funders, or their institutions. The researcher, not the publisher, decides how much is being paid to publish an article by choosing a journal with an APC they can afford.\n\nChoosing a 100% OA journal is not enough. For money to stay inside academia, that journal must also be published by an ethical publisher. 
Ethical publishers are academic non-profit organizations, which ensure that profits are reinvested in academia, and for-profit corporations that charge no or low APCs and/or heavily invest profits in academia and/or are working to modernize publishing infrastructure (Table 1).\n\n100% open access journals (listed in the Directory of Open Access Journals; www.doaj.org) at publishers that keep profits inside academia. Article processing charges vary from $0–2900 and fit a range of budgets. Other factors that can promote scientific rigor include publishing the review history alongside the published article (Open Reviews), having the methods and analyses peer-reviewed before the data are collected (Registered Reports), and selecting articles based on their scientific validity rather than their predicted impact on the field (which is subjective). CC-BY licenses allow people to not only read the article, but also to access its content. Some researchers prefer to submit papers to society-owned journals. NP=non-profit organization, FP=for-profit organization.\n\n*These for-profit publishers reinvest profits into academia and are working to modernize publishing infrastructure\n\n^If institutions can pay, an article processing charge of $1000 is requested\n\nEditor and peer reviewer time are donated as in the exploitative route. However, the services go toward benefiting academia rather than decreasing publisher costs to maximize profits. In either publishing route, one can make their peer reviewing efforts more valuable to academia by making pre- and/or post-publication reviews public (e.g., via Publons.com, PubPeer.com, or a blog).\n\nOne common misconception is that publishing in journals owned by academic societies is always ethical. This is not actually the case because many society journals are not 100% OA and are published by exploitative publishers. 
For example, in the field of animal behavior, the Association for the Study of Animal Behaviour owns the journal Animal Behaviour, which is a hybrid journal (not 100% OA) published by Elsevier. The Ethological Society owns the journal Ethology, which is also a hybrid journal and is published by Wiley. Both Elsevier and Wiley drain profits from academia (van Noorden, 2013). If your favorite journals are not on the ethical route, you can ask them to make their journal 100% OA and to change to an ethical publisher or use free open source publishing software (see Tennant et al., 2016 and www.corinalogan.com/journals.html).\nOA articles do not discriminate against who can read them because they are freely available to read by everyone. This results in OA articles having more readers, citations, and media attention, and their authors benefit from more job and funding opportunities (McKiernan et al., 2016; Tennant et al., 2016). Additionally, OA journals with CC-BY licenses ensure authors retain the copyright to their research, and enable others to reuse the work (with credit) and mine the content (https://sparcopen.org/our-work/author-rights/introduction-to-copyright-resources/). This means that rather than simply gaining access to a PDF to read, individuals instead gain access to the information inside the PDF, such as the data, figures, and content.\n\n\nNot all open access is equal\n\nJust because an article is OA does not mean it is ethically published. Some subscription journals give researchers the option to pay APCs, which allows that article to be OA (a hybrid journal). However, hybrid APCs are more expensive than APCs at 100% OA journals, which exploits researchers and academia (Pinfield et al., 2015; Solomon & Björk, 2016). Moreover, many publishers ‘double dip’ by collecting APCs in addition to journal subscription fees for OA articles. These publishers charge more than once for the same article, further increasing their profits.
Therefore, the ethical route to publication is also the cheapest option.\n\n\nEthical publishing is social justice for researchers and the public\n\nSince researchers are primarily funded by the public, we have a responsibility to publish ethically (Edwards & Roy, 2017; Tennant et al., 2016). We are also responsible for creating a culture that values ethical practices that increase scientific rigor – a legacy we can leave to future generations.\n\n\nResearchers can change the incentive structure by changing publishing choices\n\nFunders are driving changes in incentive structures by requiring OA (e.g., Research Councils UK, Wellcome Trust, European Commission, Bill and Melinda Gates Foundation). Researchers can also drive change. One way forward is to connect researchers with the costs and consequences of our publishing choices and shift academic publishing away from exploitative models, which will also save academia millions. All of the options we need to publish ethically already exist, and at prices that fit a range of budgets.",
"appendix": "Competing interests\n\n\n\nCJL is an (unpaid) Associate Editor at Royal Society Open Science.\n\n\nGrant information\n\nCJL has a Leverhulme Early Career Research Fellowship from the Leverhulme Trust and Isaac Newton Trust.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgments\n\nI thank Laurent Gatto, Stephen Eglen, Peter Lawrence, Peter Murray-Rust, Rupert Gatti, Yvonne Nobis, and Erin McKiernan for manuscript feedback and discussions, and Ross Mounce for discussions.\n\n\nReferences\n\nBogich T, Ballesteros S, Berjon R: On the marginal cost of scholarly communication. science.ai. 2016. Reference Source\n\nCitizen’s Advice: Indirect discrimination. 2017. Reference Source\n\nEdwards MA, Roy S: Academic Research in the 21st Century: Maintaining Scientific Integrity in a Climate of Perverse Incentives and Hypercompetition. Environ Eng Sci. 2017; 34(1): 51–61. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFalagas ME, Alexiou VG: The top-ten in journal impact factor manipulation. Arch Immunol Ther Exp (Warsz). 2008; 56(4): 223–226. PubMed Abstract | Publisher Full Text\n\nGowers T: Time for Elsexit? 2016. Reference Source\n\nLawrence PA: The Last 50 Years: Mismeasurement and Mismanagement Are Impeding Scientific Research. Curr Top Dev Biol. 2016; 116: 617–631. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMcKiernan EC, Bourne PE, Brown CT, et al.: How open science helps researchers succeed. eLife. 2016; 5: pii: e16800. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMurray-Rust P: The scholarly poor. 2011. Reference Source\n\nNosek BA, Spies JR, Motyl M, et al.: Scientific utopia: II. Restructuring incentives and practices to promote truth over publishability. Perspect Psychol Sci. 2012; 7(6): 615–631. 
PubMed Abstract | Publisher Full Text\n\nPinfield S, Salter J, Bath PA: The “total cost of publication” in a hybrid open-access environment: Institutional approaches to funding journal article-processing charges in combination with subscriptions. J Assoc Inf Sci Technol. 2015; 67(7): 1751–1766. Publisher Full Text\n\nResearch Information Network: Activities, costs and funding flows in the scholarly communications system in the UK. 2008. Reference Source\n\nSmaldino PE, McElreath R: The natural selection of bad science. R Soc Open Sci. 2016; 3(9): 160384. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSolomon D, Björk BC: Article processing charges for open access publication-the situation for research intensive universities in the USA and Canada. PeerJ. 2016; 4: e2264. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTennant JP, Waldner F, Jacques DC, et al.: The academic, economic and societal impacts of Open Access: an evidence-based review [version 3; referees: 3 approved, 2 approved with reservations]. F1000Res. 2016; 5: 632. PubMed Abstract | Publisher Full Text | Free Full Text\n\nVan Noorden R: Open access: The true cost of science publishing. Nature. 2013; 495(7442): 426–9. PubMed Abstract | Publisher Full Text"
}
|
[
{
"id": "22021",
"date": "24 Apr 2017",
"name": "Björn Brembs",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nIn this manuscript, the author proposes a more ethical publishing system compared to the one we have today. The author starts by explaining, in simple and broadly understandable terms the current parasitic relationship between corporate publishers and academia today. She correctly notes that the main driver for these developments on the academic side was \"pursuing prestige\".\nHowever, from reading the article, the reader is forced to conclude that the author believes that today, scientists do not strive to pursue prestige any more, as essentially every single suggestion the author puts forth asks scientists to do the opposite of pursuing prestige, by asking them to \"publish ethically\", regardless of the consequences for their salary, funding or other career aspects.\nSuch a text constitutes a laudable appeal to the selflessness of scholars, echoing many similar appeals that have been formulated over the last 20+ years. Clearly, those are lofty ideals, but I have strong reservations as to the mass applicability of such a plan. After all, her predecessors have asked their colleagues exactly the same thing without much tangible effect for the last 20+ years. I doubt that more of the same will drastically alter anything.\nIn contrast to what the author (and this reviewer) might hope for, it is highly unlikely that the prestige factor historically and currently dominating publication practices everywhere will disappear tomorrow.
Thus, at the very least, this article needs to deal with how scholars either a) may be convinced more effectively to adjust their publication practices against their own self-interest (and if one needs to refer to \"changing the incentive structure\", please explain how this could be done in realistic steps without brainwashing of about 7 million 'full-time equivalent' researchers) or b) remove the current source of prestige differential: journal rank. Without such an explanation, I see no value in adding this article to the already bulging literature on this topic.\nBelow are the more detailed comments to each segment of the article.\nFirst paragraph and Figure 1: It makes little sense to compare the subscription costs of a subset of libraries of a single country with the online archiving costs of some file on the internet. There is no relation at all between such two completely arbitrary numbers.\nWe do know what the annual cost of publishing scholarly articles is: several sources mention converging ballpark figures just under US$10b. With the number of articles per year at about 2m, we arrive at a current consensus figure of ~US$5k per article the taxpayer is currently paying. We also know that a whole slew of publishers operate on per-article costs of just under ~US$100 up to ~US$500, which constitute the lower bound of actual per article costs. In other words, anything above ~US$500 requires an explanation (in some cases even costs above US$100). In the case of current subscription publishers, this difference includes (but is not limited to) profit and paywalls. In the case of gold publishers, it is not at all clear where the difference to 100-500 goes. Hence, in Fig. 1A there is a lot missing and in Fig. 1B, it is not at all clear why charging $2900 should not be similarly exploitative as in Fig. 1A.\nI suggest dropping all the numbers in Fig.
1 and just show profit and paywalls in A as excluding scholarship, while whatever costs accrue in B are investments and not lost.\nExploitative route: The author writes: \"Publishers obtain the product (the journal article) for free, as well as many of the services involved in the peer review of the product (e.g., volunteer editor and peer reviewer time).\" This wording invites misunderstandings: scholars don't work for free, many if not most of them earn a (in some cases more than decent) wage. They provide their services mostly for the authors, which coincidentally means at no cost for the publishers. This is not to be confused with \"free\" - it is actually a coincidental subsidy of publishers inasmuch as the scholars' salary is paid out of the public purse.\n\"When the paper is published, only individuals at institutions that can afford journal subscriptions can read the research.\" In principle, this is correct. However, this statement is complicated by, e.g., the fact that some institutions may be able to afford subscriptions, but choose not to subscribe to certain journals and that most publishers offer reduced or even waived subscription fees to developing countries on the IUGG or UNDP lists.\nThe author also cited an \"impact factor\" without reference. In the case of Clarivate Analytics' Impact Factor, the author cannot cite the IF as if it were computed rather than, at least in part, negotiated, without clarifying citations.\nIn this section, the author also neglects the standard acquisition rules in academia (and indeed in the entire public sector!) that acquisitions need to follow a bidding process. Subscriptions these days, especially the \"Big Deal\" bought by large public institutions, are negotiated behind closed doors, commonly with professional publisher negotiators completely outmaneuvering their hapless librarian counterparts. 
Any mention of costs should reasonably also mention the way academia pays for them: by breaking or at least bending commonplace rules.\nEthical route to publication: Already in the first paragraph, the author paints a misleading picture, contradicting her own text until this point. Above, the author stated: \" researchers have played a key role because they pursue prestige \" Indeed: researchers pursue prestige. Even if all journals were OA provided by NP organizations, they would still pursue prestige, all else being equal. Hence, the authors would *not* choose a journal with an \"APC they can afford\", but with a *prestige* they can afford. This, of course, makes all the difference in the world: if a lab can afford, say, 50k for a Nature article, of course they will pay for it. If a lab cannot, then the authors will have to pay out of their pocket what is required to secure a permanent position. Hence, without eliminating prestige, the injustice and discrimination so rightfully called out by the author above, will simply be transferred from reading to publishing: today, the scholarly poor can't read. In a gold-OA world as described in the article so far, the scholarly poor can't publish (at least not where they get noticed). Given sci-hub et al., the gold-OA route described so far seems even less ethical than the exploitative publishing system where the rich subsidize an obscenely expensive anachronism, such that at least the poorest countries can read and publish for free.\nThere remains much work for the author to convince anyone that just because there are no profits and no paywalls, the proposed system will be any fairer.\nTable 1: Likewise, there is little to convince at least this reviewer that all the journals listed here are really that much more ethical than the current corporate parasites. 
Certainly, the RoySoc journal looks perfect, but the reader doesn't know where the money is coming from and has to trust the name of the publisher in terms of functionalities, such as, e.g. digital long-term preservation, TDM, data and code requirements, and many more. PeerJ (which I support) are a business where we have to trust their founders that they really use our money wisely. eLife is published by the MPG and only publishes a small fraction of submitted articles at a cost prohibitive for most scholarly poor. In terms of reproducibility, we do not have any data, yet, but if eLife can be lumped in with the GlamMagz in this regard, the statistics tell us that eLife will be part of the problem, rather than the solution - and who wants the public to have access to unreliable research? CCBR publishes with a very restrictive license, which can hardly be called \"OA\" (e.g., no TDM allowed!), PLoS APCs are also much higher than they need to be in case of P1 due to this journal subsidizing their community journals and for the community journals due to their selectivity, which increases unreliability (statistically, on average). Neither ScienceOpen nor Biology Open (nor any of the other journals!) offer competitors to take over their services in case users are not pleased with what they get.\nThus, in brief, the list in Table 1 looks like a half-hearted attempt at saving a 20th century industry from obsolescence and badly mangling product functionality and market effectiveness as unintended consequences.\nAvailability to read by everyone leads to additional benefits: Actually, PDF is probably among the worst formats for TDM. What would be required is a scholarly mark-up language that can be easily converted into any format the user desires. 
Just flipping our existing journals to ethical publishers and hoping that the \"invisible hand\" of the market will then automagically create such scholarly standards will likely not be sufficient.\nResearchers can change the incentive structure by changing publishing choices: While the author is merely simplistic and/or naive in her approach thus far, this last paragraph borders on wishful thinking. For more than 20 years we have had the possibility to make our work OA at point of publication with just a few clicks and haven't done so: as long as hypercompetition demands that we publish in certain venues, just making people pay won't change a thing. If I'm an early-career researcher and Nature has accepted my manuscript, I will publish there, as long as it carries the prospect of getting a job. In that case (and this is how it still is), this researcher will publish there if it is TA, hybrid or OA, (almost) regardless of cost. In the US (and increasingly in the UK and other countries as well), people go into debt for the prestige of a degree from certain universities. Surely they will go a little more into debt for the prestige of a certain journal? No, scholars are not free to choose where they publish and just making it expensive for them won't change that - other than making the procedure more hateful than it already is.\nIn conclusion, I'm far from convinced that the world the author describes will be any more ethical or fairer than today. In fact, from most relevant aspects, it seems it will make things even worse than what we have today, as bad as it currently is. Other than from a historical perspective, the author completely fails to account for the main driver of publication practices: prestige. For her suggestion to actually improve anything, she needs to explain why the prestige factor should completely disintegrate overnight, which seems highly implausible.
I hence cannot see anything that this article could possibly contribute to the debate on this topic that hasn't already been said elsewhere, with more competence and persuasion.\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? Partly\n\nAre all factual statements correct and adequately supported by citations? Partly\n\nAre arguments sufficiently supported by evidence from the published literature? Partly\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Partly",
"responses": [
{
"c_id": "2752",
"date": "09 Jun 2017",
"name": "Corina Logan",
"role": "Author Response",
"response": "Dear Björn, Thank you very much for taking the time to read and comment on my article. I really appreciate the time you invested in helping me make the piece better. Please see my responses (marked by >>>) to your comments below. Many thanks, Corina 1) In this manuscript, the author proposes a more ethical publishing system compared to the one we have today. The author starts by explaining, in simple and broadly understandable terms the current parasitic relationship between corporate publishers and academia today. She correctly notes that the main driver for these developments on the academic side was \"pursuing prestige\". However, from reading the article, the reader is forced to conclude that the author believes that today, scientists do not strive to pursue prestige any more, as essentially every single suggestion the author puts forth asks scientists to do the opposite of pursuing prestige, by asking them to \"publish ethically\", regardless of the consequences for their salary, funding or other career aspects. Such a text constitutes a laudable appeal to the selflessness of scholars, echoing many similar appeals that have been formulated over the last 20+ years. Clearly, those are lofty ideals, but I have strong reservations as to the mass applicability of such a plan. After all, her predecessors have asked their colleagues exactly the same thing without much tangible effect for the last 20+ years. I doubt that more of the same will drastically alter anything. In contrast to what the author (and this reviewer) might hope for, it is highly unlikely that the prestige factor historically and currently dominating publication practices everywhere will disappear tomorrow. 
Thus, at the very least, this article needs to deal with how scholars either a) may be convinced more effectively to adjust their publication practices against their own self-interest (and if one needs to refer to \"changing the incentive structure\", please explain how this could be done in realistic steps without brainwashing of about 7 million 'full-time equivalent' researchers) or b) remove the current source of prestige differential: journal rank. Without such an explanation, I see no value in adding this article to the already bulging literature on this topic. >>> One confusing aspect of version 1 was that I phrased the aim of my paper as changing the incentive structure; however, as Chris Hartgerink pointed out, this wasn’t actually the aim of the paper. I agree because I don’t focus on incentives or prestige in this article. The focus of my paper is to explain how the current publishing landscape works because this is what researchers always find surprising when I give this paper as a talk. After my talks, researchers often approach me saying how they want to change something about their publishing choices. So it seems that researchers don’t generally know much about how publishing works (it’s no wonder, we are so removed from it when we submit papers. And this kind of understanding isn’t taught or mentored unless their mentor is an expert in this topic). For me, if the information in this paper is able to change the publishing choices of even a few researchers, I will consider it a success (it has already influenced at least one researcher: https://twitter.com/van_tho/status/855317905759547392). Indeed, it is only because a colleague questioned me about why I publish where I publish that I started learning how publishing works in the first place, which caused me to change my publishing choices. There is some evidence that researchers at universities with increased awareness of OA policies also have more experience publishing OA (Zhy 2017). 
This suggests that education about OA has the potential to change behavior. I changed the aim of the paper to reflect its actual aim: My aim here is to explain how the current publishing landscape works and to highlight ethical and exploitative aspects that are not always obvious. I argue that it is in the best interest of researchers, academia, the public, and research rigor to adopt ethical publishing choices. Adopting such choices will instigate a cultural shift in academia. Addressing prestige and incentive structures is beyond the scope of this article; however, to elaborate on how individual publishing choices can result in a culture change, I added: academia is made up of individuals, and the values of these individuals create academic culture. If researchers align their publishing choices with ethical publishing practices, then academic culture changes. We can all easily change our values and actions right now. 2) First paragraph and Figure 1: It makes little sense to compare the subscription costs of a subset of libraries of a single country with the online archiving costs of some file on the internet. There is no relation at all between such two completely arbitrary numbers. >>> Good point. For a cost per article comparison, I added: The global average cost of publishing a paywalled article is $5000 ( van Noorden, 2013). 3) We do know what the annual cost of publishing scholarly articles is: several sources mention converging ballpark figures just under US$10b. With the number of articles per year at about 2m, we arrive at a current consensus figure of ~US$5k per article the taxpayer is currently paying. We also know that a whole slew of publishers operate on per-article costs of just under ~US$100 up to ~US$500, which constitute the lower bound of actual per article costs. In other words, anything above ~US$500 requires an explanation (in some cases even costs above US$100). 
In the case of current subscription publishers, this difference includes (but is not limited to) profit and paywalls. In the case of gold publishers, it is not at all clear where the difference to 100-500 goes. Hence, in Fig. 1A there is a lot missing and in Fig. 1B, it is not at all clear why charging $2900 should not be similarly exploitative as in Fig. 1A. I suggest to drop all the numbers in Fig. 1 and just show profit and paywalls in A as excluding scholarship, while whatever costs accrue in B are investments and not lost. >>> Regarding the difference between the actual cost of publishing an article and the wide range of APCs on the ethical route in Figure 1B, I added: Given that the actual cost of publishing an article is $1.30–318 ( Bogich et al., 2016), it is important to consider where the additional money goes when paying APCs. Some journals charge higher APCs to cover their additional costs, which might involve paying staff for editorial services, promoting the journal, writing news stories, or developing new publishing technology (e.g., see eLife’s cost breakdown at: https://elifesciences.org/elife-news/inside-elife-what-it-costs-publish). For some journals, their higher APCs also provide income for the publisher’s shareholders (see the Exploitative route to publication). If you choose to pay a higher APC, make sure the activities the public’s money will be invested in are aligned with the three ethical principles above. I want to keep the numbers in Figure 1 because I think it is important to directly connect researchers with the costs of publishing. If readers can see price tags for the different routes to publishing, it will help connect them with the reality of their choices. 
4) Exploitative route: The author writes: \"Publishers obtain the product (the journal article) for free, as well as many of the services involved in the peer review of the product (e.g., volunteer editor and peer reviewer time).\" This wording invites misunderstandings: scholars don't work for free, many if not most of them earn a (in some cases more than decent) wage. They provide their services mostly for the authors, which coincidentally means at no cost for the publishers. This is not to be confused with \"free\" - it is actually a coincidental subsidy of publishers inasmuch as the scholars' salary is paid out of the public purse. >>> Thanks for pointing this out. I clarified the sentence to: Publishers pay nothing for the product (the journal article) or the services involved in the peer review of the product. In the next sentence, I replaced the word “free” with “publicly-funded” products and services. 5) \"When the paper is published, only individuals at institutions that can afford journal subscriptions can read the research.\" In principle, this is correct. However, this statement is complicated by, e.g., the fact that some institutions may be able to afford subscriptions, but choose not to subscribe to certain journals and that most publishers offer reduced or even waived subscription fees to developing countries on the IUGG or UNDP lists. >>> Regarding waived subscription fees to some developing countries, I added: What’s more, publishers breach these agreements by denying previously-promised access (Perez Koehlmoos & Smith 2011). 6) The author also cited an \"impact factor\" without reference. In the case of Clarivate Analytics' Impact Factor, the author cannot cite the IF as if it were computed rather than, at least in part, negotiated, without clarifying citations. 
>>> I added a reference and changed the sentence to: If a generalist journal in the sciences accepts papers from less cited fields, their journal’s Thomson Reuters impact factor would decrease (PLoS Medicine Editors 2006). 7) In this section, the author also neglects the standard acquisition rules in academia (and indeed in the entire public sector!) that acquisitions need to follow a bidding process. Subscriptions these days, especially the \"Big Deal\" bought by large public institutions, are negotiated behind closed doors, commonly with professional publisher negotiators completely outmaneuvering their hapless librarian counterparts. Any mention of costs should reasonably also mention the way academia pays for them: by breaking or at least bending commonplace rules. >>> Great point. I added: Additionally, universities breach their standard practice of choosing the most competitive bid: publishers do not compete with each other to obtain university subscriptions on the premise that each publisher’s goods are unique (Eve 2016). 8) Ethical route to publication: Already in the first paragraph, the author paints a misleading picture, contradicting her own text until this point. Above, the author stated: \" researchers have played a key role because they pursue prestige \" Indeed: researchers pursue prestige. Even if all journals were OA provided by NP organizations, they would still pursue prestige, all else being equal. Hence, the authors would *not* choose a journal with an \"APC they can afford\", but with a *prestige* they can afford. This, of course, makes all the difference in the world: if a lab can afford, say, 50k for a Nature article, of course they will pay for it. If a lab cannot, then the authors will have to pay out of their pocket what is required to secure a permanent position. 
Hence, without eliminating prestige, the injustice and discrimination so rightfully called out by the author above, will simply be transferred from reading to publishing: today, the scholarly poor can't read. In a gold-OA world as described in the article so far, the scholarly poor can't publish (at least not where they get noticed). Given sci-hub et al., the gold-OA route described so far seems even less ethical than the exploitative publishing system where the rich subsidize an obscenely expensive anachronism, such that at least the poorest countries can read and publish for free. There remains much work for the author to convince anyone that just because there are no profits and no paywalls, the proposed system will be any fairer. >>> I agree that incentive structures need to change to address the massive prestige issue. The Smaldino and McElreath (2017) paper is particularly useful for providing the incentive to change incentives because their model indicates that selective journals actually select for bad science. Since prestigious journals are selective, they are more likely to have selected for bad science than a journal that selects papers based on scientific validity and ignores subjectively determined potential impact. A nuanced treatment of the prestige issue is beyond the scope of this opinion piece, but I added: Academia is made up of individuals, and the values of these individuals create academic culture. If researchers align their publishing choices with ethical publishing practices, then academic culture changes. We can all easily change our values and actions right now. Regarding discriminating against who can pay to publish, I added: There is a further argument to be made that no money should be exchanged when publishing research products, neither via journal subscriptions nor APCs, because the public has already paid for the research. 
Any costs that are charged in addition to the initial funding create inequalities in who can pay to publish or read (Fuchs & Sandoval 2013), and violate ethical principle 1. 9) Table 1: Likewise, there is little to convince at least this reviewer that all the journals listed here are really that much more ethical than the current corporate parasites. Certainly, the RoySoc journal looks perfect, but the reader doesn't know where the money is coming from and has to trust the name of the publisher in terms of functionalities, such as, e.g. digital long-term preservation, TDM, data and code requirements, and many more. PeerJ (which I support) is a business where we have to trust their founders that they really use our money wisely. eLife is published by the MPG and only publishes a small fraction of submitted articles at a cost prohibitive for most scholarly poor. In terms of reproducibility, we do not have any data, yet, but if eLife can be lumped in with the GlamMagz in this regard, the statistics tell us that eLife will be part of the problem, rather than the solution - and who wants the public to have access to unreliable research? CCBR publishes with a very restrictive license, which can hardly be called \"OA\" (e.g., no TDM allowed!), PLoS APCs are also much higher than they need to be in case of P1 due to this journal subsidizing their community journals and for the community journals due to their selectivity, which increases unreliability (statistically, on average). Neither ScienceOpen nor Biology Open (nor any of the other journals!) offer competitors to take over their services in case users are not pleased with what they get. Thus, in brief, the list in Table 1 looks like a half-hearted attempt at saving a 20th century industry from obsolescence and badly mangling product functionality and market effectiveness as unintended consequences. 
>>> It is my intent with this paper to list the most ethical journals that are available right now so researchers can go out tomorrow and change their publishing choices without having to wait for the infrastructure to change. This doesn’t mean that all of the journals in the list are ideal models for how publishing should work, nor does it mean that they will remain ethical (according to the criteria in the paper) in the future if they are, for example, sold to Elsevier. PeerJ is pretty open about where they spend their money (e.g., https://peerj.com/blog/post/115284878682/new-publication-prices-at-peerj/), and eLife provides APC waivers (https://elifesciences.org/articles/21230). I contacted CCBR, bringing it to their attention that they should use a CC-BY licence instead - thanks for pointing that out! I agree with you that selective journals are part of the problem (see Smaldino & McElreath 2016), which is why I make it clear which journals select based on subjective impact in Table 1. Since I do not address the prestige issue in this paper, I’m not going to go into the details about the difference between selective and non-selective journals. I lay out the options for people so they can make their own choices based on information about these options. 10) Availability to read by everyone leads to additional benefits: Actually, PDF is probably among the worst formats for TDM. What would be required is a scholarly mark-up language that can be easily converted into any format the user desires. Just flipping our existing journals to ethical publishers and hoping that the \"invisible hand\" of the market will then automagically create such scholarly standards will likely not be sufficient. >>> I agree with you and have been learning about reproducible manuscripts (see Hartgerink 2017 http://onsnetwork.org/chartgerink/2017/03/30/reproducible-manuscripts-are-the-future/). 
Depending on the publishing platform a publisher uses, it could be very easy for a journal to change from publishing PDFs to reproducible manuscripts. Indeed, eLife is considering doing so in response to a conversation a few of us researchers had with one of their staff members (https://elifesciences.org/labs/cad57bcf/composing-reproducible-manuscripts-using-r-markdown). In contrast, it would likely be difficult for a publisher like the Royal Society to switch because they use ScholarOne, which is inflexible and expensive. If journals are aware that their researchers want to publish reproducible manuscripts, then they can adapt accordingly, but we need to speak directly to the journals to make this happen. 11) Researchers can change the incentive structure by changing publishing choices: While the author is merely simplistic and/or naive in her approach thus far, this last paragraph borders on wishful thinking. For more than 20 years we have had the possibility to make our work OA at point of publication with just a few clicks and haven't done so: as long as hypercompetition demands that we publish in certain venues, just making people pay won't change a thing. If I'm an early-career researcher and Nature has accepted my manuscript, I will publish there, as long as it carries the prospect of getting a job. In that case (and this is how it still is), this researcher will publish there if it is TA, hybrid or OA, (almost) regardless of cost. In the US (and increasingly in the UK and other countries as well), people go into debt for the prestige of a degree from certain universities. Surely they will go a little more into debt for the prestige of a certain journal? No, scholars are not free to choose where they publish and just making it expensive for them won't change that - other than making the procedure more hateful than it already is. >>> I removed references to incentives (see my response to comment 1). 
I have found that by educating researchers about how publishing works and how we exploit ourselves and discriminate against others in the process, many are interested in making changes. I know researchers who only publish OA and in non-glamorous journals and they have gotten excellent jobs (including myself). Additionally, there is growing evidence that publishing OA gives researchers an advantage in their careers (McKiernan et al. 2016 https://elifesciences.org/articles/16800). 12) In conclusion, I'm far from convinced that the world the author describes will be any more ethical or fairer than today. In fact, in most relevant aspects, it seems it will make things even worse than what we have today, as bad as it currently is. Other than from a historical perspective, the author completely fails to account for the main driver of publication practices: prestige. For her suggestion to actually improve anything, she needs to explain why the prestige factor should completely disintegrate overnight, which seems highly implausible. I hence cannot see anything that this article could possibly contribute to the debate on this topic that hasn't already been said elsewhere, with more competence and persuasion. >>> Please see my responses to comments 1 and 8. References Zhu, Y. (2017). Who support open access publishing? Gender, discipline, seniority and other factors associated with academics’ OA practice. Scientometrics, 111(2), 557-579."
}
]
},
{
"id": "22132",
"date": "04 May 2017",
"name": "Anthony Dart",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThis paper addresses the cost of academic publishing, the role of the profit margin in this cost and the inequitable access inherent in a reader-pays model, whereby potential readers with limited financial resources are prevented from accessing articles relevant to their research. The author also acknowledges that the prestige-seeking behaviours of researchers are an important factor in considering academic publications.\n\nIt is a given that publishing academic research, in whatever form, has to incur a cost. This can come from the reader in the form of subscriptions etc., from the author in the form of an article processing charge, from third parties such as advertisers, or a combination of some or all of these. I imagine all or most researchers would agree with the notion that cost should be kept to a minimum. As stated in the article, it is also important to improve equitability of access and to retain as much funding as possible for research itself.\n\nAs one solution the author suggests that researchers should elect to use an ethical route to publication. One of the features of this would be that ‘any profit’ would be returned to academia, and this would be most readily achieved through learned societies or institutions publishing in their own right. 
As the author indicates, the true nature of the finances behind publications can be opaque, and it would be a big burden on researchers to investigate and keep up to date with the financial arrangements of the myriad of publishing vehicles now available. In relation to this, and perhaps a little overlooked in the article, is indeed the gross proliferation of journals now touting their business to the academic community. Almost all these journals require an article processing charge to be paid, and it is usually not evident how much of this contributes to the publisher's profit margin.\n\nThe author's suggestion that researchers could elect to publish with the publishers whose charges are within their means is not really going to help with issues of equitability. Researchers, certainly under the current usual means of performance evaluation, will have an overriding desire to publish in the most prestigious journal available. This is especially so given that, with the proliferation of journals, a situation has been reached whereby almost anything can be published provided that authors persevere! Therefore there need to be other ways to overcome the perception that impact factor is a surrogate measure of the importance and validity of data. Certainly publication of original data and comprehensive methods etc., generally in supplementary material or an appendix, can help in this regard.\n\nAlthough not the subject of this article, there is no doubt that real reform in this sector thus requires a change in the way academic careers are evaluated. The reliance on the number of publications and their impact has become so important in most institutions that real reform can only happen once reliance on these measures is reduced.\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? 
Partly\n\nAre arguments sufficiently supported by evidence from the published literature? Partly\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Partly",
"responses": [
{
"c_id": "2751",
"date": "09 Jun 2017",
"name": "Corina Logan",
"role": "Author Response",
"response": "Dear Anthony, I greatly appreciate your taking the time to read and review this paper! Your comments really helped me improve the manuscript. Below, I indicate how I addressed each of your comments (marked by >>>). Thank you again! My best, Corina 1. As one solution the author suggests that researchers should elect to use an ethical route to publication. One of the features of this would be that ‘any profit’ would be returned to academia and this would be most readily achieved through learned societies or institutions publishing in their own right. As the author indicates, the true nature of the finances behind publications can be opaque, and it would be a big burden on researchers to investigate and keep up to date with the financial arrangements of the myriad of publishing vehicles now available. In relation to this, and perhaps a little overlooked in the article, is indeed the gross proliferation of journals now touting their business to the academic community. Almost all these journals require an article processing charge to be paid, and it is usually not evident how much of this contributes to the publisher's profit margin. >>> There are definitely lots of journals out there to filter through, which is time consuming. I added: It is time consuming to investigate all available journals to determine which are more ethical. Lists, such as the Directory of Open Access Journals (DOAJ), can help determine which journals are reputable, but further criteria are needed about a journal and publisher’s business model to evaluate their ethical or exploitative practices. I provide such a list for the field of animal behavior in Table 1. If a similar list does not exist for your field, consider making one and sharing it. I made sure to describe that the DOAJ is a way of quality-checking OA journals. Researchers investigating the other criteria in Table 1 for a particular journal will likely be able to answer the question about where the publisher’s profits go. 2. 
The author's suggestion that researchers could elect to publish with the publishers whose charges are within their means is not really going to help with issues of equitability. Researchers, certainly under the current usual means of performance evaluation, will have an overriding desire to publish in the most prestigious journal available. This is especially so given that, with the proliferation of journals, a situation has been reached whereby almost anything can be published provided that authors persevere! Therefore there need to be other ways to overcome the perception that impact factor is a surrogate measure of the importance and validity of data. Certainly publication of original data and comprehensive methods etc., generally in supplementary material or an appendix, can help in this regard. >>> I agree that incentive structures need to change to address the prestige issue. The Smaldino and McElreath (2017) paper is particularly useful for providing the incentive to change incentives because their model indicates that selective journals actually select for bad science. Since prestigious journals are selective, they are more likely to have selected for bad science than a journal that selects papers based on scientific validity and ignores subjectively determined potential impact. A nuanced treatment of the prestige issue is beyond the scope of this opinion piece, but I added: Academia is made up of individuals, and the values of these individuals create academic culture. If researchers align their publishing choices with ethical publishing practices, then academic culture changes. We can all easily change our values and actions right now. 3. Although not the subject of this article, there is no doubt that real reform in this sector thus requires a change in the way academic careers are evaluated. 
The reliance on the number of publications and their impact has become so important in most institutions that real reform can only happen once reliance on these measures is reduced. >>> I completely agree that evaluation structures need to change to effectively evaluate the quality of someone’s research. There are many good summaries of the problem (e.g., http://www.nature.com/news/the-focus-on-bibliometrics-makes-papers-less-useful-1.16706, https://www.nature.com/news/bibliometrics-the-leiden-manifesto-for-research-metrics-1.17351), so I don’t feel I can add much there. Of course, talking about it doesn’t necessarily change people’s actions, which is what we really need. Perhaps change is beginning to happen at institutions that sign the Declaration on Research Assessment (DORA, http://www.ascb.org/dora/). However, I don’t know of case studies that indicate academic culture is moving away from bibliometrics."
}
]
},
{
"id": "22316",
"date": "08 May 2017",
"name": "Chris H.J. Hartgerink",
"expertise": [
"Reviewer Expertise meta-research",
"statistics"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nConsidering this is an opinion piece, my peer-review should be regarded more as a discussion. Judging whether an opinion is scientifically valid makes no sense to me; it is still worthwhile to discuss the contents nonetheless.\nIn general, this opinion piece aims to incentivize a shift towards more Open Access (OA) publishing, and specifically more ethical OA publishing. In the first two sections (\"The problem\" and \"Exploitative route to publication\") the author aptly summarizes the current situation, albeit sometimes implying that the reader is familiar with certain aspects of the discussion.\nMoreover, in the first section, the author makes quite a promising statement: \"I will focus on how researchers can instigate a cultural shift to change the incentive structure by valuing the improvement of research rigor through ethical publishing.\" However, in the sections following, I was disappointed to see that the author primarily focuses on describing the landscape instead of actually providing ways to instigate cultural change. Understanding the landscape is important, but the effective, actionable aspect of the piece that was offered in the beginning remains absent. As such, the piece does not deliver.\nIn the next section on exploitation, the author mentions exploitative and ethical publishing. 
Although I tend to agree that OA is less exploitative, calling it ethical is rather difficult without an explanation as to what normative framework is being applied to judge this. Why, for example, is the APC range of 0-2900 USD seen as ethical, when in the first paragraph it is mentioned that publishing costs range between 1.30-318USD? I understand many of the underlying principles, but I think the discussion of these issues can be honed and would make it much more convincing for people unfamiliar with many of the underlying principles that are implicit for OA proponents. E.g., is the ethical statement made from a Kantian viewpoint that OA is more sustainable? If so, please make it more explicit so it can show the underlying logic instead of just the conclusions.\nContinuing with providing explanations as to why certain things are considered ethical, I think the piece could really benefit from justification as to why keeping money inside academia would be considered more ethical. For-profit businesses can very much contribute ethically to the knowledge ecosystem and retain the profits, although it would require some changes in how the system is set up (e.g., knowledge should no longer be commodified). It is rather narrow to state that keeping money within academia is beneficial to academia more so than a combination of inside and outside, at least without thorough analysis as to why that would be the case. The premises seem to be implied now, which makes it rather unconvincing (despite the fact that I somewhat agree with the outcome).\nAs such, it seems to me that the perspective proposed here is lacking in thoroughness of the reasoning proposed (despite the fact that I am a proponent of OA). As such, I would encourage the author to make the implicit steps taken in the reasoning more explicit. Moreover, calling something ethical without providing a framework is, to me, rather difficult. 
Deeming something ethical is always subject to cultural context and the normative framework.\nFinally, I would like to ask the author whether she thinks that philanthropic efforts to increase OA are ethical in themselves. For example, OA is promoted by the Bill and Melinda Gates Foundation (BMGF), but recent efforts that put pressure on publishers have created an OA privilege, so it seems. Researchers funded by the BMGF now have the possibility to publish gold-OA in Science, for example, but non-BMGF financed researchers do not. As such, considering Merton's ethical framework for science, this decreases equality between researchers and could be considered unethical. If OA is deemed ethical, are the means to an end here deemed ethical as well? It seems that this is a crucial question that is being neglected throughout this opinion piece.\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? Partly\n\nAre all factual statements correct and adequately supported by citations? Partly\n\nAre arguments sufficiently supported by evidence from the published literature? Partly\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Partly",
"responses": [
{
"c_id": "2750",
"date": "09 Jun 2017",
"name": "Corina Logan",
"role": "Author Response",
"response": "Dear Chris, Thank you so much for bringing up excellent discussion points on my opinion piece! Your comments were spot on and really helped me improve the message in the revised version. Please see my responses (marked by >>>) to your comments below. All my best, Corina 1. In general, this opinion piece aims to incentivize a shift towards more Open Access (OA) publishing, and specifically more ethical OA publishing. In the first two sections (\"The problem\" and \"Exploitative route to publication\") the author aptly summarizes the current situation, albeit sometimes implying that the reader is familiar with certain aspects of the discussion. >>> I hope I clarified the parts of the discussion that assumed familiarity by addressing your comments below. I also identified sentences that could benefit from references and added the relevant references. 2. Moreover, in the first section, the author makes quite a promising statement: \"I will focus on how researchers can instigate a cultural shift to change the incentive structure by valuing the improvement of research rigor through ethical publishing.\" However, in the sections following, I was disappointed to see that the author primarily focuses on describing the landscape instead of actually providing ways to instigate cultural change. Understanding the landscape is important, but the effective, actionable aspect of the piece that was offered in the beginning remains absent. As such, the piece does not deliver. >>> This is a good point: I don’t focus on incentives or prestige in this article. The focus of my paper is to explain how the current publishing landscape works because this is what researchers always find surprising when I give this paper as a talk. After my talks, researchers often approach me saying how they want to change something about their publishing choices. 
So it seems that researchers don’t generally know much about how publishing works (it’s no wonder, we are so removed from it when we submit papers. And this kind of understanding isn’t taught or mentored unless their mentor is an expert in this topic). I changed the sentence to: My aim here is to explain how the current publishing landscape works and to highlight ethical and exploitative aspects that are not always obvious. I argue that it is in the best interest of researchers, academia, the public, and research rigor to adopt ethical publishing choices. Adopting such choices will instigate a cultural shift in academia. 3. In the next section on exploitation, the author mentions exploitative and ethical publishing. Although I tend to agree that OA is less exploitative, calling it ethical is rather difficult without an explanation as to what normative framework is being applied to judge this. Why, for example, is the APC range of 0-2900 USD seen as ethical, when in the first paragraph it is mentioned that publishing costs range between 1.30-318USD? I understand many of the underlying principles, but I think the discussion of these issues can be honed and would make it much more convincing for people unfamiliar with many of the underlying principles that are implicit for OA proponents. E.g., is the ethical statement made from a Kantian viewpoint that OA is more sustainable? If so, please make it more explicit so it can show the underlying logic instead of just the conclusions. >>> I can see where this was confusing. Regarding the difference between the actual cost of publishing an article and the wide range of APCs on the ethical route in Figure 1B, I added: Given that the actual cost of publishing an article is $1.30–318 ( Bogich et al., 2016), it is important to consider where the additional money goes when paying APCs. 
Some journals charge higher APCs to cover their additional costs, which might involve paying staff for editorial services, promoting the journal, writing news stories, or developing new publishing technology (e.g., see eLife’s cost breakdown at: https://elifesciences.org/elife-news/inside-elife-what-it-costs-publish). For some journals, their higher APCs also provide income for the publisher’s shareholders (see the Exploitative route to publication). If you choose to pay a higher APC, make sure the activities the public’s money will be invested in are aligned with the three ethical principles above. Excellent point about the paper lacking an ethical framework. At the beginning of the paper I added: In this ethical framework, I rely on three principles: 1) Researchers and publishers have a responsibility to the public to provide them with free access to publicly funded products, which are a common good (Woodward 1990, Stilgoe et al. 2013) 2) Publishers of research products have a responsibility to researchers to value the generation and packaging of knowledge (Fuchs & Sandoval 2013) 3) Researchers have a responsibility to the public to conduct rigorous research because it will serve as the foundation for the advancement of discoveries, it provides the best value for money, and earns public trust (Nosek & Bar-Anan 2012) I added references to which principles were broken or upheld throughout the paper. 4. Continuing with providing explanations as to why certain things are considered ethical, I think the piece could really benefit from justification as to why keeping money inside academia would be considered more ethical. For-profit businesses can very much contribute ethically to the knowledge ecosystem and retain the profits, albeit it would require some changes in how the system is setup (e.g., knowledge should no longer be commodified). 
It is rather narrow to state that keeping money within academia is beneficial to academia more so than a combination of inside and outside, at least without thorough analysis as to why that would be the case. The premises seem to be implied now, which makes it rather unconvincing (despite that I somewhat agree with the outcome). >>> I added an explanation about why keeping money inside academia is ethical: To uphold ethical principle 2, researchers must be valued for their innovation and labor. Keeping publishing profits inside academia values researchers by making more money available to them, for example, by increasing grant funding and freeing up money for their universities to invest more in research, teaching, and new faculty positions. I agree that for-profit publishers can be ethical. I hope this is clear from my statement that: Ethical publishers are academic non-profit organizations, which ensure that profits are reinvested in academia, and for-profit corporations that charge no or low APCs and/or heavily invest profits in academia and/or are working to modernize the publishing infrastructure for researchers. 5. As such, it seems to me that the perspective proposed here is lacking in thoroughness of the reasoning proposed (despite that I am a proponent of OA). As such, I would encourage the author to make the implicit steps taken in the reasoning more explicit. Moreover, calling something ethical without providing a framework is, to me, rather difficult. Deeming something ethical is always subject to cultural context and the normative framework. >>> I agree and thank you very much for pointing this out! I hope that my responses to comments 2-4 sufficiently address this. 6. Finally, I would like to ask the author whether she thinks that philanthropic efforts to increase OA are ethical in themselves. 
For example, OA is promoted by the Bill and Melinda Gates Foundation (BMGF), but recent efforts that put pressure on publishers have created an OA privilege so it seems. Researchers funded by the BMGF now have the possibility to publish gold-OA in Science for example 1, but non-BMGF financed researchers do not. As such, considering the Merton's ethical framework for science, this decreases equality between researchers and could be considered unethical. If OA is deemed ethical, are the means to an end here deemed ethical as well? It seems that this is a crucial question that is being neglected throughout this opinion piece. >>> I would argue that creating inequalities in who can pay to publish is unethical because it inhibits the sharing of knowledge as a common good. I added: There is a further argument to be made that no money should be exchanged when publishing research products, neither via journal subscriptions nor APCs, because the public has already paid for the research. Any costs that are charged in addition to the initial funding creates inequalities in who can pay to publish or read (Fuchs & Sandoval 2013), and violates ethical principle 1. I don’t endorse the BMGF approach as a way to transition to 100% OA. We have already seen such exploitative transitory approaches fail. For example, hybrid OA was supposed to be a transition stage to gold OA (Björk 2012); however, publishers now exploit the hybrid OA business model by charging higher APCs than those of 100% OA journals (Pinfield et al. 2015, Kingsley 2016, Laakso & Björk 2016, Solomon & Björk 2016). Given this increased monetary gain, there is no incentive for publishers of hybrid journals to switch them to 100% OA. 
It seems that funders are one of the only groups that are able to make effective changes in the current publishing landscape, as evidenced by their requirement that all of the research products they fund are published gold OA (e.g., RCUK http://www.rcuk.ac.uk/research/openaccess/, Wellcome Trust https://wellcome.ac.uk/funding/managing-grant/open-access-policy, Open Access Statements from many institutions http://www.digital-scholarship.org/oab/2statements.htm). I think the only way publishers will change to 100% OA is if funders refuse to pay for hybrid OA APCs and only fund APCs at journals that are 100% OA. Adopting such a policy would address the inequality issue that BMGF created with their transitory policy by forcing researchers to publish in more ethical venues. Some researchers might object to being restricted from publishing in some journals that are subjectively considered prestigious; however, I think this is a necessary part of the transition because the prestige issue is massive and leads to bad science (Smaldino & McElreath 2016). I added: The hybrid business model was originally implemented as one step in the transition to a 100% OA publishing landscape. However, the goal was never achieved because publishers make more money off of the hybrid business model (Björk 2012) References Björk, B. C. (2012). The hybrid model for open access publication of scholarly articles: A failed experiment?. Journal of the American Society for Information Science and Technology, 63(8), 1496-1504. http://www.openaccesspublishing.org/hybrid/hybrid.pdf Kingsley. 2016. Unlocking Research. https://unlockingresearch.blog.lib.cam.ac.uk/?p=969 Laakso, M., & Björk, B. C. (2016). Hybrid open access—A longitudinal study. Journal of Informetrics, 10(4), 919-932. Pinfield, S., Salter, J., & Bath, P. A. (2015). 
The “total cost of publication” in a hybrid open‐access environment: Institutional approaches to funding journal article‐processing charges in combination with subscriptions. Journal of the Association for Information Science and Technology. Smaldino, P. E., & McElreath, R. (2016). The natural selection of bad science. Royal Society Open Science, 3(9), 160384. http://rsos.royalsocietypublishing.org/content/3/9/160384 Solomon D, Björk B. (2016) Article processing charges for open access publication—the situation for research intensive universities in the USA and Canada. PeerJ 4:e2264 https://doi.org/10.7717/peerj.2264"
}
]
}
] | 1
|
https://f1000research.com/articles/6-518
|
https://f1000research.com/articles/6-853/v1
|
08 Jun 17
|
{
"type": "Research Article",
"title": "The menstrual cycle affects recognition of emotional expressions: an event-related potential study",
"authors": [
"Madoka Yamazaki",
"Kyoko Tamura",
"Kyoko Tamura"
],
"abstract": "Background: Several studies have investigated the relationship between behavioral changes and the menstrual cycle in female subjects at a reproductive age. The present study investigated the relationship between the menstrual cycle and emotional face recognition by measuring the N170 component of ERPs. Methods: We measured the N170 of twelve women, in both the follicular phase and the late luteal phase, while they were presented with happy and angry human facial expressions as stimuli. Results: In the follicular phase, participants showed a significantly larger response to happy male facial expressions. In the late luteal phase, participants had longer reaction times to all emotional stimuli, and a significantly reduced response to happy faces, especially happy male facial expressions (P<0.001). Conclusions: Our findings suggest that the menstrual cycle modulates early visual cognitive processing, and highlight the importance of considering the menstrual cycle phase in studies that investigate emotion and cognition.",
"keywords": [
"menstrual cycle",
"late luteal phase",
"premenstrual syndrome",
"ERP",
"N170",
"emotion"
],
"content": "Introduction\n\nIn cognitive neuroscience, gender differences have been discussed since the late 1990s. This is partly due to the increasing number of reports showing gender differences in brain structure20,38,42,45,46 and metabolism1,36. For instance, the corpus callosum is larger in women than in men2, and so are the left cortical language-associated regions20,38,46. Several behavioral studies have also shown differences between genders, such that women showed higher performance on verbal and memory tasks13,31, whereas men performed better on spatial tasks12,17. Many studies, integrating brain imaging with cognitive tasks, have revealed that these differences in brain anatomy correlate with the behavioral differences between genders2,19,25.\n\nFemales at reproductive age experience dynamic changes in levels of sex hormones (e.g. estrogen and progesterone) every menstrual cycle. Premenstrual syndrome (PMS) symptoms include mood and behavioral changes such as irritability, depression, mood swings, fatigue and food cravings that develop during the luteal phase within a few days of menstruation. PMS occurs in up to 75% of females at a reproductive age24,41. Although the etiology of PMS is unclear, the hormonal shift from estrogen to progesterone may cause some of the symptoms of PMS, as this hormonal shift affects serotonin levels, which appear to be causative for both the physical and psychological symptoms; accordingly, serotonergic antidepressants are effective against them23,37.\n\nSeveral studies have investigated the relationship between behavioral changes and the menstrual cycle in female subjects at a reproductive age. Dreher et al.14 reported that women in the follicular phase of their cycles showed higher activation in the orbitofrontal cortex and amygdala than they did during the luteal phase in a gambling task. 
Slyepchenko et al.40 showed that subtle working memory and selective attention impairments occurred more frequently in women with moderate to severe PMS than in women with mild or no PMS symptoms. These studies were concordant with symptoms of PMS (difficulty concentrating, lowered performance, lowered judgement). In contrast, Eggert et al.16 reported a somewhat paradoxical effect, whereby women with PMS in the luteal phase of their menstrual cycle showed a greater emotional Stroop effect with respect to picture and facial stimuli, compared to a control group. It remains poorly understood how the changes in sex hormone levels across the menstrual cycle affect emotional cognition.\n\nEvent-related potential (ERP) studies have been used to investigate the attentional and emotional effects elicited by facial stimuli. The N170 is an ERP component showing a negative peak at around 140–200 ms post-stimulus in the posterior temporal region, and is thought to reflect the detection and global processing of facial images7,10,15,22. It is also thought to be sensitive to the emotion displayed in facial expressions4,8,36. Several authors have reported gender differences in the N170, with females showing greater responses to facial stimuli than males11,26,43. The N170 is also affected by neurological/psychiatric conditions9,44. However, no studies have addressed the relationship between the N170 and the menstrual cycle using emotional facial expressions.\n\nThe present study was conducted to investigate whether the menstrual cycle affects the N170 elicited by emotional facial expressions. We compared results from the follicular phase and the late luteal phase.\n\n\nMethods\n\nTwelve right-handed female students (mean age 21.6±2.0 years, mean±SD) participated in this study. 
All participants had normal or corrected-to-normal vision and regular menstrual cycles of between 25 and 33 days, with no history of neurological or psychiatric illness.\n\nEach participant was examined both during the follicular phase (9–12, mean 10.1, days after the first day of menses) and the late luteal phase/premenstrual phase (3–7, mean 4, days before the first day of menses). Half of the participants were examined first in their follicular phase to avoid a test–retest effect. All experimental sessions were conducted between 13:00 and 18:00 to control for the effects of circadian rhythm.\n\nThis study was approved by the Daito Bunka University research ethics committee (K14-008) and written informed consent was obtained from all the participants before the experiment.\n\n1. Salivary hormone measurements. Salivary estradiol and progesterone (4-pregnene-3,20-dione) were measured using Salimetrics, LLC (State College, PA) ELISA kits and read optically using an xMark microplate spectrophotometer (Bio-Rad, Tokyo, Japan). Approximately 10 minutes after their arrival, participants provided a 1 mL saliva sample using the “passive drool” collection method.\n\n2. Menstrual Distress Questionnaire (MDQ).33 A Japanese version of the MDQ translated by the authors was given to participants during both their follicular phase and late luteal phase/premenstrual phase, to evaluate their psychological and physiological status (see Supplementary File S1 for the original questionnaire and Supplementary File S2 for the translated questionnaire in Japanese). The MDQ consists of 47 items grouped into eight subcategories: pain, water retention, autonomic reaction, negative affect, impaired concentration, behavioral change, arousal, and control. 
Participants were required to rate their symptoms on each of the 47 items using a four-point scale (1–4), ranging from “no experience of symptoms” to “severe”.\n\nParticipants were seated in an armchair with a PC screen placed in front of them at a distance of 80 cm. Participants were asked to respond as fast as possible by pressing the left mouse button with their right index finger when a human facial expression (happy or angry) appeared.\n\n1. Stimuli. Stimuli consisted of pictures of 24 different adult faces (12 male and 12 female) obtained from the Karolinska Directed Emotional Faces28. The pictures were shown upright, adjusted to a width of 60 mm and height of 90 mm, and presented on a black background. Three types of facial expression (neutral, happy and angry) were displayed. Faces were displayed for 400 ms, followed by a white fixation cross on a black background lasting randomly between 1300 and 1600 ms. Stimulus delivery was controlled by Presentation software, version 18.0 (Neurobehavioral Systems, Albany, CA).\n\nPresentation of stimuli occurred in four blocks. In each block, 24 pictures with emotional expressions (12 happy and 12 angry) and 96 pictures with neutral expressions were selected at random, resulting in a total of 480 trials. Error rate and response time were recorded.\n\n2. ERP recording. EEGs were recorded with Ag-AgCl electrodes placed according to the 10–20 system, using a Neurofax EEG-1200 (Nihon Kohden, Tokyo, Japan). Electrode impedance was kept below 5 kΩ. The amplifier bandpass was 0.1–40 Hz and the signal was sampled at a digitization rate of 500 Hz.\n\n3. Data analysis. The continuously recorded data were divided into epochs of 900 ms in length, starting 100 ms before stimulus onset. EEGs for the happy and angry facial expressions were averaged separately using the EMSE software suite version 5.52 (Source Signal Imaging, San Diego, CA). 
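As a concrete illustration of the stimulus schedule described above, the following sketch (an assumed reconstruction, not the authors' actual Presentation script) generates four blocks of 12 happy + 12 angry target faces and 96 neutral fillers each, i.e. 480 trials in total:

```python
import random

def make_block(rng):
    # One block: 12 happy + 12 angry targets and 96 neutral fillers, shuffled.
    trials = ["happy"] * 12 + ["angry"] * 12 + ["neutral"] * 96
    rng.shuffle(trials)
    return trials

def make_session(n_blocks=4, seed=0):
    # Four blocks -> 4 * 120 = 480 trials, as in the experiment.
    rng = random.Random(seed)
    return [make_block(rng) for _ in range(n_blocks)]

session = make_session()
assert sum(len(block) for block in session) == 480
```

Error rate and response time would then be logged per trial; only the happy/angry trials require a button press.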
Trials with wrong responses, or with eye blinks, lateral eye movements, or muscle discharges exceeding 100 μV, were excluded. We analyzed the peak amplitude and latency of the ERP component of interest, the N170, at the posterior temporal electrodes T5 and T6, between 140 and 200 ms post-stimulus.\n\n4. Statistical analysis. Statistical tests involved performing paired t-tests using SPSS version 19.0 (SPSS Inc., Chicago, IL). A value of p<0.05 was taken to indicate statistical significance.\n\n\nResults\n\nThe salivary concentrations of 17β-estradiol and progesterone (4-pregnene-3,20-dione) are presented in Table 1. 17β-Estradiol was higher in the follicular phase, and progesterone was significantly higher (t(11)=7.11, p<0.05) in the late luteal phase.\n\nMean MDQ scores for participants in the follicular phase and late luteal phase are presented in Table 2. Participants in the late luteal phase of their menstrual cycle showed significantly higher scores for pain, concentration, behavioral changes, water retention and negative affect (t(11)=6.41, 4.81, 4.63, 4.66, 3.47, 6.11, all p<0.05) compared to when they were in the follicular phase.\n\nThe average error rate across all conditions (male/female, happy/angry) was below 1.5% in both the follicular phase and the late luteal phase. Figure 1 shows the mean (±SD) reaction times (RTs) of participants to the target stimuli (happy/angry facial expressions). Participants in the follicular phase of their menstrual cycle responded more quickly to all stimuli than when they were in their late luteal phase. Participants in their late luteal phase showed significantly longer RTs for both male (t(11)=2.99, p<0.05) and female (t(11)=2.84, p<0.05) happy faces.\n\n*: p<0.01.\n\nFigure 2A shows the ERP grand averages for the facial expressions of emotion (happy and angry) from all participants. The N170 was recorded at posterior-temporal and occipital electrodes. 
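The epoching, artifact-rejection, and N170 peak-measurement steps described in the Data analysis section can be sketched as follows (a minimal single-channel illustration with assumed array layouts, not the authors' EMSE pipeline):

```python
import numpy as np

FS = 500                  # sampling rate in Hz, as in the ERP recording
N_PRE, N_POST = 50, 400   # 100 ms pre-stimulus + 800 ms post = 900 ms epochs
WIN = (0.140, 0.200)      # N170 search window, seconds post-stimulus

def epoch(signal, onsets):
    # Cut 900 ms epochs starting 100 ms before each stimulus onset (sample index).
    return np.stack([signal[o - N_PRE : o + N_POST] for o in onsets])

def reject_artifacts(epochs, threshold_uv=100.0):
    # Exclude epochs whose peak-to-peak amplitude exceeds 100 uV
    # (blinks, lateral eye movements, muscle discharges).
    ptp = epochs.max(axis=1) - epochs.min(axis=1)
    return epochs[ptp <= threshold_uv]

def n170_peak(avg):
    # Most negative point of the averaged waveform in the 140-200 ms window.
    i0 = N_PRE + int(WIN[0] * FS)
    i1 = N_PRE + int(WIN[1] * FS)
    k = int(np.argmin(avg[i0:i1]))
    amplitude_uv = float(avg[i0 + k])
    latency_ms = (WIN[0] + k / FS) * 1000.0
    return amplitude_uv, latency_ms
```

Per-participant peak values obtained this way for the two cycle phases could then be compared with paired t-tests (e.g. scipy.stats.ttest_rel), mirroring the SPSS analysis.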
Figure 2B shows the ERP grand averages separately for each stimulus at the T6 electrode. Participants in the follicular phase showed a higher N170 amplitude (-8.49 μV) than in the late luteal phase (-6.13 μV) for happy female facial expressions (t(11)=4.31, p<0.01) (Figure 3A). A similar effect was seen for happy male facial expressions (-10.9 μV in the follicular phase, -6.39 μV in the late luteal phase) (t(11)=7.09, p<0.001) (Figure 3A). The amplitude for both female and male angry facial expressions did not differ between phases of the menstrual cycle (Figure 3A, Figure 4). Participants in the follicular phase showed a shorter peak latency of the N170 component, irrespective of the type of stimulus, when compared to the late luteal phase. Significant differences in N170 latency between the follicular phase and late luteal phase were observed for both happy male and happy female facial expressions as stimuli (t(11)=4.49 and 6.04, respectively, p<0.001) (Figure 3B).\n\n(A) ERP grand averages for the emotional facial stimuli. The N170 component localized at the posterior-temporal and occipital electrodes is indicated with an arrow. (B) N170 component grand averages for each of the emotional facial stimuli at the T6 electrode.\n\nN170 component peak latency (A) and peak amplitude (B). **: p<0.01, ***: p<0.001.\n\nUpper panel: follicular phase; lower panel: late luteal phase.\n\n\nDiscussion\n\nThe present study investigated the effect of the menstrual cycle on emotional facial recognition by measuring ERPs. The most important findings in this study were the effects of the menstrual cycle on behavioral data (RTs) and the N170 component of ERPs.\n\nWe expected participants in the late luteal phase to respond to the human facial expressions more slowly than participants in the follicular phase, as indicated by previous studies3,27,30. Significantly longer RTs to happy faces were observed in the late luteal phase than in the follicular phase. 
Lord and Taylor27 reported that women scored lower in concentration tasks in the late luteal phase, and Maki et al.30 reported the same pattern in the performance of motor skill tasks. Our results are consistent with these studies, in which lower performance during the late luteal phase was attributed to changes in estrogen level.\n\nThe N170 recorded between 140 and 200 ms in the lateral temporal region showed a faster response in the follicular phase than in the late luteal phase (pre-menstrual phase), irrespective of facial expression. Additionally, the N170 response to happy facial expressions was larger in amplitude in the follicular phase than in the late luteal phase, and also larger in amplitude for male facial stimuli than for female facial stimuli, thus showing for the first time an effect of the menstrual cycle on early components of visual evoked potentials.\n\nSeveral studies have reported that N170 amplitude can be modified by facial expressions of emotion, especially fearful expressions6,39. In the present study, the amplitude of the N170 component was significantly larger in response to happy male facial expressions than in response to angry facial expressions in the follicular phase, and slightly larger in response to angry male facial expressions in the late luteal phase. There are several potential explanations for these findings; it has been suggested that stimulus “intensity” may be an important variable in determining N170 amplitude50. Thus, the differences in N170 amplitude elicited by the emotional faces in the present study and other studies may reflect the fact that emotional faces are more “intense” or “provocative” to the brain than other faces.\n\nThe present study also found a larger N170 amplitude, especially in response to happy male facial expressions, in the follicular phase. This finding may be interpreted as an effect of an opposite-/same-sex bias in face processing. 
Several studies have shown that individuals respond more quickly and strongly to attractive faces of the opposite sex than of the same sex9,21,34. This may explain why the participants in the follicular phase of their menstrual cycle showed the largest response of all stimuli to the happy male face.\n\nParticipants in the late luteal phase showed a decreased N170 amplitude and a significantly reduced response to happy facial expressions, compared to the same participants in the follicular phase. As expected, participants in the late luteal phase reported significantly increased MDQ scores (Table 2), while the same participants in the follicular phase showed lower scores or absent symptoms. Several researchers have investigated menstrual effects on cognitive function with ERPs3,29,47,49. They reported that women in the follicular phase showed a decreased response (longer latency and smaller amplitude) in the cortical processing of visual stimuli compared to the late luteal phase. The N170 component is also negatively modulated by psychiatric conditions; for instance, high anxiety or a depressive state influences its properties5,48.\n\nThe decreased response, especially to happy facial expressions, may be due to a lack of positivity bias, but may also be due to a reduced perception of positive stimuli, caused by changes in ovarian hormone levels in the late luteal phase that result in attention deficits.\n\nIn summary, this is the first study to provide electrophysiological evidence showing the effects of the menstrual cycle on emotional facial recognition, with the N170 component reflecting early visual processing. Participants in the follicular phase showed a greater response to happy male facial expressions, and participants in the late luteal phase (pre-menstrual phase) showed a suppressed response to human facial expressions. 
These findings highlight the importance of considering the menstrual cycle phase in studies that investigate emotion and cognition.\n\n\nData availability\n\nDataset 1: Raw data for ERP grand averages, for the target stimuli (angry/happy facial expressions), recorded from all participants. The ERP grand average waveforms were re-referenced offline to the average of the left and right mastoids, filtered at 1.0–15 Hz and calculated separately for non-target (neutral face) and target (angry/happy face) stimuli and electrode site, with reference to a 200 ms baseline preceding stimulus onset.\n\n10.5256/f1000research.11563.d16369951\n\nDataset 2: Raw data for the N170 component grand averages for each of the emotional facial stimuli at the T6 electrode. The ERP waveforms were averaged separately for each target stimulus (female/male, angry/happy) in each menstrual phase. T6 electrode activity was extracted, as the N170 was recorded at this site.\n\n10.5256/f1000research.11563.d16370052\n\nDataset 3: Raw data for the averaged ERP waveforms for each target stimulus (female/male, angry/happy, in the follicular/late luteal phase), with each of the 19 electrodes exported to a separate sheet. The data were used to create 2-D voltage topographic maps, by calculating the voltage distribution for the N170 component for each of the emotional facial stimuli at each peak latency, with the EMSE software suite (Source Signal Imaging, San Diego, CA). Spherical spline interpolation was applied.\n\n10.5256/f1000research.11563.d16370153",
"appendix": "Author contributions\n\n\n\nMY conceived the study. MY designed the experiments. MY and KT carried out the research. All authors prepared the first draft of the manuscript and were involved in the revision of the draft manuscript and have agreed to the final content.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis study was supported by Daito Bunka University Research Foundation grant to MY.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript\n\n\nSupplementary material\n\nSupplementary File S1: Original Menstrual Distress Questionnaire (MDQ).\n\nClick here to access the data.\n\nSupplementary File S2: Menstrual Distress Questionnaire (MDQ), translated by the authors into Japanese.\n\nClick here to access the data.\n\n\nReferences\n\nAanerud J, Borghammer P, Rodell A, et al.: Sex differences of human cortical blood flow and energy metabolism. J Cereb Blood Flow Metab. 2016; 1: 271678X16668536. PubMed Abstract | Publisher Full Text\n\nAndreasen NC, Flaum M, Swayze V 2nd, et al.: Intelligence and brain structure in normal individuals. Am J Psychiatry. 1993; 150(1): 130–134. PubMed Abstract | Publisher Full Text\n\nAvitabile T, Longo A, Caruso S, et al.: Changes in visual evoked potentials during the menstrual cycle in young women. Curr Eye Res. 2007; 32(11): 999–1003. PubMed Abstract | Publisher Full Text\n\nBabiloni C, Vecchio F, Buffo P, et al.: Cortical responses to consciousness of schematic emotional facial expressions: a high-resolution EEG study. Hum Brain Mapp. 2010; 31(10): 1556–1569. PubMed Abstract | Publisher Full Text\n\nBar-Haim Y, Lamy D, Glickman S: Attentional bias in anxiety: a behavioral and ERP study. Brain Cogn. 2005; 59(1): 11–22. PubMed Abstract | Publisher Full Text\n\nBatty M, Taylor MJ: Early processing of the six basic facial emotional expressions. Brain Res Cogn Brain Res. 2003; 17(3): 613–20. 
"
}
|
[
{
"id": "24701",
"date": "02 Aug 2017",
"name": "Shunsuke Takagi",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis manuscript reports response and ERP (N170) difference to human facial expression in the different phase of menstrual cycle of women. The study is well designed and the result is clear and meaningful. However, the authors should address some issues.\n#1 The cause of the result should be clarified or discussed. The result (slowed reaction time and enlarged N170 in late luteal phase for emotional stimuli) is impressive and apparently affected by menstrual cycle. However, what factors of the menstrual cycle did affect this data? As the author pointed out, menstrual cycle has many aspects. It is caused by several types of hormones and their cyclic increase and decrease. Such hormones cause emotional changes, fluid balance change etc. during menstrual cycle. Is the result caused by hormonal changes directory or by emotional changes driven by hormones? This matter should be clarified or discussed better.\n#2 Method to obtain N170 should be clarified. The method to obtain N170 (recording ERP during presentation of emotional face) is not noted in the method clearly.\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? 
Partly\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
},
{
"id": "23420",
"date": "15 Aug 2017",
"name": "Don M. Tucker",
"expertise": [
"Reviewer Expertise neuropsychology",
"EEG and ERP research"
],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis is a well-organized and well-written study of the important topic of the neural and psuychological effects of menstrual hormonal variation. The observation of both behavioral (reaction time) and event-related potential measures suggest that the neural changes may be important for everyday behavior and emotional responses.\n\nWhat was the recording reference? This is important for interpreting the ERP waveforms. It appears the plots are baseline corrected; is the large ramp over frontal polar channels a result of this? It looks like there was a large negativity before the stimulus, and that the positive ramping is actually a recovery following a large stimulus-preceeding-negativity that returns to baseline during the perceptual process.\nAddressing these minor points will improve this already solid manuscript.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-853
|
https://f1000research.com/articles/6-851/v1
|
08 Jun 17
|
{
"type": "Opinion Article",
"title": "The ABCs of finding a good antibody: How to find a good antibody, validate it, and publish meaningful data",
"authors": [
"Poulomi Acharya",
"Anna Quinlan",
"Veronique Neumeister",
"Anna Quinlan",
"Veronique Neumeister"
],
"abstract": "Finding an antibody that works for a specific application can be a difficult task. Hundreds of vendors offer millions of antibodies, but the quality of these products and available validation information varies greatly. In addition, several studies have called into question the reliability of published data as the primary metric for assessing antibody quality. We briefly discuss the antibody quality problem and provide best practice guidelines for selecting and validating an antibody, as well as for publishing data generated using antibodies.",
"keywords": [
"antibody validation",
"western blotting antibodies",
"immunohistochemistry",
"primary antibodies",
"antibody quality",
"reproducibility crisis"
],
"content": "Introduction\n\nAntibodies are widely used for applications that range from flow cytometry and immunohistochemistry to western blotting and ELISA. Even though antibodies are central to basic research as well as drug development and diagnostics, quality concerns remain high, and finding an antibody that works well for a specific application is a formidable challenge.\n\nOne source of the antibody quality problem is that it is not easy to generate a high-performing antibody. Production of monoclonal and polyclonal antibodies relies on an animal’s immune response, which is unpredictable and can vary from animal to animal even when they have the same genetic background. Some proteins do not elicit a strong immune response, others are too immunogenic, and yet others share too much homology with non-target proteins to yield a highly specific antibody. As part of the Human Protein Atlas project, Berglund et al quantified their antibody production success rate; 49% of their 9,000 internally generated antibodies failed validation (Berglund et al., 2008).\n\nThe Berglund study also highlights a second source of the antibody problem: commercially available antibodies have similar failure rates. This confirmed what many already knew to be true; just because an antibody is commercially available does not ensure its quality. However, the study also points to the solution. Failure rates among the 51 represented vendors ranged from 0 to 100%, suggesting that proper validation and quality control can allow vendors to provide high quality reagents.\n\nThe high failure rate of commercially available antibodies identified by Berglund and authors of similar studies is concerning because time and money are wasted. 
An estimated US$800 million are wasted annually on poorly performing antibodies and US$350 million are lost in biomedical research because published results cannot be replicated, with bad antibodies the likely culprit in many cases (Bradbury & Plückthun, 2015).\n\nFor example, several years of research from multiple laboratories suggested that erythropoietin activates the erythropoietin receptor (EpoR) in tumor cells; a follow-up study, however, showed that only one of the four EpoR antibodies used in these studies detected EpoR and none of the four antibodies were suitable for immunohistochemistry (Elliott et al., 2006). Similarly, Prassas and Diamandis spent two years and $500,000 investigating CUZD1, a potential biomarker for pancreatic cancer, using an ELISA assay that turned out to recognize CA125 instead (Prassas & Diamandis, 2014). Cases such as these have led some to suggest that irreproducibility should carry with it greater consequences, such as a requirement for academic institutions to return some or all of the grant money used to fund studies that prove irreproducible (Rosenblatt, 2016).\n\nThe EpoR and CUZD1 examples not only demonstrate the devastating effect poorly performing antibodies can have on a research program, they also emphasize a third component of the antibody problem: the lack of enforced standards for antibody validation. Vendors frequently show cropped blots, share validation data that lack appropriate controls, or use large amounts of purified protein or cell lines overexpressing proteins of interest instead of physiologically relevant samples as positive controls. Such practices make it impossible to accurately assess an antibody’s performance. 
Similarly, journals have historically not provided guidelines for publishing validation data for antibody-based assays.\n\nHere we compile and share recommendations for (I) how to scrutinize available antibodies pre-purchase, (II) how to validate antibodies post-purchase, and (III) what information to include in publications to ensure that antibody quality and results can be evaluated by the reader.\n\nThe first step to finding an antibody can be the most daunting: identifying antibodies that could work for your application. Product information and validation data can be difficult to decipher, and with hundreds of vendors to choose from, it becomes difficult to know when a search has been exhaustive. The following guidelines are meant to simplify the process of identifying high-quality antibodies.\n\n1. Use search engines to find and compare available antibodies.\n\nSearch engines, such as those available through Biocompare, SelectScience, UniProt, or NCBI, allow you to find and in some instances even compare antibodies from many different vendors. This saves valuable time that is otherwise spent visiting each vendor’s website, and allows you to extend your search to vendors you may not be familiar with.\n\n2. Match the antibody type to your application.\n\nAntibodies fall into three classes: polyclonal, monoclonal, and recombinant, each with distinct advantages and disadvantages.\n\nA polyclonal antibody is a mixture of antibodies that all recognize different epitopes of the protein of interest. This makes these antibodies well-suited for proteins that may have posttranslational modifications or heterogeneity in structure or sequence, proteins present at low concentrations, or applications that require fast binding to a protein of interest. 
Because polyclonal antibodies are generated in animals, they show relatively high batch-to-batch variability and are thus a poor choice for long-running studies that require repurchasing of the antibody, or for applications that have low tolerance for variability. If your experiments have low tolerance for variability, but only polyclonal antibodies are available, ask the vendor to provide antibodies from only a single lot.\n\nMonoclonal antibodies are generated by a single B-cell line and thus recognize only a single epitope of a protein of interest. This makes these antibodies highly specific and results generated with them more reproducible. Their high specificity makes monoclonals an ideal choice for immunohistochemistry applications, and the ability to generate immortal B-cell hybridomas ensures greater batch-to-batch homogeneity. Because these antibodies recognize a single epitope they can be more challenging to work with when looking at low-abundance proteins or proteins that show variability, such as those with posttranslational modifications, in the epitope recognized by the antibody.\n\nOne caveat of monoclonal antibodies is that immortal B-cell hybridomas are not as eternal as their name implies; cell lines can die, not recover from frozen stocks, or even lose their antibody gene. Thus, for applications that have no tolerance for variability, recombinant antibodies are recommended. These custom synthetic antibodies provide an unlimited supply of identical antibodies, removing any batch-to-batch variability. Recombinant antibodies can be engineered to bind an epitope of choice with much higher affinity than that obtained in vivo. 
Because large libraries can be screened in a high-throughput manner, antibodies can be generated that distinguish similar compounds and bind their ligands only under desired conditions, such as a specific pH.\n\nThe high reproducibility and entirely animal-free production process have led the pharmaceutical industry to adopt recombinant antibodies as their preferred tool. Many academics, on the other hand, understandably consider recombinant antibodies a last resort due to their higher cost. However, particularly for long-term studies, recombinant antibodies should be seriously considered due to their batch-to-batch consistency and their guaranteed continuity of availability without any dependence on animal immunization.\n\n3. Buy from companies that will work with you.\n\nChoose a vendor who is willing to help you troubleshoot if an antibody does not perform as expected. If a vendor is unable or refuses to do so, it may be a sign that they did not validate the antibody or that they are selling antibodies purchased from another vendor without additional quality control or the expertise to advise customers. Avoid vendors who provide only generic troubleshooting advice as this will be of little use if problems are encountered, and it suggests a lack of technical expertise. Regardless of the reason for not helping customers troubleshoot, a vendor’s inability to do so will leave you without technical support should it be needed.\n\nAlso be careful of what may at first glance seem like generous exchange or return programs: letting customers test multiple antibodies for their target of interest can indicate poor quality and turns you, the customer, into an antibody testing tool.\n\n4. Look for antibodies with complete validation data.\n\nBe wary of incomplete validation data; this is often a sign that an antibody is of poor quality and/or that the vendor will be able to do little to help you troubleshoot if the antibody does not perform as expected. 
Look for vendors who show the entire western blot image, provide detailed validation protocols, and validate their antibodies using multiple biologically relevant sample types or tissues. Not only do multiple sample types speak to the ability of an antibody to detect varying levels of expression, they often also reveal sample types that can be used as negative controls for your experiments.\n\nIt is also important to carefully scrutinize a vendor’s validation data. If the positive control is merely purified protein, keep in mind that the specificity of the antibody remains unknown, since you are not looking at a complex biological sample. Make sure the vendor specifies how much protein was loaded and compare this amount to that expected in your sample. If your protein of interest is present in much lower amounts, you may still be able to use the antibody for some applications by enriching for your protein of interest through fractionation or IP.\n\n5. Select antibodies that have been validated for your application.\n\nWhenever possible choose an antibody that is recommended by the vendor for your species and application. If such an antibody does not exist, contact the antibody vendor; in some cases the antibody may have failed validation for your application, while in other cases the vendor may not have tested it. You can also look to validation data in published studies to evaluate antibody performance. If no validation data are available for your application, choose a trusted vendor rather than one who simply states that antibodies have been validated for all applications. If you must use an antibody for non-recommended applications, be prepared to rigorously validate the antibody and to optimize vendor-suggested protocols for your specific experimental conditions.\n\n6. Check to ensure additives are compatible with your application.\n\nVendors often include additives that stabilize and extend the shelf life of antibodies. 
For most applications this is unproblematic, but there are some notable exceptions. For example, sodium azide can interfere with HRP-conjugated antibodies, antibody conjugation, and staining of live samples. Similarly BSA should not be added to antibodies that you will conjugate because it competes with the antibody for your label and can reduce conjugation efficiency. Another common additive, glycerol, lowers the freezing point to below -20°C, preventing freeze-thaw damage at -20°C because the antibody does not freeze. This cryoprotection does not extend to -80°C; at this temperature even antibodies stored in glycerol will freeze and will thus be subject to freeze-thaw damage (Johnson, 2012). If you are adding glycerol to an antibody yourself, be sure to use sterile glycerol as it is easily contaminated with bacteria.\n\nWhen an antibody is not available without the interfering additive you may have to take steps to remove the additive or work with the vendor to see whether they can supply the antibody without the interfering additive. Additives can be removed through dialysis or by using commercially available kits. Keep in mind that these steps can reduce the antibody’s concentration and impact its performance.\n\n7. Review publications, but carefully scrutinize antibody data and references.\n\nJournals like Nature and JBC are now starting to enforce guidelines for publishing antibody data, but this was not true in the past. When reviewing the literature, trust an antibody cited in a publication only if appropriate positive and negative controls are included. A new antibody should have validation data as well. When references are provided in place of validation data confirm that the authors of the original study performed and published the required validation experiments. If validation data are not presented in the original study, contact the authors to request this information. 
If authors cannot provide validation data, use the antibody only with the highest degree of caution and be sure to thoroughly validate the antibody before using it for your experiments.\n\nFocus your literature search on studies similar to yours. An antibody that performs well for flow cytometry may not be a good choice for immunoprecipitation, and host specificity can vary greatly. As you review the literature, be wary of antibodies that show discrepancies, such as an antibody detecting proteins of different molecular weights or showing different protein expression patterns in the same tissue types in different studies. If an antibody detects a protein with an unexpected molecular weight, look for controls that validate that the protein detected is actually the target protein.\n\nIf authors show cropped western blots, contact them to request the full blot before you purchase the antibody. And if you struggle with a published antibody, don’t hesitate to contact the authors as they can often provide valuable troubleshooting information.\n\nOnce you have selected two to five promising candidates, the time-consuming process of validating these antibodies for your application begins. The temptation to skip this process, especially when an antibody vendor has not provided extensive validation data, should be resisted. However, if a vendor has provided extensive validation data, including data for your sample or a closely related sample type and application, there may be no need to test multiple antibodies.\n\nNevertheless, always test antibodies yourself on your sample, regardless of the antibodies’ source and validation state. Validation data provided by vendors do not always reflect the current antibody lot, antibodies may perform differently in your hands, and although it doesn’t occur frequently, mistakes do happen during antibody production and processing. 
For example, a research laboratory at an academic center recently encountered unexpected specificity issues within the same lot of an antibody that had been validated and used successfully over an extended period of time. The source of this problem was a packaging error.\n\n1. Optimize protocols for your specific applications.\n\nAlways optimize protocols and antibody dilutions and report final concentrations used. It is important to know the concentration of an antibody as dilutions are meaningful only when the stock concentration is known. Contact the vendor, as many will provide this information when queried. If the vendor has tested the antibody using physiologically relevant samples and provides detailed validation protocols, use their experimental conditions as a starting point. This can considerably reduce the time and effort spent finding optimal conditions.\n\nIf antibody-based protein evaluation is performed in a quantitative manner, signal-to-noise ratio and dynamic range are two of the most critical objective parameters for defining the best antibody concentration for a given assay. Using too much antibody can yield nonspecific results, and too little can lead to no data or false-negative results. For each application, outline the critical steps and include proper controls to ensure that artifacts are absent or minimal. Optimizing assay conditions by conventional DAB/IHC should also be performed using a range of antibody concentrations.\n\nPay attention to protein-specific antigen retrieval methods, as it is best to follow the vendor’s recommendations when optimizing antibody concentration. If the assay does not perform as expected, different retrieval methods may yield better results. Note that as you alter retrieval methods, the optimal antibody concentration might need to be adjusted as well.\n\n2. 
Test each antibody for specificity, sensitivity, and reproducibility.\n\nWhen assessing specificity, sensitivity, and reproducibility it is key to keep your intended application in mind. Will you be looking at native proteins or denatured proteins, a complex biological sample or a purified protein? These considerations will allow you to set meaningful performance criteria that an antibody must meet. Whenever possible, set quantitative quality control criteria rather than using qualitative measures that are often less reproducible and stringent (Ramos et al., 2016).\n\nThe specificity of an antibody can be assessed by comparing its performance in cell lines with and without the target protein; signal in knock-out cell lines can be attributed to nonspecific binding (Bordeaux et al., 2010). When knock-out cell lines are not readily available, RNAi can be used to knock down the protein of interest. If the protein shows tissue-specific expression patterns, another easy way to assess specificity is by using samples known to express and not express the protein of interest.\n\nSensitivity can be assessed by using protein-specific index arrays that contain samples and/or cell lines with varying but known amounts of target protein (Carvajal-Hausdorf et al., 2015; Welsh et al., 2011). A simpler method for assessing the sensitivity of an antibody is to spike a sample that does not express the protein of interest with known amounts of purified protein.\n\nTo assess reproducibility, run your validated antibody on 20–40 tissue samples, either as whole tissue sections or represented on a tissue microarray (TMA) for IHC. For western blotting, it is key to run replicates of lysates generated from the same batch of cells. Irrespective of the application, run your experiment in triplicate, using the same lot of antibody on different days and by different operators. In addition, use antibodies from different lots to compare lot-to-lot reproducibility. 
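Quantitative acceptance criteria like those above are straightforward to script. The following is a minimal illustrative sketch in Python (not from the article: all intensity values are hypothetical, and the CV acceptance range mentioned in the comment is a common lab convention, not a figure from this text) showing how signal-to-noise ratio, dynamic range, and replicate reproducibility (%CV) could be computed:

```python
from statistics import mean, stdev

def signal_to_noise(signal, background):
    """Ratio of specific signal to background staining; higher is better."""
    return signal / background

def dynamic_range(intensities):
    """Fold difference between strongest and weakest signal in a titration."""
    return max(intensities) / min(intensities)

def percent_cv(replicates):
    """Coefficient of variation across replicate runs, in percent.
    Labs often accept <15-20% (a common convention, assumed here)."""
    return 100.0 * stdev(replicates) / mean(replicates)

# Hypothetical band intensities (arbitrary units)
titration = [120, 480, 1900, 7600]   # serial antibody dilutions on one sample
replicates = [7600, 7200, 7450]      # same lot, three days/operators

print(signal_to_noise(7600, 150))    # specific band vs. empty-lane background
print(dynamic_range(titration))
print(round(percent_cv(replicates), 1))
```

Putting numeric thresholds on values like these turns "the antibody looks fine" into a pass/fail criterion that different operators can apply identically, which is exactly the kind of quantitative quality control criterion the text recommends.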
If you have previously used the antibody or trust published data generated using the antibody, compare your results to those data.\n\nComparing antibodies from different vendors targeting the same protein adds further value to validation and reproducibility assessments. It is, however, important to consider that antibodies raised against different epitopes of the same protein can yield significantly different results, depending on how accessible a given epitope is in a sample of interest.\n\nPerform your validation experiments using the same buffers, sample types, and experimental conditions that will be used for your final experiments. An antibody validated in one buffer system will not necessarily perform similarly in another.\n\nKeep in mind that purified protein is sufficient to benchmark the target protein’s molecular weight in your sample, but it does not allow you to draw conclusions about specificity because purified protein is not a complex biological sample. Purified protein also does not allow you to determine the sensitivity or dynamic range of an antibody unless a dilution curve is set up to establish these parameters. Also keep in mind that purified proteins are often tagged, which changes their molecular weight. To facilitate antibody validation, whenever possible choose a vendor that provides a physiologically relevant positive control sample rather than a purified protein.\n\n3. Run controls with every experiment.\n\nEvery experiment should include a positive and negative control to assess antibody performance, ideally a set of samples with variable expression levels of the protein of interest. Protein-specific TMAs consisting of tissue samples and/or a set of cell lines can also be run alongside the experiments for quality control and reproducibility purposes. Arrays of cell lines with a range of expression levels and target-specific test TMAs can be purchased from a number of vendors. 
When a protein of interest is not expressed in immortalized cell lines or is expressed only transiently during a specific developmental stage, tissue samples may have to be used to validate an antibody’s performance.\n\nKnock-out or knock-down cells or samples known not to express the protein of interest are also frequently used as negative controls, especially since techniques like CRISPR and siRNA have simplified generation of such cell lines. Samples overexpressing the protein of interest, or even purified recombinant proteins, are commonly used as positive controls. However, results from such experiments are not always physiologically relevant, as knockdowns or knockouts can cause compensatory changes in cellular physiology. One way to avoid these pitfalls is to test samples with varying, known endogenous expression levels of the target protein. When researchers are working with freshly isolated primary cells or tissue samples, this becomes particularly important since overexpression or knock-down validation is not always feasible.\n\nDepending on your application, additional controls should be included. For example, every quantitative western blot should include a housekeeping protein loading control unless you are performing total protein normalization (TPN), and every ELISA should include a standard curve. In both cases, make sure that your signal is within the assays’ dynamic range. When using TPN, be aware that this method detects proteins by interacting with tryptophans (Trp). If the total amount of Trp in your sample is altered by your experimental treatment, TPN will no longer serve as a reliable control.\n\n4. Retest antibodies before using them with an important sample.\n\nAntibodies have limited shelf lives and are often shared resources in a laboratory. It is therefore wise to retest your antibody before performing a critical experiment. 
This does not need to be full validation; in these cases a quick experiment with relevant controls under previously established conditions is sufficient to ensure that an antibody is still performing as expected.\n\n5. Store antibodies as recommended by the vendor.\n\nCarefully review vendor recommendations and store antibodies accordingly. Write the date of first use on the vial to track antibody usage, and do not store working dilutions in buffer for later use because this can affect stability; as you dilute your antibody you are also diluting stabilizers added by the vendor. If an antibody has been stored for a long time or has expired, use it with caution: validation experiments should be repeated, and working concentrations may need to be adjusted as antibody stability decreases over time. If you have altered vendor storage conditions by, for example, removing additives or stabilizers, the antibody shelf life can decrease significantly. It is thus advisable to carefully mark any alterations in storage or formulation so that both current and future users are aware of these changes.\n\n6. Train all new lab personnel.\n\nTake the time to familiarize new lab members with proper antibody etiquette. Ensure that they understand the importance of antibody validation, proper controls, and agreed-upon best practices.\n\nMost journals do not specify reporting criteria for the publication of antibody-generated data. This is highly problematic because many scientists turn to previously published data to inform not only their antibody choice but also the direction of their research. As the reproducibility debate gains momentum, more journals are defining stricter reporting criteria (Fosang & Colbran, 2015; http://www.nature.com/authors/policies/image.html). 
Until these criteria are universally enforced, it falls to the scientific community to implement minimum guidelines both in their own publications and when participating in the peer review process.\n\n1. Provide complete antibody information.\n\nThe full antibody name, vendor, lot number, concentration, dilution, and incubation time should be provided. If a new in-house antibody is used, include information about how the antibody was generated.\n\n2. Always include positive and negative controls in published data.\n\nAll antibody-generated data should include positive and negative controls, as well as all additional controls required for your particular application (loading controls for western blots, standard curves for ELISAs, etc.). Not including these controls makes published data uninterpretable.\n\n3. Include validation data for all new antibodies.\n\nWhen using non-established antibodies or established antibodies for a new application, validation data that determine antibody specificity, sensitivity, and reproducibility should be presented. This information can be included in supplementary data, but should not be missing from the published study. Without this crucial information, conclusions drawn from presented experiments are difficult to evaluate.\n\n4. Present complete data and describe all quantitative methods.\n\nDo not crop western blots or splice lanes from different blots into a single image. If lanes need to be cropped out of a blot, crop lines should be clearly indicated. All quantitation using antibodies should be described carefully in the methods or supplementary materials, including how signal intensity was measured, how the linearity of the assay was determined, and how the signal was normalized for quantitation.\n\n\nConclusions\n\nThe antibody quality problem is well documented in the literature and can no longer be ignored. 
With growing discussion and awareness, vendors and scientists alike must be held to higher validation and reporting standards. We have summarized the above minimum best practice guidelines in Table 1 in the hope that they will simplify the antibody search, serve as a starting point for further conversation, and improve the quality of antibody data published until strict antibody reporting standards are agreed upon and universally enforced.",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nBerglund L, Björling E, Oksvold P, et al.: A genecentric Human Protein Atlas for expression profiles based on antibodies. Mol Cell Proteomics. 2008; 7(10): 2019–27. PubMed Abstract | Publisher Full Text\n\nBordeaux J, Welsh A, Agarwal S, et al.: Antibody validation. Biotechniques. 2010; 48(3): 197–209. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBradbury A, Plückthun A: Reproducibility: Standardize antibodies used in research. Nature. 2015; 518(7537): 27–9. PubMed Abstract | Publisher Full Text\n\nCarvajal-Hausdorf DE, Schalper KA, Pusztai L, et al.: Measurement of Domain-Specific HER2 (ERBB2) Expression May Classify Benefit From Trastuzumab in Breast Cancer. J Natl Cancer Inst. 2015; 107(8): pii: djv136. PubMed Abstract | Publisher Full Text | Free Full Text\n\nElliott S, Busse L, Bass MB, et al.: Anti-Epo receptor antibodies do not predict Epo receptor expression. Blood. 2006; 107(5): 1892–5. PubMed Abstract | Publisher Full Text\n\nFosang AJ, Colbran RJ: Transparency Is the Key to Quality. J Biol Chem. 2015; 290(50): 29692–4. PubMed Abstract | Publisher Full Text | Free Full Text\n\nJohnson M: Antibody Shelf Life/How to Store Antibodies. Mater Methods. 2012; 2: 120. Publisher Full Text\n\nPrassas I, Diamandis EP: Translational researchers beware! Unreliable commercial immunoassays (ELISAs) can jeopardize your research. Clin Chem Lab Med. 2014; 52(6): 765–6. PubMed Abstract | Publisher Full Text\n\nRamos P, Leahy A, Pino I, et al.: Antibody Cross-Reactivity Testing Using the HuProt™ Human Proteome Microarray. 2014. Reference Source\n\nRosenblatt M: An incentive-based approach for improving data reproducibility. Sci Transl Med. 2016; 8(336): 336ed5. 
PubMed Abstract | Publisher Full Text\n\nWelsh AW, Moeder CB, Kumar S, et al.: Standardization of estrogen receptor measurement in breast cancer suggests false-negative results are a function of threshold intensity rather than percentage of positive cells. J Clin Oncol. 2011; 29(22): 2978–84. PubMed Abstract | Publisher Full Text | Free Full Text"
}
|
[
{
"id": "23764",
"date": "26 Jun 2017",
"name": "Steven Elliott",
"expertise": [
"Reviewer Expertise Molecular biology",
"Immunology",
"Immunological methods"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis paper is a welcome addition to the community because it identifies a significant problem (antibody nonspecificity) and provides guidelines for improvement. While there are helpful recommendations in this submission, including how to do experiments and recommendations on finding antibodies, it does not describe a complete and proper plan. The authors indicate that antibodies should be validated but do not provide enough detailed information on how this should be done nor the criteria expected of a properly validated antibody. There is also no detailed guidance on how to validate all procedures, reagents and thereby select proper controls. In this regard checklists would be helpful. For example, staining intensity must match expression level, a band on a gel must be the correct size, negative controls must be included, sensitivity of the antibody should be determined and sufficient to detect the target in the sample of interest etc. Red-flags that would invalidate an antibody should also be described. Preferably a second antibody should be used for cross-validation and it must give the same pattern of staining.\n\nSpecific comments:\nPage 2 paragraph 5(left). All of the antibodies in Elliott et al., 20061 detected EpoR, however the sensitivity of all were low with poor specificity. One of the 4 antibodies (M-20- Santa Cruz Inc) was initially thought to be useful for westerns because it passed some tests. 
However, M-20 was later invalidated because it detected a correctly sized protein thought to be EpoR that turned out to be a non-EpoR protein (see Elliott et al., 2013).\nPage 2, paragraph 6 (right). Polyclonal antibodies are not necessarily more sensitive. In fact, because of the elevated noise with polyclonals, they can have a poor signal-to-noise ratio.\nPage 2, paragraph 7 (right). High specificity and reproducibility are related to affinity and the nature of the binding site (epitope), not to monoclonal vs. polyclonal antibodies per se.\nPage 3, paragraph 4 (right): The premise that an investigator can trust any other lab or publication misses what should be the main point of the paper. The ultimate responsibility must belong to the end user. Even under the best circumstances it is impossible to fully evaluate validation done by others. For example, how does one know if only select data are shown, or whether experiments were repeated or reproducible, etc.?\nPage 4, paragraph 2 (left). Few manufacturers do “extensive” validation. Vendors frequently only show limited and selective (best) data with little description of the validation of reagents and controls.\n\nPage 4, paragraph 2 (left). A discussion on the limitations of RNAi is warranted. Like antibodies, RNAi can also give misleading data due to nonspecific knockdown of the presumed target protein.\n\nPage 5, last paragraph (right). A discussion of what are appropriate positive and negative “controls” is missing.\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Partly\n\nAre arguments sufficiently supported by evidence from the published literature? Partly\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Yes",
"responses": []
},
{
"id": "23624",
"date": "26 Jun 2017",
"name": "Alison H. Banham",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nAntibody validation and its contribution to scientific reproducibility is a topical area. While there are several recent reviews on the subject each brings a slightly different perspective that adds value. This is true of the current article, which provides some very helpful and practical advice. The article also highlights the key role of dialogue between antibody vendors and end users, between academic laboratories and the importance of a partnership within the scientific community to improve the standards of antibody validation and publishing antibody-based data.\n\nWhile commercially available antibodies have the advantage of being easily accessible it would be worth mentioning the published literature earlier than section 7 as a resource for finding antibodies. In some instances the best antibody may be produced by an academic laboratory and might not be commercially available.\n\nThe section on recombinant antibodies gives the impression that these reagents are always generated by library screening, “without any dependence on animal immunization”. It would be helpful to clarify that any antibodies, including classical monoclonal antibodies derived from hybridoma cell lines, can be produced in a recombinant format by isolating and cloning their immunoglobulin genes. 
Indeed recombinant therapeutic antibodies used by the pharmaceutical industry are commonly derived from classical monoclonal antibodies and efforts are underway to convert many monoclonal antibodies used as research tools into a recombinant format to ensure their longevity.\n\nWhile the authors indicate they have no competing interests it would be worthwhile for full transparency to declare that two of the authors work for a commercial antibody vendor, particularly as the article makes strong recommendations regarding criteria for vendor selection.\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Partly\n\nAre arguments sufficiently supported by evidence from the published literature? Yes\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Yes",
"responses": []
},
{
"id": "23765",
"date": "30 Jun 2017",
"name": "C. Glenn Begley",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nGiven that Bio-Rad Laboratories is a supplier of antibodies to the scientific community, the two authors who are employed by Bio-Rad Laboratories could be perceived as having a ‘competing interest’. This should be declared as such.\n\nMy biggest concern with this paper is what might be interpreted to be an over-reliance upon published studies to support the use of antibodies for particular indications. For example:\n(i)The statement that “Vendors frequently show cropped blots, share validation data that lack appropriate controls, or use large amounts of purified protein or cell lines overexpressing proteins of interest instead of physiologically relevant samples as positive controls” is correct.\nBut this is further compounded by investigators who continue the same practices and further contaminate the literature by reporting results that purport to support the use of a particular antibody by not disclosing its lack of specificity.\n(ii) The comment “Their high specificity makes monoclonals an ideal choice for immunohistochemistry applications” is correct. However it is worth adding the caveat that simply because these are monoclonal antibodies, does not guarantee that they will be suitable for immunohistochemistry. The control experiments are still required to validate their utility.\n(iii) It is strongly recommended that the statement “You can also look to validation data in published studies to evaluate antibody performance” be modified. 
One should be extremely careful about relying upon the published literature as a source of confirmation for an antibody. It is very rare that published studies validate an antibody for the purpose for which it is applied. It is unfortunately much more common that antibodies that should not be used for immunohistochemistry or flow cytometry are used for that purpose. Once the first paper is published, subsequent investigators simply cite that paper as evidence of ‘validation’ without any confirmatory studies. This is a major problem for careful investigators.\n(iv) “When reviewing the literature, trust an antibody cited in a publication only if appropriate positive and negative controls are included.” Because most publications typically only show a tiny ‘window’ of a gel, I recommend adding the phrase “and for western blots and immunoprecipitation experiments, only if the entire gel is shown”.\n\nIt may be worth commenting that different vendors sell the same antibody but with a different name and lot number. Thus, while an investigator is purchasing antibodies that appear to be different, they might actually be the identical antibody. Researchers should at least be aware that this is currently occurring.\n\nWith respect to “Publishing meaningful antibody data”, I suggest the Authors add a comment that:\n(i) All westerns and IPs should have size standards shown (in addition to showing the complete gel).\n(ii) Subjective assessment of IHC (for example, counting the number of metastases in the lung) should be performed by blinded investigators.\n(iii) For flow cytometry experiments, “outlier points” should not be removed.\n(iv) Experiments should be repeated – investigators should resist publishing a single positive western result, or analysing a single IHC sample as “typical”.\n\n“IP”, “DAB”, and “IHC” should be defined when they first occur in the text.\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? 
Yes\n\nAre all factual statements correct and adequately supported by citations? Yes\n\nAre arguments sufficiently supported by evidence from the published literature? Yes\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Yes",
"responses": []
},
{
"id": "23340",
"date": "14 Aug 2017",
"name": "Andrew D. Chalmers",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe opinion article by Poulomi Acharya and colleagues is an interesting commentary on three important areas that relate to antibody use, how to select, validate and report antibody use.\n\nI think it provides a valuable contribution to discussion in this field and makes many valuable points on the topic.\n\nI would like to suggest a number of minor changes/corrections that should be considered if/when the article is revised.\nThe authors state in the abstract that “in addition, several studies have called into question the reliability of published data as the primary metric for assessing antibody quality”. I think if the authors are going to make this statement in the abstract they should specifically return to it in the article and discuss it more widely. Which studies are they and what were the conclusions. What other metrics are available and do the authors believe these should these be used instead or with published data?\n\nMy own opinion is that there are clear cases where peer reviewed published antibody results have turned out not to be reliable and the authors are right to raise them and warn researchers. 
I also believe that supplier validation is in many cases a very good source of information, and reviews can also provide value, but I would argue that peer reviewed published results, when available and looked at critically, are still the best source of information on antibody quality, short of validating the antibody in your own laboratory.\n\nI would of course agree with the authors that researchers should use search engines to help identify possible candidate antibodies (Page 2). The validation data for these antibodies can then be investigated and a final choice made. However, the search engines proposed omit several well used ones such as CiteAb (I am obviously biased), but also Antibodypedia and PabMabs and others. The three sites mentioned rank by citations or reviews and complement Biocompare and Select Science, which I believe rank on a financial basis. I am also not aware of how UniProt and NCBI can be used as antibody search engines, so some explanation might be helpful.\n\nIt is not true that all monoclonal antibodies are highly specific (Page 2); a recent study showed this for antibodies against the oestrogen receptor.\n\nRecombinant monoclonals can potentially show batch-to-batch variability caused by changes in the manufacturing process, so I think it is more accurate to say that they should have the least batch-to-batch variability, rather than saying “removing any batch to batch variability” (Page 3).\n\nI think the authors are right to stress the need to select antibodies, where possible, that have been validated for the application of interest (Page 3; final paragraph). 
I wonder if the authors could also make more of the need to try and find antibodies validated for the tissue and cell type of interest?\n\nThey are also right to point out that comparing antibodies from different vendors can add further value to the validation (page 4), but it might be worth stressing that, due to cross-selling, researchers need to be careful to make sure that the antibodies are actually different.\n\nIn section III, number 1, the authors have omitted the catalogue code as part of the information that should be listed. They do put this in Table 1. I think this should be added, and I would also suggest that the clone number and any conjugate can also provide valuable information and should, in an ideal world, be included.\n\nTwo of the authors work for an antibody supplier and this should be declared.\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Partly\n\nAre arguments sufficiently supported by evidence from the published literature? Partly\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-851
|