link: https://f1000research.com/articles/5-2765/v1
date: 25 Nov 16
paper:
{
"type": "Research Article",
"title": "Early embryo mortality in natural human reproduction: What the data say",
"authors": [
"Gavin E. Jarvis"
],
"abstract": "It is generally accepted that natural human embryo mortality during pregnancy is high – losses of 70% and higher from fertilisation to birth are frequently claimed. The first external sign of pregnancy occurs two weeks after fertilisation with a missed menstrual period. Establishing the fate of embryos before this is challenging, and hampered by a lack of data on the efficiency of fertilisation under natural conditions. Four distinct sources are cited to justify quantitative claims regarding embryo loss: (i) a hypothesis published by Roberts & Lowe in The Lancet is widely cited but has no practical quantitative value; (ii) life table analyses give consistent assessments of clinical pregnancy loss, but cannot illuminate losses at earlier stages of development; (iii) studies that measure human chorionic gonadotrophin (hCG) reveal losses in the second week of development and beyond, but not before; and (iv) the classic studies of Hertig and Rock offer the only direct insight into the fate of human embryos from fertilisation under natural conditions. Re-examination of Hertig’s data demonstrates that his estimates for fertilisation rate and early embryo loss are highly imprecise and casts doubt on the validity of his numerical analysis. A recent re-analysis of hCG study data suggests that approximately 40-60% of embryos may be lost between fertilisation and birth, although this will vary substantially between individual women. In conclusion, it is clear that some published estimates of natural embryo mortality are exaggerated. Although available data do not provide a precise estimate, natural human embryo mortality is lower than is often claimed.",
"keywords": [
"early pregnancy loss",
"occult pregnancy",
"embryo mortality",
"human chorionic gonadotrophin",
"Hertig",
"pre-implantation embryo loss"
],
"content": "Introduction\n\nIt is widely accepted that under natural circumstances, human embryo mortality is high, particularly immediately after fertilisation. Quantitative estimates of embryo loss are found in diverse media including television documentaries (“You made it through the first round” presented by Michael Mosley: video at http://www.bbc.co.uk/timelines/z84tsg8; transcript at http://a.files.bbci.co.uk/bam/live/content/z3b87hv/transcript: accessed on 22nd October, 2016), online educational videos (“Bill Nye: Can We Stop Telling Women What to Do With Their Bodies?” presented by Bill Nye, the Science Guy: video at https://www.youtube.com/watch?v=4IPrw0NYkMg: accessed on 22nd October, 2016), online museum exhibits (“Who Am I? What happens in week 1?” presented by The Science Museum; available at http://www.sciencemuseum.org.uk/WhoAmI/FindOutMore/Yourbody/Wheredidyoucomefrom/Howdoyougrowinthewomb/Whathappensinweek1: accessed on 22nd October, 2016), news reports (“Scientists get ‘gene-editing’ go-ahead” by James Gallagher: article at http://www.bbc.co.uk/news/health-35459054: accessed on 22nd October, 2016), as well as academic philosophical articles1 and legal judgements2. Among reputable scientific publications, including medical and reproductive biology text books, scientific reviews and primary research articles, reported mortality estimates include: 30–70% before and during implantation3; >50%4, 73%5 and 80%6 before the 6th week; 75% before the 8th week7; 70% in the first trimester8; 40–50% in the first 20 weeks9; and 49%10, >50%11,12, 53%13, 54%14, 60%15, >60%16, 63%17,18, 70%19–23, 50–75%24, 76%5,25, 78%26, 80–85%27, >85%28, and 90%29 total loss from fertilisation to term. The variance in these estimates is striking. 90% intrauterine mortality implies a maximal live birth fecundability of 10%, and only then if all other stages of the reproductive process are 100% efficient. 
Observed human fecundability is low compared to other animals13, but at approximately 20–30%4,30 it is still higher than implied by such a high embryo mortality rate.\n\nEarly human embryo mortality is of interest not only to reproductive biologists and fertility doctors, but also to ethicists31, theologians32 and lawyers2. Nevertheless, becoming pregnant and having children is of primary and personal importance to many women and their families. As with all biological processes, nothing works perfectly all the time33, and failure to conceive and pregnancy loss are common problems. However, inconsistent estimates of early pregnancy loss are not reassuring, nor do they provide a sound basis for either a quantitative understanding of natural human reproductive biology or an unbiased appraisal of artificial reproductive technologies. The divergent and excessive values noted above therefore invite scrutiny of the evidence that supports them. In this article, I identify and re-evaluate published data that contribute to claims regarding natural human embryo mortality.\n\n\nA quantitative framework for embryo mortality\n\nA quantitative framework has been proposed to facilitate the calculation and comparison of embryo mortalities from fecundability and pregnancy loss data34. The model comprises conditional probabilities (π) of the following biological processes: (1) reproductive behaviours resulting in sperm-ovum-co-localisation per cycle = πSOC; (2) successful fertilisation given sperm-ovum-co-localisation = πFERT; (3) implantation of a fertilised ovum as indicated by increased levels of human chorionic gonadotrophin (hCG) = πHCG; (4) progression of an implanted embryo to a clinically recognised pregnancy = πCLIN; (5) survival of a clinical pregnancy to live birth = πLB.\n\nFecundability is the probability of reproductive success per cycle, but may take different values depending on the definition of success. 
The following four fecundabilities broadly follow Leridon30:\n\n1. Total (all fertilisations): FECTOT = πSOC × πFERT\n\n2. Detectable (implantation): FECHCG = πSOC × πFERT × πHCG\n\n3. Apparent (clinical): FECCLIN = πSOC × πFERT × πHCG × πCLIN\n\n4. Effective (live birth): FECLB = πSOC × πFERT × πHCG × πCLIN × πLB\n\nHence, the probability that a fertilised egg will perish prior to implantation is [1 - πHCG], and prior to clinical recognition is [1 - (πHCG × πCLIN)]. In theory, embryonic mortality may be estimated at different stages; however, in practice, this depends on available data. Clinical and live birth fecundabilities are most easily quantified and most frequently reported. Total and detectable fecundabilities are less frequently reported, although of direct relevance.\n\n\nWhat the data say\n\nPublications containing data relevant to early human embryo mortality were identified primarily by tracing citations found in articles, reviews and textbooks. Systematic online searches did not capture all of these studies. Some are particularly old, many were not conducted to address the specific question, and others are in books or publications that are not adequately indexed. If not entirely complete, nevertheless the data presented form a substantial proportion of relevant, available scientific information on natural early human embryo mortality.\n\nStudies that contribute analysis and data relevant to the quantification of natural human embryo mortality fall into the following four categories and will be considered in turn.\n\n1. A speculative hypothesis published in The Lancet.\n\n2. Life tables of intra-uterine mortality.\n\n3. Studies of early pregnancy by biochemical detection of hCG.\n\n4. Anatomical studies of Dr Arthur Hertig and Dr John Rock.\n\nIn 1975, a short hypothesis published in The Lancet entitled “Where Have All The Conceptions Gone?” concluded that 78% of all conceptions were lost before birth26. 
It has been widely cited by both scientists4,17,19,20,35 and non-scientists36,37 alike. Conceptions among married women aged 20–29 in England and Wales in 1971 were estimated and compared to infants born in the same period. In this analysis (Table 1) there are reliable values, e.g., census data, and simple arithmetical calculations. However, speculative values are necessary to perform the calculations. Three are biological: (1) fertilisation rate following unprotected coitus during the fertile period was estimated as 50% and supported by reference to Hertig38 (although his estimate was 84%33); (2) the length of a menstrual cycle (28 days); and (3) the duration of the fertile period (2 days). These latter values are plausible, but also variable. No justification is provided for three behavioural variables: (1) coital frequency estimated at twice per week; (2) proportion of unprotected coital acts estimated at 25%; and (3) either a random or regular distribution of coital acts during menstrual cycles such that 1/14 of all coital acts fall within a fertile period.\n\nThe table replicates the values and calculations of Roberts & Lowe26 with more explanatory detail. In addition, it illustrates how introducing variance into speculative estimates influences the final calculated value of embryo loss. *Data type indicates whether the numerical value is reliable (e.g., derived from census data), the result of a simple arithmetical calculation, or speculative (shown in italics). §Values are the 2.5th and 97.5th percentile boundaries, assuming a normal distribution for the variables centred on Roberts & Lowe’s values with a coefficient of variation of 20%. †Speculative values were adjusted either up or down by 25% compared to Roberts & Lowe’s values. Values for ‘Length of menstrual cycle’ were adjusted by 10%. ‡The median values of the 2.5th and 97.5th percentile boundaries from 1,000 simulations, each containing 10,000 separate estimates for embryo loss. 
The derivation of these values is described in the text. Briefly, each separate estimate of embryo loss was calculated using variable speculative values that were obtained by random sampling from a normal distribution with a mean equal to the Roberts & Lowe value and a coefficient of variation of 20%. The median value of the mean percentage loss was 73.3% and of the median was 76.5%. ¥The most frequent duration of a menstrual cycle is 28 days but there is substantial variability and the mean length is generally 30–31 days30.\n\nThe validity of Roberts & Lowe’s conclusion depends largely on the accuracy and precision of these speculative values. The following two simple analyses illustrate the sensitivity of their conclusion to the speculative values.\n\n1. When four of the speculative values are reduced by 25% (e.g., coital frequency reduced to 1.5/week) and cycle length increased by 10% (from 28 days to 31 days30), the estimate for embryo loss drops to 22%. The opposite operation (e.g., coital frequency increased to 2.5/week) results in an estimate of 92% (Table 1). Embryo loss of 22% is barely sufficient to account for observed clinical losses, and 92% indicates a maximum FECLB of 8%. Neither scenario is biologically plausible.\n\n2. A non-zero variance was applied to each speculative value, reflecting their uncertain nature. Using the random number generator in Microsoft® Excel (Office 2010), simulated values were obtained by random sampling from normal distributions with means equal to Roberts & Lowe’s speculative values and coefficients of variation equal to 20%. For simplicity, it was assumed that there was no covariance between the different speculative values. Table 1 shows the expected range within which 95% of these simulated values fall (e.g., coital frequency is 1.2–2.8/week). For each simulated record, a new estimate of embryo loss was calculated and, from 10,000 of these, the mean, median and 2.5th and 97.5th percentiles of embryo loss were determined. 
This was repeated 1,000 times: the mean value of the simulated means was 73.3% and of the simulated medians was 76.5%. The mean values of the 2.5th and 97.5th percentile boundaries for embryo loss were 37% and 90% (Table 1). The same simulation was also performed using NONMEM 7.3.0® (Icon PLC, Dublin, Eire) and generated 100,000 data records. The outcome of this is shown in Figure 1. The code and simulated data values are in Dataset 1.\n\nEmbryo loss values were calculated using alternative speculative values (see text and Table 1) obtained by randomly sampling from normal distributions with mean values equal to Roberts & Lowe’s values and a coefficient of variation of 20%. 100,000 simulated embryo loss values were obtained. Frequencies within a bin size of 0.25% are shown. The 2.5th and 97.5th percentiles are indicated. The simulation was performed using NONMEM 7.3.0® (Icon PLC, Dublin, Eire). Simulated values are in Dataset 1.\n\nThe sole purpose of these simple sensitivity analyses is to illustrate that modest adjustments to Roberts & Lowe’s original speculative values can result in any biologically plausible estimate for embryo loss. The output from the calculation is therefore substantially dependent on the subjectively selected input. Such an analysis has no practical quantitative value.\n\nOther sources of bias in their model include the failure to account for intentionally terminated pregnancies and the reduced fecundability of already pregnant women and nursing mothers. Despite this, it was described as “persuasive”39 and it has been claimed that “it is still difficult to better the original calculations of Roberts and Lowe (1975)”19. By contrast, others have noted that “their calculations can be criticized”4 and are “tenuous”40. 
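The Excel/NONMEM sensitivity simulation described above can be sketched in a few lines. The census-derived inputs (numbers of women and births) are not reproduced in this article, so the births-per-woman-year figure below is a hypothetical placeholder calibrated so that the central speculative values return Roberts & Lowe’s 78% baseline; the sampling follows the 20% coefficient of variation described in the text (applied here to cycle length as well, which the article varied by only 10%).

```python
import random
import statistics

random.seed(0)
N = 10_000
CV = 0.20  # coefficient of variation applied to each speculative value

def draw(mean):
    """Sample one speculative value from a normal distribution with 20% CV."""
    return random.gauss(mean, mean * CV)

# HYPOTHETICAL stand-in: births per married woman aged 20-29 per year,
# calibrated to reproduce the published 78% loss at the central values.
BIRTHS_PER_WOMAN_YEAR = 0.204

losses = []
for _ in range(N):
    coital_freq = draw(2.0)    # coital acts per week
    p_unprot = draw(0.25)      # fraction of coital acts unprotected
    fertile = draw(2.0)        # fertile days per cycle
    cycle = draw(28.0)         # cycle length in days
    p_fert = draw(0.50)        # fertilisation probability in the fertile window
    # Conceptions per woman-year; fertile/cycle is the fraction of coital
    # acts falling in the fertile period (2/28 = 1/14 at central values).
    conceptions = coital_freq * 52 * p_unprot * (fertile / cycle) * p_fert
    losses.append(1 - BIRTHS_PER_WOMAN_YEAR / conceptions)

losses.sort()
print(f"mean   {statistics.fmean(losses):.1%}")
print(f"median {statistics.median(losses):.1%}")
print(f"2.5th / 97.5th percentiles: "
      f"{losses[int(0.025 * N)]:.1%} / {losses[int(0.975 * N)]:.1%}")
```

At the central values alone the calculation returns approximately 78% loss; sampling the inputs spreads this across most of the biologically plausible range, which is precisely the point of the sensitivity analysis.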
Considering its quantitative limitations, it has been cited surprisingly often8,20,41.\n\nConstructing a life table of intrauterine mortality is challenging since embryonic death may occur even before the presence of an embryo is recognised. Nevertheless, in 1977 Henri Leridon published a complete life table of intrauterine mortality18. Leridon highlighted the consequences of inappropriate analysis and the quantitative biases produced by alternative numerical methods. Overall, he discussed sixteen studies and provided detailed commentary on six42–47. These data are summarised in Figure 2 and suggest that 12–24% of embryos alive at 4 weeks’ gestation (i.e., approx. 2 weeks post-fertilisation) will perish before birth.\n\nThe figure is generated using values from Table 4.3 of Leridon18, which are derived from six different studies (see text). The Kauai Pregnancy Study data42 are shown in thick black. Data from Shapiro (1970)46 were analysed either with all pregnancies included (ALL) or with those pregnancies that aborted within one week of study entry excluded (EXCL.). The greater loss observed with ALL may be due to a correlation between study entry and abortion risk. Based on these data, the risk of losing a pregnancy ongoing at 4 weeks’ gestation ranges from 12.5% to 23.7% (excluding Shapiro (1970) ALL). Values are in Dataset 2.\n\nLeridon described the Kauai Pregnancy Study42 in particular detail. In this study, an attempt was made to identify every pregnancy on Kauai from 1953–56. Women were encouraged to enrol as soon as they missed a period. Early pregnancy loss may therefore have been overestimated, since not all amenorrhoea is caused by conception, although other studies that relied upon medically-identified pregnancies probably underestimated early pregnancy loss by not capturing all cases48. 
Whatever the truth, it is clear that, among the studies reviewed by Leridon, the Kauai Pregnancy Study revealed the highest levels of pregnancy loss (Figure 2).\n\nAll recorded pregnancies in the Kauai study were categorised by date of enrolment in four week intervals, beginning with 4–7 weeks’ gestation. This time-staggered approach enabled risk of miscarriage to be associated with stage of gestation. However, despite considerable efforts, only 19% of the 3,197 recorded Kauai pregnancies were enrolled between 4–7 weeks’ gestation, thereby reducing the precision of pregnancy loss estimates for this earliest of time intervals. Although pregnancies were grouped in four week periods, Leridon suggested that early mortality may change week by week, resulting in underestimation of pregnancy loss. He re-allocated the 592 study entries and 32 pregnancy losses for weeks 4–7 (Table 2), generating an overall probability of pregnancy loss during this period of 15.0%, higher than the 10.8% originally reported42. Leridon’s own description of this interpolation as “risky” can be illustrated by adjusting his re-allocation18. Transferring just two of the pregnancy losses out of or into the first week results in estimates of the 4–7 week pregnancy loss of 10.9% and 19.1% respectively (Table 2). The validity of adjusting Leridon’s re-allocation may be questioned. However, pregnancy loss in week 4–5 of the Kauai Study would manifest as a menstrual period delayed by up to one week. This is far from being a robust pregnancy diagnosis, and in a different study46, exclusion of pregnancy losses reported within one week of study entry resulted in substantially different loss probabilities (Figure 2), suggesting a confounding correlation between entry and loss18. Nevertheless, the re-allocation does reinforce a concern highlighted by Leridon, namely the uncertainty that affects the first probability. 
Clearly, these estimates of early loss should be treated with caution.\n\nMinor differences in the re-allocation of the earliest pregnancy losses have a substantial effect on the overall measure of pregnancy loss for that period. (Adapted from Table 4.2 in Leridon18.)\n\nA more fundamental problem is that these data offer no insight into the fate of embryos prior to the earliest possible point of clinical pregnancy detection. Leridon completed his life table with values from Hertig’s analysis33. He concluded that among 100 ova exposed to the risk of fertilisation, 16 are not fertilised, 15 die in week one (before implantation), and 27 die in week two (before the menstrual period). After two weeks his life table follows the Kauai probabilities closely ending with 31 live births. Leridon’s table therefore indicates an embryo mortality of 50% (42/84) within the first two weeks after fertilisation and a total mortality of 63% (53/84) from fertilisation to birth.\n\nLeridon’s account of intrauterine mortality has been widely cited. However, its accuracy depends entirely on the quality and interpretation of the data from Hertig33 and French & Bierman42. French & Bierman’s approach probably resulted in an overestimate of total pregnancy loss and is certainly imprecise in its estimate of embryo loss in the four weeks following the first missed menstrual period. The reliability of Hertig’s estimates of embryo loss in the two weeks following fertilisation is considered below.\n\nQuantification of pregnancy loss requires pregnancy diagnosis. The earliest outward sign of pregnancy is a missed menstrual period, approximately 2 weeks after fertilisation, although amenorrhoea in women of reproductive age is not exclusively associated with fertilisation49,50. Several potentially diagnostic pregnancy-associated proteins have been identified51 of which only one, Early Pregnancy Factor (EPF)52, has been claimed to be produced by embryos within one day of fertilisation. 
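Leridon’s completed life table, described above, reduces to simple bookkeeping that can be checked directly (values per 100 ova exposed to the risk of fertilisation, taken from the text):

```python
# Leridon's completed life table: fate of 100 ova exposed to the risk
# of fertilisation.
ova = 100
not_fertilised = 16
die_week_one = 15    # die before implantation
die_week_two = 27    # die before the first missed menstrual period
live_births = 31

fertilised = ova - not_fertilised                   # 84 fertilised ova
die_first_two_weeks = die_week_one + die_week_two   # 42 deaths
total_deaths = fertilised - live_births             # 53 deaths overall

early_mortality = die_first_two_weeks / fertilised  # 42/84
total_mortality = total_deaths / fertilised         # 53/84
print(f"two-week mortality {early_mortality:.0%}, "
      f"fertilisation-to-birth mortality {total_mortality:.0%}")
```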
However, there is doubt about the utility of EPF for diagnosing early pregnancy53 and little has been published on it in the past five years.\n\nModern pregnancy tests detect human chorionic gonadotrophin (hCG), a highly glycosylated 37 kDa protein hormone produced by embryonic trophoblast cells54. Mid-cycle elevation of hCG is associated with embryo implantation19,20,55. Early assays for the detection of hCG were probably confounded by antibody cross-reactivity with luteinizing hormone56 but modern tests are more specific and a positive result is a reliable indicator of early pregnancy. Highly sensitive assays have revealed low levels of hCG in non-pregnant women and healthy men57; hence, quantitative criteria are required to distinguish between non-pregnant women and those harbouring early embryos55.\n\nFigure 3 and Table 3 summarise findings from thirteen studies that used hCG to identify so-called early, occult or biochemical pregnancy loss, i.e., pregnancy loss between the initiation of implantation and clinical recognition58–70. Notwithstanding design and subject differences, estimates for clinical pregnancy loss, ranging from 8.3% - 21.2% (Figure 3), are similar to previous estimates (Figure 2). Estimates for early/occult loss ranged from 0% to 58.3% in studies58–62 prior to Wilcox in 198863. This high variance was probably due to reduced specificity and sensitivity of the hCG assays and sub-optimal study design48,51,71–74. Studies from 198863 onwards have produced more consistent data indicating early/occult loss of approximately 20% (Figure 3). In the three largest studies63,66,70 pregnancies were clinically recognised only if they lasted ≥6 weeks after the onset of the last menstrual period66,75. Hence, early pregnancy losses in these studies included those lost up to approximately two weeks after a missed menstrual period: this may influence comparison of study results34,73. 
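Because an hCG-positive pregnancy must survive both the early/occult stage and the clinical stage to reach term, representative losses combine multiplicatively rather than additively. The 20% and 15% figures below are illustrative central values drawn from the ranges just quoted, not estimates from any single study:

```python
# Illustrative central values (assumptions, not study-specific estimates):
early_loss = 0.20     # early/occult loss, approx. 20% in post-1988 studies
clinical_loss = 0.15  # clinical loss, mid-range of the 8.3-21.2% estimates

# An embryo must survive both stages, so survival probabilities multiply;
# the two losses do not simply add.
survival = (1 - early_loss) * (1 - clinical_loss)
total_loss = 1 - survival
print(f"loss from hCG detection to birth: {total_loss:.0%}")  # 32%
```

The result, roughly one third, is consistent with the overall figure suggested across these thirteen studies.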
An overview of the thirteen studies suggests that overall pregnancy loss from first detection of hCG through to live birth is approximately one third (Table 3). This is consistent with another recent study which found that 98 out of 301 (32.6%) singleton pregnancies diagnosed by an early positive hCG test and followed-up to either birth or miscarriage were lost76.\n\nData are arranged by publication date and the first author of the study is shown. Three datasets are shown: (i) the percentage of at risk reproductive cycles that were hCG positive; (ii) the percentage of hCG positive cycles that did not manifest as clinical pregnancies = early pregnancy loss; and (iii) the percentage of clinical pregnancies lost prior to 12 or 28 weeks or live birth (definitions vary between studies). A clinical pregnancy may be manifest by a missed period although criteria vary between studies. Videla-Rivero et al.61, Sasaki et al.67, Cole69 and Mumford et al.70 do not report sufficient data to calculate all three values. Values are in Dataset 3.\n\nRaw FECHCG is the ratio of hCG pregnancies detected and the number of cycles monitored in each study. Where available, mean (SD) ages of the participating women are taken directly from the published study. In some cases mean and SD (indicated by *) or SD (indicated by †) were estimated based on published demographic characteristics. §These data relate to the whole study cohort (n=124) which included known sub-fertile women, and not just to the 74 apparently fertile women. ‡Mean value from Wilcox et al. (2001)78. ¶Some studies only provide data up to late pregnancy (e.g., up to 28 weeks) rather than to term. ND = no data. ¤Wilcox subsequently reported an additional hCG pregnancy which had not been detected and reported in the 1988 paper, making a total of 199 hCG pregnancies and 44 pre-clinical losses in the study group75. #Mumford reported data from aspirin- and placebo-treated subjects who had at least one prior miscarriage. 
Summary data from both treatment groups are included as there was no effect of aspirin70.\n\nThe much cited Wilcox study63 is the earliest of several large well-designed studies that made use of a specific and sensitive hCG assay and led to numerous further publications75,77–83. Two other studies (Zinaman65 and Wang66) were similar in purpose, design and execution. These studies provide some of the best available data to calculate pregnancy loss between implantation and birth34. In each study, women intending to become pregnant and with no known fertility problems were recruited and hCG levels monitored cycle by cycle in daily urine samples until they became pregnant. Most women were followed through to late pregnancy or birth. Although these studies provide evidence regarding the outcome of both clinical and hCG pregnancies, determining the fate of embryos prior to implantation is more difficult. To relate the study results to pre-implantation embryo loss, it is necessary to determine fecundability. In each study FECCLIN declined in successive cycles as the proportion of sub-fertile women increased. Hence, reported FECHCG values of 30%65 and 40%66, and FECCLIN values of 25%63 and 30%66 are biased underestimates of the fecundability of normal fertile women. A recent re-analysis of these data provides statistical evidence for discrete fertile and sub-fertile sub-cohorts within the study populations34. The proportions of sub-fertile women (mean [95% CI]) were estimated as 28.1% [20.6, 36.9] (Wilcox); 22.8% [12.9, 37.2] (Zinaman); and 6.0% [2.8, 12.3] (Wang). For normally fertile women, FECHCG was, respectively: 43.2% [35.6, 51.1]; 38.1% [32.7, 43.7]; and 46.2% [42.8, 49.6]. FECCLIN was: 33.9% [29.4, 38.6]; 33.3% [27.6, 39.6]; and 34.9% [33.0, 36.8]. 
There was no apparent difference in πCLIN between fertile and sub-fertile sub-cohorts, which was estimated as: 78.3% [69.2, 85.3]; 87.5% [76.0, 93.9]; and 75.4% [71.5, 79.0]34.\n\nWhy do a proportion of menstrual cycles in women attempting to conceive fail to show any increase in hCG? Since FECHCG = πSOC × πFERT × πHCG, there can be various causes for this failure including mistimed coitus, anovulation, failure of fertilisation or pre-implantation embryo death. Although FECHCG puts limits on the extent of pre-implantation embryo loss, uncertainty in the estimates of πSOC, πFERT and πHCG translates into uncertainty in estimates of pre-implantation embryo mortality. In the Wang study, for normally fertile women, FECHCG = 46.2%; hence, the absolute maximum value for pre-implantation embryo loss must be 53.8%, although only if πSOC = πFERT = 1, conditions both extreme and unlikely34. Studies of the relationship between coital frequency and conception indicate that fecundability is greater with daily compared to alternate day intercourse34,84,85. Hence, when coital frequency is less than once per day a proportion of reproductive failure will be due to mistimed coitus, i.e., πSOC < 1. In the Wilcox study, coitus occurred on only 40% of the six pre-ovulatory days34,79, and in the Zinaman study participants were advised that alternate day intercourse was optimal65. Based on the difference in fecundability between daily and alternate day intercourse as modelled by Schwartz85, a value of πSOC = 0.80 was used to calculate pre-implantation embryo mortality34. However, this is a speculative estimate, and in reality the value may be higher, or lower.\n\nA further critical missing piece of the equation is knowledge of the efficiencies of fertilisation and implantation under normal, natural, propitious circumstances. 
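How such bounds follow from FECHCG = πSOC × πFERT × πHCG can be sketched with the Wang estimate; the πSOC = 0.80 value is the speculative estimate discussed above, and the 90% fertilisation efficiency is a hypothetical value for illustration only:

```python
fec_hcg_wang = 0.462  # FEC_HCG for normally fertile women in the Wang study

def preimplantation_loss(pi_soc: float, pi_fert: float) -> float:
    """Return 1 - pi_HCG, where pi_HCG = FEC_HCG / (pi_SOC * pi_FERT)."""
    return 1 - fec_hcg_wang / (pi_soc * pi_fert)

# Absolute ceiling: perfectly timed coitus and guaranteed fertilisation.
print(f"maximum pre-implantation loss: {preimplantation_loss(1.0, 1.0):.1%}")  # 53.8%

# With the speculative pi_SOC = 0.80 and a hypothetical 90% fertilisation
# efficiency, the implied loss falls within the quoted 10-40% range.
print(f"illustrative loss: {preimplantation_loss(0.80, 0.90):.1%}")
```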
Assuming that either of these processes may be up to 90% efficient, and based on data from the three hCG studies63,65,66, a plausible range for pre-implantation embryo loss in normally fertile women is 10–40% and for loss from fertilisation to birth, 40–60%34. Even with these wide ranges of mathematically possible outcomes, it is clear that estimates for total embryonic loss of 90%29, 85%28, 83%31, 80–85%6,27, 78%26, 76%5,25 and 70%19–23 are excessive.\n\nA previous review concluded that “at least 73% of natural single conceptions have no real chance of surviving 6 weeks of gestation”5,86. Live birth fecundability was estimated as “not over 15%”, substantially lower than Leridon’s 31%. Despite this discrepancy, Boklage’s conclusions were derived from a review of data including several hCG studies55,58–61,63 and Leridon’s analysis18. He derived a model describing the survival probability of human embryos comprising the sum of two exponential functions:\n\nS(t) = a·e^(–αt) + b·e^(–βt)\n\nin which t is the time in days post-fertilization. This is the source of the 73% in the conclusion.\n\nThere are, however, serious problems with this analysis. Firstly, data presented as embryo survival probabilities at different times post-fertilization55,58,59,61,63 are fecundabilities, i.e., successes per cycle, not per fertilised embryo. Secondly, for reasons that are unclear, data from Whittaker60 and Leridon18 were excluded from the modelling analysis, and the data from an earlier Wilcox report55 were included twice, since these preliminary data had been incorporated into the later report63. Thirdly, the modelled data were normalised to a survival probability of 0.287 at 21 days post-fertilization. This value was derived from data published by Barrett & Marshall on the relationship between coital frequency and conception84. Barrett & Marshall had concluded that coitus during a single day alone, 2 days before ovulation, resulted in a conception probability of 0.30. 
Boklage’s value of 0.287 is his calculated equivalent. However, conception in this study was “identified by the absence of menstruation, after ovulation”84. Hence, 0.30 (and similarly, 0.287) is a clinical fecundability and not a measure of embryo survival. Furthermore, 0.30 is a non-maximal fecundability, since it was an estimate based on coitus on a single day (2 days before ovulation) within the cycle. Barrett & Marshall clearly report that as coital frequency increased so did the fecundability, up to a maximum of 0.68 associated with daily coitus84.\n\nBoklage’s analysis can only make biological sense if it is assumed that every cycle in the Barrett & Marshall study resulted in fertilisation. Under these circumstances, failure to detect conception in 71.3% (1 – 0.287) of cycles would be due entirely to embryo mortality. However, this is highly implausible and explicitly contradicted by the higher estimate of fecundability reported84. Boklage’s implicit assumption also contradicts his further conclusion that “only 60–70% of all oocytes are successfully fertilized given optimum timing of natural insemination”5. The vertical normalisation of the hCG study data to a value of 0.287 at 21 days is the principal determinant of the parameters that define the two exponential model. Any change in this value would commensurately alter the balance between the two implied sub-populations of embryos. 
Since it is evident that the value of 0.287 is neither an embryo survival rate nor even a maximal fecundability, it follows that quantitative conclusions from this analysis in relation to the survival of naturally conceived human embryos are of doubtful validity.\n\nHowever, Boklage is right about two things: firstly, the difficulty of calculating pre-clinical losses, because “In the place of the necessary numbers for the first few weeks of pregnancy we find editorially acceptable estimates which, while perhaps not far wrong, are difficult to defend with any precision”, and secondly, that the source of some of the only directly relevant data (even though he excluded it from his modelling analysis), namely, “Hertig’s sample is, and will probably remain, unique”.\n\nAt the start of the 1930s, no-one had ever seen a newly fertilised human embryo. It was barely 60 years since Oscar Hertwig had first observed fertilisation in sea urchins87, and just 40 years before the birth of the first test tube baby88,89. In Boston, Dr Arthur Hertig and Dr John Rock’s search to find early human embryos generated an irreplaceable collection which has left an indelible mark on our understanding of human embryology.\n\nHertig and Rock recruited 210 married women of proven fertility who presented for gynaecological surgery38. (In most of their publications, the number is given as 21033,90,91 although 211 subjects are mentioned elsewhere38.) Of these, 107 were considered optimal for finding an embryo because they apparently: (i) demonstrated ovulation; (ii) had at least one recorded coital date within 24 hours before or after the estimated time of ovulation; (iii) lacked pathologic conditions that would interfere with conception. Hertig examined the excised uteri and fallopian tubes, and over fifteen years found 34 human embryos aged up to 17 days33,38,90–97. Of these, 24 were normal and 10 abnormal33,90. 
(There is some confusion over this: in three publications38,91,97, 21 embryos are described as normal and 13 as abnormal. It appears that the three alternatively described embryos (C-8299; C-8000; C-8290) were originally defined as abnormal based on their position or depth of implantation38.) Table 4 provides information about the 34 embryos found in these 107 women. Although the study was primarily intended to find and describe early human embryos, Hertig subsequently used the data to derive estimates of reproductive efficiency including early embryo wastage33,90.\n\n(Table 4 legend) The embryos were collected from 107 out of 210 women. *In Hertig’s figure, day 28 of the ovulatory cycle is identified with day 1 of the next cycle and is the day of the presumed missed period in cases where pregnancy had commenced. The 36 cases that provide the evidential foundation for his numerical analysis are shown in bold.\n\nHertig’s analysis33,90 relies heavily on the 15 normal and 6 abnormal implanted embryos found in 36 women from cycle day 25 onwards. He assumed the 6 abnormal embryos would perish around the time of the first period, concluding that fertility (% pregnant) at this stage = 42% (15/36). Of the 8 pre-implantation embryos identified (7 in the uterus and 1 in the fallopian tubes), 4 were abnormal. Hertig assumed the 4 normal embryos would implant successfully but that some of the abnormal ones would not, such that the proportion of normal embryos would increase from 50% (4/8) before implantation to 71% (15/21) after implantation, as observed. Hence, among the 36 post-cycle day 25 cases, in addition to the 15 normal embryos, there must have been 15 abnormal pre-implantation embryos of which 60% (9/15) failed to implant and were not observed, and 40% (6/15) did implant and were observed, although these 6 would have perished shortly afterwards. This left 6/36 eggs that must have been unfertilised. 
The ratio of ‘unfertilised’ : ‘fertilised abnormal’ : ‘fertilised normal’ was therefore 6:15:15, matching the 16% infertility (no fertilisation), 42% sterility (post-fertilisation death) and 42% fertility (reproductive success) reported in Figure 9 of Hertig’s article, “The Overall Problem in Man”33. This is the source of Hertig’s 84% fertilisation rate and 50% embryo loss before and during implantation, and is reproduced in Leridon’s life table18 as 84/100 eggs surviving at time zero (ovulation and fertilisation) and 42 surviving to 2 weeks (time of first missed period).\n\nHertig provides almost the entire body of evidence used to quantify natural human embryo loss in the first week post-fertilisation. Most claims regarding early human embryo mortality find their source here. Before considering how reliable the figures are, it is worth repeating Hertig’s own caveat, namely, the lack of data on the efficiency of natural fertilisation33. All estimates of embryo mortality from fertilisation onwards are subject to commensurate inaccuracy in the absence of reliable fertilisation probabilities (i.e., πFERT), which are “surprisingly difficult to estimate”13.\n\nThere are several problems with Hertig’s analysis. As noted by others, the observations are cross-sectional, but the inferences are longitudinal48. Hertig detected 21 embryos from 36 cases (58.3%) from cycle day 25 onwards. If this detection rate were representative then, on average, the detection rates prior to day 25 should be the same or higher; however, they are all lower, and substantially so (Table 4). Hertig suggested that this was due to the technical difficulty of finding newly fertilised embryos. However, the detection rate for cycle days 18–19 was good (46.7%), whereas for embryos one or two days younger, which would not have been much smaller, the detection rate was poor (11.1%). An alternative explanation for this discrepancy might simply be random variation. 
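Hertig’s back-calculation reduces to simple arithmetic. The following sketch (an illustrative reconstruction by the present reviewer, not Hertig’s own calculation; all names are invented) derives his figures from the observed counts, treating the pre-implantation abnormal fraction as a parameter:

```python
def hertig_model(abnormal_pre=4, pre_total=8,
                 normal_implanted=15, abnormal_implanted=6, cases=36):
    """Reconstruct Hertig's estimates from his observed counts.

    Assumes, as Hertig did, that (i) the pre-implantation sample fixes
    the abnormal fraction among all fertilised eggs, and (ii) every
    normal embryo implants successfully.
    """
    frac_abnormal = abnormal_pre / pre_total
    # The 15 normal implanted embryos then fix the total fertilised count.
    fertilised = normal_implanted / (1 - frac_abnormal)
    failed_to_implant = fertilised - normal_implanted - abnormal_implanted
    return {
        "fertilised": fertilised,                                 # 30 eggs
        "fertilisation_rate": fertilised / cases,                 # ~84%
        "unfertilised": cases - fertilised,                       # 6 eggs
        "fertility": normal_implanted / cases,                    # 42%
        "pre_implantation_loss": failed_to_implant / fertilised,  # 30%
    }

print(hertig_model())                  # Hertig's 4/8 abnormal: the 6:15:15 split
print(hertig_model(abnormal_pre=3))   # loss falls to 12.5% ("13%")
print(hertig_model(abnormal_pre=5))   # loss 47.5%, fertilisation rate 111%
```

The last two calls reproduce the sensitivity of the derived proportions to the small pre-implantation sample, discussed further below.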
Furthermore, from cycle day 25 onwards, embryos would probably have produced hCG and therefore FECHCG would have been at least 58%. This is approximately double the equivalent values observed in more recent and robust hCG studies (Table 3), further suggesting that this subset of the data is not representative.\n\nDespite having proven fertility, these women presented with gynaecological problems, suggesting sub-optimal reproductive function. Furthermore, Hertig’s reproductively ‘optimal’ coital pattern does not include 2 days pre-ovulation and does include one day post-ovulation, conditions which are known not to maximise fertilisation34,79,84,85,98. Hence, detection rates before cycle day 25 may be more representative than those after. Given the numerical discrepancies, they cannot both be.\n\nHertig does not provide error estimates with his conclusions. In order to estimate the precision of his derived proportions, a bootstrap analysis was performed as follows: Hertig’s 107 optimal cases were categorised according to stage of cycle (Category 1 = cycle days 16–19 (n=24); Category 2 = cycle days 20–24 (n=47); Category 3 = cycle days ≥25 (n=36)), and presence and type of embryos (Category 0 = no embryo (n=73); Category 1 = normal embryo (n=24); Category 2 = abnormal embryo (n=10)). Five hundred pseudo-datasets, each containing 107 cases, were generated using a balanced random re-sampling method using Microsoft Excel®. The original and pseudo datasets are in Dataset 4.\n\nHertig’s numerical calculations, as detailed above, were repeated for each pseudo-dataset, thereby generating 500 estimates for each parameter, from which median values and [95% CIs] were derived: fertility = 42% [26%, 59%]; sterility = 42% [5%, 182%]; infertility = 16% [-127%, 61%]; pre-implantation embryo survival probability = 69% [27%, 128%]; post-implantation to week two survival probability = 71% [50%, 91%]; detection rate for cycle day 25 onwards = 58% [41%, 74%]. 
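The bootstrap procedure can be sketched as follows. The case list is reconstructed from the Table 4 counts using only the three coarse cycle-stage categories (an assumption), and simple resampling with replacement stands in for the balanced re-sampling used in the original analysis:

```python
import random

random.seed(0)  # any fixed seed; for reproducibility only

# 107 'optimal' cases as (cycle-stage category, finding), reconstructed
# from Table 4: stage 1 = days 16-19, 2 = days 20-24, 3 = days >= 25.
CASES = ([(1, "normal")] * 4 + [(1, "abnormal")] * 4 + [(1, "none")] * 16 +
         [(2, "normal")] * 5 + [(2, "none")] * 42 +
         [(3, "normal")] * 15 + [(3, "abnormal")] * 6 + [(3, "none")] * 15)

def hertig_estimates(sample):
    """Hertig-style parameter estimates from one (re)sampled dataset."""
    pre = [f for s, f in sample if s == 1 and f != "none"]  # pre-implantation embryos
    late = [f for s, f in sample if s == 3]                 # cycle day >= 25 cases
    if not pre or not late or all(f == "abnormal" for f in pre):
        return None                                         # degenerate resample: skip
    normal3, abn3 = late.count("normal"), late.count("abnormal")
    fertilised = normal3 / (1 - pre.count("abnormal") / len(pre))
    return {"fertility": normal3 / len(late),
            "sterility": (fertilised - normal3) / len(late),
            "infertility": (len(late) - fertilised) / len(late),
            "detection_day25": (normal3 + abn3) / len(late)}

def bootstrap(n_boot=500):
    """Medians and percentile 95% CIs over n_boot resamples."""
    draws = {k: [] for k in ("fertility", "sterility", "infertility", "detection_day25")}
    while len(draws["fertility"]) < n_boot:
        est = hertig_estimates(random.choices(CASES, k=len(CASES)))
        if est is not None:
            for k, v in est.items():
                draws[k].append(v)
    summary = {}
    for k, v in draws.items():
        v.sort()
        summary[k] = (v[len(v) // 2], v[int(0.025 * len(v))], v[int(0.975 * len(v))])
    return summary
```

`hertig_estimates(CASES)` reproduces the point estimates (fertility 15/36 ≈ 42%; detection rate 21/36 ≈ 58%), and `bootstrap()` yields medians and intervals comparable in spirit, though not numerically identical, to those reported above.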
Median values matched estimates calculated from the original dataset. Bootstrap 95% CIs for the day 25 detection rate (58%) matched those calculated using the “exact” method of Clopper & Pearson99, [41%, 74%], which are a little wider than those calculated using the “more exact” method of Agresti & Coull100, [42%, 73%]. (These analyses were performed using an online GraphPad® calculator accessed on 21st October 2016: http://www.graphpad.com/quickcalcs/ConfInterval1.cfm.) The congruence between these confidence intervals and the point estimates provides some reassurance that the bootstrap procedure worked effectively. Estimates of parameters other than the day 25 detection rate (58%) are derived from more complex proportional relationships, and are therefore less precise. Table 5 reproduces a life table in the style of Leridon18 and includes probabilities for each reproductive step with confidence intervals. These intervals (and some noted above) are impossibly wide, highlighting further problems with Hertig’s analysis.\n\n(Table 5 legend) The table is modelled on Leridon’s life table18 and includes his values for survivors and data from Hertig33. Probabilities are also shown for each stage of the early development process. Medians and 95% confidence intervals derived from a bootstrap analysis of Hertig’s data indicate the precision in the estimates for fertilisation and embryo loss in the first two weeks. *Although Leridon’s values are based on Hertig, they do not fully match. Leridon reports losses of 15 and 27 in the first and second weeks respectively. However, Hertig’s 60% loss of abnormal pre-implantation embryos implies 25 (0.6 × 42) losses in the first week leaving 58, and 16 (58 × (6/21)) losses in the second week, leaving 42. ¥A value of πSOC = 0.90 was used to avoid the calculation of probabilities greater than 1.\n\nHertig’s analysis omits 47 cases from cycle days 20–24, comprising 44% of his data. 
It is clear why he cannot use it, since all five embryos were normal and, given his mathematical and biological assumptions, five normal implanting embryos could not become 29% (6/21) abnormal post-implantation. Furthermore, the data that define the 50% proportion of abnormal pre-implantation embryos (i.e., 4/8) are so few that any numerical variation will make a substantial difference to derived proportions. If he had observed 3/8 abnormal embryos, his estimate of pre-implantation loss would have been 13% rather than 30%; for 5/8 it would have been 48%, with a fertilisation rate of 111%, which is clearly impossible. It seems, therefore, that Hertig designed his analysis based on a post-hoc examination and selective use of the data. His own caveat about the lack of relevant and necessary data should be taken at least as seriously as his conclusions.\n\nHertig and Rock’s contribution to human embryology is undeniable. However, their quantitative conclusions regarding early embryo mortality have a low precision that undermines their biological credibility and utility. Such estimates cannot be regarded as a reliable foundation upon which to evaluate and understand natural human reproduction.\n\n\nDiscussion\n\nAnswering the question “How many fertilised human embryos die before or during implantation under natural conditions?” is difficult. Relevant, credible data are in short supply. Among regularly cited publications, the Lancet hypothesis26 is entirely speculative and, in the view of the current author, should cease to be used as an authoritative source. Clinical pregnancy studies are only useful for quantifying clinical pregnancy loss and contribute nothing to estimates of embryo mortality in the first two weeks post-fertilisation. Even Hertig’s unique dataset is inadequate to draw quantitative conclusions and oft-repeated values should be treated with scepticism. 
The hCG studies from 1988 onwards provide the best data for estimating embryo mortality, although a lack of information on fertilisation rates13,15,33,48,101 prevents satisfactory completion of the calculations. A recent re-analysis of these data has proposed plausible limits for reproductively normal women indicating that approximately 10–40% of embryos perish before implantation and 40–60% do so between fertilisation and birth34. However, these ranges are wide, particularly for pre-implantation mortality, reflecting the lack of appropriate data. Is there any possibility of narrowing down the numbers?\n\nTwo separate groups have previously collected embryos from women following carefully timed artificial insemination as part of fertility treatment. Insemination around the time of ovulation in women of proven fertility was followed 5 days later by uterine lavage to recover ova102–105. These data appear to hold promise for determining fertilisation efficiency and some authors have made quantitative inferences about embryo mortality from them16,19,20. However, such inferences are complicated by numerous confounding factors. For example, in one series104, from 88 uterine lavages following artificial insemination by donor (AID), 4 unfertilised eggs, 6 fragmented eggs and 27 embryos from 2 cell to blastocyst stage were retrieved. In the 51 cycles in which no egg or embryo was retrieved, there was one retained pregnancy, suggesting that the lavage and ova retrieval efficiency was reasonably high, albeit not perfect. These data therefore suggest that FECTOT was low (≈31/88 = 35%), although a proportion of fertilised eggs may have completely degenerated within the first 5 days. Assuming πSOC was high (given the targeted insemination), this suggests that πFERT ≈ 50%. In the context of the recent analysis34, this implies that πHCG is high and that levels of embryo mortality are therefore towards the lower end of the 10–40% and 40–60% ranges. 
However, the clinical pregnancy rate following transfer of the embryos was only 40%. This is equivalent to πHCG × πCLIN. If πCLIN ≈ 75%, as suggested by the hCG studies, this would mean that πHCG ≈ 50%. This would imply that πFERT is high but that degeneration of fertilised eggs is also high and occurs before day 5 (and was therefore unobserved), and hence that levels of embryo mortality tend towards the upper end of the 10–40% and 40–60% ranges.\n\nIt is possible that the lavage/transfer procedure reduced implantation and early developmental efficiency, thereby reducing πHCG × πCLIN. A comparison of AID pregnancy rates may provide some insight, as suggested by the authors104. The clinical pregnancy rate in their pharmacologically unstimulated cohort was 12.5% (11/88), which is lower than the equivalent 18.9% observed for fresh semen AID106, and also lower than the live birth rate (which additionally incorporates clinical pregnancy losses) of 14.7% reported by the HFEA for AID in 2012 in unstimulated women aged 18–34107. These different success rates suggest that the lavage/transfer procedure did adversely affect implantation and early gestation, with clear implications for quantitative extrapolation. Furthermore, the women who were embryo recipients were receiving fertility treatment and their overall fertility may have been lower than expected in a normal healthy cohort. In summary, it seems that there are too many unresolved variables in these data to narrow down estimates of fertilisation (πFERT) or implantation (πHCG) rates.\n\nWith high fecundability, the range of possible embryo mortality rates falls. Red deer hinds have pregnancy rates of >85% following natural mating108: establishing numerical limits for embryo mortality under these efficient reproductive circumstances is more straightforward. By contrast, humans lack the instinct to mate predominantly during fertile periods, thereby reducing observed reproductive efficiency substantially. 
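The two competing readings of the lavage series reduce to simple probability bookkeeping, sketched below. The value used for πSOC is purely illustrative, since the source says only that it was assumed to be high; the variable names are the present reviewer's:

```python
# Scenario A: few eggs were fertilised.
fec_tot = 31 / 88            # conceptuses recovered per lavage cycle (~35%)
pi_soc = 0.70                # illustrative 'high' probability that a fertilisable egg was present
pi_fert_low = fec_tot / pi_soc        # ~0.50: fertilisation as the bottleneck

# Scenario B: fertilisation was efficient, but early degeneration was high.
preg_per_transfer = 0.40     # clinical pregnancy rate after embryo transfer
pi_clin = 0.75               # survival of hCG+ pregnancies to clinical stage (hCG studies)
pi_hcg_high_loss = preg_per_transfer / pi_clin   # ~0.53: heavy pre-clinical loss instead

print(round(pi_fert_low, 2), round(pi_hcg_high_loss, 2))
```

Both scenarios are arithmetically consistent with the same observations, which is precisely why the lavage data cannot discriminate between them.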
In studies of early pregnancy loss, owing to sub-optimal coital frequency and cohorts including sub-fertile couples, natural fecundability was almost certainly not maximised34. Combining data on coital frequency and hCG elevation may help to address this. In a later analysis, applying the Schwartz model85 to hCG data, Wilcox calculated a FECHCG value of 36% for high coital frequencies (>4 days with intercourse in 6 pre-ovulatory days)79. However, the model assumed that cycle viability was evenly distributed among couples, a condition which the authors recognised was not true and is contradicted by a subsequent analysis which suggests that approximately a quarter of the Wilcox cohort was sub-fertile34. If possible, focussing analytical attention on normally fertile women with the highest coital frequencies may help to further narrow the range of plausible embryo mortality.\n\nIn this review of natural early embryo mortality no use has been made of data from in vitro fertilisation (IVF) and associated laboratory studies. Sub-optimal conditions for embryo culture mean that it was109,110 and probably still is111 doubtful that reliable values can be extrapolated from laboratory in vitro to natural in vivo circumstances20. Importantly, the reproductive stages are also altered. In IVF, πSOC = 1 and for transferred embryos πFERT = 1. Furthermore, transferred embryos are selected based on quality criteria, however inexact those may be111,112. IVF program manipulations may reduce πHCG compared to natural circumstances3 and implantation failure remains a substantial issue for IVF113,114. 
Although the reported live birth rate per IVF cycle has risen (from 14% in 1991 to 25.4% in 201234), comparison of IVF success rates and natural live birth fecundability values involves too many undefined variables to shed numerical light on early natural embryo development and mortality.\n\nIn vitro fertilisation per se may provide some insight into values of πFERT, since πSOC = 1, and successful fertilisation can be observed. In seven studies of natural cycle IVF, fertilisation was successful in 70.9% (443/625) of attempts115–121. If this represented natural, in vivo fertilisation, based on the recent analysis34, it implies that πHCG ≈ 0.75, focusing estimates for pre-implantation embryo loss on 25%, and for total loss on 50%. However, high frequencies of chromosomal aberrations caused by the in vitro handling of human oocytes122 can render any comparison of natural and assisted reproduction open to criticism4.\n\nIn calculating summary values of embryo mortality, it is important to note that human fertility is as numerically heterogeneous as it could possibly be. Some couples are infertile and some are highly fertile. Excessive attention to averages and neglect of variances fosters a misleading appreciation of reality. The hCG studies clearly had both fertile and sub-fertile participants: use of overall values underestimated fecundability for the fertile majority34. Furthermore, apparently ‘optimal’ conditions for conception may not maximise human biological fecundability. Other biological factors also contribute to reproductive heterogeneity in humans; however, even after controlling for age-related decline, fecundability remains highly variable107,123. For intercourse occurring 2 days prior to ovulation, average fecundabilities resembled those previously published124, but for couples at the 5th and 95th percentiles, fecundabilities were 5% and 83% respectively. A fecundability of 83% implies a very low embryo mortality rate. 
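The natural-cycle IVF figure can be chained with the hCG-study survival rates to show how the loss estimates become focused. This is a hedged sketch of that arithmetic; the ~69% survival of hCG-positive pregnancies is taken from the 31% failure figure reported by Wilcox and Zinaman and cited in this article:

```python
pi_fert = 443 / 625                     # natural-cycle IVF fertilisation rate (~71%)
pi_hcg = 0.75                           # implied implantation probability (ref. 34)
pre_implantation_loss = 1 - pi_hcg      # focuses the estimate on ~25%

hcg_pos_survival = 0.69                 # ~31% of hCG+ pregnancies fail (Wilcox, Zinaman)
loss_fert_to_birth = 1 - pi_hcg * hcg_pos_survival   # ~0.48, i.e. roughly 50%

print(round(pre_implantation_loss, 2), round(loss_fert_to_birth, 2))
```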
In conclusion, apparent low fecundability in humans need not be caused solely by embryo mortality: defects of ovulation, mistimed coitus and fertilisation failure may also be responsible34. Where fecundability is low, any or all of these factors may contribute.\n\nPregnancy loss and embryo mortality under natural conditions are real and substantial. However, estimates of 90%29, 85%28, 80%6,27, 78%26, 76%5,25 and 70%19–23 loss are excessive and not supported by available data. Estimates for clinical pregnancy loss are approximately 10–20%. For women of reproductive age, losses between implantation and clinical recognition are approximately 10–25%. Loss from implantation to birth is approximately one third34,63,65,66.\n\nNatural pre-implantation embryo loss remains quantitatively undefined. In the absence of knowledge of πSOC and πFERT it is almost impossible to estimate precisely. Hertig’s estimate is 30%; however, mathematically and biologically implausible confidence intervals [-28%, 73%] betray the quantitative weaknesses in his data and analysis. The best available data are from studies monitoring daily hCG levels in women attempting to conceive63,65,66. Based on analyses of these data, in normal healthy women, 10–40% is a plausible range for pre-implantation embryo loss and overall pregnancy loss from fertilisation to birth is approximately 40–60%34. This latter range is similar to, although a little narrower than, the 25–70% suggested by Professor Robert Edwards125.\n\nIn the absence of suitable data to quantify pre-implantation loss, many published articles and reviews merely restate previously published values6,20,21. It has been suggested that “for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias”126. Widely held views on early embryo mortality may reflect an entrenched and biased view of the biology. 
For example, the Macklon “Black Box” review20 has been cited over 200 times (Web of Knowledge citations on 10th October 2016), with many articles explicitly referencing its 30% survival/70% failure value8,21,113,127–133. Macklon’s quantitative summary in his “Pregnancy Loss Iceberg” (30% implantation failure; 30% early pregnancy loss; 10% clinical miscarriage; 30% live births) is a direct, unedited reproduction of estimates published over 10 years previously19. A 30% pre-implantation loss fairly represents Hertig’s conclusions although, as has been shown, this estimate is highly imprecise. However, Macklon misrepresents the best data which he reviews63,65. Wilcox reports early pregnancy loss (i.e., [1 - πCLIN]) of 21.7%, whereas Macklon’s iceberg implies that 43% (30/70) of implanting embryos fail before clinical recognition. The iceberg’s clinical loss rate of 25% (10/40) is also higher than relevant data indicate (Figure 2 & Figure 3). Total loss of implanting (hCG+) embryos (i.e., [1 - (πCLIN × πLB)]) is 57% (40/70) according to the iceberg. By contrast, Wilcox63 and Zinaman65, both included in Macklon’s review, report that only 31% of hCG positive pregnancies fail.\n\nIf Macklon’s (and Chard’s19) estimates are excessive, as the data suggest, this casts doubt on claims113,132 that the frequency of embryonic abnormalities observed in vitro is representative of the natural in vivo situation. In turn, this implies that many of the chromosomal abnormalities observed in in vitro human embryos are, to a greater extent than currently recognised113, an artefact of the clinical and experimental context of assisted reproduction technologies.\n\nIn attempting to quantify pre-implantation embryo mortality it is easy to appreciate why “a claim of ‘no significant difference’ might easily be sustained against any interpretation proffered”48, and why estimates are “difficult to defend with any precision”5. 
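The internal arithmetic of Macklon’s iceberg, and its mismatch with the hCG data, can be checked directly. The per-100 figures below are those quoted above; the comparison values in the comments are from the Wilcox and Zinaman studies cited in this article:

```python
# Macklon's 'Pregnancy Loss Iceberg', expressed per 100 fertilised eggs.
implantation_failure, early_loss, clinical_miscarriage, live_births = 30, 30, 10, 30

implanting = 100 - implantation_failure                   # 70 hCG+ pregnancies
early_loss_rate = early_loss / implanting                 # 30/70 ~ 43%; Wilcox reports 21.7%
clinical = implanting - early_loss                        # 40 clinically recognised
clinical_loss_rate = clinical_miscarriage / clinical      # 10/40 = 25%; data indicate less
total_hcg_loss = (implanting - live_births) / implanting  # 40/70 ~ 57%; studies report ~31%

print(implanting, round(early_loss_rate, 3), clinical_loss_rate, round(total_hcg_loss, 3))
```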
In conclusion, “poor estimates of fertilization failure rate and the mortality at 2 weeks after fertilisation”15 drawn “from unusual or biased samples”134 indicate that the “black box” of early pregnancy loss20 is not as wide open as has been thought.\n\n\nData availability\n\nF1000Research: Dataset 1. Figure 1 data, 10.5256/f1000research.8937.d140569135\n\nF1000Research: Dataset 2. Figure 2 data, 10.5256/f1000research.8937.d140570136\n\nF1000Research: Dataset 3. Figure 3 data, 10.5256/f1000research.8937.d140571137\n\nF1000Research: Dataset 4. Pseudo-datasets of Hertig’s study, obtained via a bootstrap procedure, 10.5256/f1000research.8937.d140572138",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nAcknowledgements\n\nThanks are due to Professor David Paton, Dr Paul Schofield and Dr Amanda Sferruzzi-Perri for reviewing and providing helpful comments during the writing of this article.\n\n\nReferences\n\nOrd T: The scourge: moral implications of natural embryo loss. Am J Bioeth. 2008; 8(7): 12–9. PubMed Abstract | Publisher Full Text\n\nR (on the application of Smeaton) v Secretary of State for Health. [2002] EWHC 610 (Admin), [2002] All ER (D) 115 (Apr), 2002. Reference Source\n\nKennedy TG: Physiology of implantation. In Vitro Fert Ass Rep. 1997; 729–35. Reference Source\n\nBenagiano G, Farris M, Grudzinskas G: Fate of fertilized human oocytes. Reprod Biomed Online. 2010; 21(6): 732–41. PubMed Abstract | Publisher Full Text\n\nBoklage CE: Survival probability of human conceptions from fertilization to term. Int J Fertil. 1990; 35(2): 75, 79–80, 81–94. PubMed Abstract\n\nVitzthum VJ, Spielvogel H, Thornburg J, et al.: A prospective study of early pregnancy loss in humans. Fertil Steril. 2006; 86(2): 373–9. PubMed Abstract | Publisher Full Text\n\nBainbridge DR: Making Babies. A Visitor Within. London: Phoenix; 2001; 101–62 at 59ff. Reference Source\n\nRamos-Medina R, García-Segovia Á, León JA, et al.: New decision-tree model for defining the risk of reproductive failure. Am J Reprod Immunol. 2013; 70(1): 59–68. PubMed Abstract | Publisher Full Text\n\nNorwitz ER, Schust DJ, Fisher SJ: Implantation and the survival of early pregnancy. N Engl J Med. 2001; 345(19): 1400–8. PubMed Abstract | Publisher Full Text\n\nJames WH: The incidence of spontaneous abortion. Popul Stud (Camb). 1970; 24(2): 241–5. PubMed Abstract | Publisher Full Text\n\nSilver RM, Branch DW: Sporadic and recurrent pregnancy loss. In: Reece EA, Hobbins JC, editors. Clinical Obstetrics: The Fetus and Mother. 
3rd ed: Blackwell Publishing; 2007; 143–60. Publisher Full Text\n\nNishimura H: Fate of human fertilized eggs during prenatal life: present status of knowledge. Okajimas Folia Anat Jpn. 1970; 46(6): 297–305. PubMed Abstract | Publisher Full Text\n\nShort RV: When a conception fails to become a pregnancy. Ciba Found Symp. 1978; (64): 377–94. PubMed Abstract\n\nOpitz JM: The Farber lecture. Prenatal and perinatal death: the future of developmental pathology. Pediatr Pathol. 1987; 7(4): 363–94. PubMed Abstract | Publisher Full Text\n\nBiggers JD: Risks of In Vitro Fertilization and Embryo Transfer in Humans. In: Crosignani PG, Rubin BL, editors. In Vitro Fertilization and Embryo Transfer. London: Academic Press; 1983; 393–410. Reference Source\n\nJohnson MH: Chapter 15: Fetal Challenges. Essential Reproduction. 7th ed. Oxford: Wiley-Blackwell; 2013; 258–69. Reference Source\n\nBiggers JD: In vitro fertilization and embryo transfer in human beings. N Engl J Med. 1981; 304(6): 336–42. PubMed Abstract | Publisher Full Text\n\nLeridon H: Intrauterine Mortality. Human Fertility: The Basic Components. Chicago: The University of Chicago Press; 1977; 48–81. Reference Source\n\nChard T: Frequency of implantation and early pregnancy loss in natural cycles. Baillieres Clin Obstet Gynaecol. 1991; 5(1): 179–89. PubMed Abstract | Publisher Full Text\n\nMacklon NS, Geraedts JP, Fauser BC: Conception to ongoing pregnancy: the 'black box' of early pregnancy loss. Hum Reprod Update. 2002; 8(4): 333–43. PubMed Abstract | Publisher Full Text\n\nFord HB, Schust DJ: Recurrent pregnancy loss: etiology, diagnosis, and therapy. Rev Obstet Gynecol. 2009; 2(2): 76–83. PubMed Abstract | Free Full Text\n\nMcCoy RC, Demko Z, Ryan A, et al.: Common variants spanning PLK4 are associated with mitotic-origin aneuploidy in human embryos. Science. 2015; 348(6231): 235–8. PubMed Abstract | Publisher Full Text\n\nLoke YW, King A: Human Implantation: Cell Biology and Immunology. 
Cambridge: Cambridge University Press; 1995. Reference Source\n\nAmerican College of Obstetricians and Gynecologists: Technical Bulletin No. 212: Early pregnancy loss. Int J Gynaecol Obstet. 1995; 51(3): 278–85. PubMed Abstract | Publisher Full Text\n\nDrife JO: What proportion of pregnancies are spontaneously aborted? Brit Med J. 1983; 286(6361): 294.\n\nRoberts CJ, Lowe CR: Where have all the conceptions gone? Lancet. 1975; 305: 498–9. Publisher Full Text\n\nJohnson MH, Everitt BJ: Chapter 15: Fertility. Essential Reproduction. 5th ed. Oxford: Wiley-Blackwell; 2000; 251–74. Reference Source\n\nBraude PR, Johnson MH: The Embryo in Contemporary Medical Science. In: Dunstan GR, editor. The Human Embryo: Aristotle and the Arabic and European Traditions. Exeter: University of Exeter Press; 1990; 208–21. Reference Source\n\nOpitz JM: Human Development - The Long and the Short of it. In: Furton EJ, Mitchell LA, editors. What is Man, O Lord? The Human Person in a Biotech Age; Eighteenth Workshop for Bishops. Boston, MA: The National Catholic Bioethics Center; 2002; 131–53.\n\nLeridon H: Fecundability. Human Fertility: The Basic Components. Chicago: The University of Chicago Press; 1977; 22–47. Reference Source\n\nHarris J: Stem cells, sex, and procreation. Camb Q Healthc Ethics. 2003; 12(4): 353–71. PubMed Abstract | Publisher Full Text\n\nRahner K: Theological Investigations, Vol IX. London: DLT; 1972.\n\nHertig AT: The Overall Problem in Man. In: Benirschke K, editor. Comparative Aspects of Reproductive Failure. An International Conference at Dartmouth Medical School. Berlin: Springer Verlag; 1967. Publisher Full Text\n\nJarvis GE: Estimating limits for natural human embryo mortality [version 1; referees: 2 approved]. F1000Res. 2016; 5: 2083. Publisher Full Text\n\nBulletti C, Flamigni C, Giacomucci E: Reproductive failure due to spontaneous abortion and recurrent miscarriage. Hum Reprod Update. 1996; 2(2): 118–36. 
PubMed Abstract | Publisher Full Text\n\nDevolder K, Harris J: The ambiguity of the embryo: Ethical inconsistency in the human embryonic stem cell debate. Metaphilosophy. 2007; 38(2–3): 153–69. Publisher Full Text\n\nGreen RM: The Human Embryo Research Debates: Bioethics in the Vortex of Controversy. Oxford: Oxford University Press; 2001. Reference Source\n\nHertig AT, Rock J, Adams EC: A description of 34 human ova within the first 17 days of development. Am J Anat. 1956; 98(3): 435–93. PubMed Abstract | Publisher Full Text\n\nLetter: Where have all the conceptions gone? Lancet. 1975; 1(7907): 636–7. PubMed Abstract | Publisher Full Text\n\nCooke ID: Failure of implantation and its relevance to subfertility. J Reprod Fertil Suppl. 1988; 36: 155–9. PubMed Abstract\n\nCatalano RA, Saxton KB, Bruckner TA, et al.: Hormonal evidence supports the theory of selection in utero. Am J Hum Biol. 2012; 24(4): 526–32. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFrench FE, Bierman JM: Probabilities of fetal mortality. Public Health Rep. 1962; 77(10): 835–47. PubMed Abstract | Free Full Text\n\nShapiro S, Jones EW, Densen PM: A life table of pregnancy terminations and correlates of fetal loss. Milbank Mem Fund Q. 1962; 40(1): 7–45. PubMed Abstract\n\nErhardt CL: Pregnancy Losses in New York City, 1960. Am J Public Health Nations Health. 1963; 53(9): 1337–52. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPettersson F: Epidemiology of Early Pregnancy Wastage. Stockholm: Svenska Bokförlaget; 1968. Reference Source\n\nShapiro S, Levine HS, Abramowicz M: Factors associated with early and late fetal loss. Adv Planned Parenthood. 1970; 6: 45–63.\n\nTaylor WF: The Probability of Fetal Death. In: Fraser FC, McCusick VA, editors. Congenital Malformations. Amsterdam: Excerpta Medica; 1970; 307–20.\n\nKline J, Stein Z, Susser M: Conception and Reproductive Loss: Probabilities. Conception to Birth. Epidemiology of Prenatal Development. 
New York: OUP; 1989; 43–68.\n\nMaster-Hunter T, Heiman DL: Amenorrhea: evaluation and treatment. Am Fam Physician. 2006; 73(8): 1374–82. PubMed Abstract\n\nCommittee on Practice Bulletins—Gynecology: Practice bulletin no. 128: diagnosis of abnormal uterine bleeding in reproductive-aged women. Obstet Gynecol. 2012; 120(1): 197–206. PubMed Abstract | Publisher Full Text\n\nGrudzinskas JG, Nysenbaum AM: Failure of human pregnancy after implantation. Ann N Y Acad Sci. 1985; 442: 38–44. PubMed Abstract | Publisher Full Text\n\nMorton H, Rolfe B, Clunie GJ: An early pregnancy factor detected in human serum by the rosette inhibition test. Lancet. 1977; 1(8008): 394–7. PubMed Abstract | Publisher Full Text\n\nChard T, Grudzinskas JG: Early pregnancy factor. Biol Res Pregnancy Perinatol. 1987; 8(2 2D Half): 53–6. PubMed Abstract\n\nCole LA: hCG, the wonder of today's science. Reprod Biol Endocrinol. 2012; 10: 24. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWilcox AJ, Weinberg CR, Wehmann RE, et al.: Measuring early pregnancy loss: laboratory and field methods. Fertil Steril. 1985; 44(3): 366–74. PubMed Abstract\n\nRegan L: A prospective study of spontaneous abortion. In: Beard RW, Sharp F, editors. Early Pregnancy Loss: Mechanisms and Treatment. Springer-Verlag; 1988; 23–37. Publisher Full Text\n\nOdell WD, Griffin J: Pulsatile secretion of human chorionic gonadotropin in normal adults. N Engl J Med. 1987; 317(27): 1688–91. PubMed Abstract | Publisher Full Text\n\nMiller JF, Williamson E, Glue J, et al.: Fetal loss after implantation. A prospective study. Lancet. 1980; 2(8194): 554–6. PubMed Abstract | Publisher Full Text\n\nEdmonds DK, Lindsay KS, Miller JF, et al.: Early embryonic mortality in women. Fertil Steril. 1982; 38(4): 447–53. PubMed Abstract | Publisher Full Text\n\nWhittaker PG, Taylor A, Lind T: Unsuspected pregnancy loss in healthy women. Lancet. 1983; 1(8334): 1126–7. 
PubMed Abstract | Publisher Full Text\n\nVidela-Rivero L, Etchepareborda JJ, Kesseru E: Early chorionic activity in women bearing inert IUD, copper IUD and levonorgestrel-releasing IUD. Contraception. 1987; 36(2): 217–26. PubMed Abstract | Publisher Full Text\n\nWalker EM, Lewis M, Cooper W, et al.: Occult biochemical pregnancy: fact or fiction? Br J Obstet Gynaecol. 1988; 95(7): 659–63. PubMed Abstract | Publisher Full Text\n\nWilcox AJ, Weinberg CR, O'Connor JF, et al.: Incidence of early loss of pregnancy. N Engl J Med. 1988; 319(4): 189–94. PubMed Abstract | Publisher Full Text\n\nHakim RB, Gray RH, Zacur H: Infertility and early pregnancy loss. Am J Obstet Gynecol. 1995; 172(5): 1510–7. PubMed Abstract | Publisher Full Text\n\nZinaman MJ, Clegg ED, Brown CC, et al.: Estimates of human fertility and pregnancy loss. Fertil Steril. 1996; 65(3): 503–9. PubMed Abstract | Publisher Full Text\n\nWang X, Chen C, Wang L, et al.: Conception, early pregnancy loss, and time to clinical pregnancy: a population-based prospective study. Fertil Steril. 2003; 79(3): 577–84. PubMed Abstract | Publisher Full Text\n\nSasaki Y, Ladner DG, Cole LA: Hyperglycosylated human chorionic gonadotropin and the source of pregnancy failures. Fertil Steril. 2008; 89(6): 1781–6. PubMed Abstract | Publisher Full Text\n\nKoot YE, Boomsma CM, Eijkemans MJ, et al.: Recurrent pre-clinical pregnancy loss is unlikely to be a 'cause' of unexplained infertility. Hum Reprod. 2011; 26(10): 2636–41. PubMed Abstract | Publisher Full Text\n\nCole LA: Hyperglycosylated hCG and pregnancy failures. J Reprod Immunol. 2012; 93(2): 119–22. PubMed Abstract | Publisher Full Text\n\nMumford SL, Silver RM, Sjaarda LA, et al.: Expanded findings from a randomized controlled trial of preconception low-dose aspirin and pregnancy loss. Hum Reprod. 2016; 31(3): 657–65. PubMed Abstract | Publisher Full Text\n\nWilcox AJ, Baird DD, Weinberg CR, et al.: The use of biochemical assays in epidemiologic studies of reproduction. 
Environ Health Perspect. 1987; 75: 29–35. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBrattebø G: Occult biochemical pregnancy: fact or fiction? Br J Obstet Gynaecol. 1989; 96(2): 252–4. PubMed Abstract | Publisher Full Text\n\nWalker EM, Lewis M, Howie PW: Authors' reply. Br J Obstet Gynaecol. 1989; 96(2): 253–4. Publisher Full Text\n\nWilcox AJ, Weinberg CR, Baird DD: Subclinical embryonic loss. Fertil Steril. 1989; 51(5): 907–8. PubMed Abstract | Publisher Full Text\n\nWilcox AJ, Weinberg CR, Baird DD: Risk factors for early pregnancy loss. Epidemiology. 1990; 1(5): 382–5. PubMed Abstract\n\nSapra KJ, Buck Louis GM, Sundaram R, et al.: Signs and symptoms associated with early pregnancy loss: findings from a population-based preconception cohort. Hum Reprod. 2016; 31(4): 887–96. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWilcox AJ, Baird DD, Weinberg CR: Time of implantation of the conceptus and loss of pregnancy. N Engl J Med. 1999; 340(23): 1796–9. PubMed Abstract | Publisher Full Text\n\nWilcox AJ, Dunson DB, Weinberg CR, et al.: Likelihood of conception with a single act of intercourse: providing benchmark rates for assessment of post-coital contraceptives. Contraception. 2001; 63(4): 211–5. PubMed Abstract | Publisher Full Text\n\nWilcox AJ, Weinberg CR, Baird DD: Timing of sexual intercourse in relation to ovulation. Effects on the probability of conception, survival of the pregnancy, and sex of the baby. N Engl J Med. 1995; 333(23): 1517–21. PubMed Abstract | Publisher Full Text\n\nWilcox AJ, Baird DD, Dunson D, et al.: Natural limits of pregnancy testing in relation to the expected menstrual period. JAMA. 2001; 286(14): 1759–61. PubMed Abstract | Publisher Full Text\n\nWeinberg CR, Gladen BC, Wilcox AJ: Models relating the timing of intercourse to the probability of conception and the sex of the baby. Biometrics. 1994; 50(2): 358–67. 
PubMed Abstract | Publisher Full Text\n\nWeinberg CR, Moledor E, Baird DD, et al.: Is there a seasonal pattern in risk of early pregnancy loss? Epidemiology. 1994; 5(5): 484–9. PubMed Abstract\n\nWeinberg CR, Hertz-Picciotto I, Baird DD, et al.: Efficiency and bias in studies of early pregnancy loss. Epidemiology. 1992; 3(1): 17–22. PubMed Abstract | Publisher Full Text\n\nBarrett JC, Marshall J: The risk of conception on different days of the menstrual cycle. Popul Stud (Camb). 1969; 23(3): 455–61. PubMed Abstract | Publisher Full Text\n\nSchwartz D, Macdonald PD, Heuchel V: Fecundability, coital frequency and the viability of ova. Popul Stud (Camb). 1980; 34(2): 397–400. PubMed Abstract | Publisher Full Text\n\nBoklage CE: The frequency and survival probability of natural twin conceptions. In: Keith LG, Papiernik E, Keith DM, Lukie B, editors. Multiple Pregnancy: Epidemiology, Gestation and Perinatal Outcome. New York: Parthenon Publishing Group; 1995; 41–50. Reference Source\n\nHertwig O: Beiträge zur Kenntniss der Bildung, Befruchtung und Theilung des thierischen Eies (Contributions to the knowledge of the formation, fertilization and division of the animal egg). Morphol Jahrb. 1876; 1: 347–434.\n\nSteptoe PC, Edwards RG: Birth after the reimplantation of a human embryo. Lancet. 1978; 2(8085): 366. PubMed Abstract | Publisher Full Text\n\nClift D, Schuh M: Restarting life: fertilization and the transition from meiosis to mitosis. Nat Rev Mol Cell Biol. 2013; 14(9): 549–62. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHertig AT, Rock J, Adams EC, et al.: Thirty-four fertilized human ova, good, bad and indifferent, recovered from 210 women of known fertility; a study of biologic wastage in early human pregnancy. Pediatrics. 1959; 23(1 Part 2): 202–11. PubMed Abstract\n\nHertig AT: A fifteen-year search for first-stage human ova. JAMA. 1989; 261(3): 434–5. 
PubMed Abstract | Publisher Full Text\n\nRock J, Hertig AT: Some aspects of early human development. Am J Obstet Gynecol. 1942; 44(6): 973–83. Publisher Full Text\n\nHertig AT, Rock J: On a human blastula recovered from the uterine cavity 4 days after ovulation. Anat Rec. 1946; 94: 469. PubMed Abstract\n\nHertig AT, Rock J: A series of potentially abortive ova recovered from fertile women prior to the first missed menstrual period. Am J Obstet Gynecol. 1949; 58(5): 968–93, illust. PubMed Abstract | Publisher Full Text\n\nHertig AT, Rock J: Two human ova of the pre-villous stage, having a developmental age of about 8 and 9 days respectively. Contrib Embryol. 1949; 33(213–221): 169–86. PubMed Abstract\n\nHertig AT, Adams EC, McKay DG, et al.: A thirteen-day human ovum studied histochemically. Am J Obstet Gynecol. 1958; 76(5): 1025–40; discussion 40-3. PubMed Abstract | Publisher Full Text\n\nHertig AT, Rock J: Searching for early fertilized human ova. Gynecol Invest. 1973; 4(3): 121–39. PubMed Abstract | Publisher Full Text\n\nBarrett JC: Fecundability and coital frequency. Popul Stud (Camb). 1971; 25(2): 309–13. PubMed Abstract | Publisher Full Text\n\nClopper CJ, Pearson ES: The use of confidence or fiducial limits illustrated in the case of the binomial. Biometrika. 1934; 26(4): 404–13. Publisher Full Text\n\nAgresti A, Coull BA: Approximate is better than \"exact\" for interval estimation of binomial proportions. Am Stat. 1998; 52(2): 119–26. Publisher Full Text\n\nEdwards RG: The Cleaving Embryo and the Blastocyst. In: Conception in the Human Female. London: Academic Press; 1980; 668–766 at 47ff.\n\nBuster JE, Bustillo M, Rodi IA, et al.: Biologic and morphologic development of donated human ova recovered by nonsurgical uterine lavage. Am J Obstet Gynecol. 1985; 153(2): 211–7. PubMed Abstract | Publisher Full Text\n\nFormigli L, Formigli G, Roccio C: Donation of fertilized uterine ova to infertile women. Fertil Steril. 1987; 47(1): 162–5. 
PubMed Abstract | Publisher Full Text\n\nFormigli L, Roccio C, Belotti G, et al.: Non-surgical flushing of the uterus for pre-embryo recovery: possible clinical applications. Hum Reprod. 1990; 5(3): 329–35. PubMed Abstract\n\nSauer MV, Bustillo M, Rodi IA, et al.: In-vivo blastocyst production and ovum yield among fertile women. Hum Reprod. 1987; 2(8): 701–3. PubMed Abstract\n\nRichter MA, Haning RV Jr, Shapiro SS: Artificial donor insemination: fresh versus frozen semen; the patient as her own control. Fertil Steril. 1984; 41(2): 277–80. PubMed Abstract | Publisher Full Text\n\nHFEA: Fertility Treatment in 2013 - trends and figures. Human Fertilisation & Embryology Authority. 2013. Reference Source\n\nAsher GW: Reproductive cycles of deer. Anim Reprod Sci. 2011; 124(3–4): 170–5. PubMed Abstract | Publisher Full Text\n\nBolton VN, Braude PR: Development of the human preimplantation embryo in vitro. Curr Top Dev Biol. 1987; 23: 93–114. PubMed Abstract\n\nJones HW Jr, Oehninger S, Bocca S, et al.: Reproductive efficiency of human oocytes fertilized in vitro. Facts Views Vis Obgyn. 2010; 2(3): 169–71. PubMed Abstract | Free Full Text\n\nBolton VN, Leary C, Harbottle S, et al.: How should we choose the 'best' embryo? A commentary on behalf of the British Fertility Society and the Association of Clinical Embryologists. Hum Fertil (Camb). 2015; 18(3): 156–64. PubMed Abstract | Publisher Full Text\n\nMachtinger R, Racowsky C: Morphological systems of human embryo assessment and clinical evidence. Reprod Biomed Online. 2013; 26(3): 210–21. PubMed Abstract | Publisher Full Text\n\nNiakan KK, Han J, Pedersen RA, et al.: Human pre-implantation embryo development. Development. 2012; 139(5): 829–41. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKoot YE, Teklenburg G, Salker MS, et al.: Molecular aspects of implantation failure. Biochim Biophys Acta. 2012; 1822(12): 1943–50. 
PubMed Abstract | Publisher Full Text\n\nDaya S, Gunby J, Hughes EG, et al.: Natural cycles for in-vitro fertilization: cost-effectiveness analysis and factors influencing outcome. Hum Reprod. 1995; 10(7): 1719–24. PubMed Abstract\n\nZayed F, Lenton EA, Cooke ID: Natural cycle in-vitro fertilization in couples with unexplained infertility: impact of various factors on outcome. Hum Reprod. 1997; 12(11): 2402–7. PubMed Abstract | Publisher Full Text\n\nBassil S, Godin PA, Donnez J: Outcome of in-vitro fertilization through natural cycles in poor responders. Hum Reprod. 1999; 14(5): 1262–5. PubMed Abstract | Publisher Full Text\n\nRoesner S, Pflaumer U, Germeyer A, et al.: Natural cycle IVF: evaluation of 463 cycles and summary of the current literature. Arch Gynecol Obstet. 2014; 289(6): 1347–54. PubMed Abstract | Publisher Full Text\n\nOmland AK, Fedorcsák P, Storeng R, et al.: Natural cycle IVF in unexplained, endometriosis-associated and tubal factor infertility. Hum Reprod. 2001; 16(12): 2587–92. PubMed Abstract | Publisher Full Text\n\nJanssens RM, Lambalk CB, Vermeiden JP, et al.: In-vitro fertilization in a spontaneous cycle: easy, cheap and realistic. Hum Reprod. 2000; 15(2): 314–8. PubMed Abstract | Publisher Full Text\n\nFahy UM, Cahill DJ, Wardle PG, et al.: In-vitro fertilization in completely natural cycles. Hum Reprod. 1995; 10(3): 572–5. PubMed Abstract\n\nBraude PR, Johnson MH, Pickering SJ, et al.: Mechanisms of Early Embryonic Loss In Vivo and In Vitro. In: Chapman. M, Grudzinskas G, Chard T, editors. The Embryo: Normal and Abnormal Development and Growth. London: Springer-Verlag; 1991; 1–10. Publisher Full Text\n\nDunson DB, Colombo B, Baird DD: Changes with age in the level and duration of fertility in the menstrual cycle. Hum Reprod. 2002; 17(5): 1399–403. PubMed Abstract | Publisher Full Text\n\nWilcox AJ, Weinberg CR, Baird DD: Post-ovulatory ageing of the human oocyte and embryo failure. Hum Reprod. 1998; 13(2): 394–7. 
PubMed Abstract | Publisher Full Text\n\nEdwards RG: Sexuality and Coitus. In: Conception in the Human Female. London: Academic Press; 1980; 525–72 at 60ff.\n\nIoannidis JP: Why most published research findings are false. PLoS Med. 2005; 2(8): e124. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMcCoy RC, Demko ZP, Ryan A, et al.: Evidence of Selection against Complex Mitotic-Origin Aneuploidy during Preimplantation Development. PLoS Genet. 2015; 11(10): e1005601. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHarris J: Germline Modification and the Burden of Human Existence. Camb Q Healthc Ethics. 2016; 25(1): 6–18. PubMed Abstract | Publisher Full Text\n\nSaravelos SH, Regan L: Early pregnancy failure after assisted reproductive technology. In: Pregnancy after Assisted Reproductive Technology. 2012; 51–65.\n\nJones DG, Towns CR: Navigating the quagmire: the regulation of human embryonic stem cell research. Hum Reprod. 2006; 21(5): 1113–6. PubMed Abstract | Publisher Full Text\n\nDupont C, Froenicke L, Lyons LA, et al.: Chromosomal instability in rhesus macaque preimplantation embryos. Fertil Steril. 2009; 91(4): 1230–7. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDaughtry BL, Chavez SL: Chromosomal instability in mammalian pre-implantation embryos: potential causes, detection methods, and clinical consequences. Cell Tissue Res. 2016; 363(1): 201–25. PubMed Abstract | Publisher Full Text\n\nShorten PR, Peterson AJ, O'Connell AR, et al.: A mathematical model of pregnancy recognition in mammals. J Theor Biol. 2010; 266(1): 62–9. PubMed Abstract | Publisher Full Text\n\nPotts M, Diggory P, Peel J: Spontaneous Abortion. In: Abortion. Cambridge: Cambridge University Press; 1977; 45–64. Reference Source\n\nJarvis G: Dataset 1 in: Early embryo mortality in natural human reproduction: What the data say. F1000Research. 2016. Data Source\n\nJarvis G: Dataset 2 in: Early embryo mortality in natural human reproduction: What the data say. 
F1000Research. 2016. Data Source\n\nJarvis G: Dataset 3 in: Early embryo mortality in natural human reproduction: What the data say. F1000Research. 2016. Data Source\n\nJarvis G: Dataset 4 in: Early embryo mortality in natural human reproduction: What the data say. F1000Research. 2016. Data Source"
}
|
[
{
"id": "19546",
"date": "20 Jan 2017",
"name": "Philippa Saunders",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved: The paper is scientifically sound in its current form and only minor, if any, improvements are suggested.\n\nApproved with reservations: A number of small changes, sometimes more significant revisions, are required to address specific details and improve the paper’s academic merit.\n\nNot approved: Fundamental flaws in the paper seriously undermine the findings and conclusions.\n\nThe author has provided a provocative and timely review of the evidence related to pregnancy success. The author has focused on the evidence as he sees it from four different categories of review published in the last few decades.\n\nWe think the title is appropriate and will attract attention, and the abstract is generally well drafted, but the final sentence ends rather abruptly. We suggest the author might wish to consider a more robust/informative ending to his abstract as this will be read alone in PubMed.\n\nThe following comments are provided in a spirit of trying to increase access to this article for a broader readership that might not otherwise be able to consider its contents; as currently written it seems largely to appeal to people who might be interested in statistical analysis. Specifically, we would like to see the author consider how he might frame the evidence he provides alongside a timeline of the different stages of early pregnancy – this would mean individuals who are not well versed in reproductive function would be able to understand the arguments he is providing.\n\nWe are pleased to see this article being written. We think it is timely, thought-provoking and this is an excellent moment in which to consider in realistic terms the kind of evidence that is constantly requoted in the debate about how fertile the human species is. 
Currently this topic is dominated by data from studies on women who are sub/infertile receiving medical support to achieve a pregnancy.\n\nSpecific points\n\nWho is the audience for this paper and does the introduction set the scene in such a way that the reader will be both interested and motivated to read the remaining part of the paper, which I would like to see them do? I think as written the Introduction may not achieve this objective. For example, the first sentence starts with some glib comments about it being ‘widely accepted’ that under natural circumstances human embryo mortality is high, and then there is an extensive section quoting a number of popularist articles and websites – why have this up front? It seemed to undermine the erudite arguments of the rest of the paper.\n\nThe second paragraph with some modification would make a sufficient introduction. The aim of the review as stated in the discussion ‘How many fertilized human embryos..?’ should also be frontloaded at some point here. Clearly embryo mortality is of interest to both reproductive biologists and fertility doctors but why not also mention couples trying to conceive?\n\nReading the introduction we were struck by the pressing need for a ‘key terms’ box – the kind of thing you see in Nature papers – where there is a definition of each of the terms used, e.g. Fecundability, embryo, HCG, etc. If this paper is going to be read by individuals who are not fertility experts or experts in reproductive biology but people interested in ethics or chance or statistics, I think they will be very confused by the different terms that are used.\n\nWhat is not clear from the paper is the chronology of the observations/data being discussed. It is common for people (even those familiar with the field but who work on animal models) to be very confused by the timings in women. For example – the day on which fertilisation takes place versus the last menstrual period, e.g. 
fertilisation versus gestation versus the first day (depending on when you count from) on which you might reasonably expect to detect HCG in the urine. We would argue there needs to be a figure defining when each of these happens in terms of days in a woman's reproductive span. This could also help clarify the points in the process that the probabilities of πFERT, πCLIN etc can apply to.\n\nThe second piece of information where we think it would be very helpful is under the section called 'What the data say', where terms such as ‘old’ are used and there are no dates or refs provided. What do they mean by 'old' - pre 1960, pre 1950, pre 1940?\n\nBecause the author has used numbered references, there is also no sense of the relationship of one study to another in terms of dates, i.e. how they chronologically relate to each other. Some minor reworking in which the author says, for instance, \"the work of Hertig and Rock in the 1950s\" would be helpful.\n\nThe author is also slightly confusing when talking about the pregnancy study (ref 42), not giving the names of the authors nor the date on which it was published in the section on page 4; then, for instance in Fig 2, the pregnancy study is cited as ref 42 but in the figure it is shown as French and Bierman 1962. This is the kind of thing that makes it difficult to get a sense of the chronology of observations and how people have built on each other's observations in order to support subsequent studies, and this after all is one of the most crucial points of this paper.\n\nOn page 6 we finally get to some discussion about modern pregnancy tests. It is not until some pages after that we know whether they are in blood or urine. Mid-cycle elevation of HCG - this is not defined in terms of days (cf comments above). 
For information, the fact that these assays were likely to be urine-based assays is not mentioned until page 7.\n\nWe think many aspects of this paper are extremely well argued, especially the data provided: the very detailed analysis in Table 3 and in other parts of page 7. Some very good points are made about the over-emphasis on using data from patient groups where infertility is probably one of the reasons for presentation, which may have produced a less robust data set.\n\nThe author makes a valid argument about potential subfertility within the Hertig cohort but this is not balanced. Equally, these women were selected for proven fecundity and this factor affects interpretation of this cohort as much as the other.\n\nOn page 10 the discussion starts with a key question: how many fertilised human embryos die? It is slightly frustrating that this was not put up front as the question being addressed in this paper. Maybe the author might like to consider setting out aims more clearly.\n\nAgain, in the discussion, many of the arguments being made would have been greatly enhanced by telling us the dates on which some of these studies were conducted. When looking at the reference list I see many of them were in the '80s and early '90s.\n\nWe wonder if the first paragraph on page 12 might reasonably be eliminated - it feels repetitive compared to other parts of the paper. I think the discussion of the studies by Macklon (review ref 20) is extremely insightful and useful. However we draw the author's attention to a more recent study by Macklon and Brosens which we believe puts forward some interesting arguments that might reasonably be discussed in his study about how the endometrium in which the embryos are set to implant might be acting as a ‘sensor’ of embryo quality. This is in Biology of Reproduction 2014, vol 91. There is also a complementary paper in Sci Rep, vol 6, Brosens et al. 
2014.\n\nThe conclusion of the discussion seems more like a continuation of the critique of the final few paragraphs. It would be desirable to provide a concluding paragraph which holistically draws together the content of the review. Again the heavy use of quoting references as appears in the introduction masks the opportunity for the author to provide his own conclusions.\n\nIn summary we welcome this review which we think makes many erudite comments on a difficult field.",
"responses": [
{
"c_id": "2682",
"date": "07 Jun 2017",
"name": "Gavin Jarvis",
"role": "Author Response",
"response": "I would like to thank Professor Saunders and Dr Gibson for their helpful review. I have tried to address each point as detailed below. The abstract has been re-drafted to provide a clearer and more robust conclusion. I have also brought the “How many…” question posed in the discussion to the beginning of the Abstract and into the Introduction as suggested. I hope that the changes made will make the article more accessible to a wider readership.\n\nResponses to specific points\n\nThe article is intended primarily for a scientific audience, although I agree that it may have wider appeal and I would be pleased for it to be read as such. The references to claims in the press and popular media are important (see later); however, I agree that as an opening this may be a distraction. I have therefore moved this section to the discussion. The list of scientific citations that make claims about embryo mortality is essential since it substantiates my claim that high embryonic mortality is widely reported. Additionally, it illustrates the large variance in these estimates. The introduction has been re-organised along the lines suggested. The second paragraph has become the start and the quantitative claims incorporated into it. The importance of embryo mortality to women trying to conceive is clear, and was already present in Version 1. It remains in Version 2 and has also been emphasised in the discussion. The “How many…” question has been frontloaded as suggested, and is now also in the abstract. A glossary of key terms box has been provided. A new figure provides a timeline for non-fecund and fecund cycles and key biological events. Most of the pre-1960 references are found in databases, e.g., PubMed; however, indexing appears less comprehensive as articles get older. Hertig (1959) (Ref. 90, PMID: 13613882) has no reference to pregnancy loss in its MeSH terms. Neither Rock & Hertig (1942) (Ref. 92) nor Opitz (2002) (Ref. 29) is indexed in PubMed. 
A PubMed search on {\"early pregnancy loss\"[All Fields]} generates 831 hits (25th April 2017) of which the earliest is 1971. However, I agree that the description in Version 1 is vague. I have removed the reference to “old”. I have edited the text in several places to include names and dates. I hope this will assist readers in following the chronology. The hCG studies are already arranged by date in Table 3 and Figure 3. Reference numbers have been added to the legend within the Figure, and names and dates included in the legend text to increase clarity. All of the 13 studies measured hCG in urine except two. These two employed serum samples and have been identified in the text. The term “mid-cycle” has been removed and the meaning clarified using the new Figure 1. Thank you for these comments. I agree with the point made. In this section the first point I make about the 210 recruited women is that they were of “proven fertility”. The same point is made again in the commentary on Hertig’s data. I have also edited the text in response to Reviewer 3 and hope that the final result is appropriately balanced. The question (slightly modified) has been included in the Abstract and Introduction. I hope that this helps to clarify and reinforce the purpose of the article. Some dates have been incorporated to enhance chronological clarity (see point 6). The first paragraph on page 12 addresses the importance of biological variance. It does not go into detail but stresses that point estimates of risk do not provide the whole picture when considering either populations or individual cases. As I have put it, a neglect of variance fosters a misleading appreciation of reality. I would prefer to retain this paragraph, in the hope that it will encourage readers to consider the importance and implications of numerical diversity when interpreting data.\n\nThe arguments proposed by Macklon and Brosens relating to endometrial receptivity are indeed interesting. 
However, they propose mechanistic explanations for implantation failure and do not directly address the issue of how frequently such events occur. Nevertheless, their inclusion is contextually valuable, and I have made some comments on their studies. The final paragraph has been edited. The quotations are useful to make it clear that I am not alone in drawing attention to the limitations of the available data. I have endeavoured to summarise the broad purpose and value of this work."
}
]
},
{
"id": "19502",
"date": "09 Feb 2017",
"name": "J Wilkinson",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThanks for the opportunity to review this high-quality manuscript. Peer review can be a chore, but this was a pleasure to read.\nI will state that my training is in statistics and research methodology. Although much of my work is in the field of fertility, I have no clinical expertise and no familiarity with the literature discussed in this review. Any comments I make are from the point of view of the statistician and, with respect to the subject-matter, the layperson.\nI am unable to comment on whether or not the body of evidence discussed in the review is comprehensive. However, the critical appraisal of these studies is conducted to a high standard, with a strong command of quantitative research methods on display. I can’t fault it. The reader is left in no doubt as to the considerable limitations (many of which appear to be fatal) of these studies. All data used in the manuscript have been made available for the purposes of reproducing the analysis.\nI was slightly confused by the description of the simulation study as a two-stage procedure in the critique of Roberts & Lowe. If I understand correctly, sets of simulated values for five quantities were drawn from Normal distributions centred around the estimates used by Roberts & Lowe, with standard deviations equal to these values multiplied by 0.2. Each time a new set of these five quantities was drawn, the values were used to calculate (predict) a value for embryo loss. This was done 100,000 times. 
However, the author speaks of 1,000 simulations, each containing 10,000 separate estimates. It is unclear what exactly varied within and between the 1000 simulations. If the data generating model was the same for all of these (ie: this was just done for computational reasons), then it would be helpful if the author could make this clear in the text.\nThe author assumed that the simulated quantities were independent in the simulation – I confess to having no real intuition as to the implications of this assumption. However, I don’t believe this would affect the author’s conclusion.\n\nOne minor typo; ‘this is far from being a robust pregnancy diagnosis and in different study [46]…’ I believe that it would be appropriate to accept this manuscript without revision, although the author may wish to clarify the point about the first simulation described above.",
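The two-stage Monte Carlo procedure discussed in this review can be sketched as follows. This is a minimal illustration only: the five central values, the way they are combined into conception and birth counts, and the seed are placeholders, not Roberts & Lowe's published figures.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical central values for the five model inputs (placeholders,
# NOT Roberts & Lowe's published figures).
centres = np.array([0.60, 0.85, 13.0, 5.0, 0.28])
CV = 0.2  # every input gets sd = 0.2 * mean, drawn independently

def simulate_loss(n):
    """Draw n independent sets of the five inputs; return n loss estimates."""
    x = rng.normal(centres, CV * centres, size=(n, 5))
    conceptions = x[:, 0] * x[:, 1] * x[:, 2] * x[:, 3]  # illustrative model
    births = x[:, 4] * x[:, 2]                           # illustrative model
    return 1.0 - births / conceptions

# Stage 1: 10,000 draws give one set of summary statistics.
# Stage 2: repeat 1,000 times and average those statistics, so that the
# reported percentiles are stable across runs of the random generator.
reps = np.array([np.percentile(simulate_loss(10_000), [2.5, 50.0, 97.5])
                 for _ in range(1_000)])
lo, med, hi = reps.mean(axis=0)
print(f"loss fraction: median {med:.2f}, 95% interval ({lo:.2f}, {hi:.2f})")
```

The repetition in stage 2 exists only for computational stability of the reported percentiles; the data-generating model is identical in every repetition.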
"responses": [
{
"c_id": "2681",
"date": "07 Jun 2017",
"name": "Gavin Jarvis",
"role": "Author Response",
"response": "I would like to thank Dr Wilkinson for his helpful review. I hope the following clarify the points raised.\n\nRoberts & Lowe Simulation: 10,000 simulated records provided data to generate one each of the following parameters: mean, median, and 2.5th & 97.5th percentiles. Repeating this process generated slightly different values for these parameters owing to the specification of the random number generator. Hence, the \"10,000 simulations step\" was repeated to obtain 1,000 means, medians and percentile values. It is the means of these parameters that are reported. The 100,000 simulated records were generated separately, albeit using the same model structure, simply to produce data for the Figure. I have edited the text to clarify these points.\n\nIndependence of random variables: It is plausible to suppose that some of the random variables may be correlated, e.g., length of cycle and length of fertile period. However, they may not be, or the extent of any correlation may be weak. I do not believe that constructing a full variance-covariance matrix for the model would shed any further light on the precision of the estimates of embryo mortality. There are too many undefined and imprecise variables for the model to be quantitatively useful. A solution to this imprecision would be to obtain and use more robust estimates, rather than build a more complex statistical model.\n\nThe minor typo has been corrected. Thank you."
}
]
},
{
"id": "22945",
"date": "23 May 2017",
"name": "Steven H. Orzack",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nDr. Jarvis assesses the empirical support for the belief that there is a “great deal” of fetal wastage in humans. His conclusion is that there is less wastage than is often believed and that the percent loss between conception and birth is 40-60%. Resolution of this issue is important, as it has substantial implications for our understanding of early human development.\nDr. Jarvis describes present understanding as (p. 2):\n\nAmong reputable scientific publications, including medical and reproductive biology text books, scientific reviews and primary research articles, reported mortality estimates include: 30–70% before and during implantation; >50%, 73% and 80% before the 6th week; 75% before the 8th week; 70% in the first trimester; 40–50% in the first 20 weeks; and 49%, >50%, 53%, 54%, 60%, >60%, 63%, 70%, 50–75%, 76%, 78%, 80–85%, >85%, and 90% total loss from fertilisation to term.\nHe states (p. 2) that four types of evidence underlie these claims:\n\n1. A speculative hypothesis published in The Lancet.\n2. Life tables of intra-uterine mortality.\n3. Studies of early pregnancy by biochemical detection of hCG.\n4. Anatomical studies of Dr Arthur Hertig and Dr John Rock.\n\nOn the basis of his review of this evidence, Dr. Jarvis concludes (p. 12) that “….10-40% is a plausible range for pre-implantation embryo loss and overall pregnancy loss from fertilization to birth is approximately 40-60%.”\n\nThis means that the best estimate of pre-birth mortality according to Dr. 
Jarvis is consistent with many previous estimates. In order to understand this consistency, it is useful to examine these types of evidence and what Dr. Jarvis makes of each. I discuss them in turn.\n\n1. The Lancet article is Roberts & Lowe (1975). These authors concluded (p. 498) from their “speculative” analysis of the number of married women age 20-29 in England and Wales and of the number of live and dead births that 78% of conceptions are lost. In order to generate this estimate, the authors estimated the number of conceptions in any given year (based on the number of sexual encounters, probability of fertilization, etc.). Dr. Jarvis assesses the influence of changing the number of conceptions on the estimate of fetal wastage and shows (p. 3) that a low estimate of the number of conceptions results in an estimate of 22% conceptions lost and that a high estimate of the number of conceptions results in an estimate of 92% loss. He also generates a 95% confidence interval for the loss percentage of 37% - 90% by doing a simulation in which each value contributing to the number of conceptions is normally-distributed with a mean identical to Roberts and Lowe’s value and a coefficient of variation of 20%. On this basis, he concludes about Roberts and Lowe’s analysis that (p. 1) it “….has no quantitative value.” and that (p. 4) it “….has no practical quantitative value”.\nDr. Jarvis provides a useful sensitivity analysis of Roberts and Lowe’s estimate, which should be taken seriously by those who may believe that their analysis is definitive (their paper has been cited more than 300 times, with many citations that point to the 78% estimate). That said, Dr. Jarvis’ conclusion that Roberts and Lowe’s analysis is quantitatively useless is itself incoherent. A number is a number and as a starting point, their estimate is useful although limited. If their analysis lacks “practical quantitative value” so too does the analysis of Dr. Jarvis. 
After all, there is no empirical basis for his assumptions about the statistical independence of the components contributing to his estimate of percentage loss, or that these components are normally-distributed, or that they have a coefficient of variation of 20%. It is not as though simply making arbitrary assumptions about the variability of parameters somehow means that an analysis is more quantitatively useful than one without such assumptions. The point is that both analyses have value. It is telling in this regard that their estimate is “close” to Dr. Jarvis’ estimate. In fact, one could readily claim that Dr. Jarvis’s analysis validates Roberts and Lowe’s estimate inasmuch as their estimate is within the 95% confidence interval he generates.\nBy way of understanding Roberts and Lowe’s self-described “speculative” work, it is important to note that it belongs to the voluminous “gray” literature relating to human pregnancy. This is the literature that is published without much review (if any) and without much requirement for rigor and data. To see this, one need go no farther than this passage (p. 498):\n\nAnimal studies, which allow a more systematic investigation of [pregnancy loss], have shown detectable prenatal losses ranging from 15 to 60% in domestic cattle, sheep, and pigs and in wild forms such as stoats, rats, squirrels, and rabbits.\nThey cite Austin (1972) for this claim. He merely states (p. 134):\n\nThe data show that prenatal losses ranging between 15 and 60 per cent occur in cattle, sheep and pigs, as well as in wild forms such as stoats, rats, squirrels and rabbits.\nNo data are cited! In fact, Austin’s gloss on the loss percentage for domesticated species is reasonably accurate (Casida, 1953; First & Eyestone, 1988; Lasley, 1957) although there are fewer data than one might imagine. It is of note that these species have been selected for offspring production and so how relevant these data are is not completely resolved. 
Perhaps fetal wastage in their wild relatives would be greater. My guess is that the data alluded to as being from “wild forms” are in papers such as those by Brambell (1942, 1948). That said, to my knowledge, it is not clear that such studies reliably account for early gestational losses. More generally, there are few “wild forms” for which there are estimates.\n\nThe overall point is that Roberts and Lowe’s paper contains a disconnection between data and conclusions that would be sustained even if one read the cited source. Their paper is best viewed as a heuristic exercise. This is not a criticism. It is meant to underscore that Dr. Jarvis’ conclusion that their paper is “useless” treats it as something that it isn’t. We are ignorant of the training of Drs. Roberts and Lowe but like many authors of the gray literature concerning pregnancy, they may have lacked rigorous training in research practice and data analysis. This is not inherently bad, as long as the nature of such publications is properly understood. As a community of scientists, we can make use of their insight into human pregnancy as long as its potential limitations are understood. We need all the help we can get!\n\n2. The “life tables of intra-uterine mortality” are French & Bierman (1962) and Léridon (1977). The former study is an analysis of pregnancies in Kauai, Hawaii; the authors’ conclusion was that approximately 24% of the pregnancies registered with an estimated gestational age of greater than four weeks would die. Léridon married this result with the data of Hertig, Rock, Adams, & Menkin (1959), which provide an estimate of wastage prior to four weeks, to infer that 63% of conceptions die before birth (Table 4.20, p. 81). Dr. Jarvis’ cautions about the assumptions that underlie this estimate are reasonable. That said, it is important to note that Léridon’s chapter (“Intrauterine Mortality”, pp. 48-81) is no casual exercise. 
It is the longest chapter in the book, and an open-minded reader can see that Table 4.20 is based upon reasonable assumptions that Léridon clearly states do not have as much of a solid empirical basis as would be desired. Unfortunately, Dr. Jarvis’ sole mentions of Léridon’s caveats are a statement (p. 5) in which Léridon describes (p. 56) an interpolation he makes (in his analysis of French and Bierman’s data) as “risky” and another in which his (Dr. Jarvis’) reanalyses of the French and Bierman data (p. 5) “reinforce a concern highlighted by Léridon”. To this extent, a reader of Dr. Jarvis’ paper could easily come away with the mistaken belief that Léridon’s analysis is superficial at best. As in the case of Roberts and Lowe’s estimate, it is important to note that Léridon’s estimate of conceptions lost of 63% is close to Dr. Jarvis’ estimate of 40-60%.\n\n3. “Studies of early pregnancy by biochemical detection of hCG.” The modern pregnancy test is based upon an assay of human chorionic gonadotrophin (hCG), an oligosaccharide glycoprotein hormone produced by embryonic cells. An elevated level of hCG is detectable six to fourteen days post-conception (Nepomnaschy, Weinberg, Wilcox, & Baird, 2008; Wilcox, Baird, & Weinberg, 1999). By this time, most embryos capable of implantation will have done so. Unfortunately, earlier pre-implantation detection of pregnancy based upon assay of the “Early Pregnancy Factor”, a heat-shock protein expressed within 48 hours of conception, is not in widespread use (Clarke, 1997; Fan & Zheng, 1997; Morton, Rolfe, & Cavanagh, 1992; Rolfe, 1982; Shahani, Moniz, Chitlange, & Meherji, 1991; Shahani, Moniz, Gokral, & Meherji, 1995; Smart, Fraser, Roberts, Clancy, & Cripps, 1982). Dr. Jarvis correctly describes the pioneering hCG results of Wilcox et al. 
(1988) and others (as summarized in Table 3), which indicate that the percentage loss of conceptions after hCG detection is between approximately 20 and 60%, with many estimates between 30 and 40%; Dr. Jarvis concludes (p. 6) that this percentage loss is approximately 33%.\nDr. Jarvis goes on to estimate that the “…loss from fertilization to birth [is] 40-60%”; this is based on combining three hCG-based estimates of percentage loss from conception to birth (35.7%: Wang et al., 2003; 31.3%: Wilcox et al., 1988; 31.3%: Zinaman, Clegg, Brown, O’Connor, & Selevan, 1996) with his estimate (pp. 7-8) that the implantation of embryos “…may be up to 90% efficient….” He concludes that higher estimates of loss from fertilization to birth from the literature are “excessive”.\nDr. Jarvis’ estimate is likely an underestimate. There is strong circumstantial evidence that many more than 10% of embryos do not successfully implant, as discussed below. The implication of this is that Dr. Jarvis’ estimate and the previous estimates are consistent. It is also worth noting that Dr. Jarvis uses an arbitrary estimate for implantation rate, even though he judges other analyses to be useless because they contain an arbitrary parameter estimate.\n\nDr. Jarvis goes on to criticize Boklage (1990) who estimated the percentage of unsuccessful conceptions based on an analysis of hCG data (see his Figure 2, p. 84). Dr. Jarvis is right to raise concerns (p. 8) that Boklage’s analysis is less definitive than desired. In particular, he states (p. 8) that Boklage’s assumption that the 21-day survival rate of conceptions is 28.7% is based upon a misinterpretation of a previous study. That said, Dr. Jarvis makes an unsubstantiated conclusion (p. 8) that “…quantitative conclusions from [Boklage’s] analysis in relation to the survival of naturally conceived human embryos are of doubtful validity”. 
This may be true, but this remains to be seen given the lack of any demonstration of the sensitivity of Boklage’s quantitative conclusions to changes in the underlying assumptions. Boklage’s analysis needs more careful scrutiny than given by Dr. Jarvis. For example, Boklage presents a formula for the percentage loss of conceptions as a function of time (p. 84). Are the coefficients estimated via a standard statistical approach such as maximum likelihood estimation and chosen via a likelihood ratio test or via comparison of AIC values associated with competing models? This is not clear. As such, it is unclear as to what to make of the predictions even putting aside Dr. Jarvis’ concerns about the biological validity of some of the underlying data. The equation appears to be based upon the assumption that a cohort of embryos is an admixture of those that are likely to die before six weeks and those that will survive longer. The basis for this assumption is unclear. The lack of transparency of Boklage’s equation is underscored by the fact that Dr. Jarvis does not mention that it predicts 75.8 percent fetal wastage between conception and full-term birth (270 days). As above, this estimate is rightly or wrongly consistent with most previous estimates.\n\n4. The “anatomical studies of Dr Arthur Hertig and Dr John Rock” are investigations of conceptions recovered from uteri obtained via gynecologic surgery. Their results are summarized in Hertig et al. (1959), Hertig & Rock (1973), and Hertig (1967). As described by Dr. Jarvis (p. 9), Hertig et al.’s conclusion is that 50% of embryos will die within two weeks after conception.\n\nDr. Jarvis is correct to point out concerns about their conclusion, although we believe that it has been well recognized that it is “impressionistic” as opposed to something that has a solid quantitative underpinning. Of course, as noted by Dr. Jarvis, their work remains important.\n\nDr. 
Jarvis makes some assertions about Hertig et al.’s work that seem mainly intended to accentuate doubts about it as opposed to placing it in proper context. He notes correctly (p. 9) that the sample is cross-sectional and not longitudinal. Given the nature of this study, this was unavoidable. Dr. Jarvis notes there are some unresolved discrepancies among age-specific detection rates for embryos and also between the estimated implantation rate and the rate inferred from other studies. These are worth mentioning, but the implications of these discrepancies remain ambiguous in the absence of a quantitative analysis that accounts for sampling variation.\n\nSimilarly un-useful is Dr. Jarvis’ statement (p. 9) that “Despite having proven fertility, these women presented with gynaecological problems, suggesting suboptimal reproductive function.” There is a wide range of “gynaecological problems” and an unanchored assertion that such a broad category might result in “sub-optimal reproductive function” means nothing in the absence of evidence that whatever problems were present had some influence on embryonic viability. In an effort to “estimate the precision” of the various proportions presented by Hertig et al. (e.g., the survival rate to implantation), Dr. Jarvis generated 500 so-called “bootstrap” samples from the original data consisting of 107 cases. These samples arise from sampling with replacement of the original data (e.g., see Efron & Tibshirani, 1986; Efron, 1987). Such an investigation is worthwhile, although a bootstrap analysis is not a “cure” for small sample size. In any case, Dr. Jarvis’ analyses of the bootstrap results are incorrect. He describes (p. 10) “95% CIs” for various proportions that are outside of the range of 0-100%. For example, the confidence interval (p. 10) he provides for pre-implantation embryo survival probability is 27-128%. Such an interval cannot be generated by a correct bootstrap analysis. 
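The distinction between the two ways of turning bootstrap resamples into an interval can be made concrete with a minimal sketch (the binary outcomes below are invented for illustration, not Hertig's actual counts): a percentile interval for a proportion is confined to 0-100% by construction, whereas an interval computed from the bootstrap mean and standard deviation via the normal approximation can spill past 100% when the observed proportion sits near the boundary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented binary outcomes (1 = embryo survived), chosen so the observed
# proportion is high and the normal approximation is likely to misbehave.
data = np.array([1] * 13 + [0] * 2)  # observed proportion 13/15

# 500 bootstrap resamples (sampling with replacement), as in the article.
boot = np.array([rng.choice(data, size=data.size, replace=True).mean()
                 for _ in range(500)])

# Percentile method: take the central 95% of the bootstrap proportions.
pct_lo, pct_hi = np.percentile(boot, [2.5, 97.5])

# Normal approximation from the bootstrap mean and sd (the suspect approach).
m, s = boot.mean(), boot.std(ddof=1)
norm_lo, norm_hi = m - 1.96 * s, m + 1.96 * s

print(f"percentile CI: [{pct_lo:.0%}, {pct_hi:.0%}]")       # within 0-100% by construction
print(f"normal-approx CI: [{norm_lo:.0%}, {norm_hi:.0%}]")  # upper bound exceeds 100% here
```

With these invented data the normal-approximation upper bound lands above 100%, the same symptom as the 27-128% interval discussed above, while the percentile interval cannot.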
There are various ways to calculate a bootstrap confidence interval (Efron & Tibshirani, 1986). The simplest, known as the “percentile method”, generates a 95% bootstrap confidence interval for a proportion directly from the range of proportions associated with the central 95% of the bootstrap estimates. Accordingly, the confidence interval must be between 0 and 100% because each of the bootstrap samples must generate a proportion between 0 and 100%. Dr. Jarvis’ mistake appears to be that he estimated an average proportion and its variance from the ensemble of bootstrap estimates and then calculated the confidence interval using standard formulae (p. 10). The purpose of bootstrap estimation is to avoid such calculations, which can generate inaccurate confidence intervals. Although some of the bootstrap confidence intervals provided by Dr. Jarvis do not fall below 0% or surpass 100%, we guess that all of them are incorrectly calculated. Unfortunately, the incorrect confidence intervals are described by Dr. Jarvis (p. 12) as “mathematically and biologically implausible” and taken to “….betray the quantitative weaknesses in [Hertig et al.’s] data and analysis.” Indeed, they are “mathematically and biologically implausible”, but the reason is that they were not correctly calculated. Whatever bearing a bootstrap analysis has on our understanding of the “precision” of Hertig et al.’s data and analyses remains to be seen.\nDr. Jarvis’ central argument is that there is more ambiguity associated with estimates of fetal wastage in humans than is widely understood. Many of his concerns should be taken seriously. Nonetheless, his analysis is undermined by errors of analysis and overstatement. In the end, his estimate of fetal wastage from conception to birth is consistent with many of the previous estimates.\nDr. 
Jarvis’ analysis is also undermined by an incorrect dismissal of data from embryos created via assisted reproductive technology (ART), which he refers to as in vitro fertilization (IVF). On page 11, he alludes to “…sub-optimal conditions for embryo culture…” and implies that somehow ART embryos are “different” from naturally-conceived embryos in undefined ways that negate their potential use in regard to estimating fetal wastage. This is an exercise in rhetoric, not a scientific argument. It is true that ART embryos are different from natural embryos in ways that could influence an estimate of fetal wastage. However, it is essential to note that they constitute the best available sample for insight into the “black box” of early pregnancy, despite the possible biases that could distort that view. To this extent, it is best to assess what information they can provide about fetal wastage, rather than provide tenuous or irrelevant reasons as to why they are not useful.\nDr. Jarvis mistakenly assumes (p. 11) that only ART embryos transferred into mothers would provide information about fetal wastage. In fact, as Dr. Jarvis notes, there are a number of reasons why transferred embryos are not representative of all embryos (e.g., conscious or unconscious quality biases, sex selection) and accordingly, this kind of sample could be misleading. That said, studies of such samples suggest that at least some aspects of their biology are identical to that of naturally-conceived embryos. For example, the sex ratio at birth for ART embryos is statistically identical with that of natural conceptions (Orzack et al., 2015).\nMore importantly, the entire ensemble of ART embryos (untransferred and transferred) provides information about fetal wastage. Almost all ART embryos undergo testing for chromosomal abnormalities, such as aneuploidy. The consequences of aneuploidy are well-known – it results in almost certain death before birth. 
This is consistent with the fact that many spontaneous abortions are karyotypically abnormal (Boué, Boué, & Lazar, 1967, 1975; Jauniaux & Burton, 2005). To this extent, the frequency of such abnormalities provides strong circumstantial evidence as to the amount of fetal wastage. Orzack et al. (2015) investigated a sample of ART embryos whose karyotypes were assayed via FISH or CGH and reported that 84,881 out of 139,704 embryos contained at least one aneuploid chromosome. The implied percentage of fetal wastage (60.8%) is remarkably consistent with the central tendency of the many reports that Dr. Jarvis dismisses as unreliable, as well as with his own estimate. As noted, we need to be cautious about inferences from this sample but not avoid making them. There is no compelling reason to think that “suboptimal” conditions for embryo culture (if any) cause many chromosomal abnormalities, most of which very likely arise during meiosis (e.g., Hassold & Hunt, 2001; Hunt & Hassold, 2007; Jones, 2008; Nagaoka, Hassold, & Hunt, 2012). What deserves scrutiny is whether the frequency of chromosomal abnormalities is elevated by techniques for collecting eggs and/or because women providing them for use in ART are unrepresentative of all reproductive women. There are limited data suggesting that unstimulated and stimulated oocytes have similar frequencies of abnormality (Labarta et al., 2010). Of course, women using ART are often older than many typical mothers. However, a high frequency of karyotypic abnormality is also observed among oocytes from young women (Baart et al., 2006; Munné et al., 2006). 
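The proportion implied by the counts quoted above from Orzack et al. (2015), and how little sampling error matters at that sample size, can be checked directly; the Wilson score interval used here is an illustrative choice of method, not the one used in that paper.

```python
from math import sqrt

# Counts quoted from Orzack et al. (2015): embryos with >= 1 aneuploid chromosome.
k, n = 84_881, 139_704
p = k / n  # observed proportion aneuploid

# Wilson score 95% interval for a binomial proportion (illustrative method).
z = 1.96
centre = (p + z * z / (2 * n)) / (1 + z * z / n)
half = (z / (1 + z * z / n)) * sqrt(p * (1 - p) / n + z * z / (4 * n * n))

print(f"proportion aneuploid: {p:.1%} "
      f"(95% CI {centre - half:.1%} to {centre + half:.1%})")
```

At n near 140,000 the interval is only a few tenths of a percentage point wide, so the 60.8% figure is limited by the representativeness caveats discussed in the text rather than by sampling error.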
These concerns should continue to be investigated but they in no way imply that ART embryos cannot provide useful insights about early human development and fetal wastage, especially given the current lack and very likely continuing lack of a large sample of naturally-conceived human embryos.\nWe see then a web of circumstantial evidence implying that there is a substantial amount of fetal wastage in humans. This insight arises from imperfect types of knowledge (as documented by Dr. Jarvis) but nonetheless, there is a signal consistent with the claim that approximately half or more of conceptions fail. More needs to be done to improve our understanding.\nThe study of fetal wastage shares with the study of the human sex ratio during pregnancy the fact that many different kinds of scientists are involved, and so the associated balkanization has reduced the accountability that arises from a shared disciplinary perspective about the standards for the interpretation of data (Orzack, 2016; Orzack et al., 2015). One cause and consequence of this division is the gray literature mentioned above.\n\nWhat contributes to the continuing “life” of the gray literature? Science abhors a vacuum, and claims about high fetal wastage in humans have been repeated often in a way that the connection with assumptions and data has become obscured or lost. Some claims date well before there was any means by which early mortality could be assessed (Mall, 1917; Meyer, 1920; Pearson, 1897). Pearson clearly acknowledged the lack of direct evidence, but such caveats get lost, especially in medicine, in which attention to standards of evidence, recognition of the assumptions needed to connect data with conclusions, and awareness of needed statistical techniques have been weaker than in biological research. These deficiencies have diminished as medical training has incorporated more scientific training but have not disappeared. 
Nonetheless, during medical training the “inhalation” of facts is important. It is one reason why many believe that fetal wastage is high, despite having little or no familiarity with the available data along with the ins and outs of their analysis and interpretation.\nIn this context, care is needed when assessing the nature of claims about fetal wastage. This can be illustrated by considering Dr. Jarvis’ claim (p. 8):\n\n….it is clear that estimates for total embryonic loss of 90% (Opitz, 2002), 85% (Braude & Johnson, 1990), 83% (Harris, 2003), 80–85% (Johnson & Everitt, 2000; Vitzthum, Spielvogel, Thornburg, & West, 2006), 78% (Roberts & Lowe, 1975), 76% (Boklage, 1990; Drife, 1983) and 70% (Chard, 1991; Ford & Schust, 2009; Loke & King, 1995; Macklon, Geraedts, & Fauser, 2002; McCoy et al., 2015) are excessive.\n\n(We have replaced number citations with author citations). Several of these claims are in medical textbooks and are akin to newspaper articles, i.e., they are reports on prior research as opposed to being independent estimates. Even then the nature of the evidence can go unmentioned. For example, in their textbook Johnson & Everitt (2000) include no evidence, nor citations to evidence, underlying their estimate. Of the claims in the primary literature, we again see a lack of independent evidence inasmuch as someone else’s estimate is reported. For example, Chard (1991), Drife (1983), and Vitzthum et al. (2006) merely present Roberts & Lowe's (1975) estimate. A few claims present their own evidence. For example, Harris (2003) contains this passage (p. 362):\n\nWe now know that for every successful pregnancy that results in a live birth many, perhaps as many as five early embryos will be lost or will “miscarry”….\n\nand accompanying footnote (p. 371):\n\nRobert Winston gave the figure of five embryos for every live birth some years ago in a personal communication. 
Anecdotal evidence to me from a number of sources confirms this high figure, but the literature is rather more conservative, making more probable a figure of three embryos lost for every live birth. See: Boklage CE. Survival probability of human conceptions from fertilization to term. International Journal of Fertility 1990;35(2)75–94. See also: Leridon H. Human Fertility: The Basic Components. Chicago: University of Chicago Press; 1977. Again, in a recent personal communication, Henri Leridon confirmed that a figure of three lost embryos for every live birth is a reasonable conservative figure.\n\nThis is clearly a heuristic estimate! The point is that there is less of a monolithic ensemble of flawed estimates that needs to be debunked than one might imagine given Dr. Jarvis’ passage. In any case, there is nothing inherently problematic about the citations just described. Indeed, it would be preferable if attributions were better and speculation were highlighted as such. Nonetheless, such estimates should be used with caution but not discarded, given the substantial difficulties associated with the estimation of fetal wastage in humans.\nAn ideal future investigation of fetal wastage is easy to imagine: daily assessment of EPF and hCG for a cohort of women attempting to get pregnant. Easier said than done! Consider what such a study would require: a reliable assay for EPF, the enrollment of thousands of women, collection and accurate assessment of thousands of samples, and more. Perhaps these technical and logistical barriers can be overcome soon. In the meantime, we can recognize that there is strong circumstantial evidence that human fetal wastage is likely between 50 and 75%. At the same time, we can recognize along with Dr. Jarvis that this conclusion lacks definitive proof and that additional investigations and scrutiny are needed.",
"responses": [
{
"c_id": "2744",
"date": "07 Jun 2017",
"name": "Gavin Jarvis",
"role": "Author Response",
"response": "I would like to thank Professor Orzack and Professor Zuckerman for their extensive review. I have amended the article in light of their remarks, and hope they consider it improved as a result.\n\nIntroduction\n\nThe purpose of my article is to evaluate available data that contribute to our quantitative understanding of natural human embryo mortality. The body of relevant data is small, as noted by the reviewers, although I have attempted to identify all of it. I deliberately avoided IVF/ART data since there is so much, and it is not obvious how such data illuminate natural circumstances (I comment further on this below). My comments on IVF/ART data are therefore confined to the Discussion.\n\nOrzack & Zuckerman repeatedly refer to my estimates of 10-40% preimplantation loss and 40-60% total embryo loss, which are important benchmarks for my article. They are critical of these, although they do not seem to appreciate where they come from. Contrary to what they imply (“On the basis of his review of this evidence…”), they do not arise from analyses described in this article. Rather, they are from an analysis described in a previous article in F1000Research [1]. I have amended the article to clarify this point. Concerns with the validity of these estimates should focus on that analysis, which is not listed among their 53 references.\n\nIn their review, the reviewers are ambiguous (one might say ‘gray’) in their use of quotation marks and appear to ascribe to me things I did not write. For example, I do not use the phrase “great deal”. Thus, for the sake of clarity, and to separate literary emphasis from quotation, I will follow the convention employed by GEM Anscombe, who coined a useful phrase [2], to distinguish between ‘scare quotes’ and “quotations”.\n\nI address points raised in the review, approximately in the order in which they appear.\n\n1. Roberts & Lowe\n\nOrzack & Zuckerman state that I calculate 95% confidence intervals. This is incorrect. 
The range [37-90%] is not a confidence interval; I do not refer to it as such, nor can it be, since there are no data. As described in the article, it is the range within which 95% of simulated estimates fall, based on Roberts & Lowe’s speculative values and other assumptions.\n\nThe reviewers suggest that my analysis lacks “practical quantitative value”. I agree. This is the point and I am glad they have recognised it, if not entirely appreciated its significance. My analysis has “no practical quantitative value” for estimating the number of conceptions that are lost. As I explicitly point out, the sole purpose of the sensitivity analyses is to show that modest changes in the speculative estimates used by Roberts & Lowe may result in any biologically plausible value for embryo loss. That my simulated estimate of 76.5% is close to Roberts & Lowe’s 78% is not telling since it uses their original speculative values. On the contrary, it would be telling (of something) if they were not close. I simply added variance to the speculative values. I comment on the nature of this variance/covariance in my response to Reviewer 2. Thus, my analysis does not validate Roberts & Lowe; it exposes its quantitative futility.\n\nGray Literature is “documentary material which is not commercially published or publicly available, such as technical reports or internal business documents.” [3] The Lancet is not ‘Gray Literature’. I comment further on this below. Contrary to the reviewers’ suggestion, we are not completely “ignorant of the training of Drs. Roberts & Lowe” or unaware of their experience in “research practice and data analysis”. Charles Ronald Lowe was the more senior of the two. He was 63 years old and Professor of Social and Occupational Medicine at the University of Wales College of Medicine when The Lancet article was published. 
He “contributed much to the growth of academic public health and the teaching of epidemiology and statistics.” [4]\n\nI do not describe their work as “useless” – if intended as a quote, then it is a misquote. I describe it as having “no practical quantitative value”. These are carefully chosen words. (I have edited the equivalent phrase in the Abstract to match the full text.) The critique offered by the reviewers and their description of the paper as heuristic support this view. Nevertheless, I have added a statement that, as a model for highlighting factors that influence fecundity, the Roberts & Lowe analysis has some value. In all fairness, on four separate occasions, I describe the analysis of Roberts & Lowe as a “hypothesis”, i.e., the banner under which it was originally published in The Lancet. Indeed, they describe their arithmetic as “speculative”; however, they also describe their estimate as “conservative”, implying that the true result may be even higher than 78%. My critique would be less germane had their hypothesis not been cited so widely (“more than 300 times”, as helpfully pointed out by the reviewers). I suggest that it is not I, but those who enthusiastically cite it [5] who treat it as “something that it isn’t”.\n\n2. Life Tables of Intrauterine Mortality\n\nI do not consider Leridon’s chapter [6] a “casual exercise” or “superficial”. On the contrary, it is a well-reasoned attempt to answer a challenging biological question. I have included a tribute in my article to Leridon’s review. I hope this prevents readers from gaining such false impressions.\n\nI agree with the reviewers that Leridon’s 63% is close to my 40-60%. However, Roberts & Lowe’s 78% is not, as they imply.\n\nA critique of Leridon’s life-table is not a critique of Leridon at all, but of French & Bierman [7] and Hertig [8]. I discuss briefly why French & Bierman may be an overestimate and, in detail, how Hertig’s analysis is flawed. 
Leridon’s account has been widely cited, especially by those describing embryo loss at the earliest stage. I hope readers will find it useful to know how Leridon’s values are derived.\n\n3. hCG studies of early pregnancy loss\n\nThe Edmonds (1982) estimate of approximately 60% loss [9] is the highest I report and, for reasons discussed in the article and mentioned by others [10], is likely to be an over-estimate. Nevertheless, years after the more credible Wilcox (1988) [10] study was published, Edmonds is still widely cited to justify high levels of embryo wastage. For example, Hyde & Schust (2015) [11] cite both Edmonds and Wilcox to support their claim that “Approximately 70% of human conceptions fail to achieve viability, with almost 50% of all pregnancies ending in miscarriage before the clinical recognition of a missed period…” By showing Edmonds’ results in context, I hope this kind of overstatement can be avoided.\n\nMy conclusion of one third loss is based on the average of the eight listed studies from Wilcox to the present day (unweighted average = 31.9%). I have edited the paper to make this clear. I also discuss why the estimates prior to Wilcox are less reliable and cite several studies that make similar observations.\n\nAs already noted, my 40-60% estimate is from a previous analysis [1] and is not a combination of the values (31.3; 35.7; 31.3) highlighted by the reviewers. My rationale for using a 90% implantation (and fertilisation) efficiency is found in that analysis [1]. My conclusion regarding the validity of Boklage’s analysis of embryo mortality [12] is not “unsubstantiated”. Indeed, the reviewers mention a key point of substance: namely, that Boklage’s value of 28.7% misinterprets the biology. Boklage uses this as a measure of embryo mortality, whereas it is a fecundability. 
If fecundabilities are analysed as embryo mortalities, surely this casts doubt on the validity of conclusions regarding embryo mortality.\n\nI cannot comment on Boklage’s statistical methodology (i.e., use of MLE, LRTs or AIC values) since he reports no such detail. However, I thank the reviewers for highlighting the lack of clarity in Boklage’s analysis. Contrary to the claim of the reviewers, I refer to Boklage’s estimate of 76% loss from conception (fertilisation) to birth on three occasions. This 76% estimate is consistent with Roberts & Lowe’s value. It is somewhat higher than Leridon’s (whose life table is inexplicably omitted from the Boklage analysis). It is clearly not consistent with my 40-60% estimate [1].\n\n4. Hertig’s data and analysis\n\nRegarding Hertig’s conclusion, Orzack & Zuckerman “believe that it has been well recognized that it is ‘impressionistic’ as opposed to something that has a solid quantitative underpinning”. I agree that Hertig’s conclusion does not have a “solid quantitative underpinning”; however, it is precisely the quantitative underpinning of Leridon’s life table and other claims about early natural embryo mortality. This is a key point of my article. It is not clear what the reviewers mean by ‘impressionistic’ [13]: some authors seem to offer an ‘unimpressionistic’ account of Hertig. For example, in the widely-cited ‘Black Box’ review [14], Macklon et al. write regarding Hertig’s study: “…the high rate of early pregnancy loss before the time of the first missed period was thus clearly demonstrated…” Other less widely-cited articles [15] do address the design and analytical shortcomings. Pointing out shortcomings in studies is what scientists (and reviewers) are meant to do. Thus, I agree with the reviewers that they are “worth mentioning”. Furthermore, by pointing out that Hertig’s subjects were of proven fertility, had gynaecological problems and may have had suboptimal reproductive function, I am placing Hertig’s study “in proper context”. 
This is not “un-useful”. Nevertheless, I have edited this section to reconcile these reviewers’ scepticism with the more positive view of others16. I hope I have struck an acceptable balance.

Orzack & Zuckerman appear to have concerns with well-established statistical techniques, referring to my “so-called ‘bootstrap’ samples”. I agree that bootstrapping is “not a ‘cure’ for small sample size”, but I do not claim that it is. Bootstrapping can provide estimates of precision when it is not possible to calculate these analytically. As with all analyses, outputs require appropriate interpretation.

The reviewers state that the “analyses of the bootstrap results are incorrect” because some of the confidence intervals lie outside the range 0-100%. I am aware that this is impossible (for a probability), as I explicitly point out. Such outputs do indicate a serious flaw in the analysis, which is as follows: Hertig ignores 47 of his 107 cases. These cases are included in my bootstrap. The reader may consider whether ignoring 44% of the data is reasonable, and the extent to which by doing so Hertig has generated biased estimates of the probabilities he calculates. Kline et al. (1989) make a similar point: “The missing data are sufficient to engender an entirely different result”15. The bootstrap therefore illustrates the extent to which Hertig’s estimates are biased by ignoring his own data. There are other reasons to doubt the precision of his conclusions and the representative nature of the subset of data upon which he relies so heavily – these are described in the article. The bootstrap pseudo-datasets are available for scrutiny (Dataset 4). Thus, if there are any flaws in my reasoning or bootstrap, the reviewers may point these out. I used the percentile method (to which they refer) to calculate the 95% CIs and I have edited the text to clarify this.
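For readers unfamiliar with the technique, the percentile bootstrap referred to above can be sketched in a few lines of Python. The binary sample below is entirely hypothetical (it is not Hertig's data), and the function name is my own:

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

# Hypothetical binary outcomes (1 = event observed, 0 = not observed);
# NOT Hertig's data -- purely illustrative.
sample = [1] * 21 + [0] * 13

def percentile_bootstrap_ci(data, n_boot=10_000, alpha=0.05):
    """Percentile-method CI: resample with replacement, recompute the
    proportion each time, then read off the empirical percentiles."""
    stats = sorted(
        sum(random.choice(data) for _ in range(len(data))) / len(data)
        for _ in range(n_boot)
    )
    lo = stats[int((alpha / 2) * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

low, high = percentile_bootstrap_ci(sample)
point = sum(sample) / len(sample)  # the observed proportion
```

Note that a percentile interval computed on a simple proportion, as here, cannot fall outside 0-100%; out-of-range intervals such as those discussed above can arise when the bootstrapped quantity is not itself constrained to that range.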
I do not believe there are any flaws in my bootstrap.

IVF/ART data

There is a wealth of data from IVF/ART studies and I have only mentioned a tiny proportion of this. Orzack & Zuckerman and a previous reviewer17 suggest that such data could contribute to a quantitative understanding of the in vivo situation. In the broadest sense, this is of course true. However, there are difficulties in extrapolating from in vitro to in vivo circumstances. I am not alone in pointing this out14, and I have illustrated some of these difficulties in the Discussion.

My description of “sub-optimal conditions for embryo culture” is drawn from two papers. Bolton & Braude (1987)19: “Optimal culture conditions for human embryos have yet to be defined” and “suboptimal culture conditions are undoubtedly responsible for a proportion of this embryonic failure”. Bolton et al. (2015)20: “Embryo culture conditions in vitro are likely to be suboptimal compared to those in vivo.” Is this just rhetoric or a reasonable consideration? Describing in vitro data as the “best available” is a weak claim in the absence of equivalent natural in vivo data. The extent to which in vitro embryos are representative of in vivo embryos is precisely the point in question. Is there really numerical consistency between natural and IVF/ART embryos? There may be consistency in sex ratios21, but does that extend to aneuploidy rates, mosaicism, epigenetic defects, implantation potential, spontaneous abortion rates, etc? These are big questions and this article is not the place to answer them. However, if 70% loss14 is the natural benchmark by which IVF/ART embryos are judged to be equivalent to natural embryos22, while the true rate of natural loss lies in the range 40-60%, then that judgement of equivalence is cast into doubt. Furthermore, the suggestion that IVF/ART and natural embryos may be different is neither radical, novel, nor strong23.
However, the real reason I do not consider IVF/ART embryo data is that the article is a critique of data from natural circumstances. Comparison of natural and IVF/ART embryos is a project for the future.

The reviewers refer to my “tenuous or irrelevant reasons” why ART embryos are not useful for quantifying early embryo mortality, yet they provide the perfect reason themselves: “it is true that ART embryos are different from natural embryos in ways that could influence an estimate of fetal wastage”24. Nevertheless, I do discuss circumstances in which different ART interventions (e.g., observation of in vitro fertilisation per se; retrieval of embryos following timed artificial insemination; AID/IVF success rates) may cast light on embryonic/fetal wastage. Orzack & Zuckerman extrapolate from 84,881 aneuploidies among 139,704 IVF/ART embryos21 to an “implied percentage of fetal wastage” of 60.8%. They state that this is the “central tendency” of “many reports” that I dismiss as unreliable. Of course, if this were true, then the observation would add little to what was already known. It is not clear which are the “many reports”.

Let us consider the hypothesis that in vitro aneuploidy predicts natural total fetal wastage. Firstly, “The only well-established epidemiological facts about EPL {early pregnancy loss} are that about 50-60% of cases are associated with a chromosomal defect of the conceptus”25, suggesting that euploid embryos may also fail. Secondly, “FISH may overestimate the incidence of aneuploidy”21,26, suggesting that a proportion of apparently aneuploid embryos may not fail. Furthermore, aneuploidy may not developmentally compromise embryos27; estimates of IVF/ART embryo aneuploidy/mosaicism vary considerably28; mosaic embryos can self-correct29; and aneuploidy in trophoblast/placental cells may be less developmentally problematic23 – who knows, it may even be advantageous!

The point is simple.
There are too many undefined variables associated with IVF/ART embryos to shed more than the faintest light on the question of natural embryo survival. I have included a brief discussion of some of these issues and edited the penultimate paragraph to be more circumspect by replacing an “are” with a “may be”. I hope this meets with the reviewers’ approval.

Gray Literature

On several occasions, the reviewers refer to Gray Literature. They offer a revealing account and speculate on its continuing ‘life’. Gray Literature has been defined as follows: “That which is produced on all levels of government, academics, business and industry in print and electronic formats, but which is not controlled by commercial publishers.”30,31

The references reproduced by the reviewers, starting with Opitz (2002) and ending with McCoy et al. (2015), are all from academic books, journals, or textbooks. They are all published by commercial publishers. They were all written (with one exception) by medical practitioners or scientists, many of whom are experts in reproductive biology. The one exception (Harris, 2003) is a moral philosopher; however, the reviewers usefully point out that his estimate comes from a well-known and eminent reproductive biologist. None of this is Gray Literature. Human Reproduction Update, Fertility & Sterility and PLOS Genetics are reputable academic journals. Many of these articles will have been peer-reviewed. Even pieces “akin to newspaper articles” (the Drife (1983) BMJ piece could be described as such and was probably not peer-reviewed32) are subject to editorial control, and an expectation of academic professionalism is surely reasonable from such experts.

The reviewers state that it would “be preferable if attributions were better and speculation was better highlighted”. I agree.
Yet they highlight my 'so-called' “errors of analysis” and “overstatement” whilst passing over errors and overstatement in these citations as “nothing inherently problematic”. What Orzack & Zuckerman describe and defend is not Gray Literature, but ‘Gray Scholarship’.

Heuristics

A heuristic estimate may be based on simplified quantitative criteria, educated guesswork, rules of thumb, common sense, past experience, etc. Despite their utility, in the absence of evidence heuristic estimates may become biased. Faced with inconsistent estimates – on the one hand, those that are heuristic or based on circumstantial evidence, and on the other, those based on well-defined analysis of relevant data – surely an appropriate scientific response is to favour the latter and re-evaluate the former.

A further problem with heuristic estimates is that the process for deriving them is not always transparent. For example, it is not obvious how Orzack & Zuckerman use the “web of circumstantial evidence” to which they refer to conclude that “human fetal wastage is likely between 50 and 75%”. There is something ‘gray’ about this. My estimates of 10-40% preimplantation loss and 40-60% total loss are partly evidence-based and partly heuristic. They may be imperfect, and no doubt will not be the last word on the matter, but it is at least clear how they were derived1.

Conclusion

Orzack & Zuckerman often repeat the point that my estimates are consistent with previously published values. In some cases they are, and I have drawn more attention to the fine chapter by Kline et al. (1989)15, who conclude that “perhaps half of all conceptions are lost before birth”33. However, in other cases, reported values are clearly not consistent with my estimates. I have used 70% total embryo loss as a threshold, at and above which I describe estimates as exaggerated. This is based on my previous analysis1 and thus my claim rests heavily, although not solely, on its credibility.
There are other reasons to cast doubt on these high values, but these are for another time. I have modified the conclusion of my article to highlight that, while precision may be elusive, exaggeration can be avoided.

“Nature abhors a vacuum”, so the proverb says, but how science, or more properly scientists, should fill it is another matter entirely. Recognising and quantifying limits of knowledge is an essential part of a credible scientific process. As a philosopher once wrote: “Wovon man nicht sprechen kann, darüber muss man schweigen”34.

References

1. Jarvis GE: Estimating limits for natural human embryo mortality [version 2; referees: 2 approved]. F1000Res. 2016; 5: 2083 (doi: 10.12688/f1000research.9479.2).
2. Anscombe GEM: Aristotle and the Sea Battle. Mind. 1956; 65(257): 1-15.
3. Oxford English Dictionary (http://www.oed.com/).
4. Roberts CJ: Obituary: C R Lowe. Brit Med J. 1994; 308(6921): 129.
5. “It is still difficult to better the original calculations of Roberts & Lowe (1975)” from: Chard T: Frequency of implantation and early pregnancy loss in natural cycles. Baillieres Clin Obstet Gynaecol. 1991; 5(1): 179-89.
6. Leridon H: Intrauterine Mortality. In: Human Fertility: The Basic Components. Chicago: The University of Chicago Press; 1977; 48-81.
7. French FE, Bierman JM: Probabilities of fetal mortality. Public Health Rep. 1962; 77(10): 835-47.
8. Hertig AT: The Overall Problem in Man. In: Benirschke K, editor. Comparative Aspects of Reproductive Failure. An International Conference at Dartmouth Medical School. Berlin: Springer Verlag; 1967; 11-41.
9. Edmonds DK, Lindsay KS, Miller JF, et al.: Early embryonic mortality in women. Fertil Steril. 1982; 38(4): 447-53.
10. Wilcox AJ, Weinberg CR, O'Connor JF, et al.: Incidence of early loss of pregnancy. N Engl J Med. 1988; 319(4): 189-94.
11. Hyde KJ, Schust DJ: Genetic considerations in recurrent pregnancy loss. Cold Spring Harb Perspect Med. 2015; 5: a023119.
12. Boklage CE: Survival probability of human conceptions from fertilization to term. Int J Fertil. 1990; 35(2): 75, 79-80, 81-94.
13. According to Wikipedia, scare quotes too often serve to confuse rather than clarify (https://en.wikipedia.org/wiki/Scare_quotes#Criticism).
14. Macklon NS, Geraedts JP, Fauser BC: Conception to ongoing pregnancy: the ‘black box’ of early pregnancy loss. Hum Reprod Update. 2002; 8(4): 333-43.
15. Kline J, Stein Z, Susser M: Conception and Reproductive Loss: Probabilities. In: Conception to Birth. Epidemiology of Prenatal Development. New York: OUP; 1989; 43-68.
16. Saunders P, Gibson DA: Referee Report For: Early embryo mortality in natural human reproduction: What the data say [version 1; referees: 1 approved, 2 approved with reservations]. F1000Res. 2016; 5: 2765 (doi: 10.5256/f1000research.9616.r19546).
17. Trounson AO: Referee Report For: Estimating limits for natural human embryo mortality [version 2; referees: 2 approved]. F1000Res. 2016; 5: 2083 (doi: 10.5256/f1000research.10209.r16765).
18. Benagiano G, Farris M, Grudzinskas G: Fate of fertilized human oocytes. Reprod Biomed Online. 2010; 21(6): 732-41.
19. Bolton VN, Braude PR: Development of the human preimplantation embryo in vitro. Curr Top Dev Biol. 1987; 23: 93-114.
20. Bolton VN, Leary C, Harbottle S, et al.: How should we choose the ‘best’ embryo? A commentary on behalf of the British Fertility Society and the Association of Clinical Embryologists. Hum Fertil (Camb). 2015; 18(3): 156-64.
21. Orzack SH, Stubblefield JW, Akmaev VR, et al.: The human sex ratio from conception to birth. Proc Natl Acad Sci USA. 2015; 112(16): E2102-11.
22. Niakan KK, Han J, Pedersen RA, et al.: Human pre-implantation embryo development. Development. 2012; 139(5): 829-41.
23. Ledbetter DH: Chaos in the embryo. Nat Med. 2008; 14(5): 490-1.
24. Zuckerman JE, Orzack SH: Referee Report For: Early embryo mortality in natural human reproduction: What the data say [version 1; referees: 1 approved, 2 approved with reservations]. F1000Res. 2016; 5: 2765 (doi: 10.5256/f1000research.9616.r22945).
25. Jauniaux E, Burton GJ: Pathophysiology of histological changes in early pregnancy loss. Placenta. 2005; 26: 114-23.
26. Treff NR, Levy B, Su J, et al.: SNP microarray-based 24 chromosome aneuploidy screening is significantly more consistent than FISH. Mol Hum Reprod. 2010; 16(8): 583-9.
27. Bolton H, Graham SJL, Van der Aa N, et al.: Mouse model of chromosome mosaicism reveals lineage-specific depletion of aneuploid cells and normal developmental potential. Nat Commun. 2016; 7: 11165.
28. Vanneste E, Voet T, Le Caignec C, et al.: Chromosome instability is common in human cleavage-stage embryos. Nat Med. 2009; 15(5): 577-83.
29. Munne S, Velilla E, Colls P, et al.: Self-correction of chromosomally abnormal embryos in culture and implications for stem cell production. Fertil Steril. 2005; 84(5): 1328-34.
30. The Grey Literature Report (http://www.greylit.org/about).
31. See also: Wikipedia (https://en.wikipedia.org/wiki/Grey_literature); GreyNet International 1992-2017 (http://www.greynet.org/home/aboutgreynet.html); California State University, Long Beach Libraries (http://csulb.libguides.com/graylit).
32. Personal Communication from Brit Med J, 18th April 2016.
33. Kline J, Stein Z, Susser M: Developmental Abnormalities: I. Measuring Frequencies. In: Conception to Birth. Epidemiology of Prenatal Development. New York: OUP; 1989; 69-80.
34. Wittgenstein L: Tractatus Logico-Philosophicus, Proposition 7. New York: Harcourt, Brace & Co, Inc.; 1922 (translated as: Whereof one cannot speak, thereof one must be silent)."
}
]
}
] | 1
|
https://f1000research.com/articles/5-2765
|
https://f1000research.com/articles/6-788/v1
|
06 Jun 17
|
{
"type": "Research Article",
"title": "A simple mathematical approach to the analysis of polypharmacology and polyspecificity data",
"authors": [
"Gerry Maggiora",
"Vijay Gokhale"
],
"abstract": "There are many possible types of drug-target interactions, because there are a surprising number of ways in which drugs and their targets can associate with one another. These relationships are expressed as polypharmacology and polyspecificity. Polypharmacology is the capability of a given drug to exhibit activity with respect to multiple drug targets, which are not necessarily in the same activity class. Adverse drug reactions (‘side effects’) are its principal manifestation, but polypharmacology is also playing a role in the repositioning of existing drugs for new therapeutic indications. Polyspecificity, on the other hand, is the capability of a given target to exhibit activity with respect to multiple, structurally dissimilar drugs. That these concepts are closely related to one another is, surprisingly, not well known. It will be shown in this work that they are, in fact, mathematically related to one another and are in essence ‘two sides of the same coin’. Hence, information on polypharmacology provides equivalent information on polyspecificity, and vice versa. Networks are playing an increasingly important role in biological research. Drug-target networks, in particular, are made up of drug nodes that are linked to specific target nodes if a given drug is active with respect to that target. Such networks provide a graphic depiction of polypharmacology and polyspecificity. However, by their very nature they can obscure information that may be useful in their interpretation and analysis. This work will show how such latent information can be used to determine bounds for the degrees of polypharmacology and polyspecificity, and how to estimate other useful features associated with the lack of completeness of most drug-target datasets.",
"keywords": [
"drugs",
"drug targets",
"polypharmacology",
"polyspecificity",
"networks",
"edge-colored",
"bipartite networks",
"latent information"
],
"content": "Introduction\n\nThe study of drug-target interactions and their manifestation in polypharmacology and polyspecificity is playing a major role in the growing field of chemogenomics in particular, and in drug research in general. Polypharmacology describes the multiplicity of drug targets against which a given compound exhibits some form of biological activity1–6. A less appreciated characteristic of drug targets is their polyspecificity, namely the ability of multiple, structurally dissimilar drugs to exhibit biological activity against the same target.\n\nThe principal manifestation of polypharmacology is adverse drug reactions (‘side effects’), a phenomenon that has been recognized ever since the administration of the first drug7,8. In an interesting turnabout, side-effect similarity has recently been used to identify drug targets9. A useful public data source called SIDER has also been developed; it links approximately 1000 drugs to nearly 1500 side effects10. An emerging role of polypharmacology is in the repositioning of existing drugs for new therapeutic indications11.\n\nThe term polyspecificity was originally used to describe antibody recognition, and has been around for more than three decades12,13. It is only in the last few years, however, that it has been employed in the context of drug-target interactions. Consequently, there are fewer papers on this topic, and many of them deal with transporters and the efflux pumps that confer drug resistance14–18, which is hardly a broad sample of biological activity. This is perhaps unsurprising, given that polyspecificity has not always been explicitly recognized as such. For a number of years, it has been manifest in many different forms in drug research, under the guise of multiple lead series19, scaffold hopping20, and pharmacophore-based structure-activity studies21.
All of these applications suggest that diverse structures may nevertheless exhibit biological activity with respect to the same target. This view is further supported by more recent evidence on the surprising prevalence of similarity cliffs22, and indirectly by the enhanced effectiveness of group fusion in identifying new active compounds23. These examples and the widespread occurrence of drug side effects suggest that some type of relationship might exist between polypharmacology and polyspecificity.\n\nThe alternative terminologies ‘drug promiscuity’ and ‘target promiscuity’ that are sometimes used instead of polypharmacology and polyspecificity, are slightly more general since they do not require the occurrence of biological activity, only that drugs and their targets interact (e.g., bind) in some specific fashion. Likewise, the term drug-target is sometimes replaced by the more general terms ligand-target or compound-target. However, the more popular although less general terms polypharmacology, polyspecificity, and drug-target will be used throughout the remainder of this work, with the caveat that their usage may sometimes be too narrow and may not always be strictly correct.\n\nRecognition of the growing importance of polypharmacology in drug research and in biological research in general has resulted in the development of a number of drug-target databases24–32 summarized in Table 1. A cursory examination of these databases shows that most drugs, as well as many xenobiotics, apparently exhibit very high degrees of polypharmacology. However, the data in these databases needs to be considered with caution, because it may not be of uniform quality since many experimental methods or computational techniques of varying accuracy may have been used in its generation. This is further exacerbated by the fact that reproducing biological data can be difficult even when the same experimental method is used in different laboratories, or even in the same ones! 
The paper by Jasial33 provides an interesting discussion that is relevant to this point.\n\nTo counter this issue, database developers have established ‘reliability scores’ based on criteria of data quality, but there is no uniform procedure that is applied in all cases. Hence, drug-target datasets assembled with data obtained from multiple, diverse sources are unlikely to be of uniform quality. This can give rise to significant uncertainties in the inferences that are drawn from analyses of such datasets.\n\nBy contrast, a number of more stringent evaluations have led to significantly reduced values for degrees of polypharmacology of many drugs33–36. These values, however, represent lower bounds to the true values, since the datasets from which these results are drawn are typically incomplete, an issue that is discussed further in this section. Additional study is certainly warranted in order to determine the true degree of polypharmacology for most drugs. As discussed in the following section, the multiplicity of ways that drugs can bind to a wide variety of different structural features in protein targets suggests the possibility that polypharmacology may be more prevalent than the most conservative view suggests. It does not, however, provide incontrovertible support for the extremely high degrees of polypharmacology implied by the data in many drug-target databases.\n\nData quality is not the only issue associated with drug-target datasets; another important concern is that of data completeness, as discussed in a recent paper by Mestres et al.37. Data on all of the possible drug-target interactions within a given dataset of drugs and targets is generally unavailable, making a complete analysis of these interactions impossible. This issue is aggravated by the fact that almost all drug-target databases only report data on active compounds.
The most complete datasets undoubtedly can be found in the laboratories of pharmaceutical companies, but since their data is proprietary it is of little value to researchers outside of these companies. The problem of data availability is also affected by biases that arise from the popularity of particular research areas such as GPCRs, ion channels, protein kinases, and proteases, which make up a significant portion of all targets in drug discovery research38.\n\nThe crux of this paper is an analysis of the relationship between polypharmacology and polyspecificity, which demonstrates that they are mathematical duals of one another. We describe (1) a rigorous mathematical relationship between polypharmacology and polyspecificity, based on a simple mathematical argument, and (2) an analysis of the latent information associated with drug-target interactions, described by edge-colored bipartite drug-target networks. The use of edge-colored networks provides the means for establishing bounds on the degrees of polypharmacology and polyspecificity. A simple example of a drug-target network is presented in order to clarify a number of the technical points raised in this paper. Currently, there is greater research focus on polypharmacology, since it has a seemingly more direct relationship to the pharmacological behavior of drugs. However, as far as we can determine, a definitive study rigorously linking polypharmacology and polyspecificity has yet to be published by other authors.\n\n\nStructural basis of drug-target interactions\n\nIt is important to recognize that polypharmacology and polyspecificity are purely phenomenological concepts. As such, they do not contain or require any specific structural information on the drugs or the targets they interact with.
This is akin to classical chemical thermodynamics where, for example, the entropy, enthalpy, and free energy functions are purely phenomenological and do not in any way take account of the structural features of molecules39. In the case of drug-target interactions, all that is needed is some measure of the degree of interaction, such as an activity, inhibition constant, or an IC50 value, all of which are phenomenological constants.\n\nIt has been generally assumed that in most instances of polypharmacology, the drug binding-site of one target or the domain within which it resides is in some fashion structurally related to the binding-site or domain of other targets that the drug interacts with40–42. A number of papers43–46 have taken a more high-resolution approach that focuses on individual groups within binding sites. The work from these laboratories has dramatically expanded the rather limited contemporary view of the structural requirements of drug-target interactions43–46. It counters the widely held, albeit changing, belief that if similar ligands bind to different proteins they must bind to structurally similar subsites in these proteins. The paper by Ehrt, et al.47 provides an overview of this developing area of research.\n\nRecent work from Shoichet’s group at UCSF is based on detailed structural studies of the binding of 59 different ligands in 116 complexes, where the binding of a given ligand involved pairs of proteins with different folds. In almost half of the protein pairs examined, a given ligand interacted with unrelated residues in the two proteins. Even in cases with similar binding-site environments, the ligands interacted with different residues. All of this shows that multiple patterns of residues and binding site environments are capable of interacting with highly structurally similar, even identical ligands. 
The investigators concluded that “There appears to be no single pattern-matching ‘code’ for identifying binding sites in unrelated proteins that bind identical ligands”. This view is in line with what has been espoused by Matthews for protein-DNA interactions almost two decades earlier48.\n\n\nMathematical representations of drug-target interactions\n\nMathematically, drug-target interactions can be characterized as binary relations, R(D,T), that describe an association between a set of drugs, D = {d1, d2, …, dn}, and a set of targets, T = {t1, t2, …, tm}.\n\nThese relations are described by ordered-pairs of elements, (di,tj), formed by the Cartesian product of these two sets, D × T, i.e.\n\nR(D,T) ⊆ D × T = {(di, tj) | di ∈ D, tj ∈ T}\n\nThe meaning associated with ordered-pairs in a given relation depends on the nature of the relation. In this work we are interested in whether a drug is active with respect to a specific target. This is given by the characteristic function r(di, tj) ∈ R associated with the relation, which satisfies\n\nr(di, tj) = 1 if the pair (di, tj) is active, and r(di, tj) = 0 otherwise\n\nNow consider the transpose of the relation, R(D,T)′ = R(T,D). This changes the order of the elements in the ordered-pairs, i.e.\n\nR(T,D) = {(tj, di) | tj ∈ T, di ∈ D}\n\nNothing has fundamentally changed, except the arrangement of the elements of the relation; their values remain the same:\n\nr′(tj, di) = r(di, tj)\n\nIn order to simplify and clarify all subsequent discussion, the following three categories of relations associated with ordered drug-target pairs are defined:\n\n(1) ‘active’, which includes all drug-target pairs whose activity has been experimentally measured or computationally estimated to meet or exceed the designated activity threshold value;\n\n(2) ‘inactive’, which includes all drug-target pairs whose activity value has been experimentally measured or computationally estimated to fall below the designated activity threshold value; and\n\n(3) ‘unknown’, which includes all drug-target pairs whose activities have neither been measured experimentally nor estimated computationally.\n\nThe following simple, illustrative example shows that the 8 × 4 dimensional drug-target interaction matrix and
its transpose, the 4 × 8 target-drug interaction matrix, contain entirely equivalent information – only the ‘viewpoint’ has changed:\n\nIn R+, the rows correspond to drugs and the columns to targets, while in R+′ the rows correspond to targets and the columns to drugs. The positive subscript indicates that the matrix represents active drug-target pairs.\n\nIt may also be desirable to represent the information in Equations (5), (7), and (8) as a network49,50, since a considerable amount of the data on biological interactions is presented in the literature as networks. When the entities that are being compared belong to different sets, for example drugs and targets, a bipartite network such as that given in Equation (9) is commonly used:\n\nIn networks, pairs of nodes directly linked by edges are said to be adjacent and constitute the elements of the (n + m) × (n + m) dimensional adjacency matrix:\n\nA = [[0, R], [R′, 0]], where each 0 denotes a zero-valued submatrix\n\nWhile not technically correct, for simplicity in this work A will be termed the adjacency matrix of 𝒩, since it contains all of the information in 𝒩. The zero valued submatrices in A show that there are no links among nodes within D or among those within T. Since the elements of A are in one-to-one correspondence with the elements of R, the two matrices are isomorphic. Hence, R and A, and by implication 𝒩, contain essentially the same information.\n\nFigure 1 depicts the bipartite network corresponding to the drug-target interaction matrix R+ given in Equation (8). From the discussion of the general relationship of R and A in the previous paragraphs it follows that\n\nR+ ≅ A+ ≅ 𝒩+\n\n\nDrug-target networks\n\nYildirim et al.52 provided the earliest example of drug-target networks. Vogt and Mestres53 have also discussed a number of issues associated with such networks including, as mentioned earlier, the issue of data completeness37.
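The block structure of the bipartite adjacency matrix described above can be sketched with NumPy. The 3 × 2 matrix used here is made up purely for illustration; it is not the paper's 8 × 4 example:

```python
import numpy as np

# Made-up active-pair matrix R+ (3 drugs x 2 targets); 1 = active pair.
R_plus = np.array([[1, 0],
                   [1, 1],
                   [0, 0]])
n, m = R_plus.shape

# Bipartite adjacency matrix A: the zero diagonal blocks encode the
# absence of drug-drug and target-target links; the off-diagonal blocks
# are R+ and its transpose.
A = np.block([
    [np.zeros((n, n), dtype=int), R_plus],
    [R_plus.T, np.zeros((m, m), dtype=int)],
])

# A is symmetric and contains exactly the information in R+.
assert (A == A.T).all()
assert (A[:n, n:] == R_plus).all()
```

Because A and R+ are in one-to-one correspondence, either object can be used as the starting point for the degree calculations that follow.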
Other related databases have also been developed such as those based on drug-side effects10 and gene-disease networks54.\n\nWhile it is true that drug-target networks provide dramatic views of the complex interrelationships amongst drugs and their putative targets, they are difficult to interpret when the number of drug-target pairs becomes too large, as is demonstrated by several of the figures depicted in references 52 and 53. In those cases networks merely provide a visual sense of drug-target relationships and their overall complexity.\n\nBecause of this, such networks are rarely used directly to draw detailed inferences. Rather, as the information contained within them is available in various matrices such as the adjacency matrices shown in Equations (11) – (13), it can be analyzed by algebraic procedures, some of which are described in this work. However, even the matrix algebraic approach becomes limiting for the adjacency matrices of large drug-target systems, which are quite sparse. In such cases, normal matrix-algebraic procedures become very inefficient. Storing the limited amount of data in such large sparse matrices is also very wasteful. This necessitates the development of efficient data structures and algorithmic procedures that facilitate the management and analysis of large drug-target datasets55. The fact that so many large networks such as the Internet have been analyzed has led to the development of highly efficient algorithms that are more than capable of handling the size problems typically encountered with drug-target networks. The last part of the book by Newman49 describes a number of these algorithms. They are not employed here, since the goal of the current work is the development of an understanding of some of the overlooked characteristics of drug-target network data and their analysis. 
Consequently, a very simple example is used as a basis for describing the underlying principles.\n\nMany databases have been developed in order to provide a more unified source of experimental and computational data on drug-target interactions. Table 1 provides a summary of some useful drug-target databases. References to the various experimental methods used can best be found in the databases themselves. Because of the size and complexity of the chemogenomic space, computational methods have begun to play a larger role in determining drug-target interactions. A sample of some of the many computational techniques is given in the following references6,56–59.\n\nThe work described here is based on a phenomenological model of interactions between a set of drugs and a corresponding set of targets. Thus, as noted earlier, there is no requirement for any information on the molecular structure of the drugs, their targets, or any details on the nature of their intermolecular interactions.\n\nThe degree of a given drug node is equal to the number of edges connected to that node, which is equivalent to the degree of polypharmacology of the drug associated with that node. The degree of a given target node is equivalent to its degree of polyspecificity. It should be clear from Figure 1 that knowing the polypharmacology associated with the drug nodes is tantamount to knowing the degree of polyspecificity of the target nodes, and vice versa.\n\nThat this is the case can also be seen from the relational matrix, R+, given by Equation (8) or from the adjacency matrix, A+, given by Equation (13). In both instances, the rows represent drugs and the columns targets. Rows can be thought of as binary vectors associated with each of the drugs whose components are the targets the drugs can potentially interact with; correspondingly, columns can be thought of as binary vectors associated with each of the targets whose components are the drugs they can potentially interact with. 
Thus, all of the information on the degrees of polypharmacology and polyspecificity is contained in R+ and A+. Polypharmacology data, polyspecificity data, or some combination of the two can be used to ‘fill in’ the elements of R+ and A+. The degrees of polypharmacology and polyspecificity can then be computed by the expressions given in Equation (14), where the row and column sums correspond to the usual nodal degrees of the drug and target nodes, k̂+(di) and k̂+(tj), which are equivalent to their corresponding degrees of polypharmacology and polyspecificity, π̂PP(di) and π̂PS(tj), i.e.\n\nTable 2 summarizes the degrees of polypharmacology and polyspecificity for the sets of drugs and targets in the example depicted in Figure 1, and represented by the adjacency matrix in Equation (13). But there is more that needs to be considered.\n\nThe rows correspond to drugs and the columns to targets. The far right-hand column gives values for the degree of polypharmacology, while the bottommost row gives values for the degree of polyspecificity. The binary values at the center of the table show whether a given drug-target pair is active (1) or inactive (0) or of unknown activity (0).\n\nThe network representation of drug-target interactions effectively captures the information associated with active drug-target pairs, but in many instances it does not capture comparable information on inactive drug-target pairs or pairs whose activities have not been evaluated experimentally or computationally. This can lead to considerable uncertainty in the dataset and can be a latent source of error in the determination of degrees of polypharmacology and polyspecificity. The situation is exacerbated by the fact that most drug-target databases do not report data on drugs that are inactive, even if such data exists. In those cases, the drug-target pair must be assumed to belong to the category of pairs with unknown activity. 
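The row- and column-sum computation behind Equation (14) is simple enough to sketch directly. The 3 × 4 relation matrix below is hypothetical, not the R+ of Equation (8):

```python
# Degrees of polypharmacology (row sums) and polyspecificity (column
# sums) computed from a binary relation matrix, in the spirit of
# Equation (14). The matrix values are hypothetical.

R_plus = [
    [1, 0, 0, 1],
    [0, 1, 0, 0],
    [1, 1, 1, 0],
]

def polypharmacology(R):
    # k+(d_i): number of targets drug i hits above the activity threshold
    return [sum(row) for row in R]

def polyspecificity(R):
    # k+(t_j): number of drugs hitting target j above the threshold
    return [sum(col) for col in zip(*R)]

pp = polypharmacology(R_plus)   # [2, 1, 3]
ps = polyspecificity(R_plus)    # [2, 2, 1, 1]

# Both tallies count the same set of edges, which is the sense in which
# polypharmacology and polyspecificity are two views of one relation.
assert sum(pp) == sum(ps)
```

The equality of the two totals restates the observation that knowing the polypharmacology of all drugs is tantamount to knowing the polyspecificity of all targets.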
How this affects the analysis of drug-target interactions is described in the sections that follow.\n\nIt is quite likely that within larger datasets, the activity of many of the drug-target pairs has not been evaluated experimentally or computationally. Since some of these may nevertheless be active, it follows that the degrees of polypharmacology and polyspecificity are typically underestimated and hence only provide approximate lower bounds to the true values. They are not true lower bounds because the data used for their determination are not always entirely consistent or accurate. Hence their reliability may be questionable.\n\nEven though the number of drug-target pairs in the inactive and unknown categories is small in the example given here, in reality the number can be substantial and generally exceeds the number of active drug-target pairs. This is unsurprising given that the number of active compounds in large corporate databases is generally only a few percent of the total number of compounds they contain. Thus, the problem now becomes how to obtain data on drugs in a dataset that are known to be inactive. As mentioned earlier, this is a significant problem for two reasons. First, activity data in corporate databases, where such information is likely to exist, is generally unavailable to the general research community. Second, most databases accessible by the non-industrial research community either do not report or report very little data on inactive drugs. Because of this, it is difficult to determine the contributions of drugs to the inactive category, which directly affects our knowledge of drugs in the category of unknown activity status. As will be seen in a forthcoming section, this impacts the size of the bounds to the degrees of polypharmacology and polyspecificity. 
Thus, while data on inactive drug-target pairs does not provide information that is useful for identifying drug targets, its availability reduces the size of the category of drugs of unknown activity, which improves the bounds on the degrees of polypharmacology and polyspecificity. The details of this argument are presented in a forthcoming section and are exemplified by the expression given in Equation (22).\n\nMore importantly, in many cases the number of possible drug-target pairs whose activity status is unknown may be significant. If they were experimentally or computationally determined, at least some of these might have activity values that meet or exceed the desired activity threshold. Not including these data will result in a less reliable estimation of the degrees of polypharmacology and polyspecificity. It may also suggest that the observed drug-target interactions involve a more limited region of target space than is actually the case. All of these issues raise questions as to how such data can be effectively incorporated into an analysis of drug-target interactions. One way to address this issue is by extending the current networks to include the class of edge-colored bipartite networks.\n\nAn edge-colored bipartite network is depicted in Figure 2 for the simple example shown in Figure 1. Edges corresponding to active drug-target pairs are colored green, those corresponding to inactive pairs are colored red, and those corresponding to pairs of unknown activity are colored black. Thus, all of the possibilities are now incorporated into a single edge-colored network. Figure 3a represents a separation of this network into its three components, corresponding to active (+), inactive (−), and unknown (*) bipartite subnetworks. Figure 3b depicts their respective adjacency matrices, A+, A-, and A*, where the colored squares correspond to matrix elements with value ‘1’ and the uncolored squares correspond to matrix elements with value ‘0’. 
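The decomposition of the edge-colored network into its active, inactive, and unknown components can be sketched in a few lines. The 3 × 3 label matrix below is hypothetical (it is not the example of Figure 2), and the final fraction anticipates the idea, developed below, of using the proportion of determined pairs as a global completeness measure:

```python
# Sketch of the edge-colored decomposition of Figure 3: every
# drug-target pair carries exactly one label -- active '+', inactive
# '-', or unknown '*'. The 3 x 3 label matrix is hypothetical.

labels = [
    ['+', '-', '*'],
    ['+', '+', '-'],
    ['*', '+', '+'],
]

def indicator(labels, symbol):
    """Binary submatrix (A+, A-, or A*) selecting one edge color."""
    return [[1 if cell == symbol else 0 for cell in row] for row in labels]

A_plus, A_minus, A_star = (indicator(labels, s) for s in '+-*')

# Because the labels partition the pairs, a+ + a- + a* = 1 elementwise,
# mirroring the constraint visible in Figure 3b.
for rp, rm, rs in zip(A_plus, A_minus, A_star):
    for ap, am, au in zip(rp, rm, rs):
        assert ap + am + au == 1

# Per-category pair counts, and the fraction of pairs whose activity
# has actually been determined (one natural completeness measure).
mu = {s: sum(sum(row) for row in indicator(labels, s)) for s in '+-*'}
completeness = (mu['+'] + mu['-']) / sum(mu.values())
```

Note that the formula used for `completeness` is an assumption for illustration: it is one natural reading of a global completeness measure, not a reproduction of the equation in the text.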
An examination of Figure 3b shows that the matrix elements of A, A+, A-, and A* satisfy\n\n(a) Decomposition of the bipartite, edge-colored network depicted in Figure 2 into its three component subnetworks, namely drug-target pairs that are active, inactive, and of unknown activity status. (b) The adjacency matrices corresponding to the bipartite, edge-colored subnetworks given in (a). The colored cells correspond to a value of unity and the uncolored cells to zero values.\n\nBecause of this, it is possible to determine the degrees of nodes for each of the subnetworks independently. Thus, the row and column sums for the three colored networks associated with A+, A-, and A*, are given, respectively, by\n\nThe results for the simple example depicted in Figure 1–Figure 3 are collected in Table 3 and Table 4. In Table 3, k̂-(di) corresponds to the right-hand column designated ‘Row-Sum’, and k̂-(tj) corresponds to the bottom row designated ‘Col-Sum’, and similarly for ε̂PP(di) and ε̂PS(tj), respectively, in Table 4. These latter quantities associated with the drug-target pairs of unknown activity are important since they contain information, albeit latent information, that bears on the degrees of polypharmacology and polyspecificity for any drug-target dataset. As noted earlier, some of the drugs known to be inactive may nonetheless fall in the category of drugs of unknown activity, because inactivity data is not generally incorporated into many of the widely available drug-target databases. Moreover, the terms associated with inactive drug-target pairs, k̂-(di) and k̂-(tj), provide useful information since they eliminate those pairs from consideration as active pairs. They also have an effect on the sizes of ε̂PP(di) and ε̂PS(tj), as discussed in a forthcoming section.\n\nThe rows correspond to drugs and the columns to targets. 
The far right-hand column gives values for the row sums (‘Row-Sum’), while the bottommost row gives values for the corresponding column sums (‘Col-Sum’). The binary values at the center of the table show whether a given drug-target pair is inactive (1) or active (0) or of unknown activity (0).\n\nThe rows correspond to drugs and the columns to targets. The far right-hand column gives values for the row sums (‘Row-Sum’), while the bottommost row gives values for the corresponding column sums (‘Col-Sum’). The binary values at the center of the table show whether a given drug-target pair is of unknown activity (1) or active (0) or inactive (0).\n\nThe information in Table 2–Table 4 can be represented as three-dimensional Euclidean vectors\n\nIn the ideal case, where the activities of all of the drug-target pairs have been measured, the points will lie entirely within the ‘Active-Inactive’ plane. In general, the information provided exceeds that of typical bipartite drug-target networks, because of the explicit inclusion of data on drug-target pairs of inactive and unknown activity.\n\n(a) Three-dimensional plots of the information in Table 2–Table 4 for drugs. (b) Three-dimensional plots of the information in Table 2–Table 4 for targets.\n\n\nMeasures of data completeness\n\nA global measure of data completeness that accounts for experimentally determined or computationally estimated activities of drug-target pairs is given by\n\nFor the example given in Figure 2 and Figure 3 and in Equation (8) and Equation (13), μ̂+ = 19, μ̂− = 7, and μ̂* = 6. Thus,\n\nIn many instances, it is desirable to have local measures that are associated with individual drug or target nodes. One possible local measure is related to the nodal degrees of bipartite subnetworks associated with drug-target pairs of unknown activity status, ε̂PP(di) and ε̂PS(tj), which can be viewed as measures of error or uncertainty. 
Fractional measures could also be defined by dividing each of them by | T | and | D |, respectively, but this will not be done here.\n\nIn order to develop these measures, the nodal degrees are combined with respect to all three types of relations given by Equation (16) for each of the nodes di ∈ D and tj ∈ T. Combining and simplifying terms using Equation (15) yields\n\nThat the values of k−(di) and k−(tj) are useful can be seen by rearranging Equation (21)\n\n\nBounds for the degrees of polypharmacology and polyspecificity\n\nBounds to the values of π̂PP(di) and π̂PS(tj) can be derived in a relatively straightforward manner from two basic assumptions:\n\n(1) all (di, tj) pairs of unknown activity are actually active, i.e. a*(di, tj) ⇒ a+(di, tj) = 1, for all a*(di, tj) ∈ A*; and\n\n(2) all (di, tj) pairs of unknown activity are actually inactive, i.e. a*(di, tj) ⇒ a−(di, tj) = 1, for all a*(di, tj) ∈ A*.\n\nIn the first case, the magnitudes of ε̂PP(di) and ε̂PS(tj) determine the respective uncertainties of π̂PP(di) and π̂PS(tj), while in the second case, assuming that all (di, tj) pairs of unknown activity are in fact inactive gives values of π̂PP(di) and π̂PS(tj) that are lower bounds to their true values. But, as noted earlier, their true values may be lower because of measurement, computational, or other types of errors.\n\nThe mathematical expressions in Equation (23) show that the true values, πPP(di) and πPS(tj), are bounded, i.e.\n\nApplying the expressions in Equation (23) to the data in Table 2 and Table 4 yields the bounds given in Table 5 and Table 6. As discussed earlier, these bounds are unrealistically tight, since in real cases the sizes of ε̂PP(di) and ε̂PS(tj) are likely to be much larger than those used in the simple example presented here. Nevertheless, the example illustrates a number of relevant points. 
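The bound logic of Equation (23) reduces to simple arithmetic on the A+ and A* row sums: the observed active count is a lower bound on the true degree, and adding the unknown-activity count gives an upper bound. A minimal sketch using hypothetical 2 × 3 matrices (not those of Table 2 and Table 4):

```python
# Per-drug bounds on the true degree of polypharmacology, following
# the logic behind Equation (23). Matrix values are hypothetical.

A_plus = [  # active pairs
    [1, 0, 1],
    [0, 1, 0],
]
A_star = [  # pairs of unknown activity status
    [0, 1, 0],
    [1, 0, 0],
]

def degree_bounds(active, unknown):
    """(lower, upper) bounds on the true polypharmacology of each drug:
    lower = known active count k+, upper = k+ plus the uncertainty eps
    contributed by unresolved pairs."""
    bounds = []
    for active_row, unknown_row in zip(active, unknown):
        k_plus = sum(active_row)   # pairs known to be active
        eps = sum(unknown_row)     # pairs that might turn out active
        bounds.append((k_plus, k_plus + eps))
    return bounds

assert degree_bounds(A_plus, A_star) == [(2, 3), (1, 2)]
```

Resolving an unknown pair as inactive shrinks eps without changing k+, which is the sense in which inactivity data tightens the bounds even though it identifies no new targets.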
In carrying out this analysis it is important to remember that all drug-target pairs whose activity has not been determined must be included in the class of drug-target pairs of unknown activity, which directly contributes to the uncertainty in π̂PP(d) and π̂PS(t).\n\n\nSummary and conclusions\n\nThe study of polypharmacology is becoming increasingly important in drug research because it raises awareness of the inherent lack of specificity of drugs and xenobiotics for specific targets. Moreover, it provides a basis for understanding the prevalence of side effects and the rationale behind the repurposing of drugs for new therapeutic indications. The concept of polyspecificity, on the other hand, affords a complementary view of the lack of specificity of drug targets. A simple mathematical argument shows that these seemingly disparate characteristics of drugs and targets are, in fact, closely related, a result that, to the best of our knowledge, has not been published previously. This is supported by a growing number of structural studies suggesting that the variety of structural patterns arising in drug-target interactions is so large that high degrees of specificity in these interactions are highly unlikely.\n\nConstructing networks is a popular enterprise in biology nowadays. Although useful, these networks have some significant limitations. For example, while they offer a highly visual depiction of the interrelationships among entities associated with the nodes in the network, it is difficult to extract detailed information from them when the number of entities is large, a situation that also obtains in the case of drug-target networks. The issue can be overcome by utilizing the adjacency matrix of the network, which provides a faithful representation of its edge structure, and thus preserves the relations associated with active drug-target pairs. 
Because of this the degrees of polypharmacology and polyspecificity can be computed directly from adjacency matrices.\n\nThere is other information associated with drug-target pairs that is rarely if ever dealt with. Representing this information involves the use of the edge-colored bipartite drug-target networks introduced in this paper. In addition to representing active drug-target pairs, which is the case with standard drug-target networks, these augmented networks represent data associated with inactive drug-target pairs and with pairs of unknown activity. By including this heretofore latent data it is possible to compute global and local measures of data completeness as well as bounds for the degrees of polypharmacology and polyspecificity. These parameters can be viewed as diagnostics of the suitability of a given analysis of a drug-target network.\n\nIn the simple example described here, the values for the uncertainties ε̂PP(d) and ε̂PS(t) are quite small, and hence the upper bounds lie close to the values of π̂PP(d) and π̂PS(t). This is not likely to be the case in larger, more realistic drug-target networks. In such cases, the uncertainties will be considerably larger due to a lack of data availability. As noted above, the reliability of the analysis can be increased by the use of experimentally or computationally determined data on inactive drug-target pairs. Unfortunately, such data is not as readily available in many publicly accessible databases where the focus is largely on drugs that are active with respect to specific targets. Assuming drugs without activity data are inactive, as is the case in the use of ‘decoys’ to test various computational methodologies, clearly leads to a loss of information. This trend needs to be reversed.\n\nAlthough the analysis presented here is useful, it is just a start and by no means exhausts the possibilities for further study. 
Three areas to consider for future research include:\n\n(1) Expanding statistical analysis of drug-target network properties;\n\n(2) Examining higher-order drug-target interactions; and\n\n(3) Developing weighted and fuzzy representations of drug-target networks.\n\nConsiderable work is still needed to provide a suitably rigorous formalism for treating drug-target networks, one that allows maximum extraction of information and clarifies a number of the subtle issues associated with these biologically important networks.",
"appendix": "Author contributions\n\n\n\nGM and VG both conceived the study and both contributed to the general outline of the work. GM wrote most but not all of the initial draft of the manuscript. VG contributed his expertise in database searching and analysis and how it applied to the work carried out for this manuscript. GM contributed his mathematical expertise and formulated most of the mathematical material. Both authors were involved in the revision of the manuscript and have agreed to its final content.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nAcknowledgments\n\nGM wishes to thank Professor Jürgen Bajorath and Dr. Martin Vogt, both from the Department of Life Science Informatics, B-IT, Rheinische Friedrich-Wilhelms-Universität in Bonn, Germany, for a number of useful comments regarding this work.\n\n\nReferences\n\nPeters JU, Ed: Polypharmacology in Drug Discovery. John Wiley & Sons, New York. 2012. Publisher Full Text\n\nHopkins AL: Network pharmacology: the next paradigm in drug discovery. Nature Chem Biol. 2008; 4(11): 682–690. PubMed Abstract | Publisher Full Text\n\nHopkins AL: Introduction: The case for polypharmacology. In Polypharmacology in Drug Discovery. Peters JU, Ed., John Wiley & Sons, 2012; 1–6. Publisher Full Text\n\nAnighoro A, Bajorath J, Rastelli G: Polypharmacology: Challenges and opportunities in drug discovery. J Med Chem. 2014; 57(19): 7874–7887. PubMed Abstract | Publisher Full Text\n\nTan Z, Chaudhari R, Zhang S: Polypharmacology in Drug Development: A Minireview of Current Technologies. ChemMedChem. 2016; 11(12): 1211–1218. PubMed Abstract | Publisher Full Text\n\nAchenbach J, Tiikkainen P, Franke L, et al.: Computational tools for polypharmacology and repurposing. Future Med Chem. 2011; 3(8): 961–968. 
PubMed Abstract | Publisher Full Text\n\nPérez-Nueno VI, Souchet M, Karaboga AS, et al.: GESSE: Predicting drug side effects from drug-target relationships. J Chem Inf Model. 2015; 55(9): 1804–1823. PubMed Abstract | Publisher Full Text\n\nLounkine E, Keiser MJ, Whitebread S, et al.: Large-scale prediction and testing of drug activity on side-effect targets. Nature. 2012; 486(7403): 361–367. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCampillos M, Kuhn M, Gavin AC, et al.: Drug target identification using side-effect similarity. Science. 2008; 321(5886): 263–266. PubMed Abstract | Publisher Full Text\n\nKuhn M, Campillos M, Letunic I, et al.: A side effect resource to capture phenotypic effects of drugs. Mol Syst Biol. 2010; 6: 343. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBarratt MJ, Frail DE: Drug Repositioning – Bringing New Life to Shelved Assets and Existing Drugs. John Wiley & Sons, New York. 2012. Publisher Full Text\n\nDimitrov JD, Pashov AD, Vassilev TL: Antibody polyspecificity: what does it matter? Adv Exp Med Biol. 2012; 750: 213–226. PubMed Abstract | Publisher Full Text\n\nVan Regenmortel MH: Specificity, polyspecificity, and heterospecificity of antibody-antigen recognition. J Mol Recog. 2014; 27(11): 627–639. PubMed Abstract | Publisher Full Text\n\nYoung DD, Jockusch S, Turro NJ, et al.: Synthetase polyspecificity as a tool to modulate protein function. Bioorg Med Chem Lett. 2011; 21(24): 7502–7504. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMartinez L, Arnaud O, Henin E, et al.: Understanding polyspecificity within the substrate-binding cavity of the human multidrug resistance P-glycoprotein. FEBS J. 2014; 281(3): 673–682. PubMed Abstract | Publisher Full Text\n\nLyons JA, Parker JL, Solcan N, et al.: Structural basis for polyspecificity in the POT family of proton-coupled oligopeptide transporters. EMBO Rep. 2014; 15(8): 886–893. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nLytvynenko I, Brill S, Oswald C, et al.: Molecular basis of polyspecificity of the small multidrug resistance efflux pump AbeS from Acinetobacter baumannii. J Mol Biol. 2016; 428(3): 644–657. PubMed Abstract | Publisher Full Text\n\nEsser L, Zhou F, Pluchino KM, et al.: Structures of the multidrug transporter P-glycoprotein reveal asymmetric ATP binding and the mechanism of polyspecificity. J Biol Chem. 2017; 292(2): 446–461. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBlass BE: Basic Principles of Drug Discovery and Development. Academic Press, New York. 2015. Reference Source\n\nBrown N, Ed: Scaffold Hopping in Medicinal Chemistry. Wiley-VCH, New York. 2014. Publisher Full Text\n\nSaha R, Tanwar O, Alam NM, et al.: Pharmacophore based virtual screening, synthesis and SAR of novel inhibitors of Mycobacterium sulfotransferase. Bioorg Med Chem Lett. 2015; 25(3): 701–707. PubMed Abstract | Publisher Full Text\n\nIyer P, Stumpfe D, Vogt M, et al.: Activity Landscapes, Information Theory, and Structure - Activity Relationships. Mol Inform. 2013; 32(5-6): 421–430. PubMed Abstract | Publisher Full Text\n\nMaggiora GM: Introduction to molecular similarity and chemical space. In Foodinformatics: Applications of Chemical Information to Food Chemistry. Martinez-Mayorga K, Medina-Franco JL, Eds. Springer International Publishing Switzerland; 2014; 1–81. Publisher Full Text\n\nLaw V, Knox C, Djoumbou Y, et al.: DrugBank 4.0: shedding new light on drug metabolism. Nucleic Acids Res. 2014; 42(Database issue): D1091–D1097. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSzklarczyk D, Santos A, von Mering C, et al.: STITCH 5: augmenting protein-chemical interaction networks with tissue and affinity data. Nucleic Acids Res. 2016; 44(D1): D380–D384. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nOlah M, Rad R, Ostopovici L, et al.: WOMBAT and WOMBAT-PK: Bioactivity Databases for Lead and Drug Discovery. In Chemical Biology: From Small Molecules to Systems Biology and Drug Design. Schreiber SL, Kapoor T, Wess G, Eds., John Wiley & Sons, New York; 2008; 760–786. Publisher Full Text\n\nKim S, Thiessen PA, Bolton EE, et al.: PubChem Substance and Compound databases. Nucleic Acids Res. 2016; 44(D1): D1202–D1213. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLiu T, Lin Y, Wen X, et al.: BindingDB: a web-accessible database of experimentally determined protein-ligand binding affinities. Nucleic Acids Res. 2007; 35(Database issue): D198–D201. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGaulton A, Hersey A, Nowotka M, et al.: The ChEMBL database in 2017. Nucleic Acids Res. 2017; 45(D1): D945–D954. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTym JE, Mitsopoulos C, Coker EA, et al.: canSAR: an updated cancer research and drug discovery knowledgebase. Nucleic Acids Res. 2016; 44(D1): D938–D943. PubMed Abstract | Publisher Full Text | Free Full Text\n\nvon Eichborn J, Murgueitio MS, Dunkel M, et al.: PROMISCUOUS: a database for network-based drug-repositioning. Nucleic Acids Res. 2011; 39(Database issue): D1060–D1066. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGünther S, Kuhn M, Dunkel M, et al.: SuperTarget and Matador: resources for exploring drug-target relationships. Nucleic Acids Res. 2008; 36(Database issue): D919–D922. PubMed Abstract | Publisher Full Text | Free Full Text\n\nJasial S, Hu Y, Bajorath J: Determining the degree of promiscuity of extensively assayed compounds. PLoS One. 2016; 11(4): e0153873. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHu Y, Gupta-Ostermann D, Bajorath J: Exploring compound promiscuity patterns and multi-target activity spaces. Comput Struct Biotechnol J. 2014; 9(13): e201401003. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nHu Y, Bajorath J: How promiscuous are pharmaceutically relevant compounds? A data-driven assessment. AAPS J. 2013; 15(1): 104–111. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHu Y, Bajorath J: Exploring molecular promiscuity from a ligand and target perspective. In Frontiers in Molecular Design and Chemical Information Science. Bajorath J, Ed. ACS Symposium Series, American Chemical Society, 2016; 1222. : 19–34. Publisher Full Text\n\nMestres J, Gregori-Puigjané E, Valverde S, et al.: Data completeness--the Achilles heel of drug-target networks. Nat Biotechnol. 2008; 26(9): 983–984. PubMed Abstract | Publisher Full Text\n\nSantos R, Ursu O, Gaulton A, et al.: A comprehensive map of molecular drug targets. Nat Rev Drug Discov. 2017; 16(1): 19–34. PubMed Abstract | Publisher Full Text\n\nKlotz IM, Rosenberg RM: Chemical Thermodynamics: Basic Concepts and Methods. 7th Edition. John Wiley & Sons, New York 2008. Publisher Full Text\n\nMilletti F, Vulpetti A: Predicting polypharmacology by binding site similarity: from kinases to the protein universe. J Chem Inf Model. 2010; 50(8): 1418–1431. PubMed Abstract | Publisher Full Text\n\nMoya-García AA, Ranea JA: Insights into polypharmacology from drug-domain associations. Bioinformatics. 2013; 29(16): 1934–1937. PubMed Abstract | Publisher Full Text\n\nMoya-García AA, Dawson NL, Kruger FA, et al.: Structural and functional view of polypharmacology. Preprint posted online 18 March 2016 (not peer reviewed). 2017. Publisher Full Text\n\nBarelier S, Sterling T, O’Meara MJ, et al.: The recognition of identical ligands by unrelated proteins. ACS Chem Biol. 2015; 10(12): 2772–2784. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKahraman A, Morris RJ, Laskowski RA, et al.: Shape variation in protein binding pockets and their ligands. J Mol Biol. 2007; 368(1): 283–301. 
PubMed Abstract | Publisher Full Text\n\nKahraman A, Morris RJ, Laskowski RA, et al.: On the diversity of physicochemical environments experienced by identical ligands in binding pockets of unrelated proteins. Proteins. 2010; 78(5): 1120–1136. PubMed Abstract | Publisher Full Text\n\nSturm N, Desaphy J, Quinn RJ, et al.: Structural insights into the molecular basis of the ligand promiscuity. J Chem Inf Model. 2012; 52(9): 2410–2421. PubMed Abstract | Publisher Full Text\n\nEhrt C, Brinkjost T, Koch O: Impact of Binding Site Comparisons on Medicinal Chemistry and Rational Molecular Design. J Med Chem. 2016; 59(9): 4121–4151. PubMed Abstract | Publisher Full Text\n\nMatthews BW: Protein-DNA interaction. No code for recognition. Nature. 1988; 335(6188): 294–295. PubMed Abstract | Publisher Full Text\n\nNewman ME: Networks. An Introduction. Oxford University Press, Oxford, UK. 2010. Publisher Full Text\n\nVan Steen M: Graph Theory and Complex Networks. An Introduction. M van Steen Publisher; 2010. Reference Source\n\nAsratian AS, Denley TM, Häggkvist R: Bipartite Graphs and Their Applications. Cambridge University Press, Cambridge, UK. 1999. Publisher Full Text\n\nYildirim MA, Goh KI, Cusick ME, et al.: Drug-target network. Nat Biotechnol. 2007; 25(10): 1119–1126. PubMed Abstract | Publisher Full Text\n\nVogt I, Mestres J: Drug-Target Networks. Mol Inform. 2010; 29(1–2): 10–14. Publisher Full Text\n\nBauer-Mehren A, Rautschka M, Sanz F, et al.: DisGeNET: a Cytoscape plugin to visualize, integrate, search and analyze gene-disease networks. Bioinformatics. 2010; 26(22): 2924–2926. PubMed Abstract | Publisher Full Text\n\nKolaczyk ED: Statistical Analysis of Network Data: Methods and Models. Springer, New York. 2009. Publisher Full Text\n\nCheng F, Liu C, Jiang J, et al.: Prediction of drug-target interactions and drug repositioning via network-based inference. PLoS Comput Biol. 2012; 8(5): e1002503. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nYamanishi Y, Araki M, Gutteridge A, et al.: Prediction of drug-target interaction networks from the integration of chemical and genomic spaces. Bioinformatics. 2008; 24(13): i232–i240. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLu Y, Guo Y, Korhonen A: Link prediction in drug-target interactions network using similarity indices. BMC Bioinformatics. 2017; 18(1): 39. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPeng L, Liao B, Zhu W, et al.: Predicting drug-target interactions with multi-information fusion. IEEE J Biomed Health Inform. 2017; 21(2): 561–572. PubMed Abstract | Publisher Full Text\n\nJain AK, Dubes RC: Algorithms for Clustering Data. Prentice Hall, Englewood Cliffs, New Jersey. 1988. Reference Source"
}
|
[
{
"id": "23317",
"date": "09 Jun 2017",
"name": "Jose L. Medina-Franco",
"expertise": [
"Computer-aided drug design",
"chemoinformatics"
],
"suggestion": "Approved",
"report": "Approved\n\nThis well-written and well-organized manuscript addresses an extremely timely topic in drug discovery.\nThe authors start by defining the basic concepts of polypharmacology and polyspecificity. Then, in a very clear and didactic manner (using nice illustrations), they propose a general and intuitive mathematical approach to quantify the degrees of both concepts. The manuscript makes clear the mathematical relationship between polypharmacology and polyspecificity (e.g., paraphrasing the authors, “the two sides of the same coin”). The new measures address to some extent the data incompleteness that is a major issue of chemogenomics data sets. As the authors point out in the Conclusions, this paper sets the ground for applying these metrics to public or private chemogenomics data sets. In particular, I found the edge-colored bipartite networks introduced in this manuscript quite innovative and clear.\n\nI strongly support indexing of this paper. Minor suggestions to further improve the manuscript:\n\nThe term “frequent hitter” related to polypharmacology can be added in the Introduction.\n\nComment on the effect of drug concentration in chemogenomics data sets. For instance, adverse drug reactions, and drug-interaction networks in general, will depend on the drug concentrations.\n\nPage 4: Include a reference for the statement: “Recent work from Shoichet’s group at UCSF …”. I believe the authors refer to the paper published in ACS Chem. Biol. 20151. 
This manuscript is not included in the Reference section of the current version.\n\nSpell out \"UCSF\" (University of California at San Francisco).\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": [
{
"c_id": "2835",
"date": "28 Jun 2017",
"name": "Gerry Maggiora",
"role": "Author Response",
"response": "Medina-Franco suggests that \"The term 'frequent hitter' related to polypharmacology can be added in the Introduction\". We chose not to include it in our initial version of the manuscript because we felt that the term was too general since it also includes drug-target interactions induced by a variety of non-specific modes of interaction that do not typically lead to genuine pharmacological responses. However, we will include a mention of it in the next version of the paper along with relevant caveats regarding non-specific modes of interaction.Medina-Franco also suggest that a comment should be made regarding the effect of drug concentration in chemogenomics datasets since, for example, \"...adverse drug reactions and drug-interaction networks in general, will depend on the drug concentrations\". While it is certainly true that drug concentration has a significant effect on biological processes it does not per se directly affect the structure of the drug-target threshold networks described in our paper because the presence of an edge between two network nodes is solely dependent on the activity value, e.g. a pKi or IC50, and the activity threshold imposed.Medina-Franco has noted that reference to the work in Shoichet's lab at the University of California at San Francisco (UCSF) is apparently missing. The missing citation is reference [43]."
}
]
},
{
"id": "23316",
"date": "19 Jun 2017",
"name": "Karina Martinez-Mayorga",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis is an original and nice contribution to the field. The authors propose a mathematical approach to analyse the relation between polypharmacology and polyspecificity, that are, as presented here “two concepts running on the same avenue”. I particularly like the idea of extracting latent information to describe relationships between the degrees of these two complementary features.\nThis work highlights the inherent complexity of biological systems providing a view of drug-target interactions as a pattern where both sides have an array of possibilities. Pattern recognition involved in the perception of odorants provides an additional example (See for instance DOI: 10.1038/81774) of the complexity involved in the recognition of ligands by biomacromoleules. It could be envisioned that the mathematical approach described in this paper will be attractive to parallel areas of biological processes governed by pattern interactions.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nNot applicable\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
},
{
"id": "23318",
"date": "19 Jun 2017",
"name": "John Van Drie",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis is as close to 'publish as is' as I've ever seen. Excellent work, well articulated, good overview of literature.\nMy only suggestion is that a paragraph at the end would be helpful, laying out the experimental implications of this theory, i.e. if this theory holds or if such analyses pan out, how would an experimentalist change their research plan?\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nNot applicable\n\nAre all the source data underlying the results available to ensure full reproducibility? No source data required\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": [
{
"c_id": "2834",
"date": "28 Jun 2017",
"name": "Gerry Maggiora",
"role": "Author Response",
"response": "I completely agree with Van Drie's comment and will include a discussion regarding the experimental implications of our work in the subsequent version of the paper."
}
]
},
{
"id": "23321",
"date": "20 Jun 2017",
"name": "Tudor I. Oprea",
"expertise": [
"Reviewer Expertise Cheminformatics",
"pharmacoinformatics",
"drug target analytics",
"drug target curation"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis subject is relevant for the drug discovery community. The theoretical approach is potentially sound, as far as I can tell - but (full disclosure) while familiar with statistics and matrices algebra, I believe someone more competent in such mathematics should judge that part of the publication.\nThe problem is genuine: Indeed, after more than a century of pharmaceutical research, it has become clear (owing to high throughput screening of large chemical libraries) that many drugs bind to multiple targets. This problem is compounded by other aspects such as tissue distribution, on- and off- dissociation constants, half-life and other pharmacokinetics parameters. Target and drug (*see below) specific elements influence the relevance of both polypharmacology and polyspecificity.\nWhich begs the question, how relevant is target polyspecificity? The authors encode \"structurally dissimilar drugs\" in their definition (see Abstract and Introduction). This in itself is a slippery slope, considering Maggiora's 2006 Commentary that similar molecules do not always share the same activity landscape. The implication being that structural similarity does not always work. So, dis-similarity would have to be defined... at the 2D level (which fingerprints)? 3D? (shape? electrostatics? etc.). 
In my opinion, polyspecificity does NOT require \"dissimilar\" in the definition.\nPolyspecificity is relevant when one considers drugs co-administered simultaneously - with the possibility of exacerbating some side-effects or, perhaps, staying \"on target\". This is likely to occur, considering that 15% of U.S. adults are likely to use 5 or more prescription drugs (aka polypharmacy). Thus, the issue of target polyspecificity is relevant and ought to be investigated more in the context of co-prescribed medications.\nThe main topic of this paper is polypharmacology. The issue of potency appears to be brushed aside, as shown in the assumption that \"drug-target interactions can be characterized as binary relations\" (see Drug-Target Relationships). This, of course, implies that Drug D1, with a Ki of 1 nM (10-9 M) has the same relevance for polypharmacology and polyspecificity as Drug D2, with a Ki of 1 mM (10-3 M). In practice, this is not likely to be the case.\nPolypharmacology is not a binary issue of binding or not binding. The bi-partite drug-target network in Figure 1, therefore, not only has nodes and edges, but edges have values: D1 binds to target T1 with potency P1, D1 binds to T2 with P2 and so on... Which would change Table 2 into something more familiar to medicinal chemists, i.e., a Structure-Activity Table.\n\nThe issue of what's \"active\" vs. \"inactive\" (e.g., Fig 3) is a somewhat subjective issue. Take for example ropinirole: \"although the anti-Parkinsonian drug ropinirole is more potent at the D3 receptor than the D2 receptor by an order of magnitude, we annotate the D2 receptor as the mechanism of action target because D2 receptors, but not D3 receptors, are expressed in the substantia nigra, the pathologically relevant tissue for anti-Parkinsonian drugs\". Our own DrugCentral entry shows other targets, such as the 5-HT1A and alpha-2B adrenergic receptors, with potency similar to D2 receptors. Is that relevant? 
Should all targets with potency below 6 (on the negative log scale) be considered \"inactive\"? The answer to these questions depends on the problem at hand.\nBy the same token, the issue of polyspecificity may be regarded differently given a target for which over 20 potent (approved) drugs are known (some Receptor Tyrosine Kinases fit this profile), compared to a target for which only 2 drugs are approved (e.g., cyclin-dependent kinases 4 and 6).\nGiven the wealth of data for drug-target interactions from a variety of sources such as ChEMBL, DrugBank, DrugCentral or GuideToPharmacology, it is recommended that real examples are used in this paper. Although \"data completeness\" remains an issue, the authors can no doubt identify a subset of 20-50 drugs, say anti-depressants or anti-psychotics, for which a wealth of in vitro bioactivity data are available through various channels, including PDSP in addition to the above.\nThat would provide clear and immediate utility to the upper and lower bounds for the degree of polypharmacology (Table 5), which would make this paper more impactful. The authors are clearly aware of this, as discussed in Conclusions...\nI found the discussion related to the limitations of network biology representations particularly interesting. Perhaps that section could be expanded...\n\n(*) Footnote. Two simple scenarios are discussed. These do not include target mutations (e.g., causing drug resistant cancers or infections), allelic variation, or other population-specific phenomena.\n\nThe target is in the CNS, but the drug itself is an ABCB1 substrate (see for example the impact of ABCB1 on CNS side-effects), or the drug lacks blood-brain barrier permeability - in which case the potency of the drug in vitro is irrelevant in vivo. The drug can have significant in vitro potency on many targets, e.g., dobutamine hits over 20 human targets according to DrugCentral. However, its half-life is 2 minutes. 
Therefore, these \"off target effects\" are irrelevant.\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Partly\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nI cannot comment. A qualified statistician is required.\n\nAre all the source data underlying the results available to ensure full reproducibility? No source data required\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": [
{
"c_id": "2830",
"date": "23 Jun 2017",
"name": "Gerry Maggiora",
"role": "Author Response",
"response": "Oprea raises the issue of \"how relevant is target polyspecificity?\" and he also takes issue with usage of the terminology \"structurally dissimilar drugs\" with regard to the concept of polyspecificity, noting that \"similar molecules do not always share the same activity landscape\". Moreover, he states that in his opinion \"polyspecificity does NOT require 'dissimilar' in the definition\". These three points are addressed in order. First, The relevance of polyspecificity is not explicitly addressed in the paper, rather the focus is on the fact that polyspecificity is closely (and mathematically) related to polypharmacology, a concept that most will agree is quite relevant to drug research. The point is that both concepts are related to one another, and as stated in the paper are metaphorically \"two sides of the same coin\". Second, the reason that 'structurally dissimilar drugs' was mentioned explicitly in the definition of polyspecificity is that polyspecificity implies multiple specificities and hence a diversity of drug structures. Third, that structurally similar molecules will interact with the same protein is generally expected, although as Oprea has pointed out above, \"similar molecules do not always share the same activity landscape\". While the 'spirit' of this quote is relevant, it is not entirely accurate since an activity landscape is associated with the target being assayed. The molecules that make up the dataset will all lie on that particular activity landscape. Although relatively rare, two structurally similar molecules may nevertheless exhibit widely different activities, an occurrence that gives rise to 'activity cliffs' on the landscape. Oprea also opines that the issue of target polyspecificity is relevant and ought to be investigated more in the context of co-prescribed medications. 
This seems like a very relevant application of polyspecificity, especially since, as pointed out by Oprea, \"15% of US adults are likely to use 5 or more prescription drugs\". Moreover, it should be noted that geriatric patients generally are on two or more times as many drugs as younger patients, making such an analysis even more desirable. Oprea brings up the important subject of potency, pointing out that it is not specifically addressed in the formulation presented in the paper. For example, he states that \"The bipartite drug-target network in Figure 1, therefore, not only has nodes and edges, but the edges have values...\". While this statement is technically correct, with regard to the paper the point he is making is incorrect since it is addressed, albeit in limited fashion, by the fact that the bipartite networks employed in our work are threshold networks. Choosing activity values that are greater than or equal to a given threshold value, say 100 nM, ensures that all of the edges in the network correspond to drug-target pairs of reasonable activity with respect to the targets assayed. Hence, Oprea's point \"that Drug D1, with a Ki of 1 nM has the same relevance for polypharmacology and polyspecificity as Drug D2, with a Ki of 1 mM\" is not correct. For example, for a threshold value of 100 nM an edge would be drawn between Drug D1 and its target, while no edge would be drawn between Drug D2 and the same target. It is true, however, that not dealing explicitly with drug-target activity values results in a loss of information, but this can be accounted for if one uses weighted bipartite networks, which are more complicated and require a higher level of theory. Hence, we chose to explore the issues associated with drug-target networks using the simplest level of theory first, but we intend to deal with weighted drug-target networks in a future publication. 
Oprea raises a number of important issues regarding the inherent subjectivity of interpreting drug activity with respect to different targets. He makes the point that receptors with lower activity for a given drug may, nevertheless, be more pharmacologically/biologically relevant than other receptors for which the drug has a higher affinity. His point is well taken and, no doubt, needs to be addressed when assessing the pharmacological/biological relevance of a particular drug-target interaction. However, doing so is a much more demanding task than identifying putative polypharmacologies and polyspecificities and requires significant additional information on pathways, biopharmaceutical properties, and drug metabolism. Our aim in this paper was merely to address drug-target interactions within an in vitro setting. An additional complication in the study of drug-target interactions, most of which involve in vitro and ex vivo experiments, is that the information in drug-target databases is very heterogeneous since it is made up of data obtained from a wide variety of different sources. As noted in the paper, even experiments carried out in the same lab using the same experimental protocol on different days can result in significantly divergent experimental values. Further complicating this issue is the fact, also stated in the paper, that a growing number of values are obtained computationally. All of these factors conspire to raise the uncertainty of the information used to construct drug-target networks. Lastly, it is assumed, mostly tacitly, that drug-target interactions, which are generally determined in in vitro and ex vivo experiments, can be used to interpret complicated in vivo biological phenomena. However, this must be done with caution. 
For example, protein-protein interaction data are used in the construction of biological pathways, when in fact most such data are determined in in vitro experiments that are far removed from the context in which the pathways reside. Nevertheless, while such data may be problematic in some cases, they can be useful in advancing our understanding of the biological functionality of many processes taking place in living systems, with the caveat that care must be used in drawing inferences from such potentially problematic data. Oprea suggests that data from such databases as ChEMBL, DrugBank, DrugCentral, or GuideToPharmacology be used to construct an example from 'real data'. This is an excellent suggestion and one that we are currently working on. There are two main issues that we wanted to highlight in the paper: (1) the relationship between polypharmacology and polyspecificity and (2) the development of a method for estimating error bounds for drug-target network parameters such as the degrees of polypharmacology and polyspecificity. Hence, we focused our attention on the mathematical relationships that exemplify these network properties, and we left the development of actual examples for future work."
}
]
}
] | 1
|
https://f1000research.com/articles/6-788
|
https://f1000research.com/articles/6-786/v1
|
06 Jun 17
|
{
"type": "Research Article",
"title": "Community and Code: Nine Lessons from Nine NESCent Hackathons",
"authors": [
"Arlin Stoltzfus",
"Michael Rosenberg",
"Hilmar Lapp",
"Aidan Budd",
"Karen Cranston",
"Enrico Pontelli",
"Shann Oliver",
"Rutger A. Vos",
"Arlin Stoltzfus",
"Michael Rosenberg",
"Hilmar Lapp",
"Aidan Budd",
"Karen Cranston",
"Enrico Pontelli",
"Shann Oliver"
],
"abstract": "In recent years, there has been an explosion in the popularity of hackathons — creative, participant-driven meetings at which software developers gather for an intensive bout of programming, often organized in teams. Hackathons have tangible and intangible outcomes, such as code, excitement, learning, networking, and so on, whose relative merits are unclear. For example, a frequent complaint is that code is abandoned when the hackathon ends, and questions like, “which outcomes are produced most reliably?” and, “how valuable are they for participants, organizers, and sponsors?” remain open. As a first step in giving “hackology” a more rigorous footing, this paper describes the NESCent hackathon model, developed over the course of a decade to serve the academic discipline of evolutionary biology, with the dual goals of augmenting the community’s shared software infrastructure, and fostering a diverse community of practice in open scientific software development. The paper presents a detailed guide to staging a NESCent-style hackathon, along with a structured information set on nine events involving 54 team projects. NESCent hackathons have produced tangible products with downstream impacts, including prototypes that were leveraged for major funding, incremental additions to production code bases, and creative drafts (designs, standards, and proofs-of-concept) that contributed to further work. The impacts of intangible outcomes could not be assessed objectively, but the subjective experience suggests that hackathons have a positive impact by (1) providing individuals with valuable experiences, networking, and training, and (2) fostering a more cohesive community of practice by enhancing awareness of challenges and best practices and by building links of familiarity between and among resources and people. 
Future research that recognizes the diverse outcomes of hackathons might enable evidence-based decisions about how to design hackathons for effectiveness.",
"keywords": [
"hackathon",
"programming",
"software development",
"scientific software",
"NESCent"
],
"content": "Introduction\n\nHackathons (also called hackfests or codefests) are short-term software development events that emphasize spontaneity and collaboration, bringing together developers, and sometimes end-users, with the goal of innovative software development, often in conjunction with other objectives such as fostering a community (i.e., building a stronger “community-sense”1), or drawing attention to particular data or services. Since the early 2000s, hackathons have become increasingly popular (Figure 1) - including across academic, non-profit, corporate, and government sectors - with events focused on a variety of topics, such as bioinformatics2, promoting open data3, medical education4, and healthcare informatics5.\n\nValues are relative to the highest point on the chart, thus the week with the greatest search interest in the term receives 100% and other weeks are scaled accordingly.\n\nAlthough a lot of information on hackathons can be found online - including various guides6,7 and reports on specific events - there is very little academic, peer-reviewed literature on the topic. Of the small amount of published work available, most consists of reports on specific hackathon events, some of which are short4, while others go into depth about the technical products of the event rather than the process2,8–11. Others are news and opinion pieces12–15. Only a few published sources are based on systematic methodology such as surveys, interviews, or organizing structured data16–20.\n\nThus, in spite of the popularity of hackathons, there is currently no systematic basis for evidence-based approaches to planning or organizing a hackathon. For a prospective organizer, the immediate practical question is how best to carry out a hackathon. 
If we assume that a hackathon is typically carried out with the intent to maximize its benefits to its sponsors and its participants, then the question of how to conduct a hackathon requires understanding these benefits, and more generally, understanding why hackathons are carried out at all.\n\nIt turns out that there is no clear consensus on exactly how hackathons bring value to participants or sponsors. For example, although the obvious expected outcome of a software development hackathon is software, organizers frequently note that these events generate prototypes, not products used after the event21,22.\n\nIf the source code generated at hackathons is rarely used, then why are hackathons so popular? One possibility is that, even if only a small fraction of code remains useful, this small fraction may still justify the event. Another possibility is that the benefit of hackathons arises partly or largely from less tangible outcomes. When a hackathon is focused on utilizing a sponsor’s newly released API, the event may uncover bugs, or bring valuable exposure to the sponsor’s resources or products (e.g. as in some of the hackathons described in 22). Even if a prototype developed at a hackathon is never used, the developers may leave the event with the experience and confidence to build a similar (perhaps improved) implementation later. Participants may benefit from gaining technical skills, from sharing best practices, and from making connections with colleagues, i.e. professional networking. For example, participants of the BioHackathon series of events23–26 are strongly encouraged by the organizers to connect with each other on social networks, such as LinkedIn.\n\nNot only direct participants themselves, but also the community they belong to may benefit from discussions and interactions that spread technical knowledge and create a shared awareness of domain-specific challenges, opportunities, and best practices27. 
The expectation of stimulating creativity and building camaraderie seems to be one of the motivations of internal hackathons (e.g., 22). In addition, participating in a community event may promote “collaborative learning”, which is one of the top two reasons for attending a hackathon, according to participants cited in a recent publication28, the other reason being networking.\n\nArguably, how a hackathon event is organized and executed will affect how the beneficial outcomes of hackathons, tangible and intangible, are enhanced or diminished. Indeed, hackathons vary in many ways, even within the broad categories of corporate, community, and internal hackathons13. They may be one-off events29, or a series that repeats yearly23–26 or even more frequently15. The event may last a single day (e.g., 12), an entire week9, or longer18. The number of participants may range from a few dozen (e.g., 8, 27) to hundreds. Some events offer prizes14,30. There is considerable variety in how development targets are determined (e.g., 5) and how teams are formed19,31. Some events are carefully planned for months10, while others emerge more spontaneously.\n\nHackathon organizers frequently establish a process to engage participants in learning, socializing, or brainstorming prior to the event10,29,32,33. For most hackathons there are no planned follow-up activities, but in some contexts (e.g., internal hackathons), resources may be set aside to build on promising outcomes15,34. In light of the extensive variability of hackathons, better information - and ultimately, systematic studies - on hackathon practices, outcomes, and impacts will be needed to better understand how and why to conduct a hackathon. 
To begin laying the foundations for a more systematic understanding, we offer a description and analysis of a series of relatively well-documented hackathons sponsored by the erstwhile National Evolutionary Synthesis Center (NESCent), an academic research center in the USA funded by the US National Science Foundation (NSF). Over a 10-year period, NESCent sponsored nine hackathons focused on software development to improve interoperability of software and data in the domain of evolutionary biology (comparative analysis, phylogenetics, etc.) (Table 1). Each event was planned by a leadership team whose membership intersects with the set of authors of this work, that is, each team included at least one of us, and most included several of us. The events all followed a common model for process and format, including length (4–5 days) and size (roughly 30 participants). The hackathons were designed both to develop tangible products and to foster a community of practice35,36.\n\nIn the remainder of this paper, we begin with a detailed guide to the NESCent hackathon model, including the organizational process, and the motivations behind chosen practices. Then we describe known outcomes and impacts of the nine NESCent hackathons held, and reflect on some of the lessons learned as organizers and participants. Though our results on outcomes tend to confirm the sense that hackathon teams rarely produce novel prototypes that go on to be used, they often make incremental additions of code and documentation to production codebases that remain in use. In the rare event that novel prototypes and designs do contribute importantly to future work, the impact can be disproportionately large. Several hackathon projects led to publications, and two led to funding that exceeded the total cost of the hackathon by two orders of magnitude. 
Regarding intangible outcomes, although we lack sufficient data to draw firm conclusions, participants in NESCent hackathons seem to value the coding experience; they will have gained experience in problem-solving and teamwork, acquired training in supportive technologies, improved their knowledge of best practices and awareness of resources, and had opportunities for personal networking. NESCent hackathons also seem to build community by building operational links between community resources, creating excitement and a common focus of attention, and fostering cohesion and awareness with regard to best practices and domain-specific challenges.\n\n\nMethods\n\nThe hackathons we describe (Table 1) were sponsored mainly by NESCent. As a consequence of the sponsor’s commitment to open science, a large amount of information on NESCent hackathons was public from the outset. Agendas, slide decks, and other documents were developed and shared on public wikis; event rosters were shared publicly; teams prepared reports using public wikis, and were expected to share code in public source-code repositories. Most of this information has remained accessible on the web subsequent to NESCent’s closure in May of 2015. From these sources, we have gathered a systematic set of data on NESCent hackathons, including data on (1) nine events (name, dates, scope, location, etc.); (2) 54 projects (titles, descriptions); (3) 148 products (mostly team reports and repositories); and (4) numbers of participants (207 in total).\n\nThe vast majority of information on the nine hackathons (time, place, theme, participant roster) and their team projects (goals, repositories, team reports) is available from public resources (e.g., wikis, code repositories). We also contacted participants to fill in gaps in this knowledge. 
In passing, we note that the quality of the available documentary record on NESCent hackathons decreases as one goes back in time, even beyond what one expects from the decay of records over time. It appears that participants in later hackathons were simply more effective at documenting their work, and organizers became more experienced in recognizing and emphasizing what kind of information needed to be documented. For example, the wiki for the first NESCent hackathon (phylohack, see Table 1) contained a relatively large amount of detailed planning information prepared before the event, but few specifics about what happened at the event.\n\nIn some cases, the interpretation of this source material requires judgment and domain knowledge, e.g., when a hackathon team did not provide a succinct statement of purpose or goals, we constructed a statement from the materials available, drawing on our recollections and our domain knowledge. In some cases the records left by participants made it difficult to distinguish prospective plans from actual accomplishments, and in these cases, we used our best judgment.\n\nWhile the information on events, projects, and participants is comprehensive (in the sense of describing - albeit incompletely - all events, projects and participants), we could not create a comprehensive list of products. This is partly because some products emerge long after the event, but also because teams sometimes produce several distinct products, but do not document all of them. The products that were easiest to find were (1) a team’s report or activity log, as these were nearly always linked to the main web page for the event, and (2) the main code repository for a team.\n\nIt is much harder to track follow-on products. Examples of follow-on products include participants giving a talk at a conference, posting a blog, publishing a paper, or submitting a grant proposal based on hackathon outcomes or activities. 
To better characterize outcomes, we explored at greater depth a randomly selected set of nine projects, one from each of the hackathons. For each one, we sifted through online information and conducted a preliminary assessment of outcomes and impact, then contacted a member of the original team to review the assessment and obtain further information before settling on a list of outcomes and impacts. The entire dataset, as well as other supplementary material, is available at https://nescent.github.io/community-and-code/

In this manuscript, individuals referred to as instigators conceive the event and secure funding; sponsors support the event with funds and other resources such as space, logistics, and IT staff; organizers take part in planning and making arrangements; facilitators manage group activities during the hackathon; trainers provide training on Day 1; and participants take part in Day 1 activities and join a hackathon team. A single person may play multiple roles over the course of a hackathon.

NESCent hackathons were five days long (in one case, four). For participants, the event was the main focus of attention and activity for its duration. For organizers, by contrast, the hackathon was the culmination of a process that began months earlier, when one or more instigators solicited support from sponsors and assembled a Leadership Team (LT) of organizers to carry forward the planning process, recruit participants, and make all arrangements for a successful, well-facilitated event with sufficient training opportunities.

Table 2 provides a typical timeline of steps in the process, which might be accelerated in other contexts. Figure 2 illustrates the flow of the planning process.
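Because a single person may play multiple roles over the course of a hackathon, a roster is most naturally recorded as a mapping from person to a set of roles. This is an illustrative sketch only; the names and the modeling choice are ours, not part of the event records.

```python
from enum import Enum, auto

class Role(Enum):
    """Role taxonomy as defined in this manuscript."""
    INSTIGATOR = auto()
    SPONSOR = auto()
    ORGANIZER = auto()
    FACILITATOR = auto()
    TRAINER = auto()
    PARTICIPANT = auto()

# A single person may play multiple roles (hypothetical names).
roster = {
    "alice": {Role.INSTIGATOR, Role.ORGANIZER, Role.FACILITATOR},
    "bob": {Role.TRAINER, Role.PARTICIPANT},
    "carol": {Role.PARTICIPANT},
}

def people_with_role(roster: dict, role: Role) -> list:
    """Everyone who fills a given role, in sorted order."""
    return sorted(name for name, roles in roster.items() if role in roles)

print(people_with_role(roster, Role.PARTICIPANT))  # ['bob', 'carol']
```

Using sets of roles rather than a single role per person keeps queries like "who facilitated?" simple without forcing each person into one pigeonhole.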
We do not provide complete guidance for prospective organizers in this article, but instead refer them to the Concise Guide, a set of succinct instructions for planning and executing a NESCent-style hackathon, at https://nescent.github.io/community-and-code/doc/. These guidelines represent our current recommendations, based on practices that have evolved over a decade. The guide is supplemented with other materials, such as sample advertisements and application forms, all available in the online repository mentioned above.

The first steps (yellow) are taken by an informal group of instigators. Subsequently, a leadership team (LT) finalizes the pre-planning process (cyan), at which point the recruitment process starts. Inviting potential participants, reviewing and ranking their applications, and finalizing the roster are time-sensitive and labor-intensive steps (green), which lead up to steps in which both the LT and invitees participate (fuchsia): planning the logistics and the actual substance of the event, including any follow-ups. As a final step, the LT reports back to any sponsoring organizations.

Funding. Hackathons began with instigators who secured support based on a vision for a successful hackathon. These instigators typically came from NESCent’s informatics staff or from one of two NESCent “working groups” (i.e., periodically convening collaborations among in-house NESCent staff and extramural researchers). NESCent was the sole or lead sponsor for most of the nine events. Sometimes grant-funded projects critical to the success of the hackathon offered support, with the understanding that project staff would participate in the hackathon. For example, the Phylotastic hackathon, which received support from the NSF-funded iPlant project, was staged at iPlant’s home institution and included a number of iPlant staff. The typical budget for a hackathon was $25,000 to $30,000 USD, nearly all of which was spent on travel.
Meeting space was arranged with sponsor organizations at no cost. Sponsors also provided logistics support to arrange travel, as well as on-site IT staff.

Planning. To initiate planning, the instigators recruited a leadership team (LT) of around five to seven organizers. To complete the entire planning process from scoping to finalizing a roster (i.e., the steps marked in green in Figure 2), the LT typically met five to eight times for hour-long teleconferences or videoconferences, over a period of two to three months. The motivation for having a large team of organizers was partly to broaden decision-making, and partly to spread out the burden of making meeting arrangements, drafting advertisements, reviewing applications, and so on, in the absence of support staff.

LT members were chosen based on expertise, willingness to “think big”, diversity, and expected effectiveness in hackathon planning. They were given an estimate of the work hours expected (roughly one to two hours per week over the organizational period). Those who agreed to take part often had a keen interest in the topic of the hackathon and its potential to advance their individual goals; for the team of recruited organizers (who are distinct from the instigators) to take true ownership of the project, they had to be allowed to re-think its scope and goals. Difficulty in assembling a committed LT, or in reaching closure on scope and goals, indicated a weakness in the instigators’ vision (see “Lessons learned” section).

Next, the LT decided on a preferred set of supportive technologies for version control, shared documents, and communication. This made it easier for teams to collaborate, and for the LT to track progress and ensure that all hackathon products were readily accessible. These choices changed over the years with changes in technology; e.g., early hackathons used SourceForge or Google Code repositories, while more recent ones used GitHub.
We used many technologies for creating and editing shared documents, including MediaWikis, Google Docs, Mozilla’s Etherpad, GitHub documents, and others. In some cases, the use of a consistent document strategy resulted in a rich online record with links to code, screencasts, live demos, slides, etc.

The choice of communication strategies mattered most before and after the hackathon. Email lists were an effective way for organizers to convey plans, and also provided a forum for discussions during the pre-event engagement stage. NESCent organizers created two email lists that were used for multiple hackathons, with new participants added and prior participants retained. Communication technologies were also important to consider during a hackathon when remote participants were to be supported (see the Concise Guide at https://nescent.github.io/community-and-code/doc/).

Recruiting participants. Participants were either chosen from a pool of applicants responding to an open call for participation, or chosen directly and offered a seat at the hackathon. The open call (and any advertisements) was disseminated via email lists and websites that reached the target community, as well as by word of mouth in emails to colleagues. Sample advertisements are included in the Supplementary Material. The open call was a way to reach out broadly and engage unexpected members of the targeted community.

Over time, we relied less and less on direct invitations. In the most recent hackathons, we did not offer seats directly to anyone other than those organizers who wished to participate (organizers sometimes declined to participate so that someone else could attend). Instead, individuals targeted for participation (for technical or diversity reasons) were personally invited to apply.
This made the process more democratic, at the cost of occasionally not choosing someone who had been invited to apply.

The application process was typically simple, though reviewing applications was the most time-consuming step in the organizing process. Over the years we developed a simple application form, implemented as a Google form that allows online entry of information into a spreadsheet. Sample applications are available at https://nescent.github.io/community-and-code/doc/sample_applications, along with a link to a template that can be used to create an online form. Beyond basic information, we did not ask for much (see the Concise Guide at https://nescent.github.io/community-and-code/doc/ for an exact list). Two key parts were a statement describing qualifications (ideally with references to tangible accomplishments), and a statement of goals or aspirations for the hackathon.

Applicants for open seats were ranked according to their estimated impact on the success of the hackathon, taking into account that success requires teamwork, and may benefit from homogeneity in some areas (e.g., having a critical mass of people working on a particular topic) and heterogeneity in others (e.g., mixing users and programmers together).

Facilitating. Typically, two or three of the organizers served as meeting facilitators to guide participants through the hackathon process. In the weeks prior to the hackathon, we engaged participants with the aim of raising their comfort level: introducing supportive technologies, providing a forum for discussion of ideas, and identifying gaps in technical or scientific knowledge. Our strategy and level of effort varied greatly; for example, at the repeat hackathon Phylotastic 2, less lead-up on supportive technologies or discussion of ideas was needed because these had already been established at the first one.
In contrast, at the Database Interoperability Hackathon, novel technologies such as RDF were introduced. With repeated efforts and considerable prodding, organizers could get nearly everyone to join a mailing list or teleconference and introduce themselves to each other. In the more recent hackathons, we encouraged discussion via a GitHub issue list, which required participants to sign up for a GitHub account if they did not have one already.

As many attendees were new to participant-driven meetings, facilitators repeatedly stressed that the event belongs to the participants and the teams they form; that each participant belongs at the meeting; and that each has a responsibility to engage and become part of a team where they are either contributing or learning.

The first day of NESCent hackathons consisted entirely of structured activities (see Figure 3). After a welcome and introductions, the organizers arranged for technical presentations on topics chosen based on the scope of the hackathon and the results of pre-event engagement. For instance, the TreeForAll hackathon focused on leveraging OpenTree’s new web-services API, so the organizers arranged for OpenTree staff to describe the API in the opening session of Day 1.

After these presentations, participants engaged in an open discussion of ideas and challenges, with the aim of identifying a sufficient number of feasible project ideas that aligned with the scope. The facilitators then invited brief “pitches”: project ideas proposed for broader adoption. Most pitches were anticipated based on earlier discussions. In practice, they often came from more senior people (including organizers) with a more confident sense of which projects would have an impact.

The champion for each pitch then created an impromptu poster. Participants were free to wander around the room, discussing pitches, offering suggestions, and deciding how to fit in.
At this stage, the potential fit of a participant to a project is not like the fit of a key to a pre-existing lock, because the definition of the project is still in flux. Except in one instance in which the process carried over to the next day, the first day ended with a set of five to seven hackathon teams, each committed to a project with recorded goals.

The guidelines provided at https://nescent.github.io/community-and-code/doc/ outline the space requirements and room configuration for this team-development process. Some space configurations are inadvisable. A room with fixed stadium seating, for instance, is unsuitable, no matter how large. Other room configurations tend to create or amplify inequalities; e.g., a room with a single table large enough for most, but not all, participants will leave some without a seat at the table. What works well is a configuration in which all team members can sit at the same table and interact easily, without too much cross-talk from other teams.

The remaining days of the hackathon were spent with teams working on their individual projects, with pre-determined times for plenary sessions to hear team reports or “stand-ups”. The stand-ups were meant to be short and generally happened only once a day, to avoid spending too much time on updates. On the final day, stand-ups were replaced by a final plenary session to wrap up the meeting, typically including final team reports along with discussion of possible products that might be achieved with minimal effort after the hackathon (e.g., publications, presentations at conferences, commits to codebases). Some wrap-up sessions included more general discussions about long-term follow-ups (e.g., identification of potential funding sources that would enable scaling up some of the development efforts).

Organizers typically carried out follow-up activities after the hackathon.
They ensured that travel reimbursements were made, and they produced a report on the hackathon, which ranged from a blog post to a manuscript for publication. Generally, very little could be expected of participants once they left the hackathon and went back to their “day jobs”. However, the organizers sometimes interacted with participants to follow up on projects, e.g., to make sure that a team’s report would be made available on a public web site.


Results and discussion

The results of a hackathon (Table 3) can be separated by outcome. For present purposes, we define outcomes as direct results of the activities of hackathon participants, whereas impacts are defined by how these outcomes penetrated the larger world. We distinguish outcomes of the hackathon itself from outcomes of follow-on activities by participants. Outcomes may also be tangible or intangible. For instance, code is a tangible outcome of a hackathon that can be counted (e.g., as lines of code, or number of functions or objects), and the impact of the code can be assessed in terms of the number of times it is invoked in a production setting or mentioned in online discussions. Typically, these outcomes result from the efforts of a specific hackathon team, but some outcomes result from the event as a whole. Thus we distinguish below between project (team) products (PP) and event products (EP).

As mentioned in the Methods section, we took a closer look at nine projects (one chosen at random from each hackathon). The remarks below illustrate what it looks like for hackathon products to have an impact. Additional cases are included wherever we happen to have knowledge of their outcomes and impacts.
Of course, in those cases, our information on impact will not be systematic.

Tangible hackathon outcomes included, in rough order of decreasing frequency: (1) new code repositories and incremental additions to existing code; (2) documentation; (3) designs, standards, and schemata; (4) installations; (5) data products; and (6) community infrastructure.

Typically, the main product of a hackathon team is computer code. Hackathon code often represents a new project in a new repository. For instance, the last three events (the two Phylotastic hackathons and the OpenTree hackathon) produced 15 new Bitbucket or GitHub repositories (PP#1 through PP#14, PP#17). The “Integrating Ontologies” group (project 18) from the VoCamp produced several distinct standalone products (PP#18 to PP#22), most representing an integration of an ontology with data, or with another ontology. Sometimes, developers familiar with an existing software package make additions or improvements that become part of production code. For instance, the first hackathon targeted improvements in existing toolboxes, including BioPerl, BioRuby, BioPython, and Bio::Phylo (a Perl package separate from BioPerl, Vos et al.37). Other examples are additions to Phylomatic (PP#119), Forester (PP#123), CDAO (PP#125), and geiger (PP#27). (The identifiers referenced here are primary keys in the data tables that we provide at https://nescent.github.io/community-and-code/data/; specifically, the items listed here are from the project_products table, hence their primary keys have the “PP” prefix.)

Of the repositories developed for the OpenTree hackathon, only two remained active after the hackathon, both from the team that developed library wrappers for OpenTree services. One of the active repositories (PP#140) has an innovative test system that uses the same interface to test Python, Ruby, and R libraries.
The other is the repository for the R library rotl, which we discuss further in the next section.

In the case of incremental additions to existing codebases, the impact of any new code is difficult to judge, unless the code adds distinctly new features whose use can be tracked. For instance, the “Integrating Ontologies” team mentioned above added useful features to a previous hackathon product, an XSLT stylesheet for translating between NeXML and CDAO. This enhanced version is still in use in the production version of TreeBASE to provide trees in an RDF-XML format.

Hackathon teams sometimes produce documentation, though much less commonly than code. Sometimes this took the form of screencasts illustrating prototypes. Of two Phylotastic screencasts, one (PP#115) has received 420 views and the other (PP#130) 200 views (at the time of writing). Perhaps a more useful documentation product is the “phylogenetics” task view (event product EP#15) on the Comprehensive R Archive Network (CRAN), which provides a concise synopsis of available R packages for phylogenetics. This documentation continues to be updated (most recently 2017-04-09), but we have no way of knowing how frequently it is used.

Sometimes, the main product of a team is a design or schema. Team #52 (“skelesim”) from the R popgen hackathon aimed to integrate several different simulation packages in a common framework; this goal proved far too ambitious, and the group had only a design when the hackathon ended (PP#112). Another example of a team tackling a difficult challenge was the work of Team 20 on “phyloreferencing”, essentially a topological query language for trees. This work was important in subsequently securing major funding (PP#139).

Some additional kinds of products are rather infrequent. A unique tangible outcome of the first R hackathon was the development of an email list that is still in use today, the r-sig-phylo list (EP#4), which we mention further below.
Though it sometimes happens (e.g., EP#9), hackathon teams rarely provide a public-facing demonstration, because these require a high level of completion and extra effort. One hackathon team worked on a data product, consisting of an annotated set of high-value phylogenetic trees (PP#6). The challenge was to develop a completely machine-readable scheme of annotation based on available ontologies. The trees were not used subsequently.

Beyond the code produced directly at the hackathon, tangible outcomes may continue to develop after the event is over. In fact, when a hackathon product has a major impact, this is usually due to follow-up work by participants. The most common follow-on products are (1) demonstrations and production code, (2) communications such as blog posts or meeting presentations, (3) manuscripts for publication, and (4) proposals for funding to support further work.

Sometimes an individual participant continues working after an event, either to finish a specific product, or simply acting on a burst of enthusiasm. An example of the latter is an enormous spike of 238 commits to DendroPy by a single individual in a month-long period beginning a week before event 6 and continuing for two weeks after it (PP#134). We also identified some cases in which individuals developed a formal communication, such as a blog series (e.g., EP#11) or a meeting presentation (e.g., EP#16, EP#17).

More commonly, follow-up activities emerge in the context of a group commitment to continue working together. The “PhyloGeoTastic” team did not finish their implementation at the hackathon, but completed it afterwards so that a live demonstration (PP#4) would be available (although this demo has subsequently gone offline). Two of the events produced published reports that included all of the hackathon participants as authors (EP#1, EP#10;8,9).
In both cases, the process of writing and submitting the articles took many months and was driven and managed by the organizers, reflecting an uneven commitment, with some individuals contributing much more than others. The three participants at the OpenTree hackathon who worked on the R library, now called “rotl”, developed this product mostly after the hackathon (with hundreds of commits), leading to a publication38 and a package that has already been used in a subsequent scientific study39.

Occasionally, participants pursue funding for more extensive follow-ups. Several projects led to Google Summer-of-Code proposals (EP#12, EP#13, PP#137), two of which were funded. The “skelesim” group at the R popgen hackathon later wrote a proposal (PP#129) that won funding for a four-day meeting to continue their work several months later. Two participants in the “phyloreferencing” group at the VoCamp eventually wrote a grant proposal (PP#139) and secured three years of funding for that project. The two Phylotastic hackathons also led to a successful proposal for major funding (EP#8). Viewed as proposal germinators, then, hackathons have rather high impact: the total award amount for the two National Science Foundation grants is approximately $7.5 M, whereas the total amount spent on the nine NESCent hackathons was roughly $250,000. Of course, one cannot calculate a return on investment from these two numbers alone, because such a calculation ignores the significant amount of post-hackathon work required to write a proposal.
However, if a grant proposal typically results from three modestly paid academics working quarter-time for three months, and there is no large number of failed proposals that we are not counting (we know of no failed major proposals based on NESCent hackathon products), then accounting for this work does not change the overall impression that hackathons are a good investment.

Other authors have pointed out that hackathons are highly social events that provide opportunities to build relationships19,28 and to experience excitement around shared motivations16,40. However, such intangible outcomes are difficult to document. In some cases, an intangible outcome is apparent because it has a tangible impact. For example, while the r-sig-phylo mailing list (EP#4) was a direct outcome of the Comparative Methods in R hackathon, the mere existence of a list did not guarantee that it would be used subsequently, nor that it would garner any new subscribers beyond the initial set of 28 participants. Eight years later, however, the mailing list has 1155 subscribers (as of December 5th, 2016) and generates approximately thirty to sixty messages per month. From this, we would argue that the hackathon helped to nurture a community of practice as an intangible outcome.

Another example of a hackathon fostering a new community as an intangible outcome is the follow-up to team #52, whose six members, most of whom had not worked together before, became sufficiently motivated to obtain funding for a second, four-day face-to-face meeting (mentioned above), and then to meet virtually on a biweekly basis for eight months in order to finish a project and submit a manuscript on it. Likewise, two of the authors of the present manuscript (AS, EP) are leaders of an ongoing Phylotastic project, and several hackathon participants are consultants.
Yet, of the many code repositories developed by teams participating in the two Phylotastic hackathons, only one remains in active development (PP#2 for DateLife, part of the funded project), while a second repository is maintained for providing web content. The continuity between the hackathon and the current project is primarily a continuity of people, plans, excitement, and working relationships, not a continuity of code. In a somewhat similar way, the web-services interface to TreeBASE written at the third hackathon (see PP#37) was not used in TreeBASE, but the main author later wrote a production version based on the initial implementation. The intangible outcome in this case was the knowledge, and the confidence, that a particular problem could definitely be solved.

We would argue from such examples that the experience of a hackathon results in intangible outcomes that sometimes yield tangible benefits. The intangible outcomes include various forms of technical learning, the development of a shared awareness (e.g., of what is technically possible), and the building of new relationships. Participants seem to understand this: Briscoe et al.28 report survey results indicating that the top two reasons for participation in hackathons are “learning” and “networking.” Again, we cannot document these intangible outcomes in a direct and rigorous way, but we suggest that some of the following are worth considering:

Technology learning: NESCent hackathons often relied on a recommended set of assistive technologies. Many participants learned these technologies for the first time, e.g., obtaining GitHub accounts and learning how to use GitHub. They also learned about specific resources (e.g.
code libraries) while participating.

Exposure to best practices: In many cases, hackathons provided scientific programmers with critical exposure to best practices widely accepted among professional programmers, such as using collaborative versioning systems, writing documentation, and running automated unit tests.

Opportunity to learn: In some cases, the goal of a group was simply to learn by doing. For example, the “integrating ontologies” group at the VoCamp did not have a functional goal in mind, but aimed to do hands-on work in order to learn how to build bridges between ontologies.

Team programming experience: Obviously a hackathon provides the actual experience of coding, but the team-based aspect of this experience is often novel: scientific programmers frequently work alone. For many, the chance to discuss designs and develop code with a team or as a pair (“pair programming”) is a rare opportunity.

Awareness of technical challenges and opportunities: Discussion and information-sharing often had the effect of promoting a shared understanding of technical challenges and opportunities. This is vital in a technology landscape that is constantly changing, especially in the evolutionary informatics community, which is a small and dispersed one.


Lessons learned

In this paper, we describe nine hackathons that we co-organized and participated in, in varying teams, over the course of roughly a decade. After the last of these hackathons, we re-convened at NESCent in a separate meeting to discuss and summarize our experiences. Over the course of this meeting, during our email discussions afterwards, and during the writing of this paper, we developed a synthesis of “lessons learned” that we all agree are key to organizing a NESCent hackathon. In this section we discuss these lessons.

Lesson 1: Choose a clear yet flexible theme.
In our experience, a well-chosen theme (1) leverages the skills and interests of likely participants in such a way that the projects that emerge serve the goals of the hackathon as identified during the initial scoping (and align with the interests of sponsors); and (2) allows abundant flexibility for participants to exercise creativity and maximize the value of their participation, including their desire for learning and networking28. We are inspired by the OpenSpace philosophy41 that a theme “must have the capacity to inspire participation by being specific enough to indicate the direction, while possessing sufficient openness to allow for the imagination of the group to take over”. The importance of having a well-defined problem or theme that is communicated effectively to participants has also been stressed by Mohajer Soltani et al.18. Others have suggested that the hackathon should balance a sponsor’s desire for tangible outcomes with the participants’ desire to learn42.

In our experience, the scope of a hackathon typically emerges after organizers have reflected on community needs. This sometimes involves pre-event discussions with participants, like the use-case-driven approach in 8,33. The scope typically has both a technological and a thematic aspect. For instance, in the case of the two R hackathons, the technological constraint was the use of R, and the domain of application was either population genetics or comparative phylogenetic methods.

Our choices of scope were not always ideal. In the case of the VoCamp, the scope was loosely focused on the intersection of evolutionary biology with “ontologies and controlled vocabularies”. With a theme that is too broad, the pre-pitching discussion is diffuse and there is little reason to value one idea over another, which makes it less likely that strong teams will emerge. For an entirely different reason, the second Phylotastic hackathon was also less successful.
The first Phylotastic hackathon took a good idea (to create an ecosystem of web services that deliver time-calibrated subsets of the Tree of Life) and turned it into a prototype, which created a large amount of excitement in the community. The second Phylotastic hackathon had essentially the same theme, which meant that to be successful, projects had to go beyond prototyping, yet the drawn-out process of analysis and design that this requires had not yet happened.

Lesson 2: Build the right leadership team. Leadership team members are often selected from among more established, senior researchers. The benefit is that they have a greater awareness of the community and may provide better guidance in scoping the problem and in identifying effective participants. On the other hand, more senior researchers tend to have an extensive agenda of commitments, which detracts from the dedication required to organize an event that is highly focused, intense over an extended period, and guided by rigid deadlines. LT members need to be available for regular meetings (e.g., weekly or bi-weekly teleconferences) and to participate in the preparatory activities (e.g., attending pre-hackathon working group meetings; preparing, disseminating, and evaluating applications).

Commitments of LT members tend to change rapidly, leading to shifts in focus and in the level of engagement. There have been instances where LT members had to abandon the team due to a sudden lack of time and availability. Major shifts in LT composition may endanger the success of the event. Just as the initial success of the hackathon depends on the time and effort dedicated by members of the LT, LT members are also often the individuals who organize the post-hackathon activities to summarize results, guide follow-up efforts (e.g.
development of manuscripts that present the achievements of the hackathon projects), and ensure that the hackathon outcomes are made fully available to the broader community.

Lesson 3: Pre-select assistive technologies. Many online platforms are available to assist in communication and collaboration. These include text chat, teleconferencing, and videoconferencing; collaborative document-editing platforms; issue trackers and project management tools; and source-code revision control systems. It is a good idea to pre-select certain preferred technologies from amongst these, and commit to them. Allowing multiple technologies reduces the chances of effective coordination among participants during the event, and also impedes any post-event attempts to create a cohesive record or to track outcomes.

In our experience, the ideal assistive technologies are ones that allow you to track activity and outcomes, so that they feed into a system of record-keeping and results-tracking. As has been noted in more formal systematic reviews and meta-analyses43, it is not obvious how much data is missing until one attempts to collect it. For many hackathons, we used wikis for open document planning and note-taking.
This resulted in a rich historical record; yet we still find that basic data about the hackathons can be hard to compile because there was not a clear plan for gathering, organizing and preserving information.\n\nEncouraging the common use of source code revision control systems such as GitHub offers many opportunities to access information about (1) contributions made by participants, such as the extent of their usage of such platforms before, during, and after the event; (2) the development of the source code repositories worked on; and (3) the dynamics of collaborations, for example looking at the degree to which hackathon participants worked on the same repositories before, during, and after the event.\n\nLesson 4: Diversify and grow the community. Our experience with hackathons is in academic settings, and so our participants have been a mix of faculty, postdocs, students, and research staff. Research staff are less likely to be able to take on post-hackathon commitments because of their busier schedules; they are also less likely to generate a career benefit from a product. Conversely, postdocs and students may be able to engage more fully, including in the preparatory and follow-up stages, but will benefit from the presence of more senior faculty who might provide informal mentoring opportunities2,42,44. Hence, like others (e.g. 18,40), we recognize the importance of diversity in participant competences and career stages, and made efforts to balance diversity in this respect.\n\nIn addition to assembling participants with diverse levels of expertise, we also made a conscious effort to bring together and benefit from international participants, as well as participants from traditionally underrepresented groups. One strategy to increase diversity is to pay attention to the language used in recruitment materials, bearing in mind that women often undervalue their skills relative to men45. 
Thus, we avoided announcements that seemed to set a highly restrictive standard of technical skill or domain knowledge; i.e. appealing to \"power users\" would be a mistake (and appealing to “gurus” or “wizards” would be worse). However, our main strategy for increasing diversity was targeted invitation: we identified qualified participants who could increase the diversity of the event, and personally invited them to apply. Women and scientists from minority groups in senior positions are often good sources of names of women and minority scientists in junior positions. It also helps to have a diverse organizing team. In practice, we assembled a list of candidates, and split the task of writing personal invitations among the leadership team. Whereas our open call (distributed electronically) reached thousands of people and generated only a few dozen applications at most, we estimated that applications were received from about one out of two people personally invited to apply by a hackathon organizer. In our experience, applicants recruited in this manner have similar qualifications to other applicants, and have roughly the same chance of being accepted.\n\nDirect invitation to a hackathon serves not only to increase diversity, but also to target expertise. However, we found it to be too limiting as a general strategy: choosing participants from an open pool is important if one of the goals of the hackathon is to grow the community, whereas invitation-only hackathons (e.g. the BioHackathons23–26 organized by DBCLS, Japan) risk ossifying patterns of inclusion and exclusion.\n\nLesson 5: Engage participants early on. A well-organized hackathon includes sufficient pre-event engagement with participants so that they can hit the ground running on day one. 
A number of topics need to be addressed: a well-defined theme needs to be effectively communicated to the participants18; there needs to be group consensus on objectives and their domain-specific context10; and any assistive technologies need to be chosen and their requirements assessed. Pre-event engagement is intended to ensure that participants are well prepared in practical terms. For example, this includes having practiced with new technologies19, and having signed up for such technologies ahead of time if need be. However, beyond such practical preparation, participants should also have prepared themselves mentally for “invested participation”28 in the event.\n\nThe need for sufficient time prior to the event is emphasized by Christopherson et al.10, who describe two successive hackathons with vastly different amounts of time to engage participants and develop ideas prior to the respective events: “This time crunch added undue pressure on the team, and some participants reported that this made it more difficult to achieve synergy as quickly as expected. It ultimately resulted in less working code . . . and contributed to lower reported satisfaction.”\n\nAt the NESCent hackathons, most of the pre-event engagement was on a mailing list that participants were subscribed to as soon as possible. To foster community engagement, we used the same mailing list and simply added new members for each hackathon. We also experimented with real-time communication prior to the events, using videoconferencing (Google Hangouts, prior to Phylotastic 1). Providing opportunities for engagement can be effective even if only a minority of participants are involved: the ones who feel the greatest need to prepare and to learn more about what will happen at the hackathon are the ones most likely to participate.\n\nLesson 6: Be welcoming and encouraging. 
As many have pointed out16,19,40,46, hackathons are highly social events where success depends on what Briscoe and Mulligan call “invested participation”28. That is, participants must feel invested personally in the event. Yet, hackathons have earned a reputation as unwelcoming events catering to insiders and to men. To remedy this, we made conscious efforts to communicate, and to manage the hackathon event itself, in ways that are welcoming and encouraging.\n\nFirst, we designed recruitment materials to appeal to a wide audience, avoiding highly technical language except where absolutely necessary. We explicitly specified non-programmer roles (e.g. “domain expert”, “use-case consultant”), and avoided implicitly equating participants with programmers (e.g. we did not refer to them as “programmers” or “coders”).\n\nSecond, we made it a practice, prior to the event, to reach out personally to individuals who were not already part of our professional network (typically one- to two-thirds of the participants were new to us). In most cases this was as simple as an organizer writing a brief email thanking the individual for applying and offering a statement of encouragement about participating in the upcoming event. During the event, there were many opportunities to improve participation by making people feel welcome, e.g. expressing appreciation for opinions and suggestions that were brought forward by newcomers. To ensure that everyone could participate fully, novice participants were given permission to join a team simply to learn and assist, even if they did not have sufficient technical or domain knowledge to be a key contributor. 
One of the ways we encouraged this practice was to tell participants that, after teams formed and work started, we wanted everyone to be “either learning or doing”.\n\nDuring team formation, facilitators may intervene to discourage teams from unintentionally closing ranks around a pitch (some participants will commit early to a pitch and begin deep technical discussions, sometimes with their backs to everyone else, which discourages others from approaching or getting involved). When organizers acted as discussion facilitators, they would model the process of asking non-negative, open-ended questions; rather than “Isn’t that out of scope?”, they would ask “What are some ways that this idea aligns with our goals of...?”.\n\nLesson 7: Minimize remote participation. The technological possibilities for remote and asynchronous collaboration make it seem superficially attractive for hackathon organizers to expand the scope of the event by supporting remote participants. However, remote participation is not without cost, and is typically considerably less effective than direct participation. Most potential team-mates at a hackathon have not collaborated before. The face-to-face and real-time nature of hackathons allows for considerable transfer of information that turns out to be quite frustrating to achieve through remote communication, costing extra time to deal with lossy communication and lacking in-person dynamics like whiteboarding or looking over a shoulder10.\n\nOrganizers should therefore consider in advance whether they will support remote participation (strategies for doing so are described at https://nescent.github.io/community-and-code/doc/concise_guide/remotes/). Allowing remote participation has the advantage of reaching more people in the community, and expanding the productive capacity of the hackathon, but it carries a risk of frustration and comes at a predictable cost, as there is a burden to supporting remote participation, e.g. 
an increased demand for on-site participants to adhere closely to a fixed schedule.\n\nWe explored various means of including remote participants. In one case, we made an arrangement for a satellite hackathon to be held in parallel by a small group on the west coast (NESCent is on the east coast). The west-coast participants were all from the same research group: they formed a single team that made an important contribution to the hackathon. This allowed a significant expansion of the scope of the hackathon, at no monetary cost, and with little trouble.\n\nSingle individuals also participated remotely in NESCent hackathons on numerous occasions, with uneven success. The factors that seemed to contribute to success included their level of previous experience with hackathons and remote collaboration, a commitment to avoid local distractions, and a clear sense of where to fit into a team project. Perhaps most importantly, in all successful cases, the remote person was already part of the community and had collaborators on site. By contrast, remote participation is not an advisable way to include new people.\n\nWe recommend a buddy system where each remote participant is paired with an on-site participant who maintains a video connection throughout the meeting, and serves as a conduit for communication at team work sessions and plenary sessions. Sticking to specific communication technologies is also critical; if the in-person team changes technology halfway through, the remote participants may quickly become lost and forgotten.\n\nLesson 8: Manage the team formation process. The coalescence of participants into teams is a critical step. At some hackathons discussed in the literature, the teams were fully pre-specified, with no obvious team formation process during the event16, while other hackathons were organized around, for example, student projects32 or the desire to learn a new technology44. At the NESCent hackathons, we emphasized self-organization. 
At the first hackathon in 2006, this self-organization was guided firstly by use cases that had been decided upon by the participants prior to the event, and secondly by participants’ existing connections to open source software projects, such as the Bio* toolkits. However, this may have impeded the building of new connections. At later hackathons, the group formation process was deferred to the first day of the event itself.\n\nSeveral authors have discussed the social nature of the team-formation process (e.g. Jones et al.46), and we have also sought to promote this. We did so by arranging for final team formation to occur by a facilitated self-organizing process, described in more detail at https://nescent.github.io/community-and-code/doc/concise_guide/managing/. During the final stage of this process, participants pitch hackathon activities to each other using whiteboards or flipboards, and teams coalesce around pitches in a manner akin to Open Space team coalescence (e.g. as in Mulholland and Meredig12). It is important to precede this stage with an opportunity for the group to discuss the relevance, importance, and chances of success of pitches, to ensure that weak points are addressed.\n\nLesson 9: Manage expectations for follow-up. When hackathon teams are working energetically, organizers and team members may have enthusiastic discussions of follow-ups, yet when the hackathon ends, team dynamics and energy often dissipate rapidly as team members return to other responsibilities, resulting in little follow-up (e.g. 21). After all, the nature of hackathons is that we steal talented people from their day jobs for a limited time, and so a team dynamic during the event is unlikely to persist beyond the face-to-face conditions that fostered the team (though this does occasionally happen, e.g. 2).\n\nWe therefore adopted two strategies to manage expectations for follow-up. 
The first strategy involved accepting that, because of the low prospects for follow-up, organizers should instruct participants to focus on producing tangible products within the space of the hackathon, with the expectation that tasks unfinished on the last day would never be finished. The second strategy involved encouraging commitment to a follow-up program to build on successful projects (an example of this approach is in 32). In several cases, NESCent hackathon projects have provided proofs-of-concept and specifications that were important for obtaining funding for further development. In two cases, our hackathons resulted directly in a scholarly publication8,9, with additional examples found elsewhere (e.g. 30).\n\nBecause successful follow-ups are few in number, it is hard to generalize. However, it seems clear that the potential for follow-up increases when a hackathon project aligns with the interests of participants and has a leader with the time to manage a follow-up effort. More junior participants may be more driven to pursue a project after an event because the outcome may have a larger career impact. Thus, to achieve tangible working products and manage follow-up, one should consider two things: first, whether deliverables can be achieved in the time allotted for the hackathon; and second, whether choosing participants who may lack certain skills or experience, but are more likely to dedicate extra time after the event to complete the project, would better serve the overall goal. The success of this latter strategy is further influenced by the ability to coalesce a wider community around activities performed at a hackathon. The development of open data repositories and contributions to widely accessed code repositories facilitate community “buy-in” and enable long-term sustainability of hackathon products. 
An example of this is the contributions to the NeXML code base achieved during the Database Interoperability hackathon.\n\n\nConclusions\n\nWe have provided systematic information on events, participants, teams, projects, and outcomes pertaining to the nine NESCent hackathons that took place from 2006 to 2015. NESCent hackathons represent a unique form of participant-driven software development meeting. The NESCent model was designed not only to stimulate software development and provide training and experience to participants, but also to nurture a larger community of practice so that members develop a shared awareness of best practices, available resources, and strategic challenges. To allow others to use this model, we have developed detailed guidance and sample materials for planning, advertising, recruiting, and facilitation.\n\nThe impacts of hackathons depend on tangible and intangible outcomes. The most obvious tangible outcome of a hackathon is computer code. Some hackathon teams made incremental additions of code or documentation to pre-existing (production) codebases, but most produced standalone products such as prototypes, draft standards, or designs. Standalone products are rarely used or maintained after the hackathon ends, but may have downstream impacts as inspiration or proof-of-concept. To date, two NESCent hackathon projects have led to major NSF funding for the development of production systems. In addition, NESCent hackathons have led rather directly to four publications, along with various posters, blogs, websites, and presentations. The intangible impacts of NESCent hackathons, which are perhaps more important, are much more difficult to track. 
We have described some indications of positive impacts of hackathon-associated training, networking, and community-building, but we can draw no firm conclusions in these areas.\n\nWithout systematic information on other types of hackathons, one cannot draw conclusions on the effectiveness of the NESCent model relative to other types of hackathon. Indeed, a direct comparison with other hackathon types that are designed with different aims may not be appropriate, given the specialized aims of the NESCent model to serve a geographically dispersed academic community. Nevertheless, we hope that the systematic information provided here will lay the foundation for future research on the effectiveness of participant-driven meetings.\n\n\nData availability\n\nTo accompany this paper, we have developed a website that contains a Concise Guide for organizing NESCent hackathons here: https://nescent.github.io/community-and-code/doc/\n\nIn addition, we collated a dataset itemizing all events; their participants; the projects worked on at each event; and the outcomes, both at the level of individual projects and at the level of events. We make this dataset available for download as machine-readable, tab-separated tables, here: https://nescent.github.io/community-and-code/data/\n\nThese tables summarize, in structured form, data that were previously spread out over the different websites and wikis that were used for each hackathon, and that may have gone offline (or moved) subsequent to NESCent’s closing. As such, all data were, and are, in the public domain.",
"appendix": "Author contributions\n\n\n\nAB: data entry, data management; ms writing. AS: data entry, data management; literature analysis; ms writing; guidelines writing. EP: data entry, data management; ms writing. HL: data entry, data management; ms writing. KC: data entry, data management; ms writing. MSR: ms and supplement writing. RV: data entry, data management; literature analysis; ms writing. SO: ms writing.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe National Evolutionary Synthesis Center (NSF #EF-0905606) extended its support for the HIP (Hackathons, Interoperability, Phylogenies) working group by funding the authors to gather at NESCent in March of 2015 to lay the groundwork for this project.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nWe are grateful to the organizations and projects that supported the hackathons described here, including NESCent, the iPlant Collaborative, the Biodiversity Synthesis Center of the Encyclopedia of Life (BioSync), Biodiversity Information Standards (TDWG) and the OpenTree project. The identification of any specific commercial products is for the purpose of specifying a protocol, and does not imply a recommendation or endorsement by the National Institute of Standards and Technology.\n\n\nReferences\n\nCaron B: Getting a handle on community. Figshare. Blog Post; 2015. Publisher Full Text\n\nBusby B, Dillman A, Simpson CL, et al.: Building Genomic Analysis Pipelines in a Hackathon Setting with Bioinformatician Teams: DNA-seq, Epigenomics, Metagenomics and RNA-seq. bioRxiv. 2015. Publisher Full Text\n\nAlmirall E, Lee M, Majchrzak A: Open innovation requires integrated competition-community ecosystems: Lessons learned from civic open innovation. Bus Horiz. 2014; 57(3): 391–400. 
Publisher Full Text\n\nAungst TD: Using a hackathon for interprofessional health education opportunities. J Med Syst. 2015; 39(5): 60. PubMed Abstract | Publisher Full Text\n\nDePasse JW, Carroll R, Ippolito A, et al.: Less noise, more hacking: how to deploy principles from MIT’s hacking medicine to accelerate health care. Int J Technol Assess Health Care. 2014; 30(3): 260–4. PubMed Abstract | Publisher Full Text\n\nMcArthur K, Lainchbury H, Horn D: Open Data Hackathon: How to Guide. Web Page, 2012. Reference Source\n\nMurby R: How to throw the perfect hackathon. Blog Post, 2014. Reference Source\n\nLapp H, Bala S, Balhoff JP, et al.: The 2006 NESCent Phyloinformatics Hackathon: A field report. Evol Bioinform. 2007; 3: 287–296. Free Full Text\n\nStoltzfus A, Lapp H, Matasci N, et al.: Phylotastic! Making tree-of-life knowledge accessible, reusable and convenient. BMC Bioinformatics. 2013; 14: 158. PubMed Abstract | Publisher Full Text | Free Full Text\n\nChristopherson L, Idaszak R, Ahalt S: Developing Scientific Software through the Open Community Engagement Process. Figshare. 2015. Publisher Full Text\n\nTanenbaum K, Tanenbaum JG, Williams AM, et al.: Critical making hackathon: situated hacking, surveillance and big data proposal. In CHI ’14 Extended Abstracts on Human Factors in Computing Systems. 2014. Publisher Full Text\n\nMulholland G, Meredig B: Hackathon aims to solve materials problems. MRS Bull. 2015; 40(4): 366–370. Publisher Full Text\n\nKuchinskas S: How to run a winning hackathon. Forbes. 2014. Reference Source\n\nJohnson P, Robinson P: Civic hackathons: Innovation, procurement, or civic engagement? Rev Policy Res. 2014; 31(4): 349–357. Publisher Full Text\n\nBurnham K: Inside Facebook’s hackathons: 5 tips for hosting your own. CIO. 2012. Reference Source\n\nRaatikainen M, Komssi M, Dal Bianco V, et al.: Industrial experiences of organizing a hackathon to assess a device-centric cloud ecosystem. 
In Computer Software and Applications Conference (COMPSAC), 2013 IEEE 37th Annual. 2013; 790–799. Publisher Full Text\n\nBond RR, Mulvenna MD, Finlay DD, et al.: Multi-faceted informatics system for digitising and streamlining the reablement care model. J Biomed Inform. 2015; 56: 30–41. PubMed Abstract | Publisher Full Text\n\nSoltani PM, Pessi K, Ahlin K, et al.: Hackathon: A method for digital innovative success: A comparative descriptive study. In Proceedings of the 8th European Conference on IS Management and Evaluation. Academic Conference and Publishing International, Ltd. 2014; 367–373. Reference Source\n\nTrainer EH, Chaihirunkarn C, Kalyanasundaram A, et al.: Community code engagements: Summer of code and hackathons for community building in scientific software. In Proceedings of the 18th International Conference on Supporting Group Work. ACM Press. 2014; 111–121. Publisher Full Text\n\nCalco M, Veeck A: The markathon: Adapting the hackathon model for an introductory marketing class project. Mark Educ Rev. 2015; 25(1): 33–38. Publisher Full Text\n\nKnight Foundation: Four ideas for the future of hackathons. Blog Post. 2012. Reference Source\n\nKomssi M, Pichlis D, Raatikainen M, et al.: What are hackathons for? Software, IEEE. 2015; 32(5): 60–67. Publisher Full Text\n\nKatayama T, Arakawa K, Nakao M, et al.: The DBCLS BioHackathon: standardization and interoperability for bioinformatics web services and workflows. The DBCLS BioHackathon Consortium. J Biomed Semantics. 2010; 1(1): 8. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKatayama T, Wilkinson MD, Vos R, et al.: The 2nd DBCLS BioHackathon: interoperable bioinformatics Web services for integrated applications. J Biomed Semantics. 2011; 2: 4. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKatayama T, Wilkinson MD, Micklem G, et al.: The 3rd DBCLS BioHackathon: improving life science data integration with Semantic Web technologies. J Biomed Semantics. 2013; 4(1): 6. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nKatayama T, Wilkinson MD, Aoki-Kinoshita KF, et al.: BioHackathon series in 2011 and 2012: penetration of ontology and linked data in life science domains. J Biomed Semantics. 2014; 5(1): 5. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMöller S, Afgan E, Banck M, et al.: Community-driven development for computational biology at sprints, hackathons and codefests. BMC Bioinformatics. 2014; 15(Suppl 14): S7. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBriscoe G, Mulligan C: Digital innovation: The hackathon phenomenon. Report, CreativeWorks London, 2014. Reference Source\n\nCorreia NN, Tanaka A: Prototyping audiovisual performance tools: A hackathon approach. In Proceedings of the international conference on New Interfaces for Musical Expression. 2015. Reference Source\n\nFafalios P, Papadakos P: Theophrastus: On Demand and Real-Time Automatic Annotation and Exploration of (Web) Documents using Open Linked Data. Web Semant. Elsevier, 2014; 29: 31–38. Publisher Full Text\n\nGroen D, Calderhead B: Science hackathons for developing interdisciplinary research and collaborations. eLife. 2015; 4: e09944. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHecht BA, Werner J, Raskar R, et al.: The kumbhthon technical hackathon for nashik: A model for stem education and social entrepreneurship. In Integrated STEM Education Conference (ISEC), 2014 IEEE. 2014; 1–5. Publisher Full Text\n\nVos RA, Biserkov JV, Balech B, et al.: Enriched biodiversity data as a resource and service. Biodivers Data J. 2014; (2): e1125. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSouza S: Lessons learnt from a public-private big data hackathon. Report, Big Innovation Centre, 2013. Reference Source\n\nLave J, Wenger E: Situated Learning. Legitimate peripheral participation. Cambridge University Press, Cambridge, 1991. 
Reference Source\n\nWenger-Trayner E, Wenger-Trayner B: Communities of Practice: a Brief Introduction. 2015. Reference Source\n\nVos RA, Caravas J, Hartmann K, et al.: BIO::Phylo-phyloinformatic analysis using perl. BMC Bioinformatics. 2011; 12: 63. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMichonneau F, Brown J, Winter D: rotl, an R package to interact with the Open Tree of Life data. Methods Ecol Evol. 2016; 7(12): 1476–1481. Publisher Full Text\n\nBrandvain Y, Coop G: Sperm should evolve to make female meiosis fair. Evolution. 2015; 69(4): 1004–14. PubMed Abstract | Publisher Full Text\n\nZapico JL, Pargman D, Ebner H, et al.: Hacking sustainability: Broadening participation through green hackathons. In Fourth International Symposium on End-User Development. 2013. Reference Source\n\nOwen H: A brief user’s guide to open space technology. Nd, 2008. Reference Source\n\nLinnell N, Figueira S, Chintala N, et al.: Hack for the homeless: A humanitarian technology hackathon. In Global Humanitarian Technology Conference (GHTC), 2014 IEEE. 2014; 577–584. Publisher Full Text\n\nKoricheva J, Gurevitch J, Mengersen K: Handbook of Meta-analysis in Ecology and Evolution. Princeton University Press, Princeton, 2013. Publisher Full Text\n\nMtsweni J, Abdullah H: Stimulating and maintaining students’ interest in computer science using the hackathon model. The Independent Journal of Teaching and Learning. 2015; 10: 85–97. Reference Source\n\nNafus D, Leach J, Krieger B: Gender: Integrated report of findings. Report, University of Cambridge, 2006. Reference Source\n\nJones GM, Semel B, Le A: There’s no rules. it’s a hackathon: Negotiating commitment in a context of volatile sociality. J Linguist Anthropol. 2015; 25(3): 322–345. Publisher Full Text"
}
|
[
{
"id": "23270",
"date": "14 Jun 2017",
"name": "Eva Amsen",
"expertise": [
"Science communication, with expertise in organising participant-driven events for academics"
],
"suggestion": "Approved",
"report": "Approved\n\nThis article describes a post-hoc analysis of nine hackathons organised by NESCent between 2006 and 2015 to work on the development of software tools for evolutionary biology. Although it’s presented as a research article, the main value of this article is the “Lessons Learned” section, which, in addition to the “Concise Guide” available on Github, serves as useful guidelines for anyone planning to organise hackathon-style events for academics. I particularly like the detailed description of the authors’ approach to ensure participant engagement and to increase attendance by underrepresented groups, as this is something many event organisers can benefit from.\n\nThe authors address the shortcomings of the study by pointing out that an objective systematic analysis of these nine events (which they organised themselves over a period of several years) was not possible. However, thanks to its detailed descriptions and evaluations, this article and the accompanying guidelines may help others perform a more systematic analysis of future hackathons.\n\nMy only comments (below) are suggestions to make the text easier to read. In particular, the Introduction could be more informative to readers who are new to hackathons, by including some information that in the present version is mentioned much later in the article:\nWho was the intended audience for the NESCent hackathons? 
The introduction mentions “developers and end-users”, but it is not until Lesson 4 that the reader finds out they were “a mix of faculty, postdocs, students and research staff”. Considering the academic audience of the article, this might be useful to mention earlier.\n\nAnother point to potentially address in the article is the differences between a hackathon and any other type of scientific workshop. For example, the participant-driven nature of hackathons (first mentioned in Lesson 6) is key to understanding the concept of the event. If this is something that needs to be explained to new hackathon participants, perhaps it also needs to be explained to new readers.\n\nSuggested minor cosmetic changes:\nSome in-text citations in the introduction are not linked to their respective references. There are three of these close together, starting with “The event may last a single day (e.g. 12)….”\n\nThere is a typo in reference 6: First author should be “McArthur”, not “McArthu”\n\nIt takes some effort to find the sample advertisements mentioned in the “recruiting participants” section. This could be improved with a direct link to the corresponding files.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nNot applicable\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
},
{
"id": "23275",
"date": "21 Jun 2017",
"name": "Cameron Neylon",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThis article provides a valuable introduction and \"lessons\" perspective based on a decade of Hackathons based at NESCent. This provides a valuable perspective on the conduct and impact of this approach in an academic setting and provides a good basis for developing assessment approaches for the value of Hackathons in academic settings.\nI concur with the first referee that more background would enhance the value of the paper as it stands.\nMy primary concern is with the way \"intangible\" outcomes are presented as \"naturally hard to assess\". I'm not convinced firstly with the use of the word \"intangible\" in this context. In many ways the primary outputs of the hacks are \"intangible\" code, and things described as intangible (community building, enhanced training/skills) are in some ways quite tangible. I can't readily offer a replacement term, which suggests to me that actually framing this distinction in a different way might be valuable.\nI would suggest considering using the language of research impact assessment of \"outputs\", \"outcomes\", and \"impacts\". The \"tangible outcomes\" described here are mostly \"outputs\", specific objects that are the result of the work. The \"intangible outcomes\" are largely \"outcomes\", that is effects and consequences that result from the outputs. 
Finally the goal of the paper seems to be to work towards ways of describing \"impacts\", measurable changes in the world that result from the project.\nI feel this matters because there is a tendency in the article to discount the possibility of evaluating these \"intangible\" outcomes, while at the same time there is a recognition that they may be the most important outcomes and impacts of the hack. In turn there are ways to measure these: user surveys, network analysis (pre and post), case studies (as is explored a little here), but these approaches require a degree of preparation and analysis beyond that reported here.\nI think the article would therefore benefit from a close look at what approaches might be deployed to measure these outcomes and impacts, and what the challenges of doing that might be. In my view this would enhance the paper significantly, particularly in its goal of starting a conversation on how best to evaluate and communicate the success of these events.\nMore specific comments:\nIn the Abstract: “Intangible outcomes could not be assessed objectively…” Suggest rewording this in any case. Intangible outcomes can of course be assessed objectively, just perhaps not directly. Regardless of whether you choose to make the other changes I'd reword this to something like \"The less tangible [direct?] outcomes of the events are harder to track and measure, but may be amongst the most important\".\nIntroduction: There is an online literature critical of (particularly commercial) hackathons which might be worth touching on as a negative side, even if it is not peer reviewed. I'm thinking of people like Emma Mulqueeny and Chris Thorpe who have blogged on the subject of what not to do (eg https://mulqueeny.wordpress.com/2010/11/18/developers/).\n“The event may last a single day (e.g., 12), an entire week9, or longer18. 
The number of participants may range from a few dozen (e.g., 8, 27)” - Should those references be linked?\n“So, if we look at hackathons as proposal germinators, this is rather high impact. The total award amount for the two National Science Foundation grants is approximately $7.5 M. By comparison, the total amount spent on the nine NESCent hackathons was roughly $250,000.” - I wonder what the fair comparator would be here? Is ROI a sensible measure at all? It seems to me that subsequent (grant) income is a good proxy for activity, which is something you might want to measure.\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nNot applicable\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-786
|
https://f1000research.com/articles/2-269/v1
|
09 Dec 13
|
{
"type": "Research Note",
"title": "Understanding the association between chromosomally integrated human herpesvirus 6 and HIV disease: a cross-sectional study",
"authors": [
"Mundeep K. Kainth",
"Susan G. Fisher",
"Diana Fernandez",
"Amneris Luque",
"Caroline B. Hall",
"Anh Thi Hoang",
"Anisha Lashkari",
"Alexandra Peck",
"Lubaba Hasan",
"Mary T. Caserta",
"Susan G. Fisher",
"Diana Fernandez",
"Amneris Luque",
"Caroline B. Hall",
"Anh Thi Hoang",
"Anisha Lashkari",
"Alexandra Peck",
"Lubaba Hasan",
"Mary T. Caserta"
],
"abstract": "We conducted a cross-sectional investigation to identify evidence of a potential modifying effect of chromosomally integrated human herpes virus 6 (ciHHV-6) on human immunodeficiency virus (HIV) disease progression and/or severity. ciHHV-6 was identified by detecting HHV-6 DNA in hair follicle specimens of 439 subjects. There was no statistically significant relationship between the presence of ciHHV-6 and HIV disease progression to acquired immunodeficiency syndrome. However, after adjusting for use of antiretroviral therapy, all subjects with ciHHV-6 had low severity HIV disease; these findings were not statistically significant. A multi-center study with a larger sample size will be needed to more precisely determine if there is an association between ciHHV-6 and low HIV disease severity.",
"keywords": [
"30th May 2017. Dataset 1 has been removed from this article",
"as it was found to contain identifying information. A new version of this article has been published with an updated version of this dataset",
"with the identifying data removed. The article type has also been updated to “Research Note” to reflect a change in our article type naming conventions."
],
"content": "Introduction\n\nThe relationship between chromosomally integrated human herpesvirus 6 (ciHHV-6) and the progression of human immunodeficiency virus (HIV) infection in humans has, to date, never been examined.\n\nHHV-6 causes ubiquitous human infection with approximately 100% of the population seropositive by 3 years of age. The virus infects lymphocytes with a predominant tropism for CD4+ T cells1. In individuals with ciHHV-6, the entire viral genome is present in every nucleated cell of the body and transmitted in a Mendelian fashion2,3. The presence of HHV-6 DNA in hair follicle DNA specimens is a marker for ciHHV-64. Published data have demonstrated the prevalence of ciHHV-6 to be approximately 0.85%5. If ciHHV-6 is associated with HIV infection, then this could uncover new possibilities for understanding HIV disease severity and progression.\n\nNumerous studies have attempted to unravel the relationship between HHV-6 and HIV. Initially, the identification of HHV-6 (not ciHHV-6) DNA in 100% of saliva samples from HIV-infected adults with high CD4+ T-cell counts suggested a protective effect of HHV-6 on the progression of HIV6,7. HHV-6 produces a functional chemokine, U83A, which binds C-C chemokine receptor 5 (CCR5) and blocks HIV infection of susceptible cells in vitro providing a plausible mechanism for HHV-6 inhibition of HIV8.\n\nAlternatively, in vitro activation of the HIV long terminal repeat (LTR) sequences by HHV-6 has been described suggesting that HHV-6 may promote HIV disease progression9. More recent data from animal models have linked infection with HHV-6 and the production of RANTES, the ligand for CCR5, with the acceleration of HIV disease via increased HIV virulence10,11. 
Clinical studies have found reactivation or persistence of HHV-6 antigen and IgM antibody in patients with progression from asymptomatic HIV to AIDS12.\n\nAfter acquiring HIV infection, approximately 80% of adults gradually develop low CD4+ T-cell counts over 5–10 years in the peripheral blood with a high HIV viral burden, and are otherwise referred to as ‘typical progressors’ (TP). A small proportion of individuals develop low CD4+ T-cell counts and high viral load within two years of initial infection; these individuals are identified as ‘rapid progressors’ (RP). The mechanisms surrounding these differences are unclear13,14. Additionally, despite prior investigations, there is also no clear understanding of the role of HHV-6 in the pathophysiology of HIV disease.\n\nWe conducted an exploratory investigation to estimate the prevalence of ciHHV-6 in an HIV-infected population and to determine if there is a potential modifying effect of ciHHV-6 on HIV disease progression or severity.\n\n\nMaterials and methods\n\nThis cross-sectional study included patients seen at the Infectious Diseases Clinic at Strong Memorial Hospital in Rochester, NY. The inclusion criterion limited the population to subjects with HIV infection over the age of 18. The exclusion criteria were lack of visible hair and inability to consent. Enrollers were available four days a week, 3–6 hours a day to approach eligible subjects.\n\nWritten informed consent for publication of their clinical details and/or clinical images was obtained from the patient/parent/guardian/ relative of the patient. This study protocol was reviewed and approved by the Research Subject Review Board at the University of Rochester (RSRB00032054).\n\nApproximately one-half of the patients’ clinical data was obtained through the HIV Data Registry supported by the Developmental Center for AIDS Research (D-CFAR). 
The remainder of the patient data were obtained from electronic medical records with the use of a standardized chart abstraction tool (see data file) that included HIV diagnosis dates, date of AIDS defining illnesses, and viral load/CD4+ T cell counts. Demographic data such as race/ethnicity and sex/gender were provided by subject self-identification and included in grant agency reports. Hair follicles were collected by enrollers by pulling 2–3 strands of scalp hair with gloved hands.\n\nBased on established standards, the definition of RP HIV included patients with CD4+ T cell counts of less than 200 or an AIDS-defining illness within 2 years of diagnosis of HIV infection. Subjects who had confirmed HIV infection but did not meet criteria for RP were placed in the TP group. Diagnosis of HIV infection was identified by a positive ELISA/Western blot test or indicated by a provider in chart documentation based on patient history. Testing for HIV during the study period was performed by antibody testing (Vitros ECI) at Strong Memorial Hospital. Confirmation by Western blot (INNO-LIA HIV I/II Score) was performed at the Mayo Clinic in Rochester, MN.\n\nWe also compared HIV disease severity in ciHHV-6 positive and negative patients because initial disease progression from HIV to AIDS may not necessarily correlate with current disease severity in subjects on antiretroviral treatment (ART). Subjects were considered to have severe disease if CD4+ T cell counts were less than 200 cells/μL and viral load was greater than 10,000 copies/mL. If CD4+ T cell counts were greater than 200 cells/μL, and viral load was less than 10,000 copies/mL, subjects were considered to have non-severe disease. All subjects with any remaining combination of viral load and CD4+ T cell counts were considered to have ‘moderately’ severe disease.\n\nHair samples were chosen to identify HIV positive subjects with ciHHV-6 and to exclude subjects with acquired HHV-6 infection. 
Hair follicle samples were digested with proteinase K followed by DNA extraction using the QIAamp DNA mini kit (Qiagen, Valencia, CA). DNA samples were tested by a nested qualitative PCR amplifying the HHV-6 U38 DNA polymerase gene as previously described15. In order to verify the presence of ciHHV-6, qualitative PCR was repeated on all of the positive samples followed by quantitative real-time PCR for the HHV-6 U4 gene as described by Zhen et al.16 with modifications developed in our laboratory17.\n\nAll data were analyzed with SAS (v9.2). The prevalence of ciHHV-6 in the entire cohort of HIV-infected subjects was compared with the published prevalence of ciHHV-6 in the general population5. Next, the prevalence of ciHHV-6 was compared between RP and TP subjects to determine if there was a significant difference in ciHHV-6 prevalence between these two groups. We also determined whether the presence of ciHHV-6 was associated with markers of HIV disease severity ascertained at the time of enrollment.\n\nStudent’s t-test was used to compare age and the chi-square test for gender, ethnicity, and race. An analysis of subjects positive for ciHHV-6 and markers of HIV disease progression was performed comparing prevalence rates between the RP and TP groups and the three severity groups using Fisher’s exact test. An association was considered to be statistically significant at p<0.05.\n\n\nResults\n\nThe clinic population available for this study included 1035 HIV-infected patients, 714 men and 321 women between 18 and 79 years of age (African American – 418 (40%); White – 453 (44%); Hispanic – 155 (15%); other – 9 (<1%)). Approximately 115 patients per week were invited to join this study during routine clinic visits from October 2010 to April 2012 and 463 subjects were enrolled. From this group, 1 subject withdrew and 9 were erroneously enrolled twice. Four subjects with unobtainable hair follicle DNA were excluded. 
Clinical data on 215 subjects were obtained from the University of Rochester D-CFAR database. For the remaining 248 subjects, electronic medical records were reviewed. When discrepancies between HIV and AIDS diagnosis dates occurred, the earliest date identified on the chart was selected. HIV diagnosis date was unavailable for 10 subjects, with a final number of 439 subjects with complete data available for analysis.\n\nApproximately 1/3 of the total cohort of subjects identified themselves as female, 2/3 male, and less than 1% were transgender. Reported racial distributions for enrolled subjects were: White 205 (47%), Black-African American 171 (39%), White-Black 5 (1%), unknown or other 54 (12.3%), Asian 1, and Hawaiian Pacific-Islander 1 (Table 1). Fifty-eight (13.2%) subjects reported their ethnicity as Hispanic. Age ranged from 18 to 74 years with a mean age of 46.7 years and a median of 48 years.\n\nTable 1 footnotes: a CI-HHV6 (Chromosomally Integrated HHV6); b AA (African-American); c Haw/PI (Hawaiian/Pacific-Islander).\n\nEight ciHHV-6 positive samples out of a total of 439 hair follicle samples were identified by initial qualitative PCR testing. Upon re-testing, only three samples were positive. Real-time quantitative PCR analysis verified 4 positive samples (Table 2). Due to the higher specificity of the assay, the quantitative PCR data were used in the remaining study analyses.\n\nTable 2 footnotes: a ciHHV-6 (chromosomally integrated human herpes virus 6); b values >5 Log10 genome equivalent copies (gec) per μg of DNA are considered positive for high viral loads of HHV-6 DNA, indicating chromosomal integration.\n\nThe calculated prevalence estimate of ciHHV-6 in the cohort was 0.91% with a 95% CI of [0.37–2.31%]. Of the four positive ciHHV-6 subjects, 2 were female and 2 were male; 2 were white and 2 were black.\n\nOf 340 (77%) TP subjects, 3 (0.68%) were positive for ciHHV-6. 
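[Editor's note] The prevalence estimate and 95% CI above (4/439 ≈ 0.91%, [0.37–2.31%]) can be approximately reproduced with a short calculation. The article does not state which interval method was used, so the Wilson score interval below is an assumption:

```python
from math import sqrt

def wilson_ci(x, n, z=1.96):
    """Wilson score interval for a binomial proportion x/n (95% for z=1.96)."""
    p = x / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

# 4 ciHHV-6 positive subjects out of 439 with complete data
lo, hi = wilson_ci(4, 439)
```

This gives roughly [0.35%, 2.32%], close to the reported [0.37–2.31%].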
Ninety-nine (23%) subjects were classified as having RP HIV disease based on the time to AIDS diagnosis after the date of HIV infection. Out of these 99 subjects, one (0.23%) was positive for ciHHV-6. Due to the low ciHHV-6 prevalence rate in the study population, the difference between ciHHV-6 among RP and TP is not statistically significant (p<0.42).\n\nPatients were categorized into three severity groups based on their CD4+ T cell count and HIV viral loads as described above. In order to control for antiretroviral therapy (ART) use, 9 individuals who were ART naïve were excluded. 332 subjects (84%) had non-severe disease, 42 (11%) had moderately severe disease, and 19 (5%) had severe disease. All of the subjects with ciHHV-6 had non-severe HIV disease. Given the small numbers, this finding did not reach statistical significance (p<0.51).\n\n\n\n\nDiscussion\n\nThis pilot study is the first to identify the presence of ciHHV-6 in an HIV-infected population. Because ciHHV-6 is inherited in a Mendelian fashion, our data confirmed that the prevalence rate of ciHHV-6 in our total HIV-infected cohort was similar to the published prevalence rate in otherwise healthy populations. We hypothesized that ciHHV-6 would be associated with rapid HIV disease progression or markers of disease severity. However, we did not identify a significant association between HIV disease progression and ciHHV-6 status. All four subjects with ciHHV-6 had non-severe disease; yet this was also not significant due to the small numbers of individuals with ciHHV-6 identified.\n\nThe strengths of the study include that a large number of HIV-infected subjects were available for study and the ease of hair sample collection. The testing was appealing to subjects and allowed efficient sample storage. The major limitation of this study was the low prevalence of ciHHV-6 in the HIV-infected cohort. 
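[Editor's note] To illustrate why the low ciHHV-6 prevalence limits statistical power, here is a standard-library sketch of a two-sided Fisher's exact test applied to the RP/TP counts reported above (3 of 340 TP and 1 of 99 RP subjects positive). This is an editorial illustration, not the authors' SAS analysis, and its p-value need not match the value quoted in the text:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].

    Returns (p_obs, p_value): the hypergeometric probability of the
    observed table, and the two-sided p-value obtained by summing the
    probabilities of all tables with the same margins that are no more
    probable than the observed one.
    """
    n = a + b + c + d            # total subjects
    k = a + c                    # column total of "positives"
    row1 = a + b                 # size of the first group
    denom = comb(n, row1)

    def table_prob(x):
        # probability that exactly x of the k positives fall in group 1
        if x < 0 or x > k or row1 - x < 0 or row1 - x > n - k:
            return 0.0
        return comb(k, x) * comb(n - k, row1 - x) / denom

    p_obs = table_prob(a)
    p_value = sum(p for p in (table_prob(x) for x in range(k + 1))
                  if p <= p_obs * (1 + 1e-9))
    return p_obs, p_value

# ciHHV-6 positives/negatives in the TP (3/337) and RP (1/98) groups
p_obs, p_value = fisher_exact_two_sided(3, 337, 1, 98)
```

For this table the observed allocation is essentially the most likely one, so the two-sided p-value is far from significance: with only four ciHHV-6 positives in total, no split between the groups could have reached p<0.05.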
Additionally, we were not always able to identify dates for HIV infection and AIDS diagnosis to establish clinical disease progression. We attempted to overcome differential misclassification by evaluating CD4+ T cell counts and viral loads at the time of sample collection to establish a subject’s HIV disease severity. This last method revealed that all 4 subjects with ciHHV-6 were in the low disease severity group, but due to the low ciHHV-6 prevalence, we were unable to conduct further statistical analyses.\n\nWhile earlier in vitro studies have suggested a protective effect of HHV-6 infection on HIV disease, our study is the first clinical investigation to identify a possible protective effect of ciHHV-6 on HIV disease severity. However, due to a lack of statistical significance of the association between ciHHV-6 and HIV disease severity, more data will need to be collected to assess this relationship.",
"appendix": "Author contributions\n\n\n\nMTC conceived the study. CBH designed the experiments. MKK, ATH, AL, AP and LH carried out the research. SGF and IDF contributed to the design of the study and provided expertise in statistics. MKK, SGF, IDF and MTC prepared the first draft of the manuscript. All surviving authors were involved in the revision of the draft manuscript and have agreed to the final content.\n\n\nCompeting interests\n\n\n\nMary T. Caserta has received funding from the HHV-6 Foundation for unrelated projects. No other competing interests were disclosed.\n\n\nGrant information\n\nMundeep K. Kainth received a Strong Children’s Research Center Bradford Fellowship and a Developmental Center for AIDS Research pilot award. She was supported by NIH/NIAID 2 T32 AI 007464-16 during her fellowship training.\n\n\nAcknowledgements\n\nWe are grateful for the assistance of the staff at the Strong Memorial Hospital AIDS Center, including David Clinton, Phyllis Mulvaney and Linda Plano, for their enthusiastic recruitment efforts as well as the research staff at the AIDS Clinical Trials Unit, especially Carol Greisberger and Nurhan Calisir. In memoriam, we dedicate this paper to Caroline Breese Hall, MD, our mentor, educator and friend.\n\n\nReferences\n\nSalahuddin SZ, Ablashi DV, Markham PD, et al.: Isolation of a new virus, HBLV, in patients with lymphoproliferative disorders. Science. 1986; 234(4776): 596–601. PubMed Abstract | Publisher Full Text\n\nTanaka-Taya K, Sashihara J, Kurahashi H, et al.: Human herpesvirus 6 (HHV-6) is transmitted from parent to child in an integrated form and characterization of cases with chromosomally integrated HHV-6 DNA. J Med Virol. 2004; 73(3): 465–473. PubMed Abstract | Publisher Full Text\n\nWard KN, Thiruchelvam AD, Couto-Parada X: Unexpected occasional persistence of high levels of HHV-6 DNA in sera: detection of variants A and B. J Med Virol. 2005; 76(4): 563–570. 
PubMed Abstract | Publisher Full Text\n\nWard KN, Leong HN, Nacheva EP, et al.: Human herpesvirus 6 chromosomal integration in immunocompetent patients results in high levels of viral DNA in blood, sera, and hair follicles. J Clin Microbiol. 2006; 44(4): 1571–1574. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPellett PE, Ablashi DV, Ambros PF, et al.: Chromosomally integrated human herpesvirus 6: questions and answers. Rev Med Virol. 2012; 22(3): 144–155. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFairfax MR, Schacker T, Cone RW, et al.: Human herpesvirus 6 DNA in blood cells of human immunodeficiency virus-infected men: correlation of high levels with high CD4 cell counts. J Infect Dis. 1994; 169(6): 1342–1345. PubMed Abstract | Publisher Full Text\n\nFabio G, Knight SN, Kidd IM, et al.: Prospective study of human herpesvirus 6, human herpesvirus 7, and cytomegalovirus infections in human immunodeficiency virus-positive patients. J Clin Microbiol. 1997; 35(10): 2657–2659. PubMed Abstract | Free Full Text\n\nCatusse J, Parry CM, Dewin DR, et al.: Inhibition of HIV-1 infection by viral chemokine U83A via high-affinity CCR5 interactions that block human chemokine-induced leukocyte chemotaxis and receptor internalization. Blood. 2007; 109(9): 3633–3639. PubMed Abstract | Publisher Full Text\n\nEnsoli B, Lusso P, Schachter F, et al.: Human herpes virus-6 increases HIV-1 expression in co-infected T cells via nuclear factors binding to the HIV-1 enhancer. EMBO J. 1989; 8(10): 3019–3027. PubMed Abstract | Free Full Text\n\nLusso P, Crowley RW, Malnati MS, et al.: Human herpesvirus 6A accelerates AIDS progression in macaques. Proc Natl Acad Sci U S A. 2007; 104(12): 5067–5072. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBiancotto A, Grivel JC, Lisco A, et al.: Evolution of SIV toward RANTES resistance in macaques rapidly progressing to AIDS upon coinfection with HHV-6A. Retrovirology. 2009; 6: 61. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nAblashi DV, Marsh S, Kaplan M, et al.: HHV-6 infection in HIV-infected asymptomatic and AIDS patients. Intervirology. 1998; 41(1): 1–9. PubMed Abstract | Publisher Full Text\n\nFontaine J, Coutlée F, Tremblay C, et al.: HIV infection affects blood myeloid dendritic cells after successful therapy and despite nonprogressing clinical disease. J Infect Dis. 2009; 199(7): 1007–1018. PubMed Abstract | Publisher Full Text\n\nLing B, Veazey RS, Hart M, et al.: Early restoration of mucosal CD4 memory CCR5 T cells in the gut of SIV-infected rhesus predicts long term non-progression. AIDS. 2007; 21(18): 2377–2385. PubMed Abstract | Publisher Full Text\n\nHall CB, Long CE, Schnabel KC, et al.: Human herpesvirus-6 infection in children. A prospective study of complications and reactivation. N Engl J Med. 1994; 331(7): 432–438. PubMed Abstract | Publisher Full Text\n\nZhen Z, Bradel-Tretheway B, Sumagin S, et al.: The human herpesvirus 6 G protein-coupled receptor homolog U51 positively regulates virus replication and enhances cell-cell fusion in vitro. J Virol. 2005; 79(18): 11914–11924. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCaserta MT, Hall CB, Schnabel K, et al.: Human herpesvirus (HHV)-6 and HHV-7 infections in pregnant women. J Infect Dis. 2007; 196(9): 1296–1303. PubMed Abstract | Publisher Full Text"
}
|
[
{
"id": "2694",
"date": "07 Jan 2014",
"name": "Susana N. Asin",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis study addresses an interesting question: the potential impact of chromosomally integrated Human Herpes Virus 6 (HHV6) on HIV-1 disease progression. The rationale underlying the proposed hypothesis is weak. The authors do not suggest potential mechanisms underlying their hypothesis, which may provide a more provoking discussion. It is appropriate to anticipate that immunosuppression will lead to HHV6 reactivation. Since HHV6 shares tropism for CD4+ T cells with HIV-1, HHV6 could accelerate disease progression by impairing the immune response to HIV-1. Based on observation demonstrating that HHV6 replication accelerates progression to AIDS in macaques, I am curious to understand why the authors hypothesize that chromosomally integrated HHV6 could impact HIV-1 disease progression specially without evaluating HHV6 replication/reactivation.It would have been appropriate to define how the authors evaluated chromosomally integrated DNA. Are the primers used specific for HHV6 or do these primers cross react with other HHV family members? Only describing the target gene will allow the reader to define the potential confounding effects of additional endogenous herpes viruses.The manuscript is well written and the authors do not draw overambitious conclusions. The fact that only one individual in the rapid progression group had ciHHV6 support the author’s conclusion to conduct a multi-center study with a larger sample size. 
Based on the higher HHV6 prevalence rates described in other groups of HIV-1 infected individuals, I would be curious to know whether the authors evaluated HHV6 antibodies in this group and how they explain that, despite high prevalence, the percentage of integrated HHV6 DNA is so low. The pathogenesis of HHV6 in HIV-1 infected individuals should be further investigated.",
"responses": []
},
{
"id": "3026",
"date": "07 Jan 2014",
"name": "Mario Clerici",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nBecause of the very low ciHHV-6 positivity in the analyzed HIV-infected population, the author's suggestion that the presence of ciHHV-6 associates with HIV disease severity seems to be too strong and not (yet) justified.In particular, the abstract is misleading: the authors should add the number of ciHHV-6 positive subjects, and delete/modify the sentence about the severity of disease and ciHHV-6.",
"responses": []
},
{
"id": "2695",
"date": "09 Jan 2014",
"name": "David Camerini",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis interesting, well-conducted study investigated the relationship between chromosomally integrated human herpes virus six (ciHHV-6) on HIV disease severity and progression. Individuals with ciHHV-6 constitute just under 1% of the general population and a similar proportion of the 439 HIV positive individuals analyzed in this study. Due to the small sample size, only 4 ciHHV-1, HIV+ individuals were found and no significant conclusions could be drawn regarding any association with HIV disease. Nevertheless, this study is thought provoking and may provide impetus for a larger study that could shed light on this potential viral interaction in vivo.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/2-269
|
https://f1000research.com/articles/6-760/v1
|
31 May 17
|
{
"type": "Research Article",
"title": "MinION Analysis and Reference Consortium: Phase 2 data release and analysis of R9.0 chemistry",
"authors": [
"Miten Jain",
"John R. Tyson",
"Matthew Loose",
"Camilla L.C. Ip",
"David A. Eccles",
"Justin O'Grady",
"Sunir Malla",
"Richard M. Leggett",
"Ola Wallerman",
"Hans J. Jansen",
"Vadim Zalunin",
"Ewan Birney",
"Bonnie L. Brown",
"Terrance P. Snutch",
"Hugh E. Olsen",
"MinION Analysis and Reference Consortium",
"David A. Eccles",
"Justin O'Grady",
"Sunir Malla",
"Richard M. Leggett",
"Ola Wallerman",
"Hans J. Jansen",
"Vadim Zalunin"
],
"abstract": "Background: Long-read sequencing is rapidly evolving and reshaping the suite of opportunities for genomic analysis. For the MinION in particular, as both the platform and chemistry develop, the user community requires reference data to set performance expectations and maximally exploit third-generation sequencing. We performed an analysis of MinION data derived from whole genome sequencing of Escherichia coli K-12 using the R9.0 chemistry, comparing the results with the older R7.3 chemistry. Methods: We computed the error-rate estimates for insertions, deletions, and mismatches in MinION reads. Results: Run-time characteristics of the flow cell and run scripts for R9.0 were similar to those observed for R7.3 chemistry, but with an 8-fold increase in bases per second (from 30 bps in R7.3 and SQK-MAP005 library preparation, to 250 bps in R9.0) processed by individual nanopores, and less drop-off in yield over time. The 2-dimensional (“2D”) N50 read length was unchanged from the prior chemistry. Using the proportion of alignable reads as a measure of base-call accuracy, 99.9% of “pass” template reads from 1-dimensional (“1D”) experiments were mappable and ~97% from 2D experiments. The median identity of reads was ~89% for 1D and ~94% for 2D experiments. The total error rate (miscall + insertion + deletion ) decreased for 2D “pass” reads from 9.1% in R7.3 to 7.5% in R9.0 and for template “pass” reads from 26.7% in R7.3 to 14.5% in R9.0. Conclusions: These Phase 2 MinION experiments serve as a baseline by providing estimates for read quality, throughput, and mappability. The datasets further enable the development of bioinformatic tools tailored to the new R9.0 chemistry and the design of novel biological applications for this technology. Abbreviations: K: thousand, Kb: kilobase (one thousand base pairs), M: million, Mb: megabase (one million base pairs), Gb: gigabase (one billion base pairs).",
"keywords": [
"MinION",
"nanopore sequencing",
"R9 chemistry",
"CsgG",
"data release",
"long reads",
"minoTour",
"marginAlign",
"NanoOK",
"third-generation sequencing"
],
"content": "Introduction\n\nThe Oxford Nanopore Technologies (ONT) MinION Access Programme (MAP) released the MinION™ nanopore sequencer to early access users in June 2014. The MinION Analysis and Reference Consortium (MARC) was formed by a subset of MAP participants to perform independent evaluation of the platform, share standard protocols, collaboratively produce reference data for the nanopore community, and to address biological questions. The Phase 1 MARC analysis of October 20151 was an evaluation of the library preparation chemistry version SQK–MAP005, R7.3 flow cell chemistry, and a base-calling algorithm derived from a Markov model (HMM) using a 5-mer model. The R9.0 chemistry and protocol, (https://www.youtube.com/watch?v=nizGyutn6v4) was made available to users in June 2016 (https://londoncallingconf.co.uk/lc/2016-plenary#168687629). This substantial upgrade to the platform included the CsgG membrane protein for the pore and a recurrent neural network (RNN) for base-calling. In part, ONT claimed these changes made substantial improvements to data yields and quality, to the extent that 1-dimensional (“1D”) reads, without a hairpin, could be used for analyses in many use-cases.\n\nBefore embarking on further analyses, MARC performed “bridging experiments” to evaluate the effect of the R9.0 changes on data yield, quality, and accuracy. To capture variability and reproducibility among experiments using R9.0 chemistry, two labs concurrently sequenced Escherichia coli strain K-12 substrain MG1655, the same strain used for MARC Phase 11. Sequencing was performed using both the 2-dimensional (“2D”) “ligation” kit and the newer 1D “rapid” kit. The analyses performed included characterizing throughput, read quality, and accuracy. This work also marks the release of MinION Phase 2 data for both sequencing modes with the R9.0 chemistry. 
Although the newer R9.4 flow cell chemistry has become available to the community since the Phase 2 experiments were performed in late July and early August 2016, ONT have stated that R9.4 flow cell chemistry has similar base-calling characteristics compared to R9.0, as it uses the same pore and base-calling strategy. Thus, this data release and analysis is of interest as it describes the major changes introduced with the R9 chemistry. It is a resource to aid further developments in nanopore informatics as well as the development of biological applications using the MinION.\n\n\nMaterials and methods\n\nTwo laboratories each performed a 1D and a 2D experiment using the protocol described in MARC Phase 11 to obtain total genomic DNA from freshly grown cells (Supplementary File 1) and slightly modified protocols for 1D “rapid” and 2D “ligation” library preparation and sequencing.\n\nE. coli cells were cultured and DNA was extracted using the protocol described in MARC Phase 1 (Supplementary File 1).\n\nSequencing libraries were prepared according to the ONT recommended 2D protocol (SQK-NSK007 kits), which included addition of the lambda control sample, with the following changes:\n\n(i) genomic DNA was sheared to ~10 kb; and\n\n(ii) both labs performed a 0.4x AMPureXP cleanup post-FFPE treatment.\n\nSequencing libraries were prepared according to the ONT recommended 1D protocol (SQK-RAD001 kits, referred to as 1D “rapid” sequencing) with the following changes:\n\n(i) a 0.4x AMPureXP cleanup was performed prior to 1D library preparation;\n\n(ii) an unsheared input DNA sample of 400 ng was used for the library;\n\n(iii) 0.4 μl Blunt/TA Ligase was added; and\n\n(iv) a 10 min incubation was used in the final step.\n\nNote that this protocol does not include addition of the lambda control sample DNA.\n\nAll sequencing runs used MinKNOW (version 1.0.3) and Metrichor Desktop Agent. 
The experiments are henceforth referred to as P2-Lab6-R1-2D, P2-Lab7-R1-2D, P2-Lab6-R1-1D and P2-Lab7-R1-1D following a “phase-lab-replicate-kit” format. All flow cells used for sequencing underwent the standard MinION Platform QC for analysis of overall quality and number of functional pores. This was followed by the recommended priming step, after which the prepared library was loaded onto an R9.0 flow cell. Final library volume for the 1D runs was 11.2 μl, which was loaded once with running buffer at the start of the experiment. A 500 μl flush with running buffer alone was performed at 24 hrs on the P2-Lab6-R1-1D run. The final volume of 2D libraries was 25 μl, of which 12 μl was loaded with running buffer at the start of the sequencing run followed by addition of another 12 μl library aliquot 16 hours into the run. All sequencing runs were performed on MinION Mk1b devices using the standard MinKNOW 48-hour sequencing protocol (NC_48Hr_Sequencing_Run_FLO-MIN104).\n\nThe sequencing data for 1D MinION runs were base-called using the Metrichor 1D Base-calling RNN for the SQK-RAD001 (v1.107) workflow. This workflow classified base-called sequence data into “pass” and “fail” categories based on the mean Phred-scaled quality score for that read. The threshold for a read to be categorized as “pass” was a Q-value of 6. The sequencing data for 2D MinION runs were base-called using the Metrichor 2D Base-calling RNN for the SQK-NSK007 (v1.107) workflow. Similarly, this workflow classified reads into “pass” and “fail” with a Q-value threshold of 9 required for pass reads.\n\nAs in Phase 1, the base-called FAST5 files and meta-data were collated on a server at the European Nucleotide Archive (ENA). These data were then processed using several tools. The base-calls in FASTQ format were extracted using poretools (version 0.5.1)2 and then aligned against the E. 
coli K-12 reference genome (NCBI RefSeq, accession NC_000913.1) using BWA-MEM (version 0.7.12-r1044), parameter “-x ont2d”3 and LAST (version 460)4, parameters “-s 2 -T 0 -Q 0 -a 1” as recommended by Quick et al.5. Both alignments were then improved with marginAlign (version 0.1)6, and statistics were computed using marginStats6.\n\nThe R9.0 data were characterized by collating statistics for a typical run from MARC Phase 1 (P1b-Lab2-R2, hereafter referred to as P1b-Lab2-R2-2D for consistency with the Phase 2 experiment naming convention) and the four Phase 2 experiments. In keeping with the MARC Phase 1 analyses1, we computed alignments and error-rate measurements using BWA-MEM and LAST, followed by re-alignment using marginAlign6. Real-time evaluation of the runs was performed by minoTour7 (more information available from: http://minotour.github.io/minoTour), run locally at the two experimental laboratories. The “pass” and “fail” reads from each experiment were evaluated with NanoOK (version 0.95)8 using BWA alignments. Additional metrics and analyses were performed with bespoke Python and R scripts (available at https://github.com/camilla-ip/marcp2)9.\n\n\nResults\n\nThe MARC Phase 2 experiments were performed by two laboratories (Supplementary File 1) between 27 July and 2 August 2016 (Table 1). The proportion of functional group 1 pores prior to sequencing on R9.0 flow cells was ~94%, an improvement from ~88% for R7.3 (Table 1). The operating ASIC (chip) temperature on the R9.0 flow cell ranged from 30 to 34°C, and the temperature regulation of the flow cell heat sink was a uniform 34°C across all flow cells (Table 1). All experiments ran for at least 40 hours of the 48 hour run script. However, experiment P2-Lab6-R1-2D crashed when the controlling computer’s hard-drive reached capacity; it was restarted ~42 hours after the initial experiment start time using modified recipe scripts, but produced few further reads. Experiment P2-Lab7-R1-2D was terminated after ~44 hours. 
Experiment P2-Lab7-R1-1D was restarted twice between 24 and 32 hours and terminated at 41.5 hours (Table 1).\n\nP1 refers to a typical R7.3 run from MARC Phase 11. P2 refers to the MARC Phase 2 R9.0 data presented in this study. NA: not available.\n\nOne challenge of MinION data analysis is referencing the proper data format after major upgrades, such as the switch from an HMM to an RNN base-caller. The new or superseded fields in the resulting table after introduction of R9 chemistry are shown in Supplementary File 29.\n\nThe read count, base yield, and read lengths of the 2D and 1D R9.0 experiments compared to a typical R7.3 experiment (Table 2 and Table 3, and Figure 1) were inferred from NanoOK reports (Supplementary File 3) and bespoke scripts9. There was considerable variability between the quantity of data produced by the two 2D experiments and the two 1D experiments, but overall, the R9.0 chemistry showed an increase in data yield and read length when compared with a typical Phase 1 R7.3 experiment.\n\n(“-”) indicates not applicable.\n\n(“-”) indicates not applicable.\n\n** : Longest read here is pre-alignment.\n\nThe distribution of template (“1D”) read lengths for experiments based on 1D “rapid” libraries (P2-Lab6-R1-1D and P2-Lab7-R1-1D) was skewed toward shorter read lengths due to enzymatic, rather than mechanical, DNA fragmentation. The long tails of the distributions were truncated at 40,000 bases for clarity.\n\nImprovements in base yield and read length were observed for the 2D R9.0 experiments compared with a typical R7.3 experiment (Table 2 and Table 3). The 2D R9.0 experiments sequenced 127–217 K molecules (compared with ~49 K molecules for the typical Phase 1 R7.3 experiment). Of these, ~50% resulted in 2D reads (an improvement from ~44% for the typical R7.3 experiment) and a total of 64–111 K 2D pass reads (compared with 21 K for the typical R7.3 experiment). 
The proportion of “pass” reads with a Q-value threshold of 9 was 66% to 72%, about the same as that observed for the typical R7.3 experiment with a base quality threshold of 9.0. Average read lengths of “pass” 2D base-calls were higher at 6.6–7.7 Kb (compared with 6.4 Kb for the typical R7.3 experiment), and for “all” 2D base-calls at 6.0–6.5 Kb (compared with 6.0 Kb for the typical R7.3 experiment). The longest 2D reads observed in R9.0 (50.9 Kb, Table 3) were comparable to those observed in R7.3 experiments (59.7 Kb)1. However, the longest 2D aligned read observed increased to 50.9 Kb (from 35.2 Kb in the typical R7.3 experiment) (Table 3). The increase in N50 read length to 7.3–9.1 Kb for all 2D reads in the R9.0 experiments (compared with 7.4 Kb for the typical R7.3 experiment) and 7.8–9.8 Kb for “pass” R9.0 reads (compared with 7.6 Kb for the R7.3 experiment) indicates, as for the 1D data, an overall increase in the proportion of longer 2D base-called reads.\n\nThe 1D R9.0 experiments sequenced 57–96 K molecules (compared with 49 K for the typical Phase 1 R7.3 experiment), resulting in a total template base yield of 410–830 Mb (compared with 242 Mb for the typical R7.3 experiment), of which ~60% were higher-quality “pass” reads with a Q-value threshold of 6.0 (compared with ~31% for the typical Phase 1 experiment classified with a 2D base quality threshold of 9.0) (Table 2). Read lengths also improved, with the mean template length for “pass” reads increasing to 7.2–8.6 Kb (from 5.0 Kb for R7.3) and to 7.8–9.4 Kb for “fail” reads (from 6.2 Kb for the R7.3 experiment). The longest mappable template read observed across all of the R9 runs was 151.2 Kb, and the artefactually long reads, detectable by a discrepancy between the longest read lengths and the longest mappable read lengths, were comparably rare (Table 3). 
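The N50 statistic quoted throughout these comparisons is the read length at which reads of that length or longer together contain at least half of all sequenced bases. A small illustrative sketch (not the MARC analysis scripts themselves):

```python
def n50(read_lengths):
    """N50: the length L such that reads of length >= L together
    account for at least half of the total base yield."""
    total = sum(read_lengths)
    cumulative = 0
    for length in sorted(read_lengths, reverse=True):
        cumulative += length
        if 2 * cumulative >= total:
            return length
    return 0
```

For read lengths [1, 2, 3, 4, 5] the total yield is 15 bases, and the two longest reads (5 and 4) already cover more than half, so the N50 is 4; unlike the mean, N50 rewards a shift of bases into longer reads.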
Read length N50 increased to 13.1–15.7 Kb for “pass” reads (compared with 6.9 Kb for the typical R7.3) and 13.6–16.2 Kb (compared with 7.5 Kb for the typical R7.3), indicating that more of the base-calls were contained in longer reads.\n\nWe observed that the speed and convenience of the 1D “rapid” library protocol came at a cost. The distribution of template “pass” read lengths was skewed toward shorter reads, peaking closer to 1 Kb rather than the ~6.5 Kb obtained through the 2D “ligation” library protocol. However, one benefit was that a greater proportion of longer reads was also produced (Figure 1). The addition of the lambda control sample in the 2D library protocol resulted in a variable ratio of “target” to “control” sample reads, evident in the relative sizes of the bimodal read length distributions for the 2D library experiments (Figure 1).\n\nThe proportion of alignable reads is a measure of the accuracy of the base-calls. Of the template reads, 99.9% of “pass” reads were alignable in both the 1D and 2D experiments, compared with 60% and 83% of “fail” reads from the 1D and 2D experiments, respectively (Table 4).\n\n(“-”) indicates the metrics were not applicable for that experiment. NA: not available.\n\nThe median identity of reads from 1D and 2D experiments (Table 4) was similar to that observed for the R7.3 chemistry in MARC Phase 1. The median identity for 1D template reads was ~88% and ~76%, for “pass” and “fail”, respectively (compared with 78% and 75% for the typical R7.3 experiment). For the 2D experiments, the read identity was ~89% and ~85%, for “pass” and “fail”, respectively (compared with ~92% and ~82%, respectively, for the typical R7.3 experiment).\n\nAnother metric of overall error, the longest perfectly aligned subsequence, showed improvement associated with the R9.0 chemistry. 
The longest perfectly aligned subsequences in the R9.0 1D runs were 235 and 273 bases (compared with 87 in the typical R7.3 experiment), and in the 2D runs were 713 and 750 bases (compared with 333 bases in the typical R7.3 experiment).\n\nThe total error of “pass” reads in the 1D sequencing experiments reduced from 26.7% in R7.3 to 15.0% in R9.0 (miscalls 6.2%, insertions 3.1%, deletions 5.7%) (Table 4). Little change was observed for the “fail” template reads, between the 32.8% observed for a typical R7.3 experiment and the 31.1% for the R9.0 experiments (miscalls 15.4%, insertions 5.9%, deletions 9.8%) (Table 4).\n\nTotal error of the 2D reads was reduced from 9.1% in R7.3 to 7.3% in R9.0 for “pass” reads, whereas the total error increased for “fail” reads from 19.7% in R7.3 to 25.4% in R9.0 (Table 4).\n\nIn the MARC Phase 1 analysis of R7.3 chemistry experiments, the quantity and quality of data produced during an experiment varied as material passed from one side of the membrane to the other. This was punctuated by periodic changes in voltage every 4 hours, and a switch to the group 2 pores at 24 hours1. To enable a direct comparison between the performance of the R7.3 and R9.0 chemistry, key metrics were plotted for 15 minute windows over the course of the 48 hour experiment for the typical R7.3 experiment (P1b-Lab2-R2-2D) and the four R9.0 experiments on the same scale (Figure 2). The mean of each time window was computed from “pass” reads that mapped to the E. coli reference genome, to remove irregularities due to poor quality reads. The metrics computed from template base-called reads were plotted for the 1D library experiments, and those from 2D base-called reads for the 2D library experiments.\n\nThe mean read length (kb), Q-score, base quality (BQ), and GC%, speed (bases per second), and throughput (count) for each experiment, computed from “pass” reads that mapped to the E. coli reference, were plotted for 15 minute intervals. 
The values for template reads (“1D”) are plotted for the 1D libraries (P2-Lab6-R1-1D and P2-Lab7-R1-1D) whereas the values for 2D reads were plotted for the 2D libraries (P1b-Lab2-R2-2D, P2-Lab6-R1-2D, and P2-Lab7-R1-2D).\n\nThe plots show some irregularities due to lower throughput before the pore group switch at 24 hours, towards the end of the runs, during run script restarts (in P2-Lab7-R1-1D and P2-Lab6-R1-2D), and at the early termination (P2-Lab7-R1-2D). However, in general, the read lengths and GC% varied around a constant value over time and the Q-score and base quality dropped at a similar rate (Figure 2). This was despite sequencing speed (measured in bases per second) increasing from about 30 bps to 250 bps. Differences in the Lab6 and Lab7 1D “rapid” run plots around the 24 hr point can be attributed to flushing of the flow cell with 500 μl of fresh running buffer in the case of Lab6. This appears to have benefited speed and quality; although the R9.0 chemistry is no longer in use, the procedure may be worth investigating for possible beneficial effects with newer chemistries.\n\nWe noticed an increase in the GC content of the template reads from the 1D “rapid” library experiments and, to a lesser extent, for the 2D reads from the 2D experiments (Figure 2). These plots should have shown stochastic variation throughout the run around the mean GC of 50.8% for the E. coli sample. We considered a number of possible factors that could account for this artefact including: (i) low data density; (ii) an over-representation of poorer-quality “fail” reads; (iii) an over-representation of unmappable reads; or (iv) high-GC repetitive motifs. We found a negative correlation for the R9.0 1D data between %GC and average QV scores, and also a decrease in base qualities over time. This was particularly pronounced for 1D “fail” reads (Q 3–10), but persisted even for 2D reads, likely due to 1D consensus follow-through. 
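The per-window tracking behind Figure 2 can be sketched by binning mapped “pass” reads into 15-minute windows and averaging a metric (here GC%) within each bin. This is an illustrative sketch assuming each read carries its experiment-relative start time in seconds, not the plotting code used for the figure:

```python
from collections import defaultdict

WINDOW_S = 15 * 60  # 15-minute windows, as in Figure 2

def gc_percent(seq):
    """GC content of a sequence, as a percentage."""
    seq = seq.upper()
    return 100.0 * sum(base in "GC" for base in seq) / len(seq)

def windowed_mean_gc(reads):
    """reads: iterable of (start_time_s, sequence) tuples for mapped
    'pass' reads. Returns {window_index: mean GC%} per 15-min window."""
    bins = defaultdict(list)
    for start_s, seq in reads:
        bins[int(start_s // WINDOW_S)].append(gc_percent(seq))
    return {w: sum(vals) / len(vals) for w, vals in sorted(bins.items())}
```

A drift of these per-window means away from the E. coli genome average of 50.8% over the course of a run is the GC-bias artefact discussed above.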
The current report is for the initial R9.0 chemistry, and the GC-bias seems to be less pronounced with the improved version of the R9 pore (R9.4 data not shown).\n\n\nDiscussion\n\nThe MARC Phase 2 experiments were performed with the MinION Mk1b device to provide an independent evaluation of the performance, data yield, and data quality of the R9.0 chemistry and scripts. By comparing the data from four R9.0 experiments with data from the same E. coli isolate sequenced with R7.3 chemistry in MARC Phase 11, we have established new benchmarks for data from the 1D “rapid” and 2D “ligation” protocols and kits available in late July 2016 (Table 1).\n\nWe have verified that the MinION Mk1b device reliably maintains the R9.0 flow cell at an appropriate temperature (Table 1). The R9.0 flow cells improve overall data yield through provision of a higher proportion of available functional pores during an experiment, with 94% functional group 1 pores observed in this study (Table 1). With higher yields comes an increased chance of experiment failure, as the file system accepting the data is likely to reach capacity during a run (Table 1). This suggests scripts should be deployed routinely to move the data from the file system during the sequencing run. The FAST5 data format continues to evolve and improve (Supplementary File 2) to store more comprehensive metadata in a more logical internal structure, and is now beginning to be documented on the MAP Community Forum (available via https://nanoporetech.com).\n\nIn the 12 months between the MARC Phase 1 and Phase 2 experiments (Table 1), we observed that for 2D base-calls, the distribution of read lengths remained the same (Figure 1, Table 3). The yield of higher-quality “pass” base-calls increased from ~100 Mb to ~450 Mb per flow cell (Table 2), and the total error of the “pass” base-calls reduced from 9.1% to 7.5% (Table 4). The read length and GC% over the course of the experiment remained uniform (Figure 2). 
The initial mean Q-scores increased from ~11 to over 12. The initial mean base qualities increased from ~12.5 to over 17.5, and both decreased gradually over the course of an experiment as observed previously (Figure 2). Finally, the proportion of mappable reads remained comparable, between 96 and 98% (Table 4) despite the sequencing speed increasing from 50 to 250 bases per second (Supplementary File 2). The yield improvements are a result of higher speeds and proportion of available pores, and the increase in data quality is attributed to the newer RNN basecaller.\n\nThe new 1D “rapid” library protocol, which sequences a single DNA strand, has the potential to query twice as many molecules during the lifetime of a flow cell. We found that this technique is a viable alternative to 2D library chemistry for use-cases where rapid scanning of the population of library molecules is important. The higher total error of 15.3% for “pass” template base-calls, compared with 7.5% for “pass” 2D base-calls (Table 4), is an acceptable trade off.\n\nWe confirm that the yield and quality of MinION data continues to improve. The data released in this study provide a benchmark to compare the newer R9.4 chemistry to and can be used to develop bioinformatic tools tailored to the newer chemistry. The updated reports of achievable data yield and quality, along with the characteristics of data production during the lifetime of a flow cell, will enable the design of new biological applications for this third-generation sequencing technology. Although a newer R9.4 chemistry has recently become available, ONT has emphasized that R9 platforms that use the CsgG nanopore will be backward compatible. This study provides the first comprehensive description of data from R9.0 flow cells and RNN base-calling software. 
We anticipate that it will serve as a framework for evaluating changes resulting from subsequent R9-based chemistries.\n\n\nData and software availability\n\nAll data presented in this study are available via ENA with accession PRJEB18053.\n\nArchived source code as at the time of publication: http://dx.doi.org/10.5281/zenodo.58231110\n\nLicense: CC BY 4.0",
"appendix": "Author contributions\n\n\n\nMJ and JT coordinated the study. The MARC group collectively designed the study. ML, SM, and JT performed the experiments. VZ, RL, ML, MJ, RL and CI ran data pre-processing steps. MJ, CI and JT analysed the data. MJ and BB drafted the manuscript. All authors participated in discussions relating to the generation and analysis of the data and edited and approved the final manuscript for submission.\n\n\nCompeting interests\n\n\n\nAll flow cells and library preparation kits were provided by ONT free of charge. Ewan Birney is a paid consultant of ONT. MJ, HEO, JT, ML, CI, HJ, JOG and BB have accepted reimbursement for conference travel expenses from ONT. VZ was funded for his work on this project from Oxford Nanopore through an agreement with EMBL.\n\n\nGrant information\n\nThe following grants supported the research of the following authors: NHGRI, USA award numbers HG006321 and HG007827 (MJ and HEO, UCSC), Canadian Institutes of Health Research #10677 (JT and TS, UBC), Brain Canada Multi-Investigator Research Initiative Grant with matching support from Genome British Columbia, the Michael Smith Foundation for Health Research and the Koerner Foundation (JT and TS, UBC), BBSRC grant BB/M020061/1 (ML and SM, Nottingham), Wellcome Trust grant 090532/Z/09/Z (CI, WTCHG), UK Antimicrobial Resistance Cross Council Initiative supported by the seven research councils (MR/N013956/1) and Rosetrees Trust grant A749 (JOG, UEA), BBSRC grant BB/J010375/1 (RML, Earlham), E o R Börjessons foundation (OW, Uppsala), and National Science Foundation DEB-1355059 (BB, VCU).\n\n\nAcknowledgements\n\nWe thank Rosemary Dokos and her colleagues at ONT for promptly responding to questions and ONT for providing the flow cells used in these experiments free of charge.\n\n\nSupplementary materials\n\nSupplementary File 1. Laboratories. List of laboratories that generated data for this study.\n\nClick here to access the data.\n\nSupplementary File 2. 
Experimental constants. Table of metadata fields and values shared across experiments.\n\nSupplementary File 3. NanoOK experiment reports. NanoOK PDF reports for three sets of reads (pass only, fail only, and both pass and fail) for each experiment.\n\n\nReferences\n\nIp CL, Loose M, Tyson JR, et al.: MinION Analysis and Reference Consortium: Phase 1 data release and analysis [version 1; referees: 2 approved]. F1000Res. 2015; 4: 1075.\n\nLoman NJ, Quinlan AR: Poretools: a toolkit for analyzing nanopore sequence data. Bioinformatics. 2014; 30(23): 3399–3401.\n\nLi H: Aligning sequence reads, clone sequences and assembly contigs with BWA-MEM. arXiv [q-bio.GN]. 2013.\n\nKiełbasa SM, Wan R, Sato K, et al.: Adaptive seeds tame genomic sequence comparison. Genome Res. 2011; 21(3): 487–93.\n\nQuick J, Quinlan AR, Loman NJ: A reference bacterial genome dataset generated on the MinION™ portable single-molecule nanopore sequencer. Gigascience. 2014; 3(1): 22.\n\nJain M, Fiddes IT, Miga KH, et al.: Improved data analysis for the MinION nanopore sequencer. Nat Methods. 2015; 12(4): 351–6.\n\nLoose M: minoTour - a platform for real-time analysis and management of Oxford Nanopore minION reads. 2014.\n\nLeggett RM, Heavens D, Caccamo M, et al.: NanoOK: multi-reference alignment analysis of nanopore sequencing data, quality and error profiles. Bioinformatics. 2016; 32(1): 142–144.\n\nMARC Phase 2 analysis documentation and scripts. Available at https://github.com/camilla-ip/marcp2.\n\nJain M, Tyson JR, Loose M, et al.: Code used in analysis titled “MinION Analysis and Reference Consortium: Phase 2 data release and analysis of R9.0 chemistry”. Zenodo. 2017."
}
|
[
{
"id": "23414",
"date": "28 Jun 2017",
"name": "Wigard P. Kloosterman",
"expertise": [
"Reviewer Expertise Genomics",
"bioinformatics"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nIn this work the Minion Analysis and Reference Consortium describes the analysis of Oxford Nanopore sequencing data generated from E.coli using the R9.0 sequencing chemistry. The R9.0 data characteristics were benchmarked against previous R7.3 data. The work is of much interest to (new) users of the Oxford Nanopore sequencing technology, as it provides a realistic overview of how the technology has developed over the course of 2016 and what can be expected in terms of data throughput and quality.\nI have the following remarks:\nMajor point\nOxford Nanopore sequencing technology has developed rapidly over the last year. Yet, the data presented in the paper are derived from older R9.0 and R7.3 chemistries. The value of the data analysis and comparisons would much increase if one or two runs of the more recent R9.4 data will be added as an extra column to each of the plots/tables. R9.4 chemistry is mentioned several times in the manuscript, but unfortunately no data are shown.\n\nOther points\nMaterials & methods:\nThe authors mention that DNA extraction procedures are described in MARC Phase 1, but no reference is given. Instead, the authors do refer to Supplementary File 1, but this file only contains a list of affiliations. I would suggest that the authors provide the appropriate reference here and/or refer to a Supplementary file that describes the DNA extraction procedures (or add this in the methods). 
1D/2D library preparation: The authors list some modifications with respect to ONT protocols. For the more ignorant reader, the authors could spell out why these modifications were added. Sentence page 4: “Both alignments were then improved with marginAlign.” What does ‘improved’ mean in this case? If specific marginAlign settings were used, then these should be listed as well.\n\nResults:\nBase yield and read lengths:\nThere appear some inconsistencies regarding claims about read lengths for R7.3 vs R9.0. The abstract states that read length N50 was not different, while page 4 states that “R9.0 chemistry showed an increase in [...] read length” compare to R7.3. Are differences in yield and read lengths statistically significant between R7.3 and R9.0? Related to this: the authors mention that median read length is longer for R9.0 compared to R7.3 (6.6kb - 7.7kb and 6.4kb), yet they mention that the maximum read length is comparable (50.9kb and 59.7 kb). Why are median values regarded as different, while maximum values are regarded as comparable?\n\nBase quality:\n“The proportion of “pass” reads with a Q-value threshold of 9 was 66% to 72%, about the same as that observed for the typical R7.3 experiment, with a base quality threshold of 9.0.” Is Q-value equivalent to base quality value here? Or do the authors mean Q-value instead of base quality? A similar statement is made later in this paragraph.\n\nAlignment identity and accuracy:\nFrom the Table, it appears that R9.0 “fail” reads are worse than R7 failed reads. Could the authors comment why this is the case? Page 7, second column: There appears to be a mistake in the given read identities for 2D R9.0 experiments (89% and 85% given, while these numbers appear for R9.0 template reads; should be ~94% and ~70%). The authors make a point about read quality and mention that the longest subsequence that perfectly aligns increases around 2-4 times for 1D runs, going from R7.3 to R9. 
Does this mean that the errors are less randomly distributed in R9 data, given that the median percent identity does not change substantially? The authors could improve this analysis, by evaluating the randomness of the error distribution within reads, or across the genome and how this relates to genome sequence context.\n\nPerformance over time (Figure 2):\nFinal sentence of results: “The current report is for the initial R9.0 chemistry, and the GC-bias seems to be less pronounced with the improved version of the R9 pore (R9.4 data not shown).” It would be better if the authors draw a clear conclusion whether this bias is present or not, and include data to support this. Page 9: the authors mention that GC content differs for different run and read types. It would be good if the authors quantify these differences and provide the numbers in the text. Figure 2: What does count mean here? Read counts or event counts? Figure 2 legend: Read length is referred to as ‘kb’, but probably the authors mean ‘b’ (looking at the y-axis of the length plot). Page 8: “the quantity and quality of data produced during an experiment varied as material passed from one side of the membrane to the other.” What does this mean exactly? This could be replaced by a more precise statement.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
},
{
"id": "24143",
"date": "27 Jul 2017",
"name": "Martin C. Frith",
"expertise": [
"Reviewer Expertise Computational biology"
],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis study tests version R9.0 of the MinION nanopore sequencer, and describes its error rate, read lengths, throughput, and other characteristics. The article's timing is unfortunate, because I believe R9.0 had already been superceded at the time of publication, but this does not affect the study's soundness. As far as I can tell, this study is basically sound, but there are some careless mistakes:\nThis text does not match Table 2: maybe pass -> total? \"64–111 K 2D pass reads (compared with 21 K for the typical R7.3 experiment)\".\n\nIs this really an \"increase\"? \"increase in N50 read length to 7.3–9.1 Kb for all 2D reads in the R9.0 experiments (compared with 7.4 Kb\". The abstract says: \"The 2-dimensional (“2D”) N50 read length was unchanged\".\n\nThis text does not match Table 3, maybe pass/fail -> total/pass? \"mean template length for “pass” reads increasing to 7.2–8.6 Kb (from 5.0 for R7.3) and increasing to 7.8–9.4 for “fail” reads (from 6.2\":\n\nThis text also does not match Table 3: \"Read length N50 increased to 13.1–15.7 Kb for “pass” reads (compared with 6.9 Kb for the typical R7.3) and 13.6–16.2 Kb\"\n\nThis is true only for 1D reads: \"For template reads from both 1D and 2D experiments, 99.9% of “pass” reads were alignable from both 1D and 2D\".\n\n88% is not similar to 78%: \"The median identity of reads from 1D and 2D experiments (Table 4) was similar... 
The median identity for 1D template reads was ~88% and ~76%, for “pass” and “fail”, respectively (compared with 78% and 75%\".\n\nThis is not comparing like with like (R9.0 template versus R7.3 2D): \"For the 2D experiments, the read identity was ~89% and ~85%, for “pass” and “fail”, respectively (compared with ~92% and ~82%, respectively, for the typical R7.3 experiment).\"\n\nA few things should be clarified:\nWhy do \"Identity %\" and \"Total error %\" not sum to 100?\n\nFig 2: - what is the difference between Q-score and BQ? - what is \"(Temp)\"? - what is the difference between \"GC\" and \"GC (Temp)\"? - what is \"throughput\": count of what per what?\n\nWhat is \"1D consensus follow through\"?\n\nWhat does \"bridging experiment\" mean?\n\nWhat is a \"run script\"?\n\nOther minor comments:\nThe title should be shortened to something like \"Analysis of MinION R9.0 chemistry\". The rest is not scientifically meaningful, and might be perceived as \"appeal to authority\".\n\nThe abstract \"methods\" section is incorrectly brief.\n\nThe abstract \"conclusions\" section should probably not say \"new\" R9.0 chemistry.\n\nIs this really \"higher\"? \"higher at... 6.0–6.5 Kb (compared with 6.0 Kb\".\n\nPage 4: \"were statistics computed\" -> \"statistics were computed\".\n\nThe LAST usage is likely suboptimal (though I guess it matters little here). The currently-recommended usage has been here since 2016-11-22: https://github.com/mcfrith/last-rna/blob/master/last-long-reads.md\n\nData availability:\nI found it excessively hard to obtain the data. The PRJEB18053 link leads to three \"component projects\", each of which has numerous files. Which file is which dataset? This should be better organized, or at least described. 
The best I could do was to mouse-over the links: a name like \"Nott_R9_run2_1D_pass_f74a133aa1ac903384a928a51051582db2cc412b_0.fastq\" gives me a clue, but a name like \"ERR2025969.fastq\" is hopeless.\nFor example, what is the difference between these files? Nott_R9_run2_1D.pass.1D.fastq Nott_R9_run2_1D_pass_f74a133aa1ac903384a928a51051582db2cc412b_0.fastq\n\nSuggestions for future studies of this type:\nCharacterize the substitution errors further, e.g. is A->G more frequent?\n\nAre the insertions and deletions long-and-rare, or short-and-numerous?\n\nAre the base quality scores accurate/useful? (And what do they even mean when indels are the main error?)\n\nCharacterize context-dependence of errors, e.g. homopolymers, CCXGG context (http://www.biorxiv.org/content/early/2017/06/29/157040).\n\nCan rearrangement errors be characterized? Long reads are promising for finding rearrangements (e.g. inversions, translocations), but do artifactual rearrangements occur?\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
},
{
"id": "23369",
"date": "28 Jul 2017",
"name": "C. Titus Brown",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "This is a followup data release and analysis by the MinION Analysis and Reference Consortium (MARC) to the Phase 1 release by Ip et al. in 2015 (https://f1000research.com/articles/4-1075/v1), who compared the consistency, rate, volume and quality of E. coli K-12 data produced by 5 labs from 2 R7.3 flow cell runs each. In this Phase 2 data release and analysis, the MARC characterized the throughput, read quality, and accuracy of one run each of 1D and 2D library preps of R9.0 chemistry.\n\nAside from a fundamental issue with this study that it would have been nice to have data from more than 2 labs, and more than 1 flowcell run of each 1D and 2D (only 4 total flowcells were used in this study), there were several really nice features:\nGreat idea to compare! \"Sequencing was performed using both the 2-dimensional (“2D”) “ligation” kit and the newer 1D “rapid” kit.\"\nExcellent, thank you for providing these data! \"It is a resource to aid further developments in nanopore informatics as well as the development of biological applications using the MinION.\"\nCool! \"overall, the R9.0 chemistry showed an increase in data yield and read length when compared with a typical Phase 1 R7.3 experiment\"\nA few criticisms and questions -\nWould have liked more robust comparison and discussion on differences between 2D and 1D sequencing since the consensus in the community seems to be now that 1D sequencing libraries are fine (nobody uses 2D anymore). 
\"The higher total error of 15.3% for “pass” template base-calls, compared with 7.5% for “pass” 2D base-calls (Table 4), is an acceptable trade off.\"\n\nENA accessions could be clearer, such as in Table S10 from the MARC Phase 1 paper (Ip et al. 2015). These data are clearly generated by experts (some of whom are long-term experts and paid consultants supported by ONT), with available pore numbers and sequencing yields representing best case scenarios. While perhaps beyond the scope of this benchmark, it would be nice to see similar data comparisons by novice labs trying to figure this technology out.\n\nWhy are the 1D and 2D library preparation modifications made in this study not part of standard ONT protocols? What was the reasoning behind making these changes? One of the hardest parts of figuring out ONT is troubleshooting the little modifications like the ones mentioned in this study. Modifications indicated in the manuscript: genomic DNA was sheared to ~10 kb and 0.4x AMPureXP cleanup treatment. And 1D: 0.4x AMPureXP cleanup prior to prep, unsheared DNA input of 400ng, 0.4ul blunt/TA ligase; 10 min incubation used in final step.\n\nThis might be obvious, but I'm not sure: why was the lambda control DNA not included in the 1D runs?\n\nWhy does the % of active pores decrease from g1 to g2? It is difficult to compare the percentage of active pores between flowcells since, as the manuscript states, the computer from Lab7 crashed in the middle of the experiment and these numbers were not available. And there were only 4 flowcells used in this study. What are some of the reasons why the number of active pores fluctuates between flow cells?\n\nWere these SpotON flowcells? This feature was added recently sometime after R9.0 was released, and I am curious what effect this had on sequencing yield.\n\nPerhaps include data analyses software versions? \"we computed alignments and error-rate measurements using BWA-MEM and LAST, followed by re-alignment using marginAlign. 
Real-time evaluation of the runs was performed by minoTour (more information available from: http://minotour.github.io/minoTour), run locally at the two experimental laboratories. The “pass” and “fail” reads from each experiment were evaluated with NanoOK (version 0.95) using bwa alignments. Additional metrics and analyses were performed with bespoke Python and R scripts (available at https://github.com/camilla-ip/marcp2).\"\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Partly\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nNot applicable\n\nAre all the source data underlying the results available to ensure full reproducibility? Partly\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
}
] | 1
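The first reviewer above asks why "Identity %" and "Total error %" do not sum to 100. This usually comes down to normalisation: identity and the error components are computed over different denominators, so the two percentages are not complements. A minimal sketch with hypothetical counts (not taken from the paper, and not necessarily the exact NanoOK formulas):

```python
# Illustrative definitions only; the exact formulas used by NanoOK or
# marginAlign may differ. All counts below are hypothetical.

def alignment_metrics(matches, mismatches, insertions, deletions):
    """Return (identity %, total error %) for one read alignment.

    Identity here is matches over aligned *read* bases (deletions consume
    no read bases), while total error counts every edit over all alignment
    columns. The two percentages use different denominators, so they need
    not sum to 100.
    """
    read_bases = matches + mismatches + insertions
    identity = 100.0 * matches / read_bases
    columns = matches + mismatches + insertions + deletions
    total_error = 100.0 * (mismatches + insertions + deletions) / columns
    return identity, total_error

identity, error = alignment_metrics(matches=880, mismatches=40,
                                    insertions=30, deletions=50)
print(f"identity {identity:.1f}%, total error {error:.1f}%")
# identity 92.6%, total error 12.0% -- the sum exceeds 100
```

Under definitions like these, a read can show both high identity and a seemingly high total error, which is one way figures such as those quoted in the review can fail to be complementary.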
|
https://f1000research.com/articles/6-760
|
https://f1000research.com/articles/6-99/v1
|
01 Feb 17
|
{
"type": "Opinion Article",
"title": "On the primacy and irreducible nature of first-person versus third-person information",
"authors": [
"Patrizio E. Tressoldi",
"Enrico Facco",
"Daniela Lucangeli"
],
"abstract": "In this essay, we will support the claim that a) some first-person accounts cannot be reduced to their third-person neural and psychophysiological correlates and b) that these first-person accounts are the only information to reckon with when it is necessary to analyse qualia contents. Consequently, for many phenomena, first-person accounts are the only reliable source of information available, and knowledge of their neural and psychophysical correlates does not offer any additional information about them.",
"keywords": [
"first-person",
"third-person",
"consciousness",
"mind-brain relationship"
],
"content": "Introduction\n\nFirst-person accounts (1PAs) are written, verbal or intentional (conscious) behavioural (e.g. sign language) accounts of what a person feels, perceives or thinks: in other words, every mental content the person is aware of and can communicate to others if requested or desired. “I feel happy today”; “I see a pink rose”; “This panorama is awesome”; and “I think I had better do it tomorrow”, etc. are some typical examples.\n\nOn the contrary, third-person accounts (3PAs) are identical types of accounts plus their neuro and psychophysiological correlates, obtained by people who observe or measure other people’s behaviour and mental contents and processes. “He seems happy”; “She’s looking at a rose”, and “He pushed the red button” are examples of verbal accounts. “The power of his EEG alpha band had an increase of 10% when he relaxed”; “The medial frontal cortex increased its activity when she smiled at her partner”, and “Her heart rate decreased from 80 bpm to 60 bpm when she heard pleasant music” are examples of neuro and psychophysiological correlates of the mental activity of the observed person.\n\nIn this essay, we will support the claim that a) some 1PAs cannot be reduced to third-person neural and psychophysiological correlates accounts (3PAs) (we will not enter here into the debate about how 1PAs can also be considered 3PAs (Piccinini, 2010), with particular reference to heterophenomenology as defined by Dennett (2003)), and b) that their contents are the only information to reckon with when it is necessary to analyse qualia contents, that is, emotions, beliefs, reality interpretations, quality of life and health, and their effects on behaviour and brain activity. 
Consequently, c) even a complete description of the brain and psychophysiological correlates of these 1PAs does not add any further information about their contents and characteristics.\n\nThis approach is at odds with the view that, given the subjective and introspective nature of 1PAs, they lack objective contents and hence 3PAs are undeniably more informative.\n\nThere is no space here to describe the historical reasons why, in psychology, 1PAs lost their importance in comparison to 3PAs. Readers interested in this topic may refer to Klein (2015b).\n\n\nWhen first-person accounts are not reliable\n\nSince the seminal paper of Nisbett & Wilson (1977), evidence has accumulated showing that people’s 1PAs can fail in the detection of their decision processes (but see Petitmengin et al., 2013, for a manipulation which restored accuracy to a high level).\n\nAccording to Schooler (2015), 1PAs become unreliable when translation dissociations occur. Translation dissociations “correspond to situations in which, while in the process of re-representation, one omits, distorts, or otherwise misrepresents one’s mental state to oneself and/or others” (page 9).\n\nA typical example is the monitoring of mind-wandering, which is typically measured using self-catching and experience sampling techniques. Self-catching asks participants to monitor their mental activity and signal, for example by pressing a button, when they notice their mind activity is off-task. With experience sampling techniques, participants are probed at random time intervals to report whether their mind was wandering.\n\n\nWhen first-person accounts are the only valid information to consider\n\nBelow is a (non-exhaustive) list of phenomena and conditions that can be described and known only by 1PAs, whereas the third-person correlates are irrelevant in order to understand their characteristics. 
For each of the selected phenomena we will present some examples of 1PAs and 3PAs to make evident the different informational value of these accounts, in support of our main thesis.\n\nEmotions and Emotion (Mood) Disorders. The identification of emotions and of their valence and arousal can be measured only by taking into account 1PAs. For example, the Self-Assessment Manikin, in different versions (see Figure 1 as an example), was used for the database of the International Affective Pictures System, whereas bipolar semantic slider scales from 1 to 9 were used for the Nencki Affective Picture System (Marchewka et al., 2014).\n\nIn this case the participant is requested to rate the emotional valence and arousal of a stimulus on a 5-point scale. This figure has been reproduced with permission from Li et al., 2011.\n\nAs with the measurement of emotions triggered by pictures, faces, persons, etc., the measurement of mood and its disorders can only be done by referring to 1PAs, usually by way of structured questionnaires, e.g. the Beck Depression Inventory, or interviews, e.g. the Structured Clinical Interview for DSM-5 (SCID-5), in which participants respond with their extent of agreement with statements such as “I feel sad” or “I don’t cry any more than usual”, etc.\n\nOn the contrary, neuro and psychophysiological accounts (e.g. Allen et al., 2004; Lin et al., 2010) consist of biological signals that cannot convey any subjective and qualitative information about their contents but simply represent a correlation with a different type of information. For example, Matsubara et al. (2016) found that the anterior cingulate cortex volume could be a distinct endophenotype of bipolar disorders, while the insular volume could be a shared bipolar disorder and major depressive disorder endophenotype. Moreover, the insula could be associated with cognitive decline and poor outcome in bipolar disorders. 
Can we use this information to integrate our knowledge about the characteristics of the bipolar and depressive disorders of those participants?\n\nPain\n\nVisual analogue, numerical rating and verbal rating scales (see Figure 2) are commonly used to assess pain intensity in clinical trials and in other types of studies. Among the multidimensional questionnaires designed to assess pain, the McGill Pain Questionnaire and the Brief Pain Inventory are valid in many multilingual versions (Caraceni et al., 2002).\n\nParticipants are requested to rate their perceived pain by choosing one of six different options.\n\nAn example of a 3PA is “The insula ipsilateral to the site of needling was activated to a greater extent during real acupuncture than during the placebo intervention” (Pariente et al., 2005). It seems clear that this type of account cannot convey any useful information about the subjective quality of the pain of the persons experiencing it.\n\n\nConscious experiences\n\nAnomalous or non-ordinary experiences comprise a large group of personal experiences characterized by the lack of any clinical psychopathological syndrome, even if they may appear associated with some of them (Cardeña & Facco, 2015; Cardeña et al., 2014).\n\nAmong these experiences are:\n\nSpiritual experiences. Spiritual experiences, independently of how they are obtained, e.g. through spiritual practices such as meditation (Chen et al., 2011), spontaneously, or by using psychotropic drugs such as psilocybin (Griffiths et al., 2008), are based only on 1PAs.\n\nThe Revised Mystical Experience Questionnaire (Barrett et al., 2015) is one of the questionnaires available for the investigation of these experiences. 
Participants are requested to express their degree of experience related, for example, to: loss of the usual sense of time; experience of amazement; sense that the experience cannot be described adequately in words; gain of insightful knowledge experienced at an intuitive level, etc.\n\nBeauregard & Paquette (2006) investigated the neural correlates of this type of experience in a group of Carmelite nuns and found that this state was associated with significant loci of activation in the right medial orbitofrontal cortex, right middle temporal cortex, right inferior and superior parietal lobules, right caudate, left medial prefrontal cortex, left anterior cingulate cortex, left inferior parietal lobule, left insula, left caudate, and left brainstem. Can we achieve a better understanding of the quality of these experiences with this information?\n\nNear-Death Experiences. Near-death experiences are peculiar mental experiences reported by persons who have suffered severe injuries, e.g. cardiac arrest (Agrillo, 2011; Facco & Agrillo, 2012; van Lommel, 2011), characterized by increased vividness and sense of reality with respect to the normal awake state when neither consciousness nor cortical activity is expected: e.g. “Super awake. I could sense things more than I do in my usual state of awareness”, plus other peculiar experiences, for example encounters with spiritual beings, e.g. “I do remember a being of light, God, standing near me”, and experiences of living in a timeless dimension, e.g. “I became time and space”, etc. (excerpts from the http://www.nderf.org/Archives/exceptional.html database).\n\nMobbs & Watt (2011) are among those who are trying to explain these experiences as simply epiphenomena of some neural activity. 
For example, they stated: “the vivid pleasure frequently experienced in near-death experiences may be the result of fear-elicited opioid release, while the life review and REM components of the near-death experience could be attributed to the action of the locus coeruleus-noradrenaline system” (page 449). However, statements like these take for granted that the neural correlates “translate” into subjective experiences, while failing to offer a testable hypothesis on how this transformation can take place. Furthermore, this opioid hypothesis has several weaknesses (Ersek et al., 2004; Facco, 2010; Facco & Agrillo, 2012; Lawlor & Bruera, 2002; Vella-Brincat & Macleod, 2007), that is: a) opioids are only weak hallucinogens; b) people administered opioids for pain therapy do not experience NDEs, while their adverse events may include delirium, the phenomenology of which is totally different from that of NDEs; c) no hallucinogens induce standard, reproducible experiences, which largely depend on subjects’ personalities, the aims of their intake, context and rituality. In other words, when new facts challenge the endorsed axioms and theories, they are first interpreted by trying to constrain them within the available knowledge, while their explanation may call for new, yet unknown, laws of nature (i.e., properties of consciousness).\n\n\nMemory\n\nUnlike implicit memory, e.g. procedural and associative memory, all aspects of explicit memory, e.g. autobiographical and semantic memory, have to rely only on 1PAs (Wilson, 2002). For example, testing autobiographical memory requires the participants to retrieve and describe personal life episodes, e.g. 
celebrations, illnesses, special encounters with friends and relatives, etc.\n\nMoreover, Klein (2015) extensively discussed that in order to qualify as memory, “the product of learning needs to be a mental state that includes the feeling that one is reliving a past experience—that is, it provides a directly-given, non-inferential sense that one’s current mental state reflects a happening from one’s past” (page 2). This distinction makes it possible to interpret a series of impairments characterized by a dissociation between memory contents and the feeling of ownership of them (Klein, 2015a).\n\nAs an example of 3PAs, Conway et al. (2001), recording slow cortical potentials, found that left frontal negativity primarily reflects cortical activation associated with the operation of a complex retrieval process, whereas the later temporal and occipital negativity (the result of the retrieval process) reflects activation corresponding to the formation and maintenance of a detailed memory. Can one extract from these data useful information related to the contents and the subjective experience of memory of the participants?\n\n\nReasoning\n\nAmong the many tasks that can be used to investigate reasoning, one is to judge whether the final statement after a series of propositions is true or false. For example, “All men are animals. All animals are mortal. Hence, all men are mortal.”: true or false? Papageorgiou et al. (2016) investigated the EEG correlates of a series of valid and paradoxical statements and found that “During the processing of paradoxes, results demonstrated a more positive event-related potential deflection (P300) across frontal regions, whereas processing of valid statements was associated with noticeable P300 amplitudes across parieto-occipital regions”. 
Is there any useful information in these data that can integrate what the participants experience as thoughts, feelings and emotions?\n\nFurthermore, any judgement in terms of true vs. false is closely dependent on culture and available knowledge and, thus, is intrinsically weak and provisional. Judgements on both truth and falsity, as well as on paradoxes, may change over time: for example, the unity of space-time and matter-energy, Heisenberg’s principle of indeterminacy and the concept of entanglement look to be true in quantum physics, but false or incomprehensible according to classical, Newtonian physics. Thus, neurophysiological data about judgements can only provide an estimation of brain mechanisms and, at best, help one to check whether the subject is processing statements as paradoxes or as valid statements, without any possible inference on the subject’s experience, cultural components and, last but not least, on knowledge and comprehension of the truth, which remains in the realm of the mind.\n\n\nBeliefs and Self-evaluations\n\nAll cultural, ethical, religious and scientific beliefs, as well as all kinds of delusional beliefs, can only be known by using 1PAs (e.g. Coltheart et al., 2011; Jonas & Fischer, 2006; Zeidler et al., 2002).\n\nFor example, Kapogiannis et al. (2009) investigated the neural correlates of three psychological dimensions of religious belief (God’s perceived level of involvement, God’s perceived emotion, and doctrinal/experiential religious knowledge). Participants’ 1PAs were obtained by requesting them to rate different statements, e.g. “God cares about the world’s welfare” and “All religions have truth”, on a 7-point Likert scale. The neural correlates of these dimensions were investigated by using fMRI. These authors found different neural networks associated with the three religious beliefs, e.g. 
more activation of the bilateral inferior frontal gyrus, pars triangularis and Brodmann area 45 in relationship with God’s lack of involvement, and more activation of the right middle frontal gyrus and Brodmann area 11 in relationship to statements reflecting God’s love, etc.\n\nHow much information can we add to what we obtained from 1PAs by using these 3PAs?\n\nVisual and auditory hallucinations, such as hearing voices (Holt & Tickle, 2014), can be identified and assessed by using 1PAs (Haddock et al., 1999).\n\nBarkus et al. (2007), investigating the neural correlates of the non-clinical auditory hallucinations of a group of participants by using fMRI, found increased activation in the superior and middle temporal cortex. Did this information increase what the authors already knew about the auditory hallucinations of their participants?\n\nThe core components of placebo and nocebo effects are expectations/beliefs and conditioned reactions (Price et al., 2008; Rief & Petrie, 2016). Whereas conditioned reactions can be activated bypassing any mental activity, expectations and beliefs are intrinsically 1PAs, independently of whether people are aware of them or not (Jensen et al., 2012), and cannot be interpreted by using their neural correlates.\n\nRisk perception, for natural, economic, political and hazard events alike, is another important mental content that can only be measured by using 1PAs (Sjoberg, 2000).\n\nFor example, Schmälzle et al. (2011) investigated HIV risk perception by presenting photographs of unknown persons and recording EEG evoked response potentials.\n\nThey found that the implicit processing of individuals prone to risky behavior was associated with an early occipital negativity (240–300 ms) and a subsequent central positivity between 430 and 530 ms, compared to individuals with safer practices. 
It appears evident that this information cannot be used to increase the knowledge about risk perception obtained by 1PAs.\n\nAll natural (Daniel & Meitner, 2001), human (Berggren et al., 2010), animal and aesthetic appreciation and judgments can only be assessed by 1PAs (Leder et al., 2004).\n\nThakral et al. (2012) investigated the neural correlates of van Gogh paintings evoking a range of motion experience by using fMRI and found that activity in the sensory motion-processing region MT+ was correlated with the degree of motion experience (but not the experience of pleasantness), whereas the experience of pleasantness (but not motion experience) was associated with increased activity in the right anterior prefrontal cortex. Can this neural information add anything useful about the pleasantness and motion appreciation experienced by these participants?\n\nThe World Health Organisation (WHO) defines quality of life (QoL) as “individuals’ perception of their position in life in the context of the culture and value systems in which they live and in relation to their goals, expectations, standards and concerns” (WHO, 1998). QoL is evaluated by different versions of questionnaires, of which the best known are those developed by the WHOQOL groups (WHO, 1998; WHOQOL Group, 1995).\n\nUrry et al. (2004) requested their participants to complete self-report measures of eudaimonic well-being (leading a virtuous life and doing what is worth doing), hedonic well-being, and positive affect, and subsequently recorded their EEG activity. They found that greater left than right superior frontal activation was associated with higher levels of both forms of well-being. 
May we use this information to gather more details about what the participants already reported in their 1PAs?\n\n\nDiscussion\n\nAs anticipated in the introduction, the aim of this essay was to support the claim that there are many varieties of 1PAs whose contents and characteristics can be known and investigated only through these accounts and cannot be integrated with information gathered by 3PAs, in particular information related to their neural or psychophysiological correlates.\n\nWe have listed ten types of phenomena that can be studied only by referring to 1PAs, even if for each of them there is a legitimate interest in knowing their neural and psychophysiological correlates. However, it is important to realize, on the part of both researchers and the funders of their investigations, that knowledge of these neural and psychophysiological correlates has nothing to add to the knowledge of the phenomena themselves.\n\nAccording to the authors of “Neuromania: on the limits of brain science” (Legrenzi & Umiltà, 2011), the popularity of the prefix “neuro” before economy (Camerer et al., 2005), aesthetics (Skov & Vartanian, 2009), marketing (Ariely & Berns, 2010), theology (Barrett, 2011), etc., represents the degeneration of an uncritical adherence to metaphysical physicalism or mind-brain identity theory and of a superficial knowledge of the complex relationship between mind contents and their neural correlates. Many authors continue to alert researchers to the problems in defining such a relationship. Max Coltheart, for example, repeatedly warned that “testing theories of cognition” by using fMRI investigations requires “both sensitivity (a claim that brain region X will always be active when cognitive process C is being executed) and specificity (the claim that brain region X will not be active except when cognitive process C is being executed). 
” (page 102) (Coltheart, 2013), avoiding the so-called “consistency fallacy”, that is, the erroneous inference that data that are consistent with some theory can, just in virtue of this consistency, be offered as evidence in support of that theory. Something additional is needed, that is, evidence against the contradictory of the hypothesis.\n\nOur statement that 1PAs are irreducible to 3PAs could be falsified by evidence suggesting that it is possible to change 1PAs by acting on their biological correlates. For example, Saitoh et al. (2007) were successful in reducing pain due to spinal cord or peripheral lesions by applying high-frequency repetitive transcranial magnetic stimulation to the primary motor cortex. Conversely, hypnosis may yield a significant increase of the pain threshold, up to the level of surgical anaesthesia, by providing proper instructions and suggestions to the patient (Facco et al., 2011; Facco et al., 2013); this is a very relevant fact, allowing for Enhanced Recovery After Surgery without costs and adverse events (Facco, 2016); the same holds for meditation, a valuable introspective technique sharing several features with hypnosis (Facco, 2017). As a result, the 1PA is no less relevant than the 3PA, even in the context of the pragmatic approach of clinical medicine, despite having been understated by the ruling reductionist paradigm.\n\nThe main aim of our paper is not to support the view that the study of the biological correlates of many 1PAs is irrelevant and a waste of resources, but to argue that the information we can gather from 1PAs is irreducible to 3PAs, and that the latter cannot increase the information we get from 1PAs even when it is possible to infer a direct causal relationship between 3PAs and 1PAs. In fact, in the Saitoh et al. (2007) example, the modification of primary motor cortex activity does not contain any useful information about the participants’ change in pain perception.\n\nOur approach is akin to Jack’s (2013) statement: “.. 
our experiential understanding of our own minds is fundamentally different from, and at least to some degree incompatible with, our understanding of the mind as a mechanism. At the same time, this experiential understanding is no less important than our mechanistic understanding of the mind. In fact, it is more important. Our experiential perspective guides our understanding of ourselves, and serves as the compass which aids our navigation through the social world, allowing us to see, and ultimately connect to, the humanity in others” (page 670).\n\nA similar position is held by Guta (2015): “..the knowledge [neuronal, chemical, electrical activities that take place in the brain] we gather in this regard, no matter how detailed it may turn out to be, offers no help whatsoever in and of itself by way of giving us access to the first-person data. To retrieve the latter data, the right thing to do would be to directly engage with subjects of experience, that is, with people. The imaging techniques scan brains but not people’s thoughts/intentions/plans/regrets, and the list goes on and on” (page 241).\n\nWe hope this essay will alert all scientists endorsing a metaphysical physicalist approach, who posit that all mind contents are nothing but a by-product of the brain or emergent properties of its computational complexity (Schwartz et al., 2016; Smart, 2014), to the fact that, for many phenomena, 1PAs are the only reliable source of information available and that knowledge of their neural and psychophysical correlates does not offer any additional information about them. 
Furthermore, the wealth of data available on hypnosis and meditation (see Facco, 2014; Facco, 2017), as well as on music perception and performance (Fauvel et al., 2014; Han et al., 2009; Koelsch et al., 2005; Ohnishi et al., 2001), provides increasing evidence that the mind-brain relationship is not a unidirectional one, defined by a bottom-up hierarchy from brain to mind; rather, it can be better conceived as a bidirectional relationship, in which the mind may also engender both functional and steady, structural changes in the brain. Needless to say, music, its value and its meaning can only exist in the realm of 1PAs. The whole problem is endowed with huge epistemological and metaphysical implications, to be reappraised in order to avoid any inadvertent dogmatic drift in the scientific approach to the world of subjectivity (Klein, 2013).\n\nGiven the enormous investments in brain research both in the USA and in Europe (see Global Brain Workshop, 2016; Markram, 2012), there is a serious risk that very few research resources (e.g. funds, personnel, etc.) will be devoted to the investigation of 1PAs. It is curious that a similar worry is shared by supporters of a physicalist mind-brain metaphysics such as Schwartz et al. (2016), when they declare that “..an eliminative reductionist perspective, in which behaviors, thoughts, feelings, and other experiences can be completely explained by biological processes at the cellular and molecular levels, may be difficult to square with much current scholarship in neuroscience and in the broader field of psychology. Nevertheless, given the dependence of researchers, departments, and universities on federal grant funding, priorities emphasized by funding agencies and by their review committees may “force the hands” of researchers, departments, and universities to prioritize neuroscience at the expense of other approaches”. 
(page 15)\n\nFollowing Stanley Klein’s discussion about the limitations of reducing the study of psychological science to its biological mechanisms, we endorse his claim that “experiential aspects of reality (reflected in mental construct terms such as memory, belief, thought, and desire) give us reason to remain open to the need for psychological explanation in the treatment of mind.” (Klein, 2016)",
"appendix": "Author contributions\n\n\n\nPT, EF and DL conceived the paper. PT and EF wrote it. All authors were involved in its revision.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nAcknowledgements\n\nWe thank the Proof Reading Service for English revision.\n\n\nReferences\n\nAgrillo C: Near-death experience: Out-of-body and out-of-brain? Rev Gen Psychol. 2011; 15(1): 1–10. Publisher Full Text\n\nAllen JJ, Coan JA, Nazarian M: Issues and assumptions on the road from raw signals to metrics of frontal EEG asymmetry in emotion. Biol Psychol. 2004; 67(1–2): 183–218. PubMed Abstract | Publisher Full Text\n\nAriely D, Berns GS: Neuromarketing: the hope and hype of neuroimaging in business. Nat Rev Neurosci. 2010; 11(4): 284–292. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBarkus E, Stirling J, Hopkins R, et al.: Cognitive and neural processes in non-clinical auditory hallucinations. Br J Psychiatry. 2007; 191(51): s76–s81. PubMed Abstract | Publisher Full Text\n\nBarrett FS, Johnson MW, Griffiths RR: Validation of the revised Mystical Experience Questionnaire in experimental sessions with psilocybin. J Psychopharmacol. 2015; 29(11): 1182–90. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBarrett NF: Review of Principles of Neurotheology. Ars Disputandi. 2011; 133–136.\n\nBeauregard M, Paquette V: Neural correlates of a mystical experience in Carmelite nuns. Neurosci Lett. 2006; 405(3): 186–90. PubMed Abstract | Publisher Full Text\n\nBerggren N, Jordahl H, Poutvaara P: The looks of a winner: Beauty and electoral success. J Public Econ. 2010; 94(1–2): 8–15. Publisher Full Text\n\nCamerer C, Loewenstein G, Prelec D: Neuroeconomics: How Neuroscience Can Inform Economics. J Econ Lit. 2005; 43(1): 9–64. 
Publisher Full Text\n\nCaraceni A, Cherny N, Fainsinger R, et al.: Pain measurement tools and methods in clinical research in palliative care: recommendations of an Expert Working Group of the European Association of Palliative Care. J Pain Symptom Manage. 2002; 23(3): 239–55. PubMed Abstract | Publisher Full Text\n\nCardeña E, Facco E: Non-Ordinary Mental Expressions. Lausanne, CH: Frontiers Media SA. 2015. Publisher Full Text\n\nCardeña E, Lynn SJ, Krippner S: Varieties of Anomalous Experience: Examining the Scientific Evidence. 2nd Edition. Washington, DC: American Psychological Association. 2014. Reference Source\n\nChen Z, Qi W, Hood RW, et al.: Common Core Thesis and Qualitative and Quantitative Analysis of Mysticism in Chinese Buddhist Monks and Nuns. J Sci Study Relig. 2011; 50(4): 654–670. Publisher Full Text\n\nColtheart M: How Can Functional Neuroimaging Inform Cognitive Theories? Perspect Psychol Sci. 2013; 8(1): 98–103. PubMed Abstract | Publisher Full Text\n\nColtheart M, Langdon R, McKay R: Delusional Belief. Annu Rev Psychol. 2011; 62(1): 271–298. PubMed Abstract | Publisher Full Text\n\nConway MA, Pleydell-Pearce CW, Whitecross SE: The Neuroanatomy of Autobiographical Memory: A Slow Cortical Potential Study of Autobiographical Memory Retrieval. J Mem Lang. 2001; 45(3): 493–524. Publisher Full Text\n\nDaniel TC, Meitner MM: Representation validity of landscape visualizations: the effects of graphical realism on perceived scenic beauty of forest vistas. J Environ Psychol. 2001; 21(1): 61–72. Publisher Full Text\n\nDennett DC: Who’s on First? Heterophenomenology Explained. J Conscious Stud. 2003; 10(9): 19–30. Reference Source\n\nErsek M, Cherrier MM, Overman SS, et al.: The cognitive effects of opioids. Pain Manag Nurs. 2004; 5(2): 75–93. PubMed Abstract | Publisher Full Text\n\nFacco E: Esperienze di premorte. Scienza e coscienza ai confini tra fisica e metafisica. Lungavilla (PV): Edizioni Altravista. 
2010.\n\nFacco E: Meditazione e Ipnosi tra neuroscienze, filosofia e pregiudizio. Lungavilla, PV Italy: Altravista. 2014. Reference Source\n\nFacco E: Hypnosis and anesthesia: back to the future. Minerva Anestesiol. 2016; 82(12): 1343–1356. PubMed Abstract\n\nFacco E: Hypnosis and meditation: two sides of the same coin? Int J Clin Exp Hypn. submitted to publication, 2017.\n\nFacco E, Agrillo C: Near-death experiences between science and prejudice. Front Hum Neurosci. 2012; 6: 209. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFacco E, Casiglia E, Masiero S, et al.: Effects of hypnotic focused analgesia on dental pain threshold. Int J Clin Exp Hypn. 2011; 59(4): 454–468. PubMed Abstract | Publisher Full Text\n\nFacco E, Pasquali S, Zanette G, et al.: Hypnosis as sole anaesthesia for skin tumour removal in a patient with multiple chemical sensitivity. Anaesthesia. 2013; 68(9): 961–965. PubMed Abstract | Publisher Full Text\n\nFauvel B, Groussard M, Chetelat G, et al.: Morphological brain plasticity induced by musical expertise is accompanied by modulation of functional connectivity at rest. Neuroimage. 2014; 90: 179–188. PubMed Abstract | Publisher Full Text\n\nGlobal Brain Workshop 2016 Attendees: Grand challenges for global brain sciences [version 1; referees: 1 approved with reservations]. F1000Res. 2016; 5: 2873. Publisher Full Text\n\nGriffiths R, Richards W, Johnson M, et al.: Mystical-type experiences occasioned by psilocybin mediate the attribution of personal meaning and spiritual significance 14 months later. J Psychopharmacol. 2008; 22(6): 621–32. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGuta MP: Consciousness, First-Person Perspective, and Neuroimaging. J Conscious Stud. 2015; 22(11–12): 218–245. Reference Source\n\nHaddock G, McCarron J, Tarrier N, et al.: Scales to measure dimensions of hallucinations and delusions: the psychotic symptom rating scales (PSYRATS). Psychol Med. 1999; 29(4): 879–889. 
PubMed Abstract | Publisher Full Text\n\nHan Y, Yang H, Lv YT, et al.: Gray matter density and white matter integrity in pianists' brain: a combined structural and diffusion tensor MRI study. Neurosci Lett. 2009; 459(1): 3–6. PubMed Abstract | Publisher Full Text\n\nHolt L, Tickle A: Exploring the experience of hearing voices from a first person perspective: a meta-ethnographic synthesis. Psychol Psychother. 2014; 87(3): 278–297. PubMed Abstract | Publisher Full Text\n\nJack AI: Introspection: the tipping point. Conscious Cogn. 2013; 22(2): 670–1. PubMed Abstract | Publisher Full Text\n\nJensen KB, Kaptchuk TJ, Kirsch I, et al.: Nonconscious activation of placebo and nocebo pain responses. Proc Natl Acad Sci U S A. 2012; 109(39): 15959–15964. PubMed Abstract | Publisher Full Text | Free Full Text\n\nJonas E, Fischer P: Terror management and religion: evidence that intrinsic religiousness mitigates worldview defense following mortality salience. J Pers Soc Psychol. 2006; 91(3): 553–567. PubMed Abstract | Publisher Full Text\n\nKapogiannis D, Barbey AK, Su M, et al.: Cognitive and neural foundations of religious belief. Proc Natl Acad Sci U S A. 2009; 106(12): 4876–81. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKlein SB: The two selves: Their metaphysical commitments and functional independence. Oxford University Press. 2013. Reference Source\n\nKlein SB: What memory is. Wiley Interdiscip Rev Cogn Sci. 2015; 6(1): 1–38. PubMed Abstract | Publisher Full Text\n\nKlein SB: The feeling of personal ownership of one’s mental states: A conceptual argument and empirical evidence for an essential, but underappreciated, mechanism of mind. Psychology of Consciousness: Theory, Research, and Practice. 2015a; 2(4): 355–376. Publisher Full Text\n\nKlein SB: A defense of experiential realism: The need to take phenomenological reality on its own terms in the study of the mind. Psychology of Consciousness: Theory, Research, and Practice. 2015b; 2(1): 41–56. 
Publisher Full Text\n\nKlein SB: The unplanned obsolescence of psychological science and an argument for its revival. Psychology of Consciousness: Theory, Research, and Practice. 2016; 3(4): 357–379. Publisher Full Text\n\nKoelsch S, Fritz T, Schulze K, et al.: Adults and children processing music: an fMRI study. Neuroimage. 2005; 25(4): 1068–1076. PubMed Abstract | Publisher Full Text\n\nLawlor PG, Bruera ED: Delirium in patients with advanced cancer. Hematol Oncol Clin North Am. 2002; 16(3): 701–714. PubMed Abstract\n\nLeder H, Belke B, Oeberst A, et al.: A model of aesthetic appreciation and aesthetic judgments. Br J Psychol. 2004; 95(Pt 4): 489–508. PubMed Abstract | Publisher Full Text\n\nLegrenzi P, Umiltà C: Neuromania: On the limits of brain science. Oxford: Oxford University Press, 2011. Publisher Full Text\n\nLi Y, Li X, Ratcliffe M, et al.: A real-time EEG-based BCI system for attention recognition in ubiquitous environment. In Proceedings of 2011 international workshop on Ubiquitous affective awareness and intelligent interaction, ACM. 2011; 33–40. Publisher Full Text\n\nLin YP, Wang CH, Jung TP, et al.: EEG-Based Emotion Recognition in Music Listening. IEEE Trans Biomed Eng. 2010; 57(7): 1798–1806. PubMed Abstract | Publisher Full Text\n\nMarchewka A, Żurawski Ł, Jednoróg K, et al.: The Nencki Affective Picture System (NAPS): introduction to a novel, standardized, wide-range, high-quality, realistic picture database. Behav Res Methods. 2014; 46(2): 596–610. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMarkram H: The Human Brain Project. Sci Am. 2012; 306(6): 50–55. PubMed Abstract | Publisher Full Text\n\nMatsubara T, Matsuo K, Harada K, et al.: Distinct and Shared Endophenotypes of Neural Substrates in Bipolar and Major Depressive Disorders. PLoS One. 2016; 11(12): e0168493. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNisbett RE, Wilson TD: Telling more than we know: Verbal reports on mental processes. Psychol Rev. 
1977; 84: 231–259. Publisher Full Text\n\nOhnishi T, Matsuda H, Asada T, et al.: Functional anatomy of musical perception in musicians. Cereb Cortex. 2001; 11(8): 754–760. PubMed Abstract | Publisher Full Text\n\nPapageorgiou C, Stachtea X, Papageorgiou P, et al.: Aristotle Meets Zeno: Psychophysiological Evidence. PLoS One. 2016; 11(12): e0168067. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPariente J, White P, Frackowiak RS, et al.: Expectancy and belief modulate the neuronal substrates of pain treated by acupuncture. NeuroImage. 2005; 25(4): 1161–1167. PubMed Abstract | Publisher Full Text\n\nPetitmengin C, Remillieux A, Cahour B, et al.: A gap in Nisbett and Wilson’s findings? A first-person access to our cognitive processes. Conscious Cogn. 2013; 22(2): 654–669. PubMed Abstract | Publisher Full Text\n\nPiccinini G: How to Improve on Heterophenomenology: The Self-Measurement Methodology of First-Person Data. Journal of Consciousness Studies. 2010; 17(3–4): 84–106. Reference Source\n\nPrice DD, Finniss DG, Benedetti F: A comprehensive review of the placebo effect: recent advances and current thought. Annu Rev Psychol. 2008; 59: 565–590. PubMed Abstract | Publisher Full Text\n\nRief W, Petrie KJ: Can Psychological Expectation Models Be Adapted for Placebo Research? Front Psychol. 2016; 7: 1876. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSaitoh Y, Hirayama A, Kishima H, et al.: Reduction of intractable deafferentation pain due to spinal cord or peripheral lesion by high-frequency repetitive transcranial magnetic stimulation of the primary motor cortex. J Neurosurg. 2007; 107(3): 555–559. PubMed Abstract | Publisher Full Text\n\nSchmälzle R, Schupp HT, Barth A, et al.: Implicit and Explicit Processes in Risk Perception: Neural Antecedents of Perceived HIV Risk. Front Hum Neurosci. 2011; 5: 43. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nSchooler J: Bridging the Objective/Subjective Divide: Towards a Meta-Perspective of Science and Experience. 2015. Retrieved November 28, 2016. Publisher Full Text\n\nSchwartz SJ, Lilienfeld SO, Meca A, et al.: The role of neuroscience within psychology: A call for inclusiveness over exclusiveness. Am Psychol. 2016; 71(1): 52–70. PubMed Abstract | Publisher Full Text\n\nSjoberg L: Factors in Risk Perception. Risk Analysis. 2000; 20(1): 1–11. PubMed Abstract | Publisher Full Text\n\nSkov M, Vartanian O: Introduction: What is neuroaesthetics? Baywood Publishing Co. 2009. Reference Source\n\nSmart JJC: The Mind/Brain Identity Theory. In The Stanford Encyclopedia of Philosophy (Winter 2014 Edition), Edward N. Zalta (ed.). 2014. Reference Source\n\nThakral PP, Moo LR, Slotnick SD: A neural mechanism for aesthetic experience. NeuroReport. 2012; 23(5): 310–313. PubMed Abstract | Publisher Full Text\n\nUrry HL, Nitschke JB, Dolski I, et al.: Making a Life Worth Living: Neural Correlates of Well-Being. Psychol Sci. 2004; 15(6): 367–372. PubMed Abstract | Publisher Full Text\n\nvan Lommel P: Near-death experiences: the experience of the self as real and not as an illusion. Ann N Y Acad Sci. 2011; 1234(1): 19–28. PubMed Abstract | Publisher Full Text\n\nVella-Brincat J, Macleod AD: Adverse effects of opioids on the central nervous systems of palliative care patients. J Pain Palliat Care Pharmacother. 2007; 21(1): 15–25. PubMed Abstract | Publisher Full Text\n\nWHO: Development of the World Health Organization WHOQOL-BREF quality of life assessment. The WHOQOL Group. Psychol Med. 1998; 28(3): 551–558. PubMed Abstract | Publisher Full Text\n\nWHOQOL Group: The World Health Organization Quality of Life assessment (WHOQOL): position paper from the World Health Organization. Soc Sci Med. 1995; 41(10): 1403–1409. PubMed Abstract | Publisher Full Text\n\nWilson BA: Assessment of Memory Disorders. In A. D. Baddeley, M. Kopelman, & B. A. Wilson (Eds.): The Essential Handbook of Memory Disorders for Clinicians. Chichester: John Wiley & Sons. 2002; 159–178. Reference Source\n\nZeidler DL, Walker KA, Ackett WA, et al.: Tangled up in views: Beliefs in the nature of science and responses to socioscientific dilemmas. Science Education. 2002; 86(3): 343–367. Publisher Full Text"
}
|
[
{
"id": "21807",
"date": "11 Apr 2017",
"name": "Maurits van den Noort",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nIn the present opinion article 1, the authors firstly present support for the claim that some first-person accounts (1PAs) cannot be reduced to third-person neural- and psychophysiological correlates accounts (3PAs). Secondly, they state that the 1PAs contents are the only information to reckon when it is necessary to analyze qualia contents (e.g., emotions, beliefs, reality interpretations, quality of life and health) and their effects on behavior and the brain activity. Thirdly, according to the authors, even a complete description of the brain and psychophysiological correlates of these 1PAs does not add any further information about their contents and characteristics. Tressoldi et al. (2017) make several challenging and highly interesting claims; moreover, they give a state of the art overview of the 1PAs and 3PAs results and research limitations so far. 
Last but not least, their article stimulates further discussion on how to best invest research money in order to make progress in this research field; all in all, we recommend publication, but we have several major and minor points that the authors should further address.\nFirstly, the authors are right (see page 2) that 1PAs are useful in clinical research and diagnostics of psychiatric disorders because they provide subjective and qualitative information; however, on the other hand, we would like to stress that self-rating instruments, such as the Beck Depression Inventory-II (BDI-II)2, have their own limitations3. For instance, the interpretation of results from self-report instruments in general, but also from specific questionnaires, can contain flaws (e.g., Subjective Well-being under Neuroleptics scale – Short form, etc.)4. For instance, it was shown that patients might show a certain response pattern, like a tendency to exaggerate their symptoms or, on the contrary, to willingly under-report the severity or frequency of their symptoms in order to present their situation more positively3. In addition, test-taking attitude (e.g., social desirability) was found to play a critical role in the responses to clinical self-report instruments5. In other words, how should psychologists/psychiatrists deal with those methodological limitations in daily clinical practice according to the authors? In our opinion, relying on those (structured) questionnaire outcomes only does not seem the way to go.\nThe authors are right when they write on page 3 about 3PAs: “It seems clear that this type of information cannot convey any useful information about the subjective quality of pain of the persons experiencing it”. However, the authors somehow do not mention that in the years after the Pariente et al. 
(2005)6 publication, the measurement of deqi scores of the participants in acupuncture studies was introduced, and these are now being collected alongside the 3PAs7-8, consisting of the following 12 Deqi sensations: aching, soreness, numbness, fullness, sharp or dull pain, pressure, heaviness, warmth, coolness, tingling, itching, and any others7-8. This methodology is also used in recent functional magnetic resonance imaging (fMRI) studies on acupuncture9; moreover, the MR signals of the brain areas that had been activated by acupuncture stimulation at a specific acupuncture point (for instance GB34) are then correlated9. The authors should add this to their manuscript because this would give a more complete picture of the current state of the art in this specific research field, especially since they attack this field for using 3PAs only, which is not correct.\nThirdly, one of their most provocative statements is the one on page 5 where the authors state: “However it is important to realize, on the part of both researchers and the funders of their investigations, that the knowledge of their neural and psychophysiological correlates has nothing to add to the knowledge of these phenomena”. We find this a challenging statement and we fully agree with the authors that both funding agencies and researchers are often not critical enough in their assessments of those studies and large grant applications. To date, the studies on neural and psychophysiological correlates do not at all contribute significantly, taking into account the large amounts of research funding/resources that have been invested so far. However, why do the authors think that it would be technically, hypothetically, impossible to combine 1PAs with 3PAs? 
Perhaps neural and psychophysiological measurements while the person is aware of and can communicate the mental contents to others (if requested or desired) could still add important clinical information (e.g., neural and psychophysiological measurements while patients with depression fill in the BDI-II)? In line with this, the authors write on page 5 in response to the Urry et al. (2004)10 study “May we use this information to gather more details about what already participants reported in their 1PAs?”, but despite mentioning this question, they further ignore this. In our opinion, they too easily dismiss this option. Therefore, in our opinion, their statement that the neural and psychophysiological correlates “have nothing to add” to the knowledge of these phenomena is too strict and premature; it might be right, but it could also be totally wrong.\nThe fourth major point that we would like to touch on (see page 6) is the fact that in their discussion the authors focus on the usefulness of biological correlates of 1PAs only. It is true that the biological perspective (significantly marked by the advances in neuroimaging techniques11) is very popular in psychology at the moment; however, we are wondering what the opinion of the authors is with respect to their claims in terms of the fundamental laws of physics12. Note that to date, a unified brain processing theory (unifying physics and neuroscience) does not exist13. How do the authors think that a better theory of its underlying fundamental laws of physics could describe and explain 1PAs and 3PAs? This area might build a bridge in the understanding of 1PAs and the underlying mechanisms that are partly measured by 3PAs.\nFinally, there are several minor issues that we would like for the authors to address in their final version of the paper. 
For instance, the authors should add suitable references behind “Beck Depression Inventory” and “The Structured Clinical Interview for DSM-5 (SCID-5)” (see page 2); moreover, the authors should include higher resolution images of Figure 1 and Figure 2 (see page 3). The authors should write out “NDEs” the first time that they use this abbreviation (see page 4). The easiest way seems to be to include “NDE” immediately after “Near-Death-Experiences” on page 4. In addition, the authors should take a closer look at “Klein (2015)14” on page 4 because there are 3 “Klein (2015)” references (Klein, 2015a14; Klein, 2015b15; Klein, 2015c16) but the authors only use “2015a” and “2015b” (see also the reference list on page 8). Furthermore, the authors should include suitable references behind “space-time and matter-energy”, “Heisenberg’s principle of indetermination” (note it should be “Heisenberg’s” instead of “Heisnberg’s”), and “the concept of entanglement” in order to support their statements (see page 4). The authors should correct the following misspellings/errors: on page 2, “be” should be added to the sentence “We will not enter here in the debate about how 1 PAs can also be considered 3PAs”; on page 4, “acording” should be replaced by “according” (see the Reasoning subsection); and on page 5, “helped” should be replaced by “help” and “knew” should be replaced by “know” (see the Hallucinations subsection), “be” should be added to the sentence “that can only be measured” (see the Risk perception subsection), and it should be “which” instead of “witch” (see the Discussion section). Also, we would suggest adding a “Conclusion” section at the end of their paper (on page 6) or, alternatively, at the end of the Introduction section of their paper. 
The last minor revision is that the authors should add an “s” behind the word “author” in the Author contributions section of their paper.\nTo conclude, the present opinion article1 is definitely worth publishing and will stimulate further discussion on how to best investigate and use research money and resources in the study of 1PAs and 3PAs. Moreover, the future will show whether the authors are correct in their claim that even a complete description of the brain and psychophysiological correlates of these 1PAs does not add any further information about their contents and characteristics.",
"responses": [
{
"c_id": "2683",
"date": "04 May 2017",
"name": "Patrizio Tressoldi",
"role": "Author Response",
"response": "Thank you for your accurate and constructive review, and sorry for the multiple typos. In the following we try to answer all your main comments.\n\n…self-rating instruments, such as the Beck Depression Inventory-II (BDI-II)2, have their own limitations… how should psychologists/psychiatrists deal with those methodological limitations in daily clinical practice according to the authors? In our opinion, relying on those (structured) questionnaire outcomes only, does not seem the way to go.\n\nReply: we acknowledged the limitations of all instruments and procedures for a complete assessment of 1PAs, expanding the paragraph “First-person accounts are not always reliable”, now moved before the Discussion. However, these limitations cannot be offset by 3PAs, but only by improving the instruments and procedures for the knowledge of 1PAs; see for example Pastore M, Nucci M, Bobbio A and Lombardi L (2017). Empirical scenarios of fake data analysis: The Sample Generation by Replacement (SGR) approach. Front. Psychol. 8:482. doi: 10.3389/fpsyg.2017.00482; Lange R. Rasch scaling and cumulative theory-building in consciousness research. Psychology of Consciousness: Theory, Research, and Practice. 2017 Mar;4(1):135.\n\n…in the years after the Pariente et al. (2005) publication, the measurement of deqi scores of the participants in acupuncture studies were introduced and are now being collected alongside the 3PAs, consisting of the following Deqi sensations: aching, soreness, numbness, fullness, sharp or dull pain, pressure, heaviness, warmth, coolness, tingling, itching, and any others.\n\nReply: in the “Pain” paragraph we added the procedure used by Hui et al. (2007) for the assessment of Deqi sensations. 
Their procedure confirms that these sensations can only be investigated by referring to 1PAs and not 3PAs.\n\nTo date, the studies on neural and psychophysiological correlates not at all contribute significantly, taking into account the large amounts of research funding/resources that have been invested so far. However, why do the authors think that it would be technically, hypothetically, impossible to combine 1PAs with 3PAs?\n\nReply: throughout our paper we presented examples where 1PAs and 3PAs are investigated together. However, our main thesis is that they offer very different information and that 1PAs cannot be obtained from 3PAs and hence are primary and irreducible.\n\n…we are wondering what the opinion of the authors is with respect to their claims, in terms of the fundamental laws of physics? Note that to date, a unified brain processing theory (unifying physics and neuroscience) does not exist. How do the authors think that a better theory of its underlying fundamental laws of physics could describe and explain 1PAs and 3PAs?\n\nReply: We agree completely with the necessity to consider valid alternatives to the mainstream physicalism metaphysics, as we pointed out in the Discussion. Such new alternatives must unify the fundamental laws of physics, which are not those of classic physics, not only with neuroscience but also with subjective qualia.\n\nFinally, there are several minor issues that we would like for the authors to address in their final version of the paper.\n\nReply: thank you very much for all these issues, which we fixed in version 2 of the paper."
}
]
},
{
"id": "21505",
"date": "28 Apr 2017",
"name": "Zoltan Kekecs",
"expertise": [
"Reviewer Expertise Hypnosis",
"Psychophysiological mechanisms involved in mind-body interventions"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThe authors present an opinion article summarizing information from prior literature in defence of their claims that a) some first-person accounts cannot be reduced to their third-person neural and psychophysiological correlates and b) that these first-person accounts are the only information to reckon when it is necessary to analyse qualia contents. I believe this is an important topic to discuss, even if I am sceptical about whether the issue underlying the arguments, the reducibility of qualia to psychophysiological information, is one that can be decided just through scientific insight. Nevertheless, I feel that the manuscript needs a thorough revision before being finalized, making the claims themselves and the logic of the arguments supporting them clearer.\nThe authors make several claims in this opinion article. One of the main points is summarized well in the Abstract and Discussion: \"for many phenomena, first-person accounts are the only reliable source of information available and the knowledge of their neural and psychophysical correlates don’t offer any additional information about them”. 
Based on the information presented by the authors, I tend to agree with this statement if we start the sentence with “at this point in time” or “at our current level of scientific advancement”.\nIt is true that presently our neuroimaging and electrophysiological monitoring techniques used in humans are extremely crude and come nowhere close to providing the level of detail that can be gained from a first person account on most of the listed phenomena. However, the authors’ claim doesn’t seem to stop at the present time. They seem to argue that third person accounts will never provide reliable information about the listed phenomena. This is problematic because this part of their claim is not justified by any arguments. Instead, the authors seem to extrapolate from the fact that third person accounts are unreliable today to the claim that they will always be unreliable and redundant compared to first person accounts. I think that this logical jump is too much to ask of the reader. Thus, either the claim should be restricted in time, or further argumentation is necessary.\nAnother issue with the manuscript in its current form is that 1PAs and subjective experiences (qualia) are often confused. For example in this sentence: “Our statement that 1PAs are irreducible to 3PAs…”. I like that the authors take the time to define both first person accounts and third person accounts in the beginning of the paper. However, the claim that 1PAs are irreducible to 3PAs is refuted by the very definition that the authors provide. 1PA is defined as: “First-person accounts (1PAs) are written, verbal or intentional (conscious) behaviour, e.g. 
sign language, accounts related to what a person feels, perceives or thinks, in other words, every mental content the person is aware of and can communicate to others if requested or desired.” While 3PA is defined as: “third-person accounts (3PAs), are identical types of accounts plus their neuro and psychophysiological correlates, obtained by people who observe or measure other behaviour and mental contents and processes.“ An example of a 1PA by the authors is if a person says “I feel happy today”. This account can always be directly transformed to a 3PA like: “She feels happy today”. So in this sense a 1PA can be “reduced” or made directly equivalent to a 3PA.\nAt another point in the manuscript the authors use a longer version of this claim: “some 1PAs cannot be reduced to third-person neural and psychophysiological correlates accounts”. However, this cannot be true either with the current definition the authors have for 1PA, because all of the examples the authors bring for 1PA can be reduced to muscle movement (speech, writing, sign language), and it is well established that muscle movements are directly evoked by neurobiological phenomena. So it is logically possible to completely reduce the movements produced when a person utters “I have a throbbing pain in my temple” verbally or in sign language to its efferent neural source. In fact, we understand the processes that are at play here so well that we can create an artificial limb with which an arm amputee will become able to produce the same sign language sentence on her own again. 
So I would venture that reducing these reports themselves to their neurobiological correlates is not only a logical possibility, but is plausible within a few years of research.\nI guess what the authors really meant is that the qualia, the subjective feeling of happiness or pain, the feeling that the 1PA refers to, can never be reduced to simple 3PAs (and because of the above argument about the equivalence of 1PAs and 3PAs, it cannot be reduced to 1PAs either if we define 1PAs as the authors do right now). So either the definition of 1PAs needs to be changed to involve the subjective feeling and not just the report of that feeling, or the manuscript needs to be looked over carefully to identify sections where the authors meant qualia (first person experiences) instead of first person reports about qualia.\nI also feel that several statements and claims in the manuscript could be clarified. For example, the authors claim that “the knowledge of their neural and psychophysiological correlates has nothing to add to the knowledge of these phenomena”. This statement is very general in its current form and the preceding text does not justify it. Let’s take for example pain or mood disorders, example phenomena brought up by the authors. I believe we have gained extremely useful knowledge already about these phenomena by understanding the neural and biochemical mechanisms involved in them, which help us in their respective treatment. We are able to further improve our treatments by understanding the mechanisms even better. I am sure that the authors did not mean that we cannot learn anything useful about these phenomena by studying their neural correlates. They probably meant that we do not get any useful information on the exact quality of the subjective experiences involved in these phenomena by studying their neuronal correlates, or something similar. 
If so, the original sentence needs much clarification.\nThe quote from Coltheart (2013) is also misleading: ‘“testing theories of cognition” by using fMRI investigations requires “both sensitivity (a claim that brain region X will always be active when cognitive process C is being executed) and specificity (the claim that brain region X will not be active except when cognitive process C is being executed).”’ I don’t think any brain researcher today would think that a certain area of the brain would be responsible for a single thought or idea and nothing else. This is not even true for individual neurons. It is the networks and connections that are proposed to do the computations, and a brain area and even individual neurons are suspected to be part of multiple networks. So in this sense we cannot and do not expect this kind of specificity of brain areas anymore.\nIt is strange that the authors bring up a fact that falsifies one of their claims and then they never explain why this falsification is invalid. It is left hanging in the air: “Our statement that 1PAs are irreducible to 3PAs, could be falsified by the evidence suggesting that it is possible to change 1PAs by acting on their biological correlates. For example Saitoh et al. (2007) were successful in reducing pain due to spinal cord or peripheral lesions by applying high-frequency repetitive transcranial magnetic stimulation on the primary motor cortex.” Later they add: “As a result, the 1PA is no less relevant than 3PA, even in the context of the pragmatic approach of clinical medicine, despite having been understated by the ruling reductionist paradigm.” However, this is nowhere near as strong a claim as the original one. The original claim is that 1PAs are irreducible to 3PAs, while the later claim is that 1PAs are relevant as well, not just 3PAs. 
By leaving the falsification open like this the authors practically invalidate one of their main claims in this opinion article, so it is strange why they make this claim in the first place, if they think it is in fact false, or incomplete in its original form. I suggest either refuting the falsification, or elaborating their claim in its original form (at every instance of its appearance in the manuscript, not just after the falsification is mentioned in the end), so it is no longer falsified by the fact that it is possible to change 1PAs by acting on their biological correlates. Relatedly, I found it unclear how the fact that hypnosis brings about an increase in pain threshold relates to the same section. For me, this sentence in its context without any further explanation implied that the authors think that contrary to transcranial magnetic stimulation, hypnosis would affect 1PAs directly, without involving neural correlates of pain. This is not true, because we see from several neuroimaging studies that the brain behaves differently when noxious stimuli are applied with and without hypnosis. It is also very probable that hypnosis relies on at least some neural mechanisms to enact its effects on pain, if nothing else, by relying on the sensory neurons which allow the hypnosis participant to perceive the words of the hypnotherapist. If this sentence is important in the manuscript, the authors should make it clear how exactly it is relevant to this discussion. Otherwise I suggest deleting it because it invites misinterpretation.\n\nIt is also hard to see how the following sentence helps any of the arguments of the authors: “In fact in the Saitoh et al. (2007) example, the modification of primary cortex activity do not contain any useful information about the participants’ change in pain perception.” Transcranial magnetic stimulation was an experimental manipulation in this example. 
I am not sure why it should contain any information on the change in pain perception. This is not a measurement, but a manipulation targeting the suspected mechanism underlying pain, which in the end was successfully able to modify the subjective pain experience, or at least the 1PA thereof. Similarly, a hammer blow does not have to contain information on the subjective quality of pain to cause pain. If the authors meant this sentence to refute the falsification, they need to make it clearer.\nCorrecting the following minor issues should also serve to improve the manuscript:\n“opioids are only wake hallucinogens” – weak instead of wake\n“according to classical, Newtonian physics” – according instead of acording\n“content that can only measured by using 1PAs” – content that can only be measured by using 1PAs\nIrreducible is spelled incorrectly as irriducible several times in the document\n“this is a very relevant fact allowing for Enhanced Recovery After Surgery without costs” – why is Enhanced Recovery After Surgery capitalized? Furthermore, “without costs” indicates that this is a completely free intervention; however, most surgeries do not have a trained professional who can use hypnosis in a clinical setting, so in most cases this would require the presence of a new professional, who needs to be paid. And even if the medical staff gets the proper training, the training itself is not without costs, etc. So I suggest deleting “without costs” from this sentence.\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Yes\n\nAre arguments sufficiently supported by evidence from the published literature? No\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Partly",
"responses": [
{
"c_id": "2684",
"date": "04 May 2017",
"name": "Patrizio Tressoldi",
"role": "Author Response",
"response": "Thank you for your accurate and constructive review. In the following, we will try to reply to all your main comments. \"for many phenomena, first-person accounts are the only reliable source of information available and the knowledge of their neural and psychophysical correlates don’t offer any additional information about them”. Based on the information presented by the authors, I tend to agree with this statement if we start the sentence with “at this point in time” or “at our current level of scientific advancement”. Reply: we added the sentence “at the current level of scientific advancement” both in the abstract and in the discussion. Another issue with the manuscript in its current form is that 1PAs and subjective experiences (qualia), are often confused…..An example for a 1PA by the authors is if a person says “I feel happy today”. This accounts can always be directly transformed to a 3PA like: “She feels happy today”. So in this sense a 1PA can be “reduced” or made directly equivalent to a 3PA. Reply: even if in the paper we mainly referred to 3P neuro- and psychophysiological correlates, even the 3PA “She feels happy today” cannot convey any reliable information about how happy the person really feels. This 3PA remains an independent source of information and cannot guarantee what the person really feels. It is like stating “Peter felt a strong pain after hammering his thumb”, when only Peter can describe the degree and qualities of his pain. …So either the definition of 1PAs needs to be changed to involve the subjective feeling and not just the report of that feeling, Reply: in the Introduction we refined our definition of 1PAs, adding the term “qualia” with a reference. I am sure that the authors did not mean that we cannot learn anything useful about these phenomena by studying their neural correlates. 
They probably meant that we do not get any useful information on the exact quality of the subjective experiences involved in these phenomena by studying their neuronal correlates, or something similar. Reply: This is precisely our core message. In fact, in the discussion we wrote “The main aim of our paper is not that of supporting the view that the study of the biological correlates of many 1PAs is irrelevant and a waste of resources, but that the information we can gather from 1PAs are irreducible to 3PAs and these ones cannot increase the information we got from 1PAs even when is it possible to infer a direct causal relationship between 3PAs and 1PAs.” The quote from Coltheart (2013) is also misleading:…. ’ I don’t think any brain researcher today would think that a certain area of the brain would be responsible for a single thought or idea and nothing else. Reply: Coltheart et al.’s warnings are valid for any correlation between cognitive functions and their anatomical correlates, regardless of whether these are single brain areas or networks. Unfortunately, these warnings are still largely ignored; see, for example, Tressoldi, P. E., Sella, F., Coltheart, M., & Umilta, C. (2012). Using functional neuroimaging to test theories of cognition: A selective survey of studies from 2007 to 2011 as a contribution to the Decade of the Mind Initiative. Cortex, 48(9), 1247-1250. It is strange that the authors bring up a fact that falsifies one of their claims and then they never explain why this falsification is invalid ……… I suggest either refuting the falsification, or elaborating their claim in its original form (at every instance of its appearance in the manuscript, not just after the falsification is mentioned in the end), so it is no longer falsified by the fact that it is possible to change 1PAs by acting on their biological correlates. Reply: we agree that the suggestion on how to falsify our main claim and the Saitoh example was badly presented. 
We have now revised that paragraph as follows: “Our statement that 1PAs are irreducible to 3PAs, could be falsified by the evidence that it is possible to determine precisely the changes and qualities of 1PAs only by observing the effects of the interventions on their biological correlates. For example, Saitoh et al. (2007) were successful in reducing pain due to spinal cord or peripheral lesions by applying high-frequency repetitive transcranial magnetic stimulation on the primary motor cortex. However, the modification of primary cortex activity didn’t give any useful information about the participants’ change in pain perception. In fact, this information was obtained by asking the participants to rate their pain with a visual analogue scale similar to that presented in Figure 2 and the Short-Form of the McGill Pain Questionnaire.” … the authors think that contrary to transcranial magnetic stimulation, hypnosis would affect 1PAs directly, without involving neural correlates of pain. This is not true,…. Reply: the example of hypnosis as a means to influence pain perception has now been integrated with the Saitoh example: “Pain reduction can also be obtained by acting on mental beliefs and contents….."
}
]
}
] | 1
|
https://f1000research.com/articles/6-99
|
https://f1000research.com/articles/6-254/v1
|
13 Mar 17
|
{
"type": "Research Note",
"title": "Protein-bound polyphenols create “ghost” band artifacts during chemiluminescence-based antigen detection",
"authors": [
"Nathalie Plundrich",
"Mary Ann Lila",
"Edward Foegeding",
"Scott Laster",
"Nathalie Plundrich",
"Edward Foegeding",
"Scott Laster"
],
"abstract": "Antigen detection during Western blotting commonly utilizes a horseradish peroxidase-coupled secondary antibody and enhanced chemiluminescent substrate. We utilized this technique to examine the impact of green tea-derived polyphenols on the binding of egg white protein-specific IgE antibodies from allergic human plasma to their cognate antigens. Our experiments unexpectedly showed that green tea-derived polyphenols, when stably complexed with egg white proteins, caused hyperactivation of horseradish peroxidase resulting in the appearance of white “ghost” bands. This study suggests that caution should be taken when evaluating polyphenol-bound proteins by enhanced chemiluminescence Western blotting using horseradish peroxidase and demonstrates that protein-bound polyphenols can be a source of “ghost” band artifacts on Western blots.",
"keywords": [
"western blot artifacts",
"egg white proteins",
"enhanced chemiluminescence",
"ghost band",
"green tea polyphenols",
"horseradish peroxidase",
"protein-polyphenol interactions"
],
"content": "Introduction\n\nWestern blotting has been used extensively to identify and quantify relative amounts of specific proteins in complex mixtures. Proteins are identified using antigen-specific primary antibodies followed by various enzyme-coupled secondary antibodies. Commonly used conjugated enzymes are alkaline phosphatase and horseradish peroxidase (HRP)1. HRP is more popular due to its stability and smaller size, which allows for conjugation of multiple HRP moieties per secondary antibody and increased sensitivity2. Avidin-biotin systems can also be used to amplify reactivity and luminol-based enzyme substrates are commonly used to create a visible chemiluminescent signal.\n\nWe recently described an approach to reduce the allergenicity of light roasted peanut flour through complexation of peanut proteins with plant polyphenolic compounds. Peanut proteins formed stable aggregate particles with polyphenols and those particles showed substantially reduced allergenicity based on complementary assays, including chemiluminescence-based Western blotting3. In the present study, this blotting technique was used to investigate the binding of IgE antibodies to hen egg white proteins complexed with green tea-derived polyphenols. The polyphenols were mixed with the protein, frozen, then freeze-dried, allowing stable protein-polyphenol aggregate particles to form. For detection on the blots, we used primary antibodies from allergic human plasma, secondary biotin-coupled goat anti-human IgE, avidin-HRP, and an enhanced luminol substrate.\n\n\nMethods\n\nPrecast mini TGX 4–20% polyacrylamide gels were purchased from BioRad (Hercules, CA, USA). Nitroblue tetrazolium and glycine were purchased from Sigma-Aldrich (St. Louis, MO, USA). All other SDS-PAGE and immunoblotting reagents used are listed elsewhere3. Egg white protein (EWP) was purchased from Sigma-Aldrich (St. Louis, MO, USA). 
Commercially available organic dry green tea leaves (Camellia sinensis [L.] Kuntze) were provided by QTrade Teas & Herbs (Cerritos, CA, USA). Ground leaves were extracted and stored until further use as previously described1. Extraction was performed for 2 h at 80 °C.\n\nThe total phenolic content in the green tea extract was determined (36.8 mg mL-1 ± 0.26 mg mL-1, see Table S1) according to the 96-well microplate-adapted Folin-Ciocalteu method by Zhang et al.4 with modifications described by Herald et al.5. The amounts of extract (mL) and protein powder (g) required to generate dry, stable protein-polyphenol aggregate particles containing 5, 10, 15, 30, or 40% polyphenols after complexation were combined and mixed under constant agitation for 15 min at room temperature. Mixtures were subsequently frozen at -20 °C and freeze-dried (FreeZone12, Labconco, Kansas City, MO, USA) to form stable protein-polyphenol aggregate particles.\n\nFollowing transfer of proteins by electroblotting from unmodified EWP and aggregate particles to a polyvinylidene difluoride (PVDF) membrane, the membrane was briefly hydrated in 100% methanol and polyphenol-modified proteins were detected with NBT and glycinate as described by Hagerman [6; www.users.muohio.edu/hagermae/]. At alkaline pH, the catechol moiety of polyphenols catalyzes redox-cycling in the presence of glycinate, generating superoxide that reduces NBT to insoluble, visible formazan7.\n\nAmounts of protein-polyphenol aggregate particles or unmodified EWP were normalized to provide 2 mg protein for SDS-PAGE. Samples were prepared in sample loading buffer containing 5% β-mercaptoethanol, resulting in 10 µg protein in 10 µL. Samples (10 µg protein/10 µL) were incubated for 5 min at 95 °C, loaded onto a gel, run (40 min at 200 V), and then stained with Coomassie Brilliant Blue (CBB). The immunoblotting method used, including reagent sources, is described elsewhere3. 
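The formulation and gel-loading arithmetic above can be sketched as follows. This is a minimal illustration only: the paper does not give its exact calculation, so the formulas, function names, and the assumption that all extract phenolics are retained in the dry particles are ours.\n\n```python
def particle_recipe(target_fraction, total_dry_mass_g, extract_conc_mg_per_ml=36.8):
    """Return (extract_mL, protein_g) for aggregate particles with the
    given polyphenol mass fraction after freeze-drying.

    Illustrative only: assumes all phenolics in the extract volume end
    up in the dry particles and the extract adds no other dry mass."""
    polyphenol_mg = target_fraction * total_dry_mass_g * 1000.0
    extract_ml = polyphenol_mg / extract_conc_mg_per_ml
    protein_g = (1.0 - target_fraction) * total_dry_mass_g
    return extract_ml, protein_g


def particle_mass_for_protein_mg(protein_mg, target_fraction):
    """Mass (mg) of particles needed to deliver a fixed protein load,
    since bound polyphenols also contribute to particle mass."""
    return protein_mg / (1.0 - target_fraction)


# 1 g of 15% particles: ~4.08 mL extract (at 36.8 mg/mL) + 0.85 g protein
ext_ml, prot_g = particle_recipe(0.15, 1.0)
# ~2.35 mg of 15% particles carry the 2 mg protein normalized per gel
particles_mg = particle_mass_for_protein_mg(2.0, 0.15)
```\n\nA real recipe would also account for residual moisture and for phenolics lost during mixing, which this sketch ignores.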
The following minor modifications were made: Pooled human plasma (containing polyclonal antibodies, among them egg white-specific IgE) from 7 egg white-allergic individuals (PlasmaLab International, Everett, WA, USA; 1:80; v/v) was used to bind antigens on the membrane. EWP-specific IgE levels ranged from 15.4 to 100 kU L−1 as determined via ImmunoCAP (Phadia, Uppsala, Sweden). Biotinylated polyclonal goat IgG anti-human IgE (Kirkegaard & Perry Laboratory, Inc., reference no. 01-10-04, Gaithersburg, MD, USA; 1:8,000; v/v) and NeutrAvidin HRP conjugate (Thermo Scientific, Rockford, IL, USA; 1:20,000; v/v) were used to bind plasma antibodies.\n\nIn separate experiments, proteins in aggregate particles containing 15% polyphenols were blotted onto a PVDF membrane. The membrane was subsequently cut into strips and subjected to various combinations of immunoblotting reagents. Transferred proteins from unmodified EWP served as a control and underwent the full immunoblotting procedure.\n\n\nResults and discussion\n\nThe major EWPs ovotransferrin (76.6 kDa), ovalbumin (45 kDa) and lysozyme (14.3 kDa)8 from both aggregate particles and unmodified EWP were separated by SDS-PAGE and identified by staining with CBB (Figure 1A). An increase in molecular weight of ovotransferrin and ovalbumin, but not of lysozyme, was observed and this was polyphenol concentration dependent (Figure 1A). In fact, NBT staining indicated that ovalbumin and ovotransferrin, but not lysozyme, were modified by polyphenols and the degree of staining was dependent on the concentration of polyphenol (Figure 1B). The staining also revealed several additional proteins not stained with CBB, suggesting that NBT staining of polyphenols reveals the presence of protein more sensitively than does CBB staining. As expected, control EWP did not react with NBT (Figure 1B). 
The finding that polyphenols remain bound to proteins following SDS-PAGE and membrane transfer suggests a strong, perhaps covalent association between the molecules.\n\n(A) SDS-PAGE of unmodified egg white protein (CTL) or egg white protein-polyphenol aggregate particles containing 5, 10, 15, 30, and 40% polyphenols and stained with CBB; (B) Staining of green tea polyphenol-bound egg white proteins by NBT, following SDS-PAGE and subsequent electrophoretic transfer to a PVDF membrane; (C) corresponding Western blot. Pooled human plasma from 7 egg white-allergic individuals was used to bind antigens on the membrane. Egg white-specific IgE levels ranged from 15.4 to 100 kU L−1 as determined via ImmunoCAP (Phadia, Uppsala, Sweden). Biotinylated goat IgG anti-human IgE was used as the secondary antibody and NeutrAvidin HRP conjugate and substrate were used for signal production. M: molecular weight marker (kDa); CTL: control (unmodified egg white protein). Approximate locations for egg white allergens are indicated. Gray scale was used for gels and membranes and contrast was optimized to improve visualization.\n\nAs shown in Figure 1C, ovotransferrin, ovalbumin and lysozyme in unmodified EWP were recognized by antigen-specific IgE antibodies from human plasma. However, for protein samples that contained polyphenols, ovotransferrin and ovalbumin as well as several of the proteins revealed by NBT but not CBB staining, appeared as white “ghost” bands (Figure 1C). Generally, “ghost” bands occur when the substrate is depleted so quickly by the enzyme at that location that light production ceases. Commonly, this is a result of a high concentration of one or more of the components of the enzymatic reaction. 
However, in this case, the phenomenon was not observed for the EWP control sample (which did not contain polyphenols) and increased with increasing amounts of polyphenols, suggesting that the polyphenols trigger the excessive consumption of substrate and the appearance of the “ghost” bands. The phenomenon was also observed with other aggregate particles, including whey protein isolate-green tea polyphenol and whey protein isolate-blueberry polyphenol aggregate particles (see Figure S1), indicating that “ghosting” was not dependent on specific EWPs.\n\nTo further investigate the mechanism underlying “ghost” band formation on those blots, PVDF membrane-transferred unmodified and polyphenol-modified EWPs underwent treatment with a combination of different immunoblotting reagents. Results revealed that polyphenols promoted “ghost” band formation by interacting with HRP during HRP-substrate reactions (Figure 2). “Ghost” bands were only observed on membrane strips containing green tea polyphenols and HRP (Figure 2B, D, and G) and only HRP was required to produce “ghost” bands with polyphenol-modified EWPs (Figure 2G). No “ghost” bands were observed when substrate alone was added to a membrane containing polyphenol-bound proteins (Figure 2E). It should be noted that the light background in Figure 2C, E, and F is caused by a different mechanism than the white “ghost” bands seen in B, D, and G. Since HRP is required for signal production, antibody-bound proteins on membranes not exposed to HRP (Figure 2C, E, and F) were not detected; hence, the membrane appeared blank when imaged (grey spotting is an imaging artifact). In contrast, on membranes that were treated with HRP and contained polyphenols (Figure 2B, D, and G), polyphenol-bound proteins appeared as white “ghost” bands due to depletion of locally available substrate and subsequent cessation of local light production. Interestingly, the lysozyme band was unaffected and apparently represents another artifact. 
This band did not require the presence of the primary antibody (Figure 2D), indicating it occurs due to a non-specific reaction between the secondary HRP-conjugated antibody and the substrate. Further, the intensity of this band increased in the presence of polyphenols (Figure 2A, B and D), which seems contradictory since the NBT stain did not indicate polyphenols bound to lysozyme (Figure 1B). It is possible that in the presence of polyphenols, specific binding of primary and therefore secondary antibodies to proteins may be reduced, leaving excess free secondary antibody available to bind lysozyme (which did not contain bound polyphenols).\n\nWestern blot strips of (A) unmodified egg white proteins and (B–G) egg white protein-green tea polyphenol aggregate particles containing 15% total polyphenol content, after various immunoblotting treatments. (B) received all immunoblotting reagents after membrane blocking - primary antibody (pooled human plasma from 7 egg white allergic individuals with egg white-specific IgE levels ranging from 15.4 to 100 kU L−1), biotinylated goat IgG anti-human IgE secondary antibody, NeutrAvidin HRP conjugate, and substrate; (C) the secondary antibody and NeutrAvidin HRP conjugate were omitted; (D) the primary antibody was omitted and (E) the primary and secondary antibody and NeutrAvidin HRP conjugate were omitted; (F) the primary antibody and NeutrAvidin HRP conjugate were omitted and (G) the primary antibody and secondary antibody were omitted. A molecular weight marker (kDa) is shown on the far left. Approximate locations for egg white allergens are indicated. Gray scale was used and contrast was optimized to improve visualization.\n\nBased on this experiment, the exact mechanisms of HRP promotion by polyphenols cannot be determined. 
It is possible, based on the fact that polyphenols are able to act as “bridges” between proteins9, that HRP non-specifically binds to protein-bound polyphenols at high concentrations, thereby rapidly depleting substrate (luminol) in close proximity to the enzyme. Further, it is possible that protein-bound polyphenols are able to promote HRP activity, as has similarly been observed with digestive enzymes10. In both cases, this could result in the cessation of light emission through depletion of locally available luminol.\n\nIt is important to note that the observations made in this study applied to a specific set of protein samples, secondary antibody, enzyme and chemiluminescence substrate. Other types of conjugated or unconjugated secondary antibodies, enzymes (e.g. alkaline phosphatase), or substrates have not been evaluated. However, while proper Western blot experimental designs include appropriate controls such as evaluation of unmodified proteins or antibody-antigen specificity, no control for protein-bound polyphenols as shown above has been described to date. The present study highlights the importance of evaluating polyphenol effects on chemiluminescence-based antigen detection in order to prevent false interpretation of data and reveals a new source of “ghost” band artifacts.\n\n\nConclusion\n\nWe demonstrated that when attempting to evaluate the IgE binding capacity of EWP-green tea polyphenol aggregate particles by enhanced chemiluminescence-based Western blotting, polyphenols which remained bound to egg white proteins after electrophoretic transfer to a PVDF membrane hyperactivated HRP, resulting in “ghost” bands. This study reveals protein-bound ligands as an unintended source of “ghost” band artifacts, and suggests that caution should be taken when evaluating polyphenol-bound proteins by enhanced chemiluminescence Western blotting.\n\n\nData availability\n\nDataset 1: Raw data for Figure 1. 
Protein distribution visualized by Coomassie Brilliant Blue staining (CBB), nitroblue tetrazolium (NBT) staining, and IgE binding capacity. (Full legend and table are in the file).\n\nDOI: 10.5256/f1000research.10622.d15236611\n\nDataset 2: Raw data for Figure 2. Evaluation of horseradish peroxidase hyperactivation by polyphenols. (Full legend and table are in the file).\n\nDOI: 10.5256/f1000research.10622.d15236712\n\nDataset 3: Raw data for Supplementary Figure S1. Protein distribution, nitroblue tetrazolium (NBT) staining, and IgE binding capacity.\n\n(Full legend and table are in the file).\n\nDOI: 10.5256/f1000research.10622.d15236813",
"appendix": "Author contributions\n\n\n\nNJP carried out the research, contributed to experimental design and wrote a first draft of the paper. MAL served as corresponding author and contributed to the preparation of the manuscript. EAF contributed to the design of experiments and provided expertise in protein chemistry. SML helped design experiments, shared expertise in immunology and was involved in manuscript preparation. All authors were involved in manuscript revision and have agreed to the final content.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe authors declared that no grants were involved in supporting this work. The authors acknowledge the generous support for this project provided through the College of Agriculture and Life Sciences at NC State University.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgments\n\nWe want to thank QTrade Teas & Herbs (Cerritos, CA, USA) for providing the green tea leaves.\n\n\nSupplementary material\n\nFigure S1: Protein distribution, nitroblue tetrazolium (NBT) staining, and IgE binding capacity. (Full legend and table are in the file).\n\nTable S1: Replicate measurements of green tea extract for total phenolic content. SD: standard deviation.\n\n\nReferences\n\nKurien BT, Scofield RH: Western blotting. Methods. 2006; 38(4): 283–293.\n\nAlegria-Schaffer A, Lodge A, Vattem K: Performing and optimizing Western blots with an emphasis on chemiluminescent detection. Methods Enzymol. 2009; 463: 573–599.\n\nPlundrich NJ, Kulis M, White BL, et al.: Novel strategy to create hypoallergenic peanut protein-polyphenol edible matrices for oral immunotherapy. J Agric Food Chem. 2014; 62(29): 7010–7021. 
\n\nZhang Q, Zhang J, Silva A, et al.: A simple 96-well microplate method for estimation of total polyphenol content in seaweeds. J Appl Phycol. 2006; 18: 445–450.\n\nHerald TJ, Gadgil P, Tilley M: High-throughput micro plate assays for screening flavonoid content and DPPH-scavenging activity in sorghum bran and flour. J Sci Food Agric. 2012; 92(11): 2326–2331.\n\nHagerman AE: Tannin Handbook. Miami University, Oxford 45056. 2002.\n\nLi CM, Zhang Y, Yang J, et al.: The interaction of a polymeric persimmon proanthocyanidin fraction with Chinese cobra PLA2 and BSA. Toxicon. 2013; 67: 71–79.\n\nStevens L: Egg white proteins. Comp Biochem Physiol. 1991; 100B: 1–9.\n\nSiebert KJ, Troukhanova NV, Lynn PY: Nature of protein-polyphenol interactions. J Agric Food Chem. 1996; 44(1): 80–85.\n\nTagliazucchi D, Verzelloni E, Conte A: Effect of some phenolic compounds and beverages on pepsin activity during simulated gastric digestion. J Agric Food Chem. 2005; 53(22): 8706–8713.\n\nLila MA, Plundrich N, Foegeding E, et al.: Dataset 1 in: Protein-bound polyphenols create “ghost” band artifacts during chemiluminescence-based antigen detection. F1000Research. 2017.\n\nLila MA, Plundrich N, Foegeding E, et al.: Dataset 2 in: Protein-bound polyphenols create “ghost” band artifacts during chemiluminescence-based antigen detection. F1000Research. 2017.\n\nLila MA, Plundrich N, Foegeding E, et al.: Dataset 3 in: Protein-bound polyphenols create “ghost” band artifacts during chemiluminescence-based antigen detection. F1000Research. 2017."
}
|
[
{
"id": "20889",
"date": "23 Mar 2017",
"name": "Christopher P Mattison",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThe findings of this manuscript should be published because the implications of the HRP findings on current and past research could be widespread. The authors should be congratulated for taking the time and effort to examine the artifacts they observed rather than just ignoring them and moving on. In its current form, however, there are some important points that need to be addressed in the manuscript to ease reader comprehension, and to either focus the research on one topic or more clearly describe the findings and implications of the two topics in the manuscript.\n\nWhat is the focus of the manuscript, the study of the effect of polyphenols on egg, or the artifact(s) resulting from the use of HRP?\n\nThe title suggests the manuscript is focused on pointing out a potentially very serious and misleading artifact of using HRP for western blot signal generation, but the content of the text is mixed between the findings of the egg/tea polyphenol study and the HRP artifact. I would argue that the HRP artifact is the primary purpose of the paper (as suggested in the title) and more in-depth discussion of the findings/implications is needed.\n\nPlease consider re-writing the second paragraph of the introduction to sharpen the focus of the manuscript to coincide with the title…rather than the focus of green tea polyphenols on egg allergens. 
In the introduction some discussion and referencing of ‘ghost bands’ from past publications would be useful, and possibly a discussion of the topic of reciprocity failure (if relevant here) in signal generation? Would the ghost bands be expected to obscure ‘real’ bands nearby or migrating at the same pace? Did the authors notice these artifacts in their own past publications on similar topics (Plundrich et al. 2014)1? If so, this should be discussed and any discrepancies in their findings or conclusions that can be attributed to the HRP artifacts should be noted. Findings using peanut allergens and tea (or other sources of) polyphenols that lead to the same artifacts are important to point out. Can the authors find related published articles/examples of other groups that may have suffered from the same artifact and been misled by it, to put their findings in the context of others using the same reagents? Could the authors please star/mark the bands that are considered “several additional proteins” that were detected with NBT staining but not CBB on Fig 1B? Could the additional bands noted on Fig 1B represent oligomers/aggregates of the ovalbumin and ovotransferrin, and are these same bands present on the immunoblot? Could you test directly the proposed interaction between HRP and green tea polyphenols observed by the blot in Fig 1C and Fig 2G? Concerning the dark lysozyme band, this is a very important finding, but what evidence is there that this band is actually lysozyme? Are the authors aware of other examples of these non-specific artifacts with biotinylated 2ndary antibody-neutravidin-HRP complexes? Consider changing the wording of the section “due to a non-specific reaction between the secondary HRP-conjugated antibody”, referring to the band in Fig 2D that I believe requires the secondary biotinylated antibody and the neutravidin-HRP conjugate. 
In the conclusion the words “hyperactivated HRP” are misleading because there is no evidence of increased specific activity for the HRP, so consider rewriting this sentence.",
"responses": [
{
"c_id": "2700",
"date": "26 May 2017",
"name": "Mary Ann Lila",
"role": "Author Response",
"response": "Comments to the Author: “The findings of this manuscript should be published because the implications of the HRP findings on current and past research could be widespread. The authors should be congratulated for taking the time and effort to examine the artifacts they observed rather than just ignoring them and moving on. In its current form, however there are some important points that need to addressed in the manuscript to ease reader comprehension and focus the research on one topic or more clearly describe of the findings and implications of the 2 topics in the manuscript. “ *What is the focus of the manuscript, the study of the effect of polyphenols on egg, or the artifact(s) resulting from the use of HRP? The title of the manuscript suggests the manuscript is focused to point out a potentially very serious and mis-leading artifact of using HRP for western blot signal generation, but the content of the text is mixed between pointing of the findings of the egg/tea polyphenol study findings and the HRP artifact I would argue that the HRP artifact is the primary purpose of the paper (as suggested in the title) and more in-depth discussion of the findings/implications is needed *Please consider re-writing the second paragraph of the introduction to sharpen the focus of the manuscript to coincide with the title…rather than the focus of green tea polyphenols on egg allergens.Answer: Thank you. We re-wrote the second paragraph of the introduction in consideration of these points, to emphasize that the HRP artifact is the primary purpose of sharing these research results.*In the introduction some discussion and referencing of ‘ghost bands’ from past publications would be useful and possibly a discussion of the topic of reciprocity failure (if relevant here) in signal generation?Answer: Thanks. 
We have now included a sentence about previous studies. *Would the ghost bands be expected to obscure ‘real’ bands nearby or migrating at the same pace? Answer: Thank you. Based on our observations, no. However, the major proteins we investigated were well separated. We may not be able to exclude the possibility of “real” bands being obscured by an (especially strong) “ghost” band close by and/or migrating at the same pace. *Did the authors notice these artifacts in their own past publications on similar topics? Plundrich et al. 2014¹. If so, this should be discussed and any discrepancies in their findings or conclusions that can be attributed to the HRP artifacts should be noted. Answer: Thanks. Yes, this was observed in Figure 2 of the Plundrich et al. 2014 paper (soluble fraction; the top of the blot shows high molecular weight material that appeared as a “ghost” band/smear). It was also observed in the Plundrich et al. 2015 paper, Figure 3 B (peanut protein-cranberry polyphenol complex), above Ara h 2 in the digestive samples (the smeary lanes appeared somewhat as “ghost” bands). In both cases, however, this did not affect the findings made and conclusions drawn. In addition, the same treatments were re-tested using a new protocol (fluorescence Western blotting) and the data were consistent with those previously reported. We have now included a sentence about this in the discussion. *Findings using peanut allergens and tea (or other sources of) polyphenols that lead to the same artifacts are important to point out. Answer: Thank you. Please see the answer above. *Can the authors find a related published article/examples of other groups that may have suffered from the same artifact, which misled the authors of that research, so as to put their findings in the context of others using the same reagents? Answer: Thank you, this is a good question. At this time, we are not aware of any other studies that reported on similar artifacts such as those we found. 
After all, the detection method we used is one of many possible approaches. *Could the authors please star/mark the bands that are considered “several additional proteins” that were detected with NBT staining but not CBB on Fig 1B? Answer: Thanks. We have now indicated those additional proteins and slightly rephrased the respective sentence in the text. *Could the additional bands noted on Fig 1B represent oligomers/aggregates of the ovalbumin and ovotransferrin, and are these same bands present on the immunoblot? Answer: Thanks. The additional bands/smears observed are protein-polyphenol complexes/aggregates that have been revealed by the NBT stain. Coomassie Brilliant Blue also stains proteins that are complexed with polyphenols (see smears in Figure 1 A); however, the NBT stain more sensitively stains proteins that have been modified by polyphenols. Those protein complexes are also present on the immunoblot; however, most of them appeared as “ghost” bands. *Could you test directly the proposed interaction between HRP and green tea polyphenols observed by the blot in Fig 1C and Fig 2G? Answer: Thank you. This is a good question, and could be followed up on. We think green tea extract could be added directly to a PVDF membrane and experiments similar to those in this study could be performed to test the direct effects between green tea polyphenols and HRP. *Concerning the dark lysozyme band, this is a very important finding, but what evidence is there that this band is actually lysozyme? Are the authors aware of other examples of these non-specific artifacts with biotinylated secondary antibody-neutravidin-HRP complexes? Answer: Thanks. The tentative identification of lysozyme was based on the literature. Lysozyme was the only protein found in the 15 kDa range (MW ~14 kDa; Desert et al., J. Agric. Food Chem., 2001, 49: 4553–4561). We are not aware of other examples of this non-specific binding. 
However, it is possible that the biotin-moiety of the secondary antibody was able to bind to lysozyme, as has previously been observed by Green et al. (Nature, 1968, 217: 254-256), although this group described weak interactions. It is also possible that the secondary antibody concentration used was high and resulted in non-specific binding to lysozyme when other reagents were omitted.*Consider changing the wording of the section; “due to a non-specific reaction between the secondary HRP-conjugated antibody” referring to the band in Fig 2D that I believe requires the secondary biotinylated antibody and the neutravidin-HRP conjugate.Answer: Thanks. In fact, the “secondary HRP-conjugated antibody” refers to the “secondary biotinylated antibody that has been bound by neutravidin-HRP conjugate”. We have reworded the sentence to make it clear.*In the conclusion the words “hyperactivated HRP” are misleading because there is no evidence of increased specific activity for the HRP so consider rewriting this sentence.Answer: Thank you. We agree and have reworded this sentence."
}
]
},
{
"id": "20891",
"date": "20 Apr 2017",
"name": "R Hal Scofield",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis study assessed the binding capacity of IgE antibodies to egg white protein (EWP)-green tea polyphenol complex by enhanced chemiluminescence-based Western blotting method. The authors of this study found polyphenols that remained bound to egg white proteins following electrophoretic transfer to a PVDF membrane hyperactivated HRP, leading to the formation of “ghost” bands. Based on the results of this study the authors suggest caution when evaluating polyphenol-bound proteins by enhanced chemiluminescence Western blotting.\n\nWhile the article is of interest, this reviewer notes several concerns.\n\nThe authors should take into consideration the possibility that polyphenols bound to the protein prevents binding of primary antibody and thus could produce these artifacts. Will the effect go away if using lower amounts of protein in each well (e.g. 1, 2 or 5 µg/well; the study currently uses 10 µg/well)? The authors have studied various combinations of immunoblotting reagents, including exclusion of primary antibody or use of only HRP-avidin (using EWP with 15% polyphenol) to study the reason for the formation of these “ghost” bands. 
However, since there is the possibility of HRP-avidin interacting non-specifically with the antigen on the blot (in the absence of primary and secondary), the authors should try a system that does not involve the biotin-avidin system for increasing sensitivity of detection (just regular primary antibody, HRP secondary antibody and enhanced ECL detection). The authors should also try a non-chemiluminescence system to see if this problem could be reproduced (e.g. HRP with DAB detection).\n\nThere are a few other issues:\n\nWhy is there a noticeable shift in lysozyme migration shown in Figure 1C if it does not bind polyphenols? Also, there is decreased detection of lysozyme in lanes with 30 and 40% polyphenols with the NBT system. Were the gels of different composition? The protein migration pattern appears different in Figures 1A, 1B and 1C. The use of a different molecular weight marker in Figure 1C probably accentuates this observed effect. Actually 5 different molecular weight markers have been used in this work (10 to 250 kD; 6 to 98 kD; 20-220 kD; 20 to 100 kD and 20 to 50 kD)! In Figure S1A (and S1D), it is not clear how the proteins were stained. Were they stained with Coomassie? In experiments shown in Figure 1C, the authors show that the “ghost” band increases with increasing amounts of polyphenol bound to the proteins. However, Figure S1C shows that there is no “ghost” band in the lane with β-lactoglobulin bound to 40% polyphenol, which is contrary to the hypothesis put forward by the authors. 
Do the authors have a reference to cite in support of the statement “Generally, “ghost” bands occur when the substrate is depleted quickly by the enzyme at that location and ceases to produce light”?\n\nThe authors should consider re-writing the following sentences-\n\n“Following transfer of proteins by electroblotting from unmodified EWP and aggregate particles to a polyvinylidene difluoride (PVDF) membrane, the membrane was briefly hydrated in 100% methanol and polyphenol-modified proteins were detected with NBT and glycinate as described by Hagerman”\n\n“Transferred proteins from unmodified EWP served as a control and underwent full immunoblotting procedure”\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate? Not applicable\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": [
{
"c_id": "2701",
"date": "26 May 2017",
"name": "Mary Ann Lila",
"role": "Author Response",
"response": "Comments to the Author: “This study assessed the binding capacity of IgE antibodies to egg white protein (EWP)-green tea polyphenol complex by enhanced chemiluminescence-based Western blotting method. The authors of this study found polyphenols that remained bound to egg white proteins following electrophoretic transfer to a PVDF membrane hyperactivated HRP, leading to the formation of “ghost” bands. Based on the results of this study the authors suggest caution when evaluating polyphenol-bound proteins by enhanced chemiluminescence Western blotting. While the article is of interest, this reviewer notes several concerns.” *The authors should take into consideration the possibility that polyphenols bound to the protein prevents binding of primary antibody and thus could produce these artifacts. Will the effect go away if using lower amounts of protein in each well (e.g. 1, 2 or 5 µg/well; the study currently uses 10 µg/well)? Answer: Thank you. We have tested 5 µg/well for egg white protein-polyphenol complexes as well as for whey protein isolate-polyphenol complexes and observed “ghost” bands. We did not test even lower amounts of protein in this study. Our experiments have shown that “ghost” band formation appeared to be independent of primary antibody binding but dependent on the presence of HRP. However, it is possible that, in Figure 2B, the primary antibody was not able to bind but HRP, which was added as well, ultimately caused observed ”ghost” bands. Figure 1B did not allow us to determine if the primary antibody bound to proteins that appeared as “ghost” bands or not. *The authors have studied various combinations of immunoblotting reagents, including exclusion of primary antibody or use of only HRP-avidin (using EWP with 15% polyphenol) to study the reason for the formation of these “ghost” bands. 
However, since there is the possibility of HRP-avidin interacting non-specifically with the antigen on the blot (in the absence of primary and secondary), the authors should try a system that does not involve the biotin-avidin system for increasing sensitivity of detection (just regular primary antibody, HRP secondary antibody and enhanced ECL detection). The authors should also try a non-chemiluminescence system to see if this problem could be reproduced (e.g. HRP with DAB detection). Answer: Thank you. We agree with the referee and in fact we have now moved on to fluorescence-based detection in our recent studies. Experiments have shown no artifacts using this system. We wanted to move away from chemiluminescence-based detection systems altogether. There are a few other issues: *Why is there a noticeable shift in lysozyme migration shown in Figure 1C if it does not bind polyphenols? Also, there is decreased detection of lysozyme in lanes with 30 and 40% polyphenols with the NBT system. Answer: Thank you for your comment. Figure 1B (NBT stain) shows that lysozyme was not detected at all. Or more specifically, the NBT stain revealed that lysozyme was not bound by polyphenols. The upward shift seen in Figure 1C is likely an artifact that arose either from running the gel or during the protein transfer onto the PVDF membrane (the gel could have been shifted/skewed a bit during “sandwich” preparation in the iBlot electroblotting system). It can be seen that all lanes in Figure 1C appear to be skewed. Figure 1A (SDS-PAGE) shows an even run of lysozyme. *Were the gels of different composition? The protein migration pattern appears different in Figures 1A, 1B and 1C. The use of a different molecular weight marker in Figure 1C probably accentuates this observed effect. Actually 5 different molecular weight markers have been used in this work (10 to 250 kD; 6 to 98 kD; 20-220 kD; 20 to 100 kD and 20 to 50 kD)! Answer: Thanks. 
No, the same gels were used to create Figure 1 A, B and C (BioRad TGX mini protean precast gels 4-20%). Yes, the 10 to 250 kDa marker (BioRad Precision Plus) was used for gels, the 6 to 98 kDa marker (Invitrogen SeeBlue2) was used for NBT blots since this was the available marker at the time of data collection, and a 20-220 kDa marker (Invitrogen Magic Mark XP, an IgG labeled marker) was used for Western blots. We did not use a 20 to 100 kDa nor a 20 to 50 kDa marker. Only visible marker bands are shown alongside the Western blots, hence the possible confusion. *Figure S1A (and S1D), it is not clear how the proteins were stained? Were the proteins stained with Coomassie? Answer: Thanks. Yes, they were also stained with Coomassie Brilliant Blue and we have now added this information to the respective figure legend. *In experiments shown in Figure 1C, the authors show that the “ghost” band increases with increasing amounts of polyphenol bound to the proteins. However, Figure S1C shows that there is no “ghost” band in the lane with β-lactoglobulin bound to 40% polyphenol, which is contrary to the hypothesis put forward by the authors. Answer: Thanks. The “ghost” band in Figure S1C is not very pronounced but can be seen for β-lactoglobulin.*Do the authors have a reference to cite in support of the statement “Generally, “ghost” bands occur when the substrate is depleted quickly by the enzyme at that location and ceases to produce light”? Answer: Thank you. Yes, we have now added a reference. *The authors should consider re-writing the following sentences- “Following transfer of proteins by electroblotting from unmodified EWP and aggregate particles to a polyvinylidene difluoride (PVDF) membrane, the membrane was briefly hydrated in 100% methanol and polyphenol-modified proteins were detected with NBT and glycinate as described by Hagerman” Answer: Thanks. We have broken this rather long sentence into two, for greater clarity. 
It now reads: “Following transfer of proteins by electroblotting from unmodified EWP and aggregate particles to a polyvinylidene difluoride (PVDF) membrane, the membrane was briefly hydrated in 100% methanol. Subsequently, polyphenol-modified proteins were detected with NBT and glycinate as described by Hagerman” “Transferred proteins from unmodified EWP served as a control and underwent full immunoblotting procedure” Answer: Thanks. We have reworded for additional clarity. It now reads: “Transferred proteins from unmodified EWP served as controls. The proteins from unmodified EWP were subjected to the full immunoblotting procedure”."
}
]
}
] | 1
|
https://f1000research.com/articles/6-254
|
https://f1000research.com/articles/6-286/v1
|
17 Mar 17
|
{
"type": "Systematic Review",
"title": "Effects of physical activity on the link between PGC-1a and FNDC5 in muscle, circulating Ιrisin and UCP1 of white adipocytes in humans: A systematic review",
"authors": [
"Petros C. Dinas",
"Ian M. Lahart",
"James A. Timmons",
"Per-Arne Svensson",
"Yiannis Koutedakis",
"Andreas D. Flouris",
"George S. Metsios",
"Ian M. Lahart",
"James A. Timmons",
"Per-Arne Svensson",
"Yiannis Koutedakis",
"Andreas D. Flouris",
"George S. Metsios"
],
"abstract": "Background: Exercise may activate a brown adipose-like phenotype in white adipose tissue. The aim of this systematic review was to identify the effects of physical activity on the link between peroxisome proliferator-activated receptor gamma coactivator 1-alpha (PGC-1a) and fibronectin type III domain-containing protein 5 (FNDC5) in muscle, circulating Irisin and uncoupling protein one (UCP1) of white adipocytes in humans. Methods: Two databases (PubMed 1966 to 08/2016 and EMBASE 1974 to 08/2016) were searched using an appropriate algorithm. We included articles that examined physical activity and/or exercise in humans that met the following criteria: a) PGC-1a in conjunction with FNDC5 measurements, and b) FNDC5 and/or circulating Irisin and/or UCP1 levels in white adipocytes. Results: We included 51 studies (12 randomised controlled trials) with 2474 participants. Out of the 51 studies, 16 examined PGC-1a and FNDC5 in response to exercise, and only four found increases in both PGC-1a and FNDC5 mRNA and one showed increased FNDC5 mRNA. In total, 22 out of 45 studies that examined circulating Irisin in response to exercise showed increased concentrations when ELISA techniques were used; two studies also revealed increased Irisin levels measured via mass spectrometry. Three studies showed a positive association of circulating Irisin with physical activity levels. One study found no exercise effects on UCP1 mRNA in white adipocytes. Conclusions: The effects of physical activity on the link between PGC-1a, FNDC5 mRNA in muscle and UCP1 in white human adipocytes has attracted little scientific attention. Current methods for Irisin identification lack precision and, therefore, the existing evidence does not allow for conclusions to be made regarding Irisin responses to physical activity. We found a contrast between standardised review methods and accuracy of the measurements used. This should be considered in future systematic reviews.",
"keywords": [
"Exercise",
"FNDC5",
"Irisin",
"UCP1"
],
"content": "Introduction\n\nBrown adipose-like phenotype in white adipose tissue (WAT) may play a role in reducing body weight, and consequently lessen obesity in mammals1. Recently, acute and chronic exercise has been found to induce a brown adipose-like phenotype in WAT2 through a number of sequential steps. Exercise is also known to increase the activation of the peroxisome proliferator-activated receptor gamma coactivator 1-alpha (PGC-1a) gene in human skeletal muscle3. PGC-1a is a co-transcriptional regulator facilitating multiple transcription factors to regulate a complex network of genes4 and it has been implicated in both the control of tissue mitochondrial content and the program that results in brown adipose tissue (BAT) formation5.\n\nWhile skeletal muscle properly adapts to exercise in the absence of PGC-1a6, activation of PGC-1a was proposed to increase the fibronectin type III domain-containing protein 5 (FNDC5)2. FNDC5, is a membrane protein expressed in brain and skeletal muscle7. It was proposed that FNDC5 was cleaved during exercise, and released into the bloodstream as Irisin – a peptide fragment of FNDC5 measured by western blotting2. In vitro, exposure of white adipocytes to Irisin– through an unknown receptor – subsequently led to an increase of the peroxisome proliferator-activated receptor alpha, which in turn increased uncoupling protein one (UCP1) mRNA2,8. 
The increase in white adipocyte UCP1 mRNA observed with Irisin treatment, presented as fold-change over control, is hard to interpret since white adipocytes in culture do not usually express UCP1 mRNA9.\n\nSince UCP1 is the only contributor to non-shivering thermogenesis that occurs in BAT10, and it appears that the presence of UCP1 in a white adipocyte is accompanied by “brown-adipocyte like” properties9,11,12, it was proposed that increased circulating Irisin in humans after a chronic exercise program may promote increased weight loss and improved metabolic control through induction of UCP12. This hypothesis seemed superficially plausible, as Irisin over-expression stimulated oxygen consumption and has been described to have an inverse association with blood glucose, insulin, total cholesterol and a positive association with adiponectin concentrations13. However, other studies have failed to observe such positive associations14–16, while the effect of exercise on “browning” of the white adipose phenotype remains unclear17–19.\n\nThe exact role of exercise in regulating circulating Irisin concentration remains to be established. Indeed, data indicate that while older adults appear to have a 30% increase in FNDC5 mRNA in muscle compared to younger adults, FNDC5 mRNA was unresponsive to six weeks of endurance training20, despite robust increases in mitochondria21. In general, results on the effects of exercise on circulating Irisin18,22–25 have been rather ambiguous; diverse methodology may explain the highly discrepant results26,27. Given that Irisin continues to be measured using a variety of methods, an evaluation of the available evidence for its relationship with human health is warranted, due to the potential that the browning of white adipocytes may have on human health. In addition, the proposed exercise mechanism that may cause a browning process of WAT in humans must be evaluated. 
Therefore, the aim of the current review was to systematically identify the effects of physical activity on the link between PGC-1a and FNDC5 in muscle, and circulating Irisin, as well as evidence for regulation of UCP1 in WAT (indicating a browning process) in humans.\n\n\nMethods\n\nUsing the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines28–30, two databases (PubMed and EMBASE) were searched up until 19th August 2016. Two investigators (PCD and IML) independently conducted two identical searches in both databases using appropriate search algorithms (PubMed: Supplementary File 1; EMBASE: Supplementary File 2). The lists of the included articles were reviewed to identify publications that were relevant to the topic under review.\n\nWe included studies that met at least one of the following eligibility criteria: a) measurements of PGC-1a (mRNA and/or protein concentrations) in conjunction with measurements of FNDC5; b) measurements of FNDC5, and/or Irisin concentrations and/or UCP1 in WAT, along with the following criteria: c) measurements of physical activity levels and/or exercise interventions, and d) human participant study. No other eligibility criteria were set (e.g., language, date of publication). From the included studies, we retrieved outcomes regarding the effects of physical activity on PGC-1a in conjunction with FNDC5 in muscle, FNDC5 in muscle, Irisin in the bloodstream and UCP1 in WAT. We report the studies’ design, the participants’ characteristics, the Irisin identification and other outcome methods and study outcomes. We have also recorded the secondary associations in the included studies, i.e. associations between FNDC5 and/or circulating Irisin and several health-related phenotypes [e.g. 
energy expenditure, blood pressure, waist to hip ratio, body mass index (BMI)].\n\nTwo independent reviewers (PCD and GSM) evaluated the risk of bias of the studies included in the current review via the “Cochrane Collaboration’s tool for assessing risk of bias”31. Conflicts in the risk of bias assessment were resolved by IL and ADF. We also evaluated independently (PCD and GSM) the quality of reporting in the included randomised controlled trials (RCTs), controlled trials (CTs) and single group design studies (SGS) using the Consolidated Standards of Reporting Trials (CONSORT) checklist32, which is a 25-item checklist and we provided a score for each study included. For CTs and SGS, we used a modified CONSORT checklist comprised of 18 items, given that these studies are not RCTs and therefore, seven out of the 25 items of the CONSORT checklist are not applicable for CTs and SGS (i.e. randomization, blinding). We also evaluated independently (PCD and GSM) the quality of the reporting data of the included cross sectional studies (CSS) using the 22-item checklist of the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) and we also provided a score for each study included33. Disagreements on studies’ CONSORT and STROBE scores were arbitrated by IL and ADF. JT and PS then reviewed the molecular and genomic content of the review independent of the search process.\n\n\nResults\n\nThe reporting of the available information in this systematic review is shown in a PRISMA checklist in Supplementary Table 1.\n\nThe initial searching date was the 14th September 2015 while weekly alerts were received from both databases up until the 19th August 2016. Overall, the searching procedure revealed 51 studies that involved 2474 participants and met the inclusion criteria, and were therefore included in this systematic review. The reference lists of these studies did not result in the identification of additional relevant articles. 
The searching outcome is presented in a PRISMA flow diagram in Supplementary Figure 1.\n\nThe characteristics and the results of the included studies can be found in Table 1. From the 51 eligible studies, 12 (23.5%) were RCTs, of which four were cross-over RCTs, eight (15.7%) were CTs, 23 (45%) were SGS, and eight (15.7%) were CSS. One of the included RCTs34 compared the effects of resistance exercise training alone versus resistance exercise training combined with Ursolic supplementation; because the effects of resistance exercise cannot be isolated for the latter group, we report only the results from the resistance exercise training group. Furthermore, one of the CTs35 will be included in the results of both CTs and CSS because this study consisted of a controlled trial nested within a CSS. Eight of the included studies examined overweight/obese adults and children18,36–42, while 11 studies included a clinical population, including patients with chronic obstructive pulmonary disease (COPD)24,35,43, heart failure44, metabolic syndrome45, haemodialysis46, osteoporosis47, anorexia nervosa37,48, pre-diabetes17 and diabetes type II49.\n\nC-RCT: cross-over randomized controlled trial; F: females; M: males; AE: Acute exercise; PGC-1α: peroxisome proliferator-activated receptor-γ coactivator 1α; FNDC5: Fibronectin type III domain-containing protein 5; PP: Phoenix Pharmaceuticals; NA: none available; CT: Controlled trial; CE: chronic exercise; UCP1: Uncoupling protein 1; WAT: White adipose tissue; CI: confidence interval; HOMA: homeostatic model assessment; CNS: Code not specified; SGS: Single group design studies; VO2max: Maximal oxygen uptake; CSS: Cross-sectional study; RCT: Randomized control trial; COPD: Chronic obstructive pulmonary disease; AB: Aviscera Bioscience; BMI: Body mass index; MetS: Metabolic Syndrome; LBM: Lean body mass; WHR: Waist to hip ratio; REE: Resting energy expenditure; VO2peak: peak oxygen uptake; ATP: 
Adenosine triphosphate; PA: Physical activity; HDL: High density lipoprotein; METs: Metabolic equivalent.\n\nThe estimated risk of bias assessment results can be found in Table 2, and a summary is displayed in Supplementary Figure 2. Five RCTs45,50–53, and all the included CTs and CSS, as well as 22 of the 23 SGS, displayed a high risk of bias due to inadequate generation of a randomised sequence, while four RCTs24,34,43,54 showed low risk of bias, and three RCTs42,55,56, as well as one SGS57, showed unclear risk of bias because there was no description of the method used for allocation (even though the participants were said to be “randomly” assigned). Six RCTs24,43,50,51,53,54 displayed low risk of bias for “allocation concealment”, while two45,55 showed unclear risk of bias because of the lack of description of the randomization allocation. Also, four RCTs34,42,52,56, and all the included CTs and SGS, as well as CSS, showed high risk of bias due to the lack of concealment of allocations before assignment. In “blinding of participants and personnel”, all RCTs, CTs, SGS and CSS displayed high risk of bias because the exercise interventions could not be blinded to the participants.\n\n+: Low risk of bias; -: High risk of bias; ?: Unclear risk of bias; RCT: Randomised controlled trials; CT: Controlled trials; SGS: Single group design studies; CSS: Cross sectional studies.\n\nIn “blinding of outcome assessment”, three RCTs displayed low risk of bias,24,54,55 while five RCTs43,45,50,51,53 and one CT17 showed unclear risk of bias because of the lack of information regarding the blinding of assessments. Also, four RCTs34,42,52,56, the remaining seven CTs, and all the included SGS and CSS showed high risk of bias due to the knowledge of the allocated interventions by the assessors. 
Seven RCTs24,34,42,43,52,54,55, one CT38, five SGS40,57–60 and one CSS61 displayed low risk of bias, while five RCTs45,50,51,53,56, the remaining seven CTs, the remaining 18 SGS and the remaining eight CSS showed unclear risk of bias for “incomplete outcome data” because of the lack of information on the participants who dropped out or exclusions in the analysis. All the included studies showed low risk of bias of “selective reporting” because they reported all the outcomes measured, and all the included studies displayed low risk of bias in “other bias”.\n\nThe results of our evaluation in the quality of the reporting data showed a mean score of 13.6 out of 25 (54.4%) for the included RCTs, 10.56 out of 18 (58.68%) for the included CTs and 10.52 out of 18 (58.44%) for the included SGS (Table 3). The CSS displayed a mean score of 13.37 out of 22 (60.8%) (Table 4). The score represents the number of items (with percentage of items) on the checklist that were reported satisfactorily in each study. Therefore, a high score represents a high adherence to reporting guidelines, while a low score represents low adherence to reporting guidelines.\n\nScore represents the number of items (with percentage of items) on the checklist that were reported satisfactorily in each study. Therefore, a high score represents a high adherence to reporting guidelines, while a low score represents low adherence to reporting guidelines.\n\nCONSORT: Consolidated Standards of Reporting Trials; RCT: Randomized controlled trial; CT: Controlled trial; SGS: Single group design study.\n\nScore represents the number of items (with percentage of items) on the checklist that were reported satisfactorily in each study. 
STROBE: Strengthening the Reporting of Observational Studies in Epidemiology; CSS: Cross-sectional study.\n\nThe link between PGC-1a and FNDC5 in muscle in response to physical activity/exercise\n\nAcute effects of exercise\n\nFive studies17,18,51,62,63 investigating the link between PGC-1a and FNDC5 in muscle in response to acute exercise showed an increase in PGC-1a mRNA in muscle; however, only two studies17,63 also found an increase in muscle FNDC5 mRNA, while one study44 detected a positive association of PGC-1a with FNDC5 in muscle. More specifically, one study found that an aerobic (2.1±0.8-fold over baseline, p=0.05) and a resistance (3.5±0.9-fold over baseline, p=0.01) training session increased PGC-1a splice variant1 but did not change FNDC5 mRNA in the muscle of healthy adults51. Similarly, a resistance training session increased PGC-1a splice variant1 four hours post exercise (200% over baseline and over control, p<0.05), but did not change FNDC5 mRNA in the muscle of healthy adults62. A 45-minute endurance exercise session increased Exon 11 of PGC-1a mRNA in muscle (7.4-fold over baseline, p<0.05), but did not change FNDC5 mRNA in muscle in either healthy or pre-diabetic adults, while a positive association between PGC-1a and FNDC5 mRNA was found at baseline (r=0.82, p<0.01) when the data of the two groups were combined17. Furthermore, PGC-1a mRNA in muscle increased (>6-fold over baseline, p<0.05) in response to acute exercise; however, FNDC5 mRNA in muscle was not altered in sedentary overweight and obese adults18. 
Also, a resistance exercise session increased Exon 11 of PGC-1a mRNA in the muscle of both young (4-fold over baseline, p<0.05) and older (2-fold over baseline, p<0.05) healthy adults, while it increased FNDC5 mRNA in muscle only in young (1.4-fold over baseline, 95% Confidence Interval=0.3–2.2, p<0.05) healthy adults63. Finally, PGC-1a mRNA in muscle was positively associated with FNDC5 mRNA in muscle (r=0.56, p<0.05) in a sub-set of 24 patients with heart failure44; this stratification was ad hoc.\n\nChronic effects of exercise\n\nOf the eleven eligible studies2,17–20,41,63–68 that examined the link between PGC-1a and FNDC5 in muscle in response to chronic exercise, only two17,67 showed that chronic exercise increased PGC-1a and FNDC5 mRNA in muscle, while four studies19,20,63,64 showed no effect of chronic exercise on PGC-1a and FNDC5 mRNA in muscle. Of the five studies that measured only FNDC5 in muscle, one study2 found an increase, and four18,41,65,66 showed no effect of chronic exercise on FNDC5 mRNA in muscle.\n\nA 12-week combined endurance and resistance exercise training program increased Exon 11 of PGC-1a mRNA in muscle (1.2-fold in healthy and 1.6-fold in pre-diabetic adults over baseline, p<0.05) and FNDC5 mRNA in muscle (1.4-fold in healthy and 2-fold in pre-diabetic adults over baseline, p<0.05)17. Furthermore, an 8-week sprint exercise program increased PGC-1a and FNDC5 mRNA in muscle (p<0.05) in healthy adults67. Finally, Bostrom et al. (2012) showed that in eight older participants selected from a larger group of 27 participants, chronic exercise increased FNDC5 mRNA in muscle (p<0.05)2.\n\nA 21-week combined endurance and resistance exercise program in healthy adults did not alter PGC-1a or FNDC5 mRNA in muscle63. One of the included studies20 found no effect of chronic exercise on PGC-1a or FNDC5 mRNA in younger adults, despite detecting significant changes in ~1,000 other mRNAs and finding that mitochondrial enzyme activity increased by ~25%69. 
Similarly, an 8-week resistance exercise program did not alter PGC-1a or FNDC5 mRNA in muscle of young healthy adults64. In addition, 12 weeks of resistance training did not alter PGC-1a splice variant1 mRNA, and did not change FNDC5 mRNA in muscle in untrained young females19. Also, a 12-week combined aerobic and resistance exercise program18 and an 8-week aerobic exercise program41 did not alter FNDC5 mRNA in muscle of sedentary obese adults, while chronic exercise had no effect on FNDC5 mRNA in muscle of healthy adults65. Finally, a 3-week sprint interval training program did not alter FNDC5 mRNA in muscle of healthy adults66.\n\nThe effects of physical activity/exercise on Irisin\n\nAcute effects of exercise\n\nStudies using enzyme-linked immunosorbent assays (ELISA)\n\nEighteen of the included studies13,18,22,23,35,36,39,45,50,51,53,57,59,63,67,70–72 examined the effects of acute exercise on circulating Irisin, and a further seven studies35,37,47–49,61,73 investigated the association of circulating Irisin with physical activity levels, all using commercial ELISA kits. Thirteen studies13,22,23,39,45,50,51,53,59,67,70–72 showed that acute exercise increased circulating Irisin in healthy individuals, while five studies18,35,36,57,63 showed no effect of acute exercise on circulating Irisin. Also, three studies35,49,61 showed a positive association of circulating Irisin with physical activity levels in healthy individuals and COPD patients, while four studies37,47,48,73 showed no association or a negative association of circulating Irisin with physical activity levels in both healthy and clinical populations.\n\nA resistance training session did not change FNDC5 mRNA in the muscle of healthy adults, although circulating Irisin increased (p<0.001) over the following 24 hours51, indicating no short-term association between FNDC5 and Irisin. 
Furthermore, an aerobic exercise session increased circulating Irisin (p=0.04), with Irisin concentrations measured at ~355–459 ng/ml51, far greater than recent mass spectrometry measurements (3.6–4.3 ng/ml)74. Similarly, a running exercise session in healthy individuals50 and an aerobic exercise session, as well as a resistance exercise session, in healthy individuals and in metabolic syndrome patients45 increased circulating Irisin (p<0.05). In the latter studies, Irisin concentrations were measured at ~99–175 ng/ml50 and ~80–94.6 ng/ml45, respectively, again greater than recent mass spectrometry measurements (3.6–4.3 ng/ml)74. Also, an acute resistance exercise session increased circulating Irisin (p<0.05), as opposed to aerobic and combined (aerobic and resistance) sessions, which did not alter circulating Irisin in healthy males (Irisin concentrations ~18–151 ng/ml)53. Furthermore, a 90-minute aerobic exercise session increased circulating Irisin during the exercise session (at the 54th minute; 20.4% compared to baseline, F(3,36)=5.28, p=0.004), but circulating Irisin decreased after the exercise session (p=0.021) in healthy male adults70. In the latter study, the aerobic exercise session also increased circulating Irisin during the exercise session (at the 54th minute; F(3,24)=5.03, p=0.01) in healthy female adults70. Eight of the 23 included SGS showed that acute exercise increased circulating Irisin in healthy populations13,22,23,39,59,67,71,72, while a resistance exercise session increased FNDC5 mRNA in muscle only in young healthy adults and did not alter circulating Irisin in either young or older healthy adults63. In addition, 45 minutes of running did not alter circulating Irisin in obese healthy adults36. Similarly, an acute cycling session did not alter circulating Irisin in COPD patients35, while an acute exercise session did not alter FNDC5 mRNA in muscle or circulating Irisin in sedentary overweight and obese adults18. 
Finally, an acute exercise session of either low or high intensity resistance training did not alter circulating Irisin (p>0.05) in sedentary young healthy females (Irisin concentrations ~69–87 ng/ml)57.\n\nPhysical activity levels were positively associated with circulating Irisin in healthy adults (r=0.20, p=0.03), but not in patients with type II diabetes49, and they were not associated with circulating Irisin in osteoporotic women47 or in anorexic women48. Furthermore, circulating Irisin concentrations were higher in physically active (128.55±78.71 ng/ml) than in sedentary individuals (105.66±60.2 ng/ml) (p=0.006)73. However, physical activity levels were negatively associated with circulating Irisin (r=−0.22, p=0.001) in groups of anorexic, obese and healthy women37, while they were positively associated with circulating Irisin in both COPD patients (r=0.83, p<0.01) and healthy individuals (r=0.79, p<0.001)35. Finally, circulating Irisin was positively correlated with physical activity levels in individuals with high weekly physical activity energy expenditure (2050–3840 kcal/week) (Irisin concentrations ~32–261 ng/ml, p=0.04).\n\nStudies using mass spectrometry and western blotting\n\nOnly one included study used both western blotting and mass spectrometry to detect circulating Irisin in response to acute exercise. This study showed that submaximal acute aerobic exercise increased circulating Irisin (3.1-fold over baseline, p<0.05), whereas maximal acute aerobic exercise did not alter circulating Irisin, although the change tended towards significance (p=0.07), in two healthy volunteer adults25.\n\nChronic effects of exercise\n\nStudies using ELISA\n\nTwenty-three included studies13,17,19,23,24,34,35,38,40,42,43,46,52,54–58,60,63,66,71,75 in the current review examined the effects of chronic exercise on circulating Irisin using commercial ELISA kits, although the populations examined showed large heterogeneity. 
Nine studies24,35,40,42,52,55,60,66,75 showed that chronic exercise increased circulating Irisin, while 12 studies13,17,19,23,34,38,43,46,54,58,63,71 showed no effect of chronic exercise on circulating Irisin, and two studies showed that chronic exercise decreased circulating Irisin56,57, in both healthy and clinical populations.\n\nA 6-month resistance training program increased circulating Irisin in healthy controls (p<0.01), but not in the exercisers55, while an 8-day vibration exercise program increased circulating Irisin in COPD patients (p=0.01)24. Notably, the Irisin concentrations in the latter study24 were ~785–1196 ng/ml, far greater than recent mass spectrometry-based detection of Irisin concentrations (3.6–4.3 ng/ml)74. Furthermore, a 12-week resistance exercise program increased circulating Irisin in elderly healthy females (Irisin concentrations ~61–83 ng/ml, p<0.05)52. In addition, 12 weeks of combined endurance and resistance exercise training in both healthy and pre-diabetic adults increased FNDC5 mRNA in muscle, while it decreased circulating Irisin (p<0.05) when the data of the healthy and pre-diabetic groups were combined17. In the latter study, Irisin concentrations were 160 ng/ml at baseline and 143 ng/ml after the exercise program, far greater than recent mass spectrometry-based detection of Irisin concentrations (3.6–4.3 ng/ml)74. In addition, an 8-week endurance training program increased circulating Irisin only in middle-aged and not in young healthy adults (Irisin concentrations ~140–168 ng/ml, p<0.05)75, while an 8-week chronic exercise program in COPD patients increased circulating Irisin (p<0.05)35. Finally, a 12-month physical activity intervention increased circulating Irisin by ~12% (p=0.001) in obese children40. 
Notably, in the latter study, Irisin concentrations were 111 ng/ml, far greater than recent mass spectrometry-based detection of Irisin concentrations (3.6–4.3 ng/ml)74.\n\nA 3-week sprint interval training program did not alter FNDC5 mRNA in muscle and showed a gender difference in circulating Irisin, which decreased in healthy males and increased in healthy females (p<0.05)66. An 8-week resistance exercise training program increased circulating Irisin compared to the control group (p<0.05), with Irisin concentrations of ~700–850 ng/ml42. Similarly, 3-month CrossFit training increased circulating Irisin (Irisin concentrations ~300–850 ng/ml, p<0.05) only in females60. On the other hand, a 4-week sprint exercise training program decreased circulating Irisin (Irisin concentrations ~200–340 ng/ml, p<0.05) in healthy males56. Three months of either non-individualized or individualized training did not alter circulating Irisin (Irisin concentrations ~123–131 ng/ml, p>0.05) in COPD patients43. Finally, an 8-week low intensity resistance training program did not alter circulating Irisin, while an 8-week high intensity resistance training program reduced circulating Irisin (Irisin concentrations ~51–87 ng/ml, p=0.03)57.\n\nAn 8-week resistance training program in healthy adults did not alter circulating Irisin34, and a 26-week aerobic exercise program revealed no changes in circulating Irisin in healthy adults54. A 21-week combined endurance and resistance exercise program in healthy adults did not alter FNDC5 mRNA in muscle or circulating Irisin63. Similarly, a 16-week resistance exercise program in elderly women did not increase circulating Irisin38, and 12 weeks of resistance training did not alter FNDC5 mRNA in muscle or circulating Irisin19. However, circulating Irisin was positively correlated with FNDC5 mRNA in muscle (r=0.65, 95% Confidence Interval=0.12–0.89, p<0.05) in the latter study19. 
Finally, five SGS showed that chronic exercise did not alter circulating Irisin in healthy individuals13,23,58,71 and haemodialysis patients46.\n\nStudies using mass spectrometry and western blotting\n\nOnly two included studies used methods other than commercial ELISA kits to detect human circulating Irisin in response to chronic exercise. Initially, Bostrom et al. (2012) showed via western blotting that, in eight older participants selected from a larger group of 27 participants68, chronic exercise increased FNDC5 mRNA in muscle (p<0.05) and circulating Irisin (2-fold over baseline, p<0.05)2. Finally, one study contrasted plasma Irisin concentrations in six younger individuals following 12 weeks of high-intensity aerobic exercise with those found in a separate group of four individuals (no pre-training samples were presented)74. This study used mass spectrometry and detected circulating Irisin at 3.6 ng/ml in controls and 4.3 ng/ml in exercisers, a significant difference between the two groups (p=0.04). No details regarding training or control of hydration in the training group were reported74.\n\nWe located only one study that examined the effects of exercise on UCP1 mRNA in subcutaneous WAT in humans. This study found that a 12-week intervention of combined endurance and resistance exercise in both healthy and pre-diabetic adults had no significant effect on UCP1 mRNA in subcutaneous WAT, even though UCP1 mRNA increased (1.82-fold over baseline, p<0.05) when data from both groups were combined17. Also, UCP1 mRNA was not associated with FNDC5 mRNA in muscle (r=0.28, p=0.18) or circulating Irisin (r=−0.11, p=0.60)17.\n\nThe secondary results of the included studies can be found in Table 1. In 118 muscle profiles, FNDC5 mRNA was modestly and positively correlated with BMI (r2=0.1, p=0.004), while FNDC5 mRNA was not related to fasting glucose or glycaemic control20. 
Furthermore, circulating Irisin was not associated with inflammatory indices40, blood glucose63,66, homeostatic model assessment (HOMA)63,66,72, insulin63,66,72, leptin72, lean body mass47,58, fat mass19,47,58, waist to hip ratio72, energy expenditure22,55, BMI72, or pulmonary function35.\n\nAdditional secondary results show that circulating Irisin was positively associated with BMI60,71,73,76, triglycerides71,73, fat mass37,60, HOMA73, insulin73, blood glucose72 and leptin76, and negatively with high density lipoprotein cholesterol71, all of which indicate unfavourable effects of Irisin on human health. Nevertheless, some secondary evidence suggests that circulating Irisin was positively associated with fat free mass37,71, muscle mass42 and energy expenditure37, and Irisin incubated with white adipocytes in vitro increased glucose and fatty acid uptake67. Furthermore, circulating Irisin after a maximal workload was significantly greater in individuals with higher VO2max than in individuals with lower VO2max22. However, circulating Irisin was not associated with VO2peak before or after exercise in healthy females58 and sedentary overweight and obese individuals, while it was inversely correlated with VO2peak (p<0.05) in healthy males61.\n\n\nDiscussion\n\nThe aim of the current review was to systematically identify the effects of physical activity on the link between PGC-1a and FNDC5 in muscle and on circulating Irisin, as well as evidence for regulation of UCP1 in WAT (indicating a browning process) in humans.\n\nWe identified 51 related studies (12 RCTs) with 2474 participants. Five studies showed an increase in PGC-1a mRNA in muscle in response to acute exercise; however, only two of them also found increases in FNDC5 mRNA in muscle in healthy adults. 
Regarding chronic exercise, only two out of 11 studies showed increased PGC-1a and FNDC5 mRNA in muscle, and one study found increased FNDC5 mRNA in muscle of healthy adults, while the remaining studies showed no effect of chronic exercise on the link between PGC-1a and FNDC5 mRNA in muscle in either healthy or clinical populations. Therefore, these results cannot confirm any link between PGC-1a and FNDC5 in muscle in response to either acute or chronic exercise.\n\nThe included studies that used commercial ELISA kits to examine the effects of both acute and chronic exercise on circulating Irisin show disparate results. One reason is the heterogeneity of the populations examined and the variation in the exercise protocols. In addition, the commercial ELISA kits used by these studies had either not been previously validated77 or were found to be invalid27, which indicates that measurements of circulating Irisin with these kits are not reliable. This is because of the polyclonal nature of the antibodies used, which may cross-react with other proteins27. One of the three included studies that used western blotting and/or mass spectrometry methods to detect circulating Irisin showed disparate results regarding the effects of acute exercise (submaximal exercise increased circulating Irisin, whereas maximal exercise did not) in healthy individuals. The other two included studies that used western blotting and/or mass spectrometry to detect circulating Irisin showed that chronic exercise increased circulating Irisin in healthy individuals. Finally, we included only one study that examined the effects of chronic exercise on UCP1 mRNA in subcutaneous WAT, and it found no effect.\n\nWe were unable to find strong evidence linking PGC-1a and FNDC5 mRNA in muscle in response to exercise training or increased physical activity levels. Notably, we located only one study that examined the effects of exercise on UCP1 in WAT, and this found no effect17. 
Despite PGC-1a being firmly placed as a central regulator of adaptation to exercise in mice and humans, numerous aspects of the literature are contradictory or incomplete. For example, previous evidence indicates that PGC-1a mRNA accumulates with endurance training, while studies of PGC-1a protein rely on various antibodies that measure distinct molecular entities ranging from 70 to >110 kDa78–80. Furthermore, mice lacking PGC-1a adapt normally to endurance exercise training, and in humans the PGC-1a-regulated gene network does not correlate with aerobic adaptation69. Thus, any argument that places Irisin as part of the core PGC-1a-regulated exercise adaptation program needs to reflect, on both technical and theoretical grounds, that there is great uncertainty about the nature and importance of PGC-1a in exercise and health81.\n\nWhen PGC-1a protein content is measured (albeit with uncertainty over protein identities), exercise training increases PGC-1a protein in skeletal muscle or causes nuclear translocation of the protein82–85. However, the studies included in the current review relied only on measuring PGC-1a mRNA to determine the effects of exercise on PGC-1a, and the time-courses of mRNA and protein responses to exercise are distinct. Thus, the reported link between PGC-1a and FNDC5 in skeletal muscle may merely reflect mRNA dynamics, which may explain the inconsistent findings for PGC-1a. Also, the mechanism proposed by Bostrom et al. (2012) indicates that induction of PGC-1a mRNA and then protein would activate the transcription of FNDC5; hence, if this theory were correct, a strong correlation between PGC-1a mRNA and FNDC5 mRNA would be expected. 
However, previous evidence showed that FNDC5 mRNA in muscle is not consistently increased by exercise, nor is it differentially regulated between those with and without insulin resistance20, and it was only modestly increased in a subset of older people following chronic exercise training20.\n\nThe various commercially available antibodies used in the ELISA kits yield protein concentrations that appear to be 5–278 times greater than more recent mass spectrometry data (data that may themselves require independent validation), and still far above what others have found86. These technical considerations may explain part or all of the equivocal results of the studies included in this review regarding both the effects of exercise on FNDC5 mRNA in muscle and on circulating Irisin, while the evidence on the effects of exercise on UCP1 in WAT is very limited. If we focus on the more reliable mRNA measures of PGC-1a and FNDC5, then the variable findings may be explained by the different characteristics of the populations examined and the different exercise protocols used. There is a trend for older muscle tissue to show higher FNDC5 expression following exercise20.\n\nAn interesting aspect raised by the included studies is that the human FNDC5 gene carries a non-ATG start codon65. In humans, ATG is usually the first codon leading to efficient protein production, and therefore this may suggest that Irisin, if produced at all, is produced inefficiently65. 
However, this notion has been questioned by a subsequent study, which reported that human Irisin is mainly translated from its non-ATG start codon, and that the molecular weight of the protein is similar to that of important proteins in the human body, such as insulin, leptin and resistin74, indicating a biological role for Irisin.\n\nAccording to the results of the current systematic review, two studies have measured circulating Irisin via mass spectrometry in response to exercise in humans. In the study by Jedrychowski et al. (2015), blood samples for Irisin identification were collected only after the exercise program, from a small number of participants who were sedentary (n=4) or aerobic exercisers (n=6)74. In the study by Lee et al. (2014), Irisin was measured only pre- and post-acute exercise without a control condition, and the sample size was only two participants87. Also, in the latter study a ~3-fold increase of Irisin was reported only after submaximal, and not maximal, exercise. These studies display methodological limitations and small numbers of participants; future longitudinal studies of changes in Irisin are needed to clarify whether the mass spectrometry measures reflect exercise-induced changes. Finally, while the studies that utilised mass spectrometry do not agree27,74,87, reflecting issues of sensitivity and methodology, the latest identification and analysis of Irisin74,87 indicates that Irisin may circulate in blood and probably has a similar or identical structure to the mouse protein; however, whether it has genuine biological activity remains to be elucidated.\n\nBased on the studies selected for the purposes of the current review, we cannot reach precise conclusions regarding the effects of acute and chronic exercise on PGC-1a in conjunction with FNDC5 mRNA in muscle; this is mainly due to the inconsistency of the findings and the different population characteristics examined. 
Most of the RCTs34,45,50–53 display high risk of bias due to inadequate generation of a randomised sequence and a lack of concealment of allocations before assignment, while all the RCTs exhibit high risk of bias because the exercise interventions could not be blinded to the participants. In addition, four RCTs45,50,51,53 display unclear risk of bias because of the lack of information regarding the blinding procedures. Therefore, the risk of bias assessment of the included RCTs indicates that they may provide imprecise results (Table 2). In addition, the CTs and SGS display a high risk of bias due to the lack of randomised sequence generation, inadequate concealment of allocations before assignment and the outcome assessors’ knowledge of the allocated interventions. They also display unclear risk of bias due to the investigators’ knowledge of the allocated interventions during the study (Table 2). Finally, the included CSS display high risk of bias due to inadequate generation of a randomised sequence, lack of concealment of allocations before assignment and the assessors’ knowledge of the allocated interventions, while they display unclear risk of bias for “incomplete outcome data” because of the lack of information about the participants who were excluded from the analysis. This evidence indicates that the CTs, SGS and CSS may also provide imprecise results. Furthermore, the quality of reporting, as assessed against the relevant adherence guidelines (i.e. CONSORT and STROBE), showed low scores (54.4% for RCTs, 58.68% for CTs, 58.44% for SGS and 60.8% for CSS) for the included studies in the current review. 
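The adherence percentages quoted above are simply each mean checklist score divided by the number of checklist items. A minimal sketch of that arithmetic (the scores and item counts are taken from the review's Tables 3 and 4; the short design labels are ours):

```python
# Mean reporting-quality scores and checklist lengths as quoted in the review:
# RCTs were scored against the 25-item CONSORT checklist, CTs and SGS against
# an 18-item version, and CSS against the 22-item STROBE checklist.
scores = {
    "RCT": (13.60, 25),
    "CT":  (10.56, 18),
    "SGS": (10.52, 18),
    "CSS": (13.37, 22),
}

for design, (mean_score, n_items) in scores.items():
    pct = 100.0 * mean_score / n_items  # share of items reported satisfactorily
    print(f"{design}: {mean_score}/{n_items} = {pct:.2f}%")
```

Computed this way, the CT and CSS figures come to 58.67% and 60.77%, fractionally different from the reported 58.68% and 60.8%, presumably because the underlying means carry more decimal places than are shown in the tables.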
This indicates inadequate reporting of the results of the included studies, which hampers critical appraisal and interpretation of their outcomes.\n\nTo the best of our knowledge, this is the first systematic review that examines the effects of physical activity on the link between PGC-1a and FNDC5 in muscle, on circulating Irisin and on UCP1 in WAT in humans. We compared our results with a recent meta-analysis that aimed to identify the effects of exercise on circulating Irisin88. This meta-analysis concluded that chronic exercise may decrease circulating Irisin in the RCTs, while no conclusion could be drawn from the non-RCTs. However, that meta-analysis did not take into consideration the issues raised regarding the validity of the methods used for Irisin identification27. In contrast, we considered the methods used for Irisin identification in the studies included in the current review; moreover, our review had a different aim: to systematically identify the effects of physical activity on the link between PGC-1a and FNDC5 in muscle and on circulating Irisin, and to find evidence for regulation of UCP1 in WAT in humans. Regarding circulating Irisin, we also report that we cannot form any firm conclusion about the effects of exercise. Our review highlights previous evidence showing that circulating Irisin may only be detected in humans via mass spectrometry26,27,74, and we suggest that data obtained with methods that have not been validated for circulating Irisin identification should not be relied upon. This is because recent evidence questioned the antibodies used in the commercial ELISA kits, given that their polyclonal nature may allow them to cross-react with other proteins27. However, publications that use commercial ELISA kits that have not been validated to detect human Irisin continue to appear at an alarming rate. 
Therefore, our review indicates that only validated methods should be used for human circulating Irisin identification in the future. Furthermore, our results are in accordance with a previous review that showed equivocal results among studies examining circulating Irisin due to methodological variations in Irisin detection77. In that review, the authors examined the commercial antibodies and ELISA kits used to measure circulating Irisin and concluded that the currently available antibodies should be tested for detection of cross-reacting antigens77.\n\nInitially, Irisin was proposed to have a therapeutic effect, given its potential to cause browning of WAT, which may have anti-obesity and anti-diabetic effects2. This was mainly suggested after Irisin administered to obese mice improved glucose homeostasis and caused weight loss2. Also, the browning that Irisin may cause could lead to reduced weight gain, improved insulin sensitivity, and reduced risk of type II diabetes and other metabolic disorders, as animal studies indicate89–93, as well as increased daily resting energy expenditure in humans94,95. However, the secondary outcomes of our systematic review show that even when Irisin was measured with the same ELISA kit (PP, EK-067-52) there was either no relationship22 or a positive relationship with resting energy expenditure37, and either no association72 or a positive association with BMI71. This specific ELISA kit (PP, EK-067-52) has been tested for validity by a previous study27, which showed it to be invalid for circulating Irisin identification. Furthermore, Irisin measured with ELISA kits from the same manufacturer (Phoenix Pharmaceuticals) showed either no relationship72 or a positive relationship with waist circumference49. This evidence shows inconsistent results for the relationship of Irisin with indices indicating a therapeutic role of the protein, even though the Irisin identification methods used were identical. 
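The ~5–278-fold discrepancy between ELISA-based and mass spectrometry-based Irisin concentrations cited earlier in this review can be reproduced from the reported ranges. A minimal sketch (the pooled ELISA range of ~22–1196 ng/ml and the mass spectrometry values of 3.6–4.3 ng/ml are taken from the studies discussed above):

```python
# Reported circulating Irisin concentration ranges (ng/ml).
elisa_low, elisa_high = 22.0, 1196.0  # pooled range across ELISA-based studies
ms_high = 4.3                         # upper mass spectrometry value (ref. 74)

# Fold-differences relative to the mass spectrometry measurement.
fold_low = elisa_low / ms_high    # ~5-fold
fold_high = elisa_high / ms_high  # ~278-fold
print(f"ELISA exceeds mass spectrometry by ~{fold_low:.0f}- to ~{fold_high:.0f}-fold")
```

Dividing by the upper mass spectrometry value reproduces the ~5–278-fold range quoted in the text; dividing by the lower value (3.6 ng/ml) would widen it to roughly 6–332-fold.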
The available evidence from the included studies revealed that when circulating Irisin was measured via commercial ELISA kits the concentrations were ~22–1196 ng/ml, far greater (~5–278-fold) than recent mass spectrometry detection (3.6–4.3 ng/ml)74, strongly indicating that the ELISA kits were detecting multiple proteins. Indeed, the available commercial ELISA kits for Irisin identification have either been found to be invalid27,77 or remain to be validated77. Thus, we cannot confirm a favourable effect of Irisin on human metabolism. Finally, none of the included studies in the current review examined associations of circulating Irisin with indices indicating a therapeutic role of the protein using western blotting and/or mass spectrometry methods.\n\nThe current review has a number of strengths. For instance, we searched the PubMed and EMBASE databases using appropriate algorithms with standardized indexing terms. Standardized indexing terms can retrieve records that may use different words to describe the same concept, and can retrieve information beyond that contained in the title and abstract96. Furthermore, the current review used a systematic approach to identify articles, following previous methodology28–30, and we used well-established tools31–33 to evaluate the included studies. To reduce bias, two investigators worked independently on the screening of the included studies for eligibility, the risk of bias assessment, and the provision of CONSORT and STROBE scores. Also, we did not exclude studies based on language. However, a limitation of the current review is its reliance on published literature only; we did not search the grey literature. In this light, there is a potential for publication bias in the current review. 
Nevertheless, the inclusion of grey literature may itself introduce bias, not least because grey literature typically lacks peer review96.\n\n\nConclusions\n\nWe found little evidence to determine the link between PGC-1a mRNA and FNDC5 mRNA in human muscle, and there was limited evidence on the effects of physical activity on UCP1 in subcutaneous WAT. We also found heterogeneity in the populations examined, high risk of bias in the selected studies and a relatively small number of RCTs (n=12) with inconsistent findings regarding the link between physical activity, PGC-1a, FNDC5, and UCP1.\n\nMass spectrometry-based assessments of the effects of exercise on Irisin were compromised by the methodological limitations of the existing studies (i.e. post-exercise-only comparisons, lack of controls, small samples). The current systematic review highlights previous mass spectrometry evidence indicating that Irisin is present in human blood at concentrations ~5–278-fold lower than those detected by commercial ELISA kits. Therefore, we are unable to draw conclusions on the circulating Irisin response to physical activity due to methodological limitations. In this regard, our systematic review used well-established methodology (i.e. PRISMA and Cochrane Library guidelines). However, we also considered the validity and accuracy of the measurements of Irisin protein concentrations in the included studies. This additional analysis completely redirected our conclusion relative to what well-established systematic review methodology alone would have provided. Therefore, we suggest that future systematic reviews should also take into consideration the validity and accuracy of the measurements in the included studies, to avoid misleading conclusions. We also suggest that future studies should consider only currently validated methods for detecting human circulating Irisin (i.e. mass spectrometry), until new methods are introduced. 
The latter also implies that future studies should re-examine the biological role of human Irisin and the effects of physical activity/exercise on the link between PGC-1a and FNDC5 in muscle, circulating Irisin, and UCP1 in WAT.",
"appendix": "Author contributions\n\n\n\nPCD and IML formed the paper, developed the algorithms and conducted the search procedure. PCD and GSM performed the risk of bias and the quality of the reporting of the results assessments. Disagreements in the assessment of both risk of bias and the quality of the reporting of the results were arbitrated by IML and ADF. JT and PS contributed to the data extraction from the selected studies, and reviewed and modified the molecular and genomic content of the paper. YK contributed to the data extraction from the selected studies, and reviewed and modified the content of the manuscript. All authors approved the submitted version.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nPCD and ADF were supported by the European Union 7th Framework Programme [FP7-PEOPLE-2012-IRSES (FUEGO grant no. 612547), and FP7-PEOPLE-2013-IRSES (U-GENE grant no. 319010)]. PS was supported by the Swedish Federal Government under the LUA/ALF agreement (grant no. ALFGBG-431481).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nSupplementary material\n\nSupplementary File 1: PubMed search.\n\nClick here to access the data.\n\nSupplementary File 2: EMBASE search.\n\nClick here to access the data.\n\nSupplementary Table 1: PRISMA checklist.\n\nClick here to access the data.\n\nSupplementary Figure 1: PRISMA flow diagram of study selection and identification.\n\nClick here to access the data.\n\nSupplementary Figure 2: Summary of risk of bias assessment using the Cochrane Collaboration’s tool.\n\nClick here to access the data.\n\n\nReferences\n\nIshibashi J, Seale P: Medicine. Beige can be slimming. Science. 2010; 328(5982): 1113–1114. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBoström P, Wu J, Jedrychowski MP, et al.: A PGC1-α-dependent myokine that drives brown-fat-like development of white fat and thermogenesis. 
Nature. 2012; 481(7382): 463–468. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNorrbom J, Sundberg CJ, Ameln H, et al.: PGC-1alpha mRNA expression is influenced by metabolic perturbation in exercising human skeletal muscle. J Appl Physiol (1985). 2004; 96(1): 189–194. PubMed Abstract | Publisher Full Text\n\nSpiegelman BM: Transcriptional control of mitochondrial energy metabolism through the PGC1 coactivators. Novartis Found Symp. 2007; 287: 60–3; discussion 63–9. PubMed Abstract | Publisher Full Text\n\nCannon B, Nedergaard J: Brown adipose tissue: function and physiological significance. Physiol Rev. 2004; 84(1): 277–359. PubMed Abstract | Publisher Full Text\n\nLeick L, Wojtaszewski JF, Johansen ST, et al.: PGC-1alpha is not mandatory for exercise- and training-induced adaptive gene responses in mouse skeletal muscle. Am J Physiol Endocrinol Metab. 2008; 294(2): E463–474. PubMed Abstract | Publisher Full Text\n\nTeufel A, Malik N, Mukhopadhyay M, et al.: Frcp1 and Frcp2, two novel fibronectin type III repeat containing genes. Gene. 2002; 297(1–2): 79–83. PubMed Abstract | Publisher Full Text\n\nCastillo-Quan JI: From white to brown fat through the PGC-1α-dependent myokine irisin: implications for diabetes and obesity. Dis Model Mech. 2012; 5(3): 293–295. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPetrovic N, Walden TB, Shabalina IG, et al.: Chronic peroxisome proliferator-activated receptor gamma (PPARgamma) activation of epididymally derived white adipocyte cultures reveals a population of thermogenically competent, UCP1-containing adipocytes molecularly distinct from classic brown adipocytes. J Biol Chem. 2010; 285(10): 7153–7164. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFeldmann HM, Golozoubova V, Cannon B, et al.: UCP1 ablation induces obesity and abolishes diet-induced thermogenesis in mice exempt from thermal stress by living at thermoneutrality. Cell Metab. 2009; 9(2): 203–209. 
PubMed Abstract | Publisher Full Text\n\nShabalina IG, Petrovic N, de Jong JM, et al.: UCP1 in Brite/Beige Adipose Tissue Mitochondria Is Functionally Thermogenic. Cell Rep. 2013; 5(5): 1196–203. PubMed Abstract | Publisher Full Text\n\nPetrovic N, Shabalina IG, Timmons JA, et al.: Thermogenically competent nonadrenergic recruitment in brown preadipocytes by a PPARgamma agonist. Am J Physiol Endocrinol Metab. 2008; 295(2): E287–296. PubMed Abstract | Publisher Full Text\n\nHuh JY, Panagiotou G, Mougios V, et al.: FNDC5 and irisin in humans: I. Predictors of circulating concentrations in serum and plasma and II. mRNA expression and circulating concentrations in response to weight loss and exercise. Metabolism. 2012; 61(12): 1725–1738. PubMed Abstract | Publisher Full Text | Free Full Text\n\nYang Z, Chen X, Chen Y, et al.: Decreased irisin secretion contributes to muscle insulin resistance in high-fat diet mice. Int J Clin Exp Pathol. 2015; 8(6): 6490–6497. PubMed Abstract | Free Full Text\n\nHuth C, Dubois MJ, Marette A, et al.: Irisin is more strongly predicted by muscle oxidative potential than adiposity in non-diabetic men. J Physiol Biochem. 2015; 71(3): 559–68. PubMed Abstract | Publisher Full Text\n\nSesti G, Andreozzi F, Fiorentino TV, et al.: High circulating irisin levels are associated with insulin resistance and vascular atherosclerosis in a cohort of nondiabetic adult subjects. Acta Diabetol. 2014; 51(5): 705–713. PubMed Abstract | Publisher Full Text\n\nNorheim F, Langleite TM, Hjorth M, et al.: The effects of acute and chronic exercise on PGC-1α, irisin and browning of subcutaneous adipose tissue in humans. FEBS J. 2014; 281(3): 739–749. PubMed Abstract | Publisher Full Text\n\nKurdiova T, Balaz M, Vician M, et al.: Effects of obesity, diabetes and exercise on Fndc5 gene expression and irisin release in human skeletal muscle and adipose tissue: in vivo and in vitro studies. J Physiol. 2014; 592(5): 1091–1107. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nEllefsen S, Vikmoen O, Slettaløkken G, et al.: Irisin and FNDC5: effects of 12-week strength training, and relations to muscle phenotype and body mass composition in untrained women. Eur J Appl Physiol. 2014; 114(9): 1875–1888. PubMed Abstract | Publisher Full Text\n\nTimmons JA, Baar K, Davidsen PK, et al.: Is irisin a human exercise gene? Nature. 2012; 488(7413): E9–10; discussion E10–11. PubMed Abstract | Publisher Full Text\n\nVollaard NB, Constantin-Teodosiu D, Fredriksson K, et al.: Systematic analysis of adaptations in aerobic capacity and submaximal energy metabolism provides a unique insight into determinants of human aerobic performance. J Appl Physiol (1985). 2009; 106(5): 1479–1486. PubMed Abstract | Publisher Full Text\n\nDaskalopoulou SS, Cooke AB, Gomez YH, et al.: Plasma irisin levels progressively increase in response to increasing exercise workloads in young, healthy, active subjects. Eur J Endocrinol. 2014; 171(3): 343–352. PubMed Abstract | Publisher Full Text\n\nHuh JY, Mougios V, Skraparlis A, et al.: Irisin in response to acute and chronic whole-body vibration exercise in humans. Metabolism. 2014; 63(7): 918–921. PubMed Abstract | Publisher Full Text\n\nGreulich T, Nell C, Koepke J, et al.: Benefits of whole body vibration training in patients hospitalised for COPD exacerbations - a randomized clinical trial. BMC Pulm Med. 2014; 14: 60. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLee P, Linderman JD, Smith S, et al.: Irisin and FGF21 are cold-induced endocrine activators of brown fat function in humans. Cell Metab. 2014; 19(2): 302–309. PubMed Abstract | Publisher Full Text\n\nAtherton PJ, Phillips BE: Greek goddess or Greek myth: the effects of exercise on irisin/FNDC5 in humans. J Physiol. 2013; 591(21): 5267–5268. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAlbrecht E, Norheim F, Thiede B, et al.: Irisin - a myth rather than an exercise-inducible myokine. 
Sci Rep. 2015; 5: 8889. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKhan KS, Kunz R, Kleijnen J, et al.: Five steps to conducting a systematic review. J R Soc Med. 2003; 96(3): 118–121. PubMed Abstract | Free Full Text\n\nLiberati A, Altman DG, Tetzlaff J, et al.: The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate healthcare interventions: explanation and elaboration. BMJ. 2009; 339: b2700. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHarris JD, Quatman CE, Manring MM, et al.: How to write a systematic review. Am J Sports Med. 2014; 42(11): 2761–2768. PubMed Abstract | Publisher Full Text\n\nHiggins JP, Altman DG, Gøtzsche PC, et al.: The Cochrane Collaboration's tool for assessing risk of bias in randomised trials. BMJ. 2011; 343: d5928. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSchulz KF, Altman DG, Moher D: CONSORT 2010 statement: Updated guidelines for reporting parallel group randomised trials. J Pharmacol Pharmacother. 2010; 1(2): 100–107. PubMed Abstract | Publisher Full Text | Free Full Text\n\nvon Elm E, Altman DG, Egger M, et al.: The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) Statement: guidelines for reporting observational studies. Int J Surg. 2014; 12(12): 1495–1499. PubMed Abstract | Publisher Full Text\n\nBang HS, Seo DY, Chung YM, et al.: Corrigendum to: Ursolic Acid-Induced Elevation of Serum Irisin Augments Muscle Strength During Resistance Training in Men. Korean J Physiol Pharmacol. 2014; 18(6): 531. PubMed Abstract | Publisher Full Text | Free Full Text\n\nIjiri N, Kanazawa H, Asai K, et al.: Irisin, a newly discovered myokine, is a novel biomarker associated with physical activity in patients with chronic obstructive pulmonary disease. Respirology. 2015; 20(4): 612–617. 
PubMed Abstract | Publisher Full Text\n\nAydin S, Aydin S, Kuloglu T, et al.: Alterations of irisin concentrations in saliva and serum of obese and normal-weight subjects, before and after 45 min of a Turkish bath or running. Peptides. 2013; 50: 13–18. PubMed Abstract | Publisher Full Text\n\nPardo M, Crujeiras AB, Amil M, et al.: Association of irisin with fat mass, resting energy expenditure, and daily activity in conditions of extreme body mass index. Int J Endocrinol. 2014; 2014: 857270. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPrestes J, da Cunha Nascimento D, Tibana RA, et al.: Understanding the individual responsiveness to resistance training periodization. Age (Dordr). 2015; 37(3): 9793. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKhodadadi H, Rajabi H, Attarzadeh SR, et al.: The effect of High Intensity Interval Training (HIIT) and pilates on levels of irisin and insulin resistance in overweight women. [Persian]. Iranian Journal of Endocrinology and Metabolism. 2014; 16(3): 190–196. Reference Source\n\nBlüher S, Panagiotou G, Petroff D, et al.: Effects of a 1-year exercise and lifestyle intervention on irisin, adipokines, and inflammatory markers in obese children. Obesity (Silver Spring). 2014; 22(7): 1701–1708. PubMed Abstract | Publisher Full Text\n\nBesse-Patin A, Montastier E, Vinel C, et al.: Effect of endurance training on skeletal muscle myokine expression in obese men: Identification of apelin as a novel myokine. Int J Obes (Lond). 2014; 38(5): 707–713. PubMed Abstract | Publisher Full Text\n\nKim HJ, Lee HJ, So B, et al.: Effect of aerobic training and resistance training on circulating irisin level and their association with change of body composition in overweight/obese adults: a pilot study. Physiol Res. 2016; 65(2): 271–279. PubMed Abstract\n\nGreulich T, Kehr K, Nell C, et al.: A randomized clinical trial to assess the influence of a three months training program (gym-based individualized vs. 
calisthenics-based non-invidualized) in COPD-patients. Respir Res. 2014; 15(1): 36. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLecker SH, Zavin A, Cao P, et al.: Expression of the irisin precursor FNDC5 in skeletal muscle correlates with aerobic exercise performance in patients with heart failure. Circulation Heart failure. 2012; 5(6): 812–818. PubMed Abstract | Publisher Full Text\n\nHuh JY, Siopi A, Mougios V, et al.: Irisin in response to exercise in humans with and without metabolic syndrome. J Clin Endocrinol Metab. 2015; 100(3): E453–457. PubMed Abstract | Publisher Full Text\n\nMoraes C, Leal VO, Marinho SM, et al.: Resistance exercise training does not affect plasma irisin levels of hemodialysis patients. Horm Metab Res. 2013; 45(12): 900–904. PubMed Abstract | Publisher Full Text\n\nPalermo A, Strollo R, Maddaloni E, et al.: Irisin is associated with osteoporotic fractures independently of bone mineral density, body composition or daily physical activity. Clin Endocrinol (Oxf). 2015; 82(2): 615–619. PubMed Abstract | Publisher Full Text\n\nHofmann T, Elbelt U, Ahnis A, et al.: Irisin Levels are Not Affected by Physical Activity in Patients with Anorexia Nervosa. Front Endocrinol (Lausanne). 2014; 4: 202. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAl-Daghri NM, Alokail MS, Rahman S, et al.: Habitual Physical Activity is Associated with Circulating Irisin in Healthy Controls but not in Subjects with Diabetes Mellitus Type 2. Eur J Clin Invest. 2015; 45(8): 775–81. PubMed Abstract | Publisher Full Text\n\nTsuchiya Y, Ando D, Goto K, et al.: High-intensity exercise causes greater irisin response compared with low-intensity exercise under similar energy consumption. Tohoku J Exp Med. 2014; 233(2): 135–140. PubMed Abstract | Publisher Full Text\n\nNygaard H, Slettaløkken G, Vegge G, et al.: Irisin in blood increases transiently after single sessions of intense endurance exercise and heavy strength training. PLoS One. 
2015; 10(3): e0121367. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKim HJ, So B, Choi M, et al.: Resistance exercise training increases the expression of irisin concomitant with improvement of muscle function in aging mice and humans. Exp Gerontol. 2015; 70: 11–17. PubMed Abstract | Publisher Full Text\n\nTsuchiya Y, Ando D, Takamatsu K, et al.: Resistance exercise induces a greater irisin response than endurance exercise. Metabolism. 2015; 64(9): 1042–1050. PubMed Abstract | Publisher Full Text\n\nHecksteden A, Wegmann M, Steffen A, et al.: Irisin and exercise training in humans - results from a randomized controlled training trial. BMC Med. 2013; 11: 235. PubMed Abstract | Publisher Full Text | Free Full Text\n\nScharhag-Rosenberger F, Meyer T, Wegmann M, et al.: Irisin does not mediate resistance training-induced alterations in resting metabolic rate. Med Sci Sports Exerc. 2014; 46(9): 1736–1743. PubMed Abstract | Publisher Full Text\n\nTsuchiya Y, Ijichi T, Goto K: Effect of sprint training on resting serum irisin concentration - Sprint training once daily vs. twice every other day. Metabolism. 2016; 65(4): 492–495. PubMed Abstract | Publisher Full Text\n\nMoienneia N, Hosseini S: Acute and chronic responses of metabolic myokine to different intensities of exercise in sedentary young women. Obesity Medicine. 2016; 1: 15–20. Publisher Full Text\n\nHew-Butler T, Landis-Piwowar K, Byrd G, et al.: Plasma irisin in runners and nonrunners: no favorable metabolic associations in humans. Physiol Rep. 2015; 3(1): pii: e12262. PubMed Abstract | Publisher Full Text | Free Full Text\n\nComassi M, Vitolo E, Pratali L, et al.: Acute effects of different degrees of ultra-endurance exercise on systemic inflammatory responses. Intern Med J. 2015; 45(1): 74–79. 
PubMed Abstract | Publisher Full Text\n\nMurawska-Cialowicz E, Wojna J, Zuwala-Jagiello J: Crossfit training changes brain-derived neurotrophic factor and irisin levels at rest, after wingate and progressive tests, and improves aerobic capacity and body composition of young physically active men and women. J Physiol Pharmacol. 2015; 66(6): 811–821. PubMed Abstract\n\nKwaśniewska M, Kostka T, Jegier A, et al.: Regular physical activity and cardiovascular biomarkers in prevention of atherosclerosis in men: a 25-year prospective cohort study. BMC Cardiovasc Disord. 2016; 16: 65. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCamera DM, Hawley JA, Coffey VG: Resistance exercise with low glycogen increases p53 phosphorylation and PGC-1α mRNA in skeletal muscle. Eur J Appl Physiol. 2015; 115(6): 1185–1194. PubMed Abstract | Publisher Full Text\n\nPekkala S, Wiklund PK, Hulmi JJ, et al.: Are skeletal muscle FNDC5 gene expression and irisin release regulated by exercise and related to health? J Physiol. 2013; 591(21): 5393–5400. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAlvehus M, Boman N, Söderlund K, et al.: Metabolic adaptations in skeletal muscle, adipose tissue, and whole-body oxidative capacity in response to resistance training. Eur J Appl Physiol. 2014; 114(7): 1463–1471. PubMed Abstract | Publisher Full Text\n\nRaschke S, Elsen M, Gassenhuber H, et al.: Evidence against a beneficial effect of irisin in humans. PLoS One. 2013; 8(9): e73680. PubMed Abstract | Publisher Full Text | Free Full Text\n\nScalzo R, Peltonen G, Giordano G, et al.: Regulation of the browning of human white adipose: Evidence for sympathetic control and sexual dimorphic responses to sprint interval training. FASEB J. 2014; 28(1 Supplement): 1160.4. Reference Source\n\nHuh JY, Mougios V, Kabasakalis A, et al.: Exercise-induced irisin secretion is independent of age or fitness level and increased irisin may directly modulate muscle metabolism through AMPK activation. 
J Clin Endocrinol Metab. 2014; 99(11): E2154–2161. PubMed Abstract | Publisher Full Text\n\nHey-Mogensen M, Højlund K, Vind BF, et al.: Effect of physical training on mitochondrial respiration and reactive oxygen species release in skeletal muscle in patients with obesity and type 2 diabetes. Diabetologia. 2010; 53(9): 1976–1985. PubMed Abstract | Publisher Full Text\n\nKeller P, Vollaard NB, Gustafsson T, et al.: A transcriptional map of the impact of endurance exercise training on skeletal muscle phenotype. J Appl Physiol (1985). 2011; 110(1): 46–59. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKraemer RR, Shockett P, Webb ND, et al.: A transient elevated irisin blood concentration in response to prolonged, moderate aerobic exercise in young men and women. Horm Metab Res. 2014; 46(2): 150–154. PubMed Abstract | Publisher Full Text\n\nLöffler D, Müller U, Scheuermann K, et al.: Serum irisin levels are regulated by acute strenuous exercise. J Clin Endocrinol Metab. 2015; 100(4): 1289–1299. PubMed Abstract | Publisher Full Text\n\nAnastasilakis AD, Polyzos SA, Saridakis ZG, et al.: Circulating irisin in healthy, young individuals: Day-night rhythm, effects of food intake and exercise, and associations with gender, physical activity, diet, and body composition. J Clin Endocrinol Metab. 2014; 99(9): 3247–3255. PubMed Abstract | Publisher Full Text\n\nMoreno M, Moreno-Navarrete JM, Serrano M, et al.: Circulating irisin levels are positively associated with metabolic risk factors in sedentary subjects. PLoS One. 2015; 10(4): e0124100. PubMed Abstract | Publisher Full Text | Free Full Text\n\nJedrychowski MP, Wrann CD, Paulo JA, et al.: Detection and Quantitation of Circulating Human Irisin by Tandem Mass Spectrometry. Cell Metab. 2015; 22(4): 734–40. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nMiyamoto-Mikami E, Sato K, Kurihara T, et al.: Endurance training-induced increase in circulating irisin levels is associated with reduction of abdominal visceral fat in middle-aged and older adults. PLoS One. 2015; 10(3): e0120354. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPalacios-González B, Vadillo-Ortega F, Polo-Oteyza E, et al.: Irisin levels before and after physical activity among school-age children with different BMI: a direct relation with leptin. Obesity (Silver Spring). 2015; 23(4): 729–732. PubMed Abstract | Publisher Full Text\n\nSanchis-Gomar F, Alis R, Pareja-Galeano H, et al.: Inconsistency in circulating irisin levels: what is really happening? Horm Metab Res. 2014; 46(8): 591–596. PubMed Abstract | Publisher Full Text\n\nAquilano K, Vigilanza P, Baldelli S, et al.: Peroxisome proliferator-activated receptor gamma co-activator 1alpha (PGC-1alpha) and sirtuin 1 (SIRT1) reside in mitochondria: possible direct function in mitochondrial biogenesis. J Biol Chem. 2010; 285(28): 21590–21599. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLittle JP, Safdar A, Cermak N, et al.: Acute endurance exercise increases the nuclear abundance of PGC-1alpha in trained human skeletal muscle. Am J Physiol Regul Integr Comp Physiol. 2010; 298(4): R912–917. PubMed Abstract | Publisher Full Text\n\nPerry CG, Lally J, Holloway GP, et al.: Repeated transient mRNA bursts precede increases in transcriptional and mitochondrial proteins during training in human skeletal muscle. J Physiol. 2010; 588(Pt 23): 4795–4810. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHandschin C, Spiegelman BM: The role of exercise and PGC1alpha in inflammation and chronic disease. Nature. 2008; 454(7203): 463–469. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nPilegaard H, Saltin B, Neufer PD: Exercise induces transient transcriptional activation of the PGC-1alpha gene in human skeletal muscle. J Physiol. 2003; 546(Pt 3): 851–858. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWright DC, Han DH, Garcia-Roves PM, et al.: Exercise-induced mitochondrial biogenesis begins before the increase in muscle PGC-1alpha expression. J Biol Chem. 2007; 282(1): 194–199. PubMed Abstract | Publisher Full Text\n\nMahoney DJ, Parise G, Melov S, et al.: Analysis of global mRNA expression in human skeletal muscle during recovery from endurance exercise. FASEB J. 2005; 19(11): 1498–1500. PubMed Abstract | Publisher Full Text\n\nLjubicic V, Joseph AM, Saleem A, et al.: Transcriptional and post-transcriptional regulation of mitochondrial biogenesis in skeletal muscle: effects of exercise and aging. Biochim Biophys Acta. 2010; 1800(3): 223–234. PubMed Abstract | Publisher Full Text\n\nErickson HP: Irisin and FNDC5 in retrospect: An exercise hormone or a transmembrane receptor? Adipocyte. 2013; 2(4): 289–293. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLee DV, et al.: Irisin does not induce browning of mouse or human adipocytes. Diabetes. 2013; 62: A25.\n\nQiu S, Cai X, Sun Z, et al.: Chronic Exercise Training and Circulating Irisin in Adults: A Meta-Analysis. Sports Med. 2015; 45(11): 1577–1588. PubMed Abstract | Publisher Full Text\n\nKopecky J, Clarke G, Enerbäck S, et al.: Expression of the mitochondrial uncoupling protein gene from the aP2 gene promoter prevents genetic obesity. J Clin Invest. 1995; 96(6): 2914–2923. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKopecký J, Rossmeisl M, Hodný Z, et al.: Reduction of dietary obesity in aP2-Ucp transgenic mice: mechanism and adipose tissue morphology. Am J Physiol. 1996; 270(5 Pt 1): E776–786. 
PubMed Abstract\n\nCederberg A, Grønning LM, Ahrén B, et al.: FOXC2 is a winged helix gene that counteracts obesity, hypertriglyceridemia, and diet-induced insulin resistance. Cell. 2001; 106(5): 563–573. PubMed Abstract | Publisher Full Text\n\nTsukiyama-Kohara K, Poulin F, Kohara M, et al.: Adipose tissue reduction in mice lacking the translational inhibitor 4E-BP1. Nat Med. 2001; 7(10): 1128–1132. PubMed Abstract | Publisher Full Text\n\nSeale P, Kajimura S, Spiegelman BM: Transcriptional control of brown adipocyte development and physiological function--of mice and men. Genes Dev. 2009; 23(7): 788–797. PubMed Abstract | Publisher Full Text | Free Full Text\n\nvan Marken Lichtenbelt WD, Vanhommerig JW, Smulders NM, et al.: Cold-activated brown adipose tissue in healthy men. N Engl J Med. 2009; 360(15): 1500–1508. PubMed Abstract | Publisher Full Text\n\nVirtanen KA, Lidell ME, Orava J, et al.: Functional brown adipose tissue in healthy adults. N Engl J Med. 2009; 360(15): 1518–1525. PubMed Abstract | Publisher Full Text\n\nHiggins J, Green S: Cochrane Handbook for Systematic Reviews of Interventions. Version 5.1.0 edn, 2011. Reference Source"
}
|
[
{
"id": "21095",
"date": "27 Mar 2017",
"name": "Elke Albrecht",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe presented review is timely and of common interest since energy expenditure by “browning” of white adipocytes may be a mechanism to reduce body weight and a potential way to fight obesity in humans. Beneficial effects of exercise on metabolism are long known although not completely understood. A recently proposed pathway linked physical activity with increased PGC1-alpha and FNDC5. Cleavage of the transmembrane factor FNDC5 results in circulating irisin which in turn starts a sequence to induce a brown-like phenotype in white adipocytes. Consequently, numerous studies focused on this proposed link in different experimental settings in humans. The presented review systematically retrieved and analyzed such studies to further elucidate different steps of the proposed cascade PGC1-alpha – FNDC5 – Irisin – UCP1.\nThe value of this review rests on its rigorous quality assessment of the included studies. The authors analyzed all published reports with established tools on the risk of bias and the quality of the data. This part of the analysis demonstrates a rather low quality of many studies hampering the deduction of valid conclusions. The analysis of the reported results is given in different sections considering the analyzed parameters and the design of the studies. Results on PGC1-alpha and FNDC5 as well as on UCP1 are presented and discussed in an appropriate manner.\nIn contrast, the data on irisin are much more problematic. 
As the authors mention in the discussion, the missing validation of the used ELISA kits for irisin is a major issue in these studies. The values vary greatly and, more importantly, are mostly higher by magnitudes than the values determined by mass spectrometry. In some studies irisin was measured with different assays: Kurdiova et al. (ref. 18) measured irisin with a RIA (RK-067-16, Phoenix) and a subset of their samples additionally with ELISA (EK-067-29, Phoenix) and concluded “… The correlation between both assays was very weak, and definitely not admissible for two products that claimed to measure the concentrations of the same molecule…”. Albrecht et al. (ref. 27) re-analyzed all samples from Norheim et al. (ref. 17, EK-067-52) with the ELISA of Adipogen and found no correlation between both measurements (r = 0.03). Finally, Montes-Nieto et al. (2016)1 analyzed irisin in human plasma with two different lots of EK-067-29 (Phoenix) and stated an almost complete lack of agreement between the data. Latter reference should be included in this review. These results cast doubts at least on all irisin levels determined with those kits used in more than half of the included studies. Consequently, a meaningful discussion of the data is hardly possible.\nThe authors should mention these points at a prominent position and shorten the detailed discussion of the – most likely – invalid ELISA/RIA results from the included studies.\nMoreover, it should be noted that the studies using mass spectrometry delivered contradictory results. Boström et al. (ref. 2) and Lee et al. (ref. 25) used an antibody (Abcam, now discontinued) recognizing a peptide of FNDC5 which is not part of the secreted irisin to identify the bands subjected to mass spectrometry whereas Jedrychowski et al. (ref. 74) employed an antibody of Adipogen against the irisin peptide. This explains the discrepancy in the molecular weight of the analyzed peptides (~ 22 kDa [ref. 2, 25] vs. ~ 12 kDa [ref. 
74]).\nThere is some redundancy in the description of results and repetitions in the discussion which should be omitted. It is e.g. not necessary to repeat several times the values for irisin determined with mass spectrometry. It would increase readability if more summarized results are presented.\n\nAdditionally, some minor points should be corrected:\nIntroduction\nReference 8 is not suited to support this statement because it comments the results of Boström et al. and is no independent confirmation\nTable 1, page 5\nAydin 2013: They used EK-067-52 for measurement of serum irisin. The given product H-067-17 is an antibody for immuno-histochemistry\npage 6\nMoienneia 2016: The correct name of the company is CUSABIO and it is based in China – the name and affiliation is wrong in the original publication.\n\nKhodadadi 2014: They mention CUSABIO in the article. This company provides only one ELISA for irisin therefore it is likely that they used the same test like Moienneia 2016.\npage 8\nMoienneia 2016: See above.\npage 10\nKwasniewska 2016: Typo in the name. They used the irisin ELISA produced by the Czech company BioVendor. The given Scottish company does not sell irisin ELISAs. This is misleadingly described in the original article.\nDiscussion, page 16\nReference 87 is not suited to support the statements regarding mass spectrometry (3 times). I guess ref. 87 (Lee DV et al.) was mixed up with ref. 25 (Lee P et al.). Please check whether ref. 87 is needed at all.\npage 17\nReference 26 is not suited to support the statement regarding mass spectrometry. It was probably mixed up with ref. 25.\n\nTaken together, this systematic review is a valuable contribution to guide through the confusing literature concerning the relationship between exercise and the proposed PGC1-alpha – FNDC5 – irisin – axis. The evaluation of the included studies with well-acknowledged quality measures adds additional value and makes the article unique in the reviewed field.",
"responses": [
{
"c_id": "2722",
"date": "26 May 2017",
"name": "Petros Dinas",
"role": "Author Response",
"response": "We have implemented the suggested information regarding Irisin identification in line with the already existing argument in the text. Also, we removed the detailed discussion of the most likely invalid ELISA results (Page 37). We would like to mention that we completely agree with the Reviewer that a meaningful discussion regarding the effects of physical activity on Irisin is hardly possible, given the problematic methods used for Irisin identification. In this regard, we conclude that we cannot form any firm conclusion. Furthermore, we highlight the fact (Conclusions section) that even though we used a well-established methodology for systematic reviews, we had to additionally consider the validity and accuracy of the methods used in the included studies to avoid misleading conclusions. Finally, we have mentioned all the information suggested by the reviewer, including the reference by Montes-Nieto et al. (2016) (Page 35). We have implemented the information (Page 36) that the studies using mass spectrometry delivered contradictory results. The results section is structured around the mechanism that this systematic review examined (i.e. PGC-1a and FNDC5 in muscle, Irisin and UCP1 in white adipocytes) to directly reflect the aim of the study. In addition, we report the results considering the different kinds of exercise (i.e. acute and chronic exercise), while we separately present the relationships of the examined factors with physical activity levels. We believe that this is particularly important to increase clarity in the presented outcomes in line with the aim of our systematic review. However, considering your comment, we removed the repetition from the results section (i.e. values for Irisin determined with mass spectrometry) (Pages 31-32). 
We also removed repetition from the discussion section as per your suggestion (Page 34). We removed reference 8 from the relevant statement. We have corrected Table 1 (Pages 9, 10, 11, 15 and 18). Thank you for bringing to our attention that reference 87 was mixed up with reference 25. We have now corrected this in the text. We have removed reference 26, which was mixed up with reference 25."
}
]
},
{
"id": "21091",
"date": "29 Mar 2017",
"name": "Fabian Sanchis-Gomar",
"expertise": [],
"suggestion": "Approved",
      "report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved: The paper is scientifically sound in its current form and only minor, if any, improvements are suggested.\n\nApproved with reservations: A number of small changes, sometimes more significant revisions, are required to address specific details and improve the paper's academic merit.\n\nNot approved: Fundamental flaws in the paper seriously undermine the findings and conclusions.\n\nIn this manuscript, the authors analyzed the effects of physical activity on the connection between muscle peroxisome proliferator-activated receptor gamma coactivator 1-alpha (PGC-1α) and fibronectin type III domain-containing protein 5 (FNDC5), circulating Irisin and uncoupling protein one (UCP1) of white adipocytes in humans through a systematic review. The authors review the evidence connecting PGC-1α, FNDC5, Irisin and UCP1 with physical exercise, which might play an important role in diabetes and obesity. Moreover, they have addressed a controversial topic since, though Irisin was initially described as a hormone potentially regulated by exercise and also with a potential role in obesity and diabetes, conflicting results have been reported. There is an ongoing intense debate on the different results from different Irisin assays. Indeed, in 2014, we were the first to underline those discrepancies and inconsistencies. For that reason, I feel this review is appropriately timed and is of interest to the scientific community. The review is effectively organized, the sequence of points is logical, and it follows the PRISMA guidelines.\nIn my opinion, both the results and discussion are properly presented, although the discussion section should be slightly shortened. Also, references 8 and 87 are not needed. I recommend changing PGC-1a to PGC-1α throughout the manuscript. 
Furthermore, the following recently published manuscript deserves to be included and discussed: Perakakis et al. (2017)1.",
"responses": [
{
"c_id": "2721",
"date": "26 May 2017",
"name": "Petros Dinas",
"role": "Author Response",
            "response": "We thank you very much for your encouraging comments. As per your suggestion, references 8 and 87 have been removed. We have also replaced PGC-1a with PGC-1α throughout the manuscript. Finally, the reference you suggested has been included in the discussion section along with a relevant comment (Page 37)."
}
]
}
] | 1
|
https://f1000research.com/articles/6-286
|
https://f1000research.com/articles/5-73/v1
|
18 Jan 16
|
{
"type": "Antibody Validation Article",
"title": "Referencing cross-reactivity of detection antibodies for protein array experiments",
"authors": [
"Darragh Lemass",
"Richard O'Kennedy",
    "Gregor S. Kijanka"
],
  "abstract": "Protein arrays are frequently used to profile antibody repertoires in humans and animals. High-throughput protein array characterisation of complex antibody repertoires requires a platform-dependent, lot-to-lot validation of secondary detection antibodies. This article details the validation of an affinity-isolated anti-chicken IgY antibody produced in rabbit and a goat anti-rabbit IgG antibody conjugated with alkaline phosphatase using protein arrays consisting of 7,390 distinct human proteins. Probing protein arrays with the secondary antibodies in the absence of chicken serum revealed non-specific binding to 61 distinct human proteins. The cross-reactivity of the tested secondary detection antibodies points towards the necessity of platform-specific antibody characterisation studies for all secondary immunoreagents. Secondary antibody characterisation using protein arrays enables the generation of reference lists of cross-reactive proteins, which can then be excluded from analysis in follow-up experiments. Furthermore, making such cross-reactivity lists accessible to the wider research community may help to interpret data generated by the same antibodies in applications not related to protein arrays, such as immunoprecipitation, Western blots or other immunoassays.",
"keywords": [
"Protein arrays",
"Whole-cell immunisation",
"Antibody profiling",
"Cross-reactivity",
"Chicken IgY",
"Reference list",
"Secondary antibody",
"Detection antibody"
],
  "content": "Introduction\n\nSecondary label-conjugated and non-conjugated detection antibodies are frequently used in a wide range of research applications. However, they are often affinity-isolated, polyclonal reagents that may lack the highest standard of antibody validation. The antibodies characterised in this study are a polyclonal anti-chicken IgY antibody produced in rabbit (31104, Thermo Fisher) and a polyclonal goat anti-rabbit IgG antibody conjugated with alkaline phosphatase (AP) (A3687, Sigma-Aldrich). Although the use of the rabbit anti-IgY antibody in the literature is limited, the goat anti-rabbit IgG AP has been extensively utilised in research for over 15 years1,2.\n\nThe research conducted in this laboratory examines complex antibody repertoires in humans and animals by means of protein arrays. Protein arrays are frequently used to profile antibody binding to human proteins in autoimmune disease3, cancer4 and in healthy individuals5. Other protein array applications include recombinant6 and hybridoma-derived7 antibody characterisation studies. This article investigates the cross-reactivity of a rabbit anti-chicken IgY and an alkaline phosphatase-conjugated goat anti-rabbit IgG, which were used for the profiling of IgY antibody responses to human antigens in chickens immunised with human cancer cells. The protein array technology applied here, developed by Büssow and colleagues8, comprises, in its current version, a fully annotated set of 7,390 distinct human proteins that may serve as potential antigens. The aim of this study is to define a cross-reactivity reference list for the two described secondary antibodies, which can then be used to eliminate non-specific binders from ongoing chicken IgY profiling studies. 
Furthermore, publication of the cross-reactivity reference list may support other researchers using these antibodies in the evaluation of their experiments.\n\n\nMaterials and methods\n\nRabbit anti-chicken IgY (H+L) secondary antibody (Thermo Fisher Scientific, Product code 31104, Lot code PK19380211) is a polyclonal antibody that targets the variable heavy and light chains of chicken IgY immunoglobulins (Table 1). The antibody was isolated from the serum of the antigen-immunised rabbit through immunoaffinity chromatography using antigen coupled to agarose beads. The antibody was added to the protein array at a 1/1,000 dilution in 2% (w/v) bovine serum albumin (BSA, Sigma-Aldrich, A2153) in tris-buffered saline (TBS, Trizma® Base, Sigma-Aldrich, T6066 and sodium chloride, Fisher Scientific, S/3160/68) with 0.1% (v/v) Tween 20 (Sigma-Aldrich, P1379).\n\nAlkaline phosphatase-conjugated goat anti-rabbit IgG (whole molecule) (Sigma-Aldrich, Product code A3687, Lot code SLBJ6146V) is a polyclonal antibody that targets all rabbit IgGs (Table 1). The antibody was isolated through immunospecific purification of antisera from a rabbit IgG-immunised goat. Following isolation, the anti-rabbit IgG was conjugated to alkaline phosphatase using glutaraldehyde-based cross-linkage. The antibody was added to the protein array at a 1/1,000 dilution in 2% (w/v) BSA in tris-buffered saline (TBS) with 0.1% (v/v) Tween 20.\n\nUnipex protein arrays were obtained from Source Bioscience Life Sciences (Nottingham, UK). The Unipex arrays comprise 15,300 fully annotated E. coli clones expressing a total of 7,390 distinct in-frame ORF human recombinant proteins. The Unipex proteins are immobilised under denaturing conditions directly on the PVDF membrane surface, exposing linear sequence epitopes ideally suited for epitope mapping, antibody profiling and antibody cross-reactivity analyses. The details of the protein arrays utilised in this study are provided in Table 2. 
For general information on Unipex protein arrays please refer to: (http://www.lifesciences.sourcebioscience.com/media/290406/sbs_ig_manual_proteinarray_v1.pdf).\n\nAntibody cross-reactivity was assessed using Unipex protein arrays. The detailed experimental protocol is provided in Table 3. Briefly, secondary rabbit anti-chicken IgY and goat anti-rabbit IgG AP were validated in preparation for a chicken IgY antibody profiling experiment of a chicken immunised with human cancer cells. Protein arrays were probed with secondary antibodies in the absence of IgY-containing chicken serum, as described in Table 3. Signal generation for array-bound secondary antibodies was obtained using AttoPhos AP fluorescent substrate system (Promega, S1001) diluted 1 in 8 in AP buffer (1mM MgCl2, Sigma-Aldrich, M4880 and 100mM Tris base, pH 9.5). Protein array image acquisition was conducted using a Fuji scanner Fla5100. Positive signals were localized according to the manufacturer’s protocol. Protein annotations were retrieved from the Unipex database provided by the manufacturer and updated using the National Cancer Institute’s UniGene CGAP Gene Finder tool (http://cgap.nci.nih.gov/Genes/GeneFinder).\n\n\nResults\n\nProbing protein arrays with antibodies enables the assessment of specificity and cross-reactivity on large numbers of potential antigens in parallel. Here we investigated the cross-reactivity of secondary anti-chicken IgY from rabbit and anti-rabbit IgG AP from goat using human protein arrays in the absence of chicken serum. The analysis revealed antibody binding to human proteins in the absence of chicken serum and hence chicken IgY immunoglobulins. The identified positive signals varied in strength, as shown in Figure 1, with intensity 3 being the strongest and 1 the weakest. The difference in signal intensities may relate to varying protein quantities on the array and differences in antibody affinities to corresponding proteins. 
A total of 63 binding events were visible on the protein arrays, of which 61 corresponded to unique proteins (Table 4). Five of the identified signals were scored as intensity 3, twelve were scored as intensity 2 and the remainder were scored as intensity 1. The original protein array images are shown in Figure S1 and Figure S2 (Supplementary material), and protein array images with highlighted positive signals, which correspond to the cross-reactive proteins listed in Table 4, are shown in Figure S3 and Figure S4 (Supplementary material).\n\n(A) Image of a whole protein array and a representative section illustrating antibody-antigen binding at three different signal intensities; 3 = strong, 2 = intermediate and 1 = weak. (B) The proteins are arranged in a 3×3 pattern on the array; all proteins are arrayed twice and appear as duplicate spots in a particular pattern within a block after a successful hybridization. (C) Description of proteins chosen as examples on the representative array image above; signal intensities, patterns, Unigene IDs and protein names are listed.\n\nThe investigated antibodies were found to bind to a wide range of human proteins (Table 4). However, it is worth noting that a total of six identified binding events corresponded to human immunoglobulin proteins, with four scored at the highest intensity (intensity 3). Such cross-reactivity is not surprising considering that the antibodies are polyclonal and the immunogens of both hosts were immunoglobulins. In addition, the data sheet provided with the anti-chicken IgY antibody produced in rabbit (31104, Thermo Fisher) specifies that this antibody may cross-react with immunoglobulins from other species. The data sheet for the goat anti-rabbit IgG AP antibody (A3687, Sigma-Aldrich) specifies binding to all rabbit immunoglobulins.\n\n\nConclusion\n\nThis work illustrates the cross-reactivity of an antibody-based detection system for IgY binding. 
The polyclonal anti-IgY rabbit antibody, in combination with an anti-rabbit IgG alkaline phosphatase-conjugated antibody, was shown to bind to 61 human proteins present on Unipex protein arrays comprising 7,390 human proteins. Characterisation of this cross-reactivity provides a ‘false-positive’ database for future chicken antisera characterisation on protein array systems, not limited to the Unipex protein array used here. These results, in combination with ‘false-positives’ from earlier research investigating antibody cross-reactivity by this group9 and others10, may provide valuable information for future protein array-based experiments. Reference lists provided by such experiments would be further strengthened by arrays that include additional portions of the human proteome and/or post-translational modifications. Using antibodies that have been extensively characterised on protein arrays will reduce the risk of identifying irrelevant cross-reactive secondary antibody binding to the array as a host-antigen response.\n\nOverall, the antibodies tested here showed cross-reactivity to unrelated human proteins as well as to human immunoglobulin proteins, which are homologous to the original immunogens. Despite the identified non-specific binding, the tested antibodies are suitable for use in protein array experiments, as the cross-reactive binding partners can be readily excluded from further analysis. As both antibodies were used as a pair in this study, the possibility of deducing the exact cross-reactivity profile for each individual antibody may be limited. However, the cross-reactivity reference list provided in this paper can be further utilised to validate research using those antibodies in applications other than protein arrays.",
"appendix": "Author contributions\n\n\n\nROK and GSK designed the study, DL performed the protein array experiments and GSK conducted data analysis. GSK wrote and DL and ROK critically reviewed and edited the article. All authors have agreed to the final content of the manuscript.\n\n\nCompeting interests\n\n\n\nThe authors do not declare any competing interests.\n\n\nGrant information\n\nThis material is based upon works supported by the Irish Cancer Society Research Fellowship Award CRF10KIJ (GSK), the Science Foundation Ireland under CSET Grant no. 10/CE/B1821 and the Enterprise Ireland Dairy Processing Technology Centre award.\n\nI confirm that the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nSupplementary material\n\nFigure S1. Unipex 1 pt.1 protein array image. Original image of protein array (Number 633.4.730) probed with rabbit anti-chicken IgY and alkaline phosphatase-conjugated goat anti-rabbit IgG, visualised using AttoPhos AP Fluorescent Substrate.\n\nFigure S2. Unipex 2 pt.1 protein array image. Original image of protein array (Number 634.5.737) probed with rabbit anti-chicken IgY and alkaline phosphatase-conjugated goat anti-rabbit IgG, visualised using AttoPhos AP Fluorescent Substrate.\n\nFigure S3. Unipex 1 pt.1 protein array image with highlighted positive signals. Cross-reactive proteins listed in Table 4 are highlighted corresponding to their intensity as red (intensity 3 = strong), green (intensity 2 = intermediate) and yellow (intensity 1 = weak) circles.\n\nFigure S4. Unipex 2 pt.1 protein array image with highlighted positive signals. 
Cross-reactive proteins listed in Table 4 are highlighted corresponding to their intensity as red (intensity 3 = strong), green (intensity 2 = intermediate) and yellow (intensity 1 = weak) circles.\n\n\nReferences\n\nCibelli G, Corsi P, Diana G, et al.: Corticotropin-releasing factor triggers neurite outgrowth of a catecholaminergic immortalized neuron via cAMP and MAP kinase signalling pathways. Eur J Neurosci. 2001; 13(7): 1339–1348. PubMed Abstract | Publisher Full Text\n\nBartos A, Majak I, Diowksz A, et al.: Omega-3 Fatty Acids Used as Cross-Linkers to Reduce Antigenicity of Wheat Flour. J Food Process Preserv. 2015. Publisher Full Text\n\nFathman CG, Soares L, Chan SM, et al.: An array of possibilities for the study of autoimmunity. Nature. 2005; 435(7042): 605–611. PubMed Abstract | Publisher Full Text\n\nKijanka G, Hector S, Kay EW, et al.: Human IgG antibody profiles differentiate between symptomatic patients with and without colorectal cancer. Gut. 2010; 59(1): 69–78. PubMed Abstract | Publisher Full Text\n\nNagele EP, Han M, Acharya NK, et al.: Natural IgG autoantibodies are abundant and ubiquitous in human sera, and their number is influenced by age, gender, and disease. PLoS One. 2013; 8(4): e60726. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHolt LJ, Büssow K, Walter G, et al.: By-passing selection: direct screening for antibody-antigen interactions using protein arrays. Nucleic Acids Res. 2000; 28(15): E72. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKijanka G, Barry R, Chen H, et al.: Defining the molecular target of an antibody derived from nuclear extract of Jurkat cells using protein arrays. Anal Biochem. 2009; 395(2): 119–124. PubMed Abstract | Publisher Full Text\n\nBüssow K, Cahill D, Nietfeld W, et al.: A method for global protein expression and antibody screening on high-density filters of an arrayed cDNA library. Nucleic Acids Res. 1998; 26(21): 5007–5008. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nKijanka G, Ipcho S, Baars S, et al.: Rapid characterization of binding specificity and cross-reactivity of antibodies using recombinant human protein arrays. J Immunol Methods. 2009; 340(2): 132–137. PubMed Abstract | Publisher Full Text\n\nMichaud GA, Salcius M, Zhou F, et al.: Analyzing antibody specificity with whole proteome microarrays. Nat Biotechnol. 2003; 21(12): 1509–1512. PubMed Abstract | Publisher Full Text"
}
|
[
{
"id": "12247",
"date": "03 Feb 2016",
"name": "Brigitte Hantusch",
"expertise": [],
"suggestion": "Approved",
    "report": "Approved\n\nThis study presents data concerning the issue of secondary antibody cross-reactivity towards antigens other than the desired immunoglobulins. By screening a high-throughput protein array, the authors establish the number and identity of proteins detected by commercially available secondary antibodies, a rabbit anti-chicken antibody combined with an AP-conjugated goat anti-rabbit antibody.\n\nTitle and Abstract: The title might contain the information that two detection antibodies were used. The abstract represents a sound summary of the work performed.\n\nArticle: The methods used are described clearly, especially by showing a concise workflow as seen in Table 3.\n\nData: Results are described appropriately and sufficiently. Supplementary Figures S1 and S2 are very large and are dispensable. The sentence about signal intensity differences due to varying protein amounts should be part of the conclusion section and also discussed more extensively.\n\nConclusion: The conclusions drawn are appropriate and concise. Briefly, can some information be drawn from the kind/category of proteins falsely detected?",
"responses": [
{
"c_id": "2709",
"date": "23 May 2017",
"name": "Gregor Kijanka",
"role": "Author Response",
            "response": "The authors would like to thank Dr. Brigitte Hantusch for kindly reviewing this manuscript and for her helpful and detailed comments. We have discussed the signal intensity differences in more detail in the text, as highlighted by Dr. Hantusch. We decided to retain the current title of the article as it points to a more general applicability of our validation approach to other antibodies. Furthermore, we have extensively addressed Dr. Hantusch's comments regarding the categories of proteins detected on the protein array as part of the new in silico analysis presented in Table 5 and in Supplementary Table 1."
}
]
},
{
"id": "12776",
"date": "30 Mar 2016",
"name": "Carsten Grötzinger",
"expertise": [],
"suggestion": "Approved With Reservations",
    "report": "Approved With Reservations\n\nThis article describes an experiment performed to characterize the background signals in a particular combination of three commercially available research tools: a protein macroarray on PVDF membranes used in conjunction with two antibodies used for detection. As no first antibody or serum was used in this experiment, all the signals could be attributed to unwanted, unspecific reactivity of the detection antibody combination used. A list of genes was generated from these signals that is proposed as a reference database for other researchers.\n\nIn general, the approach is scientifically sound and feasible. Background reactivities may limit antibody-based assays and need to be accounted for. So performing a control experiment without serum or first antibody on a protein array and with just the detection antibodies makes perfect sense to control for unspecific binding. The title of the paper is appropriate, the abstract gives enough information on the setting. The background information about the antibodies is described in enough detail. However, the narrow focus of the paper and a number of technical issues limit the quality of the paper and its utility for the readership.\n\nMajor issues\n\nThe experiment was performed only once. Consequently, the reliability of the results will be limited. Only one specific combination of a protein macroarray with two consecutive detection antibodies was analyzed. 
It remains unclear whether the results obtained would apply to other lots of the antibodies or whether they are specific for a certain preparation, limiting the benefit of this protein list as a reference database and also limiting the replication of results by other groups. The authors suggest that their results may also apply to other protein array systems. This claim needs substantiation, especially in the case of E. coli proteins derived from high-throughput cloning that do not show authentic posttranslational modification patterns and often contain extra amino acid sequences that may cause unspecific binding. The paper discusses cross-reactivity with human Ig genes. A sequence analysis of the other cross-reactive proteins with IgY and rabbit Ig sequences may provide evidence for the mechanisms behind this phenomenon, expanding the scope and depth of this so far rather descriptive study.\n\nMinor issues\n\nAntibody concentrations should be given explicitly, e.g. as µg/ml rather than as dilutions. The procedure of signal quantification and scoring needs to be described in more detail. The description states \"Positive signals were localized according to the manufacturer’s protocol\" - what exactly was done to identify positive signals? The pictures provided show varying background intensities as well as a number of very dark spots that do not appear in the analysis. Which algorithm was used to include or exclude signals? How were the different signal intensities attributed to the score values 1, 2 and 3? It would be interesting to know why this specific combination of two detection antibodies was used here: a polyclonal anti-chicken IgY antibody produced in rabbit and then a polyclonal goat anti-rabbit IgG antibody conjugated with alkaline phosphatase. Was there no conjugated anti-chicken antibody available? Every additional antibody will add to the number of unspecific reactions, so using just one instead of two may help reduce background. 
The abstract does not provide a conclusion on whether the antibodies should be used in a particular setting (see Article Guidelines For Antibody Validation Articles).",
"responses": [
{
"c_id": "2708",
"date": "23 May 2017",
"name": "Gregor Kijanka",
"role": "Author Response",
            "response": "The authors would like to thank Dr. Carsten Grötzinger for his very helpful observations, which prompted us to perform an additional in silico analysis resulting in an improvement of this paper. Dr Grötzinger points out that the paper has its limitations due to the fact that only one experiment was performed, leading to questions regarding the reliability of the data, lot-to-lot reproducibility and combinations of antibody pairs. While those issues are certainly important, it was not feasible to address them in this specialized antibody validation paper; we have therefore discussed them within the text. For instance, the lot-to-lot reproducibility of polyclonal antibodies is an important issue that needs to be taken into consideration during the experimental design of a study; it goes, however, beyond the scope of this particular article. The important issue of attributing the identified signals to either of the secondary antibodies tested in a single protein array experiment is now, however, addressed in more detail. We have performed an additional in silico analysis comparing sequence similarities between the antibody immunogens used to produce the secondary antibodies and the human proteins identified on the arrays. The analysis shed some light on the possibility that all immunoglobulin (Ig)-related signals were caused by both tested secondary antibodies, while others were caused by either of the two antibodies. These findings are particularly interesting, as the binding patterns of the non-labelled secondary antibody are difficult to show unless additional labelling is performed directly on the antibody. Such additional labelling might, however, impact on the antibody binding specificity. The results of those analyses, as discussed in a similar manner in the response to Reviewer 1, are presented in a new Table 5 and Supplementary Table 1 and further discussed in the text. 
The authors have also addressed minor issues related to post-translational modifications, antibody concentrations, signal quantification and others throughout the text. In addition, we concluded that the antibodies should be used in a particular setting and highlighted this in the abstract, as required in the Article Guidelines For Antibody Validation Articles."
}
]
},
{
"id": "12773",
"date": "04 Apr 2016",
"name": "Konrad Büssow",
"expertise": [],
"suggestion": "Approved With Reservations",
    "report": "Approved With Reservations\n\nIn the present work, the authors have tested direct binding of secondary antibodies to arrays of human proteins.\n\nReaders who use array technology may benefit from the present work, since they will become aware of the problem of signals caused by the secondary antibodies rather than by the primary antibody. It appears that human immunoglobulins are frequently detected by secondary antibodies, which is a useful finding that would likely also be relevant for other secondary antibodies.\n\nThe authors have included the original images in the supplement, which is useful for users of the technique.\n\nIssues\n\nIn the Results section, it should be made clear that the arrays were probed with both antibodies in the same experiment, not one antibody at a time. It would be interesting to know how strong the signals caused by the secondary antibodies are in comparison to signals obtained in the presence of a primary antibody. In comparison, the part 1 image has a much higher background than part 2. It appears that very clear signals were obtained from part 2, but not from part 1. In the part 1 image, there is considerable background and almost all positions have been slightly stained. I would recommend repeating the experiment to verify whether the weak signals obtained on part 1 can be reproduced. Two secondary antibodies were used in the same experiment. Therefore, it cannot be determined which of the two antibodies gave rise to the signals on the array. This problem should be discussed.",
"responses": [
{
"c_id": "2707",
"date": "23 May 2017",
"name": "Gregor Kijanka",
"role": "Author Response",
            "response": "The authors would like to thank Dr. Konrad Büssow for his thorough review of this article and his helpful comments. Dr. Büssow points out that the authors should stress that both secondary antibodies were used in the same experiment using one single set of protein arrays. This experimental design entails that it cannot be determined which signals are caused by which antibody. We have highlighted and discussed both issues throughout the text, and we performed an additional sequence analysis in an in silico approach to clarify the origin of the signals on the protein array. The results of these analyses are presented in the new Table 5 and Supplementary Table 1 and are further discussed in the text. Dr. Büssow furthermore highlighted the differences in background signal between the two arrays of the protein array set. The authors have encountered similar background differences when using other sets of antibodies and serum samples, and find that such discrepancies in background noise are likely due to the different tissues and vectors used for the generation of the distinct expression clone libraries utilized for arrays 1 and 2. This issue is now specifically highlighted in the article."
}
]
}
] | 1
|
https://f1000research.com/articles/5-73
|
https://f1000research.com/articles/6-740/v1
|
23 May 17
|
{
"type": "Case Report",
"title": "Case Report: Multiple hemorrhagic metastases to the brain from primary lung choriocarcinoma",
"authors": [
"Sunil Munakomi"
],
  "abstract": "Herein we report a very rare entity of multiple hemorrhagic metastases to the brain from a primary lung choriocarcinoma in a young woman. The patient presented with recent-onset progressive headache, a decreased level of consciousness and multiple episodes of vomiting. CT of the head revealed multiple hemorrhagic lesions within the brain. The patient’s serum β-human chorionic gonadotrophin (β-hCG) was increased. A chest X-ray revealed a right lung mass. The patient underwent urgent operative excision of the lesion in the posterior fossa to prevent impending tonsillar herniation. Histology of the lesion provided the diagnosis of choriocarcinoma. After surgery, ultrasonography of the abdomen and pelvis was normal, and a chest CT revealed an enhancing, highly vascular right apical lung lesion, suggestive of a primary lung choriocarcinoma given the clinical background. The patient was then started on chemotherapy, following which her serum β-hCG level decreased rapidly. This case highlights the importance of keeping this entity in the differential diagnosis of hemorrhagic brain lesions in any patient of childbearing age. Early diagnosis and rapid initiation of multimodal therapy are prudent for ensuring a good outcome from an otherwise rapidly metastasizing and highly vascular lesion.",
"keywords": [
"primary",
"lung",
"choriocarcinoma",
"brain",
"metastasis"
],
"content": "Introduction\n\nPrimary lung choriocarcinoma is an extremely rare entity1. Choriocarcinoma is the malignant proliferation of the syncytial cells of trophoblastic origin following gestational events, such as a term pregnancy, molar pregnancy or an abortion. We herein report one such rare case of multiple hemorrhagic metastases to the brain from primary lung choriocarcinoma in a 22 year old young woman. We also review the literature regarding primary lung choriocarcinoma and discuss recent advancements in the management of this disease.\n\n\nCase report\n\nA 22 year old woman presented to our emergency department with a history of a recent onset progressive headache for 15 days, followed by decreased level of consciousness and multiple episodes of vomiting for the last 5 days. The patient had a history of normal vaginal delivery one month past. The patient had no history of fever, chills or any rigor associated with these symptoms, and there was no history of abnormal discharge or bleeding from the vagina. There was no other significant past medical and surgical illnesses or any relevant family history. On presentation, the patient was slightly drowsy with a Glasgow Coma Scale of E4V1M3, with bilateral pupils equal and reacting. She had bilateral sixth nerve palsies (left >> right; Figure 1). Neck rigidity was absent. There was no pallor or any lymphadenopathy. Remaining systemic examination was normal. Pelvic and genital examination from a gynecologist did not reveal any abnormal findings.\n\nCT and MRI images of the head revealed multiple hemorrhagic lesions both in the supra and the infra-tentorial compartment with evidence of effacement of the forth ventricle and evolving hydrocephalus (Figure 2 and Figure 3). There was no vascular blush seen within the brain in the MR angiography (Figure 4). Routine chest X-ray revealed the presence of a right lung mass (Figure 5). Urine for pregnancy test was also positive. 
However, an ultrasound of the abdomen and pelvis was normal. Therefore, choriocarcinoma was suspected and serum B-human chorionic gonadotropin (HCG) levels were assessed and found to be >220,000 mIU/ml (normal range: <1 mIU/ml). The patient’s hemoglobin was 14.5 gm% (normal range: 12.1–15.1 gm%) and her platelet count was 215,000/μl (normal range: 150,000–400,000/μl). A peripheral smear for cytology was normal. Her immune status was normal.\n\nConsequently, a differential diagnosis of multiple hemorrhagic metastases to the brain from a primary lung choriocarcinoma was made. The patient’s husband was informed about the disease condition and the immediate need for the removal of the posterior fossa lesion in order to prevent tonsillar herniation. The patient was in a poor medical condition, so she could not decide on her treatment plan.\n\nThe patient immediately underwent suboccipital craniectomy and excision of the well-encapsulated hemorrhagic lesion from the left cerebellar hemisphere (Figure 6). The patient made an uneventful recovery from the surgery and wound sutures were removed on the seventh day.\n\nHistopathological study of the excised lesion showed diffuse cohesive sheets of trimorphic malignant trophoblasts, consisting of intermediate trophoblasts and cytotrophoblasts rimmed with syncytiotrophoblasts, with central hemorrhage and necrosis (Figure 7). The cells showed striking cytological atypia, high mitotic activity and an absence of villi, consistent with choriocarcinoma.\n\nA chest CT following surgery revealed a vascular right apical lesion (Figure 8).\n\nA final diagnosis of multiple hemorrhagic lesions in the brain from primary lung choriocarcinoma was eventually made. The patient was referred to the National Cancer Centre for chemotherapy. The patient was started on the EMA-CO regimen (Etoposide, Methotrexate and Actinomycin by drip over 2 days, followed by Cyclophosphamide and Oncovin the following week).
The patient’s B-HCG decreased sharply after the first session of chemotherapy (serum B-HCG dropped to 150,000 mIU/ml). The patient was given three cycles of chemotherapy and has been on regular follow-up at the cancer centre.\n\n\nDiscussion\n\nPrimary lung choriocarcinoma is a very rare entity, with fewer than 50 cases reported to date2. This case report discusses an even rarer phenomenon of multiple hemorrhagic metastases in the brain from primary lung choriocarcinoma.\n\nThere are various theories on the etiology of primary lung choriocarcinoma. The foremost is embolism of trophoblastic cells into the lung vasculature during abortion, or even normal delivery, where the cells then proliferate3. This may have occurred in the present case. Other theories discuss the probable role of primordial germ cells and the genesis of metaplasia4. Choriocarcinoma can have either a gestational or non-gestational origin5,6.\n\nSometimes large cell anaplastic carcinoma, mediastinal germ cell tumors and bronchogenic carcinoma show ectopic HCG secretion, but this elevation is mild7,8. A high B-HCG level, as in our case, suggests a trophoblastic origin1.\n\nThe pathogenesis behind multiple hemorrhagic lesions in the brain is the tendency of such malignant trophoblastic cells to invade vessels, sometimes even leading to distal aneurysms9. This entity responds poorly to radiation10. Therefore, the preferred therapy for gestational trophoblastic neoplasm is the EMA-CO regimen, similar to what was prescribed to our patient11. The prognosis of the condition is poor, with previous reports of a 5-year survival of <5%. However, recent advancements in chemoradiation therapy have helped to increase the overall 5-year survival rate up to 50%4,12.
A multimodal approach is also required, consisting of neo-adjuvant chemotherapy followed by excision of the lung lesion9.\n\n\nConclusions\n\nPrimary lung choriocarcinoma metastasis should be recognized as a differential diagnosis in hemorrhagic lesions of the brain, especially in patients of child-bearing age. Early diagnosis and rapid initiation of therapy are the cornerstone of a better outcome in such patients.\n\n\nConsent\n\nWritten informed consent for the publication of the clinical case study and accompanying images was obtained from the patient.",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nRhee YK, Kim JH, Kim WH, et al.: Primary choriocarcinoma of the lung. Korean J Intern Med. 1987; 2(2): 269–272. PubMed Abstract | Publisher Full Text | Free Full Text\n\nUmemori Y, Hiraki A, Aoe K, et al.: Primary choriocarcinoma of the lung. Anticancer Res. 2004; 24(3b): 1905–1910. PubMed Abstract\n\nTanimura A, Natsuyama H, Kawano M, et al.: Primary choriocarcinoma of the lung. Hum Pathol. 1985; 16(12): 1281–1284. PubMed Abstract | Publisher Full Text\n\nSnoj Z, Kocijancic I, Skof E: Primary pulmonary choriocarcinoma. Radiol Oncol. 2017; 51(1): 1–7. PubMed Abstract | Publisher Full Text | Free Full Text\n\nVegh GL, Szigetvári I, Soltesz I, et al.: Primary pulmonary choriocarcinoma: a case report. J Reprod Med. 2008; 53(5): 369–72. PubMed Abstract\n\nMaestá I, Leite FV, Michelin OC, et al.: Primary pulmonary choriocarcinoma after human chorionic gonadotropin normalization following hydatidiform mole: a report of two cases. J Reprod Med. 2010; 55(7–8): 311–6. PubMed Abstract\n\nFusco FD, Rosen SW: Gonadotropin-producing anaplastic large-cell carcinomas of the lung. N Engl J Med. 1966; 275(10): 507–15. PubMed Abstract | Publisher Full Text\n\nHattori M, Imura H, Matsukura S, et al.: Multiple-hormone producing lung carcinoma. Cancer. 1979; 43(6): 2429–2437. PubMed Abstract | Publisher Full Text\n\nSridhar KS, Saldana MJ, Thurer RJ, et al.: Primary choriocarcinoma of the lung: report of a case treated with intensive multimodality therapy and review of the literature. J Surg Oncol. 1989; 41(2): 93–7. PubMed Abstract | Publisher Full Text\n\nPullar M, Blumbergs PC, Phillips GE, et al.: Neoplastic cerebral aneurysm from metastatic gestational choriocarcinoma. Case report. J Neurosurg. 1985; 63(4): 644–647. 
PubMed Abstract | Publisher Full Text\n\nLurain JR, Singh DK, Schink JC: Primary treatment of metastatic high-risk gestational trophoblastic neoplasia with EMA-CO chemotherapy. J Reprod Med. 2006; 51(10): 767–72. PubMed Abstract\n\nBerthod G, Bouzourene H, Pachinger C, et al.: Solitary choriocarcinoma in the lung. J Thorac Oncol. 2010; 5(4): 574–575. PubMed Abstract | Publisher Full Text"
}
|
[
{
"id": "23780",
"date": "26 Jun 2017",
"name": "Ping Wang",
"expertise": [
"Reviewer Expertise Radiation"
],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe paper tells us “Her immune status was normal”. I think the authors should show us the date of the immune test. Since the patient was found by multiple hemorrhagic metastases to the brain, and the primary lesion from lung, how about the tumor marker from lung? Has she had a gene test of EGFR, ROS1, ALK, T790M, which could be useful if she needed target therapy? Did she need radiation for the lung tumor and when? The author need to discuss these questions.\n\nIs the background of the case’s history and progression described in sufficient detail? Partly\n\nAre enough details provided of any physical examination and diagnostic tests, treatment given and outcomes? Yes\n\nIs sufficient discussion included of the importance of the findings and their relevance to future understanding of disease processes, diagnosis or treatment? Partly\n\nIs the case presented with sufficient detail to be useful for other practitioners? Yes",
"responses": []
},
{
"id": "24086",
"date": "31 Jul 2017",
"name": "Lekhjung Thapa",
"expertise": [
"Reviewer Expertise Neurology"
],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nI must congratulate the author for reporting such a rare medical entity. The case has been nicely described and I am sure that this is going to be useful for other practitioners. However, I feel, few points to be considered in this case report are:\nSixth CN palsies may be better demonstrated on all directions of gaze. In the given picture, it looks like the patient also has BL ptosis!\n\nHistopathology of lung mass if included, would be better.\n\nThe patient has been on follow-up at cancer center. It would be interesting to know the neurological status at follow up.\n\nI think the author should discuss more with data on brain metastasis of choriocarcinoma1.\n\nIs the background of the case’s history and progression described in sufficient detail? Yes\n\nAre enough details provided of any physical examination and diagnostic tests, treatment given and outcomes? Partly\n\nIs sufficient discussion included of the importance of the findings and their relevance to future understanding of disease processes, diagnosis or treatment? Partly\n\nIs the case presented with sufficient detail to be useful for other practitioners? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-740
|
https://f1000research.com/articles/6-736/v1
|
22 May 17
|
{
"type": "Research Article",
"title": "Leptin, insulin like growth factor-I levels and histology-diagnosed placental malaria in an area characterized by unstable malaria transmission in central Sudan",
"authors": [
"Hagir Elsheikh",
"Ishag Adam",
"Elhassan M. Elhassan",
"Ahmed A. Mohammed",
"Ammar H. Khamis",
"Mustafa I. Elbashir",
"Ishag Adam",
"Elhassan M. Elhassan",
"Ahmed A. Mohammed",
"Ammar H. Khamis",
"Mustafa I. Elbashir"
],
"abstract": "Background: There are few published data on the association between leptin, insulin like growth factor-1 (IGF-1) and malaria during pregnancy. This study aimed to investigate maternal and umbilical cord leptin and IGF-1 levels and malaria during pregnancy, and their association - if any - with birth weight. Methods: A cross-sectional study was conducted at Medani, Sudan. Medical and obstetrics history was gathered from each parturient woman (n=175) and malaria was investigated by blood film and placental histology. Maternal and umbilical cord leptin and IGF-1 levels were measured using ELISA. Results: Upon histological examination, 48 women were infected with placental malaria, and 127 were found free from the disease. Out of the 48, 2 of the patients showed signs of active infection, 3 of chronic infection and 43 of previous infection. Placental malaria and preterm delivery were associated with low birth weight (< 2500 g). Younger mothers and primigravidae had a higher risk for placental malaria infection. There was no significant difference in maternal and umbilical cord leptin and IGF-1 levels between women infected with placental malaria and those free from the disease. Conclusions: The current study showed that low birth weight was significantly associated with placental malaria. Young mothers and primigravidae had a higher risk to develop the infection. There was no significant difference in the levels of maternal and umbilical cord leptin and IGF-1 levels between women infected with placental malaria and those free from the disease. Both the levels of maternal and cord leptin and IGF-1were found not to be associated with birth weight. Abbreviations: IGF-1: Insulin like growth factor-1; LBW: Low birth weight; ELISA: Enzyme-linked immunosorbent assay; PM: Placental malaria.",
"keywords": [
"placental malaria",
"birth weight",
"leptin",
"Insulin-like growth factor 1",
"IGF-1"
],
"content": "Introduction\n\nMalaria during pregnancy is a major public health concern, especially in sub-Saharan Africa where there are approximately 125 million pregnant African women living in malaria-endemic regions. Almost one fifth of these pregnant women are at risk of malaria (Dellicour et al., 2010; Desai et al., 2007). Malaria during pregnancy is the main cause of maternal, perinatal and neonatal adverse effects, especially anemia and low birth weight (LBW) (Ahmed et al., 2014; Menendez et al., 2000; Rogerson et al., 2003).\n\nThe pathogenesis of placental malaria and LBW is not fully understood. Leptin is a hormone secreted mainly by adipocytes (Zhang et al., 1994) that can potentiate inflammation by enhancing macrophage phagocytosis (Loffreda et al., 1998; Pacifico et al., 2006). Previous reports have shown that leptin levels were decreased during malarial attack in pregnant women (Conroy et al., 2011), and that these decreased leptin levels were associated with placental malaria infection, as well as low birth weight (Kabyemela et al., 2008a; Kabyemela et al., 2008b).\n\nInsulin-like growth factor-I (IGF-1), also called somatomedin C, is a polypeptide with a sequence similar to that of insulin (Rinderknecht & Humbel, 1978). Recently, maternal and umbilical cord blood levels of IGF-1were investigated in malaria during pregnancy as possible determinants of birth weight (Ayoola et al., 2012; Umbers et al., 2011).\n\nResearch on malaria during pregnancy and its associated adverse effects e.g. LBW is highly valuable for researchers and clinicians as it can yield basic data needed for the future vaccine.\n\nPregnant Sudanese women are susceptible to malaria regardless of their age and parity, and malaria is associated with increased maternal mortality, anemia, LBW, and stillbirths (Adam et al., 2005a; Ali et al., 2011; Bader et al., 2010; Mohammed et al., 2013).\n\nCentral Sudan is characterized by unstable malaria transmission, and P. 
falciparum is the main malaria parasite species reported in the area (Malik et al., 2004). To add to the research on placental malaria during pregnancy that has already been carried out (Alim et al., 2015; Mostafa et al., 2015; Salih et al., 2011), the current study was conducted in central Sudan to investigate the maternal and umbilical cord levels of leptin and IGF-1 in placental malaria infection.\n\n\nMethods\n\nA cross-sectional study was conducted from August to October 2014 in the labor ward of the Medani Maternity Hospital. Women with singleton pregnancies were approached to participate in the study and signed an informed consent form. Women with twins, hypertension, diabetes mellitus and antepartum hemorrhage were excluded from the study. Socio-demographic data (age, parity, residence and gestational age) and data on obstetric history, medical history, and bed net use were gathered using a structured questionnaire that was completed by a trained medical officer in the local language (Arabic). Maternal weight and height were measured and body mass index (BMI) was calculated and expressed as weight (kg)/height (m)2. Maternal hemoglobin concentrations were estimated (HemoCue AB, Ängelholm, Sweden). Newborns were weighed immediately following birth using a Salter scale and the sex of each newborn was recorded. The total sample size was calculated assuming that at least 23% of parturient women would have placental malaria infection. To have over 80% power to detect a difference of 5% at α = 0.05, we recruited 175 women, assuming that 10% of women might not respond or might have incomplete data.\n\n\nGiemsa-stained blood smears for light microscopy\n\nMaternal, placental, and umbilical cord blood films were prepared for testing. Slides were stained with 10% Giemsa.
In slides positive for malaria, the number of asexual parasites was counted per 200 leukocytes, assuming a leukocyte count of 8000 leukocytes per μl (for thick films), or per 1000 red blood cells (for thin films). Blood films were considered negative if no parasites were detected in 100 oil immersion fields of a thick blood film.\n\nThe maternal and umbilical cord blood was then allowed to clot, centrifuged for 10 minutes at 3000 rpm, and the serum separated and stored at −20°C until further analysis.\n\n\nPlacental histology\n\nThe details of the placental histology have been described previously (Alim et al., 2015; Mostafa et al., 2015; Salih et al., 2011). In summary, a 3 cm3 full-thickness sample was obtained from the maternal surface approximately half the distance between the umbilical cord and the edge of the placenta. The placental biopsy samples were immediately placed in 10% neutral buffered formalin. Buffer was used to prevent the formation of formalin pigment, which might be difficult to differentiate from malaria pigment (Bulmer et al., 1993a). The placental biopsy samples were then embedded in paraffin wax and sectioned. In every case, the paraffin sections were stained with hematoxylin-eosin and Giemsa stains. Slides were read by a pathologist who remained blinded to the clinical characteristics of the samples.
Placental malaria infection was characterized using parameters previously described by Bulmer et al.: uninfected (no parasites or pigment), acute (parasites in intervillous spaces), chronic (parasites in maternal erythrocytes and pigment in fibrin, or cells within fibrin and/or chorionic villous syncytiotrophoblast or stroma), and previous (no parasites, and pigment confined to fibrin or cells within fibrin) (Bulmer et al., 1993b).\n\n\nELISA for measuring leptin and IGF-1 levels\n\nMaternal and umbilical cord serum levels of leptin and IGF-1 were measured using ELISA kits (DRG Diagnostics, Marburg, Germany), and the manufacturer's instructions were strictly followed.\n\n\nStatistical analysis\n\nThe data analyses were performed using SPSS statistical software for Windows (version 18.0). Statistical significance was set at a P value < 0.05. To compare means and proportions between groups, Student’s t-test and the Chi-square test were used, respectively. For non-parametric data, differences between two groups were assessed using the Mann-Whitney test. Univariate and multivariate analyses were performed with a logistic regression model in which placental malaria infection was the dependent variable and expected risk factors (mother’s age, parity, mother’s weight, mother’s haemoglobin, educational level, residence, use of bed net, antenatal care attendance, use of folic acid supplements and mother’s serum leptin and IGF-1) were the independent variables. Odds ratios (OR) and 95% confidence intervals (CI) were calculated. Linear regression models were set up to investigate the factors associated with the mother's haemoglobin level and birth weight. Predictor variables for the mother’s haemoglobin model were: antenatal care attendance, parity, BMI, maternal serum leptin and use of folic acid supplements.
Predictor variables for birth weight were: mother’s age, antenatal care attendance, mother’s haemoglobin, placental malaria, mother’s height and gestational age at delivery.\n\n\nEthics\n\nThe study received ethical clearance from the Research Board at the Faculty of Medicine, University of Khartoum, Sudan (approval number: 2-2011).\n\n\nSelection of participants\n\nPregnant women who delivered at Medani Maternity Hospital from August through to October 2014 were recruited for this study, following written informed consent. All participants finally included in the study had to satisfy the selection criteria and have none of the exclusion criteria.\n\n\nResults\n\nOut of the 175 women enrolled in the study, 77 (44%) were primiparae. The majority of them had rural residency (105; 60.0%) and used bed nets (157; 89.7%) during the index pregnancy (Dataset 1 (Elsheikh et al., 2017)).\n\nIn total, 36 (20.6%) had blood group A, 21 (12%) had blood group B, four (2.3%) had blood group AB, and 113 (64.6%) had blood group O. The mean (SD) hemoglobin level was 10.2 (1.1) g/dl, and 129 (73.7%) of the women were anemic (hemoglobin <11 g/dl). Eighteen (10.5%) women delivered low birth weight neonates (<2500 g) (Dataset 1 (Elsheikh et al., 2017)).\n\nForty-eight women were infected with placental malaria (PM+), and 127 were free from the disease (PM−). Of the 48 PM+ patients, 2 (4%) had active infection, 3 (6%) had chronic infection and 43 (90%) past infection (Dataset 1 (Elsheikh et al., 2017)).\n\nThe mean age (± SD) of PM+ patients was 26 ± 4.8 years and ranged from 17 to 38 years. The mean age (± SD) of PM− patients was 28 ± 6 years, with a range of 18 to 41 years (Table 1). Younger women (25–30 years) were significantly more often infected with PM (P = 0.02) (Dataset 1 (Elsheikh et al., 2017)).\n\nData are means (SD).
Women with placental malaria (PM) infection were on average younger (p = 0.02) and delivered lighter neonates than uninfected women (p = 0.054, not significant).\n\nMoreover, babies born to women with PM tended to be in the LBW (<2500 g) category more often than those born to non-infected women, but the p-value failed to reach the significance level (p = 0.054). Maternal weight, BMI, gravidity, gestational age at delivery and hemoglobin levels did not differ significantly between groups (Dataset 1 (Elsheikh et al., 2017)).\n\n\nPlacental malaria associated low birth weight\n\nLow birth weight (<2500 g) was significantly associated with placental malaria (N = 172, p = 0.006) (Dataset 1 (Elsheikh et al., 2017)).\n\n\nRisk factors for placental malaria\n\nUnivariate and multivariate analysis demonstrated that only the mother’s age and parity were significant risk factors (p-values were 0.008 for mother’s age and 0.009 for parity).\n\nYounger mothers and primigravidae had a higher risk of PM. The risk of infection was lower for older mothers, with an odds ratio (OR) of 0.881 (p = 0.008, 95% CI: 0.802–0.968); for each additional year in age, the odds of placental malaria were lowered by a factor of 0.881. The OR for parity was 4.3 (p = 0.009, 95% CI: 1.45–12.998) (Table 2, Dataset 1 (Elsheikh et al., 2017)).\n\n*: Maternal age showed a statistically significant association with placental malaria in univariate and multivariate analysis.\n\n*1: Each additional year of age decreases the risk of placental malaria by about 11.9%.\n\n*2: The risk of placental malaria for primigravidae is 4.3 times higher than for multigravidae.\n\n\nSerum levels of leptin and IGF-1\n\nThe levels of leptin were higher in LBW infants and their mothers, whilst IGF-1 levels were higher in normal weight infants and their mothers. However, these differences failed to reach statistical significance (Table 3).
Non-infected mothers and their infants showed higher levels of leptin and IGF-1 than infected ones (Figure 1 and Figure 2), but these differences also failed to reach statistical significance (Table 4, Dataset 1 (Elsheikh et al., 2017)).\n\nThe data are shown as median (interquartile range).\n\n(A) Boxplot of maternal serum leptin concentrations in women with and without placental malaria. (B) Boxplot of umbilical cord serum leptin concentrations in women with and without placental malaria. Maternal and cord leptin levels were measured in serum samples of non-infected women (PM−, n = 122, 5 missed samples) and women with placental malaria (PM+, n = 47, 1 missed sample). The Mann-Whitney test was used to compare the levels of maternal and umbilical cord leptin between the two groups. PM+ women showed lower levels of maternal and cord leptin but these differences were not statistically significant (Dataset 1).\n\n(A) Boxplot of maternal insulin-like growth factor-I (IGF-I) concentrations in women with and without placental malaria. (B) Boxplot of umbilical cord IGF-I concentrations in women with and without placental malaria. Maternal and cord IGF-1 levels were measured in serum samples of non-infected women (PM−, n = 122, 5 missed samples) and women with placental malaria (PM+, n = 47, 1 missed sample). The Mann-Whitney test was used to compare the levels of maternal and umbilical cord IGF-1 between the two groups.
PM+ women showed lower levels of maternal and cord IGF-I but these differences were not statistically significant (Dataset 1).\n\nThe data are shown as median (interquartile range).\n\nLinear regression analysis showed that gestational age had the strongest positive effect on birth weight (β = 0.191, p = 0.01), followed by antenatal care attendance (p = 0.043) and mother’s age (p = 0.85, not significant) (Table 5, Dataset 1 (Elsheikh et al., 2017)).\n\n*: Birth weight was significantly affected by gestational age and antenatal care attendance.\n\n\nDiscussion\n\nThe main findings of the current study are that placental malaria is significantly associated with LBW, and that younger mothers and primigravidae had a higher risk of PM infection. There was no significant difference in leptin and IGF-1 levels between PM+ and PM− women and their infants, or between LBW infants and their mothers and normal weight infants and their mothers. Maternal and umbilical cord leptin and IGF-1 levels were not associated with birth weight.\n\nOur results coincide with what has been reported previously about LBW being significantly associated with placental malaria (Albiti et al., 2010; Aribodor et al., 2009; Menendez et al., 2000). However, some studies conducted in different areas of Sudan did not report this association. A study conducted in Gadarif hospital, in an area characterized by unstable malaria transmission in eastern Sudan, found that placental malaria affects pregnant women regardless of their parity and has no effect on birth weight (Salih et al., 2011). Another study showed that, while placental malaria infections that were positive by histology were not associated with LBW, submicroscopic malaria infections (diagnosed by PCR) were (Mohammed et al., 2013). Moreover, a study conducted by Batran et al.
(Batran et al., 2013) found that placental infections had no effect on LBW or anemia.\n\nMany other studies have also observed that the mother’s age and parity are risk factors for placental malaria (Falade et al., 2010; Ndeserua et al., 2015; Ojurongbe et al., 2010; Tako et al., 2005; Walker et al., 2013), which is in contrast with our previously published results (Adam et al., 2005a; Adam et al., 2005b; Adam et al., 2007; Adam et al., 2009; Adam et al., 2011; Albiti et al., 2010).\n\nThe study showed that leptin levels were higher in non-infected mothers and their infants than in infected ones; however, these differences failed to reach statistical significance. This trend is in line with a study conducted in Malawi, which found a significant reduction of leptin levels in mothers infected with PM and accordingly suggested leptin as an informative biomarker for the diagnosis of PM (Conroy et al., 2011). Another study in Tanzania also reported the same finding (Kabyemela et al., 2008a). Kabyemela and colleagues (Kabyemela et al., 2008b) also investigated the effect of PM on the relationship between cord leptin levels and birth weight. They found that cord leptin had a strong positive relationship with birth weight in offspring of PM− women (P = 0.02 to P < 0.0001) but not in offspring of PM+ women, in whom the association did not reach the significance level.\n\nAlthough the current study failed to detect a significant difference in IGF-1 levels between PM+ and PM− women, one study has shown that placental malaria-associated inflammation disturbs maternal and fetal levels of IGFs, which regulate fetal growth (Umbers et al., 2011).
This may be one mechanism by which placental malaria leads to fetal growth restriction, but that study did not report the effect size of PM on IGF-1 levels.\n\nIt is worth mentioning that the differences between the current study and the latter studies, which reported low leptin (Conroy et al., 2011; Kabyemela et al., 2008a; Kabyemela et al., 2008b) and IGF-1 levels in maternal and umbilical cord serum (Umbers et al., 2011), might be due to the duration of the malaria infection itself. While the majority of malaria infections in the current study were past placental infections (4% of PM+ patients had active infection, 6% chronic and 90% past infection), the latter studies reported results from active placental infections. Furthermore, submicroscopic placental malaria infection using PCR (polymerase chain reaction) was not investigated in the current study.\n\nWe have recently reported that women with submicroscopic malaria were at a higher risk of having LBW infants (Mohammed et al., 2013). Likewise, Adegnika et al. have reported that microscopic and submicroscopic P. falciparum infection, but not inflammation (C-reactive protein) caused by infection, is associated with low birth weight (Adegnika et al., 2006).\n\nThe main limitation of this study was that we relied on a single measurement of leptin and IGF-1 levels at delivery; however, it was not feasible to obtain these levels in the infants before birth. Most studies relating umbilical cord blood IGF-1 levels and birth weight reported single measurements at birth (Ong et al., 2000; Yang & Yu, 2000), with similar findings. Another limitation was that we did not measure the concentrations of any other components of the IGF axis, including growth hormone, insulin, IGF binding proteins (IGFBP1-5), and receptors (IGF-1R and 2R), to further establish the potential implication of the IGF system in fetal growth.
A further limitation is that, although we were interested only in the biologically active (free) form of IGF-1, the ELISA technique used in this study measures the total amount of IGF-1 (free and protein-bound) in serum.\n\n\nConclusions\n\nThe current study shows that there is no statistically significant difference in the levels of maternal and cord leptin and IGF-1 between PM+ and PM− women, or between women who delivered LBW infants and those who delivered normal weight ones. The main finding is that placental malaria is significantly associated with LBW. Neither maternal and umbilical cord leptin levels nor IGF-1 levels were associated with birth weight.\n\n\nData availability\n\nDataset 1. The file contains data on socio-demographics (age, parity, residence and gestational age), obstetric and medical history, bed net use, maternal weight, height and BMI, maternal hemoglobin, infant birth weights, and maternal and umbilical cord leptin and IGF-1 levels for each participant. HME has confirmed that all raw data provided with this manuscript have been de-identified. DOI, 10.5256/f1000research.10641.d158697 (Elsheikh et al., 2017)",
"appendix": "Author contributions\n\n\n\nHME and IA designed the experiments. IA conceived the study and participated in study coordination. EME conducted the clinical work. AAM performed the pathological analysis. HME carried out the laboratory work, statistical analysis and study coordination. HME, IA and MIE prepared the first draft of the manuscript. MIE contributed to the experimental design. AHK contributed to the statistical analysis. All authors were involved in the revision of the draft manuscript and have agreed to the final content.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis work was partially funded by the Ministry of Higher Education (Khartoum, Sudan).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgments\n\nWe are grateful to the staff of the Department of Immunology of the National Laboratory of Public Health (Khartoum, Sudan), Dr. Kawther Abdel Galeil, Mohammed Salih, Mohammed Karrar Abdalla, Sayed Mutasim and Omer Mahjoob, who participated in laboratory analysis. We are also grateful to Ms. Azza Osman Mohamed Osman for statistical consultation.\n\n\nReferences\n\nAdam I, Adam GK, Mohmmed AA, et al.: Placental malaria and lack of prenatal care in an area of unstable malaria transmission in eastern Sudan. J Parasitol. 2009; 95(3): 751–752. PubMed Abstract | Publisher Full Text\n\nAdam I, A-Elbasit IE, Salih I, et al.: Submicroscopic Plasmodium falciparum infections during pregnancy, in an area of Sudan with a low intensity of malaria transmission. Ann Trop Med Parasitol. 2005b; 99(4): 339–344. PubMed Abstract | Publisher Full Text\n\nAdam I, Babiker S, Mohmmed AA, et al.: ABO blood group system and placental malaria in an area of unstable malaria transmission in eastern Sudan. Malar J. 2007; 6: 110.
PubMed Abstract | Publisher Full Text | Free Full Text\n\nAdam I, Elhassan EM, Mohmmed AA, et al.: Malaria and pre-eclampsia in an area with unstable malaria transmission in Central Sudan. Malar J. 2011; 10: 258. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAdam I, Khamis AH, Elbashir MI: Prevalence and risk factors for Plasmodium falciparum malaria in pregnant women of eastern Sudan. Malar J. 2005a; 4: 18. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAdegnika AA, Verweij JJ, Agnandji ST, et al.: Microscopic and sub-microscopic Plasmodium falciparum infection, but not inflammation caused by infection, is associated with low birth weight. Am J Trop Med Hyg. 2006; 75(5): 798–803. PubMed Abstract\n\nAhmed R, Singh N, ter Kuile FO, et al.: Placental infections with histologically confirmed Plasmodium falciparum are associated with adverse birth outcomes in India: a cross-sectional study. Malar J. 2014; 13: 232. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAlbiti AH, Adam I, Ghouth AS: Placental malaria, anaemia and low birthweight in Yemen. Trans R Soc Trop Med Hyg. 2010; 104(3): 191–4. PubMed Abstract | Publisher Full Text\n\nAli AA, Elhassan EM, Magzoub MM, et al.: Hypoglycaemia and severe Plasmodium falciparum malaria among pregnant Sudanese women in an area characterized by unstable malaria transmission. Parasit Vectors. 2011; 4: 88. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAlim A, E Bilal N, Abass AE, et al.: Complement activation, placental malaria infection, and birth weight in areas characterized by unstable malaria transmission in central Sudan. Diagn Pathol. 2015; 10: 49. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAribodor DN, Nwaorgu OC, Eneanya CI, et al.: Association of low birth weight and placental malarial infection in Nigeria. J Infect Dev Ctries. 2009; 3(8): 620–623.
PubMed Abstract | Publisher Full Text\n\nAyoola OO, Whatmore A, Balogun WO, et al.: Maternal malaria status and metabolic profiles in pregnancy and in cord blood: relationships with birth size in Nigerian infants. Malar J. 2012; 11: 75. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBader E, Alhaj AM, Hussan AA, et al.: Malaria and stillbirth in Omdurman Maternity Hospital, Sudan. Int J Gynaecol Obstet. 2010; 109(2): 144–6. PubMed Abstract | Publisher Full Text\n\nBatran SE, Salih MM, Elhassan EM, et al.: CD20, CD3, placental malaria infections and low birth weight in an area of unstable malaria transmission in Central Sudan. Diagn Pathol. 2013; 8: 189. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBulmer JN, Rasheed FN, Francis N, et al.: Placental malaria. I. Pathological classification. Histopathology. 1993a; 22(3): 211–8. PubMed Abstract | Publisher Full Text\n\nBulmer JN, Rasheed FN, Morrison L, et al.: Placental malaria. II. A semi-quantitative investigation of the pathological features. Histopathology. 1993b; 22(3): 219–25. PubMed Abstract | Publisher Full Text\n\nConroy AL, Liles WC, Molyneux ME, et al.: Performance characteristics of combinations of host biomarkers to identify women with occult placental malaria: a case-control study from Malawi. PLoS One. 2011; 6(12): e28540. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDellicour S, Tatem AJ, Guerra CA, et al.: Quantifying the number of pregnancies at risk of malaria in 2007: a demographic study. PLoS Med. 2010; 7(1): e1000221. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDesai M, ter Kuile FO, Nosten F, et al.: Epidemiology and burden of malaria in pregnancy. Lancet Infect Dis. 2007; 7(2): 93–104. PubMed Abstract | Publisher Full Text\n\nElsheikh H, Adam I, Elhassan EM, et al.: Dataset 1 in: Leptin, insulin like growth factor-I levels and histology-diagnosed placental malaria in an area characterized by unstable malaria transmission in central Sudan. 
F1000Research. 2017. Data Source\n\nFalade CO, Tongo OO, Ogunkunle OO, et al.: Effects of malaria in pregnancy on newborn anthropometry. J Infect Dev Ctries. 2010; 4(7): 448–53. PubMed Abstract | Publisher Full Text\n\nKabyemela ER, Fried M, Kurtis JD, et al.: Fetal responses during placental malaria modify the risk of low birth weight. Infect Immun. 2008b; 76(4): 1527–1534. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKabyemela ER, Muehlenbachs A, Fried M, et al.: Maternal peripheral blood level of IL-10 as a marker for inflammatory placental malaria. Malar J. 2008a; 7: 26. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLoffreda S, Yang SQ, Lin HZ, et al.: Leptin regulates proinflammatory immune responses. FASEB J. 1998; 12(1): 57–65. PubMed Abstract\n\nMalik EM, Atta HY, Weis M, et al.: Sudan Roll Back Malaria Consultative Mission: Essential Actions to Support the Attainment of the Abuja Targets. Sudan RBM Country Consultative Mission Final Report. Geneva: Roll Back Malaria Partnership; 2004. Reference Source\n\nMenendez C, Ordi J, Ismail MR, et al.: The impact of placental malaria on gestational age and birth weight. J Infect Dis. 2000; 181(5): 1740–5. PubMed Abstract | Publisher Full Text\n\nMohammed AH, Salih MM, Elhassan EM, et al.: Submicroscopic Plasmodium falciparum malaria and low birth weight in an area of unstable malaria transmission in Central Sudan. Malar J. 2013; 12: 172. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMostafa AG, Bilal NE, Abass AE, et al.: Coagulation and Fibrinolysis Indicators and Placental Malaria Infection in an Area Characterized by Unstable Malaria Transmission in Central Sudan. Malar Res Treat. 2015; 2015: 369237. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNdeserua R, Juma A, Mosha D, et al.: Risk factors for placental malaria and associated adverse pregnancy outcomes in Rufiji, Tanzania: a hospital based cross sectional study. Afr Health Sci. 2015; 15(3): 810–8. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nOjurongbe O, Oyedeji SI, Oyibo WA, et al.: Molecular surveillance of drug-resistant Plasmodium falciparum in two distinct geographical areas of Nigeria. Wien Klin Wochenschr. 2010; 122(23–24): 681–5. PubMed Abstract | Publisher Full Text\n\nOng K, Kratzsch J, Kiess W, et al.: Size at birth and cord blood levels of insulin, insulin-like growth factor I (IGF-I), IGF-II, IGF-binding protein-1 (IGFBP-1), IGFBP-3, and the soluble IGF-II/mannose-6-phosphate receptor in term human infants. The ALSPAC Study Team. Avon Longitudinal Study of Pregnancy and Childhood. J Clin Endocrinol Metab. 2000; 85(11): 4266–9. PubMed Abstract | Publisher Full Text\n\nPacifico L, Di Renzo L, Anania C, et al.: Increased T-helper interferon-gamma-secreting cells in obese children. Eur J Endocrinol. 2006; 154(5): 691–697. PubMed Abstract | Publisher Full Text\n\nRinderknecht E, Humbel RE: The amino acid sequence of human insulin-like growth factor I and its structural homology with proinsulin. J Biol Chem. 1978; 253(8): 2769–76. PubMed Abstract\n\nRogerson SJ, Pollina E, Getachew A, et al.: Placental monocyte infiltrates in response to Plasmodium falciparum malaria infection and their association with adverse pregnancy outcomes. Am J Trop Med Hyg. 2003; 68(1): 115–9. PubMed Abstract\n\nSalih MM, Mohammed AH, Mohmmed AA, et al.: Monocytes and macrophages and placental malaria infections in an area of unstable malaria transmission in eastern Sudan. Diagn Pathol. 2011; 6: 83. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTako EA, Zhou A, Lohoue J, et al.: Risk factors for placental malaria and its effect on pregnancy outcome in yaounde, Cameroon. Am J Trop Med Hyg. 2005; 72(3): 236–42. PubMed Abstract\n\nUmbers AJ, Boeuf P, Clapham C, et al.: Placental malaria-associated inflammation disturbs the insulin-like growth factor axis of fetal growth regulation. J Infect Dis. 2011; 203(4): 561–9. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nWalker PG, Griffin JT, Cairns M, et al.: A model of parity-dependent immunity to placental malaria. Nat Commun. 2013; 4: 1609. PubMed Abstract | Publisher Full Text | Free Full Text\n\nYang SW, Yu JS: Relationship of insulin-like growth factor-I, insulin-like growth factor binding protein-3, insulin, growth hormone in cord blood and maternal factors with birth height and birthweight. Pediatr Int. 2000; 42(1): 31–36. PubMed Abstract | Publisher Full Text\n\nZhang Y, Proenca R, Maffei M, et al.: Positional cloning of the mouse obese gene and its human homologue. Nature. 1994; 372(6505): 425–432. PubMed Abstract | Publisher Full Text"
}
|
[
{
"id": "23495",
"date": "15 Jun 2017",
"name": "Edward Kabyemela",
"expertise": [
"Reviewer Expertise Immunology",
"Chemical Pathology"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved: The paper is scientifically sound in its current form and only minor, if any, improvements are suggested.\n\nApproved with reservations: A number of small changes, sometimes more significant revisions, are required to address specific details and improve the paper's academic merit.\n\nNot approved: Fundamental flaws in the paper seriously undermine the findings and conclusions.\n\nDespite heightened global efforts to control and possibly eliminate malaria in the foreseeable future, the disease remains a public health problem in many tropical countries. Since malaria in pregnancy predisposes both mother and infant to adverse outcomes, research on the pathogenesis of these outcomes is important, as it can lead us to new interventions.\nThe merit of the paper is that it has explored both leptin and IGF-I (in the same setting) and their effect on placental malaria (PM) related low birth weight (LBW). This article reports the following main findings:\n1. Primigravidity and young age increase the risk that pregnant women will be infected with malaria.\n2. Placental malaria and pre-term delivery are associated with low birth weight (LBW).\n3. There is no significant difference in the cord blood levels of leptin or IGF-1 in offspring of infected and non-infected mothers.\n4. There is no significant difference in the maternal levels of leptin or IGF-1 between infected and non-infected mothers.\n5. Cord blood levels of leptin or IGF-1 are not associated with birth weight.\n6. Maternal blood levels of leptin or IGF-1 are not associated with birth weight.\nResults 1 and 2 above largely confirm previous literature from areas where malaria transmission is endemic. The study is based on a substantial proportion of women (90% of the sample) with past malaria infection (by histology).
The results differ from many other studies, which were based on a substantial proportion of women with active placental malaria infection, and this has been clearly mentioned by the authors.\nComments on specific parts of the article:\nIntroduction\nThe Introduction contains numerous sentences which I think need slight re-wording to make them convey the intended message. Examples are:\n'there are approximately 125 million pregnant women living in malaria endemic regions' - revise to 'approximately 125 million women in malaria endemic regions become pregnant each year'\n'the pathogenesis of placental malaria and LBW is not fully understood' - revise to 'the pathogenesis of placental malaria related LBW is not fully understood'\nMethods\nThis section is brief. Some important information is needed to allow readers to relate the findings to the methodology. Specifically, it will be important to reveal the following:\nWhat was the intensity of malaria during the study period? Was it a low or high transmission season? If a low season, this may explain the very low prevalence of active infections reported in the study.\nWhat was the basis for expecting 23% of women in the study population to have placental malaria?\nIt will be good for the authors to expand on how the blood samples from the mother and the newborn were collected.\nSamples were collected in 2014. Were the laboratory analyses of these samples run in 2014 or 2017? Please clarify this, since the storage temperature is -20°C and not, for example, -70°C.\nHow exactly was the gestational age determined?\n\nStatistical analysis\nThe authors used different statistical approaches to test for associations between different variables in this study. This is a very commendable approach. However, I have a few observations to make:\nIt is not clear how the independent variables for the risk of PM were identified.
Specifically, it is unclear why mother's weight, mother's hemoglobin, use of folic acid supplements, and mother's serum leptin and IGF-I were included as independent variables for placental malaria. It will be useful for readers to know how these variables increase/decrease the risk of PM. Any literature relating these to the risk of PM is needed. If not available, then the analysis may need to be revisited. Actually, if they have the data, they may consider iron supplementation as an independent variable in their analysis (studies relating Fe supplementation to the risk of PM are available). Were these mothers on SP IPTp? Perhaps this justifies the inclusion of folic acid supplementation as an independent variable.\n\nSimilarly, it is not clear how the independent variables for maternal hemoglobin levels were identified. Why were mother's leptin and IGF-I levels included as independent variables for maternal hemoglobin levels? It will be useful for readers to know how these variables increase/decrease the risk of anemia. Any literature relating these to the risk of anemia is needed. If not available, then the analysis may need to be revisited.\n\nResults\nReaders may want to know what the results of the malaria microscopy studies were.\n\nThere are discrepancies between the numbers in the text and tables. Maternal age: uninfected: Table (27.9), text (28); infected: Table (25.7), text (26). Maternal hemoglobin (all women): Table (10.3), text (10.2). Consistency will be welcome.\n\nSome of the reported results are not shown. It is stated that \"Moreover, babies born to women with PM tended to be in the LBW (<2500 g) category more often than those born to non-infected women, but the p-value failed to reach significance level (p = 0.054)\".
What was the proportion of LBW infants among PM+ mothers versus the proportion of LBW infants among PM- mothers?\n\nIt is stated that PM was associated with LBW: \"LBW (< 2500 g) was significantly associated with placental malaria (N= 172, p =0.009)\". What is this N = 172?\n\nIn Table 2, univariate and multivariate analyses are shown. In the univariate analysis, none of the variables investigated was associated with placental malaria. In the multivariate analysis, maternal age and parity are associated with PM. Can the authors explain this paradox?\n\nTable 3 shows that neither leptin nor IGF-I levels differ between LBW babies of PM+ and PM- mothers. The sample size is now 166 instead of 175. Can the authors indicate why 9 subjects are missing?\n\nIt is known that cord blood leptin levels correlate with fetal size. It was previously shown that cord leptin levels are lower in LBW babies born to PM- but not PM+ mothers, probably indicating that PM disrupts the normal relationship between leptin and fetal growth. The analysis in Table 3 would benefit from separating LBW infants into PM+ and PM- groups to see what the relationship would be with levels of leptin and also IGF-I.\n\nDiscussion\nA detailed discussion on why many findings in this area (PM) from Sudan differ from findings elsewhere would be most welcome.\n\nIn conclusion: This study explores an important area of public health in Africa. The study is largely based on past malaria infections in pregnant women. The main results largely confirm previous findings. The study may be missing important aspects which could become clear if the statistical analysis is revisited.\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nIs the study design appropriate and is the work technically sound? Partly\n\nAre sufficient details of methods and analysis provided to allow replication by others?
Partly\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nPartly\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": []
},
{
"id": "23068",
"date": "19 Jun 2017",
"name": "Chloe R. McDonald",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThe authors present an analysis of two biomarkers, Leptin and IGF-1, quantified in maternal and umbilical cord samples collected at delivery. The results are analyzed in association with malaria infection (by blood smear and/or histology) in pregnancy, a leading contributor to adverse birth outcomes in malaria endemic regions. I appreciate the authors’ focus on this important subject matter and present the comments below for their consideration. My comments mainly focus on providing additional detail within the article to support the analysis, results and interpretation presented.\n\nComments:\nAbstract\nThe abstract states “Placental malaria and preterm delivery were associated with low birth weight (<2500g).” The authors don’t provide any information in the methods around gestational dating (LMP, ultrasound etc.) and don’t provide any analysis regarding the association with preterm birth. I would remove the association with, and discussion around, preterm birth.\n\nIntroduction\nThe reference to Dellicour et al. 2010 and 125 million pregnancies includes geographies outside of sub-Saharan Africa. The first two sentences are a bit confusing and might be amended to read “number of pregnancies at risk, number in areas endemic for P. falciparum”, since that’s the focus of the study. I would also amend “the main cause” to “a main cause”, since malaria is an important contributor but other factors (nutrition, lack of access to antenatal care etc.)
are also principal contributors to adverse outcomes. I think the introduction could be expanded to include more background around the selected biomarkers and the rationale for selecting them, e.g. the relationship between Leptin and metabolic function (BMI etc.) and pregnancy, perhaps in relation to the nutritional status of women in the cohort. The authors state that “the pathogenesis of placental malaria and LBW is not fully understood” and then move into Leptin. It might be useful to provide some rationale around how the authors propose Leptin and IGF-1 play a role in the pathogenesis of LBW resulting from placental malaria. I found the paragraph about vaccine development a bit confusing. I would provide more information around how the results of this study (or research on the effects of placental malaria) might contribute to the development of a vaccine, or potentially remove that paragraph.\n\nMethods\nProvide background on any IPTp or anti-malaria clinical care the women in this cohort received in pregnancy (in particular in light of the relatively low levels of active infection); if women were receiving IPTp, the authors should comment on how this may impact their results in the discussion (if not, the authors should state why women in this cohort did not receive treatment).\nProvide a rationale for excluding women with hypertension and diabetes mellitus (and how was this diagnosis performed to ensure that the cohort doesn’t include any women with hypertension or diabetes?).\nProvide information around gestational dating (LMP, ultrasound?).\nProvide additional information around the variable “antenatal care attendance” (e.g. is this the total number of visits, the number of women reaching 3 or 4 visits, or the number of women beginning visits in the first or second trimester, and were these data collected by questionnaire or from the clinics?).\nAll variables presented in Table 2 should have more information (in table and in methods), e.g.
maternal age (years), weight (kg), education level (years), residence (type?), hemoglobin (g/dL), use of bed net (yes/no or months during pregnancy?), maternal leptin (ng/mL), etc.\nMissing data should be reported (e.g. are they missing data on birth weight? 175 - 166 = 9 data points on birth weight appear to be missing). If these missing cases are home deliveries and/or missing because of very early delivery, that could confound the results. Also, in Table 4, n = 47 “infected” is given, but the authors report n = 48 cases of PM+ in the results; the missing data is reported in the figure legend but should also be included in the tables/methods.\nAre the authors confident that -20°C is sufficient? Depending on how long the samples were stored for, that might not be sufficient (compared with -80°C) to preserve the samples.\nProvide the name/institute of where the pathology took place.\nWere the samples analyzed (by ELISA) in duplicate or triplicate?\nIf the numbers of active (n = 2) and chronic (n = 3) infections are low (n = 5), is it appropriate to analyze that group separately? The authors should provide a rationale for collapsing active, chronic and past infections together in the analyses.\nIf the study was powered with a primary outcome of placental malaria, do they have sufficient power for the analysis around LBW if only n = 18 cases of LBW are reported?\nAuthors should provide a rationale for the co-variates selected (a priori based on associations with outcomes, or based on analysis-association with the outcome of interest).\n\nResults\nAge and parity are likely highly correlated.
That should be mentioned in the results (and it potentially affects the analysis; for example, including co-variates that are highly correlated in a multivariate model can influence the results).\nThe authors present the breakdown of their cohort by blood group; they may want to provide a rationale as to why they present those results, or some interpretation of the results in the discussion (did they examine blood group in relation to infection?).\nSince the results are presented with LBW as a primary outcome (in association with malaria), the authors may want to acknowledge (or examine, if they have the relevant data) the relative contribution of preterm birth and small-for-gestational-age outcomes to LBW (which will be made up of both PTB and SGA babies).\nThe ELISA results should include the inter-assay and intra-assay co-efficients of variability (CV).\nThe authors should state “maternal” or “cord blood” (or both) when presenting their results (e.g. in paragraph 1 under the title “Serum levels of leptin and IGF-1”), as this is how the results are presented in the tables (they may also want to emphasize in the results that the maternal blood samples were collected at delivery).\nUnder the paragraph entitled “Placental malaria associated with low birth weight”, I was unclear why the reference to Elsheikh et al. was included.\nIn all tables, include what statistical tests were performed (e.g. P(t-test) is the result of what analysis?).\nIn Table 1, I would avoid the use of the term “lighter” neonates and use the term “delivered neonates with a lower mean weight in comparison with uninfected women”.\nIn Table 2, I would avoid presenting results within the table footnotes.\n\nDiscussion\nPerhaps provide a discussion around the levels in serum vs. in plasma.\nI would avoid the statement “failed to reach statistical significance” (also stated in the results sections) and say instead (e.g.) “The study showed that leptin…, however, the difference was not statistically significant”.
I’m not sure the authors can say that their results “concur with a study conducted in Malawi” if they’re not reporting statistical significance. Perhaps they could state instead that the study in Malawi also observed lower serum levels of leptin in women with placental malaria infection, but reported a statistically significant difference.\nI would avoid the term “normal weight ones” (last paragraph of the discussion) and say instead “those who delivered at normal birth weight (>2500 g)”.\nThe authors may want to comment on the incidence of placental malaria (~27%) and LBW in this cohort (~11%)… do they think this is low/high? Is it in keeping with previous studies in the same region?\nDepending on how gestational age was measured (e.g. LMP?), the authors may want to acknowledge that the association between birth weight and gestational age (which is to be expected) will be influenced by the method of gestational dating.\n\nMinor comments\nI would recommend reducing the size of the figures and collapsing them into 1 figure with 4 panels (a-d). I would also rename the y-axis titles to read “Leptin in Maternal Blood (ng/mL)”.\nIt’s likely also worth noting that “placental malaria” in this case is all cases of malaria (identified by microscopy and histology).\nIt might be worth exploring the levels of Leptin and IGF-1 in maternal blood vs. cord blood. E.g. does placental malaria infection lower the ratio of Leptin or IGF-1 in maternal blood vs. cord blood, and do women who deliver LBW babies have a lower ratio of IGF-1 in maternal vs. cord blood?\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others?
Partly\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-736
|
https://f1000research.com/articles/6-405/v1
|
31 Mar 17
|
{
"type": "Opinion Article",
"title": "Imagining tomorrow's university: open science and its impact",
"authors": [
"Adina Howe",
"Michael Howe",
"Amy L. Kaleita",
"D. Raj Raman"
],
"abstract": "As part of a recent workshop entitled \"Imagining Tomorrow's University”, we were asked to visualize the future of universities as research becomes increasingly data- and computation-driven, and identify a set of principles characterizing pertinent opportunities and obstacles presented by this shift. In order to establish a holistic view, we take a multilevel approach and examine the impact of open science on individual scholars as well as on the university as a whole. At the university level, open science presents a double-edged sword: when well executed, open science can accelerate the rate of scientific inquiry across the institution and beyond; however, haphazard or half-hearted efforts are likely to squander valuable resources, diminish university productivity and prestige, and potentially do more harm than good. We present our perspective on the role of open science at the university.",
"keywords": [
"open science",
"university",
"education",
"training",
"data",
"impact"
],
"content": "Introduction\n\nAs part of a recent workshop entitled “Imagining Tomorrow's University”, we were asked to visualize the future of universities as research becomes increasingly data- and computation-driven, and to identify a set of principles characterizing pertinent opportunities and obstacles presented by this shift. To establish a holistic view, we take a multilevel approach and examine the impact of open science on individual scholars as well as on the university as a whole. Generally, we agree that increased transparency in the scientific process can broaden and deepen scientific inquiry, understanding, and impact. However, the realization of these outcomes will require significant time, effort, and aptitude to convey the means by which data are transformed into knowledge. We propose that open science can most effectively enable this evolution when it is conceptualized as a multifaceted pathway that includes:\n\nThe provision of accessible and well-described data, along with information about its context1;\n\nThe methodology and mechanisms necessary to reproduce data analyses;\n\nTraining products that provide transparent understanding of how the data can be applied to answer questions.\n\nThus, impactful open science requires investments from individual researchers that are often greater than those that might be needed for “non-open” science. At the university level, open science represents a double-edged sword: when well executed, it can accelerate the rate of scientific inquiry across the institution and beyond; however, haphazard or half-hearted efforts are likely to squander valuable resources and diminish university productivity and prestige, potentially doing more harm than good. Here, we present our perspective on the varying roles of open science.\n\n\nOpen science enables low-barrier collaborations\n\nFor some university researchers, open science can be both powerful and transformative. 
Imagine a research program that generates not only publications but also develops code that can quickly reproduce each analysis and publishable figure with a minimal amount of manual intervention. This structure can provide continuity in a project and accelerate the research enterprise by allowing researchers to rapidly repeat the same analysis on new datasets, all while lowering training and other human capital investments. Included in a publication, this “research notebook” and accompanying datasets (e.g., 2), could be compiled into a tutorial for others in the field who could then repeat this work with their own data – all without the need for formal collaborations. Such approaches can benefit not only the initiating research group but also an entire scientific discipline.\n\n\nOpen science requires significant investment\n\nWhile the opportunities of open science practices hold promise, several costs and obstacles may prevent its realization and impact. A key cost of open science is time – time to format, annotate and publish data and associated metadata; time to learn new tools that allow for automated analysis and reproduction; time to produce scripts with a sufficient level of robustness and documentation to be useful to others3, and so on. Of these, arguably, the least time-consuming step is simply providing access to data. While open data is an important component of open science, it is far from the whole enchilada, and does not provide the broad benefits of open science writ large.\n\nIt would be irresponsible to discuss open data and open science without acknowledging the risk posed to the anonymity that is so central to many human research studies. For example, to promote participant anonymity, data resulting from research currently conducted under the auspices of an IRB may be ineligible for distribution outside of the immediate research team. 
As multiple sources of open data become increasingly available, privacy concerns of this nature are likely to increase along with the prevalence of unintended participant identification4,5. In these cases, the benefits of open science may not stem from sharing data but rather from reproducible analyses that may be more broadly useful, and the provision of open data does not in itself translate into our vision of open science. At the university level, the incentives to facilitate and expand open science should not be monolithic (e.g., data-centric), but rather be selectively created and applied to maximize success and minimize unintended harm. Open science also presents unique challenges as universities and other research institutions turn increasingly to private sector funding, which comes with proprietary limitations on the dissemination of results.\n\n\nThe broader impact of open science is uncertain\n\nIt is possible that the increasing availability and transparency of scientific inquiry could ignite broader interest in research. The current publishing paradigm of most fields limits research availability to a relatively narrow audience with paid access to scientific journals. Meanwhile, polling data from Gallup indicates a slow but relatively steady decline in Americans’ trust of institutions in general since 20006, although Gallup does not include “universities” specifically in the poll. In one study that compared follow-on inventions from discoveries that were made simultaneously but separately at a university and at a corporate firm, the same discovery at a university was 20–30% less likely to be used in follow-up innovations7,8. 
This study also included open-ended interviews to shed light on this “Ivory Tower effect”, and a key driver appeared to be “considerable skepticism toward academic science.” More openness in university science research may help to address this apparent skepticism.\n\nEven though there are concerns associated with society’s growing disconnect with the scientific enterprise and the accompanying devaluation of research, it should be noted that, in general, academics are still held in high regard and seen as reliable sources of information for a wide range of issues9,10. To maintain this esteem, it is important to realize that data without an understanding of what it entails or the questions it can answer can be considered useless and even dangerous when used improperly to influence decision-making and policy11. Thus, providing useful open data requires more thought on how this data can be translated into useful information. Mechanisms to reproduce analyses and communications that explain the complexities and intricacies of these tasks could be an important first step. While the peer-reviewed-publication paradigm currently provides an established, if not optimal, communication mechanism for conveying the results of scientific activities to our peers, no such standard currently exists to govern the creation and exchange of open science products with our peers and beyond. Efforts at the university level that encourage the rigorous construction of appropriate dissemination systems are laying the foundation for success in this endeavor.\n\n\nA path forward: recognition, training and infrastructure\n\nUniversities have a moral responsibility to educate, and there are significant opportunities in the open science model to broaden the output of research with an eye towards education. 
Nevertheless, the current university promotion and tenure system is optimized for evaluating the traditional format of peer-reviewed journals as the only necessary and sufficient product of a research project. Given the “publish or perish” paradigm that currently pervades the academy, an accompanying lack of recognition for the time and effort put into facilitating open science is apt to dampen participation12. For example, utilizing openly available code for an analysis in a subsequent publication does not require a citation, and even if the code were to be highly cited, it does not carry the same weight as a peer-reviewed publication. Thus, universities have an opportunity to re-imagine what it means to contribute to research, specifically extending the definition to include more than a tally of peer-reviewed publications. The development of robust, reliable, and transparent tools to track utilization of open science products may be one path forward to quantitatively measure the impact of faculty-generated research outputs not currently tracked or rewarded, and both incentivize and acknowledge the resources required to effectively engage in open science.\n\nA notable effort to define the characteristics of open science products is the FAIR Data Principles13, which emphasize that scholarly products should be findable, accessible, interoperable, and reusable and that good data management is not a goal in itself but can catalyze knowledge discovery and innovation. At the university, training for sustainable data management best practices would deepen the overall understanding of the opportunities of open science. In many respects, the products of open science are a common good resource14, but require support infrastructure to share data, tools, and training to broaden participation. 
This infrastructure could also be re-imagined to include metrics to quantify impact, supporting the need to acknowledge contributions.\n\nIn conclusion, open science is a significant opportunity for universities, but a one-size-fits-all approach is sub-optimal. Executing open science in a way that facilitates meaningful advances requires a personal investment of time, both upfront to develop relevant capabilities, and ongoing for execution expenses. As such, it is important that universities develop infrastructure and training to support, measure, and reward efforts that deliver on the promise of open science, focusing on domains best positioned to further scientific understanding.\n\nA preprint of this article can be found on PeerJ (https://doi.org/10.7287/peerj.preprints.2781v1).",
"appendix": "Author contributions\n\n\n\nAH, MH, AK, RR contributed equally in the preparation of this manuscript and have agreed to the final content.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nFaniel IM, Jacobsen TE: Reusing Scientific Data: How Earthquake Engineering Researchers Assess the Reusability of Colleagues’ Data. Comput Support Coop Work. Springer Netherlands. 2010; 19(3): 355–375. Publisher Full Text\n\nHowe A, Chain P: Example of a reproducible IPython Notebook for Analysis [Internet]. Reference Source\n\nBarnes N: Publish your computer code: it is good enough. Nature. Nature Publishing Group. 2010; 467(7317): 753. PubMed Abstract | Publisher Full Text\n\nSweeney L: Simple Demographics Often Identify People Uniquely. Tech Rep LIDAP-WP. 2000. Reference Source\n\nTonidandel S, King EB, Cortina JM: Big Data Methods: Leveraging Modern Data Analytic Techniques to Build Organizational Science. Organ Res Methods. SAGE Publications: Los Angeles, CA; 2016. Publisher Full Text\n\nGallup: Americans’ Confidence in Institutions Stays Low | Gallup [Internet]. [cited 1 Feb 2017]. Reference Source\n\nBikard M: Is Knowledge Trapped Inside the Ivory Tower? Technology Spawning and the Genesis of New Science-Based Inventions. 2012. Reference Source\n\nVermeulen F: Why Firms Don’t Trust Universities - Business Insider [Internet]. 2013. [cited 30 Jan 2017]. Reference Source\n\nNisbet MC, Kotcher JE: A Two-Step Flow of Influence?: Opinion-Leader Campaigns on Climate Change. Sci Commun. SAGE Publications: Los Angeles, CA. 2009; 30(3): 328–354. Publisher Full Text\n\nLeiserowitz A, Maibach EW, Roser-Renouf C, et al.: Climate Change in the American Mind: Americans’ Global Warming Beliefs and Attitudes in April 2013. SSRN Electron J. 2013. 
Publisher Full Text\n\nGorby YA, Yanina S, McLean JS, et al.: Electrically conductive bacterial nanowires produced by Shewanella oneidensis strain MR-1 and other microorganisms. Proc Natl Acad Sci U S A. 2006; 103(30): 11358–11363. PubMed Abstract | Publisher Full Text | Free Full Text\n\nde Rond M, Miller AN: Publish or Perish: Bane or Boon of Academic Life? J Manag Inq. 2005; 14(4): 321–329. Publisher Full Text\n\nWilkinson MD, Dumontier M, Aalbersberg IJ, et al.: The FAIR Guiding Principles for scientific data management and stewardship. Sci Data. Nature Publishing Group; 2016; 3: 160018. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHardin G: The tragedy of the commons. Science. 1968; 162(3589): 1243–8. PubMed Abstract | Publisher Full Text"
}
|
[
{
"id": "21433",
"date": "21 Apr 2017",
"name": "May Khanna",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper’s academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis was a very well-written manuscript and quite relevant to our current times since there is a definite push for open science in academia. This is becoming increasingly necessary as we continually acquire larger data sets. I have a few comments and suggestions.\n\nI would replace the words “the whole enchilada” under the headline “Open science requires significant Investment”. I would also replace “so on” in the same section.\n\nInstead of the headline “Broader impact of open science is uncertain” may I suggest something along the line of: “Open and broad communication could impact open science”.\n\nIn the section “Open Science requires significant investment”, it is suggested that a key cost of open science is time, which is reasonable. However, one of the points is that “time to produce scripts with a sufficient level of robustness and documentation to be useful to others”; this point is less reasonable. I believe with or without open science, this should be an integral requirement of all scientists and so this last point should be omitted.\n\nYour three first points to propose to enable open science more effectively are valid, however, you don’t circle back to these in your final discussion. 
May I suggest using instead the points that you end with, such as:\n1. Finding a way to properly cite open codes or data available through open sources;\n2. Developing a reliable, robust tool to track utilization of open science (note: similar idea to 1, but goes one step further);\n3. Universities need to support infrastructure to implement FAIR data principles.\nThis way you end with developing the points that should enable open science.\n\nAll these are suggestions. Should the authors decide to ignore them, it will not dampen my enthusiasm for this well-written manuscript.\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Yes\n\nAre arguments sufficiently supported by evidence from the published literature? Yes\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Yes",
"responses": [
{
"c_id": "2714",
"date": "19 May 2017",
"name": "Adina Howe",
"role": "Author Response",
"response": "I would replace the words “the whole enchilada” under the headline “Open science requires significant Investment”. I would also replace “so on” in the same section.\n\nModified as suggested.\n\nInstead of the headline “Broader impact of open science is uncertain” may I suggest something along the line of: “Open and broad communication could impact open science”.\n\nModified the heading to “The Potential for Broader Impacts with Open Science”.\n\nIn the section “Open Science requires significant investment”, it is suggested that a key cost of open science is time, which is reasonable. However, one of the points is that “time to produce scripts with a sufficient level of robustness and documentation to be useful to others”; this point is less reasonable. I believe with or without open science, this should be an integral requirement of all scientists and so this last point should be omitted.\n\nModified this sentence to “time to produce scripts with a sufficient level of robustness and documentation to be broadly useful to others”. We agree that there is a minimal requirement of reproducibility for all scripts. However, our aim with this sentence was to convey that the impact of, e.g., documentation or code and its ability to be reproduced by a broad audience (e.g., the public vs. domain experts) requires significant and accountable investments of time. We feel that the inclusion of “broadly useful” more appropriately captures our intent – thank you for your suggestion!\n\nYour three first points to propose to enable open science more effectively are valid, however, you don’t circle back to these in your final discussion. May I suggest using instead the points that you end with, such as:\n1. Finding a way to properly cite open codes or data available through open sources;\n2. Developing a reliable, robust tool to track utilization of open science (note: similar idea to 1, but goes one step further);\n3. Universities need to support infrastructure to implement FAIR data principles.\nThis way you end with developing the points that should enable open science.\n\nGreat suggestion. Given the length of this article, we’ve provided more directed headings to guide the conclusion, which are aligned with your suggestions."
}
]
},
{
"id": "22285",
"date": "27 Apr 2017",
"name": "Nathan L. Vanderford",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThis article by Howe et al. is a generally well-written commentary on the importance of open science as well as some of the current barriers to its most effective implementation and use. This is a very timely topic that certainly deserves discussion of its value to the scientific community and of how it can be improved.\nI believe there are several noteworthy areas in which the article could be improved prior to it being approved for indexing.\nThe authors should step back at the very onset of the article and define open science. It would also be helpful to provide more context on the history of open science and why/ how that history is important to the current article.\n\nWhile this is a well-written article in general, there are a few phrases used that should be reconsidered. For example, the use of the phrase “the whole enchilada” should be re-written in a more professional phrasing.\n\nI would argue that the most significant “costs” or “barriers” to open science are the financial costs (of which personnel labor/ time and electronic/ computer storage space issues are perhaps the biggest components). It is a missed opportunity to not mention the financial costs of open science. As part of that discussion, it would be interesting for the authors to discuss who bears the financial costs and whether there is room for improvement. For example, could funders/sponsors do more to support the costs of open science? 
And, are there opportunities for new policies (at the level of funders/sponsors) to be developed that could further support the wider implementation of open science?\n\nThe title of the article refers to the “impact” of open science but the current content of the article falls short on convincing the reader of the current and potential impact of open science. It may be worth providing some specific examples of how open science has been used to make impactful discoveries, etc. Providing such examples could drive home the point of why it is so important to support and improve open science.\n\nIn summary, this is a timely article discussing a very important topic. There are limitations of the current version of the article that dampen this reviewer’s enthusiasm at this time regarding giving a full approval. As such, I look forward to reviewing a revised version of the article.\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Yes\n\nAre arguments sufficiently supported by evidence from the published literature? Yes\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Yes",
"responses": [
{
"c_id": "2713",
"date": "19 May 2017",
"name": "Adina Howe",
"role": "Author Response",
"response": "1. The authors should step back at the very onset of the article and define open science. It would also be helpful to provide more context on the history of open science and why/ how that history is important to the current article.\n\nGreat suggestion. We agree that this was missing and we now provide a context for universities and their role in open science as an introduction.\n\n2. While this is a well-written article in general, there are a few phrases used that should be reconsidered. For example, the use of the phrase “the whole enchilada” should be re-written in a more professional phrasing.\n\nModified.\n\n3. I would argue that the most significant “costs” or “barriers” to open science are the financial costs (of which personnel labor/ time and electronic/ computer storage space issues are perhaps the biggest components). It is a missed opportunity to not mention the financial costs of open science. As part of that discussion, it would be interesting for the authors to discuss who bears the financial costs and whether there is room for improvement. For example, could funders/sponsors do more to support the costs of open science? And, are there opportunities for new policies (at the level of funders/sponsors) to be developed that could further support the wider implementation of open science?\n\nThank you for pointing out this opportunity in this piece. We have added a discussion to bring up the topic of financial costs associated with open science and the complexities of determining incentives at the university level. While there is opportunity for funders/sponsors to help bear these financial burdens, the scope of this effort is what a university can do, and we have limited our discussion to this topic.\n\n4. The title of the article refers to the “impact” of open science but the current content of the article falls short on convincing the reader of the current and potential impact of open science. It may be worth providing some specific examples of how open science has been used to make impactful discoveries, etc. Providing such examples could drive home the point of why it is so important to support and improve open science.\n\nWe agree that the title of this article was not a good fit for the content of the piece and have adjusted it accordingly. We have provided citations to one peer reviewed article that identifies examples and a perspective of how open science has provided impact. A challenge in providing specific examples is that the impact of open science is still debated and difficult to quantify. Most articles are often personal perspectives of specific authors and not necessarily data-driven approaches to study the positive impacts of open science (which are currently rather limited). Further, there are also examples of negative impacts of open science, http://www.nature.com/news/open-data-contest-unearths-scientific-gems-and-controversy-1.21572?WT.mc_id=TWT_NatureNews. We hope to present a balanced perspective on the complexities of open science at universities and guide thoughts on the most impactful path forward in this diverse environment."
}
]
},
{
"id": "22286",
"date": "15 May 2017",
"name": "Marie-Claire Shanahan",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThis is a timely and important topic, and it is clear that the authors have first-hand experience with the area of open science related to creating open and shared analysis code (e.g., Citation #2). They have a very valuable perspective to add to the scientific community's conversations about open science. And while I definitely support the eventual indexing of this article, I feel that there are some areas in which the argument should be strengthened and clarified first.\nAs reviewer Vanderford notes, a clear definition of open science is needed early on. This would also be very helpful for strengthening the arguments that develop in the middle and final sections that open databases are not sufficient for open science and that universities need better infrastructure for recognizing shared code as an academic contribution. A clear definition would also help place the open analysis code argument more clearly. Is sharing analysis programs and code sufficient for open science (i.e., is it synonymous with open science) or is it instead an under-recognized but important element or type of open science that the authors wish to highlight?\nThe paper seems to settle in on open analysis code as the central argument later on, but in the opening sections the argument seems overly broad for the examples and support that are given. 
The only example of benefits that is given is of the \"open notebook\" (a good and valuable example) but it is not sufficient to support broad claims about open science as a double-edged sword (where neither the broad benefits nor potential downfalls are explained in detail or supported with evidence from the literature). I think focusing the argument and placing their perspective more clearly within the broader field of open science early on would create a more cohesive argument and one that can be better supported with the experiences and examples the authors provide.\nIs the topic of the opinion article discussed accurately in the context of the current literature? Are all factual statements correct and adequately supported by citations? Are arguments sufficiently supported by evidence from the published literature?\nIn addition to the example above there are a few places where support for statements and accuracy with relation to the literature could be improved. Defining open science clearly and placing the authors' perspectives related to open analysis code within that larger definition would help improve connections to the literature. A few examples are below that might be helpful.\nLater on, citations 7 and 8 are used to support discussion of the uptake of academic discoveries. These are both citations of the same study though, with 7 being the study itself and 8 being a popular media report on the study. Using both seems to suggest that there are two independent sources to support this claim. There is also a published version of the study that might be a preferred citation1. 
And I think it might be helpful to note that the study does not find mistrust to be the main reason for lack of uptake but says it is secondary to the natural competitiveness of industrial science, where they are constantly monitoring competitors and therefore likely to notice discoveries that competitors make.\n\"To maintain this esteem, it is important to realize that data without an understanding of what it entails or the questions it can answer can be considered useless and even dangerous when used improperly to influence decision making and policy\" [11]. This is a strong claim, and it could represent important reservations about open science, but no support is provided. The citation does not seem to be related at all (Title: \"Electrically conductive bacterial nanowires produced by Shewanella oneidensis strain MR-1 and other microorganisms.\") Are the authors referring to some controversy surrounding the inappropriate use of that data for decision making and policy? If so, this needs to be explained and supported explicitly. Otherwise, other citations should be found to support this claim.\nReferring to Hardin's \"Tragedy of the commons\" [Citation 14] also does not seem like a closely related source for the use of the term \"common good\" as the authors have used it. Some clarification would be helpful there as well. Neilsen's book2, similarly talks about \"knowledge commons\" specifically in relation to open science in a way that might be more relevant to the arguments made here.\nAre the conclusions drawn balanced and justified on the basis of the presented arguments?\nIn the end, the authors make an important and valid argument about university supports and infrastructure, but the points leading up to that conclusion could be more clearly explained and better supported and connected. From the section \"A path forward\" and onward, the examples lead nicely towards the conclusion. 
Given the authors’ expertise, I feel that this section could be expanded a bit to explore and support the conclusion, and the earlier paragraphs could focus more specifically on the issues of open analysis code to build towards that conclusion. For example, in discussing the Ivory Tower effect earlier on, the original study that is cited explores the publish or perish system as one of the reasons for distrust. The argument made about publish or perish later on could be more meaningful if that connection had been made in the previous section.\nOverall, the authors have an important contribution to make to discussions of open science and important expertise in the practice of open science. With some clarification to the supports and the argument, this paper will be a valuable and interesting piece of that conversation.\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? Partly\n\nAre all factual statements correct and adequately supported by citations? Partly\n\nAre arguments sufficiently supported by evidence from the published literature? Partly\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Partly",
"responses": [
{
"c_id": "2712",
"date": "19 May 2017",
"name": "Adina Howe",
"role": "Author Response",
"response": "1. As reviewer Vanderford notes, a clear definition of open science is needed early on. This would also be very helpful for strengthening the arguments that develop in the middle and final sections that open databases are not sufficient for open science and that universities need better infrastructure for recognizing shared code as an academic contribution. A clear definition would also help place the open analysis code argument more clearly. Is sharing analysis programs and code sufficient for open science (i.e., is it synonymous with open science) or is it instead an under-recognized but important element or type of open science that the authors wish to highlight? Thank you for this suggestion. We have now included a clearer open science definition within the introduciton to help clarify our perspective and define “sharing” data as an important element within open access themes of “access, use, modify, and sharing”. 2. The paper seems to settle in on open analysis code as the central argument later on, but in the opening sections the argument seems overly broad for the examples and support that are given. The only example of benefits that is given is of the \"open notebook\" (a good and valuable example) but it is not sufficient to support broad claims about open science as a double-edged sword (where neither the broads benefits or potential downfalls are explained in detail or supported with evidence from the literature). I think focusing the argument and placing their perspective more clearly within the broader field of open science early on would create a more cohesive argument and one that can be better supported with the experiences and examples the authors provide. In response to reviewer #2, more specific examples, of the benefits and barriers to open science have been included to represent the broader field of open science (e.g. publishing cost). 
Further, we believe that the modification of the title and introduction revisions help to focus the central topic of this effort on the impacts of a university in an open era. Finally, for more specific examples, we have also cited a McKiernan et al. 2016 which represents a more data-centric approach of the benefits of open science. 3. In addition to the example above there are a few places where support for statements and accuracy with relation to the literature could be improved. Defining open science clearly and placing the authors' perspectives related to open analysis code within that larger definition would help improve connections to the literature. A few examples are below that might be helpful. Later on, citations 7 and 8 are used to support discussion of take up of academic discoveries. These are both citations of the same study though, with 7 being the study itself and 8 being a popular media report on the study. Using both seems to suggest that there are two independent sources to support this claim. There is also a published version of the study that might be a preferred citation1. And I think it might be helpful to note that the study does not find mistrust to be the main reason for lack of uptake but says it is secondary to the natural competitiveness of industrial science, where they are constantly monitoring competitors and therefore likely to notice discoveries that competitors make. Thank you for these comments – we agree that two citations here is misleading and have removed the citation #8 and replaced the original preprint with the suggested citation provided more recent published study (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2333413). The general results of the cited study indicate that “the results suggest that the peer-based knowledge validation process in academia creates uncertainty about the reliability and relevance of academic science as a map for technology development.” [Bikard, 2015]. 
We also modified the reason inventors draw on knowledge from firms rather than academics to be associated (e.g., a driver vs a key driver) with skepticism toward academic science. 4. \"To maintain this esteem, it is important to realize that data without an understanding of what it entails or the questions it can answer can be considered useless and even dangerous when used improperly to influence decision making and policy\" [11]. This is a strong claim, and it could represent important reservations about open science, but no support is provided. The citation does not seem to be related at all (Title: \"Electrically conductive bacterial nanowires produced by Shewanella oneidensis strain MR-1 and other microorganisms.\") Are the authors referring to some controversy surrounding the inappropriate use of that data for decision making and policy? If so, this needs to be explained and supported explicitly. Otherwise, other citations should be found to support this claim. Thank you for this observation – we’ve corrected the citation. 5. Referring to Hardin's \"Tragedy of the commons\" [Citation 14] also does not seem like a closely related source for the use of the term \"common good\" as the authors have used it. Some clarification would be helpful there as well. Neilsen's book2, similarly talks about \"knowledge commons\" specifically in relation to open science in a way that might be more relevant to the arguments made here. To clarify this phrase, we modified “common good” to explicitly state “available to benefit by all”. 6. In the end, the authors make an important and valid argument about university supports and infrastructure, but the points leading up to that conclusion could be more clearly explained and better supported and connected. From the section \"A path forward\" and onward, the examples lead nicely towards the conclusion. 
Given the authors' expertise I feel that this could be expanded a bit to explore and support the conclusion, and the earlier paragraphs could focus more specifically on the issues of open analysis code to build towards that conclusion. For example, in discussing the Ivory Tower effect earlier on, the original study that is cited explores the publish or perish system as one of the reasons for distrust. The argument made about publish or perish later on could be more meaningful if that connection had been made in the previous section. Thank you for your suggestions. With these suggestions in mind, we have edited the text for a more natural flow, using both additional examples and citations and specific headers."
}
]
}
] | 1
|
https://f1000research.com/articles/6-405
|
https://f1000research.com/articles/6-688/v1
|
17 May 17
|
{
"type": "Data Note",
"title": "Initial genome sequencing of the sugarcane CP 96-1252 complex hybrid",
"authors": [
"Jason R. Miller",
"Kari A. Dilley",
"Derek M. Harkins",
"Manolito G. Torralba",
"Kelvin J. Moncera",
"Karen Beeri",
"Karrie Goglin",
"Timothy B. Stockwell",
"Granger G. Sutton",
"Reed S. Shabman"
],
"abstract": "The CP 96-1252 cultivar of sugarcane is a complex hybrid of commercial importance. DNA was extracted from lab-grown leaf tissue and sequenced. The raw Illumina DNA sequencing results provide 101 Gbp of genome sequence reads. The dataset is available from https://www.ncbi.nlm.nih.gov/bioproject/PRJNA345486/.",
"keywords": [
"Sugarcane genome",
"DNA sequencing",
"sequencing reads"
],
"content": "Introduction\n\nSugarcane is an important crop for food and energy production. The genomes of modern cultivars are hybrids of species that are themselves polyploid; see for example (Vilela et al., 2017). Selected genomic BAC sequences have been sequenced and assembled (de Setta et al., 2014) (Okura et al., 2016). Chloroplast and mitochondrial genomes have been published (Asano et al., 2004) (Shearman et al., 2016), as have several transcriptomes (Cardoso-Silva et al., 2014). Whole genome sequence assemblies have not been published. CP 96-1252 is the top commercial sugarcane cultivar in Florida, USA (Sandhu & Davidson, 2016). CP 96-1252 was developed by USDA-ARS, the University of Florida, and the Florida Sugar Cane League and released to growers in 2003. CP 96-1252 is a complex hybrid of Saccharum officinarum L., S. barberi Jeswiet, S. spontaneum L., and S. sinense Roxb. amend. Jeswiet (Edmé et al., 2005). Toward better understanding of this cultivar through its genome sequence, DNA reads were generated and made public.\n\n\nMethods\n\nUsing lab-grown plantlets, kindly provided by USDA, 14 g of tissue was harvested from the leaves of Saccharum hybrid cultivar CP 96-1252 (Reg. no CV-120, PI 634935, NCBI taxon ID 1983727). DNA was extracted from purified plant nuclei at Amplicon Express (Pullman, WA, USA). Separately, DNA was extracted from whole cells at JCVI (Rockville, MD, USA) using a Qiagen Plant DNA isolation kit. Extracted DNA was fragmented and size selected on the Blue Pippin (Sage Scientific) prior to library construction to ensure a 260 bp insert size. Standard Illumina PE libraries were generated using the NEBNext kit (NEB). Libraries were size selected, QC’d and quantified by qPCR prior to sequencing. Barcode BS78 AGCCATGC was used for the nuclei prep library and barcode BS79 AGGCTAAC was used for the cell prep library. The libraries were generated and sequenced at the JCVI sequencing core in La Jolla, CA, USA. 
To test for bacterial contamination, both DNA samples plus negative controls were used to generate amplicon libraries targeting the V4 16S region followed by Illumina MiSeq sequencing. These reads were processed by a pipeline using usearch version 8.1.1.1861 for clustering (Edgar, 2017), mothur version 1.36.1 for taxonomic classification (Schloss et al., 2011), and the SILVA SSURef NR99 123 database for reference (Quast et al., 2013). Hits to chloroplast and mitochondria were observed as expected, but bacteria were virtually absent and similar to controls.\n\nAn Illumina NextSeq 500 instrument was used to generate paired 150 bp shotgun reads. Run #1 applied the Illumina High Output kit to libraries BS78 and BS79. Run #1 instrument metrics were: 1.8 pM pool loaded, 1% PhiX spike-in with 1.8% aligned, cluster density 138 K/mm2, 96% pass filter, and 106 Gbp in 345 M PE reads. Barcode analysis indicated 46% BS78 and 49% BS79. Run #2 applied the Illumina High Output kit to library BS78 only. Run #2 metrics were: 1.8 pM pool loaded, 1% PhiX spike-in with 1% aligned, and 110 Gbp in 360 M PE reads. The resulting FASTQ files contained 101 Gbp in 161 M pairs from BS78 run #1, 169 M pairs from BS79 run #1, and 341 M pairs from BS78 run #2.\n\n\nDataset validation\n\nTo confirm sugarcane origin of the DNA, the run #1 reads were mapped to available BACs, namely the 608 Kbp of R570 BACs (GenBank accessions KF184657.1 to KF184973.1 (de Setta et al., 2014)). Reads were mapped with bowtie2 (Langmead & Salzberg, 2012) version 2.2.5 with options “-p 4 --no-unal --no-mixed --no-discordant --end-to-end --fast”. Both sequencing libraries demonstrated concordant pair mapping rates of 4.1% unique, 27% repeat, and 69% unmapped. 
Genome coverage analysis was inconclusive; the K-mer frequency distribution computed by Jellyfish (Marçais & Kingsford, 2011) version 2.2.4 with K=17 showed no peak above 1X coverage.\n\n\nData availability\n\nThe data are available at NCBI SRA under BioProject PRJNA345486, Study SRP091668. Amplified reads from BS78 and BS79 have respective accessions SRR5500242 and SRR5500243. Genomic reads from BS78 have accessions SRR5500246 and SRR5500247. Genomic reads from BS79 have accession SRR5500249.",
"appendix": "Author contributions\n\n\n\nDesign of experiment: TBS, RS. Sample preparation: KD, DMH. Amplicons: MGT, KJM. Sequencing: KG, KB. Bioinformatics: GS, JM, DMH. Manuscript: JM.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis work was funded by US Department of Homeland Security (contract HSHQDC-15-C-B0059).\n\n\nAcknowledgements\n\nThe authors are grateful for assistance from Jack Comstock, Per McCord, and M.D. Islam of USDA-ARS.\n\n\nReferences\n\nAsano T, Tsudzuki T, Takahashi S, et al.: Complete nucleotide sequence of the sugarcane (Saccharum officinarum) chloroplast genome: a comparative analysis of four monocot chloroplast genomes. DNA Res. 2004; 11(2): 93–99. PubMed Abstract | Publisher Full Text\n\nCardoso-Silva CB, Costa EA, Mancini MC, et al.: De novo assembly and transcriptome analysis of contrasting sugarcane varieties. PLoS One. 2014; 9(2): e88462. PubMed Abstract | Publisher Full Text | Free Full Text\n\nde Setta N, Monteiro-Vitorello CB, Metcalfe CJ, et al.: Building the sugarcane genome for biotechnology and identifying evolutionary trends. BMC Genomics. 2014; 15(1): 540. PubMed Abstract | Publisher Full Text | Free Full Text\n\nEdgar RC: SEARCH_16S: A new algorithm for identifying 16S ribosomal RNA genes in contigs and chromosomes. bioRxiv. 2017; 124131. Publisher Full Text\n\nEdmé S, Tai P, Glaz B, et al.: Registration of 'CP 96-1252' sugarcane. Crop Sci. 2005; 45(1): 423–424. Publisher Full Text\n\nLangmead B, Salzberg SL: Fast gapped-read alignment with Bowtie 2. Nat Methods. 2012; 9(4): 357–359. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMarçais G, Kingsford C: A fast, lock-free approach for efficient parallel counting of occurrences of k-mers. Bioinformatics. 2011; 27(6): 764–770. PubMed Abstract | Publisher Full Text | Free Full Text\n\nOkura VK, de Souza RS, de Siqueira Tada SF, et al.: BAC-Pool Sequencing and Assembly of 19 Mb of the Complex Sugarcane Genome. 
Front Plant Sci. 2016; 7: 342. PubMed Abstract | Publisher Full Text | Free Full Text\n\nQuast C, Pruesse E, Yilmaz P, et al.: The SILVA ribosomal RNA gene database project: improved data processing and web-based tools. Nucleic Acids Res. 2013; 41(Database issue): D590–596. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSandhu H, Davidson W: Sugarcane Cultivars Descriptive Fact Sheet: CP 96-1252, CP 01-1372, and CP 00-1101. 2016. Reference Source\n\nSchloss PD, Gevers D, Westcott SL: Reducing the effects of PCR amplification and sequencing artifacts on 16S rRNA-based studies. PLoS One. 2011; 6(12): e27310. PubMed Abstract | Publisher Full Text | Free Full Text\n\nShearman JR, Sonthirod C, Naktang C, et al.: The two chromosomes of the mitochondrial genome of a sugarcane cultivar: assembly and recombination analysis using long PacBio reads. Sci Rep. 2016; 6: 31533. PubMed Abstract | Publisher Full Text | Free Full Text\n\nVilela MM, Del Bem LE, Van Sluys MA, et al.: Analysis of Three Sugarcane Homo/Homeologous Regions Suggests Independent Polyploidization Events of Saccharum officinarum and Saccharum spontaneum. Genome Biol Evol. 2017; 9(2): 266–278. PubMed Abstract | Publisher Full Text | Free Full Text"
}
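The data note above checks genome coverage via the K-mer frequency distribution (Jellyfish with K=17). The same idea can be sketched in a few lines of Python; this is an illustrative toy, not the Jellyfish implementation (which is a fast, lock-free parallel counter), and the `kmer_histogram` helper and example reads are invented for the example:

```python
from collections import Counter

def kmer_histogram(reads, k=17):
    """Count K-mers across reads, then histogram their frequencies.

    A peak at frequency f > 1 suggests roughly f-fold genome coverage;
    no peak above 1 (as reported for this dataset) means most K-mers
    were observed only once.
    """
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    # frequency -> number of distinct K-mers seen at that frequency
    return dict(sorted(Counter(counts.values()).items()))

# Two identical reads plus one distinct read: the duplicated read's
# K-mers appear twice, the unique read's K-mers once.
reads = ["ATCGGATTACCAGGTTACA", "ATCGGATTACCAGGTTACA", "GGCCTTAAGGCCTTAACCG"]
print(kmer_histogram(reads, k=17))  # → {1: 3, 2: 3}
```

With real data the histogram would be plotted and inspected for a coverage peak, which the authors report was absent above 1X.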
|
[
{
"id": "22855",
"date": "30 May 2017",
"name": "Paulo Arruda",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThe data note reported was produced by 150bp paired-end Illumina sequencing of genomic DNA prepared from the sugarcane variety CP-96-1252. A raw data set of 101 Gbp was generated and made publicly available. The authors did not present assembly data, which would be useful for the research community interested in sugarcane genomics. Sequence coverage was not estimated but it seems to be under 1X.\n\nSugarcane commercial varieties are hybrids between Saccharum officinarum and Saccharum spontaneum. These two parents are highly complex polyploids with ploidy varying from 8-12. In general, the hybrids conserve ~75% of the S. officinarum and 15% of S. spontaneum intact. Around 10% of the hybrid genome are chromosomal recombinants between the two species. This complex situation makes it very difficult to assemble large non-chimeric contigs, especially using short insert shotgun sequencing.\n\nThe high quality data set presented in this data note is of value for those interested in recovering short gene regions of interest. Because sugarcane genome sequencing data are very scarce, I recommend the publication of the note presented here as a source of genome data for the sugarcane community.\n\nIs the rationale for creating the dataset(s) clearly described? Yes\n\nAre the protocols appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and materials provided to allow replication by others? 
Yes\n\nAre the datasets clearly presented in a useable and accessible format? Yes",
"responses": []
},
{
"id": "23843",
"date": "29 Jun 2017",
"name": "Jeremy R. Shearman",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThe manuscript describes the generation of whole genome shotgun sequence data from two separate DNA preparation methods. The methods for data generation are clearly described and the sample that was used has ample information about its origins publicly available and referenced. This dataset will be useful for SNP discovery and comparative genomics of sugarcane cultivars.\n\nIs the rationale for creating the dataset(s) clearly described? Yes\n\nAre the protocols appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and materials provided to allow replication by others? Yes\n\nAre the datasets clearly presented in a useable and accessible format? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-688
|
https://f1000research.com/articles/6-686/v1
|
17 May 17
|
{
"type": "Method Article",
"title": "Semantics for interoperability of distributed data and models: Foundations for better-connected information",
"authors": [
"Ferdinando Villa",
"Stefano Balbi",
"Ioannis N. Athanasiadis",
"Caterina Caracciolo"
],
"abstract": "Correct and reliable linkage of independently produced information is a requirement to enable sophisticated applications and processing workflows. These can ultimately help address the challenges posed by complex systems (such as socio-ecological systems), whose many components can only be described through independently developed data and model products. We discuss the first outcomes of an investigation in the conceptual and methodological aspects of semantic annotation of data and models, aimed to enable a high standard of interoperability of information. The results, operationalized in the context of a long-term, active, large-scale project on ecosystem services assessment, include:\nA definition of interoperability based on semantics and scale; A conceptual foundation for the phenomenology underlying scientific observations, aimed to guide the practice of semantic annotation in domain communities; A dedicated language and software infrastructure that operationalizes the findings and allows practitioners to reap the benefits of data and model interoperability.\nThe work presented is the first detailed description of almost a decade of work with communities active in socio-ecological system modeling. After defining the boundaries of possible interoperability based on the understanding of scale, we discuss examples of the practical use of the findings to obtain consistent, interoperable and machine-ready semantic specifications that can integrate semantics across diverse domains and disciplines.",
"keywords": [
"Semantic annotation",
"Semantic meta-modelling",
"Semantic mediation",
"Interoperability",
"Linked open data",
"Semantic web",
"Artificial intelligence",
"Scientific workflows"
],
"content": "1. Introduction\n\nIn an increasingly connected world, the value of information depends not only on the ability to use it for the purposes for which it was collected, but also on the ability to reuse and link it within an expanding information landscape. The term interoperability refers to the ability of information to be reused and linked across and beyond the institutional and disciplinary contexts where it originates. In recent years, much attention has been paid to interoperability, not only in empirical science, but also in sectors such as government, industry, the military, policy making and information management.\n\nDisciplines emphasizing the study of information in a variety of sciences (e.g. bioinformatics, ecoinformatics, geoinformatics) have emerged to focus on reusability and integration of data artifacts and models. Reusability, versatility, reproducibility, extensibility, availability, and interpretability of information were identified as key requirements for sustainability1. Wilkinson et al.2 outlined the FAIR principles for data stewardship and management, calling for Findable, Accessible, Interoperable, and Reusable scholarly data publication. In practice, these goals can be enabled in different ways, with an exact interpretation that depends on the application. The most demanding interpretation of the FAIR principles can be seen as the one sought in support of the Linked Open Data paradigm3, in which information can be found, retrieved, linked and operated upon in the ultimate “machine actionable” way: unsupervised and automated, so that distributed computational workflows and models can be not only built and run, but also discovered on distributed repositories with negligible risk of misalignments. We refer to this interpretation as FAIR+, where the “I” in FAIR is rigorous enough to be trusted for automated, unsupervised linking in model and data workflows. 
A corollary requirement of FAIR+ is the need for information products to carry enough metadata to allow ranking of multiple candidates for linking, in order to choose the one most appropriate for the context of use.\n\nThe work presented here is part of a wider investigation on a methodology we call semantic meta-modeling (SMM), which enables the definition and execution of potentially complex, distributed scientific computations (scientific workflows4), based on automated semantic inference and powered by FAIR+ interoperability. In SMM, data and models can be (1) discovered on linked repositories based on semantics alone; (2) ranked for appropriateness to the intended context of use; and (3) assembled automatically into coherent, working scientific workflows. The authors, in collaboration with others, have been working on SMM for about a decade and produced a proof-of-concept software stack, named k.LAB, which operationalizes the approach. The first large-scale project building on the SMM paradigm, ARIES (ARtificial Intelligence for Ecosystem Services:5), has provided the primary rationale and testing ground for the development of SMM. The ARIES project implements a distributed semantic web platform for ecosystem services6 modelling, where users are presented with the result of computing scientific workflows built automatically as a response to conceptually stated queries (e.g. “observe carbon sequestration in year 2010 in the Danube watershed”). In this article, first in a series of planned contributions illustrating the different aspects of SMM, we describe the semantic principles and methods that underlie and enable interoperability in the approach, incorporating the feedback of ~15 researchers and ~150 ARIES modelers since 2007. 
Further contributions will expand on aspects of SMM not described here, in particular (1) assembling and running model workflows, (2) the details of the software implementation, and (3) the community process that has allowed us to build a distributed base of semantically annotated informational resources.\n\nThe vision of a Semantic Web7,8 brought semantics to the foreground as an instrument to integrate diverse, independently developed information. The use of digitally stored ontologies (formal vocabularies paired with logical axioms describing their relationships and intended meaning9) has since become commonplace for annotating informational assets, i.e. adding concepts from ontologies to the associated metadata to enable their integration and reuse (e.g. 10,11). Research and progress in ontology-mediated interoperability have been significant, and interest in it remains high. Yet, the promise of semantic annotation has often been disappointing in practice, as describing the conceptual underpinnings of information in a way that is complete, consistent and understandable across disciplines and communities has proven difficult and elusive. Attributing stable, reliable and shared meaning to information is difficult because of a lack of accepted best practices, confusion about the phenomenological nature of observed entities and attributes, lack of accepted rules on how to choose, specialize and connect concepts, among other inevitable logical challenges. The result has been a confused landscape of mixed, incompatible attempts, which Goguen12 described as “… the creation of a constantly shifting foreground and background, with the latter being called ‘context’”.\n\nOntologies for science come in many varieties. Foundational (also upper or reference) ontologies aim to provide philosophical foundations upon which to build lower level, domain-specific ones. 
They describe abstract, high-level concepts with the aim of establishing foundational logic for the definition of domain-specific concepts in derived ontologies13. For example, they may define the difference between abstract and concrete or establish the logical underpinnings of spatial, temporal, or part-whole relationships. Well-known foundational ontologies include DOLCE14, BFO (15, also see 16), and more comprehensive efforts like SUMO17, endorsed by the IEEE. Foundational ontologies have been successfully used and some have seen relatively broad adoption, but due to their high abstraction they cannot alone solve the issue of “what is what”, aside from providing a base for more specific domain ontologies (see below).\n\nA second class of conceptualizations, observation ontologies (e.g. OBOE18, O&M19), also includes high-level concepts, but uses the notion of observation as the main device to introduce semantics, focusing on the “how” of observation rather than the “what” of phenomena. Emphasis is given to aspects such as the type of observation (e.g. measurements vs. rankings vs. classifications) and their use context. Observation ontologies thus try to occupy a foundational niche without committing to a specific phenomenology. Observation ontologies have been relatively successful in terms of adoption, but due in part to the lack of a phenomenological underpinning they cannot guide investigators in choosing appropriate observables, nor, by themselves, guarantee any of the FAIR principles for interoperability.\n\nDomain ontologies, in contrast, describe specific areas of interest. Although in principle they should be used in conjunction with foundational ontologies, they are typically produced to serve the needs of a specific community, and are seldom committed to interoperability with ontologies from other domains13. 
The interoperability enabled in such a situation is primarily syntactic (“we use the same vocabulary, even if the semantics are not well thought out”) and therefore limited to users of the same ontology. Commonly these are formed as taxonomies, using a hierarchy of specialization (is-a relationships, for example: Precipitation is-a AtmosphericPhenomenon) to organize and systematize the terms used within the communities endorsing their development. Examples abound in various domains, including earth and environment: for example, SWEET20, ENVO21, SPAN/SNAP22, Gene Ontology23, and PlantOntology for plant anatomy and morphology24. Some of them, such as the Gene Ontology23, have been highly successful; yet the terminology is easily drained of meaning when confronted with other disciplinary contexts that use the same terms differently. For example, a crop is, to an agricultural economist, the agricultural product that reaches the market, possibly further processed after harvest, while to an agronomist the same term refers to the producing plant species. In general, domain ontologies commonly strive to endorse the term usage that is most popular in a community; this is both the reason for their success within those communities and their primary limitation.\n\nLarge investments have been made in developing controlled vocabularies to describe and discover artifacts of interest for specific disciplinary sectors, meant to facilitate annotation and sharing of information objects. Controlled vocabularies are typically domain-centered, with little or no pretension of phenomenological adequacy, but have strong links to the culture, language and applications in their community of origin, usually endorsing terms and assumptions that are best recognized in a community of reference. When formally expressed, their structure is often inspired by organizational, rather than logical, reasons. Many high-adoption examples exist, including e.g. 
AGROVOC25 and CABT26 for agriculture, CUAHSI for hydrology27, and MeSH for medicine28. Generally, vocabularies contain a large number of terms (e.g., for taxonomic or chemical species of interest), are often multilingual, and can grow rapidly with use, while, by contrast, ontologies strive for minimality and robust logics. The differences between ontologies and controlled vocabularies are often misinterpreted, and in common practice the terms are sometimes used interchangeably.\n\nExploring ontology repositories, such as the OBO Foundry29, immediately shows that notable inroads have been made in both fundamental and domain ontologies, and that efforts to produce multi-domain conceptualizations based on common semantics are made regularly. Despite these attempts, duplications, ambiguities and inconsistencies continue to hamper the development and adoption of semantic annotation standards in practitioner communities. With the current state of the art, and with the semantic web community seemingly more interested in enabling technologies than in the conceptual aspects of interoperability, FAIR+ interoperability remains all but impossible, perhaps aside from within very restricted communities. In our view, a major need for progress towards this goal is a solid, uncontroversial phenomenological base, i.e. the basic semantics for the types of phenomena and entities that can be understood by human observers. This phenomenology needs to be general enough to work across domains and worldviews. Formalisms and toolsets must be built to support it, to ease the specification of domains and allow for extension, while enforcing a consistent design discipline. We need clear best practices for specialization and connection of terms, and guidelines on how to integrate always growing, and potentially infinite, domain content from vocabularies without breaking the logical integrity of the resulting annotations. 
We start our discussion by focusing on the definition and preconditions of interoperability itself, before illustrating the details of the approach we have found useful as a possible starting point towards FAIR+ interoperability.\n\nIn this article, we are concerned with the process of creating informational artifacts describing an observable concept in a chosen context, to provide evidence for a scientific deduction or computation. By observation we refer to any artifact that is resolved from the perspective of a scientific process, i.e. can be used without requiring any further observation. Observations in this sense are commonly called “data”; yet, this term can be ambiguous with respect to the semantics of the observables involved, as we will explore later.\n\nDefinitions for interoperability vary by purpose, and while emphasis is sometimes given to formats and protocols for encoding and transmission (syntactic), legal compatibility (i.e., copyright and licenses) or organizational aspects (e.g., openness and purposes of the data), most definitions involve semantics - the meaning of what the information represents. The most rigorously structured information, such as scientific data and models, presents the most stringent challenges in establishing semantic equality. Interoperability in such situations concerns links between two informational endpoints: for example, finding data to link to model inputs, or linking the outputs of one model to the inputs of another, so that a single computational chain can be established without fear that its results will be invalid. 
We term observations compatible to refer to this definition of interoperability between two informational endpoints (we will provide a more formal definition in Section 2.6).\n\nWe maintain that this kind of information alignment has three nearly independent semantic dimensions: semantics of the observable, the observation and the context.\n\nObservable semantics describes what observations are about: physical objects, events, processes, agents, or characteristics that may be “observed” or measured. The human observer recognizes relevant observables, e.g. elevation (a quality pertaining to a location on Earth), households (subjects, part of villages or cities), or surface water flow (a process observable in watersheds). Much of what we call “data” consists of observations of qualities; their inherent subjects (e.g., the location on Earth whose elevation we observe) are often specified indirectly or implicitly. In order for observations to be made, and for their interoperability to be possible, it is crucial that such identities are fully specified and unchanging. For two observations to be compatible, their observables must be described by the same concept.\n\nWhile a physical object, event, process or relationship can be simply acknowledged to exist in a context of interest, qualities, such as the elevation of a mountain or the temperature of a body, can only be observed indirectly, i.e. by comparison with reference observations. Units of measurement, currencies, rankings and classification systems define the ways human observers quantitatively or qualitatively describe such observations. Observation semantics describe how the observation activity is carried out, detailing the choice of reference metrics to ensure that a state can be understood and mediated. 
Mediation between different observation semantics is often possible, sometimes exactly (units of measurement, typically converted with negligible loss of precision), sometimes approximately (prices in different currencies can be more roughly compared by adjusting for inflation and purchasing power) and sometimes hardly (e.g. different land cover classification systems are often extremely difficult to mediate). To be assessed for compatibility, qualities need the full statement of how they are observed; mediation operations may be necessary to harmonize two observations before computation involving both can take place.\n\nObservation always happens in a context, providing observables with a when and a where. The context is usually chosen a priori by the actor who created the original artifacts, and may differ, subtly or greatly, between two compatible observations. Just like the scale of a geographical map determines what entities are visible in it (urban streets disappear in a 1:2,000,000 map), certain observables only come into focus at a given geographical scale, and certain phenomena emerge only at a given temporal scale. Context semantics describes these aspects for an observation. While differently scaled observations can be mediated to some extent through aggregation or propagation (with loss of information), scale also reflects deeply on semantics; large scale shifts will determine incompatible semantic misalignments, with (e.g.) uncountable processes becoming countable events when their time scale is changed beyond a threshold. For example, lightning is seen as an event by a meteorologist and as a process by a high-energy physicist; subjects, e.g. the microorganisms in a lake, become visible only through qualities (the color of the lake) over a spatial extent that makes the observer lose sight of them. 
For this reason, scale is key to establishing meaning in more ways than usually recognized; scale depends entirely on the chosen observation context, therefore on the human decision of what to observe. The semantics of scale largely deal with space/time, and as such, can be formalized independently from the observables’.\n\nThe ability to accurately characterize semantics along observable, observation and context dimensions addresses the interoperable and reusable FAIR criteria. Semantic specifications can be rewritten into queries that select interoperable counterparts for an observation, addressing the findable and accessible requirements. If queries embodying all three dimensions can be executed and the resulting observations can be ranked for appropriateness, unsupervised linking becomes possible. While observation and context semantics are relatively well-understood, the characterization of what things are - observable semantics - remains difficult and uncertain, even with increasing investments in ontologies and vocabularies and an engaged community behind the current state of the art.\n\nThe rest of this article details our approach to building foundations for FAIR+ interoperability through semantics, using examples. We describe a conceptual framework and examples of reasoning and specifications to support the characterization of observable semantics, resulting from field-testing in years of initial application, followed by a discussion of goals already achieved and those that remain. Here, and in all applications of these principles and methods, we argue for SMM as a driver for a semantics-first workflow, where the lifecycle of information begins, before data collection or model development, with the understanding of semantics, which in turn guides data collection, organization and processing up to eventual documentation, storage and curation. 
This contrasts with the more commonly adopted annotation approach10, where data represent the first-class artifacts, collected and stored with a logic dictated primarily by practical constraints, and semantics may complement the artifacts “after the fact” to suit the data to specific applications.

2. Materials and methods: specifying observable semantics

Semantics starts with the act of cataloguing observed reality into classes that can be referenced and communicated. Terms describing commonly acknowledged classes of physical entities (such as persons or objects) are complemented, through inference, comparison, association and imagination, to encompass objects, events, processes and relationships that may not be directly perceived by the senses but still appear in human experience, thought and communication. Such observable entities can be arranged along a small number of fundamental phenomenological categories (for example physical objects, processes, characteristics or events) that determine how they can be described, observed, modeled and represented. Our perception of space/time is crucial to the process of organizing reality into communicable observables, as the “resolved” units of space and time determine how we classify. This is particularly important when science’s exacting descriptional needs come into play: as described previously, spatiotemporal resolution influences an object of study’s perceived structural or functional character, and shifting resolution or extent can fundamentally alter an observable’s perceived category. It follows that interoperability can exist within a conceptualization as long as each concept’s meaning remains stable, within known boundaries, with respect to its fundamental phenomenology.
Scale, commonly defined as the choice of resolutions and extents through which we make observations of the world, binds the observables of informational artifacts to precise phenomenological categories, establishing boundaries of validity for conceptualizations.

In SMM, we call any domain conceptualization in which every term has a stable and explicit phenomenological characterization a worldview; we recognize the worldview as the outermost possible boundary for interoperability. In practice, a worldview is a set of ontologies that describes meaning under the viewpoints set by a given range of scales, where terminology is unambiguous and its relation to the chosen phenomenology is stable. Relationships between fundamental types in the chosen upper ontology create binding constraints for the entire conceptualization and provide guidance for semantic consistency and validation. The worldview we use in the ARIES project is the primary source for the examples in the rest of this article. This worldview focuses on spatial and temporal scales broadly in tune with human life, dealing with the entities, processes, events and relationships that characterize and bound socio-ecological, economic and agricultural systems. We thus anticipate that it can provide semantic building blocks for data management and modeling across a wide range of applications in socioeconomic and environmental simulation. While we found this worldview adequate to represent Earth systems data and models, we would be hard pressed to suggest its use in disciplines whose scales of interest are widely distant, such as cosmology or high-energy physics.

The formalism described in the rest of this article outlines a simple metaphysics dedicated to the practical description of observations. In this view, things exist as long as observations of them can be produced.
Our later use of philosophical terms, such as universal, particular, etc.30, exclusively serves this interpretation, and may differ slightly from other definitions used in philosophy and computer science. Ample discussion of these terms and their meaning can be found in the philosophical literature9,15,30. Our work aims to enable FAIR+ interoperability in scientific workflows, outlining a minimal and practical phenomenological basis intended to be simple and intuitive enough to be internalized by large numbers of practitioners. Our choice of terms has evolved over a span of about eight years of design and planning, plus several years of exposure to, and feedback from, diverse users in academic, governmental and non-governmental sectors.

We briefly articulate the phenomenological basis for our approach below, starting with the fundamental logical dichotomy of universals vs. particulars and further dividing particulars into continuants and occurrents. In our interpretation, these terms all refer to concepts; we are not concerned here with concrete instances (e.g., the individual tree, as opposed to the idea of a tree), as our only aim is to produce observations - informational artifacts generated through the process of observing concepts in the world.

Based on this reasoning, we use the term particular to refer to concepts that describe observables for which an observation can be made, as described above, although in some literature the term is used to refer to instances. Particulars include (1) physical objects, (2) their qualities, the (3) processes and (4) events that affect them (whose observation is likely to describe the qualities affected, causal pathways and component objects) and (5) the relationships that connect them. In contrast with particulars, we use the term universals to refer to concepts that cannot be observed directly. In much literature (e.g.
15), the term universal simply means a concept (the abstraction of an entity, as opposed to the entity itself), so it includes concepts that we classify as particulars, such as processes or physical objects. We take a stance closer to Platonic realism31, which defines universals as those notions that cannot be directly incarnated unless associated with a particular. This includes classes of concepts such as attributes (e.g. ‘black’ cannot have an instance, but it can qualify a physical object, e.g., a cat), roles and others. In SMM, we translate universals into entities that cannot be observed in their own right. Only observations of particulars can be made, and universals are attributed to them to further specify their semantics.

Within particulars, the continuant vs. occurrent distinction reflects how observed entities stand with regard to space and time. This distinction is found in all foundational ontologies, with slightly different definitions or terminology16: for example, DOLCE14 uses the terms perdurant and endurant instead. Continuants are entities that maintain their identity through time, including physical objects (named subjects in SMM) and their measurable qualities, such as color or height, which can be observed only when linked to other entities through mandatory inherency: e.g., the height of a tree. While continuants simply are, occurrents, such as events and processes, happen: their definition is intimately tied to time. As discussed previously, a spatial shift in the observation point can morph continuants from countable subjects to uncountable qualities, as spatial resolution moves upwards and small-scale subjects lose their individual visibility in favor of larger-scale ones. Similarly, countable events can morph into uncountable processes, as temporal resolution shifts to allow appreciating change within what was formerly seen as an individual event.
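The rule just stated - only particulars can be observed, while universals merely qualify them - can be rendered as a minimal sketch. The category names and the example concepts below are ours, chosen for illustration; this is not SMM's implementation.

```python
# Phenomenological categories, following the dichotomy described in the text.
PARTICULARS = {"subject", "quality", "process", "event", "relationship"}
UNIVERSALS = {"attribute", "role", "realm", "ordering", "identity"}

class Concept:
    """A concept with a phenomenological category and attributed traits."""
    def __init__(self, name, category):
        assert category in PARTICULARS | UNIVERSALS, "unknown category"
        self.name, self.category = name, category
        self.traits = []  # universals attributed to this concept

def observe(concept):
    """Observations can only be made of particulars."""
    if concept.category in UNIVERSALS:
        raise ValueError(concept.name + ": universals cannot be observed directly")
    return "observation of " + concept.name

cat = Concept("Cat", "subject")
cat.traits.append(Concept("Black", "attribute"))  # attribution is allowed
print(observe(cat))                               # observation of a particular
```

Calling `observe` on the `Black` attribute alone would raise an error, mirroring the statement that ‘black’ cannot have an instance unless it qualifies a physical object.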
Relationships between two observables also reflect the continuant-occurrent dichotomy; accordingly, they can be seen as structural (unconcerned with time, such as parent-child) or functional (such as flows, whose expression is a time-dependent process).

Figure 1 illustrates how observational scale and the categorization of particulars are intimately linked. A temporal scale gradient (Y axis) separates occurrents - for which fine temporal resolution allows an observer to appreciate change - from continuants, whose meaning can be appreciated independent of time because the temporal scale is coarse enough to make change invisible. On a spatial scale gradient (X axis), “close” observation focuses within individual observables, impeding the appreciation of their individuality (and therefore the “counting” of separate individuals) but enabling the observation of their inherent qualities and processes. As spatial scale is made coarser, the point of view moves outside the individual observable, allowing an observer to appreciate first the individual relationships between two of them, then an arbitrary number of them in the context of a larger-scale observable (not shown in the image). The property of countability tracks meaning along a spatial scale gradient in the same way that the dichotomy “happens/is” tracks meaning along time scale gradients. The diagram in Figure 1 has proven intuitive enough for ARIES users to remember and use as guidance in the first steps of semantic annotation.

Figure 1. Refer to the text for explanations.

During the development of the ARIES project, it quickly became apparent that the explicit statement of semantics was key to achieving our goals of building and linking community-driven, interoperable repositories of independently developed data and models.
At the same time, it became clear that no community of modelers, data scientists or other prospective users would consider an investment in OWL or another semantic-web-endorsed formalism as the vehicle to express the semantics in data and models, and that a different approach was necessary. Our solution was the design of a custom semantic specification and annotation language, for which we laid out four main requirements.

1. Full compatibility with accepted semantic web standards. In the current implementation, this translates into the ability of any specification to compile to OWL2.

2. Expressiveness: syntax and keywords should intuitively relate to the phenomenology and experience of scientific observation, so that the terminology and complex logical constraints in the underlying ontology do not need to be learned or exposed.

3. Readability: the language should read as close as possible to English, using familiar terms that are as easy as possible to learn and memorize.

4. Parsimony: the language should support flexible composition of terms to allow the terminology to remain as small as possible, enabling the greatest possible reuse of terms.

The result of many years of design and user feedback is the k.IM language, which currently makes worldviews accessible to ARIES modelers. The k.IM (for “knowledge-Integrated Modeling”) language is complemented by an open source software stack named k.LAB, which provides integrated tools to develop and use conceptualizations and models using k.IM. The software, which in its current alpha stage requires training to be applied, will not be discussed here, but can be freely downloaded and explored in source form.

In k.IM, particulars and universals are combined to specify observables; these can later be used to annotate data and models.
Keywords and syntax rules are designed to make k.IM statements readable and understandable by mimicking English syntax, while specifying much more complex, correct and consistent OWL32 axioms. All k.IM statements compile to OWL2, the most widely used and accepted representational standard for ontologies. Conceptualizations written in k.IM can thus be exported and used in OWL-based systems with no loss of information.

Experience with developing and teaching k.IM has highlighted three clearly distinguishable tiers of sophistication in semantic annotation practice, arranged here by increasing levels of experience required and progressively longer learning curves. Tier 1 is annotation of data and models, performed by minimally trained users utilizing the terms from domain ontologies, facilitated by context-aware search tools. Tier 2 is domain definition, where domain experience is the essential skill, but an investment in knowledge engineering remains necessary. Tier 3 is worldview definition, limited to knowledge engineers with ample time to invest in work with domain experts. All three tiers are enabled in the k.LAB community and will be discussed and exemplified below. Tier 3 usage is discussed in Section 2.3. Examples from tiers 2 and 1 are given in Section 2.5.

The following examples, taken from the socio-ecological system worldview used in ARIES, illustrate some of the most important aspects of k.IM and their role in facilitating the conceptualization of domains. For clarity, the examples are highlighted in the same way the k.LAB editor does: keywords (identifiers recognized as part of the language) are in purple; user-generated text (including concept identifiers) is in black; literal text (such as quoted strings) is in blue. Table 1 provides a more systematic list of keywords with their associated meanings.
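To make the notion of “compiling to OWL2” concrete, the following miniature sketch turns a k.IM-style concept statement into OWL-flavored axiom text. It is our illustration only: the Manchester-like output format and property names (`isInherentTo`, `hasTrait`) are simplified stand-ins, not the actual axioms k.LAB emits.

```python
def compile_concept(name, parent, inherent=None, traits=()):
    """Render a concept declaration as a list of OWL-style axiom lines:
    a class, its superclass, an optional inherency restriction, and
    one restriction per attributed trait."""
    axioms = ["Class: " + name, "  SubClassOf: " + parent]
    if inherent:
        axioms.append("  SubClassOf: isInherentTo some " + inherent)
    for trait in traits:
        axioms.append("  SubClassOf: hasTrait some " + trait)
    return axioms

# Roughly: "Elevation is im:Height within earth:Region"
for line in compile_concept("geography:Elevation", "im:Height",
                            inherent="earth:Region"):
    print(line)
```

A single readable statement thus expands into several axioms; the point of the language design is that users never need to see or write the expanded form.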
We only discuss the features of the language used to specify concepts; those concerned with data annotation will only be briefly described, while the features concerned with modeling will be discussed in a forthcoming contribution.

Table 1. Keywords (in bold) are used in the language to state concepts. The other keywords indicated can be used to specify relationships between concepts (e.g. exposes, describes, implies).

Geographical elevation is a quality inherent to regions of Earth, whose full specification involves different notions, some specific to the geographic domain, others of more general relevance. We use namespaces, associated with separate URLs or files, to separate concepts from different knowledge domains; a namespace can import another (through a using clause, as shown below), so that the concepts defined in it can be referenced. Concepts from imported namespaces are referred to using the namespace identifier as a prefix to the concept name, separated by a colon (for earth:Region, the concept Region is defined in the earth namespace). The specification in definition (1) comes from the geography namespace, declared at the beginning along with its imports.

In this specification, the length keyword establishes the fundamental character of geographical elevation, including its physical nature (an extensive property whose value changes with the extent of the inherent subject) and the base unit for its measurement. This is done by tying the concept being defined to the core observation ontology, which lays out the phenomenological categories defined above, along with constraints and relationships for all common scientific observables, unseen by users. The language contains keywords for many fundamental quantities, allowing users easy specification in most situations (Table 1).
It also provides semantic operators to easily and systematically modify existing concepts, obtaining derived quality concepts:

Example (2) needs only one concept, Earthquake, as the annotation of its probability can be done through a semantic operator (probability of), which can only be followed by a concept describing an event and produces a concept for its probability. Similar operators allow the expression of presence, occurrence, distance, proportion, ratio and value (Table 2). The use of semantic operators greatly reduces the number of concepts needed in the worldview and enables validation of the modified observable.

Table 2. Text in square brackets indicates optional specifications in k.IM syntax. These operators only create concepts, with no assumptions about their values. Observer statements (Table 4), which also build the corresponding concepts, are the k.IM specifications that are concerned with the actual states resulting from their observation.

As concrete qualities (those of which observations can be made) can only exist inherently to a direct observable, the observable must be made explicit before the concepts can be used (e.g., earth:Region in the previous elevation example). In example (1), the concept statement starts with a description (highlighted in blue) that is indexed in the k.LAB software, so that users can easily locate concepts by textual searching. The is keyword introduces the semantic specification for the term Elevation. In it, im:Height (from the base namespace im, for “integrated modeling”, also the name of the containing worldview) is first established as its fundamental nature; then, inherency is established by means of the keywords of and within. Inherency enables validation of the contexts in which the qualities are used. For example, after the definitions in (1), it will be correct to annotate elevation within a watershed, as long as a previous statement defines the Watershed concept as a type of earth:Region.
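The inherency check just described - a context is valid if it is, or specializes, the declared inherent subject - amounts to a walk up an is-a hierarchy. The sketch below is our illustration of that logic, with a two-entry is-a table standing in for a real worldview.

```python
# Minimal is-a table: Watershed specializes earth:Region (illustrative only).
IS_A = {"Watershed": "earth:Region", "earth:Region": None}

def is_subtype(concept, ancestor):
    """Walk the is-a chain upwards until the ancestor is found or the
    chain ends at a root concept."""
    while concept is not None:
        if concept == ancestor:
            return True
        concept = IS_A.get(concept)
    return False

def validate_inherency(context, declared_context):
    """Accept a quality's usage context only if it is compatible with
    the context declared in the quality's definition."""
    if not is_subtype(context, declared_context):
        raise ValueError(context + " is not a " + declared_context)
    return True

# Elevation declared "within earth:Region" may be annotated in a watershed:
print(validate_inherency("Watershed", "earth:Region"))  # True
```

Annotating the same quality in an incompatible context (say, an organism) would fail the walk and be flagged, which is the behavior the k.IM parser enforces.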
In many situations, specifying within is enough to establish inherency. The of keyword is used when the quality refers to a second, implicit observable in the context of inherency. For example, the “height of trees” quality in a region is inherent to that region, but implicitly describes tree subjects in it. In keeping with our readability requirement, we only allow two levels of specification and use two different keywords (within and, optionally, of). We found that legitimate chained specifications, such as “x within y within z within …”, were awkward and difficult to understand in usage tests, and decided against allowing such statements. Multiple chains of inherency of this kind can be defined using intermediate concepts.

In knowledge domains (as opposed to physical ones), the implicit inherent subject is often a configuration. This is a perceived, measured or inferred arrangement of observables that can be experienced and recognized by humans without being directly amenable to providing the observable of an informational artifact. For example:

In k.IM, configurations can only follow of in inherency specifications. This constraint allows the construction of clear and unambiguous statements that relate well to scientific discourse while remaining logically consistent. Common logical errors stem from confusing legitimate observables (such as qualities, subjects or events) with “objects of study” in science that are part of daily discourse but are not actually amenable to being directly described by informational artifacts. Configurations often allow keeping such concepts (such as Terrain above) in a specification without compromising logical integrity. Other examples of configurations include bathymetry, aesthetics and all types of networks, e.g.
a stream network or a social network, whose observables are the actual subjects and relationships that create the perceived configuration.

The inherency requirement for qualities is one of the primary means for semantic inference and validation in SMM. Machine reasoning can be applied to ensure proper usage of each concept in data annotations and models. For example, models of a quality that is inherent to a specific subject or process are validated to ensure that all other qualities used for its computation are inherent to a compatible subject type (see below for a definition of compatibility). Any mismatch makes the model semantically inconsistent, and must be resolved before the model can be computed. At the same time, many inferences are possible through reasoning on inherency. For example, a model’s requirement for “presence of biology:Tree” can, if data for this specific concept are not available, automatically be satisfied by an observation of “im:Height of biology:Tree within earth:Region”. Because non-zero values of extensive physical properties imply the existence of their inherent subjects, a model for presence of trees can be automatically built using the height data, with height > 0 as the criterion to establish presence. Interestingly, inherency underlies the mechanism through which shifts in spatial scale affect identity and meaning in continuants. If a finer spatial scale resolves, e.g., the color of individual unicellular algae subjects in a volume of water, expanding the spatial extent and lowering the resolution may cause the algae subjects to go “out of focus”: their color then becomes a quality inherent to a larger, previously invisible lake subject.

In specification (1), geographic elevation is established as a length by its fundamental keyword, but the definition introduced by is defines it as a Height, from the base im worldview namespace.
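The presence-from-height inference described above can be sketched in a few lines. The data values below are invented; the point is only that an observation of an extensive quality can stand in for a missing presence observation of its inherent subject.

```python
# Hypothetical tree-height observations, e.g. one value per grid cell, in metres.
height_of_trees = [0.0, 3.2, 0.0, 11.5]

def presence_from_extensive_quality(values):
    """Derive a presence observation from an extensive quality:
    a non-zero value implies the existence of the inherent subject."""
    return [v > 0 for v in values]

print(presence_from_extensive_quality(height_of_trees))
# [False, True, False, True]
```

In k.LAB this derivation is produced by the reasoner rather than written by hand: the requirement “presence of biology:Tree” is matched to the available height observation through its declared inherency.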
The definition of Height shows the use of attributes to constrain the length concept to a specific orientation relative to the observer:

The attributes Vertical and Lineal (whose definitions are not shown here) are attributed to Height using the keyword inherits, establishing characteristics of Height beyond its definition as a length. In this worldview, Lineal (as opposed to Areal or Volumetric) is used to ensure that transformations of qualities involving dimensional reasoning (e.g. dimensional collapse under scale aggregation) carry the information needed for the algorithms to properly mediate scales and values. While the concept of “length” belongs to the foundational observation ontology, outside the worldview, dimensionality is specific to worldviews, as it could be interpreted differently in other domains (e.g., in non-classical physics). The Height concept is declared abstract to ensure that observations cannot be made of it; any concrete concepts derived from it - omitting the abstract keyword, such as elevation in (1) - must specify their inherency, or the k.IM parser will flag an error.

Attributes are an important feature of k.IM, enabling fluent specifications while enforcing our parsimony requirement. Definition (1) exemplifies the English-like syntax used to specify attributes that restrict the context to terrestrial regions without creating a new concept. The context for geography:Elevation is declared to be “earth:Terrestrial earth:Region”, combining two concepts at the time of usage by simply mentioning them in sequence. Instead of merging them into another concept, earth:TerrestrialRegion, the sequential specification follows the grammatical conventions of the English language and yields more parsimonious ontologies. The ubiquitous use of is-a specialization to add attributes to observables (BlackCat is-a Cat) is a major cause of the explosion of terminologies in domain ontologies.
While this is legitimate from a phenomenological perspective (a black cat certainly is a cat) and from a mathematical logic perspective (black cats certainly are a subset of the set of all cats), we adopt the convention that only clear semantic distinctions should be reflected in is-a inheritance. As long as an attribute does not obviously modify identity (a black cat is just a cat), the specialization should be described without explicitly creating a new concept. Attribute composition through is-a relationships can also yield ambiguous inheritance graphs, logical errors and specification dead-ends when many attributes are used but subtypes are intended to inherit only some. As most universals apply to broad classes of observables (color is certainly not just an exclusive attribute of cats), the advantages of the quasi-natural k.IM syntax quickly become apparent in terms of parsimony and readability. This syntax enables the creation of ontologies that are small enough to be learned and used but retain high expressive power. The underlying infrastructure, such as k.LAB, is left in charge of handling, unseen, the axiomatic complexities of concept inheritance and attribute composition.

Attributes are often used to coarsely summarize the value of qualities.
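The parsimony argument above has a simple combinatorial core, sketched here with invented attribute sets: pre-merging every attribute into a named subclass grows the terminology multiplicatively, while composing attributes at the point of use keeps it additive.

```python
colors = ["Black", "White", "Tabby"]
sizes = ["Small", "Large"]
animals = ["Cat", "Dog"]

# The is-a explosion: one pre-merged concept per combination.
merged = [c + s + a for c in colors for s in sizes for a in animals]

# k.IM-style composition: attributes are reusable concepts, stated in
# sequence at the time of usage ("Black Small Cat").
composed = colors + sizes + animals

print(len(merged))    # 12 concepts for three attributes and two subjects
print(len(composed))  # 7 reusable concepts covering the same combinations
```

The gap widens rapidly as attributes are added, which is why use-time composition keeps domain ontologies small enough to be learned whole.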
In k.IM, we preserve these relationships in order to allow inference of attributes:

In definition (5), it is clear how an attribute (ecology:Salinity), with its “child” sub-categories, is a synthetic and approximate way to describe the actual concentration of sodium chloride in a natural water body, defined in the chemistry namespace (see Section 2.4 for details on chemical identities, and Table 2 for the proportion of semantic operator):

By establishing a semantic relationship between the salinity categories in the ecology namespace and the proper salinity definition in the chemistry domain, we open the way for classification models (not discussed in this article) to define specific ways to observe ecology:Salinity (i.e., establish the concrete sub-trait that applies to a chosen context). This occurs by observing and checking the ranges of chemistry:Salinity that determine each category in context-specific ways (e.g. distinguishing brackish from fresh water; see 5 for practical examples of how similar models may be chosen, assembled and used).

To ease specification and enable inferences and functionalities, attribution in k.IM uses four categories of universals, collectively named traits, which correspond to different keywords used at declaration (Table 1). We distinguish general attributes from the more specialized orderings (whose subtypes define an ordered sequence), realms (which identify mereologically arranged subdivisions of a context, such as atmospheric strata) and roles, which categorize the ways specific observables are seen when in the context of another. Each of these categories enables specific types of inference in applications; roles, in particular, are crucial for interoperability in modeling applications, and deserve a discussion that is outside the scope of this article.
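The classification idea behind the salinity example can be sketched as a function from a measured concentration to an attribute sub-category. The thresholds below are illustrative placeholders; as the text notes, real classification models set them in context-specific ways.

```python
def salinity_category(grams_per_litre):
    """Map a measured salinity (g/L) to a categorical sub-trait.
    Thresholds are invented for illustration and would be supplied by a
    context-specific classification model."""
    if grams_per_litre < 0.5:
        return "Freshwater"
    elif grams_per_litre < 30.0:
        return "Brackish"
    return "Saline"

print(salinity_category(0.1))   # Freshwater
print(salinity_category(12.0))  # Brackish
print(salinity_category(35.0))  # Saline
```

Because the attribute is semantically linked to the underlying chemistry concept, a reasoner can substitute the category for the measurement (or vice versa) wherever either satisfies a model's requirements.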
A final category of universals, identities, is instrumental for the use and reuse of external vocabularies and terminologies, and is described in detail in the next section.

In semantic annotation practice, it is common to encounter situations in which an abstract observable (such as an individual animal, plant, or a material object such as a delimited volume of matter) must be identified by a “species”, such as a taxonomic or chemical one. For such situations, k.IM recognizes specific types of universals that we name identities, which can be bound to observable concepts so that the use of a given identity type becomes mandatory to further specialize the observable:

In this case, the set of possible identities may be very large or even infinite. Since it is of course impractical to expect that ontologies can list all possible identities, this presents a problem when reasoning must compare concepts at two separate endpoints, as the identity used at one may not be known at the other. Having users create concepts for identities whenever a new one is needed would break interoperability, and the alternative - adding them to the shared worldview on an as-needed basis - would make the worldview prohibitively difficult to coordinate and maintain.

In such situations, we use authorities to link authoritative terminologies and ontologies. In k.LAB, authorities are software components that translate terms provided by authoritative terminologies, maintained by standard-defining organizations such as IUPAC for chemical nomenclature, into logical axioms that can be inserted into the namespaces provided in the worldview to create stable concepts that are available at all points of use.
Authorities are identified in k.IM by names bound to a specific identity in a worldview:

This statement binds the GBIF.SPECIES authority to the biology:Species identity, requiring that any concrete biology:Individual is identified using it (based on definition 7, each Individual is in turn bound to adopting a biology:Species identity). For example, a spatial coverage (e.g., a raster GIS dataset) describing the counted occurrences of honeybee individuals (Apis mellifera) per square kilometer could be annotated as follows:

Code 1341976 in the GBIF catalogue33 is the identifier for the Apis mellifera species, tracking its unchanging taxonomic identity through any changes in nomenclature that may have occurred over time. For increased readability, definition (9) can also be written with a concept declaration that makes the identity explicit for a reader:

In such situations, the user-defined concept (HoneybeeIndividual) functions as an alias for the GBIF honeybee concept, so that independent uses of the concept will not produce ambiguity, even if different specifications like (10) are given and different concept names are used in them. The two specifications (9) and (10) are functionally identical and compile to the same OWL axioms. Within the GBIF.SPECIES authority, producing logical axioms for the GBIF code 1341976 entails verifying that the code is a valid species identifier: a different outcome, such as using a non-existent or, e.g., a family code, would result in a parsing error reported to the user. This mechanism guarantees the ability to reason across namespaces and allows full interoperability of taxonomic names when used at independent and uncoordinated endpoints. Multiple sub-authorities (such as GBIF.FAMILY, GBIF.CLASS, etc.) allow binding different classes of identifiers managed by the same organization. The GBIF web-accessible catalog service33 provides codes that identify species and other taxonomic names in a stable and reliable way.
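The validation role of an authority can be sketched as follows. This toy resolver checks an identifier against a catalogue before producing an axiom, so the same code yields the same stable concept at every endpoint. The local table (including the fake code 9999) is a stand-in: the real GBIF.SPECIES authority queries the GBIF service.

```python
# Local stand-in for the authority's catalogue; only 1341976 is a real
# GBIF species key (Apis mellifera), 9999 is a fabricated family code.
CATALOGUE = {1341976: ("Apis mellifera", "species"),
             9999: ("ExampleFamily", "family")}

def resolve(code, expected_rank="species"):
    """Validate an identifier and return an OWL-style class axiom."""
    if code not in CATALOGUE:
        raise KeyError("unknown identifier " + str(code))
    name, rank = CATALOGUE[code]
    if rank != expected_rank:
        raise ValueError(str(code) + " is a " + rank + " code, not a " + expected_rank)
    return "Class: GBIF_" + str(code) + "  # " + name

print(resolve(1341976))      # valid species code: stable concept produced
# resolve(9999) would raise: a family code where a species is required,
# mirroring the parsing error reported to the user.
```

Because the axiom is derived from the code rather than from a user-chosen name, two uncoordinated endpoints annotating with code 1341976 obtain logically identical concepts.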
The GBIF service also provides metadata, such as labels, common names and broader terms, that are automatically linked to each concept created, allowing full specification of the identity and automated documentation of the resulting informational artifacts.

In addition to the identities managed by GBIF, representing the full taxonomic hierarchy from kingdom to variety, k.LAB provides authorities that recognize and interpret: (i) chemical identities (using the InChI naming conventions34); (ii) soil taxa according to the World Reference Base nomenclature35; and (iii) several classes of agricultural terms provided in AGROVOC25 (Table 3). In most cases, authorities provide both validation of identifiers and search facilities, building on services provided by the managing institutions. For example, if a user refers to a chemical compound using a wrongly formatted InChI string, an informative error is reported; a correct string, in contrast, can be translated by the IUPAC authority into a molecular diagram for the user to check. Availability of a specific authority within a worldview is equivalent to an endorsement of that authority in it. Authorities, complemented with search and validation tools such as those provided in k.LAB, provide consistency and a sound annotation discipline in a usage landscape characterized by widespread redundancy and inconsistency. “Bridging” authorities, while not yet attempted, might also be designed to accept terms from one authority and turn them into the same axioms as another covering the same domain. For example, SOIL.USDA may in the future complement the existing SOIL.WRB authority as an alternative source of soil taxonomy identifiers, producing axioms compatible with the latter.
This would enable transparent mediation of competing vocabularies and further expand opportunities for interoperability and reuse of existing annotated data.

Table 3. Each authority uses an external service or vocabulary and can provide one or more views that bridge to a specific type of identity. The concepts produced by authorities carry the URIs of the original concepts as metadata, when those are produced by the corresponding authority.

With a common phenomenology, a structured language and supporting infrastructure that validates, knowledge engineers can create worldviews with better prospects of consistency, expressiveness and reusability. Yet the task of building a worldview remains daunting. We can consider the building of the worldview a Tier 3 activity, requiring significant expertise, long-term research investment and a careful vetting process involving consultation and continuing collaboration with a large number of experts. We will briefly summarize the challenges and successes of worldview development for ARIES in the discussion, and cover the topic more fully in forthcoming contributions.

When a suitable worldview is available, it should become possible to compose domain semantic annotations by combining existing concepts from the worldview and its endorsed authorities. We can consider this Tier 2 of difficulty in semantic annotation; it is the scope of many initiatives, of which a representative example is the Agrisemantics initiative36. The Agrisemantics vision statement mentions height of corn as an archetypical example of a common observable whose interoperability across existing data resources is desirable. A fitting ontology available in the OBO Foundry29 that provides concepts adequate for this task is the Plant Trait Ontology (PTO37), which draws on the work of many experts and enjoys good community acceptance.
The PTO provides a hierarchy of concepts starting at quality (imported from the BFO ontology, also a base ontology for k.LAB), specialized to morphology, then further into size, height and plant height. One can assume that identifying height of corn would require a further specialization of plant height, and the corn identity would simply be implied syntactically by using a Corn… prefix in the term assigned to the concept. Further exploration of the PTO reveals that giant embryo (a gene type) is a sibling of plant height, both specialized from whole plant size through is-a inheritance. Further is-a specialization of plant height defines, among others, the concept plant height uniformity (a quality not physically commensurable with height) and relative plant height (seemingly adding an observation-related attribute, relativity, out of many possible). The PTO is one of the most advanced domain ontologies in use with respect to phenomenological characterization, and its terms have proved useful to large communities. Yet it is clear from this example that no ontology can force users to adopt cogent annotation practices, ensuring that physical and biological identities are preserved along inheritance chains and that attributes retain traceable and stable meaning. These are key requirements to help prevent inconsistencies and better assist annotation in service of the FAIR goals.
If the same exercise were replicated in k.IM, for example to annotate a raster map file describing corn height in cm in a given region, the language itself would have driven the specification of the semantics:\n\n\n\nOr for increased readability:\n\n\n\nThe measure (observable) in (unit) syntax (see below and Table 4), one of k.IM’s observer statements, embodies the semantics for the how of observation discussed above, and requires that the primary observable, in this case im:Height, be a physical property, simultaneously enforcing the use of units of measurement appropriate for its physical nature. Definitions (11) and (12) intentionally use agriculture:PlantIndividual instead of biology:Individual, as the latter requires a precise species identity (definition 7), while the former references the commonsense taxonomy used in AGROVOC for crop types, reflecting the intended semantics for the data. Most importantly, the adoption of rigorous phenomenological inheritance and specification syntax requires the realization (and the explicit statement) that height is first of all a quality of a plant subject, and that the data refer to plants within a cropfield subject. These logical axioms are a necessary base for any reasoning that can assess their compatibility within applications. While such details still need to be learned by a user, the syntax itself serves as a guide for the annotation workflow: the use of inconsistent observables or the lack of proper inherency would yield ungrammatical statements that are reported as errors. For example, leaving out the of specification would cause height to become abstract, therefore not usable for data annotation; leaving out within would leave the context of inherency for the quality blank, reported as an error for any non-abstract quality.
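The k.IM listings for definitions (11) and (12) did not survive into this text. Based on the measure (observable) in (unit) syntax and the identifiers named above, they plausibly read along these lines (a hedged reconstruction, not the published listings):

```
measure im:Height of agriculture:PlantIndividual within agriculture:CropField in cm

measure im:Height
   of agriculture:PlantIndividual
   within agriculture:CropField
   in cm
```

The single-line and indented forms are intended as equivalent; only the identifiers im:Height, agriculture:PlantIndividual and agriculture:CropField and the keywords measure, of, within and in are taken from the surrounding text, while the exact layout is a guess.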
The result is readable by a non-expert and compiles to axioms specifying a single OWL concept, which can be transferred to a remote endpoint in axiomatic form and reconstructed for reasoning or database querying; the shared worldview is the only requirement for its interpretation. The concept as constructed carries information about physical nature, dimensionality, domain of application, agricultural identity, biological identity and context of inherency (plants within a cropfield). These are assembled through consistent logical restrictions and are robust to validation and machine reasoning. On this basis, inferences can be performed that use the annotated dataset to satisfy queries beyond the asserted quality, e.g. for presence of CornPlant as discussed previously. Simply through reasoning on the concept, a query for an observation of the height of generic agriculture:PlantIndividual in any earth:Region (of which agriculture:CropField is a subtype) could be satisfied, in the absence of a more specific match, by the same corn height data.\n\nThese statements are used in data and model annotation (as opposed to ontology definition) to express either data semantics or model dependencies; when necessary, they automatically apply the semantic operators of Table 2 to build the corresponding concepts. In addition, they specify observation semantics (such as units, currencies or categories) so that the concepts can be associated with specific data values and mediated when necessary.\n\nThe simplest examples of usage (Tier 1) exploit pre-defined concepts from the worldview to annotate resources.
In such cases, knowledge of the syntax and simple search tools allow a user to produce annotations that can accompany informational assets for automated discovery and indexing:\n\n\n\nSpecifications such as (13) are simple enough to be added to metadata or “sidecar files” – files with a “.kim” extension that accompany data files with the same name - which may be automatically detected and indexed by specifically designed web crawlers, so that indexes of web-accessible, annotated datasets can be built and maintained. Specification (13) is complete and correct, as geography:Elevation is fully characterized in terms of inherency within the worldview, as seen in statement (1). The statement of the worldview name is enough to load the web-accessible worldview and use it to interpret the specification that follows. To annotate qualities, which encompass a majority of data artifacts, the syntax for observer statements (such as measure in the example above) is enough to represent all observation semantics known to k.IM. The set of available observer statements (Table 4) is small and has proven easy to learn and use in ARIES coursework and test user communities.\n\nWhile this article does not fully describe modeling and annotation features of k.IM and k.LAB, we note that data annotation is not restricted to qualities. Subjects can also be annotated with ease using a slightly different syntax:\n\n\n\nAs subjects are observed directly (without needing units or other known observations for comparison), the simple acknowledgement of the semantics is enough to annotate a source of objects, such as roads in a vector file. The keyword each, only applicable to countable observables, reflects the fact that such sources can produce one or more subjects with the specified semantics; the model statement also allows annotation of semantics for any attributes of the observed subjects (not shown). 
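For concreteness, a sidecar annotation in the spirit of specification (13), and a subject annotation using the each keyword, might look roughly as follows (syntax and identifiers are illustrative guesses based on the descriptions above, not the published listings):

```
worldview im

measure geography:Elevation in m

model each infrastructure:Road
```

Here the worldview statement names the web-accessible worldview used to interpret what follows; geography:Elevation is the concept characterized in statement (1), while infrastructure:Road is a hypothetical countable observable standing in for the roads-in-a-vector-file example.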
The k.IM language and k.LAB infrastructure build on these semantic foundations to enable a distributed modeling infrastructure, in which the resulting observations can be complemented, through further extension to the model syntax, with procedural information to create “live” observations interacting on a networked infrastructure, in compliance with their semantics. Such features, briefly described for the ARIES application in 5, will be more thoroughly illustrated in forthcoming contributions and documentation.\n\nFAIR+ interoperability requires the unsupervised assessment of compatibility between semantically annotated resources. We use the term compatibility to refer to concepts and interoperability to refer to observations of compatible observables. In k.LAB, compatibility enables interoperability in two fundamental ways:\n\n1. Validation of connections, for example in ensuring that a model’s dependent observables are compatible with the computed output in terms of inherency;\n\n2. Discovery and retrieval of compatible observables for queries stated only through their semantics, so that the best source of information (data or model) for a required observable can be located on the network when requested by users or models being computed.\n\nUsing the notion of interoperability illustrated in Section 1.2 and the semantic foundations illustrated so far, the assessment of compatibility for interoperability can be defined as follows.\n\nTwo observables (O1, O2) are compatible if and only if:\n\nThe main observable concept in O1 (without considering traits and inherency) equals, or is a more specialized version of, the main observable in O2;\n\nO1 adopts all the same traits and roles as O2 (which may have additional traits); e.g. 
the main observable in O1 and O2 may be a generic length, but if O1 is vertical, O2 must also be;\n\nIf O1 has an inherent type (of), O2 must have a compatible one;\n\nIf O1 has a context type (within), O2 must have a compatible one.\n\nIf observables are compatible, their observations are interoperable. They are FAIR+ interoperable if and only if:\n\nTheir observables are compatible;\n\nTheir observation semantics can be mediated (e.g. both are measurements in compatible, but possibly different units);\n\nTheir context can be mediated: the intersection of the extents (e.g. space, time) of the scale for both observables is non-empty and the resolution of each extent is the same as, or can be resampled to fit, the other’s.\n\nThis definition is amenable to being incorporated in an unsupervised algorithm. Mediation may engender information loss (e.g. aggregation error) and other uncertainties (e.g. when bridging different classification systems), which should be recorded as provenance38 in separate records kept with the dataflow. In queries, when more than one interoperable observation may be returned, any potential information loss can become part of the criteria used to rank the appropriateness of each candidate observation that matches the observable. On a semantic level, the match may also be incomplete. For example, some traits of the matching observation may not be stated in the query, e.g. a vertical length could match an unspecified one. This offers a base to develop ranking strategies considering, among other criteria, metrics of semantic accuracy or distance; the latter is an important criterion in k.LAB and will be discussed in detail in further contributions.\n\n\n3. Discussion and perspectives\n\nDistributed databases with their contents annotated according to a common worldview can allow the kind of large-scale, yet precise, semantically-driven interoperability that has so far remained a high-ranking wish in the semantic web community. 
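As noted above, the compatibility definition is amenable to an unsupervised algorithm. A minimal sketch in Python follows; the Observable representation, the toy subsumption hierarchy and the trait convention (taken literally from the vertical-length example) are illustrative assumptions, not the k.LAB implementation:

```python
from dataclasses import dataclass
from typing import Optional

# Toy subsumption hierarchy (concept -> its ancestors); illustrative only.
HIERARCHY = {
    "CornHeight": {"PlantHeight", "Height", "Length"},
    "PlantHeight": {"Height", "Length"},
    "CropField": {"Region"},
}

def subsumes(general: str, specific: str) -> bool:
    """True if `specific` equals `general` or is a specialization of it."""
    return specific == general or general in HIERARCHY.get(specific, set())

@dataclass(frozen=True)
class Observable:
    concept: str
    traits: frozenset = frozenset()   # traits and roles adopted
    inherent: Optional[str] = None    # the 'of' type
    context: Optional[str] = None     # the 'within' type

def compatible(o1: Observable, o2: Observable) -> bool:
    """o1 is compatible with o2 under the four conditions in the text.
    Trait convention follows the vertical-length example: any trait
    adopted by o1 must also be adopted by o2."""
    # 1. o1's main concept equals o2's or specializes it.
    if not subsumes(o2.concept, o1.concept):
        return False
    # 2. all of o1's traits and roles are adopted by o2.
    if not o1.traits <= o2.traits:
        return False
    # 3./4. inherent ('of') and context ('within') types must be compatible.
    for a, b in ((o1.inherent, o2.inherent), (o1.context, o2.context)):
        if a is not None:
            if b is None or not (subsumes(a, b) or subsumes(b, a)):
                return False
    return True
```

Under this sketch, data annotated as CornHeight of PlantIndividual within CropField satisfies a query for Height of PlantIndividual within Region, mirroring the corn height example above; the remaining FAIR+ conditions (mediation of observation semantics and of extents) would be layered on top of this purely conceptual check.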
SMM, a modeling approach where FAIR+ interoperability is an integral requirement, sees data and models as definitions for possible observations: while datasets can produce, possibly through mediation of observation or context semantics, the requested observations in a self-contained way, models do so through computation that may involve the observation of other concepts they depend on, to be resolved through other data or models. Distributed databases of k.IM-annotated data and models can be built using k.LAB and accessed through modern web services39 with distributed, certificate-based authentication. These services form an operational semantic web whose nodes contain FAIR-compliant scientific observations and models. In forthcoming contributions, we will describe the ways that k.LAB enables the assemblage, validation and computation of scientific workflows that observe an arbitrary user-requested concept in a user-defined context. These functionalities have been informally described in the context of the ARIES project5.\n\nWhile the coverage and scale of our applications so far remains too small to warrant claims of large-scale success, our experience with ARIES indicates that building such distributed knowledge bases is possible and practical. Large-scale initiatives such as NEON40 and CSDMS41, directives such as INSPIRE42, and many others are seeking interoperability of data, increasing requirements and incentives for data openness and publication43, and implementing new data release standards that emphasize the accessibility of information. Approaches that can facilitate the development of consistent semantics beyond textual metadata and controlled vocabularies become essential. The FAIR criteria outline a way of gathering all these observations that will greatly advance science synthesis44.
Faced with a state of the art in which semantic interoperability is still often understood as “matching of terms”41, we argue that the semantic research and infrastructure available to date are still not ready for a FAIR+ interpretation of interoperability, and propose the work presented here as a contribution towards it.\n\nAt the time of this writing, the k.LAB infrastructure and the im worldview are used to annotate datasets and models numbering just under one thousand, and have been exposed to about 150 users, of whom only about 20 use it for their daily work. These are very small numbers compared to the ambition of open data and the importance of interoperability in scientific discourse. Our experience in ARIES has highlighted both strong and weak points in the attempt to create a systematic and accessible path to rigorous semantic annotation for practitioners. Advantages recognized by the user community are:\n\nClarifying the components of interoperability, so that conceptualization efforts are focused and a suitable workflow can be identified. The most important aspect in this sense is the clear focus on observable semantics: no time is wasted seeking semantics to express model-related concepts (“model”, “variable”), observation-related ones (“measurement”) or context-related ones (“spatial resolution”), all of which figure prominently in commonly used ontologies.\n\nFormalizing a simple phenomenology for observables and universals. The base observables in Figure 1 have proven intuitive enough to be understood and remembered by diverse users, helping them “home in” quickly on observable semantics as described in the previous point.
Also, the use of independently defined and flexibly attributed universals to express attributes, identities and roles has effectively and intuitively solved, in our applications, the persistent issue of excessive and improper specialization.\n\nThe k.IM language and k.LAB platform make ontologies and annotations immediately actionable, enforcing the logical consistency of each definition both by enforcing syntactical correctness through intelligent editing tools and by employing a machine reasoner45 to identify and report logical errors to the user. The language guides, simplifies and validates the definition of knowledge; the support software provides feedback and allows users to immediately perform queries and compute workflows whose results enable at-a-glance validation of the semantic correctness of the concepts employed.\n\nAt the same time, clear difficulties remain in instrumenting a path to broad adoption of an approach like the one we propose. For example, the use of a custom language to specify ontologies has disadvantages: the choice was inevitable for us due to the need to reach large bases of users other than knowledge engineers, but connecting to semantic web research and communities with a custom approach is of course much more difficult despite our commitment to OWL2. Another important difficulty is the need for complex, custom software to make the approach actionable, with obvious costs and difficulties related to its development, distribution and maintenance.\n\nFinally, building and sharing worldviews that reflect large and complex domains remains a daunting task, despite the guidance of a systematic conceptual framework and methodology. In particular, developing a collaborative process to ensure that the worldview reflects the uncontroversial thinking of large communities requires both large collaboration investments and sophisticated tooling for harmonization and refactoring.
Despite the success of mid-size initiatives like ARIES, we are at the very beginning of an ambitious effort whose challenges may yet prove too great for large-scale adoption.\n\nTo our minds, these difficulties are offset by the potential for the collaborative, wider use of scientific products that would be enabled by such a rigorous, semantically-driven interoperability. The ability to automatically discover and compute dataflows based only on conceptual queries opens pathways that may lead to much larger use of scientific products, with a potentially much larger involvement of decision-makers and citizen scientists. Our efforts are sustained and motivated by the realization of the potential of effective, actionable interoperability to promote and enable a more efficient economy of knowledge, creating clear incentives for sharing data and models, so that they may become part of large and yet undiscovered computational chains.\n\n\nSoftware availability\n\nThe k.LAB software is available in source form from Bitbucket and in binary form from the Integrated Modelling collaboration site.",
"appendix": "Author contributions\n\n\n\nFV is responsible for the original SMM vision, led the development of the approach, designed the k.IM language and the supporting k.LAB infrastructure, and wrote the majority of the text. SB and IA participated in the development of the approach, led crucial case studies, and contributed to the design of the semantic principles, to the software implementation and to the writing. CC contributed to the work on authorities, provided experience with philosophical underpinnings, provided context for agricultural applications, and contributed to the writing.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nDevelopment was originally funded by the US National Science Foundation (grant 9982938) and received partial support from the ASSETS and WISER projects funded by ESPA/NERC (grants NE-J002267-1 and NE/L001322/1). ARIES is developed with partial support from the EU-Horizon 2020 project AQUACROSS (grant agreement no. 642317).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nKen Bagstad, Simon Willcock and Mikel Egaña Aranguren provided advice and feedback on early drafts. Many participants in the ARIES project and the International Spring University on Ecosystem Services Modeling provided valuable feedback and testing on the approach, the k.IM language and the software infrastructure over almost a decade. Discussion with the Global Agricultural Concept Scheme (GACS) working group (in particular Johannes Keizer, Thomas Baker, Devika Medalli, Elizabeth Arnaud, Medha Devare, Sophie Aubin) also greatly helped focus, shape and improve the details of the approach. Giovanni L’Abate drove the development of the SOIL.WRB authority.\n\n\nReferences\n\nKumazawa T, Saito O, Kozaki K, et al.: Toward knowledge structuring of sustainability science based on ontology engineering. Sustain Sci. 2009; 4: 99.
Publisher Full Text\n\nWilkinson MD, Dumontier M, Aalbersberg IJ, et al.: The FAIR Guiding Principles for scientific data management and stewardship. Sci Data. 2016; 3: 160018. PubMed Abstract | Publisher Full Text | Free Full Text\n\nJain P, Hitzler P, Sheth AP, et al.: Ontology alignment for linked open data. International Semantic Web Conference. Springer; 2010; 402–417. Publisher Full Text\n\nLudäscher B, Lin K, Bowers S, et al.: Managing scientific data: From data integration to scientific workflows. Geol Soc Am Spec Pap. 2006; 397: 109–129. Publisher Full Text\n\nVilla F, Bagstad KJ, Voigt B, et al.: A methodology for adaptable and robust ecosystem services assessment. PLoS One. 2014; 9(3): e91001. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMillennium Ecosystem Assessment: living beyond our means - Natural assets and human well-being. [Internet]. 2005. Reference Source\n\nBerners-Lee T, Hendler J, Lassila O: The semantic web. Sci Am. 2001; 284(5): 34–43. Publisher Full Text\n\nAntoniou G, Van Harmelen F: A semantic web primer [Internet]. MIT press; 2004. Reference Source\n\nGuarino N: Formal ontology and information systems. Proceedings of FOIS. 1998; 81–97. Reference Source\n\nVilla F, Athanasiadis IN, Rizzoli AE: Modelling with knowledge: A review of emerging semantic approaches to environmental modelling. Environ Model Softw. 2009; 24(5): 577–587. Publisher Full Text\n\nPorter CH, Villalobos C, Holzworth D, et al.: Harmonization and translation of crop modeling data to ensure interoperability. Environ Model Softw. 2014; 62: 495–508. Publisher Full Text\n\nGoguen JA: Data, schema, ontology and logic integration. Log J IGPL. 2005; 13(6): 685–715. Publisher Full Text\n\nKeet CM: The use of foundational ontologies in ontology development: an empirical assessment. Extended Semantic Web Conference. Springer; 2011; 321–335. Publisher Full Text\n\nGangemi A, Guarino N, Masolo C, et al.: Sweetening ontologies with DOLCE. 
International Conference on Knowledge Engineering and Knowledge Management. Springer; 2002; 166–181. Publisher Full Text\n\nArp R, Smith B, Spear AD: Building ontologies with basic formal ontology [Internet]. MIT Press; 2015. Reference Source\n\nMascardi V, Cordì V, Rosso P: A Comparison of Upper Ontologies. WOA. 2007; 55–64. Reference Source\n\nPease A, Niles I, Li J: The suggested upper merged ontology: A large ontology for the semantic web and its applications. Working notes of the AAAI-2002 workshop on ontologies and the semantic web. 2002. Reference Source\n\nMadin J, Bowers S, Schildhauer M, et al.: An ontology for describing and synthesizing ecological observation data. Ecol Inform. 2007; 2(3): 279–296. Publisher Full Text\n\nCox SJ: An Explicit OWL Representation of ISO/OGC Observations and Measurements. Proceedings of the 6th International Conference on Semantic Sensor Networks. Aachen, Germany: CEUR-WS.org, 2013; 1063: 1–18. Reference Source\n\nRaskin RG, Pan MJ: Knowledge representation in the semantic web for Earth and environmental terminology (SWEET). Comput Geosci. 2005; 31(9): 1119–1125. Publisher Full Text\n\nButtigieg PL, Morrison N, Smith B, et al.: The environment ontology: contextualising biological and biomedical entities. J Biomed Semant. 2013; 4: 43. Publisher Full Text\n\nGrenon P, Smith B: SNAP and SPAN: Towards dynamic spatial ontology. Spat Cogn Comput. 2004; 4: 69–104. Publisher Full Text\n\nAshburner M, Ball CA, Blake JA, et al.: Gene ontology: tool for the unification of biology. The Gene Ontology Consortium. Nat Genet. 2000; 25(1): 25–29. PubMed Abstract | Publisher Full Text | Free Full Text\n\nIlic K, Kellogg EA, Jaiswal P, et al.: The plant structure ontology, a unified vocabulary of anatomy and morphology of a flowering plant. Plant Physiol. 2007; 143(2): 587–599. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCaracciolo C, Stellato A, Morshed A, et al.: The AGROVOC linked dataset. Semant Web.
2013; 4(3): 341–348. Publisher Full Text\n\nHood MW, Ebermann C: Reconciling the CAB Thesaurus and AGROVOC. Q Bull IAALD. 1990. Reference Source\n\nTarboton DG, Horsburgh JS, Maidment DR: CUAHSI community Observations Data Model (ODM) version 1.1 design specifications. Des Doc. 2008. Reference Source\n\nNelson SJ, Johnston WD, Humphreys BL: Relationships in medical subject headings (MeSH). Relationships in the Organization of Knowledge. Springer; 2001; 2: 171–184. Publisher Full Text\n\nSmith B, Ashburner M, Rosse C, et al.: The OBO Foundry: coordinated evolution of ontologies to support biomedical data integration. Nat Biotechnol. 2007; 25(11): 1251–1255. PubMed Abstract | Publisher Full Text | Free Full Text\n\nArmstrong DM: A theory of universals: Universals and Scientific Realism [Internet]. CUP Archive; 1978; 2. Reference Source\n\nJowett B, et al.: The republic of Plato. Clarendon Press; 1888. Reference Source\n\nMotik B, Patel-Schneider PF, Parsia B, et al.: OWL 2 web ontology language: Structural specification and functional-style syntax. W3C Recomm. 2009; 27: 159. Reference Source\n\nFree and Open Access to Biodiversity Data GBIF.org [Internet]. [cited 3 Mar 2017]. Reference Source\n\nHeller S, McNaught A, Stein S, et al.: InChI - the worldwide chemical structure identifier standard. J Cheminform. 2013; 5(1): 7. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFood and Agriculture Organization of the United Nations: World reference base for soil resources. World Soil Resour Rep. 1998; 84: 21–22. Reference Source\n\nBaker T, Caracciolo C, Doroszenko A, et al.: GACS Core: Creation of a Global Agricultural Concept Scheme. Metadata and Semantics Research: 10th International Conference, MTSR 2016, Göttingen, Germany, November 22–25, 2016, Proceedings. Springer; 2016; 311–316. Publisher Full Text\n\nYamazaki Y, Jaiswal P: Biological ontologies in rice databases. An introduction to the activities in Gramene and Oryzabase. Plant Cell Physiol.
2005; 46(1): 63–68. PubMed Abstract | Publisher Full Text\n\nSimmhan YL, Plale B, Gannon D: A survey of data provenance in e-science. ACM SIGMOD Rec. 2005; 34(3): 31–36. Publisher Full Text\n\nPautasso C, Zimmermann O, Leymann F: RESTful web services vs. “big” web services: making the right architectural decision. Proceedings of the 17th international conference on World Wide Web. ACM; 2008; 805–814. Publisher Full Text\n\nHampton SE, Strasser CA, Tewksbury JJ, et al.: Big data and the future of ecology. Front Ecol Environ. 2013; 11(3): 156–162. Publisher Full Text\n\nPeckham SD, Hutton EW, Norris B: A component-based approach to integrated modeling in the geosciences: The design of CSDMS. Comput Geosci. 2013; 53: 3–12. Publisher Full Text\n\nDirective I: Directive 2007/2/EC of the European Parliament and of the Council of 14 March 2007 establishing an Infrastructure for Spatial Information in the European Community (INSPIRE). Publ Off J. 2007. Reference Source\n\nMolloy JC: The open knowledge foundation: open data means better science. PLoS Biol. 2011; 9(12): e1001195. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPeters DP, Havstad KM, Cushing J, et al.: Harnessing the power of big data: infusing the scientific method with machine learning to transform ecology. Ecosphere. 2014; 5(6): 1–15. Publisher Full Text\n\nPoole DL, Mackworth AK, Goebel R: Computational intelligence: a logical approach [Internet]. Oxford University Press New York; 1998. Reference Source"
}
|
[
{
"id": "22974",
"date": "14 Aug 2017",
"name": "Carol Goble",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThe work described represents the first in a number of expected publications from a long running and important effort to build an integrated “e-Laboratory” for ecosystems modelling. The ambition of this k.LAB is to support the automatic assembly of scientific workflows over data and models, which in turn requires that inputs and outputs are compatible. The ARIES project web site gives some indication of the ambition and the driving application of work reported – more so than the paper.\nThis paper chiefly sets out to describe the semantic principles and methods to support the required interoperability (dubbed “FAIR+”) through systematic semantic descriptions of observable phenomena, the observations made on the observables and the context of these observations.
The majority of the paper is given over to the philosophical underpinnings of these semantics and the presentation of a bespoke language and its support software developed by the authors to support the expression of the semantics.\nThus the main thrust of the paper is:\n(i) the fundamental arguments behind the development of an ontology for the systematic representation of observable phenomena (“specifying observable semantics”) suitable for ecoservices modelling across different domains and different contexts (notably scales), and the use of this ontology to make observations about data and models (or “informational artifacts”) through annotations (“specifying observation semantics”);\n(ii) the presentation of the k.IM language of the authors’ invention to express the ontology and annotations, with bespoke support software;\n(iii) the claim that these annotations serve to support model input-output compatibility and hence the automatic assembly of scientific workflows to answer questions posed using this (and other?) ontologies.\nEach of these points is interesting and worthwhile, most notably the development of a semantics of observables that can transcend the difficulties of scales. The final point – demonstrating interoperability through compatible observations – is the least well demonstrated in the paper.\nThe semantics of observations is an important topic and has long challenged ontologists and those working in the fields of data annotation. The paper presents valuable insights into the challenges and thought processes in the development of the authors’ phenomenon-based semantic infrastructure and makes a case for why interoperability of observations is paramount for ecosystem modelling. The notion of defining a “worldview” is a useful one. The tiers of users chime with experiences and expectations reported in the biosemantics literature.
The authors are also well aware of the limitations of their approach as discussed in section 3.\nThe k.IM language is interesting, and it is more than plausible that one would want a language to disguise OWL2: though claims to its ease of readability and its compatibility with OWL2 are not really demonstrated.\nIt’s a very dense paper that packs a lot in and requires several readings. Part of the challenge of the paper is to describe the semantics coherently and completely enough to be convincing whilst leaving examples of how the semantics are used to other papers. It partially succeeds. The paper’s presentation is also often frustrating.\nNonetheless this is stimulating and valuable work and a useful contribution to the semantics of observables and observations.\nSuggestions for improvements:\nOntologies, semantics, whatever you like to call them, are for a purpose. The authors note the success of domain ontologies is because of their purpose. However, very little information is given on the purpose of the semantics of observations: the application, the nature of the questions, data and models, whether the data to be interoperated is public data or privately annotated or the nature of compatibility that is being sought. Section 2.6 and some hints in section 1.2 are the only hints we have towards the driver of the work – that is input-output compatibility. Many readers will not know what socio-ecological modelling is. A much better description, all in one place with an example from the beginning would be beneficial.\nIt is not clear what is even meant by a model – the annotation example in section 2.5 suggests it is a tiff file. An example that will exercise compatibility across scales would be ideal. A clear description – with example - of the application of the enterprise would make it much easier to judge the value of the method and to judge the claims to interoperability made in the abstract and introduction. 
The salinity example in section 2.3 is the most compelling, as an example of how all the work of the semantics could be put to use and it’s a pity this wasn’t developed further in favour of a shorter treatment of the metaphysics of the semantics of observables.\nThere is a tendency to introduce key points and terminology almost as asides as one goes along which makes it much harder to digest than it should be. For example:\n\nInherency for qualities is an important concept and is frequently referred to but is not specifically defined. Configuration is briefly introduced as an aside example – does the configuration Terrain render the need to state that Elevation is within Earth:Terrestrial earth: Region redundant as Elevation is im:Height of earth:Terrain? The whole of section 2.2 is a drip-feed of new keywords and terms that makes it a slog to work out from the examples what the semantics and syntax of the k.IM language actually are. The software is not amenable to the uninitiated – there are few readmes and the documentation seems to be entirely in a password-guarded wiki. Table 1 gives some words, but definitions of “is”, “as”, “described”, “requires”, “has children” etc are not given.\nTables 1 and 2 should have an example for each entry. A complete definition of the full syntax of the language is needed at least in supplementary materials\n\nThe resultant ontology of observables is not available or referenced as far as I am aware. Granted that the semantics as defined by the language are expressed through the language: nevertheless it is not clear what the “common phenomenology” referenced in section 2.5 ended up as.\nWhat other ontologies are imported? We discover that quality is imported from the BFO ontology as an aside on page 13. In section 1.1 the authors do a good job of discussing various classes of ontology and their roles, but then drop any further references. Tables 1 and 2 need to do a better job of the provenance of terms. 
If the paper is aimed at ontologists (as one might expect) then more detail is needed in the description of the final ontology. Several times the authors mention that the descriptions compile to OWL 2.0. It would have been instructive to see one such compilation or to have an example as a supplementary material. Claims to compatibility with OWL2 are not really demonstrated. Is the ontology of observables available?\n\nFigure 1 gives the intuition that the authors hope for but perhaps doesn’t stand up to close scrutiny. Coarse grained spatial scale and temporal scale as continuant subject? Are all subjects coarse grained?\n\nThe section on identities is well written and clear – “bridging authorities” have been attempted, for example http://www.bridgedb.org/ and to some extent identifiers.org. “Scientific lenses to support multiple views over linked chemistry data” (Batchelor C, et al., in The Semantic Web – ISWC 2014, Lecture Notes in Computer Science Volume 8796, 2014, 98-113) sets out the notion of linksets used in the Open PHACTS linked data platform for pharmacological data.\nOther related work has a few gaps.\nThe notion of ontological patterns and languages, some of which sit on top of OWL, has been addressed in the literature. OPPL (the Ontology Pre-Processor Language) dates from 2010 (http://oppl2.sourceforge.net/documentation.html) and more recently tawny-owl (https://github.com/phillord/tawny-owl); both promote the notion of Ontology Patterns, which is inherent in the k.IM approach. Webulous takes a spreadsheet approach to ontology patterns rather than a language-based one (http://www.ebi.ac.uk/spot/webulous/). Given that the k.IM language is effectively constraining Tier3 developers to certain patterns, this literature seems relevant. There is also the ontology annotation literature: examples include Zooma (http://www.ebi.ac.uk/spot/zooma/) and DOMEO (https://doi.org/10.1186/2041-1480-3-S1-S1), the latter of which uses the W3C Open Annotation Data Model. 
The notion of automated workflows using ontological annotations on inputs and outputs has also had some past attention. A well-known example is the WINGS system: A Semantic Framework for Automatic Generation of Computational Workflows Using Distributed Data and Component Catalogs. Gil, et al., Journal of Experimental and Theoretical Artificial Intelligence, 2011. http://dx.doi.org/10.1080/0952813X.2010.490962\n\nIs the rationale for developing the new method (or application) clearly explained? Yes\n\nIs the description of the method technically sound? Yes\n\nAre sufficient details provided to allow replication of the method development and its use by others? Partly\n\nIf any results are presented, are all the source data underlying the results available to ensure full reproducibility? Partly\n\nAre the conclusions about the method and its performance adequately supported by the findings presented in the article? Partly",
"responses": []
},
{
"id": "22819",
"date": "11 Dec 2017",
"name": "Pier Luigi Buttigieg",
"expertise": [
"ontology",
"semantics",
"bioinformatics",
"omics",
"microbial ecology"
],
"suggestion": "Approved With Reservations",
"report": "General comments: In this contribution, the authors present part of a system to infuse models and the data linked with them with semantic content through a high-level programmatic interface, bringing digital content more in line with the FAIR guiding principles. The manuscript presents some encouraging progress and clearly represents a formidable effort; however, it does reveal some concerning (but certainly addressable) issues with the system and the ontological framework it depends upon.\n\nIn my opinion, the strongest component of this contribution is the consolidated and practical approach to adding semantics to the toolbox of modellers. To me, material like code snippet (14) represents a hugely exciting and powerful advancement; I would be much more supportive of this submission if it focused primarily on these capabilities. The transparent and systematic handling of naming authorities and connection to internationally endorsed vocabularies is also promising, as is the care taken to prevent these conventions from dictating more rigorous semantics. As stated in the MS, the potential to use semantics to bridge these resources is immense and the authors’ work in this direction is of high value.\n\nI believe the weakest part of this work is the semantic framework described within it. I couldn’t find the ontologies themselves, so evaluation is based only on the descriptions in the manuscript. 
Frankly, I find the approach to handling \"observables\" haphazard and something of an uncomfortable shortcut relative to semantics developed in, e.g., the Biocollections Ontology (BCO). Initially, such shortcuts may appear to relieve a burden of using semantic technologies on end users, however, this quickly becomes uncontrolled. Further, in my experience, the commitment to realism in ontology (i.e. avoiding dealing with concepts and focusing on empirical phenomena) development is essential in the scientific realm to prevent rapid semantic drift between initiatives. The treatment of scale here seems to lend itself to this issue, unless the semantics reflect that scales overlap (I couldn’t find any evidence of this). Expecting a coherent \"common sense\" approach to prevail rarely bears fruit across multiple groups without concentrated effort at parsing out their knowledge into some mature upper level framework.\n\nFurther, I'm left wondering why there doesn't seem to be a developed attempt at reuse of ontologies such as BFO, DOLCE, or other upper level resources. The authors mention that they use BFO, but the framework they propose seems to counter many of its basic semantics without clarification or logical justifications. I’m afraid I can’t really understand how the reuse of these foundational and well-adopted semantics is actually implemented or if it’s defensible. That could be a paper in itself and it probably should be if the authors want to defend their proposals more completely: in this format, it’s very hard to endorse the proposals. I highlight a few specific issues in my detailed comments below.\n\nMuch the same can be said for the use of mid-level ontologies like the TO and ENVO in which the authors point out some important flaws/limitations. Generally speaking, the editors of such resources are happy to receive feedback and coordinate with external systems rather than forcing the duplication of work. 
Were there attempts at engagement via issue trackers or similar? The lack of engagement would be odd given how much the authors emphasise FAIRness. Perhaps a clarification of how, exactly, these resources were reused and what their future role in the ARIES system will be (assuming they are willing to cooperate with its development) would help.\n\nAdditionally, there are some descriptions and claims (e.g. that k.IM makes things \"immediately actionable\", more below) which need some form of support or demonstration to be substantiated. This need not be exhaustive: a few illustrative examples as a supplement should suffice. If no such demonstration is provided, I recommend omitting these claims.\n\nOverall, much of this contribution is more a specification rather than a report on results and outcomes. I suggest large parts of this (most tables and statements) be moved into a supplement and a few used to illustrate the key points made.\n\nDespite these critiques, I am very supportive of this work. This is one of the most streamlined semantics-to-modelling systems I’ve encountered and I’m hopeful that it can iron out the issues outlined above. Further, I echo the authors’ call for a more consolidated semantic landscape to allow such technologies the stability they need to grow efficiently. Parallel efforts should begin converging and complementing one another (e.g. OBO and other such resources providing worldviews for ARIES while ARIES reshapes their content when inconsistencies are found). I’d be very happy to support this moving forward\n\nDetailed comments: Below, please find a non-exhaustive set of specific comments to clarify and expand on my positions above. I don’t require a response to each one, but the major points they collectively describe should be addressed in a revision.\n\nWorldview: To me, a worldview sounds more or less like a “domain ontology”. Is this the case? 
If so, how are overlaps between domains handled?\n\nI feel uncomfortable with some of the syntax used in the various code snippets: e.g. how is a model a measure? This is not really intuitive and requires commitment to an unusual abstraction. This reflects some of the semantic ambiguity at the foundation of the ARIES/k.IM system discussed above.\n\nWhile I certainly can appreciate the effort involved, many years of design and user feedback do not necessarily translate, 1:1, into quality: I would tone down suggestions to this effect. This is especially true as the user/testing group is rather small (which doesn't mean the feedback was poor, but reduces the weight nonetheless). Simply stating the start date of the work and the number of active testers at the beginning and at the end would be enough.\n\nRegarding the statements about OWL/OWL2: It seems that the authors are proposing that those working with OWL or using other upper level ontologies should either 1) switch to the ARIES system or 2) maintain translators between OWL2 and ARIES-compliant languages. This is quite a bold position for a relatively new and untested system. I would suggest that this should be the other way around for the early phases: ARIES should develop translation layers to the established technologies.\n\nPerhaps I misunderstand, but I find it quite odd that the authors criticise other ontologies for requiring commitment to a given system of operation while simultaneously recommending that users subscribe to the IM system in the first few lines of discussion. As with any technology, users must understand the tools they work with and, indeed, commit to some sort of convention.\n\nWhat is promised in the forthcoming contributions is quite ambitious and exciting, but has little relevance to this submission and, naturally, no evidence to support the claims made. I thus suggest this be toned down considerably. 
An \"operational semantic web\" that is able to pull data together into a coherent data set for immediate analysis would indeed be very useful. There are application-specific examples of this in the biomedical domain (e.g. the Monarch Initiative). However, simply being linked in such a network does not make data and models FAIR-compliant. How do you know they are really interoperable and reusable unless the contributors of those datasets have made them so? k.LAB cannot magically make this happen, or can it? I think a careful look at what FAIR specifies is needed before claims of compliance are made.\n\n\"Faced with a state of the art in which semantic interoperability is still often understood as “matching of terms”\" By whom? It is true that many in the community tragically think of semantics as terminological mapping, but there are quite a few others that really do target the meaning behind terms. I think that statements like this don't do the community justice and give the false impression that the SMM/k.LAB solution is the only one addressing true semantics.\n\n\"In our view, a major need for progress towards this goal is a solid, uncontroversial phenomenological base, i.e. the basic semantics for the types of phenomena and entities that can be understood by human observers.\" I’m afraid that controversy is not likely to be dispelled: there will always be disagreement as human understanding is continually changing and growing, with researchers and other observers frequently disagreeing. Statements like this presuppose some form of authority which is antithetical to open progress. I would suggest that systems should be able to handle controversy while offering systemic stability. \"Formalisms and toolsets must be built to support it, to ease the specification of domains and allow for extension, while enforcing a consistent design discipline. 
We need clear best practices for specialization and connection of terms, and guidelines on how to integrate always growing, and potentially infinite, domain content from vocabularies without breaking the logical integrity of the resulting annotations.\" This sounds more or less identical to the objective and actual modus operandi of the OBO Foundry and Library. I would suggest that OBO Principles be discussed here.\n\nI take strong exception to how concepts are nearly equated to the objects of empirical observation – in my experience, this quickly becomes toxic to handling the informatics of natural science. This also doesn’t seem to resonate with how the authors treat particulars like real-world entities across the manuscript. This and superficial treatments of ideas like “Platonic realism” concern me. If particulars are concepts, how can they maintain their identity through time unless they are being conceptualized? What exactly is meant when the authors discuss the observation of a concept? I think such claims and arguments should be put forth in a separate paper reviewed by logicians and developers of upper level ontologies. The value of this paper is more the implementation of the semantics-to-model link.\n\n\"If a physical object, event, process or relationship can be simply acknowledged to exist in a context of interest, qualities, such as the elevation of a mountain or the temperature of a body, can only be observed indirectly, i.e. by comparison with reference observations.\" I don't think this works - the same can be said for the physical objects themselves. Every distinction is based on comparison to one or more points of reference. The idea of indirect observation of a quality but direct acknowledgement of a physical object, event, process, or relationship is also opaque to me. 
Further, I see no meaningful distinction between a process and event save an arbitrary fiat boundary.\n\nI recognise the importance of handling entities across scales, but the arguments offered don’t really convince me. Formalising scale semantics independently of the \"observables\" seems off: the observables don't disappear or cease to influence a system simply because the scale changes. Indeed, the scales being handled are anchored to the observables identified in a given application. This is true even if they are not “directly” observable. I'm not sure how the authors’ handling of scale is useful as it sounds like something that would actually amplify bias. Some sort of demonstration of why this approach is more useful is needed.\n\n\"The ability to accurately characterize semantics along observable, observation and context dimensions addresses the interoperable and reusable FAIR criteria. Semantic specifications can be rewritten into queries that select interoperable counterparts for an observation, addressing the findable and accessible requirements.\" Looking at the description of the FAIR criteria, I wouldn't say that they've been addressed so easily. Semantic technologies are of course needed to make data FAIR (especially the “I” criterion), but FAIRness is something that can only be evaluated in action, not in theory. I would weaken these claims until a demonstration is provided.\n\n\"While observation and context semantics are relatively well-understood, the characterization of what things are - observable semantics - remains difficult and uncertain, even with increasing investments in ontologies and vocabularies and an engaged community behind the current state of the art.\" Such a statement would need more support - how are these levels of understanding assessed by the authors? 
Some more clear arguments would go a long way here.\n\n\"Terms describing commonly acknowledged classes of physical entities (such as persons or objects) are complemented through inference, comparison, association and imagination...Such observable entities...\" These entities cannot be observable if they are imagined. The argumentation gets somewhat murky here. Or is imagination meant in terms of the creativity of the knowledge engineer? This is slightly less worrying.\n\n\"It follows that interoperability can exist in a conceptualization, as long as the boundaries of stability of meaning for all concepts with respect to their fundamental phenomenology are stable. Scale, commonly defined as the choice of resolutions and extents through which we make observations of the world, binds the observables of informational artifacts to precise phenomenological categories, establishing boundaries of validity for conceptualizations.\" Ideas like “fundamental phenomenology” are dubious: many ontology developers have been in search of this Holy Grail, but we can only deal with what empirical studies have predictably and repeatedly confirmed. Knowledge and meaning are maintained around these anchors. Further, scale is typically imposed by observers unless one considers the scale associated with an entity itself. I don’t see how this connects to the validity of a conceptualisation. Is it not valid to assert that roads are present in the European continent even though they can’t be observed due to coarse scale? Indeed, the idea that continuants themselves can be morphed is problematic: shifting perspective doesn’t make a continuant cease to be. This perceiver-based semantic basis seems counterproductive and no results are put forth to support arguments that it is superior to alternate modes of handling scale. I really don’t see how subjects can morph into qualities. 
Alluding to the cyanobacteria in water, the cyanobacteria haven't disappeared or changed; one is simply talking about the green quality of the chlorophyll in these cells integrated over the expanse of a different entity (a lake, pond, etc). A rigorous semantic solution would track this over multiple scales, rather than ‘fudge’ it and directly claim the water body is green.\n\nThere needs to be simplification of how \"structural\" and \"functional\" relationships are discussed. This is nothing more than differentiating instantaneous (in the SNAP logic) relationships from those that require a temporal window to be realised (e.g. realisations of functions and dispositions in the BFO world). Indeed, why do we need yet another way of doing something that is done by other upper level ontologies? If the authors must re-invent this, they should, at the very least, make the relationships to existing upper level resources clear and argue why their approach is superior (again, I think that should be a different paper).\n\n\"At the same time, it became clear that no community of modelers, data scientists or other prospective users would consider an investment in OWL or other semantic web-endorsed formalism as the vehicle to express the semantics in data and models\" Really? Why not? Why do they even need to know that OWL is being generated in the background? Why abandon a functional system because some unspecified community of users won't interact with it? Should we abandon HTML5 because average web users don't understand how to write it? Obviously not: it's just hidden, and tools that generate it are created (e.g. CMS tools). Claiming that some unspecified experience the authors have demonstrates the necessity of a completely new semantic standard is not scientific reporting or even convincing.\n\nI’m afraid that Figure 1 makes little sense to me. 
Not all occurrents are finer in scale than all continuants and, as the authors themselves argue in their examples, not all qualities are finer in spatial scale than all subjects. Full logical elucidations and robustness under counterexamples must be provided (again, another paper).\n\nRegarding the criteria of compatibility, expressiveness, readability, and parsimony:\n\nCompatibility: What sort of maintenance do these compilers require? Who will maintain them? How is accuracy evaluated? Are there use cases? I feel as though this needs clearer specification and I would dedicate more room to this than the attempts at upper level semantics in this manuscript. The authors may be considering publishing these in a follow-up paper, but they really should be here as these are the elements that are of the highest value.\n\nExpressiveness: Intuition is quite a subjective thing and actually the source of many semantic errors in the first place. I would consider it common knowledge that usage of similar terminology even by interacting expert groups often does not correspond to ‘uncontroversial’ usage unless they’re using a standardised guide. So I can’t support the idea that ‘keywords’ are sufficient to satisfy expressiveness, especially from a machine perspective. Further, most expressive (i.e. well-axiomatised) ontologies I'm aware of don't require users to know anything about the underlying axiomatisation in order to use their content, so this isn’t really novel. Sections like this switch the nature of the manuscript into something more like a review, rather than focus on the outcomes of the project at hand, which is confusing. Further, are synonyms of classes handled/matched or do keywords (i.e. primary labels) have to be memorised by the user?\n\nReadability: Naturally, very important; however, do the examples provided really improve on expressions that are entered into ontology editors such as TopBraid or Protégé? 
Further, the readability outlined here requires some effort by the user to subscribe to and understand the ARIES upper level, which must be learned to be useful, as with any other system.\n\nParsimony: I assume the authors mean that the system should allow post-composition. This is already done with \"dead simple design patterns\" or TermGenies. These should be acknowledged so as not to give the impression that this well-recognised issue hasn’t been at least partially addressed thus far.\n\n\"In k.IM, particulars and universals are combined to specify observables;\" This makes very little sense to me - perhaps this just needs rephrasing, but I don't see how or why there would be a need to combine universals and particulars. One always observes the particular, which is linked through instantiation with the universal.\n\n\"k.IM statements readable and understandable by mimicking English syntax, while specifying much more complex, correct and consistent OWL2 axioms\" In principle, I see this as a useful step forward - but as the more friendly syntax is less specific and logically fuzzy (Table 1), what happens when new content is added and the syntax expands? Will this not just result in another layer of ambiguity through the creation of a strange k.IM dialect that silos users into its translation system? This connects to the issues with “expressiveness” outlined above.\n\nI would like to see more elucidation of the definitions in Table 1 and also some developed justification as to why existing upper level treatments are not sufficient for the ARIES objectives. There may not be 1:1 matching in all cases, but I don’t see great difficulty in pre-composing the classes noted in this table. Once again, reuse should be preferred to duplication unless there’s a very clear reason not to go this route.\n\nSubject: Is this not just the subject of observation? 
This then presupposes that an observation action has occurred, which suggests that this is not a high-level class, but something lower-level and operational.\n\nconfiguration: This seems superfluous: isn’t this true of any precomposed and well-axiomatised class?\n\nThing: how is this different from entity? Why should this be inanimate? As this is a common term (due to Protégé and other resources), would it not be more sensible to call this “inanimate object”? Again, if this has to be the output of an observation process, is this not too low-level to warrant the use of the word “thing”?\n\nagent: This suggests all agents must be self-aware, correct?\n\nPriority: This is certainly not the normal, intuitive, or common-sensical usage of this word, which runs counter to the authors’ claims and reasoning above. Further, ranking is a numerical operation which can be arbitrarily applied across qualities, making this definition suspect. The same is true for quantity. There is a layer missing in these semantics.\n\nClass: Again, a very dubious use of a term that is more or less reserved in the semantics community and not naturally or intuitively associated with qualities.\n\nProcess: I’m not sure why a process would only include a single subject. This doesn’t seem very realistic. Do qualities need to change during participation in a process? I could go on with each row of this table. The bottom line is that this manuscript does not provide sufficient arguments to show that this treatment is stable: this should be done in a separate publication reviewed by developers of upper level semantic resources.\n\nConcerning the geographical elevation: it's very peculiar that the work done in other ontologies handling geology/geography and qualities is not used or their developers engaged to coordinate with this approach: again, it seems that this system is creating a silo, rather than linked, interoperable, and portable products. 
Given that this work is focused on FAIR thinking, I see this as a significant issue.\n\nStatement (1) is quite readable, but I don't really feel that this is so much easier than composing statements in Protege, using a reasonable set of ontologies. What is much easier is that all of the components can be called by just identifying the namespace. I can imagine this as a very approachable working environment, calling in external semantics that are more rigorously developed. I would encourage the authors to make this more prominent relative to the claim that k.IM/k.Lab offers an intuitive experience: it seems that it's just as idiosyncratic as other solutions.\n\nA graph/network figure illustrating the semantic model that corresponds to statement (1) - including a few classes not identified in the statement, but linked to those that are identified - would help show the interconnectivity and architecture of the system. This would make statements like \"This is done by tying the concept being defined to the core observation ontology\" much more meaningful.\n\n\"The language contains keywords for many fundamental quantities, allowing users easy specification in most situations (Table 1).\" Claims like this need some sort of support or explanation of why the authors think the coverage is so high. Which domain scientists have found this to be sufficient in most situations? If that data is not available, a description of what the authors have covered (as a supplement perhaps, as Table 1 cannot go into such depth) would be needed.\n\n\"along with constraints and relationships for all common scientific observables\" Similar issue to the above - I find it hard to believe or parse this statement. What is common here? How can the complete coverage be so confidently claimed?\n\nOperators in general: Is this not similar to/the same as what the Relations Ontology and other stores of Object Properties does? 
Are the properties defined for reasoning (more formally than the list in Table 2)? Are their reflexivity and inversion properties coded for reasoners and query systems to make use of? The language here suggests more novelty than there is and, once again, this doesn't seem to reuse well-adopted object properties that already exist, making claims of FAIRness weaker.\n\nTable 2: This reveals some confusing patterns - how does an operator produce a quality? I can understand presence being represented as a BFO:'dependent continuant' or PATO:quality, but I don't see how a relation (operator) can \"produce\" one.\n- What is the distinction, metaphysically, between a quality and a quantity?\n- Are \"Countables\" defined in a class, or are range restrictions placed on the operator?\n- What is meant by \"in a (spatial) context\"? Is not everything that concerns this system in a spatial context?\n- It seems strange to have occurrence as a shorthand for ‘probability of’.\nThere are more points of imprecision/ambiguity that I could list here, but perhaps I'm not appreciating how these are treated internally. This should be clearer in the paper: how are these checked for logical consistency?\n\nStatement (2): This seems like lax modelling: the CSV file is not the model, it's a CSV file which, perhaps, pertains to a model. I appreciate that the condensed syntax may expand internally to express things more precisely, but - if this is not the case - claims that this is semantically rigorous need to be softened.\n\n\"As concrete qualities (those of which observations can be made) can only exist inherently to a direct observable, the observable must be made explicit before the concepts can be used (e.g., earth:Region in the previous elevation example)\" This seems like semantics for no clear purpose. What is a non-concrete quality? Who defines the limits of observation? Naturally, a thing must exist (be instantiated) to be observed. 
Perhaps this is just poorly phrased?\n\n\"...so that users can easily locate concepts by textual searching.\" This assumes that users all use the same syntax and/or terminology – which is hardly supportable without training. A useful feature, nonetheless. However, does this mean that users can specify any \"concept description\" they wish? How is this semantically controlled? If these definitions are done haphazardly, this could defeat the purpose of a semantic resource.\n\n\"is first established as its fundamental nature\" Claims like this seem unfalsifiable, and thus suspect. Does this mean the authors take these as primitives? Is there a list of all these primitives with their logical elucidations present? If not, it's hard to support this as a semantic resource.\n\n\"it will be correct to annotate elevation within a watershed, as long as a previous statement defines the Watershed concept as a type of earth:Region.\" I’m not sure I follow. Does elevation inhere in the watershed or does it inhere in some entity that is, itself, somewhere within the watershed? This is not linguistically clear, and thus cannot be logically clear.\n\n\"The of keyword is used when the quality refers to a second, implicit observable in the context of inherency. For example, the “height of trees” quality in a region is inherent to that region, but implicitly describes tree subjects in it.\" This is hard to support - this quality does not actually inhere in a region. Like the chlorophyll in the lake example, this can be understood as shorthand. While I appreciate the convenience, unless this is expanded in the background, this feels like a very risky route in semantic modelling with too many shortcuts.\n\n\"In keeping with our readability requirement, we only allow two levels of specification and use two different keywords (within and optionally of).\" This makes the language in k.IM very idiosyncratic. 
It's debatable whether this aids readability in a way that yields sound semantics; it feels like setting one’s own goalposts without objective criteria.\n\n\"We found that legitimate chained specifications, such as “x within y within z within …”, were awkward and difficult to understand in usage tests and decided against allowing such statements.\" True, these are awkward, but why prevent them if they're logically valid? What's the alternative?\n\n\"Multiple chains of inherency of this kind can be defined using intermediate concepts\" This must be explained further.\n\n\"In knowledge domains (as opposed to physical ones), the implicit inherent subject is often a configuration.\" This raises a few red flags for me - it's very easy to create semantic resources that have little bearing on physical reality and are thus of questionable use in the natural sciences. I don't see why topology or terrain (3) is treated as not \"being directly amenable to providing the observable of an informational artifact\". It is the bearer of a quality, which is claimed to be linkable to an information artifact. Following this logic, wouldn't events also be indescribable? It is useful to be able to pre-compose semantics for a group of phenomena (~ a configuration) that are often referenced together, but these are still physical.\n\n\"One can assume that identifying height of corn would require a further specialization of plant height, and the corn identity would simply be implied syntactically by using a Corn… prefix in the term assigned to the concept.\" Are we talking about TO here? There is no PTO in OBO. Regarding TO and other OBO ontologies, this wouldn't/shouldn't be limited to a syntactic specialisation, but include an axiom linking the class to a taxonomy. Many classes in TO follow this recommendation. Also, \"giant embryo (a gene type)\" is not correct. It's not a gene type, it's a quality. 
Also, these are not \"concepts\" as OBO resources take a realist stance on knowledge representation.\n\n\"plant height uniformity (a quality not physically commensurable with height)\" One would have to be clearer about this. I see the issue here as one of inherence (this quality inheres in a collection of whole plants, thus conflicting with the assertion in the superclass). Did the authors reach out to the TO developers to correct this?\n\n\"relative plant height (seemingly adding an observation-related attribute, relativity, out of many possible).\" Again, this has to be clearer. I agree that this does not sit well in the TO as this is more an information artifact than a new type of quality and requires cross-axiomatisation with an ontology dealing with measurement processes. In OBO, this is not an uncommon occurrence and numerous \"stubs\" exist. These ontologies have issue trackers where ambiguities like these can be raised for later revision.\n\n\"Yet it is clear from this example that no ontology can force users to adopt cogent annotation practices, ensuring that physical and biological identities are preserved along inheritance chains and attributes retain traceable and stable meaning.\" I'm not sure that the examples provided illustrate this point. They do show that even well-developed ontologies aren't perfect and need some classes to be refactored. However, annotation practices are an order removed from the development of a semantic backbone. It’s of course possible to annotate in many ways, but whether these are going to work with the reference ontology and suit a user’s objectives is a different question. Traceable and stable meaning is handled by the URI scheme and obsolescence best practices. Further, I don't quite see how the k.IM example that follows (11) fixes these issues. The \"language itself\" is often rife with ambiguity and jargon and not a stable shortcut to develop rigorous semantic models (but can be good for a first pass). 
In the example, a \"measure\" is roughly the same as a BFO:quality, so they both require a material entity to inhere in. A quality and a unit do not embody the \"how\"; one would need a process ontology like the \"protocol\" branch of OBI to do that. A particular quality/measure can only enforce a set of units if such constraints are hard-coded in the ontologies used for k.IM - is this the case? Would it not be sensible and more sustainable to create a working relationship with TO and other ontologies to fix the errors spotted and provide ARIES with better worldviews?\n\n\"Definitions (11) and (12) intentionally use agriculture:PlantIndividual instead of biology:Individual, as the latter requires a precise species identity (definition 7), while the former references the commonsense taxonomy used in AGROVOC for crop types, reflecting the intended semantics for the data.\" How do the PlantIndividual and biology:Individual interoperate? This sounds like a point where semantic drift can occur quickly. In the OBO world, one would create union classes of biological species for less specific common terminologies. \"Most importantly, the adoption of rigorous phenomenological inheritance and specification syntax requires the realization (and the explicit statement) that height is first of all a quality of a plant subject, and that the data refer to plants within a cropfield subject.\" I'm not sure where this requirement is coming from - the user? The task? The worldview? 
Further, expressing this (along with logical constraints on qualities etc) is quite possible using an application ontology derived from PATO, PO, PCO and ENVO rather than developing an ad hoc semantics.\n\n\"While such details still need to be learned by a user, the syntax itself serves as a guide for the annotation workflow: the use of inconsistent observables or the lack of proper inherency would yield ungrammatical statements that are reported as errors.\" I don't really see how this is better than the reasoning checks used by other ontologies. Further, I'm not sure if grammatical evaluation will always be a good indicator - that largely depends on the labels used, whether the syntax is sensible (I’ve already noted issues with some usage above), and if the grammar is appropriate for all cases in the scope of this work (not evaluated). This is actually quite a major issue – whatever mechanisms are used to check logic should be absolutely true to that logic, not a grammar. Are there no reasoners used to check axiomatic consistency? \"non-abstract quality\" The distinction between abstract and non-abstract qualities etc remains very confusing to me. A box or a supplement is needed to help explain the need with clear examples.\n\n\"The result is readable by a non-expert and compiles to axioms specifying a single OWL concept, which can be transferred to a remote endpoint in axiomatic form and reconstructed for reasoning or database querying; the shared worldview is the only requirement for its interpretation.\" There are many \"ifs\" along this path: it may be readable by a non-expert, but so are asserted axioms in Protege - as the authors note above, the various conventions of both the k.IM system and the worldview chosen need to be familiar to the user for this to be not just read but understood. 
Yes, this can be transferred to an endpoint in an axiomatic form, but that assumes understanding of and stable interaction with the resources on the other end of that endpoint. I don't see evidence that this system does that as a rule: the authors should include examples of this or make this statement more clearly aspirational. The sentences following this one are claims of a similar kind that would need to be substantiated somehow.\n\n\"Specifications such as (13) are simple enough to be added to metadata data files with the same name - which may be automatically detected and indexed by specifically designed web crawlers, so that indexes of web-accessible, annotated datasets can be built and maintained.\" This is very interesting - similar approaches (tiny semantic annotation files) have shown promise (see the PhenoPackets project). I think this MS would be greatly improved by focusing on such practical approaches rather than semantics which are present in other resources. Spending more time talking about how the environment can link elements of information artifacts to ontologies (e.g. as in (14)) would target an urgent gap in research and implementation. Unfortunately, most of this is not shown.\n\n\"Specification (13) is complete and correct, as geography:Elevation is fully characterized in terms of inherency within the worldview, as seen in statement (1).\" What is meant by \"fully\" here? I don’t really see this as complete or correct. The definition of (1) is circular and calls for a model for description, which seems extraneous. Also, \"terrain\" is defined as pertaining to land surfaces only (3) implying that surfaces permanently covered by water don't have elevation (which I’m assuming means elevation above sea level), which is incorrect (e.g. 
many lakes such as Lake Tahoe and Khövsgöl Nuur).\n\n\"SMM, a modeling approach where FAIR+ interoperability is an integral requirement, sees data and models as definitions for possible observations: while datasets can produce, possibly through mediation of observation or context semantics, the requested observations in a self-contained way, models do so through computation that may involve the observation of other concepts they depend on, to be resolved through other data or models.\" Again, defining “FAIR+ interoperability” (which seems internally redundant) as accomplished without any real demonstration of it in action can’t be supported. Further, data and models are not definitions and treating them as such will likely lead to more ambiguity. The rest of this paragraph seems only half-developed and I’m not sure what the authors are trying to convey.\n\nReference 39 isn't really about k.LAB, while the prose suggests it is.\n\n\"The most important aspect in this sense is the clear focus on observable semantics: no time is wasted seeking semantics to express model-related concepts (“model”, “variable”), observation-related ones (“measurement”) or context-related ones (“spatial resolution”), all of which figure prominently in commonly used ontologies.\" I take strong exception to this - time is by no means wasted by handling the semantics of informational and procedural entities. The fact that these feature prominently in other ontologies should be an indicator of this. In fact, one could argue that procedural metadata is needed to handle anything FAIRly; otherwise there can be no true reproducibility. The fact that the current approach neglects these is not really a virtue to be celebrated, although I do recognise that an observational focus is a good point of initialisation.\n\n\"Formalizing a simple phenomenology for observables and universals. 
The base observables in Figure 1 have proven intuitive enough to be understood and remembered by diverse users, helping them “home in” quickly on observable semantics as described in the previous point. Also, the use of independently defined and flexibly attributed universals to express attributes, identities and roles has effectively and intuitively solved, in our applications, the plaguing issue of excessive and improper specialization.\" This is purely speculative without some sort of comparative study. As stated above, I see major issues with the phenomenology or base semantics used here and no evidence that they are more accurate or useful than other systems (assuming users have equal training in both).\n\n\"home in\" --> \"hone in\"\n\n\"The k.IM language and k.LAB platform make ontologies and annotations immediately actionable, enforcing the logical consistency of each definition both by enforcing syntactical correctness through intelligent editing tools and by employing a machine reasoner to identify and report logical errors to the user. The language guides, simplifies and validates the definition of knowledge; the support software provides feedback and allows users to immediately perform user queries and compute workflows whose results enable at-a-glance validation of the semantic correctness of the concepts employed.\" This is the most exciting part of this contribution, but its capacities are not shown or discussed at a meaningful depth. Which reasoners are used? Are they valid for any external resources brought in? What is meant by immediately actionable? How, exactly, does enforcement occur? Without more context, I can't see if the system actually simplifies and/or validates semantic work, as claimed.\n\nIs the rationale for developing the new method (or application) clearly explained? Partly\n\nIs the description of the method technically sound? 
Partly\n\nAre sufficient details provided to allow replication of the method development and its use by others? No\n\nIf any results are presented, are all the source data underlying the results available to ensure full reproducibility? No\n\nAre the conclusions about the method and its performance adequately supported by the findings presented in the article? Partly",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-686
|
https://f1000research.com/articles/6-685/v1
|
17 May 17
|
{
"type": "Case Report",
"title": "Case Report: Severe allergic reaction to Physiomesh® after laparoscopic ventral hernia repair",
"authors": [
"Laura Quitzau Mortensen",
"Thue Bisgaard",
"Tine Plato Hansen",
"Jacob Rosenberg"
],
"abstract": "Due to a relatively high rate of recurrence compared with other meshes for laparoscopic ventral hernia repair, Physiomesh® was recalled in 2016. This case report describes one of two reported cases of allergic reaction after insertion of Physiomesh®. The patient was a 50-year-old male who had a laparoscopic operation for ventral hernia with insertion of Physiomesh® as an intraperitoneal onlay procedure. Two months later, the patient was admitted with intense epigastric pain, and CT scan showed signs of ileus. Emergency surgery was performed revealing severe allergic signs in the abdomen and retroperitoneum. The mesh was removed. The postoperative course was complicated; the patient had multiple admissions with incomplete ileus, as well as recurrence of the ventral hernia. An open sublay hernia repair using a monofilament polyester mesh was subsequently performed with good effect. This case report illustrates a rare complication with mesh insertion. This adverse event adds to the risk of complications following implantation of mesh to reinforce a hernia repair. The allergic reaction was not suspected until the operation. Therefore, this report also illustrates the importance of surgeons’ clinical assessment and ability to take relevant action, which in this case consisted of mesh removal and retrieving tissue samples for histology.",
"keywords": [
"Hypersensitivity",
"allergy",
"allergic reaction",
"physiomesh",
"hernia mesh",
"case report"
],
"content": "Introduction\n\nVentral hernia repair is a common surgical procedure1. Physiomesh® Flexible Composite Mesh (Ethicon, Somerville, NJ, USA) was recalled in May 2016, due to a higher rate of recurrence and reoperations following laparoscopic ventral hernia repair compared with other meshes (https://archive.org/details/EthiconPhysiomeshRecall).\n\nIn this case report, we describe a rare complication of using Physiomesh®. Mesh-related allergic reaction following hernia repair has only been described in one previous case report2. We present a patient with allergic reaction to Physiomesh® after laparoscopic ventral hernia repair.\n\n\nCase report\n\nThe patient was a 50-year-old male. Ten years prior, the patient had a primary umbilical open non-mesh repair with good effect. The patient’s father died of colon cancer. There were no other dispositions. In 2014, the patient had recurrence of the hernia and underwent a laparoscopic intraperitoneal onlay mesh (IPOM) reoperation with insertion of Physiomesh®. The hernia defect, measuring 2.5 cm, was sutured prior to application of a Physiomesh®, measuring 13×13 cm and placed with the IPOM technique and fixated with ProTack® (Medtronic/Covidien, CT, USA) with double crown tacks. An emergency operation was performed later the same day due to severe abdominal pain reported by the patient. At the operation, no perforation, bleeding, or pathology was found. The surgeon removed three tacks at the point of maximum pain. The patient received intravenous (i.v.) cefuroxime 1.5 g × 2 perioperatively and, by oral administration, ibuprofen (600 mg × 4) and paracetamol PRN. He was discharged two days later with regained bowel activity and no pain.\n\nTwo months later, the patient was readmitted with acute epigastric pain. Blood tests showed leucocytes 19×10⁹/l and CRP 140 mg/l, with no eosinophilia. An abdominal CT scan was performed, due to suspicion of a perforated ulcer. 
The CT showed signs of ileus with no obvious obstruction or recurrence of the hernia. The patient underwent diagnostic laparoscopy with conversion to laparotomy, due to insufficient overview. The proximal half of the small intestine was dilated with no sign of obstruction or adhesions. The retroperitoneum was edematous, increasing towards the mesh. A thick layer of fibrin was observed in close proximity to the mesh, and the peritoneum was covered by a hard layer of connective tissue, which resembled granulation tissue. Mesh and tacks were removed, and tissue samples were sent for histology. Histology showed inflammation with an allergic foreign body reaction (Figure 1). The tissue samples consisted of soft tissue with chronic inflammation, many infiltrating eosinophils, fibrinoid degeneration, fibrosis, and areas resembling granulation tissue. Scattered birefringent foreign body material was present but there were no giant cells or granulomas. Postoperatively, the patient was treated with analgesics and a nasogastric tube due to pain and nausea, respectively. The patient was discharged 2.5 weeks after his reoperation with normalized bowel function and no pain.\n\nH&E stained tissue sample taken from peritoneum in close proximity to the Physiomesh®. The histology showed inflammation with an allergic foreign body reaction. Arrow A indicates hyaline fibrosis; arrows B indicate eosinophils, some intact and some degranulated.\n\nTwo weeks later, the patient was readmitted with incomplete ileus. CT showed the small bowel with a thickened wall and gathered in a conglomerate. Small bowel follow-through indicated normal passage. The patient was not operated on and was well and discharged four days later. Six months after the mesh was removed, the patient had CT-diagnosed recurrence of his umbilical hernia, as well as a new incisional hernia. The patient did not want another operation, and the hernias were therefore relieved with a truss. 
Eleven months after removing the mesh, the patient was again admitted with incomplete ileus and was treated non-operatively. Due to strong pain from the umbilical and incisional hernias, the patient had a final surgical procedure 18 months after removal of the Physiomesh®. The hernia defect now measured 10 cm craniocaudally and 8 cm transversely. The surgical procedure was performed as an open repair. The small intestines were adherent to the hernia sac and were only covered by skin. Complete adhesiolysis was performed, and the hernia was repaired with a modified Stoppa procedure with a Progrip™ mesh (Medtronic/Covidien, CT, USA) in the sublay position. During admission, the patient received standard perioperative antibiotics and analgesics. No further treatment was given. Five days later, the patient was well and discharged. For follow-up, the patient was contacted once by telephone. One year after this final operation, there have been no further admissions to hospital.\n\n\nDiscussion\n\nThe focus on foreign body implantation to the abdominal cavity has increased. This is partly due to the recall of Physiomesh® Flexible Composite Mesh on the basis of high rates of recurrence, as well as a recently published study on mesh-related surgical complications3.\n\nWe have presented a rare case of allergic reaction to Physiomesh®. The clinical presentation was acute epigastric pain two months after a laparoscopic ventral hernia repair. CT showed ileus with no sign of obstruction or recurrent hernia. The diagnosis of allergic reaction to the mesh was finally confirmed by histological examination of peritoneal tissue samples with chronic inflammation, many infiltrating eosinophils, fibrinoid degeneration, fibrosis, and areas resembling granulation tissue.\n\nThe strengths in the approach to this case were that the surgeons, based on the patient history and the perioperative findings, suspected an allergic reaction. 
They did so despite the rarity of allergic reaction and no indication from paraclinical tests. Furthermore, based on their assessment, they removed the mesh and took out tissue samples for histological verification. The limitation in the approach to this case was that the allergic reaction was not recognized at the first operation following mesh insertion and the patient had to wait for two months and develop incomplete ileus before the hypersensitivity reaction was diagnosed and the mesh removed.\n\nHypersensitivity reactions are immune responses that cause tissue injury4. Type IV hypersensitivity is the T cell-mediated response, which for instance is involved in contact sensitivity, chronic inflammation, and graft rejection4. There are few published reports on hypersensitivity reactions to meshes in general and only one regarding a mesh used for a hernia repair2. There are, to the best of our knowledge, no previously published articles reporting hypersensitivity to Physiomesh®. However, an adverse event report has been filed at the U.S. Food and Drug Administration regarding a possible allergic reaction (https://archive.org/details/MAUDEAdverseEventReportETHICONINC). Physiomesh® is a polypropylene mesh encapsulated by polydioxanone and coated with a monocryl layer on both sides of the mesh5.\n\nIn the present case report, the indications that a component of the Physiomesh® was the cause of the allergic reaction were that the edema and the fibrin layer were most pronounced in close proximity to the mesh, that the patient experienced remission of his symptoms after removal of the mesh, and that the patient experienced no hypersensitivity to the monofilament polyester mesh that was used for the final sublay repair.\n\n\nConclusion\n\nWe have presented a very rare case of severe allergic reaction to Physiomesh®. This adverse event adds to the risk of complications following implantation of mesh to reinforce a hernia repair. 
The allergic reaction was not suspected until during operation, since paraclinical tests were not indicative of allergy. Therefore, this case also highlights the importance of surgeons’ clinical assessment and ability to take relevant action, which in this case consisted of mesh removal and retrieving tissue samples for histology.\n\n\nConsent\n\nWritten informed consent for publication of the patient’s clinical details and/or clinical images was obtained from the patient.",
"appendix": "Author contributions\n\n\n\nAll authors contributed to the acquisition of the data. LQM interpreted the data and drafted the manuscript. TB, TPH and JR revised the manuscript critically. All authors have approved the final manuscript and are accountable for the work.\n\n\nCompeting interests\n\n\n\nLQM, TB and TPH have nothing to disclose. JR reports grants from Johnson & Johnson, grants and personal fees from Bard, personal fees from Merck, outside the submitted work.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nPoulose BK, Shelton J, Phillips S, et al.: Epidemiology and cost of ventral hernia repair: making the case for hernia research. Hernia. 2012; 16(2): 179–183. PubMed Abstract | Publisher Full Text\n\nVedak P, St John J, Watson A, et al.: Delayed type IV hypersensitivity reaction to porcine acellular dermal matrix masquerading as infection resulting in multiple debridements. Hernia. 2015; 1–4. PubMed Abstract | Publisher Full Text\n\nKokotovic D, Bisgaard T, Helgstrand F: Long-term Recurrence and Complications Associated With Elective Incisional Hernia Repair. JAMA. 2016; 316(15): 1575–1582. PubMed Abstract | Publisher Full Text\n\nKobayashi K, Kaneda K, Kasama T: Immunopathogenesis of delayed-type hypersensitivity. Microsc Res Tech. 2001; 53(4): 241–245. PubMed Abstract | Publisher Full Text\n\nDeeken CR, Faucher KM, Matthews BD: A review of the composition, characteristics, and effectiveness of barrier mesh prostheses utilized for laparoscopic ventral hernia repair. Surg Endosc. 2012; 26(2): 566–575. PubMed Abstract | Publisher Full Text"
}
|
[
{
"id": "22919",
"date": "03 Jul 2017",
"name": "Marc Miserez",
"expertise": [
"Surgery"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nMajor comment\nWith the current data provided, I am not convinced we are dealing with an allergic reaction:\nNo bacteriology results are provided: are we not dealing with chronic infection?\n\nA macroscopic picture of the intraabdominal findings during the reoperation after 2 months is missing.\n\nI am not sure Fig 1 is sufficient to determine an “allergic foreign body reaction”; moreover, the authors do not use the histological aspects in their argumentation for an allergic reaction (last paragraph of the discussion); do we have other lab results or patch testing available?\n\nHow do we explain the retroperitoneal inflammation if the problem arises intraabdominally on the ventral abdominal wall?\n\nIs the patient allergic to prolene, monocryl, polydioxanone or other parts of the mesh? Was patch testing performed? The advice of an allergy/immunology specialist is needed.\n\nMinor comment\nThe authors should give (a) reference(s) on the higher rate of recurrences and reoperations with Physiomesh after laparoscopic ventral hernia repair; are there any comparable findings described as in this case during reoperation?\n\nIs the background of the case’s history and progression described in sufficient detail? Yes\n\nAre enough details provided of any physical examination and diagnostic tests, treatment given and outcomes? No\n\nIs sufficient discussion included of the importance of the findings and their relevance to future understanding of disease processes, diagnosis or treatment? 
Partly\n\nIs the case presented with sufficient detail to be useful for other practitioners? No",
"responses": []
},
{
"id": "26035",
"date": "02 Oct 2017",
"name": "Ferdinand Köckerling",
"expertise": [
"Mesh technology"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nIn the presented case report the authors explain the complications after a laparoscopic ventral hernia repair leading to mesh removal with an allergic reaction to Physiomesh. A revisional reoperation due to severe abdominal pain on the same day as the primary operation and high levels of leucocytes and CRP at the time of the second reintervention provide arguments for a chronic mesh infection. We know from experimental studies that the histological findings of the incorporation of Physiomesh into the abdominal wall show different pictures compared to other composite meshes induced by the absorbable membrane on both sides of Physiomesh (Köckerling et al. 20171). As an allergic reaction to synthetic hernia meshes has not been reported to date, the authors should discuss other possible explanations for their findings.\n\nIs the background of the case’s history and progression described in sufficient detail? Yes\n\nAre enough details provided of any physical examination and diagnostic tests, treatment given and outcomes? Yes\n\nIs sufficient discussion included of the importance of the findings and their relevance to future understanding of disease processes, diagnosis or treatment? Partly\n\nIs the case presented with sufficient detail to be useful for other practitioners? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-685
|
https://f1000research.com/articles/6-684/v1
|
16 May 17
|
{
"type": "Software Tool Article",
"title": "ogttMetrics: Data structures and algorithms for oral glucose tolerance tests",
"authors": [
"Benjamin J. Stubbs",
"Keith Frankston",
"Marcel Ramos",
"Nancy Laranjo",
"Frank M. Sacks",
"Vincent J. Carey"
],
"abstract": "We describe an open source software package, ogttMetrics, to compute diverse measures of glucose metabolism derived from oral glucose tolerance tests (OGTTs). Tools are provided to organize, visualize and compare OGTT data from large cohorts. Numerical difficulties in estimation of parameters of the Bergman minimal model are described, and in one large clinical trial, the simpler closed form index of Matsuda is observed to lead to similar rankings of individuals with respect to insulin sensitivity, and similar inferences concerning effects of modifications to carbohydrate content and glycemic index of experimental diets.",
"keywords": [
"diabetes",
"carbohydrate metabolism",
"clinical trials",
"nonlinear models",
"multivariate analysis"
],
"content": "Introduction\n\nDisorders of carbohydrate metabolism contribute substantially to overall disease burden throughout the world. According to the International Diabetes Federation (IDF Diabetes Atlas, 2015), over 400 million individuals are diabetic, and numbers afflicted continue to rise.\n\nVarious tests are used to diagnose diabetes or assess risk of diabetes. In the oral glucose tolerance test (OGTT), a specified quantity of glucose is ingested orally. Plasma concentrations of glucose and insulin are measured at specific times after ingestion. Panels (a) and (b) of Figure 1 illustrate trajectories of glucose and insulin concentrations in a single patient performing the 120 minute protocol.\n\n(a) The observed glucose concentrations (dots) and predictions (line). (b) Insulin concentrations (dots) and linear interpolation. (c) Predicted glucose vs. insulin action X(t). (d) Rate of appearance of glucose, Ra(t).\n\nMethods for administering, analyzing, and clinically interpreting OGTT results are subjects of active research. Concerns with the use of individualized compartmental models for OGTT analysis are discussed in the work of Theodorakis et al. (2017). These authors propose population-level nonlinear modeling for estimation of insulin sensitivity, and demonstrate that empirical Bayes procedures have desirable properties for computation and interpretation.\n\nIn this report, we describe an open-source software package, ogttMetrics, for the management and analysis of OGTT series collected in large cohorts, and illustrate the application of models and metrics to a cross-over clinical trial (Sacks et al. (2014)) of effects of varying glycemic index and carbohydrate content of controlled diets. A widely cited proprietary software tool for fitting compartmental models to OGTT data is SAAM-II (Barrett et al. (1998)). 
We developed ogttMetrics to allow open investigation into properties of OGTT series, for which the SAAM-II models yield untenable estimates or do not converge. In addition, we saw an opportunity to develop a formal structure for collections of large numbers of OGTT series. We adopted the Bioconductor MultiAssayExperiment structure for this purpose, and introduced methods for interactive visualization and quality assessment of OGTT series, exploiting structures and functions of this package to simplify the coding.\n\n\nMethods\n\nFollowing Bergman et al. (1979), let G(t) and I(t) denote time-dependent plasma concentrations of glucose and insulin respectively. Various time-dependent factors affect the trajectories of these concentration functions, and we will assume that derivatives of these functions with respect to time, and partial derivatives of these functions with respect to relevant time-dependent variables, can be defined. Let Ġ denote the rate of change of glucose concentration in plasma over time. Glucose effectiveness is defined as E = –∂Ġ/∂G. This is described as “the quantitative enhancement of glucose disappearance due to an increase in the plasma glucose concentration” (Bergman et al., 1979, p. E673). At steady state, insulin sensitivity is SI = ∂E/∂I. A four-compartment model (model VI of Bergman et al. (1979)) leads to differential equations\n\nG′(t) = (p1 – X(t))G(t) + B0\n\nand\n\nX′(t) = p2X(t) + p3I(t)\n\nwhere X(t) is an abstract time-dependent function representing insulin action, G(t) is the time-dependent function representing glucose concentration, I(t) represents time-dependent insulin concentration, and B0 represents “glucose balance” (difference between rates of hepatic release to circulation and uptake in peripheral tissue) extrapolated to zero glucose concentration. By the definition of glucose effectiveness, the first differential equation implies E(t) = X(t) – p1, and, at steady state, XSS = –ISSp3/p2. 
This final expression is substituted into the expression for E just obtained, and after formal partial differentiation with respect to I, we obtain SI = –p3/p2.\n\nIn the following,\n\nG(t) is the plasma glucose concentration (mg/dl),\n\nI(t) is the plasma insulin concentration (μU/ml),\n\nGb and Ib are the baseline values of glucose and insulin,\n\nX(t) represents insulin action on glucose production and disposal (min–1),\n\nSI is insulin sensitivity (min–1/μU · ml–1),\n\np2 is a rate constant for dynamics of insulin action (min–1),\n\nRa(α, t) denotes a time-dependent function representing appearance of glucose in plasma, with parameters α (mg · min–1/kg),\n\nV is volume of distribution (dl/kg), and\n\nSG is glucose effectiveness per unit volume (min–1).\n\nWe consider the specific formalism for the Dalla Man et al. minimal model given by Burattini et al. (2006):\n\nG′(t) = –(SG + X(t))G(t) + SG·Gb + Ra(α, t)/V\n\nand\n\nX′(t) = –p2(X(t) – SI(I(t) – Ib))\n\nwith initial conditions G(0) = Gb and X(0) = 0.\n\nThe procedure of Dalla Man et al. (2002) involves two phases. In the first phase, the system of ordinary differential equations (ODE) above is solved on the basis of provisional settings of unknown parameters. The solution yields pointwise predictions of glucose concentrations Ĝt with t ranging over the sampling time course of the OGTT. In the second phase, parameters of the ODE system are updated using non-linear least squares. The phases are iterated until the sum of squared discrepancies ∑t(Gt – Ĝt)2 converges to a minimum. Inputs to the algorithm are measured time series of glucose and insulin concentrations, and individual body weight; other quantities, such as glucose effectiveness (SG), fraction of ingested dose absorbed (FA), and volume of distribution (V), are taken as fixed constants, with values derived from results of other experiments.\n\nThe kernel of fitOneMinMod in the ogttMetrics package is\n\n\n\nHere the interface to lsoda in the deSolve package is employed (Hindmarsh, 1983; Petzold, 1983). 
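Although ogttMetrics performs this ODE solution in R via lsoda, the underlying computation can be illustrated with a short, self-contained sketch. The following Python code is a hedged illustration only: it integrates an oral-minimal-model system of the standard form by forward Euler, and every parameter value (SI, p2, SG, V, Gb, Ib) is an invented placeholder, not a fitted estimate or the package's code.

```python
# Hedged sketch (not ogttMetrics code): forward-Euler integration of an
# oral minimal model of the standard form,
#   G'(t) = -(SG + X(t))*G(t) + SG*Gb + Ra(t)/V
#   X'(t) = -p2*(X(t) - SI*(I(t) - Ib))
# All parameter values below are illustrative assumptions.

def simulate_minimal_model(SI, p2, SG, V, Gb, Ib, ra, insulin,
                           t_end=120.0, dt=0.1):
    """Return lists of times and predicted glucose; G(0)=Gb, X(0)=0."""
    G, X = Gb, 0.0
    ts, Gs = [0.0], [Gb]
    t = 0.0
    while t < t_end:
        dG = -(SG + X) * G + SG * Gb + ra(t) / V
        dX = -p2 * (X - SI * (insulin(t) - Ib))
        G += dt * dG
        X += dt * dX
        t += dt
        ts.append(t)
        Gs.append(G)
    return ts, Gs

# With no glucose appearance and basal insulin, glucose stays at baseline:
ts, Gs = simulate_minimal_model(SI=1e-4, p2=0.02, SG=0.026, V=1.45,
                                Gb=90.0, Ib=8.0,
                                ra=lambda t: 0.0, insulin=lambda t: 8.0)
```

With Ra and insulin held at their basal values, the simulated glucose remains at Gb, a useful sanity check for any implementation of the model.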
The formal variables Y[1] and Y[2] represent G(t) and X(t) respectively; ra() and Insulin() are specially defined functions that return, for any given time in the course of the OGTT, the rate of glucose appearance, and insulin concentration, respectively. The lsoda solver is invoked in the function mmsolfn, whose inputs a1, a2, a3 are free parameters of a piecewise linear model for Ra(t), the rate of appearance of glucose; input SI is the target quantity of interest, the measure of insulin sensitivity. The values of free parameters are obtained by minimizing the sum of squared differences between observed glucose g and values predicted by the ODE system for current values of the unknown parameters:\n\n∑t(Gt – Ĝt)2.\n\nAdditional quantities BW, D, FA, DC are used to implement the constraint of Dalla Man et al. (2002)\n\n∫0∞ Ra(α, t) dt = FA · D/BW,\n\nin which BW is participant body weight, D is the dose of glucose ingested, and FA is the fraction of ingested glucose that is actually absorbed; DC is a constant that determines the rate of exponential decay of glucose concentration in plasma past minute 120.\n\nFor concreteness, Figure 1 displays all components of a minimal model fitted to a single 120 minute OGTT.\n\nIn practice, OGTT series can be collected according to different protocols and may include additional biomarkers such as c-peptide concentrations. For flexible data management and analysis, we adopted the data structure of the MultiAssayExperiment package of Bioconductor. We extended this structure in a class called ogttCohort, which includes metadata about timing of concentration measures. Each biomarker series for each individual is stored as a column of an R matrix, with rows and columns coordinated across assays. Arbitrary additional sample-level information can be linked to assay data. High-level functions getMinmodSIs and addMatsuda120 fit the minimal model or compute the Matsuda index for each series, and append results to the data container. 
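The Dalla Man et al. (2002) dose constraint described earlier ties the free parameters of the piecewise linear Ra(t) to the absorbed fraction of the ingested dose: the area under Ra(t) must equal FA·D/BW. As a hedged, language-neutral sketch (Python here, although the package is R; the breakpoint times, heights, and parameter values are invented for illustration):

```python
# Hedged illustration of the dose constraint: the area under a
# piecewise-linear rate-of-appearance curve Ra(t) must equal FA*D/BW.
# Breakpoint times/heights and parameter values are invented examples.
def trapezoid_auc(times, heights):
    """Area under a piecewise-linear curve via the trapezoid rule."""
    return sum((t1 - t0) * (h0 + h1) / 2.0
               for t0, t1, h0, h1 in zip(times, times[1:],
                                         heights, heights[1:]))

BW, D, FA = 70.0, 75000.0, 0.9        # kg, mg glucose, fraction absorbed
target = FA * D / BW                  # mg/kg that Ra(t) must account for

times = [0.0, 30.0, 60.0, 90.0, 120.0]
heights = [0.0, 8.0, 12.0, 6.0, 0.0]  # free heights plus zero endpoints
auc = trapezoid_auc(times, heights)

# Rescale the heights so the constraint holds exactly:
heights = [h * target / auc for h in heights]
```

One simple way to enforce the constraint, shown above, is to rescale the free heights so the trapezoidal area matches the target exactly; a fitting procedure could instead eliminate one free parameter algebraically.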
Because the minimal model may be time-consuming to fit, support is provided for parallel computation of multiple models. Use of a compact formal representation of all the OGTT data collected on a cohort simplifies creation of generic reports and visualizations. Figure 2 is based on the QCplots function, which can be applied to any ogttCohort instance. The top two panels display aspects of marginal (time-specific) distributions using boxplots. The bottom two panels are views of joint distributions of features and samples using the biplot methodology of Gabriel (1971). Calibrated outlier detection, proceeding under the assumption that the OGTT series are multivariate normal with a common mean vector and unspecified covariance matrix, can be conducted for glucose and insulin series separately, using mvOutliers. The procedure of Caroni & Prescott (1992) is used.\n\nFigure 2. Top two panels are time-specific boxplots, bottom two are biplots based on principal components analysis of the 50 × 2 7-dimensional vectors of glucose and insulin concentrations in the dataset.\n\nTheodorakis et al. (2017) mention that the standard (proprietary, closed source) software tool SAAM-II (Barrett et al. (1998)) failed to produce acceptable estimates of insulin sensitivity in over one-third of 106 samples. Similar difficulties were encountered in the OMNICarb study. These challenges motivated us to create an open source solution that would foster investigation of aspects of glucose and insulin series for which the minimal model fails to converge, and allow comparison of alternative metrics of carbohydrate metabolism on large datasets. Figure 3 displays the SIexplorer interactive interface. Given a collection of OGTT results in an ogttCohort structure, the SI vs Matsuda panel shows the association between estimated SI, Matsuda’s index, and convergence status of the Burattini et al. formulation of the minimal model. 
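Matsuda’s index, used here as a closed-form comparator to the minimal-model SI, is commonly written as 10000/√(G0 · I0 · Gmean · Imean). The sketch below is a hedged Python illustration of that common definition with invented concentration values; the package’s addMatsuda120 function, in R, may differ in details such as sampling-time weighting.

```python
# Hedged sketch of Matsuda's composite insulin-sensitivity index,
# commonly defined as 10000 / sqrt(G0 * I0 * Gmean * Imean).
# Units assumed: glucose in mg/dl, insulin in uU/ml. Values are invented.
import math

def matsuda_index(glucose, insulin):
    """Compute the composite index from paired OGTT concentration series."""
    g0, i0 = glucose[0], insulin[0]
    g_mean = sum(glucose) / len(glucose)
    i_mean = sum(insulin) / len(insulin)
    return 10000.0 / math.sqrt(g0 * i0 * g_mean * i_mean)

# Example 120-minute series sampled at 0, 30, 60, 90, 120 min:
gluc = [90.0, 150.0, 140.0, 120.0, 100.0]
ins = [8.0, 60.0, 50.0, 30.0, 15.0]
```

A convenient property of this closed form is scale behavior: uniformly doubling the insulin series halves the index, so relative rankings across participants are insensitive to a common assay rescaling.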
The display is made with transformed axes (log10 for SI, square root for Matsuda’s index). Negative estimates of SI are Winsorized to the smallest positive estimate observed in the data. Positive correlation between the indices is apparent, and the general trend appears to be obeyed for the majority of estimates of SI for which the Dalla Man et al. algorithm does not converge.\n\nWe created the ogttMetrics package to analyze data from the OMNICarb study (Sacks et al. (2014)). This study involved over 150 overweight individuals (BMI > 25 kg/m2) whose systolic blood pressure was in the interval 120–159 mmHg, or diastolic blood pressure in the interval 70–90 mmHg. Individuals with diagnoses of diabetes, cardiovascular disease, or chronic kidney disease were excluded. Four experimental diets were designed to provide contrasting values of overall carbohydrate content and glycemic index of foods consumed. Carbohydrate and glycemic index each had two levels denoted C and c (G and g) respectively, leading to the set (CG, Cg, cG, cg) of experimental diets. Each patient received a randomly ordered sequence of diets from this set, consuming each assigned diet for five weeks, with a pause of two weeks between diets. At the end of each feeding period a 120-minute OGTT protocol was administered. As noted previously, attempts to fit the Bergman minimal model with SAAM-II frequently failed to produce acceptable values, and so the study report of effects on insulin sensitivity used Matsuda’s index. We have used the ogttMetrics package to structure the data and compute both Matsuda’s index and the minimal model SI. Figure 4 shows how the diet effects are estimated using these two indices. Confidence intervals are presented for five different contrasts. The left panel of Figure 4 is identical in content to the Insulin sensitivity panel of Figure 3 of Sacks et al. (2014). 
The right panel shows results based on SI that are qualitatively similar to those found with Matsuda’s index, with the exception of the estimated effect of lowering glycemic index in the context of high overall carbohydrate content. With Matsuda’s index, the 95% confidence interval excludes zero, but this is not observed when SI is used. Further work on optimizing estimation of insulin sensitivity from the 120 minute OGTT protocol is warranted; the empirical Bayes approach of Theodorakis et al. (2017) is of particular interest as individual-level estimation in that procedure borrows strength from information assembled for the cohort as a whole.\n\nFigure 4. Right: Analogous confidence intervals for within-person diet contrasts based on SI estimated using ogttMetrics.\n\nInstallation of ogttMetrics can be accomplished using R 3.4 via devtools::install_github(\"vjcitn/ogttMetrics\", dependencies=c(\"Depends\", \"Imports\", \"Suggests\")). The key infrastructure components required for ogttMetrics are CRAN package deSolve for minimal model estimation, and Bioconductor package MultiAssayExperiment for data management. The SIexplorer utility employs the shiny package. These key components have extensive dependencies among other CRAN packages, but these dependencies are automatically resolved by the install_github() command given above. All example data analyzed or visualized in this paper are accessible using the data() function. For example, to reproduce Figure 1, use the commands library(ogttMetrics); data(obaSamp); m1 = minmodByID(obaSamp, \"1\"); plot_OGTT_fit(m1). For Figure 2, in the same session, use QCplots(obaSamp).\n\nUsers may import data managed in spreadsheets (CSV format) for use with this software. An executable example is available with example(csvImport).\n\n\nConclusions\n\nThe reference assay for glucose metabolism is the hyperinsulinemic-euglycemic clamp (Soonthornpun et al. (2003)). 
Because it is less expensive and much less invasive, the OGTT is an attractive assay for assessing insulin sensitivity, particularly in large studies. We have presented, and made freely available (at http://github.com/vjcitn/ogttMetrics), a collection of data structures and functions in the R programming language that help manage and interpret OGTT series collected in cohort studies and clinical trials.\n\nWe and others have found that the minimal model frequently fails to generate reasonable values for SI in OGTT series encountered in practice. In part this is manifested in non-convergence of the basic nonlinear model for the glucose trajectory. However, we have not observed a striking disparity between rankings of participants using the estimate of SI based on an unsatisfactory minimal model fit, and rankings obtained when the closed form Matsuda index is computed on the same OGTT data (Figure 3, Spearman correlation between Matsuda and estimated SI = 0.5782, p < .0001). The estimated SI may be good enough for practical use, but further investigation of features of OGTT data associated with non-convergence of the minimal model, and biologically motivated elaborations of the model that yield successful fits more generally, should be undertaken.\n\nThe tools for multivariate analysis and interactive model visualization in the SIexplorer component of ogttMetrics will be useful for gaining additional insight into subtyping of patients according to features of glucose and insulin trajectories.\n\n\nSoftware and data availability\n\nSoftware and all data analyzed in this paper are available from: http://github.com/vjcitn/ogttMetrics\n\nArchived source code as at time of publication: DOI, 10.5281/zenodo.570174 (Carey, 2017)\n\nLicense: GPL-3",
"appendix": "Author contributions\n\n\n\nBenjamin Stubbs and Keith Frankston developed software and visualizations, analyzed the data, and participated in manuscript development. Marcel Ramos developed the MultiAssayExperiment package of Bioconductor. Frank M. Sacks and Nancy Laranjo conceived and executed the OMNICarb study created the database from which ogttMetrics data are derived, and participated in manuscript development. Vincent Carey acquired funding for software development, developed software and visualizations, and wrote the manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis work was supported by US National Institutes of Health, National Institute of Diabetes and Digestive and Kidney Diseases (5R21DK098720-02; V. Carey, PI), and National Cancer Institute (5U24 CA180996-04; M. Morgan, PI.)\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nReferences\n\nBarrett PH, Bell BM, Cobelli C: SAAM II: Simulation, Analysis, and Modeling Software for tracer and pharmacokinetic studies. Metabolism. 1998; 47(4): 484–92. PubMed Abstract | Publisher Full Text\n\nBergman RN, Ider YZ, Bowden CR, et al.: Quantitative estimation of insulin sensitivity. Am J Physiol. 1979; 236(6): E667–77. PubMed Abstract\n\nBurattini R, Casagrande F, Di Nardo F: Insulin sensitivity and plasma glucose appearance profile by oral minimal model in normotensive and normoglycemic humans. Lecture Notes in Computer Science, Biological and Medical Data Analysis. 2006; 4345: 128–36. Publisher Full Text\n\nCarey V: vjcitn/ogttMetrics: Runs on R 3.4 [Data set]. Zenodo. 2017. Data Source\n\nCaroni C, Prescott P: Sequential application of Wilks’s multivariate outlier test. J R Stat Soc Ser C Appl Stat. 1992; 41(2): 355–64. 
Publisher Full Text\n\nDalla Man C, Caumo A, Cobelli C: The oral glucose minimal model: estimation of insulin sensitivity from a meal test. IEEE Trans Biomed Eng. 2002; 49(5): 419–29. PubMed Abstract | Publisher Full Text\n\nGabriel KR: The biplot graphic display of matrices with application to principal component analysis. Biometrika. 1971; 58(3): 453–67. Publisher Full Text\n\nHindmarsh AC: ODEPACK, a Systematized Collection of ODE Solvers. IMACS Transactions on Scientific Computation. 1983; 1: 55–64.\n\nInternational Diabetes Federation: IDF Diabetes Atlas. 2015. Reference Source\n\nMultiAssayExperiment: Software for the integration of multi-omics experiments in Bioconductor. R package version 1.2.0. Reference Source\n\nPetzold L: Automatic Selection of Methods for Solving Stiff and Nonstiff Systems of Ordinary Differential Equations. SIAM J Sci and Stat Comput. 1983; 4(1): 136–48. Publisher Full Text\n\nSacks FM, Carey VJ, Anderson CA, et al.: Effects of high vs low glycemic index of dietary carbohydrate on cardiovascular disease risk factors and insulin sensitivity: the OmniCarb randomized clinical trial. JAMA. 2014; 312(23): 2531–41. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSoonthornpun S, Setasuban W, Thamprasit A, et al.: Novel insulin sensitivity index derived from oral glucose tolerance test. J Clin Endocrinol Metab. 2003; 88(3): 1019–23. PubMed Abstract | Publisher Full Text\n\nTheodorakis MJ, Katsiki N, Arampatzi K, et al.: Modeling the oral glucose tolerance test in normal and impaired glucose tolerant states: a population approach. Curr Med Res Opin. 2017; 33(2): 305–13. PubMed Abstract | Publisher Full Text"
}
|
[
{
"id": "22802",
"date": "01 Jun 2017",
"name": "Antti Honkela",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe submission describes an open source R software package for managing, visualising and analysing data from oral glucose tolerance tests (OGTTs).\nThe aim of the work in providing open source tools for the analysis of OGTT data is highly commendable. From a technical standpoint, the package clearly follows good software development practices by using existing data management infrastructure and extensive automated testing. The authors make a strong effort to make the results reported in the paper reproducible by providing code for reproducing half of the figures. The package includes a vignette that provides some example workflows that seem potentially very useful.\nWhile these basics are well covered, the package still has quite a few rough edges that may make it more difficult to adopt for potential end users. I believe these should be addressed to make the submission scientifically sound.\n1. Installation of the package on a fresh R 3.4 according to instructions fails, presumably due to inability of devtools::install_github() to install required dependencies: ERROR: dependencies ‘S4Vectors’, ‘MultiAssayExperiment’, ‘Biobase’, ‘SummarizedExperiment’, ‘parody’, ‘ggbiplot’ are not available for package ‘ogttMetrics’\n2. After manual install of the required Bioconductor packages, installation still fails because ggbiplot is not available in any standard repositories but only on GitHub.\n3. 
Installing the package using suggested approach after a manual install of all missing dependencies seems to fail to install the vignette. (Not visible in the listing provided by vignette().)\n4. Running the examples provided in the paper produces some errors: > QCplots(obasamp) Error in experiments(oc) : object 'obasamp' not found\nAfter fixing the command the first run gives: > QCplots(obaSamp) ... Error in UseMethod(\"depth\") :\n\nno applicable method for 'depth' applied to an object of class \"NULL\" > Oddly enough this works when used later.\n5. Additionally, when reading the example from the PDF, the command plot_OGTT_fit contains 'fi' ligature which breaks copy-paste of the command from the PDF.\n6. Running \"R CMD check\" produces notes and a warning, which probably would not be acceptable at the major repositories: * checking for missing documentation entries ... WARNING Undocumented data sets:\n\n‘omnicCG_samp’ ‘omniccG_samp’ ‘omniccg_samp’ All user-level objects in a package should have documentation entries. See chapter ‘Writing R documentation files’ in the ‘Writing R Extensions’ manual.\n\n7. These fairly trivial technical issues aside, I am unsure what is the intended audience of the package and how useful it would be for that audience. The authors present a smooth workflow for analysing pre-packaged data from existing large studies, but instructions for importing new data are limited to one sparsely documented example and it is not immediately obvious how to e.g. compute the minimal models for this example. The vignette contains some code snippets that are likely relevant, but more comments and explanation would be needed. I tried a little but could not get this working easily. In general the vignette would need to be clearer to be useful to new users.\n8. Related to the above note, the csvImport format should be documented better. The vignette could contain an example with different time points. 
A hard-coded default of time points seems difficult for something where there probably is no generally applicable default.\n9. The minimal model code contains a number of magic constants with some assumed default values. It would be very good to document with proper references where these come from. It is especially unclear where the constant 420 in the integral in Programming considerations comes from and if that can be safely used for data with a different sampling period.\n10. The implications of the piece-wise linear model using a different model before and after 120 min for data with different (either shorter or longer) sampling period and times should be discussed. Can the model be safely applied in these cases? Are there other hidden assumptions that could impact the end users?\n11. The unit for BMI in \"Application to a cross-over trial\" is reported incorrectly (25kg=m^2, units incorrectly in italics).\nFurther suggestions:\n12. It is good that the code contains many stopifnot() sanity checks, but more informative error messages suggesting how to fix things would be useful for the end users.\n13. The specification of the model might benefit from more consistent notation for derivatives. (Now sometimes d/dt, sometimes G' and X'.)\n14. It would be good to include a copyright notice with author and license information to each source file. See https://www.gnu.org/licenses/gpl-howto.html\n\nIs the rationale for developing the new software tool clearly explained? Yes\n\nIs the description of the software tool technically sound? Partly\n\nAre sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? Partly\n\nIs sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? Yes\n\nAre the conclusions about the tool and its performance adequately supported by the findings presented in the article? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-684
|
https://f1000research.com/articles/6-97/v1
|
01 Feb 17
|
{
"type": "Software Tool Article",
"title": "haploR: an R-package for querying web-based annotation tools",
"authors": [
"Ilya Y. Zhbannikov",
"Konstantin Arbeev",
"Anatoliy I. Yashin",
"Konstantin Arbeev",
"Anatoliy I. Yashin"
],
"abstract": "There exists a set of web-based tools for integration and exploring information linked to annotated genetic variants. We developed haploR, an R-package for querying such web-based genome annotation tools (currently implementing on HaploReg and RegulomeDB) and gathering information in a format suitable for downstream bioinformatic analyses. This will facilitate post-genome wide association studies streamline analysis for rapid discovery and interpretation of genetic associations.",
"keywords": [
"R",
"databases",
"genomics",
"genetic variants",
"genome annotation",
"data mining"
],
"content": "Introduction\n\nGenomic experiments, including genome wide association studies (GWAS), produced and continue to produce a huge amount of data. To better understand the biological mechanisms involved in regulation complex traits, this information requires further analysis. Large projects, such as ENCODE1, are devoted to bring together accumulated knowledge about different functional and regulatory elements that control cells’ functioning. These projects manage such data to facilitate collaboration between researchers working in the area of genetics of complex traits.\n\nThere exists a set of web-based tools, such as HaploReg2 and RegulomeDB3, which offer a link of detected genetic variants to additional post-GWAS information. These include information about linkage disequilibrium (LD), expression quantitative trait loci (eQTL), allele frequencies, protein functions, chromatin states, etc., for annotated genetic variants. These tools are web-based, which requires the user to open a web page, manually enter information and obtain the results of such linking in a certain format.\n\nIn a number of situations, a user needs to have additional flexibility in working with such tools. For example, saving the results of such analyses in different file formats for further use. This can be provided using various kinds of computer languages available in Modern Bioinformatics and Computational Biology, including R, Python, Perl and other high-level languages and computational platforms. Among them, R language is one of the leaders, since it is free and offers a large set of packages to facilitate bioinformatics analysis.\n\nWe present an R-package, haploR, which allows for querying HaploReg and RegulomeDB web-based tools. The package connects to the corresponding web site, queries the database and downloads results in the form of a data frame or a file. 
The package can easily be included in bioinformatics pipelines, which will, in turn, facilitate analysis for rapid single nucleotide variant (SNP)/gene - phenotype association discovery.\n\n\nMethods\n\nThe R-package haploR relies on the HTTP methods POST and GET to query, download and parse the content of web pages. Functions queryHaploreg(...) and queryRegulome(...) are designed to obtain data from the resources HaploReg (http://archive.broadinstitute.org/mammals/haploreg/haploreg.php) and RegulomeDB (http://www.regulomedb.org/), respectively.\n\nThe package is cross-platform (Windows, macOS and Linux), without any specific computer hardware requirements. A standard computer with the most recent version of R (3.3.2 at the time of writing) will handle most applications of the haploR package.\n\n\nUse cases\n\nTo query HaploReg and download the results, the user needs to call the queryHaploreg(query, file, study, ...) function. This function can accept three different inputs: (1) a vector of SNPs (query); (2) a text file (file); or (3) a study (study). Other parameters are directly linked to query options (see the HaploReg web page) and described in the package user manual. Output of this function is a table with column names identical to those used in HaploReg. Examples below show usage of these options.\n\nlibrary(haploR)\n\nqueryHaploreg(query=c(\"rs10048158\",\"rs4791078\"))\n\nHere the parameter query represents a vector of rs-IDs.\n\nIn the next example, SNPs are stored in a text file, one SNP per line. In this case, to call queryHaploreg, the user has to execute the following command:\n\nqueryHaploreg(file=system.file(\"extdata/snps.txt\", package=\"haploR\"))\n\nHere file represents a path to the file with SNPs.\n\nHaploReg offers an option to use data from studies done in the past. 
To use this option, the user should first obtain a list of studies and then use a particular study as a parameter:\n\n#Get a list of studies\n\nstudies <- getStudyList()\n\n#Query HaploReg\n\nqueryHaploreg(study=studies[[2]])\n\nOther options, such as a source for epigenomes, mammalian conservation algorithm, and others are also available; see the package’s user manual (https://cran.r-project.org/web/packages/haploR/haploR.pdf) and vignette (https://cran.r-project.org/web/packages/haploR/vignettes/haplor-vignette.html) for correct use.\n\nThe RegulomeDB project also allows exploration of properties of SNPs and presents results in different formats: (1) plain text, (2) BED and (3) GFF. The function queryRegulome(query, format) is used to query RegulomeDB:\n\nqueryRegulome(query=c(\"rs4791078\",\"rs10048158\"), format=\"full\")\n\nHere query is a vector of rsIDs and format is an output format provided by the RegulomeDB web site. The output of this function is similar to that of the queryHaploreg function, but has columns that correspond to the RegulomeDB output.\n\n\nConclusion and future work\n\nHere, we present a new package, haploR, which currently allows querying the web tools HaploReg and RegulomeDB. We plan to add other web-based tools, such as Regulatory Elements DB (http://dnase.genome.duke.edu/index.php), which provides data from the DNaseI-hypersensitivity and Affymetrix microarray experiments performed in [4].\n\n\nSoftware and data availability\n\nTool available from: https://cran.r-project.org/package=haploR\n\nSource code available from: https://github.com/izhbannikov/haploR\n\nArchived source as at time of publication: doi, https://doi.org/10.5281/zenodo.2599965; https://cran.r-project.org/src/contrib/haploR_1.4.1.tar.gz\n\nLicense: GPL-2 | GPL-3\n\nThe example script and output files for the package are available at: https://doi.org/10.5281/zenodo.2600396",
"appendix": "Author contributions\n\n\n\nIYZ developed the package, performed evaluation/validation tests and wrote the manuscript. KA, AIY contributed to the development of the package. KA, AIY revised the manuscript and gave comments helpful to finalize it. All authors read and approved the final manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis work was supported by the National Institute on Aging of the National Institutes of Health (NIA/NIH) under Award Numbers P01AG043352, R01AG046860, and P30AG034424. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIA/NIH.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nReferences\n\nENCODE Project Consortium: The ENCODE (ENCyclopedia Of DNA Elements) Project. Science. 2004; 306(5696): 636–640. PubMed Abstract | Publisher Full Text\n\nWard LD, Kellis M: HaploReg: a resource for exploring chromatin states, conservation, and regulatory motif alterations within sets of genetically linked variants. Nucleic Acids Res. 2011; 40(Database issue): D930. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBoyle AP, Hong EL, Hariharan M, et al.: Annotation of functional variation in personal genomes using RegulomeDB. Genome Res. 2012; 22(9): 1790–1797. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSheffield NC, Thurman RE, Song L, et al.: Patterns of regulatory activity across diverse human cell types predict tissue identity, transcription factor binding, and long-range interactions. Genome Research. 2013; 23(5): 777–88. PubMed Abstract | Publisher Full Text | Free Full Text\n\nZhbannikov I: izhbannikov/haploR: Query Haploreg and RegulomeDB [Data set]. Zenodo. 2017. Data Source\n\nZhbannikov I: izhbannikov/haploR_examples: haploR_examples first release [Data set]. Zenodo. 2017. Data Source"
}
|
[
{
"id": "19824",
"date": "13 Feb 2017",
"name": "Garrett M. Dancik",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe authors describe an R package named haploR for querying the HaploReg and ReglomeDB web-based databases. Because querying can be carried out in R, haploR adds convenience for querying these databases when subsequent downstream analyses in R are desired.\n\nThe R package is easy to use and works as described. However, the potential application of haploR is only vaguely described. The authors should include concrete examples of downstream analyses in order to demonstrate when haploR would be preferred to traditional queries executed from the web.\n\nIn addition, addressing the following items would add clarity to the manuscript and the tool:\nThe authors should describe when the results returned by haploR differ from the web-based results. For example, whereas the results table from querying HaploReg on the web may indicate that a particular variant has \"4 altered motifs\", providing links to the variant entry where the motifs are listed, haploR directly returns the motifs present. This is an advantage of haploR that should be described.\n\nThere are several spelling and grammatical errors which make the manuscript difficult to follow in some parts. For example, the Introduction states that \"Large projects...are devoted to bring together\", instead of \"bringing together\".",
"responses": [
{
"c_id": "2691",
"date": "15 May 2017",
"name": "Ilya Zhbannikov",
"role": "Author Response",
"response": "We thank the reviewer for insightful and thorough feedback. It was clear from those comments that our original paper did not emphasize clearly enough the unique contribution of the R package haploR. These comments critique helped us to revise the note and package vignette to clarify several aspects of data retrieval methodology used in the package. We revised the paper and this revision addresses all of the reviewer’s concerns. Reviewer comments/suggestions (RC) are in italics font; author’s responses (AR) are in regular, black font. RC1:The R package is easy to use and works as described. However, the potential application of haploR is only vaguely described. The authors should include concrete examples of downstream analyses in order to demonstrate when haploR would be preferred to traditional queries executed from the web.AR1:We provided corresponding examples in the package vignette and also on the package web page: https://github.com/izhbannikov/haploR . Please see “Motivation and typical analysis workflow” section.RC2:In addition, addressing the following items would add clarity to the manuscript and the tool:The authors should describe when the results returned by haploR differ from the web-based results. For example, whereas the results table from querying HaploReg on the web may indicate that a particular variant has \"4 altered motifs\", providing links to the variant entry where the motifs are listed, haploR directly returns the motifs present. This is an advantage of haploR that should be described.AR2:Thank you for this useful suggestion. Following your suggestion and due to limited article size (no more than 1,000 words) we emphasized it in a package vignette (please see the end of “One or several genetic variants” subsection).RC3:There are several spelling and grammatical errors which make the manuscript difficult to follow in some parts. 
For example, the Introduction states that \"Large projects...are devoted to bring together\", instead of \"bringing together\".AR3:We addressed these errors in the revised article. We are happy to make any other changes that may be required. Sincerely, Ilya Zhbannikov"
}
]
},
{
"id": "19826",
"date": "23 Feb 2017",
"name": "Claudia Vitolo",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis papers describes the implementation of the haploR R-package which is used to retrieve information from web-based genome annotation tools. This R-package aims to simplify the reproducibility of bioinformatics pipe lines.\nOverall, we think the structure of the paper and the aim of the project are inline with the journal’s guidelines. The haploR package seems a valuable open source tool for bioinformaticians and R users as it facilitates data retrieval from web-based databases (such as HaploReg and RegulomeDB) and makes the scientific workflow more reproducible. We also appreciate the intention to keep improving the package by extending the list of supported databases.\nWe mostly work on climate science and have a limited understanding of bioinformatics. However, we use R extensively and we decided to review this work from a generic R-user perspective. We focused our review on this paper and source code, we considered user manual and the vignette out of the scope of this review.\nIn our opinion, this paper deserves publication but requires some further work. We decided to approve it with reservations because we noticed some ambiguities in the paper that need to be clarified. We also suggest small changes to the code that could make the functions in the package less error-prone and more future proof. Our specific comments are listed below.\nMajor comments\nINTRODUCTION\n\nWe think the introduction is rather vague. 
There are several sentences such as “in a number of situations” or “in a certain format” which are too vague and require further explanations. For example, instead of saying “in a certain format”, the authors could explicitly mention the formats that they are referring to (e.g. csv, json, etc). Again, in the second sentence of the third paragraph “... saving the results of such analyses in different file formats ...” the authors should again specify what the different file formats are. Just before the fourth paragraph, the authors should mention if this package could be added to one of the CRAN Task Views (https://cran.r-project.org/web/views/) and whether there are other packages with similar goals. If there are other related packages, it would be interesting to mention whether the data could be combined.\n\nMETHODS\n\nThe second sentence of the sub-section Implementation says “Functions….are designed to obtain data from the resources HaploReg...and RegulomeDB….”. Here, it is important to describe the structure of the retrieved data. We appreciate that most bioinformaticians are familiar with web-based databases such as HaploReg and RegulomeDB. However, a student might want to use this tool and having a more detailed description of these web databases would be useful to get started. Please, also consider commenting on the use and interpretation of the retrieved information, for example plotting a subset of the full dataset. The Operation section should include clear instructions for the installation and a complete description of package dependencies, including versions of the dependent packages.\n\nUSE CASES\n\nThis section is rather vague. The authors should clearly describe all the input arguments of the functions, as well as the expected results. Querying HaploReg - Input vector of SNPs\nWhen writing example code, it is considered good practice to assign the result of a command to an object, e.g. x <- queryHaploreg(query=c(\"rs10048158\",\"rs4791078\")). 
Please consider making this change throughout the paper. When we run the command x <- queryHaploreg(query=c(\"rs10048158\",\"rs4791078\")) we get the following message: “No encoding supplied: defaulting to UTF-8”. Consider changing the encoding or removing non-ASCII characters from the table before outputting. After retrieving the data, please describe the structure of the retrieved object. In particular you should mention the expected number of columns and rows as well as the name and type of variables (the authors might find the str() function useful). We tried to print the object; the result filled the screen and was unreadable. We suggest converting the data frame into a tibble (see the tibble package) to generate a more readable printed output. We checked the structure of the retrieved objects and the data types are all characters. Some of the columns clearly contain numeric variables (e.g. r2, D', AFR, …). We suggest converting these columns from character to numeric before outputting. This conversion is important because users might run into errors when generating basic statistics. For instance, running x <- queryHaploreg(query=c(\"rs10048158\",\"rs4791078\")); quantile(x$AFR) generates the following error message: “Error in (1 - h) * qs[i] : non-numeric argument to binary operator”.\n\nQuerying HaploReg - Input text file with SNPs: This example is reproducible but the authors do not specify how the \"extdata/snps.txt\" file is structured. We suggest writing something like “the text file should list the rs-IDs in one column, with one rs-ID per row”. Querying HaploReg - Using a particular study: When we extracted the list of studies, we noticed that we cannot subset it using names. Subsetting using indices is prone to errors because the list of studies could increase over time and their order could change.\n\nQuerying RegulomeDB\nPlease explain what the argument format is. It is not obvious to non-experts. 
The last sentence of this sub-section “the output of this function is similar to that used in the queryHaploreg…..” The outputs of queryHaploreg() and queryRegulome() are not similar. The former is a data.frame, the latter is a list. Even comparing the data.frame from queryHaploreg() with the first element (res.table) of queryRegulome(), we found different numbers of rows, columns, variables and data types (the first contains factors and the second characters). What are the similarities between them?\n\nCONCLUSION AND FUTURE WORK: There is no discussion of the use cases and the conclusions are poor. You should clearly state the advantages of using this package over the original databases. For example, you could mention the opportunity to generate a more streamlined workflow, shorter retrieval times, a shallow learning curve, etc.\n\nSOFTWARE AND DATA AVAILABILITY\n\nLicence: It is unclear what license the authors use. The authors write GPL-2 | GPL-3, but it is not possible to use both at the same time. Author contributions: The authors mention that IYZ performed evaluation and validation tests. We were expecting these tests to be provided as unit tests. They don’t seem to be included in the source code. We suggest following best practice by integrating unit tests using the testthat framework and using travis-CI (https://travis-ci.org/) for continuous integration. Travis-CI works with Unix-based systems; the authors could also test the package on Windows using the appveyor service (https://www.appveyor.com/). DESCRIPTION file:\nAccording to the manual “Writing R extensions”, the description should mention the role of the authors (https://cran.r-project.org/doc/manuals/r-release/R-exts.html#The-DESCRIPTION-file). The Depends section shows R (>= 3.3). 
This should be made consistent with the Operation section, in which the authors mention having used R 3.3.2.\n\nNAMESPACE file: You seem to use only a few functions from the XML and httr packages, so we suggest loading them individually (using importFrom rather than import) to avoid masking.\n\nMinor comments\nABSTRACT\nFirst line of the abstract, “There exists a set of web-based tools for integration and exploring information linked to annotated genetic variants”. We think that this statement would be more appropriate for the introduction because it does not add any key information about the work carried out. The abstract could start with the second sentence, maybe something like “This paper presents haploR, a novel R-package ...”\n\nINTRODUCTION\nSecond sentence of the fourth paragraph: “The package … downloads results in the form of a data frame or a file”. Technically, a data frame can be saved in a file. Please consider rewording this sentence. The second and the third paragraphs could be joined because the topics are strongly related.\n\nGrant information: In most research journals this section is called “Acknowledgments”.",
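The reviewers' quantile(x$AFR) error illustrates a general point: numeric-looking text must be coerced to numbers before computing statistics. A minimal sketch in Python (not haploR itself; the column values below are invented for illustration) of the kind of conversion being requested:

```python
# Sketch of the character-to-numeric conversion the reviewers request.
# The values are invented; real HaploReg output has many more columns.

def to_numeric(values):
    """Convert numeric-looking strings to float; leave the rest unchanged."""
    out = []
    for v in values:
        try:
            out.append(float(v))
        except ValueError:
            out.append(v)
    return out

afr = ["0.12", "0.98", "0.45"]   # allele frequencies returned as text
converted = to_numeric(afr)

# Statistics now work instead of failing with a type error:
assert min(converted) == 0.12
assert max(converted) == 0.98
```

In R the analogous step would apply as.numeric to the affected columns; the sketch only shows why the conversion must happen before any summary statistics are taken.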
"responses": [
{
"c_id": "2693",
"date": "15 May 2017",
"name": "Ilya Zhbannikov",
"role": "Author Response",
"response": "We thank the reviewers for their careful reading of the manuscript, package testing and their constructive remarks. We have taken the comments on board to improve and clarify the manuscript. Please find below a detailed point-by-point response to all comments (reviewers comments/suggestions (RC) are in italics font; our responses (AR) are in regular, black font.). Unfortunately, due to limited size of the article we could not reflect all the suggestions provided by reviewers explicitly in the article, but we addressed them in corresponding package vignette and web site (https://github.com/izhbannikov/haploR, README section).Major comments INTRODUCTION RC1:We think the introduction is rather vague. There are several sentences such as “in a number of situations” or “in a certain format” which are too vague and require further explanations. For example, instead of saying “in a certain format”, the authors could explicitly mention the formats that they are referring to (e.g. csv, json, etc). Again, in the second sentence of the third paragraph “... saving the results of such analyses in different file formats ...” the authors should again specify what the different file formats are.AR1:We rewrote the Introduction section and explicitly mentioned file types. Please also see the package vignette for workflow examples.RC2:Just before the fourth paragraph, the authors should mention if this package could be added to one of the CRAN Task Views (https://cran.r-project.org/web/views/) and whether there are other packages with similar goals. If there are other related packages, it would be interesting to mention whether the data could be combined. AR2:We added information about other related packages to the Introductory section. haploR is not presented in CRAN Task Views yet but we are working on adding it to there. 
METHODS RC2: The second sentence of the sub-section Implementation says “Functions….are designed to obtain data from the resources HaploReg...and RegulomeDB….”. Here, it is important to describe the structure of the retrieved data. We appreciate that most bioinformaticians are familiar with web-based databases such as HaploReg and RegulomeDB. However, a student might want to use this tool and having a more detailed description of these web databases would be useful to get started. Please, also consider commenting on the use and interpretation of the retrieved information, for example plotting a subset of the full dataset. The Operation section should include clear instructions for the installation and a complete description of package dependencies, including versions of the dependent packages.AR2:Due to the limited space of the article (1,000 words maximum) we provided a data description and installation instructions at the package website (https://github.com/izhbannikov/haploR) and within the corresponding revised vignette (https://github.com/izhbannikov/haploR/blob/master/vignettes/haplor-vignette.Rmd, or just run browseVignettes(“haploR”)). USE CASES RC3:This section is rather vague. The authors should clearly describe all the input arguments of the functions, as well as the expected results. Querying HaploReg - Input vector of SNPsAR3:Due to the limited size of the paper, we now provide a description of the input parameters in the package vignette and the website. Sorry for the inconvenience. RC4:When writing example code, it is considered good practice to assign the result of a command to an object, e.g. x <- queryHaploreg(query=c(\"rs10048158\",\"rs4791078\")). Please consider making this change throughout the paper.AR4:Thank you for pointing this out. 
This issue is fixed in the revised article: results of all data retrieval commands are assigned to objects.RC5:When we run the command x <- queryHaploreg(query=c(\"rs10048158\",\"rs4791078\")) we get the following message: “No encoding supplied: defaulting to UTF-8”. Consider changing the encoding or removing non-ASCII characters from the table before outputting.AR5:We fixed this warning in version 1.4.4 of the package. The parameter encoding was added to the queryHaploreg function; the default is UTF-8. RC6:After retrieving the data, please describe the structure of the retrieved object. In particular you should mention the expected number of columns and rows as well as the name and type of variables (the authors might find the str() function useful).AR6:We describe this in the corresponding vignette due to the limited space of the article (not more than 1,000 words). Please see the sections Querying HaploReg and Querying RegulomeDB and their subsections Output.RC7:We tried to print the object; the result filled the screen and was unreadable. We suggest converting the data frame into a tibble (see the tibble package) to generate a more readable printed output.AR7:Thank you for this suggestion. Now we use tibble to generate a more readable printed output.RC8:We checked the structure of the retrieved objects and the data types are all characters. Some of the columns clearly contain numeric variables (e.g. r2, D', AFR, …). We suggest converting these columns from character to numeric before outputting. This conversion is important because users might run into errors when generating basic statistics. For instance, running x <- queryHaploreg(query=c(\"rs10048158\",\"rs4791078\"));quantile(x$AFR) generates the following error message: “Error in (1 - h) * qs[i] : non-numeric argument to binary operator”.AR8:This issue is fixed in the current version (1.4.4) of the package available from CRAN. Thank you very much for pointing that out. 
RC9:Querying HaploReg - Input text file with SNPs: This example is reproducible but the authors do not specify how the \"extdata/snps.txt\" file is structured. We suggest writing something like “the text file should list the rs-IDs in one column, with one rs-ID per row”.AR9:We moved this example to the package vignette and package web page, where we describe the structure of extdata/snps.txt .RC10:Querying HaploReg - Using a particular study: When we extracted the list of studies, we noticed that we cannot subset it using names. Subsetting using indices is prone to errors because the list of studies could increase over time and their order could change. AR10:Thank you for emphasizing this important point. This issue is fixed in version 1.4.4 of the package.RC11:Querying RegulomeDB Please explain what the argument format is. It is not obvious to non-experts.AR11:We added details on the format argument. Please see the package web site README, subsection “Arguments” of the section “Querying RegulomeDB”. RC12:The last sentence of this sub-section “the output of this function is similar to that used in the queryHaploreg…..” The outputs of queryHaploreg() and queryRegulome() are not similar. The former is a data.frame, the latter is a list. Even comparing the data.frame from queryHaploreg() with the first element (res.table) of queryRegulome(), we found different numbers of rows, columns, variables and data types (the first contains factors and the second characters). What are the similarities between them?AR12:Thank you for this useful remark. We agree that technically these formats are different; the similarity lies only in the type of information retrieved. CONCLUSION AND FUTURE WORK: RC13:There is no discussion of the use cases and the conclusions are poor. You should clearly state the advantages of using this package over the original databases. 
For example, you could mention the opportunity to generate a more streamlined workflow, shorter retrieval times, a shallow learning curve, etc.AR13:We rewrote the conclusion according to your suggestions. SOFTWARE AND DATA AVAILABILITY RC14:Licence: It is unclear what license the authors use. The authors write GPL-2 | GPL-3, but it is not possible to use both at the same time.AR14:Thank you for this remark. The license was changed to GPL-3 in version 1.4.4 of the package.RC15:Author contributions: The authors mention that IYZ performed evaluation and validation tests. We were expecting these tests to be provided as unit tests. They don’t seem to be included in the source code. We suggest following best practice by integrating unit tests using the testthat framework and using travis-CI (https://travis-ci.org/) for continuous integration. Travis-CI works with Unix-based systems; the authors could also test the package on Windows using the appveyor service (https://www.appveyor.com/).AR15:We added unit tests to version 1.4.4 of the package. DESCRIPTION file: RC16:According to the manual “Writing R extensions”, the description should mention the role of the authors (https://cran.r-project.org/doc/manuals/r-release/R-exts.html#The-DESCRIPTION-file).AR16:We updated the description file and now it describes the roles of the listed contributors.RC15:The Depends section shows R (>= 3.3). This should be made consistent with the Operation section, in which the authors mention having used R 3.3.2.AR15:We changed the Depends section to R (>= 3.3.2).RC16:NAMESPACE file: You seem to use only a few functions from the XML and httr packages, so we suggest loading them individually (using importFrom rather than import) to avoid masking.AR16:Thank you for this suggestion. Now we import only the needed functions with the “importFrom” statement. 
Minor commentsABSTRACTRC17:First line of the abstract, “There exists a set of web-based tools for integration and exploring information linked to annotated genetic variants”. We think that this statement would be more appropriate for the introduction because it does not add any key information about the work carried out. The abstract could start with the second sentence, maybe something like “This paper presents haploR, a novel R-package ...”AR17:Thank you for this helpful suggestion. We adapted the text accordingly. INTRODUCTIONRC18:Second sentence of the fourth paragraph: “The package … downloads results in the form of a data frame or a file”. Technically, a data frame can be saved in a file. Please consider rewording this sentence.AR18:We reworded this sentence to: \"The package connects to the web site, queries the database and downloads results.\"RC19:The second and the third paragraphs could be joined because the topics are strongly related.AR19:We joined the first and second paragraphs. RC20:Grant information: In most research journals this section is called “Acknowledgments”.AR20:We changed the “Grant Information” section name to \"Acknowledgments\"."
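The fragility of index-based subsetting that RC10 raises (and that the authors fixed in 1.4.4) can be shown generically. A Python sketch, not haploR code; the study names below are invented for illustration:

```python
# Why name-based lookup survives a changing remote list while positional
# indexing silently breaks. Study names here are invented for the example.

studies_v1 = ["GWAS of height (2014)", "GWAS of BMI (2015)"]
studies_v2 = ["GWAS of asthma (2016)"] + studies_v1   # server prepends a study

# Position 0 now refers to a different study:
assert studies_v1[0] != studies_v2[0]

# Looking a study up by its name gives the same selection in both versions:
target = "GWAS of height (2014)"
assert studies_v1[studies_v1.index(target)] == target
assert studies_v2[studies_v2.index(target)] == target
```

The same reasoning motivates exposing study names (rather than only positions) in the package's study-selection interface.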
}
]
},
{
"id": "20081",
"date": "03 Mar 2017",
"name": "Stephanie M. Gogarten",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis paper describes an R-package, haploR, which queries bionformatics databases. The benefit of the package is an ability to incorporate these queries into workflows in R, rather than using a web interface.\nThe haploR package seems useful, but the paper is lacking sufficient detail in several areas.\n\nThe Bioconductor project (bioconductor.org) contains a wealth of resources for querying various sources of annotation from R. The paper should discuss how the haploR package provides features that are not available in existing resources.\n\nThe types of information available in HaploReg and RegulomeDB are not well described. Why were these particular resources selected for this package and how do they differ from each other?\n\nThe \"future work\" section mentions adding other web tools to the package in the future. What additional information will be provided by those tools and how were they selected for inclusion in the package?\n\nI was able to install the R-package and follow the examples given in the vignette. However, these examples would benefit from more explanation.\nIn the HaploReg example, querying the database with two rs IDs returns results for many additional rs IDs. Why is this?\n\nWhy is the first element returned by getStudyList() blank?\n\nIn summary, the authors have provided a potentially useful R-package, but they need to include more explanation of how this package will benefit the bioinformatics community.",
"responses": [
{
"c_id": "2692",
"date": "15 May 2017",
"name": "Ilya Zhbannikov",
"role": "Author Response",
"response": "We thank the reviewer for careful reading of our paper and constructive remarks. We believe that the comments have identified important areas which required improvement. After completion of the suggested edits, the revised paper has benefited from an improvement in the overall presentation and clarity. Reviewer comments/suggestions (RC) are in italics font; our responses (AR) are in regular, black font.RC1:The Bioconductor project (bioconductor.org) contains a wealth of resources for querying various sources of annotation from R. The paper should discuss how the haploR package provides features that are not available in existing resources.AR1:We wanted to automatically retrieve the information about annotated genetic variants listed as an output of our custom genomic pipeline. We decided to find an R package that would be able to do this rather than download very large annotation files from different projects in order to query them locally. Among a plethora of annotation packages from Bioconductor and CRAN (annotate, mygene, ensembldb, biomaRt, myvariant, rsnps, rentrez), only myvariant, biomaRt, rentrez could potentially serve our needs. However, even the rich outputs of myvariant, biomaRt and rentrez did not contain ready-to use information about LD, sequence conservation across mammals, the effect of SNPs on regulatory motifs, and the effect of SNPs on expression from eQTL studies. In the revised version of our paper we briefly (due to limited size) emphasized the advantages of haploR. Please see introductory section.RC2:The types of information available in HaploReg and RegulomeDB are not well described. Why were these particular resources selected for this package and how do they differ from each other?AC2:HaploReg is a web resource for exploring annotations of genetically linked variants (i.e. variants in haplotype blocks). The particular advantage of HaploReg is that it allows explorations the effects of SNPs on expression from eQTL studies. 
It also outputs SNPs genetically linked to the query, so correlated effects can be discovered. RegulomeDB is a resource that shows annotated SNPs with known and predicted regulatory elements in the intergenic regions of the human genome. Data mostly come from publicly available datasets (GEO, ENCODE, etc.). Both HaploReg and RegulomeDB were chosen as convenient tools for exploring eQTL effects and identifying closely related variants. We added a description of the HaploReg and RegulomeDB output data to the package vignette (please see the Overview section).RC3:The \"future work\" section mentions adding other web tools to the package in the future. What additional information will be provided by those tools and how were they selected for inclusion in the package?AR3:We think that including additional resources on regulatory factors is beneficial since such factors can modulate gene expression and protein yield distinctly across individuals and cell types. This can help us to discover novel mechanisms of genetic associations.RC4:I was able to install the R-package and follow the examples given in the vignette. However, these examples would benefit from more explanation. In the HaploReg example, querying the database with two rs IDs returns results for many additional rs IDs. Why is this?AR4:This happened because HaploReg returns information about the query SNPs and also about those SNPs that are in LD with them at or above a pre-defined threshold (0.8 by default).RC5:Why is the first element returned by getStudyList() blank?AR5:This was because we used the study list returned by HaploReg 'as is', where the first element was blank. It is fixed in version 1.4.4 of the package (blanks were removed)."
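The behaviour the authors describe (extra SNPs returned because they are in LD with the query at or above the 0.8 default threshold) can be handled by post-filtering the result. A Python sketch with invented rows; this is not the haploR API, and the non-query rsID and all r2 values are made up:

```python
# Post-filtering HaploReg-style results that include LD-linked SNPs.
# Only rs10048158 and rs4791078 come from the discussion above; the
# other rsID and every r2 value are invented for this example.

rows = [
    {"rsID": "rs10048158", "r2": 1.00, "is_query": True},
    {"rsID": "rs11111111", "r2": 0.85, "is_query": False},  # linked SNP
    {"rsID": "rs4791078",  "r2": 1.00, "is_query": True},
]

# Keep only the SNPs that were actually queried:
query_only = [r["rsID"] for r in rows if r["is_query"]]
assert query_only == ["rs10048158", "rs4791078"]

# Or tighten the LD threshold after retrieval:
tight = [r for r in rows if r["r2"] >= 0.9]
assert len(tight) == 2
```

Either filter can be applied to the returned table before downstream analysis, depending on whether the linked SNPs are of interest.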
}
]
}
] | 1
|
https://f1000research.com/articles/6-97
|
https://f1000research.com/articles/6-678/v1
|
15 May 17
|
{
"type": "Research Article",
"title": "To derotate or not? The impact of a permanent derotation screw on the revision rate of dynamic hip screw fixation for intracapsular neck of femur fractures.",
"authors": [
"Simon Woods",
"Richard Pilling",
"Ivan Vidakovic",
"Alloush Al-Mothenna",
"Reza Mayahi",
"Richard Pilling",
"Ivan Vidakovic",
"Alloush Al-Mothenna",
"Reza Mayahi"
],
"abstract": "Background: In this retrospective study, we examine the impact that employing a permanent derotation screw (DRS) has on the rate of revision for 2-hole dynamic hip screws (DHS, a.k.a. sliding hip screws), used for internal fixation of intracapsular neck of femur (NOF) fractures. To the best of our knowledge, we are the first to examine the impact of using a derotation screw on DHS revision rate. Methods: We obtained a list of 64 patients suffering intracapsular NOF fracture treated with 2-hole DHS over a 5-year period, 28 of these were also treated with a DRS, forming our DRS group, 36 were not (non-DRS group). Fracture severity and patient demographics between the groups were compared to ensure homogeneity. The rate of revision to arthroplasty (total or hemi) of the two groups were compared. Results: The mean age in the DRS group was 70.79 years, 1.77 years lower than the non-DRS group (p=0.570). The DRS group had a rate of revision of 14%, in comparison with 39% in the non-DRS group (p=0.0299), corresponding with a number needed to treat of 4.06 derotation screws to prevent a single failure. Conclusions: In this study, employing a permanent derotation screw alongside a 2-hole DHS was associated with a significantly lower rate of revision to arthroplasty than using a 2-hole DHS alone. We would recommend this be further investigated with prospective randomized trials, to provide robust evidence and make clinical recommendations.",
"keywords": [
"dynamic hip screw",
"hip fracture",
"intracapsular",
"neck of femur",
"sliding hip screw",
"derotation screw"
],
"content": "Introduction\n\nIntracapsular neck of femur (NOF) fractures comprise one of the most common orthopaedic injuries1. The majority are treated with arthroplasty, as the femoral neck biomechanics and vulnerability of the blood supply lead to a high incidence of non-union and avascular necrosis following internal fixation. Some may be treated with fixation rather than replacement depending on patient factors and fracture configuration, however, the optimal fixation method is controversial2.\n\nThe choice of fixation method for intracapsular fractures is either cannulated hip screws (CHS), or dynamic hip screw (DHS) with or without a derotation screw (DRS). These devices provide stability in the plane of the femoral neck, whilst enabling compression at the fracture site to facilitate direct healing.\n\nThe biomechanics of basicervical fractures are influenced by fracture character and fixation method. Stankewich et al. (1996) investigated the biomechanical impact afforded to fracture configuration under cyclical and failure loading. They determined that force at the fracture site correlates with fracture angle. The more vertical the fracture angle, the greater the force resisted by the implant alone, and ultimate failure force correlated with the moment arm3.\n\nThe load through the hip when walking at 4km/h is approximately 238% body weight (BW), increasing to 250% when ascending stairs and 260% when descending4. Therefore a 70kg person loads the hip with approximately 1400–1500N when walking. The torsional force through the femur is also 23–83% larger when climbing stairs than when walking4. Using synthetic femurs Freitas et al. 
(2014) compared the load to failure for a Pauwels III fracture fixed with a DHS plus DRS against a control group without fracture and found the mean load to failure in the DHS group was 1742N, compared to 1329N in the control group5.\n\nBlair (1994) states that a DRS provides rotational control during insertion of the lag screw, but no additional fixation thereafter6. This opinion is echoed by both the clinical experience of others7,8 and biomechanical testing by Swiontkowski et al. (1987)9. However, biomechanical studies of cadaveric fractures have shown that DHS with DRS gives superior stabilisation, theoretically reducing AVN and non-union10,11.\n\nIn this study we examine the impact that employing a permanent derotation screw concomitantly with a 2-hole DHS has on the rate of revision to arthroplasty in the treatment of intracapsular NOF fractures.\n\n\nMethods\n\nA list of 161 patients was identified as sustaining an intracapsular NOF fracture treated with internal fixation between April 2009 and April 2014. The patient follow-up notes and imaging were reviewed, excluding those treated with CHS, and ensuring follow-up for at least one year. This left 64 patients treated with DHS, 28 of those treated with a derotation screw (the DRS group), and 36 without (the non-DRS group).\n\nX-rays, operation notes, discharge summaries and clinic letters were reviewed to assess the outcomes associated with the treatments. Each fracture was assessed and scored using the Pauwels and Garden classifications1 to ensure homogeneity between the DRS group and the non-DRS group with regard to fracture severity. The follow-up was reviewed and a negative outcome was defined as the need for revision surgery to hip arthroplasty.\n\nThe following inclusion criteria were applied:\n\n1. The patient sustained an intracapsular NOF fracture between 1st April 2009 and 31st March 2014.\n\n2. The fracture was treated with a 2-hole dynamic hip screw.\n\n3. 
The patient was followed up for a minimum of 1 year following surgery.\n\n\nResults\n\nPatient demographics between the groups are summarised in Table 1. The Pauwels and Garden scores are shown in Table 2 and Table 3. Pauwels score cut-offs are 0–30° for 1, 30–50° for 2 and >50° for 312. The mean fracture angle in the DRS group was 39.78 degrees (SD 11.11) compared to 35.18 degrees (SD 9.69) in the non-DRS group, and the proportion of each fracture character in the two groups shows a similar distribution. We therefore determined that the groups were sufficiently homogeneous to allow comparison.\n\nDRS: permanent derotation screw.\n\nThe patients in the DRS group had a significantly lower rate of revision to arthroplasty than those in the non-DRS group (p=0.0203), as shown in Table 4. Without a derotation screw the revision rate was 39%, in comparison to 14% when a DRS was used.\n\n\nDiscussion\n\nEmploying a permanent derotation screw alongside a dynamic hip screw seems to offer protection against the requirement for revision to arthroplasty, carrying a relative risk reduction of 66% and an NNT of 4.06. This NNT suggests the clinical impact of routinely employing a DRS could be quite significant, and needs to be further investigated with robust, prospective clinical studies. To the best of our knowledge, there are no previous studies analysing the impact a derotation screw has on the failure rate of 2-hole sliding hip screws when used for treating intracapsular hip fractures.\n\nOf the 18 patients requiring revision, 16 underwent total hip arthroplasty, one underwent hemiarthroplasty and one (in the non-DRS group) was managed conservatively despite requiring revision. 
We included this patient as it was documented that they required revision to arthroplasty, but were not fit for surgery, and therefore met our definition of a negative outcome.\n\nWhen reviewing images it was not possible to ascertain whether an intraoperative derotation wire had been used, as these images were rarely saved and operative notes were unreliable in reporting this. Our non-DRS group therefore likely contained some patients who had been covered with an intraoperative derotation wire and some who had not.\n\n\nConclusions\n\nThis study shows a reduced rate of revision to arthroplasty when a permanent DRS was used alongside a 2-hole DHS for fixation of intracapsular neck of femur fractures, compared to DHS alone. Given the effect size suggested in this study and the potential improvements in patient care that could be achieved, we recommend that this area be investigated with a randomised controlled trial.\n\n\nData availability\n\nDataset 1: Source data used as a basis for the findings in this study. Data for this study were collected through Hull Royal Infirmary’s hip fracture database, which is gathered for the national hip fracture database (NHFD). DOI, 10.5256/f1000research.11433.d16127013.\n\n\nConsent\n\nAll data for this study were collected through Hull Royal Infirmary’s hip fracture database, which is gathered for the national hip fracture database (NHFD). From the NHFD website: \"the NHFD is approved by the NHS England HRA Confidentiality Advisory Group (CAG) to collect patient data without consent under Section 251 exemption. (This approval was formerly administered under the NIGB-ECC/PIAG).\" and \"patients do not need to give formal consent\" for data to be collected, but “may opt out if they wish”.",
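The relative risk reduction and NNT quoted in the Discussion can be checked from the reported group sizes and revision percentages. The sketch below assumes revision counts of 4/28 (DRS) and 14/36 (non-DRS), inferred from the rounded percentages and consistent with the 18 total revisions; derived figures may therefore differ slightly from those reported.

```python
# Sketch: effect-size arithmetic for the reported revision rates.
# Counts are inferred from the paper's percentages (assumption):
# 4 of 28 revised in the DRS group, 14 of 36 in the non-DRS group.
drs_revised, drs_n = 4, 28
non_revised, non_n = 14, 36

risk_drs = drs_revised / drs_n   # ~0.14 (14%)
risk_non = non_revised / non_n   # ~0.39 (39%)

arr = risk_non - risk_drs        # absolute risk reduction
rrr = arr / risk_non             # relative risk reduction
nnt = 1 / arr                    # number needed to treat

print(f"ARR={arr:.3f}  RRR={rrr:.1%}  NNT={nnt:.2f}")
```

With these inputs the NNT works out at about 4.06, matching the value in the Discussion.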
"appendix": "Author contributions\n\n\n\nSW, lead researcher and author, designed the study, collected data, analysed data and performed the write up. RP performed background research and contributed to the write up. IV assisted with data collection and analysis. AA-M obtained patient lists and internal permissions. RM oversaw the project.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nSupplementary material\n\nSupplementary File 1: Statistical analysis of the source data summarising the main findings. Statistics were calculated using SPSS.\n\nClick here to access the data.\n\n\nReferences\n\nParker MJ: The management of intracapsular fractures of the proximal femur. J Bone Joint Surg Br. 2000; 82(7): 937–941. PubMed Abstract\n\nDeneka DA, Simonian PT, Stankewich CJ, et al.: Biomechanical comparison of internal fixation techniques for the treatment of unstable basicervical femoral neck fractures. J Orthop Trauma. 1997; 11(5): 337–43. PubMed Abstract | Publisher Full Text\n\nStankewich CJ, Chapman J, Muthusamy R, et al.: Relationship of mechanical factors to the strength of proximal femur fractures fixed with cancellous screws. J Orthop Trauma. 1996; 10(4): 248–57. PubMed Abstract | Publisher Full Text\n\nBergmann G, Deuretzbacher G, Heller M, et al.: Hip contact forces and gait patterns from routine activities. J Biomech. 2001; 34(7): 859–71. PubMed Abstract | Publisher Full Text\n\nFreitas A, Torres GM, Souza AC, et al.: Analysis on the mechanical resistance of fixation of femoral neck fractures in synthetic bone, using the dynamic hip system and an anti-rotation screw. Rev Bras Ortop. 2014; 49(6): 586–592. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBlair B, Koval KJ, Kummer F, et al.: Basicervical fractures of the proximal femur. A biomechanical study of 3 internal fixation techniques. Clin Orthop Relat Res. 
1994; (306): 256–63. PubMed Abstract\n\nSim E, Schmiedmayer HB, Lugner P: Mechanical factors responsible for the obstruction of the gliding mechanism of a dynamic hip screw for stabilizing pertrochanteric femoral fractures. J Trauma. 2000; 49(6): 995–1001. PubMed Abstract | Publisher Full Text\n\nStiasny J, Dragan S, Kulej M, et al.: Comparison analysis of the operative treatment results of the femoral neck fractures using side-plate and compression screw and cannulated AO screws. Ortop Traumatol Rehabil. 2008; 10(4): 350–61. PubMed Abstract\n\nSwiontkowski MF, Harrington RM, Keller TS, et al.: Torsion and bending analysis of internal fixation techniques for femoral neck fractures: the role of implant design and bone density. J Orthop Res. 1987; 5(3): 433–444. PubMed Abstract | Publisher Full Text\n\nBonnaire FA, Weber AT: Analysis of fracture gap changes, dynamic and static stability of different osteosynthetic procedures in the femoral neck. Injury. 2002; 33(Suppl 3): C24–32. PubMed Abstract | Publisher Full Text\n\nBaitner AC, Maurer SG, Hickey DG, et al.: Vertical shear fractures of the femoral neck. A biomechanical study. Clin Orthop Relat Res. 1999; (367): 300–5. PubMed Abstract\n\nBartonícek J: Pauwels' classification of femoral neck fractures: correct interpretation of the original. J Orthop Trauma. 2001; 15(5): 358–60. PubMed Abstract | Publisher Full Text\n\nWoods S, Pilling R, Vidakovic I, et al.: Dataset 1 in: To derotate or not to derotate: A retrospective study on the impact of derotation screws on the revision rate of 2-hole dynamic hip screw fixation of intracapsular neck of femur fractures. F1000Research. 2017. Data Source"
}
|
[
{
"id": "23260",
"date": "06 Jun 2017",
"name": "Martyn J Parker",
"expertise": [],
"suggestion": "Not Approved",
"report": "Not Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nEssentially the article is concise and well written. The problem is the small number of patients studied, there really have not been a sufficient number to be able to justify the conclusions stated in this article. Other comments are - There must have been many more patients treated by internal fixation over the 5 year study period. This is a selected group of patients. How were they selected? Numbers not just percentages must be given in the abstract. There are only very limited presentation of the patient demographics. There is only need for one decimal place. Were there more displaced fractures in the de-rotation screw group? The treatment of the complications should be in the results not the discussion.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? No\n\nAre sufficient details of methods and analysis provided to allow replication by others? No\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nNo\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": [
{
"c_id": "2783",
"date": "13 Jun 2017",
"name": "simon woods",
"role": "Author Response",
"response": "Many thanks for taking the time to review our article, we hope this addresses some of your feedback.How were they selected?The patient's were not actively selected. A list of patients sustaining intracapsular neck of femur fracture NOT treated with arthorplasty (hemi/THR) was obtained (2642 patients). From this we reviewed patient imaging and eliminated those that had been miscoded (2481 patients: mostly patients that had actually been treated with arthoplasty, or extracapsular fractures). A further 39 were eliminated if they did not have clinical follow up for 1 year post op. This left 122 patients, 58 treated with cannulated screws and not included in this study, 64 teated with 2-hole DHS. We don't doubt there were also patients with IC NOFs treated with fixation that were incorrectly coded and so lost to this study, but there was no intentional patient selection. This would have been included in the limitations section given more words to use.There are only very limited presentation of the patient demographics.There is a 1000 word limit and so we only reported age, sex, and fracture angle.Were there more displaced fractures in the de-rotation screw group?The distribution of Garden classification and Pauwel scores can be found in tables 2 and 3The treatment of the complications should be in the results not the discussionTrue. There is a lot more we would have liked to included in the discussion but were limited by the 1000 word limit"
}
]
},
{
"id": "23554",
"date": "21 Jun 2017",
"name": "Raju Karuppal",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe objective of the article is interesting. The problems are mainly the small sample size, how do they randomise the sample and the discussion part is poorly written. How do the researchers assess the sole reason for revision in non-DRS group . The result/ statistical part needs to be improved by including more details of patient demographics.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? No\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nPartly\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": []
},
{
"id": "28128",
"date": "20 Nov 2017",
"name": "Gianluca Testa",
"expertise": [],
"suggestion": "Not Approved",
"report": "Not Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe article is written with a native English, but a poor number of patients was included in the study. The Methods and Results must be better reported (percentage and range must be added). Discussion is poor and does not justify the findings described - the number of patients is not sufficient to determine conclusions. Newer references should be considered.\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nIs the study design appropriate and is the work technically sound? No\n\nAre sufficient details of methods and analysis provided to allow replication by others? No\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nPartly\n\nAre all the source data underlying the results available to ensure full reproducibility? Partly\n\nAre the conclusions drawn adequately supported by the results? No",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-678
|
https://f1000research.com/articles/6-676/v1
|
15 May 17
|
{
"type": "Research Note",
"title": "Electrocortical correlations between pairs of isolated people: A reanalysis",
"authors": [
"Dean Radin"
],
"abstract": "A previously reported experiment collected electrocortical data recorded simultaneously in pairs of people separated by distance. Reanalysis of those data confirmed the presence of a time-synchronous, statistically significant correlation in brain electrical activity of these distant “sender-receiver” pairs. Given the sensory shielding employed in the original experiment to avoid mundane explanations for such a correlation, this outcome is suggestive of an anomalous intersubjective connection.",
"keywords": [
"electrocortical",
"coherence",
"synchronization"
],
"content": "Introduction\n\nGiroldini et al. (2016) reported an experiment where pairs of people isolated by distance each had 14-channel electroencephalograms (EEGs) recorded simultaneously (Emotiv EPOC+, San Francisco, CA). The “sender” (S) of each pair was exposed to 128 stimulus epochs per test session, where each epoch consisted of a one-second exposure to a light or sound stimulus (the latter presented over earbuds). Using a whole brain EEG coherence metric, Giroldini et al. found that after 25 experimental sessions that the “receiver’s” (R) electrocortical coherence increased during the stimulus epochs. This was interpreted as a reflection of a “nonlocal” connection between S and R. The effect was primarily observed in the EEG alpha band of 8 – 12 Hz, with a statistically stronger effect reported in the range of 9 – 10 Hz. To double-check how robust the reported effect might be, this study developed a simpler correlational approach and applied it to the original, unfiltered EEG data.\n\n\nMethods\n\nThe raw EEG data from Giroldini et al. (2016) was obtained from: doi, 10.6084/m9.figshare.1466876.v8 (Tressoldi, 2016).\n\nMatlab (R2013a) scripts were written to conduct the analysis. These scripts may be obtained from: 10.6084/m9.figshare.4954643.v2 (Radin, 2017).\n\nTo process the raw EEG data, first use the script readEEG.m (this uses the function importfile1.m), then put all of the newly processed files (in Matlab’s .mat format) into a single folder and run the script EEG_xcorr_raw.m in that folder. This will create Giroldini’s et al.’s brain coherence metrics for each pair of participants. Finally, run the script EEG_analysis_Radin.m, which will analyze those files and generate results in graph form.\n\nAs a brief description of the method, the processing scripts follow Giroldini et al.’s method for creating a whole brain coherence metric for each S and R datafile. 
The scripts then create an ensemble median of this metric from one second before to one second after stimulus onset. A Pearson correlation is then formed between the ensemble median curves for S and R pairs. The two-tailed p-value associated with that correlation is transformed into a one-tailed z score using an inverse normal transform. Then a nonparametric permutation analysis is used to determine the probability associated with that z score (i.e., this z is not distributed as a standard normal deviate because its variance is inflated due to the autocorrelated nature of EEG data). The p-value resulting from the permutation analysis is converted into a standard normal deviate (this is now a conventional z score). The same process is used on the remaining 24 pairs of EEG data. The final step combines the 25 z scores into a Stouffer Z = ∑z/√25 = ∑z/5, where Z is distributed as a standard normal deviate.\n\n\nResults\n\nThe above procedure results in a Stouffer Z = 2.705, p = 0.006 (two-tailed). Four of the 25 sessions are independently significant at p < .05 (two-tailed); all four of those sessions had positive S-R correlations.\n\nTo check if this S-R relationship is in time-synchrony, the Matlab script circularly shifts each R’s EEG coherence signal by -2 seconds, and then repeats the entire analytical procedure to determine the overall Stouffer Z score. Then R’s coherence signal is shifted to the right by 100 msec, reanalyzed, and this is repeated until reaching a lag of +2 seconds. If the original S-R correlation was synchronized in time, then we would expect to see the peak result at lag 0. Figure 1 shows that this was indeed the case.\n\nPositive lags in this graph represent post-stimulus S-R correlations; negative lags are pre-stimulus.\n\nFigure 1 also shows a significantly negative deviation at a lag of 900 msec post-stimulus. 
Because this analysis is based on the absolute magnitude and not the direction of the correlation, this decline indicates that the S-R correlation strength fell below chance-expected levels about 1 second post-stimulus. This may reflect a drop in electrocortical coherence in S generated by the explicit presentation of a stimulus; thus, during that time, the magnitude of the S-R correlation would be expected to momentarily drop. If similar negative correlations are observed in future experiments of this type, it may prove to be a useful secondary indicator of a genuine S-R relationship.\n\n\nConclusion\n\nAnalysis of previously collected EEG data showed a significant time-synchronized correlation between the electrocortical activity of “sender” and “receiver” pairs. Because the data were collected under conditions where participants were isolated by shielding and distance, this outcome is suggestive of a “nonlocal” mind-to-mind interaction.\n\n\nData availability\n\nThe raw EEG data from Giroldini et al. (2016) was obtained from: doi, 10.6084/m9.figshare.1466876.v8 (Tressoldi, 2016).",
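The per-session procedure described in the Methods (a Pearson correlation between the S and R ensemble-median curves, a circular-shift permutation null that respects EEG autocorrelation, and Stouffer combination across sessions) can be sketched compactly. This is an illustrative reimplementation, not the author's Matlab code; all function names here are my own.

```python
import math
import random
from statistics import NormalDist

def pearson(x, y):
    """Plain Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

def session_z(s_curve, r_curve, n_perm=2000, seed=0):
    """Permutation z for one session: circular shifts of R's curve build a
    null distribution that preserves autocorrelation; the permutation
    p-value is then mapped back to a standard normal deviate."""
    rng = random.Random(seed)
    obs = abs(pearson(s_curve, r_curve))
    hits = 0
    for _ in range(n_perm):
        k = rng.randrange(1, len(r_curve))
        shifted = r_curve[k:] + r_curve[:k]
        if abs(pearson(s_curve, shifted)) >= obs:
            hits += 1
    p = (hits + 1) / (n_perm + 1)       # one-tailed permutation p-value
    return NormalDist().inv_cdf(1 - p)  # conventional z score

def stouffer(zs):
    """Combine independent z scores: Z = sum(z) / sqrt(N)."""
    return sum(zs) / math.sqrt(len(zs))
```

For 25 sessions, `stouffer` reduces to ∑z/√25 = ∑z/5, as in the Methods text.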
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nGiroldini W, Pederzoli L, Bilucaglia M, et al.: EEG correlates of social interaction at distance [version 5; referees: 2 approved]. F1000Res. 2016; 4: 457. Publisher Full Text\n\nRadin D: readEEG analysis files. figshare. 2017. Data Source\n\nTressoldi P: EEG correlates of social interaction at distance. figshare. 2016. Data Source"
}
|
[
{
"id": "22753",
"date": "18 May 2017",
"name": "Edward Justin Modestino",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis is a brief research note that is under review. It refers to an independent reanalysis of data from another research group was done for a controversial study on non-local consciousness. The reanalysis used a non-parametric permutation. The only thing that I do not understand clearly is the results. It appears that the results of 25 session (different subject pairs) divulged a significant p-value of p = 0.006 in a group analysis. Next, it is explained that four out of the 25 sessions were independently significant at p<0.05 two tailed. I am a bit confused. I guess this means the greatest significance was seen at the group level, and at the subject level only four subject pairs showed significance. I am not sure I am understanding this correctly. Please make sure it is very explicitly stated to avoid the confusion I have had.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Partly\n\nIf applicable, is the statistical analysis and its interpretation appropriate? Yes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": [
{
"c_id": "2729",
"date": "30 May 2017",
"name": "Dean Radin",
"role": "Author Response",
"response": "> It appears that the results of 25 session (different subject pairs) divulged a significant p-value of p = 0.006 in a group analysis. Next, it is explained that four out of the 25 sessions were independently significant at p<0.05 two tailed.... I guess this means the greatest significance was seen at the group level, and at the subject level only four subject pairs showed significance. Yes, that is correct. The p value of p = 0.006 is a group analysis over all 25 sessions. When examining individual sessions 4 were independently significant at p < 0.05. It is noteworthy that this latter outcome is unexpected by chance because the binomial probability of 4 or more significant (at p < 0.05) sessions out of 24 is associated with p = 0.03. What this suggests is that while some of the other sessions did not quite reach the (conventional) threshold for significance, on average they contributed results in the same direction, thus leading to the overall stronger statistical outcome for all data combined."
}
]
},
{
"id": "22756",
"date": "23 Jun 2017",
"name": "Aliodor Manolea",
"expertise": [
"Reviewer Expertise amplified states of consciousness",
"statistics in psychology"
],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe statistical method seems to be the correct one if we consider each experimental session corresponding to an S-R pair as a separate experiment. The study is very concise and on the subject, and the results comes from a logical thinking that is materialized in a mathematical method, perfectly adapted to the purpose pursued. Well done work.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate? Yes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-676
|
https://f1000research.com/articles/6-669/v1
|
12 May 17
|
{
"type": "Correspondence",
"title": "From disease modelling to personalised therapy in patients with CEP290 mutations",
"authors": [
"Elisa Molinari",
"Shalabh Srivastava",
"John A. Sayer",
"Simon A. Ramsbottom",
"Elisa Molinari",
"Shalabh Srivastava",
"John A. Sayer"
],
"abstract": "Mutations that give rise to premature termination codons are a common cause of inherited genetic diseases. When transcripts containing these changes are generated, they are usually rapidly removed by the cell through the process of nonsense-mediated decay. Here we discuss observed changes in transcripts of the centrosomal protein CEP290 resulting not from degradation, but from changes in exon usage. We also comment on a landmark paper (Drivas et al. Sci Transl Med. 2015) where modelling this process of exon usage may be used to predict disease severity in CEP290 ciliopathies, and how understanding this process may potentially be used for therapeutic benefit in the future.",
"keywords": [
"Cep290",
"splicing",
"genetic pleiotropy",
"exon-skipping",
"Leber congential amaurosis",
"Joubert syndrome",
"nephronophthisis"
],
"content": "Mechanisms of aberrant transcript removal\n\nNonsense mutations are changes to the coding sequence of a gene, which lead to a termination (stop) codon being coded for in place of an amino acid, giving rise to a truncated protein. These types of mutations are common in human disease and account for around a third of inherited genetic disorders1. The resultant abnormally truncated proteins can have significant deleterious effects on the cell. A truncated protein may display a loss-of-function or in some cases a dominant-negative function that can seriously impact on normal biological processes. Two mechanisms have been identified by which cells remove unwanted transcripts, thereby avoiding production of abnormal protein. The first of these is nonsense-mediated decay (NMD), whereby transcripts containing a nonsense mutation are targeted for degradation giving rise to a reduction in transcript2,3. The second method that has been described is that of nonsense-associated altered splicing (NAS), a mechanism that promotes the increase in transcripts that are missing the exon containing the deleterious mutation4.\n\nMany genes are alternately spliced in order to generate proteins with unique properties and functions5. However, NAS, triggered by the presence of a premature stop codon, leads to splicing of an exon that may not normally be spliced. Evidence of NAS has been shown following mutation of CEP2906, a gene that, when mutated, is associated with a spectrum of inherited genetic disorders, including Leber congenital amaurosis (LCA), Senior Løken syndrome (SLS), Joubert syndrome (JBTS) and Meckel-Gruber syndrome7–10. The compound heterozygous mutations described in a family with LCA by Littink et al. included a novel premature termination codon in exon 7 (c.451C>T, p.Arg151*). When mRNA transcripts were sequenced, skipping of either exon 7 alone, or exon 7 and exon 8 was revealed, which was never seen in controls6. 
The authors concluded that the LCA phenotype seen in the patient, which was less severe than expected, was due to a functional CEP290 protein being produced due to NAS. The genetic pleiotropy exhibited in patients with CEP290 mutations may therefore be explained in part by the differential ability of NAS to give rise to a functional protein in the event of a premature termination codon being generated. One can surmise that the greater the level of near-normal (wild-type) protein that can be generated, the lower the disease burden and the milder the phenotype.\n\n\nPredicting disease severity using alternate splicing models\n\nThe recent landmark article by Drivas et al. has provided some novel insights into the potential modelling of CEP290 mutations, based around the idea that genetic pleiotropy may arise as a result of differences in protein levels11. Known mutations in CEP290 were classified into categories based on the premise that the severity of disease correlates with the impact on the overall level of functional protein that may be generated. Missense mutations were classified as mild, as they should impact less on the level of transcript, whereas nonsense mutations were either moderate or severe depending on whether or not they occur in an exon that begins and ends in the same frame and can therefore be spliced out with no change in reading frame. In theory, transcripts containing nonsense mutations would be removed by NMD, so the overall level of CEP290 transcript would be lower in those patients harbouring mutations in exons that may not be easily skipped by NAS. Using this simple model, it was shown that the disease severity does correlate fairly well with the predicted level of CEP290 protein. The authors then modified the model to take into account skipping of regions of known functional importance. 
Mutations that map to these regions have more severe phenotypes than can be explained by the original model, due to the fact that skipping out these exons gives rise to a protein with reduced functionality.\n\nWhen the model was tested against genotype-phenotype correlations in patients with varying symptoms, it did appear to accurately predict the phenotype from the given genotype11. Furthermore, the authors showed that for mutations that give rise to a premature termination codon, the exon in which the mutation has occurred is indeed spliced out, as would be expected if NAS were activated. However, what was unexpected was that in control samples these exons were also shown to be spliced out. Perhaps even more surprisingly, the spliced levels observed in control samples were a similar level to that of the patient samples. The fact that the levels observed in controls are the same as seen in patients suggests that it is in fact not NAS that is leading to splicing of these exons; splicing is simply happening at a basal level and is not in any way increased due to the presence of the mutation.\n\nImportantly, while it was shown by PCR that small amounts of these transcripts existed, the authors were unable to detect transcripts arising from basal exon skipping via direct RNA-sequencing, which suggests the infrequent nature of these alternative splicing events and so brings into question the biological relevance of this mechanism. It must also be noted that there are patients who have the same or similar mutations, but present with symptoms of differing severity. One reported CEP290 mutation (c.21G>T; p.Trp7Cys) gives rise to both SLS and JBTS phenotypes8,9. This mutation is in exon 2, which contains the start codon, and so may not be spliced out. In this case, the genotype alone is unable to be used to predict how the transcript level will impact on the severity of the disease and is therefore no use as a proxy measure for phenotype. 
Similarly, there have been several patients reported with a nonsense mutation in exon 36 (c.4723A>T; p.1575*), who present with LCA and not JBTS12–14, even though this mutation is only a few bases upstream of the c.4732G>T; p.Glu1578* mutation with JBTS phenotypes8. It must therefore be acknowledged that exon skipping is not the sole source of genetic pleiotropy.\n\n\nManipulation of splicing for therapeutic benefit\n\nPatients suffering from non-syndromic LCA commonly have a mutation within the CEP290 gene (c.2991+1655A>G), which creates a cryptic splice site, resulting in the inclusion of an aberrant exon of 128 bp that contains a premature stop codon (p.Cys998*). Alternative splicing of the cryptic exon occurs in some, but not all, mRNA transcripts15,16. Collin et al. successfully exploited the use of antisense oligonucleotides (AONs) to boost an efficient skipping of the mutant cryptic exon: by transfecting AONs in patient-derived lymphoblastoid cells, they were able to redirect normal splicing of CEP29017. As a proof of principle for the feasibility of altering the splicing pattern of Cep290 in vivo in the affected tissue, intravitreal injection of wild type mice with naked splice-switching AON led to the modification of Cep290 splicing in retinal cells18. Similarly, naked AONs and adeno-associated virus-packaged AONs were administered to a humanized mouse model (Cep290lca/lca) that contains intron 26 of the human CEP290 gene carrying the c.2991+1655A>G mutation. Delivery by intraocular injection caused a statistically significant reduction of aberrantly spliced Cep290 up to 1 month after injection, without compromising the retinal structure19. However, humanized Cep290lca/lca mouse fails to recapitulate the human clinical features, making it impossible to understand the actual impact of AON-directed restoration of wild type Cep290 transcript on the retinal phenotype18–20. 
Nevertheless, the ability of splice-switching AONs not only to cause an upregulation of wild type CEP290 mRNA to normal levels, but also to restore otherwise impaired ciliogenesis in patient-derived fibroblast cells, demonstrates, although only in a limited way, that an increase of correctly spliced transcript can indeed result in a phenotypical rescue21.\n\nUsing a similar approach, AONs can be exploited to promote skipping of exons carrying nonsense mutations to increase the abundance of slightly shortened transcripts and near-full length functional protein, as in the case of mutated dystrophin in Duchenne muscular dystrophy22.\n\nThe majority of CEP290 mutations are nonsense mutations that introduce a premature stop codon in the mRNA9. In addition to retinal degeneration, these mutations cause a wide spectrum of multisystemic ciliopathies, such as the cystic kidney disease nephronophthisis, which results in end stage renal failure at a median age of 13 years. Due to the relatively slow progression of this disease, there is a potential time for therapeutic intervention. If CEP290 protein levels could be restored by inducing exon skipping, disease progression may be significantly slowed or even halted. As we move closer to personalised medicine, especially in the arena of rare disease, it is likely that therapeutic strategies such as this may become routine, with unique therapies being designed based on the patient’s genotype. Understanding the way in which deleterious mutations are dealt with in vivo will have a significant impact on how successfully these therapies can be implemented.",
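The "same frame" criterion underlying the exon-skipping model discussed above reduces to simple arithmetic: removing an exon leaves the downstream reading frame intact only when the exon length is a whole number of codons. A toy sketch (the example lengths are illustrative, not real CEP290 exon sizes; note that the 128 bp cryptic exon mentioned in the text is not a multiple of 3, so its inclusion also disrupts the frame, in addition to introducing a stop codon):

```python
def skippable_in_frame(exon_len_nt: int) -> bool:
    """True if removing an exon of this length (in nucleotides) preserves
    the downstream reading frame, i.e. the length is a multiple of 3."""
    return exon_len_nt % 3 == 0

# Illustrative lengths only:
for length in (126, 127, 128):
    print(length, "in-frame" if skippable_in_frame(length) else "frameshift")
```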
"appendix": "Author contributions\n\n\n\nJ.A.S conceived the article. S.A.R, S.S and E.M prepared the first draft. All authors contributed to the revision of the draft manuscript and have agreed to the final content.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nE.M. is funded by Kids Kidney Research. J.A.S is funded by the Medical Research Council (MR/M012212/1) and the Newcastle upon Tyne Hospitals NHS Charity. S.S. and S.A.R are funded by Kidney Research UK (PDF_003_20151124).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nReferences\n\nFrischmeyer PA, Dietz HC: Nonsense-mediated mRNA decay in health and disease. Hum Mol Genet. 1999; 8(10): 1893–900.\n\nKurosaki T, Maquat LE: Nonsense-mediated mRNA decay in humans at a glance. J Cell Sci. 2016; 129(3): 461–7.\n\nKhajavi M, Inoue K, Lupski JR: Nonsense-mediated mRNA decay modulates clinical outcome of genetic disease. Eur J Hum Genet. 2006; 14(10): 1074–81.\n\nWang J, Chang YF, Hamilton JI, et al.: Nonsense-associated altered splicing: a frame-dependent response distinct from nonsense-mediated decay. Mol Cell. 2002; 10(4): 951–7.\n\nde Klerk E, 't Hoen PA: Alternative mRNA transcription, processing, and translation: insights from RNA sequencing. Trends Genet. 2015; 31(3): 128–39.\n\nLittink KW, Pott JW, Collin RW, et al.: A novel nonsense mutation in CEP290 induces exon skipping and leads to a relatively mild retinal phenotype. Invest Ophthalmol Vis Sci. 2010; 51(7): 3646–52.\n\nSayer JA, Otto EA, O'Toole JF, et al.: The centrosomal protein nephrocystin-6 is mutated in Joubert syndrome and activates transcription factor ATF4. Nat Genet. 2006; 38(6): 674–81.\n\nValente EM, Silhavy JL, Brancati F, et al.: Mutations in CEP290, which encodes a centrosomal protein, cause pleiotropic forms of Joubert syndrome. Nat Genet. 2006; 38(6): 623–5.\n\nCoppieters F, Lefever S, Leroy BP, et al.: CEP290, a gene with many faces: mutation overview and presentation of CEP290base. Hum Mutat. 2010; 31(10): 1097–108.\n\nFrank V, den Hollander AI, Brüchle NO, et al.: Mutations of the CEP290 gene encoding a centrosomal protein cause Meckel-Gruber syndrome. Hum Mutat. 2008; 29(1): 45–52.\n\nDrivas TG, Wojno AP, Tucker BA, et al.: Basal exon skipping and genetic pleiotropy: A predictive model of disease pathogenesis. Sci Transl Med. 2015; 7(291): 291ra97.\n\nPerrault I, Delphin N, Hanein S, et al.: Spectrum of NPHP6/CEP290 mutations in Leber congenital amaurosis and delineation of the associated phenotype. Hum Mutat. 2007; 28(4): 416.\n\nBrancati F, Barrano G, Silhavy JL, et al.: CEP290 mutations are frequently identified in the oculo-renal form of Joubert syndrome-related disorders. Am J Hum Genet. 2007; 81(1): 104–13.\n\nStone EM: Leber congenital amaurosis - a model for efficient genetic testing of heterogeneous disorders: LXIV Edward Jackson Memorial Lecture. Am J Ophthalmol. 2007; 144(6): 791–811.\n\nden Hollander AI, Koenekoop RK, Yzer S, et al.: Mutations in the CEP290 (NPHP6) gene are a frequent cause of Leber congenital amaurosis. Am J Hum Genet. 2006; 79(3): 556–61.\n\nden Hollander AI, Roepman R, Koenekoop RK, et al.: Leber congenital amaurosis: genes, proteins and disease mechanisms. Prog Retin Eye Res. 2008; 27(4): 391–419.\n\nCollin RW, den Hollander AI, van der Velde-Visser SD, et al.: Antisense Oligonucleotide (AON)-based Therapy for Leber Congenital Amaurosis Caused by a Frequent Mutation in CEP290. Mol Ther Nucleic Acids. 2012; 1: e14.\n\nGerard X, Perrault I, Munnich A, et al.: Intravitreal Injection of Splice-switching Oligonucleotides to Manipulate Splicing in Retinal Cells. Mol Ther Nucleic Acids. 2015; 4: e250.\n\nGaranto A, Chung DC, Duijkers L, et al.: In vitro and in vivo rescue of aberrant splicing in CEP290-associated LCA by antisense oligonucleotide delivery. Hum Mol Genet. 2016; 25(12): 2552–63.\n\nGaranto A, van Beersum SE, Peters TA, et al.: Unexpected CEP290 mRNA splicing in a humanized knock-in mouse model for Leber congenital amaurosis. PLoS One. 2013; 8(11): e79369.\n\nGerard X, Perrault I, Hanein S, et al.: AON-mediated Exon Skipping Restores Ciliation in Fibroblasts Harboring the Common Leber Congenital Amaurosis CEP290 Mutation. Mol Ther Nucleic Acids. 2012; 1: e29.\n\nWilton SD, Fall AM, Harding PL, et al.: Antisense oligonucleotide-induced exon skipping across the human dystrophin gene transcript. Mol Ther. 2007; 15(7): 1288–96."
}
|
[
{
"id": "23066",
"date": "19 Jun 2017",
"name": "Patricia D. Wilson",
"expertise": [
"Renal genetic disease"
],
"suggestion": "Approved",
"report": "Approved\n\nThis clearly-written and well-argued manuscript highlights recent pertinent findings concerning mechanisms underlying exon skipping and splicing in determination of disease severity in patients with CEP290 mutations. Evidence cited is consistent with the notion that the severity of disease is inversely proportional to the level of normal or near-normal protein generated. It is suggested that phenotypic rescue could be achieved in patients with the most common, severe nonsense mutations of CEP290 by in-frame splicing. This has important potential for personalised therapeutic intervention in the future, which envisages induction of exon skipping to upregulate normal mRNA, increase near-normal protein levels and thereby reduce the severity and progression of, for instance, nephronophthisis, a rare disease but common cause of renal failure in children.\n\nIs the rationale for commenting on the previous publication clearly described? Yes\n\nAre any opinions stated well-argued, clear and cogent? Yes\n\nAre arguments sufficiently supported by evidence from the published literature or by new data and results? Yes\n\nIs the conclusion balanced and justified on the basis of the presented arguments? Yes",
"responses": []
},
{
"id": "23067",
"date": "19 Jun 2017",
"name": "Ruxandra Bachmann-Gagescu",
"expertise": [
"Ciliopathies",
"medical genetics"
],
"suggestion": "Approved",
"report": "Approved\n\nThe correspondence article by Molinari and colleagues discusses potential therapeutic options through manipulation of splicing for patients harbouring CEP290 mutations. This represents an important topic given the role of CEP290 in human disease and the intense research focusing on understanding the phenotypic consequences of CEP290 dysfunction and identifying therapies for affected individuals. Thus the rationale for writing this correspondence article is clearly established.\nThe article by Molinari and colleagues reviews various studies that have provided a possible explanation for the important phenotypic pleiotropy caused by CEP290 mutations through exon skipping to by-pass stop mutations (“non-sense associated altered splicing”). In particular, they thoroughly comment on the paper by Drivas et al. who developed an algorithm to predict phenotypic severity based on the location of the mutation within the CEP290 gene and consequent anticipated CEP290 protein levels. After discussing this model in depth, Molinari and colleagues suggest that targeted modulation of splicing through antisense oligonucleotide-based therapy is an interesting therapeutic option for patients with CEP290 mutations.\nThe opinions expressed in this correspondence article are well balanced and supported by appropriate citations. The authors thoroughly comment on the study by Drivas et al, highlighting strengths but also discussing limitations. 
In particular, they comment on the fact that exon skipping appears to mostly occur not through “nonsense-associated altered splicing” as expected but rather through “basal exon skipping” independently of the presence of a mutation. While this finding questions the relevance of exon skipping occurring in vivo in the presence of stop mutations, it would probably not question the feasibility of splicing modulation as a therapeutic modality; increasing the proportion of transcript by-passing the mutation should partially rescue protein function, regardless of whether such a mechanism is triggered in vivo to attenuate the consequences of a mutation on protein function.\nBesides these considerations, one important point that is not addressed in the work by Drivas et al, but that is appropriately raised by Molinari and colleagues, is the fact that phenotypic differences are observed even between individuals sharing the same causal mutations, indicating that other mechanisms must influence the phenotypic outcome. Similarly, our recent work on a large JS cohort (Phelps et al, Genetics in Medicine in press) noted phenotypic discrepancies despite identical causal mutations in 60% of situations. Therefore, while the causal mutations certainly play a major role in determining the phenotypic outcome, additional mechanisms must also underlie the genetic pleiotropy observed.\nThe discussion by Molinari and colleagues could have been expanded to include additional points:\nIn commenting on the work by Drivas et al, it should be pointed out that the correlation between the anticipated protein levels and the phenotypic severity is strongly influenced by the phenotypic assessment. The study by Drivas et al mostly relied on phenotypic descriptions from the literature and misclassification of patients (as having JS instead of JSRD for example, if no thorough assessments of retinal or renal function were performed) cannot be ruled out. 
Moreover, determining the “phenotypic severity” is more difficult than it seems. The commonly used scale considers LCA as the least severe disease manifestation, MKS as the most severe and JS with or without additional features in-between. However, one can argue whether single-organ involvement resulting in blindness in early childhood (LCA) is really less severe than presence of the molar tooth sign with mild developmental delay (classified as JS) or whether JS with mild developmental delay and polydactyly (classified as JSRD) is more severe than JS with severe developmental delay but no additional organ system involvement (classified as JS). From a biological standpoint, it remains yet to be demonstrated that the degree of dysfunction of a given protein is necessarily correlated with the number of tissues affected. One could provide alternative hypotheses involving the presence of additional variants in other genes (modifiers) or suggesting that different regions of the protein are important for different functions in different cell types. These points might affect the reliability of the prediction algorithm proposed by Drivas and colleagues which in turn would question the efficiency of inducing exon skipping to decrease phenotypic severity in patients. Larger studies relying on thorough standardized phenotypic assessments as performed for 6 patients in the study by Drivas et al. and a more nuanced classification of phenotypic severity would be helpful in confirming the reliability of the proposed exon-skipping model in predicting phenotypic outcome.\n\nAdditional issues should also be considered when thinking about controlled exon skipping as a therapeutic modality. Exon skipping may not rescue the phenotype as efficiently in genes such as CC2D2A, in which missense mutations predominate in JS/JSRD. 
Finally, splicing modulators may be challenging to apply in patients harbouring compound heterozygous mutations in different exons of the target gene as each splicing modulator can affect both alleles, resulting in transcripts missing multiple exons.\n\nThese points may represent limitations in the applicability or efficiency of splicing-modulation as a therapeutic option which could have been discussed by Molinari and colleagues. Nevertheless, as appropriately discussed by the authors, this avenue deserves further investigation and CEP290 is certainly the best candidate among the ciliopathy genes for this type of intervention given the high proportion of truncating mutations and the progressive nature of retinal and renal complications providing a time window for intervention.\n\nIs the rationale for commenting on the previous publication clearly described? Yes\n\nAre any opinions stated well-argued, clear and cogent? Yes\n\nAre arguments sufficiently supported by evidence from the published literature or by new data and results? Yes\n\nIs the conclusion balanced and justified on the basis of the presented arguments? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-669
|
https://f1000research.com/articles/5-1945/v1
|
10 Aug 16
|
{
"type": "Research Article",
"title": "A multi-scale computational model of the effects of TMS on motor cortex",
"authors": [
"Hyeon Seo",
"Natalie Schaworonkow",
"Sung Chan Jun",
"Jochen Triesch",
"Hyeon Seo",
"Natalie Schaworonkow",
"Sung Chan Jun"
],
"abstract": "The detailed biophysical mechanisms through which transcranial magnetic stimulation (TMS) activates cortical circuits are still not fully understood. Here we present a multi-scale computational model to describe and explain the activation of different cell types in motor cortex due to transcranial magnetic stimulation. Our model determines precise electric fields based on an individual head model derived from magnetic resonance imaging and calculates how these electric fields activate morphologically detailed models of different neuron types. We predict detailed neural activation patterns for different coil orientations consistent with experimental findings. Beyond this, our model allows us to predict activation thresholds for individual neurons and precise initiation sites of individual action potentials on the neurons’ complex morphologies. Specifically, our model predicts that cortical layer 3 pyramidal neurons are generally easier to stimulate than layer 5 pyramidal neurons, thereby explaining the lower stimulation thresholds observed for I-waves compared to D-waves. It also predicts differences in the regions of activated cortical layer 5 and layer 3 pyramidal cells depending on coil orientation. Finally, it predicts that under standard stimulation conditions, action potentials are mostly generated at the axon initial segment of cortical pyramidal cells, with a much less important activation site being the part of a layer 5 pyramidal cell axon where it crosses the boundary between grey matter and white matter. In conclusion, our computational model offers a detailed account of the mechanisms through which TMS activates different cortical cell types, paving the way for more targeted application of TMS based on individual brain morphology in clinical and basic research settings.",
"keywords": [
"transcranial magnetic stimulation",
"computational model",
"compartmental neuron model",
"brain stimulation",
"multi-scale modeling",
"motor cortex",
"D-wave",
"I-wave"
],
"content": "Introduction\n\nTranscranial magnetic stimulation (TMS) is a neurostimulation and neuromodulation technique that noninvasively activates neurons in the brain1,2. It generates a time varying magnetic field using a coil above the head, which induces an electric field in the brain that can be of sufficient magnitude to depolarize neurons. In recent years, TMS has been widely tested as a tool for diagnosis and treatment for a broad range of neurological and psychiatric disorders3–5. Although the efficacy of TMS has been demonstrated, there remains a large degree of uncertainty regarding the factors influencing the affected brain areas and relevant circuits.\n\nTo provide a better understanding of the biophysical mechanisms behind TMS, several computational studies have been performed to try to reveal the effects of a number of parameters that lead to variable outcomes. The majority of models predict the brain regions influenced by TMS based on stimulus-induced electric fields6,7. While early studies utilized spherical models of the human head, in recent years high-resolution volume conduction models of the head have been developed from human magnetic resonance imaging (MRI) to improve the accuracy of calculated electric fields8–15. These models have revealed that the geometry of the volume conduction model, such as complex gyral folding patterns, is one of the key parameters determining the induced electric field. In addition, computational studies were extended by connecting numerical results with experimental observations to show the correlation between computed electric fields and physiological observations16–18.\n\nDirectly monitoring target cells’ activities under stimulation would be immensely valuable for the interpretation of TMS effects, but few such studies exist19,20. However, computational studies can explore the effects of the electromagnetic fields on neural activation by simulating models of neural stimulation in silico. 
In early computational models, straight axonal fibers were considered numerically and the response of neurons induced by the external field was modeled by means of the cable equation21,22. Later models investigated the role of cell morphology using multi-compartmental modeling23–25. Since the responses of cortical neurons vary depending on not only the neuronal morphology but also orientation relative to the induced electric field and stimulus amplitude6,26,27, anatomical information such as cortical folding that induces a wide range of field orientations was fed into the neuronal models by applying the calculated electric field from the head model to the neuronal models28,29.\n\nHere, we use an advanced multi-scale modeling approach that combines a high-resolution head model with detailed multi-compartmental neuron models. We construct an anatomically realistic head model based on MRI and calculate the external currents that affect neurons via the TMS-induced electric field with high accuracy. We concentrate on the hand knob area of the motor cortex that is the predominant target of many TMS studies16,30. A multitude of layer 5 and layer 3 pyramidal neurons (L5/L3 PNs) is incorporated on the basis that they might be primary activators of the corticospinal tract and provide the main input to the direct pathway24,28,31,32. We estimate the target area of activation as a function of coil orientation as well as the stimulation intensities required to activate neurons. Finally, we predict the precise sites where the neurons initiate their action potentials.\n\n\nMethods\n\nIn order to study the cellular effects of TMS in the brain we employed a multi-scale computational modeling approach combining a volume conductor head model with detailed neuronal models of cortical pyramidal neurons. The motor cortex, especially the hand area, was considered as a cortical target location. The volume conductor head model was used to simulate the stimulus-induced electric fields. 
The precise impact of these fields on different neural targets was evaluated using multi-compartmental models of pyramidal neurons embedded into the head model. This allowed us to predict differences in individual neurons’ susceptibility to TMS depending on neuron placement and coil orientation.\n\nThe simulated effects of TMS depend not only on stimulation parameters but also on the anatomical information specified in the volume conductor model. To calculate the precise electric field, a volume conductor head model for TMS that reflected T1-weighted and T2-weighted magnetic resonance (MR) images was constructed using SimNibs v1.114,33. Briefly, segmentation of white matter (WM), gray matter (GM), cerebrospinal fluid (CSF), skull and skin was based on FreeSurfer v5.3.034,35, FSL v5.0.036 and MeshFix v2.037, as shown in Figure 1(a). Then, the head model was constructed by generating an optimized tetrahedral volume mesh using an enhanced resolution in the region of interest (ROI) around the hand knob using Gmsh38. The total number of tetrahedral elements was approximately 5.6 million. At each layer of the head model, isotropic conductivity was assigned with the following values (in S/m): WM: 0.126; GM: 0.276; CSF: 1.654; skull: 0.01; and skin: 0.465.\n\n(a) Cross-section displaying the scalp, skull, cerebrospinal fluid, gray matter and white matter. (b) The computed coil location is superimposed on the head model. (c) The yellow dot indicates the location of the center of the TMS coil on the border between gray matter and cerebrospinal fluid, and the coil handle is oriented in the direction of the yellow arrow.\n\nThe electric field induced by TMS, E→ = −∂A→/∂t − ∇ϕ = −Ep→ − ES→, consisted of primary (Ep→) and secondary (ES→) electric fields. The primary electric field was directly determined by the coil geometry and the head model. The secondary electric field was solved via a finite element method using the GetFEM++ library and MATLAB33,39. 
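The isotropic conductivities listed above are the kind of values typically collected in a small configuration table; a sketch (the mapping name is ours, not from the authors' pipeline):

```python
# Isotropic tissue conductivities (S/m) assigned to the head-model layers.
CONDUCTIVITY_S_PER_M = {
    "white_matter": 0.126,
    "gray_matter": 0.276,
    "csf": 1.654,
    "skull": 0.01,
    "skin": 0.465,
}
```

Keeping these in one place makes it straightforward to test the sensitivity of the computed fields to alternative conductivity assumptions.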
The Magstim 70 mm Figure 8 coil was represented by magnetic dipoles positioned above the hand knob (Figure 1) and the stimulator output was set to 1 A/μs. The coil orientation was defined relative to the direction of the central sulcus such that the electric field induced was in anterior to posterior direction (Figure 1(c)). Then, three additional coil orientations were tested by rotating in steps of 45 degrees and reversed orientations were simulated by changing the sign of the current through the coil.\n\nTo investigate the TMS-induced cellular effects, we quantified the magnitude of the electric field |E→| and the orthogonal component of the electric field E⊥=E→⋅n→ to the gray matter surface, where n→ is the normal vector for the boundary surface element. The orthogonal component was expected to contribute to TMS-induced brain activation by the theoretical cortical column cosine model of TMS efficacy (C3-model)10,40.\n\nWe adapted existing multicompartmental models of layer 5 and 3 pyramidal neurons (L5/L3 PNs) from cat visual cortex41 using the NEURON simulation software42. The electrical properties were unchanged from the original models. Briefly, a high density of fast, inactivating voltage-dependent Na+ channels were present in the axon hillock and axon initial segment, and a low density of these channels was present in the soma and dendrites. Slow Ca2+-dependent K+ channels and high threshold Ca2+ channels were located in the soma and dendrites. Except for the dendrites, fast K+ channels were present. L5/L3 PNs were combined virtually with the head model and modified to accommodate the irregular geometry of the cortex28,29,43–46, as shown in Figure 2. The dendritic trees were lengthened or shortened by re-scaling them according to the local dimensions of the cortex such that dendrites reached layer 1 and the orientation was perpendicular to the cortical surface45,47. 
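The two field measures quantified above, the magnitude |E→| and the orthogonal component E⊥ = E→·n→, reduce to simple per-element vector arithmetic; a minimal sketch (ours, not the authors' MATLAB pipeline):

```python
from math import sqrt

def field_measures(fields, normals):
    """Per-element field magnitude |E| and orthogonal component E_perp = E . n.

    fields  : list of 3-component electric-field vectors, one per GM surface element.
    normals : list of matching outward unit normals of those elements.
    """
    magnitude = [sqrt(ex * ex + ey * ey + ez * ez) for ex, ey, ez in fields]
    e_perp = [ex * nx + ey * ny + ez * nz
              for (ex, ey, ez), (nx, ny, nz) in zip(fields, normals)]
    return magnitude, e_perp
```

Per the C3-model cited in the text, it is E⊥ rather than |E→| that is expected to predict excitability, which is why the two kinds of maps in the Results can differ so markedly.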
Since the morphology of the dendritic trees was not symmetric and it might influence the neuronal activation, L5/L3 PNs had randomly rotated dendritic trees at different locations. The axons of L5 PNs were defined to curve beyond the boundary between GM and WM toward the corpus callosum. The axons of L3 PNs were defined to terminate in layer 5/6 within the GM. To reduce superfluous computations, we preselected a region of interest (ROI) of 50 × 50 × 50 mm3 around the hand knob and then placed L5/L3 PNs in each triangular element comprising the gray matter surface. Altogether, a total of 10,888 L5 PNs and 10,888 L3 PNs was constructed. This process was implemented in MATLAB (MathWorks, Natick, MA, USA).\n\n(a) The distributions of somata of L5/L3 PNs are marked as colored dots (red: L5; blue: L3). (b) A schematic view of the distribution of the L5/L3 PNs is shown along the cortex folding (gray colored area); note the bending of L5 PN axons when crossing the boundary between gray matter and white matter.\n\nThe membrane potentials induced by stimulation were approximated by adding an external current source Iext to the cable model2,21,22,24,25:\n\n\n\nwhere ra is the axial resistance per unit length and E1 represents the component of the electric field that is parallel to each compartment of the PNs. The derivative of the electric field along each compartment was calculated at each center point by l→T(∇E)l→, where ∇E contains the components of the electric field gradient tensor that are estimated by computing the difference of electric fields at neighboring points displaced by ±1 mm along each axis48.\n\nWe calculated a monophasic pulse that induced a fluctuating magnetic field through an RLC circuit as detailed in 49,\n\n\n\nwhere w = 30 kHz is the angular frequency and τ = 0.08 ms is the decay time. The Iext at each compartment was then multiplied by the normalized time derivative of the monophasic pulse. 
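The field-to-compartment coupling just described can be sketched as follows. The helper names are ours; the ±1 mm displacement matches the finite-difference step given in the text, while the damped-sinusoid pulse shape (with ω = 30 kHz and τ = 0.08 ms as stated) is an assumption standing in for the exact RLC waveform of the cited stimulator model:

```python
from math import exp, sin

def directional_field_derivative(E, p, l, h=1e-3):
    """Central-difference estimate of l^T (grad E) l at compartment centre p.

    E : callable mapping a 3-point (metres) to the 3-vector electric field,
        a stand-in for the interpolated FEM solution.
    l : unit vector along the compartment; h : +-1 mm displacement.
    """
    total = 0.0
    for i in range(3):
        plus, minus = list(p), list(p)
        plus[i] += h
        minus[i] -= h
        dE_i = [(a - b) / (2.0 * h) for a, b in zip(E(plus), E(minus))]  # dE/dx_i
        total += l[i] * sum(lj * d for lj, d in zip(l, dE_i))
    return total

def monophasic_pulse(t, w=30e3, tau=0.08e-3):
    """Assumed damped-sinusoid time course of the monophasic pulse."""
    return exp(-t / tau) * sin(w * t)
```

The external current at each compartment would then scale with this directional derivative and follow the normalized time course of the pulse.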
Finally, we obtained the spatial and temporal membrane potential dynamics. They were used to measure the excitation thresholds, the stimulation site and action potential propagation.\n\n\nResults\n\nFigure 3 depicts the magnitude of the electric fields (|E→|, top row) and the orthogonal component of the electric fields (E⊥, bottom row) for different coil orientations. All calculations were performed for a rate of change of the coil current of 1 A/μs. Electric fields had higher magnitudes in the precentral and postcentral gyrus and focused on the top of the gyri, regardless of coil orientations. We observed only slight changes in the field strengths depending on coil orientation. In contrast, the orthogonal component of electric fields (E⊥) showed different spatial patterns compared to the electric field magnitude. High strengths of E⊥ were found on the walls of the gyri and strongly depended on coil orientation. Furthermore, while the spatial extent of |E→| was the same for the standard orientation and the +180 degree orientation, the sign of E⊥ in the +180 degree orientation was reversed due to the reversed sign of the induced electric fields. Interestingly, the maximum value of |E→| depended on coil orientation; it was lowest in the standard coil orientation and highest at +135 degrees. However, the maximum values of E⊥ were highest for the standard coil orientation and lowest at +90 degrees.\n\nThe spatial patterns of magnitude of electric fields (|E→|, top row) and its component orthogonal to the gray matter surface (E⊥, bottom row) are visualized; the color scale is adapted for better visualization. The black arrows indicate different coil orientations, and the maximum values of |E→| and E⊥ (measured in V/m) are given in the bottom left of each figure.\n\nTo assess the neuronal activations as a function of coil orientation, we determined the excitation threshold required to cause action potentials of L5/L3 PNs. 
For each coil orientation, we kept increasing the stimulator output until a neuron generated an action potential or we reached a maximum rate of change of the current defined as 171 A/μs. Our focus is on the excitability for a stimulation intensity corresponding to 67 A/μs, as this value corresponds to the average motor threshold for the Magstim 200 stimulator connected to the coil modeled18,29,48,50. The excitability of L5/L3 PNs was predicted either by the direct estimation of the electric field (E⊥ map in Figure 4(b)) or by simulating the induced depolarization and firing of the detailed neuronal models (threshold maps in Figure 4(c,d)). The color of the threshold maps represents the stimulator output necessary to activate the corresponding cell, while the E⊥ maps show the estimated excitable area. The blue colored areas indicate an excitability in the opposite direction, because the head model was linear with respect to the electric field. As shown in Figure 4(a), we virtually divided the precentral and postcentral gyrus to better visualize the excitability in the walls of the gyrus.\n\n(a) The red dot on the border between GM and CSF indicates the location of the center of the coil. The base orientation is shown as red arrows. The inset represents the region of interest in which PNs were distributed. The blue arrows indicate the opposite coil orientation (+180°). The precentral and postcentral gyri were virtually divided for visualization purposes. The spatial patterns of E⊥ (b) and threshold maps of L5 (c) and L3 (d) PNs depended on coil orientation as shown. The black and red colored areas in the threshold maps (c-d) indicate the excitable areas under the stimulator output corresponding to the average motor threshold (67 A/μs). The directions of coil orientations in the 2nd row are the opposite directions of the 1st row (in the threshold maps in (c and d)) simulated by changing the sign of the current through the coil. 
Note how the excitable areas strongly depend on the coil orientation.\n\nIn L5 PNs, we observed that the predicted excitability depended on coil orientation for both the E⊥ and threshold maps (Figure 4(b,c)). For the base orientation and +45 degrees, a high excitability was predominantly observed in the wall of the precentral gyrus. In contrast, for orientations +90 degrees to +225 degrees we observed high excitability in the wall of the postcentral gyrus. Comparing these threshold maps to the E⊥ maps, we see that the threshold maps show activated L5 PNs in some additional smaller areas with comparatively small E⊥ values. From +90 to +180 degrees, the excited regions were quite well matched to the results from E⊥. Furthermore, in the standard direction, the spatial extents of L5 PNs that were activated for stimulation intensities corresponding to the motor threshold seemed to enlarge with increasing coil rotation, while for the opposite direction of the coil current highly excitable areas shrank with increasing coil rotations.\n\nOverall, the excitability in L3 PNs showed behavior comparable to that of L5 PNs (Figure 4(c,d)), but notable differences in threshold maps between L5 and L3 PNs were as follows: while L5 PNs in the top of the gyri were never activated, L3 PNs were excited in the top and also the wall of gyri. The excitable areas of L3 PNs caused by the +90 and +135 degree stimulations were relatively focused on the upper parts of the wall of the postcentral gyrus, while L5 PNs placed in the deeper parts of the sulcus were activated. Furthermore, in L3 PNs, the excitability in the precentral gyrus and the postcentral gyrus was comparable and a bigger area was affected than for L5 PNs. The discrepancies between L5 and L3 PNs confirmed that the morphology and placement of neuronal models have an important impact beyond the position relative to the coil.\n\nThe percentage of excited neurons for a stimulation intensity at the motor threshold is shown in Figure 5. 
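The threshold determination used here, raising the stimulator output until a spike occurs or the 171 A/μs cap is reached, amounts to a simple search. A sketch, where `evokes_spike` is a hypothetical predicate wrapping one full compartmental simulation at a given output:

```python
def excitation_threshold(evokes_spike, step=1.0, max_output=171.0):
    """Smallest stimulator output (A/us, on a grid of `step`) that makes
    the neuron fire, or None if the cap is reached without a spike."""
    output = step
    while output <= max_output:
        if evokes_spike(output):
            return output
        output += step
    return None
```

Once a firing output is found, a bisection between it and the last sub-threshold value would refine the estimate with far fewer simulation runs.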
Consistent with the results for the threshold maps in Figure 4, we found that the percentage of excited L5 PNs increased from base to +135 degrees and then decreased gradually thereafter. Similarly, the maximum percentage of excited L3 PNs was observed when the coil was rotated at +135 degrees, and the activation of L3 PNs was about two times higher across all coil orientations. The overall percentage of excited neurons under the maximum stimulator output (171 A/μs) had a similar pattern; in the standard orientation, 29% and 38% of L5 and L3 PNs were activated, respectively, and these numbers increased to 33% and 44% when the coil was oriented at +135 degrees.\n\nThe majority of action potentials were initiated at the axon initial segment, and others at the axon near the boundary between GM and WM for L5 PNs and at the middle and terminal points for L3 PNs (Figure 6). In the base orientation, threshold stimulation elicited action potentials first at the initial segment for 90% of both the L5 and the L3 PNs. This fraction increased with increasing coil rotations, up to 97% at +135 degrees in L5 PNs and up to 95% at +90 degrees in L3 PNs. Example plots of membrane potential dynamics induced by the threshold stimulus evoking action potentials are shown in Figure 7. We observe the propagation of the action potentials from the axon initial segment to the more distal parts of the neurons. In both L5 and L3 PNs, following the action potential at the initial segment, the soma was activated as it is closest to the initial segment. The terminal points of the axons were activated last as they are most distal from the axon initial segment. Since the axon of a L5 PN is quite long compared to that of a L3 PN, the arrival of the action potential at the terminal point was substantially delayed. 
Similarly, dendrites of L5 PNs showed delayed activation, while in the L3 PNs dendrites were occasionally activated early.\n\nSites include the axon initial segment (iseg) and the boundary between gray matter and white matter (boundary). Additionally, the terminal part (terminal) and middle point of the axon (middle) for L3 PNs were considered. Most action potentials are first evoked at the axon initial segment of L5 PNs (96.31±1.72%) and L3 PNs (92.76±2.42%). The remaining L5 PNs show action potential initiation at the axon near the boundary between gray matter and white matter. Only a few L5 PNs (0.49±0.14%) initiate action potentials simultaneously at the axon initial segment and the GM-WM boundary. For L3 PNs, middle (1.05±0.78%) and terminal points (2.08±1.45%) of axons are also activated occasionally.\n\nThe simulated recordings were performed from dendrites (dend), soma and parts of the axons, as indicated by the red colored cones. (a) In a L5 PN, the membrane potentials are recorded at the axon initial segment (iseg), the location where the axon crosses the boundary between gray matter and white matter (boundary), and the bending and terminal points. (b) Additionally, the middle points of the axons of L3 PNs are recorded.\n\nThe morphologically reconstructed PNs had asymmetric dendritic trees that might affect the neuronal responses. We studied the impact of the dendritic trees on the threshold maps and on the percentage of excited neurons for a stimulation intensity at the motor threshold by rotating the trees in steps of 45 degrees around the axis defined by their apical dendrite for a fixed coil orientation at +180 degrees. In the threshold maps for L5 PNs shown in Figure 8, the highest variations of the thresholds caused by these rotations were observed at the boundary between the top of the postcentral gyrus and the sulcus. However, the coil dependency in the threshold maps did not change and thus the L5 PNs toward the postcentral gyrus were activated consistently. 
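Rotating a dendritic tree in 45-degree steps about the axis defined by its apical dendrite amounts to applying a rotation matrix to the compartment coordinates. A minimal sketch using Rodrigues' rotation formula, assuming compartments stored as an N×3 coordinate array; this is an illustration, not the authors' NEURON implementation.

```python
import numpy as np

def rotate_about_axis(points, axis, angle_deg):
    """Rotate an (N x 3) array of compartment coordinates about a unit
    axis through the origin, using Rodrigues' rotation formula."""
    k = np.asarray(axis, dtype=float)
    k = k / np.linalg.norm(k)
    th = np.radians(angle_deg)
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])            # cross-product matrix of k
    R = np.eye(3) + np.sin(th) * K + (1.0 - np.cos(th)) * (K @ K)
    return points @ R.T

# A dendritic point off the apical (here: z) axis, rotated by one 90-degree step:
p0 = np.array([[1.0, 0.0, 0.0]])
p90 = rotate_about_axis(p0, [0.0, 0.0, 1.0], 90.0)
```

Applying the same function at 45, 90, …, 315 degrees reproduces the eight fixed tree orientations compared against the randomly rotated baseline.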
Compared to the percentage of excited L5 PNs with randomly rotated dendritic trees (16.12% as shown in Figure 5), the fixed orientation of the dendritic trees induced changes in the fraction of activated neurons of up to 2%. The threshold variations in the L3 PNs were hardly noticeable compared to those of the L5 PNs. The percentage of excited L3 PNs was 26.4% with randomly rotated dendritic trees, and fixing the orientation of the dendritic trees changed this by at most 0.3%. Thus, we found that the morphology of the dendritic tree of the L5 PN model had a bigger impact than that of the L3 PN model, possibly because it is less rotationally symmetric. Overall, however, rotations of the dendritic trees around the axis defined by the apical dendrite did not alter the spatial extent of activated regions much.\n\n(a) The threshold maps according to the different orientations of dendritic trees and (b) their mean and standard deviation are shown. The map of standard deviations in (b) indicates that the precise orientation of the dendritic tree can alter activation thresholds in a noticeable fashion in certain situations. Overall, however, the activated areas in (a) do not change much compared to the threshold map for randomly rotated dendritic trees in (c).\n\n\nDiscussion\n\nThe detailed mechanisms through which TMS activates cortical cells and cortical circuits are still not fully understood. In this study, we used multi-scale computational modeling to predict cortical activation as a function of coil orientation in two different ways. First, we simply considered the strength of the component of the TMS-induced electric field that is orthogonal to the gray matter surface, as suggested by the C3-model10,40. Second, we developed a detailed computational modeling approach that combined an anatomically realistic head model with complex multi-compartment neuronal models of L5/L3 PNs and quantified their stimulation thresholds. 
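The first prediction method — the component of the induced field orthogonal to the gray-matter surface — reduces to a signed dot product per surface element. A minimal sketch with hypothetical field and normal arrays; the sign flip under current reversal is why, in the E⊥ maps, negative values mark excitability for the opposite coil orientation.

```python
import numpy as np

def orthogonal_component(E, normals):
    """Signed component of the induced electric field E (N x 3, V/m)
    along the outward surface normal of each gray-matter surface element
    (N x 3). Reversing the coil current flips the sign of E and hence
    of this quantity."""
    n = normals / np.linalg.norm(normals, axis=1, keepdims=True)  # unit normals
    return np.einsum('ij,ij->i', E, n)          # row-wise dot product

# Two hypothetical surface elements: one with a purely normal field,
# one with a purely tangential field.
E = np.array([[0.0, 0.0, 50.0], [30.0, 0.0, 0.0]])
normals = np.array([[0.0, 0.0, 1.0], [0.0, 1.0, 0.0]])
e_perp = orthogonal_component(E, normals)
```

The tangential field contributes nothing to E⊥, which is why the magnitude map and the E⊥ map can disagree on where the cortex is most excitable.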
A major finding was the characterization of the induced electric fields and the thresholds of L5/L3 PNs as a function of coil orientation, as shown in Figure 4. In addition, we observed threshold variations according to the different morphologies of the PNs.\n\nThe magnitude of the electric field was considered first, because the strength of the electric field is commonly used to extrapolate neuronal activation6,7,51. We found that the magnitude of the TMS-induced field is focused on the top of the gyrus, which is in agreement with previous modeling studies10,12–14. However, the electric field magnitude showed little dependency on coil orientation10. We then investigated the directional electric field, especially the component orthogonal to the cortical surface, as this has been suggested to contribute most to the TMS-induced activation according to the C3-model10,40,52. We found a strong dependence of the orthogonal field component on coil orientation, as shown in Figure 3. In contrast to the electric field magnitude, the highest field values were found in the sulcal walls and never on the apex (or crown) of the gyrus.\n\nWhile the analysis of TMS-induced electric fields has been widely addressed in the past, the incorporation of multi-compartment neuronal models has hardly been investigated. To permit a more detailed understanding of the biophysical mechanisms of TMS, a few previous modeling studies employed detailed neuronal models and calculated the membrane potential dynamics generated by the electromagnetic field. However, these attempts had various limitations. First, early studies did not incorporate anatomical information on large-scale brain morphology23–25. Rather than constructing a finite element head model, these studies applied a uniform electric field to the neuronal model. 
Even though such investigations achieved reasonable results regarding neuronal activation, they did not consider the effects of the complex folding patterns of the cortex and the effects of tissue borders, such as the borders between GM on the one hand and CSF or WM on the other hand. However, the importance of anatomically realistic head models has been shown convincingly12–14. Furthermore, the impact of detailed brain anatomy has been considered for various methods of brain stimulation, and substantial differences have been demonstrated by improving the anatomical information in the head model53–56. Salvador et al. (2011) investigated neuronal responses using a simplified head model of a cortical sulcus with several types of neurons and found changes of the stimulation threshold depending on the pulse waveform and the coil orientation29. However, the head model used had an approximated geometry restricted to the motor cortex, and the full complex geometry, such as the hook-shaped hand knob, was not considered. Furthermore, the modeled coil orientations were limited to anterior-to-posterior and its reversal, owing to the simplified geometry of the head model. Most recently, Goodwin and Butson (2015) proposed a more realistic approach that integrates an anatomically realistic head model derived from MR images with detailed neuronal models28. They considered the excitability of neurons as a function of coil orientation. However, in contrast to our results, their excitability maps hardly showed a systematic dependence on coil orientation and activation thresholds were lower in the gyral crown. We speculate that this might be caused by the different morphology of the PNs or the different way in which they calculated the external currents to simulate neuronal responses. Also, we considered two types of L3/L5 PNs spread over a wider region of the cortex. 
Finally, we also established the site of action potential initiation and found that most PNs are activated at the axon initial segment, while action potential initiation at other parts of the neuron is comparatively rare.\n\nThe threshold maps we calculated demonstrate acute sensitivity to coil orientation, but different spatial extents were observed according to the different morphologies of the PNs. In L5 PNs, activation thresholds were low in the sulcal walls, matching predictions based on the orthogonal component of the electric field. The excitation in the sulcal cortical surface was consistent with the well-established columnar neuronal orientation and functional organization of the cortex and with functional imaging studies40,57. Furthermore, the excitable areas of the L5 PNs were wider in the postcentral gyrus compared to the precentral gyrus. This might be due to the severe curvature in the precentral gyrus or the thinner cortex on the postcentral side, such that the neurons were smaller than those in the precentral gyrus. This suggests that cortical geometry is an additional important factor for TMS, next to neuron placement and coil orientation30. The L3 PNs had a different morphology with shorter axons than the L5 PNs, such that they were located completely within the gray matter. Similar to the L5 PNs, the coil orientation had a significant impact on the responses of the L3 PNs, but the precise patterns of the threshold maps differed between the L3 and L5 PNs. As hypothesized by Day et al., the proximity to the coil played an important role here58, as L3 PNs in the gyral crown and the upper parts of the sulcal wall were predominantly activated.\n\nThe neural response to TMS is composed of a direct (D) and several indirect (I) waves. 
The D-wave is thought to be produced by direct activation of L5 PNs, as we have modeled it here, and is followed by I-waves that are thought to be generated by synaptic excitation and/or re-excitation of L5 pyramidal cells with longer latencies58,59, presumably via pyramidal cells in superficial cortical layers L2 and L3. According to Di Lazzaro et al. (2004), at the lowest stimulation intensity to evoke neuronal responses, an I-wave is elicited, and with increasing stimulation intensity, the earlier, small D-wave is produced30. This indicates that thresholds for eliciting I-waves are lower than those for eliciting D-waves60. In this work, we explored the excitation thresholds of both L3 and L5 PNs and found that the percentage of excited neurons for a stimulation intensity at the motor threshold was about two times higher for L3 PNs than for L5 PNs. Furthermore, the activation of the L3 PNs was consistently higher than that of the L5 PNs for the full range of stimulation intensities. The lower stimulation thresholds of the L3 PNs are consistent with the lower stimulation intensities required to produce I-waves49,61, and the higher stimulation intensities required to produce D-waves.\n\nThe highest percentage of excited PNs was observed at +135 degrees, and the excited regions were focused on the postcentral gyrus. The base coil orientation induced the lowest percentage of activated PNs, but, as shown in Figure 4, the precentral gyrus was targeted better than with other coil orientations. Thus, to activate the precentral gyrus, the base coil orientation is recommended by our model, congruent with previous research62,63, and +135 degrees should be ideal to stimulate the postcentral gyrus.\n\nThe question of the precise initiation site of action potentials is a central issue for understanding the physiological effects of TMS. According to our study, the dominant initiation site leading to action potentials is the axon initial segment in both L5 and L3 PNs. 
This is consistent with previous studies arguing that action potentials giving rise to the D-wave might be initiated close to the soma and/or axon initial segment64,65. In addition, L5 PN action potentials were also initiated at the axon where it crosses the boundary between gray matter and white matter, where tissue conductivity changes abruptly48. However, it will be important to verify these results with more realistic axon models.\n\nThere are several limitations to our modeling study. A first limitation is related to neuronal properties and morphologies. The L3 and L5 PNs were taken from cat visual cortex due to the lack of models for most human cortical cell types. However, despite the uncertainty with regard to the properties of the PNs, we produced results matching both experimental studies and other computational studies23,25 that incorporated the same models of PNs.\n\nWhile we observed the stimulation of neural activity in the superficial cortex near the coil, TMS might also affect deep brain areas that cannot be stimulated directly. This can be explained on the basis of the propagation of action potentials along white matter fiber tracts. Recent studies modeled tractography-based white matter fiber tracts using diffusion tensor imaging (DTI) and observed activation of axon bundles9,11,12. In contrast to the fiber tracts in previous models, we modeled the axons of L5 PNs as straight lines inside the WM. Due to this simplification, the axons inside the WM occasionally passed through protruding parts of the GM. Although these intersections could affect neuronal responses such as action potential initiation or thresholds, most PNs initiated action potentials at the axon initial segment, and the coil orientation dependency observed in the threshold maps was consistent with observations in previous studies. 
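Determining the initiation site, as analysed above, amounts to finding for each cell the recorded compartment whose membrane potential crosses spike threshold first, counting simultaneous crossings as joint initiation. A minimal sketch on hypothetical traces; the labels and the 0 mV crossing criterion are illustrative, not the recording setup of the study.

```python
import numpy as np

def initiation_site(vm, labels, v_thresh=0.0):
    """Label(s) of the recorded compartment(s) whose membrane potential
    crosses v_thresh first. vm is a (compartments x timesteps) array of
    traces in mV; ties return several labels, mirroring the small fraction
    of cells reported above with simultaneous initiation."""
    crossed = vm >= v_thresh
    n_steps = vm.shape[1]                       # sentinel: never crossed
    first = np.where(crossed.any(axis=1), crossed.argmax(axis=1), n_steps)
    t0 = first.min()
    if t0 == n_steps:
        return []                               # no action potential at all
    return [labels[i] for i in np.flatnonzero(first == t0)]

# Hypothetical traces: the initial segment crosses 0 mV one step before the soma.
vm = np.array([[-65.0, -60.0, 10.0, 30.0],      # 'iseg'
               [-65.0, -64.0, -20.0, 25.0]])    # 'soma'
site = initiation_site(vm, ['iseg', 'soma'])
```

Tallying the returned labels over all simulated cells yields initiation-site statistics of the kind shown in Figure 6.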
Further developments in tractography may improve detailed neuronal models and may lead to a deeper understanding of the TMS-induced brain activity propagations from the superficial cortex to distant brain regions.\n\nAnother limitation is that the reconstructed PNs were synaptically isolated. For the L5 cells this means that we basically studied the generation mechanism of D-waves. The activation of L3 cells could be seen as a proxy for the generation of I-waves. A logical next step is to synaptically couple L3 and L5 cells as done in a recent model for the generation of D and I-waves using L5 PNs that were contacted by a pool of excitatory and inhibitory layer 2 and 3 neurons49. This model successfully reproduced various characteristics of I-waves and highlighted the importance of the complex morphology of the L5 PNs for the generation of I-waves. An improvement would be to use the anatomical information on the activation of PNs as modeled here, as we found a clear difference in the threshold maps for L5 and L3 PNs based on their morphology. Therefore, in future work, we plan to incorporate synaptic connections between L3 and L5 PNs. We hope that this will bring us one step closer to a detailed understanding of the mechanisms through which TMS activates cortical circuits, paving the way for more precise and effective application of TMS based on individual brain morphology in clinical and basic research settings.\n\n\nData availability\n\nF1000Research: Dataset 1. Figure 3 input data, 10.5256/f1000research.9277.d13206966\n\nF1000Research: Dataset 2. Figure 4 input data, 10.5256/f1000research.9277.d13207067\n\nF1000Research: Dataset 3. Figure 5 raw data, 10.5256/f1000research.9277.d13207168\n\nF1000Research: Dataset 4. Figure 6 raw data, 10.5256/f1000research.9277.d13207269",
"appendix": "Author contributions\n\n\n\nJT, HS and NS designed the study. HS and NS implemented the model. HS, SCJ and JT are analyzed the data. HS and JT wrote the manuscript and all authors were involved in the revision of the manuscript and have agreed to the final content.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis work was supported by grant (NRF-2016R1A2B4010897) from the National Research Foundation of Korea. The Lab of JT is supported by a gift from the Quandt foundation.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nReferences\n\nBarker AT, Freeston IL, Jalinous R, et al.: Clinical evaluation of conduction time measurements in central motor pathways using magnetic stimulation of human brain. Lancet. 1986; 1(8493): 1325–1326. PubMed Abstract | Publisher Full Text\n\nWagner T, Valero-Cabre A, Pascual-Leone A: Noninvasive Human Brain Stimulation. Annu Rev Biomed Eng. 2007; 9: 527–565. PubMed Abstract | Publisher Full Text\n\nBarker AT, Jalinous R, Freeston IL: Non-invasive magnetic stimulation of human motor cortex. Lancet. 1985; 1(8437): 1106–1107. PubMed Abstract | Publisher Full Text\n\nDi Lazzaro V, Oliviero A, Profice P, et al.: The diagnostic value of motor evoked potentials. Clin Neurophysiol. 1999; 110(7): 1297–1307. PubMed Abstract | Publisher Full Text\n\nSchulz R, Gerloff C, Hummel FC: Non-invasive brain stimulation in neurological diseases. Neuropharmacology. 2013; 64: 579–587. PubMed Abstract | Publisher Full Text\n\nRadman T, Ramos RL, Brumberg JC, et al.: Role of cortical cell type and morphology in subthreshold and suprathreshold uniform electric field stimulation in vitro. Brain Stimul. 2009; 2(4): 215–28, 228.e1–3. PubMed Abstract | Publisher Full Text | Free Full Text\n\nIlmoniemi RJ, Ruohonen J, Karhu J: Transcranial magnetic stimulation--a new tool for functional imaging of the brain. 
Crit Rev Biomed Eng. 1999; 27(3–5): 241–284. PubMed Abstract\n\nDe Lucia M, Parker GJ, Embleton K, et al.: Diffusion tensor MRI-based estimation of the influence of brain tissue anisotropy on the effects of transcranial magnetic stimulation. Neuroimage. 2007; 36(4): 1159–1170. PubMed Abstract | Publisher Full Text\n\nGeeter ND, Dupré L, Crevecoeur G: Modeling transcranial magnetic stimulation from the induced electric fields to the membrane potentials along tractography-based white matter fiber tracts. J Neural Eng. 2016; 13(2): 026028. PubMed Abstract | Publisher Full Text\n\nJanssen AM, Oostendorp TF, Stegeman DF: The coil orientation dependency of the electric field induced by TMS for M1 and other brain areas. J Neuroeng Rehabil. 2015; 12: 47. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNummenmaa A, McNab JA, Savadjiev P, et al.: Targeting of white matter tracts with transcranial magnetic stimulation. Brain Stimul. 2014; 7(1): 80–84. PubMed Abstract | Publisher Full Text | Free Full Text\n\nOpitz A, Windhoff M, Heidemann RM, et al.: How the brain tissue shapes the electric field induced by transcranial magnetic stimulation. Neuroimage. 2011; 58(3): 849–859. PubMed Abstract | Publisher Full Text\n\nThielscher A, Opitz A, Windhoff M: Impact of the gyral geometry on the electric field induced by transcranial magnetic stimulation. Neuroimage. 2011; 54(1): 234–243. PubMed Abstract | Publisher Full Text\n\nWindhoff M, Opitz A, Thielscher A: Electric field calculations in brain stimulation based on finite elements: An optimized processing pipeline for the generation and usage of accurate individual head models. Hum Brain Mapp. 2013; 34(4): 923–935. PubMed Abstract | Publisher Full Text\n\nKim D, Jeong J, Jeong S, et al.: Validation of Computational Studies for Electrical Brain Stimulation With Phantom Head Experiments. Brain Stimul. 2015; 8(5): 914–925. 
PubMed Abstract | Publisher Full Text\n\nLaakso I, Hirata A, Ugawa Y: Effects of coil orientation on the electric field induced by TMS over the hand motor area. Phys Med Biol. 2014; 59(1): 203–18. PubMed Abstract | Publisher Full Text\n\nOpitz A, Legon W, Rowlands A, et al.: Physiological observations validate finite element models for estimating subject-specific electric field distributions induced by transcranial magnetic stimulation of the human motor cortex. Neuroimage. 2013; 81: 253–264. PubMed Abstract | Publisher Full Text\n\nThielscher A, Kammer T: Linking physics with physiology in TMS: a sphere field model to determine the cortical stimulation site in TMS. Neuroimage. 2002; 17(3): 1117–1130. PubMed Abstract | Publisher Full Text\n\nLenz M, Platschek S, Priesemann V, et al.: Repetitive magnetic stimulation induces plasticity of excitatory postsynapses on proximal dendrites of cultured mouse CA1 pyramidal neurons. Brain Struct Funct. 2015; 220(6): 3323–3337. PubMed Abstract | Publisher Full Text\n\nLenz M, Galanis C, Müller-Dahlhaus F, et al.: Repetitive magnetic stimulation induces plasticity of inhibitory synapses. Nat Commun. 2016; 7: 10020. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNagarajan SS, Durand DM, Warman EN: Effects of induced electric fields on finite neuronal structures: a simulation study. IEEE Trans Biomed Eng. 1993; 40(11): 1175–1188. PubMed Abstract | Publisher Full Text\n\nRoth BJ, Basser PJ: A model of the stimulation of a nerve fiber by electromagnetic induction. IEEE Trans Biomed Eng. 1990; 37(6): 588–597. PubMed Abstract | Publisher Full Text\n\nKamitani Y, Bhalodia VM, Kubota Y, et al.: A model of magnetic stimulation of neocortical neurons. Neurocomputing. 2011; 38–40: 697–703. Publisher Full Text\n\nPashut T, Wolfus S, Friedman A, et al.: Mechanisms of Magnetic Stimulation of Central Nervous System Neurons. PLoS Comput Biol. 2011; 7(3): e1002022. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nWu T, Fan J, Lee KS, et al.: Cortical neuron activation induced by electromagnetic stimulation: a quantitative analysis via modelling and simulation. J Comput Neurosci. 2016; 40(1): 51–64. PubMed Abstract | Publisher Full Text\n\nChan CY, Nicholson C: Modulation by applied electric fields of Purkinje and stellate cell activity in the isolated turtle cerebellum. J Physiol. 1986; 371(1): 89–114. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRahman A, Reato D, Arlotti M, et al.: Cellular effects of acute direct current stimulation: somatic and synaptic terminal effects. J Physiol. 2013; 591(10): 2563–2578. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGoodwin BD, Butson CR: Subject-Specific Multiscale Modeling to Investigate Effects of Transcranial Magnetic Stimulation. Neuromodulation. 2015; 18(8): 694–704. PubMed Abstract | Publisher Full Text\n\nSalvador R, Silva S, Basser PJ, et al.: Determining which mechanisms lead to activation in the motor cortex: a modeling study of transcranial magnetic stimulation using realistic stimulus waveforms and sulcal geometry. Clin Neurophysiol. 2011; 122(4): 748–758. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDi Lazzaro V, Oliviero A, Pilato F, et al.: Comparison of descending volleys evoked by transcranial and epidural motor cortex stimulation in a conscious patient with bulbar pain. Clin Neurophysiol. 2004; 115(4): 834–838. PubMed Abstract | Publisher Full Text\n\nGorman AL: Differential patterns of activation of the pyramidal system elicited by surface anodal and cathodal cortical stimulation. J Neurophysiol. 1966; 29(4): 547–564. PubMed Abstract\n\nSilva S, Basser PJ, Miranda PC: Elucidating the mechanisms and loci of neuronal excitation by transcranial magnetic stimulation using a finite element model of a cortical sulcus. Clin Neurophysiol. 2008; 119(10): 2405–2413. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nThielscher A, Antunes A, Saturnino GB: Field modeling for transcranial magnetic stimulation: A useful tool to understand the physiological effects of TMS? Conf Proc IEEE Eng Med Biol Soc. 2015; 2015: 222–225. PubMed Abstract | Publisher Full Text\n\nDale AM, Fischl B, Sereno MI: Cortical surface-based analysis. I. Segmentation and surface reconstruction. NeuroImage. 1999; 9(2): 179–194. PubMed Abstract | Publisher Full Text\n\nFischl B, Sereno MI, Dale AM: Cortical surface-based analysis. II: inflation, flattening, and a surface-based coordinate system. Neuroimage. 1999; 9(2): 195–207. PubMed Abstract | Publisher Full Text\n\nSmith SM, Jenkinson M, Woolrich WM, et al.: Advances in functional and structural MR image analysis and implementation as FSL. Neuroimage. 2004; 23(Suppl 1): S208–S219. PubMed Abstract | Publisher Full Text\n\nAttene M: A lightweight approach to repairing digitized polygon meshes. Vis Comput. 2010; 26(11): 1393–1406. Publisher Full Text\n\nGeuzaine C, Remacle JF: Gmsh: A 3-D finite element mesh generator with built-in pre- and post-processing facilities. Int J Numer Methods Eng. 2009; 79(11): 1309–1331. Publisher Full Text\n\nRenard Y, Pommier J: GetFEM++ Homepage — GetFEM++. getfem, A Generic Finite Element Library in C. Documentation. 2010. Reference Source\n\nFox PT, Narayana S, Tandon N, et al.: Column-based model of electric field excitation of cerebral cortex. Hum Brain Mapp. 2004; 22(1): 1–14. PubMed Abstract | Publisher Full Text\n\nMainen ZF, Sejnowski TJ: Influence of dendritic structure on firing pattern in model neocortical neurons. Nature. 1996; 382(6589): 363–366. PubMed Abstract | Publisher Full Text\n\nHines ML, Carnevale NT: The NEURON simulation environment. Neural Comput. 1997; 9(6): 1179–1209. PubMed Abstract | Publisher Full Text\n\nManola L, Holsheimer J, Veltink P, et al.: Anodal vs cathodal stimulation of motor cortex: a modeling study. Clin Neurophysiol. 
2007; 118(2): 464–474. PubMed Abstract | Publisher Full Text\n\nSeo H, Kim D, Jun SC: Computational Study of Subdural Cortical Stimulation: Effects of Simulating Anisotropic Conductivity on Activation of Cortical Neurons. PLoS One. 2015; 10(6): e0128590. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWongsarnpigoon A, Grill WM: Computer-based model of epidural motor cortex stimulation: effects of electrode position and geometry on activation of cortical neurons. Clin Neurophysiol. 2012; 123(1): 160–172. PubMed Abstract | Publisher Full Text\n\nZwartjes DG, Heida T, Feirabend HK, et al.: Motor cortex stimulation for Parkinson’s disease: a modelling study. J Neural Eng. 2012; 9(5): 056005. PubMed Abstract | Publisher Full Text\n\nDeFelipe J, Alonso-Nanclares L, Arellano JI: Microstructure of the neocortex: comparative aspects. J Neurocytol. 2002; 31(3–5): 299–316. PubMed Abstract | Publisher Full Text\n\nMiranda PC, Correia L, Salvador R: Tissue heterogeneity as a mechanism for localized neural stimulation by applied electric fields. Phys Med Biol. 2007; 52(18): 5603–17. PubMed Abstract | Publisher Full Text\n\nRusu CV, Murakami M, Ziemann U, et al.: A model of TMS-induced I-waves in motor cortex. Brain Stimul. 2014; 7(3): 401–414. PubMed Abstract | Publisher Full Text\n\nKammer T, Beck S, Thielscher A, et al.: Motor thresholds in humans: a transcranial magnetic stimulation study comparing different pulse waveforms, current directions and stimulator types. Clin Neurophysiol. 2001; 112(2): 250–258. PubMed Abstract | Publisher Full Text\n\nBikson M, Rahman A, Datta A, et al.: High-resolution modeling assisted design of customized and individualized transcranial direct current stimulation protocols. Neuromodulation. 2012; 15(4): 306–315. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKrieg TD, Salinas FS, Narayana S, et al.: Computational and experimental analysis of TMS-induced electric field vectors critical to neuronal activation. J Neural Eng. 
2015; 12(4): 046014. PubMed Abstract | Publisher Full Text\n\nGrant PF, Lowery MM: Electric field distribution in a finite-volume head model of deep brain stimulation. Med Eng Phys. 2009; 31(9): 1095–1103. PubMed Abstract | Publisher Full Text\n\nKim D, Seo H, Kim HI, et al.: Computational study on subdural cortical stimulation - the influence of the head geometry, anisotropic conductivity, and electrode configuration. PLoS One. 2014; 9(9): e108028. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNummenmaa A, Stenroos M, Ilmoniemi RJ, et al.: Comparison of spherical and realistically shaped boundary element head models for transcranial magnetic stimulation navigation. Clin Neurophysiol. 2013; 124(10): 1995–2007. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSeo H, Kim D, Jun SC: Effect of Anatomically Realistic Full-Head Model on Activation of Cortical Neurons in Subdural Cortical Stimulation-A Computational Study. Sci Rep. 2016; 6: 27353. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKrieg TD, Salinas FS, Narayana S, et al.: PET-based confirmation of orientation sensitivity of TMS-induced cortical activation in humans. Brain Stimul. 2013; 6(6): 898–904. PubMed Abstract | Publisher Full Text\n\nDay BL, Dressler D, Maertens de Noordhout A, et al.: Electric and magnetic stimulation of human motor cortex: surface EMG and single motor unit responses. J Physiol. 1989; 412(1): 449–473. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPatton HD, Amassian VE: Single and multiple-unit analysis of cortical stage of pyramidal tract activation. J Neurophysiol. 1954; 17(4): 345–363. PubMed Abstract\n\nHern JE, Landgren S, Phillips CG, et al.: Selective excitation of corticofugal neurones by surface-anodal stimulation of the baboon’s motor cortex. J Physiol. 1962; 161(1): 73–90. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDi Lazzaro V, Profice P, Ranieri F, et al.: I-wave origin and modulation. Brain Stimul. 
2012; 5(4): 512–525. PubMed Abstract | Publisher Full Text\n\nBrasil-Neto JP, Cohen LG, Panizza M, et al.: Optimal focal transcranial magnetic activation of the human motor cortex: effects of coil orientation, shape of the induced current pulse, and stimulus intensity. J Clin Neurophysiol. 1992; 9(1): 132–136. PubMed Abstract | Publisher Full Text\n\nMills KR, Boniface SJ, Schubert M: Magnetic brain stimulation with a double coil: the importance of coil orientation. Electroencephalogr Clin Neurophysiol. 1992; 85(1): 17–21. PubMed Abstract | Publisher Full Text\n\nBaker SN, Olivier E, Lemon RN: Task-related variation in corticospinal output evoked by transcranial magnetic stimulation in the macaque monkey. J Physiol. 1995; 488(Pt 3): 795–801. PubMed Abstract | Publisher Full Text | Free Full Text\n\nEdgley SA, Eyre JA, Lemon RN, et al.: Excitation of the corticospinal tract by electromagnetic and electrical stimulation of the scalp in the macaque monkey. J Physiol. 1990; 425(1): 301–320. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSeo H, Schaworonkow N, Jun SC, et al.: Dataset 1 in: A Multi-Scale Computational Model of the effects of TMS on Motor Cortex. F1000Research. 2016. Data Source\n\nSeo H, Schaworonkow N, Jun SC, et al.: Dataset 2 in: A Multi-Scale Computational Model of the effects of TMS on Motor Cortex. F1000Research. 2016. Data Source\n\nSeo H, Schaworonkow N, Jun SC, et al.: Dataset 3 in: A Multi-Scale Computational Model of the effects of TMS on Motor Cortex. F1000Research. 2016. Data Source\n\nSeo H, Schaworonkow N, Jun SC, et al.: Dataset 4 in: A Multi-Scale Computational Model of the effects of TMS on Motor Cortex. F1000Research. 2016. Data Source"
}
|
[
{
"id": "15646",
"date": "01 Sep 2016",
"name": "Axel Thielscher",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nSeo and coworkers combine realistic calculations of the electric field that is induced by TMS with multi-compartmental neural models in order to reveal how and at which sites TMS generates neural activity. Only a few prior studies have targeted this topic in detail, and the presented study contributes relevant new insights into the putative stimulation mechanisms of TMS. Clearly, on the long run, multi-scale modeling approaches as presented by the authors should be superior to approaches based only on field calculations in explaining the biophysics of TMS. As such, I am very supportive of the study and the topic in general. In the following, I suggest changes to improve the clarity of the presentation of the results and to better align them with the known electrophysiological findings for magnetic stimulation of the motor cortex. In addition, I suggest extending the section in the discussion that deals with the study limitations. 
Given the novelty of the overall approach, it is relevant to educate the readers on the uncertainties involved in the modeling process, and to point toward putative improvements that could be taken up in future studies.\nComments on the methods and results section (sorted by order of occurrence, not relevance):\nPlease specify the origin of the MR data and – if relevant – the underlying ethics approval (page 2, volume conductor model)\n\nThe A field is not dependent on the head model, only coil geometry (page 2, field calculations)\n\nHow much did the re-scaling of the neural models affect the results (page 3, multi-compartmental neuronal models)? This should be tested for a few selected cases. The question how well neuronal models for other species and brain areas can be transferred to human sensorimotor cortex is highly relevant. In that respect, it would be good to know how sensitive the results are towards changes of the local dimensions of the neurons.\n\nWhy were the axons of the L5 PNs modelled to curve towards the corpus callosum rather than the internal capsule (page 3)? How much could this have influenced the results?\n\nPlease give more details on the method used for placing the neurons and axons (page 3). Was it fully automated?\n\nAdding an external current source to model the TMS field (page 3): How well is this justified for non-cable like structures such as the soma?\n\nAngular frequency of 30 kHz (page 3): This is too high, 3 kHz would be more realistic. Maybe just a typo?\n\nFigure 1C: It would be helpful to indicate the region in which the neurons were placed in this plot.\n\nFigure 3: It is surprising to see that only slight changes in the field strength in dependence on coil orientation were observed. This is in contrast to prior results. The peak field strength might stay relatively constant, but the spatial pattern of the gyral crowns seeing high field strengths should clearly change. 
Maybe this is only a scaling issue in the plot, or related to the question on where the field was read out (on the CSF-GM surface or within GM?). Please clarify, and compare to prior studies (in particular those using the same FEM method – i.e. simnibs)\n\nFigure 3 and definition of coil orientations in the text: It would be easier to define the orientations according to the current direction induced in the brain. It is somewhat counterintuitive that an arrow pointing anteriorly (i.e., in the base orientation) represents a current flowing in the opposite direction, as stated on page 2. Physiological experiments show that M1 has the lowest threshold for monophasic pulses when currents are flowing from posterior to anterior. The results in Figure 4 indicate that this should be the case for the base orientation. This, however, is in contrast to the statement in the methods that current direction was anterior-posterior in that case. Please clarify. A schematic plot of all coil (or better current) directions would be helpful to prevent confusion. In a related manner, please also add information on the meaning of blue and red colors for the lower half of Figure 3 (normal component). Do red colors indicate inflowing or outflowing currents?\n\nFigure 4: It would be good to be able to see the fields and excitabilities on the medial part of the hand knob. For example, for 270° (L5 PNs), the hand knob seems mostly non-excited. However, this impression might simply result from not being able to see the medial part.\n\nFigures 4 and 5, and related results: I strongly suggest differentiating between M1 and S1, and presenting separate results for both, with a focus on M1. From Figure 5, it seems that coil orientation 135° activates the largest number of neurons. However, Figure 4 indicates that those neurons are mostly in S1.
For this reason, it would make more sense to present separate subfigures for M1 and S1 in Figure 5.\n\nFigure 6, and related results: It seems that axon bends of L5 PNs were never the site where stimulation occurred most easily. This is in contrast to prior hypotheses, and also (simplified) modeling studies. Can you indicate how much higher the excitation thresholds of the bends are? What is the reason that the bends were not excited? By which angle did the axons bend, and which bend radius was modelled?\n\nFigure 8: It would be good to replace the results using the coil orientation which stimulates the hand knob most strongly, rather than S1.\n\nComments on the Discussion section:\n\nThis is one of only a few studies that combine field calculations with morphologically realistic models of neurons. It shows very encouraging results. Given its pioneering character, however, I would appreciate if the uncertainties involved in the modeling process could be more systematically discussed. I would like to emphasize that this suggestion is not meant as a critique of the quality of the study. The match between modeling and experimental results is promising, but should not be misunderstood as a proof that the model is “correct”.\nIn addition to limitations in modeling the neurons, uncertainties regarding the tissue conductivities (resulting in uncertainties regarding the field distribution) should be mentioned. The neuron models were taken from cat visual cortex. Even in humans, the histology of visual and motor cortex differs substantially. While the visual cortex has a thick layer 4, the motor cortex is dominated by layer 3 and 5 (including the large “Betz” cells). The study results demonstrate a strong impact of neural features on the results, raising the question of how well a transfer between species and areas is possible. Here, the size of the neurons was rescaled to fit the target cortical area.
I am curious how much is known about other differences, such as differences in axon and soma diameters, or channel densities, that might systematically impact the results, but were not taken into account. In a related manner, the study of Salvador et al. (2011) hints towards putative additional structures that might be affected, such as short- and intermediate connections in WM (their a1 and a2 association fibres), and terminals of incoming fibre projections. It would be worth mentioning them as well.\n\nI am curious to understand which features of the L3 PNs resulted in their better excitability to TMS. One could intuitively assume that the larger and far-projecting L5 PNs might be the better targets. Discussing this might also help to better reveal the important neural features that dominate the cortical excitability to TMS.",
"responses": [
{
"c_id": "2460",
"date": "17 Feb 2017",
"name": "Hyeon Seo",
"role": "Author Response",
"response": "Thank you very much for your valuable comments, which have helped us greatly to improve the manuscript. We reply to your comments point by point below. Please specify the origin of the MR data and – if relevant – the underlying ethics approval (page 2, volume conductor model) ➔ We have used the SimNIBS pipeline to construct the head model and to calculate electric field distributions using the example dataset provided by SimNIBS. We rephrased the corresponding sentences as follows: “(At the beginning of Methods section) The volume conductor head model was used to simulate the stimulus-induced electric fields; it was based on the SimNIBS v1.1 software pipeline14, 33.” “(Volume conductor model in Method section) To calculate the precise electric field, a volume conductor head model for TMS that reflected T1-weighted and T2-weighted magnetic resonance (MR) images was constructed using example dataset provided by SimNIBS v1.1 under the ethical approval14” The A field is not dependent on the head model, only coil geometry (page 2, field calculations) ➔ We rephrased the sentence as follows: “The primary electric field was directly determined by the coil geometry and the secondary electric field caused by charge accumulations at tissue interfaces.” How much did the re-scaling of the neural models affect the results (page 3, multi-compartmental neuronal models)? This should be tested for a few selected cases. The question how well neuronal models for other species and brain areas can be transferred to human sensorimotor cortex is highly relevant. In that respect, it would be good to know how sensitive the results are towards changes of the local dimensions of the neurons. ➔ We re-scale the dendritic trees while keeping axon and soma identical and observe the spatial distributions of thresholds (Figure 8(b)). 
We found only slight changes in threshold maps and the percentage of excited PNs by reducing the scale of dendritic trees, while the impact of rotation of dendritic trees was relatively bigger. However, in the presence of morphological changes of dendritic trees, the walls of the precentral gyrus were consistently targeted and the spatial extent of activated regions did not change much. We rephrased the corresponding sentences as follows: “(Results) In addition, the impact of scaling of PNs was investigated by reducing the dimension of the dendrite trees by 10%, 20% or 30%, as shown in Figure 8(b). For this, we simply scaled the length of all dendritic compartments while keeping their diameters identical46. We observed consistently activated sites in the threshold maps and only slight changes of the percentage of excited L5 PNs of up to 0.3%. Thus, we found that the rotations of the dendritic trees had a bigger impact on PN excitability than that of scaling the dendritic trees. Overall, morphological changes in dendritic trees did not alter the spatial extent of activated regions much.” Why were the axons of the L5 PNs modelled to curve towards the corpus callosum rather than the internal capsule (page 3)? How much could this have influenced the results? ➔ The corpus callosum was a typo. We tilted the triangular elements comprising the gray matter defined in the ROI toward the internal capsule (Supplementary Figure S1). We rephrased these points as follows: “(Multi-compartmental neuronal models in Method section) The axons of L5 PNs were defined to curve beyond the boundary between GM and WM in the direction of the internal capsule (Supplementary Figure S1).” Please give more details on the method used for placing the neurons and axons (page 3). Was it fully automated? ➔ It is not fully automated. When the different neuronal models or different head models are applied, we need to change the parameters to adapt it. 
We rephrased corresponding paragraphs to include more details as follows: “L5/L3 PNs were combined virtually with the head model and modified to accommodate the irregular geometry of the cortex28, 29, 44– 47 , as shown in Figure 2. To reduce superfluous computations, we preselected a region of interest (ROI) of 50 × 50 × 50 mm 3 around the hand knob and then placed L5/L3 PNs in each triangular element comprising the gray matter surface. The multi-compartmental models of PNs consisted of a series of compartments connected by resistors. Each compartment was further discretized into segments of equal length to allow for accurate numerical simulation. The center points of each segment were extracted and used to calculate the necessary changes to neuron geometry, as described below. The dendritic trees were lengthened or shortened by re-scaling the lengths of the compartments according to the local dimensions of the cortex such that dendrites reached layer 1 and the orientation was perpendicular to the cortical surface46, 48 . Since the morphology of the dendritic trees was not symmetric and it might influence the neuronal activation, L5/L3 PNs had randomly rotated dendritic trees at different locations. The axons of L5 PNs were defined to curve beyond the boundary between GM and WM in the direction of the internal capsule (Supplementary Figure S1). Further adjustments of L5 PNs geometry were as follows (illustrated in Supplementary Figure S2): each dendritic tree was oriented such that its principal axis would align with the normal vector of its associated triangular surface element. The bending part of the axon was calculated according to the normal vector of the surface element. The arc length of the axon bend was set to 0.6 mm when the z-component of the normal vector was positive and otherwise the arc length was 0.3 mm (compare Supplementary Figure S2). 
Note that when we varied the angle and arc length of the axon bend, it usually did not alter the activation threshold. The axons of L3 PNs were defined to terminate in layer 5/6 within the GM. Altogether, a total of 10,888 L5 PNs and 10,888 L3 PNs were constructed. This process was implemented in MATLAB (MathWorks, Natick, MA, USA).” Adding an external current source to model the TMS field (page 3): How well is this justified for non-cable like structures such as the soma? ➔ All the compartments comprising PNs, including the soma, were modeled using a spatially discretized version of the cable equation, and there are several papers (24. Pashut et al., 2011; 28. Goodwin et al., 2015; 29. Salvador et al., 2011) using the same approach. We are not aware of any problems with adding an external current source in this way. Angular frequency of 30 kHz (page 3): This is too high, 3 kHz would be more realistic. Maybe just a typo? ➔ We corrected it to 30 rad/ms. Figure 1C: It would be helpful to indicate the region in which the neurons were placed in this plot. ➔ We added it. Figure 3: It is surprising to see that only slight changes in the field strength in dependence on coil orientation were observed. This is in contrast to prior results. The peak field strength might stay relatively constant, but the spatial pattern of the gyral crowns seeing high field strengths should clearly change. Maybe this is only a scaling issue in the plot, or related to the question on where the field was read out (on the CSF-GM surface or within GM?). Please clarify, and compare to prior studies (in particular those using the same FEM method – i.e. simnibs) ➔ We changed the color bar scaling, and now the figure reflects coil-orientation changes in the field distribution more clearly. Figure 3 and definition of coil orientations in the text: It would be easier to define the orientations according to the current direction induced in the brain.
It is somewhat counterintuitive that an arrow pointing anteriorly (i.e., in the base orientation) represents a current flowing in the opposite direction, as stated on page 2. Physiological experiments show that M1 has the lowest threshold for monophasic pulses when currents are flowing from posterior to anterior. The results in Figure 4 indicate that this should be the case for the base orientation. This, however, is in contrast to the statement in the methods that current direction was anterior-posterior in that case. Please clarify. A schematic plot of all coil (or better current) directions would be helpful to prevent confusion. In a related manner, please also add information on the meaning of blue and red colors for the lower half of Figure 3 (normal component). Do red colors indicate inflowing or outflowing currents? ➔ Thanks for the suggestion. In the bottom row of Figure 3, red color indicates current directed inwards and blue color indicates current directed outwards. Thus, in the base orientation, the currents were flowing from posterior to anterior. We corrected the corresponding sentence as follows: “(Field calculations) The base coil orientation was defined relative to the direction of the central sulcus such that the induced electric field was in the posterior to anterior direction (the yellow arrow in Figure 1(c)).” “(Legend in Figure 3) For the orthogonal component of electric fields (bottom row), red color indicates current flowing in the direction from superficial to lower cortical layers and blue color represents currents flowing in the opposite direction.” Figure 4: It would be good to be able to see the fields and excitabilities on the medial part of the hand knob. For example, for 270° (L5 PNs), the hand knob seems mostly non-excited. However, this impression might simply result from not being able to see the medial part. ➔ We made Supplementary Figure S3 (an animated gif) to show the medial part of the hand knob.
Figures 4 and 5, and related results: I strongly suggest to differentiate between M1 and S1, and present separate results for both, with a focus on M1. From Figure 5, it seems that coil orientation 135° activates the largest number of neurons. However, Figure 4 indicates that those neurons are mostly in S1. For this reason, it would make more sense to present separate subfigures for M1 and S1 in Figure 5. ➔ We appreciate the reviewer’s suggestion. We separated results for precentral and postcentral gyrus, as shown in Figure 5 (for Figure 4, we already showed spatial distributions of thresholds separately for the precentral and postcentral gyrus). We rephrased corresponding sentences in Results and Discussion sections, as follows: “(Results section) The percentage of excited neurons for a stimulation intensity at the motor threshold is shown in Figure 5. We separately analyzed neurons falling in the precentral and postcentral gyrus. When we focused on PN activations in the precentral gyrus, the highest percentage of excited neurons was observed at +90 degrees rather than the base orientation. For the base orientation PNs in the sulcal wall along the central sulcus were activated. At +90 degrees PNs were activated mostly in the opposite sulcal wall (Supplementary Figure S3). For the postcentral gyrus, the maximum percentage of activated PNs was observed when the coil was oriented at +135 degrees, which is in agreement with the threshold maps in Figure 4.” Figure 6, and related results: It seems that axon bends of L5 PNs were never the site where stimulation occurred most easily. This is in contrast to prior hypotheses, and also (simplified) modeling studies. Can you indicate how much higher the excitation thresholds of the bends are? What is the reason that the bends were not excited? By which angle did the axons bend, and which bend radius was modelled? ➔ According to [28] Goodwin et al. 
(2015), who combined a realistic head model with pyramidal neurons, most neurons initiated action potentials at close to or within the axon hillock just adjacent to the axon initial segment, and [24] Pashut et al. (2011), who investigated the neuronal responses induced by magnetic stimulation, also argued that action potential initiation is in the axon initial segment. Thus, these recent modeling studies reported the axon initial segment as the action potential initiation site. The discrepancy with results from simplified models might arise from the different morphology and electrical properties of the incorporated PNs. As we mentioned in the Discussion section, further study of the impact of neuronal morphology and electrical properties will be helpful. We rephrased the corresponding sentences as follows: “(Discussion section) The question of the precise initiation site of action potentials is a central issue for understanding the physiological effects of TMS. According to our study, the dominant initiation site leading to action potentials is the axon initial segment in both L5 and L3 PNs. In previous modeling studies, the action potentials were initiated at the axons crossing the boundary between gray matter and white matter, where the conductivity changes abruptly49, and at the bending parts of the axon due to charge accumulation29. However, Goodwin et al. (2015) combined a realistic head model with detailed PN models and observed that most action potentials were initiated at or close to the axon hillock just adjacent to the axon initial segment. Pashut et al. (2011) have also argued for action potential initiation at the axon initial segment. Furthermore, this is consistent with previous studies arguing that action potentials giving rise to the D-wave might be initiated close to the soma and/or axon initial segment65, 66.
In our study, L5 PN action potentials were only rarely initiated at the axon where it crosses the boundary between gray matter and white matter, where tissue conductivity changes abruptly49. It will be important to verify these results with more realistic neuron (in particular: axon) models.” ➔ In addition, when we reduced the arc length from 0.6 mm through 0.3 mm to 0 mm, we found that the threshold did not change as long as the bend was smooth, while a bend with 0 mm arc length produced an increased threshold. In this work, the 0.6 mm arc length was chosen to reproduce the results of Wongsarnpigoon et al. (2012), which investigated the impact of epidural cortical stimulation using the same pyramidal neurons. We added text on how the axon bending parts were constructed, as follows: “(Method – Multi-compartment neuronal models) The bending part of the axon was calculated according to the normal vector of the surface element. The arc length of the axon bend was set to 0.6 mm when the z-component of the normal vector was positive and otherwise the arc length was 0.3 mm (compare Supplementary Figure S2). Note that when we varied the angle and arc length of the axon bend, it usually did not alter the activation threshold.” Figure 8: It would be good to replace the results using the coil orientation which stimulates the hand knob most strongly, rather than S1. ➔ We appreciate the reviewer’s suggestion. We changed Figure 8 to the case for the base coil orientation. Comments on the Discussion section: This is one of only a few studies that combine field calculations with morphologically realistic models of neurons. It shows very encouraging results. Given its pioneering character, however, I would appreciate if the uncertainties involved in the modeling process could be more systematically discussed. I would like to emphasize that this suggestion is not meant as a critique of the quality of the study.
The match between modeling and experimental results is promising, but should not be misunderstood as a proof that the model is “correct”. ➔ A critical limitation of the proposed modeling process is that the straight, stretched axons of L5 PNs inside the WM are not realistic. Thus, as we mentioned in the Discussion section, further developments in tractography using DTI may improve detailed neuronal morphology, and we expect that more realistic PNs can be constructed in the future. The other uncertainties of the neuronal models were related to electrical and morphological variation among and within PNs. These issues are addressed below according to the next comments. In addition to limitations in modeling the neurons, uncertainties regarding the tissue conductivities (resulting in uncertainties regarding the field distribution) should be mentioned. The neuron models were taken from cat visual cortex. Even in humans, the histology of visual and motor cortex differs substantially. While the visual cortex has a thick layer 4, the motor cortex is dominated by layer 3 and 5 (including the large “Betz” cells). The study results demonstrate a strong impact of neural features on the results, raising the question of how well a transfer between species and areas is possible. Here, the size of the neurons was rescaled to fit the target cortical area. I am curious how much is known about other differences, such as differences in axon and soma diameters, or channel densities, that might systematically impact the results, but were not taken into account. In a related manner, the study of Salvador et al. (2011) hints towards putative additional structures that might be affected, such as short- and intermediate connections in WM (their a1 and a2 association fibres), and terminals of incoming fibre projections. It would be worth mentioning them as well. ➔ We agree that neural features have a strong impact on the results.
In this work, we only varied the morphology of PNs (L3 and L5 PNs). However, further variation in morphological and electrical properties should be considered. Thus, we rephrased the corresponding sentences as follows: “There are several limitations in our modeling study. A first limitation is that we have assumed isotropic conductivity, as is common in computational studies. Opitz et al. (2011) revealed that anisotropy might create hot spots in the WM with increased field strength that might affect neural excitation12, and Seo et al. (2015) reported that anisotropy affected L5 PNs significantly while it had only minor impact on L3 PNs45. Thus anisotropic conductivity might have significant effects on L5 PN axons running through the WM. In this work, the L3 and L5 PNs were taken from cat visual cortex due to the lack of models for most human cortical cell types. Thus, while we lengthened PNs to fit the cortex, the uncertainties regarding the morphology of neurons were not fully studied. Wu et al. (2016) incorporated a multitude of PNs with various stimuli25 and Salvador et al. (2010) constructed various types of neural structures including pyramidal neurons, interneurons, and association fibers29; they found that the excitability can be shaped by field orientation, pulse wave form, and diameter of neurons. In addition, changes in the electrical properties, such as membrane properties and ion channels, had the largest influence on neuronal excitability25. However, despite the uncertainty with regard to properties of PNs, we produced results matching both experimental studies and other computational studies23, 25 that incorporated the same models of PNs.” I am curious to understand which features of the L3 PNs resulted in their better excitability to TMS. One could intuitively assume that the larger and far-projecting L5 PNs might be the better targets.
Discussing this might also help to better reveal the important neural features that dominate the cortical excitability to TMS. ➔ Mainen and Sejnowski (1996) [41] tested the influence of dendritic structure by constructing neurons that share a common distribution of ion channels and differ only in their dendritic geometry (Fig. 1). From that study, we used the Fig. 1(c) morphology for L3 PNs and the Fig. 1(d) morphology for L5 PNs. Their intracellular thresholds to evoke action potentials are 0.2 nA for L5 PNs and 0.1 nA for L3 PNs. From this, we can see that L3 PNs had a lower threshold, i.e. better excitability, compared to L5 PNs. In addition, better excitability to invasive cortical stimulation was also observed in previous modeling studies (ref [45] and [56]). It is plausible therefore that the morphology of L3 PNs might contribute to their better excitability to TMS. We rephrased the corresponding sentences as follows: “In this work, we explored the excitation thresholds of both L3 and L5 PNs and found as another major result that the percentage of excited neurons for a stimulation intensity at the motor threshold was about two times higher for L3 PNs than for L5 PNs. Furthermore, the activation of the L3 PNs was consistently higher than that of the L5 PNs for the full range of stimulation intensities. Mainen and Sejnowski (1996) compared the dendritic structure of L3 and L5 PNs with a common distribution of ion channels and found that a smaller intracellular current injection was necessary to activate L3 PNs compared to L5 PNs42. Thus, the morphology of L3 PNs might result in their higher excitability in response to TMS. In addition, the lower stimulation thresholds of the L3 PNs are consistent with lower stimulation intensities required to produce I-waves50, 62, and the higher stimulation intensities required to produce D-waves.”"
}
]
},
{
"id": "18318",
"date": "07 Dec 2016",
"name": "Socrates Dokos",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nPlease provide a more detailed description in the Methods of how you obtained the primary and secondary electric fields. For instance, you state that the former is directly determined from the coil and head geometries, whilst the latter is calculated using FEM. Why this distinction? Cannot FEM with appropriate boundary conditions be used for a single/combined electric field calculation?\n\nPlease clarify more clearly the nature of the magnetic dipoles you used to simulate the TMS coil i.e. how many and how were they distributed on the scalp? These dipoles are shown in the panel of Figure 1b as yellow arrows, but I find the figure very unclear, with minimal description in the text.\n\nCan you speculate on how the assumption of isotropic conductivity in WM would affect your results? Current is more likely to flow along WM fiber tracts than across these, potentially altering your electric fields and activation results.\n\nIn the abstract, replace \"corctial\" with \"cortical\", and your field calculation methods, replace \"in anterior to posterior\" with \"in the anterior to posterior\".",
"responses": [
{
"c_id": "2459",
"date": "17 Feb 2017",
"name": "Hyeon Seo",
"role": "Author Response",
"response": "The contribution of the proposed approach is combining TMS-induced electric field with the pyramidal neurons. All the process to calculate stimulus-induced electric field using the head model was based on the SimNibs software pipeline, and thus we just briefly introduced the procedure related to field calculations. According to the SimNibs pipeline, they used magnetic dipoles to model the TMS coils as it allows to easily determine the magnetic vector potential of a coil (A-field; the primary electric fields). Furthermore, magnetic vector potential is unaffected by the conductivity in the head model and thus it is straight-forward to calculate. The secondary electric field arises from the charge accumulations at conductivity discontinuities and thus was numerically determined using FEM. For clarity, we now wrote that the field calculation followed the SimNibs pipeline and rephrased corresponding sentences as follows: “The TMS-induced electric field was calculated based on the SimNibs v1.1 pipeline. Briefly, the electric field, E→=−∂A→∂t−∇ϕ=−Ep→−Es,→ consisted of primary (Ep→) and secondary (Es→) electric fields. The primary electric field was directly determined by the coil geometry and the secondary electric field caused by charge accumulations at tissue interfaces. Using magnetic dipoles to model the TMS coil, the primary electric field was calculated directly without the volume conductor model and then used as input for the secondary electric field calculation via a finite element method using the GetFEM++ library and MATLAB 33, 39 .” As already responded to the previous comment, we simulated the field following SimNibs software and thus the magnetic dipoles representing the TMS coil was provided in SimNibs. In this paper, the magstim 70 mm figure-8 coil was modelled as two circular disks (radius r = 5 cm) which are divided into 10 rings each (Thielscher and Kammer, 2004.) In previous study, Opitz et al. 
(2011) investigated the effects of anisotropic conductivity in WM in TMS and revealed the anisotropic conductivity create hot spots in the WM. In addition, Seo et al. (2015) investigated the effects of anisotropy on PNs induced by electrical stimulation. Because of this, one could speculate that the anisotropic conductivity may increase activation of L5 PNs. We rephrased these points in the discussion section as follows: “There are several limitations in our modeling study. A first limitation is that we have assumed isotropic conductivity, as is common in computational studies. Opitz et al. (2011) revealed that anisotropy might create hot spots in the WM with increased field strength that might affect neural excitation12, and Seo et al. (2015) reported that anisotropy affected L5 PNs significantly while it had only minor impact on L3 PNs45. Thus anisotropic conductivity might have significant effects on L5 PN axons running through the WM.” Corrected."
}
]
}
] | 1
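The dipole-based primary-field calculation described in the author response above can be sketched numerically. A minimal Python/NumPy illustration, assuming the coil has already been discretised into point magnetic dipoles with known moment rates of change; the function name and inputs are illustrative and are not part of the SimNibs API:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

def primary_e_field(points, dipole_pos, dipole_dmdt):
    """Primary field E_p = -dA/dt summed over the magnetic dipoles that
    approximate the TMS coil windings.

    points      : (N, 3) field evaluation points [m]
    dipole_pos  : (M, 3) dipole positions [m]
    dipole_dmdt : (M, 3) time derivative of each dipole moment [A*m^2/s]
    """
    E = np.zeros_like(points, dtype=float)
    for r0, dmdt in zip(dipole_pos, dipole_dmdt):
        d = points - r0                                # (N, 3) offsets
        dist3 = np.linalg.norm(d, axis=1) ** 3
        # dA/dt for a point dipole: (mu0 / 4*pi) * (dm/dt x d) / |d|^3
        E -= MU0 / (4 * np.pi) * np.cross(dmdt, d) / dist3[:, None]
    return E
```

The secondary field would then be obtained by solving the FEM problem for ∇ϕ with this primary field as the source, as in the GetFEM++/MATLAB pipeline the authors describe.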
|
https://f1000research.com/articles/5-1945
|
https://f1000research.com/articles/5-2124/v1
|
31 Aug 16
|
{
"type": "Research Note",
"title": "Predicting Outcomes of Hormone and Chemotherapy in the Molecular Taxonomy of Breast Cancer International Consortium (METABRIC) Study by Biochemically-inspired Machine Learning",
"authors": [
"Iman Rezaeian",
"Eliseos J. Mucaki",
"Katherina Baranova",
"Huy Q. Pham",
"Dimo Angelov",
"Alioune Ngom",
"Luis Rueda",
"Peter K. Rogan",
"Iman Rezaeian",
"Eliseos J. Mucaki",
"Katherina Baranova",
"Huy Q. Pham",
"Dimo Angelov",
"Alioune Ngom",
"Luis Rueda"
],
"abstract": "Genomic aberrations and gene expression-defined subtypes in the large METABRIC patient cohort have been used to stratify and predict survival. The present study used normalized gene expression signatures of paclitaxel drug response to predict outcome for different survival times in METABRIC patients receiving hormone (HT) and, in some cases, chemotherapy (CT) agents. This machine learning method, which distinguishes sensitivity vs. resistance in breast cancer cell lines and validates predictions in patients, was also used to derive gene signatures of other HT (tamoxifen) and CT agents (methotrexate, epirubicin, doxorubicin, and 5-fluorouracil) used in METABRIC. Paclitaxel gene signatures exhibited the best performance, however the other agents also predicted survival with acceptable accuracies. A support vector machine (SVM) model of paclitaxel response containing the ABCB1, ABCB11, ABCC1, ABCC10, BAD, BBC3, BCL2, BCL2L1, BMF, CYP2C8, CYP3A4, MAP2, MAP4, MAPT, NR1I2, SLCO1B3, TUBB1, TUBB4A, TUBB4B genes was 78.6% accurate in 84 patients treated with both HT and CT (median survival ≥ 4.4 yr). Accuracy was lower (73.4%) in 304 untreated patients. The performance of other machine learning approaches were also evaluated at different survival thresholds. Minimum redundancy maximum relevance feature selection of a paclitaxel-based SVM classifier based on expression of ABCB11, ABCC1, BAD, BBC3 and BCL2L1 was 79% accurate in 53 CT patients. A random forest (RF) classifier produced a gene signature (ABCB11, ABCC1, BAD, BCL2, CYP2C8, CYP3A4, MAP4, MAPT, NR1I2, TUBB1, GBP1, OPRK1) that predicted >3 year survival with 82.4% accuracy in 420 HT patients. A similar RF gene signature showed 79.6% accuracy in 504 patients treated with CT and/or HT. These results suggest that tumor gene expression signatures refined by machine learning techniques can be useful for predicting survival after drug therapies.",
"keywords": [
"Gene expression signatures",
"breast cancer",
"chemotherapy resistance",
"hormone therapy",
"machine learning",
"support vector machine",
"random forest"
],
"content": "Introduction\n\nCurrent pharmacogenetic analysis of chemotherapy makes qualitative decisions about drug efficacy in patients (determination of good, intermediate or poor metabolizer phenotypes) based on variants present in genes involved in the transport, biotransformation, or disposition of a drug. We have applied a supervised ML approach to derive accurate gene signatures, based on the biochemically-guided response to chemotherapies with breast cancer cell lines1, which show variable responses to growth inhibition by paclitaxel and gemcitabine therapies2,3. We analyzed stable4 and linked unstable genes in pathways that determine their disposition. This involved investigating the correspondence between 50% growth inhibitory concentrations (GI50) of paclitaxel and gemcitabine and gene copy number, mutation, and expression first in breast cancer cell lines and then in patients1. Genes encoding direct targets of these drugs, metabolizing enzymes, transporters, and those previously associated with chemo-resistance to paclitaxel (n=31 genes) were then pruned by multiple factor analysis (MFA), which indicated expression of ABCC10, BCL2, BCL2L1, BIRC5, BMF, FGF2, FN1, MAP4, MAPT, NKFB2, SLCO1B3, TLR6, TMEM243, TWIST1, and CSAG2 could predict sensitivity in breast cancer cell lines with 84% accuracy. The cell line-based paclitaxel-gene signature predicted sensitivity in 84% of patients with no or minimal residual disease (n=56; data from 5). The present study derives related gene signatures with ML approaches that predict outcome of hormone- and chemotherapies in the large METABRIC breast cancer cohort6.\n\n\nMethods\n\nSVM learning: Previously, paclitaxel-related response genes were identified from peer-reviewed literature, and their expression and copy number in breast cancer cell lines were analyzed by multiple factor analysis of GI50 values of these lines2 (Figure 1). 
Genes with expression levels related to GI50 were used to derive SVMs by backwards feature selection for paclitaxel, tamoxifen, methotrexate, 5-fluorouracil, epirubicin, and doxorubicin (trained using the function fitcsvm in MATLAB R2014a7 and tested with either leave-one-out or 9-fold cross-validation). These SVMs were then assessed for their ability to predict patient outcomes based on available metadata (see Figure 1 and reference 1). Interactive prediction using normalized expression values as input is available at http://chemotherapy.cytognomix.com.\n\nThe initial set of genes is carefully selected based on knowledge of the drug and the pathways associated with it. A multiple factor analysis of the GI50 values of a training set of breast cancer cell lines and the corresponding expression levels of each gene in the initial set reduces the list of genes. Given the expression levels of each gene in the reduced set for each cell line, the method finds the optimal gene subset and the SVM that minimizes the misclassification rate by cross-validation. The SVM is evaluated on patients by classifying those with shorter survival times as resistant and those with longer survival as sensitive to hormone and/or chemotherapy. The Gaussian kernel SVM requires manual selection of two different parameters, C and sigma; these parameters determine how strictly the SVM learns the training set and, if not selected properly, can lead to overfitting. A grid search evaluates a wide range of combinations of these values by parallelization. The algorithm selects the C and sigma combination that leads to the lowest cross-validation misclassification rate. A backwards feature selection (greedy) algorithm is used, in which each gene in turn is left out to form a reduced gene set and the classification is then reassessed; removals that maintain or lower the misclassification rate are accepted, and the remaining genes are kept in the signature. 
The procedure is repeated until the subset with the lowest misclassification rate is selected as the optimal subset of genes.\n\nRF learning: RF was trained using the WEKA 3.78 data mining tool. This classifier uses multiple random trees for classification, which are combined via a voting scheme to make a decision on the given input gene set. Figure 2 depicts the therapy outcome prediction process for a given patient using an RF consisting of a series of decision trees derived from different subsets of paclitaxel-related genes.\n\nSeveral decision trees (DTs) are built using different subsets of paclitaxel-related genes. The process starts from the root of each tree; if the expression of the gene corresponding to that node is greater than a specific value, the process continues through the right branch, otherwise it continues through the left branch, until it reaches a leaf node; that leaf represents the prediction of the tree for that specific input. The decisions of all trees are considered and the outcome with the largest number of votes is selected as the patient outcome.\n\nAugmented Gene Selection: The most relevant genes (features) for therapy outcome prediction were found using the minimum redundancy and maximum relevance (mRMR) approach9. mRMR is a wrapper that incrementally selects genes by maximizing the average mutual information between gene expression features and classes, while minimizing their redundancies:\n\nmax_S [ (1/|S|) Σ_{fi ∈ S} I(fi, C) − (1/|S|²) Σ_{fi, fj ∈ S} I(fi, fj) ]\n\nwhere fi corresponds to a feature in gene set S, I(fi,C) is the mutual information between fi and class C, and I(fi,fj) is the mutual information between features fi and fj.\n\nFor this experiment, we used a 26-gene signature (genes ABCB1, ABCB11, ABCC1, ABCC10, BAD, BBC3, BCL2, BCL2L1, BMF, CYP2C8, CYP3A4, MAP2, MAP4, MAPT, NR1I2, SLCO1B3, TUBB1, TUBB4A, TUBB4B, FGF2, FN1, GBP1, NFKB2, OPRK1, TLR6, TWIST1) as the base feature set. These genes were selected (in Ref. 
1) based either on their known involvement in paclitaxel metabolism, or evidence that their expression levels and/or copy numbers correlate with paclitaxel GI50 values (Table 3). mRMR and SVM were combined to obtain a subset of genes that can accurately predict patient survival outcome; here, we considered 3, 4 and 5 years as survival thresholds for breast cancer patients (Table 3).\n\n\nResults and discussion\n\nInitial gene sets preceding feature selection: Paclitaxel - ABCB1, ABCB11, ABCC1, ABCC10, BAD, BBC3, BCAP29, BCL2, BCL2L1, BIRC5, BMF, CNGA3, CYP2C8, CYP3A4, FGF2, FN1, GBP1, MAP2, MAP4, MAPT, NFKB2, NR1I2, OPRK1, SLCO1B3, TLR6, TUBB1, TWIST1. Tamoxifen - ABCB1, ABCC2, ALB, C10ORF11, CCNA2, CYP3A4, E2F7, F5, FLAD1, FMO1, IGF1, IGFBP3, IRS2, NCOA2, NR1H4, NR1I2, PIAS4, PPARA, PROC, RXRA, SMARCD3, SULT1B1, SULT1E1, SULT2A1. Methotrexate - ABCB1, ABCC2, ABCG2, CDK18, CDK2, CDK6, CDK8, CENPA, DHFRL1. Epirubicin - ABCB1, CDA, CYP1B1, ERBB3, ERCC1, GSTP1, MTHFR, NOS3, ODC1, PON1, RAD50, SEMA4D, TFDP2. Doxorubicin - ABCB1, ABCC2, ABCD3, AKR1B1, AKR1C1, CBR1, CYBA, FTH1, FTL, GPX1, MT2A, NCF4, RAC2, SLC22A16, TXNRD1. 5-Fluorouracil - ABCB1, ABCC3, CFLAR, IL6, MTHFR, TP53, UCK2.\n\n1 Surviving patients; 2 Analysis included patients in the METABRIC ‘discovery’ dataset only; 3 SVMs tested with 9 fold cross-validation, all others tested with leave-one-out cross-validation; 4 Includes all patients treated with HT,CT, combination CT/HT, either with or without combination radiotherapy; 5 Median time after treatment until death (> 4.4 years) was used to distinguish favorable outcome, ie. 
sensitivity to therapy.\n\n1AUC: Area under the receiver operating characteristic (ROC) curve; both Discovery and Validation patient datasets analyzed\n\n1Predicted treatment responses for individual METABRIC patients using the described ML techniques are provided in Dataset 1.\n\nThe performance of several ML techniques that distinguish paclitaxel sensitivity and resistance in METABRIC patients has been compared using its tumour gene expression datasets. SVMs have generated gene signatures, indicating which genes are important for treatment response in METABRIC patients. These models are more accurate for prediction of outcomes in patients receiving HT and/or CT compared to other patient groups.\n\nSVMs and RF were trained using expression of genes associated with paclitaxel response and mechanism of action, and stable genes in the biological pathways of these targets (Figure 3). SVM models for drugs used to treat these patients were derived by backwards feature selection on patient subsets stratified by treatment or outcome (Table 1). The highest SVM accuracy was found for the paclitaxel signature in patients treated with HT and/or adjuvant chemotherapy (78.6%).\n\nSchematic elements of gene expression changes associated with response to paclitaxel. Red boxes indicate genes with a positive correlation between gene expression or copy number and resistance by multiple factor analysis. Blue demonstrates a negative correlation. Genes outlined in dark grey are those in a previously published paclitaxel SVM model (reproduced from reference 1 with permission).\n\nThe RF classifier was used to predict paclitaxel therapy outcome for patients who underwent CT and/or HT (Table 2). The best performance achieved with RF showed 82.4% overall accuracy using a 3-year survival threshold for distinguishing therapeutic resistance vs. 
sensitivity.\n\nThe best overall accuracy and AUC (sensitivity and specificity) for CT/HT patients using mRMR feature selection for SVM predicting outcome of paclitaxel therapy was obtained for CT patients with 4-year survival. Outcomes for HT patients with 3-year survival were predicted with 84% accuracy; however, the specificity was lower in this group. SVM combined with mRMR further improved accuracy of feature selection and prediction of response to hormone and/or chemotherapy based on survival time, compared with either SVM or RF alone.\n\nWhile not a replication study sensu stricto, the initial paclitaxel gene set used for feature selection was the same as in our previous study1. Predictions for the METABRIC patient cohort, which was independent of the previous validation set5, using either the same (SVM) or different ML methods (RF and SVM with mRMR), exhibited comparable or better accuracies than our previous gene signature1.\n\nThese techniques are powerful tools that can be used to identify genes that may be involved in drug resistance, as well as predict patient survival after treatment. Future efforts to expand these models to other drugs may assist in suggesting preferred treatments for specific patients, with the potential impact of improving efficacy and reducing duration of therapy.\n\n\nData availability\n\nPatient data: The METABRIC datasets are accessible from the European Genome-Phenome Archive (EGA) using the accession number EGAS00000000083 (https://www.ebi.ac.uk/ega/studies/EGAS00000000083). Normalized patient expression data for the discovery (EGAD00010000210) and validation sets (EGAD00010000211) were retrieved with permission from EGA. Corresponding clinical data were obtained from the literature6. 
While treatment records were not individually curated, HT patients were treated with tamoxifen and/or aromatase inhibitors, and CT patients were most commonly treated with cyclophosphamide-methotrexate-fluorouracil (CMF), epirubicin-CMF, or doxorubicin-cyclophosphamide.\n\nF1000Research: Dataset 1. Predicted treatment response for each individual METABRIC patient, 10.5256/f1000research.9417.d13398310",
"appendix": "Author contributions\n\n\n\nPKR, AN and LR designed the methodology and oversaw the project. SVM feature selection with MATLAB was automated by DA. EJM and KB selected the initial gene signatures, and performed processing of the METABRIC data using SVM methods. IR performed the preprocessing of the METABRIC dataset using RF; IR and HQ designed feature selection and classification modules using WEKA. PKR, IR, EJM, AN, and LR wrote the manuscript.\n\n\nCompeting interests\n\n\n\nPKR cofounded Cytognomix. A patent application related to biologically inspired gene signatures is pending. The other authors declare that they have no competing interests.\n\n\nGrant information\n\nAN and LR are funded by NSERC grants RGPIN-2016-05017 and RGPIN-2014-05084 and by the Windsor Essex County Cancer Centre Foundation under a Seeds4Hope grant. PKR has been supported by NSERC [Discovery Grant 371758-2009], Canadian Foundation for Innovation, Canada Research Chairs and Cytognomix Inc.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nReferences\n\nDorman SN, Baranova K, Knoll JH, et al.: Genomic signatures for paclitaxel and gemcitabine resistance in breast cancer derived by machine learning. Mol Oncol. 2016; 10(1): 85–100. PubMed Abstract | Publisher Full Text\n\nDaemen A, Griffith OL, Heiser LM, et al.: Modeling precision treatment of breast cancer. Genome Biol. 2013; 14(10): R110. PubMed Abstract | Publisher Full Text | Free Full Text\n\nShoemaker RH: The NCI60 human tumour cell line anticancer drug screen. Nat Rev Cancer. 2006; 6(10): 813–823. PubMed Abstract | Publisher Full Text\n\nPark NI, Rogan PK, Tarnowski HE, et al.: Structural and genic characterization of stable genomic regions in breast cancer: Relevance to chemotherapy. Mol Oncol. 2012; 6(3): 347–59. 
PubMed Abstract | Publisher Full Text\n\nHatzis C, Pusztai L, Valero V, et al.: A genomic predictor of response and survival following taxane-anthracycline chemotherapy for invasive breast cancer. JAMA. 2011; 305(18): 1873–1881. PubMed Abstract | Publisher Full Text\n\nCurtis C, Shah SP, Chin SF, et al.: The genomic and transcriptomic architecture of 2,000 breast tumours reveals novel subgroups. Nature. 2012; 486(7403): 346–352. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMATLAB and Statistics Toolbox Release 2014a. The MathWorks Inc., Natick, Massachusetts, United States.\n\nHall M, Frank E, Holmes G, et al.: The WEKA data mining software: an update. ACM SIGKDD explorations newsletter. 2009; 11(1): 10–18. Publisher Full Text\n\nDing C, Peng H: Minimum redundancy feature selection from microarray gene expression data. J Bioinform Comput Biol. 2005; 3(2): 185–205. PubMed Abstract | Publisher Full Text\n\nRezaeian I, Mucaki EJ, Baranova K, et al.: Dataset 1 in: Predicting Outcomes of Hormone and Chemotherapy in the Molecular Taxonomy of Breast Cancer International Consortium (METABRIC) Study by Machine Learning. F1000Research. 2016. Data Source"
}
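The Methods of the article above describe a Gaussian-kernel SVM whose C and sigma are chosen by grid search over the cross-validated misclassification rate, followed by greedy backwards feature selection. A minimal scikit-learn sketch of that workflow (an illustrative Python analogue of the authors' MATLAB fitcsvm scripts, not their actual code; the parameter grid, labels, and gene names are assumptions):

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

def fit_rbf_svm(X, y, cv=9):
    """Grid-search C and gamma (gamma plays the role of 1/(2*sigma^2)) and
    keep the combination with the lowest cross-validated misclassification
    rate, as described in the Methods."""
    grid = {"C": 10.0 ** np.arange(-2, 3), "gamma": 10.0 ** np.arange(-3, 2)}
    return GridSearchCV(SVC(kernel="rbf"), grid, cv=cv).fit(X, y)

def backward_select(X, y, genes, cv=5):
    """Greedy backwards feature selection: repeatedly drop a gene whose
    removal keeps or improves cross-validated accuracy."""
    keep = list(range(len(genes)))
    best = cross_val_score(SVC(kernel="rbf"), X[:, keep], y, cv=cv).mean()
    improved = True
    while improved and len(keep) > 1:
        improved = False
        for i in list(keep):
            trial = [j for j in keep if j != i]
            acc = cross_val_score(SVC(kernel="rbf"), X[:, trial], y, cv=cv).mean()
            if acc >= best:  # gene i is redundant for classification: drop it
                best, keep, improved = acc, trial, True
                break
    return [genes[i] for i in keep], best
```

In the study's setting, patients surviving beyond the chosen threshold would be labelled sensitive and the others resistant, and the genes retained by the selection loop would form the signature.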
|
[
{
"id": "16733",
"date": "30 Sep 2016",
"name": "Elana J. Fertig",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis study develops SVM and RF algorithms built upon previously learned gene signatures of therapeutic response to breast cancer. The algorithms are applied and compared to predict patient survival under different treatment conditions in METABRIC data. The analyses and comparisons are robust and this study provides a useful assessment of biologically-driven classifiers. The three major areas that require improvement before the article is indexed are as follows, and described in further detail below.\nThe methods require further clarification to distinguish differences between this study and the previous study as well as the parameters of the machine learning algorithms. Accuracy in the results must better distinguish results on independent test and training sets. Classifiers must be put in the context of other existing genomics classifiers used in breast cancer and/or previously published in Mammaprint data.\n\nTitle and Abstract\n\nAcceptable\n\nArticle content\n\nMethods\nAbbreviations SVM and RF must be spelled out as Support Vector Machine and Random Forrest on first use in Methods. Writing in SVM learning subsection of Methods requires clarification to distinguish which of these methods were developed in the previous Molecular Oncology publication and which were developed as part of this publications. 
Details about the SVM learning algorithm are included in the caption to Figure 1, but must also be included and completely described in text for the corresponding section of the methods. No equations are provided to describe the role of the parameters C and sigma. It is also unclear whether this greedy search is implemented by the Matlab function fitcsvm or uses custom code developed by the authors.\n\nResults\nNeed to specify whether reported accuracies are computed with leave-one-out cross validation or 9-fold cross validation (described in Methods). Ideally, given the size of METABRIC data they would be calculated on independent training (first 1000 patient samples) and test (last 1000 patient samples) datasets. AUC must be computed separately for discovery and validation sets (Table 2). It is unclear whether the previous validation set described in the sentence “Predictions for the METABRIC patient cohort, which was independent of the previous validation set” refers to a validation set used in this publication or the previous publication. Covariates such as ER/PR or PAM50 subtypes must be included in a table describing the sample cohorts. Accuracy must be computed separately for these co-variates or they must also be included as co-variates in the machine-learning model. Ideally accuracy would be compared to existing breast cancer classifiers (e.g., using code from Marchionni et al., BMC Genomics, 2013) and/or survival curves reported in the literature.\n\nConclusions\nMust be discussed in the context of existing genomics classifiers for breast cancer (e.g., OncotypeDx and/or Mammaprint). Results must be put in context with other predictions on METABRIC data, e.g., outcomes from the DREAM contest.\n\nData\n\nAcceptable",
"responses": [
{
"c_id": "2433",
"date": "27 Jan 2017",
"name": "Peter Rogan",
"role": "Author Response",
"response": "Comment 1: The methods require further clarification to distinguish differences between this study and the previous study as well as the parameters of the machine learning algorithms. Response: The first paragraph of the Methods describes Support Vector Machine learning, which has been greatly expanded upon. Differences in SVM methodology between the two studies are indicated there (i.e. a Gaussian kernel was used instead of a linear kernel). All other feature selection methods described in the manuscript (Random Forest, mRMR) were not used in Dorman et al., 2016. The parameters for machine learning algorithms have been incorporated in the manuscript, and can be found in the footnote section of each data table. Comment 2: Accuracy in the results must better distinguish results on independent test and training sets. Response: The Validation dataset showed a distinct overall expression profile from the Discovery set, possibly due to batch effects, which are well known. We added another experiment to the manuscript by splitting the Discovery set into Training and Test sets. The model was trained using 70% of the data and then tested using the remaining 30% of data as test set. We repeated this procedure 100 times and took the median as the final performance result. The results are presented in Tables 4 and 5 of the manuscript. Comment 3: Classifiers must be put in the context of other existing genomics classifiers used in breast cancer and/or previously published in Mammaprint data. Response: We have added two sentences in the second paragraph of the “Results and Discussion” section which describes the comparison of our gene signature to those from MammaPrint and Oncotype Dx. Pair-wise comparison of these three signatures show that they are nearly independent of one another. Methods Comment 4: Abbreviations SVM and RF must be spelled out as Support Vector Machine and Random Forest on first use in Methods. 
Response: We thank the reviewer for this suggestion. It has been addressed in the Methods section of the manuscript. Comment 5: Writing in the SVM learning subsection of Methods requires clarification to distinguish which of these methods were developed in the previous Molecular Oncology publication and which were developed as part of this publication. Response: This is now clarified within the first paragraph of the Methods section in the manuscript. The SVM classifier was adopted from the previous Molecular Oncology publication, while the feature selection method was developed as part of this publication. Comment 6: Details about the SVM learning algorithm are included in the caption to Figure 1, but must also be included and completely described in text for the corresponding section of the methods. Response: We thank the reviewer for this suggestion. This description of the SVM learning algorithm has been moved from the Figure 1 legend and integrated into the first paragraph of the methods section. Comment 7: No equations are provided to describe the role of the parameters C and sigma. It is also unclear whether this greedy search is implemented by the Matlab function fitcsvm or uses custom code developed by the authors. Response: A brief description of the role of each parameter has been added to the first paragraph of the methods section of the manuscript. Readers are also now directed to a reference (Ben-Hur and Weston, 2010) if more detail is desired. The greedy search, also called sequential backward feature selection, was implemented as a script by our lab in MATLAB. It is not a MATLAB function. This is clarified by changing a few words in the first paragraph of the methods section: “A backwards feature selection (greedy) algorithm was designed and implemented in MATLAB in which…” Moreover, as described above, the SVM classifier was adopted from the previous Molecular Oncology publication (Dorman et al. 
2016), while the feature selection method was developed as part of this publication. Results Comment 8: Need to specify whether reported accuracies are computed with leave-one-out cross validation or 9-fold cross validation (described in Methods). Response: All SVM models described in the manuscript used leave-one-out cross validation except one; this is clearly indicated in Table 1, and is now commented on in the methods. A 9-fold cross-validation was used to build a model using 735 patients who were treated with Chemotherapy and/or Hormone therapy, as leave-one-out cross validation of this many patients took an unreasonably long time to complete (it exceeded 3 weeks on a dedicated Intel i7 processor). Comment 9: Ideally, given the size of METABRIC data they would be calculated on independent training (first 1000 patient samples) and test (last 1000 patient samples) datasets. Response: We obtained new results for both RF and mRMR+SVM models using the Discovery patient set for training and the Validation set for testing; however, the performance of the model was poor. After further investigation, we found that there were large differences between gene expression levels of the 26 model signature genes in the Discovery versus Validation sets (we used the Wilcoxon rank sum test, Kruskal-Wallis test and t-test to evaluate the results – shown in the plotted distributions of gene expression in Supplemental Dataset 2) regardless of patient status (alive or dead). Hence, building any classifier using the Discovery and Validation sets as training and test sets in their current forms will result in poor performance due to this source of heterogeneity. To address this issue, we did carry out another experiment based on data from the Discovery patient dataset alone; using 70% of the data for training and the remaining 30% for testing, the performance of the model was significantly better. 
We speculate that the discrepancy between the expression distributions in the Discovery and Validation sets was the result of batch effects. The results have been added to the manuscript (Tables 4,5). Comment 10: AUC must be computed separately for discovery and validation sets (Table 2). Response: We have added additional performance measures to Tables 1-5, including Area Under Curve (AUC). Comment 11: It is unclear whether the previous validation set described in the sentence “Predictions for the METABRIC patient cohort, which was independent of the previous validation set” refers to a validation set used in this publication or the previous publication. Response: This sentence refers to breast cancer patient data from Hatzis et al. (2013), which was used as a validation set in Dorman et al. (2016), not this publication. We have modified this sentence to clarify the issue. Comment 12: Covariates such as ER/PR or PAM50 subtypes must be included in a table describing the sample cohorts. Accuracy must be computed separately for these co-variates or they must also be included as co-variates in the machine-learning model. Response: Even with subtype as a covariate, it is not possible to perform the analysis the reviewer requested. Certain therapies are definitely more effective in particular subtypes (e.g. etoposide, docetaxel, and cisplatin are preferentially active in basal or claudin-low cell lines, as observed clinically; Heiser et al., 2012). The public METABRIC dataset (or the corresponding publication) does not provide the specific therapies used to treat individual patients. Had it done so, it would have made sense to look at these covariates. Reference: Heiser LM, Sadanandam A, Kuo WL, Benz SC, Goldstein TC, Ng S, Gibb WJ, Wang NJ, Ziyad S, Tong F, et al. (2012). Subtype and pathway specific responses to anticancer compounds in breast cancer. Proc Natl Acad Sci U S A 109:2724-2729. 
Comment 13: Ideally accuracy would be compared to existing breast cancer classifiers (e.g., using code from Marchionni et al., BMC Genomics, 2013) and/or survival curves reported in the literature. Response: The proposed method has been compared against the K-TSP (Marchionni et al., BMC Genomics, 2013) as per the reviewer’s suggestion, and the results are presented in Table 6 of the manuscript. Conclusions Comment 14: Must be discussed in the context of existing genomics classifiers for breast cancer (e.g., OncotypeDx and/or Mammaprint). Response: We have added text to both the second paragraph of the “Results and Discussion” section and to the conclusion of the paper. Comment 15: Results must be put in context with other predictions on METABRIC data, e.g., outcomes from the DREAM contest. Response: An important distinction to note in regard to our methodology is that the predictions are based on the genes known to be associated with the response to specific drugs used to treat breast cancer. In the DREAM contest, the method with the highest METABRIC score (as described in Cheng et al., 2013) was phenotype-based, finding signatures for molecular processes that are dysregulated in METABRIC, rather than responses to the cancer therapies themselves. While this is an interesting prediction method, its results cannot be directly compared with our approach. The gene signatures that we have derived contain components of many different pathways. Reference: Cheng WY, Ou Yang TH, Anastassiou D. Biomolecular events in cancer revealed by attractor metagenes. PLoS Comput Biol. 2013;9(2):e1002920."
}
]
},
{
"id": "16345",
"date": "03 Oct 2016",
"name": "Chun-Wei Tung",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis study proposed prediction methods using SVM and RF classifiers with mRMR selected feature sets from cell line data and demonstrate its prediction ability for outcomes from METABRIC patient cohort. The classifiers with good prediction performance show the usefulness of combining domain knowledge with feature selection techniques. However, some details essential for reproducibility and interpretation are missing. Required information is listed in the following.\nWhat are the values of parameters for SVM and RF classifiers and the methods for parameter selection (by default or other selection methods)?\n\nThe development and evaluation of models for patient data are not clear. Whether the models were trained using partial data from METABRIC or only leave-one-out cross-validation was applied? If cross-validation is the case, then what is the model offered at the online server because there will be more than one models created, and whether the cross-validation is involved in the feature selection process that often leads to an overestimation of the performance. 
For the case of training on partial data, both training and test performance are essential information for evaluating the robustness of the models.\n\nSince some of the datasets are highly imbalanced, the numbers of positives and negatives, as well as sensitivity and specificity, are more important than accuracy for interpreting the results, as a high accuracy with a low AUC could be the result of all-positive/negative predictions on an imbalanced dataset. Listing all of this information along with the accuracy and AUC will help the interpretation of the prediction performances.",
"responses": [
{
"c_id": "2432",
"date": "27 Jan 2017",
"name": "Peter Rogan",
"role": "Author Response",
"response": "Comment 1: What are the values of parameters for the SVM and RF classifiers and the methods for parameter selection (by default or other selection methods)? Response: The parameter values for these classifiers have been added to Tables 1-5. With regard to parameter selection, the first paragraph of the Methods now describes C and Sigma selection as a grid search to find the values with the lowest cross-validation misclassification rate. Similarly for RF, a grid search was used to optimize the maximum number of randomly selected genes for each tree (second paragraph of the Methods section). Comment 2: The development and evaluation of models for patient data are not clear. Were the models trained using partial data from METABRIC, or was only leave-one-out cross-validation applied? If cross-validation is the case, then what is the model offered at the online server, because there will be more than one model created, and was the cross-validation involved in the feature selection process, which often leads to an overestimation of the performance? For the case of training on partial data, both training and test performance are essential information for evaluating the robustness of the models. Response: We obtained new results for both the RF and mRMR+SVM models when using the discovery set for training and the validation set for testing; the performance of the model was poor. After further investigation we found a large variation in the expression of the 26 targeted genes between the discovery and validation sets (please see Supplementary Dataset 2). Hence, building any classifier using the discovery and validation sets as training and test sets in their current forms will result in poor performance, since the training and test sets are vastly different. However, we did carry out another experiment on the discovery set alone, using 70% of the data for training and the remaining 30% to test the performance of the model. 
The results have been added to the manuscript (Tables 4 and 5). Comment 3: Since some of the datasets are highly imbalanced, the numbers of positives and negatives, as well as sensitivity and specificity, are more important than accuracy for interpreting the results, as a high accuracy with a low AUC could be the result of all-positive/negative predictions on an imbalanced dataset. Listing all of this information along with the accuracy and AUC will help the interpretation of the prediction performances. Response: As previously mentioned, we have added more performance measures, including MCC and AUC. They have been added to Tables 1-5 of the manuscript."
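The reviewer's caution about imbalanced datasets can be illustrated with a minimal sketch (the counts and the `metrics` helper are illustrative, not values or code from the manuscript): a classifier that predicts every sample positive on a 90/10 split scores high accuracy while specificity and MCC collapse, which is why reporting sensitivity, specificity and MCC alongside accuracy matters.

```python
import math

def metrics(tp, fn, tn, fp):
    """Confusion-matrix summaries; MCC is defined as 0 when its denominator is 0."""
    total = tp + fn + tn + fp
    accuracy = (tp + tn) / total
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return accuracy, sensitivity, specificity, mcc

# A classifier that predicts "positive" for every sample of a 90/10 split:
acc, sens, spec, mcc = metrics(tp=90, fn=0, tn=0, fp=10)
print(acc, sens, spec, mcc)  # → 0.9 1.0 0.0 0.0
```

Despite 90% accuracy, the zero specificity and zero MCC expose the all-positive predictor, which is the reviewer's point.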
}
]
}
] | 1
|
https://f1000research.com/articles/5-2124
|
https://f1000research.com/articles/5-2516/v1
|
13 Oct 16
|
{
"type": "Research Article",
"title": "The refined biomimetic NeuroDigm GEL™ Model of neuropathic pain in the mature rat",
"authors": [
"Mary R. Hannaman",
"Douglas A. Fitts",
"Rose M. Doss",
"David E. Weinstein",
"Joseph L. Bryant"
"abstract": "Background: Many humans suffering from chronic pain have no clinical evidence of a lesion or disease. They are managed with a morass of drugs and invasive procedures. Opiates usually become less effective over time. In many, their persistent pain occurs after the healing of a soft tissue injury. Current animal models of neuropathic pain typically create direct neural damage with open surgeries using ligatures, neurectomies, chemicals or other forms of deliberate trauma. However, we have observed clinically that after an injury in humans, the naturally occurring process of tissue repair can cause chronic neural pain.\n\nMethods: We demonstrate how the refined biomimetic NeuroDigm GEL™ Model, in the mature male rat, gradually induces neuropathic pain behavior with a nonsurgical percutaneous implant of tissue-derived hydrogel in the musculo-fascial tunnel of the distal tibial nerve. Morphine, celecoxib, gabapentin and duloxetine were each screened in the model three times over 5 months after pain behaviors developed. A pilot study followed in which recombinant human erythropoietin was applied to the GEL neural procedure site.\n\nResults: The GEL Model gradually developed neuropathic pain behavior lasting months. Morphine, initially effective, had less analgesia over time. Celecoxib produced no analgesia, while gabapentin and duloxetine at low doses had profound analgesia at all times tested. The injected erythropoietin markedly decreased bilateral pain behavior that had been present for over 4 months. Histology revealed a site of focal neural remodeling, with neural regeneration, as in human biopsies.\n\nConclusion: The refined NeuroDigm GEL™ Model induces localized neural remodeling resulting in robust neuropathic pain behavior. The analgesic responses in this model reflect known responses of humans with neuropathic pain. 
The targeted recombinant human erythropoietin appears to heal the ectopic focal neural site, as demonstrated by the extinguishing of neuropathic pain behavior present for over 4 months.",
"keywords": [
"animal models",
"neuropathic pain",
"erythropoietin",
"nerve regeneration",
"neuritis",
"tissue repair",
"hydrogel",
"nerve block",
"morphine resistance",
"refinement"
],
"content": "Introduction and background\n\nThe development of chronic neural pain following soft tissue injuries in humans is an uncommon but disabling complication1–5. The persistent pain usually begins gradually, continuing for months to years. The typical initiating causes of the antecedent soft tissue injuries include blunt trauma, strains, surgery, industrial injuries, radiation, fractures, vibration, and repetitive motion6–16. Disuse also contributes to the tissue matrix stiffness, edema and pain17. Despite a common history of trauma, a trauma-specific neural lesion or occult nerve injury is seldom recognized. The existence of an occult neural lesion has been implicated in the initiation and the maintenance of neuropathic pain18. In the absence of an identified neural lesion, many of these patients have been hypothesized to have a peripheral neural “generator” or an “ectopic” site of localized inflammation19–30.\n\nThe perceived absence of a specific neural injury site does not mean these patients lack such a lesion; these lesions may be “clinically invisible” and below our current level of detection31. These patients usually have no clinical evidence of either neural injury or physical abnormality18,32. The persistence of pain behaviors in these individuals argues in support of a local neural activation site. In vivo peripheral nerve imaging techniques33–39 and diagnostics are presently being developed40,41; however, they cannot yet detect abnormalities in small branches of the distal peripheral nerves18, which are the fibers most likely to be affected in soft tissue injuries.\n\nA logical cause for the gradual appearance of chronic pain following soft tissue trauma lies in the predictable changes that occur during the tissue repair process at the affected site. These changes involve the removal of debris, fibrosis, and the regeneration of damaged tissue, including muscle, nerve, vasculature and extracellular matrix. 
The remodeling of tissue may result in nerve compression, with delayed onset of pain42. Trigeminal neuralgia is one example of minimal pressure on a nerve causing severe pain: even micro-compression of the nerve root can produce severe pain43. The timing of the onset of chronic neuropathic pain parallels tissue morphologic events that occur during healing and tissue remodeling of the affected area (Figure S1: Tissue repair comparison chart)44,45. We hypothesize that it is during tissue remodeling that an accumulation of matrix and possibly local edema alter the neural microenvironment46 and contribute to the compression of vulnerable nerve cells, resulting in focal neural injury. These injuries cause atypical matrix forces and then abnormal function47 of peripheral glia and neurons48, resulting in subsequent pain syndromes. To test this biophysical hypothesis we have created a model of a discrete focal lesion in the rat rear limb that recreates clinical findings seen in humans.\n\nDoubts have been raised about whether rodents can represent the human condition in neuropathic pain, because few effective analgesics have been discovered using them1,49. We consider the social behaviors50, tissue healing51,52 and the similar evoked neural pain behaviors that humans share with rodents as supporting the relevance of their use. Two other crucial factors were the rodent’s age, correlated to mature human age, and the clinical relevance of the biologic pathophysiology embodied by the model1,53. Presently, animal models with neuropathic pain behaviors are created using forms of direct surgical nerve trauma or open surgery with neural irritation using chemicals, drugs, cold or heat54,55. The most commonly used of these are the Spinal Nerve Ligation (SNL) model56, the Chronic Constriction Injury (CCI) model57, and the Spared Nerve Injury (SNI) model58. 
These models use ligations, neurectomies or a combination to create pain with sensory and motor debility. While these open surgical models are useful in mimicking direct nerve trauma, they do not reproduce the pathophysiology of the delayed onset of neural pain without debility, as usually happens in many patients with neuropathic pain.\n\nThis biomimetic neural pain model is based on clinical observations of patients with soft tissue injuries followed by persistent pain. Many of them were treated with a specific localized nerve block for regional pain, after a detailed neuroanatomic examination of their involved peripheral nerves (MRH physician practice 1987–2016). The NeuroDigm GEL™ Model uses a known physiological tissue process to create an occult chronic mononeuritis as seen in these patients.\n\n\nMaterials and methods\n\nThe protocol was approved by the Institutional Animal Care and Use Committee of NeuroDigm Corporation (IACUC permit number 1-2014/15) and was in compliance with the guidelines of the 8th edition of the Guide for the Care and Use of Laboratory Animals. All efforts were made to minimize the number of animals used and their pain and suffering. NC3Rs ARRIVE guidelines for reporting on animal research were followed (Supplementary file S2).\n\nThirty-seven Sprague Dawley 9.5-month-old male rats (an outbred strain from the Harlan facility in Houston, Texas) were received, after being raised within their normal social groups. Their initial weights ranged from 440 to 660 grams, with a mean of 545 grams. In this study, the rats’ human-equivalent age is that of a mature adult59–61. The rats had no prior drug exposure. A total of 37 rats were received, with 36 enrolled after baseline testing. Three rats were removed from the study for complications, with 33 finishing the study.\n\nVentilation and housing were in compliance with the guidelines of the 8th edition of the Guide for the Care and Use of Laboratory Animals. 
Each rat was housed singly in clear, open cages in the same room. The room and individual cages had ammonia sensors (Pacific Sentry). No other animals or rodents were housed in the facility. The cages were changed every 2 weeks or sooner. Bedding was 0.25-inch corncob pellets. Food was LabDiet 5V5R with low phytoestrogens, with continuous access. The light-dark cycle was 12 hours, with lights off from 7 PM to 7 AM, except when screening. Maximum lumens at cage level were 20–40; at the time of pain behavior testing the maximum lumens were 85–100. The room had no high-frequency interference detected (Batseeker Ultrasonic Bat Detector), other than that related to the rats on weekly and as-needed testing. Municipal water was used. In each cage, enrichments were 1) a non-plasticized polyvinyl chloride tube 4” in diameter by 6” in length (Bisphenol A free) for shelter and 2) bedding at an increased depth of 0.75 to 1 inch when dry, to encourage burrowing. The facility was in north Texas. All pain behavior testing was performed in the same room in which the rats were housed.\n\nAfter receipt, the rats were acclimated for 15 days with baseline testing (Figure 1). The rats were housed singly to limit fighting and rough play. The rats were randomly assigned to one of three groups: GEL procedure, sham procedure, or control (no procedure). The investigator performing the procedures and behavioral testing was blinded to the rat group assignment. Another experimenter did the random group assignment before the initial procedures. The rats were housed in a separate room during the procedures and handed to the investigator by an assistant. Tail identification was masked prior to performance of the procedures. The investigator did not know the group assignments at any time until the unblinding on post procedure day 149. The locations of the animals on the rack were randomly changed every 10–14 days. 
Isoflurane 2–3% was used for anesthetic induction and during the procedure for approximately 2 minutes. Isoflurane gas was used due to the brief anesthetic time needed, enabling faster recovery than with most injectable anesthetics. Four analgesics, morphine, celecoxib, gabapentin and duloxetine, were screened three times each during the 5-month study. The screening involved testing the plantar hindpaws of the rats with stimuli used to detect mechanical allodynia and mechanical hyperalgesia. Each rat was tested singly on a wire mesh, with manual application of stimuli, for all behavior testing. The behavioral testing was performed between 1 PM and 11 PM. Animal welfare observations of behavior, coat and movement were checked daily. Monitoring for signs of infection and of water and food use was conducted at least three times a week. Weights were monitored every 4 weeks, or every 2 weeks as indicated. The same female investigator performed all procedures and screenings, with no one else in the room. Subcutaneous injections rather than oral gavage were used for analgesics, as they are less stressful. A pilot study, blinded during pain behavior testing, was performed on the effect of a localized application of a human recombinant erythropoietin analog (EPO) injected near the GEL™ neural procedure site on post procedure day 152, followed by behavioral testing on days 153–160.\n\nWe tested the null hypothesis62 that the GEL procedure does not differ from the control group during the 5 months after procedure on the dependent variables of paw withdrawals in response to von Frey fibers, a camel-hair brush, and pinprick. The alternative hypothesis was that, over time, there is a difference between the groups. 
The experiment was designed to discover the smallest biologically important effect, optimizing the number of animals used63.\n\nWe conducted a power analysis based on data from previous rat experiments with the GEL model to detect a difference between the GEL group and control group with an unpaired t-test if the difference was 1 paw withdrawal and the standard deviation was 0.5 paw withdrawals. We concluded that a minimum sample size of eight per group would yield 95% power with a two-tailed Type I error rate of .05. An additional three animals per group were added to compensate for a possible loss of sample size during the 5-month study, and an additional four animals were added to the GEL group for illness over time, technical complications, and a pilot study with local EPO. The final sample sizes were GEL n = 14, control n = 11, and sham n = 8.\n\nPercutaneous injectable procedure for GEL and sham. During isoflurane anesthesia, the percutaneous injection procedures were performed. The hydrogel used in the GEL group was the proprietary biological NeuroDigm GEL™, which is composed of purified biocompatible tissue-derived by-products and peptides of mammalian soft tissue, as found in the perineural tissue milieu after a soft tissue injury. Such injectable implant products are used in human surgeries, dermal procedures, and wound healing, with rare reactions of acute inflammation. Purified biological proteins and hydrogels are normally absorbed over days to weeks by the tissues into which they are implanted, and are rarely antigenic64,65. The hydrogel we used was introduced into the left (ipsilateral) tibial neural tunnel below the popliteal area at mid lower leg, with aseptic technique. First, the skin was pierced with a sterile 19 gauge needle tip; then a sterile, custom tapered, blunted 21 gauge hollow probe entered the skin puncture site to gain access to the tibial nerve tunnel (U.S. patents 7015371, 7388124). 
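The power analysis above (difference of 1 paw withdrawal, SD of 0.5, two-tailed α = .05, 95% power) can be checked numerically. The sketch below is not the authors' code; it assumes SciPy is available and uses the noncentral t distribution to compute the exact power of a two-sided, two-sample t-test, then searches for the smallest per-group n.

```python
from scipy import stats

def power_two_sample_t(n, d, alpha=0.05):
    """Power of a two-sided, two-sample t-test with n per group and effect size d."""
    df = 2 * n - 2
    ncp = d * (n / 2) ** 0.5                  # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    return stats.nct.sf(t_crit, df, ncp) + stats.nct.cdf(-t_crit, df, ncp)

d = 1.0 / 0.5                                  # standardized effect: difference / SD = 2.0
n = 2
while power_two_sample_t(n, d) < 0.95:         # smallest n per group with ≥ 95% power
    n += 1
print(n)  # → 8, matching the minimum of eight per group reported above
```

At n = 7 the power falls just short of 0.95, so eight animals per group is indeed the minimum under these assumptions.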
Since older confined rats normally have less passive knee extension, a distal-to-proximal neural tunnel access was used in this study. The point of the probe’s percutaneous entry was over the Achilles tendon. The probe was then advanced subcutaneously in a caudal direction; it then pierced the fascia between the distal origins of the medial and lateral gastrocnemius muscle and entered the anatomic tunnel posterior to the tibialis posterior muscle and medial to the soleus, where the tibial nerve courses. Upon entering the neural tunnel, the probe was gently advanced, avoiding resistance and nerve contact. In the mid-tibial tunnel of the lower leg, 0.3 cc of the GEL™ (or Ringer’s lactate for shams) was deposited; the probe was then withdrawn and the rat placed in a cage for observation.\n\nProcedures to elicit neuropathic pain behavior. The primary outcome measure was the average number of paw withdrawals to each of five stimuli applied eight times to each plantar mid hindpaw of a rat. Each stimulus was applied first to the contralateral hindpaw, then to the ipsilateral side of the GEL and sham procedure. The time between each stimulus application was usually 2–4 seconds or longer. For each stimulus, the total number of withdrawals of each hindpaw was recorded as a data point.\n\nMeasures of mechanical hypersensitivity were chosen for this analgesic study. These measures are the most commonly used in rodent screening for analgesics and are also commonly used in humans. Further behavioral characterization of this model is planned.\n\nNon-noxious light touch: for static mechanical allodynia, von Frey filaments (Semmes Weinstein Mono-filaments North Coast Medical TouchTest®) exerting confirmed forces of 2 grams, 6 grams and 10 grams were used, with smooth tips. Dynamic mechanical allodynia was tested with a fan sable brush (09004-1002; Dick Blick Art Materials). The stimuli were applied in the order: von Frey 2 g, 6 g, 10 g, then brush. 
Each von Frey stimulus was applied for approximately 1 second, until the fiber bent or the paw was withdrawn. The brush was stroked gently from rear to front of the plantar hindpaw.\n\nNoxious light touch: mechanical hyperalgesia was tested with a custom sharp non-penetrating plastic point calibrated to elicit 2–4 paw withdrawals at baseline. This pinprick stimulus tip was touched to the plantar site until the paw withdrew or the skin visibly indented. Each stimulus lasted about 1 second.\n\nAnalgesic administration. The analgesics were administered by subcutaneous injection (27 g 1.5”) over the dorsal lower back and proximal thighs, with a custom administrator-held restraint device to reduce handling and stress. Morphine sulfate (West-Ward) was mixed with normal saline and administered at a dose of 3 mg/kg 1 hour prior to screening. The vehicle used in mixing the following three drugs was 0.25% methylcellulose (Methocel® A4MPremium LV USP). These three drugs were mixed 24–48 hours prior to use. Celecoxib (Cayman Chemical) was dispensed at a dose of 10 mg/kg 1 hour prior to screening; gabapentin (Cayman Chemical) was dispensed at a dose of 25 mg/kg 2 hours prior to screening; and duloxetine (Cayman Chemical) was mixed (mechanically agitated) and administered at a dose of 10 mg/kg 2 hours prior to screening. The experimenter knew the drugs being screened; the identity of the groups was blinded throughout the experiment. The injected volume of each drug was less than 1.2 cc.\n\nThe original doses chosen for gabapentin and duloxetine produced adverse effects in this study of aged mature rats, interfering with the testing of pain behaviors. Gabapentin at 60 mg/kg caused marked ataxia in all rats, with their hindpaws not staying on the testing screen due to lumbering gait and falls. Duloxetine at 30 mg/kg produced a marked “frozen” hypoactive posture, with increased tone and alertness (no central sedation) during normal handling and testing. 
After duloxetine was given at this dose, paw withdrawals were not elicited in any of the three groups. Due to these adverse effects, lower doses were tested and used, as described above. These lower doses had no observed drug side effects and improved the ability to test paw withdrawals66.\n\nEpoetin alfa (EPO) by Amgen, a recombinant human erythropoietin analog at 2000 units/mL, was diluted 1:3 with normal saline, and a 0.3 cc volume (200 units) was the administered dose in the pilot study. After the main experiment was over, the 14 GEL procedure rats continued in a pilot study for 8 days beginning on day 152, with days 140, 149 and 152 taken as baseline days. Three subgroups were picked randomly, and the experimenter was blinded during screenings for pain behavior. On day 152, under isoflurane as described previously, the \"EPO at site\" group (n = 5) received an injection of 200 units of EPO at the site of the original GEL procedure on the ipsilateral leg. The “EPO SC” group (n = 4) received the same EPO injection subcutaneously at the dorsal low back, and the “No EPO” group (n = 5) received no injection. The original “EPO at site” injection approach was ipsilateral (left) posterior-to-anterior at mid tibia through the bellies of the gastrocnemius muscle, aiming for the tibial nerve tunnel. Pinprick behavior data were collected on days 153, 154, 156, 159, and 160.\n\nTwo of the five “EPO at site” rats had no decrease in paw withdrawals with the original technique of the EPO injection. These two rats then received an adapted lateral-approach injection of 200 units of EPO to improve localization of the perineural infiltrate near the original ipsilateral GEL™ procedure site. This adapted injection was on the ipsilateral side, through the lateral gastrocnemius muscle, targeted to the mid-tibial tunnel at the lower leg.\n\nAt the conclusion of the study, three rats were chosen randomly from each of the three groups: 1.) GEL procedure rats 2.) 
sham procedure rats from the 5/8 that displayed late-onset robust pain behavior, and 3.) controls. The selection from the GEL group contained two rats that were controls in the EPO pilot study, and one that had received the subcutaneous EPO injection, all with no change in pain behavior noted. The animals were anesthetized, and then perfused with Lactated Ringer’s solution (Hospira), followed by perfusion fixation with 4% paraformaldehyde (PCCA Professional Compounding Centers of America) in Phosphate Buffered Saline (PBS) (Electron Microscopy Services). Following fixation, the lower limb on the ipsilateral side was grossly dissected to reveal the gastrocnemius muscle, thus providing a landmark for locating the tibial nerve. Once identified, the distal tibial nerve (below the popliteal area) was dissected free of the surrounding muscle and fascia, and placed into ice-cold 4% paraformaldehyde in PBS for overnight incubation. The following day, the paraformaldehyde solution was replaced with 30% sucrose (IBI Scientific) to cryoprotect the tissue. The cryoprotected samples were embedded in Tissue-Tek OCT (Sakura Finetechnical, Japan) and frozen on dry ice. Cryosections (10 μm) were then prepared and mounted onto SuperFrost Plus slides (Fisher Scientific, Rockford, IL). Sections were then fixed in 10% Neutral Buffered Formalin for 10 minutes, washed for 5 minutes in 1X PBS to remove OCT, and rinsed with tap H2O. Subsequently, sections were stained in Hematoxylin (Fisher Scientific) for 5 minutes and rinsed with tap H2O, differentiated in acid alcohol (1% HCl in 70% EtOH) for 30 seconds and rinsed extensively with tap H2O, blued in 0.2% aqueous ammonia, rinsed with tap H2O, and stained with eosin (Fisher Scientific) for 1 minute. Sections were then dehydrated by sequential submersion in graded 75%, 95%, 100% EtOH for 5 minutes each, and a final submersion in xylene. The slides were air dried before mounting with Permount (Fisher Scientific) and adding a coverslip. 
Sections were viewed and the images captured on a Nikon 80i microscope, outfitted for digital light micrographs.\n\nPain behavior statistical analyses. As described in detail in the results section, the data were inspected for compliance with the assumptions of ANOVA. Two areas of concern were noted, particularly the heterogeneous variances in the pinprick data and the very large number of pairwise comparisons that could be made. The former occurred because (a) GEL animals that developed pain symptoms tended to score a maximum number of withdrawal responses (8) out of the possible number of stimuli presented (8), leading to some cells with very small or zero variance in the GEL group only, and (b) the animals in the sham group were not homogeneous in their response to the sham procedure, and this greatly increased their variance. We proceeded with the ANOVA for pinprick because of the convenience of describing interaction effects and for comparison with the allodynia data. We note that the pinprick variable was the least likely to generate errors of inference because of the very large effects and consequent minuscule p values obtained. Individual sham data are plotted in a separate graph to illustrate the problem there. Type I errors were reduced by testing only planned comparisons among a relatively small number of means and by combining data where appropriate before analysis so that fewer comparisons would be made.\n\nAnalyses of the paw withdrawals in response to von Frey fibers, the brush, or the pinprick on the routine test days were conducted using a mixed model ANOVA with one between-groups factor (eleven controls, fourteen GELs, and eight shams) and two repeated measures factors. 
The first repeated measures factor was the time the data were collected, with an average baseline period of 4 days prior to the procedures and the five 30-day periods, referred to as post procedure Monthly Periods 1 through 5 (P1, P2, P3, P4, P5), following the procedure (Figure 1). For analyses, the data point for each animal in each monthly period was the mean of at least four routine pain behavior testing days during that month. The second repeated measures factor for the von Frey fiber analysis was a composite factor combining the three levels of fiber force (V1, V2, V3) and the two sides for a total of six levels of different forces tested on bilateral hindpaws. Differences owing to fiber force and sidedness were determined by comparing means with planned comparisons. The second repeated measures factor for the brush and pinprick were the bilateral hindpaws. In the global ANOVA, a p value of <.05 was considered significant. Except for the von Frey analysis, planned comparisons were conducted using Fisher's Least Significant Difference test after a global ANOVA was determined to be significant at the .05 level with a two-tailed test (Dataset was used in all analyses).\n\nAnalgesic statistical analyses. Experiments were conducted with four analgesic drugs administered shortly before the usual testing with von Frey fibers, the brush, and the pinprick. The four drugs were each tested three times during the post procedure period from day 28 to day 149. The effects of the analgesic drugs were analyzed on two dependent variables instead of five (allodynia measures averaged together as one variable and the hyperalgesia measure of pinprick as the other). The data were analyzed using a mixed model ANOVA with the three groups as a between-subjects factor and side (left or right) and days (three pairs of pre-drug and analgesic drug days) as repeated measures factors. The effect of the analgesic drug for each pair of days was analyzed using planned comparisons. 
These comparisons used Fisher's Protected Least Significant Difference test if the corresponding F-ratio was significant, or a Bonferroni-protected contrast if the F-ratio was not significant. All tests used a two-tailed significance level of .05.\n\n\nResults\n\nAll rats had recovered from anesthesia within 5 minutes and were walking normally without altered gait. Following recovery from anesthesia, the subjects did not demonstrate observable pain behaviors67 or clinical evidence of tissue injury. Throughout the duration of the study, all the rats were observed to have normal gait and were without visible evidence of inflammation, swelling, weakness, deformities or positional changes on the operated hindpaw at any time. There was no observed evidence of acute nociceptive pain, even after the procedures. Their grooming activities were normal. Among the GEL group, 14/14 rats had markedly increased paw withdrawals to pinprick, von Frey fibers and brush by day 23 post procedure; the paw withdrawals to pinprick became more exaggerated over the remaining months. By post procedure day 60, 5/8 of the sham rats had developed marked paw withdrawals. The most common pain behavior paw withdrawal reaction was a reflexive flinch. Other reactions appearing 1 month after increased pain behaviors included prolonged shaking and/or licking of the affected ipsilateral paw. Similar patterns of paw withdrawal reactions occurred on testing the contralateral side as pain behavior appeared. No chewing of the paws occurred.\n\nResults are presented below with the statistics; also see Supplementary file S3: Basic Anova for a comprehensive description of the pain behavior statistical results without drugs or EPO.\n\nPain behavior analyses. While the hyperalgesia in this mature GEL™ model was robust over time, the allodynia was a minor response. Before attempting a statistical analysis, we plotted the raw routine days of data (without analgesics) of each group for inspection. 
The robust effects of pinprick hyperalgesia were evident in the raw data graph. The small effects observed with the data of each individual allodynia stimulus (each of three von Frey fibers and the brush) indicated that an individual routine-day mean in the GEL group could not reliably be expected to differ from that of the control group. In order to observe more reliable differences, the pain behavior data could be grouped for more samples in two ways.\n\nFirst, different days of testing for each stimulus could be averaged together for an individual variable, such as data for each individual von Frey fiber averaged together over 4 testing days to make a monthly average; this method was used in the allodynia and hyperalgesia line graphs (Figure 3–Figure 5).\n\nSecond, the scores for all four allodynia measurements could be averaged together to make a single summary variable, i.e., averaging data for all three von Frey fibers and the brush into a single number representing allodynia. This method was used for allodynia in the analgesic drug response bar graphs, to compare one day of pre-drug data to one day of post-drug data (Figure 7–Figure 10).\n\nThese methods have different advantages. Plotting each day’s data is useful for determining the precise timing of when effects emerge during the long period of testing. Plotting monthly data is useful for observing the small effects that are apparent between the individual von Frey fibers. Information about effect sizes will be presented for the routine days data of the averaged variable (von Frey plus brush), and formal statistical analysis will be applied to the much smaller number of means in the monthly data for each individual variable.\n\nDays of data pattern of allodynia and hyperalgesia after GEL procedure. The data for all routine testing days (no analgesics given) for the combined allodynia variable (von Frey plus brush, top) and for the hyperalgesia variable (pinprick, bottom) are presented in Figure 2 below. 
The most noticeable effect is the increase of pain behaviors in response to pinprick in the GEL group on both the ipsilateral and contralateral sides. The pinprick hyperalgesia occurred in every rat subjected to the GEL procedure. Although the pinprick responses required about 23 days to develop, the symptoms, and therefore the opportunity to study them, persisted for months and showed no sign of waning by the end of the experiment. Under a null effect, each of the two groups would be expected to have the greater mean about 50% of the time. However, the last day on which the control group had an absolutely greater mean for pinprick was post procedure day 5 on the ipsilateral side and day 23 on the contralateral side.

The averaged allodynia measures on the ipsilateral side were more consistently different on a routine-test-day basis than any of the individual allodynia measures (top: y-axis maximum is 2, of a possible 8). Allodynia effects on the contralateral side are not obvious. Effects related to pinprick hyperalgesia were more robust and persistent than those for the allodynia measures (bottom: y-axis maximum is 8). The shams were not homogeneous: 5/8 showed pinprick hyperalgesia and 3/8 were similar to controls.

Table 1 provides estimates of effect sizes for the GEL™ effect, which tend to increase with time. The standardized effect size was calculated as the difference between the means of the GEL and control groups on a given day divided by the pooled standard deviation of the two groups. These estimates can be used to plan future experiments, depending on which interval after the procedure will be studied. Minimum sample sizes are provided to yield at least 80% power in a two-group, two-sided t-test with a Type I error rate of .05. Larger sample sizes are required to study allodynia than to study hyperalgesia.
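The effect sizes and minimum sample sizes of Table 1 follow from standard formulas. Below is a minimal sketch (not the authors' code) using the normal approximation for a two-sided, two-sample test, with a rough one-subject correction for the small-sample t distribution.

```python
import math

Z_ALPHA = 1.95996  # normal quantile for two-sided alpha = .05
Z_BETA  = 0.84162  # normal quantile for power = .80

def cohens_d(m1, s1, n1, m2, s2, n2):
    """Difference of group means over the pooled standard deviation."""
    pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled

def n_per_group(d, z_a=Z_ALPHA, z_b=Z_BETA):
    """Approximate n per group for 80% power at two-sided alpha .05."""
    n = 2 * ((z_a + z_b) / d) ** 2
    return math.ceil(n) + 1  # +1 roughly corrects for using t rather than z

# An effect of 1.94 SDs (the Table 1 threshold held from day 23 onward)
print(n_per_group(1.94))  # -> 6, matching screening with n = 6 rats
```

Smaller standardized effects drive the required n up quadratically, which is why the weaker allodynia measures demand larger groups than pinprick hyperalgesia.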
More complex designs, such as this one, which include many repeated measurements and multiple groups, will have more error degrees of freedom for the comparisons than a simple t-test and will not require such large sample sizes for the allodynia measures.

Included are the last day on which the control group mean exceeded the GEL group mean, and the inclusive days on which the effect size for the GEL group exceeded the control group by 1.94, 2.83, or 3.50 standard deviations (SDs). ES is the difference between the group means divided by the pooled standard deviation of the two groups. The calculated n is the sample size required to detect a difference between the GEL and control groups of the given size for the hyperalgesia or allodynia variable on a routine-test-day basis in an independent-samples t-test with 80% power and a two-sided alpha of .05. Allodynia was not consistent on the contralateral side. See routine test day means in Figure 2. NA, not applicable.

Using these data, analgesic screening can start on day 23 or later with a group of n = 6 rats using pinprick (mechanical hyperalgesia) on the ipsilateral side. The effect size for hyperalgesia on the ipsilateral side remained above 1.94 standard deviations from day 23 until the end of the main experiment on day 149. Smaller GEL-group effects were observed for the combined allodynia variable (von Frey plus brush) than for pinprick hyperalgesia, but, unlike the individual von Frey and brush variables, the combined variable showed clear and persistent differences between the GEL and control groups on the ipsilateral side on every testing day. This is important for our subsequent experiments with analgesics. The effect of the GEL™ procedure is not constant across time.
Therefore, when testing the effect of an analgesic on a single day with a control value from the same animal, the response must be compared to a control day very close in time to the day the analgesic is given, rather than to the average of all control days. To do this successfully, each control day must show a positive effect of the GEL procedure, and this was not true on every day for the individual von Frey fibers or the brush alone. Consequently, we opted to use the combined von Frey plus brush data for all comparisons between individual analgesic and control days (see the section Results of experiments with analgesic drugs, where statistical analyses of those days are provided).

To provide a formal statistical analysis of the individual allodynia variables, we averaged all days (at least 4) within each month that the animals were tested without analgesics, to remove some of the test-day variability, stabilize the means, and estimate effect sizes. These analyses are presented in the following sections.

Mechanical allodynia: von Frey analysis. The data for the three different fiber forces in each group, over all periods, applied to both the ipsilateral and contralateral sides, are given in Figure 3. The highest-order interaction of the ANOVA was significant (F (50, 750) = 2.21, p < .001). The control group never significantly exceeded its baseline value in any monthly period on either side. Asterisks in Figure 3 mark means of the GEL group that were significantly different from both the GEL group baseline and the mean of the control group during the same period. The shams had no significant allodynia.

Increased pain behavior of allodynia to light touch with von Frey fibers was seen only in the GEL rats, bilaterally: on the left with the 2 g, 6 g and 10 g fibers, and on the right with the 6 g and 10 g fibers. This allodynia decreased after the 3rd month for all affected fibers.
These graphs depict von Frey fiber results for paw withdrawals ipsilateral and contralateral, for all groups, for three fiber forces over 5 monthly post procedure periods (P1–P5) (maximum y-axis is 8). Mean and S.E.M. *p < .05, GEL group greater than both GEL group baseline and control group for the same period. Reduced responding in the GEL group during period 5 may reflect habituation.

Mechanical allodynia: brush analysis. The GEL group showed prolonged dynamic mechanical allodynia to brush stimuli, with increased paw withdrawals, only on the ipsilateral side. This pain behavior peaked by the 3rd month and then waned, returning to near baseline by the 5th month. The shams had similar pain behavior that plateaued by the 4th month and persisted until the 5th month, at the end of the study. This response of the shams, with the onset of mechanical allodynia during the 3rd month that then persisted, was not anticipated (Figure 4). The highest-order interaction was significant (F (10, 150) = 1.943, p = .044). The control group never significantly exceeded its baseline value in any monthly period on either side. Asterisks in Figure 4 mark means of the GEL group that were significantly different from both the GEL group baseline and the mean of the control group during the same period.

Dynamic allodynia or hypersensitivity to the brush was noted only on the ipsilateral hindpaw. These graphs depict brush results for paw withdrawals ipsilateral and contralateral to the procedure over 5 monthly post procedure periods (P1–P5) (maximum y-axis is 8). Mean and S.E.M. *p < .05, GEL group greater than both GEL group baseline and control group for the same period. Reduced responding on the ipsilateral side in the GEL and control groups during period 5 may reflect habituation; this was not present in the shams, whose allodynia increased in P4–P5.

Mechanical hyperalgesia: pinprick analysis.
The GEL group had the earliest and most persistent mechanical hyperalgesia, with increased paw withdrawals to pinprick bilaterally. The hyperalgesia was first present on the left side and within a few weeks was present on the right side. This pain behavior was vigorous after the first month and persisted robustly for 4 months, until the end of the study. The shams had similar pinprick pain behavior bilaterally that peaked by the 4th month and persisted until the 5th month, at the end of the study. The control group had no pinprick pain behavior during the study.

The data for pinprick are presented in Figure 5. The highest-order interaction was significant (F (10, 150) = 4.592, p < .001). The control group never deviated from its own baseline value in any post procedure period on either side. By contrast, the GEL group's paw withdrawal response on the ipsilateral side was significantly greater than baseline during all five post procedure periods, and on the contralateral side was significantly greater than baseline during periods 2 through 5. Between-group comparisons for pinprick indicated that the three groups were not significantly different during the baseline period on either side. Asterisks in Figure 5 mark means of the GEL group that were significantly different from both the GEL group baseline and the mean of the control group during the same period.

The GEL group, and later the sham group, developed mechanical hyperalgesia to pinprick bilaterally. In the GEL group, hyperalgesia began to develop on the contralateral side 2–3 weeks after its onset on the left; a similar slight delay in contralateral onset was seen in the shams. The graphs depict the mean of all paw withdrawal responses to pinprick on the ipsilateral and contralateral sides during the 5 monthly post procedure periods (maximum y-axis is 8).
The shams were not homogeneous: 5/8 showed pinprick hyperalgesia and 3/8 were similar to controls. No habituation effect was noted.

Individual data for sham group. Retrospectively, we noted that five of the eight sham procedure animals developed pain behavior bilaterally, similar to the GEL™ animals, in post procedure monthly periods 4 and 5 (after 3 months); the three remaining sham rats behaved similarly to the control group.

The foregoing analysis did not stress any effects that might or might not differ between the GEL and sham groups, because the sham group itself was not homogeneous in the animals' responses to the sham procedure. Individual data for the sham animals are presented in Figure 6. The sham data in all the pain behavioral studies and in the analgesic screening included the results from all eight shams. The probability that the bracketed five responder sham rats would separate themselves from the other three in exactly the same direction by chance in the allodynia experiment is .017 for a single day. This probability does not factor in the magnitude of the effect between responders and non-responders, or the fact that they separated themselves the same way on the same two consecutive days as in the hyperalgesia experiment. This is very strong evidence that we detected ipsilateral allodynia in the same sham animals in which we detected hyperalgesia.

Graphs depict the average of paw withdrawals for the eight individual shams: the average of all responses to von Frey and brush stimuli (top) or to pinprick (bottom) over baseline and for 5 months (maximum y-axis is 8). Pain behavior in response to pinprick on the right developed 2–3 weeks after the left; 5/8 shams had bilateral pinprick mechanical hyperalgesia during the P3 month, persisting until the end of the study. The remaining three animals resembled the control group. The bracket in the allodynia graph identifies the same five responders as in the hyperalgesia data.
In P4 and P5 for bilateral pinprick there is complete separation between the three shams behaving like controls and the five behaving like the GEL group.

Summary of pain behaviors. In the GEL procedure group, the gradual development of ipsilateral pain behaviors was usually followed by the gradual onset of contralateral pain behaviors within 2–3 weeks. In the GEL group, the light touch mechanical allodynia responses had a gradual onset after the first month, decreasing slowly after the third month (P3). Once the pinprick hyperalgesia developed in the GEL group, it became robust and persisted until the end of the study.

The sham procedure also gradually induced pinprick pain behavior similar to the GEL procedure in five of eight rats, on the ipsilateral side after day 72 and on the contralateral side after day 90. In the affected shams (5/8), the elicited mechanical hyperalgesia became robust after the third month (P3), persisting until the end of the study. The shams did not have a robust response to von Frey or brush stimulation. The control group developed no pain behaviors.

Factor influencing pain behavior testing. A reportable factor discovered during this study relates to the influence of the experimenter's hormone replacement on the elicited responses to stimuli. On each routine pain behavior test day, after the first six random rats were screened, the results were compared to each rat's prior session to check for environmental influences. In nine such early screening comparisons, paw withdrawals were markedly reduced or absent bilaterally in the six prescreened rats compared with their prior screening session. Thirty minutes after topical application to the investigator of a 17 beta-estradiol replacement cream, the screening was repeated on the same six rats, and their elicited pain behaviors were then consistent with the data collected during the prior testing period.
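Returning to the sham subgroup probability quoted earlier: the single-day value of .017 is consistent with a simple rank argument. If all orderings of the eight sham means were equally likely under the null, the chance that five prespecified rats occupy the top five ranks is one in C(8,5). This is an illustrative calculation under that assumed ranking model, not a reconstruction of the authors' computation:

```python
from math import comb

# Number of equally likely ways to split 8 rats into a top-5 and a bottom-3
splits = comb(8, 5)           # 56
p_single_day = 1 / splits     # about .0179, reported as .017
print(splits, round(p_single_day, 4))
```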
On unblinding, this effect was not related to groups. Even baseline pre-procedure behaviors were similarly affected by the estrogen hormone replacement. All data used in this study were collected with topical estrogen applied 30 minutes before testing began. This reaction echoes the olfactory 'male observer' effect, in which male experimenters reduce acute pain behaviors in rodents compared with female experimenters68, and suggests that, besides their sex, the age and hormonal status of experimenters may influence the reproducibility of pain behaviors.

Results of experiments with analgesic drugs. To control for the effects of time, it was important to compare the data for each analgesic's screening day to a single pre-drug control day prior to the drug's administration. As illustrated in the top part of Figure 2, an effect on the GEL group's ipsilateral side was apparent on individual control days when the paw withdrawals for all four allodynia measures were averaged into one variable. For that reason, we analyzed only the composite allodynia variable and the pinprick for responses to the analgesics. The data for each analgesic drug are displayed in the figures as bars for the mean response, with the analgesic drug on a particular day (the days listed in the x-axis label) paired with the respective control data from the routine test day one to four days prior. Throughout the following analyses, the mean number of paw withdrawals in the GEL group on the ipsilateral side was significantly greater than that of the control group on all pre-drug days, for both the von Frey plus brush variable and the pinprick.

Morphine. Morphine sulfate at 3 mg/kg showed an escalating loss of effectiveness bilaterally, which was not due to tolerance, as the three doses were at least 30 days apart. Morphine at 3 mg/kg is usually a toxic dose in humans. The data for the morphine test days using the von Frey fibers and brush are presented in the top half of Figure 7.
The ANOVA revealed that all main and interaction effects were significant at the .05 level, including the three-way interaction (F (10, 150) = 2.322, p = .014). Morphine caused significant decreases in responding compared with the pre-drug day, on both the ipsilateral and contralateral sides, only on day 28, as denoted by asterisks in the figures. In three instances, morphine actually increased pain responses, as denoted by red plus symbols. These were the only significant increases in pain behavior after an analgesic in the entire dataset for the four analgesic drugs. The data for pinprick are presented in the bottom half of Figure 7. The ANOVA revealed that all main and interaction effects were significant, including the three-way interaction (F (10, 150) = 2.655, p = .005). Significant decreases in paw withdrawals are denoted by asterisks.

Morphine was less effective over time; this was not due to tolerance, since each dose was separated by weeks. The graph depicts the average of paw withdrawals on pre-drug control days (black) and on paired morphine dose days (white). Results of behavior testing show the average of all four light touch allodynia measures (three von Frey fibers and the brush, top graphs) and the mechanical hyperalgesia (pinprick, bottom graphs). Only on D28 did morphine show marked analgesia for pinprick (bottom) and light touch allodynia (top) across all groups, bilaterally. The data suggest a developing opioid-related hypersensitivity to stimuli. Mean and S.E.M. *Significant decrease from the paired control day, p < .05. +Significant increase from the paired control day, p < .005.

Morphine was effective early after the GEL™ procedure, but the size of the effect waned with time on both sides.
For example, in the GEL group on the ipsilateral side for the allodynia measures, the standardized effect size between the control day and the morphine day changed from a positive analgesic effect of 1.30 pooled standard deviation units on day 28, to no effect on day 64, to a negative effect of -1.25 pooled standard deviation units on day 108. For the pinprick measure, effect sizes were conservatively estimated using the standard deviation of the morphine condition only, rather than the pooled standard deviation, because variability was reduced as the responses approached the ceiling of eight paw withdrawals out of eight pinpricks. The analgesic effect size waned from 4.14 standard deviations on day 28 to 0.64 standard deviations on day 108. The sham and control data also suggest an escalating pattern of pinprick stimulus sensitivity after morphine.

Celecoxib. Celecoxib at 10 mg/kg had no analgesic effect in any group on any day on either side; it did not decrease pain behaviors, as would have been demonstrated by a decrease in paw withdrawals. Celecoxib at 10 mg/kg is about three times a human dose. Data for the celecoxib days for the von Frey fibers and brush are presented in the top half of Figure 8. The ANOVA revealed significant main effects of group, side, and drug response, and a significant group-by-side interaction (F (2, 30) = 22.92, p = .001). None of the other interaction effects was significant. By Bonferroni-protected planned contrasts, there was no effect of the celecoxib dose on either side in any group. The data for pinprick are presented in the bottom half of Figure 8. The ANOVA revealed that all main and interaction effects were significant, including the three-way interaction (F (10, 150) = 2.675, p = .005). The significance of this interaction was completely accounted for by other effects in the data that were not related to any specific pre-drug vs.
drug contrast in our set of planned comparisons.

No analgesic effect of celecoxib was demonstrated; there was no significant change in paw withdrawals. The graph depicts the average of paw withdrawals on pre-drug control days (black) and on paired celecoxib dose days (white). Results of behavior testing show the average of all four light touch allodynia measures (three von Frey fibers and the brush, top graphs) and the mechanical hyperalgesia (pinprick, bottom graphs). Mean and S.E.M. Celecoxib did not significantly reduce paw withdrawal responses on any pair of days for any group.

Gabapentin. On all days, on both sides, gabapentin at 25 mg/kg robustly reduced paw withdrawal responses in the GEL group (Figure 9). The dose of gabapentin used is nearly equivalent to a human dose. Gabapentin significantly reduced the sham pinprick responses on most days, and also significantly reduced responding in the control group on days 47 and 76 on both sides. The data for the gabapentin test days using the von Frey fibers and brush are presented in the top half of Figure 9. The ANOVA revealed significant main effects and interaction effects except for the three-way interaction. The group-by-days pattern of responding appeared similar on the ipsilateral and contralateral sides; therefore, the groups-by-days interaction term is the important one for analysis (F (10, 150) = 1.929, p = .045). When the data for the ipsilateral and contralateral sides were combined, gabapentin robustly reduced GEL group responding on all analgesic test days. For comparison to other figures, asterisks in the top half of Figure 9 represent significant decreases from the paired pre-drug day by Bonferroni-protected planned contrasts. By this analysis, the comparison for day 114 on the contralateral side for the allodynia measure in the GEL group was not significant. The data for pinprick are presented in the bottom half of Figure 9.
With the exception of the group-by-side interaction (p = .055), all main and interaction effects were significant at p < .05, including the three-way interaction (F (10, 150) = .029, p = .03).

Analgesia, with reduced paw withdrawals, was seen in the GEL group bilaterally. The graph depicts the average of paw withdrawals on pre-drug control days (black) and on paired gabapentin dose days (white). Results of behavior testing show the average of all four light touch allodynia measures (three von Frey fibers and the brush, top graphs) and the mechanical hyperalgesia (pinprick, bottom graphs). Mean and S.E.M. *Significant decrease from the paired control day, p < .05. Gabapentin also suppressed responding in the sham and control groups.

Duloxetine. Duloxetine at 10 mg/kg reduced pain behaviors bilaterally in the GEL group (Bonferroni-protected contrasts). This dose is markedly less than most prior rat doses66, and more than human doses. The bilateral analgesia was similar in the sham group on D83 and D125, but emerged only after pain behaviors began developing in the shams after 2 months. Interestingly, duloxetine did not suppress normal responses to pinprick stimuli in the control group, as gabapentin did; yet it did suppress the contralateral allodynia responses in the control group. The von Frey fibers and brush data for duloxetine are presented in the top half of Figure 10. The ANOVA revealed a significant three-way interaction (F (10, 150) = 1.99, p = .039). The data for pinprick are presented in the bottom half of Figure 10. The ANOVA revealed that all main and interaction effects were significant except for the three-way interaction. The groups-by-days interaction was significant (F (10, 150) = 11.358, p < .001), and the pattern of responding within the groups was similar on the ipsilateral and contralateral sides.

Analgesia, with reduced paw withdrawals, was seen in the GEL group bilaterally.
A similar response was noted on the last two test days of the shams, D83 and D125. The graph depicts the average of paw withdrawals on pre-drug control days (black) and on paired duloxetine dose days (white). Results of behavior testing show the average of all four light touch allodynia measures (three von Frey fibers and the brush, top graphs) and the mechanical hyperalgesia (pinprick, bottom graphs). Mean and S.E.M. *Significant decrease from the paired control day, p < .05. Duloxetine had less effect than gabapentin on the responses in the control group.

In general, the effects of the different analgesic drugs were similar in the GEL™ and sham groups whenever the sham group demonstrated pain behavior; five of the eight rats in the sham group eventually developed pain behaviors like the GEL group. Morphine demonstrated marked analgesia to the pinprick stimuli on both sides on day 28, but not on later days; this correlates with the waning effect of morphine in the GEL group. Celecoxib did not affect responses in the sham group. As in the GEL group, both gabapentin and duloxetine demonstrated marked analgesia in the sham group.

Morphine showed a waning analgesic response over time, not related to tolerance, with increasing pain behaviors, suggestive of a developing opioid-related hypersensitivity. Celecoxib had no analgesic effect on pain behaviors at any time. By contrast, gabapentin and duloxetine both produced robust analgesia bilaterally in the GEL group during all time periods, and in 5/8 shams during the last two drug testing periods. The pain behaviors, when present in the GEL group after D23 and later in the shams after D90, responded similarly to all analgesics.

Erythropoietin pilot study. The pain behavior in the GEL group that had persisted for 4 months was reversed for up to 7 days (end of pilot) by the targeted perineural application of epoetin alfa (EPO) in a pilot study at the end of the investigation (Figure 11).
Two of the rats in the "EPO at site" group received a second local EPO injection via a lateral approach on day 155 (shown as a + sign on the left paw results in Figure 11), as described in the Materials and Methods section on the erythropoietin treatment pilot study. Pinprick behavior data were collected on days 153, 154, 156, 159, and 160. The resulting data were analyzed using a mixed model ANOVA, with the "EPO at site" injection group as the between-subjects factor (three groups), days as one repeated measures factor (8 days), and laterality as a second repeated measures factor (left and right paws). The data are presented in Figure 11. The three-way interaction was not significant, but the groups-by-days interaction was significant (F (14, 77) = 8.208, p < .001). As can be seen in the graph, the effect of EPO was nearly identical on the right and left sides and was significant for at least 6 days, to the end of the study. An ANOVA such as this should be interpreted with caution because several cells had zero variance (there is a ceiling effect of eight paw withdrawals out of eight pinprick presentations).

After perineural infiltration of EPO (200 units in normal saline), the paw withdrawals to pinprick (hyperalgesia or hypersensitivity) decreased bilaterally to near pre-GEL-procedure levels. The "No EPO" control group (n = 5) and the subcutaneously injected "EPO SC" group (n = 4) continued to show robust pain behavior bilaterally. The + on day 155 marks the two rats in the "EPO at site" group, still with pain behavior, whose GEL site on the left was re-injected with EPO using a different anatomic technique, as described; this caused a decrease in their paw withdrawals to pinprick. *p < .001, "EPO at site" vs. "No EPO".

Histology. Tissue sections from nine rats (three randomly selected from each group) were blinded and examined by an independent neuropathologist.
These observations were later matched to the three groups: n = 3 GEL procedure rats, n = 3 sham procedure rats from the 5/8 that displayed late-onset robust pain behavior, and n = 3 controls. Details are described under Histology in the Materials and Methods section. There were no differences or abnormal findings in the tissue sections between the control and sham procedure animals. The structure of the nerve and surrounding tissue was completely unremarkable (Figure 12).

Longitudinal sections through nerves from the control (a.) and sham (b.) procedure groups reveal normal-appearing nerves. The surrounding muscle tissue harvested with the nerve also appears to be within normal limits.

In the GEL™ treated group, the histology of the nerves was in stark contrast to the sham and control groups. The gross appearance of the tissue at dissection revealed a discrete area of swelling, or bulge, along the course of the distal tibial nerve in all specimens of the GEL group only. These discrete structures were about twice the diameter of the nerve just distal and proximal to the outcropping. Longitudinal sections through the portion of the tibial nerve containing these bulges revealed that the swelling results from changes within the endoneurium, including evidence of intraneural edema with increased spacing between the neural fascicles, and axonal edema in the fibers within the bulge region (Figure 13 a and b). In addition, there were numerous profiles in which ongoing axonal fragmentation, a hallmark of Wallerian degeneration, was evident (see arrows in Figure 13 a' and c)69.

Panels a. and b. show the gross swelling seen in the tibial nerves upon dissection. The brackets in panels a. and b. denote the prominent areas of swelling in the GEL nerves. The axons within the swollen area are themselves swollen.
The diameters of the axons in this distended area of the nerve are approximately twice those of the contiguous axons proximal or distal (not shown) to it. The arrowheads point to axonal debris in panel a', which is a magnification of the boxed area in panel a. Macrophage-engulfed myelin and axonal debris (panel c., arrows) is further evidence of ongoing Wallerian degeneration.

Consistent with ongoing Wallerian degeneration, we observed numerous macrophages within the endoneurium, phagocytizing myelin and axonal debris (Figure 14 a and b, large arrows). We also noted a significant leukocyte accumulation in and around the perineurium (Figure 14 b, thin arrows).

Intraneural (large arrows, a. and b.) and perineural (thin arrows, b.) leukocytic infiltration is seen in the GEL group nerves. Notably, there are numerous profiles of regenerating axons in the same nerve (panels c. and c') at the same proximal/distal level, suggesting ongoing nerve remodeling in the GEL procedure animals. Panel c' is a higher power magnification of the clusters of small, unmyelinated axons within the box in panel c.

No findings consistent with residual gel were noted in the GEL procedure specimens. In addition to the ongoing Wallerian degeneration observed in the GEL animals, we also noted ongoing axonal regeneration. There were clusters of small, unmyelinated or lightly myelinated fibers growing into the nerves of the GEL cohort only. The grouping of these small fibers is consistent with regeneration, rather than with the randomly arrayed single unmyelinated fibers found in the healthy, homeostatic adult nerve70.


Discussion

This nonsurgical NeuroDigm GEL™ aged model appears to mimic the presentation of many humans with persistent neural pain after soft tissue injuries: gradual in onset, persisting for months, and lacking deformities or antalgic gait.
Two other characteristics have been suggested as relevant for a rodent neuropathic pain model: 1) an analgesic response profile similar to humans, and 2) effective analgesia at doses of near human-equivalent strength (14th World Congress IASP 2012; SIG Pain in Non-Human Species: TW 62, "Closing the Gap between Preclinical and Clinical Studies", Jeffrey Mogil). This socially mature, aged, outbred rat version of this model of neurogenic pain approaches these analgesic response guidelines.

Mature adult rats59,61 were chosen to resemble human chronic pain patients1,71 more closely. The allodynia response in this aged model was weak, while the hyperalgesia response to pinprick was robust. Younger (270 g) rats created with the NeuroDigm GEL™ method have had robust allodynia over 2 months (Supplementary file S4). Neuropathic pain models in older rats have been recognized as having less pronounced mechanical allodynia than in younger ones72,73.

Central sensitivity, or neural plasticity, is demonstrated in this model after 2–3 weeks by the appearance of elicited pain behavior on the contralateral side in all GEL procedure rats, as well as in the five sham rats that developed late-onset pain behavior. The contralateral spread of pain behaviors in this model raises the possibility of central changes as well. Whether this is the result of anatomic changes or of alterations in neural signaling remains to be determined. Central sensitivity with contralateral pain is known to exist in humans with originally unilateral neural pain74–77.

The sham effect, with its delayed onset of neuropathic pain, was not anticipated, but evaluation of the data indicated a more delayed tissue response (S1) than in the GEL group.
While the GEL™ was a moderately strong stimulus for tissue repair, causing a pain response initially at 23 days (Figure 2), the data show that the sham fluid was a weaker stimulus for a tissue response, with a gradual, consistent onset of pain behavior after 60 days in the five of eight shams with pain (Figure 2, Figure 6). If this experiment had ended before 3 months, the sham effect would not have been evident. Interestingly, many human neuropathic pain conditions begin months after an injury78.\n\nAn opioid-related hypersensitivity79,80 and morphine tolerance are suggested by the neural pain behavior developing over time, as seen in the GEL and 5/8 sham animals (Figure 7). Such lack of effective analgesia with morphine over time is characteristic of many neuropathic pain patients81–83. Since the three doses of morphine in this study were separated by weeks, the weakening response to morphine does not reflect tolerance but a resistance. This morphine resistance may correlate with the gradual development of nerve injury84. Repeated screenings of analgesics over time may determine translational effectiveness85. The analgesic responses of this mature GEL model have negative predictive validity for morphine, and positive predictive validity for gabapentin and duloxetine.\n\nErythropoietin86–89, methylprednisolone90, glucocorticoids in general91, and ARA290, an erythropoietin-derived tissue repair peptide92–94, as well as other biologics95, are known to have neuroprotective effects systemically and locally in rodent nerve injury models. The local erythropoietin dose at the neural lesion in this model may harness the tissue repair ability of erythropoietin96 in alleviating pain behavior. 
In this model the EPO injection acts similarly to a diagnostic and therapeutic peripheral nerve block.\n\nThe targeted local injection of an erythropoietin analog in this study appears to support the localized “ectopic” and “generator” theories for the persistence of neural pain. The unilateral application of erythropoietin appears to have markedly reduced, for at least 6 days (end of study), the bilateral pain behavior of mechanical hyperalgesia (pinprick). The focal neural swelling seen on the distal tibial nerve in the GEL group acted like a mid-axon nociceptive stimulus, maintaining the pain behavior until the localized erythropoietin healed the neural lesion.\n\nAn unanticipated feature suggested in our 5-month study is the evidence for habituation to the von Frey and brush stimuli. In this study, the light touch stimuli (von Frey, brush) have less painful significance than the pinprick, so their responses may be susceptible to habituation (Figure 3, Figure 4). We have not been able to locate any specific study of habituation to von Frey or brush stimuli. Ipsilateral responses to light touch stimuli in the GEL group appeared to diminish in post-procedure periods 4 and 5. By contrast, the sham group tended to remain the same or increase during the same time periods, as some of them developed pain behavior. We did not observe any evidence of habituation to the pinprick stimulus in any group.\n\nThe nerve specimens used in this study were processed at the end of the nearly 6-month study, when chronic tissue changes dominated, without evidence of acute inflammation97. The histological changes seen were restricted to the NeuroDigm GEL™ procedure group and are consistent with ongoing tissue remodeling in the area where the GEL was placed. Specifically, there was evidence of both Wallerian degeneration and axonal regeneration, hallmarks of nerve remodeling. 
The GEL-evoked remodeling of extraneural matrix tissue may result in nerve compression98, leading to neural remodeling with the delayed onset of pain42. Also consistent with ongoing nerve remodeling is the observed inflammation with leukocytic infiltration, seen in both the endoneurial environment and the extraneural space.\n\nNotably, a large number of tightly packed unmyelinated fibers were within the nerves of the GEL procedure animals, consistent with regeneration. The increased number of these fibers and their unusual clustered appearance raises the issue of adequate insulation and the possibility that some of the pain behavior might be due to ephaptic transmission99–101. Neural regeneration with ephaptic transmission is likely the underlying cause of both the Tinel’s sign102 observed in some patients at sites of neural compression due to entrapment, and the Pinch Reflex Test response found at sites of regenerating peripheral nerves in experimental rodents103,104.\n\nThe histology demonstrates that the light microscopy anatomical response to the GEL procedure is restricted to a focal area of neural and axonal edema with neuroinflammation in the tibial nerve, yet the behavioral effects are widespread. Light microscopy showed that the three shams (from the 5/8 with robust pain behavior) had no visible anatomic changes on histology, similar to the paclitaxel model105. Additional studies are needed to explore the effects of the GEL™-induced distal mononeuritis on the brain, spinal cord and dorsal root ganglion.\n\nThe mechanisms that lead to the anatomic and behavioral changes are not yet determined. However, it is likely that the neural remodeling91 seen in our model’s neuritis reflects many of the ongoing changes that are seen with chronic pain following peripheral trauma106. The findings in our study are consistent with prior findings in neural biopsies of humans with severe persistent pain due to known nerve entrapments102,107,108. 
This model allows further in-depth studies of the events surrounding the establishment of chronic neural pain following minimal trauma, as seen in humans.\n\n\nConclusion\n\nOur model simulates the gradual sequelae of a soft tissue injury with a focal matrix compression (a pinch) causing an ectopic neural site with persistent neurogenic pain109,110. Gradual perineural changes of the extracellular matrix and scarring111 can cause such focal compressions on a nerve42. Recently, a lesion or disease of the somatosensory system has been required for the definition of neuropathic pain112. This model provides an extended temporal window into an occult focal neural lesion as a naturally occurring soft tissue disease113.\n\nThe GEL method supports the 3Rs initiative for refinement in the humane use of animals and, if predictive of analgesia, may lead to a reduction in the number of animals used. The tissue reaction used in this refinement is an encoded response occurring after any tissue injury in all vertebrates114,115. The last stage of this repair process is tissue remodeling (S1)113,116, which may be a unifying etiology117 to target in many complex neuropathic pain syndromes.\n\nWe have used tissue repair in this model to create neuropathic pain, and also to treat it. Performing analgesic studies on a mature adult rat with a tissue matrix neural lesion may help reveal the analgesic potential of an agent for humans with neuropathic pain. The refined118 NeuroDigm GEL™ Model has an accessible neural biomimetic target for translational studies exploring cell signaling, biomarkers, analgesics, detection devices, biologic treatments and alternatives to opiates.\n\n\nData availability\n\nF1000Research: Dataset 1. Raw data of NeuroDigm Model of neuropathic pain in mature rat, 10.5256/f1000research.9544.d137472119",
"appendix": "Author contributions\n\n\n\nConceived and designed the experiments: MRH DAF DEW JLB. Performed the experiments: MRH RMD DEW. Analyzed the data: DAF DEW MRH. Wrote the paper: MRH DAF RMD DEW JLB. All authors agreed to the final content of the article.\n\n\nCompeting interests\n\n\n\nMRH is an officer and stockholder in NeuroDigm Corporation.\n\n\nGrant information\n\nNeuroDigm Corporation funded this work.\n\n\nAcknowledgements\n\nWe are indebted to general surgeon Dale C. Rank Sr. MD (deceased February 5th 1996), who contributed to the translation from man to the preclinical method of the model. Deep gratitude is given to Gordon Munro PhD, who advised on analgesic doses and methods, and read the paper.\n\n\nSupplementary material\n\nSupplementary file S1: Compares the onset of pain behaviors in this study to the onset of tissue remodeling with fibrosis.\n\nSupplementary file S2: NC3Rs ARRIVE guidelines.\n\nChecklist of recommendations for in vivo animal experiments.\n\nSupplementary file S3: Basic ANOVA statistics on groups.\n\nDetailed ANOVA statistics on each group on the routine test days without analgesics.\n\nSupplementary file S4: NeuroDigm GEL™ analgesic study in young rats.\n\nAnalgesics over months in young NeuroDigm GEL rats with robust allodynia.\n\n\nReferences\n\nPercie du Sert N, Rice AS: Improving the translation of analgesic drugs to the clinic: animal models of neuropathic pain. Br J Pharmacol. 2014; 171(12): 2951–63. PubMed Abstract | Publisher Full Text | Free Full Text\n\nde Mos M, de Bruijn AG, Huygen FJ, et al.: The incidence of complex regional pain syndrome: a population-based study. Pain. 2007; 129(1-2): 12–20. PubMed Abstract | Publisher Full Text\n\nDieleman JP, Kerklaan J, Huygen FJ, et al.: Incidence rates and treatment of neuropathic pain conditions in the general population. Pain. 2008; 137(3): 681–8. 
PubMed Abstract | Publisher Full Text\n\nSandroni P, Benrud-Larson LM, McClelland RL, et al.: Complex regional pain syndrome type I: incidence and prevalence in Olmsted county, a population-based study. Pain. 2003; 103(1–2): 199–207. PubMed Abstract | Publisher Full Text\n\nPayne R: Neuropathic Pain Syndromes, with special reference to Causalgia and Reflex Sympathetic Dystrophy. The Clinical Journal of Pain. 1986; 2(1): 59–73. Reference Source\n\nElkind AH: Headache and facial pain associated with head injury. Otolaryngol Clin North Am. 1989; 22(6): 1251–71. PubMed Abstract\n\nFreund B, Schwartz M: Post-traumatic myofascial pain of the head and neck. Curr Pain Headache Rep. 2002; 6(5): 361–9. PubMed Abstract | Publisher Full Text\n\nRho RH, Brewer RP, Lamer TJ, et al.: Complex regional pain syndrome. Mayo Clin Proc. 2002; 77(2): 174–80. PubMed Abstract | Publisher Full Text\n\nRoberts WJ: A hypothesis on the physiological basis for causalgia and related pains. Pain. 1986; 24(3): 297–311. PubMed Abstract | Publisher Full Text\n\nRotter R, Kuhn C, Stratos I, et al.: Erythropoietin enhances the regeneration of traumatized tissue after combined muscle-nerve injury. J Trauma Acute Care Surg. 2012; 72(6): 1567–75. PubMed Abstract | Publisher Full Text\n\nSchwartzman RJ, Grothusen J, Kiefer TR, et al.: Neuropathic central pain: epidemiology, etiology, and treatment options. Arch Neurol. 2001; 58(10): 1547–50. PubMed Abstract | Publisher Full Text\n\nSchwartzman RJ, Maleki J: Postinjury neuropathic pain syndromes. Med Clin North Am. 1999; 83(3): 597–626. PubMed Abstract | Publisher Full Text\n\nSeale KS: Reflex sympathetic dystrophy of the lower extremity. Clin Orthop Relat Res. 1989; (243): 80–5. PubMed Abstract\n\nvan der Laan L, Goris RJ: Reflex sympathetic dystrophy. An exaggerated regional inflammatory response? Hand Clin. 1997; 13(3): 373–85. 
PubMed Abstract\n\nWasner G, Schattschneider J, Binder A, et al.: Complex regional pain syndrome--diagnostic, mechanisms, CNS involvement and therapy. Spinal Cord. 2003; 41(2): 61–75. PubMed Abstract | Publisher Full Text\n\nWilson PR: Post-traumatic upper extremity reflex sympathetic dystrophy. Clinical course, staging, and classification of clinical forms. Hand Clin. 1997; 13(3): 367–72. PubMed Abstract\n\nHooshmand H: Chronic pain: reflex sympathetic dystrophy, prevention, and management. Boca Raton, Florida: CRC Press; 1993; 201. Reference Source\n\nOaklander AL: Role of minimal distal nerve injury in complex regional pain syndrome-I. Pain Med. 2010; 11(8): 1251–6. PubMed Abstract | Publisher Full Text\n\nAli Z, Ringkamp M, Hartke TV, et al.: Uninjured C-fiber nociceptors develop spontaneous activity and alpha-adrenergic sensitivity following L6 spinal nerve ligation in monkey. J Neurophysiol. 1999; 81(2): 455–66. PubMed Abstract\n\nCampbell JN, Meyer RA: Mechanisms of neuropathic pain. Neuron. 2006; 52(1): 77–92. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDevor M: Nerve pathophysiology and mechanisms of pain in causalgia. J Auton Nerv Syst. 1983; 7(3–4): 371–84. PubMed Abstract | Publisher Full Text\n\nDevor M: Neuropathic pain and injured nerve: peripheral mechanisms. Br Med Bull. 1991; 47(3): 619–30. PubMed Abstract\n\nFinnerup NB, Baastrup C: Spinal cord injury pain: mechanisms and management. Curr Pain Headache Rep. 2012; 16(3): 207–16. PubMed Abstract | Publisher Full Text\n\nFinnerup NB, Jensen TS: Spinal cord injury pain--mechanisms and treatment. Eur J Neurol. 2004; 11(2): 73–82. PubMed Abstract | Publisher Full Text\n\nHan HC, Lee DH, Chung JM: Characteristics of ectopic discharges in a rat neuropathic pain model. Pain. 2000; 84(2–3): 253–61. PubMed Abstract | Publisher Full Text\n\nHu P, McLachlan EM: Macrophage and lymphocyte invasion of dorsal root ganglia after peripheral nerve lesions in the rat. Neuroscience. 2002; 112(1): 23–38. 
PubMed Abstract | Publisher Full Text\n\nJensen TS, Baron R: Translation of symptoms and signs into mechanisms in neuropathic pain. Pain. 2003; 102(1–2): 1–8. PubMed Abstract | Publisher Full Text\n\nWoolf CJ: The pathophysiology of peripheral neuropathic pain--abnormal peripheral input and abnormal central processing. Acta Neurochir Suppl (Wien). 1993; 58: 125–30. PubMed Abstract | Publisher Full Text\n\nCostigan M, Scholz J, Woolf CJ: Neuropathic pain: a maladaptive response of the nervous system to damage. Annu Rev Neurosci. 2009; 32: 1–32. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDeLeo JA, Yezierski RP: The role of neuroinflammation and neuroimmune activation in persistent pain. Pain. 2001; 90(1–2): 1–6. PubMed Abstract | Publisher Full Text\n\nBouhassira D, Attal N: Diagnosis and assessment of neuropathic pain: the saga of clinical tools. Pain. 2011; 152(3 Suppl): S74–83. PubMed Abstract | Publisher Full Text\n\nOaklander AL, Wilson PR, Moskovitz PA, et al.: Response to \"A new definition of neuropathic pain\". Pain. 2012; 153(4): 934–5; author reply 935–6. PubMed Abstract | Publisher Full Text\n\nDilley A, Greening J, Walker-Bone K, et al.: Magnetic resonance imaging signal hyperintensity of neural tissues in diffuse chronic pain syndromes: a pilot study. Muscle Nerve. 2011; 44(6): 981–4. PubMed Abstract | Publisher Full Text\n\nFiller A: Magnetic resonance neurography and diffusion tensor imaging: origins, history, and clinical impact of the first 50,000 cases with an assessment of efficacy and utility in a prospective 5000-patient study group. Neurosurgery. 2009; 65(4 suppl): A29–43. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFiller AG, Bell BA: Axonal transport, imaging, and the diagnosis of nerve compression. Br J Neurosurg. 1992; 6(4): 293–5. PubMed Abstract | Publisher Full Text\n\nLawande AD, Warrier SS, Joshi MS: Role of ultrasound in evaluation of peripheral nerves. Indian J Radiol Imaging. 2014; 24(3): 254–8. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nSimon NG, Narvid J, Cage T, et al.: Visualizing axon regeneration after peripheral nerve injury with magnetic resonance tractography. Neurology. 2014; 83(15): 1382–4. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTung KW, Behera D, Biswal S: Neuropathic pain mechanisms and imaging. Semin Musculoskelet Radiol. 2015; 19(2): 103–11. PubMed Abstract | Publisher Full Text\n\nVaeggemose M, Ringgaard S, Ejskjaer N, et al.: Magnetic resonance imaging may be used for early evaluation of diabetic peripheral polyneuropathy. J Diabetes Sci Technol. 2015; 9(1): 162–3. PubMed Abstract | Publisher Full Text | Free Full Text\n\nJacobson JA, Wilson TJ, Yang LJ: Sonography of Common Peripheral Nerve Disorders With Clinical Correlation. J Ultrasound Med. 2016; 35(4): 683–93. PubMed Abstract | Publisher Full Text\n\nKerasnoudis A: Ultrasound visualization of nerve remodeling after strenuous exercise. Muscle Nerve. 2016; 53(2): 320–4. PubMed Abstract | Publisher Full Text\n\nMyers RR: 1994 ASRA Lecture. The pathogenesis of neuropathic pain. Reg Anesth. 1995; 20(3): 173–84. PubMed Abstract\n\nBowsher D: Trigeminal neuralgia: an anatomically oriented review. Clin Anat. 1997; 10(6): 409–15. PubMed Abstract | Publisher Full Text\n\nHinz B, Phan SH, Thannickal VJ, et al.: Recent developments in myofibroblast biology: paradigms for connective tissue remodeling. Am J Pathol. 2012; 180(4): 1340–55. PubMed Abstract | Publisher Full Text | Free Full Text\n\nVelnar T, Bailey T, Smrkolj V: The wound healing process: an overview of the cellular and molecular mechanisms. J Int Med Res. 2009; 37(5): 1528–42. PubMed Abstract | Publisher Full Text\n\nZochodne DW: The microenvironment of injured and regenerating peripheral nerves. Muscle Nerve Suppl. 2000; 9(Supplement S9): S33–8. PubMed Abstract | Publisher Full Text\n\nOgawa R: Mechanobiology of scarring. Wound Repair Regen. 2011; 19(Suppl 1): s2–9. 
PubMed Abstract | Publisher Full Text\n\nScholz J, Woolf CJ: The neuropathic pain triad: neurons, immune cells and glia. Nat Neurosci. 2007; 10(11): 1361–8. PubMed Abstract | Publisher Full Text\n\nMogil JS: Animal models of pain: progress and challenges. Nat Rev Neurosci. 2009; 10(4): 283–94. PubMed Abstract | Publisher Full Text\n\nMogil JS, Davis KD, Derbyshire SW: The necessity of animal models in pain research. Pain. 2010; 151(1): 12–7. PubMed Abstract | Publisher Full Text\n\nAnsell DM, Holden KA, Hardman MJ: Animal models of wound repair: Are they cutting it? Exp Dermatol. 2012; 21(8): 581–5. PubMed Abstract | Publisher Full Text\n\nPeplow PV, Chung TY, Baxter GD: Photodynamic modulation of wound healing: a review of human and animal studies. Photomed Laser Surg. 2012; 30(3): 118–48. PubMed Abstract | Publisher Full Text\n\nvan der Worp HB, Howells DW, Sena ES, et al.: Can animal models of disease reliably inform human studies? PLoS Med. 2010; 7(3): e1000245. PubMed Abstract | Publisher Full Text | Free Full Text\n\nJaggi AS, Jain V, Singh N: Animal models of neuropathic pain. Fundam Clin Pharmacol. 2011; 25(1): 1–28. PubMed Abstract | Publisher Full Text\n\nAbdo H: Gait Analysis and Therapeutic Application of Carbon Monoxide in a Rodent Model of Complex Regional Pain Syndrome Type-1. Electronic Thesis and Dissertation Repository: The University of Western Ontario. 2015. Reference Source\n\nKim SH, Chung JM: An experimental model for peripheral neuropathy produced by segmental spinal nerve ligation in the rat. Pain. 1992; 50(3): 355–63. PubMed Abstract | Publisher Full Text\n\nBennett GJ, Xie YK: A peripheral mononeuropathy in rat that produces disorders of pain sensation like those seen in man. Pain. 1988; 33(1): 87–107. PubMed Abstract | Publisher Full Text\n\nDecosterd I, Woolf CJ: Spared nerve injury: an animal model of persistent peripheral neuropathic pain. Pain. 2000; 87(2): 149–58. 
PubMed Abstract | Publisher Full Text\n\nSengupta P: The Laboratory Rat: Relating Its Age With Human's. Int J Prev Med. 2013; 4(6): 624–30. PubMed Abstract | Free Full Text\n\nAndreollo NA, Santos EF, Araújo MR, et al.: Rat’s age versus human’s age: what is the relationship? Arq Bras Cir Dig. 2012; 25(1): 49–51. PubMed Abstract | Publisher Full Text\n\nQuinn R: Comparing rat’s to human’s age: how old is my rat in people years? Nutrition. United States, 2005; 21(6): 775–7. PubMed Abstract | Publisher Full Text\n\nFitts DA: Improved stopping rules for the design of efficient small-sample experiments in biomedical and biobehavioral research. Behav Res Methods. 2010; 42(1): 3–22. PubMed Abstract | Publisher Full Text\n\nFitts DA: Ethics and animal numbers: informal analyses, uncertain sample sizes, inefficient replications, and type I errors. J Am Assoc Lab Anim Sci. 2011; 50(4): 445–53. PubMed Abstract | Free Full Text\n\nAchneck HE, Sileshi B, Jamiolkowski RM, et al.: A comprehensive review of topical hemostatic agents: efficacy and recommendations for use. Ann Surg. 2010; 251(2): 217–28. PubMed Abstract | Publisher Full Text\n\nDrury JL, Mooney DJ: Hydrogels for tissue engineering: scaffold design variables and applications. Biomaterials. 2003; 24(24): 4337–51. PubMed Abstract | Publisher Full Text\n\nMunro G, Storm A, Hansen MK, et al.: The combined predictive capacity of rat models of algogen-induced and neuropathic hypersensitivity to clinically used analgesics varies with nociceptive endpoint and consideration of locomotor function. Pharmacol Biochem Behav. 2012; 101(3): 465–78. PubMed Abstract | Publisher Full Text\n\nBrabb T, Carbone L, Snyder J, et al.: Institutional animal care and use committee considerations for animal models of peripheral neuropathy. ILAR J. 2014; 54(3): 329–37. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nSorge RE, Martin LJ, Isbester KA, et al.: Olfactory exposure to males, including men, causes stress and related analgesia in rodents. Nat Methods. 2014; 11(6): 629–32. PubMed Abstract | Publisher Full Text\n\nGriffin JW, George EB, Chaudhry V: Wallerian degeneration in peripheral nerve disease. Baillieres Clin Neurol. 1996; 5(1): 65–75. PubMed Abstract\n\nGondré M, Burrola P, Weinstein DE: Accelerated nerve regeneration mediated by Schwann cells expressing a mutant form of the POU protein SCIP. J Cell Biol. 1998; 141(2): 493–501. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPickering G, Jourdan D, Millecamps M, et al.: Age-related impact of neuropathic pain on animal behaviour. Eur J Pain. 2006; 10(8): 749–55. PubMed Abstract | Publisher Full Text\n\nChung JM, Choi Y, Yoon YW, et al.: Effects of age on behavioral signs of neuropathic pain in an experimental rat model. Neurosci Lett. 1995; 183(1–2): 54–7. PubMed Abstract | Publisher Full Text\n\nCrisp T, Giles JR, Cruce WL, et al.: The effects of aging on thermal hyperalgesia and tactile-evoked allodynia using two models of peripheral mononeuropathy in the rat. Neurosci Lett. 2003; 339(2): 103–6. PubMed Abstract | Publisher Full Text\n\nShenker N, Haigh R, Roberts E, et al.: A review of contralateral responses to a unilateral inflammatory lesion. Rheumatology (Oxford). 2003; 42(11): 1279–86. PubMed Abstract | Publisher Full Text\n\nShenker NG, Haigh RC, Mapp PI, et al.: Contralateral hyperalgesia and allodynia following intradermal capsaicin injection in man. Rheumatology (Oxford). 2008; 47(9): 1417–21. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSchmid AB, Nee RJ, Coppieters MW: Reappraising entrapment neuropathies--mechanisms, diagnosis and management. Man Ther. 2013; 18(6): 449–57. 
PubMed Abstract | Publisher Full Text\n\nMaleki J, LeBel AA, Bennett GJ, et al.: Patterns of spread in complex regional pain syndrome, type I (reflex sympathetic dystrophy). Pain. 2000; 88(3): 259–66. PubMed Abstract | Publisher Full Text\n\nSchott GD: Delayed onset and resolution of pain: some observations and implications. Brain. 2001; 124(Pt 6): 1067–76. PubMed Abstract | Publisher Full Text\n\nAttal N, Chen YL, Kayser V, et al.: Behavioural evidence that systemic morphine may modulate a phasic pain-related behaviour in a rat model of peripheral mononeuropathy. Pain. 1991; 47(1): 65–70. PubMed Abstract | Publisher Full Text\n\nOssipov MH, Lopez Y, Nichols ML, et al.: The loss of antinociceptive efficacy of spinal morphine in rats with nerve ligation injury is prevented by reducing spinal afferent drive. Neurosci Lett. 1995; 199(2): 87–90. PubMed Abstract | Publisher Full Text\n\nFreynhagen R, Geisslinger G, Schug SA: Opioids for chronic non-cancer pain. BMJ. 2013; 346: f2937. PubMed Abstract | Publisher Full Text\n\nFields HL: The doctor's dilemma: opiate analgesics and chronic pain. Neuron. 2011; 69(4): 591–4. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHorvath RJ, Romero-Sandoval EA, De Leo JA: Glial Modulation in Pain States: Translation into humans. In Kruger L, Light AR, editors. Translational Pain Research: From Mouse to Man. Boca Raton, FL: CRC Press, 2010; 215–234. PubMed Abstract\n\nRaghavendra V, Rutkowski MD, De Leo JA: The role of spinal neuroimmune activation in morphine tolerance/hyperalgesia in neuropathic and sham-operated rats. J Neurosci. 2002; 22(22): 9980–9. PubMed Abstract\n\nHama A, Sagen J: Altered antinociceptive efficacy of tramadol over time in rats with painful peripheral neuropathy. Eur J Pharmacol. 2007; 559(1): 32–7. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nCampana WM, Li X, Shubayev VI, et al.: Erythropoietin reduces Schwann cell TNF-alpha, Wallerian degeneration and pain-related behaviors after peripheral nerve injury. Eur J Neurosci. 2006; 23(3): 617–26. PubMed Abstract | Publisher Full Text\n\nElfar JC, Jacobson JA, Puzas JE, et al.: Erythropoietin accelerates functional recovery after peripheral nerve injury. J Bone Joint Surg Am. 2008; 90(8): 1644–53. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLi X, Gonias SL, Campana WM: Schwann cells express erythropoietin receptor and represent a major target for Epo in peripheral nerve injury. Glia. 2005; 51(4): 254–65. PubMed Abstract | Publisher Full Text\n\nCampana WM, Myers RR: Exogenous erythropoietin protects against dorsal root ganglion apoptosis and pain following peripheral nerve injury. Eur J Neurosci. 2003; 18(6): 1497–506. PubMed Abstract | Publisher Full Text\n\nJohansson A, Bennett GJ: Effect of local methylprednisolone on pain in a nerve injury model. A pilot study. Reg Anesth. 1997; 22(1): 59–65. PubMed Abstract | Publisher Full Text\n\nDevor M: The pathophysiology of damaged peripheral nerves. In: Wall PD, Melzak R, editors. Textbook of Pain. 3rd ed. London: Churchill-Livingston; 1994; 79–100.\n\nDilley A: ARA290 in a rat model of inflammatory pain. Methods Mol Biol. 2013; 982: 213–25. PubMed Abstract | Publisher Full Text\n\nPulman KG, Smith M, Mengozzi M, et al.: The erythropoietin-derived peptide ARA290 reverses mechanical allodynia in the neuritis model. Neuroscience. 2013; 233: 174–83. PubMed Abstract | Publisher Full Text\n\nSwartjes M, Niesters M, Dahan A: Assessment of allodynia relief by tissue-protective molecules in a rat model of nerve injury-induced neuropathic pain. Methods Mol Biol. 2013; 982: 187–95. PubMed Abstract | Publisher Full Text\n\nRichards N, McMahon SB: Targeting novel peripheral mediators for the treatment of chronic pain. Br J Anaesth. 2013; 111(1): 46–51. 
PubMed Abstract | Publisher Full Text\n\nBrines M: Discovery of a master regulator of injury and healing: tipping the outcome from damage toward repair. Mol Med. 2014; 20(Suppl 1): S10–6. PubMed Abstract | Publisher Full Text | Free Full Text\n\nJi RR, Xu ZZ, Gao YJ: Emerging targets in neuroinflammation-driven chronic pain. Nat Rev Drug Discov. 2014; 13(7): 533–48. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHayashi A, Pannucci C, Moradzadeh A, et al.: Axotomy or compression is required for axonal sprouting following end-to-side neurorrhaphy. Exp Neurol. 2008; 211(2): 539–50. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKrafft RM: Trigeminal neuralgia. Am Fam Physician. 2008; 77(9): 1291–6. PubMed Abstract\n\nUeda H: Peripheral mechanisms of neuropathic pain - involvement of lysophosphatidic acid receptor-mediated demyelination. Mol Pain. 2008; 4: 11. PubMed Abstract | Free Full Text\n\nSeltzer Z, Devor M: Ephaptic transmission in chronically damaged peripheral nerves. Neurology. 1979; 29(7): 1061–4. PubMed Abstract | Publisher Full Text\n\nMackinnon SE, Dellon AL, Hudson AR, et al.: Chronic human nerve compression--a histological assessment. Neuropathol Appl Neurobiol. 1986; 12(6): 547–65. PubMed Abstract | Publisher Full Text\n\nBisby MA, Pollock B: Increased regeneration rate in peripheral nerve axons following double lesions: enhancement of the conditioning lesion phenomenon. J Neurobiol. 1983; 14(6): 467–72. PubMed Abstract | Publisher Full Text\n\nDanielsson P, Dahlin L, Povlsen B: Tubulization increases axonal outgrowth of rat sciatic nerve after crush injury. Exp Neurol. 1996; 139(2): 238–43. PubMed Abstract | Publisher Full Text\n\nPolomano RC, Mannes AJ, Clark US, et al.: A painful peripheral neuropathy in the rat produced by the chemotherapeutic drug, paclitaxel. Pain. 2001; 94(3): 293–304. 
PubMed Abstract | Publisher Full Text\n\nMartin YB, Herradón G, Ezquerra L: Uncovering new pharmacological targets to treat neuropathic pain by understanding how the organism reacts to nerve injury. Curr Pharm Des. 2011; 17(5): 434–48. PubMed Abstract | Publisher Full Text\n\nMackinnon SE, Dellon AL, Hudson AR, et al.: Histopathology of compression of the superficial radial nerve in the forearm. J Hand Surg Am. 1986; 11(2): 206–10. PubMed Abstract | Publisher Full Text\n\nRempel D, Dahlin L, Lundborg G: Pathophysiology of nerve compression syndromes: response of peripheral nerves to loading. J Bone Joint Surg Am. 1999; 81(11): 1600–10. PubMed Abstract\n\nSommer C, Galbraith JA, Heckman HM, et al.: Pathology of experimental compression neuropathy producing hyperesthesia. J Neuropathol Exp Neurol. 1993; 52(3): 223–33. PubMed Abstract | Publisher Full Text\n\nSchmid AB, Coppieters MW, Ruitenberg MJ, et al.: Local and remote immune-mediated inflammation after mild peripheral nerve compression in rats. J Neuropathol Exp Neurol. 2013; 72(7): 662–80. PubMed Abstract | Publisher Full Text\n\nCox TR, Erler JT: Remodeling and homeostasis of the extracellular matrix: implications for fibrotic diseases and cancer. Dis Model Mech. 2011; 4(2): 165–78. PubMed Abstract | Publisher Full Text | Free Full Text\n\nJensen TS, Baron R, Haanpää M, et al.: A new definition of neuropathic pain. Pain. Netherlands, 2011; 152(10): 2204–5. PubMed Abstract | Publisher Full Text\n\nHandorf AM, Zhou Y, Halanski MA, et al.: Tissue stiffness dictates development, homeostasis, and disease progression. Organogenesis. 2015; 11(1): 1–15. PubMed Abstract | Publisher Full Text | Free Full Text\n\nColeman LS: A stress repair mechanism that maintains vertebrate structure during stress. Cardiovasc Hematol Disord Drug Targets. 2010; 10(2): 111–37. PubMed Abstract | Publisher Full Text\n\nEming SA, Hammerschmidt M, Krieg T, et al.: Interrelation of immunity and tissue repair or regeneration. 
Semin Cell Dev Biol. 2009; 20(5): 517–27. PubMed Abstract | Publisher Full Text\n\nMason BN, Califano JP, Reinhart-King CA: Matrix Stiffness: A Regulator of Cellular Behavior and Tissue Formation. S.K. Bhatia, editor: Springer Science; 2012; 19–37. Publisher Full Text\n\nHama AT, Borsook D: Behavioral and pharmacological characterization of a distal peripheral nerve injury in the rat. Pharmacol Biochem Behav. 2005; 81(1): 170–81. PubMed Abstract | Publisher Full Text\n\nFlecknell P: Replacement, reduction and refinement. ALTEX. 2002; 19(2): 73–8. PubMed Abstract\n\nHannaman MR, Fitts DA, Doss RM, et al.: Dataset 1 in: The refined biomimetic NeuroDigm GEL™ Model of neuropathic pain in the mature rat. F1000Research. 2016. Data Source"
}
|
[
{
"id": "16989",
"date": "26 Oct 2016",
"name": "Odd-Geir Berge",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThis article presents a novel model of neuropathy with slow onset of symptoms (increased responsiveness to mechanical stimuli) that, once established, persist throughout the study period of several months. Another feature of the model is the use of adult rats. The model is characterized by means of some commonly used drugs, histology, and, as a pilot study added after conclusion of the main study, administration of erythropoietin to a subgroup of animals.\nWhile the experimental procedures as such appear to be technically solid, reservations may be raised concerning terminology, presentation of background including literature references, translational context, discussion, and validity of conclusions. The structure of the paper can be improved by prioritizing and moving content to relevant sections, which would facilitate reading and increase the focus on important findings. In addition, the erythropoietin “pilot study” was carried out under less stringent conditions and may perhaps be reported as a separate short or preliminary paper.\n\nMajor concerns:\nThe authors equate responses evoked by mild mechanical stimuli (von Frey stimulation, gentle brushing, pin prick) with pain behavior and clinical pain. Even terms like allodynia and hyperalgesia are used to describe these responses although the paper provides no evidence that there is any pain involved in the model or that these responses are valid surrogate markers for clinically relevant pain. 
These types of stimuli are undoubtedly useful both in psychophysical studies and quantitative sensory testing but rarely used to evaluate analgesic efficacy in clinical trials. I would suggest using neutral, descriptive terms throughout the manuscript and bringing up the translational relevance in the Discussion, taking into account the extensive literature on this topic.\nThe pharmacology is interpreted in terms of analgesia, which is questionable since there is no evidence for pain in the model. The drugs used, perhaps with the exception of celecoxib in low doses, have known non-analgesic effects that may interfere with evoked responses. While pharmacological characterization of a model is useful for various reasons, clinical relevance is a different matter and clinical efficacy is generally poorly predicted by animal models of neuropathy. This is arguably the case here, where the efficacy of the tested drugs is much more robust than in well-controlled clinical trials and the order of efficacy, in spite of what is claimed, is at variance with the clinical data (for a recent, comprehensive review, see Finnerup et al.1). It should be pointed out that comparing drug efficacy in single-dose experiments has limited utility, especially when pharmacokinetic data are lacking. Relating drug doses in this animal study to human clinical data is of course relevant but should be done in the Discussion with due reference to the many limitations of the current approach.\nThe use of literature references is in part inadequate. For instance, the short first paragraph of the introduction contains no fewer than 30 references and it is hard to identify the ones that may have relevance for the statements in the text. Many of these references deal with conditions that are connected to the topic of the present study only in a general sense and it is not obvious how they may support the arguments. 
Since delayed development of sensory changes seems to be a main feature of the described model, specific reference to one or two key clinical papers would be helpful. It may be necessary to carefully match the text of the manuscript with the references and perhaps delete sections or statements that are not supported by literature; alternatively, to reorganize the text so that speculations and personal opinions are clearly differentiated from information supported by evidence in the literature.\n\nSpecific suggestions & comments\nTitle (and in other places): The described method is referred to as “refined”, but the comparator is not defined in the manuscript.\n\nAbstract: Background: The paragraph should be refocused to state the objectives / hypotheses of the study.\nMethods: The first sentence belongs to the Results paragraph. Mention test methods and readouts. Some details, e.g. age of rats, doses, and routes of administration of drugs, would give the reader a chance to understand what to expect from the paper. Generic drug names should not be capitalized (correct in the following section).\nResults: As in the rest of the manuscript, avoid controversial terms like “pain behavior” and “analgesia”.\nConclusion: The text is inaccurate in suggesting that there is direct evidence for pain behavior in the model and that this behavior is related to remodeling; the difference between the sham and gel treatments is mostly quantitative, allowing for other interpretations. It is not obvious that the effects of analgesics in the model reflect clinical efficacy.\n\nIntroduction: In addition to the more general points raised above, concentrating the text on the present model into a single paragraph with a clear statement of objective may facilitate reading and sharpen focus. 
The text regarding the relevance of animal models does not adequately address the rather extensive discussion that has been ongoing at meetings and in the literature for quite a while, and the argument put forward is unspecific. There is room for a more complete treatment in the Discussion, where a number of issues relevant to the present study could be addressed, e.g. interpretation of readouts and the importance of pharmacokinetic factors. In the introduction, perhaps pointing to the expected advantages of the present paradigm compared to previous work would suffice.\n\nMaterials and methods: The bodyweight of animals at the time of testing would also be interesting.\nP5, left column, 4th paragraph: How were the forces of the von Frey filaments confirmed? Are these filaments stable under changing environmental conditions?\nP5, last paragraph: the meaning of the phrase “all with no change in pain behavior noted\" is unclear.\nSome text could be deleted or moved to the discussion to provide easier reading, e.g. p5, left column, 3rd paragraph (\"Measures of …\"), right column 2nd paragraph (\"The original doses ...\")\n\nResults: In general, this part of the manuscript would benefit from limiting the text to what is necessary to understand the data, leaving discussion to the Discussion. Even figure legends should be edited to remove interpretations, speculations and explanation of what can be easily seen from the graphs. All symbols should be explained in the legends. Number of animals, indication of statistical analysis (with details in methods and results), and how the means are calculated could be given in the legends and thus allow for a more succinct main text.\nP7 & 8: “Results of behavioral data” - the first four paragraphs should be deleted or condensed and moved to the discussion (with appropriate succinct information added to the figure legends). 
The following paragraphs “Days of data...\" could be condensed to a few descriptive sentences and the justification for various procedures moved to the discussion. I would suggest that Table 1 and associated text be moved to Supplemental material or deleted altogether; minor changes in the results may change the numbers significantly, so the value of these calculations for future studies would be too limited to warrant an extensive presentation as in the present manuscript.\nP9, Section on von Frey: For consistency: mention the results of the GEL group. The sentence “Asterisks … “ should be moved to the figure legend (this also applies to the following sections).\nP10: “Individual data...”: It is difficult to see what this analysis contributes to the paper – in my view it distracts from the main findings and should be deleted or moved to Supplemental material. The following paragraph “Summary ...” is redundant; the “Factor influencing ...” is anecdotal and probably best deleted or, alternatively, mentioned in the Discussion. “Results of experiments...”: the information in the first paragraph fits better in other parts of the manuscript. Section on morphine: the first sentence is redundant / irrelevant. Later in the paragraph (p 11), the explanation of asterisks can be removed twice; it is adequately explained in the figure legend (the same applies to text later in the Results). The last paragraph on morphine does not appear to add value.\np14: “Summary...” is redundant.\n\nDiscussion: I would recommend an extensive revision of the Discussion, addressing methodological aspects and taking into account the limitations of the approach, some of which are delineated above. Careful scrutiny of the references to make sure that they are fair and representative, as well as addressing the statements in the text, would be another recommendation, also pertaining to the rest of the manuscript. 
As indicated above, there may be material in previous sections of the manuscript that would fit in the Discussion.\n\nConclusion: Appears more as a continuation of the discussion and should be revised, preferably to reflect the objectives of the study and the initial hypotheses.",
"responses": [
{
"c_id": "2639",
"date": "04 May 2017",
"name": "Mary Hannaman",
"role": "Author Response",
"response": "The in-depth analysis you provided was carefully examined and applied. Some areas were retained based on the comments of another, as in Table 1, which has the standard deviations desired by some. In this lengthy research report, discussions of translational context and pharmacokinetics are not covered. The references have been abridged for relevancy, as suggested. The hypothesis and objective have been included in the Abstract and Conclusion. Paragraphs have been moved to more relevant headings. Summaries were deleted. Clarifications regarding refinement are discussed in the revision (Supplement S3). Early recognition of a hormonal influence on behavioral inconsistencies was crucial to the study’s completion. If the role of estrogen had not been realized the study would have been terminated. The very thorough paper by Finnerup et al 2015 “Pharmacotherapy for neuropathic pain in adults: a systematic review and meta-analysis” (1) classifies gabapentin and duloxetine as first-line drugs for neuropathic pain with a “high quality of evidence” for both, and strong opioids as a third-line choice. These analgesic classifications are supported by the findings in our study. We agree that “clinical efficacy is generally poorly predicted by animal models of neuropathy” since their pathophysiology does not reflect what usually happens in patients. We also consider the traditional neuropathic pain assays of mechanical hypersensitivity used for decades in research labs, that we used, to have unrecognized translational merit — as you note they are “rarely used to evaluate analgesic efficacy in clinical trials”, but should be considered in our estimation. As suggested, an explanation of the validity of the conclusions has been added to the revision in the Discussion.References 1. Finnerup NB, Attal N, Haroutounian S, McNicol E, Baron R, Dworkin RH, Gilron I, Haanpää M, Hansson P, Jensen TS, Kamerman PR. 
Pharmacotherapy for neuropathic pain in adults: a systematic review and meta-analysis. The Lancet Neurology. 2015 Feb 28;14(2):162-73."
}
]
},
{
"id": "17230",
"date": "14 Nov 2016",
"name": "Michael Brines",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis manuscript describes an interesting rodent model of chronic pain. The basic premise is that local injection of a mixture of biological materials typical of the extracellular matrix following tissue injury activates tissue repair processes which ultimately causes a constrictive nerve injury associated with the development of pain behaviors. However, the view that traumatic injury is not involved in the development of this model must be incorrect, as the sham animals also develop pain behaviors, albeit to a lesser degree and with a delay compared to the active procedure arm. The biological processes involved are well schematized in Figure S1 and underscores that the involvement of both injury and repair processes in the development of the neuropathic state. Unfortunately, pathological evaluation was performed only months after the initiating lesion, long after the acute and subacute effects of the procedure had resolved. If the assumption that purely repair processes are involved in this model, it would be necessary to evaluate in a longitudinal manner molecular and other markers of tissue damage and repair. For example, the presence or absence of inflammatory cells and quantification of pro-inflammatory cytokines at the site of injection of the sciatic nerve in the days and weeks following the experimental procedure would determine to what extent inflammation is involved. 
A better description of the pathophysiology of this model would be that of a delayed neuropathic state developing after mild peripheral nerve trauma.\nIn terms of general issues, the behavioral methodological details are unusually extensive and, to aid readability of the manuscript, much could be moved to Supplemental Materials. The authors are to be congratulated on following very rigorous blinding procedures for the behavioral testing, which are often lacking in published descriptions of neuropathy models. These greatly strengthen the behavioral observations. A particularly interesting observation was the effect of estrogen-containing cream present on the experimenter on the pain behavior observed, which further underscores the need for great care in limiting potentially confounding variables that reduce the fidelity of experimental observations.\nIn spite of the detailed description of the experimental procedures, one question arises: do the authors believe that the injection site targets only the tibial nerve? If so, the exact testing location on the plantar surface will be important, i.e., whether in the tibial or sural nerve distribution. Many peripheral nerve models are characterized by sprouting from the adjacent sural distribution and this possibility needs to be evaluated.\nAdditionally, there is a redundant exposition of experimental results: as just two examples, Figures 2 and 5 present the same data (Figure 2 is perhaps more useful for the reader), as do Figures 5-6. Streamlining the text and exposition of data, i.e. methods and results, would make the manuscript more readable. The “Summary of pain behavior” and “Conclusion” are redundant in my view.\nThe conclusion in the discussion that this model is one of focal peripheral nerve injury is premature, as evaluation of the spinal cord and dorsal root ganglia was not undertaken. 
The presence of contralateral behavioral findings, even within the sham group, implies that there is at the least critical spinal cord involvement in this pain model. The authors correctly point out that additional studies focusing on the central nervous system are needed to more completely define the pathophysiology of this model.\nThe inclusion of EPO-mediated pain improvement raises more questions than it answers and appears almost as an afterthought. Unfortunately, there is not enough detail, especially at the pathological level, to confirm that the effect is via “activation of repair”. The extremely fast response, evidently occurring over a few days, is not consistent with true repair, which requires a much longer timespan. Additionally, the differentiation between local and systemic effects of EPO assumed in this study may not be correct. The dose of EPO administered (400IU/kg) is at the lower limit needed to exhibit beneficial effects in other neurological models and, further, administration via the SC route would reduce the peak plasma EPO concentration. An intraperitoneal or intravenous route would have been preferable. Perhaps this observation would be better moved to supplementary material and speculation on the underlying biology limited.\nIn conclusion, this model is interesting and the very detailed methodology makes it a useful addition to the study of pharmacological treatment of pain behavior. The clear distinction between classes of agents used as treatment for neuropathic symptoms increases optimism that the model can be useful for screening potential efficacy of new compounds or evaluating novel methods of treatment using existing pharmacotherapy.",
"responses": [
{
"c_id": "2640",
"date": "04 May 2017",
"name": "Mary Hannaman",
"role": "Author Response",
"response": "The value of your knowledge of tissue repair is useful in understanding the hypothesis of our model. We realize our focus on the later stages of our model presents frustrations. For this study we merely wanted to develop and then treat the chronic pain behavior as seen in patients. Once established as being representative of patients, earlier studies can be done. Also the extensive testing limited the number of rodents able to be used for specimens in this lengthy study. The paper’s main tissue focus is on the neural histology findings long after pain behaviors are established, limiting any explanation of the exact pathophysiology. The hydrogel does not directly cause traumatic injury or acute inflammation in the first 14 days, as evidenced by the lack of significant pain behaviors, erythema, cyanosis, edema, or altered gait. Interestingly, the shams that developed late onset pain behaviors had no evidence of tissue damage on light microscopy, and may represent an obscure neural “dysfunction”. Regarding the redundant exposition of experimental results: We agree that Figure 2 is in many ways more informative, and we used Figure 5 to provide a formal statistical analysis of the null hypothesis. Omitting Figure 5 would omit the statistical analysis. We agree that EPO does not likely alleviate the pain through tissue repair. The erythropoietin’s neuroprotective mechanisms are not known for this neural model and need further study. The EPO dose in mg/kg units is close to 300 units/kg. The systemic EPO dose was subcutaneous to limit stress, tissue damage and complications. The repeat injection of EPO in 2/5 GEL rats discussed in paper suggests localized EPO placement is critical for this low dose to cause a reversal effect. You are correct in stating that this model of focal neuritis creating neuropathic pain may involve more than the discrete site on the nerve. 
The spinal cord and brain are likely involved, and we saved the brains, as well as non-neural tissue, of the rats for further study. Your other concerns have been incorporated in the revision."
}
]
},
{
"id": "16966",
"date": "15 Nov 2016",
"name": "Gillian L. Currie",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe research article “The refined biomimetic NeuroDigm GELTM Model of neuropathic pain in the mature rat” describes a new neuropathic pain animal model designed to be more relevant to the development of chronic neuropathic pain in humans. The authors describe a model using mature adult rats with a percutaneous implant of GELTM into the tibial nerve. The model is characterised for 5 months by assessing pain-related behavioural responses to mechanical stimuli and the effect of morphine, celecoxib, gabapentin and duloxetine. Histology of the nerve was also assessed. A pilot study is also reported assessing the pain-related behavioural response to the injection of human erythropoietin.\n\nThe rationale for the development of a model of neuropathic pain that more closely mimics the human condition is sound. I agree that the most commonly used neuropathic pain models do not mirror the pathophysiology of the delayed onset of neural pain without debility as seen in many neuropathic pain patients. The authors provide detailed description of their methods and rationale and include an ARRIVE guidelines checklist.\n\nI am concerned that the Sham animals are not used as controls to characterise this model. The authors acknowledge that the sham animals are not homogenous in their pain-related behavioural responses (5 out of 8 animals develop pinprick-induced pain-related behaviours). 
However, I believe that this is the most appropriate control for this new model of neuropathic pain.\n\nI disagree with the claim that this model meets the NC3Rs criteria for refinement. Refinement refers to methods that minimise the pain, suffering, distress or lasting harm that may be experienced by animals. The longer duration of this model and the development of similar pain-related behaviours as observed in other neuropathic pain models do not meet the refinement criteria.\n\nSpecific recommendations:\nTitle\nThe description of the model as refined needs to be clarified in the article. How is this a refined model?\n\nAbstract\nResearch objectives should be stated in the abstract.\nThe strain of animal should be stated in the abstract.\nI do not think the results of the pilot study should be stated in the conclusions of the abstract. The main study characterising the model, which is appropriately powered, should be the focus.\n\nIntroduction and background\nIt would be useful to use more specific references for the type of pain that this model is meant to be modelling.\n\nMaterials and Methods\nAnimals were randomly assigned to groups as stated in the Methods section Study Design paragraph. Please state how animals were randomly allocated to groups. Also, state whether animals were randomly assigned to analgesic treatment groups and how this was carried out.\n\nResults\nThe data support the claim that the model does develop pain-related behaviours that develop gradually and persist for months compared to control animals. However, should the comparison be to sham animals?\nDo the analgesic responses reflect human responses?\nAll figures: For presentation of results in the figures I recommend the use of standard deviations, not standard error of the mean1.\nFigure 5: The Sham animals also show an increase in pain behaviour from baseline and this should be indicated with asterisks.\nFigures 7-10: How many animals were tested in the analgesic drug experiments? 
This should be clearly stated in the results section and in the figure legend. Were the same animals used for each drug? If so, this should be clearly stated.\nThe authors give a thorough and transparent description of their data and analysis choices. However, I recommend that a statistician assesses the statistical methods. For example, I question the use of the Fisher’s Protected Least Significant Difference test as this does not account for multiple comparisons. I also query the use of the Bonferroni-protected contrast because, as I understand it, this should only be used following a significant ANOVA result.\nDiscussion\nStudy limitations should be explored. For example, the use of only reflex behaviours to measure pain-related behaviours.\nThe paragraph outlining the implications to the 3Rs should be changed as I do not believe that this model is a refinement of the use of animals in research. This paragraph should also be moved from the conclusions section. Although, it should be noted that in the future if it does provide a more reliable model of human neuropathic pain then it has the potential to reduce the number of animals used in models that are not clinically relevant.",
"responses": [
{
"c_id": "2641",
"date": "04 May 2017",
"name": "Mary Hannaman",
"role": "Author Response",
"response": "Your perspective is appreciated and your concerns have been addressed. EPO pilot study power statistics have been added to the revision. The initial sample size was based on prior investigations. The Supplemental S3 refinement chart we added depicts how the refinement of a scientific procedure (as referred to by NC3Rs) that limits tissue damage can reduce acute pain, eliminate paw dragging, limb deformities, and self-mutilation. If rodent refinements reduced or eliminated pain their potential as models could be lost. Your pertinent question about the type of pain the model represents is best answered simply. We strived to elicit the types of evoked pain behaviors referred to as allodynia and hyperalgesia (1) that may develop gradually in patients after soft tissue trauma. We were trying to mimic in this study the chronic pain sustained for years as seen in humans. Most investigations would not need to be months long and studies can be shortened to any time after post procedure day 23. However, the longer possible duration of this study can possibly reduce the number of animals used in future studies. There is no homogeneous “sham group” represented by the data that could be used as a control over time. After 3 months 5/8 shams developed pain behavior and there was no single sham animal that had intermediate behavior in the values represented by the graphs. We took the unusual step of presenting the sham data individually to be perfectly clear about what happened in that interesting group. We purposely did not include asterisks in Figure 5, because it would be misleading. The shams’ individual data in Figure 6 resembles the human response — not all humans get neuropathic pain after a soft tissue injury.The EPO pilot study also helps characterize the GEL model by showing the analgesic effect of a localized biologic, which has not been demonstrated in the current models. 
The second author (DAF) consulted on experimental design and data display and conducted the statistical analysis of the behavioral data. Aside from expertise in neurobiology and IACUC regulations, he has taught undergraduate statistics and published several articles (2–6) on experimental design, simulation, and ANOVA. Regarding the standard deviation instead of standard error, this recommendation by Lang and Altman (7) is curious because they offer no rationale for it in their paper. Use of standard deviations allows estimation of standardized effect sizes, whereas use of standard errors allows estimation of inference (null hypothesis test or confidence interval). Those authors might have had in mind that the use of standard errors without indication of corresponding sample sizes would rule out the estimation of standardized effect sizes by subsequent readers or meta-analyses. That is not the case in our paper because we explicitly include estimates of standardized effect sizes in Table 1. Thus, there is no reason to prefer standard deviations over standard errors, and the latter assist in informal estimation of significance for the many contrasts that we did not explicitly test. Fisher's protected least significant differences (PLSD) and Bonferroni: Note that planned comparisons can always be tested, unlike post-hoc comparisons (data snooping). We tested only planned comparisons in this paper. We quote the recommendations of experts Milliken and Johnson (8): Conduct an F-test for equal means. If the F-statistic is significant at the 5% level, make any planned comparisons you wish to make by using the LSD method. This includes not only comparisons between pairs of means but also comparisons based on any selected contrasts of the µi’s. If one has equal sample sizes, the Waller-Duncan method can also be used. For data snooping and unplanned comparisons, use Scheffe’s method. 
If the F-statistic for equal means is not significant, the experimenter should still consider any individual comparisons that he or she had planned, but should do so using either the multivariate t-distribution method or Bonferroni’s method. The experimenter should not do any data snooping in this case. Since the F-test for equal means is nonsignificant, Scheffe’s procedure would not yield any significant differences anyway. Your insightful question “Do the analgesic responses reflect human responses?” highlights a crucial issue that cannot be accurately answered despite extensive discussion in the literature. Presently “pain” in patients and experimental rodents is assessed by different assays. With similar “pain assays” the validity of such translational comparisons can be improved. References 1. Baron R. Mechanisms of disease: neuropathic pain—a clinical perspective. Nature clinical practice Neurology. 2006 Feb 1;2(2):95-106. 2. Fitts DA. Misuse of ANOVA with cumulative intakes. Appetite. 2006 Jan 31;46(1):100-2. 3. Fitts DA. Improved stopping rules for the design of efficient small-sample experiments in biomedical and biobehavioral research. Behavior research methods. 2010 Feb 1;42(1):3-22. 4. Fitts DA. The variable-criteria sequential stopping rule: generality to unequal sample sizes, unequal variances, or to large ANOVAs. Behavior research methods. 2010 Nov 1;42(4):918-29. 5. Fitts DA. Ethics and animal numbers: informal analyses, uncertain sample sizes, inefficient replications, and type I errors. Journal of the American Association for Laboratory Animal Science. 2011 Jul 15;50(4):445-53. 6. Fitts DA. Minimizing animal numbers: the variable-criteria sequential stopping rule. Comparative medicine. 2011 Jun 15;61(3):206-18. 7. Lang T, Altman D: Basic statistical reporting for articles published in clinical medical journals: the SAMPL Guidelines. Science Editors' Handbook, European Association of Science Editors. 2013. 8. Milliken GA, Johnson DE. 
Analysis of messy data, Volume I: Designed experiments. Wadsworth. Inc. Belmont, California. 1984."
}
]
}
] | 1
|
https://f1000research.com/articles/5-2516
|
https://f1000research.com/articles/6-621/v1
|
03 May 17
|
{
"type": "Research Note",
"title": "SNP-SNP interactions as risk factors for aggressive prostate cancer",
"authors": [
"Venkatesh Vaidyanathan",
"Vijay Naidu",
"Nishi Karunasinghe",
"Anower Jabed",
"Radha Pallati",
"Gareth Marlow",
"Lynnette R. Ferguson",
"Vijay Naidu",
"Nishi Karunasinghe",
"Anower Jabed",
"Radha Pallati",
"Gareth Marlow",
"Lynnette R. Ferguson"
],
"abstract": "Prostate cancer (PCa) is one of the most significant male health concerns worldwide. Single nucleotide polymorphisms (SNPs) are becoming increasingly strong candidate biomarkers for identifying susceptibility to PCa. We identified a number of SNPs reported in genome-wide association analyses (GWAS) as risk factors for aggressive PCa in various European populations, and then defined SNP-SNP interactions, using PLINK software, with nucleic acid samples from a New Zealand cohort. We used this approach to find a gene x environment marker for aggressive PCa, as although statistically gene x environment interactions can be adjusted for, it is highly impossible in practicality, and thus must be incorporated in the search for a reliable biomarker for PCa. We found two intronic SNPs statistically significantly interacting with each other as a risk for aggressive prostate cancer on being compared to healthy controls in a New Zealand population.",
"keywords": [
"prostate cancer",
"SNP genotyping",
"SNP-SNP interaction",
"SEQUENOM MassArray technology"
],
"content": "Introduction\n\nProstate cancer (PCa) is highly prevalent, and around 1 in 6 patients are at risk of developing the aggressive form of the disease1. It has become one of the most significant male health concerns worldwide2. An individual is diagnosed as having high-risk or aggressive PCa based on the classification by the American Urological Association3, when the clinical T stage ≥cT2c, and/or the Gleason score ≥8, and/or the serum prostate serum antigen (PSA) level >20ng/ml4.\n\nAlthough a hereditary aspect is well known for this disease5, various studies have also shown that genetic interactions with biological and behavioral factors play an important role in the overall risk and prognosis of PCa6–8. Variations in the genome are a major contributor to the differences in disease susceptibly amongst individuals9. Single nucleotide polymorphisms (SNPs) are the most commonly identified variations in a genome.\n\nAnalysing the role of SNP-SNP interactions and epistasis10 is very appealing among researchers working on risk factors for various cancers11–13, including prostate cancer14. Here we have identified a SNP-SNP interaction as a risk factor for aggressive PCa, by comparing the data generated after carrying out SNP genotyping using the SEQUENOM MassARRAY iPLEX® assay, and the TaqMan® assay (depending on the gene of interest) from the DNA extracted from blood samples. These samples were taken from a New Zealand cohort of men with self-reported European ethnicity that have been clinically diagnosed with aggressive and non-aggressive PCa, and healthy controls with no reported symptoms of the disease. Symptoms include increased urination during night time along with a frequent urge to urinate problems maintaining a steady flow of urine, hematuria and dysuria15. 
Our results indicate a strong influence of gene x environment interaction in overall gene expression and epistasis.\n\n\nMethods\n\nPatients with a clinically established diagnosis of PCa (aggressive and non-aggressive) from the Auckland Regional Urology Registry (Auckland, Middlemore, and North Shore hospitals), and certain private practices in the Waikato region of New Zealand, were sent invitations along with written consent forms to participate in this study between the years 2006 and 2014. Eventually, a total of 254 patients with various grades of PCa voluntarily participated in our study after providing us with written informed consent (Ethics reference NTY05/06/037 by Northern B Ethics Committee, New Zealand, previously Northern Y Ethics Committee, New Zealand). Additionally, 369 males from the Auckland region of New Zealand with no reported symptoms of PCa were considered healthy controls for this study (Ethics reference NTY/06/07/AM04 by Northern B Ethics Committee, New Zealand, previously Northern Y Ethics Committee, New Zealand), recruited by advertising in and around the University of Auckland. Written informed consent for participation in the study was also obtained from the male healthy controls.\n\nEach individual participating in this study completed a demographic and lifestyle questionnaire. Because of the influence of age in this disease16, care was taken to invite men who were between 40 and 90 years of age at the time of diagnosis for patients with PCa, and at the time of recruitment for healthy controls (Dataset 117). The average age was 66 years for men with aggressive PCa, 67 years for men with non-aggressive PCa and 58 years for healthy controls.\n\nPatient blood samples were collected at respective outpatient clinics at the Auckland, North Shore and Counties Manukau Hospitals, New Zealand. 
The blood samples of healthy controls were collected at the Faculty of Medical and Health Sciences, the University of Auckland, New Zealand and the New Zealand Blood Bank, Great South Road Centre, Epsom, Auckland, New Zealand.\n\nBlood samples from each participant were collected in Vacutainer® tubes (Becton Dickinson) containing EDTA by a trained phlebotomist. DNA was extracted using a QIAamp genomic DNA kit (Qiagen, Hilden, Germany) following the manufacturer’s protocol, with the aid of a fully automated QIAcube (Qiagen, Hilden, Germany). The DNA samples were diluted to 5.0ng/μl as per the requirements of the SEQUENOM MassARRAY iPLEX® assay protocol.\n\nA total of 136 SNPs, located in 66 genes and some undefined chromosomal locations, were identified by a thorough literature search of GWAS for both aggressive PCa and PCa. Care was taken to select only SNPs that were identified as significantly associated with risks for PCa and aggressive PCa in European populations. Only research papers published in or after the year 2000 were considered, in order to be in concordance with current trends in PCa research. The final selection of SNPs to be genotyped, either by the SEQUENOM MassARRAY iPLEX® assay or by the TaqMan® SNP genotyping assay, was at the research team’s discretion, as discussed in Vaidyanathan et al., 201718.\n\nSNP genotyping by the SEQUENOM MassARRAY iPLEX® assay for the candidate SNPs was carried out in the Auckland UniServices Sequenom Facility at The Liggins Institute, Auckland, and AgResearch Limited, Mosgiel, New Zealand, using a custom-designed multiplex gene panel and iPlex chemistry. Genotype calling was carried out using the standard post-processing calling parameters from the SEQUENOM Type 4.0 software.\n\nSNP genotyping using the TaqMan® SNP genotyping assay (Applied Biosystems, ABI) was carried out on a panel of genes that failed to be genotyped using the SEQUENOM MassARRAY iPLEX® assay. 
The primers used were either obtained pre-designed from ABI or were custom-made using the Assay-by-Design service from ABI, and the protocol provided by the manufacturers was followed7,18–20.\n\nTwenty-six SNPs were removed for being in linkage disequilibrium, and a further 5 SNPs were removed for failing the Hardy-Weinberg equilibrium (HWE) test in the healthy controls, thereby reducing the total number of SNPs analyzed to 105 (colour coded in Dataset 117). SNPs that failed the HWE test in patients with PCa were still considered for analysis, as these SNPs may have failed to be in equilibrium in the patient population due to the influence of the risk allele, and hence should not be excluded from a case-control study like ours21,22. Statistical significance was set at p≤0.000123.\n\nAnalysis of the data for SNP-SNP interactions associated with aggressive PCa was carried out using PLINK software version 1.0718,23. PLINK’s clustering approach, identity-by-state (IBS) clustering, pairs up individuals based on the similarity of their genotypes, and is used to test whether two individuals belong to the same population or not18. Following this stratification, we performed a standard case-control association test using the Cochran-Mantel-Haenszel test (1 degree of freedom) to analyse the SNP-disease association conditional on the clustering18. The slower ‘--epistasis’ command was used to test for epistasis using logistic regression23. It is the most accurate test for defining SNP-SNP interactions using PLINK23.\n\n\nResults\n\nTable 1 shows the statistically significant SNP-SNP interaction discovered in patients with aggressive PCa when compared to healthy controls. The results obtained for the other categorical analyses are not discussed here, as they were not statistically significant in our study; they are presented in Supplementary Table 2. 
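As an editorial illustration (not the authors' code, and using hypothetical genotype counts), the one-degree-of-freedom HWE goodness-of-fit check applied to the healthy controls can be sketched as follows:

```python
def hwe_chi_square(n_aa, n_ab, n_bb):
    """Chi-square goodness-of-fit statistic (1 df) comparing observed
    genotype counts at a biallelic SNP with Hardy-Weinberg expectations."""
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)  # estimated frequency of the A allele
    q = 1 - p
    expected = (n * p * p, 2 * n * p * q, n * q * q)
    return sum((o - e) ** 2 / e
               for o, e in zip((n_aa, n_ab, n_bb), expected))

# A sample exactly in HWE (allele frequency 0.5) yields a statistic of zero:
print(hwe_chi_square(25, 50, 25))  # 0.0
```

A SNP would be flagged when the p-value corresponding to this statistic falls below the chosen threshold; PLINK's `--hardy` command reports an analogous per-SNP test.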
The SNP rs2121875, an intronic SNP at chromosomal position 5p12 near the fibroblast growth factor 10 (FGF10) gene24, was identified as interacting with the SNP rs4809960, an intronic SNP at chromosomal position 20q13 near the gene cytochrome P450 family 24 subfamily A member 1 (CYP24A1)25, such that the latter SNP raises the odds associated with the former.\n\nCHR1: chromosome of first SNP, SNP1: Identifier for first SNP, CHR2: Chromosome of second SNP, SNP2: Identifier for second SNP, OR_INT: Odds ratio for interaction, STAT: Chi-square statistic 1df, p: Asymptotic p-value.\n\n\nDiscussion\n\nEpistatic effects, which are crucial for defining biologically intuitive models of interaction between two SNPs, have already been observed in a variety of species11. We believe this is the first study on SNP-SNP interactions associated with aggressive PCa carried out with patients from a New Zealand population.\n\nThe SNP rs4809960 in the gene CYP24A1 was reported by Holt et al. (2010) to be associated with prostate cancer-specific mortality, and was not evolutionarily conserved25. It was also found to have an effect on body mass index (BMI), but due to a small sample size the hazard ratios for the BMI strata were not considered reliable enough to be reported25. The protein encoded by CYP24A1 initiates the degradation of the physiologically active form of Vitamin D3 (VD3)26. VD3 is an important hormone that is actively involved in regulating cell proliferation in the prostate, and has also been identified to have increased expression in PCa cell lines27. It is well established that, with ageing, the skin cannot synthesize VD3 as effectively as desirable and the kidney’s ability to convert VD3 to its active form decreases28. This is of relevance because PCa has always been considered a disease of elderly men29, who have had less exposure to sunlight and thereby to Vitamin D330. 
It is even more intriguing that the other epistatic SNP was identified in FGF10.\n\nAccording to Paul et al. (2013), during mesenchymal development, the FGF10 protein can trigger PCa development through increased androgen receptor expression in the neoplastic epithelium31. It is also worth mentioning that FGF10 is closest to FGF7 based on its evolutionary history32, and according to Emoto et al. (1997), is suggested to have no activity on fibroblasts32. We do not agree with this, because fibroblasts in certain organs senesce with aging33, and can promote tumour invasion34. This logical progression of ageing-led senescence and promotion of tumour invasion holds true for ageing and the risk of aggressive PCa16 as well.\n\nWe suggest that the intronic SNP rs2121875 in the gene FGF10 may be causing alterations in gene expression, perhaps due to the prevalent external/environmental conditions in the elderly men with PCa. Our theory is based on the discovery by Zhang et al. (2007) that even intronic SNPs (such as the ones identified in FGF10 and CYP24A1) can change exon usage and expression outcomes35,36. This novel epistatic finding emphasizes the fact that intronic SNPs (and SNP-SNP interactions) can also have a significant effect on the risk of diseases such as aggressive PCa, and need to be investigated further.\n\n\nData availability\n\nDataset 1: Raw data from the current study. DOI, 10.5256/f1000research.11027.d15860517\n\nDataset 2: Epistasis results after analysis of the data for SNP-SNP interactions. DOI, 10.5256/f1000research.11027.d15860637",
"appendix": "Author contributions\n\n\n\nVV and VN planned and carried out the experiments. VV wrote the manuscript. VV and VN did the data cleaning and statistical analysis, respectively. VV interpreted the data. VV, NK, AJ, RP, GM and LRF conceived the idea of the discussion chapter and proofread the manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nAcknowledgements\n\nWe wish to thank the Auckland Cancer Society, University of Auckland, New Zealand for funding the salaries of N.K. and L.R.F.; Maurice Wilkins Centre, University of Auckland for funding the salary of AJ; and Cardiff University for funding the salary of GM. This study is based on what has been reported in the research article by Vaidyanathan et al (2017). Therefore contributions made by all authors in the said article are acknowledged.\n\n\nReferences\n\nCooperberg MR, Vickers AJ, Broering JM, et al.: Comparative risk-adjusted mortality outcomes after primary surgery, radiotherapy, or androgen-deprivation therapy for localized prostate cancer. Cancer. 2010; 116(22): 5226–34. PubMed Abstract | Publisher Full Text | Free Full Text\n\nJemal A, Bray F, Center MM, et al.: Global cancer statistics. CA Cancer J Clin. 2011; 61(2): 69–90. PubMed Abstract | Publisher Full Text\n\nThompson I, Thrasher JB, Aus G, et al.: Guideline for the management of clinically localized prostate cancer: 2007 update. J Urol. 2007; 177(6): 2106–31. PubMed Abstract | Publisher Full Text\n\nD'Amico AV, Whittington R, Kaplan I, et al.: Calculated prostate carcinoma volume: The optimal predictor of 3-year prostate specific antigen (PSA) failure free survival after surgery or radiation therapy of patients with pretreatment PSA levels of 4–20 nanograms per milliliter. Cancer. 1998; 82(2): 334–41. PubMed Abstract | Publisher Full Text\n\nBratt O: Hereditary prostate cancer: clinical aspects. 
J Urol. 2002; 168(3): 906–13. PubMed Abstract | Publisher Full Text\n\nSchaid DJ: The complex genetic epidemiology of prostate cancer. Hum Mol Genet. 2004; 13 Spec No 1: R103–21. PubMed Abstract | Publisher Full Text\n\nKarunasinghe N, Han DY, Zhu S, et al.: Serum selenium and single-nucleotide polymorphisms in genes for selenoproteins: relationship to markers of oxidative stress in men from Auckland, New Zealand. Genes Nutr. 2012; 7(2): 179–90. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKarunasinghe N, Lange K, Yeo Han D, et al.: Androgen Pathway Related Gene Variants and Prostate Cancer Association in Auckland Men. Curr Pharmacogenomics Person Med. 2013; 11(1): 22–30. Publisher Full Text\n\nTweardy DJ, Belmont JW: “Personalizing” academic medicine: opportunities and challenges in implementing genomic profiling. Transl Res. 2009; 154(6): 288–94. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCordell HJ: Epistasis: what it means, what it doesn't mean, and statistical methods to detect it in humans. Hum Mol Genet. 2002; 11(20): 2463–8. PubMed Abstract | Publisher Full Text\n\nHartwig FP: SNP-SNP Interactions: Focusing on Variable Coding for Complex Models of Epistasis. J Genet Syndr Gene Ther. 2013; 4: 189. Publisher Full Text\n\nSu WH, Yao Shugart Y, Chang KP, et al.: How genome-wide SNP-SNP interactions relate to nasopharyngeal carcinoma susceptibility. PLoS One. 2013; 8(12): e83034. PubMed Abstract | Publisher Full Text | Free Full Text\n\nJamshidi M, Fagerholm R, Khan S, et al.: SNP-SNP interaction analysis of NF-κB signaling pathway on breast cancer survival. Oncotarget. 2015; 6(35): 37979–94. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTao S, Feng J, Webster T, et al.: Genome-wide two-locus epistasis scans in prostate cancer using two European populations. Hum Genet. 2012; 131(7): 1225–34. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHodgson F, Obertova Z, Brown C, et al.: PSA testing in general practice. 
J Prim Health Care. 2012; 4(3): 199–204. PubMed Abstract\n\nVaidyanathan V, Karunasinghe N, Jabed A, et al.: Prostate Cancer: Is It a Battle Lost to Age? Geriatrics. 2016; 1(4): 27. Publisher Full Text\n\nVaidyanathan V, Naidu V, Karunasinghe N, et al.: Dataset 1 in: SNP-SNP interactions as risk factors for aggressive prostate cancer. F1000Research. 2017. Data Source\n\nVaidyanathan V, Naidu V, Kao CH, et al.: Environmental factors and risk of aggressive prostate cancer among a population of New Zealand men - a genotypic approach. Mol Biosyst. 2017; 13(4): 681–98. PubMed Abstract | Publisher Full Text\n\nKarunasinghe N, Han DY, Goudie M, et al.: Prostate disease risk factors among a New Zealand cohort. J Nutrigenet Nutrigenomics. 2012; 5(6): 339–51. PubMed Abstract | Publisher Full Text\n\nBishop KS, Han DY, Karunasinghe N, et al.: An examination of clinical differences between carriers and non-carriers of chromosome 8q24 risk alleles in a New Zealand Caucasian population with prostate cancer. Peer J. 2016; 1(4): e1731. PubMed Abstract | Publisher Full Text | Free Full Text\n\nZong GY, Donner A: The merits of testing Hardy-Weinberg equilibrium in the analysis of unmatched case-control data: a cautionary note. Ann Hum Genet. 2006; 70(Pt 6): 923–33. PubMed Abstract | Publisher Full Text\n\nNamipashaki A, Razaghi-Moghadam Z, Ansari-Pour N: The Essentiality of Reporting Hardy-Weinberg Equilibrium Calculations in Population-Based Genetic Association Studies. Cell J. 2015; 17(2): 187–92. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPurcell S, Neale B, Todd-Brown K, et al.: PLINK: a tool set for whole-genome association and population-based linkage analyses. Am J Hum Genet. 2007; 81(3): 559–75. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKote-Jarai Z, Olama AA, Giles GG, et al.: Seven prostate cancer susceptibility loci identified by a multi-stage genome-wide association study. Nat Genet. 2011; 43(8): 785–91. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nHolt SK, Kwon EM, Koopmeiners JS, et al.: Vitamin D pathway gene variants and prostate cancer prognosis. Prostate. 2010; 70(13): 1448–60. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDi Rosa M, Malaguarnera M, Nicoletti F, et al.: Vitamin D3: a helpful immuno-modulator. Immunology. 2011; 134(2): 123–39. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLou YR, Qiao S, Talonpoika R, et al.: The role of Vitamin D3 metabolism in prostate cancer. J Steroid Biochem Mol Biol. 2004; 92(4): 317–25. PubMed Abstract | Publisher Full Text\n\nNair R, Maseeh A: Vitamin D: The “sunshine” vitamin. J Pharmacol Pharmacother. 2012; 3(2): 118–26. PubMed Abstract | Free Full Text\n\nNelen V: Epidemiology of prostate cancer. Recent Results Cancer Res. 2007; 175: 1–8. PubMed Abstract | Publisher Full Text\n\nConsensus Statement on Vitamin D and Sun Exposure in New Zealand. Wellington, New Zealand: Ministry of Health and Cancer Society of New Zealand; 2012. Reference Source\n\nCorn PG, Wang F, McKeehan WL, et al.: Targeting fibroblast growth factor pathways in prostate cancer. Clin Cancer Res. 2013; 19(21): 5856–66. PubMed Abstract | Publisher Full Text | Free Full Text\n\nEmoto H, Tagashira S, Mattei MG, et al.: Structure and expression of human fibroblast growth factor-10. J Biol Chem. 1997; 272(37): 23191–4. PubMed Abstract | Publisher Full Text\n\nCampisi J: The role of cellular senescence in skin aging. J Investig Dermatol Symp Proc. 1998; 3(1): 1–5. PubMed Abstract | Publisher Full Text\n\nCoppé JP, Desprez PY, Krtolica A, et al.: The senescence-associated secretory phenotype: the dark side of tumor suppression. Annu Rev Pathol. 2010; 5: 99–118. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTazi J, Bakkour N, Stamm S: Alternative splicing and disease. Biochim Biophys Acta. 2009; 1792(1): 14–26. 
PubMed Abstract | Publisher Full Text\n\nZhang Y, Bertolino A, Fazio L, et al.: Polymorphisms in human dopamine D2 receptor gene affect gene expression, splicing, and neuronal activity during working memory. Proc Natl Acad Sci U S A. 2007; 104(51): 20552–7. PubMed Abstract | Publisher Full Text | Free Full Text\n\nVaidyanathan V, Naidu V, Karunasinghe N, et al.: Dataset 2 in: SNP-SNP interactions as risk factors for aggressive prostate cancer. F1000Research. 2017. Data Source"
}
|
[
{
"id": "22484",
"date": "09 May 2017",
"name": "Syed Muhammad Shahid",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe manuscript identified a number of SNPs in GWAS as risk factors for PCa in various European populations as well as defined SNP-SNP interactions, using PLINK software, with nucleic acid samples from a New Zealand cohort. The approach which authored used to find a gene x environment marker for aggressive PCa gene x environment interactions can be adjusted statistically, however, it is highly impossible in practicality.\nThe manuscript compiled most of recent literature available on the subject and propose justified discussion on the research question.\n\nSince I do not have sufficient expertise in statistical analyses used to elaborate the key findings of the manuscript, I am reluctant to comment on the authenticity and validity of conclusion.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate? I cannot comment. A qualified statistician is required.\n\nAre all the source data underlying the results available to ensure full reproducibility? Partly\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
},
{
"id": "22808",
"date": "17 May 2017",
"name": "Yog Raj Ahuja",
"expertise": [
"Reviewer Expertise Yog Raj Ahuja: Genetics",
"Zeenath Jehan: Genetics and cancer genomics"
],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe authors have carried out SNP genotyping in from a New Zealand cohort of men with self-reported European ethnicity that have been clinically diagnosed with aggressive and non-aggressive PCa, and healthy controls. They have identified a number of SNPs from the GWAS from various European populations and described SNP-SNP interaction as a risk factor for aggressive PCa in a New Zealand cohort. The intronic SNP rs2121875, an on chromosomal position 5p12 near the fibroblast growth factor 10 (FGF10) gene24, has been identified to be associated with the intronic SNP rs4809960, an intronic SNP present in chromosomal position 20q13 near the gene cytochrome P450 family 24 subfamily A member 1 (CYP24A1). The protein encoded by CYP24A1 initiates the degradation of the physiologically active form of Vitamin D3 (VD3) 26 which is an important hormone that is actively involved in regulating cell proliferation in the prostate, and has also been identified to have increased expression in PCa cell lines. The epistatic effect of the SNP-SNP interactions suggested by the authors may be relevant in view of many recent studies showing intronic mutations which can exert their effect on protein coding exons. The recent observation of decreasing Vitamin D3 levels worldwide further support the role of environmental factors in these gene environment interactions. 
Future studies may help in understanding the role of SNPs and environmental interactions.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate? Yes\n\nAre all the source data underlying the results available to ensure full reproducibility? Partly\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-621
|
https://f1000research.com/articles/6-616/v1
|
03 May 17
|
{
"type": "Case Report",
"title": "Case Report: Using ultrasound to prevent a broken catheter from migrating to the heart.",
"authors": [
"Pieter J. Schraverus",
"Suzanne van Rijswijk",
"Pieter Roel Tuinman",
"Suzanne van Rijswijk",
"Pieter Roel Tuinman"
],
"abstract": "Peripheral intravenous (IV) catheters can break off while still in the patient, with possible detrimental effects such as upstream migration to the heart. These catheters have probably been damaged by the needle during a difficult insertion. A peripheral IV catheter was removed in a 90 year old patient and only half of the catheter was retrieved. By using ultrasound examination the remaining part of the IV catheter was identified, and retrieved surgically, before it could migrate towards the heart. This case report suggests that ultrasound should not only be used for difficult placement of a peripheral IV catheter, but can also be used when removal is complicated.",
"keywords": [
"broken catheter",
"ultrasound",
"echography"
],
"content": "Introduction\n\nPeripheral intravenous (IV) catheters are given every day to many patients, without much attention given to possible complications. The complications are usually minor, for instance phlebitis or subcutaneous injection of solutions. However, when placement is difficult and the needle of the IV catheter is reinserted for another attempt, the needle can cut the catheter and damage it in such a way that it might break while inside the vein. These fragments can migrate to the right side of the heart, both atrium and ventricle, evidenced by reports in the literature1,2.\n\n\nCase report\n\nWe present the case of a 90 year old caucasian patient where only half of the peripheral IV catheter was retrieved after removal. By using ultrasound the remaining part of the catheter was identified and removed.\n\nThe patient was admitted to the hospital with a contained ruptured aneurysm of the abdominal aorta. The patient underwent emergency surgery and an aortic bifurcation prosthesis was placed. According to the anaesthesiologist who cared for the patient, the peripheral catheter was used to administer the anaesthetics and induction of anaesthesia went as planned. After induction, a central venous catheter was placed and the peripheral catheter was no longer used. Post operatively the patient was admitted to the intensive care unit (ICU) where the peripheral catheter was removed and only the proximal half of the catheter came out. We examined the arm but could not palpate the remaining part of the catheter. Ultrasound examination, performed by the ICU resident, showed an echogenic hollow tube (Figure 1), eight centimeters proximal of the insertion site of the catheter. The surgeon made a small incision and the remaining part of the catheter was removed (Figure 2). 
We were not able to trace the person who placed the IV catheter to evaluate the technique used.\n\nThe black arrow indicates the broken catheter.\n\n\nDiscussion\n\nThis case describes the successful use of ultrasound as a diagnostic tool for difficulties encountered after peripheral IV catheter removal.\n\nWe hypothesize that the needle was reinserted into the catheter during placement because placement was difficult. By doing so, the needle may have cut the distal part of the catheter. Reinserting a needle back into a catheter can be dangerous: there is a risk of damaging the catheter or cutting it off, thereby allowing the free part of the catheter to migrate to the heart, a consequence that could be disastrous and would make removal of the catheter far more difficult3,4. The use of ultrasound enabled a quick diagnosis and prompt treatment, thereby preventing further complications.\n\n\nConsent\n\nWritten informed consent for publication of clinical details and clinical images was obtained from the patient.",
"appendix": "Author contributions\n\n\n\nPJS wrote the manuscript. PRT and SvR revised the manuscript. All authors contributed to design of the manuscript and approved the final version for publication.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nVan Den Akker-Berman LM, Pinzur S, Aydinalp A, et al.: Uneventful 25-year course of an intracardiac intravenous catheter fragment in the right heart. J Interv Cardiol. 2002; 15(5): 421–3. PubMed Abstract | Publisher Full Text\n\nOto A, Tokgozoglu SL, Oram A, et al.: Late percutaneous extraction of an intracardiac catheter fragment. Jpn Heart J. 1993; 34(1): 117–9. PubMed Abstract | Publisher Full Text\n\nActis Dato GM, Arslanian A, Di Marzio P, et al.: Posttraumatic and iatrogenic foreign bodies in the heart: report of fourteen cases and review of the literature. J Thorac Cardiovasc Surg. 2003; 126(2): 408–14. PubMed Abstract | Publisher Full Text\n\nSproat IA, Bielke D, Crummy AB, et al.: Transthoracic 2D echocardiographic guidance for percutaneous removal of a nonopaque intracardiac catheter fragment. Cardiovasc Intervent Radiol. 1993; 16(1): 58–60. PubMed Abstract | Publisher Full Text"
}
|
[
{
"id": "22447",
"date": "08 May 2017",
"name": "Michiel Justinus Blans",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis case report shows that ultrasound can also be used for the described indication. It is well structured and the photo material is clearly showing the described content. Being a short case report no literature review is needed. It is important that modern doctors learn to use all possible aspects of point-of-care ultrasound (as briefly stated, also for the insertion of iv catheters).\n\nIs the background of the case’s history and progression described in sufficient detail? Yes\n\nAre enough details provided of any physical examination and diagnostic tests, treatment given and outcomes? Yes\n\nIs sufficient discussion included of the importance of the findings and their relevance to future understanding of disease processes, diagnosis or treatment? Yes\n\nIs the case presented with sufficient detail to be useful for other practitioners? Yes",
"responses": []
},
{
"id": "22448",
"date": "06 Jul 2017",
"name": "Joris Lemson",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis case report shows the usefulness of ultrasound when a broken intravenous canula had migrated away from the insertion location. The case is adequately described and illustrated and the message is clear. It would be of interest to know if ultrasound devices that are often used at intensive care units are also adequately equipped for searching foreign bodies.\n\nIs the background of the case’s history and progression described in sufficient detail? Yes\n\nAre enough details provided of any physical examination and diagnostic tests, treatment given and outcomes? Yes\n\nIs sufficient discussion included of the importance of the findings and their relevance to future understanding of disease processes, diagnosis or treatment? Yes\n\nIs the case presented with sufficient detail to be useful for other practitioners? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-616
|
https://f1000research.com/articles/6-614/v1
|
03 May 17
|
{
"type": "Research Article",
"title": "Growth hormone receptor antagonism with pegvisomant in insulin resistant non-diabetic men: A phase II pilot study",
"authors": [
"Ada P. Lee",
"Kathleen Mulligan",
"Morris Schambelan",
"Elizabeth J. Murphy",
"Ethan J. Weiss",
"Ada P. Lee",
"Kathleen Mulligan",
"Morris Schambelan",
"Elizabeth J. Murphy"
],
"abstract": "Background: Growth hormone (GH) is known to affect insulin and glucose metabolism. Blocking its effects in acromegalic patients improves diabetes and glucose metabolism. We aimed to determine the effect of pegvisomant, a GH receptor antagonist, on insulin resistance, endogenous glucose production (EGP) and lipolysis in insulin resistant non-diabetic men. Methods: Four men between the ages of 18-62 with a BMI of 18-35kg/m2, with insulin resistance as defined by a HOMA-IR > 2.77, were treated for four weeks with pegvisomant 20 mg daily. Inpatient metabolic assessments were performed before and after treatment. The main outcome measurements were: change after pegvisomant therapy in insulin sensitivity as measured by hyperinsulinemic euglycemic clamp; and EGP and lipolysis assessed by stable isotope tracer techniques. Results: Insulin like growth factor-1 (IGF-1) concentrations decreased from 134.0 ± 41.5 (mean ± SD) to 72.0 ± 11.7 ng/mL (p = 0.04) after 4 weeks of therapy. Whole body insulin sensitivity index (M/I 3.2 ± 1.3 vs. 3.4 ± 2.4; P = 0.82), as well as suppression of EGP (89.7 ± 26.9 vs. 83.5 ± 21.6%; p = 0.10) and Ra glycerol (59.4 ± 22.1% vs. 61.2 ± 14.4%; p = 0.67) during the clamp were not changed significantly with pegvisomant treatment. Conclusions: Blockade of the GH receptor with pegvisomant for four weeks had no significant effect on insulin/glucose metabolism in a small phase II pilot study of non-diabetic insulin resistant participants without acromegaly.",
"keywords": [
"Insulin resistance",
"Prediabetes",
"Pegvisomant",
"Growth Hormone",
"Metabolism"
],
"content": "Introduction\n\nThe worldwide incidence of type 2 diabetes (T2DM) has increased dramatically1. Insulin resistance (IR) plays a critical role in the pathogenesis of T2DM, but the mechanisms underlying insulin resistance in target tissues remain complex and unresolved2. Insulin regulates the metabolism of glucose, lipids and proteins in multiple tissues, including liver, muscle, and fat3. There are individuals who have been described as ‘fit and fat’ – insulin sensitive despite a high body mass index (BMI)4. There are also well-known examples of individuals with insulin resistance despite a low BMI5. This dissociation between total adiposity and insulin sensitivity is especially significant in growth hormone (GH) disorders. In acromegaly, a condition of GH excess, there is low body fat with insulin resistance, and in Laron’s syndrome, due to an inactivating mutation in the growth hormone receptor (GHR), there is high body fat with insulin sensitivity6.\n\nGH is a known regulator of lipid and carbohydrate metabolism. Excessive GH secretion in acromegaly can lead to insulin resistance and diabetes, while reducing overall fat mass. Pegvisomant is a specific and competitive antagonist for the GHR that effectively blocks GH signaling7. Treatment of acromegaly, including medical treatment with pegvisomant, improves insulin sensitivity and glucose metabolism8,9. The reduction of GH signaling in mice by global disruption of Ghr also leads to improved insulin sensitivity, lower fasting glucose and insulin levels, and increased longevity despite an increase in body fat10. Mice with global disruption of Ghr are also protected from high fat diet-induced changes in carbohydrate metabolism despite increased body fat10. Similarly, humans with Laron’s syndrome are exquisitely insulin sensitive despite increased adiposity6,11. 
Compared to age and sex-matched relatives, people with inactivating mutations in Ghr are dramatically protected from diabetes over 22 years of follow up12.\n\nIntact GH signaling appears to play an important role in insulin action. There are very few studies of substrate metabolism in healthy subjects treated with pegvisomant13,14. In these studies, pegvisomant treatment was short, often single dose, and an effect on substrate metabolism and insulin resistance was not observed. There is a single small study that demonstrated an improvement in hepatic insulin sensitivity in patients with type 1 diabetes treated with pegvisomant15. In this phase II pilot study, we sought to determine the effect of longer-term treatment with pegvisomant on insulin resistance in pre-diabetic men. We hypothesized that one month of pegvisomant treatment would improve whole body insulin sensitivity, as well as suppression of endogenous glucose production (EGP) and lipolysis, during a hyperinsulinemic euglycemic clamp.\n\n\nMethods\n\nFour men, aged 52–57 years, with a BMI between 18–35 kg/m2 and insulin resistance, defined as a HOMA-IR score >2.7716, were enrolled. The first subject was enrolled on March 3, 2014. The sample size was determined after a power analysis based on prior work and the effect size of the primary outcome. Participants were recruited through a combination of advertising, word of mouth, and doctor’s appointments. Participants were required to be on a stable medication regimen for any lipid disorders. Participants were excluded if they had type 1 or type 2 diabetes, fasting blood glucose >126 mg/dl, hemoglobin A1c >6.5, unstable hypertension, human immunodeficiency virus infection, hepatitis B or hepatitis C infection, evidence of chronic kidney disease, major gastrointestinal surgery, history of pancreatitis, pancreatic disease, liver or biliary disorders, or fasting plasma triglyceride >500 mg/dl. 
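The HOMA-IR enrollment cutoff above comes from the homeostasis model assessment formula of Matthews et al. (reference 16): fasting glucose (mg/dL) times fasting insulin (µU/mL), divided by 405. A minimal sketch of the screening arithmetic — the function name and example values are illustrative, not study data:

```python
def homa_ir(fasting_glucose_mg_dl, fasting_insulin_uU_ml):
    """HOMA-IR per Matthews et al. (1985):
    (fasting glucose [mg/dL] * fasting insulin [uU/mL]) / 405."""
    return fasting_glucose_mg_dl * fasting_insulin_uU_ml / 405.0

# Hypothetical screening values: glucose 100 mg/dL with insulin 12 uU/mL
# gives HOMA-IR of about 2.96, above the study's >2.77 threshold.
print(round(homa_ir(100, 12), 2))  # → 2.96
```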
The study was completed in August 2015, and the final follow-up was completed in December 2015. The Committee on Human Research of the University of California, San Francisco approved the study (IRB Number: 13-10982; full protocol available as Supplementary File 1). Written, informed consent was obtained from each individual.\n\nParticipants were admitted to the Clinical Research Center (CRC) at San Francisco General Hospital the evening before baseline testing and consumed a controlled metabolic diet with fixed proportions of macronutrients (10 kcal/kg; 15% protein, 30% fat, 55% carbohydrate with <20% from simple sugars). All meals were prepared in the metabolic kitchen of the CRC under the supervision of CRC bionutritionists. In the morning, the participants underwent metabolic assessments, including measurements of body composition by dual-energy X-ray absorptiometry (DXA), resting energy expenditure by indirect calorimetry, and hyperinsulinemic-euglycemic clamp with stable isotope tracer infusions. After baseline testing, participants were discharged and self-administered pegvisomant 20 mg subcutaneously nightly, after supervised instruction from a CRC nurse. Participants were instructed to maintain their usual diets and activity levels and attended weekly follow up visits. During these visits, the participants had updated medical history and a brief physical exam, including weight and vital signs. All reported signs and symptoms were recorded. Safety laboratory studies (i.e. fasting glucose, lipids, electrolytes, and renal and hepatic function), as well as insulin, GH and IGF1 levels, were obtained. At these visits they also returned unused vials of drug and received their next week’s supply of drug and supplies. After four weeks of treatment, the inpatient metabolic assessments were repeated, as at baseline.\n\nWhole-body insulin sensitivity was measured by the euglycemic hyperinsulinemic clamp technique17. 
After the baseline measurements were completed, insulin (Humulin®, Eli Lilly & Co., Indianapolis, IN, USA; 40 mU/m2 min) was infused for 180 minutes and blood samples were collected at 5-minute intervals from a retrograde intravenous line placed in a hand that was warmed in a heated box at 50–55°C. Whole-blood glucose concentrations were determined by the glucose oxidase method (YSI Stat glucose analyzer, Yellow Springs, OH, USA). A variable infusion of 20% dextrose (labeled with [U-13C] glucose, as described below) was adjusted to maintain blood glucose concentrations at 90 mg/dL. Blood samples were collected at 30-minute intervals during the final hour of the clamp and the serum was frozen and batched for measurement of insulin. Insulin sensitivity was calculated as a measure of whole-body glucose uptake during the final hour of the clamp (M) divided by steady-state serum insulin level (I)18.\n\nEGP (Ra glucose) and whole body lipolysis (Ra glycerol) were measured under fasting conditions using primed constant infusions of [U-13C] glucose (0.96 mg/kg/h, prime 0.096 mg/kg/min for 10 min) and [2H5]-glycerol (0.67 mg/kg/h, prime 0.067 mg/kg/min for 10 min) started at 0430 h. Blood samples were obtained every 10 minutes between 0800 and 0830 h for steady-state fasting measurements. The isotope infusions continued, and a 180-minute euglycemic-hyperinsulinemic clamp was started at 0900 h, as described above. The constant glycerol infusion continued, while the enriched glucose became a variable infusion of 0.6% [U-13C] glucose within the 20% dextrose infusion used for the clamp. During the final 30 minutes of the clamp (1130 to 1200 h), blood samples were collected every 10 minutes for determination of EGP and lipolysis under conditions of steady-state hyperinsulinemia.\n\nIsotope enrichments were measured by Metabolic Solutions (Nashua, NH, USA). [2H5]-glycerol was determined by gas chromatography-mass spectrometry (GC-MS), using the trimethylsilyl (TMS) derivative19. 
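The M/I index described above is the mean whole-body glucose uptake during the final hour of the clamp (M) divided by the mean steady-state serum insulin (I). A hedged sketch of that calculation — the sample values are illustrative only; actual units and sampling follow references 17 and 18:

```python
def m_over_i(glucose_uptake_samples, insulin_samples):
    """Whole-body insulin sensitivity index: mean glucose uptake (M)
    during the steady-state period divided by mean steady-state
    serum insulin (I)."""
    m = sum(glucose_uptake_samples) / len(glucose_uptake_samples)
    i = sum(insulin_samples) / len(insulin_samples)
    return m / i

# Illustrative values only: uptake sampled over the final hour, insulin
# from the 30-minute interval draws described above.
sensitivity = m_over_i([3.0, 3.2, 3.4], [100.0, 100.0, 100.0])
```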
Plasma 13C6-glucose was determined using the aldonitrile penta-acetate derivative20.\n\nRa glucose and Ra glycerol were calculated by the dilution technique using the average of the last 4 samples during the fasting state and during the clamp21.\n\nTotal and regional fat and lean body mass were measured by DXA (Hologic, Marlborough, MA, USA) and subsequently analyzed using Apex 5.5™ software (Hologic, Marlborough, MA, USA) to provide estimates of visceral adipose tissue.\n\nResting energy expenditure was measured by indirect calorimetry under fasting conditions and during the clamp using a Deltatrac II Metabolic Monitor (Sensormedics, Yorba Linda, CA, USA). Respiratory quotient (RQ), an index of substrate utilization, was calculated as the rate of carbon dioxide production divided by the rate of oxygen consumption.\n\nFree insulin like growth factor (IGF)-1 and IGF-BP3 were determined by ELISA (GenWay Biotech, San Diego, CA, USA). The San Francisco General Hospital Clinical Laboratory measured total and high-density lipoprotein (HDL) cholesterol, fasting triglycerides (TG), and calculated low-density lipoprotein (LDL) cholesterol, serum insulin, and fasting serum glucose. Serum insulin was measured by chemiluminescent sandwich assay.\n\nThe primary outcome was specified to be changes in insulin sensitivity (M/I). Key secondary outcomes were changes in endogenous glucose production (%EGP suppression) and changes in lipolysis (% Ra glycerol suppression). Analyses were performed using GraphPad Prism 7.0. Student’s paired t-test was used to compare baseline values to values after one month of treatment, using two-tailed p-values. Values are represented as mean ± SD.\n\nClinicalTrials.gov identifier: NCT02023918\n\nThe following deviations from the protocol were made. A total of 6 male participants were recruited, enrolled, and completed the entire study. There was a modification to the protocol after the first 4 participants to change the insulin infusion rate during the clamp. 
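At steady state, the dilution technique cited above (reference 21) reduces to dividing the tracer infusion rate by the plasma enrichment, and the secondary outcomes are expressed as percent suppression during the clamp relative to fasting. A simplified sketch with illustrative numbers rather than study data — the full method, including averaging of the last 4 samples, follows Wolfe & Chinkes:

```python
def ra_steady_state(tracer_infusion_rate, enrichment):
    """Steady-state rate of appearance by isotope dilution: Ra = F / E,
    where F is the tracer infusion rate and E the plasma enrichment
    (tracer-to-tracee ratio)."""
    return tracer_infusion_rate / enrichment

def percent_suppression(ra_fasting, ra_clamp):
    """Percent suppression of Ra during hyperinsulinemia relative to
    fasting, e.g. the %EGP suppression secondary outcome."""
    return (1.0 - ra_clamp / ra_fasting) * 100.0
```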
This was done because there was near complete suppression of endogenous glucose production with the original insulin dose. However, there did not appear to be a significant effect of lowering the insulin dose in the final 2 participants. To maintain consistency, the final analysis included only the data from the first 4 participants who were treated according to the original clamp protocol. The summary data published on the ClinicalTrials.gov website include all 6 participants, while the data presented here include only the first 4. Despite enrolling fewer than the expected number of participants, the investigators felt that there was an extremely low likelihood that additional participants would change the outcomes, so the study was terminated at this point.\n\nCONSORT checklist and flowchart are available as Supplementary File 2 and Supplementary File 3.\n\n\nResults\n\nThe mean age of the participants was 54.5 ± 2.1 years (Table 1). Three of the participants carried a diagnosis of hyperlipidemia and were on statin medications. Three of the participants carried a diagnosis of hypertension and two were on antihypertensive medications (amlodipine and metoprolol). One participant carried a diagnosis of gout, but did not require medication during the study.\n\nData are mean ± SD. P values are derived from paired t-tests. Values that are bolded are statistically significant. IGF-1, insulin like growth factor-1; IGFBP-3, insulin like growth factor binding protein-3; RQ, respiratory quotient; TG, triglycerides; VAT, visceral adipose tissue; M, M-value, defined as average glucose infusion rate over the period 80–120 minutes from start of insulin infusion; M/I, ratio of M-value to insulin; Ra, rate of appearance.\n\nParticipants were adherent to the daily self-injections of pegvisomant based on weekly medication reconciliation and measurement of IGF-1 levels.\n\nAs shown in Figure 1, total IGF-1 levels decreased in all participants (134.0 ± 41.5 vs. 
72 ± 11.7 ng/mL, p = 0.04). There was no significant change in IGF-BP3 levels (Table 1).\n\nIGF-1 decreased as expected over the four-week treatment period. Circles indicate individual baseline and post-treatment values. P values are derived from paired t-tests.\n\nThere was no significant change in fasting blood glucose, fasting insulin, or HOMA-IR following four weeks of pegvisomant treatment (Table 1). There was no significant difference in the serum glucose level or the glucose infusion rate during the clamp (Figure 2). There was no difference in clamped insulin levels pre- and post-treatment (Figure 3). There was no difference in basal EGP pre- or post-pegvisomant treatment or in the percent suppression of EGP by insulin (Figure 4). There was a small increase in Ra glucose post-treatment, but there was no significant difference in suppression of EGP during the clamp (Table 1). There was no significant change in whole body insulin sensitivity as assessed by M/I (3.2 ± 1.3 vs. 3.4 ± 2.4, p = 0.82).\n\nThere was no difference in the glucose infusion rate (GIR) before and after treatment with pegvisomant. Blue symbols indicate GIR at baseline. Red symbols indicate GIR post-treatment.\n\nInsulin was measured at various times during the clamp. Data represent the mean insulin levels ± SD for the pre-treatment (closed circles) and post-treatment (dark squares) conditions during the steady-state portion of the clamp.\n\nData represent the percent suppression of endogenous glucose production during the hyperinsulinemic euglycemic clamp at 140 minutes at baseline and after treatment with pegvisomant for one month.\n\nThere was no significant difference in fasting TG, HDL, or LDL following pegvisomant treatment. Whole body lipolysis did not change in either the fasting state or during hyperinsulinemia.\n\nLean body mass did not change significantly during the treatment period. 
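The p-values reported in these results come from Student's paired t-tests on the four participants. The statistic is the mean within-participant difference divided by its standard error; a standard-library sketch (the data values below are hypothetical, not the study's):

```python
import math

def paired_t_statistic(before, after):
    """Paired t statistic: mean of per-participant differences divided by
    (SD of differences / sqrt(n)), with n - 1 degrees of freedom."""
    diffs = [a - b for b, a in zip(before, after)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
    return mean_d / (sd_d / math.sqrt(n))
```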
There was no change in total fat mass, nor was there a change in visceral adipose tissue mass, but there was a small but statistically significant decrease in appendicular fat (decrease of 0.4 kg, p <0.01). While truncal fat also decreased by 0.5 kg, this did not reach statistical significance (p = 0.11).\n\nThere was no significant change in resting energy expenditure or RQ measured during the clamp. Fasting RQ declined significantly (p = 0.04).\n\nOne participant, who had prior abnormal aspartate aminotransferase (AST) and alanine aminotransferase (ALT) levels, had a mild increase in his transaminases during weeks 2 and 3 of monitoring, but these remained less than twice the upper limits of normal and decreased back to his baseline while on drug treatment. Side effects for all participants were limited to injection site discomfort. No participants discontinued the drug as a consequence of side effects or laboratory abnormalities.\n\n\nDiscussion\n\nIn 1931, the Argentinian physician-scientist Bernardo Houssay demonstrated that injection of anterior pituitary extract worsened glycemic control in dogs22,23. He also showed that impaired anterior pituitary function led to hypoglycemia and increased sensitivity to insulin24. Houssay and others showed that hypophysectomy ameliorated not only insulin resistance, but also diabetic complications in humans25–29. Several decades later, GH was shown to confer much of the pituitary-derived diabetogenic activity30. Both loss- and gain-of-function studies in humans and rodents support a role for GH in the biology of insulin responsiveness. While there are a small number of studies exploring the role of GH signaling on insulin resistance in patients with acromegaly, there are no published studies examining the effects of GH antagonism in insulin resistant, non-acromegalic patients. Therefore, we aimed to determine how antagonism of GH signaling with pegvisomant would affect insulin sensitivity in insulin resistant, but non-diabetic men. 
We found that one month of treatment with the potent GHR antagonist, pegvisomant, reduced levels of circulating IGF-1, but had no effect on insulin sensitivity, endogenous glucose production or lipolysis.\n\nInterestingly, there was a small but statistically significant decrease in appendicular fat mass post-pegvisomant treatment. In acromegalic patients, both surgical treatment and pegvisomant are known to increase adiposity, therefore this decrease was unexpected31. We have no experimental data to account for this result, but one potential explanation would be specific targeting of pegvisomant action to the liver. Pegvisomant treatment is known to increase GH levels due to the suppression of IGF-132. Circulating IGF-1 is derived almost exclusively from the liver33. If pegvisomant preferentially blocked GH action in the liver, the compensatory increase in circulating GH would cause unopposed GH action in adipose tissue leading to a paradoxical increase in lipolysis and decreased fat mass. We did not observe a significant change in lipolysis, but declines in both appendicular fat mass and resting RQ are consistent with increased lipolysis, so this possibility remains.\n\nGiven the very strong rationale supporting the notion that GHR antagonism would improve insulin sensitivity, we were surprised to find no effect of pegvisomant on insulin sensitivity with the clamp. There are several potential explanations for these results. It is possible the dose of pegvisomant was too low or the duration of treatment too short. We had a small sample size and thus we could have insufficient power to detect a difference, though there was absolutely no difference between pre- and post-treated participants and if anything a worsening of hepatic insulin sensitivity. As discussed above, it is possible that pegvisomant has preferential effects on the liver and has relatively little effect on blocking GH signaling in adipose tissue. 
We saw no effect on whole-body lipolysis and as noted, observed paradoxical changes in body composition. While there is not yet an answer as to the cell or tissue type that mediates the effect of GH on whole body insulin sensitivity, there is evidence suggesting that the predominant site of action is adipose tissue in both mice34 and humans35. Finally, EGP during hyperinsulinemia was nearly fully suppressed at pre-treatment baseline, which means that we would have a hard time detecting further suppression of EGP. This makes the interpretation of the data more difficult as we expected an improvement in suppression of EGP with pegvisomant treatment.\n\nOur study has several limitations. There were a small number of participants, which potentially amplifies the effect of variable diets, activity or other behaviors. It is notable that the other published studies of pegvisomant using the hyperinsulinemic euglycemic clamp technique were small and yet revealed significant effects8,9,15. It is possible our subjects were not sufficiently insulin resistant for us to see an effect of pegvisomant. Finally, as previously discussed, near total suppression of EGP at baseline could have obscured an effect of pegvisomant on improvement of hepatic insulin sensitivity.\n\nThis is the first report of GH antagonism in insulin resistant, non-acromegalic human participants. Using gold-standard methodology, we observed no effect on insulin sensitivity. Given the abundance of information from human and animal studies that support a role of GH signaling on insulin and glucose metabolism these results are surprising. However, these results suggest that there is still much to be learned about GH and IGF-1 and effects on metabolism. Future studies will be necessary to further explore these effect(s). 
In particular, studies in more insulin resistant individuals, such as drug-naïve, newly diagnosed patients with T2DM, may be more informative.\n\n\nData availability\n\nDataset 1: De-identified raw metabolic data for the four participants. doi: 10.5256/f1000research.11359.d15941536",
"appendix": "Author contributions\n\n\n\nE.J.W. conceived of the study; A.P.L. and K.M. designed experiments, carried out experiments, analyzed data, wrote and edited the manuscript; M.S, E.J.M., and E.J.W. designed experiments, analyzed data, wrote and edited the manuscript.\n\n\nCompeting interests\n\n\n\nThis study was generously funded by Pfizer pharmaceuticals, who additionally provided drug. The authors have nothing else to disclose.\n\n\nGrant information\n\nThis work was supported in large part by an investigator initiated research grant (IIR WI178028) from Pfizer, as well as the Wilsey Family Foundation, the James Peter Read Foundation, and funding from the National Institutes of Health (5T32DK007418-35).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nWe would like to thank the CRC nurses and other staff for their assistance in performing the study (grant awarded to CRC, NCATS UL1 TR000004), Jennifer Tran for coordinating lab based studies, and Pfizer for donating the drug.\n\n\nSupplementary material\n\nSupplementary File 1: Full trial protocol.\n\nClick here to access the data.\n\nSupplementary File 2: CONSORT checklist.\n\nClick here to access the data.\n\nSupplementary File 3: CONSORT flowchart.\n\nClick here to access the data.\n\n\nReferences\n\nChen L, Magliano DJ, Zimmet PZ: The worldwide epidemiology of type 2 diabetes mellitus--present and future perspectives. Nat Rev Endocrinol. 2011; 8(4): 228–236. PubMed Abstract | Publisher Full Text\n\nSamuel VT, Shulman GI: Mechanisms for Insulin Resistance: Common Threads and Missing Links. Cell. 2012; 148(5): 852–871. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCheatham B, Kahn CRL: Insulin Action and the Insulin Signaling Network. Endocr Rev. 1995; 16(2): 117–142. 
PubMed Abstract | Publisher Full Text\n\nBlüher M: The distinction of metabolically 'healthy' from 'unhealthy' obese individuals. Curr Opin Lipidol. 2010; 21(1): 38–43. PubMed Abstract | Publisher Full Text\n\nRaji A, Seely EW, Arky RA, et al.: Body Fat Distribution and Insulin Resistance in Healthy Asian Indians and Caucasians. J Clin Endocrinol Metab. 2001; 86(11): 5366–5371. PubMed Abstract | Publisher Full Text\n\nGuevara-Aguirre J, Rosenbloom AL, Balasubramanian P, et al.: GH Receptor Deficiency in Ecuadorian Adults Is Associated With Obesity and Enhanced Insulin Sensitivity. J Clin Endocrinol Metab. 2015; 100(7): 2589–2596. PubMed Abstract | Publisher Full Text | Free Full Text\n\nParkinson C, Scarlett JA, Trainer PJ: Pegvisomant in the treatment of acromegaly. Adv Drug Deliv Rev. 2003; 55(10): 1303–1314. PubMed Abstract | Publisher Full Text\n\nLindberg-Larsen R, Møller N, Schmitz O, et al.: The Impact of Pegvisomant Treatment on Substrate Metabolism and Insulin Sensitivity in Patients with Acromegaly. J Clin Endocrinol Metab. 2007; 92(5): 1724–1728. PubMed Abstract | Publisher Full Text\n\nHigham CE, Rowles S, Russell-Jones D, et al.: Pegvisomant improves insulin sensitivity and reduces overnight free fatty acid concentrations in patients with acromegaly. J Clin Endocrinol Metab. 2009; 94(7): 2459–2463. PubMed Abstract | Publisher Full Text\n\nList EO, Sackmann-Sala L, Berryman DE, et al.: Endocrine Parameters and Phenotypes of the Growth Hormone Receptor Gene Disrupted (GHR−/−) Mouse. Endocr Rev. 2011; 32(3): 356–386. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGuevara-Aguirre J, Procel P, Guevara C, et al.: Despite higher body fat content, Ecuadorian subjects with Laron syndrome have less insulin resistance and lower incidence of diabetes than their relatives. Growth Horm IGF Res. 2016; 28: 76–78. 
PubMed Abstract | Publisher Full Text\n\nGuevara-Aguirre J, Balasubramanian P, Guevara-Aguirre M, et al.: Growth Hormone Receptor Deficiency Is Associated with a Major Reduction in Pro-Aging Signaling, Cancer, and Diabetes in Humans. Sci Transl Med. 2011; 3(70): 70ra13. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMuller AF, Leebeek FW, Janssen JA, et al.: Acute Effect of Pegvisomant on Cardiovascular Risk Markers in Healthy Men: Implications for the Pathogenesis of Atherosclerosis in GH Deficiency. J Clin Endocrinol Metab. 2001; 86(11): 5165–5171. PubMed Abstract | Publisher Full Text\n\nMuller AF, Janssen JA, Hofland LJ, et al.: Blockade of the Growth Hormone (GH) Receptor Unmasks Rapid GH-Releasing Peptide-6-Mediated Tissue-Specific Insulin Resistance. J Clin Endocrinol Metab. 2001; 86(2): 590–593. PubMed Abstract | Publisher Full Text\n\nThankamony A, Tossavainen PH, Sleigh A, et al.: Short-term administration of pegvisomant improves hepatic insulin sensitivity and reduces soleus muscle intramyocellular lipid content in young adults with type 1 diabetes. J Clin Endocrinol Metab. 2014; 99(2): 639–647. PubMed Abstract | Publisher Full Text\n\nMatthews DR, Hosker JP, Rudenski AS, et al.: Homeostasis model assessment: insulin resistance and beta-cell function from fasting plasma glucose and insulin concentrations in man. Diabetologia. 1985; 28(7): 412–419. PubMed Abstract | Publisher Full Text\n\nDeFronzo RA, Tobin JD, Andres R: Glucose clamp technique: a method for quantifying insulin secretion and resistance. Am J Physiol. 1979; 237(3): E214–223. PubMed Abstract\n\nMatsuda M, DeFronzo RA: In vivo measurement of insulin sensitivity in humans. In: Draznin B, Rizza R eds. Methods, assessment, and metabolic regulation. Totowa, NJ: Humana Press. 1997; 1: 23–65.\n\nBeylot M, Martin C, Beaufrere B, et al.: Determination of steady state and nonsteady-state glycerol kinetics in humans using deuterium-labeled tracer. J Lipid Res. 1987; 28(4): 414–422. 
PubMed Abstract\n\nGuo Z, Paul Lee WN, Katz J, et al.: Quantitation of Positional isomers of deuterium-labeled glucose by gas chromatography/mass spectrometry. Anal Biochem. 1992; 204(2): 273–282. PubMed Abstract | Publisher Full Text\n\nWolfe R, Chinkes D: Isotope Tracers in Metabolic Research. Principles and Practice of Kinetic Analysis. Hoboken, NJ: Wiley and Sons, 2005. Reference Source\n\nHoussay BA: The Hypophysis and Metabolism. N Engl J Med. 1936; 214: 961–971. Publisher Full Text\n\nHoussay BA, Biasotti A: The Hypophysis, Carbohydrate Metabolism And Diabetes. Endocrinology. 1931; 15(6): 511–523. Publisher Full Text\n\nHoussay BA: Carbohydrate Metabolism. N Engl J Med. 1936; 214: 971–986. Publisher Full Text\n\nCushing H: Further Concerning a Parasympathetic Center in the Interbrain: VII. The Effect of Intraventricularly-Injected Histamine. Proc Natl Acad Sci U S A. 1932; 18(7): 500–510. PubMed Abstract | Publisher Full Text | Free Full Text\n\nChabanier H, Copeman WS: A New System of Treatment of Diabetes Mellitus. Br Med J. 1926; 1(3412): 897–898. PubMed Abstract | Free Full Text\n\nLuft R, Olivecrona H: Experiences with hypophysectomy in man. J Neurosurg. 1953; 10(3): 301–316. PubMed Abstract | Publisher Full Text\n\nLuft R, Olivecrona H, Ikkos D, et al.: Hypophysectomy in man; further experiences in severe diabetes mellitus. Br Med J. 1955; 2(4942): 752–756. PubMed Abstract | Publisher Full Text | Free Full Text\n\nJavid M, Gordon ES, Erickson TC: Hypophysectomy in severe diabetes. I. Neurosurgical aspects. J Neurosurg. 1958; 15(5): 504–511. PubMed Abstract | Publisher Full Text\n\nGreenberg E: Growth Hormone and Diabetes Mellitus. Diabetes. 1965; 14: 43–45. PubMed Abstract | Publisher Full Text\n\nMøller N, Jørgensen JO: Effects of growth hormone on glucose, lipid, and protein metabolism in human subjects. Endocr Rev. 2009; 30(2): 152–177. 
PubMed Abstract | Publisher Full Text\n\nTrainer PJ, Drake WM, Katznelson L, et al.: Treatment of acromegaly with the growth hormone-receptor antagonist pegvisomant. N Engl J Med. 2000; 342(16): 1171–1177. PubMed Abstract | Publisher Full Text\n\nNordstrom SM, Tran JL, Sos BC, et al.: Liver-derived IGF-I contributes to GH-dependent increases in lean mass and bone mineral density in mice with comparable levels of circulating GH. Mol Endocrinol. 2011; 25(7): 1223–30. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCorbit KC, Camporez JP, Tran JL, et al.: Adipocyte JAK2 mediates growth hormone-induced hepatic insulin resistance. JCI Insight. 2017; 2(3): e91001. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDavidson MB: Effect of growth hormone on carbohydrate and lipid metabolism. Endocr Rev. 1987; 8(2): 115–131. PubMed Abstract | Publisher Full Text\n\nLee AP, Mulligan K, Schambelan M, et al.: Dataset 1 in: Growth hormone receptor antagonism with pegvisomant in insulin resistant non-diabetic men: A phase II pilot study. F1000Research. 2017. Data Source"
}
|
[
{
"id": "22787",
"date": "26 Jun 2017",
"name": "Laura E. Dichtel",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe growth hormone (GH)/IGF-1 axis has a complex relationship with glucose homeostasis, as GH promotes insulin resistance while IGF-1 contributes to insulin sensitivity. Patients with acromegaly and GH excess tend to have insulin resistance that improves with GH receptor blockade (pegvisomant). The authors of this manuscript set out to investigate an interesting question—that is, would GH receptor blockade for one month in men with insulin resistance but without type 2 diabetes mellitus improve whole body insulin sensitivity (M/I) while reducing lipolysis and endogenous glucose production (EGP). Methodology in this trial was rigorous, including controlled metabolic diets the evening prior to assessment, hyperinsulinemic-euglycemic clamp with stable isotope infusions, indirect calorimetry and body composition by DXA. The authors demonstrated that despite adequate GH blockade (as evidenced by a drop in IGF-1 levels), there were no differences in insulin sensitivity, lipolysis or EGP after 1 month of GH receptor blockade. The authors thoughtfully discuss potential rationale for these negative results, including small sample size with limited power, variability of GH receptor blockade in liver versus adipose tissue as well as near-total suppression of EGP at baseline, limiting the ability to detect further suppression of EGP at follow up after pegvisomant administration. 
In their summary, they raise the interesting point that despite these negative results in men with insulin resistance but no diabetes, it might be worthwhile studying a more severe phenotype, such as those with untreated, newly diagnosed type 2 diabetes mellitus. This is a well-written manuscript that addresses an important gap in knowledge related to the GH/IGF-1 axis and insulin resistance.\n\nOne point for the authors to consider updating: In the abstract the subject ages are listed as 18-62 years while the methods section of the paper lists enrolled ages of 52-57 years. Given the mean age of 54.5 ± 2.1 years, I presume the latter range more accurately reflects the subjects who were actually enrolled.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
},
{
"id": "23283",
"date": "03 Jul 2017",
"name": "Stuart J Frank",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nYes, the current literature is cited. The discussion points out that acromegaly is associated with insulin resistance and Laron dwarfism is associated with insulin sensitivity. While both are true, growth hormone deficiency is also associated with insulin resistance, which may be a state that is more analogous to partial GH receptor blockade which would be induced by pegvisomant (Johannson et al, Metabolism 44:1126, 1995 and Alford FP, J Endocrinol Invest 22: 28, 1999, among other sources).\n\nThe study design is appropriate and conventional means of assessing insulin sensitivity are used.\n\nGenerally sufficient details of methods and analysis are used. It would be useful to know the age adjusted normal ranges of IGF-1 levels for the subjects in the study to better understand their levels compared to a normal population. Subjects were asked to continue there normal levels of activity, it would be useful to know more about this since exercise can dramatically affect insulin sensitivity.\n\nThe statistical analysis and interpretation appears to be appropriate.\n\nThe appropriate source data are available.\n\nIn the conclusion, the authors note that the results are surprising in light of the known effects of GH in two extreme circumstances (acromegaly and Laron dwarfism) but do not cite the substantial literature suggesting insulin resistance in growth hormone deficient patients from other causes. 
Partial GH receptor blockade may more closely mimic the GH deficiency of hypopituitarism (which is often partial as well) and thus these results may not be as surprising as the authors assert.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-614
|
https://f1000research.com/articles/6-390/v1
|
29 Mar 17
|
{
"type": "Systematic Review",
"title": "Cardiovascular involvement and manifestations of systemic Chikungunya virus infection: A systematic review",
"authors": [
"María Fernanda Alvarez",
"Adrián Bolívar-Mejía",
"Alfonso J. Rodriguez-Morales",
"Eduardo Ramirez-Vallejo",
"María Fernanda Alvarez",
"Adrián Bolívar-Mejía",
"Eduardo Ramirez-Vallejo"
],
"abstract": "Background: In the last three years, chikungunya virus disease has been spreading, affecting particularly the Americas, producing more than two million cases. In this setting, not only new disease-related epidemiological patterns have been found, but also new clinical findings have been reported by different research groups. These include findings on the cardiovascular system, including clinical, electrocardiographic and echocardiographic alterations. Methods: We performed a systematic review looking for reports about cardiovascular compromise during chikungunya disease. Cardiac compromise is not so common in isolated episodes; but countries where chikungunya virus is an epidemic should be well informed about this condition. We used 6 bibliographical databases as resources: Medline/Pubmed, Embase, ScienceDirect, ClinicalKey, Ovid and SciELO. Dengue reports on cardiovascular affectation were included as well, to compare both arbovirus’ organic affectations. Articles that delved mainly into the rheumatic articular and cutaneous complications were not considered, as they were not in line with the purpose of this study. The type of articles included were reviews, meta-analyses, case-controls, cohort studies, case reports and case series. Results: Originally based on 737 articles, our reviewed selected 40 articles with 54.2% at least mentioning CHIKV cardiovascular compromise within the systemic affectation. Cardiovascular manifestations can be considered common and have been reported in France, India, Sri Lanka, Malaysia, Colombia, Venezuela and USA, including mainly, but no limited to: hypotension, shock and circulatory collapse, Raynaud phenomenon, arrhythmias, murmurs, myocarditis, dilated cardiomyopathy, congestive insufficiency, heart failure and altered function profile (Troponins, CPK). 
Conclusions: Physicians should be encouraged to continue publishing reports on the cardiovascular involvement of chikungunya virus disease, to raise awareness and ultimately encourage suitable diagnosis and intervention worldwide.",
"keywords": [
"cardiovascular",
"Chikungunya",
"clinical",
"Colombia",
"Latin America"
],
"content": "Introduction\n\nChikungunya virus (CHIKV) is an RNA-type arbovirus species that according to the International Committee on Taxonomy of Viruses (ICTV) belongs to the Family Togaviridae, the Genus Alphavirus (currently not assigned to an Order), along with more than 30 other pathogens for vertebrates and humans, causing a very broad spectrum of disease1,2. The word “Chikungunya” means “which contorts or bends up” in Makonde language from Tanzania and Mozambique, referring accurately to the difficulty in deambulation or walking of those affected1,2. Despite CHIKV first being documented in 1954 in Tanzania, Africa and subsequently Asia1,3,4, it was not until 2006 that CHIKV first alarmed the world for being a major public health concern. After an explosive epidemic outbreak in French island La Réunion, where 35% of the total population was infected over six months, CHIKV arrived to central France and extended to Germany, Italy, Norway, and Switzerland1. Later on, the virus hit North, Central and South America and brought with it the concept of a “self-limited febrile illness”, a more benign type of infection with predominantly articular symptomatology1,3–5.\n\nAlphaviruses can be separated into two phylogenetic categories: “Old World” viruses and “New World” viruses. “Old World” viruses such as CHIKV are known for their articular tropism and exanthematous febrile syndrome; and the “New World” viruses such as the western equine encephalitis and Venezuelan equine encephalitis viruses1–3 have preference for nervous system stromal cells. CHIKV infection pathway in humans is shared with Dengue fever, and is caused by the biting of borne-arthropods from the Aedes mosquito family, Aedes aegypti and most recently Aedes albopictus1, the last one being essential to the wide geographic colonization process ever since a new mutation (A226V) in CHIKV has conferred the virus a better ability to replicate in this species. Ae. 
albopictus is more common in Asia, and has become notable in the Southeast of the United States and the Caribbean region6. The CHIKV currently circulating in America seems to no longer be related to the African lineage, but to strains documented in Asia and the Philippines2,4.\n\nThe transmission cycle, although originally merely sylvatic between primates and forest mosquitoes, has developed an alternate urban cycle involving humans1,6. Aedes vectors are capable of spreading the virus after biting a viremic human: CHIKV replicates in the salivary glands of the female mosquito and a new bite of a healthy host then takes place6,7. After the infectious bite, the incubation period of CHIKV ranges from 1–12 days before clinical onset of symptoms1,6. The appearance of clinical manifestations of the febrile syndrome coincides with viremia settling in during a period of 5–7 days, when viral load can be as high as 10^9 viral genome copies per milliliter3. Most recently, cases of vertical transmission have been reported, but this is indeed rare, and transmission through nursing has not been proven1,6,8.\n\nThree stages of disease after the incubation period have been recognized9:\n\nAcute (<3 weeks post-infection)\n\nPost-acute or subacute (3 to 12 weeks post-infection)\n\nChronic (>12 weeks post-infection)\n\nNot every patient develops the full three stages, and at least 20% of the infected population will not develop any symptoms at all, despite serological confirmation3,9. On the other hand, isolated cases have reported severe acute manifestations, far from the classic expected evolution of the disease, especially in areas with notable recent outbreaks such as India (2006)1,10, La Réunion and Mayotte (France, 2006)9,11, Malaysia (2008), Thailand (2008)12,13 and South America (Colombia, Venezuela and later Brazil, from 2014 until now)14,15. 
As a result, some authors have started to classify the clinical progression of CHIKV as either classical, severe or neurological (neuro-chikungunya)10,13. The severe subtype of the disease involves an atypical systemic compromise, in which the liver, lungs, and even the eye are affected by the intense extra-articular inflammatory response10,16,17. Similarly, involvement of the heart has often been fatal and worth highlighting in some reports18–22, but it has not been widely discussed.\n\nCharacterizing potential systemic compromise due to CHIKV infection, especially cardiovascular compromise, and the resulting manifestations and complications, is essential in clinical practice. Here, identifying the febrile syndrome is particularly common on a daily basis, and it coexists in a great proportion of patients with other morbidities and chronic conditions that could easily trigger a more severe presentation and clinical picture of the disease9,11,23.\n\nThe objectives were:\n\nTo systematically review published literature on the cardiovascular manifestations and involvement of systemic CHIKV infection;\n\nTo explore the main clinical cardiovascular features of chikungunya infection;\n\nTo identify the main electrocardiographic findings of chikungunya infection.\n\n\nMethods\n\nThis protocol has been registered in the PROSPERO International Prospective Register of Systematic Reviews (ID: 58949).\n\nEligibility criteria were: original studies that report cases with cardiovascular manifestations (acute and/or chronic) related to Chikungunya. Studies published in English and Spanish were included. 
Eligible study designs were case-control studies, cohort studies, case reports and case series.\n\nA systematic review was conducted using six bibliographical databases (Medline/Pubmed, Embase, Elsevier, ClinicalKey, Ovid and SciELO) as resources.\n\nThe search strategy was established to explore the extent to which this topic is currently represented in the medical literature, initiating the searches with “Chikungunya AND Systemic AND Manifestations”, “Chikungunya AND Heart” and “Chikungunya AND Cardiac”. Article language was limited to English and Spanish, and there was no limit set for time of publication, but searches concluded on November 1, 2016. Dengue reports on cardiovascular affectation were included as well, to compare both arboviruses’ organic affectation. Articles that delved mainly into the rheumatic articular and cutaneous complications were not considered, as they were not in line with the purpose of this study.\n\nThe types of articles included were reviews, meta-analyses, case-control studies, cohort studies, case reports and case series.\n\nData extraction from reports was done independently by two investigators, who later checked for duplicates and performed an initial quality screening of the studies and articles included.\n\nIn the article assessment, the variables for which data were sought included any cardiovascular manifestation associated with CHIKV infection, as well as any electrocardiographic, echocardiographic and laboratory cardiovascular-related findings in patients during the acute and/or chronic phase of disease.\n\nChikungunya is an emerging disease in the Americas and reemerging in the world, so there are only a small number of studies addressing the cardiovascular manifestations (acute and/or chronic) related to Chikungunya. All studies that met the inclusion criteria were included and the risk of bias is discussed throughout the article. 
To assess the quality of eligible studies, critical appraisals specific to each study design were completed by two independent reviewers.\n\nWe proceeded to compile and submit a complete review of CHIKV that included the main facts about the characterization, origin and transmission of the virus, epidemiology, pathogenesis, and clinical features of the classic and severe/atypical disease, but with a clear focus on the extra-articular and mainly cardiovascular manifestations of CHIKV infection, diagnosis of CHIKV-induced cardiomyopathy, management, prognosis, and differences from what is observed in Dengue virus (DENV) heart compromise.\n\n\nResults\n\nThe search initially yielded a total of 737 articles: duplicates across the databases and articles about other viruses were eliminated, unless they focused solely on cardiac affectation. Finally, 40 articles were selected based on the relevance and pertinence of the title or abstract to the systemic compromise that was being evaluated, with 54.2% at least mentioning CHIKV cardiovascular compromise within the systemic affectation (Flow Diagram).\n\nThe frequency at which the rest of the organ systems are affected is shown in Table 1. The information on the role of the cardiovascular system during CHIKV infection is very scarce indeed; only 21.4% of the resulting articles focused solely and exclusively on the cardiovascular findings; the first publication on the topic was by Obeyeskere et al. and dates to 1972. In relation to extra-articular compromise of organ systems other than the cardiovascular system, the most published were the nervous system – both central and peripheral – and secondary skin complications.\n\nReporting articles n and % correspond to the number of articles (out of the eligible, n=40) that described the type of compromise (e.g. osteoarticular, cardiovascular, etc). 
The frequency category of different organ-type manifestations was classified as follows: extremely common (100-80%), very common (79-60%), common (59-40%), unusual (39-20%), rare (19-10%) and extremely rare if below 10%.\n\nAccording to the type of systems compromise (e.g. osteoarticular, cardiovascular, neurological, etc) in the literature, the frequency of affectation of organs/systems was classified into six categories: extremely common (100-80%), very common (79-60%), common (59-40%), unusual (39-20%), rare (19-10%) and extremely rare if below 10%. Data were registered in Table 1, showing the countries of origin of reports describing such types of manifestations of CHIKV infection.\n\nClinical course. The acute stage extends from the first symptomatic day to the 21st day and is characterized by an end-of-incubation sudden high fever (often above 39ºC), headaches, myalgia and the insidious onset of typical symmetric, bilateral polyarthralgia (most frequently of small distal joints – phalanges, wrists, ankles), along with a typical maculopapular evanescent rash1,3,9. The location of the arthralgias tends to vary between individuals. There are rare descriptions in the literature of pain in the costochondral, hip and temporomandibular articulations24, so it may not be advisable to dismiss a CHIKV diagnosis if these pains are present. Palmo-plantar pruritus, photophobia, edema of the face and extremities and adenopathies have also been described, and benign and self-limited hemorrhagic manifestations are relatively common in children. Subsequently, by the end of the acute stage, asthenia and adynamia tend to appear1,9.\n\nIn the post-acute stage, from the first to the third month, all symptoms described above tend to vanish, except for some residual arthralgia, and some residual fever and adynamia. Extra-articular rheumatisms such as tenosynovitis, bursitis, tendinitis, worsening of osteoarthritis and even carpal tunnel syndrome and Raynaud phenomenon have been reported9. 
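The six frequency categories above amount to a simple threshold rule. As an illustrative sketch only (the function name is hypothetical; the cut-offs and labels are those stated in the review), in Python:

```python
def frequency_category(pct: float) -> str:
    """Classify how often a type of organ-system compromise is reported,
    using the six percentage cut-offs defined in the review."""
    if pct >= 80:
        return "extremely common"  # 100-80%
    if pct >= 60:
        return "very common"       # 79-60%
    if pct >= 40:
        return "common"            # 59-40%
    if pct >= 20:
        return "unusual"           # 39-20%
    if pct >= 10:
        return "rare"              # 19-10%
    return "extremely rare"        # below 10%

# e.g. cardiovascular compromise, mentioned in 54.2% of eligible articles
print(frequency_category(54.2))  # -> common
```

Under this rule, the 54.2% of articles mentioning cardiovascular compromise would place it in the "common" band, consistent with the Results.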
Not every patient develops this phase, and degrees of severity and functional limitation will depend on patients’ previous comorbidities, mainly musculoskeletal. Other risk factors for remaining symptomatic after the first month have also been identified, for instance poor rest during the acute phase, and females above the age of 40 are at greater risk1,9,11.\n\nChronic CHIKV infection is defined as a symptomatic period longer than three months, with manifestations (continuous or episodic) that last for months, years or even a decade. Manifestations are the same as previously described for the post-acute phase, presenting as oscillating arthralgias over time with or without inflammatory signs until, according to the natural history of the disease, the patient returns to the health state that they had before the infection. The degree of functional limitation may vary from little to moderate, with the most incapacitating and aggressive compromise reported in a mean of 50% of cases9.\n\nAtypical presentations. Atypical presentations of CHIKV infection can involve almost every organ system, as seen in Table 2. Even though the most common extra-articular manifestations reported in the literature involve the nervous system25–27 and the eye17, alterations in the gastrointestinal tract, liver16, kidney, muscles, mucous membranes, skin and hematologic cells have also been observed, as well as in hemostasis and coagulation processes. Cardiovascular compromise is worth highlighting because of its often fatal outcomes10,28. 
Infection can lead to cardiovascular manifestations; in addition, patients with existing cardiovascular disease can decompensate, with loss of clinical control of their underlying disease worsening the short-term prognosis, as has been described with diabetes, lupus, and neurological, renal, pulmonary and cardiovascular insufficiency9,11,23.\n\nALAT: Alanine-aminotransferase, ASAT: Aspartate-aminotransferase, CPK: Creatine-phosphokinase, SCr: serum creatinine, BUN: Blood Urea Nitrogen.\n\n*: Seen also in children.\n\nA common denominator of the 0.5% of patients who develop these systemic atypical patterns of disease is having some kind of predisposing condition, disease, or advanced age9,16,24. In retrospective records of severe cases reported by Economopolou A., et al. from La Réunion, 89% had previous medical conditions, 78% took medication before the disease (14% NSAIDs) and 14% were alcoholic11,23. Nevertheless, it is notable that the risk of severe infection and compromise seems to increase in large outbreaks, as documented in India (2006), where only 25% of cases developed classical CHIKV, while 75% were severe cases, 60% of which had some degree of neurological affectation10.\n\nCardiovascular involvement. La Réunion reported an overall outbreak mortality of 10%; heart failure was the attributed cause in 15% of the cases, myocarditis and pericarditis in 5% and acute myocardial infarction in 2%, leaving a remarkable total of 22% of mortality due to cardiovascular affectation11,15. Several similar past records raise concerns about a possible cardiac tropism of CHIKV, with increasingly clear evidence. The first description of clinical myocardial involvement in CHIKV infection was reported in 1972, when Obeyeskere et al. 
presented a cohort of 10 patients who had a history of arbovirus-like syndrome, serological evidence of Dengue IgM antibodies or high titres on CHIKV haemagglutination inhibition (HI) and complement-fixation antibody tests, and now had clinical and electrocardiographic evidence of myocarditis. Apart from the classic acute febrile symptoms, patients manifested palpitations, chest pain, fatigue, dyspnea and vagal-stimulation symptoms, which by themselves could already indicate coronary syndrome20.\n\nFurther studies have histopathologically identified and verified the presence of the virus in cardiac tissue from postmortem biopsies. This is the case of an elderly woman with serologically confirmed CHIKV who developed fulminant myocarditis, with no significant medical background29. Myocardial biopsy revealed extensive necrosis and cytoplasmic viral inclusions in the cells29. Nowadays, evidence shows that, besides the heart, CHIKV may also have tropism for the nervous system and the liver28.\n\nPathophysiology of CHIKV-induced cardiac compromise. Few authors have tried to determine the pathophysiology behind the cardiac damage that CHIKV can potentially cause19,20. Studying other viruses that share tropism for the heart is essential. A postmortem study, based on endomyocardial biopsies with PCR in patients diagnosed with idiopathic dilated cardiomyopathy, evidenced viral infiltration of myocytes in 66% of the cases. In that study the three most commonly isolated viral agents were parvovirus B19, herpes virus and enterovirus30, showing that direct viral organ invasion is feasible, lethal and more frequent than expected for such viruses.\n\nCHIKV penetrates the myocytes and generates direct damage to the muscle fibers, while the inflammatory response and infiltrate grow, leading to secondary damage by a hypersensitivity reaction and necrosis, but usually with no typical signs of infarction20,22,30. 
Furthermore, it has been proposed that these alterations are long-standing, and tend to make the cardiac tissue more vulnerable to recurrent damage from other microorganisms20 and to favor the transition from myocarditis to dilated cardiomyopathy30. As has been mentioned, Obeyeskere et al. in 1972 were the first group to make such reports and to observe CHIKV pathophysiology at the cardiovascular level.\n\nClinical cardiovascular progression pattern. A progression pattern with three phases has been identified and proposed. Patients may follow the three phases strictly, or present a torpid evolution straight to the last phase, skipping the second one. Also, the time of progression varies between individuals, depending on the severity of the initial cardiovascular injuries and previous comorbidities.\n\nThe first phase is “pre-congestive or prodromal”, when isolated, rather nonspecific electrocardiographic findings are detected (especially T wave abnormalities). Cardiomegaly can be detected with a simple chest radiograph or echocardiogram and a gallop rhythm may be auscultated, but there are no visible cardiovascular symptoms. By this time (after 7 days), the initial viremia peak is over, but an incipient heart failure is already present19.\n\nThe most documented electrocardiographic changes were T wave inversion in DII, III, aVF and V5-V6, and ST elevation18,23,28,29. These are relatively nonspecific findings, which should be interpreted within the whole clinical context so that other compatible differential diagnoses, such as acute coronary syndrome, electrolyte disorder, or even digitalis intoxication, can be dismissed20. In addition, echocardiograms mostly reveal biventricular hypertrophy and dyskinesia of wall movements; these results are compatible with myocarditis. Ejection fraction may be mildly diminished and pericardial effusion is rare. Creatine phosphokinase (CPK) levels may be increased after the first phase28.\n\nThe second phase is known as the “arrhythmic phase”. 
It starts when the recent myocardial injury no longer permits adequate functioning of the cardiac conduction system. Again, according to severity, findings may range from premature atrial and ventricular extrasystoles to atrial fibrillation with a high risk of thromboembolism, and in the worst-case scenario, ventricular fibrillation and sudden death19. This wide spectrum directly correlates with the symptoms and hemodynamic state of the patient31.\n\nThe most affected patients will invariably develop heart failure after the acute and subacute phases, some displaying right-sided insufficiency with pulmonary and peripheral edema and hepatomegaly, but more frequently left-sided insufficiency with low perfusion and clinical shock19. Reduced peripheral blood flow can be responsible for many pathological events too, blurring the line between the expected consequences of shock and true direct organ damage by CHIKV. The kidneys are an example: in Economopolou et al’s retrospective study, 20% of the patients with heart failure also presented with pre-renal failure23, which suggests it is more a consequence of shock in this instance. In contrast, lesions such as nephritis are more likely to be caused by the virus. Additionally, in this third stage, a constrictive syndrome has also been described, with extensive compromise and pericarditis, but it is indeed less common24.\n\nA summary of the most common clinical manifestations during CHIKV infection that suggest cardiac viral compromise is given in Table 3. Isolated signs and symptoms reported in single case reports that seemed to relate more to the pre-morbidities of the patient were excluded. Regarding blood pressure, there are significant variations in the reports, with recent reports including hypotension during acute CHIKV infection in patients with high blood pressure under antihypertensive treatment. 
A pattern through the reviewed articles could not be identified, so having hypo- or hypertension may be a poor predictor of cardiac compromise during CHIKV infection and seems more a product of the severity of the case and the blood pressure values previously managed by the patient20.\n\nAV: Atrioventricular. NTproBNP: N-terminal pro-Brain Natriuretic Peptide. MRI: Magnetic Resonance Imaging.\n\nDiagnosis and management of CHIKV infection with cardiac compromise. As can be inferred, diagnosis of a CHIKV infection with cardiac compromise must be based on epidemiology and clinical findings more than anything else. Specific CHIKV infection during the acute phase would be diagnosed by molecular techniques such as PCR, and after that phase by immunological/serological tests, particularly the detection of IgG anti-CHIKV. Once CHIKV infection is suspected, echocardiographic imaging, MRI and other paraclinical exams will only help in assessing the severity of the damage. There is an evident lack of studies on the topic and, therefore, a lack of data determining the sensitivity and specificity of the findings mentioned in Table 3. However, Simon et al. delimited specific and valid diagnostic criteria for what is called CHIKV-induced myopericarditis in their case report. They demonstrated clinical, biological and morphological evidence of myocarditis, with serologically documented CHIKV infection and no serologic evidence of another recent infection, thereby linking the cardiovascular compromise to CHIKV18,28. Results like these are very useful, but it is always advisable to interpret these criteria in the context of the patients and their previous comorbidities.\n\nNevertheless, what is noticed is that diagnoses are rarely made, interventions tend to be delayed and insufficient, and the outcome is often imminent refractory heart failure. 
Management has mostly been ineffective in containing the damage, and death by cardiac arrest becomes inevitable. Cases as severe as that of a 63-year-old woman with T wave inversion in V5-V6 and global progressive hypokinesia have been reported; she experienced cardiac arrest and died within 4 hours of admission, leaving so little time to act that management was not even mentioned29. It is not possible to cite a standard management due to the low frequency of reports of this type of CHIKV disease, but only to cite the management given to cases in the literature and compare outcomes.\n\nOn the other hand, the treatment given to a successful case in India, who remained fully asymptomatic at follow-up, consisted of inotropic support (dopamine and dobutamine) and levocarnitine to relieve mitochondrial dysfunction. Additionally, a previously healthy 19-year-old male who developed myocarditis was discharged after 3 days with acebutolol and ramipril, and at follow-up, premature beats had disappeared22. There is another case of a 21-year-old woman who returned from La Réunion and responded clinically to high doses of aspirin, and her EKG changes reverted18. Such good prognosis as seen in the aforementioned cases may not be representative of the true clinical progression, and may be biased due to the young age of the patients28.\n\nIn summary, management of CHIKV disease is not established everywhere, remains very variable, and consists mainly of correcting the clinical features of cardiac failure, without taking the root cause into consideration. Beta-adrenergic blockers, ACE inhibitors and inotropic support during the crisis are commonly reported in order to maintain hemodynamic stability. Only one case reported the use of prednisolone21, but without any other cardiac support drugs, and the outcome was equally poor. 
Studies on the impact of anti-inflammatory corticosteroids along with cardiovascular support drugs should be carried out; this seems a promising option considering the underlying severe systemic inflammatory response in these cases. A very similar substrate is seen in the eosinophilic myocarditis that Toxocara canis can cause, where early prednisolone in doses of 1mg/kg/day for the acute phase and 5–10mg/kg/day for maintenance has been recommended32.\n\nPrognosis and functional sequelae. The Indian child cited above showed general improvement within three days, with no relapses. A follow-up echocardiogram reported only a mild mitral regurgitation, with intact left ventricle function28. The 19- and 21-year-old patients remained asymptomatic, but dilation persisted on imaging18,21. By now, it is evident that there are three clear, different outcomes of CHIKV infection:\n\nAsymptomatic with no imaging sequelae;\n\nAsymptomatic with partial reversion of EKG and echocardiogram changes;\n\nDeath\n\nChanges seen on cardiac magnetic resonance imaging that persist for more than one year from disease onset will be permanent and affect the patients to some degree later in life18. Simon et al. thus propose that in upcoming years, countries that have suffered outbreaks of CHIKV since 2005 will see a long-term increase in dilated cardiomyopathy, reported as the most frequent sequela, even in asymptomatic patients who had an apparently classic clinical picture involving predominantly arthralgia18. This raises public health concerns and the risk of a noticeable limitation in quality of life for these patients in the future.\n\nSimilar reports and findings on Dengue fever: Arbovirus-induced cardiopathy. The cardiac tropism of CHIKV seems to be shared with DENV, with multiple cases in the literature displaying similar cardiovascular complications and often mimicking acute myocardial infarction as well33,34. Myocarditis is reported similarly. 
However, arrhythmias and compromise of the electric conduction system of the heart have a higher incidence with DENV, including supraventricular arrhythmias such as atrial fibrillation, AV block28 and cases reporting refractory ventricular fibrillation as the ultimate cause of death34. Acute pericardial and pulmonary edema are also described, and a fatal outcome is not as frequent. As a common denominator in the published literature, most reports of cardiac involvement are seen in patients with hemorrhagic fever manifestations of DENV infection.\n\nEven though the etiological agent is very similar, DENV-induced cardiomyopathy has a variant: the plasma leak syndrome and the characteristic endothelial dysfunction of DENV may facilitate the extravasation process and chemotaxis of inflammatory cells to myocardial tissue, creating a highly cytokine-rich environment35, besides the already known tropism of DENV for the heart. This could explain why cardiovascular manifestations are much more common with DENV than with CHIKV35,36. Host susceptibility and the virulence of the strain also play a role in the severity of the clinical picture37.\n\nManifestations remain comparable, but electrocardiographic disturbances are frequently observed, in a wide range of 34–75%35,36 of dengue cases. In the 2005 outbreak, Sri Lanka reported 62.5% of patients affected37. Abnormalities basically consist of sinus bradycardia, T inversion, depression of the ST segment in precordial leads and aVF, AV blocks (Mobitz type I second degree has been mentioned), bundle branch blocks and, rarely, atrial fibrillation33,34,37,38. All were reported as supposedly transient39. 
Two cases of persistent atrial fibrillation after the resolution of disease have been reported, with reversion only achieved after antiarrhythmic treatment (amiodarone)39.\n\nImaging is similar to what is reported in CHIKV echocardiography: global hypokinesia and an important decrease in left ventricular ejection fraction (LVEF). A study reported a mean LVEF of 47.08% in all DENV-infected patients, and of 39.6% if shock syndrome was present. At follow-up after three weeks, LVEF was superior to 50% in all cases and ECG changes had reverted35. From these findings, JP Wali et al. proposed three diagnostic criteria for suspected cardiac compromise: ST-T changes on ECG, global hypokinesia and a decreased LVEF on imaging.\n\nAlthough arboviral cardiovascular manifestations have been described for over 40 years20, few studies8,18,40 have documented in detail the specific cardiovascular and EKG patterns during acute disease40, especially in the recent epidemics in Latin America. Initial reports of three fatal cases of chikungunya in Barranquilla, Colombia15, in which patients presented hypotension and tachycardia, raised red flags among physicians in the region. More recently, in Sucre, Colombia, in 2016, a case series of 42 patients with chikungunya followed in detail found EKG alterations, such as repolarization disturbances, in more than 71% of those cases. Repolarization disturbances were the most frequent finding (21%)40. Preliminary unpublished data41 from a study in Caracas, Venezuela, reported in 2016, provided similar findings in patients, although at a lower frequency. Indeed, evidence of patent or silent myocarditis was observed in a high percentage of patients prospectively evaluated in Venezuela. 
An unexpected finding was persistent symptomatic arterial hypotension, observed in one third of these patients with previously stable hypertension on treatment, requiring the anti-hypertensive medication to be discontinued or reduced due to severe clinical manifestations41.\n\nA study from Tolima, Colombia, in 2016 provided consistent findings regarding the spectrum of EKG alterations. Rhythm disturbances occurred in 10 of 14 patients (71%)35. They included sinus tachycardia (3/14 patients), hemiblocks (2/14), left ventricular hypertrophy (2/14) and ST-segment depression (2/14), among others35.\n\nPatients with chikungunya may present cardiovascular complications including myocarditis and pericarditis18,40,41. Thus, a careful physical examination, including a detailed cardiovascular assessment, should be performed. This should include cardiac auscultation looking for sound alterations, which could indicate premature ventricular contractions18,20,40,41. In addition, all CHIKV-infected patients should have an EKG performed, given that it is an easy, cheap and quick assessment tool that could prevent potentially deleterious cardiovascular outcomes40.\n\nIn light of any clinical or electrocardiographic abnormality, cardiac enzymes (e.g. troponin) should also be measured20. As has been suggested for over 40 years20, cardiac tropism and a direct cytolytic effect of the virus remain a latent possibility40, yet to date this has not been demonstrated at the tissue level. 
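As a rough illustration only (not code from any of the cited studies), the screening logic described above, combined with the three criteria proposed by Wali et al. for suspected cardiac compromise, can be sketched as a small helper. The 50% LVEF cut-off and the requirement that all three criteria be present simultaneously are assumptions made for this sketch, drawn from the follow-up figures quoted in the text.

```python
# Hypothetical sketch of the Wali et al. criteria for suspected cardiac
# compromise: ST-T changes on ECG, global hypokinesia on echocardiography,
# and a decreased LVEF. The 50% cut-off is an assumption taken from the
# "LVEF above 50% at follow-up" recovery figure quoted in the text.
from dataclasses import dataclass

LVEF_CUTOFF = 50.0  # %, assumed threshold for "decreased LVEF"

@dataclass
class CardiacWorkup:
    st_t_changes: bool        # ST-T changes on ECG
    global_hypokinesia: bool  # global hypokinesia on echocardiography
    lvef: float               # left ventricular ejection fraction, %

def suspected_cardiac_compromise(w: CardiacWorkup) -> bool:
    """Return True when all three criteria are met (assumed conjunction)."""
    return w.st_t_changes and w.global_hypokinesia and w.lvef < LVEF_CUTOFF
```

For example, the mean LVEF of 39.6% reported in patients with shock syndrome would satisfy the LVEF criterion under this assumed cut-off, whereas a follow-up LVEF above 50% would not.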
Further studies using novel molecular approaches for virus detection in endomyocardial biopsies of symptomatic CHIKV-infected patients could confirm this possible role and establish the underlying pathophysiological mechanisms of CHIKV myocarditis, which translate into the spectrum of symptoms such as rhythm and conduction disturbances20,40.\n\nOngoing studies should focus on determining the potential chronic cardiovascular outcomes that could develop in patients infected with chikungunya, in order to provide an appropriate early clinical intervention strategy and avoid potential disabilities.\n\nManagement of DENV-induced cardiac compromise is poorly reported and not established everywhere, as is the case with CHIKV. Early use of IV hydrocortisone resulted in full recovery in two cases of myocarditis in 12-year-old patients42, and the authors maintain that fatality is significantly reduced with timely intervention during the first hours42. A more conservative approach was adopted for the analyzed cohort from the Sri Lanka outbreak, with indications of strict bed rest, fluid maintenance, oxygen, close monitoring of vital signs and inotropic support when needed, and clear avoidance of steroids and other empirical drugs37.\n\nThe importance of rapid intervention (within the first hours) is exemplified by the case of a 25-year-old Indian male who presented with nonspecific epigastric abdominal pain and vomiting. Work-up revealed myocarditis. The patient died within a few hours after developing ventricular tachycardia refractory to pharmacological and electrical treatment, while a more invasive option, implantation of a left ventricular assist device, was being evaluated. Positive DENV serology results became known only later34. 
It is clear at this point that therapy for arbovirus-induced cardiomyopathy needs to be standardized, comparing the efficacy of treatments that have already been proposed as well as new treatment options.\n\n\nDiscussion\n\nThe key to a successful outcome of CHIKV-induced cardiomyopathy is recognizing signs and symptoms early on. It is certainly a condition that can be life-threatening, which is why patients should be referred for cardiac assessment as early as possible after displaying any of the previously mentioned symptoms. Identifying comorbidities is also recommended, to distinguish CHIKV-induced cardiomyopathy from an exacerbation of previous heart disease.\n\nCardiac compromise is uncommon in isolated episodes, but countries where chikungunya virus is epidemic should be alert and well informed about this condition. Physicians should be encouraged to continue publishing reports on the cardiovascular involvement of chikungunya virus disease, to raise awareness and ultimately encourage appropriate diagnosis and intervention worldwide. Questions remain about the real incidence, as every outbreak seems to follow a different pattern, but what is needed most is further investigation into therapy for this specific condition and in different age groups.\n\nA persistent limitation is the sporadic nature of these cases, something for which we need to be prepared in future outbreaks.\n\n\nConclusions\n\nFinally, these observations on DENV- and CHIKV-associated cardiovascular manifestations could be useful for the management of Zika virus infections, which are currently causing epidemics in Latin America43–45. Cardiovascular compromise has already been described and reported in fatal cases46,47. In addition, cardiovascular complications might be underdiagnosed in clinical practice48. Future research needs to focus on the potential cardiovascular complications of Zika virus infection, with prompt cardiovascular screening in suspected cases44,48,49. 
Other emerging arboviruses, such as Mayaro49–54, Oropouche51,52 and Venezuelan Equine Encephalitis53,54 viruses, may also be causing cardiovascular compromise, or may even be co-infecting patients. We are still learning about the multiple clinical implications55,56 of co-infection, including those affecting the cardiovascular system.",
"appendix": "Author contributions\n\n\n\nAJRM and ERV formulated the research questions, designed the study, developed the preliminary search strategy, and drafted the manuscript. ABM and MFA refined the search strategy by conducting iterative database queries and incorporating novel search terms. MFA and ABM searched and collected the articles. All authors critically reviewed the manuscript for important intellectual content. All authors have read and approved the final version of the manuscript.\n\n\nCompeting interests\n\n\n\nThe authors declared no competing interests.\n\n\nGrant information\n\nThis work was supported by the Universidad Tecnológica de Pereira.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nReferences\n\nSimon F, Parola P, Grandadam M, et al.: Chikungunya infection: an emerging rheumatism among travelers returned from Indian Ocean islands. Report of 47 cases. Medicine (Baltimore). 2007; 86(3): 123–37. PubMed Abstract | Publisher Full Text\n\nZuluaga M, Isaza D: El virus Chikungunya en Colombia: aspectos clínicos y epidemiológicos y revisión de la literatura. Iatreia. 2016; 29(1): 65–74. Publisher Full Text\n\nCouderc T, Lecuit M: Chikungunya virus pathogenesis: From bedside to bench. Antiviral Res. 2015; 121: 120–31. PubMed Abstract | Publisher Full Text\n\nSimon F, Javelle E, Oliver M, et al.: Chikungunya virus infection. Curr Infect Dis Rep. 2011; 13(3): 218–28. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMoro ML, Grilli E, Corvetta A, et al.: Long-term chikungunya infection clinical manifestations after an outbreak in Italy: a prognostic cohort study. J Infect. 2012; 65(2): 165–72. PubMed Abstract | Publisher Full Text\n\nMadariaga M, Ticona E, Resurrecion C: Chikungunya: bending over the Americas and the rest of the world. Braz J Infect Dis. 2016; 20(1): 91–8. 
PubMed Abstract | Publisher Full Text\n\nVega-Rúa A, Schmitt C, Bonne I, et al.: Chikungunya Virus Replication in Salivary Glands of the Mosquito Aedes albopictus. Viruses. 2015; 7(11): 5902–7. PubMed Abstract | Publisher Full Text | Free Full Text\n\nVillamil-Gómez W, Alba-Silvera L, Menco-Ramos A, et al.: Congenital Chikungunya Virus Infection in Sincelejo, Colombia: A Case Series. J Trop Pediatr. 2015; 61(5): 386–92. PubMed Abstract | Publisher Full Text\n\nSimon F, Javelle E, Cabie A, et al.: French guidelines for the management of chikungunya (acute and persistent presentations). November 2014. Med Mal Infect. 2015; 45(7): 243–63. PubMed Abstract | Publisher Full Text\n\nTandale BV, Sathe PS, Arankalle VA, et al.: Systemic involvements and fatalities during Chikungunya epidemic in India, 2006. J Clin Virol. 2009; 46(2): 145–9. PubMed Abstract | Publisher Full Text\n\nEconomopoulou A, Dominguez M, Helynck B, et al.: Atypical Chikungunya virus infections: clinical manifestations, mortality and risk factors for severe disease during the 2005–2006 outbreak on Réunion. Epidemiol Infect. 2009; 137(4): 534–41. PubMed Abstract | Publisher Full Text\n\nSam IC, Kamarulzaman A, Ong GS, et al.: Chikungunya virus-associated death in Malaysia. Trop Biomed. 2010; 27(2): 343–7. PubMed Abstract\n\nChusri S, Siripaitoon P, Hirunpat S, et al.: Case reports of neuro-Chikungunya in southern Thailand. Am J Trop Med Hyg. 2011; 85(2): 386–9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTorres JR, Leopoldo Códova G, Castro JS, et al.: Chikungunya fever: Atypical and lethal cases in the Western hemisphere: A Venezuelan experience. IDCases. 2014; 2(1): 6–10. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHoz JM, Bayona B, Viloria S, et al.: Fatal cases of Chikungunya virus infection in Colombia: Diagnostic and treatment challenges. J Clin Virol. 2015; 69: 27–9. 
PubMed Abstract | Publisher Full Text\n\nChua HH, Abdul Rashid K, Law WC, et al.: A fatal case of chikungunya virus infection with liver involvement. Med J Malaysia. 2010; 65(1): 83–4. PubMed Abstract\n\nMahendradas P, Avadhani K, Shetty R: Chikungunya and the eye: a review. J Ophthalmic Inflamm Infect. 2013; 3(1): 35. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSimon F, Paule P, Oliver M: Chikungunya virus-induced myopericarditis: toward an increase of dilated cardiomyopathy in countries with epidemics? Am J Trop Med Hyg. 2008; 78(2): 212–3. PubMed Abstract\n\nObeyesekere I, Hermon Y: Arbovirus heart disease: myocarditis and cardiomyopathy following dengue and chikungunya fever--a follow-up study. Am Heart J. 1973; 85(2): 186–94. PubMed Abstract | Publisher Full Text\n\nObeyesekere I, Hermon Y: Myocarditis and cardiomyopathy after arbovirus infections (dengue and chikungunya fever). Br Heart J. 1972; 34(8): 821–7. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNagaratnam N, Siripala K, de Silva N: Arbovirus (dengue type) as a cause of acute myocarditis and pericarditis. Br Heart J. 1973; 35(2): 204–6. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMirabel M, Vignaux O, Lebon P, et al.: Acute myocarditis due to Chikungunya virus assessed by contrast-enhanced MRI. Int J Cardiol. 2007; 121(1): e7–8. PubMed Abstract | Publisher Full Text\n\nRajapakse S, Rodrigo C, Rajapakse A: Atypical manifestations of chikungunya infection. Trans R Soc Trop Med Hyg. 2010; 104(2): 89–96. PubMed Abstract | Publisher Full Text\n\nStaikowsky F, Talarmin F, Grivard P, et al.: Prospective study of Chikungunya virus acute infection in the Island of La Réunion during the 2005–2006 outbreak. PLoS One. 2009; 4(10): e7603. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMaity P, Roy P, Basu A, et al.: A case of ADEM following Chikungunya fever. J Assoc Physicians India. 2014; 62(5): 441–2. 
PubMed Abstract\n\nChandak NH, Kashyap RS, Kabra D, et al.: Neurological complications of Chikungunya virus infection. Neurol India. 2009; 57(2): 177–80. PubMed Abstract | Publisher Full Text\n\nGérardin P, Couderc T, Bintner M, et al.: Chikungunya virus–associated encephalitis: A cohort study on La Réunion Island, 2005-2009. Neurology. 2016; 86(1): 94–102. PubMed Abstract | Publisher Full Text\n\nMenon PR, Krishnan C, Sankar J, et al.: A child with serious Chikungunya virus (CHIKV) infection requiring intensive care, after an outbreak. Indian J Pediatr. 2010; 77(11): 1326–8. PubMed Abstract | Publisher Full Text\n\nLemant J, Boisson V, Winer A, et al.: Serious acute chikungunya virus infection requiring intensive care during the Reunion Island outbreak in 2005–2006. Crit Care Med. 2008; 36(9): 2536–41. PubMed Abstract | Publisher Full Text\n\nKühl U, Pauschinger M, Noutsias M, et al.: High Prevalence of Viral Genomes and Multiple Viral Infections in the Myocardium of Adults With “Idiopathic” Left Ventricular Dysfunction. Circulation. 2005; 111(7): 887–93. PubMed Abstract | Publisher Full Text\n\nMendoza I, Morr I, Mendoza I, et al.: Chikungunya myocarditis: an emerging threat to America. J Am Coll Cardiol. 2015; 65(10): A946. Publisher Full Text\n\nBolívar-Mejía A, Rodríguez-Morales AJ, Paniz-Mondolfi AE, et al.: Manifestaciones cardiovasculares de la toxocariasis humana. Arch Cardiol Mex. 2013; 83(2): 120–129. Publisher Full Text\n\nLee CH, Teo C, Low AF: Fulminant dengue myocarditis masquerading as acute myocardial infarction. Int J Cardiol. 2009; 136(3): e69–71. PubMed Abstract | Publisher Full Text\n\nMahmod M, Darul ND, Mokhtar I, et al.: Atrial fibrillation as a complication of dengue hemorrhagic fever: non-self-limiting manifestation. Int J Infect Dis. 2009; 13(5): e316–18. PubMed Abstract | Publisher Full Text\n\nHidalgo-Zambrano DM, Jiménez-Canizales CE, Alzate-Piedrahita JA, et al.: Electrocardiographic changes in patients with chikungunya fever. 
Rev Panam Infectol. 2016; 18(1): 13–5. Reference Source\n\nWali JP, Biswas A, Chandra S, et al.: Cardiac involvement in Dengue Haemorrhagic Fever. Int J Cardiol. 1998; 64(1): 31–6. PubMed Abstract | Publisher Full Text\n\nKularatne SA, Pathirage MM, Kumarasiri PV, et al.: Cardiac complications of a dengue fever outbreak in Sri Lanka, 2005. Trans R Soc Trop Med Hyg. 2007; 101(8): 804–8. PubMed Abstract | Publisher Full Text\n\nPatil DR, Hundekar SL, Arankalle VA: Expression profile of immune response genes during acute myopathy induced by chikungunya virus in a mouse model. Microbes Infect. 2012; 14(5): 457–69. PubMed Abstract | Publisher Full Text\n\nLee IK, Lee WH, Liu JW, et al.: Acute myocarditis in dengue hemorrhagic fever: a case report and review of cardiac complications in dengue-affected patients. Int J Infect Dis. 2010; 14(10): e919–22. PubMed Abstract | Publisher Full Text\n\nVillamil-Gómez WE, Ramirez-Vallejo E, Cardona-Ospina JA, et al.: Electrocardiographic alterations in patients with chikungunya fever from Sucre, Colombia: A 42-case series. Travel Med Infect Dis. 2016; 14(5): 510–2. PubMed Abstract | Publisher Full Text\n\nTorres JR: Severe and fatal chikungunya fever in the Americas. Hotel RIU Plaza, Panama City. April 20 to 23, 2016. 5th Pan-American Dengue Research Network Meeting; Panama. 2016; 66, Access date: May 1, 2016. Reference Source\n\nWiwanitkit V: Dengue myocarditis, rare but not fatal manifestation. Int J Cardiol. 2006; 112(1): 122. PubMed Abstract | Publisher Full Text\n\nRodríguez-Morales AJ: Zika: the new arbovirus threat for Latin America. J Infect Dev Ctries. 2015; 9(6): 684–685. PubMed Abstract | Publisher Full Text\n\nMartinez-Pulgarin DF, Acevedo-Mendoza WF, Cardona-Ospina JA, et al.: A bibliometric analysis of global Zika research. Travel Med Infect Dis. 2016; 14(1): 55–57. 
PubMed Abstract | Publisher Full Text\n\nRodríguez-Morales AJ, Villamil-Gómez WE, Franco-Paredes C: The arboviral burden of disease caused by co-circulation and co-infection of dengue, chikungunya and Zika in the Americas. Travel Med Infect Dis. 2016; 14(3): 177–179. PubMed Abstract | Publisher Full Text\n\nArzuza-Ortega L, Polo A, Pérez-Tatis G, et al.: Fatal Sickle Cell Disease and Zika Virus Infection in Girl from Colombia. Emerg Infect Dis. 2016; 22(5): 925–927. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSarmiento-Ospina A, Vásquez-Serna H, Jimenez-Canizales CE, et al.: Zika virus associated deaths in Colombia. Lancet Infect Dis. 2016; 16(5): 523–524. PubMed Abstract | Publisher Full Text\n\nKrittanawong C, Zhang H, Sun T: Cardiovascular complications after Zika virus infection. Int J Cardiol. 2016; 221: 859. PubMed Abstract | Publisher Full Text\n\nPatiño-Barbosa AM, Bedoya-Arias JE, Cardona-Ospina JA, et al.: Bibliometric assessment of the scientific production of literature regarding Mayaro. J Infect Public Health. 2016; 9(4): 532–534. PubMed Abstract | Publisher Full Text\n\nPaniz-Mondolfi AE, Rodriguez-Morales AJ, Blohm G, et al.: ChikDenMaZika Syndrome: the challenge of diagnosing arboviral infections in the midst of concurrent epidemics. Ann Clin Microbiol Antimicrob. 2016; 15(1): 42. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRodríguez-Morales AJ, Paniz-Mondolfi AE, Villamil-Gómez WE, et al.: Mayaro, Oropouche and Venezuelan Equine Encephalitis viruses: following in the footsteps of Zika? Travel Med Infect Dis. 2017; 15: 72–73. PubMed Abstract | Publisher Full Text\n\nCulquichicón C, Cardona-Ospina JA, Patiño-Barbosa AM, et al.: Bibliometric analysis of Oropouche research: impact on the surveillance of emerging arboviruses in Latin America [version 1; referees: 2 approved]. F1000Res. 2017; 6: 194. 
Publisher Full Text\n\nOrtiz-Martinez Y, Villamil-Gómez WE, Rodríguez-Morales AJ: Bibliometric assessment of global research on Venezuelan Equine Encephalitis: a latent threat for the Americas. Travel Med Infect Dis. 2017; 15: 78–79. PubMed Abstract | Publisher Full Text\n\nPaniz-Mondolfi AE, Blohm G, Piñero R, et al.: Venezuelan Equine Encephalitis: how likely are we to see the next epidemic? Travel Med Infect Dis. 2017; pii: S1477-8939(17)30030-3. PubMed Abstract | Publisher Full Text\n\nRodriguez-Morales AJ: Aspectos agudos y crónicos de la infección por virus chikungunya: aun aprendiendo. Actualizaciones en SIDA e Infectología. 2016; 24(93): 98–104. Reference Source\n\nTorres JR, Murillo J, Bofill L: The ever changing landscape of Zika virus infection. Learning on the fly. Int J Infect Dis. 2016; 51: 123–126. PubMed Abstract | Publisher Full Text"
}
|
[
{
"id": "21381",
"date": "18 Apr 2017",
"name": "José Antonio Suárez",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nCardiovascular involvement in CHIKV disease has been described in several publications, highlighting a possible cardiac tropism of CHIKV. These findings have shown that arboviruses like CHIKV and dengue can share with parvovirus B19, herpes viruses, enteroviruses30 and other viruses the list of viral causes of heart damage.\n\nThis study helps the understanding of cardiovascular manifestations and complications in all 3 stages of CHIKV disease, and it gives physicians the awareness to think of arbovirus-related diseases in order to make an accurate diagnosis and avoid fatalities. Once the physician thinks of CHIKV, the patient should have a cardiac assessment as early as possible, especially in countries where CHIKV is epidemic.\nFrom the study design, methods and analysis points of view, the authors complied with all PRISMA and PROSPERO criteria for a systematic review, showing robust data and good conclusions.\n\nIn Latin America, where other arboviruses are co-circulating with CHIKV and dengue, cardiovascular symptoms can be the first signal of a viral infection.\n\nThe content of this systematic review can help tropical medicine and travel medicine physicians to take a better approach in the assessment of patients with arboviral diseases.",
"responses": [
{
"c_id": "2652",
"date": "19 Apr 2017",
"name": "Alfonso Rodriguez-Morales",
"role": "Author Response",
"response": "Dear Dr. Suárez,\n\nThanks for your assessment as well as your positive comments on our review, which, as can be appreciated after searching major bibliographical databases such as Scopus, PubMed and/or Web of Science, is probably the first to specifically address the cardiovascular involvement and manifestations of systemic Chikungunya virus infection."
}
]
},
{
"id": "21382",
"date": "18 Apr 2017",
"name": "Stalin Vilcarromero",
"expertise": [
"Reviewer Expertise epidemiology and clinic of Arbovirus infection"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nIn the manuscript entitled “Cardiovascular involvement and manifestations of systemic Chikungunya virus infection: A systematic review”, the authors have made a thorough and interesting description of the cardiovascular effects of chikungunya virus infection. This analysis would be important to share with the scientific community; however, major changes are necessary for it to be ready for publication. Mainly, the authors should focus on the specific topic and organize the information in the paper properly. General comments:\nObjective:\nIt is not clear; the authors start by stating that the objective focuses on cardiovascular involvement and manifestations; however, other systemic or atypical manifestations also become relevant in the results section.\nMethodology\n\nThe main objective of systematic reviews is to respond to a specific question. Initially the authors seem to do that, pointing to cardiovascular involvement, but then, when they write the introduction and results sections, this main point seems to disappear or is not clear to the reader. However, in the discussion section, the authors re-take the objectives again. Certainly, the large amount of information given in the introduction section about the virus, history, classification, vector and cycles is very interesting, but it is not the purpose of this study. I suggest shortening it and focusing on the cardiovascular involvement and complications. 
I also recommend displaying a flow chart (figure) of how papers were excluded and included. What guideline did the authors follow in order to assess the risk of bias (PRISMA or Cochrane)? It is possible they followed the PRISMA approach (for example: http://prisma-statement.org/PRISMAStatement/Checklist.aspx). Please give this information, explaining the steps. In the Methods section, the key words “Chikungunya AND Systemic AND Manifestation” used by the authors have probably given them no specific references or papers. Instead, why did they not also use specific key words such as “Chikungunya AND cardiac involvement”, “Chikungunya AND cardiac complication”, “Chikungunya AND cardiovascular involvement”, “Chikungunya AND cardiovascular complications” or “Chikungunya AND atypical manifestation/complications”? The idea is to be more specific and less general. According to the authors, the protocol was registered in PROSPERO; however, it was not possible to view the registered protocol on the web (https://www.crd.york.ac.uk/prospero/searchadvanced.php).\n\nI wonder if the inclusion of reviews may cause a bias. Please clarify and, if true, consider it in the limitations section.\n\nResults\nThe authors use “cardiac affectation” and also “cardiovascular compromise”. I recommend standardizing the term in order to avoid confusing the reader. The authors describe in detail the clinical features during the acute, post-acute and chronic stages of CHIKV infection. I recommend shortening and focusing on the topic. In the “Atypical presentation” section, the authors show systemic manifestations considering the involvement of different organs (neurological, cardiovascular, etc.). Why did the authors not also use the key word “atypical”, rather than “systemic manifestation” or “extra-articular”, in the search strategy and in the analyses? It is confusing. 
In Table 2, the term “Systemic extra-articular involvement of atypical CHIKV” shows clinical manifestations; however, we lack further information about who (co-morbidities? older? younger?), how many (number/percentage), the type of paper (case report, case-control, etc.), and so on. Are these clinical manifestations in outpatients or inpatients? Are these early clinical features or complications? Many questions surround this information that would be useful for a correct interpretation. In Table 3, it is important to consider more information about the selected papers. I recommend a large table showing the different cardiovascular involvements/complications, for example: the clinical diagnoses (heart failure, acute coronary syndrome, refractory shock, and rhythm abnormalities), relevant signs and symptoms, especially early ones (chest pain, dyspnea, bradycardia, etc.), laboratory diagnosis (troponin, BNP, CK-MB, etc.), imaging studies (echocardiography, magnetic resonance), final diagnosis (myocarditis?), management (inotropes, corticosteroids, etc.) and outcome (survived, died). I consider this data would be important to understand the impact of cardiovascular involvement due to CHIKV. In dengue, it is now known that cardiovascular involvement is mostly characterized by rhythm abnormalities (bradycardia) with no symptoms or complications. However, in moderate or severe cases with cardiovascular compromise or complication, myocarditis has been an important cause. Myocarditis due to DENV infection may present several patterns, such as refractory shock, heart failure or arrhythmia, and it would be important to consider this diagnosis. In the comparison between CHIKV infection and cardiac involvement, myocarditis should be discussed. It is not clear what the authors try to convey when they say “The cardiac tropism of CHIKV seems to be shared with DENV”; it is important to clarify this. 
Diagnosing myocarditis caused by these arboviruses requires a myocardial biopsy or cardiac magnetic resonance; however, performing these in tropical areas where these arboviruses are prevalent is very hard. The management of myocarditis, whatever the etiology, focuses on management of the agent (virus, bacteria, etc.), management of the cardiovascular event (heart failure, cardiogenic shock, arrhythmia, etc.) and management of the inflammatory process. The last is under discussion and needs more research, although in some severe cases caused by DENV, corticosteroid administration changed the evolution.\n\nDiscussion\nThe authors say, “The key for a successful outcome of CHIKV-induced cardiomyopathy is recognizing signs and symptoms”. Here I recommend repeating the most important ones.\nLimitations\nThis section should consider some of the biases common to this kind of study, in which no clinical trials were included and case reports were included, such as selection or publication bias.\n\nAre the rationale for, and objectives of, the Systematic Review clearly stated? Partly\n\nAre sufficient details of the methods and analysis provided to allow replication by others? Partly\n\nIs the statistical analysis and its interpretation appropriate? Not applicable\n\nAre the conclusions drawn adequately supported by the results presented in the review? Partly",
"responses": [
{
"c_id": "2653",
"date": "19 Apr 2017",
"name": "Alfonso Rodriguez-Morales",
"role": "Author Response",
"response": "Dear Dr. Vilcarromero,\n\nFirst, thanks for your initial comments on our systematic review (SR) (without meta-analysis). Second, we want to clarify that producing, for the first time, an SR on the cardiovascular involvement and manifestations of systemic Chikungunya virus infection unavoidably implies comparisons with dengue, as well as introducing readers to the systemic manifestations of chikungunya, in order to better understand the cardiovascular involvement and manifestations of this arboviral disease. Many aspects of this specific topic will become more detailed with research in the near future, to which our group is also contributing, with studies already published on electrocardiographic alterations and ongoing work on echocardiographic and cardiovascular biochemical ones.\n\nObjective:\n\nIt is not clear; the authors start by stating that the objective focuses on cardiovascular involvement and manifestations; however, other systemic or atypical manifestations also become relevant in the results section.\n\nOur SR has several objectives, not just one:\n- To systematically review published literature on the cardiovascular manifestations and involvement of systemic CHIKV infection;\n- To explore the main clinical cardiovascular features of chikungunya infection;\n- To identify the main electrocardiographic findings of chikungunya infection.\n\nAll of them are clearly developed in the section Synthesis of Results, which is the main part of the SR. Before that section, a brief introduction to other systemic or atypical manifestations of chikungunya infection appears in Clinical Course, a short section of just 3 paragraphs, and Atypical Presentations, a section of just 2 paragraphs, which includes the transition to the cardiovascular aspects developed in Synthesis of Results and later sections of the SR.\n\nMethodology\n\nThe main objective of systematic reviews is to respond 
to a specific question. In our case, it responds to three specific questions, with the proper context to make the review readable and understandable given the novelty of the topic and the ongoing research on multiple aspects.\n\nInitially the authors seem to do that, pointing to cardiovascular involvement, but then, when they write the introduction and results sections, this main point seems to disappear or is not clear to the reader.\n\nFrom the title it is clear that this SR is not only about cardiovascular involvement but also about the cardiovascular manifestations of chikungunya. Nevertheless, we will add clarifications to our Introduction in this regard.\n\nHowever, in the discussion section, authors re-take the objectives again.\n\nOk. Thanks.\n\nCertainly, the large amount of information given in the introduction section about the virus, history, classification, vector and cycles is very interesting, but it is not the purpose of this study.\n\nThis is not properly a study; it is an SR without meta-analysis. Given the novelty of the topic, we consider such aspects necessary for readers.\n\nI suggest shortening it and focusing on the cardiovascular involvement and complications.\n\nWe have done that in Synthesis of Results and later sections of the SR.\n\nI also recommend displaying a flow (figure) of how papers were excluded and included.\n\nWith our submission we provided the PRISMA flow chart and checklist; however, these were not published by the journal with our article. Considering their importance, and agreeing with you on this, we will incorporate the flow chart as Figure 1 within the manuscript, rather than as a supplementary file.\n\nWhat guideline did the authors follow in order to assess the risk of bias (PRISMA or Cochrane)? It is possible they followed the PRISMA approach (for example: http://prisma-statement.org/PRISMAStatement/Checklist.aspx). 
Please give this information, explaining the steps.\n\nWe followed the PRISMA statement, as recommended by F1000Research. We will clarify this further in our revised manuscript.\n\nIn the Methods section, the key words “Chikungunya AND Systemic AND Manifestation” used by the authors have probably given them no specific references or papers. Instead, why did they not also use specific key words such as “Chikungunya AND cardiac involvement”, “Chikungunya AND cardiac complication”, “Chikungunya AND cardiovascular involvement”, “Chikungunya AND cardiovascular complications” or “Chikungunya AND atypical manifestation/complications”? The idea is to be more specific and less general.\n\nGiven the lack of studies, we explored both options, finally opting to be more sensitive in order to include all possibly relevant studies for our SR.\n\nAccording to the authors, the protocol was registered in PROSPERO; however, it was not possible to view the registered protocol on the web (https://www.crd.york.ac.uk/prospero/searchadvanced.php).\n\nWe agree with you. This protocol was not prospectively registered in PROSPERO. As this is not a mandatory aspect for publication in F1000Research, this was modified later, but the published version appeared with that incorrect statement. It will be deleted in the revised version.\n\nI wonder if the inclusion of reviews may cause a bias. Please clarify and, if true, consider it in the “limitations section”.\n\nWe will expand our Limitations section.\n\nResults\n\nAuthors use “cardiac affectation” and also “cardiovascular compromise”. I recommend standardizing the term in order to avoid creating confusion for the reader.\n\nAgreed; we will use only \"compromise\" and not \"affectation\".\n\nAuthors describe in detail the clinical features during the acute, post-acute and chronic stages of CHIKV infection. 
I recommend shortening this and focusing on the topic. We will consider this in the revised version. In the “Atypical presentation” section, the authors show systemic manifestations considering involvement of different organ systems, such as neurological, cardiovascular, etc. Why did the authors not also use the key word “atypical”, rather than “systemic manifestation” or “extra-articular”, in the search strategy and also in the analyses? It is confusing. In chikungunya, the definition of systemic or extra-articular manifestations has not been well typified. Conversely, the atypical case was defined by PAHO/WHO during the expert consultation meeting in Managua, Nicaragua, 2016, and later published in the Weekly Epidemiological Record of the WHO (14 August 2015, 90, 33, 409-420). We will clarify this in our revised version. In Table 2, the heading “Systemic extra-articular involvement of atypical CHIKV” shows clinical manifestations; however, we do not learn more about who was affected (co-morbidities? older? younger?), how many (number/percentage), the type of paper (case report, case-control, etc.). Are these clinical manifestations in outpatients or inpatients? Are these early clinical features or complications? There are many questions around this information whose answers would be useful in order to form a correct interpretation. We will consider improving this Table. In Table 3, it is important to consider more information about the selected papers. I recommend a large table showing the different cardiovascular involvements/complications, for example: the clinical diagnoses (heart failure, acute coronary syndrome, refractory shock, and rhythm abnormalities), relevant signs and symptoms, especially early signs/symptoms (chest pain, dyspnea, bradycardia, etc.), laboratory diagnostics (troponin, BNP, CK-MB, etc.), imaging studies (echocardiography, magnetic resonance), final diagnosis (myocarditis?), management (inotropes, corticosteroids, etc.) and outcome (survived, died). 
I consider that these data would be important to understand the impact of cardiovascular involvement due to CHIKV. Unfortunately, the number of papers, as well as the data available, is too limited to fully perform that, although this was discussed by the group of authors of this SR. Nevertheless, we will consider your comment to improve this Table. In dengue, it is now known that cardiovascular involvement is mostly characterized by rhythm abnormalities (bradycardia) with no symptoms or complications. However, in moderate or severe cases where there was cardiovascular involvement or a complication, myocarditis has been an important cause. Myocarditis due to DENV infection may present in several patterns, such as “refractory shock”, “heart failure”, “arrhythmia”, etc., and it would be important to consider this diagnosis. In the comparison with CHIKV infection and cardiac involvement, myocarditis should be discussed. It is not clear what the authors are trying to assess when they say: “The cardiac tropism of CHIKV seems to be shared with DENV”; it is important to clarify. Thanks for this comment. We will use it in the revised version. Diagnosing myocarditis caused by these arboviruses requires a myocardial biopsy or cardiac magnetic resonance; however, performing these in the tropical areas where these arboviruses are prevalent is very hard. We fully agree with this. The management of myocarditis, whatever the etiology, focuses on management of the agent (virus, bacteria, etc.), management of the cardiovascular event (heart failure, cardiogenic shock, arrhythmia, etc.) and management of the inflammatory process. The last is under discussion and needs more research, although in some severe DENV cases corticosteroid administration changed the evolution. We fully agree with this. Discussion. The authors say, “The key for a successful outcome of CHIKV-induced cardiomyopathy is recognizing signs and symptoms”. Here I recommend repeating the most important ones. OK. 
Agreed. Limitations. This section should consider some of the biases common to this kind of study, where no clinical trials were included and case reports were included, such as “selection” or “publication” bias. We will comment further on that."
}
]
},
{
"id": "21386",
"date": "25 Apr 2017",
"name": "Cecilia Perret",
"expertise": [
"Reviewer Expertise Tropical and Travel Medicine. Viral emerging diseases"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nGeneral comment This is a very interesting article that compiles information relevant to an emerging and widely disseminated infection in the American continent whose complications are still under study and its impact still to be known. Specifically, there are some aspects of the review that need to be considered: Objectives: - Clearly specified focusing on the cardiovascular involvement of Chikungunya virus infection, frequency of presentation, clinical manifestations and laboratory elements such as the electrocardiogram\n\nMethodology - Articles published in French have been left out, which could be a limitation considering that the largest series of cases and their complications came from France in relation to the big outbreak on La Reunion island, and many of them published in French. - The search criteria are wide: Chikungunya AND systemic manifestations, heart, cardiac. However, with these criteria, you could lose some reports of severe disease that do not appear under these search criteria. Chikungunya AND mortality or Death could be included. 
Also, more specific criteria could be used in order to answer the question the authors propose regarding cardiovascular involvement in Chikungunya infection. - The inclusion of dengue in the search and in the results falls outside the objectives of this systematic review.\n\nThere is no reference to the use of the PRISMA checklist in this review; if it was used, make it more explicit in each of the items.\n\nResults\nIt is confusing and difficult to follow which results are obtained from the systematic review (40 articles) and which results are from articles not included in the review. Table 1, which shows the frequency of the involvement of different organs in chikungunya, can be absolutely biased since the inclusion criteria are articles with systemic involvement. According to the search criteria, the classical form of the disease has not been included, overestimating the frequency of systemic involvement. It is not clear what criteria were used to classify an organ involvement as very common or extremely rare. According to the authors, an unusual manifestation occurs in 20-39% of cases. This could be debatable. The clinical description of the disease in terms of acute, post-acute and chronic phases is irrelevant for the purposes of this review. Just mention the clinical aspects that are important for the objectives. In this sense, clarify the terms used, such as atypical manifestations, extra-articular manifestations, systemic disease and severe disease. They are used sometimes as synonyms and sometimes with different meanings. It is confusing. It is difficult to understand why, in Table 1, cardiovascular manifestations are as frequent as 54% but do not appear to be so in the text on atypical presentations. The entire section on cardiovascular manifestations in dengue goes beyond the purpose of the study and should not be mentioned in the results. Comparisons with the cardiovascular compromise in chikungunya, which are appreciated, can be presented in the discussion. 
At the end of the results and before management, the paragraph on studies determining the cardiovascular outcome in patients with chikungunya should be included in the conclusions.\n\nDiscussion The meaning of the phrase “cardiac compromise is not so common in isolated episodes” is not clear. The authors do not clearly indicate the take-home messages of this study and its main contribution.\n\nLimitations: the authors do not mention the limitations of their study. Conclusions: these do not correspond to the purpose of the study or its objectives.\n\nReferences In the references, it is not clear which articles were included in the systematic review and which were included for discussion.\n\nAre the rationale for, and objectives of, the Systematic Review clearly stated? Yes\n\nAre sufficient details of the methods and analysis provided to allow replication by others? Partly\n\nIs the statistical analysis and its interpretation appropriate? Not applicable\n\nAre the conclusions drawn adequately supported by the results presented in the review? No",
"responses": [
{
"c_id": "2673",
"date": "02 May 2017",
"name": "Alfonso Rodriguez-Morales",
"role": "Author Response",
"response": "Dear Dr. Perret, thanks for your valuable comments. Certainly the topic of this systematic review (without meta-analysis) concerns ongoing findings that will be better defined in the near future. Regarding your assessment, unfortunately it arrived after two other reviews had already been submitted; based on those, we had recently proceeded to develop the new revised version. Your comments are largely consistent with those of Dr. Vilcarromero. Most of your comments are therefore addressed in the revised version of the paper, including clarification of the objectives as well as of the Methodology (explaining the search criteria further and clarifying the use of the PRISMA checklist and flow diagram). Regarding the language of the articles included, we agree that this would be a limitation. Limitations are now better described in the new version of the article. As we explained, cardiovascular compromise and manifestations in dengue were a necessary comparison for the readers' understanding. In the new version, the Discussion and Conclusions were improved."
}
]
}
] | 1
|
https://f1000research.com/articles/6-390
|
https://f1000research.com/articles/6-67/v1
|
23 Jan 17
|
{
"type": "Research Note",
"title": "Training strategies and outcomes of ab interno trabeculectomy with the trabectome",
"authors": [
"Katherine Fallano",
"Igor Bussel",
"Larry Kagemann",
"Kira L. Lathrop",
"Nils A. Loewen"
],
"abstract": "Plasma-mediated ab interno trabeculectomy with the trabectome was first approved by the US Food and Drug Administration in 2004 for use in adult and pediatric glaucomas. Since then, increased clinical experience and updated outcome data have led to its expanded use, including a range of glaucomas and angle presentations, previously deemed to be relatively contraindicated. The main benefits are a high degree of safety, ease, and speed compared to traditional filtering surgery and tube shunts. The increasing burden of glaucoma and expanding life expectancy has resulted in demand for well-trained surgeons. In this article, we discuss the results of trabectome surgery in standard and nonstandard indications. We present training strategies of the surgical technique that include a pig eye model, and visualization exercises that can be performed before and at the conclusion of standard cataract surgery in patients who do not have glaucoma. We detail the mechanism of enhancing the conventional outflow pathway and describe methods of visualization and function testing.",
"keywords": [
"Glaucoma",
"training",
"microincisional glaucoma surgery",
"ab interno trabeculectomy",
"trabectome",
"canalogram"
],
"content": "Introduction\n\nThe trabecular meshwork (TM) is the main resistance of the conventional outflow route of aqueous humor in primary and - to an even greater extent - in secondary open angle glaucoma1. Several procedures and devices to bypass or ablate the TM exist. The main differences are the amount of access to angle structures measured in degrees of angle arc, the method of TM removal (ablation in ab interno trabeculectomy (AIT) versus disruption), and whether an implant remains in the eye or not.\n\nSuture or catheter trabeculotomy in gonioscopy-assisted transluminal trabeculotomy (GATT) and Trab360 (Sight Sciences, Menlo Park, CA, USA) can disrupt 360 degrees of TM through a single access site, whereas trabectome surgery (Neomedix Inc., Tustin, CA, USA) or goniotomy can achieve near 180 degrees of TM ablation or incision through a single clear corneal wound. While the use of a second site can increase ablation to 360 degrees, a 180-degree ablation provides additional flow approximately 30 degrees beyond each ablation endpoint2,3. The resulting circumferential flow can be detected experimentally and occurs even on the opposite side of the ablation4,5.\n\nA key feature of the trabectome is a ramping “footplate that provides the key function of lifting the TM and putting it on a slight stretch, positioning the tissue for maximal discharge effect from above while protecting underlying tissue”6. This device was described 15 years ago by Baerveldt and Chuck (http://www.google.com/patents/US6979328) and approved by the US Food and Drug Administration on February 9, 2004, for the treatment of adult and pediatric glaucoma (http://www.accessdata.fda.gov/cdrh_docs/pdf4/K040584.pdf). It represents the refinement of a mechanical goniectomy instrument (http://www.google.com/patents/US6979328). 
While the plasma created at the tip of the electrosurgical trabectome molecularizes the TM and is the more atraumatic and drag-free TM removal technique, it does require a high-frequency generator that is not necessary with the goniectome (http://www.google.com/patents/US6979328) or the dual blade introduced a few years later (https://patents.google.com/patent/US20150297400A1/en). Interest in this type of device has recently been rekindled for use in operating rooms where a high-frequency generator is not available7,8. Preclinical investigations on ab interno trabeculectomy with the trabectome and a dual-blade device demonstrated a similar decrease in intraocular pressure (IOP) in an eye perfusion model8. A key differentiator among these instruments, however, is the active irrigation and aspiration system that facilitates visualization, a challenge that is well known from the pediatric goniosurgery literature9–11.\n\nEndoscopic excimer and YAG-laser trabeculotomy are limited to only a few circular TM ablation spots because the probe has to touch and ideally be parallel to the TM. This can be best achieved with the tip resting against the chamber angle opposite to the insertion site. TM micro-bypass implants (e.g. the iStent, Glaukos, Laguna Hill, CA, USA) are similar to such laser trabeculotomy by creating a single lumen access with an outflow enhancement over approximately 60 degrees of angle structures2,3 unless several implants are used, or a longer scaffold is inserted12,13. Compared to epibulbar drainage implants, angle surgery places unique demands on surgeons, due to the highly confined space of the angle that is approximately 200-fold smaller. Vulnerable structures are in proximity to the TM ablation (Figure 1) and consist of the deep intrascleral venous plexus, aqueous veins, mid-limbal intrascleral plexus and vessels of the iris root14. Injuring those can present postoperative challenges and discourage new surgeons. 
It is, therefore, important to be persistent in learning the proper technique. In this article, we discuss AIT specifically with the trabectome (Neomedix, Inc.) and consider a method from other angle surgeries, a trabecular microbypass. In addition, we present a training model that has evolved from a pig eye research system and uses fluorescein or fluorescent spheres to trace or quantify outflow, and we discuss how many eyes are required to become a safe surgeon and why three to four times more eyes are needed to learn how to master this surgery. A guide to practice angle visualization and mock techniques prior to or at the conclusion of cataract surgery is presented, to help trainees who do not have access to pig eyes.\n\nVulnerable structures in proximity are the iris root, the suprachoroidal space, the cornea and the deep intrascleral venous plexus.\n\n\nSurgical and training methodologies\n\nAIT is performed first for optimal angle visualization. A 1.8 mm wide iris planar clear corneal incision is fashioned approximately 2 mm anterior to the surgical limbus. No viscoelastic is used at this stage as it can contribute to carbonization during ablation. The patient’s head is rotated about 40 degrees away from the surgeon, and the microscope is tilted in the opposite direction. The ideal gonioscopic TM visualization is achieved when the angle between the microscope and the patient’s eye is about 70 to 80 degrees. The incision is gaped to induce hypotony and enable identification of Schlemm’s canal from refluxed blood. It is also possible to use Trypan blue to stain the TM if desired (https://www.google.com/patents/US6372449). If the anterior chamber is too shallow for a full insertion, the irrigation ports of the metal sleeve allow forming the anterior chamber by resting the ports against the outer lips of the incision while the tip is already inside of the eye. 
The trabectome is engaged in the TM with the tip pointing 45 degrees upward just anterior to the scleral spur for a more pointed entry into the meshwork (Figure 2); a slightly offset approach towards the left further facilitates the engagement. The trabectome is then advanced parallel to the canal, with no outward push toward its wall. The handpiece is turned 180 degrees to complete the clockwise ablation. By tilting the goniolens toward the brow and then toward the cheek (or in inverse order depending on whether the right or left eye is operated on), it is feasible to visualize the superior and inferior angle structures and remove nearly 180 degrees of meshwork. After removing the trabectome, viscoelastic is injected to pressurize the eye. If proceeding with cataract surgery, the incision is enlarged with a regular keratome. A more detailed explanation of this technique can be found in Polat and Loewen15.\n\nThe TM is engaged toward the left and with a 45 degree upward stroke before straightening out after Schlemm’s canal is entered.\n\nMicroincisional glaucoma surgeries (MIGS)16,17 occur in a space that is approximately 200-fold smaller than what is used during implantation of epibulbar glaucoma drainage devices18, making them challenging to learn. The iris root, ciliary body band, suprachoroidal space, and the deep venous plexus distal to the outer wall of Schlemm’s canal are structures that can become injured with variable sequelae19,20. Mastery of this surgery hinges on becoming proficient at visualizing the angle, identifying the correct target, avoiding trauma and maximizing the ablation length. Unfortunately, commonly used simulators or model systems with synthetic eyes in cataract surgery wet labs are not available for MIGS. Therefore, most aspiring MIGS surgeons practice on glaucoma patients, even though complication rates reported early in the ophthalmic surgery learning curve are nearly tenfold those toward its end21. 
For this reason, we developed a safe and low-cost training environment that uses pig eyes mounted into a model head (Figure 3;3,4), which permits objective tracking of progress. In these eyes, outflow is traced by infusion of diluted fluorescein (0.017 mg/ml;3–5), taken from the preparation used for intravenous applications in ophthalmic angiography or from the bottle used for tonography, although the latter source contains a preservative that may change diffusion barriers. Fluorescein has the advantage of diffusing through the TM, allowing estimation of flow speeds in non-ablated parts of the eye. A downside is that diffusion also occurs over time through intact vascular endothelium to stain the extravascular space. This is not the case within the 15 minutes of outflow tracing when fluorescent spheres of 0.5 microns are used (100-fold dilution of FluoSpheres Carboxylate-Modified Microspheres, 0.5 µm, yellow-green fluorescent (505/515), 2% solids, Thermo Fisher Scientific, Eugene, OR, USA;4,22). Fluorescent beads provide a less time-sensitive, beginner-friendly method of quantifying the extent of ablation, but limit the user to a semi-quantitative assessment that does not provide flow speeds or volume estimates as fluorescein does. 
Consequently, we developed a new, automated method that uses an SD-OCT optics engine (Bioptigen, Research Triangle, Durham, NC, USA) coupled with a wide-bandwidth diode array (870-nm center wavelength, 200-nm bandwidth; model Q870; Superlum Ltd, Dublin, Ireland). If using an ex vivo model, the eye is placed, facing up, into a holder. If a patient’s eye is imaged, it is best to provide a fixation target. The head of the SD-OCT has to be steadied with a mount during the scan to prevent motion artifacts. After obtaining individual radial scan sets, each clock hour is imaged with a density of 512×512 axial scans to acquire a 2 by 3 mm area of tissue. After pre- and post-processing of the images, 3D rendered stacks can be assembled and segmented using Fiji/ImageJ (ImageJ 1.50b; http://imagej.nih.gov/ij, Wayne Rasband, National Institutes of Health, Bethesda, MD, USA)30. This automated segmentation protocol creates a virtual cast of the aqueous outflow tract (Figure 4, right).\n\nPatency of outflow structures can be visualized in this pig eye model with spectral domain-optical coherence tomography (SD-OCT). Please see the following references for more information3–5,22.\n\nDepending on the equipment that is available, an ex vivo training system might be too challenging to implement. Our team has found the following to be a useful technique for setting up a simulation in cataract patients: 1) positioning the patient’s head; 2) setting up the microscope; 3) gonioscopic visualization of the angle; and 4) induction of blood reflux to identify Schlemm’s canal. The headrest needs to be close enough to the stand of the main body of the microscope to accommodate the tilted view and greater distance from it. Trainees can tell their patient that they would like to take a brief look at the angle of the eye. With the surgeon seated temporally, the patient's head is tilted away by 30 degrees. 
The microscope is set up by centering the head of the microscope, confirming that the tilt knob is covered with a handle and then tilting it toward the surgeon by 30 degrees. The microscope is lowered manually to bring the limbus into focus. Practicing gonioscopic visualization of the angle requires confirming the proper handedness of the modified Swan Jacob gonioprism, placing it on the eye and moving the microscope’s focus down toward the nasal angle. The iris root, ciliary body band and trabecular meshwork can be seen and need to be distinguished from Schwalbe’s line and the Sampaolesi line. Tapping lightly with 0.12 forceps on the posterior lip of the primary cataract incision induces blood reflux that identifies Schlemm’s canal. After several seconds, the goniolens is placed onto the cornea again to visualize a Schlemm’s canal partially filled with venous blood.\n\n\nOutcomes from trabectome surgery\n\nBy removing the primary outflow resistance (the TM), the aqueous humor can more easily pass into the collector channels and aqueous veins (Figure 4). According to the Goldmann equation (Intraocular Pressure = [Aqueous Humor Formation/Outflow Facility] + Episcleral Venous Pressure), this should cause the IOP to drop to the level of the episcleral veins24. However, this is rare and most postoperative IOP averages are around 16 mmHg25. The change of diameters, collapse or patency of collector channels can be imaged using SD-OCT (Figure 4, right;29). Similar to human eyes, pig eyes have more outflow along the nasal drainage system compared with the temporal angle; nasal TM ablation is able to enhance outflow beyond physiological levels and causes fluorescein to flow circumferentially through small connections between the Schlemm’s canal-like segments that are characteristic of the angular aqueous plexus of the pig3–5. New reconstruction of the outflow tract via SD-OCT confirmed that the presence of aqueous spaces matched collectors where flow was seen. 
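The Goldmann relationship quoted above can be illustrated with a short numeric sketch. The specific values below (aqueous formation of 2.5 µL/min, outflow facilities of 0.12 and 0.30 µL/min/mmHg, episcleral venous pressure of 8 mmHg) are illustrative assumptions in the physiological range, not data from this article.

```python
def goldmann_iop(formation_ul_min, facility_ul_min_mmhg, evp_mmhg):
    """Goldmann equation: IOP = (aqueous humor formation / outflow facility) + EVP."""
    return formation_ul_min / facility_ul_min_mmhg + evp_mmhg

# Assumed illustrative values: improving outflow facility after TM ablation
# moves IOP toward, but never below, the episcleral venous pressure.
iop_impaired = goldmann_iop(2.5, 0.12, 8.0)  # ~28.8 mmHg with poor facility
iop_ablated  = goldmann_iop(2.5, 0.30, 8.0)  # ~16.3 mmHg with improved facility
```

Note how even a large facility gain leaves the modeled IOP in the mid-teens, consistent with the observation above that postoperative averages settle near 16 mmHg rather than at episcleral venous pressure.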
However, non-perfused vascular structures exist that might belong to the arterial or venous vascular system or reflect poorly perfused collector channels.\n\nUsing the pig eye training system mentioned above, we previously found that surgical time decreases by 1.4 minutes per eye in a linear fashion, and the ablation arc length follows an asymptotic function with a half-maximum after 5.3 eyes, while an operating room score achieves a half-maximum after only 2.5 eyes22,26. This rapid improvement is contrasted by a slower slope of outflow in canalograms in this model, suggesting that achievement of true mastery requires about 29 eyes22.\n\nRecent discoveries indicate valve-like structures that appear to guard the collector channel orifices and collapsible aqueous veins27,28, calling into question the simplistic view that collector channel openings are round and unobstructed. Structural data indicate that flaps are kept in suspension through string-like attachments. This suggests that either these attachments should be maintained or that the flaps at the opening of collector channels need to be removed. It is not proven which procedures do this, but it is likely that a longer scaffold that displaces the TM away from the outer wall will consequently disrupt the flap attachments and allow the flaps to remain. Electron microscopic images of the outer wall after trabectome ablation suggest that most of them are removed together with their attachments and the TM29. 
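The learning-curve figures above can be sketched with a simple saturating model. The hyperbolic form below is an assumption chosen only because it matches the reported behavior (asymptotic, with a half-maximum after 5.3 practice eyes); the 180-degree plateau is taken from the near-180-degree ablation described earlier, not from a fitted dataset.

```python
def saturating(n_eyes, half_max_eyes, plateau):
    """Hyperbolic saturation: returns half of `plateau` when n_eyes == half_max_eyes."""
    return plateau * n_eyes / (n_eyes + half_max_eyes)

# Ablation arc length versus number of practice eyes (assumed 180-degree
# plateau; half-maximum after 5.3 eyes as reported for the pig eye model).
arc_half    = saturating(5.3, 5.3, 180.0)   # 90 degrees: exactly half the plateau
arc_mastery = saturating(29.0, 5.3, 180.0)  # ~152 degrees after the ~29 eyes
                                            # suggested for true mastery
```

Under this assumed form, progress is fastest in the first handful of eyes and the curve flattens well before the plateau, which is consistent with the contrast drawn above between rapid early improvement and the slower approach to mastery.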
A recent review of 498 patients undergoing phaco-trabectome surgery demonstrated a greater reduction in IOP in more severe glaucoma, as well as in steroid-induced glaucoma32,33 and pseudoexfoliative glaucoma34,35. The focus on a percentage reduction is misleading because of the increase in outflow facility, which is only limited by post-trabecular outflow resistance. Patients with a high preoperative IOP can be expected to drop toward a similar postoperative IOP, near 16 mmHg, as patients with a lower preoperative IOP32,36. Non-matched studies of phaco-trabectome and trabectome outcomes do not take into account that the second group has an IOP reduction as the primary indication and a higher baseline compared to the group of cataract surgery patients, many of whom may have stable glaucomas but would like to take advantage of reduced eye drop dependency.\n\nAIT alone in pseudophakic or phakic eyes. Trabectome surgery alone can be a useful alternative to more invasive procedures in patients who have already undergone cataract surgery, as well as those who do not have a visually significant cataract. The lens status or performance of phacoemulsification in the same session has no significant impact37,38 on IOP reduction. In a review of 235 pseudophakic patients undergoing trabectome surgery compared to 352 patients undergoing phaco-trabectome, individuals with phaco-trabectome had only slightly lower IOPs (by 0.73 +/- 0.32 mm Hg) than patients undergoing trabectome alone37.\n\nIn a prospective study of 261 patients undergoing trabectome or phaco-trabectome, there was a trend toward a greater benefit in phaco-trabectome compared to trabectome alone in phakic or pseudophakic eyes39. However, in a review of 255 phakic patients undergoing trabectome compared with 498 patients undergoing phacoemulsification combined with trabectome, phakic patients had a 21% reduction in IOP compared to an 18% reduction in patients undergoing phacoemulsification. 
There was no statistically significant difference in IOP or the number of medications between the groups, suggesting that phacoemulsification itself may not contribute significantly to pressure lowering in these patients38.\n\nGoniosynechialysis and AIT in narrow angles and angle closure. Previously, angle-based glaucoma surgery in patients with narrow angles has been thought more likely to result in synechiae and fibrosis, and this has been considered a relative contraindication to trabectome surgery. However, a retrospective review of 671 patients undergoing either trabectome or phaco-trabectome offers evidence that trabectome surgery can be successful even in these patients. Patients with an angle judged as Shaffer grade 2 or less (narrow) had a 42% reduction in IOP at one year after trabectome surgery and a 24% reduction in IOP at one year after phaco-trabectome. Similarly, patients with an angle judged as Shaffer grade 3 or above (open) had a 37% reduction in IOP at one year after trabectome surgery and a 25% reduction in IOP at one year after phaco-trabectome. There was no statistically significant difference between the groups in IOP, the number of medications, or success rates, suggesting that trabectome surgery is a viable option for patients with narrow angles40.\n\nFailed trabeculectomy or tube shunt. Reoperations after failed trabeculectomy or tube shunt are some of the most challenging surgeries for a glaucoma specialist. Trabectome surgery is a suitable, minimally invasive alternative to a revision or repeat filter or shunt. In a retrospective review of 20 patients undergoing trabectome surgery after prior failed tube shunt, there was a statistically significant reduction in IOP from 23.7 +/- 6.4 mm Hg pre-op to 15.5 +/- 3.2 mm Hg at 12 months. 
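The percentage reductions quoted throughout these outcome summaries follow directly from the pre- and postoperative means; for instance, the failed-tube-shunt series above (23.7 to 15.5 mmHg) corresponds to roughly a 35% drop. A minimal helper makes the arithmetic explicit:

```python
def pct_iop_reduction(pre_mmhg, post_mmhg):
    """Percentage IOP reduction from the pre- to the postoperative mean."""
    return 100.0 * (pre_mmhg - post_mmhg) / pre_mmhg

# Failed tube shunt series quoted above: 23.7 -> 15.5 mmHg at 12 months.
reduction = pct_iop_reduction(23.7, 15.5)  # ~34.6%
```

Because postoperative pressures cluster near the same floor regardless of baseline, the same absolute drop yields a larger percentage in eyes starting lower, one reason the text cautions against comparing groups by percentage reduction alone.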
The authors report an 84% success rate at 12 months, with only three patients requiring further surgery41.\n\nIn a retrospective review of 73 patients undergoing trabectome surgery after failed trabeculectomy, there was a 28% reduction in IOP following trabectome and a 19% reduction following phaco-trabectome, with a 1-year probability of success of 81% for trabectome and 87% for phaco-trabectome42. Another recent review of 60 patients undergoing trabectome surgery after failed trabeculectomy demonstrated a 36% reduction in IOP and a 14% decrease in the number of IOP-lowering medications, with 25% of patients requiring further surgery in the course of follow-up43. Although these studies are limited by their retrospective nature and the relatively small number of patients included, the results suggest that the distal outflow tract is patent and functioning, contradicting the assumption that an unused outflow system atrophies. Presumably, this assumption was the product of a misinterpretation of the high IOP that often follows when the drainage cleft of a cyclodialysis, a historical surgery now rarely performed, closes44–46. The cause of the lack of conventional outflow is instead the failure of the TM to allow fluid passage after extended periods of having been bypassed.\n\nAdjuvant AIT at the time of tube shunt implantation. Finally, with its favorable safety profile, trabectome surgery may be a valuable adjuvant at the time of both valved and non-valved tube shunt implantation. In a matched comparison of 117 patients undergoing Baerveldt tube implantation alone versus 60 patients undergoing Baerveldt combined with trabectome surgery, both groups showed similar IOPs and visual acuities at each postoperative time point. However, the combination group required fewer IOP-lowering drops at each time point than the group receiving the tube alone. Therefore, adjuvant trabectome surgery may improve the quality of life of patients by reducing medication burden47. 
Results from Ahmed tube implantations were similar48; trabectome surgery plus two medications reduced IOP by 12 mmHg, while Ahmed implantation plus four medications lowered IOP by 15 mmHg.\n\nAIT in severe glaucoma. When trabectome surgery outcomes are stratified by glaucoma severity, patients with more medications, a higher baseline IOP and a worse visual field experience a larger IOP reduction than subjects with less aggressive glaucoma32,36. We previously created a glaucoma index to capture clinical treatment resistance and risk by combining baseline IOP, the number of medications that achieved this pressure, and visual field status32,36,49. Our experience with trabecular ablation after failed trabeculectomy suggested that removal of a highly impaired TM might have a larger effect than removal of an only mildly impaired TM. Analysis of 843 patients indicated that patients in the most advanced glaucoma group had a threefold larger IOP reduction than those in the mild glaucoma group36. Reflecting the increasing glaucoma severity across the four severity groups we created, individuals with advanced glaucoma had a lower success rate of 71% compared to patients with mild glaucoma, who had a success rate of 90%. This risk of failure needs to be taken into consideration when trabectome surgery is selected to avoid a more complication-prone traditional procedure. Otherwise, traditional filtering surgeries remain a good option to advance treatment after failed trabectome surgery50, while noninvasive selective laser trabeculoplasty is less successful in this situation51. Age, Hispanic ethnicity, steroid-induced glaucoma and cup-to-disk ratio were also found to be significantly associated with a greater IOP reduction36. An analysis of stratified phaco-trabectome outcomes showed that worse glaucoma, pseudoexfoliation and steroid-induced glaucoma contributed to IOP reduction, but not ethnicity or cup-to-disk ratio49.
To discover additional factors with the increased statistical power of a larger sample, we performed a combined analysis of 1340 phaco-trabectome and trabectome patients by glaucoma severity32, justified by the negligible impact of phacoemulsification on IOP37,38. This study was consistent with the more limited analysis of 843 patients36 and indicated a fourfold larger IOP reduction in patients in the worst glaucoma group. We found that patients of Hispanic ethnicity had an IOP reduction that was nearly 4 mmHg greater. In addition, pseudoexfoliation and steroid glaucoma imparted IOP reductions that were greater by 3 mmHg and 4 mmHg, respectively, than in POAG32. In contrast to the trabectome glaucoma severity analysis, the cup-to-disk ratio was not significantly associated with IOP reduction in the expanded study combining trabectome with phaco-trabectome patients.\n\nIn general, the trabectome has a highly favorable safety profile, particularly in comparison to incisional glaucoma surgery. According to a recent meta-analysis25, the most common complications of trabectome surgery include hyphema (up to 100%), peripheral anterior synechiae (14%), corneal injury (6%) and temporary IOP spike (6%). Less common complications include transient hypotony lasting less than 3 months (1.5%), iris injury (1%), cystoid macular edema (1.5% when combined with phacoemulsification), and cataract progression (1.2%). Among all 3828 patients reported by the meta-analysis, there are case reports of a few rare complications, including cyclodialysis cleft (2 cases), aqueous misdirection (4 cases), choroidal hemorrhage (1 case), and endophthalmitis (1 case)25.\n\n\nConclusions\n\nTrabectome surgery is a durable surgical technique that is applicable to a wide range of glaucomas, including narrow-angle glaucoma and severe glaucoma after failed incisional surgery.
While it has not replaced filtration or tube shunt surgery, it is a valuable addition to a previously limited surgical toolkit in the treatment of glaucoma. The combination of canalograms to estimate localized outflow function and virtual casting of vascular spaces reconstructed with SD-OCT provides new tools to investigate the mechanisms and failure of outflow enhancement. New surgeons can practice angle surgery in a pig eye training model that allows their progress to be measured. Learning curves from junior surgeons suggest that they operate safely after 5 to 7 eyes and master the technique after about 30 eyes.",
"appendix": "Author contributions\n\n\n\nAll authors participated in the research, writing, and preparation of this article at all stages. KF: experimental design, manuscript writing and reviewing, data analysis, data interpretation. IB: experimental design, data acquisition, data analysis, data interpretation, manuscript writing. LK: experimental design, data acquisition, data analysis. KL: experimental design, data acquisition, data analysis, data interpretation, manuscript preparation. NAL: experimental design, data acquisition, data analysis, data interpretation, manuscript writing, funding.\n\n\nCompeting interests\n\n\n\nKAF, IIB, LK, KL have no financial disclosures.\n\nNAL has received honoraria for trabectome wet labs and lectures from Neomedix Corp.\n\n\nGrant information\n\nThe authors are grateful for assistance provided by the Initiative to Cure Glaucoma of the Eye and Ear Foundation of Pittsburgh.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nReferences\n\nMäepea O, Bill A: Pressures in the juxtacanalicular tissue and Schlemm’s canal in monkeys. Exp Eye Res. 1992; 54(6): 879–83. PubMed Abstract | Publisher Full Text\n\nRosenquist R, Epstein D, Melamed S, et al.: Outflow resistance of enucleated human eyes at two different perfusion pressures and different extents of trabeculotomy. Curr Eye Res. 1989; 8(12): 1233–40. PubMed Abstract | Publisher Full Text\n\nParikh HA, Loewen RT, Roy P, et al.: Differential Canalograms Detect Outflow Changes from Trabecular Micro-Bypass Stents and Ab Interno Trabeculectomy. Sci Rep. 2016; 6: 34705. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLoewen RT, Brown EN, Scott G, et al.: Quantification of Focal Outflow Enhancement Using Differential Canalograms. Invest Ophthalmol Vis Sci. 2016; 57(6): 2831–8. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nLoewen RT, Brown EN, Roy P, et al.: Regionally Discrete Aqueous Humor Outflow Quantification Using Fluorescein Canalograms. PLoS One. 2016; 11(3): e0151754. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFrancis BA, See RF, Rao NA, et al.: Ab interno trabeculectomy: development of a novel device (Trabectome) and surgery for open-angle glaucoma. J Glaucoma. 2006; 15(1): 68–73. PubMed Abstract\n\nSooHoo JR, Seibold LK, Kahook MY: Ab interno trabeculectomy in the adult patient. Middle East Afr J Ophthalmol. 2015; 22(1): 25–9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSeibold LK, SooHoo JR, Ammar DA, et al.: Preclinical investigation of ab interno trabeculectomy using a novel dual-blade device. Am J Ophthalmol. 2013; 155(3): 524–9.e2. PubMed Abstract | Publisher Full Text\n\nChoi EY, Walton DS: Goniotomy for Steroid-Induced Glaucoma: Clinical and Tonographic Evidence to Support Therapeutic Goniotomy. J Pediatr Ophthalmol Strabismus. 2015; 52(3): 183–8. PubMed Abstract | Publisher Full Text\n\nChen TC, Walton DS: Goniosurgery for prevention of aniridic glaucoma. Arch Ophthalmol. 1999; 117(9): 1144–8. PubMed Abstract | Publisher Full Text\n\nYeung HH, Walton DS: Pediatric Glaucoma: Angle Surgery and Glaucoma Drainage Devices. In: Giaconi JA, Law SK, Nouri-Mahdavi K, Coleman AL, Caprioli J, editors. Pearls of Glaucoma Management. Springer Berlin Heidelberg, 2016; 487–94. Publisher Full Text\n\nJohnstone MA, Saheb H, Ahmed II, et al.: Effects of a Schlemm canal scaffold on collector channel ostia in human anterior segments. Exp Eye Res. 2014; 119: 70–6. PubMed Abstract | Publisher Full Text\n\nGulati V, Fan S, Hays CL, et al.: A novel 8-mm Schlemm’s canal scaffold reduces outflow resistance in a human anterior segment perfusion model. Invest Ophthalmol Vis Sci. 2013; 54(3): 1698–704. 
PubMed Abstract | Publisher Full Text\n\nvan der Merwe EL, Kidson SH: Advances in imaging the blood and aqueous vessels of the ocular limbus. Exp Eye Res. 2010; 91(2): 118–26. PubMed Abstract | Publisher Full Text\n\nPolat JK, Loewen NA: Combined phacoemulsification and trabectome for treatment of glaucoma. Surv Ophthalmol [Internet]. [cited 2016 Mar 31], 2016; pii: S0039-6257(15)30015-1. PubMed Abstract | Publisher Full Text\n\nKaplowitz K, Schuman JS, Loewen NA: Techniques and outcomes of minimally invasive trabecular ablation and bypass surgery. Br J Ophthalmol. 2014; 98(5): 579–85. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKaplowitz K, Loewen NA: Minimally Invasive and Nonpenetrating Glaucoma Surgery. In: Yanoff, MSDJ editors. Ophthalmology: Expert Consult. Elsevier; 2013; 1133–46. Reference Source\n\nChristakis PG, Tsai JC, Kalenak JW, et al.: The Ahmed versus Baerveldt study: three-year treatment outcomes. Ophthalmology. 2013; 120(11): 2232–40. PubMed Abstract | Publisher Full Text\n\nKaplowitz K, Bussel II, Honkanen R, et al.: Review and meta-analysis of ab-interno trabeculectomy outcomes. Br J Ophthalmol [Internet]. 2016; 100(5): 594–600. PubMed Abstract | Publisher Full Text\n\nKaplowitz K, Abazari A, Honkanen R, et al.: iStent surgery as an option for mild to moderate glaucoma. Expert Rev Ophthalmol. 2014; 9(1): 11–6. Publisher Full Text\n\nMartin KR, Burton RL: The phacoemulsification learning curve: per-operative complications in the first 3000 cases of an experienced surgeon. Eye (Lond). 2000; 14(Pt 2): 190–5. PubMed Abstract | Publisher Full Text\n\nDang Y, Waxman S, Wang C, et al.: Rapid learning curve assessment in an ex vivo training system for microincisional glaucoma surgery [Internet]. Researchgate.net. 2016; [cited 2016 Dec 9]. Reference Source\n\nFrancis AW, Kagemann L, Wollstein G, et al.: Morphometric analysis of aqueous humor outflow structures with spectral-domain optical coherence tomography. Invest Ophthalmol Vis Sci. 
2012; 53(9): 5198–207. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBrubaker RF: Goldmann’s equation and clinical measures of aqueous dynamics. Exp Eye Res. 2004; 78(3): 633–7. PubMed Abstract | Publisher Full Text\n\nKaplowitz K, Bussel II, Honkanen R, et al.: Review and meta-analysis of ab-interno trabeculectomy outcomes. Br J Ophthalmol. 2016; 100(5): 594–600. PubMed Abstract | Publisher Full Text\n\nLoewen N: Trabectome data update 2016 [Internet]. American Academy of Ophthalmology. Chicago. 2016; [cited 2016 Dec 3]. Publisher Full Text\n\nXin C, Wang RK, Song S, et al.: Aqueous outflow regulation: Optical coherence tomography implicates pressure-dependent tissue motion. Exp Eye Res [Internet]. 2016; pii: S0014-4835(16)30148-8. PubMed Abstract | Publisher Full Text\n\nJohnstone M: 3. Intraocular pressure control through linked trabecular meshwork and collector channel motion. In: Knepper PA, Samples JR, editors. Glaucoma Research and Clinical Advances 2016 to 2018. Kugler Publications; 2016; 41. Reference Source\n\nKaplowitz K, Loewen NA: Minimally Invasive Glaucoma Surgery: Trabeculectomy Ab Interno. In: Samples JR, Ahmed IIK, editors. Surgical Innovations in Glaucoma. Springer New York; 2014; 175–86. Publisher Full Text\n\nIordanous Y, Kent JS, Hutnik CM, et al.: Projected Cost Comparison of Trabectome, iStent, and Endoscopic Cyclophotocoagulation Versus Glaucoma Medication in the Ontario Health Insurance Plan. J Glaucoma [Internet]. 2014; 23(2): e112–8. PubMed Abstract | Publisher Full Text\n\nMinckler D, Mosaed S, Dustin L, et al.: Trabectome (trabeculectomy-internal approach): additional experience and extended follow-up. Trans Am Ophthalmol Soc. 2008; 106: 149–59; discussion 159–60. PubMed Abstract | Free Full Text\n\nDang Y, Roy P, Bussel II, et al.: Combined analysis of trabectome and phaco-trabectome outcomes by glaucoma severity [version 2; referees: 3 approved]. F1000Res. 2016; 5: 762. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nNgai P, Kim G, Chak G, et al.: Outcome of primary trabeculotomy ab interno (Trabectome) surgery in patients with steroid-induced glaucoma. Medicine (Baltimore). 2016; 95(50): e5383. PubMed Abstract | Publisher Full Text\n\nWidder RA, Dinslage S, Rosentreter A, et al.: A new surgical triple procedure in pseudoexfoliation glaucoma using cataract surgery, Trabectome, and trabecular aspiration. Graefes Arch Clin Exp Ophthalmol. 2014; 252(12): 1971–5. PubMed Abstract | Publisher Full Text\n\nTing JL, Damji KF, Stiles MC, et al.: Ab interno trabeculectomy: outcomes in exfoliation versus primary open-angle glaucoma. J Cataract Refract Surg. 2012; 38(2): 315–23. PubMed Abstract | Publisher Full Text\n\nLoewen RT, Roy P, Parikh HA, et al.: Impact of a Glaucoma Severity Index on Results of Trabectome Surgery: Larger Pressure Reduction in More Severe Glaucoma. PLoS One. 2016; 11(3): e0151926. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNeiweem AE, Bussel II, Schuman JS, et al.: Glaucoma Surgery Calculator: Limited Additive Effect of Phacoemulsification on Intraocular Pressure in Ab Interno Trabeculectomy. PLoS One. 2016; 11(4): e0153585. PubMed Abstract | Publisher Full Text | Free Full Text\n\nParikh HA, Bussel II, Schuman JS, et al.: Coarsened Exact Matching of Phaco-Trabectome to Trabectome in Phakic Patients: Lack of Additional Pressure Reduction from Phacoemulsification. PLoS One. 2016; 11(2): e0149384. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAnton A, Heinzelmann S, Neß T, et al.: Trabeculectomy ab interno with the Trabectome® as a therapeutic option for uveitic secondary glaucoma. Graefes Arch Clin Exp Ophthalmol. 2015; 253(11): 1973–8. PubMed Abstract | Publisher Full Text\n\nBussel II, Kaplowitz K, Schuman JS, et al.: Outcomes of ab interno trabeculectomy with the trabectome by degree of angle opening. Br J Ophthalmol. 2015; 99(7): 914–9. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nMosaed S, Chak G, Haider A, et al.: Results of Trabectome Surgery Following Failed Glaucoma Tube Shunt Implantation: Cohort Study. Medicine (Baltimore). 2015; 94(30): e1045. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBussel II, Kaplowitz K, Schuman JS, et al.: Outcomes of ab interno trabeculectomy with the trabectome after failed trabeculectomy. Br J Ophthalmol. 2015; 99(2): 258–62. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWecker T, Neuburger M, Bryniok L, et al.: Ab Interno Trabeculectomy With the Trabectome as a Valuable Therapeutic Option for Failed Filtering Blebs. J Glaucoma. 2016; 25(9): 758–62. PubMed Abstract | Publisher Full Text\n\nSewall EC: Cyclodialysis For Chronic Glaucoma. Cal State J Med. 1907; 5(5): 119–21. PubMed Abstract | Free Full Text\n\nChandler PA, Maumenee AE: A major cause of hypotony. Am J Ophthalmol. 1961; 52: 609–18. PubMed Abstract | Publisher Full Text\n\nBöke H: [History of cyclodialysis. In memory of Leopold Heine 1870–1940]. Klin Monbl Augenheilkd. 1990; 197(4): 340–8. PubMed Abstract | Publisher Full Text\n\nKnowlton P, Bilonick R, Loewen N: Baerveldt tube shunts with trabectome surgery in a matched comparison to Baerveldt tube shunts [Internet]. F1000Res. 2016; [cited 2016 Dec 3]. Publisher Full Text\n\nKola S, Kaplowitz K, Brown E, et al.: Case-matched results of trabectome ab interno trabeculectomy versus ahmed glaucoma implant. F1000Res [Internet]. 2016; 5. [cited 2016 Mar 31]. Publisher Full Text\n\nRoy P, Loewen RT, Dang Y, et al.: Stratification of phaco-trabectome surgery results using a glaucoma severity index [Internet]. 2016; [cited 2016 Dec 16]. Publisher Full Text\n\nJea SY, Mosaed S, Vold SD, et al.: Effect of a failed trabectome on subsequent trabeculectomy. J Glaucoma. 2012; 21(2): 71–5. 
PubMed Abstract\n\nTöteberg-Harms M, Rhee DJ: Selective laser trabeculoplasty following failed combined phacoemulsification cataract extraction and ab interno trabeculectomy. Am J Ophthalmol. 2013; 156(5): 936–40.e2. PubMed Abstract | Publisher Full Text\n\nLoewen N: Trabectome data update 2016 [Internet]. American Academy of Ophthalmology. Chicago. 2016; [cited 2016 Dec 3]. Publisher Full Text\n\nXin C, Wang RK, Song S, et al.: Aqueous outflow regulation: Optical coherence tomography implicates pressure-dependent tissue motion. Exp Eye Res [Internet]. 2016; pii: S0014-4835(16)30148-8. PubMed Abstract | Publisher Full Text\n\nIordanous Y, Kent JS, Hutnik CM, et al.: Projected Cost Comparison of Trabectome, iStent, and Endoscopic Cyclophotocoagulation Versus Glaucoma Medication in the Ontario Health Insurance Plan. J Glaucoma [Internet]. 2014; 23(2): e112–8. PubMed Abstract | Publisher Full Text\n\nKnowlton P, Bilonick R, Loewen N: Baerveldt tube shunts with trabectome surgery in a matched comparison to Baerveldt tube shunts [Internet]. F1000Res. 2016; [cited 2016 Dec 3]. Publisher Full Text\n\nRoy P, Loewen RT, Dang Y, et al.: Stratification of phaco-trabectome surgery results using a glaucoma severity index [Internet]. University of Pittsburgh, 2016; [cited 2016 Dec 16]. Publisher Full Text"
}
|
[
{
"id": "19579",
"date": "15 Feb 2017",
"name": "Randolf A. Widder",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThis well-written paper is an extensive and excellent review of the outcome of glaucoma surgery after the trabectome procedure and is worth publishing. Training strategies are also reported, such as performing surgery in a pig's eye model and practicing visualisation of the trabecular meshwork in cataract patients.\n\nSome minor remarks\n\nThe authors claim that 180 degrees of the trabecular meshwork is removed during surgery (see \"Surgical and training methodologies\"). This is probably not the method used by all trabectome surgeons. They should explain the guidelines of the manufacturing company or the experience from the early studies in which the removal of less trabecular meshwork was used or advised.\n\nMost of the literature is based on a large databank managed by the manufacturing company. Only a few patient series exist that are not part of this databank. The authors could comment on this and point out or stress which papers do not rely on the Neomedix databank.\n\nThe authors estimate that a trainee needs to perform surgery in 29 eyes to master the technique. To enable the reader to follow this interesting statement, the estimation process could be explained in more detail.\n\nThe index of the literature should be revised between numbers 51-55. The numbers are not correct.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-67
|
https://f1000research.com/articles/6-608/v1
|
02 May 17
|
{
"type": "Opinion Article",
"title": "On the origin of nonequivalent states: How we can talk about preprints",
"authors": [
"Cameron Neylon",
"Damian Pattinson",
"Geoffrey Bilder",
"Jennifer Lin"
],
"abstract": "Increasingly, preprints are at the center of conversations across the research ecosystem. But disagreements remain about the role they play. Do they “count” for research assessment? Is it ok to post preprints in more than one place? In this paper, we argue that these discussions often conflate two separate issues, the history of the manuscript and the status granted it by different communities. We propose a new model that distinguishes the characteristics of the object, its “state”, from the subjective “standing” granted to it by different communities. This provides a way to discuss the difference in practices between communities, which will deliver more productive conversations and facilitate negotiation, as well as sharpening our focus on the role of different stakeholders in collectively improving the process of scholarly communication, not only for preprints but also for other forms of scholarly contributions.",
"keywords": [
"Preprints",
"scholarly communication",
"validation",
"community",
"status",
"peer-review"
],
"content": "Introduction\n\nTwo scientists, Jimmy Maxwell and Chuck Darwin, meet at a conference and realise that they have common research interests, though one is a physicist and the other a naturalist. So they agree to collaborate, and their work develops quickly into a theory so big it could revolutionise both their disciplines.\n\nThey write up their work and, egged on by the physicist, decide to post to a preprint server before submitting to their target journal, The Science of Nature. The preprint causes a sensation! It receives attention, generates heated discussion, and citations ensue from their colleagues in both disciplines. The journal submission, however, faces a rockier path, getting held up by Reviewer #3 through four rounds of revision over a sticky issue involving the techniques for measuring the forces of barnacle-rock attraction.\n\nDuring the publication delay, offers start pouring into young Maxwell’s inbox from universities and companies wishing to recruit the young physicist. He takes a plum job and goes on to change the course of physics forever. Chuck, on the other hand, finds offers hard to come by. His grant applications to fund a research trip to far-flung islands fail because his CV lacks the high impact articles required to make him stand out. In despair he quits the bench and opens a pet shop. Some decades later the two researchers are recognized by the award of the prestigious Prize of Nobility. Maxwell’s place in the firmament is assured, while Darwin returns to his pet shop, now specialising in finches, where something about their beaks bothers him until the day he dies.\n\nWe open with this cheeky illustration to foreground one main point: different communities grant the same object different degrees of importance. We can complicate the story by revealing that both researchers were scooped between posting the preprint and article publication. 
Or that funding panels in each discipline assessed their applications and counted the outputs as scholarly contributions in different ways. But they all illustrate the same central point. There exists no universal standard of when an output is considered part of the formal scholarly record. Rather, it is determined by particular groups in particular contexts.\n\n\nNo universal definition of preprint exists (and never will)\n\nThe pace of technological change over the past two decades has far outstripped the language we use to describe the objects and processes we use to communicate. This disconnect between language and technology is at the root of the current debate around preprints. The very word “preprint” is an odd combination of retronym and synecdoche. A preprint is increasingly unlikely to ever be a precursor to anything that is physically printed onto paper. At the same time, that use of “print” takes one small part of scholarly publishing to stand in for the entire process. A preprint is different from a working paper, yet both are entirely different to an academic blog post. Additionally, all these appear in designated online repositories as digital documents that are recognizably structured as scholarly objects. Some preprints are shared with the future intent of formal publication in a journal or monograph. But not all. The term is used to mean a host of different things, and as such, remains referentially opaquea. An earlier version of this article is available on the “preprint” server BioRxiv. Should we refer to that here? Should it be formally referenced? Or is that “cheating” by inflating citation counts? What do we call the version of this article on F1000Research after posting, but prior to the indexing that follows approval by peer review?\n\nWikipedia is a good source for identifying common usage.
At the time of writing, it defines a preprint as “a draft of a scientific paper that has not yet been published in a peer-reviewed scientific journal.” This definition encompasses everything from early, private drafts of a paper that the authors have never shared with anyone, all the way to drafts of accepted manuscripts that have yet to go through a publisher’s production process. Interpreted liberally, the Wikipedia page itself might even be included1. The definition also conflates science and scholarship in a way that is both common and unhelpful. For many readers it would exclude work from the social sciences and humanities, as well as book chapters and other drafts destined for venues beyond “a peer-reviewed scientific journal”.\n\nOther organizations have constructed their own meanings and terms to fit the agenda of their constituencies. SHERPA, a UK organisation dedicated to studying scholarly communication, has a more precise definition for preprints: “the version of the paper before peer review”3. They then define versions between acceptance and publication by a journal as \"post-prints.\" NISO (National Information Standards Organisation) doesn't formally define the word \"preprint\" in its Journal Article Version (JAV) standard2, preferring instead to further delineate where \"significant value-added state changes\" occur. They break down the broad Wikipedia definition into four distinct stages, including \"author's original\", \"submitted manuscript under review\", \"accepted manuscript\" and any number of \"proofs\" that may emerge between acceptance and the published \"version of record\", a term which suffers under the dual burden of being both essentially undefinable and highly politicised.\n\nAs a further complication, the shifting roles of different players in the ecosystem have also contributed to this confusion.
To “publish” a work can mean three entirely different things: the labour of preparing a work for its dissemination, to communicate or make public a work, or in the narrow sense we use in the academy, to make available through designated channels after specified social and technical processes. “Preprint” is positioned and often defined in relation to “publish”, in a way that adds to the ambiguity of both terms.\n\nIn the past, there was a clear distinction between services that hosted preprints and “publishers” who carried out the formal process of “publication”, as defined by scholarly communities. A preprint could therefore be identified by its presence on a platform that was not that of a “publisher”. But today, publishers are starting to provide repositories to host preprints (PeerJ, Elsevier/SSRN, and the American Chemical Society). To add to the confusion, new forms of journals that run quite traditional quality assurance and review processes are being developed, which use preprint servers as the storage host for their articles. Discrete Analysis and The Open Journal both use ArXiv to store the PDF versions of accepted papers. A definition that depends on the historical role of any given player will fail if that role changes. Attempts to define the term “preprint” in this way push the confusion onto other terms that are equally poorly defined. Saying a preprint “is not published” or “is not in a journal” merely shifts the ambiguity to the question of what “published” means or what counts as a “journal.”\n\nThe lack of clear definitions is a problem when discussing and negotiating important changes to research communication. Researchers today can share results earlier, in new forms, and to new communities. But the newness of such technologies means that we have not yet come up with terminology to clearly discuss the available choices. Some researchers simply see a preprint as an early notification or preview of a “formal” publication.
For others it is a complete finding and a clear claim of priority in the scholarly literature. These differences are most often due to differences in disciplinary cultures. And, as in our story, the confusion is even greater with work that crosses disciplinary boundaries.\n\nAt the core, we have a fundamental issue of what “counts”, and what counts will clearly depend on the community doing the counting. This is the central social and political issue on which disagreements on the status of preprints are based. We will never agree on a universal definition because communities naturally value different things. So are we fated to build walls between disciplines, between Maxwell and Darwin’s tribes, never to be scaled or crossed? As research itself brings together different perspectives and types of knowledge to work on shared intellectual questions, we want to break down, not build up, walls. We can in fact fruitfully engage across disciplinary boundaries and have productive discussions about preprints and the value of different kinds of scholarly communication. But to achieve this we must recognise when our differences are matters of fact (what process an object has been through) and when they are differences of opinion and values between communities.\n\nWe present a model that will tease out one of the fundamental issues we’ve witnessed when research communities assess what will count and why. We do not propose a new vocabulary or a new universal definition of preprints. This would only further contribute to our current confusion and complexity.
However, our conceptual framework offers practical paths for publishers, service providers, and research communities to consider and implement, all of which will facilitate more effective discussions and better communications systems.\n\n\nThe State-Standing Model\n\nWhile “preprints” is a referentially opaque term that makes little sense in the context of an online communications environment, it is unlikely we will persuade anyone to abandon the term. Instead, we seek to tease out two attributes often elided when discussing objects in scholarly communication: “state” and “standing”. We use the term “object” so as to be inclusive, as well as to avoid the further use of terms tied to obsolete technologies (see Box 1).\n\nState - the external, objectively determinable, characteristics\n\nStanding - the position, status, or reputation\n\nThe “state” of a research object comprises the external, objectively determinable characteristics of the object. This includes records of claims made about the object, metadata, statements of validation processes the object has undergone, etc. An object submitted for peer review undergoes a wide array of state changes as multiple players interact with it in the process of submission and publication: technical checks and validation, editorial assessment, assignment of editor and reviewers, referee review, editorial decision, typesetting, author approval and corrections, publication acceptance, content registration/metadata depositing, front matter editorial posting, publication commentary facilitation, retraction/correction processes, publication event tracking, etc. This includes explicitly modelled metadata elements within strong schema (such as “indexed in PubMed”), as well as unstructured and vague terms. It also includes a description of groups that have access, including “the public”3. With “state,” there can be an explicit record made even if it is not exposed.
Such records may be hidden within publisher systems or may even be private information that is unethical to share. The record might be in third party systems, such as PubMed Central or ORCID. Some elements may be badly recorded or lost and thus inaccessible.\n\nIf an object changes state, it may also undergo changes in perceived value or intellectual status. The “standing” of a research object is the position, status, or reputation of an object. It is a consequence of its history and state. There are various forms of standing recognised by different groups, for example: “has been validated by (a traditional) peer review process”, “establishes priority of claim”, “is appropriate for inclusion in this assessment process,” “is considered appropriate for discussion and thus citable”, etc. These are judgments about the recognition or value of the output. Standing is conferred by a group, not an individual, and is therefore distinct from any individual’s opinion of the workb. It is also conferred not directly on individual objects, but on classes of objects that share attributes of state.\n\nWith a conceptual barrier between state and standing in place, we can investigate their relationship as the scholarly output changes over time. A state change may lead to a change in standing, but not necessarily and not in all cases. A change in standing, however, only occurs as a consequence of a state change triggered by some external shift that has led to a reconsideration of value.\n\nStanding is independently conferred by each group for whom the research output has meaning. While similar forms of standing might arise between groups, they are not thereby identical.
What matters most in this model is the possibility that a particular community may confer a different form of standing than another on the same type of research object (i.e., with the same state).\n\nFigure 1 illustrates how changes in the state of research objects may result in different changes of standing between two communities: physics and life sciences. Both may consider research validated and part of the formal record at similar stages of the publication process. But there are also key differences. When a preprint is posted by a physicist, they have established the priority of claim in that community, and it is considered worthy of citation. However, for the life sciences community, claim priority is generally established when a manuscriptc is submitted to a journal. It is only appropriate to cite the article even later, when the text is made available online (advance online publication, AOP, or online publication).\n\nAOP stands for advance online publication.\n\nThe conditions that prevail in the conduct of research are naturally tied to the type of research itself. As these vary widely, so too does the influence they have on the communication culture of the group and how they confer status. That certain fields in physics share equipment, work in very large groups, etc., has often been mentioned as a contributor to their predilection for preprints. On the other end of the publication event, research may expand its reach and utility beyond the academy. This introduces other possible entities that begin to serve as conferrers of status (e.g. a university office of technology transfer), and this will vary by field and discipline depending on the opportunities available. 
Both Maxwell and Darwin are recognised for their contributions, but given that the research was taken up by the physics community earlier, it would not be surprising to see time differences in the subsequent accolades offered to each by their respective disciplines.\n\nPrior to the development of the web, some segments of both the Economics and High Energy Physics communities shared a similar practice: the circulation by mail of manuscripts to a select community, before submission for formal peer review at a journal. As the web developed, both communities made use of online repositories to make this sharing process more efficient and effective. Paul Ginsparg initially created ArXiv as an email platform, but then migrated it onto a web-based platform in the early 1990s. In 1994, two economists created the Social Science Research Network (SSRN), a platform that shared many traits with ArXiv. In both cases, researchers submit digital manuscripts, which undergo a light check prior to being made publicly available on the platform. These manuscripts have not been subjected to any formal version of review by expert peers. Furthermore, there is a common expectation in both repositories that most manuscripts will go on to be formally published as journal articles or book chapters. That is, the state of objects in both ArXiv and SSRN is very similar.\n\nNonetheless, the standing of these objects in these two communities is quite different. For the High Energy Physics community (and others in theoretical physics), posting to ArXiv establishes the priority of claims and discoveries. In many ways, ArXiv preprints are seen as equivalent to formally published articles, and many physicists will preferentially read articles at ArXiv rather than find copies in journals. Indeed, for those disciplines where use of ArXiv is common, the formal publication is the point at which citations to the manuscript start to drop off4. 
The question of why physicists continue to publish in journals at all is a separate one and beyond the scope of this article. However, our model can help: clearly the community, or the communities that matter, do grant some standing to journal articles, which is both different from that granted to preprints and important in some way. The question of what that standing is and why it continues to matter is separated in our model from the equivalence of state that journal articles in physics share with those in other disciplines. As Maxwell and Darwin found in our story, physics and biosciences are different in important ways, even when their publication processes are very similar.\n\nBy contrast, working papers on SSRN are seen much more as works in progress. They are frequently posted well before submission to a journal, unlike ArXiv where posting is frequently done at the same time as submission. Observers from outside these communities, including those interested in adopting physics posting practices for the biosciences, often make the mistake of seeing two similar repositories with similar requirements and assuming that SSRN working papers and ArXiv preprints can be equated. The differences are not obvious from an examination of state, but are situated in differences in standing. Working papers and preprints have a different standing, and serve quite different functions for their cognate communities, despite being quite similar in form. Separating the two concerns allows us to be much clearer about what is similar and what is different between the two cases.\n\n\nFurther applications in the publishing life cycle\n\nThe uses of our model are not limited to preprints. It is a useful heuristic for isolating the questions that require answers from a community, from those that can be answered by auditing the process an object has been through. 
That is, it is helpful to separate the question of whether something has been done, from the question of whether any community cares.\n\nWe believe this separation of concerns will be valuable for discussions on a wide range of outputs, including software, data and books. Indeed, all types of research outputs go through processes of validation, dissemination and assessment, which are accorded differing degrees of importance by different communities. Discussions of the details of options for differing modes of open, signed, partially open, single-blind, double-blind, or even triple-blind peer review will benefit from separating the description of process (and testing whether the stated process has been followed) from the views of any given community of objects that have been through that process.\n\nUntil recently, much work on peer review was done within individual disciplines, with little comparative work. The role of peer review processes in community formation is now gaining greater interest, as is the detailed study of the history of peer review processes. Some communities have strong affiliations with double-blind peer review processes, and some with single-blind, or increasingly non-anonymous or signed reviews. Today, questions are raised as to whether processes that do not blind referees to author identity (a process described by specific state changes) can be expected to be unbiased and therefore valid (a question of standing). Pontille and Torny5, in examining the complex history of these views, quote Lowry6 to showcase the view that “...a man’s[sic] name is important and...can be used as a basis for judging the reliability and relevance of what he says”. 
Separating the value-laden discussion of what judgements are necessary or reliable from the details of the process that supports them can help to uncover and illuminate effective paths forward in deep-seated disagreements.\n\nIt may be the case that much of the confusion around newer forms of scholarly sharing, including efforts to make certain scholarly outputs “matter” as much as traditional narrative publications, is due to a similar confusion. New forms of output seek to co-opt the expression of forms of state, without putting in the required work that connects the social machinery of state-standing links. As a result, they frequently fall into an “uncanny valley”: objects that look familiar but are wrong in some subtle way. The most obvious example of this is the effort to make new objects “citable”, i.e. to make it technically feasible to reference them in a traditional manner through provision of specific forms of metadata, most commonly via DOIs. To actually shift incentives, this work needs to be linked to a social and political shift that changes a community’s view of what they should cite, i.e. what gives an object sufficient standing to make it “citation-worthy”.\n\nA similar debate rages between traditional publishers and advocates of a shift towards “publish-first, review-later” models of research communication. On one hand, advocates of change often remark on the seeming lack of improvement made to the text of an article through traditional peer review. For example, Klein et al. found that the text content of ArXiv preprints undergoes only minor changes between the initially submitted and finally published versions7. 
Of course, this neglects state changes in the validation process that may be important, but are not necessarily reflected in the character-stream of the article, such as ethical or statistical checks that were managed by the publisher.\n\nOn the other hand, publishers have established practices that they consider important, captured in the JAV vocabulary2. JAV details a number of different stages (with different states) that a manuscript might undergo. Many of these are invisible to authors. For instance, Author Original and Submitted Manuscript Under Review are identified as distinct states. An author would consider these to be the same document, but a publisher needs to record the manuscript’s transition into the peer review pipeline. At the same time, JAV fails to record changes that are likely of concern to authors. For example, it has no concept of the distinct revised versions of a manuscript submitted during review cycles.\n\nThis distinction may be useful in looking backwards as well as forwards. A growing interest in the history of scholarly communications reveals that processes of selection and publication in the 18th, 19th and early 20th centuries could be very different from our current systems. For instance, Fyfe and Moxham8 discuss a shift in process at the Royal Society in the 19th century. They trace “a transition from the primacy of [a paper being read at] face-to-face scientific meetings...to the primacy of the printed article by the end of the nineteenth century”. The processes changed, as did the status granted them. Presumably our current views of standing, and their ties to current processes of state change, evolved together. Separating the processes and state changes from the standing granted by historical communities (if this can in fact be determined from archival records) can only help us to understand how our current processes and values evolved.\n\nIt is also not just deep history that could find the distinction helpful. 
The primacy of the reading of a paper at a meeting will be familiar to many scholars in Computer Science, where conference proceedings remain the highest-prestige venue for communication of results. The state changes of an object in computer science have some similarities to the historical state changes at the Royal Society. An examination of how similar standing is in these two cases, and more particularly how the primacy of conference presentation arose in Computer Science, could benefit from analysis in terms of our model.\n\nHere, the issue is a difference in focus on what it is that matters, what kinds of standing are important. Changes in state that are important markers of shifts in standing for one group are ignored by the other and vice versa. Until the full set of state changes that are relevant to all stakeholders is transparently visible, discussions of standing are unlikely to be productive.\n\nThis illustrates a crucial point. Our model exposes the need for high-quality metadata that is well coupled to the record of processes that a work has experienced. If what is contained within the scholarly record is a question of standing, then the formal record of state is a critical part of supporting research claims.\n\n\nConclusions\n\nTo engage in productive discourse on new (and traditional) forms of scholarly sharing, we need to gain clarity on the objects themselves. We propose a model that explicitly separates the state of a work – the processes it has been through and the (objectively determinable) attributes it has collected through those processes – from the standing granted it by a specific community. It is not only a formal framework, but a practical apparatus for navigating and negotiating the ongoing changes in scholarly communications. By distinguishing these two attributes, we can isolate aspects of objects that can be easily agreed on across communities, and those for which agreement may be difficult. 
These have clouded discussion of community practices, particularly those around the emerging interest in “preprints” in disciplines that have not previously engaged in the sharing of article manuscripts prior to formal publication.\n\nIn proposing this distinction, we are foregrounding the importance of social context in the community-based processes of scholarly validation. The importance of social context in scholarly processes is, of course, at the centre of many of the controversies of the late 20th century in Philosophy and Sociology of Science, Science and Technology Studies, and other social studies of scholarly and knowledge production processes. Our proposal follows in those traditions. What is, to our knowledge, novel in our model is that it provides a way to link the conversations about process and metadata that occur when researchers and publishers discuss scholarly communication with the social context in which they occur. By connecting state and standing, and recognising that each has an influence over the other – state directly on standing, standing by privileging certain changes of state – we aim to show how the intertwined relationship is at the core of conferring value across scholarly communities.\n\nHow does our model help the young Darwin and Maxwell? Well, it makes explicit the changing nature of discovery across disciplines, and provides a way of differentiating between changes to the object and changes to the perception of the object. Questions of standing will be inherently difficult to discuss across community boundaries, and while the model cannot solve the underlying social challenge that different research communities simply value different things, and in particular different parts of the overall life cycle of a research communication, it does offer a way of talking about and analysing those differences. 
To bring the culture of manuscript posting to the biosciences, Darwin would be better served by identifying the different goals that different people had, as well as by discussing the concerns that more traditional researchers have.\n\nOur model does not, and cannot, solve the problem of differing perspectives between research communities. It does, however, have clear implications for how the various players in research communications can better contribute to an effective and efficient conversation:\n\nPublishers, including preprint repositories, can better serve their communities by making state changes much clearer, more explicit, and transparent. It is impossible for us to make progress in discussing standing when we cannot clearly define what the state is. We cannot discuss the difference in standing between a preprint, a journal editorial, and a research article without knowing what review or validation process each has gone through. We need a shift from “the version of record” to “the version with the record”.\n\nService providers, including publishers and repositories, but also those that record other processes, need to pay much greater attention to recording state changes. Currently, many records of state are focused on the internal needs of the service provider rather than surfacing critical information for the communities that they serve. Principled and transparent community evaluation depends on a clear record of all the relevant state changes.\n\nFinally, scholarly communities must take responsibility for clearly articulating our role in validation, and for recognising that this is a fundamentally social process. It is our role to grant standing. We need to explicitly identify how that standing is related to a clear and formal record of changes in state. The current discussion arises due to confusion over the terminology of preprints, but the issue is much more general. 
By making explicit both the distinction between social processes and the record of attributes that results from them, and explicitly recognising the connection between state and standing, we will surface the processes of scholarship more clearly, and re-centre the importance of our communities deciding for themselves what classes of object deserve which grant of standing.\n\n\nFootnotes\n\na It should be noted that this is not a new problem. For many years researchers have bundled everything that has not been formally “published” under the umbrella term “grey literature”, creating headaches for every meta-analyst and systematic reviewer who has had to decide what “counts” as a meaningful academic contribution.\n\nb We use the general term “group” to refer to communities, institutions and other parties that confer standing. “Groups” therefore includes disciplinary communities, universities (and their departments), funders, but also potentially entities such as mainstream media venues, as well as specific publics.\n\nc We acknowledge that ‘manuscript’ is as much a retronymic synecdoche as ‘preprint’. However, we use the term here as the most appropriate in context.",
"appendix": "Author contributions\n\n\n\nAll authors contributed equally to the conception, authoring and editing of this paper.\n\n\nCompeting interests\n\n\n\nDP is employed by, and owns stock in, Research Square LLP, a company that provides editorial services for authors and publishers. JL and GB are employed by Crossref, a provider of scholarly metadata.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nWodak SJ, Mietchen D, Collings AM, et al.: Topic Pages: PLoS Computational Biology Meets Wikipedia. PLoS Comput Biol. 2012; 8(3): e1002446. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNISO/ALPSP Journal Article Versions (JAV) Technical Working Group: Journal Article Versions (JAV): Recommendations of the NISO/ALPSP JAV Technical Working Group. NISO Recommended Practice 2008–08. 2008. Reference Source\n\nNeylon C, Pentz E, Tananbaum G: Standardized Metadata Elements to Identify Access and License Information. Information Standards Quarterly. 2014; 26(2): 35–37. Publisher Full Text\n\nGentil-Beccot A, Mele S, Brooks T: Citing and Reading Behaviours in High-Energy Physics. How a Community Stopped Worrying about Journals and Learned to Love Repositories. arXiv: 0906.5418v2. 2009. Reference Source\n\nPontille D, Torny D: From Manuscript Evaluation to Article Valuation: The Changing Technologies of Journal Peer Review. Hum Stud. 2015; 38(1): 57–79. Publisher Full Text\n\nLowry RP: Communications to the editors. Am Sociol. 1967; 2(4): 220. Reference Source\n\nKlein M, Broadwell P, Farb SE, et al.: Comparing Published Scientific Journal Articles to Their Pre-print Versions. arXiv: 1604.05363v1. 2016. Reference Source\n\nFyfe A, Moxham N: Making public ahead of print: meetings and publications at the Royal Society, 1752–1892. Notes Rec. 2016; 70(4): 361–379. Publisher Full Text"
}
|
[
{
"id": "22411",
"date": "19 May 2017",
"name": "Kathleen Fitzpatrick",
"expertise": [
"Scholarly communication",
"humanities"
],
"suggestion": "Approved",
"report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis article focuses on the need (most immediately seen in recent discussions of the challenges presented by \"preprints\") for distinguishing between the ways we represent the state of research outputs and the ways we represent the standing those outputs have in accordance with that state. The importance of this distinction, as the authors point out, is in recognizing that different communities of practice accord different value to outputs that are objectively in the same state. By “separating the description of process (and testing whether the stated process has been followed) from the views of any given community of objects that have been through that process” (5), we might be better able to speak across disciplinary boundaries about the value of the work we do. The distinction between state and standing is especially crucial for those who seek to change scholarly communication practices, for instance by allowing a greater range of outputs to “count” in hiring and assessment processes, in order to make clear that the transformation that matters lies in “the social machinery of state-standing links” (5). 
By encouraging the disambiguation of state and standing, the authors are able to advise publishers, platforms, and scholarly communities on ways they might contribute to better conversations about the value of particular research outputs.\nThe article begins with a highly engaging opening illustration of the stakes of the non-universality not only of language but of practices in scholarly communication, and continues through careful and well-documented argumentation. The authors look carefully at distinctions not just in terminology but also in values across different fields. They are careful to note that they are not recommending a new vocabulary, nor a guiding framework for how scholarly communities should negotiate the current changes in their communication practices, but they do our fields a great service by exposing the reasons for much of our mutual incomprehension across fields. They also go a long way toward explaining why “we have to make preprints ‘count’” is a very heavy lift in some communities.\nI do hope that the authors will continue their research in this line. It would be great to have their input, for instance, into the construction of metadata that can help clarify changes in a research object’s state, enabling better judgment in communities about its standing.\nA few very small copyediting notes:\nThe phrase “different to” is used a couple of times; I’m honestly not sure if this is a UK/US distinction, but I’d argue for “different from” instead. On page 3, column 2, line 1, “that make” should be “that makes”. On page 4, AOP is glossed as “advance online publication” in the caption for Figure 1, and “Advanced Online Publication” in the text.\nI am grateful for the opportunity to have reviewed this article, and I look forward to the discussions that it might inspire.\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? 
Yes\n\nAre arguments sufficiently supported by evidence from the published literature? Yes\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Yes",
"responses": []
},
{
"id": "22413",
"date": "30 May 2017",
"name": "Todd Carpenter",
"expertise": [
"Information science",
"standards",
"metadata"
],
"suggestion": "Approved",
"report": "Approved\n\nThere are many challenges in our current scholarly communications and assessment environment. This article draws out an important distinction between the objective status of a piece of scholarly content and the value that the community assigns to that content. This is an intricate, intertwined, and sometimes confusing interplay between the two concepts. The authors do a commendable job of describing the current state and outline a potentially valuable model for distinguishing between the two concepts.\nWhile the article does have much to recommend it, I have several areas of concern. First, the article provides a quaint description of two scholars in two distinctly different fields of inquiry, physics and biology, who collaborate on a research project. One of the researchers, the physicist, receives \"credit\" for a joint paper, while the other, the naturalist, receives none. The article takes the reader on a journey where one researcher succeeds while the other fails because of a lack of distinction between the two social responses to the same content form. This illustration describes the sometimes critical differences between domains and the different weight given to forms of distribution and publication. In the article's conclusion, the authors note that their framework isn't meant to address the social differences between domains that are at the root of these differences, simply to describe them. 
While distinguishing between “state” and “standing” might provide some method to identify the objective and subjective status of a content object, the article lacks consideration of the criteria or suggestions about what characteristics might contribute to their notion of standing. At the heart of the illustration is an environment where different domains confer different meaning or value; the objective status may or may not influence the subjective response. Distinguishing between the two seems obvious.\nWhile the distinction between \"State\" and \"Standing\" as described in the framework appears to be a useful distinguishing characteristic, it is not clear to me that the examples of \"state\" changes are in fact \"objectively determinable characteristics of the object\" that are intrinsic to the object itself. There is no way to know by examining the object whether it has undergone any particular state change. To consider a real-world example, take an article in ArXiv (https://arxiv.org/abs/1509.06859v2) by Sébastien Gouëzel (LMJL). This paper has an earlier version (https://arxiv.org/abs/1509.06859v1) and was updated with the current version in May 2017. Viewing this from ArXiv, there is no indication that this object has gone through any vetting, nor any editorial review, nor any validation processes, nor any copy editing, nor any of the other state changes mentioned in this paper. However, the paper has been included in the online journal Discrete Analysis. There is an editorial introduction with a DOI (10.19086/da.1639, which oddly didn't resolve) and it isn't clear that the journal \"publishes\" the article or the introductions. Presumably the article itself went through a peer review, an editorial review, was possibly edited, and then was revised and resubmitted to ArXiv. The date on the ArXiv revision is 5 May 2017, four days before the Discrete Analysis paper was posted on 9 May 2017. 
If a user views this article through the wrapper of the journal, it may be clear that these \"state changes\" as defined in this paper might apply, but for the same content viewed directly on ArXiv they are not apparent. The state changes exist in one environment but not in another.\nThe authors' response to this situation, as they note in their conclusion, is that this is simply a failing of metadata and that if only the state changes were recorded in the metadata, this problem might be addressed. The problems of metadata quality are well known and much discussed in the community. Properly assigning metadata to a final version of record is challenging enough, without retroactively populating metadata or ensuring a string of provenance data is included with the current object to support this chain of awareness of the current state of a content object. For example, if someone were to come across the first version of Gouëzel's paper in the example I noted, how would anyone know the current state of the previous version? Without forward linking, there is no way to know that the preprint version (or authors original, using the JAV terminology) was followed by another version.\nWith this example in mind, the paper would be strengthened through a more robust description of state changes, and what would distinguish a state change from something less substantial, since many of the changes that might take place to an article may or may not be significant. Also, many state changes might not lead to notable changes. For example (in a closed peer-review environment, say), I may have read this article without recommending any changes. The act of reading and saying \"Yes, this is OK\" is completely external to the object and, failing quality metadata to describe the review, leaves no trace. 
These external acts related to a piece of content are critical to the process of developing standing, but aren't necessarily externally obvious, as the authors note.\nA minor point about standing with which I quibble is the notion that standing is not something that can be conferred individually. There are many instances when standing could be individually conferred. Many journals are editorially run by a single individual, who might review, take a decision to publish or not, or approve for publication. A department chair may determine that a piece of content is appropriate for inclusion in a promotion or tenure decision. I am sure there are countless other examples of this. One might say the editor is speaking on behalf of a community of subscribers, but in reality it is just one person taking the decision.\nIn practice, the framework outlined in this article builds upon the structure outlined in the NISO Journal Article Versions (NISO JAV) Recommended Practice, which defined a structure of changes for the constrained scope of journal articles. That effort settled on \"identify[ing] a significant value-added “state change” in the progress of a journal article from origination to publication.\" While these state changes in NISO JAV are explicitly focused on the formal publication process, the concept of state change applies across all forms of content, again as noted in the introduction of NISO JAV. It should also be noted that the working group that developed the NISO JAV structure intentionally did not extend its scope to other forms of content, nor extend the resulting vocabulary to every instance in the content creation process.\nThis article, or potentially subsequent work by the authors, would be strengthened by a discussion of the types of elements that go into standing. The description of potential changes to a content object's \"state\" is comparatively robust, but the description of what constitutes \"standing\" is decidedly weaker. 
Especially since this appears to be the core argument for the need for this framework, this lack of discussion around those details glosses over the difficulty on that side of this environment.\nInherently, this second domain of the meaning and definitions of \"standing\" is incredibly fraught and fungible across the academy. What has standing in one domain does not in another, often without rhyme or reason. There is no fault in the authors avoiding these very granular and thorny questions in this article, nor does it diminish the value of trying to distinguish between the two. However, without an understanding of \"standing\" there can be no resolution to the problems that Darwin faces in the article's opening illustration.\nI look forward to the continued discussion around these issues and encourage the authors to continue to develop their work in this area.\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Yes\n\nAre arguments sufficiently supported by evidence from the published literature? Yes\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-608
|
https://f1000research.com/articles/6-607/v1
|
02 May 17
|
{
"type": "Research Article",
"title": "A qualitative and quantitative performance evaluation of Swaziland’s Rural Health Motivator program",
"authors": [
"Pascal Geldsetzer",
"Maria Vaikath",
"Jan-Walter De Neve",
"Till Bärnighausen",
"Thomas J. Bossert"
],
"abstract": "Background: Community health workers (CHWs) are increasingly used to increase access to primary healthcare, and considered to be a key health worker cadre to achieve the UNAIDS 90-90-90 target. Despite the recent policy interest in effectively designing, implementing, and evaluating new CHW programs, there is limited evidence on how long-standing CHW programs are performing. Using the CHW Performance Logic model as an evaluation framework, this study aims to assess the performance of Swaziland’s long-standing national CHW program, called the rural health motivator (RHM) program. Methods: This study was carried out in the Manzini and Lubombo regions of Swaziland. We conducted a survey of 2,000 households selected through two-stage cluster random sampling and a survey among a stratified simple random sample of 306 RHMs. Additionally, semi-structured qualitative interviews were conducted with 25 RHMs. Results: While RHMs are instructed to visit every household assigned to them at least once a month, only 15.7% (95% CI: 11.4 – 20.4%) of RHMs self-reported to be meeting this target. Less than half (46.3%; 95% CI: 43.4 – 49.6%) of household survey respondents, who reported to have ever been visited by a RHM, rated their overall satisfaction with RHM services as eight or more points on a 10-point scale (ranging from “very dissatisfied” to “very satisfied”). A theme arising from the qualitative interviews was that community members only rarely seek care from RHMs, with care-seeking tending to be constrained to emergency situations. Conclusions: The RHM program does not meet some of its key performance objectives. Two opportunities to improve RHM performance identified by the evaluation were increasing RHM's stipend and improving the supply of equipment and material resources needed by RHMs to carry out their tasks.",
"keywords": [
"Community health worker",
"performance evaluation",
"Swaziland",
"rural health motivator"
],
"content": "Introduction\n\nMany low- and middle-income countries, particularly in sub-Saharan Africa, face a severe shortage of skilled healthcare workers1. Community health workers (CHWs) are increasingly being used to address this shortage of more extensively trained health workers in order to increase access to primary healthcare services2,3. While there has been a recent policy interest in designing, implementing, and evaluating new CHW programs4–8, many large CHW programs that have existed for decades have not yet been rigorously evaluated. One such program is Swaziland’s national CHW program, known as the rural health motivator (RHM) program. In existence since 1976, the RHM program currently employs over 5,000 RHMs and aims to cover every household in the nation with basic primary healthcare and health information9.\n\nHIV causes the highest burden of any disease in Swaziland10, and is a major challenge to the country’s health system. UNAIDS and the World Health Organization recently set a new goal for ending the HIV epidemic: the 90-90-90 target11. Under this target, countries aim to ensure that, by 2020, 90% of people living with HIV know their HIV status, 90% of all people whose HIV infection has been diagnosed receive sustained antiretroviral therapy (ART), and 90% of all those receiving ART are virally suppressed. Expanded utilization of CHWs is considered essential to achieving this goal12, particularly through offering community-based HIV testing and shifting certain components of long-term ART care from healthcare facilities to the community, for example through ART home delivery13–16. Yet, while RHMs are providing many HIV-relevant services, including the provision of condoms, information on HIV, and follow-up with pre-ART and ART patients who have missed an HIV care appointment17, HIV treatment and care in Swaziland is still largely facility-based. 
Successful shifting of further HIV testing, treatment and care tasks from healthcare facilities to RHMs would likely require the RHM program to perform reliably and at a high level. Using the CHW Performance Logic Model as an evaluation framework18, this study therefore aims to (i) assess the performance of the RHM program, and (ii) identify ways in which program performance can be improved.\n\n\nMethods\n\nThis study was conducted in the Lubombo and Manzini regions, which are two of Swaziland’s four administrative regions. Shiselweni and Lubombo are the most rural and poorest regions in Swaziland, while Manzini and Hhohho are comparatively more urban and wealthy19,20. In the latest census from 2007, 206,400 people lived in Lubombo and 313,900 in Manzini, jointly accounting for 52% of Swaziland’s total population19. According to Swaziland’s last HIV incidence and measurement survey21,22, conducted in 2010 and 2011, adult HIV prevalence was 32.4% in Lubombo and 33.6% in Manzini region. The corresponding national estimate was 32.1%.\n\nA number of CHW programs are currently active in Swaziland. At the time of the study, all CHW programs other than the RHM program had a cadre of less than 50 CHWs. While this study also collected data on three non-RHM CHW programs (the HIV expert client program, the Mothers2Mothers mentors, and a community outreach team for HIV-testing and voluntary male medical circumcision), this manuscript focuses on the RHM program given its size, and thus importance to Swaziland’s health system.\n\nEstablished in 1976, the RHM program employed 5,214 RHMs in 2015. 
As per their official job responsibilities, RHMs are assigned the following activities during their household visits: 1) referring ill household members to a healthcare facility; 2) providing health information on a variety of health topics; 3) providing condoms; 4) encouraging household members to take up preventive healthcare services and antenatal care; 5) following up with community members who have missed an HIV care appointment at the healthcare facility; 6) attending medical emergencies (e.g., emergency deliveries); 7) assisting with growth monitoring programs of children under five years of age; 8) providing dietary counseling; and 9) promoting adult literacy17. RHMs are instructed to visit the 25 households assigned to them at least once a month.\n\nQuantitative data were collected through a population-based household survey and a questionnaire for RHMs (Supplementary File 1 and Supplementary File 2). The household survey employed two-stage stratified cluster random sampling. In the first stage, we selected a random sample of 50 enumeration areas (EAs) in each of Lubombo and Manzini. In each region, 37 of the enumeration areas were classified as rural by the Swaziland Statistics Office, and 13 as urban. In each EA, we selected 20 households through systematic random sampling. Data collectors administered a questionnaire in SiSwati to each household member aged 11 years or older who was present at the time of the household visit and who provided written consent to participate in the survey. Due to feasibility constraints, the data collection team did not revisit households if no household members were present at the time of the visit.\n\nThe RHM questionnaire was administered in SiSwati to all RHMs working in the EAs that were selected for the household survey. 
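The two-stage stratified cluster design described above can be sketched in code. This is an illustrative sketch only: the size of the EA sampling frame and the 200-household EA size are hypothetical placeholders, while the draw of 37 rural and 13 urban EAs per region and 20 households per EA follows the design described in the text.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# Hypothetical sampling frame: EA identifiers per (region, stratum).
# The frame sizes (400 rural, 100 urban EAs per region) are placeholders.
frame = {
    ("Lubombo", "rural"): [f"LU-R-{i}" for i in range(400)],
    ("Lubombo", "urban"): [f"LU-U-{i}" for i in range(100)],
    ("Manzini", "rural"): [f"MA-R-{i}" for i in range(400)],
    ("Manzini", "urban"): [f"MA-U-{i}" for i in range(100)],
}
EAS_PER_STRATUM = {"rural": 37, "urban": 13}  # 50 EAs per region, as in the study

def two_stage_sample(households_per_ea=20, n_households=200):
    sample = {}
    for (region, stratum), eas in frame.items():
        # Stage 1: simple random sample of EAs within each stratum.
        chosen = random.sample(eas, EAS_PER_STRATUM[stratum])
        for ea in chosen:
            # Stage 2: systematic random sample of households within the EA
            # (households are represented here as indices 0..n_households-1).
            step = n_households // households_per_ea
            start = random.randrange(step)
            sample[ea] = list(range(start, n_households, step))[:households_per_ea]
    return sample

s = two_stage_sample()
assert len(s) == 100                          # 50 EAs per region x 2 regions
assert all(len(h) == 20 for h in s.values())  # 20 households per selected EA
```

The systematic second stage (fixed step through the household list from a random start) mirrors the "systematic random sampling" named in the text; the study's actual household listing procedure is not detailed here.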
Since the EAs selected for the household survey were only a relatively small subsample of all EAs in the Lubombo and Manzini regions of Swaziland, 306 (12.0%) out of a total of 2,543 RHMs in these two regions were interviewed. The RHM questionnaire was administered at the RHM’s household by the same cadre of data collectors that conducted the household survey.\n\nBoth the household and RHM surveys were conducted between June 2015 and September 2015. Quantitative analyses consisted of descriptive statistics (means and proportions) and were conducted in Stata version 13.0 (College Station, TX, USA).\n\nQualitative data were collected through semi-structured interviews with 25 RHMs (Supplementary File 3). These RHMs comprised a criterion-based stratified purposive sample. Strata used were region (13 RHMs from Manzini and 12 from Lubombo region) and urban versus rural (13 from rural areas and 12 from urban areas in each region). Additional sampling criteria were age and sex of RHMs, attempting to yield a sample that is similar to the age and sex distribution of the RHM cadre in general. In addition, we conducted semi-structured qualitative interviews with the chief RHM program manager in the program office in Mbabane, Swaziland, and five RHM trainers in the regional offices of the RHM program.\n\nFive recent graduates of the University of Swaziland Social Science Program who were fluent in SiSwati and English conducted the interviews. The data collectors were Swazi and aged between 20 and 35 years. The interviews lasted between 30 and 45 minutes and were conducted in SiSwati. The interviewers taped the interviews, and transcribed them verbatim in SiSwati. The transcripts were then translated into English by the local study coordinator, who is also an author of this paper (MM). He also conducted a quality check of each transcript. Two authors (MV and PG) conducted content analysis using an inductive approach to coding23. 
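The proportions in this study are reported with 95% confidence intervals, for example 15.7% (95% CI: 11.4–20.4%) of the 306 surveyed RHMs meeting the monthly visit target. As a rough sanity check only, a simple Wald (normal-approximation) interval comes close to the published figures; the published intervals were presumably computed in Stata, possibly with an exact or design-adjusted method, so the endpoints differ slightly.

```python
from math import sqrt

def wald_ci(p_hat, n, z=1.96):
    """Normal-approximation (Wald) 95% confidence interval for a proportion."""
    se = sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - z * se, p_hat + z * se

# 15.7% of the 306 surveyed RHMs reported meeting the monthly visit target.
lo, hi = wald_ci(0.157, 306)
print(f"95% CI: {lo:.1%} - {hi:.1%}")  # about 11.6% - 19.8%
```

The small gap between this approximation (11.6–19.8%) and the published 11.4–20.4% is consistent with a different interval method having been used.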
We identified broad themes after an initial review of the data, and then conducted iterative reviews to further refine themes and their relationships to each other. All coding was done using NVivo 11 (QSR International, Melbourne, Australia).\n\nThe evaluation framework that was used for this performance evaluation is the CHW Performance Logic Model (Figure 1), which has been described in detail elsewhere18. The model was used to inform the design of the questionnaires and interview guides. More specifically, the data collection tools contained questions on the dimensions (white rectangles in Figure 1), which in turn were grouped under sections corresponding to the dimensions of the model (results, activities, and inputs). Questions in the household survey questionnaire focused on CHW program outcomes by asking about the household members’ experiences with the RHM program and the degree to which they sought care from RHMs. Meanwhile, the RHM questionnaire focused on CHW program outputs (e.g., self-reported performance, and job satisfaction and motivation), and support provided to RHMs by the community and health system (and actors within these systems). Data on inputs were obtained from program reports and personal meetings with the RHM program management. We have structured the results section according to the logic model dimensions, moving from the inside (CHW performance outcomes) to the outside (inputs) of the model depicted in Figure 1.\n\nAdapted from Naimoli et al.18\n\nThis study was approved by the Swaziland Ethics Committee on March 31st 2015 (reference number: MH/599C/FWA 000 15267/IRB 000 9688), and received an exemption by the institutional review board of the Harvard T. H. Chan School of Public Health on March 31st 2015. Written informed consent was obtained from all study participants.\n\n\nResults\n\nThe RHM questionnaire was administered to a total of 306 RHMs, 96.1% of whom were female (Table 1). 
On average, RHMs were 52.9 years old (SD: 11.6 years), with 16 RHMs (5.2%) older than 70 years. RHMs had lived in their communities for an average of 34.6 years (SD: 16.5 years) and had worked in the RHM program for 15.5 years (SD: 12.9 years). 30.5% of RHMs reported having done work other than for the RHM program during the previous 12 months. The characteristics of the 25 RHMs with whom we conducted semi-structured qualitative interviews were similar to those of the sample of RHMs who were included in the RHM survey. The population-based household survey was administered to 2,342 household members across 2,000 households. 97.7% of household survey respondents had lived in the surveyed community for more than one year.\n\nStandard deviations are shown in brackets. Abbreviations: RHM=Rural Health Motivator; No. = number\n\nAs described in the methods, we assessed performance of the RHM program on the output and outcome levels of the CHW Performance Logic Model18. Table 2 summarizes our quantitative findings.\n\nOutcomes: Satisfaction with the RHM program. Household survey respondents’ overall satisfaction with RHM services was mixed, with 46.3% of respondents rating their satisfaction as greater than or equal to eight on a 10-point scale ranging from very dissatisfied to very satisfied (Table 2 and Figure 2). 20.4% of respondents rated their satisfaction as less than five on this scale. Nonetheless, the vast majority (96.1%) of respondents would recommend the RHM program to other communities.\n\nAbbreviations: CI = Confidence interval; RHM = Rural Health Motivator; % = percentage.\n\n1 This question was only asked to community members who reported to have ever been visited by a RHM.\n\n2 This was defined as reporting ≥8 on a 10-point scale from “very dissatisfied” to “very satisfied”.\n\n1 This question was only asked to household survey respondents who reported that their household had ever been visited by a RHM (n=1,151). 
2 Satisfaction was measured on a scale ranging from 1 (“very dissatisfied”) to 10 (“very satisfied”).\n\nOutcomes: Care-seeking from RHMs. 76.7% of RHMs indicated that households had approached them for help or advice. However, a theme that emerged in the qualitative interviews was that, although households did approach RHMs, they did so only rarely or infrequently. In cases where RHMs were approached, it was usually for acute emergency care:\n\nInterviewer: “How often are you contacted for help or advice?”\n\nRHM: “It is rare … sometimes when someone is in labor then they call me for help” (Manzini)\n\nIn the less common scenario where RHMs indicated that they were contacted frequently, it tended to be for material assistance such as medication, diapers, or gloves:\n\nInterviewer: “How often are you contacted for help or advice?”\n\nRHM: “About 3 times a week. They usually want disposable diapers, gloves, or ORS [oral rehydration therapy]” (Lubombo)\n\nOutcomes: RHMs’ standing in the community. In general, RHMs felt that their standing within their communities had increased as a result of being part of the RHM program. 74.3% indicated that their standing had increased, while only 16.5% stated that their standing had decreased, with the remainder answering that their standing had remained unchanged. In the qualitative interviews, when asked about the effect of their work as a RHM on their community standing, RHMs who indicated an increase in community standing suggested that RHMs’ responsibilities mean that community members respect them more. In cases where RHMs indicated that their community standing had remained unchanged or decreased, this was accompanied by the perception that they did not meet the expectations of community members:\n\n“No, I think [my community standing] is the same especially because people complain that we do not bring them anything except information; they want material things” (RHM, Lubombo)\n\nOutputs: Quantity of work performed. 
According to the RHM program management, RHMs are responsible for 25 households, which they are to visit at least once a month. In the RHM survey, RHMs reported being responsible for visiting an average of 29.8 households. Only 15.7% of RHMs reported having visited all households assigned to them in the last month, and 57.8% stated they had visited all assigned households at least once in the last six months. The vast majority of RHMs (92.1%) reported that the workload expected of them is reasonable.\n\nPart of the qualitative interviews with RHMs focused on the reasons for not being able to visit all assigned households at least once a month. Four main factors were mentioned most frequently by RHMs: 1) the availability of the client, 2) physical distance to the household, 3) clients’ acceptability of the RHMs, and 4) the inability of RHMs to meet the expectations of some clients. Typical quotes illustrating each of these factors are:\n\nClient availability: “Sometimes there are no people in the household I visit and I have to return on another day” (RHM, Lubombo)\n\nPhysical distance to the household: “I find it to be very easy since the households I am responsible for are nearby and I do not need to walk a long distance” (RHM, Manzini)\n\nAcceptability of RHMs: “It is easiest with the homes where people are educated about the health issues and understand our work as RHMs; in homes where this is not the case, they are normally hostile towards us…” (RHM, Manzini)\n\nInability to meet clients’ expectations: “It is very difficult… people expect motivators to come with material things like [disposable diapers] napkins for their bedridden relatives, but we do not have these things. This disappoints the people and they start to develop an attitude towards us.” (RHM, Manzini)\n\nOutputs: Job satisfaction. Roughly half of RHMs reported to be satisfied or very satisfied with their job. 
Most RHMs (93.7%) would recommend the RHM program as a good organization to work for, and 95.0% of RHMs answered that they were proud to be working for the RHM program. Roughly a quarter (26.2%) of RHMs reported to occasionally or often think about leaving their job.\n\nTable 3 summarizes the results for the indicators used to evaluate program-level activities (as defined by the CHW Performance Logic Model18).\n\nAbbreviations: RHM=rural health motivator.\n\n1 This question was only asked if the RHM reported to have regularly interacted with facility-based healthcare workers (93.0%).\n\n2 This was defined as reporting ≥8 on a 10-point scale from “very bad quality” to “very high quality”.\n\n3 The denominator for these percentages is the number of RHMs who reported having received non-monetary payments from the program.\n\nSocial support. The majority of RHMs indicated that they were somewhat or very well supported by members in their communities (89.8%), by their families (95.7%), and by facility-based healthcare workers (96.5%). The vast majority of RHMs (95.4%) felt that facility-based colleagues value their work.\n\nTechnical support. The initial training for new RHMs lasts 12 weeks full-time. In addition, the program runs in-service trainings, which re-emphasize certain topics taught during the initial training and usually also cover some new material. These refresher trainings last for two to five days and are conducted once a year for each RHM. Only 10.5% of RHMs surveyed reported to never have attended an in-service training. Most RHMs (94.7%) either agreed (48.3%) or strongly agreed (46.4%) that the training provided by the program is sufficient to competently perform their work as a RHM. 81.9% rated the quality of their in-service training as being high.\n\nIncentives. The majority of RHMs expressed dissatisfaction with the compensation offered. 
57.8% either disagreed (38.0%) or strongly disagreed (19.8%) with the statement that “Given the amount of work I do as a rural health motivator, I am being paid a fair amount”. This is also reflected in the qualitative data, in which RHMs frequently mentioned that they do not feel that they are sufficiently compensated. A typical opinion expressed in this regard is:\n\n“I do not feel I am being paid a fair amount because there is a lot of work that we do. Sometimes the families desert the ill patients and leave them in their own dirt until the day a RHM comes along and bathes the patient, feeds them....so the work is quite a lot” (RHM, Lubombo)\n\nVery few RHMs reported to have received non-monetary compensation from the RHM program.\n\nTable 4 summarizes the results for the indicators used to evaluate system-level activities.\n\nAbbreviations: RHM=rural health motivator\n\nLeadership and governance. Among RHMs, 55.8% agreed and 41.9% strongly agreed that the RHM program management was supportive of their work. Most either agreed (53.0%) or strongly agreed (44.0%) with the statement that “the RHM program rules make it easy for me to do a good job”. Similarly, virtually all RHMs (97.0%) expressed that it was generally easy to communicate with members from all levels of the RHM program. Concerning supervision, 91.8% of RHMs indicated that supervisors provide feedback on their work. While 76.0% of RHMs were satisfied (58.6%) or very satisfied (17.4%) with the level of supervision that they receive, 65.3% indicated that they would like to receive more supervision. Qualitatively, in cases where RHMs expressed interest in additional supervision, the reason tended to be that they felt additional feedback would help motivate them further and support continued learning, as illustrated by the following quote:\n\n“I would like more supervision because it would help me learn and grow my skills as a RHM. 
Additionally, it helps to keep me motivated and to put in more effort in my work” (RHM, Manzini)\n\nProvision of material resources. 60.6% of RHMs either disagreed (40.1%) or strongly disagreed (20.5%) that the program provides all the equipment, supplies, and material resources necessary to perform their duties.\n\nHuman resources. The RHM program had 5,214 RHMs in 2015, of which roughly half (2,803) lived and worked in the Lubombo or Manzini region. In addition, the program had one program manager, one program officer, one administrative assistant, 18 RHM trainers (who are trained nurses), and two drivers.\n\nCapital resources. The RHM program occupies four offices in the country, one in each of Swaziland’s four regions. The program also owns two cars.\n\nCosts. Table 5 shows the running costs of the RHM program for 2011 using data from the Kingdom of Swaziland Budget versus Expenditure Report 201224, which was the latest data available to us. We present these costs in terms of purchasing power parity dollars (PPP$). One PPP$ is calculated such that it had the same purchasing power in Swaziland in 2011 as one US dollar had in the United States in that year. Roughly two thirds of the program costs are spent on salaries for the RHMs. As of 2015, RHMs earned 350 Swazi Lilangeni per month, which is approximately US$ 22.50 (PPP$ 73.22).\n\nAbbreviations: PPP$ = Purchasing power parity-adjusted dollars; RHM, rural health motivator.\n\n1 This is the PPP$ value for 2011 (i.e., not further adjusted for inflation since 2011). The PPP conversion factor for Swaziland for 2011 was obtained from the United Nations Statistics Division29.\n\n2 In 2011, the RHM program employed 4,765 RHMs.\n\n\nDiscussion\n\nThis evaluation identified a number of weaknesses in the RHM program’s performance. First, despite being in close geographic proximity to their clients, the Swazi population appears to prefer seeking care from other healthcare workers rather than from the RHM cadre. 
As found in particular through our qualitative interviews, community members rarely seek care from RHMs, and if they do, this tends to be for emergency care when care from other health care providers is unavailable. Second, client satisfaction with the RHM program appears to be comparatively low. The survey data on client satisfaction is likely to suffer from some degree of courtesy or social desirability bias whereby community members give a more favorable assessment of the RHMs’ care to abide by a perceived social norm of showing satisfaction and gratitude rather than criticism25. Despite the possibility of this bias, a comparatively low proportion (46.3%) of community members rated their overall satisfaction with RHM services as eight or more points on a 10-point scale ranging from very dissatisfied to very satisfied. Third, RHMs do not appear to provide the quantity of care that the program aims to provide. Data on the number of households visited by RHMs are self-reported and may, thus, also suffer from an upward bias as RHMs are likely to want to appear as fulfilling their duties. Despite this likely bias, only 15.7% of RHMs reported achieving the program target of visiting all assigned households at least once a month. Overall, improving the performance of the RHM cadre may be necessary to successfully shift HIV care tasks from facility-based to RHM-led care.\n\nOur assessment of the RHM program on the program- and system-level dimensions of the CHW Performance Logic Model provides some insight into factors that might be lowering RHM performance. In general, RHMs report that they are satisfied with the quantity and quality of training and supervision provided to them. However, RHMs are dissatisfied with the level of monetary compensation, with 57.8% of RHMs indicating that the level of their pay is unfair given the amount of work they do. In 2015, RHMs earned 350 Swazi Lilangeni (approximately US$ 22.50) per month. 
Additionally, in the qualitative interviews, RHMs reported that they face transport costs and bank fees to collect and cash their paycheck. Swaziland’s national poverty line lies at US$ 3.10 per day26. Ignoring costs to collect and cash their paycheck, RHMs earn approximately US$ 0.74 per day, which is only 23.9% of the daily income corresponding to the national poverty line. Expectations of RHM performance need to be examined in light of this comparatively low level of pay. The low pay is likely also an obstacle to shifting HIV care tasks to RHMs, as many of these tasks, such as ART home-delivery, require reliable and constant care. A theme arising from our qualitative interviews, however, was that RHMs view themselves as volunteers rather than employees given their low level of pay. It would thus seem likely that other income-generating activities take priority over RHM work, which in turn may lead to prolonged gaps in RHM care delivery.\n\nApart from monetary compensation, RHMs were also dissatisfied with the material resources provided to them by the RHM program for performing their duties. In the qualitative interviews, RHMs frequently mentioned that community members expect them to provide certain material resources, such as diapers, medications (particularly paracetamol), bandages, and disposable gloves. RHMs felt that not being able to meet this expectation was an important barrier to maintaining a good relationship with the community and to covering the households that they were assigned. Thus, providing the expected material resources to RHMs and/or altering the expectations of community members to receive such resources from RHMs may increase RHM performance. Improving the RHM-client relationship is of particular importance if RHMs are to provide more HIV care given the continued high HIV-related stigma in Swaziland27.\n\nWe used the CHW Performance Logic Model to guide this performance evaluation. 
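The pay figures cited in the discussion can be reproduced with simple arithmetic. The days-per-month divisor below is an assumption (365.25/12); rounding daily pay to two decimals first, as the text's US$ 0.74 figure suggests, reproduces the 23.9% share of the poverty line.

```python
MONTHLY_PAY_SZL = 350            # RHM monthly stipend as of 2015 (from the text)
MONTHLY_PAY_USD = 22.50          # approximate US$ value stated in the text
MONTHLY_PAY_PPP = 73.22          # PPP$ value stated in the text
POVERTY_LINE_USD_PER_DAY = 3.10  # national poverty line cited in the text
DAYS_PER_MONTH = 365.25 / 12     # assumed average month length (~30.44 days)

# Daily pay, rounded to cents as in the text (US$ 0.74/day).
daily_pay = round(MONTHLY_PAY_USD / DAYS_PER_MONTH, 2)
share_of_poverty_line = daily_pay / POVERTY_LINE_USD_PER_DAY

# SZL-per-PPP$ conversion factor implied by the stated figures.
implied_ppp_factor = MONTHLY_PAY_SZL / MONTHLY_PAY_PPP

print(f"US$ {daily_pay:.2f}/day")                          # US$ 0.74/day
print(f"{share_of_poverty_line:.1%} of the poverty line")  # 23.9% of the poverty line
print(f"~{implied_ppp_factor:.2f} SZL per PPP$")           # ~4.78 SZL per PPP$
```

The implied conversion factor is derived only from the 350 SZL and PPP$ 73.22 figures given above; the study itself cites the United Nations Statistics Division for the 2011 factor.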
While the logic model aims to be a useful tool for planning, consensus-building, implementation, and evaluation of CHW programs18, we can only comment on our experience with the model’s usefulness for CHW program evaluations. A key characteristic of the model is that it tries to comprehensively include all factors that may influence CHW performance. As such, the logic model differs strongly from the more simplistic framework of inputs – processes – outputs that we have previously used for a performance evaluation of a CHW program in Dar es Salaam, Tanzania28. In our view, the comprehensive nature of the logic model is its key strength. Given the sheer number of possible factors that may plausibly influence CHW program performance, most evaluators will have to make a decision regarding the scope of their evaluation. The CHW Performance Logic Model could help evaluators clearly define the evaluation’s scope, and be more explicit about their choice of which factors and domains they include in the evaluation. Nonetheless, the model’s comprehensive nature could be a disadvantage if evaluators find the number of possible factors to evaluate overwhelming. In our view, the main disadvantage of the model is that it does not provide any guidance to evaluators on which factors are the most important determinants of CHW performance. As such, a prioritization of the categories and factors in the model based on relevant theory and evidence, rather than an un-weighted list of all factors that plausibly influence CHW performance, would substantially improve the utility of the model. Another limitation of the model is that many of the performance measures and factors assessed under the model’s dimensions lack established measures and scales. 
In addition, there are doubts as to whether a dimension is measured appropriately, which also results in some degree of subjectivity in interpreting what level of CHW program performance the observed achievement on a measure represents.\n\nOther limitations of this study include that the data from the RHM questionnaire are likely to suffer from a degree of self-reporting bias whereby RHMs may, for example, over-report aspects of their work that they perceive as desirable (e.g., the number of households visited). Similarly, household survey respondents may have been hesitant to express criticism of RHMs because they wanted to maintain a good relationship with the RHMs (who are fellow community members chosen by the community and the village chiefs), or simply due to an intrinsic tendency to be courteous. Lastly, while the RHM program is a national program, this assessment has focused on only two of four regions in Swaziland due to feasibility constraints. However, these two regions constitute more than half (52%) of Swaziland’s population, and the program structures for management and implementation of the RHM cadre do not differ between regions. We, therefore, feel confident that the findings of this study apply to the RHM program as a whole.\n\n\nConclusions\n\nThis evaluation found that the RHM program does not meet some of its performance targets. For instance, RHMs are currently not an important point of first call for seeking care for an illness, and the RHMs do not appear to achieve their household coverage target. If the RHM program is to adopt specific HIV-related tasks, then Swaziland’s HIV response would likely benefit from policy and management changes aimed at improving RHM performance. 
While it is beyond the purview of this study to provide an exhaustive list of suitable reforms, two simple changes identified by this evaluation that may lead to an improvement in RHM performance are i) an increase in monetary compensation, and ii) the provision of material resources to RHMs (e.g., paracetamol, diapers, and bandages) to enable RHMs to meet their community’s expectations.\n\n\nData availability\n\nPlease note that some items have been removed/edited due to potentially identifiable information. The datasets contain both CSV and .dta files.\n\nDataset 1: Household (head and member) survey raw data. doi: 10.5256/f1000research.11361.d15877730\n\nDataset 2: Rural health motivator survey raw data. doi: 10.5256/f1000research.11361.d15877831\n\nQualitative interview transcripts are not shared publicly because they cannot be effectively de-identified given the relatively small number of staff involved in the studied community health worker programs. Individuals interested in accessing the transcripts should contact the corresponding author.",
"appendix": "Author contributions\n\n\n\nPG and MV analyzed the data and wrote the first draft of the manuscript. The authors (PG, MV, JWD, TB, TJB) jointly designed the study and data collection tools. All authors (PG, MV, JWD, TB, TJB) provided important edits to the manuscript and approved the final version.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nData used in this study were collected for other activities supported by the American people through the United States Agency for International Development (USAID) with funding from the U.S. President’s Emergency Plan for AIDS Relief (PEPFAR). The data were collected by the Harvard T.H. Chan School of Public Health through the USAID Applying Science to Strengthen and Improve Systems (ASSIST) Project. The USAID ASSIST Project is managed by University Research Co., LLC (URC) under the terms of Cooperative Agreement AID-OAA-A-12-00101. The authors’ views expressed in this paper do not necessarily reflect the views of USAID or the United States Government.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nSupplementary material\n\nSupplementary File 1: Household survey questionnaires consisting of a questionnaire for all household members aged 11 years and older and an additional questionnaire for the household head.\n\nSupplementary File 2: Questionnaire for the survey of rural health motivators.\n\nSupplementary File 3: Interview guide for RHMs, RHM trainers, and the RHM program management.\n\n\nReferences\n\nWorld Health Organization: World Health Statistics 2016. Geneva: World Health Organization, 2016.\n\nSingh P, Sachs JD: 1 million community health workers in sub-Saharan Africa by 2015. Lancet. 2013; 382(9889): 363–5. 
PubMed Abstract | Publisher Full Text\n\nHongoro C, McPake B: How to bridge the gap in human resources for health. Lancet. 2004; 364(9443): 1451–6. PubMed Abstract | Publisher Full Text\n\nMwai GW, Mburu G, Torpey K, et al.: Role and outcomes of community health workers in HIV care in sub-Saharan Africa: a systematic review. J Int AIDS Soc. 2013; 16(1): 18586. PubMed Abstract | Publisher Full Text | Free Full Text\n\nViswanathan M, Kraschnewski J, Nishikawa B, et al.: Outcomes of community health worker interventions. Evid Rep Technol Assess (Full Rep). Rockville, MD: RTI International–University of North Carolina Evidence-based Practice Center. 2009; 181: 1–144; A1-2, B1-14, passim. PubMed Abstract | Free Full Text\n\nGilmore B, McAuliffe E: Effectiveness of community health workers delivering preventive interventions for maternal and child health in low- and middle-income countries: a systematic review. BMC Public Health. 2013; 13: 847. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLewin S, Munabi-Babigumira S, Glenton C, et al.: Lay health workers in primary and community health care for maternal and child health and the management of infectious diseases. Cochrane Database Syst Rev. 2010; (3): Cd004015. PubMed Abstract | Publisher Full Text\n\nPerry HB, Zulliger R, Rogers MM: Community health workers in low-, middle-, and high-income countries: an overview of their history, recent evolution, and current effectiveness. Annu Rev Public Health. 2014; 35: 399–421. PubMed Abstract | Publisher Full Text\n\nEast Central and Southern African Health Community: Task shifting in Swaziland: A case study. Washington, DC: Futures Group, Health Policy Initiative, Task Order 1, 2010. 
Reference Source\n\nGBD 2013 DALYs and HALE Collaborators, Murray CJ, Barber RM, et al.: Global, regional, and national disability-adjusted life years (DALYs) for 306 diseases and injuries and healthy life expectancy (HALE) for 188 countries, 1990-2013: quantifying the epidemiological transition. Lancet. 2015; 386(10009): 2145–91. PubMed Abstract | Publisher Full Text | Free Full Text\n\nUNAIDS: 90-90-90 An ambitious treatment target to help end the AIDS epidemic. Geneva: UNAIDS, 2014. Reference Source\n\nUNAIDS, One Million Community Health Workers: UNAIDS joins forces with the One Million Community Health Workers campaign to achieve the 90–90–90 treatment target. 2016, (accessed 24 March 2016). Reference Source\n\nWorld Health Organization, PEPFAR, UNAIDS: Task shifting - Global recommendations and guidelines. Geneva: World Health Organization, 2008. Reference Source\n\nJaffar S, Amuron B, Foster S, et al.: Rates of virological failure in patients treated in a home-based versus a facility-based HIV-care model in Jinja, southeast Uganda: a cluster-randomised equivalence trial. Lancet. 2009; 374(9707): 2080–9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSelke HM, Kimaiyo S, Sidle JE, et al.: Task-shifting of antiretroviral delivery from health care workers to persons living with HIV/AIDS: clinical outcomes of a community-based program in Kenya. J Acquir Immune Defic Syndr. 2010; 55(4): 483–90. PubMed Abstract | Publisher Full Text\n\nGeldsetzer P, Francis JM, Ulenga N, et al.: The impact of community health worker-led home delivery of antiretroviral therapy on virological suppression: A non-inferiority cluster-randomized health systems trial in Dar es Salaam, Tanzania. BMC Health Serv Res. 2017; 17(1): 160, In press. PubMed Abstract | Publisher Full Text | Free Full Text\n\nICAP-Swaziland: RHM Review 2012. 
Mbabane: Columbia University, 2012.\n\nNaimoli JF, Frymus DE, Wuliji T, et al.: A Community Health Worker \"logic model\": towards a theory of enhanced performance in low- and middle-income countries. Hum Resour Health. 2014; 12: 56. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCentral Statistical Office: 2007 Population and Housing Census. Mbabane, Swaziland: The Kingdom of Swaziland, UNFPA, 2010; 6. Reference Source\n\nCentral Statistical Office, Macro International Inc: Swaziland - Demographic and Health Survey 2006-2007. 2007. Reference Source\n\nMinistry of Health: Swaziland HIV incidence measurement survey (SHIMS). Mbabane: Kingdom of Swaziland, 2012. Reference Source\n\nBicego GT, Nkambule R, Peterson I, et al.: Recent patterns in population-based HIV prevalence in Swaziland. PLoS One. 2013; 8(10): e77101. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSandelowski M: Whatever happened to qualitative description? Res Nurs Health. 2000; 23(4): 334–40. PubMed Abstract | Publisher Full Text\n\nThe Kingdom of Swaziland: Budget versus Expenditure Report. Mbabane, Swaziland, 2012.\n\nGlick P: How reliable are surveys of client satisfaction with healthcare services? Evidence from matched facility and household data in Madagascar. Soc Sci Med. 2009; 68(2): 368–79. PubMed Abstract | Publisher Full Text\n\nOxford Poverty and Human Development Initiative: OPHI Country Briefing Dec 2015: Swaziland. Oxford, UK: Oxford Department of International Development, University of Oxford, 2015.\n\nTsai AC: Socioeconomic gradients in internalized stigma among 4,314 persons with HIV in sub-Saharan Africa. AIDS Behav. 2015; 19(2): 270–82. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLema IA, Sando D, Magesa L, et al.: Community health workers to improve antenatal care and PMTCT uptake in Dar es Salaam, Tanzania: a quantitative performance evaluation. J Acquir Immune Defic Syndr. 2014; 67(Suppl 4): S195–201. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nUnited Nations Statistics Division. Purchasing power parities (PPP) conversion factor, local currency unit to international dollar. (accessed September 8 2016) 2016. Reference Source\n\nGeldsetzer P, Vaikath M, de Neve JW, et al.: Dataset 1 in: A qualitative and quantitative performance evaluation of Swaziland’s Rural Health Motivator program. F1000Research. 2017. Data Source\n\nGeldsetzer P, Vaikath M, de Neve JW, et al.: Dataset 2 in: A qualitative and quantitative performance evaluation of Swaziland’s Rural Health Motivator program. F1000Research. 2017. Data Source"
}
|
[
{
"id": "26804",
"date": "07 Nov 2017",
"name": "Frédérique Vallières",
"expertise": [
"Applications of Psychology to Global Health",
"Organisational Psychology",
"Global Mental Health",
"Psychotraumatology",
"Latent Variable Modelling"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThank you for the opportunity to review this interesting paper by Geldsetzer et al. The authors conducted a cross-sectional, mixed-methods study examining the performance of Swaziland’s RHM programme. The study had two underlying objectives (i) to assess the performance of the RHM programme, using the CHW Performance Logic Model, and (ii) to identify ways in which performance could be improved. Overall, this is a well-written paper with many strengths, including an impressive sample of 2000 households, representative of two of Swaziland’s four administrative regions, and a further 25 semi-structured interviews. However, the paper would benefit from a number of important revisions, especially pertaining to a clear definition of performance, the methodological approaches used to define and measure this, and more rigorous analytical approaches in order to adequately address the study’s objectives.\n\nSuggested Revisions\nIntroduction\n\nWhile evaluating Swaziland’s RHM program is a worthy endeavour, I would argue that it is unfair to state that ‘many [of the] large programs that have existing for decades have not yet been rigorously evaluated’ (p.3) as the rationale for the current study. For example, there have been a number of evaluations of Ethiopia’s CHEW programme1 and Pakistan’s LHW programme2. Many of these studies also arguably employ more rigorous evaluation approaches than the current study. 
It is probably sufficient to say that no evaluations of the RHM in Swaziland have taken place to date, and if RHMs are expected to contribute to achieving 90-90-90, then a better understanding of the programme’s progress is required. More detail is required on the RHM programme in general. Are the RHMs remunerated? If so, who pays them? If not, what non-financial incentives are in place for them, if any? Who supervises them? Are they affiliated with a health centre? Are they trained? If so, for how long? Are they predominantly women? Are they recognised as part of the formal health system? Additional background on the context of Swaziland, specifically as it relates to their lack of human resources for health, and their need to task shift/share, would also improve the introduction and offer more context for the reader. A clearer rationale for the importance of assessing performance of CHWs is required. Performance of health workers is a difficult construct to define, and there would be large variation in the literature in terms of how performance is (i) defined and (ii) measured. The paper would benefit greatly from engaging with this literature. Specifically, the paper should engage the literature examining the relationship between job satisfaction, motivation, intention to leave, etc. and performance, in order to build justification for the inclusion of these variables within the questionnaire design.\n\nDid the authors consider other frameworks? If so, why was the CHW Performance Logic Model chosen as the framework for this study? A stronger rationale for why this framework was chosen is required as this model does not measure performance per se, but rather puts forward a theoretical pathway towards performance. What evidence is there to suggest that this model is valid (i.e. predictive validity) in terms of predicting CHW performance? 
This is also a methodological decision and does not belong in the Introduction.\n\nMethods\n\nStudy Setting\nClarify why Lubombo and Manzini were chosen as the two study areas. Was this random selection? If not, how were these two chosen? Is there any reason to believe that differences might exist between these two areas? What percentage of Swaziland’s population resides within these two areas? Who operates the other CHW programmes in Swaziland? While the authors specify that each has a workforce of fewer than 50 CHWs, how many CHWs in Swaziland are part of other programmes (i.e. not RHMs)? If you have the data for these, why not include these and control for ‘CHW programme’ in your analysis? No need to repeat that the RHM programme was established in 1976 and employs 5000 RHMs again here, as it is already in the introduction.\n\nStudy Design & Materials\nClearly state that this study uses a cross-sectional, mixed-methods design. As above, a clearer operational definition of what is meant by ‘performance’ is needed. What is meant by high or good performance here? How was ‘performance’ defined and measured for the purpose of this study? Why, for example, not include indicators that are aligned to the activities outlined under the RHM program section of the introduction (page 2 para 5)? It seems only fair that performance of the RHMs should, at least in part, be measured against those activities and tasks they are expected to complete. This is a major element missing from this study. A much more detailed description of the study tools is required. As it stands, it is difficult to evaluate the validity of the scales used without knowing the details of how the survey employed during data collection was designed. For example, have the ‘job satisfaction’, ‘social support’, ‘supervision’, and ‘motivation’ items used in the RHM questionnaire (Part 12 and Part 13, respectively) been used in other studies? If so, which ones? Have these scales been validated? 
According to these validation studies, how are they meant to be coded/scored?\nSome of the items under Part 13 do not seem to be measuring motivation (i.e. lack face validity). For example, items 13.8 & 13.9 seem to be measuring conscientiousness at work more so than motivation.\n\nSampling\nIt would be quite unlikely in an entirely random sample to have the same number of rural (n=37) and urban (n=13) EAs in each region, especially since the study setting section describes Manzini as comparatively more urban and wealthy than Lubombo. What is the rationale for interviewing family members aged 11 years or older? Also, one would expect that those over the age of 18 would provide written consent, but those between 11-17 would provide assent, with parental consent. Please clarify whether this was the case. On page 4, first paragraph, please clarify the sampling for the SSIs. The authors state that 13 RHMs from Manzini and 12 from Lubombo were selected, but then go on to specify that 13 were from rural and 12 were from urban areas, in each region. Were there 50 SSIs in total?\n\nData Analysis\n\nThe quantitative data analysis is rather superficial (i.e. descriptive), and unfortunately, does not exploit the richness of the dataset. Why were inferential statistics not employed to look at correlations between certain factors that are known to predict performance (i.e. job satisfaction, motivation at work, supportive supervision) and performance (however this is defined here)? Regression methods, with ‘performance’ as the dependent variable, could be used to more rigorously assess variables that are associated with performance, and are aligned with addressing the second objective of your study: (ii) identify ways in which performance could be improved. The logic here being that improvements made to these factors could result in corresponding changes in ‘performance’. 
Moreover, you would be able to control for differences across Lubombo and Manzini, CHW programmes, etc.\n\nHow were scores calculated? For example, on what basis was it decided that anyone who scored below 8/10 on the Likert scale should not be classified as ‘satisfied with the services provided by the RHMs in their community’? Why recode the answers into dichotomous variables, instead of using the mean score? These methodological decisions need to be justified/described in much greater detail.\nHow was missing data treated?\n\nThe qualitative data analysis, given the use of a comprehensive interview guide, with broadly pre-determined themes (i.e. the dimensions of the CHW Performance Logic Model), strikes me as having been analysed using more deductive, rather than inductive approaches. More detail is required as to how you: ‘conducted iterative reviews’ and ‘further refined themes’ and established ‘their relationship to each other’. As it stands, the qualitative description of the analysis is insufficient to ensure replicability.\n\nResults\n\nThe results in part read more like a report against a logframe or results-based framework than a research paper. As above, the lack of clarity around how dichotomous categories were recoded from the Likert scale makes the results difficult to interpret. In line with the above comment on inductive vs. deductive approaches, I’m not convinced that HHs approaching RHMs being ‘rare or infrequent’ (pg 6, para 2) is an emerging theme or topic, when the question asked was “How often are you contacted for help or advice?”. Figure 2 is not very telling. Instead of presenting as a histogram, consider presenting as mean scores for ‘satisfaction’ across the various items in the HH questionnaire, with the Likert Scale range of scores presented on the y-axis. The “Inputs” section is odd here and should form part of the narrative describing the RHM programme in the introduction. 
RHM salaries in Table 5 should also be reported in USD to give the reader some idea of what this figure represents. Overall, I’m not convinced that the results presented are sufficient to answer the research objectives (i) to assess the performance of the RHM programme and (ii) to identify ways in which performance could be improved. The results could however, shed some light on how the RHM programme is faring in terms of known determinants of performance.\n\nDiscussion\n\nWhile every programme has room for improvement, I would argue that the results presented are not only suggestive of ‘a number of weaknesses in the RHM’ programme. Sure, resource mobilization and poor pay are an issue (as they are in most places!), but overall, the evidence also suggests a programme that has maintained high community support, a high level of quality training, frequent and supportive levels of supervision from the health facilities, and relatively low levels of people who were thinking of leaving their job. These are all worthy accomplishments for a health programme where resources are extremely limited. Moreover, these are all factors that the literature would strongly suggest are important in predicting health worker performance. While I understand that this type of narrative is what keeps foreign aid funding flowing into programmes, our responsibility as researchers is to present the evidence as objectively as possible. Here, the evidence provided is quite strong for many successful elements of the RHM programme, and these should be discussed too.\n\nPage 10 para 1: “overall, improving the performance of the RHM cadre may be necessary to successfully shift HIV care tasks from facility-based to RHM-led care” seems like an over-interpretation of findings. The evidence presented more so suggests that adequate compensation and more material resources are likely necessary before asking RHMs to take on yet another task within the health system. 
The evidence presented offers some insight into why RHMs may not be meeting their HH targets (having to return to homes more than once if a family is absent, attitudes towards RHMs). This should be discussed accordingly, as the reason for this observation does not appear to lie solely with the RHM. The breakdown of income earning on page 10 is interesting, but would be better suited to a description of the RHM programme in the literature review. Page 10, para 2: “Our assessment of the RHM program…provides some insight into factors that might be lowering RHM performance”. There is no evidence presented to suggest that ‘performance’ decreased in any way. Please rephrase. Page 10 para 2: “a theme arising from our qualitative interviews, however, was that RHMs view themselves as volunteers rather than…”. I did not see this evidence presented in the results of the paper. Generally speaking, new results should not be presented in the discussion. Limitations: I disagree that many of the performance-related factors assessed under the CHW Performance Logic Model lack established measures and scales. The I/O psychology literature, for example, contains a number of well-developed, cross-culturally validated scales of social support, motivation at work, job satisfaction, etc.\n\nConclusion\nRegarding the statement that, “the evaluation found that the RHM program does not meet some of its performance targets”: Are RHMs really intended to act as the first point of care in this context? Other than not visiting with all 25-30 households on a monthly basis, what other performance target(s) does this statement refer to?\n\nOther (including Figures & Tables)\n\n2nd para, page 4, there is no need to explain in text what the contributions of each author were. There should be an ‘Author Contributions’ section which allows for this.\n\nIs the work clearly and accurately presented and does it cite the current literature? 
Partly\n\nIs the study design appropriate and is the work technically sound? Partly\n\nAre sufficient details of methods and analysis provided to allow replication by others? No\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nNo\n\nAre all the source data underlying the results available to ensure full reproducibility? Partly\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": []
},
{
"id": "25132",
"date": "09 Nov 2017",
"name": "Eilish McAuliffe",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nGeneral comment This study sets out to evaluate the performance of Swaziland’s Rural Health Motivator Programme. As most resource-poor countries with human resources shortages in their health sectors are employing CHWs as a strategy to respond to the health needs of communities, evaluations of this strategy are highly relevant for health policy makers. Gathering data from households and CHWs has the potential to generate a much better understanding of how these programmes work or don’t work. This study is impressive in the large amount of data gathered from different stakeholder perspectives. It is a well-written paper, but with some critical omissions as set out below.\n\nIntroduction A brief discussion on published evaluations of CHW programmes (e.g. well-established programmes such as the Health Extension Workers programme in Ethiopia and the Lady Health Workers in Pakistan) would help to situate this research in the field of study and enable the authors to better articulate the study’s contribution to the field in their discussion and conclusions part of the paper. Some description of RHMs and how they differ from CHWs would be helpful.\n\nMethods A rationale needs to be provided for the study design. Why were household surveys and RHM surveys conducted? Was it the authors’ intention to cross reference the results? What was the specific purpose of the qualitative interviews? What is the rationale for the sample size of 25? 
No information is provided on how the interviews map to the CHW Performance Logic Model. It is not clear if the interviews were conducted after the analysis of the quantitative data and whether they were used to explore issues arising from questionnaire results in more depth or to address some other aspect of the Logic Model. The rationale for the study design and the specific purpose of each of the methods needs to be included. Authors need to include some explanation of the CHW Performance Logic Model and why it was considered most relevant for this study. Performance is the key outcome variable in this study, yet the authors have not defined what this term means in the context of this study. The authors state that if no household member was available at first visit, they did not revisit. They ought to give some consideration to the potential bias this may have introduced into the study. For instance, it may be that those who were employed and out at work were less likely to be available than those who were unemployed. No information is provided about the time the visits took place, so it is difficult to know if this bias could have occurred. Some further expansion of the methods and/or consideration of this issue in the limitations of the study is needed.\n\nResults A strength of the study is the use of three sources of data and the opportunity this presents to triangulate data. The relatively large sample sizes should allow for a robust interrogation of the data. It is therefore disappointing that the analysis is rather superficial and confined mainly to descriptive stats. There are some obvious questions that remain unanswered because the relevant data have not been triangulated. For example, the RHMs’ reported rate of household visits could have been triangulated with the household members’ responses to Questions 2.8-2.14 to give a much better understanding of the productivity of the RHMs. 
Also there are many questions in the household member study that explore the performance of RHMs, the results of which are not presented at all in this paper. If these results have been presented elsewhere, it would be helpful to refer to the publication.\nThe statement “30.5% of RHMs reported to have done work other than for the RHM program during the previous 12 months” requires further explanation. Does this mean these RHMs worked elsewhere before joining the program or that they concurrently take on other work while conducting their RHM duties?\n57.8% of the sample disagree that they are paid a fair amount for the work they do, yet there are several non-monetary payments. Is it possible that these compensate?\nIt is mentioned that half of respondents were satisfied or very satisfied with their job and then “Roughly a quarter (26.2%) of RHMs reported to occasionally or often think about leaving their job”. Was this issue explored in the qualitative interviews? Why is such a high proportion thinking about leaving their jobs?\nI also have concerns about the treatment of the qualitative data. It is not clear if quotes provided are indicative of a majority view or not. Whilst it would not be appropriate to attempt to quantify qualitative data, some sense of how widely held the opinions represented in the quotes are in the sample of interviewees would be appropriate.\nThe following quote is provided as illustrative of physical distance being a barrier to RHMs visiting the households, but the quote is illustrative of the ease of visiting due to short distance and is therefore not appropriate to illustrate the point above: “I find it to be very easy since the households I am responsible for are nearby and I do not need to walk a long distance”\n\nDiscussion There are several problems with the discussion section of this paper.\n\nThe first paragraph of the discussion, on social desirability bias, needs to be re-written. 
The authors suggest that social desirability bias may have impacted the answers to two specific questions, but the results (i.e. low satisfaction with RHM and low rates of productivity reported by RHMs) suggest the opposite is more likely. This needs further discussion and explanation. There seems to be an element of selectivity about what results are discussed. For example, taking two results presented in the results section: “The majority of RHMs expressed dissatisfaction with the compensation offered. 57.8% either disagreed (38.0%) or strongly disagreed (19.8%) with the statement that “Given the amount of work I do as a rural health motivator, I am being paid a fair amount”. and “65.3% indicated that they would like to receive more supervision. Qualitatively, in cases where RHMs expressed interest in additional supervision, the reason tended to be that they felt additional feedback would help motivate them further and support continued learning”. The first of these results is discussed and a recommendation put forward to increase the monetary reward of RHMs. The second result represents a greater majority and is clearly linked to improved motivation, yet it is not mentioned in the discussion and additional supervision is not put forward as a recommendation. There are many apparent contradictions in the data that are not discussed, e.g. low satisfaction levels with RHM services, but a very high percentage of households would recommend the RHM programme to other communities. The discussion section contains statements for which no evidence is provided in the results section e.g. 
“The low pay is likely also an obstacle for shifting HIV care tasks to RHMs, as many of these tasks, such as ART home-delivery, require reliable and constant care.” “Qualitative interview manuscripts are not shared publicly because they cannot be effectively de-identified given the relatively small number of staff involved in the studied community health worker programs.” This is not a convincing explanation as 25 interviewees from a sample of 306 would suggest that their anonymity could be protected. If the data cannot be provided, then the extracted text coded to the main reported themes should be made available. In the limitations, the authors provide a comprehensive account of the weaknesses of the CHW Performance Logic Model. This begs the question as to why they decided to use such a flawed model as the framework for the evaluation, given that this information on the Model was available to them.\n\nConclusions. Again, some selectivity is evident, as in the discussion, but some important points are also made regarding the deficiencies in the RHM programme.\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nIs the study design appropriate and is the work technically sound? Partly\n\nAre sufficient details of methods and analysis provided to allow replication by others? No\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nPartly\n\nAre all the source data underlying the results available to ensure full reproducibility? No\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-607
|
https://f1000research.com/articles/5-2824/v1
|
06 Dec 16
|
{
"type": "Software Tool Article",
"title": "Cluster Flow: A user-friendly bioinformatics workflow tool",
"authors": [
"Philip Ewels",
"Felix Krueger",
"Max Käller",
"Simon Andrews"
],
"abstract": "Pipeline tools are becoming increasingly important within the field of bioinformatics. Using a pipeline manager to manage and run workflows comprised of multiple tools reduces workload and makes analysis results more reproducible. Existing tools require significant work to install and get running, typically needing pipeline scripts to be written from scratch before running any analysis. We present Cluster Flow, a simple and flexible bioinformatics pipeline tool designed to be quick and easy to install. Cluster Flow comes with 40 modules for common NGS processing steps, ready to work out of the box. Pipelines are assembled using these modules with a simple syntax that can be easily modified as required. Core helper functions automate many common NGS procedures, making running pipelines simple. Cluster Flow is available with a GNU GPLv3 license on GitHub. Documentation, examples and an online demo are available at http://clusterflow.io.",
"keywords": [
"Workflow",
"Pipeline",
"Data analysis",
"Parallel computing",
"Next-generation sequencing",
"Bioinformatics"
],
"content": "Introduction\n\nAs the field of genomics matures, next-generation sequencing is becoming more and more affordable. Experiments are now frequently run with large numbers of samples with multiple conditions and replicates. The tools used for genomics analysis are increasingly standardised with common procedures for processing sequencing data. It can be inconvenient and error prone to run each step of a workflow or pipeline manually for multiple samples and projects. Workflow managers are able to abstract this process, running multiple bioinformatics tools across many samples in a convenient and reproducible manner.\n\nNumerous workflow managers are available for next-generation sequencing (NGS) data, each varying in its approach and use. Many of the popular tools allow the user to create analysis pipelines using specialised domain specific languages (Snakemake1, NextFlow2, Bpipe3). Such tools allow users to rewrite existing shell scripts into pipelines and are principally targeted at experienced bioinformaticians with high throughput requirements. They can be used to create highly complex analysis pipelines that make use of concepts, such as divergent and convergent data flow, logic checkpoints and multi-step dependencies. Using such a free-form approach allows great flexibility in workflow design.\n\nWhilst powerful, this flexibility comes at the price of complexity. Setting up new analysis pipelines with these tools can be a huge task that deters many users. Many NGS genomics applications don’t require such advanced features and can instead be run using a simple, mostly linear, file based system. Cluster Flow aims to fill this niche: numerous modules for common NGS bioinformatics tools come packaged with the tool (Supplementary File 1: Table S1), along with ready to run pipelines for standard data types. By using a deliberately restricted data flow pattern, Cluster Flow is able to use a simple pipeline syntax. 
What it lacks in flexibility it makes up for with ease of use; sensible defaults and numerous helper functions make it simple to get up and running.\n\nCluster Flow is well suited to those running analyses for low to medium numbers of samples. It provides an easy setup procedure with working pipelines for common data types out of the box, and is great for those who are new to bioinformatics.\n\n\nMethods\n\nCluster Flow is written in Perl and requires little in the way of installation. Files should be downloaded from the web and the program directory added to the user’s bash PATH. Command line wizards then help the user to create a configuration file. Cluster Flow requires pipeline software to be installed on the system and directly callable or available as environment modules, which can be loaded automatically as part of the packaged pipelines.\n\nCluster Flow requires a working Perl installation with a few minimal package dependencies, plus a standard bash environment. It has been primarily designed for use within Linux environments. Cluster Flow is compatible with clusters using Sun Grid Engine, SLURM and LSF job submission software. It can also be run in ‘local’ mode, instead submitting background jobs using bash.\n\nPipelines are launched using the cf Perl script, with input files and other relevant metadata provided as command line options. This script calculates the required jobs and launches them accordingly.\n\nCluster Flow uses modules for each task within a pipeline. A module is a standalone program that uses a simple API to request resources when Cluster Flow launches. The module then acts as a wrapper for a bioinformatics tool, constructing and executing a suitable command according to the input data and other specified parameters.\n\nModules are strung together into pipelines with a very simple pipeline configuration script (Supplementary File 1: Figure S1). 
Module names are prefixed with a hash symbol (#), and tab spacing indicates whether modules can be run in parallel or in series. Parameters recognised by modules can be added after the module name or specified on the command line to customise behaviour.\n\nCluster Flow comes with integrated reference genome management. At its core, this is based on a configuration file listing paths to references with an ID and their type. An interactive command line wizard helps with building this file, and can automatically search for common reference types. Once configured, the genome ID can be specified when running Cluster Flow, making multiple reference types available for that assembly. This makes pipelines simple and intuitive to launch (Figure 1A).\n\nFigure 1. Process for (A) launching an analysis pipeline, (B) checking its status on the command line and (C) a typical notification e-mail.\n\nUnlike most other pipeline tools, Cluster Flow does not use a running process to monitor pipeline execution. Instead, it uses a file-based approach, appending the outputs of each step to ‘.run’ files. When running in a cluster environment, cluster jobs are queued using the scheduler’s native dependency management. Cluster Flow can also be run locally, using a bash script in a background job to run modules in series. The current status can be queried using a subcommand, which prints the queued and running steps for each pipeline along with information such as total pipeline duration and the working directory (Figure 1B).\n\nWhen pipelines finish, Cluster Flow automatically parses the run log files and builds text and HTML summary reports describing the run. These include key status messages and list all commands executed. Any errors are clearly highlighted both within the text and at the top of the report. 
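The file-based bookkeeping can be pictured with a few lines of shell. The file name and message format below are invented for illustration and are not Cluster Flow’s actual log format:

```shell
# Illustrative sketch: append each step's outcome to a '.run'-style log,
# then answer status queries by scanning that file.
run_file="demo_pipeline.run"
: > "$run_file"                                   # start a fresh run log
echo "module:trim_galore status:complete" >> "$run_file"
echo "module:bowtie2 status:running"      >> "$run_file"
# A status query reduces to scanning the log, e.g. counting finished steps:
grep -c "status:complete" "$run_file"
```

Because all state lives in the log file, no running monitor process is needed; any tool that can read the file can report progress.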
This report is then e-mailed to the user for immediate notification about pipeline completion, clearly showing whether the run was successful or not (Figure 1C).\n\nMuch of the Cluster Flow functionality is geared towards the end-user, making it easy to launch analyses. It recognises paired-end and single-end input files automatically, grouping accordingly and triggering paired-end-specific commands where appropriate. Regular expressions saved in the configuration can automatically merge multiplexed samples before analysis, and FastQ files are queried for their encoding type before running. If URLs are supplied instead of input files, Cluster Flow will download and run these, enabling public datasets to be obtained and analysed in a single command. Cluster Flow is also compatible with SRA-explorer (https://ewels.github.io/sra-explorer/), which fetches download links for entire SRA projects. Such features can save the user a lot of time and prevent mistakes when running analyses.\n\n\nUse cases\n\nCluster Flow is designed for use with next-generation sequencing data. Most pipelines take raw sequencing data as input, either in FastQ or SRA format. Outputs vary according to the analysis chosen and can range from aligned reads (e.g. BAM files) to quality control outputs to processed data (e.g. normalised transcript counts). Tool wrappers are written to be as modular as possible, allowing custom data flows to be created.\n\nThe core Cluster Flow program is usually installed centrally on a cluster. This installation can have a central configuration file with common settings and shared reference genome paths. Users can load this through the environment module system and create a personal configuration file using the Cluster Flow command line setup wizard. This saves user-specific details, such as e-mail address and cluster project ID. 
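A personal configuration file of the kind created by the wizard can be pictured as a short list of key–value settings. The keys and values below are illustrative only, not a verbatim copy of Cluster Flow’s configuration format:

```
@email                user@example.com
@cluster_environment  slurm
@notification         complete
```

Site-wide settings such as shared genome paths live in the central configuration file, while per-user details like these are kept in the personal one.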
In this way, users of a shared cluster can be up and running with Cluster Flow in a matter of minutes.\n\nCluster Flow can also easily be used on single-node clusters in local mode, as a quick route to running common pipelines. Some groups have used Cluster Flow in the cloud using the MIT STAR Cluster Package (http://star.mit.edu/cluster/) with Amazon AWS.\n\n\nConclusions\n\nWe describe Cluster Flow, a simple and lightweight workflow manager that is quick and easy to get to grips with. It is designed to be as simple as possible to use; as such, it lacks some features of other tools, such as the ability to resume partially completed pipelines and the generation of directed acyclic graphs. However, this simplicity allows for easy installation and usage. Packaged modules and pipelines for common bioinformatics tools mean that users don’t have to start from scratch and can get their first analysis launched within minutes. It is best suited for small to medium-sized research groups who need a quick and easily customisable way to run common analysis workflows, with intuitive features that help bioinformaticians to launch analyses with minimal configuration.\n\n\nSoftware availability\n\nCluster Flow available from: http://clusterflow.io\n\nSource code available from: https://github.com/ewels/clusterflow/tree/v0.4\n\nArchived source code as at time of publication: doi: 10.5281/zenodo.579004\n\nLicense: GNU GPLv3",
"appendix": "Author contributions\n\n\n\nPE wrote the tool and manuscript. FK provided coding help and advice. MK supported further development and contributed to the manuscript. SA conceived the initial concept, helped with code and provided manuscript feedback. All authors were involved in the revision of the draft manuscript and have agreed to the final content.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis work was supported by the Science for Life Laboratory and the National Genomics Infrastructure (NGI) as well as the Babraham Institute and the UK Biotechnology and Biological Sciences Research Council (BBSRC).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nThe authors would like to thank S. Archer, J. Orzechowski Westholm, C. Wang and R. Hamilton for contributed code and discussion.\n\n\nSupplementary material\n\nSupplementary File 1: Typical pipeline script (Figure S1) and a list of modules with tool description and URL (Table S1). Figure S1: The script shows the analysis pipeline for reduced representation bisulfite sequencing (RRBS) data, from FastQ files to methylation calls with a project summary report. Pipeline steps will run in parallel for each read group for steps prefixed with a hash symbol (#). All input files will be channelled into the final process, prefixed with a greater-than symbol (>). Table S1: List of modules excludes Core Cluster Flow modules. List valid at time of writing for Cluster Flow v0.4.\n\nClick here to access the data.\n\n\nReferences\n\nKöster J, Rahmann S: Snakemake--a scalable bioinformatics workflow engine. Bioinformatics. 2012; 28(19): 2520–2522. PubMed Abstract | Publisher Full Text\n\nDi Tommaso P, Chatzou M, Baraja PP, et al.: A novel tool for highly scalable computational pipelines. 2014; 8003. 
Publisher Full Text\n\nSadedin SP, Pope B, Oshlack A: Bpipe: A tool for running and managing bioinformatics pipelines. Bioinformatics. 2012; 28(11): 1525–1526. PubMed Abstract | Publisher Full Text\n\nEwels P, Archer S, Andrews S, et al.: clusterflow: Cluster Flow v0.4 [Data set]. Zenodo. 2016. Data Source"
}
|
[
{
"id": "18295",
"date": "19 Dec 2016",
"name": "Alastair R. W. Kerr",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nAs the software is available and in use in multiple institutions (and thus tried and tested), I have no problems with accepting the manuscript. I feel that the manuscript and/or the linked documentation would benefit from some changes noted below.\nInstall Having a copy of the install instruction in the downloaded tarball would be useful. The “cf” executable uses the FindBin Perl module to establish the location of the script and hence the relative path to the CF Perl modules. Therefore the install must add the clusterflow directory to the PATH and would not function if the “cf” executable was symlinked to a directory on the PATH. This should be made clear in the install instructions although this is alluded to in the manuscript.\nAdding genomes The program can add genomes from installed locations in the filesystem. A helper script to autoinstall from Ensembl/UCSC public sites would be a benefit. Moreover it is unclear if missing index files for mapping programs are generated automatically and permanently stored when running pipelines. This would be useful and easy to implement.\nMetadata I am glad to see the workflow captures metadata such as software versions and this should be highlighted in the manuscript. A reporting tool to extract this information, perhaps in a tabular format, from the log files would be useful.\nReproducibility Output from the pipelines are depended on the software versions on the PATH. 
This is not ideal, and an easy way to configure software versions would be useful to allow reproducible pipelines. I assume that “modules” are what the maintainers imagine most people would use? Docker would have been a nice solution.\n\nAdding programs: There is information in the on-line documentation on adding new programs to clusterflow by writing wrappers. This functionality should be noted in the manuscript.\n\nUpgrades: It is unclear how clusterflow can be upgraded (I assume that a new tarball needs to be downloaded) and whether there are repositories for new pipelines or tools. For example, it would be useful to have a community facility for depositing new tools and pipelines.\n\nLanguage: Is providing compatibility with the Common Workflow Language [CWL]1 a possibility or a likelihood?\n\nResources: I would like more detail on the following: How exactly are runs/threads/memory managed on a single-node cluster? What happens if multiple users each run cf? Are instances aware of each other? Do the scripts check how many jobs are running or how much free memory is available?",
"responses": [
{
"c_id": "2668",
"date": "02 May 2017",
"name": "Philip Ewels",
"role": "Author Response",
"response": "Many thanks for your time in reading the Cluster Flow manuscript and your helpful comments. We have revised the manuscript to address these points and are grateful for the help in improving the quality of the paper. Responses to specific comments are described below: Having a copy of the install instruction in the downloaded tarball would be useful. All of the Cluster Flow documentation is included with the downloaded tarball as markdown files within the ‘docs’ directory. However, we agree that this could be more visible. We have rewritten the main README.md file (also shown on the GitHub front page) to include brief installation instructions with a link to the longer documentation. The “cf” executable uses the FindBin Perl module to establish the location of the script and hence the relative path to the CF Perl modules. Therefore the install must add the clusterflow directory to the PATH and would not function if the “cf” executable was symlinked to a directory on the PATH. This should be made clear in the install instructions although this is alluded to in the manuscript. Thank you for alerting us to this issue. We have updated Cluster Flow to use $RealBin instead of $Bin, which makes it work with symlinks. The program can add genomes from installed locations in the filesystem. A helper script to autoinstall from Ensembl/UCSC public sites would be a benefit. Moreover it is unclear if missing index files for mapping programs are generated automatically and permanently stored when running pipelines. This would be useful and easy to implement. We agree that such a helper script would be useful and will look into writing this for a future release. At the time of writing, missing index files are not automatically generated. We recommend using illumina iGenomes where appropriate. We have written a helper tool to add centralised reference genomes for users running Cluster Flow on the Swedish UPPMAX clusters. 
I am glad to see the workflow captures metadata such as software versions and this should be highlighted in the manuscript. A reporting tool to extract this information, perhaps in a tabular format, from the log files would be useful. Mention of this has been added to the manuscript (section: Notifications and logging). In the new Cluster Flow release (v0.5), modules have been updated to extract and standardise the version numbers. These are now included in summary e-mails and parsed by a new Cluster Flow module written for MultiQC. MultiQC also produces machine-readable versions of this data (TSV, CSV, JSON or YAML). Output from the pipelines depends on the software versions on the PATH. This is not ideal and an easy way to configure software versions would be useful to allow reproducible pipelines. I assume that “modules” are what the maintainers imagine most people would use? Docker would have been a nice solution. As the reviewer suggests, Cluster Flow was primarily designed for use with environment modules as a method to standardise the tools used, and support for that is built in. We have extended the Cluster Flow submission log file, which now contains information about all loaded environment modules, all directories currently on the PATH, the current user and information about the compute environment. Users are able to run Cluster Flow inside a Docker container if desired, using local mode. There is information in the on-line documentation on adding new programs to clusterflow by writing wrappers. This functionality should be noted in the manuscript. This is now described within the manuscript (section: Modules and pipelines). It is unclear how clusterflow can be upgraded (I assume that a new tarball needs to be downloaded) and whether there are repositories for new pipelines or tools. For example, it would be useful to have a community facility for depositing new tools and pipelines. 
This is correct: Cluster Flow is updated by replacing the program files with a new tarball download. A community repository for Cluster Flow modules and pipelines has previously been discussed, and we hope to be able to work on this project in the future. Is providing compatibility with the Common Workflow Language [CWL] a possibility or a likelihood? The authors have looked into such compatibility in the past, including discussing the point with Michael Crusoe when he visited our institute to present CWL. Unfortunately, due to differing assumptions and architectures within the two systems it seems unlikely that this will be pursued. How exactly are runs/threads/memory managed on a single-node cluster? What happens if multiple users each run cf? Are instances aware of each other? Do the scripts check how many jobs are running or how much free memory is available? Running Cluster Flow in local mode on a single-node cluster is fairly simplistic. There is no resource management and instances are not aware of each other. It was primarily written for easy testing and low-throughput runs where this will not be a problem. If running a lot of jobs on a single server we recommend installing a job management system such as SLURM. We have added a sentence describing this to the manuscript (section: Use Cases)."
}
]
},
{
"id": "18292",
"date": "13 Feb 2017",
"name": "Stephen Taylor",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe authors present a useful automation pipeline for institutes, where significant amounts of similar analyses are run on daily basis. It’s nice that potential users can get an idea of software by using the interactive web based terminal session (on http://clusterflow.io/). We recommend the software should be published but we have the following comments/questions. 1) We installed the software fairly easily to run in ‘local mode’, although we couldn’t get it to run using Sun Grid Engine. It would be useful to put more documentation and/or examples here. 2) How easy it is to add non Perl code to the software? It appears Perl is the main language to configure the pipelines but we were wondering about other languages. Are there standard procedures or templates for including R scripts and passing parameters to them, for example? 3) Can one fine-tune the pipeline while running it? In contrast to adding changes to the pipelines, or adding new tools to the pipelines ( which is more a system admin / senior bioinformatician task ), one often needs to make frequent calls about \"which parameters suit the analysis of this sequencing library the best\" e.g. Thresholds for peak calling in ChIP-Seq. Can such thresholds be easily applied running the pipeline on the fly? 4) Which kind of visualisation/report generating software do the authors recommend? As the pipeline produces a folder full of output results, it makes sense to have software to inspect these results. 
Which kind of software do you recommend for this kind of task? Is there a concept of building reports? For example, is it recommended to use Labrador with CF (https://github.com/ewels/labrador)? 5) How do the authors envisage managing multiple versions of very similar pipelines across different users and use cases, without things becoming confusing, and how do they encourage reuse of pipelines rather than just creating new instances?",
"responses": [
{
"c_id": "2669",
"date": "02 May 2017",
"name": "Philip Ewels",
"role": "Author Response",
"response": "Many thanks for your time in reading the Cluster Flow manuscript and your helpful comments. We have revised the manuscript to address these points and are grateful for the help in improving the quality of the paper. Responses to specific comments are described below: 1) We installed the software fairly easily to run in ‘local mode’, although we couldn’t get it to run using Sun Grid Engine. It would be useful to put more documentation and/or examples here. We have extended the installation documentation available on the website. A walkthrough screencast tutorial is available on the Cluster Flow homepage (and YouTube). More concrete examples are difficult due to the varying setups of different clusters, though we continue to provide support via GitHub issues and e-mail. It appears Perl is the main language to configure the pipelines but we were wondering about other languages. Are there standard procedures or templates for including R scripts and passing parameters to them, for example? Any language can be used for software modules, however Perl is recommended because of the available Cluster Flow functions which greatly simplify the interaction with the core program. There have been example modules bundled with Cluster Flow for Python and R, but they are difficult to maintain with the changes to the core Cluster Flow code and not advertised as a result. From experience, we find that it is usually easier to write a simple perl module is written which in turn executes downstream custom scripts. In contrast to adding changes to the pipelines, or adding new tools to the pipelines ( which is more a system admin / senior bioinformatician task ), one often needs to make frequent calls about \"which parameters suit the analysis of this sequencing library the best\" e.g. Thresholds for peak calling in ChIP-Seq. Can such thresholds be easily applied running the pipeline on the fly? 
Whilst parameters and thresholds cannot be altered once a pipeline is running, it is possible to tweak such settings when launching an analysis. This is done by using the --params command line flag (these can also be specified within pipeline files). We were somewhat shocked to realise that there was no documentation of this feature anywhere and have added a new section to the Cluster Flow documentation. This describes all available --params for every module. We have added mention of this to the manuscript (section: Modules and pipelines). As the pipeline produces a folder full of output results, it makes sense to have software to inspect these results. Which kind of software do you recommend for this kind of task? Is there a concept of building reports? For example, is it recommended to use Labrador with CF (https://github.com/ewels/labrador)? Labrador is able to view some results from Cluster Flow pipelines (such as the .html completion report). However, the authors have written another tool called MultiQC, which is able to summarise all results from a pipeline into a single HTML file [1]. See http://multiqc.info for more information. 5) How do the authors envisage managing multiple versions of very similar pipelines across different users and use cases without things becoming confusing, and how do they encourage reuse of pipelines rather than just creating new instances? This is of some concern to the authors, and we encourage users to submit new modules and pipelines back to the main Cluster Flow repository so that they are available to everyone. However, we do not want to sacrifice flexibility, and so aim for maximum traceability by saving pipeline and module information for every run. A central repository for modules and pipelines has also been discussed (see comments to review 1 above) but is not yet being actively worked on. [1] MultiQC: Summarize analysis results for multiple tools and samples in a single report. 
Philip Ewels, Måns Magnusson, Sverker Lundin and Max Käller. Bioinformatics (2016) doi: 10.1093/bioinformatics/btw354"
}
]
},
{
"id": "18293",
"date": "16 Feb 2017",
"name": "David R. Powell",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis paper describes a pipeline tool, Cluster Flow, (http://clusterflow.io/) specifically for bioinformatics processing. Cluster Flow is well documented, and comes with many pipelines, and modules. Pipelines are built by combining modules. Modules define how to run specific tools, including the CPU and RAM requirements. The tool works by specifying a pipeline to run, which then creates a shell script that either submits jobs to a cluster or runs locally depending on configuration.\nCluster Flow is designed to be simple to use, but it does lack basic pipeline features such as being able to automatically re-run stages of a pipeline.\nIt is not clear whether parameters can be changed when running a pipeline. For example, selecting different adaptors for trimming, or different mapping thresholds for a short read aligner. While Cluster Flow is designed to be simple, it seems such a feature would be commonly needed.",
"responses": [
{
"c_id": "2670",
"date": "02 May 2017",
"name": "Philip Ewels",
"role": "Author Response",
"response": "We thank the reviewer for this review. Pipeline parameters can indeed be changed (for example, adapters and trimming lengths) using the --params command line option. New documentation about this has been written and mention of it added to the manuscript (section: Modules and pipelines)."
}
]
}
] | 1
|
https://f1000research.com/articles/5-2824
|
https://f1000research.com/articles/6-604/v1
|
02 May 17
|
{
"type": "Review",
"title": "Reproducibility2020: Progress and priorities",
"authors": [
"Leonard P. Freedman",
"Gautham Venugopalan",
"Rosann Wisman",
"Gautham Venugopalan",
"Rosann Wisman"
],
"abstract": "The preclinical research process is a cycle of idea generation, experimentation, and reporting of results. The biomedical research community relies on the reproducibility of published discoveries to create new lines of research and to translate research findings into therapeutic applications. Since 2012, when scientists from Amgen reported that they were able to reproduce only 6 of 53 “landmark” preclinical studies, the biomedical research community began discussing the scale of the reproducibility problem and developing initiatives to address critical challenges. Global Biological Standards Institute (GBSI) released the “Case for Standards” in 2013, one of the first comprehensive reports to address the rising concern of irreproducible biomedical research. Further attention was drawn to issues that limit scientific self-correction, including reporting and publication bias, underpowered studies, lack of open access to methods and data, and lack of clearly defined standards and guidelines in areas such as reagent validation. To evaluate the progress made towards reproducibility since 2013, GBSI identified and examined initiatives designed to advance quality and reproducibility. Through this process, we identified key roles for funders, journals, researchers and other stakeholders and recommended actions for future progress. This paper describes our findings and conclusions.",
"keywords": [
"reproducibility",
"preclinical research",
"study design",
"reagents and reference materials",
"protocol sharing",
"scientific publications"
],
"content": "Introduction\n\nPreclinical biomedical research is the foundation of health care innovation. The preclinical research process is a cycle of idea generation, experimentation, and reporting of results (Figure 1)1. The biomedical research community relies on the reproducibility of published discoveries to create new lines of research and to translate research findings into therapeutic applications. Irreproducibility limits the translatability of basic and applied research to new scientific discoveries and applications.\n\nFigure from 1.\n\nAlthough quality control during the research process centers on review of proposals and completed experiments (Figure 1), opportunities to improve reproducibility exist across the entire life-cycle of the research enterprise. In fact, as Figure 1 describes, there are very few steps in the cycle where quality check points are broadly used. By recognizing these opportunities, stakeholders, such as leading scientists, journals, funders, and industry leaders, are taking meaningful steps to address reproducibility throughout the research life-cycle, including commitments to scientific quality, a willingness to examine long- held research policies, and the development of new policies and procedures to improve the process of science.\n\nThe magnitude and effects of reproducibility problems are well documented. In 2012, scientists at Amgen reported that they were able to reproduce only 6 of 53 “landmark” preclinical studies2. Global Biological Standards Institute (GBSI) released the “Case for Standards” in 20131, one of the first comprehensive reports to address the rising concern of irreproducible biomedical research. Further attention was drawn to issues that limit scientific self-correction, including reporting and publication bias, underpowered studies, lack of open access to methods and data, and editorial and reviewer bias against publishing reproducibility studies (see Section IV)3. 
Based on these findings, GBSI completed an economic study in 2015 and estimated that the prevalence of irreproducible preclinical research exceeds 50%, with associated annual costs of approximately $28B in the United States alone4.\n\nResearch community stakeholders have responded to these concerns with innovation and policy. In early 2016, GBSI launched the Reproducibility2020 Initiative to leverage the momentum generated by these stakeholder-led initiatives. Reproducibility2020 is a challenge to all stakeholders in the biomedical research community to improve the quality of preclinical biological research by the year 2020. The Reproducibility2020: Progress and Priorities Report (or Report) is the first to highlight progress and track important publications and actions since the issue started to get broad research community and public attention in 20135,6. The Report addresses progress in the four major components of the research process: study design and data analysis, reagents and reference materials, laboratory protocols, and reporting and review. Moreover, the Report identifies the following broad strategies as integral to the continued improvement of reproducibility in biomedical research: 1) drive quality and ensure greater accountability through strengthened journal and funder policies; 2) engage the research community in establishing community-accepted standards and guidelines in specific scientific areas; 3) create high-quality online training and proficiency testing and make them widely accessible; 4) enhance open access to data and methodologies.\n\nNote to Reader: Terms such as reproducibility, replicability, and robustness lack consistent definition. 
The Report draws upon the definitions promulgated by the framework proposed by Goodman et al.7: “methods reproducibility” refers to the complete and transparent reporting of information required for another researcher to repeat protocols and analytical methods; “results reproducibility” refers to independent attempts to produce the same result with the same protocols (often called “replication”); and “inferential reproducibility” refers to the ability to draw the same conclusions from experimental data. The Report defines “reproducibility” to include issues affecting any of these three areas.\n\nThis report is organized around key areas in the life-sciences research process where action can significantly drive improved reproducibility4 (Figure 2):\n\nFigure adapted from 4.\n\nI. Study design and data analysis\n\nII. Reagents and reference materials\n\nIII. Laboratory protocols\n\nIV. Reporting and review\n\nThe following sections contain detailed descriptions of each of these areas, including a review of the associated reproducibility problems, solutions, and examples of recent or current activities to promote greater quality and rigor (summarized in Table 1). The Report outlines the potential impact that lack of reproducibility has on the research community and its stakeholders (Table 2).\n\n\nMethods\n\nTo identify key initiatives in reproducibility of biomedical research from 2013 to 2017, we conducted a review of literature, U.S. government policies, and online sources using the following keywords: reproducibility, rigor, transparency, and open access. Through these initial searches, we identified conferences on and funders of various efforts associated with reproducibility, which we used to identify other initiatives that were not identified using the keyword approach. 
We analyzed the information and developed recommended actions for promotion, and roles for life science stakeholders.

Results and discussion

Study design is the development of a research framework and analytical methods prior to beginning experiments8. A well-designed study has a research question with a rationale, and clearly defined experimental conditions, sample sizes, and analytic methods. In addition, researchers may include practices, such as blinded analysis, to mitigate subconscious bias. Pre-determining the research questions and sample sizes helps avoid problems such as “p-hacking” and selective reporting, where sample sizes and analytic variables are chosen based on their statistical significance rather than through a research framework (e.g., a hypothesis or an exploratory research model). Poor study design and incorrect data analysis can sabotage even a perfectly executed experiment.

Researcher surveys suggest that study design flaws are a key source of irreproducibility. Four of the top ten irreproducibility factors identified in a researcher survey relate to poor study design and analytical procedures10. These findings support a multifaceted approach to improving study design and data analysis. Although researchers ultimately are responsible for ensuring sound study design and analysis, funder policies should encourage rigorous study design before research begins, journal requirements should facilitate better review of completed research, and training and support resources should improve researchers’ study design and analysis skills.

NIH study design policy. Funder policies that require good study design are especially powerful because they encourage researchers to develop rigorous study plans before beginning experimentation.
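The cost of abandoning a pre-specified sample size can be made concrete with a small simulation. The sketch below is illustrative only (the batch sizes, number of looks, and test are assumptions, not taken from the Report): under the null hypothesis, a fixed-sample z-test is “significant” about 5% of the time, while “peeking” after each batch of data and stopping at the first p < 0.05 inflates the false-positive rate well beyond the nominal level.

```python
# Illustrative simulation of optional stopping ("p-hacking" by peeking).
# Assumptions: data are standard normal under the null, known sigma = 1,
# so a simple z-test applies; values are not drawn from any cited study.
import math
import random

def two_sided_p(z):
    """Two-sided p-value for a standard-normal test statistic."""
    return math.erfc(abs(z) / math.sqrt(2))

def run_experiment(rng, batches=5, batch_size=10, peek=False, alpha=0.05):
    """Simulate one null experiment; return True if declared 'significant'."""
    data = []
    for _ in range(batches):
        data.extend(rng.gauss(0, 1) for _ in range(batch_size))
        z = (sum(data) / len(data)) * math.sqrt(len(data))
        if peek and two_sided_p(z) < alpha:
            return True            # stop early and report a "finding"
    return two_sided_p(z) < alpha  # pre-specified analysis at final n

rng = random.Random(42)
n_sims = 4000
fixed_rate = sum(run_experiment(rng) for _ in range(n_sims)) / n_sims
peek_rate = sum(run_experiment(rng, peek=True) for _ in range(n_sims)) / n_sims
print(f"false-positive rate, fixed n: {fixed_rate:.3f}")   # near 0.05
print(f"false-positive rate, peeking: {peek_rate:.3f}")    # well above 0.05
```

Pre-registering the sample size and analysis plan, as the initiatives described in this section encourage, removes exactly this kind of unreported flexibility.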
Clinical research has regulatory mechanisms to review study design; for example, Phase 2 and 3 Investigational New Drug clinical trial applicants must acquire FDA approval of the study design and statistical analysis plan that includes explicit description of contingencies, such as sample exclusion criteria (http://www.accessdata.fda.gov/SCRIPTs/cdrh/cfdocs/cfCFR/CFRSearch.cfm?CFRPart=312). Preclinical biomedical research is not covered by these regulatory standards, and generally has not required explicit justifications of key parameters, such as sample sizes and statistical tests, in the hypothesis and specific aims sections of proposals or in publications. For example, an analysis of 48 neuroscience meta-analyses found that 28 (57%) of the studies had a median study power of 30% or less, despite the relative ease of increasing sample size11. The new NIH policy (see Box 1) requires grant reviewers to explicitly incorporate several key rigor and transparency features into their peer reviews, but the policy does not add dedicated scoring line items for these areas. With respect to study design and analysis, the policy requires grant applicants to evaluate the rigor of prior studies that form the basis of a research proposal, and to justify their proposed study design. In the first round of reviews with the new guidelines, the NIH Center for Scientific Review noted that panels increasingly discussed the areas of emphasis, but that additional communication is required to get all reviewers and applicants on the same page (http://www.csr.nih.gov/CSRPRP/2016/09/implementing-new-rigor-and-transparency-policies-in-review-lessons-le). 
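The power shortfall noted above can be illustrated with a back-of-the-envelope calculation. The sketch below uses a normal approximation for a two-group comparison; the effect size and sample sizes are illustrative assumptions, not values from the cited meta-analysis.

```python
# Illustrative power calculation for a two-sample comparison, using the
# standard normal approximation (effect size d in Cohen's units).
import math

Z_ALPHA = 1.959964    # critical value for two-sided alpha = 0.05
Z_BETA_80 = 0.841621  # corresponds to 80% power

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2)))

def power_two_sample(d, n_per_group):
    """Approximate power of a two-sample z-test for effect size d."""
    return phi(d * math.sqrt(n_per_group / 2) - Z_ALPHA)

def n_for_power(d, power=0.80):
    """Per-group sample size needed to reach the target power."""
    z_beta = {0.80: Z_BETA_80}[power]
    return math.ceil(2 * ((Z_ALPHA + z_beta) / d) ** 2)

print(f"power at n=20/group, d=0.5: {power_two_sample(0.5, 20):.2f}")
print(f"n/group for 80% power, d=0.5: {n_for_power(0.5)}")
```

With a “medium” effect (d = 0.5), 20 samples per group yields only about 35% power, close to the low median power described above; reaching the conventional 80% requires roughly 63 per group, which is why up-front justification of sample sizes matters.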
Formal evaluations of NIH’s ongoing implementation of these guidelines will provide valuable lessons for NIH and other funders interested in implementing their own rigor and transparency guidelines.

To augment these efforts, NIH has worked with the journal community to develop publication guidelines (see Section IV), and funded the development of researcher training programs in study design (see “Training and Support” below) as part of its rigor and reproducibility efforts.

As the largest and most influential research funder in the world, NIH took a major step in establishing new guidelines and going on record that it will address other areas where it can impact reproducibility9. NIH serves as an important model for other government and private research funders looking to establish greater accountability around quality and rigor.

NIH Rigor and Transparency Guidelines

NIH’s Rigor and Transparency Guidelines went into effect on January 25, 2016 (https://grants.nih.gov/grants/guide/notice-files/NOT-OD-16-011.html, https://grants.nih.gov/grants/guide/notice-files/NOT-OD-16-012.html). This policy includes applicant and reviewer guidance in four key areas: scientific premise, scientific rigor, consideration of sex and other biological variables, and authentication of key biological and/or chemical resources (https://grants.nih.gov/grants/peer/guidelines_general/Reviewer_Guidance_on_Rigor_and_Transparency.pdf). Applicants are required to describe the strengths and weaknesses of prior studies cited in their scientific premise, to describe and justify the proposed study design, and to develop authentication plans based on established standards. Since reviewers are now instructed to review applications based on these criteria, grant applicants that fail to meet the new criteria are less likely to be funded.
NIH also requires grantees to report on rigor and transparency measures in their publications and in the Research Performance Progress Reports submitted during the life of an award. These new guidelines underscore the need for development and propagation of study design training, pre-registration resources, and low-cost authentication tools. For further information, see the NIH webpage: https://grants.nih.gov/reproducibility/index.htm

Journal efforts to improve study design. Several studies indicate that fewer than 20% of highly-cited publications contain adequate descriptions of study design and analytic methods12. At least 31 journals have signed on to the Principles and Guidelines for Reporting Preclinical Research, which include a call for journals to adopt statistical analysis reporting requirements and to verify the statistical accuracy of submitted manuscripts (see Section IV) (https://www.nih.gov/research-training/rigor-reproducibility/principles-guidelines-reporting-preclinical-research). As these principles do not specify what these requirements should be, implementation varies by journal. One example, from the Biophysical Journal, recommends that authors consult with a statistician and requires reporting of specific information about sample sizes and statistical analyses (http://www.cell.com/pb/assets/raw/journals/society/biophysj/PDFs/reproducibility-guidelines.pdf).

In the United Kingdom, the Animal Research: Reporting of In Vivo Experiments (ARRIVE) guidelines, developed by the National Centre for the Replacement Refinement & Reduction of Animals in Research, include a checklist to help researchers who perform animal studies appropriately report study design and sample size justifications (www.nc3rs.org.uk/arrive-guidelines). These guidelines can also be used to help ensure that researchers are planning their animal experiments correctly.
As of January 2017, these reporting guidelines have been endorsed by nearly 1,000 journals and are required by major funders in the UK, including the Wellcome Trust and the Medical Research Council (https://www.nc3rs.org.uk/arrive-animal-research-reporting-vivo-experiments).

Some journals are prototyping alternative review models to help verify study design. As of January 2017, the Registered Reports initiative through the Center for Open Science allows selected reviewers to comment on study design and methods prior to data collection (https://cos.io/rr). Once the study design has been approved, participating journals essentially guarantee publication so long as the authors follow the study design. In addition, researchers can use the Registered Reports format to submit articles to these journals. Currently, 45 journals are participating in this initiative. In a separate but related initiative, the Center for Open Science’s Pre-Registration Challenge has been designed to provide training and incentives for up to 1,000 researchers to pre-register study protocols and submit manuscripts to participating journals (https://cos.io/our-services/prereg/).

One journal, Psychological Science, currently is pilot testing statcheck software on all submitted manuscripts (http://www.psychologicalscience.org/publications/psychological_science/ps-submissions). Statcheck and StatReviewer are tools developed by researchers to automatically review data analysis information contained in published manuscripts15,16. Researchers also have broadly deployed the statcheck tool on thousands of published studies (see Section IV).

Training and support. Many life-science researchers will require training and support to satisfy the funding and publication policies described above. In the 2016 Proficiency Index Assessment (PIA) (see Box 2), GBSI surveyed over 1,000 researchers of varying experience levels.
Participants reported lower confidence in their skills in study design, data management, and analysis than in their experimental execution skills13. Furthermore, research experience did not correlate with higher study design proficiency, suggesting the value of ongoing training and support in this area. New textbooks8,17, online minicourses (https://www.nih.gov/research-training/rigor-reproducibility/training)18, and journal articles19 can be used for course development or independent study by more senior trainees.

New approaches to training researchers should be a priority for all steps in the research cycle, including the study design training resources described in the Report. Enhanced training should be available for researchers at all levels—graduate students, post-docs, and experienced PIs. Active learning opportunities are particularly important, considering the informal apprenticeship culture of science, in which trainees learn how to design, perform, and report on their research by working with more senior scientists. However, not all senior researchers have the most current expertise or are able to spend the requisite time with their trainees. Surveys of researchers support this need: the 2016 Proficiency Index Assessment indicated that even experienced researchers stand to benefit from study design training, and a figshare and Digital Science survey reported that over half of researchers wanted training on open access policies and procedures13,14.

Innovative pedagogical approaches are required to ensure that training is effective and engaging for researchers at all stages of their careers. These approaches, including interactive teaching, in-lab practice, and proficiency assessments, are increasingly being explored by many institutions (see “Training and Support” example in Section I).
Online training modules are a cost-effective way to provide high-quality, accessible, interactive training for researchers at all levels.

The positive responses to study design courses established at Johns Hopkins University20 and Harvard University (https://nanosandothercourses.hms.harvard.edu/node/96) demonstrate the value of study design training. These courses are becoming more widespread and better tailored to the needs of life scientists, but are not universally available or required. Efforts are underway to increase the experimental design skillset of early-career students, but funding in this area has been relatively modest and, in general, private funders have seen training and education as the responsibility of government funders and graduate programs. In 2014, NIH began funding graduate courses on study design, and has since issued a series of four funding opportunities for grantees interested in providing study design instruction for their graduate students and postdoctoral trainees through administrative supplements to existing grants (https://www.nih.gov/research-training/rigor-reproducibility/funding-opportunities, https://grants.nih.gov/grants/guide/rfa-files/RFA-GM-15-006.html). Several of these grantees have used the funds to develop study design training programs that are tailored to their respective research areas (https://www.nigms.nih.gov/training/instpredoc/Pages/admin-supplements-prev.aspx). For more computationally-focused researchers, a Harvard course on reproducible genomics is available online for free21.

In addition to training, researchers now have increased access to expert support during study design and analysis.
University statistics departments often provide free consulting services to affiliated researchers (http://statistics.berkeley.edu/consulting, https://catalyst.harvard.edu/services/biostatsconsult/, http://www.stat.purdue.edu/scs/), and the Center for Open Science provides a similar service (https://cos.io/our-services/training-services/). The CHDI Foundation provides protocol and study design assistance, evaluation, and review to researchers studying Huntington’s disease (http://chdifoundation.org/independent-statistical-standing-committee/). This model may be of interest to other disease-specific funders as a low-cost investment that can improve research rigor and strengthen the community of practice in their mission area.

These training and support resources work together to improve reproducibility by raising the general standard of rigor for all research. As researchers gain an improved understanding and awareness of study design, they can better design their own studies, more effectively communicate with statistics consultants, conduct peer review, and evaluate published findings that may inform future work.

Reproducibility is difficult if labs are not working with the same research reagents and materials. Supplier-to-supplier variability often is poorly characterized until researchers run into problems with results reproducibility, as demonstrated by the example of synthetic albumin: the structure, stability, and immunogenicity of synthetic albumin vary across suppliers and lots, in ways that are not commonly characterized22. In addition, factors such as lot-to-lot material variability, cell line drift, and contamination can cause an individual researcher’s assays to change over time. Examples from other sectors suggest that these problems can be addressed with standards.

Materials developed and validated based on standards are well-characterized and demonstrate consistency.
Standardized materials that exhibit predictable behavior can be used reliably in methods reproducibility efforts, and can facilitate development of reference materials for assay validation. Standards for the most well-known and often-used biological materials typically apply to particular clinical applications, such as virus strains used in influenza vaccine development1. Although preclinical researchers often use standardized chemical reagents (e.g., salts and sugars), few standardized biological materials exist. However, surveys suggest that life science researchers increasingly understand the need for standardized materials1, and the research community recently has made progress on cell line authentication and antibody validation.

Standards development for biomedical research reagents. Stakeholders of preclinical research include researchers, reagent manufacturers, funders, journals, standards experts, and nonprofit organizations from countries throughout the world. Recent efforts to establish antibody databases, information-sharing requirements, and international frameworks for antibody validation standards are good examples of the broad, multi-stakeholder approach required to develop consensus standards around a specific reagent (see Box 3).

The research community has acknowledged that antibodies are an area of widespread error and inaccuracy23. The Antibody Validation Initiative, involving stakeholders throughout the research community and led by GBSI, is an example that could be replicated in other scientific areas (e.g., both stem cells and synthetic biology are areas where a greater emphasis on development of standards and best practices is needed to ensure quality and advance discovery). Antibodies are key reagents in preclinical research for activities as diverse as protein visualization, protein quantification, and biochemical signal disruption.
Antibody performance is variable, with differences in specificity, reliability, and functionality across experiment types (e.g., Western blotting and immunofluorescence), manufacturers, and lots, harming reproducibility24. Stakeholder solutions include antibody databases, such as the CiteAB database (https://www.citeab.com/), and repositories, such as the proposed universal library of recombinant antibodies for all human gene products25. In all cases, validation is a key component of the solution.

NIH specifically highlights antibody authentication in the Rigor and Transparency guidelines (https://grants.nih.gov/grants/guide/notice-files/NOT-OD-16-011.html), providing additional impetus for new standards, policies, and practices. Researchers, manufacturers, pharmaceutical companies, funders, and journals have held dedicated conferences on antibody validation (e.g., http://www.antibodyvalidation.co.uk/). In 2016, the International Working Group on Antibody Validation (IWGAV) qualitatively identified key validation “pillars” that may be suitable for assessing antibody performance26. Seeking to build on the IWGAV recommendations, GBSI and The Antibody Society organized a workshop for all stakeholder groups to develop actionable recommendations to improve antibody validation27. Stakeholder groups recognized the shared responsibility for antibody validation and for effective communication of validation methodology and results. In addition, they highlighted the need for continued, multi-sectoral engagement during the development of standards for validation, which may vary by use case, and for information-sharing, which may vary by stakeholder.

Since the workshop, GBSI has established seven multi-stakeholder working groups to draft validation guidelines for the major antibody applications. Validation guidelines will include an application-specific point system to quantify antibody specificity, sensitivity, and technical performance.
The Antibody Validation Initiative also includes a Producer Consortium to address issues of common concern for producers and a Training and Proficiency Assessment program to ensure the highest quality of validation.

Good cell culture practice. One well-known example of developing standards for laboratory reagents is cell culture validation, which includes assay validation, cell line authentication, and testing for contamination28. Many commonly-used cell lines are available from repositories, such as ATCC, as well as other nonprofit, governmental, and for-profit organizations. These organizations regularly test and validate the cells, confirming desired cell function and testing for accidental cross-contamination or infection. Researchers in two different labs can purchase validated cells from these providers and be assured that they are receiving the same product, but cells diverge once they are used in the lab. Use of shared sterile culture hoods, incubators, and reagent storage spaces can cause infection with bacteria, viruses, mold, or yeast, and result in unintentional cross-contamination of purchased cells with other cell cultures used in the lab. Even without contamination, genetic changes occur in cells through repeated culturing and experimentation, a process known as cell line drift. Despite these known problems, periodic cell line authentication and infection testing are not universally practiced in preclinical research, even though a human cell authentication standard exists29,30.

As with study design, cell culture validation can be enhanced with policies from funders and journals. For example, the Prostate Cancer Foundation has been a leader in validation of cell lines used to study the disease, requiring periodic cell line authentication since 2013.
NIH now requires grant applicants to describe their authentication plan as part of the Rigor and Transparency guidelines (https://grants.nih.gov/grants/guide/notice-files/NOT-OD-16-011.html), and many journals now ask researchers to perform cell line authentication (http://www.scoop.it/t/cell-line-contamination/p/4040895974/2015/04/08/which-journals-ask-for-cell-line-authentication).

Many of the validation assays required for cell culture validation can be borrowed directly from other applications. In 2011 and 2012, ATCC organized an international group of scientists from academia, regulatory agencies, major cell repositories, government agencies, and industry to develop a standard that describes optimal cell line authentication practices, ANSI/ATCC ASN-0002-2011. The authentication assay uses Short Tandem Repeat (STR) profiling technology and is an affordable cell line authentication tool. The International Cell Line Authentication Committee’s Database of Cross-contaminated or Misidentified Cell Lines provides researchers with a dataset to check during the authentication process31. For products of animal origin, U.S. Department of Agriculture regulations specify testing protocols for mycoplasma and select viruses32, and test kits are commercially available.

Improving the reproducibility and translation of biomedical research using cultured cell lines must build on ongoing, multi-stakeholder efforts to raise awareness of the issues of misidentification and the role of authentication33. GBSI’s #authenticate campaign encourages this kind of stakeholder engagement (www.gbsi.org/authenticate).

Technology and assay development. The development and propagation of standards is an iterative process. For example, recent publications highlight the simultaneous progress in cell line authentication technologies and standards development, including the establishment of reference data standards and cell line authentication policies for the broader research community28,29.
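To make the STR comparison step concrete, the sketch below scores two profiles with a Masters-style percent-match calculation (the symmetric variant; other variants normalize against only one profile). The loci, alleles, and the ~80% same-line decision threshold used here are illustrative assumptions rather than quotations from the ASN-0002 standard text.

```python
# Illustrative sketch of STR-profile comparison for cell line authentication.
# A profile maps each locus to the set of observed alleles; profiles are
# scored as 2 * shared alleles / total alleles across both profiles.

def str_match_percent(query, reference):
    """Symmetric Masters-style match score, as a percentage."""
    shared = sum(len(query.get(locus, set()) & alleles)
                 for locus, alleles in reference.items())
    total = sum(len(a) for a in query.values()) + \
            sum(len(a) for a in reference.values())
    return 200.0 * shared / total

# Hypothetical 5-locus profiles (real panels use 8+ loci plus amelogenin).
reference = {"D5S818": {11, 12}, "D13S317": {8, 11}, "TH01": {7, 9.3},
             "TPOX": {8, 11}, "vWA": {16, 18}}
query = dict(reference, TH01={7, 8})  # one allele differs (possible drift)

pct = str_match_percent(query, reference)
print(f"match: {pct:.0f}% -> {'same line' if pct >= 80 else 'mismatch'}")
```

A score near 100% with one or two drifted alleles is the typical signature of a genuine but passaged line, whereas a score well below the threshold suggests misidentification or cross-contamination, which is where the ICLAC database check comes in.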
As technology development progresses, standards need to be revisited and improved to reflect the current capabilities afforded by new tools34. For example, increasingly affordable next generation sequencing is a useful tool to validate genome editing and characterize changes in cell behavior35, and mass spectrometry and lab-on-a-chip assays can help characterize sera and other liquid reagents36,37.

Sera validation: an opportunity for standards and technology development. One opportunity to further improve cell culture validation would be to develop standards for sera production and validation. The media used to feed most cells in culture include sera, such as fetal bovine serum, that provide a variety of growth factors and other small molecules. Even authenticated cells may perform very differently in two different sera preparations. Serum is a “black box” ingredient with high variability between manufacturers and lots. Recently developed best practices include characterizing and reporting information on the particular lot(s) of serum/sera used in an experiment, and repeating an experiment with multiple lots of sera to ensure that observed phenotypes are not serum-related artifacts38. Serum manufacturers have begun to characterize and validate sera (http://www.bioind.com/support/tech-tips-posters/introduction-to-fetal-bovine-serum-class/), but no industry standard exists for reporting serum characteristics and reliability.

Further technological development could reduce reliance on sera. In serum-free culture, researchers precisely define all components of the cell culture medium rather than using a “black box” serum. Building a system with defined minimum essential components improves reproducibility and enhances scientific understanding of the key signaling molecules involved in biological processes of interest38. Researchers are developing and validating robust, serum-free culture systems.
Clear material and validation standards are building blocks that facilitate this development.

Reproducibility requires thorough, detailed laboratory protocols. Without ready access to the original protocols, researchers may introduce process variability when attempting to reproduce an experiment in their own laboratories. Respondents to GBSI’s Proficiency Index Assessment were more confident in their experimental skills than in their study design skills13. Despite this relative confidence in laboratory execution skills, researchers frequently are unable to recreate an experiment based on the experimental methods published in journals, which usually do not contain step-by-step laboratory protocols that specify every relevant variable. Further, a particular study may use a modified version of an established protocol but state that the method was “as previously described” without noting the changes. If attempts to contact authors to request the original protocols are not successful, the reader may not be able to reproduce the methods in the published work. In a Nature survey, nearly half of researchers felt that incomplete experimental protocol descriptions in published articles hindered methods reproduction efforts10. Although fewer efforts exist in this key area than in the other three areas described in this report, newly developed tools and processes designed to facilitate protocol sharing and version control may improve documentation and reduce barriers to methods reproduction.

Protocol repositories. Protocol repositories are an innovative approach that may facilitate transparency, protocol sharing, and version control. Researchers can upload their protocols to a repository, such as Protocols.io, precisely specifying all step-by-step instructions with links to required reagents. As the original researchers, or others, modify the protocol, they can document these changes in the repository and create their own “forked” version of the protocol.
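A minimal data model suggests how versioning and forking might be tracked in such a repository. This is a hypothetical sketch, not Protocols.io's actual schema or API: each record keeps its own edit history plus a pointer to the record it was forked from, so the exact provenance of a method can be cited.

```python
# Hypothetical sketch of a versioned, forkable protocol record.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Protocol:
    name: str
    steps: list
    version: int = 1
    forked_from: Optional[str] = None  # "name v<version>" of the parent
    history: list = field(default_factory=list)

    def revise(self, new_steps, note):
        """Record a new version in place, keeping the change note."""
        self.history.append((self.version, note))
        self.steps = list(new_steps)
        self.version += 1

    def fork(self, new_name):
        """Create an independent copy that remembers its origin."""
        return Protocol(new_name, list(self.steps),
                        forked_from=f"{self.name} v{self.version}")

stain = Protocol("DAPI staining", ["fix cells", "stain 5 min", "image"])
stain.revise(["fix cells", "stain 10 min", "image"], "longer incubation")
variant = stain.fork("DAPI staining (tissue)")
print(variant.forked_from)  # DAPI staining v2
```

The key design point is that a fork records the parent's exact version, so a methods section citing the fork still identifies which revision of the original it derived from.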
Protocols in the repository can receive a DOI, making identification of the precise version used in a publication easier. Suppliers also can post recommended protocols for their products on these websites, which facilitates adoption of their products.

Protocol development requires a robust community of practice, so that protocols can be developed and tested by researchers in different laboratories. This practice ensures that the written instructions are understandable and replicable by a third party. Emerging online tools, such as BioSpecimen Commons (The Biodesign Institute at Arizona State University), provide a common location and uniform set of protocols and conditions for clinical sample-related standard operating procedures. Another example is the international Protist Research to Optimize Tools in Genetics group, funded by the Gordon and Betty Moore Foundation and working on the Protocols.io website (https://www.moore.org/article-detail?newsUrlName=$8m-awarded-to-scientists-from-the-gordon-and-betty-moore-foundation-to-accelerate-development-of-experimental-model-systems-in-marine-microbial-ecology, https://www.protocols.io/groups/protist-research-to-optimize-tools-in-genetics-protg). As of January 2017, this group has 95 members who have contributed 31 protocols to the platform. Although this group does not focus on preclinical research, the practices it has established are a relevant example that could be reproduced in preclinical research. Preclinical research funders may find added value in version control, protocol forking, and communities of practice in their areas of interest.

Improved protocol reporting in journals.
The Principles and Guidelines for Reporting Preclinical Research also call for “no limit or generous limits on the length of methods sections” (https://www.nih.gov/research-training/rigor-reproducibility/principles-guidelines-reporting-preclinical-research). However, most methods sections still do not contain step-by-step protocols. Authors submitting to participating journals can include links to Protocols.io in the methods section, specifying with a DOI the exact version of a protocol that was used in the study (https://www.protocols.io/partners?publishers). In April 2017, PLOS and Protocols.io announced a partnership in which PLOS encourages its authors to log their experimental methods in Protocols.io (https://www.moore.org/article-detail?newsUrlName=open-access-to-data-and-the-laboratory-methods).

Although methods journals (i.e., those dedicated to publishing detailed methods) usually provide sufficient information about protocols, most scientific publications do not. Even new techniques are not described in full detail because they build on established techniques, the methods for which are not fully described. However, some journals, such as the Journal of Visualized Experiments, publish original, peer-reviewed manuscripts and videos of both established and new techniques (http://www.jove.com/). The use of videos helps to communicate technique subtleties that may not be captured in written instruction. This type of tacit knowledge often can only be obtained by visiting a laboratory and learning directly from the protocol developers.

The scientific community requires ready access to publications and the original underlying data to adequately review studies and conduct results reproducibility efforts. Journal reporting guidelines improve methods reproducibility by ensuring that manuscripts contain a minimum standard of required information.
Data standards further facilitate this process, as large data sets formatted in an agreed-upon, machine-readable format are easier to find, compare, and integrate across different studies. With better access to data and manuscripts, researchers now can engage in more robust post-publication review. Reducing these barriers can improve reproducibility by identifying potential flaws in published papers, making scientific self-correction and self-checking faster and cheaper.

Enhanced journal reporting guidelines. Journals increasingly recognize the importance of methods reproducibility and are developing more transparent and enhanced reporting guidelines. In an effort co-led by the Nature Publishing Group, the American Association for the Advancement of Science (AAAS; publisher of Science), and the NIH (as part of its Rigor and Reproducibility efforts), the scientific journal community established the Principles and Guidelines for Reporting Preclinical Research in June 2014 (https://www.nih.gov/research-training/rigor-reproducibility/principles-guidelines-reporting-preclinical-research). Per the last update of the NIH website in 2016, 31 journals have signed on to these guidelines. The guidelines provide a minimum consensus standard for statistical rigor, reporting transparency, data and material availability, and other relevant best practices, but do not specify in detail exactly what these reporting requirements should be.

More specific guidelines from journals have built upon this initial effort. Differences in implementation of reporting guidelines may cause some short-term confusion among authors and reviewers. However, over time, their implementation could provide long-term benefit in identifying successful approaches and best practices.
One initiative that seeks to provide broad direction, and even instruction, to journals is the Transparency and Openness Promotion (TOP) Guidelines, promulgated by the Center for Open Science’s Open Science Framework. TOP includes templates for journals interested in implementing their own reproducibility guidelines, and uses a tiered framework so journals can gradually implement more stringent standards as they improve their own implementation and review capability39. Several of the journals highlighted in the examples listed below are signatories to the TOP guidelines.

Expanded reproducibility guidelines from the Biophysical Journal are an example of what enhanced journal guidelines look like in practice. These guidelines specifically establish reporting standards in four key areas: Rigorous Statistical Analysis, Transparency and Reproducibility, Data and Image Processing, and Materials and Data Availability (http://www.cell.com/pb/assets/raw/journals/society/biophysj/PDFs/reproducibility-guidelines.pdf).

Authors submitting to the Nature Publishing Group family of journals must complete a reporting checklist to ensure compliance with established guidelines, including a requirement that authors detail if and where they are sharing their data (http://www.nature.com/authors/policies/checklist.pdf).

STAR Methods guidelines (Structured, Transparent, and Accessible Reporting) are designed to improve reporting across Cell Press journals.
These guidelines remove length restrictions on methods, provide standardized sections and reporting standards for methods sections, and ensure that authors include adequate resource and contact information (http://www.cell.com/star-methods).\n\nSince January 2016, researchers funded by the Howard Hughes Medical Institute have been required to adhere to a set of publication guidelines that cover areas similar to the minimum consensus guidelines described above (http://www.hhmi.org/sites/default/files/About/Policies/sc_300.pdf).\n\nThe Research Resource Identification Initiative establishes unique identifiers for reagents, tools, and materials used in experiments, reducing ambiguity in methods descriptions40.\n\nJournals and funders can use two methods to measure and continuously improve implementation of these guidelines: 1) stakeholder feedback studies; and 2) research measuring the frequency of compliance over time. The journal community should periodically reconvene and use data from these evaluations to identify and propagate successful implementation of the Guidelines, and to update and improve them.\n\nOpen access policies. Funder policies increasingly mandate access to data and publications (see Box 4). As of October 2016, 16 U.S. government funding agencies require their grantees’ publications to be open access within a year of the publication date, and 13 of these funders, including the NIH, require data management plans to be included in research proposals41. Globally, the online research repository figshare predicts that by 2020, all funders in the developed world will require openness14. At the end of March 2017, the European Commission (EC; an institution of the European Union) expressed an interest in setting up a “publishing platform” to stimulate open-access publishing in Europe42. 
The EC is hopeful the platform will catalyze its initial plan to make all published research funded by EU members open access by the year 2020 (http://www.sciencemag.org/news/2017/03/european-commission-considering-leap-open-access-publishing).\n\nPrivate funders have taken a variety of approaches to promoting open access, such as increasingly requiring either full open access or archived manuscripts as a condition of continued funding (reference [https://www.ucl.ac.uk/library/open-access/research-funders] contains a summary of many institutions’ policies). The Bill & Melinda Gates Foundation is a leader among philanthropic organizations in formulating and implementing open access policies. Beginning in January 2017, the Gates Foundation’s Open Access Policy requires immediate open access (“Gold” access) for all publications and underlying data generated by authors that it supports (http://www.gatesfoundation.org/How-We-Work/General-Information/Open-Access-Policy).\n\nMany journals already have open access options that comply with the Gates Foundation policy, but some high-profile journals, such as Nature and Science, did not have Gates-compliant policies as of January 201743. In response to this policy change, AAAS reached a provisional agreement with the Gates Foundation to make Gates-funded publications in AAAS journals open access44. Similarly, the Cell Press family of journals has special agreements with a number of funders, including Gates, that allow immediate open access for a fee (http://www.cell.com/rights-sharing-embargoes). This issue warrants further attention as funders and journals continue to negotiate around access permissions. The Wellcome Trust has a similar policy, encouraging immediate open access but allowing a six-month delay. 
Both the Wellcome Trust and Gates Foundation have provided dedicated funding to support open access fees imposed by journals where appropriate, and prefer the unrestricted Creative Commons-BY license (https://creativecommons.org/licenses/by/4.0/). More recently, both the Gates Foundation and the Wellcome Trust took the additional step of partnering with F1000 to establish publishing platforms for their grantees.\n\nWhile this represents real progress, these policies can be a source of confusion for researchers. In a recent survey of over 1,000 researchers by figshare and Digital Science, 64% of researchers who have made their data open could not recall what licensing rights they had granted on the data (e.g. CC-BY, CC-BY-NC)14. Additionally, 20% of researchers were unaware whether their funders had an open data policy, and most researchers welcomed additional guidance on their funders’ openness policies14, suggesting the need for increased education and support. One facet of the Gates Foundation’s solution to this problem is a new service called Chronos. The Chronos service guides users through submission to services that are compliant with Gates’ policy, automatically pays open access fees, and archives manuscripts on PubMed Central (https://youtu.be/lweC1BajBBY). The Gates Foundation expects to scale Chronos to additional funding organizations (https://chronos.gatesfoundation.org/dynamic.aspx?data=article&key=13-What-is-Chronos&template=ajaxFancyArticle).\n\nThe leadership of funders has prompted several journals to allow authors to self-archive manuscripts on preprint servers, such as arXiv or bioRxiv, before publication. Some journals, such as PeerJ, also have their own preprint option46. PubMed Central and European PubMed Central also provide open full text archives. The precedent set by these large funders has established an infrastructure and leadership base that smaller funders may be able to leverage in the development and advancement of their own open access policies. 
Supported by the Laura and John Arnold Foundation, the Center for Open Science has also developed implementation guidelines for funders interested in establishing transparency and openness policies39. Like the TOP journal guidelines, the TOP funder policies are tiered to allow funders to implement more stringent standards over time. Starting in March 2017, the U.S. NIH began encouraging investigators to cite preprints or draft (non-peer-reviewed) manuscripts as part of their funding applications47.\n\nBoth governmental and private funders have undertaken significant policy changes to mandate open access to data sets and publications. Funders are generally moving towards more open access, mandating or encouraging researchers to publish in open access journals, paying open access fees, and requiring manuscript archival when researchers publish in more restrictive journals.\n\nLarge funders are leading the drive towards open access. NIH spends roughly $4.5 million on PubMed Central45, and requires all grantees to deposit articles and/or manuscripts in this open repository within twelve months of publication (https://publicaccess.nih.gov/policy.htm). The Gates Foundation and Howard Hughes Medical Institute have leveraged the NIH’s investment by requiring their own grantees to archive manuscripts in PubMed Central (http://www.gatesfoundation.org/How-We-Work/General-Information/Open-Access-Policy, http://www.hhmi.org/sites/default/files/About/Policies/sc320-public-access-to-publications.pdf). Gates has gone one step further on open access, requiring all publications to be immediately available in open access “Gold” format (http://www.gatesfoundation.org/How-We-Work/General-Information/Open-Access-Policy). The Gates Foundation has also developed tools to assist its grantees in complying with these new open access policies (https://youtu.be/lweC1BajBBY).\n\nAs major funders increasingly mandate open access, more journals are providing open access options for authors. 
Many journals provide Creative Commons copyright options, offering a uniform set of standards. The increased adoption of Creative Commons licenses by journals, especially unrestricted CC-BY licenses, reduces the barrier to adoption of open and transparent sharing permissions (https://creativecommons.org/licenses/by/4.0/).\n\nData standards. Policies that ensure open access to the original underlying data and materials can be leveraged more effectively when the data from different studies can be compared easily. Common standards have been incorporated into reporting policies for journals. For example, the Addgene Vector Database provides a repository of published and commercially-available expression vectors (https://www.addgene.org/vector-database/). At least 31 journals recommend or require authors to submit their plasmids to the Addgene repository (https://www.addgene.org/deposit/pre-publication/). Addgene performs sequencing to verify submission quality (https://help.addgene.org/hc/en-us/articles/206135535-What-type-of-Quality-Control-does-Addgene-perform-), and requires each contributor to provide the same types of information in a uniform format, making the database easily searchable and comparable.\n\nThe Addgene approach works well for plasmids, which are relatively limited in number and size compared with high-throughput, whole-genome sequencing data sets. As next-generation techniques become more widespread, data standards will become even more important. These data standards include metadata (i.e., information about the data set), data fields, and file formats. With data standards, large data sets become much easier to download and interpret, because users do not have to spend valuable and expensive computational time modifying existing analysis tools to fit each new data set. Researchers have proposed a series of metadata checklists for high-throughput studies48. 
Similar to the development of reagent standards described above, updated data standards will require multi-stakeholder collaboration within the community of practice, harnessing existing standards where possible and harmonizing divergent practices where appropriate.\n\nPost-publication review. Scientific review is an ongoing process that continues well after peer-review and publication. The broader scientific community may identify issues that were not highlighted by the peer reviewers, and other researchers may attempt to reproduce a study on their own. As the post-publication review process may require experimentation, it warrants dedicated resources.\n\nDespite the time commitment involved and the value added to science, the research community typically does not reward post-publication review. Historically, funding agencies and tenure boards have not tended to reward results reproducibility studies, and researchers can have trouble convincing journals to review and accept such manuscripts. However, stakeholders from different sectors now are dedicating resources to results reproduction. The Laura and John Arnold Foundation currently is funding a cancer biology results reproducibility study as part of its Reproducibility Project series. The first five attempts to reproduce papers as part of this effort were published in January 2017 in the journal eLife, an open access journal supported by the Howard Hughes Medical Institute, Max Planck Gesellschaft, and the Wellcome Trust49. Two of these five studies successfully reproduced the original findings, one study did not, and two attempts were inconclusive. Since the project seeks to reproduce approximately 50 papers, conclusions about the Project’s reproducibility rates at this early stage (i.e., after five experiments) would be premature. An earlier project, Reproducibility Project: Psychology, attempted to reproduce 100 original psychology findings, successfully reproducing one-third to one-half of the results50. 
Another open access publication, F1000Research, established the Preclinical Reproducibility and Robustness Channel as a platform dedicated to reproducibility of published papers (https://f1000research.com/channels/PRR).\n\nResearchers who attempt to raise concerns with editors about irreproducible or incorrectly analyzed results in published articles describe many barriers, including lack of clarity and transparency from journals in the post-publication review process51. Similarly, journals do not always have a clearly defined retraction process that mirrors the submission and peer review processes. Much like the stakeholder discussions on study design, cell line authentication, and open access, the retraction process is an important topic that warrants engagement by the research community. The Committee on Publication Ethics has established best practices in its Retraction Guidelines52, which may provide an opportunity for this discussion.\n\nWebsites like PubMed Commons and PubPeer provide an informal mechanism to facilitate post-publication review and results reproduction attempts by offering a discussion forum where researchers can openly discuss scientific publications. Discussions on these platforms can occur much faster than the pace of published technical commentaries in journals, and provide opportunities for more scientists to contribute. In 2016, researchers undertook a widespread deployment of the automated statcheck algorithm on nearly 700,000 experiments from over 50,000 papers, and automatically generated comments on PubPeer for each paper53. This automated tool helps researchers identify papers that deserve further review and discussion about solutions, such as retraction or publication of counter studies. Discussions on open blogs are a double-edged sword. 
Whereas rapid turnaround and informal discussion can stimulate productive scientific debate, unmoderated discussion can also lead to unwarranted criticism of legitimate studies. In contrast, technical commentary in journals is refereed by an editor who can help organize and moderate the discussion.\n\nThe sheer volume of published research increases the difficulty of identifying and tracking publication errors. Science journalism is another tool that can improve reproducibility. Science reporters, such as the authors of Retraction Watch (www.retractionwatch.com), bring publicity to reproducibility and retraction news, which can galvanize the scientific community to action. For example, replicability of the initial paper describing the NgAgo genome editing technique has been the subject of fierce debate in the community, with researchers describing on internet and scientific news sites their difficulties in reproducing the paper’s claims. The technique drew so much attention that over 100 researchers attempted to reproduce it in the first few months after publication, but fewer than 10% were successful54. The controversy resulted in three peer-reviewed publications, all of which documented a failure to reproduce the original study, and researchers are now trying to understand the reasons for irreproducibility55.\n\nRetraction Watch also partners with the Center for Open Science to generate a database of retractions, as some retracted articles are still cited frequently after retraction56. Researchers armed with this database can avoid using retracted work as a (shaky) foundation for new studies, thereby increasing their chance of success. By reading about reproducibility and retraction news, researchers can learn about the common pitfalls that can cause retractions and new resources available to help them improve the reproducibility of their work, such as the initiatives described in this report. 
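The consistency check that the statcheck algorithm described above automates (recomputing a p-value from a reported test statistic and comparing it with the reported value) can be illustrated with a minimal sketch. This is not statcheck's actual implementation; the z-test case, the `check_p` helper, and the tolerance are illustrative assumptions.

```python
import math

def check_p(z, reported_p, tol=0.005):
    """Recompute a two-tailed p-value from a reported z-statistic and
    flag it when it differs from the reported p-value by more than tol.
    Returns (is_consistent, recomputed_p)."""
    # Two-tailed p for a standard normal test statistic:
    # p = 2 * (1 - Phi(|z|)) = erfc(|z| / sqrt(2))
    recomputed = math.erfc(abs(z) / math.sqrt(2))
    return abs(recomputed - reported_p) <= tol, recomputed

# A reported z = 1.96 with p = 0.05 is internally consistent,
# whereas the same statistic reported with p = 0.20 would be flagged.
print(check_p(1.96, 0.05))
print(check_p(1.96, 0.20))
```

Run over thousands of papers, even a check this simple can surface candidate reporting errors for human review, which is the role the PubPeer comments served.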
However, highly-visible retractions are a potential threat to public confidence and support for science, as the lay public reads more about retractions and irreproducibility. This further highlights the urgent need for the scientific community to act on the initiatives described in this report and make meaningful improvements to reproducibility.\n\n\nConclusion: a path forward\n\nIrreproducibility is a serious and costly problem in the life sciences. Measured reproducibility rates are shockingly low, and solving this problem will require significant effort. Many stakeholders now recognize the importance of reproducibility and are taking steps to develop and implement meaningful policies, practices, and resources to address the underlying issues. The lessons learned from these early efforts will assist all stakeholders seeking to scale up or replicate successful initiatives. The research community is making progress toward improving research quality. By prioritizing the strategies outlined in the Report, stakeholders in life science research will continue to make progress in improving reproducibility and in turn have a profound positive impact on the subsequent development of treatments and cures.\n\nHowever, we would be remiss if we ignored a transcending challenge facing the research community and its willingness to voluntarily accept these positive steps in addressing reproducibility: the current rewards system in academia, including constant pressure to obtain grants and publish in “high impact” journals. The research culture, particularly at academic institutions, must seek greater balance between the pressures of career advancement and advancing rigorous research through standards and best practices. 
We believe that the many initiatives described in this Report add needed momentum to this emerging culture shift in science, but additional leadership and community-wide support will be needed to better align incentives with reproducible science and effect this change.\n\nContinued transparent, international, multi-stakeholder engagement is the way forward to better, more impactful science. GBSI calls on all stakeholders – individuals and organizations alike – to take action to improve reproducibility in the preclinical life sciences by joining an existing effort, replicating successful policies and practices, providing resources to results reproduction efforts, and/or taking on new opportunities. Table 3 contains specific actions that each stakeholder group can take to enhance reproducibility.\n\nIn its leadership role, GBSI will:\n\nwork with journals and funders to encourage policies that increase rigor, accountability, and open access to data and methodologies;\n\nlead the effort toward improving the validation of reagents—particularly cells and antibodies—and work with the research community to explore other scientific areas (e.g. stem cells and synthetic biology) where a greater emphasis on the development of standards and best practices is needed to ensure quality and advance discovery;\n\nensure that high-quality, accessible online training modules are available to both emerging and experienced researchers who are eager to improve their proficiencies in new and evolving best practices; and\n\ncontinue to track reproducibility efforts through the Reproducibility2020 Initiative.\n\nThe preclinical research community is full of talented, motivated people who care deeply about producing high-quality science. We are optimistic about the potential to improve reproducibility, and look forward to contributing to the effort.",
"appendix": "Author contributions\n\n\n\nLF, GV, and RW conceived of the review study. LF and RW developed the initial outline and GV carried out most of the literature review and completed the first draft. All authors were involved in subsequent revisions of the manuscript and have agreed to the final content.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nAcknowledgments\n\nThe authors would like to acknowledge Dr. Kavita Berger and Ms. Allison Mistry from Gryphon Scientific as well as Michael Byrne and Lauren Shihar from GBSI for their review of the manuscript.\n\nAn earlier version of this article can be found on bioRxiv (doi: 10.1101/109017).\n\n\nReferences\n\nGlobal Biological Standards Institute: The Case for Standards in Life Science Research. 2013. Reference Source\n\nBegley CG, Ellis LM: Drug development: Raise standards for preclinical cancer research. Nature. 2012; 483(7391): 531–533. PubMed Abstract | Publisher Full Text\n\nIoannidis JP: Why Science Is Not Necessarily Self-Correcting. Perspect Psychol Sci. 2012; 7(6): 645–654. PubMed Abstract | Publisher Full Text\n\nFreedman LP, Cockburn IM, Simcoe TS: The Economics of Reproducibility in Preclinical Research. PLoS Biol. 2015; 13(6): e1002165. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMarcus AD: Lab Mistakes Hobble Cancer Studies But Scientists Slow to Take Remedies. In The Wall Street Journal. 2012. Reference Source\n\nEconomist T: Problems with scientific research: How science goes wrong. In The Economist. 2013. Reference Source\n\nGoodman SN, Fanelli D, Ioannidis JP: What does research reproducibility mean? Sci Transl Med. 2016; 8(341): 341ps12. PubMed Abstract | Publisher Full Text\n\nGlass DJ: Experimental design for biologists. Cold Spring Harbor Laboratory Press. 2014. 
Reference Source\n\nCollins FS, Tabak LA: Policy: NIH plans to enhance reproducibility. Nature. 2014; 505(7485): 612–3. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBaker M: 1,500 scientists lift the lid on reproducibility. Nature. 2016; 533(7604): 452–454. PubMed Abstract | Publisher Full Text\n\nButton KS, Ioannidis JP, Mokrysz C, et al.: Power failure: why small sample size undermines the reliability of neuroscience. Nat Rev Neurosci. 2013; 14(5): 365–376. PubMed Abstract | Publisher Full Text\n\nMoher D, Avey M, Antes G, et al.: The National Institutes of Health and guidance for reporting preclinical research. BMC Med. 2015; 13: 34. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGlobal Biological Standards Institute: Proficiency Index Assessment (PIA) - GRP. [Survey] 2016; [December 16, 2016; January 19, 2017]. Archive: http://web.archive.org/web/20170214233330/https://www.gbsi.org/publication/grpsurvey/. Reference Source\n\nFane B, Treadway J, Gallagher A, et al.: Open Season for Open Data: A Survey of Researchers. In The State of Open Data, Figshare and Digital Science. Editors Figshare. 2016. Reference Source\n\nEpskamp S, Nuijten MB: statcheck: Extract Statistics from Articles and Recompute p Values. [Software Package], 2016; [August 18, 2016 January 23, 2017]. Reference Source\n\nStatreviewer: Statreviewer: Automated Statistical Support for Journals and Authors. [January 23, 2017]. Archive: http://web.archive.org/web/20170214233434/http://www.statreviewer.com/. Reference Source\n\nRuxton G, Colegrave N: Experimental design for the life sciences. Oxford University Press, 2011. Reference Source\n\nSoderberg C, Dodson GT, Clyburne-Sherin A: COS Reproducible Research and Statistics Training. Open Science Framework. 2016. Reference Source\n\nKass RE, Caffo BS, Davidian M, et al.: Ten Simple Rules for Effective Statistical Practice. PLoS Comput Biol. 2016; 12(6): e1004961. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nBaker M: Reproducibility: seek out stronger science. Nature. 2016; 537(7622): 703–704. Publisher Full Text\n\nIrizarry R, Love M, Carey V: Data Analysis for Life Sciences 6: High-performance Computing for Reproducible Genomics. Harvard University. Reference Source\n\nFrahm GE, Smith DG, Kane A, et al.: Determination of supplier-to-supplier and lot-to-lot variability in glycation of recombinant human serum albumin expressed in Oryza sativa. PLoS One. 2014; 9(10): e109893. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFreedman LP, Gibson MC, Bradbury AR, et al.: [Letter to the Editor] The need for improved education and training in research antibody usage and validation practices. Biotechniques. 2016; 61(1): 16–18. PubMed Abstract | Publisher Full Text\n\nBaker M: Reproducibility crisis: Blame it on the antibodies. Nature. 2015; 521(7552): 274–6. PubMed Abstract | Publisher Full Text\n\nBradbury A, Plückthun A: Reproducibility: standardize antibodies used in research. Nature. 2015; 518(7537): 27–29. PubMed Abstract | Publisher Full Text\n\nUhlen M, Bandrowski A, Carr S, et al.: A proposal for validation of antibodies. Nat Methods. 2016; 13(10): 823–7. PubMed Abstract | Publisher Full Text\n\nGlobal Biological Standards Institute: Asilomar Antibody Workshop Report. 2016. Reference Source\n\nFreedman LP, Gibson MC, Ethier SP, et al.: Reproducibility: changing the policies and culture of cell line authentication. Nat Methods. 2015; 12(6): 493–7. PubMed Abstract | Publisher Full Text\n\nAlmeida JL, Cole KD, Plant AL: Standards for Cell Line Authentication and Beyond. PLoS Biol. 2016; 14(6): e1002476. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFreedman LP, Gibson MC, Wisman R, et al.: The culture of cell culture practices and authentication--Results from a 2015 Survey. Biotechniques. 2015; 59(4): 189–90, 192. 
PubMed Abstract | Publisher Full Text\n\nCapes-Davis A, Theodosopoulos G, Atkin I, et al.: Check your cultures! A list of cross-contaminated or misidentified cell lines. Int J Cancer. 2010; 127(1): 1–8. PubMed Abstract | Publisher Full Text\n\nU.S. Department of Agriculture: Standard Requirements. in 9 CFR 113. 2016.\n\nLorsch JR, Collins FS, Lippincott-Schwartz J: Cell Biology. Fixing problems with cell lines. Science. 2014; 346(6216): 1452–3. PubMed Abstract | Publisher Full Text | Free Full Text\n\nYu M, Selvaraj SK, Liang-Chu MM, et al.: A resource for cell line authentication, annotation and quality control. Nature. 2015; 520(7547): 307–11. PubMed Abstract | Publisher Full Text\n\nFan M: CRISPR 101: Validating Your Genome Edit. [Blog post]. 2015; [January 23, 2017]. Archive: http://web.archive.org/web/20170214234614/http://blog.addgene.org/crispr-101-validating-your-genome-edit. Reference Source\n\nNaldi M, Baldassarre M, Domenicali M, et al.: Mass spectrometry characterization of circulating human serum albumin microheterogeneity in patients with alcoholic hepatitis. J Pharm Biomed Anal. 2016; 122: 141–147. PubMed Abstract | Publisher Full Text\n\nOedit A, Vulto P, Ramautar R, et al.: Lab-on-a-Chip hyphenation with mass spectrometry: strategies for bioanalytical applications. Curr Opin Biotechnol. 2015; 31: 79–85. PubMed Abstract | Publisher Full Text\n\nBaker M: Reproducibility: Respect your cells! Nature. 2016; 537(7620): 433–435. PubMed Abstract | Publisher Full Text\n\nNosek BA, Alter G, Banks GC, et al.: Transparency and Openness Promotion (TOP) Guidelines. Open Science Framework. 2017. Reference Source\n\nBandrowski A, Brush M, Grethe JS, et al.: The Resource Identification Initiative: A cultural shift in publishing [version 2; referees: 2 approved]. F1000Res. 2015; 14: 134. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSheehan J: Federally Funded Research Results Are Becoming More Open and Accessible. [Blog post]. 
2016; [October 28, 2016 January 23, 2017]. Reference Source\n\nEnserink M: European Commission considering leap into open-access publishing. In Science. 2017. Publisher Full Text\n\nVan Noorden R: Gates Foundation research can’t be published in top journals. Nature. 2017; 541(7637): 270. PubMed Abstract | Publisher Full Text\n\nVan Noorden R: Science journals permit open-access publishing for Gates Foundation scholars. Nature. 2017; [February 15, 2017]. Archive: http://web.archive.org/web/20170215174624/http://www.nature.com/news/science-journals-permit-open-access-publishing-for-gates-foundation-scholars-1.21486. Publisher Full Text\n\nAnderson K: The Price of Posting— PubMed Central Spends Most of Its Budget Handling Author Manuscripts. 2013; [February 7, 2017]. Reference Source\n\nCallaway E: Heavyweight funders back central site for life-sciences preprints. Nature. 2017; 542(7641): 283–284. PubMed Abstract | Publisher Full Text\n\nKaiser J: NIH enables investigators to include draft preprints in grant proposals. Science. 2017. Reference Source\n\nKolker E, Özdemir V, Martens L, et al.: Toward more transparent and reproducible omics studies through a common metadata checklist and data publications. OMICS. 2014; 18(1): 10–14. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNosek BA, Errington TM: Making sense of replications. eLife. 2017; 6: pii: e23383. PubMed Abstract | Publisher Full Text | Free Full Text\n\nOpen Science Collaboration: PSYCHOLOGY. Estimating the reproducibility of psychological science. Science. 2015; 349(6251): aac4716. PubMed Abstract | Publisher Full Text\n\nAllison DB, Brown AW, George BJ, et al.: Reproducibility: A tragedy of errors. Nature. 2016; 530(7588): 27–9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWager E, Brown AW, George BJ, et al.: Retraction Guidelines. 2009. Reference Source\n\nChawla DS: Here’s why more than 50,000 psychology studies are about to have PubPeer entries. 2016; [January 23, 2017]. 
Archive: http://web.archive.org/web/20170214235501/http://retractionwatch.com/2016/09/02/heres-why-more-than-50000-psychology-studies-are-about-to-have-pubpeer-entries/. Reference Source\n\nCyranoski D: Replications, ridicule and a recluse: the controversy over NgAgo gene-editing intensifies. Nature. 2016; 536(7615): 136–7. PubMed Abstract | Publisher Full Text\n\nCyranoski D: Updated: NgAgo gene-editing controversy escalates in peer-reviewed papers. Nature. 2016; 540(7631): 20–21. PubMed Abstract | Publisher Full Text\n\nMcCook A: New Retraction Watch partnership will create retraction database. 2015; [January 23, 2017]. Archive: http://web.archive.org/web/20170214235526/http://retractionwatch.com/2015/11/24/new-partnership-will-create-retraction-database/. Reference Source"
}
|
[
{
"id": "22387",
"date": "30 May 2017",
"name": "Michael Lauer",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper’s academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nFreedman and colleagues present a narrative review on current efforts underway to improve reproducibility in preclinical biomedical research. They begin by summarizing the extent of the problem and noting that quality checkpoints are either used in disparate points of the research cycle or used only sparingly. They identify key sources of irreproducibility as poor study design and analysis, inadequately authenticated reagents and reference materials, inadequately documented laboratory protocols, and inadequate reporting and review.\n\nThey describe the important roles of many stakeholders, including funders, researchers, research institutions, industry, foundations, professional societies, and the public.\n\nThe authors proceed to describe many efforts already under way including new funder requirements, journal guidelines, enhanced training opportunities, programs to enhance standards development and authentication checks, protocol repositories, improved reporting platforms, open access policies (including open access publishing, greater use of preprint servers, data and code sharing), data standards, and post-publication review. They conclude with a “path forward” that they call the “Reproducibility2020 Action Plan” that includes specific recommendations for funders, researchers, institutions, journals, industry, foundations, and the public.\n\nThoughts and comments:\n\nThe paper is interesting, well-written, and well-documented. 
I appreciated the many web links that take the reader directly to interesting sites.\n\nThe authors suggest that the current crisis begins with the Amgen findings (Reference 2). While that was a defining moment, I wonder whether it’s also worth mentioning that contemporary discussion about false research findings dates back at least to Ioannidis 2005 (https://doi.org/10.1371/journal.pmed.0020124). Ioannidis there suggests that exploratory research was highly vulnerable because of small sample sizes, overly flexible designs, and biased designs (e.g. with lack of randomization and proper masking).\n\nTable 1: I commend the authors for noting that “the chance of an irreproducible finding is much higher than the commonly noted 5% threshold.” This is widely under-appreciated, even by well-trained scientists. The authors might consider spelling out that prospective, properly done sample size calculations are critical to overcoming this problem. The “elephant in the room” is that sample sizes will have to increase substantially, meaning that with constrained funds researchers will be forced to conduct fewer experiments. But as some have noted (Cressey D, Nature, April 15, 2015), that may be good for the enterprise – it would be better to do fewer properly powered experiments than to do too many woefully underpowered experiments.\n\nTable 1 and elsewhere: Should there be a “Consumer Reports” for antibodies, cell lines, and other resources? Or maybe I’m missing it, and you’re saying that’s happening. Such a “Consumer Reports” would allow for large-scale surveys in which researchers can report problems with purchased materials.\n\nTable 1: Another potential solution to study design and analysis is mandatory sharing of statistical code (e.g. in SAS, R, or Stata). This is already common practice in some fields (e.g. economics).\n\nTable 2: Another consequence for the public is lack of faith in science. 
They hear scientists promising the moon, and then nothing happens.\n\nTable 2: There is an ethical problem subjecting animals and people to inadequately designed or documented experiments that were doomed to be irreproducible from the beginning.\n\nTable 2 or elsewhere: NAS just released a report on research integrity in which it notes a continuum between frank misconduct (fabrication, falsification, and plagiarism) and “practices detrimental to research.” The authors might want to consider the comments of the report (https://www.nap.edu/catalog/21896/fostering-integrity-in-research).\n\nThere have been some recent successes in improved rigor, such as in preclinical stroke research. (For example, see http://circres.ahajournals.org/content/early/2017/04/04/CIRCRESAHA.117.310628). The authors note that “stroke research has uniquely improved.”\n\nPage 6 – the link didn’t take me directly to “Statcheck software,” though I did eventually find it.\n\nProtocols – many leading clinical journals require authors to submit full clinical trial protocols along with the manuscripts.\n\nTable 3\nShould it be the responsibility of funders to provide statistical consultation to applicants? Should it be the responsibility of funders to pay for open access and transparency tools? Should funders include dedicated reviews on methodological issues for those applications deemed meritorious by content?\n\nIs the topic of the review discussed comprehensively in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Yes\n\nIs the review written in accessible language? Yes\n\nAre the conclusions drawn appropriate in the context of the current research literature? Yes",
"responses": []
},
{
"id": "22382",
"date": "01 Jun 2017",
"name": "Lenny Teytelman",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis is a carefully considered, well written, and comprehensive overview of the numerous causes of irreproducibility and the many ongoing efforts to address them. This manuscript also provides a set of useful actionable recommendations for researchers, funders, journals, and other stakeholders to improve the rigor and reproducibility of research.\n\nBelow are specific comments that I hope the authors will find useful for revising and improving their paper.\n\nATCC is one of the main funders of GBSI and this report mentions ATCC a couple of times. The mentions are appropriate, but the GBSI/ATCC relationship should be clearly disclosed in the COI.\n\n[Abstract and Introduction]\nBoth the abstract and introduction mention the 2012 Amgen report as the beginning of attention to reproducibility. Without a doubt, the Amgen and Bayer headlines have led to a spike of attention and discussion; however the reproducibility issue is not a new problem. 
Inability to repeat the work of others is as old as science itself and much has been previously written regarding this issue (examples: https://www.ncbi.nlm.nih.gov/pubmed/16510544, http://www.the-scientist.com/?articles.view/articleNo/16604/title/Microarray-Data-Stands-Up-to-Scrutiny/, http://www.nature.com/ng/journal/v41/n2/full/ng.295.html, http://iai.asm.org/content/78/12/4972.full, http://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.0040028, http://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.0020272, http://elpub.scix.net/data/works/att/001_elpub2008.content.pdf). Moreover, many efforts to improve reproducibility are significantly older than 2012 (for example, Current Protocols, Open Wet Ware, Nature Protocol Exchange, JOVE, and more). Would be good to explicitly acknowledge this.\n\n[Introduction] “Based on these findings, GBSI completed an economic study in 2015 and estimated that the prevalence of irreproducible preclinical research exceeds 50%, with associated annual costs of approximately $28B in the United States alone[4].”\nAs has been publicly discussed after the PLOS Biology publication [4], the estimate of $28B cost of irreproducible research is on shaky ground (see The Sensational vs. the Useful in the Quest for Reproducibility in Research and Study claims $28 billion a year spent on irreproducible biomedical research). It extrapolates to all of US Biomedical Funding from a few estimates of irreproducibility in specific fields. I know of no quantitative research that evaluates reproducibility of published basic research in zebra fish or drosophila communities. 
If reproducibility problems are greater in cancer, human cell lines, and other research fields, the overall scale of the reproducibility problem across all biomedical research could be smaller.\nAlso, I very much appreciate the authors’ note that the “irreproducible” definition is tricky and that they include results, methods, and inferential reproducibility in their analysis. So, the results may be simply “hard to reproduce” due to missing details or reagents, but they would be included in the “irreproducible total”. The definition issues further complicate the attempt to estimate in dollar amounts the scale of irreproducible research.\nInstead of saying, “the prevalence of irreproducible preclinical research exceeds 50%, with associated annual costs of approximately $28B in the United States alone”, I urge the authors to simply refer to their publication with something more general such as, “GBSI’s 2015 economic study highlighted the high level of economic costs from poor reproducibility.”\n\n[Study design and analysis]\nBox 2 recommends online training courses as highly cost-effective. It is true that they are cost effective, but are they effective when it comes to improving study design? Given how busy scientists tend to be, it is unclear that they will actually devote time to watching online training videos. (For example, podcasts for scientists tend to be consumed much more readily than videos of the same length, as people can listen during commute, runs, cooking, etc. In contrast, videos longer than 3-4 minutes are barely watched by anyone to the end.)\n\n[Laboratory protocols]\nThis section should probably mention the Protocol Exchange from Nature/Springer which is a protocol repository that was started over a decade ago to improve the reporting of methods.\nThe authors might also want to include a mention of Bio-protocol, a journal devoted to increasing reproducibility. 
Though a selective peer-reviewed journal rather than a repository, Bio-protocol is also connecting to journals and eLife recently included them in their author guidelines to encourage scientists, when appropriate, to submit new method details to Bio-protocol in parallel with their eLife manuscript submission.\n\n[Reporting and review]\nIn the data reporting section, I recommend adding a brief discussion of data repositories such as Dryad and figshare. Journal policies regarding data sharing are critical and this overview of the genomics community journal policies from Heather Piwowar and Wendy Chapman is relevant: http://elpub.scix.net/data/works/att/001_elpub2008.content.pdf.\nAlso, the explicit data policy from the Public Library of Science is an important step in improving reproducibility of published work.\nRelated to the data policies, sharing code and software from computational pipelines used to analyze the data is critical. Perhaps add a mention of policies encouraging proper reporting and sharing of code/software?\n\n[Reporting and review]\nThere are important experiments happening with open review from publishers such as F1000 Research, EMBO, BMJ, PeerJ and others. Transparent publication with review/author response history can be helpful for reproducibility as readers can see reviewers’ concerns and that can help to discern which parts of the paper are more or less trustworthy.\nAnother relevant proposal is for the adoption of CRediT (Contributor Roles Taxonomy) by publishers. (See Transparency In Authors' Contributions And Responsibilities To Promote Integrity In Scientific Publication.)\n\n[Reporting and review: open access policies]\nThis section does a good job of summarizing open access initiatives and policies from funders, but the link to reproducibility is unclear. 
As an advocate for open access, I am delighted to see these developments, but the connection between open access publishing and increased reproducibility is not obvious to me.\nA paper in a subscription journal can be solid and reproducible, while one in an open access journal is not. The reverse is just as likely. Certainly, this is more a function of chance and editorial and peer review vigilance than the journal’s business model.\nAn argument can be made for how open access enables reproducibility initiatives (ex. CiteAb), but I don’t think I saw it in this paper.\n\n[Reporting and review: preprints]\nAs above for open access, I am a huge fan of preprints but am unsure how they fit into the push for greater reproducibility. Preprints, of course, shorten publication delays, facilitating communication and speeding up research. However, preprints are not peer-reviewed, do not go through conflict-of-interest checks, data/method reporting compliance checks, and so forth. At scale adoption of preprints in biology is welcome for many reasons, but not exactly due to more rigor and higher reproducibility.\n(Possibly, preprints reduce the pressure to publish and create a track record of a paper’s initial state, reducing publication biases? Preprints can also help to challenge previously-published work and to report negative results. If these are the arguments for preprints improving reproducibility, please make this case explicitly in the manuscript.)\n(Minor note: the use of “preprint” versus “pre-print” is inconsistent in this paper. Please remove the extra dash.)\n\n[Table 3, action plan]\nFor funders, there is a recommendation to “Enact policies requiring study design pre-registration”. I am on the steering committee for COS’s pre-registration initiative and support this effort, but I am not sure that “requiring” pre-registration widely is appropriate. This will depend on the funder and specific research grant. 
For example, in the case of method development and highly explorative grants, pre-registration is unlikely to be productive. How about “encourage where appropriate” instead of “require”?\nFor journals, there is a recommendation to “Require authors to link to version-controlled protocols”. Again, “require” is a strong term. In certain cases, it may be better to share a protocol directly as part of the publication (for example, JOVE). A more general “encourage or require detailed reporting of protocols” may be more appropriate.\n\n[Conclusion] “Irreproducibility is a serious and costly problem in the life sciences. Measured reproducibility rates are shockingly low, requiring significant effort to solve this problem.”\n\nI very much agree with the first sentence in that irreproducibility is a serious problem. However, is the reproducibility rate “shockingly” low? What is that rate for biology in general? As discussed above, 50% may be the number for some fields but not for others. More importantly, what rate are we aiming for? 70%? 90%? If all of the action items recommended in this report were followed, what rate would we end up with? Is our current level of reproducibility better or worse than it was 30 years ago? What is the optimal reproducibility rate from society’s perspective?\nI don’t have the answers to the above questions. We need a lot more data to make informed statements about the levels of reproducibility over time. It is terrific that we are discussing this issue and the initiatives to address the problem, but I urge caution in editorializing about whether today’s reproducibility levels are a “crisis” or are “shocking”. Science is hard and because it is pushing the boundaries of knowledge, we will never be at 100% of published research being reproducible. 
We can and should do a lot better, hence all of the initiatives, but it will never be 100%.\n\n[General thoughts]\nAs I mention in #11 above, with the exception of a few efforts from Science Exchange and the Center for Open Science, we have very little data on the reproducibility issue. The authors may want to include in their discussion the need for more quantitative studies about replication and reproducibility over time. We need ways to assess the various initiatives and to measure whether they are in fact improving the overall reproducibility levels of published research.\nAlso, most of the recommendations and discussion in this Report are focused on the design, execution, and publication steps of the research cycle. However, given the complexity of research and the fact that we will never attain 100% reproducibility, efforts aimed at post-publication opportunities to improve reproducibility may be particularly effective. Perhaps we should pay more attention not just to preventing mistakes, but to ways to correct and improve papers, long after publication.\nThis Report mentions post-publication review and retractions, but there are other promising efforts in this phase. Versioning, as implemented on F1000Research and bioRxiv, has great potential. There is a need for technologies that automatically connect readers to corrections and discussion on the papers that they have in their libraries. Crossmark from Crossref is a great initiative aimed at making corrections discoverable. Also, an interesting recent proposal argues for rethinking of “retractions/corrections” in favor of \"amendments\" to increase post-publication evolution and improvement of work.\n-------------------------------------\n\nI would like to stress that I thoroughly enjoyed this report and am grateful to the authors and GBSI for their efforts to improve the research enterprise for the benefit of scientists and the public. 
The authors should feel free to ignore any of the above suggestions if they disagree.\n\nIs the topic of the review discussed comprehensively in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Yes\n\nIs the review written in accessible language? Yes\n\nAre the conclusions drawn appropriate in the context of the current research literature? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-604
|
https://f1000research.com/articles/6-596/v1
|
28 Apr 17
|
{
"type": "Software Tool Article",
"title": "Viewing RNA-seq data on the entire human genome",
"authors": [
"Eric M. Weitz",
"Lorena Pantano",
"Jingzhi Zhu",
"Bennett Upton",
"Ben Busby",
"Eric M. Weitz",
"Lorena Pantano",
"Jingzhi Zhu",
"Bennett Upton"
],
"abstract": "RNA-Seq Viewer is a web application that enables users to visualize genome-wide expression data from NCBI’s Sequence Read Archive (SRA) and Gene Expression Omnibus (GEO) databases. The application prototype was created by a small team during a three-day hackathon facilitated by NCBI at Brandeis University. The backend data pipeline was developed and deployed on a shared AWS EC2 instance. Source code is available at https://github.com/NCBI-Hackathons/rnaseqview.",
"keywords": [
"RNA-seq",
"ideogram",
"javascript",
"next generation sequencing"
],
"content": "Introduction\n\nInteractive visualizations can yield insights from the deluge of gene expression data brought about by RNA-seq technology. Several genome browsers enable users to see such data conveniently plotted within a single chromosome in a web application (Broad Institute, 2014; Kent et al., 2002; National Center for Biotechnology Information: Genome Data Viewer (2016)). While such single-chromosome views excel at displaying local features, depicting RNA-seq data across all chromosomes in a genome, i.e. in an ideogram, has the potential to intuitively highlight global patterns of gene expression (such as in Figure 2a in Parker et al., 2016).\n\nIn this paper we describe RNA-Seq Viewer, a web application that enables users to visualize genome-wide expression data from the National Center for Biotechnology Information’s (NCBI) Sequence Read Archive (SRA) (Kodama et al., 2012) and Gene Expression Omnibus (GEO) (Barrett et al., 2013) databases. The application consists of a backend data pipeline written in Python and a web frontend powered by Ideogram.js, a JavaScript library for chromosome visualization (Weitz, 2015).\n\nThe data pipeline, developed by a small team of software engineers in a three-day NCBI hackathon at Brandeis University, extracts aligned RNA-seq data from SRA or GEO and transforms it into a format used by Ideogram. Ideogram then displays the distribution of genes in chromosome context across the entire human genome and enables users to filter those genes by gene type or expression levels in the given SRA/GEO sample.\n\n\nMethods\n\nThe primary task of the hackathon was to develop a prototype data pipeline to extract aligned RNA-seq data from SRA, determine genomic coordinates for the sampled genes, and transform the combined result into the JSON format used by Ideogram.js annotation sets. 
The formatted annotation data was then plugged into a lightly modified example from the Ideogram repository to provide an interactive, faceted search application for exploring genome-wide patterns of gene expression.\n\nIdeogram.js uses JavaScript and SVG to draw chromosomes and associated annotation data in HTML documents. It leverages D3.js, a popular JavaScript visualization library, for data binding, DOM manipulation, and animation (Bostock et al., 2011). Faceted search in Ideogram is enabled by Crossfilter, a JavaScript library for exploring large multivariate datasets (Square Inc., 2012). By relying only on JavaScript libraries, HTML and CSS, Ideogram can function entirely in a web browser, with no server-side code required, which simplifies embedding ideograms in a web application.\n\nAnnotation data for Ideogram leverages space-efficient data structures and the compact nature of JSON to minimize load time in web pages. For example, the gzip-compressed set of 31,148 human gene feature annotations, including data on expression level and gene type, output by our pipeline for SRA run SRR562646 (National Center for Biotechnology: Sequence Read Archive Run Browser) is 399 KB in size and takes less than 285 ms to download on an average US Internet connection (14 Mb/s download bandwidth, 50 ms latency) (Belson et al., 2016) as measured using Chrome Developer Tools (Basques & Kearney, 2016). Under the same network-throttled conditions using Chrome version 51 on a Mac OS X laptop with a 2.9 GHz Intel Core i5 CPU, the Chrome DevTools Timeline tab reports that an uncached, interactive genome-wide histogram of expression for 31,148 gene features takes Ideogram between 830 ms and 1044 ms to completely load and render after the start of navigation to the web page.\n\nBroadly, the pipeline developed to produce Ideogram annotation data works as follows:\n\n1. 
Get data for an SRR accession from NCBI SRA (National Center for Biotechnology Information: Sequence Read Archive).\n\n2. Count reads for each gene and normalize expression values to TPM units (Wagner et al., 2012)\n\n3. Get coordinates and type for each gene from a GFF file in the NCBI Homo sapiens Annotation Release\n\n4. Format coordinates and TPM values for each gene into JSON used by Ideogram.js\n\nThe data pipeline exists in two parts: one for data in SRA and one for data in GEO.\n\nThe tool reads a list of SRR accession numbers (National Center for Biotechnology Information, 2011; National Center for Biotechnology Information: SRA Handbook (2011)) and identifies the ones that have alignment information. It then retrieves the genome reference used for the creation of the BAM/SAM file to download the gene annotation for quantification. Only genome assemblies GRCh37 (GCA_000001405.1) and GRCh38 (GCA_000001405.15) are supported, and the annotations used for each of them are NCBI Homo sapiens Annotation Release 105 and 107, respectively (National Center for Biotechnology Information, 2013; National Center for Biotechnology Information, 2015).\n\nAlternatively, the tool can read a BAM/SAM file in case of local files. With a single command, the tool quantifies gene expression using HTSeq-count version 0.6.1p1 (Anders et al., 2015) on the output of sam-dump version 1.3 (National Center for Biotechnology Information, 2011). To avoid possible errors due to non-standard SAM files, intermediate filtering steps sort the BAM file and keep only properly paired reads. The output from HTSeq-count is a tabular file, where the first column is the gene symbol and the second is the read count. 
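As a hypothetical illustration of the count-to-TPM conversion in Step 2 (this is a sketch, not the project's actual code; the gene names and helper function are made up for the example), the normalization can be expressed in Python as:

```python
def counts_to_tpm(counts, lengths_bp):
    """Convert raw read counts to transcripts per million (TPM).

    counts     -- dict mapping gene symbol to read count (HTSeq-count output)
    lengths_bp -- dict mapping gene symbol to gene length in base pairs
                  (per the text, the longest mature transcript per gene)
    """
    # Length-normalized rates: reads per kilobase of transcript
    rates = {gene: counts[gene] / (lengths_bp[gene] / 1000.0) for gene in counts}
    total = sum(rates.values())
    # Scale so that TPM values sum to one million across all genes
    return {gene: rate / total * 1e6 for gene, rate in rates.items()}

# Toy example with made-up genes
counts = {"GENE_A": 100, "GENE_B": 300, "GENE_C": 100}
lengths = {"GENE_A": 1000, "GENE_B": 3000, "GENE_C": 500}
tpm = counts_to_tpm(counts, lengths)
# TPM values always sum to 1,000,000, which is what makes
# expression levels comparable across samples
```

Because every sample's TPM values sum to the same constant, a gene's TPM can be compared across SRA runs more safely than raw counts or RPKM (Wagner et al., 2012).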
Finally, we normalize the expression by the length of the mature transcript using the longest transcript as the size of the gene.\n\nAfter obtaining TPM values for each gene’s expression level (Step 2) as described above, the next step in the pipeline parses genomic coordinates (chromosome name, start and stop) and gene type (e.g. mRNA, ncRNA) from a GFF file in the NCBI Homo sapiens Annotation Release. These data are combined with each gene’s TPM value, formatted into a compressed JSON structure, and written to a file containing symbols, genomic coordinates, expression levels and gene types for every human gene. This file, e.g. SRR562646.json, represents the final output of the RNA-Seq Viewer data pipeline, and contains all the data used by the fast client-side faceted search in Ideogram.js.\n\n\nResults\n\nThe resulting RNA-Seq Viewer web application prototype was demonstrated at the conclusion of the three-day hackathon at Brandeis University. The application provides an interactive data visualization in which users can filter genes by expression level and gene type across the entire human genome (Figure 1) or within a single chromosome (Figure 2).\n\n\nDiscussion\n\nThe RNA-Seq Viewer prototype demonstrates a pipeline for transforming aligned RNA-seq data from SRA into a format used for genome-wide visualization.\n\nNext steps for this data pipeline include supporting RNA-seq alignment and normalization when using multiple samples, such as from different tissues. Filters for those different tissues could also be added as filters in the display. The resulting genome-wide visualizations could then be embedded in genome browsers, e.g. NCBI Genome Data Viewer (National Center for Biotechnology Information: Genome Data Viewer), or any genomics-oriented application that supports HTML, CSS, and JavaScript.\n\nThe prototype implemented in the hackathon only supports RNA-seq datasets from SRA that are already aligned to a reference genome, e.g. GRCh37 or GRCh38. 
Salmon (Patro et al., 2015) and Kallisto (Bray et al., 2016) are two popular alignment-free quantification tools that could be used to support such unaligned datasets. Both can generate gene expression estimates with low memory and CPU requirements.\n\n\nSoftware availability\n\nLatest source code: https://github.com/NCBI-Hackathons/rnaseqview\n\nArchived source code as at the time of publication: https://dx.doi.org/10.5281/zenodo.377055 (Weitz et al., 2017)\n\nLicense: CC0 1.0 Universal",
"appendix": "Author contributions\n\n\n\nAll of the authors participated in designing the study, carrying out the research, and preparing the manuscript. All authors were involved in the revision of the draft manuscript and have agreed to the final content.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nWork on this project by Eric Weitz and Ben Busby was supported by the Intramural Research Program of the National Institutes of Health (NIH)/National Library of Medicine (NLM)/National Center for Biotechnology Information (NCBI).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nThe authors thank Francesco Pontiggia and Brandeis University for provisioning computing resources and facilities for development. The authors thank Lisa Federer, NIH Library Writing Center, for manuscript editing assistance. The authors thank Valerie Schneider, NCBI, for insightful comments and suggestions.\n\n\nReferences\n\nAnders S, Pyl PT, Huber W: HTSeq--a Python framework to work with high-throughput sequencing data. Bioinformatics. 2015; 31(2): 166–9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBarrett T, Wilhite SE, Ledoux P, et al.: NCBI GEO: archive for functional genomics data sets--update. Nucleic Acids Res. 2013; 41(Database issue): D991–5. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBasques K, Kearney M: Chrome Developer Tools: Network Panel Overview [Online]. 2016; [Accessed: 26 September 2016]. Reference Source\n\nBasques K, Kearney M: Chrome Developer Tools: Network Panel Overview [Online]. 2016; [Accessed: 26 September 2016]. Reference Source\n\nBelson D, Thompson J, Sun J, et al.: Q4 2015 State of the Internet Report. Akamai Technologies. 2016; 8(4). Reference Source\n\nBostock M, Ogievetsky V, Heer J: D3: Data-Driven Documents. IEEE Trans Vis Comput Graph. 2011; 17(12): 2301–2309. 
PubMed Abstract | Publisher Full Text\n\nBray NL, Pimentel H, Melsted P, et al.: Near-optimal probabilistic RNA-seq quantification. Nat Biotechnol. 2016; 34(5): 525–527. PubMed Abstract | Publisher Full Text\n\nBroad Institute: Lightweight html5 version of the Integrative Genomics Viewer [Online]. 2014; [Accessed: 26 September 2016]. Reference Source\n\nKent WJ, Sugnet CW, Furey TS, et al.: The human genome browser at UCSC. Genome Res. 2002; 12(6): 996–1006. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKodama Y, Shumway M, Leinonen R, et al.: The Sequence Read Archive: explosive growth of sequencing data. Nucleic Acids Res. 2012; 40(Database issue): D54–6. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNational Center for Biotechnology Information: Genome Data Viewer [Online]. 2016; [Accessed: 26 September 2016]. Reference Source\n\nNational Center for Biotechnology Information: SRA Handbook [Online]. 2011; [Accessed: 26 September 2016]. Reference Source\n\nNational Center for Biotechnology Information Sequence Read Archive Run Browser [Online]: GSM999527: DSN-lite; Homo sapiens; RNA-Seq (SRR562645). [Accessed: 14 February 2017]. Reference Source\n\nNational Center for Biotechnology Information: Using the SRA Toolkit to convert .sra files into other formats. In SRA Knowledge Base. 2011; [Accessed: 26 September 2016]. Reference Source\n\nNational Center for Biotechnology Information: Homo sapiens Annotation Release 105 [Online]. 2013; [Accessed: 26 September 2016]. Reference Source\n\nNational Center for Biotechnology Information: Homo sapiens Annotation Release 107 [Online]. 2015; [Accessed: 26 September 2016]. Reference Source\n\nParker CC, Gopalakrishnan S, Carbonetto P, et al.: Genome-wide association study of behavioral, physiological and gene expression traits in outbred CFW mice. Nat Genet. 2016; 48(8): 919–926. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nPatro R, Duggal G, Kingsford C: Salmon provides accurate, fast, and bias-aware transcript expression estimates using dual-phase inference. BioRxiv. 2015. Publisher Full Text\n\nSquare Inc: Crossfilter [Online]. 2012; [Accessed: 26 September 2016]. Reference Source\n\nWagner GP, Kin K, Lynch VJ: Measurement of mRNA abundance using RNA-seq data: RPKM measure is inconsistent among samples. Theory Biosci. 2012; 131(4): 281–285. PubMed Abstract | Publisher Full Text\n\nWeitz EM: Ideogram [Online]. 2015; [Accessed: 26 September 2016]. Reference Source\n\nWeitz EM, Pantano L, Zhu J, et al.: NCBI-Hackathons/rnaseqview 1.1. Zenodo. 2017. Data Source"
}
|
[
{
"id": "22368",
"date": "08 May 2017",
"name": "Chase A. Miller",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe RNA-Seq Viewer tool is well motivated and clearly described in this article. RNA-Seq Viewer does a good job of combining data from multiple public sources into a single coherent visualization and interface. There is a great need for more tools like this that make use of the huge amounts of public genomic data available.\n\nI found an online example of RNA-Seq Viewer here. It would be very useful if this link was included in the Abstract so that potential users can quickly try out the tool.\n\nAlthough beyond the scope of a 3-day hackathon, in the future it would be valuable to turn this tool into a fully hosted web app so that no download or command line knowledge would be required.\n\nIs the rationale for developing the new software tool clearly explained? Yes\n\nIs the description of the software tool technically sound? Yes\n\nAre sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? Yes\n\nIs sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? Yes\n\nAre the conclusions about the tool and its performance adequately supported by the findings presented in the article? Yes",
"responses": []
},
{
"id": "25864",
"date": "18 Sep 2017",
"name": "Christopher J. Fields",
"expertise": [
"Reviewer Expertise Computational biology",
"genomics"
],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe authors have succeeded in integrating multiple disparate software resources into a useful and well-motivated tool, RNA-Seq Viewer, that was largely put together within a three-day hackathon, which is even more impressive and speaks to the strengths of hackathons in general, particularly when there is a clear motivation and goal in mind that play to the strengths of everyone involved. I'm particularly happy to see that NCBI is more actively engaging the open-source community though organization of workshops and events such as this.\nOne key item: I couldn't find an online example, if there is one available this would be very useful as a live demo and would be particularly useful in garnering feedback, including possible directions for future development.\nI should note: there are already a few 'RNA-Seq Viewer' tools out there, not sure if you would need to change the name (if so please no horrible backronyms):\nhttps://www.ncbi.nlm.nih.gov/pmc/articles/PMC5320319/ http://bioinfo.au.tsinghua.edu.cn/software/RNAseqViewer/\n\nIs the rationale for developing the new software tool clearly explained? Yes\n\nIs the description of the software tool technically sound? Yes\n\nAre sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? Yes\n\nIs sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? 
Yes\n\nAre the conclusions about the tool and its performance adequately supported by the findings presented in the article? Yes",
"responses": []
},
{
"id": "25863",
"date": "20 Sep 2017",
"name": "Philip Ewels",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis manuscript does a great job of describing RNA-Seq Viewer, a tool to visualise genome-wide co-ordinate based clustering of gene expression in a sample. It's a neat project and great to see such a well polished product coming out of a hackathon. Such a responsive tool is impressive, and the user interface is simple to use.\nMinor concerns:\nThe abstract says that RNA-Seq Viewer is a \"web application\". However, to use it users are required to run a series of command line tools to prepare data and then edit a HTML file before getting to the web page. So whilst the tool certainly uses web technologies, I would not say that it's a fully fledged (eg. online only) web application yet. A minor change in wording would be sufficient to clear this up.\n\nThe abstract says that \"The backend data pipeline was developed and deployed on a shared AWS EC2 instance.\" - however, this seems to be the only mention of AWS in the manuscript or repository. If the authors mean that they deployed it for a one-off run, I think it's a little misleading (my assumption was that it is running as a service on AWS for anyone to use).\nAdditional documentation as to how users can use AWS to run the tool would also be useful.\n\nThere are example reports in the GitHub repository, but they're not mentioned anywhere that I can see. 
It would be nice if the readme clearly pointed towards these in the introduction with links using http://rawgit.com so that they can be loaded directly.\n\nOther than this, I think that the manuscript fairly describes the project. I'd love to see the additions that the authors propose at the end (support for multiple samples and use with a wider range of input data) and hope that the manuscript may get a future revision with such additional features!\nI see that another reviewer mentions the generic name - I agree that 'RNAideogram' or something else may be a little more specific and useful!\n\nIs the rationale for developing the new software tool clearly explained? Yes\n\nIs the description of the software tool technically sound? Yes\n\nAre sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? Yes\n\nIs sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? Yes\n\nAre the conclusions about the tool and its performance adequately supported by the findings presented in the article? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-596
|
https://f1000research.com/articles/6-595/v1
|
28 Apr 17
|
{
"type": "Research Article",
"title": "Gene length and detection bias in single cell RNA sequencing protocols",
"authors": [
"Belinda Phipson",
"Luke Zappia",
"Alicia Oshlack",
"Luke Zappia"
],
"abstract": "Background: Single cell RNA sequencing (scRNA-seq) has rapidly gained popularity for profiling transcriptomes of hundreds to thousands of single cells. This technology has led to the discovery of novel cell types and revealed insights into the development of complex tissues. However, many technical challenges need to be overcome during data generation. Due to minute amounts of starting material, samples undergo extensive amplification, increasing technical variability. A solution for mitigating amplification biases is to include unique molecular identifiers (UMIs), which tag individual molecules. Transcript abundances are then estimated from the number of unique UMIs aligning to a specific gene, with PCR duplicates resulting in copies of the UMI not included in expression estimates. Methods: Here we investigate the effect of gene length bias in scRNA-Seq across a variety of datasets that differ in terms of capture technology, library preparation, cell types and species. Results: We find that scRNA-seq datasets that have been sequenced using a full-length transcript protocol exhibit gene length bias akin to bulk RNA-seq data. Specifically, shorter genes tend to have lower counts and a higher rate of dropout. In contrast, protocols that include UMIs do not exhibit gene length bias, with a mostly uniform rate of dropout across genes of varying length. Across four different scRNA-Seq datasets profiling mouse embryonic stem cells (mESCs), we found the subset of genes that are only detected in the UMI datasets tended to be shorter, while the subset of genes detected only in the full-length datasets tended to be longer. Conclusions: We find that the choice of scRNA-seq protocol influences the detection rate of genes, and that full-length datasets exhibit gene-length bias. 
In addition, despite clear differences between UMI and full-length transcript data, we illustrate that full-length and UMI data can be combined to reveal the underlying biology influencing expression of mESCs.",
"keywords": [
"single cell RNA sequencing",
"unique molecular identifiers",
"gene length bias",
"gene detection rate",
"differential expression"
],
"content": "Introduction\n\nSingle cell RNA-Seq (scRNA-Seq) has rapidly gained popularity as the primary tool to profile gene expression of hundreds to thousands of single cells. This new technology enables researchers to examine transcription at the resolution of a single cell in a high-throughput manner, and has led to the discovery of novel cell types and revealed insights into the development of complex tissues as well as differentiation lineages. With the promise of novel discoveries, this new technology has been embraced by the scientific community.\n\nMany technical challenges need to be overcome during data generation, and technology for performing scRNA-Seq is advancing at a rapid rate. The original Fluidigm C1 system has a 96-well plate, which limits how many single cells researchers can practically handle in an experiment. However, depth of sequencing is only limited by cost, with a sequencing depth of around 2 million reads per cell recommended (Tung et al., 2016). Droplet based technology, such as InDrop (Klein et al., 2015), Drop-Seq (Macosko et al., 2015) and the more recent Chromium system from 10X Genomics (Zheng et al., 2016), are cost effective methods to obtain relatively shallow sequencing of thousands to tens of thousands of single cells in one run. Lower sequencing depth limits the complexity of the expression profile attained per cell, as only the most highly expressed genes will be observed, however, it may be the case that researchers combine deeper sequencing of fewer single cells with shallow sequencing of tens of thousands of cells to answer their scientific questions of interest.\n\nNot only are there different technologies for capturing single cells, there are also differences in library preparation protocols, which aim to amplify and process the minute amounts of RNA from each cell. 
Most RNA-Seq library preparation protocols include enrichment of mRNA by either polyA pulldown or ribosomal depletion, followed by fragmentation and PCR amplification before sequencing. The extensive PCR amplification that is required for scRNA-Seq increases technical variability in the data by introducing amplification biases (Stegle et al., 2015). A solution for mitigating amplification biases is to include Unique Molecular Identifiers (UMIs), which are short (5–10bp) sequences ligated onto the 5’ end of the molecule prior to PCR amplification (Islam et al., 2014). Transcript abundances are then estimated from the number of reads with unique UMIs aligning to a specific gene. PCR duplicates resulting in copies of the UMI are therefore not included in expression estimates. While some protocols, such as those used with Fluidigm C1 (e.g SMARTer), need to be modified to include UMIs (Tung et al., 2016), some droplet based methods, for example the Chromium system (Zheng et al., 2016), always include UMIs in the chemistry. It is worth noting that, while mechanisms such as alternative splicing can be studied using full-length transcript protocols, this type of analysis is not possible with data generated with protocols that include UMIs.\n\nGene length bias is well understood in bulk RNA-seq data. When cDNAs are fragmented, long genes result in more fragments for the same number of transcripts, resulting in higher counts and more power to detect differential expression (Oshlack & Wakefield, 2009). As a result gene set testing is biased towards gene ontology categories containing longer genes (Young et al., 2010). 
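The fragmentation effect described above (Oshlack & Wakefield, 2009) can be illustrated with a toy calculation (a hedged sketch only; the function and every number here are invented for illustration, not the authors' code):

```python
# Toy model of why longer genes get more fragments (and hence counts) in
# fragmentation-based protocols at equal expression. All values invented.
def expected_fragments(transcripts, gene_length_bp, fragment_length_bp=200):
    # Each transcript yields roughly one fragment per fragment_length_bp of sequence.
    return transcripts * gene_length_bp / fragment_length_bp

short_gene = expected_fragments(transcripts=100, gene_length_bp=1_000)
long_gene = expected_fragments(transcripts=100, gene_length_bp=4_000)

# Same number of transcripts, but the 4x longer gene yields 4x the fragments,
# giving it higher counts and more power to detect differential expression.
ratio = long_gene / short_gene
```

This is the length bias that dividing counts by gene length (RPKM) is meant to undo in bulk data.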
While there is much in common between scRNA-Seq and bulk RNA-Seq data, modifications to the protocols, such as amplification and the inclusion of UMIs, may highlight different biases in the data.\n\nHere we investigate the effect of gene length bias in scRNA-Seq across a variety of datasets that differ in terms of capture technology, library preparation, cell types and species. As hypothesised, we find that scRNA-seq datasets that have been sequenced using a full-length transcript protocol exhibit gene length bias akin to bulk RNA-seq data. Specifically, shorter genes tend to have lower counts and a higher rate of dropout. In contrast, protocols that include UMIs do not exhibit gene length bias. UMI protocols reveal that shorter genes are as highly expressed as longer genes, and dropout is mostly uniform across genes of varying length. These effects mean that different protocols have the ability to detect a different subset of genes, with shorter genes detected more readily using UMI protocols and longer genes detected by full-length protocols.\n\n\nMethods\n\nWe processed three datasets through our pipeline developed for full-length data:\n\nMouse embryonic stem cells (Kolodziejczyk et al., 2015);\n\nHuman cerebral organoid cells (Camp et al., 2015);\n\nMouse embryonic stem cells (Buettner et al., 2015).\n\nThe quality of the raw sequencing reads was examined using FastQC (v0.11.4). They were checked for contamination by aligning a sample of reads to multiple reference genomes using FastQ Screen (v0.6.4). Reads were aligned to the appropriate reference using STAR (v2.5.2a) (Dobin et al., 2013). For the mouse dataset, we used the mm10 version of the genome, using the chromFa.tar.gz file on http://hgdownload.soe.ucsc.edu/goldenPath/mm10/bigZips, and for the human datasets we used the hg38 version of the genome, using the hg38.chromFa.tar.gz file on http://hgdownload.cse.ucsc.edu/goldenPath/hg38/bigZips. 
Reads were summarised across genes using featureCounts (v1.5.0-p3) (Liao et al., 2014), with GENCODE M9 annotation for mouse and GENCODE V22 annotation for human datasets. This pipeline was constructed in Bpipe (v0.9.9.3) (Sadedin et al., 2012), and a report summarising the steps produced using MultiQC (v0.8) (Ewels et al., 2016).\n\nOur gene filtering strategy was identical between datasets. Genes that had more than 90% zeroes across all cells, as well as ribosomal and mitochondrial genes, were filtered out. Genes that could not be annotated with an Entrez Gene ID were also removed in order to retain a set of well curated genes. Gene length information was taken as the sum of the exon lengths as outputted by the featureCounts software for the mm10 GENCODE VM4 annotation for all mouse datasets, and for all human datasets, we used the sum of exon lengths as outputted by featureCounts for the hg38 GENCODE V22 annotation. Genes that could not be annotated with gene length information were filtered out. We found that using these criteria helped reduce some of the variability in the datasets.\n\nDetails of all datasets analysed in this study are listed in Supplementary Table 1.\n\nMouse embryonic stem cells, Kolodziejczyk et al., 2015, full-length. We downloaded the raw data from the ArrayExpress database under accession number E-MTAB-2600 and ran our full-length processing pipeline using the mm10 mouse genome to produce a counts matrix. We performed quality control on the cells and removed cells that had a dropout rate of greater than 80% and a library size of fewer than half a million. We calculated the proportion of sequencing reads taken up by the ERCC spike-ins and discarded three plates that had proportions of ERCC spike-ins that appeared excessive compared to the remaining plates. We performed gene filtering as described above. 
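The gene filtering strategy described above (genes with more than 90% zeroes across cells removed, and genes without length annotation removed) might be sketched like this (a minimal toy illustration with an invented matrix; the ribosomal/mitochondrial and Entrez ID steps are omitted, and the paper's actual scripts are R scripts on the Oshlack GitHub page):

```python
import numpy as np

# Invented genes-by-cells count matrix for illustration only.
counts = np.array([
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 1],   # exactly 90% zeroes -> kept (not *more* than 90%)
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],   # 100% zeroes -> filtered out
    [5, 3, 0, 2, 8, 0, 1, 4, 6, 2],   # well detected -> kept
])
gene_lengths = np.array([1200.0, np.nan, 900.0])  # NaN = no length annotation

dropout = (counts == 0).mean(axis=1)              # proportion of zeroes per gene
keep = (dropout <= 0.9) & ~np.isnan(gene_lengths)
filtered_counts = counts[keep]
```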
After cell and gene filtering, we were left with 530 cells and 12395 genes for further analysis.\n\nHuman primordial germ cells, Guo et al., 2015, full-length. We downloaded the processed data from Conquer. The data had been pseudo-aligned to the latest human reference genome, hg38, using the Salmon software tool, v0.6.0 (Patro et al., 2017). The data is also available under the GEO accession number GSE63818. There did not appear to be any spike-in controls for this dataset, hence filtering was performed on the dropout rate and total sequencing depth for each cell. Cells with more than 85% dropout and fewer than half a million sequencing reads were filtered out. After cell and gene filtering, there were 226 cells and 15837 genes for further analysis.\n\nHuman cerebral organoid cells, Camp et al., 2015, full-length. We downloaded the data from SRA under accession SRP066834, and ran our full-length processing pipeline to produce a counts matrix, using the hg38 human genome. We removed cells that had greater than 90% dropout, library size smaller than half a million as well as cells that had more than 20% of the sequencing taken up by ERCC controls. After cell and gene filtering, we had 494 cells and 11325 genes for further analysis.\n\nMouse embryonic stem cells, Grün et al., 2014, UMI. We downloaded the processed data from GEO under accession number GSE54695. The data was aligned to the mm10 mouse genome using BWA and transcript number estimated from UMI counts by the authors. We removed cells that had > 80% dropout, library size smaller than 10000, as well as cells that had more than 5% of the sequencing taken up by ERCC controls. After cell and gene filtering, there were 127 cells and 9962 genes for further analysis.\n\nHuman induced pluripotent stem cells, Tung et al., 2016, UMI. We downloaded the processed molecule counts and sample information from the authors’ Github repository (https://github.com/jdblischak/singleCellSeq). 
The data was aligned by the authors to the human genome hg19 using the Subjunc aligner (Liao et al., 2013). The data is also available under GEO accession GSE77288. We removed cells that had > 70% dropout, fewer than 30000 sequencing reads per cell, as well as cells that had more than 3% of the sequencing taken up by ERCC spike-ins. After cell and gene filtering, we had 671 cells and 11971 genes for further analysis.\n\nHuman K562 cells (lymphoblastoma culture), Klein et al., 2015, UMI. The processed molecule count data was downloaded from GEO under accession GSM1599500. The data was aligned to the hg19 human genome using Bowtie v0.12.0 (Langmead et al., 2009). Cells that had > 85% dropout, fewer than 10000 total sequencing reads, or an ERCC library size to total library size ratio > 0.01 were filtered out. After cell and gene filtering, we had 219 cells and 13418 genes for further analysis.\n\nMouse embryonic stem cells, Ziegenhain et al., 2016, UMI. We downloaded the molecule counts from GEO under accession GSE75790. The SCRB-Seq protocol, a 3’ digital gene expression RNA-Seq protocol, (Soumillon et al., 2014), was used to generate the libraries. The data was processed by the authors through a dropseq pipeline, which included alignment to the mm10 mouse genome using STAR v2.4.0 (Dobin et al., 2013). The cells all appeared good quality hence cell filtering wasn’t necessary. After gene filtering, we had 84 cells and 10519 genes for further analysis.\n\nMouse embryonic stem cells, Buettner et al., 2015, full-length. We downloaded the data from the European Nucleotide Archive, under accession PRJEB6989, and ran the data through our full-length pipeline, mapping to the mm10 mouse genome to produce a counts matrix. We filtered out cells with > 85% dropout and sequencing depth less than a million. 
After cell and gene filtering, we had 271 cells and 11700 genes for further analysis.\n\nWe combined the four different mouse embryonic stem cell datasets using the following approach. We performed gene and cell filtering on each dataset independently, and combined the datasets by taking the genes commonly detected across all four datasets (8678 genes, 1012 cells, each gene is detected in at least 10% of the cells for each dataset). This strategy ensured that the genes were detected in all four datasets, and hence larger datasets did not dominate gene filtering. It also ensured that the larger datasets did not dominate the principal components analysis.\n\nAll statistical analysis was performed in R-3.3.1, using the limma (Ritchie et al., 2015), edgeR (Robinson et al., 2010), scran (Lun et al., 2016) and scater (McCarthy et al., 2016) Bioconductor packages (Gentleman et al., 2004). The UMI dataset was normalised using scran prior to differential expression analysis, as it clearly showed composition bias. Differential expression analysis in the mESCs was performed using edgeR, specifying a log-fold-change cut-off of 1 for the full-length dataset, and 0.5 for the UMI dataset. GO analysis was performed with hypergeometric tests using the goana function in the Bioconductor R package limma (Ritchie et al., 2015). All scripts for analysing the datasets are available on the Oshlack lab Github page (https://github.com/Oshlack/GeneLengthBias-scRNASeq).\n\n\nResults\n\nInitially, we analysed three different datasets generated using full-length transcript protocols: mouse embryonic stem cells (Kolodziejczyk et al., 2015), human primordial germ cells (Guo et al., 2015) and human brain whole organoids (Camp et al., 2015). For a full list of the datasets analysed see Supplementary Table 1. 
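The dataset-combination strategy described above (keep only genes detected, with counts in at least 10% of cells, in every dataset, then pool the cells) could be sketched as follows (an illustrative pandas sketch with invented toy tables; the actual analysis was performed in R):

```python
import pandas as pd

# Invented toy genes-by-cells tables standing in for two of the four mESC datasets.
ds1 = pd.DataFrame({"cellA": [3, 1, 0], "cellB": [1, 0, 0]},
                   index=["gene1", "gene2", "gene3"])
ds2 = pd.DataFrame({"cellC": [4, 0], "cellD": [2, 1]},
                   index=["gene1", "gene2"])
datasets = [ds1, ds2]

def detected(ds, min_frac=0.10):
    # Genes with a count in at least min_frac of this dataset's cells.
    return set(ds.index[(ds > 0).mean(axis=1) >= min_frac])

# Intersect detected genes across datasets, then concatenate the cells,
# so larger datasets do not dominate gene filtering.
common = sorted(set.intersection(*(detected(d) for d in datasets)))
combined = pd.concat([d.loc[common] for d in datasets], axis=1)
```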
Quality control of the single cells was performed and problematic cells filtered out (see methods), leaving 530 mouse embryonic stem cells, 226 human primordial germ cells and 494 human brain organoid cells. For each gene, the average log-counts, normalised for sequencing depth, and the proportion of zeroes across the cells (i.e. the dropout rate per gene) were calculated. Gene-wise abundances were estimated for all datasets by dividing the gene-level counts by gene length to obtain reads per kilobase per million (RPKM). In order to assess gene length bias, genes were assigned to 10 bins based on gene length, such that each bin had roughly 1000 genes. The results are summarised in the boxplots in Figure 1.\n\nThree different datasets were analysed: (a–c) mouse embryonic stem cells, n=530 (Kolodziejczyk et al., 2015), (d–f) human primordial germ cells, n=226 (Guo et al., 2015), (g–i) human brain whole organoids, n=494 (Camp et al., 2015). For all plots (a–i), the x-axis shows 10 gene length bins all containing roughly equal numbers of genes. The left panel shows gene-wise average log counts, the middle panel shows proportion of zeroes in each gene (dropout rate per gene), and the right panel shows average log counts corrected for gene length (RPKM).\n\nFor all three full-length protocol datasets, shorter genes have lower count-level expression compared to longer genes, with a clear trend of increasing log-counts as gene length increases (Figure 1a, d, g). This was accompanied by a decreasing trend in the dropout rate per gene as gene length increased, highlighting the fact that shorter genes are more difficult to detect using full-length protocols (Figure 1b, e, h). These trends are stronger for the human PGCs and human brain organoid datasets, while not as severe for the mouse ESCs. 
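The per-gene summaries plotted in Figure 1 (average log-counts, dropout rate per gene, RPKM, and ten roughly equal-size gene length bins) can be sketched in Python as follows (an illustrative translation with simulated data; the paper's own analysis used R/Bioconductor, and every name and value here is invented):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n_genes, n_cells = 100, 20
counts = rng.poisson(2.0, size=(n_genes, n_cells))   # simulated genes x cells matrix
gene_lengths = np.arange(1, n_genes + 1) * 100       # simulated lengths: 100..10000 bp

lib_sizes = counts.sum(axis=0)                       # reads per cell
cpm = counts / lib_sizes * 1e6                       # normalised for sequencing depth
avg_log_count = np.log2(cpm + 1).mean(axis=1)        # per-gene average log-counts
dropout_rate = (counts == 0).mean(axis=1)            # proportion of zeroes per gene
rpkm = cpm / (gene_lengths[:, None] / 1_000)         # corrected for gene length

# 10 gene length bins with roughly equal numbers of genes each.
length_bin = pd.qcut(gene_lengths, q=10, labels=False)
dropout_by_bin = pd.Series(dropout_rate).groupby(length_bin).median()
```

Plotting `avg_log_count`, `dropout_rate` and `log2(rpkm + 1)` per `length_bin` reproduces the layout of the three panels per dataset.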
Calculating transcript abundance by dividing gene-level counts by gene length mostly removed the gene length bias for the human PGCs and brain organoid datasets (Figure 1f, i), however for the mouse ESCs calculating RPKMs appeared to induce a trend with gene length such that shorter genes appeared more highly expressed relative to the longer genes (Figure 1c).\n\nWe hypothesised that because UMI protocols tag each transcript molecule separately we would not see a similar gene length bias in these protocols. In order to assess gene length bias in scRNA-Seq datasets with included UMIs, we analysed three different datasets: mouse embryonic stem cells generated using a CEL-Seq protocol (Grün et al., 2014; Hashimshony et al., 2012), human induced pluripotent stem cells generated using a modified SMARTer protocol with the Fluidigm C1 system (Tung et al., 2016) and human leukemia cell line K562 cells using the CEL-Seq protocol with InDrop (Klein et al., 2015). After quality control and filtering of problematic cells, 127 single cells remained for the mouse embryonic stem cells, 671 for human induced pluripotent stem cells and 219 human K562 cells.\n\nWe found that for the human iPSCs and human K562 datasets, the average log-counts were fairly uniform across the 10 gene length bins, and for the mouse ESCs, the shorter genes appear to be more highly expressed than the longer genes (Figure 2a, d, g). Comparing medians, the dropout rate per gene is slightly lower for shorter genes in the mouse ESCs, while for the human iPSCs and K562 cells, the dropout is fairly uniform across the gene length bins, although slightly more variable for the shortest genes (Figure 2b, e, h). 
However, calculating RPKMs by dividing by gene length induces a clear trend with gene length where shorter genes appear to be more highly expressed relative to longer genes, with the median log RPKM decreasing with increasing gene length (Figure 2c, f, i).\n\nThree different datasets were analysed: (a–c) mouse embryonic stem cells n=127 (Grün et al., 2014), (d–f) human induced pluripotent stem cells n=671 (Tung et al., 2016), and (g–i) human leukemia cell line K562 cells, n=219 (Klein et al., 2015). For all plots (a–i), the x-axis shows 10 gene length bins all containing roughly equal numbers of genes. The left panel shows gene-wise average log counts, the middle panel shows proportion of zeroes in each gene (dropout rate per gene), and the right panel shows average log expression corrected for gene length (RPKM).\n\nTo ensure the gene length bias is not due to the specific biology of the different cell types, we analysed four different mouse embryonic stem cell datasets generated using both UMI and full-length transcript protocols (Buettner et al., 2015; Grün et al., 2014; Kolodziejczyk et al., 2015; Ziegenhain et al., 2016). When we combined all four datasets together (see methods) and performed principal components analysis, we noted that the cells clustered by dataset, with the UMI datasets on the left and full-length datasets on the right of the plot (Figure 3a). Interestingly, in principal components two and three, we saw some biological structure in the datasets emerging, with cells grown in different media clustering together (Figure 3b). In particular, three different datasets (two full-length, one UMI), grown in standard media with 2i inhibitors all cluster together on the left of the plot. 
This shows great promise for obtaining biologically interesting results from combining multiple datasets generated in separate labs using different technology.\n\nFour different mouse embryonic stem cell datasets were combined, two full-length transcript (Buettner et al., 2015; Kolodziejczyk et al., 2015) and two UMI datasets (Grün et al., 2014; Ziegenhain et al., 2016). (a) Principal component analysis plot (coloured by dataset) shows the major source of variation between the cells is the dataset, with the UMI datasets on the left and the full-length datasets on the right. (b) Examining principal components two and three reveals that the next major source of variation in the data is the media in which cells are grown. In particular three datasets (two full-length and one UMI) which have cells grown in standard media with 2i inhibitors all cluster together on the left. J1, Rex1 and G4 refer to the mESC cell line. The Ziegenhain dataset has single cells profiled in two batches. (c–d) Gene length bias is present in full-length mESC datasets; dotted grey line is the median log-count in the first gene length bin. (e–f) Gene length bias is absent in UMI mESC datasets; dotted grey line is the median log-count in the first gene length bin.\n\nIn terms of the gene length bias across the multiple datasets, it is clear that data generated from full length protocols exhibit gene length bias, with shorter genes having lower average log-counts compared to longer genes (Figure 3c, d). This is not as pronounced compared to other full-length datasets (Figure 1d, g), however compared to the UMI mESC datasets it is quite noticeable. 
For the UMI datasets, the gene length bias is mostly uniform across the gene length bins; however, the shortest genes in the first bin appear to have slightly higher average log-counts and are more variable compared to the longer genes (Figure 3e, f).\n\nIn order to investigate whether choice of protocol impacts which genes are detected, we compared genes detected in both UMI mESC datasets to genes detected in both full-length mESC datasets. Across all datasets, 13434 genes were detected in at least one of the four datasets. Across both UMI datasets, 8866 genes were detected with counts in at least 10% of the cells for each dataset. For the full-length datasets, 11328 genes were detected using the same criteria. The full-length datasets had much greater sequencing depth (median ~3 million reads, Supplementary Table 1) and more cells compared to the UMI datasets (median ~33,000 reads, Supplementary Table 1), hence it is unsurprising that more genes are detected across both full-length datasets. However, there were 188 genes detected in the UMI datasets that were not detected in the full-length datasets (Figure 4a). The genes unique to the UMI datasets tended to be shorter compared to the gene lengths of the 2644 genes uniquely detected in the full-length datasets (Figure 4b, p-value=0.000297, Wilcoxon Rank Sum Test). The genes uniquely detected in either the full-length or UMI datasets tended to be lowly expressed, hence more difficult to detect in general (Supplementary Figure 1).\n\n(a) A Venn diagram comparing the number of genes detected in two UMI mESC datasets, with the number detected in the two full-length datasets. We find that while the majority of genes are detected in all datasets (n=8689), there are genes that are uniquely detected when using either a full-length or UMI protocol. (b) Density plots of gene length for the subsets of genes corresponding to the Venn diagram in (a). 
The uniquely detected genes for the UMI datasets (blue line) tend to be shorter than the uniquely detected genes in the full-length datasets (red line), p=0.000297. (c) A Venn diagram showing the number of enriched GO categories in the 188 genes unique to UMIs and the 2649 genes unique to the full-length protocols. This reveals that these genes interrogate different biology, with only 3 GO categories in common. (d) Density plots of average gene length for each GO category corresponding to the significantly enriched GO categories in (c). We assigned each GO category an average length by calculating the median of the lengths of all genes annotated to each GO category. While there is not a significant shift in location in the density plots we noted a much greater spread of median length in the enriched GO categories for the uniquely detected UMI genes, largely driven by the presence of GO categories that tend to have very short genes.\n\nComparing differential expression between two media (2i inhibitors versus serum) in one UMI dataset (Grün et al., 2014), revealed that 31% (59/188) of the uniquely detected genes were defined as significantly differentially expressed (total differentially expressed = 1641/9962, 16%). For a similar comparison in a full length dataset (2i inhibitors versus serum, Kolodziejczyk et al., 2015), 20% (531/2644) of the uniquely detected genes in full length datasets were significantly differentially expressed (total differentially expressed = 1653/12395, 13%). This highlights that protocol choice may impact ability to detect differential expression of some genes.\n\nExamining which GO terms are over-represented for the 188 genes unique to the UMI dataset revealed that categories such as neural crest cell migration, negative regulation of megakaryocyte differentiation and stem cell development were among the 26 statistically significantly enriched categories (Supplementary Table 2). 
There were 4/26 GO categories with extremely short average gene length (<1000, median gene length across all GO categories = 4039), with the top two GO categories, “nucleosome” and “DNA packaging complex”, having median gene length in GO categories = 614, 706. However, there were also statistically significant categories composed of longer than average genes (13/26 categories with median length > 4039), indicating that pathways enriched for the unique UMI genes were not heavily biased towards categories only containing short genes.\n\nFor the full-length datasets, the GO categories that were significantly enriched (n=111) were different to those pathways enriched for the unique UMI genes, with only 3 GO categories overlapping (Figure 4c, Supplementary Table 3). GO categories such as those involved in plasma membrane, cell signalling, and ion and cation channel activity were over-represented for the 2649 unique genes. While there were no significantly enriched GO categories that had extremely small average gene length (<1000), 14% (16/111) had median gene length < 2632 (the 5th percentile of median gene length across the GO categories). There was one statistically significant GO category with extremely large average gene length (> 10,000). Although there was no significant shift in median gene length of GO categories between the UMI and full-length GO categories, we noted that the variation in median GO length for the uniquely detected UMI genes was 3.5 times greater than for the uniquely detected full-length genes, largely driven by prevalence of very small sets (Figure 4d, p-value = 5.6x10^-6, F-test).\n\n\nDiscussion\n\nWhile single cell RNA-sequencing technology is advancing at a rapid rate and novel discoveries are being made, the datasets being generated have many technical biases. Here, we have investigated the role that gene length plays in protocols that include UMIs as well as full-length transcript protocols. 
Unsurprisingly, we find that for full-length protocols, genes that tend to be shorter have lower counts and a higher rate of dropout, while UMI based protocols have a more even distribution of dropout across genes of varying length. In addition, a UMI protocol is more likely to detect lowly expressed genes that are shorter compared to a full-length protocol, where lowly expressed genes that are longer are easier to detect (Supplementary figure 1). Of course, UMI protocols are unable to provide information on transcript structure such as which isoforms are expressed in a sample, and only provide overall gene level expression measures. Since UMI counts are already molecule counts, expression levels should be expressed as normalised counts (e.g. counts per million) rather than dividing by gene length to obtain RPKMs, as this latter measure will artificially inflate the expression estimates of shorter genes relative to longer genes.\n\nWhile datasets generated using a UMI based protocol tend to have much lower sequencing depths, and hence lower counts, we found that in mESCs we were still able to detect uniquely expressed genes in the UMI datasets that were not detected in full-length datasets. However, a larger set of genes were detected in the mESC full-length datasets. Performing GO analysis on genes uniquely detected by each protocol revealed that they interrogate different biology, and hence the choice of protocol may affect which pathways can be studied. In particular, the genes unique to either the UMI or the full-length datasets appeared to be biologically relevant, as a subset were found to be significantly differentially expressed when comparing cells grown in two different media.\n\nWe combined four different datasets generated from mESCs that had strikingly different sequencing depths and protocols. Despite these differences, we found that we were able to recover biologically relevant structure. 
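The normalisation point above, that UMI molecule counts should be depth-normalised (e.g. counts per million) rather than divided by gene length, can be made concrete with a toy example (all values invented for illustration):

```python
import numpy as np

# Invented UMI molecule counts (genes x cells); genes 0 and 1 have identical counts.
umi_counts = np.array([[10, 5],
                       [10, 5],
                       [ 2, 1]])
gene_lengths = np.array([500, 8_000, 2_000])     # bp; gene 0 is 16x shorter than gene 1

lib_sizes = umi_counts.sum(axis=0)
cpm = umi_counts / lib_sizes * 1e6               # appropriate: molecules per million
per_kb = cpm / (gene_lengths[:, None] / 1_000)   # RPKM-style division, not recommended

# CPM treats the two identical genes identically, but dividing by length makes
# the short gene appear 16x more highly expressed despite equal molecule counts.
```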
In particular, three different datasets (two full-length, one UMI), grown in standard media with 2i inhibitors, all cluster together when examining higher principal components. Although promising, the greatest source of variation between the cells was the dataset they belonged to, highlighting the known issues with large batch effects in scRNA-seq (Tung et al., 2016; Hicks et al., 2015). Hence, analysis methods including data cleaning and normalisation are crucial when combining datasets in order to extract biologically meaningful relationships.\n\n\nData and software availability\n\nLatest source code for scripts used to analyse the datasets:\n\nhttps://github.com/Oshlack/GeneLengthBias-scRNASeq\n\nInformation on the repositories and accession numbers of all datasets used in this study:\n\nMouse embryonic stem cells, Kolodziejczyk et al., 2015, full-length: ArrayExpress database under accession number E-MTAB-2600.\n\nHuman primordial germ cells, Guo et al., 2015, full-length: GEO under accession number GSE63818\n\nHuman cerebral organoid cells, Camp et al., 2015, full-length: SRA under accession number SRP066834\n\nMouse embryonic stem cells, Grün et al., 2014, UMI: GEO under accession number GSE54695\n\nHuman induced pluripotent stem cells, Tung et al., 2016, UMI: author’s GitHub repository, https://github.com/jdblischak/singleCellSeq.\n\nHuman K562 cells (lymphoblastoma culture), Klein et al., 2015, UMI: GEO under accession number GSM1599500\n\nMouse embryonic stem cells, Ziegenhain et al., 2016, UMI: GEO under accession number GSE75790\n\nMouse embryonic stem cells, Buettner et al., 2015, full-length: European Nucleotide Archive under accession PRJEB6989",
"appendix": "Author contributions\n\nBP and AO conceived the study. BP performed all statistical analysis. LZ downloaded and processed the full-length datasets. BP prepared the first draft of the manuscript. All authors contributed to writing and editing the manuscript.\n\n\nCompeting interests\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nLuke Zappia is supported through an Australian Government Research Training Program Scholarship. Alicia Oshlack is supported through a National Health and Medical Research Council Career Development Fellowship APP1126157.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nSupplementary material\n\nSupplementary Figure 1: Average log counts for detected genes in UMI and full-length transcript protocols. The average log counts tend to be much lower for UMI datasets compared to the full-length datasets. The genes uniquely detected for each protocol tend to be lowly expressed, hence more difficult to detect.\n\nSupplementary Table 1: Details of the datasets analysed in the paper.\n\nSupplementary Table 2: Enrichment of GO categories for the 188 genes uniquely detected in the UMI mESC datasets.\n\nSupplementary Table 3: Enrichment of GO categories for the 2649 genes uniquely detected in the full-length mESC datasets.\n\n\nReferences\n\nBuettner F, Natarajan KN, Casale FP, et al.: Computational analysis of cell-to-cell heterogeneity in single-cell RNA-sequencing data reveals hidden subpopulations of cells. Nat Biotechnol. 2015; 33(2): 155–60.\n\nCamp JG, Badsha F, Florio M, et al.: Human cerebral organoids recapitulate gene expression programs of fetal neocortex development. Proc Natl Acad Sci U S A. 2015; 112(51): 15672–7. 
Dobin A, Davis CA, Schlesinger F, et al.: STAR: ultrafast universal RNA-seq aligner. Bioinformatics. 2013; 29(1): 15–21.\n\nEwels P, Magnusson M, Lundin S, et al.: MultiQC: summarize analysis results for multiple tools and samples in a single report. Bioinformatics. 2016; 32(19): 3047–3048.\n\nGentleman RC, Carey VJ, Bates DM, et al.: Bioconductor: open software development for computational biology and bioinformatics. Genome Biol. 2004; 5(10): R80.\n\nGrün D, Kester L, van Oudenaarden A: Validation of noise models for single-cell transcriptomics. Nat Methods. 2014; 11(6): 637–640.\n\nGuo F, Yan L, Guo H, et al.: The Transcriptome and DNA Methylome Landscapes of Human Primordial Germ Cells. Cell. 2015; 161(6): 1437–1452.\n\nHashimshony T, Wagner F, Sher N, et al.: CEL-Seq: Single-Cell RNA-Seq by Multiplexed Linear Amplification. Cell Rep. 2012; 2(3): 666–673.\n\nHicks SC, Teng M, Irizarry RA: On the widespread and critical impact of systematic bias and batch effects in single-cell RNA-Seq data. bioRxiv. 2015.\n\nIslam S, Zeisel A, Joost S, et al.: Quantitative single-cell RNA-seq with unique molecular identifiers. Nat Methods. 2014; 11(2): 163–166.\n\nKlein AM, Mazutis L, Akartuna I, et al.: Droplet Barcoding for Single-Cell Transcriptomics Applied to Embryonic Stem Cells. Cell. 2015; 161(5): 1187–1201.\n\nKolodziejczyk AA, Kim JK, Tsang JC, et al.: Single Cell RNA-Sequencing of Pluripotent States Unlocks Modular Transcriptional Variation. Cell Stem Cell. 2015; 17(4): 471–485. 
Langmead B, Trapnell C, Pop M, et al.: Ultrafast and memory-efficient alignment of short DNA sequences to the human genome. Genome Biol. 2009; 10(3): R25.\n\nLiao Y, Smyth GK, Shi W: featureCounts: an efficient general purpose program for assigning sequence reads to genomic features. Bioinformatics. 2014; 30(7): 923–30.\n\nLiao Y, Smyth GK, Shi W: The Subread aligner: fast, accurate and scalable read mapping by seed-and-vote. Nucleic Acids Res. 2013; 41(10): e108.\n\nLun AT, Bach K, Marioni JC: Pooling across cells to normalize single-cell RNA sequencing data with many zero counts. Genome Biol. 2016; 17(1): 75.\n\nMacosko EZ, Basu A, Satija R, et al.: Highly Parallel Genome-wide Expression Profiling of Individual Cells Using Nanoliter Droplets. Cell. 2015; 161(5): 1202–1214.\n\nMcCarthy DJ, Campbell KR, Lun AT, et al.: scater: pre-processing, quality control, normalisation and visualisation of single-cell RNA-seq data in R. bioRxiv. 2016.\n\nOshlack A, Wakefield MJ: Transcript length bias in RNA-seq data confounds systems biology. Biol Direct. 2009; 4: 14.\n\nPatro R, Duggal G, Love MI, et al.: Salmon provides fast and bias-aware quantification of transcript expression. Nat Methods. 2017; 14(4): 417–419.\n\nRitchie ME, Phipson B, Wu D, et al.: limma powers differential expression analyses for RNA-sequencing and microarray studies. Nucleic Acids Res. 2015; 43(7): e47. 
Robinson MD, McCarthy DJ, Smyth GK: edgeR: a Bioconductor package for differential expression analysis of digital gene expression data. Bioinformatics. 2010; 26(1): 139–140.\n\nSadedin SP, Pope B, Oshlack A: Bpipe: a tool for running and managing bioinformatics pipelines. Bioinformatics. 2012; 28(11): 1525–1526.\n\nSoumillon M, Cacchiarelli D, Semrau S, et al.: Characterization of directed differentiation by high-throughput single-cell RNA-Seq. bioRxiv. 2014.\n\nStegle O, Teichmann SA, Marioni JC: Computational and analytical challenges in single-cell transcriptomics. Nat Rev Genet. 2015; 16(3): 133–145.\n\nTung PY, Blischak JD, Hsiao C, et al.: Batch effects and the effective design of single-cell gene expression studies. bioRxiv. 2016; 62919.\n\nYoung MD, Wakefield MJ, Smyth GK, et al.: Gene ontology analysis for RNA-seq: accounting for selection bias. Genome Biol. 2010; 11(2): R14.\n\nZheng GX, Terry JM, Belgrader P, et al.: Massively parallel digital transcriptional profiling of single cells. bioRxiv. 2016.\n\nZiegenhain C, Vieth B, Parekh S, et al.: Comparative analysis of single-cell RNA sequencing methods. bioRxiv. 2016."
}
|
[
{
"id": "22376",
"date": "10 May 2017",
"name": "Charlotte Soneson",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis is a nice evaluation of the extent of gene length and detection bias in single-cell RNA-seq data sets generated with different types of protocols. Overall, it is clearly written and the results are well presented and agree with expectations. All analysis code is available in a GitHub repository. An additional step towards full reproducibility would be to also make the processed data objects and additional scripts, not present in the GitHub repository, accessible.\n\nOtherwise, my main comment concerns the representation of gene abundances, specifically the calculation of RPKMs by dividing the library size-normalized gene counts by the \"exon-union\" length of the gene. Without information about which isoform is contributing to the expression of a given gene, this length may be far from the true number of base pairs \"contributing\" to the observed reads. An alternative approach would be to aggregate isoform-level TPM estimates (from methods like Salmon1, RSEM2 or kallisto3) to the gene level, and I am wondering whether that would affect the conclusions. 
Similarly, it could be interesting to investigate whether suggested alternatives to actual or expected read counts, such as \"scaled TPMs\"4 or census counts5, would mitigate the observed gene length bias.\n\nIn a couple of places, I think that the manuscript would benefit from some clarifications:\nIn the last lines of the \"Gene filtering\" paragraph, it is mentioned that genes that could not be annotated with gene length information were filtered out. How many genes are affected by this, and in what way can they be assigned reads (i.e., correspond to well-defined genomic regions) but not a length?\n\nFrom the \"Processing of all datasets\" paragraph, it is not completely clear whether cells are filtered out only if they have both more than (e.g.) 85% dropout and fewer than (e.g.) 500,000 reads, or if one of these criteria alone is enough. It is also not fully clear from the text whether cell filtering or gene filtering was performed first (e.g., the \"Gene filtering\" paragraph mentions \"all cells\", but in the following paragraph and in the code it seems that the cell filtering was performed first).\n\nOn what values was the principal component analysis applied? Could you expand a bit more on how the data set merging strategy ensures that the larger datasets do not dominate the PCA (they still make up a larger part of the final dataset)?\n\nIn the \"Statistical analysis\" paragraph, how was the UMI data set normalized with scran? 
Was there an actual normalization step, or a calculation of normalization factors used later in the analysis?\n\nFor the four mouse mESC data sets, it might be useful to provide a table listing the conditions (=colors in Figure 3b) that were included in each of them, since it is a bit difficult to discern all color/symbol combinations in Figure 3b.\n\nThe numbers in Figure 4a and b do not match (the numbers in Figure 4b match those given in the text, while those in Figure 4a match the figure legend).\n\nAre the two densities in Figure 4d generated with the same kernel width? If not, the differences may be visually exaggerated.\n\nFor the preprocessing of the Guo et al. data set, the pseudo-alignment with Salmon was done to the reference transcriptome rather than the genome.\n\nFinally, for the gene set analysis, in addition to the observation that there are some gene sets with short median gene lengths that are among the most enriched in the \"UMI-specific\" genes, it might be interesting to see whether these gene sets were in fact top-ranked because of the short genes contained in them, or if it was the longer genes in these gene sets that were the significant ones.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Partly\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
},
{
"id": "22438",
"date": "10 May 2017",
"name": "Samuel W. Lukowski",
"expertise": [
"Reviewer Expertise Single-cell technologies",
"regulation of mammalian gene expression",
"computational biology"
],
"suggestion": "Approved",
"report": "Approved\n\nGeneral comments This is a well-written and concise study that reveals some very interesting results. Firstly, pertaining to the enrichment of biological processes in data processed using different protocols, the fact that the genes detected in full-length transcript and 3' transcript-end protocols show markedly different specificity for enriched pathways is intriguing. With 3' being highly specific and biologically relevant, compared to the generic pathways identified for full-length data, it raises the possibility that a much higher resolution of biological function and greater classification accuracy might be attained if full-length transcript data was re-analyzed as 3' transcript-end. Secondly, correct normalization methods for UMI data are shown here to be of critical importance for accurate analysis.\nAlso, many thanks to the authors for making their analysis code available on GitHub.\nSummary In this manuscript, the authors asked whether single-cell RNA-seq data would be biased by choice of protocol. Specifically, they looked at the difference between data generated using full-length transcript protocols compared to those generated using 3' transcript end-only protocols that incorporate unique molecular identifiers. 
To do this, they used publicly available scRNA-seq datasets from mouse and human.\nTheir conclusion is that full-length transcript methods exhibit gene length bias, such that short genes have fewer mapped reads than longer genes, which translates to lower transcript counts and a higher dropout rate. Conversely, UMI-based methods do not suffer from either of these effects. They also demonstrated that a combination of both methods can enhance the biological interpretation of the scRNA-seq data.\nComments for the authors:\nFor each of the datasets that were pre-processed (not raw data), it is possible that the different reference genomes (hg19, complete GRCh38, transcriptome-only GRCh38) and the use of different software packages could create artifacts that affect data analysis, particularly if the mapping software was an old version. I note that, with respect to pre-processed data, five different aligners were used. It is clear that all alignment packages have their strengths/weaknesses, especially if they haven't been updated regularly. Could the differences between (i) these packages, and (ii) the different references, contribute in any way to the results obtained in this study?\n\nRelated to question 1, would an isoform/splice junction-aware aligner yield different results compared to those that aren't designed for that type of mapping? Would you expect a difference in the full-length data sets that were processed with the transcriptome-only reference (Guo 20151) compared to the complete hg38 genome reference (Camp 20152)?\n\nEach pre-processed dataset was filtered using slightly different parameters. How did the authors establish the dropout percentage threshold for removing cells (none, 70, 80, 85)? How were the library size and sequencing read thresholds determined for each sample? It's not clear to me why these should all be different. Is it to maximize the cell numbers on a per-sample basis? 
Have the authors tried using the same threshold for all pre-processed samples as for the in-house filtering (90%), or applying the other thresholds to raw data?\n\nGene ontology analysis is widely used and can provide insight into biological functions. I wonder whether the authors also considered using more specific databases such as Reactome or KEGG that can highlight enriched pathways that are not detected by GO analysis. These may show less disparity than GO terms.\n\nMinor point: Throughout the text and in Figure 3 and 4, UMI is capitalised, but in Fig 2 it is shown in lower case in the plot titles.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
},
{
"id": "22371",
"date": "17 May 2017",
"name": "Wolfgang Huber",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThe paper presents a very useful investigation of the dependence of detection efficiency in single cell RNA sequencing on (a) gene length and (b) certain choices in the experimental protocol, namely, shotgun sequencing of full transcripts versus transcript end sequencing such as in methods that employ UMIs. Overall the article is well written, clear, and likely to be useful for practitioners of experimental design and data analysis in the field.\n\nI have a few points that a revised version might address:\n\nThe term ‘dropout’ is used in many places, but not properly defined, neither mathematically nor biophysically. At some point in the middle of the manuscript the authors seem to imply that they use ‘dropout’ as a synonym to ‘occurrence of a zero count’ in the data. What is the rationale behind giving the name ‘dropout’ to such an event? What is dropping, and out of what? I understand that some colleagues use this term to point to high probabilities of seeing a zero count for (low abundant?) genes due to the sparse sampling, but I wonder whether (or in which datasets, protocols) this is really something that is more ominous than what is trivially implied by Poisson or Gamma-Poisson statistics, and if so, whether only 0s are ominous or also 1s, 2s, …? 
Given that this is a paper by statisticians on detection biases it would be great to see a more careful treatment of this aspect of the data.\n\nWhy are the parameter choices in Section “Processing of all datasets” (for fraction of dropouts and number of reads) so different between the different datasets? There seems to be a potential for the introduction of biases or artefacts in the computed statistics (of Figs. 1 and 2) through choices made here, and it would be good to demonstrate that such biases, if any, are inconsequential.\n\nIn Figs. 1 and 2, how are the ‘average log counts’ computed for data that contain a lot of zeros? The logarithm is not defined for 0. And whatever is the answer to this question, how did the authors make sure that it introduces no biases/artefacts that affect the shown trends? In particular, in conjunction with the filtering steps mentioned above in Point 2?\n\nIn Figs. 1 and 2, how is the set of genes selected that enter the calculation of ‘Proportion of zeros in each gene’? Again, how can we be sure that the choices made in the filtering do not affect the conclusions made here?\n\nIt is recommendable that the scripts are provided in a github repository. I wonder whether the authors would be willing to go the full length and upload the scripts to a repository that also does regular “live” testing of the scripts for functionality (e.g. installation, dependencies, versions, data availability), such as Bioconductor or CRAN.\n\nOn p.5., the authors report differing trends for human PGCs and human brain organoids, compared to mouse ESCs. Do they imply that this is a biological observation, and if so, what does it mean? Or could there be confounding with experimental circumstances? (In which case the effect would perhaps better be reported in association with that than with the names of biological conditions).\n\nIn the Discussion and on p.5, results from applying RPKM to UMI-based data are reported. 
Perhaps the point could be strengthened that already for very basic theoretical reasons this is a nonsensical thing to do. Finding this also empirically is nice, but perhaps it can be said that this confirms basic reasoning rather than being ‘news’.\n\nMinor:\nOn p.1, a wording is used that implies that datasets are being sequenced. But nucleotides are sequenced, and datasets are produced.\n\nI think the term “pseudo-aligned / pseudo-alignment” is ugly, and “mapped / mapping” is better and more widely used in the field.\n\nOn the bottom right of p.4, the term “log-fold change cut-off of 1” is unclear. Which base? Also, do you perhaps mean absolute logarithmic fold change?\n\nThe boxplots in Figs. 1 and 2 are a bit dull. Use of geom_hex with aes(x=rank(genelength)) in ggplot2 could present an alternative.\n\nIn the caption of Fig.1, ambiguity in the term ‘log counts corrected by gene length’ could be avoided by more explicit mathematical terminology (e.g. corrected = divided?)\n\nDiscussion: the conclusion that the choice of protocol may affect which pathways can be studied is a bit wild, and probably also not helpful if not translated into concrete advice to readers for how to address it when doing their experimental designs.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
},
{
"id": "22437",
"date": "19 May 2017",
"name": "Sam Buckberry",
"expertise": [
"Reviewer Expertise Molecular Biology and Bioinformatics"
],
"suggestion": "Approved",
"report": "Approved\n\nPhipson, Zappia and Oshlack present evidence against the existence of systematic gene length bias in single cell RNA sequencing experiments that use unique molecular identifiers (UMI). In contrast, methods that measure read counts across full length transcripts appear similar to bulk RNA sequencing methods in that they are biased against short transcripts. Although these results are somewhat unexpected by those working in the field, the thorough analysis presented by Phipson et al. will be a valuable reference to those wishing to design single cell RNA-seq experiments. The article is written in a clear and accessible manner, and it is also nice to see all the analysis code has been made available. However, there are a number of minor issues with the paper in its current form that we think should be addressed.\nOf particular note, the paper appears to conflate UMI methods with 3' counting methods. We see this as incorrect as i) long-read sequencing technology may allow profiling of full-length transcripts while incorporating UMIs, and ii) 3' counting methods can be used without UMIs. The effect of 3' counting on gene length bias could be separated from the effect of using UMIs by ignoring UMIs in a 3' counting experiment and testing to see if substantial gene length bias exists. Our guess is that it would not, due to the simple fact that the effective gene length is approximately equal for all genes when you measure only the last ~300 bp. 
Therefore, for the examination of gene length bias, it seems to us that the emphasis should be on 3' counting and not UMIs. Of course, not using UMIs would introduce substantial PCR amplification bias, but this is a separate issue to that being addressed by the paper.\nMinor comments:\nIntroduction: \"...technology enables researchers to examine transcription at the resolution of a single cell...\" -- The technology measures mRNA abundance, not transcription itself. (paragraph 1)\n\"...alternative splicing... analysis is not possible with data generated with protocols that include UMIs.\" -- it is possible that long-read technologies (eg. Pacbio or Oxford Nanopore) could be coupled with UMI tagged cDNA generated using drop-seq methods before cDNA fragmentation to capture full-length transcripts. (paragraph 3)\nIt may be beneficial to include supplemental table 1 in the main text.\nProcessing of all datasets: Why are different cutoffs used for filtering out cells between experiments? eg. 80% dropout for Kolodziejczyk, 85% dropout for Guo, 90% for Camp, 70% for Tung, 85% Klein. Similar with the library size cutoff and percent ERCC cutoff.\nIn the Klein methods section, ERCC percentage is reported as >0.01 total library size rather than the percentage. For readability it may be better to have consistent style throughout the manuscript (eg. percent total for everything).\nFor Ziegenhain methods, it's stated that all cells appeared high quality and so weren't filtered. What constitutes high quality, and how was this assessed? As the count matrix was used in this case, were the cells pre-filtered by the original authors?\nStatistical analysis: \"UMI dataset was normalized using scran ...as it clearly showed composition bias.\" What method was used for normalization, and what exactly is meant by 'compositional bias' and how was this assessed? 
We believe the scran package depends on scater for implementation of its normalization methods.\nWhy use different fold change parameters for UMI and full-length methods? Also, a log (is this log2?) fold change of 1 is 0 fold change. Furthermore, how were the log transformed values calculated for datasets with many zeros?\nFigure 1: More informative axis labels, eg. \"Average normalized read counts (log2 scale)\" rather than \"AvgLogCounts\" would increase readability.\n\nPlease note that the Tung et al. paper is now published in Scientific Reports and the Ziegenhain paper is published in Molecular Cell.\nFigure 4: The comparison of the number of genes detected by UMI vs full-length methods is somewhat confounded by the differing sequencing depth between the methods. This is stated in the text, but a better comparison could be made by sub-sampling reads from the experiments to equivalent numbers of reads per cell.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-595
|
https://f1000research.com/articles/6-170/v1
|
21 Feb 17
|
{
"type": "Research Note",
"title": "Open Online Courses in Public Health: experience from Peoples-uni",
"authors": [
"Richard F. Heller",
"Robert Zurynski",
"Alan Barrett",
"Omo Oaiya",
"Rajan Madhok"
],
"abstract": "Open Online Courses (OOCs) are offered by Peoples-uni at http://ooc.peoples-uni.org to complement the courses run on a separate site for academic credit at http://courses.peoples-uni.org. They provide a wide range of online learning resources beyond those usually found in credit bearing Public Health courses. They are self-paced, and students can enrol themselves at any time and utilise Open Educational Resources free of copyright restrictions. In the two years that courses have been running, 1174 students from 100 countries have registered, and among the 1597 enrolments in 15 courses, 15% gained a certificate of completion. Easily accessible and appealing to a wide geographical and professional audience, OOCs have the potential to play a part in establishing global Public Health capacity building programmes.",
"keywords": [
"online learning",
"developing country",
"Public Health",
"open online courses"
],
"content": "Introduction\n\nPeoples-uni was developed with the mission “To contribute to improvements in the health of populations in low- to middle-income countries by building Public Health capacity via e-learning at very low cost” (http://www.peoples-uni.org/content/overall-objectives)1,2. From 2008, formal courses have been run online, and it has been possible for students to gain academic credit towards a Master of Public Health award. A small fee is charged, and an army of volunteer tutors facilitates online discussions and sets and marks assignments – 1256 people from 80 countries have enrolled, 464 have passed at least one module and 111 have graduated with a Master of Public Health to date. In 2014, a sister site was established for free Open Online Courses (OOCs) (http://ooc.peoples-uni.org), with the aim of extending the offerings, reaching a wider audience and contributing further to global health and health system strengthening. This educational innovation is also designed to contribute to leadership development through lifelong learning among health professionals. While there are similarities with Massive Open Online Courses (MOOCs), there are a number of differences, including resources to be read rather than video recorded lectures. This report summarises the experience so far.\n\n\nMethods\n\nA suite of courses was developed and placed on the Moodle open source educational platform at http://ooc.peoples-uni.org. Access is by self-enrolment with nomination of a username and password for future use. All resources are Open Educational Resources, free of copyright restrictions. A common format is used with learning objectives, links to or copies of key parts of online resources, and metadata to direct students through the resources. 
Rather than online discussions facilitated by tutors (as in the Peoples-uni academic stream and many MOOCs), questions on the content and implications of the resources are posed for students to reflect upon, and forums are enabled for students to post these reflections for other students to see. Quizzes were developed to test the knowledge gained, and a certificate of completion is automatically generated if various criteria are met such as accessing resources, completing the quizzes, posting to a forum or providing feedback. There is no specified timetable and students pace themselves through the course. The courses were developed by Peoples-uni volunteer academics and IT support staff, with input and review from various experts to ensure relevance of the course material. One of the courses reported here was developed using e-learning course materials from the University of Nottingham (Basic Epidemiology), and others were developed in response to requests from external organisations, including the UK Global Health Exchange (http://www.globalhealthexchange.co.uk), to provide basic Public Health knowledge to health professionals planning to volunteer overseas. Courses are published under a Creative Commons Attribution 4.0 International License.\n\nInformation about the courses was offered to students and graduates of the Peoples-uni academic stream and was posted on various social media sites. Two courses were provided to participants planning to travel overseas through the Global Health Exchange, and in one case information was distributed to deans of Australian and New Zealand medical schools to encourage medical students to learn about the Public Health implications of climate change.\n\nWe report here the first two years of experience with the Peoples-uni Open Online Courses, including information from the questions asked on registration about student demographics and how they planned to access the courses. 
Formal feedback was generally not required, although some courses provided the option for feedback, and we report some of these comments. No ethical approval was required for publication of de-identified student demographics.\n\nData on student demographics at registration were obtained by SQL enquiries using the configurable report facility in Moodle. Data on whether the student had obtained a certificate were obtained by an SQL query against the Moodle database, supplemented by course data obtained from the course databases in Moodle. Descriptive analysis of frequency counts was performed using the R statistical package.\n\n\nResults\n\nThe data reported here relate to those who enrolled as students on 15 self-paced courses up to December 2016, with variable start dates from June 2014. 1174 students registered, from 100 countries. Some students enrolled in more than one course; in total, we report on 1597 course enrolments. Table 1 shows the number of students per course, the number who gained a certificate of completion, and the criteria for a certificate. Courses have been added over the years, so the time period over which students can enrol varies. The criteria for the award of a certificate can be seen to vary, and although the overall percentage of students that were awarded certificates was 15%, there was some small variation between courses. Seven students each enrolled in 9 courses or more – they were responsible for 49 (20%) of the 243 certificates gained.\n\nThe criteria for a certificate are also shown.\n\n* : Offered to participants in the Global Health Exchange programme\n\n** : Later subdivisions of the Public Health for the GHE course\n\n*** : Information about launch of this module sent to deans of Australian and NZ medical schools\n\n**** : Previously offered as timetabled courses with online facilitated discussions\n\nAs part of the enrolment process, students were asked a number of questions. 
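The paper's tallies were produced with SQL queries against the Moodle database and frequency counts in R; as a minimal illustrative sketch of the same kind of descriptive analysis (the course names and records below are invented, not the study's data):

```python
from collections import Counter

# Hypothetical per-enrolment records as might be exported from the course
# database: (course name, whether a completion certificate was gained).
enrolments = [
    ("Basic Epidemiology", True),
    ("Basic Epidemiology", False),
    ("Climate Change", False),
    ("Climate Change", True),
    ("Climate Change", False),
]

# Frequency count of enrolments per course.
per_course = Counter(course for course, _ in enrolments)

# Overall completion percentage across all enrolments,
# mirroring the 15% figure reported in the paper.
certificates = sum(1 for _, certified in enrolments if certified)
completion_pct = 100 * certificates / len(enrolments)

print(per_course["Climate Change"])  # 3 enrolments in this toy dataset
print(completion_pct)                # 40.0
```

The same grouping-and-counting logic applies however the records are sourced (SQL export, CSV, or R data frame).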
The responses are shown in Table 2 and Table 3. Table 2 shows that the largest single group of students came from Africa. Students were evenly distributed between males and females, were mostly born between 1970 and 1989, and 58% were health professionals while 25% were students. Table 3 shows that the majority of students came on the recommendation of someone else and that this would be their first experience of online learning (63% and 68% of those responding, respectively). Only 20% found the site by an internet search. While the majority would be able to spend only up to 2 hours a week on the course, 21% of those responding claimed to be able to spend 4 hours or more per week. The majority of those responding, 83%, planned to access the courses by computer rather than phone or tablet.\n\n* : Includes access recommended by the Global Health Exchange\n\nSome comments from feedback forms are shown in Box 1. The responses were generally positive, although suggestions for improvement were made. Four additional courses were developed in partnership with other organisations, and were offered with a timetable of expert tutors facilitating online discussions. 127 students enrolled in these, of whom 18 (14%) gained a certificate. Two of the courses were later adapted as self-paced versions, and appear in the course list in Table 1 in their later iterations.\n\n“The idea was great. Its an easy to learn method faster and quite informative.”\n\n“very interesting and fruitful courses”\n\n“Thank you for the course. It is a broad overview of may different areas in medical ethics.”\n\n“Really enjoyed the course - very interesting”\n\n“Good outline and overview of selected topics”\n\n“overall is a good course”\n\n“It is a very good course, and I am very happy with the results.”\n\n“The course is well structured.”\n\n“I have taken previous courses in these modules offered online and this one seems a bit too hands-off for a course. 
It reads more like a manual that certainly attracts interested people but does not provide overly a learning experience. I presume that mini quizzes, crossroads-exercises that block advancement unless completed and the like would create a teaching scenario better.”\n\n“this course has been very very rewarding. it has enlightening my knowledge knowing that the Public Health is essential in line with the rights of everyone involved. I believed that more emphasis be made on low-income countries like mine (Liberia).”\n\n“Great course - really enjoyed it”\n\n“i am working in Central African Republic and I am an immunization specialist. I am working with …... and I think that after this course, it mandotory for me to make sure that evrychldren in the refugees camp get his polio vaccines correctly.”\n\n“what I like most about this course was the simple break down in the course delivery, and the in which all lessons were well structured, also a clear explanation of every terminology,”\n\n“The course was good, very basic and a good introduction.”\n\n“short and informative for a basic introduction to the difference aspects of public health”\n\n“Its very educative”\n\n\nDiscussion\n\nOur experience demonstrates that a volunteer-led organisation can develop and offer OOCs which are accessible by a global audience. A wide range of topics have been covered, beyond those usually found in award courses in Public Health, and more courses have since been posted on the site further to those included in this report, whilst others are under development.\n\nStudents were equally spread between genders, mostly aged around 25–40, and included a high proportion from developing countries. Certificates were gained by 15% of participants, and there were no obvious differences in course characteristics that explained the small variation in proportion of participants gaining these certificates between courses. 
The major predictor of gaining a certificate among those we examined was the number of courses taken by a student, with just 7 students gaining 20% of the certificates.\n\nThe qualitative feedback reported here is selective and may well not be representative of the general experience of the students; however, the majority were positive about their experiences. We are utilising both the positive feedback and constructive suggestions to work to improve the course experience.\n\nThe format for the Peoples-uni Open Online Courses differs from that of MOOCs in a number of ways, although the basic methodology of online learning remains the same. The courses we report here contain mainly written content with hyperlinks to the resources, rather than the ‘talking head’ videos which are the staple of MOOCs (although this reliance on video lectures has been criticised3). This allows us to utilise Open Educational Resources (OER)4 and access excellent educational material instead of having to develop it anew. In contrast to the usual MOOCs, students can enrol at any time, there is no specified timetable and students pace themselves through the course. Forums are available, but designed for reflection rather than discussion, and a certificate of completion is available according to various criteria such as taking a quiz and downloading resources (see Table 1). Our model excludes interaction between students and tutors, but allows greater flexibility in timing and access to education.\n\nMOOCs have been offered by many educational organisations. The majority of their students are from North America or Europe, an experience common to most5. The Johns Hopkins School of Public Health has a long history of open access education, and they report experience with a number of MOOCs6. The School reports a median completion rate of 11%6, consistent with 12.6% reported by Jordan7 and higher than the Coursera experience of 4%5. 
To date, we have had approximately half of our students from developing countries. Our certification rate of 15% is consistent with the MOOC experience, although comparisons are limited by differences in course length, complexity, audiences and topics.\n\nMOOCs have been subdivided into xMOOCs, based on traditional university courses but without teacher-student interactions, and cMOOCs, where collectives of teachers and learners work together to explore content8. There are a number of other described variants, of which Self Paced Open Courses (SPOCs) are most closely related to the Peoples-uni type of course9.\n\nBased on our experience, it would appear that the Peoples-uni type of programme has a place on the educational spectrum. We see OOCs as being a major component of a modern framework for public health capacity building through global learning. The approach responds to current worldwide pressures in public health and workforce development to use low-cost models based on online learning, international volunteer tutors, teaching throughout career progression, and providing timely and appropriate content.\n\nWe have also offered this platform to other providers and, in keeping with the social enterprise model of Peoples-uni, have developed courses for other organisations and their audiences. There are currently more than 20 courses available on our site; we welcome others who wish to utilise this platform in collaboration.\n\n\nConclusions\n\nOpen Online Courses, offered by Peoples-uni on http://ooc.peoples-uni.org to complement the courses run on a separate site for academic credit on http://courses.peoples-uni.org, provide a wide range of online learning beyond that usually found in credit-bearing Public Health courses. Accessible to a wide geographical and professional audience, and providing a certificate to those who persist in the learning process, they complement MOOCs in being available for self-paced learning at any time. 
They have the potential to play a part in establishing global Public Health capacity building programmes.\n\n\nData availability\n\nDataset 1: De-identified data collected showing numbers of students at Peoples-uni enrolled in each course, and the number of students who gained a certificate, from June 2014 to December 2016. These data were used to create Table 1.\n\nDOI, 10.5256/f1000research.10728.d15176310\n\nDataset 2: De-identified data collected on student demographics at Peoples-uni from June 2014 to December 2016. These data were used to create Table 2 and Table 3.\n\nDOI, 10.5256/f1000research.10728.d15176411",
"appendix": "Author contributions\n\n\n\nRFH wrote and edited the manuscript, RZ and AB performed the data analysis, RM and OO provided intellectual input and reviewed and edited the manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nAcknowledgements\n\nWe thank Associate Professor Jane Heller for statistical help.\n\n\nReferences\n\nHeller RF, Chongsuvivatwong V, Hailegeorgios S, et al.: Capacity-building for public health: http://peoples-uni.org. Bull World Health Organ. 2007; 85(12): 930–4. PubMed Abstract | Free Full Text\n\nHeller RF: Experience with a “social model” of capacity building: the Peoples-uni. Hum Resour Health. 2009; 7: 43. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMIT Media Lab: Why there are so many video lectures in online learning, and why there probably shouldn’t be. 2015. Reference Source\n\nUNESCO: What are Open Educational Resources (OERs)? Reference Source\n\nZhenghao C, Alcorn B, Christiensen G, et al.: Who’s Benefiting from MOOCs, and Why. Harvard Bus Rev. 2015. Reference Source\n\nGooding I, Klass B, Yager JD, et al.: Massive open online courses in public health. Front Public Health. 2013; 1: 59. PubMed Abstract | Publisher Full Text | Free Full Text\n\nJordan K: Massive open online course completion rates revisited: Assessment, length and attrition. Int Rev Res Open Distrib Learn. 2015; 16: 3. Publisher Full Text\n\nBates T: Comparing xMOOCs and cMOOCs: philosophy and practice. Reference Source\n\nDavidson C: MOOC, SPOC, DOCC, Massive Online Face2Face Open . . . (Uh Oh!): Age of the Acronym. Reference Source\n\nHeller RF, Zurynski R, Barrett A, et al.: Dataset 1 in: Open Online Courses in Public Health: experience from Peoples-uni. F1000Research. 2017. 
Data Source\n\nHeller RF, Zurynski R, Barrett A, et al.: Dataset 2 in: Open Online Courses in Public Health: experience from Peoples-uni. F1000Research. 2017. Data Source"
}
|
[
{
"id": "21124",
"date": "20 Mar 2017",
"name": "Jane-frances Obiageli Agbu",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis study on \"Open Online Courses in Public Health: experience from Peoples-uni\" is very insightful as it shares findings of a unique Public health course offered by Peoples-uni.\n\nObservations Grammatical expression was a bit poor. I suggest paper should be reviewed by an English expert for better clarity. Flowery languages in the text should be discouraged (eg...\"an army of volunteer tutors\", \"a sister site\", \"a suite of courses was developed\" etc.\nResult: This statement is not clear \"Some students enrolled in more than one course and we report on enrolments in 1597 courses\" Is this referring to student population or number of courses enrolled in?\nFurthermore, please take note of typographical errors (eg, OOCs instead on MOOCs).",
"responses": [
{
"c_id": "2676",
"date": "28 Apr 2017",
"name": "Richard F Heller",
"role": "Author Response",
"response": "We have made changes to clarify some of the wording and language."
}
]
},
{
"id": "21231",
"date": "05 Apr 2017",
"name": "Michael Rowe",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe institution seems like it is doing wonderful work and represents an interesting approach to open online courses. The courses offered by the institution seem useful and provide a learning opportunity for participants with an emphasis on those in developing countries.\n\nThe structure of the courses could probably be improved by using pedagogical principles and learning theory with respect to design, but I think that is probably true of most open online courses. It also does not seem to fall within the scope of this research note, and I mention it only because so much attention is paid by the authors to the course design.\n\nThis note represents what could possibly become a reasonable study of effectiveness of this innovative approach to open online courses, and I encourage the authors to build on this early work by designing a rigorous method and incorporating some analysis of the data.",
"responses": [
{
"c_id": "2675",
"date": "28 Apr 2017",
"name": "Richard F Heller",
"role": "Author Response",
"response": "Response: We have added a paragraph in the Discussion to discuss the pedagogy, and have added a sentence suggesting the need for a more rigorous evaluation to measure effectiveness. ."
}
]
},
{
"id": "22022",
"date": "20 Apr 2017",
"name": "Chris Zielinski",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nInteresting and useful paper. My quibbles are\n\nWith the numbers. In the abstract is says “1174 students from 100 countries have registered”. In the introduction, it says “1256 people, from 80 countries, have enrolled.” From the data presented (and the repetition in the Results paragraph) it appears that the first of these is correct – although I have some doubts given the round numbers (100, 80) of countries cited. In methods it merely states that “A suite of courses was developed” – it would be interesting to know by whom the suite of courses was developed and when. Some background would be welcome. The second paragraph in the Results has some lapses in language (“a number of questions were asked to the students”, “the majority of students came as a recommendation from someone else”. The whole paragraph should be redrafted. There is a typo in Box 1 in the longest quote – at the top of the column “but does not” should be “but does not”. There is some repetition throughout – for example, the second paragraph of the discussions repeats some of the second para of the results.",
"responses": [
{
"c_id": "2674",
"date": "28 Apr 2017",
"name": "Richard F Heller",
"role": "Author Response",
"response": "Response: The confusion has arisen due to the fact that the statement “1256 people, from 80 countries, have enrolled.”reflects enrolments in our courses for academic credit as part of the introduction to Peoples-uni, and are not results from the study we report in this paper. In order to try and clarify this issue, we have redrafted the section in the Introduction.We have also added a new first paragraph to the Methods section, and redrafted paragraph 2 in the Results section."
}
]
}
] | 1
|
https://f1000research.com/articles/6-170
|
https://f1000research.com/articles/6-589/v1
|
27 Apr 17
|
{
"type": "Research Article",
"title": "Epidemiology of adulthood drowning deaths in Bangladesh: Findings from a nationwide health and injury survey",
"authors": [
"Mohammad Jahangir Hossain",
"Animesh Biswas",
"Saidur Rahman Mashreky",
"Fazlur Rahman",
"Aminur Rahman",
"Animesh Biswas",
"Saidur Rahman Mashreky",
"Fazlur Rahman",
"Aminur Rahman"
],
"abstract": "Background: Annual global death due to drowning accounts for 372,000 lives, 90% of which occur in low and middle income countries. Life in Bangladesh exposes adults and children to may water bodies for daily household needs, and as a result drowning is common. In Bangladesh, due to lack of systemic data collection, drowning among adults is unknown; most research is focused on childhood drowning. The aim of the present study was to explore the epidemiology of adulthood drowning deaths in Bangladesh. Methodology: A nationwide cross-sectional survey was conducted from January to December in 2003 among 171,366 rural and urban households, with a sample of 819,429 individuals to determine the epidemiology of adulthood drowning in Bangladesh.\n\nResults: Annual fatal drowning incidence among adults was 5.85/100,000 individuals. Of these, 71.4% were male and 28.6% were female (RR 2.39). In total, 90% of the fatalities were from rural areas. Rural populations were also found to have a 8.58 times higher risk of drowning than those in urban areas. About 95% of drowning occurred in natural water bodies. About 61.6% of the deaths occurred at the scene followed by 33.5% at the home. Of the drowning fatalities, 67% took place in water bodies within 100 meters of the household. Among the drowning fatalities 78.4% occurred in daylight between 7.00 and 18.00. Over 97% of the victims were from poor socio economic conditions with a monthly income tk. 6,000 ($94) or less. Only 25.5% of incidences were reported to the police station. Conclusions: Every year a significant number of adults die due to drowning in Bangladesh. Populations living in rural areas, especially men, were the main victims of drowning. This survey finding might help policy makers and scientists to understand the drowning scenario among adults in Bangladesh.",
"keywords": [
"fatal",
"drowning",
"adult",
"Bangladesh."
],
"content": "Introduction\n\nDrowning is the process of experiencing respiratory impairment from submersion or immersion in liquid, and the outcomes are classified as death, morbidity and no morbidity1. Drowning is an important but neglected public health issue that affects children and youths in many societies worldwide2,3. Following road traffic and injury sustained from falls, drowning is the 3rd leading cause of injury death in the world, claiming 42 lives every hour and 372,000 lives a year, which is almost two thirds attributed to malnutrition and over half of malaria2. Of all drowning deaths more than 90% occur in low and middle income countries where individuals are exposed to water during daily life3–5. According to the WHO (2014), drowning contributes to 7% of all injury-related annual deaths worldwide6. South-East Asian countries are considered the most affected region with 2.49 million disability adjusted life years as a result of death and disability from drowning7.\n\nBangladesh is a low-lying, riverine country located in the subtropical region of South Asia and bordering with the Bay of Bengal. Its tropical monsoon climate is characterized by heavy rainfall and melting snow in the Himalayan territory, leading to large rivers, such as the Ganga, Brahmaputra and Meghna. The country has a landmass of 147,570 square kilometers and is one of the most densely inhabited countries in the world with a population of 160 million. Daily life in Bangladesh exposes people to water bodies, such as ponds, ditches, rivers, canals and the ocean, which are used for daily household needs, including agriculture, fishing and transportation. As a result, drowning effects all ages of the Bangladeshi population.\n\nMost research on drowning conducted in Bangladesh has focused on childhood drowning8–10. 
In Bangladesh, there is no established routine mortality registration system11, which, combined with the inadequacy of research12, means that drowning deaths among the adult population are unknown. To design appropriate preventive measures for reducing adult drowning, it is important to determine the nationwide burden of drowning. Drowning mostly occurs among rural populations8, so community-based household survey data are important. The objective of this study was to estimate fatal adult drowning in Bangladesh and its variation by sex, place of residence, and seasonality using a nationally representative survey.\n\n\nMethods\n\nData for this study were extracted from the Bangladesh Health and Injury Survey (BHIS), which was conducted between January and December 2003. The following methodology details how the survey data were collected.\n\nThis was a nationwide community-based cross-sectional study.\n\nThe study population was drawn from 12 randomly selected districts, namely Thakurgaon, Serajgonj, Sherpur, Narsinghdi, Hobigonj, Comilla, Shariatpur, Jessore, Khulna, Pirojpur, Chittagong and Rangamati. The study also covered Dhaka Metropolitan City of Bangladesh. In total, 819,429 individuals were covered in this nationwide study. Using a multi-stage cluster sampling technique, a total of 171,366 households were selected; 88,380 from rural areas, 45,183 from district towns and 37,803 from Dhaka Metropolitan City. There are several upazilas (sub-districts) in each district. Populations covered at the upazila level were considered rural. From each district one upazila was randomly selected. An upazila comprises a number of unions; the union is the lowest administrative unit of an upazila, with a population of about 20,000. From each upazila, two unions were selected randomly and each union was considered a cluster in this survey. All households in the selected unions were included in the survey. 
All 12 selected district headquarters and Dhaka Metropolitan City were considered urban areas. In the urban areas, the mohalla served as the cluster. The mohalla is the lowest administrative part of a city corporation. Each mohalla comprised about 400–500 households. A systematic sampling method was applied to achieve the required number of households.\n\nIndividuals aged 18 years and above who died by drowning were included as cases.\n\nForty-eight full-time data collectors were selected for the data collection and six supervisors were employed for the supervision and monitoring of the data collection process. All data were collected through face-to-face interviews. All selected data collectors and supervisors were trained in collecting data from individuals.\n\nDue to their availability at the household level, mothers were preferred as the primary respondents in this survey. However, if the mother was not available, the most knowledgeable members of the household were considered as respondents. Where possible, the head of household, and as many members of the household as possible, were present to corroborate or add detail to the respondent’s interview answers. For the identification of any mortality or morbidity cases in the household, screening forms were used. A household member was defined as someone living in the same house, including domestic helpers or long-term guests who shared daily meals and participated in regular activities within the household. For mortality information, respondents were asked about any deaths over the period of the last two years, and for morbidity information, respondents were asked about any illness that had occurred over the period of the last 6 months. If any illnesses/deaths were identified, the interviewer proceeded with further clarification regarding the injuries. Structured questionnaires were used to identify drowning deaths, and drowning-related data were extracted for further analysis. 
The distance between the household and the drowning site was determined by asking the respondent; if the site was near the household, the data collector estimated it visually. Repeat visits were made to households where respondents were unavailable during the first visit. In spite of repeated attempts, 2.7% of households could not be interviewed. A total of 166,766 households completed participation in the study.\n\nData related to drowning deaths were extracted from the main data set. As the recall period covered the last two years, only data from the last year were taken for analysis. Standard descriptive statistics were used to analyze the characteristics of adulthood drowning. Mean, standard deviation (SD), and proportion were used where appropriate. Drowning deaths were presented by gender, age, seasonality and place of residence. Age was categorized into seven groups (Figure 1). Rates were calculated with 95% confidence intervals (CI). Relative risk (RR) was calculated to compare drowning risks by age group, place of residence, and gender using the OpenEpi software (http://www.openepi.com/Menu/OE_Menu.htm). The methodology has been described elsewhere13–15.\n\n\nResults\n\nIn this nationwide cross-sectional survey, the annual incidence of drowning fatalities was found to be 5.85/100,000 (95% CI 4.14-8.14) in individuals aged 18 and over. Among the drowning fatalities, 71.40% were male and 28.60% were female. Males were found to be at 2.39 times higher risk than females (RR 2.39; 95% CI 1.04-5.49). Among the victims, 90% were from rural areas and 10% from urban areas. In addition, rural populations were found to be at an 8.58 times higher risk of drowning than individuals living in urban areas (RR 8.58; 95% CI 2.47-29.80). The mean age was 46.70 years (SD ± 21.90), ranging from 18 to 95 years. 
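The relative risks quoted in the paper (e.g. RR 2.39 for males versus females) follow the standard two-group calculation that OpenEpi implements; a minimal sketch of that formula, using invented counts rather than the survey's raw case numbers:

```python
import math

def relative_risk(cases1, n1, cases2, n2, z=1.96):
    """Relative risk of group 1 vs group 2 with a z-based confidence
    interval computed on the log scale (the textbook approximation used
    by tools such as OpenEpi). All counts here are illustrative."""
    rr = (cases1 / n1) / (cases2 / n2)
    # Standard error of ln(RR) for two independent binomial samples
    se = math.sqrt(1 / cases1 - 1 / n1 + 1 / cases2 - 1 / n2)
    lower = math.exp(math.log(rr) - z * se)
    upper = math.exp(math.log(rr) + z * se)
    return rr, lower, upper

# Toy counts: 20 deaths among 1000 males vs 10 deaths among 1200 females
rr, lo, hi = relative_risk(20, 1000, 10, 1200)
print(round(rr, 2))  # 2.4
```

A CI whose lower bound stays above 1.0, as in the survey's male-vs-female comparison, indicates the excess risk is unlikely to be due to chance alone.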
Populations aged over 60 years were found to be at 3.60 times higher risk of drowning than the combined population aged 18 to 60 years (RR 3.6; 95% CI 1.14 to 9.15) (Figure 1 and Table 1).\n\nAround 95% of the drownings occurred in natural water bodies, whereas only 5% of fatalities occurred in a place other than a natural water source. About 61.6% of the deaths occurred at the scene, followed by 33.5% at the home and 5% in hospital following rescue from the water.\n\nOf the drowning fatalities, 67% of the incidents took place in water bodies within 100 meters of the household and about 33% occurred in water bodies more than 100 meters from the household.\n\nAmong the drowning fatalities, 78.4% occurred in daylight between 07:00 and 18:00, and 21.5% occurred between 18:00 and 06:00 (Table 2).\n\nAmong the casualties, 62.8% could swim (Table 2). Swimming ability was defined by reference to “survival swimming” skills (ability to swim 25m)16.\n\nThe study findings revealed that drowning incidence was relatively low during the winter season (November to February). The incidence increased between March and September, which are considered the summer and monsoon seasons. The incidence peaked during March and April (Figure 2).\n\nOver 97% of the victims were from poor socio-economic conditions with a monthly income of tk. 6,000 ($94) or less. Only 25.5% of the incidents were reported to the police station. Individuals with a prior diagnosis of epilepsy and those with mental illness accounted for 9.6% and 9.9% of the drowning fatalities, respectively.\n\n\nDiscussion\n\nIn Bangladesh, natural and man-made water sources are commonly located in close proximity to households, especially in rural areas. People use these water sources for daily household needs, such as irrigation, fish farming, bathing, swimming, animal feeding and washing clothes. 
In addition to this, a large number of the population use water transport for regular travel and carrying goods. As a result, regular exposure to water bodies is very high. The Bangladeshi population frequently experiences massive destructive natural disasters, such as floods and cyclones, which often cause a high number of unexpected drowning deaths (https://en.wikipedia.org/wiki/List_of_Bangladesh_tropical_cyclones). In this study, the three main activities associated with drowning deaths were bathing, working and travelling.\n\nThe survey findings revealed that the annual drowning fatality rate among adults aged 18 years and above is 5.85/100,000 individuals, which means annually about 8,195 fatal drownings take place among the adult population of Bangladesh. Of these, 5,851 are male and 2,344 are female. Adult males were found to be at 2.39 times higher risk of drowning than females in this study. Our findings of higher risk among the male population are similar to other studies on drowning from other countries3,17,18.\n\nIndividuals aged over 60 years were found to be at 3.6 times higher risk than those aged between 18 and 60 years. The reasons could include the lack of a piped water supply in rural areas, so that people use natural water bodies for regular daily activities, and the fact that older people are not under supervision. Similar findings were also observed in a study conducted among US populations between 1999 and 201019.\n\nDrowning is always sudden and unexpected, and fatalities often occur at the scene. As a result, drowned individuals need immediate on-site emergency medical support when rescued from the water. As in most developing countries, emergency medical help is largely absent in Bangladesh, particularly in rural areas20,21. In this study, 61.6% of the drowning incidents ended with fatality at the scene of drowning. Findings in Finland suggested that around 24% of casualties ended with fatality at the scene22. 
In addition, of those rescued alive from water bodies (38.5%), only 20% sought medical care at a hospital. This suggests that rural populations do not consider seeking medical care following drowning. The study findings show that 56.1% of adult drowning fatalities took place in water bodies more than 20 meters from the household, whereas the same survey showed that about 80% of child drowning fatalities took place within 20 meters of the household23. In rural Bangladesh, households are located near water bodies for easy access to water for daily household needs. As a result, exposure to water is very high for both adults and children.\n\nAs in most developing countries, injuries are under-reported to the police by relatives of the victims24. The survey found that only 25% of drowning fatalities were reported to the police. Unlike road traffic or machine injuries, drowning is not a new phenomenon; it has occurred for thousands of years among populations living near water sources. Rural populations consider drowning a natural death, predestined as ‘God’s will’25; as a result, relatives of drowning victims begin the burial process immediately after a fatal drowning occurs. Unless the drowning was intentional, relatives do not report the death to the police or any other agency, in order to avoid further investigation.\n\nMany high-income countries have reduced drowning rates by introducing effective interventions1. This paper describes the epidemiology of adult drowning in Bangladesh; further research is needed to explore people’s perceptions of drowning and to design effective interventions for the adult population. 
In addition, this paper may draw the attention of policy makers to possible preventive measures.\n\n\nConclusions\n\nAdult drowning is an important, but neglected, public health issue in Bangladesh, especially among populations living in rural areas. Every year a significant number of preventable adult drowning fatalities occur in Bangladesh. The current survey findings may help policy makers and scientists to understand the epidemiology and the risk factors leading to adult drowning in Bangladesh.\n\n\nData availability\n\nBHIS data is stored at the Department of Public Health Science and Injury Prevention of CIPRB. Due to the sensitivity of the data (it contains identifying information), permission is required from the ethical committee for sharing data with a third party. Data can be requested from the Department of Public Health Science and Injury Prevention of CIPRB, who will contact the ethical review committee to gain approval to share the data. The conditions for gaining data access are a formal request with a clear objective and formal permission from the ethical committee. Please contact Dr Saidur Rahman Mashreky (mashreky@ciprb.org) in order to request the data.\n\n\nEthics and consent\n\nEthical approval for the collection of the BHIS data was obtained from the Ethical Committee of the Institute of Child and Mother Health, Dhaka (ref: ICMH/ECR/2002/009). During the survey, all participants were informed about the objectives and benefits of the study. As the sample comprised over 800,000 individuals, only oral consent was obtained from each household head before proceeding with the interview.",
"appendix": "Author contributions\n\n\n\nAuthors FR and AR designed this nationwide study. Authors MJH, AB, SRM and AR reviewed the literature, analyzed the survey data and prepared the manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nBHIS was financially supported by UNICEF, Bangladesh.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nWe gratefully acknowledge the contribution of UNICEF, TASC, ICMH and DGHS to this study. Special thanks to Tom Mecrow for reviewing and editing the manuscript.\n\n\nReferences\n\nvan Beeck EF, Branche CM, Szpilman D, et al.: A new definition of drowning: towards documentation and prevention of a global public health problem. Bull World Health Organ. 2005; 83(11): 853–6. PubMed Abstract | Free Full Text\n\nRahman A, Giashuddin SM, Svanström L, et al.: Drowning--a major but neglected child health problem in rural Bangladesh: implications for low income countries. Int J Inj Contr Saf Promot. 2006; 13(2): 101–5. PubMed Abstract | Publisher Full Text\n\nWorld Health Organization: Global report on drowning: preventing a leading killer. 2014. Reference Source\n\nWorld Health Organization: Drowning Prevention in the South-East Asia Region–2014. 2014. Reference Source\n\nFact sheet about drowning. 2000. Reference Source\n\nWorld Health Organization: Drowning, Fact sheet no. 347. Geneva; 2014. Reference Source\n\nWorld Health Organization: The Global Burden of Disease: 2004 update. 2008; 2010. Reference Source\n\nRahman A, Mashreky SR, Chowdhury SM, et al.: Analysis of the childhood fatal drowning situation in Bangladesh: exploring prevention measures for low-income countries. Inj Prev. 2009; 15(2): 75–9. PubMed Abstract | Publisher Full Text\n\nAhmed MK, Rahman M, van Ginneken J: Epidemiology of child deaths due to drowning in Matlab, Bangladesh. Int J Epidemiol. 1999; 28: 306–11. 
PubMed Abstract | Publisher Full Text\n\nIqbal A, Shirin T, Ahmed T, et al.: Childhood mortality due to drowning in rural Matlab of Bangladesh: magnitude of the problem and proposed solutions. J Health Popul Nutr. 2007; 25(3): 370–6. PubMed Abstract | Free Full Text\n\nBaqui AH, Black RE, Arifeen SE, et al.: Causes of childhood deaths in Bangladesh: results of a nationwide verbal autopsy study. Bull World Health Organ. 1998; 76(2): 161–71. PubMed Abstract | Free Full Text\n\nSethi D, Zwi A: Challenge of drowning prevention in low and middle income countries. Inj Prev. 1998; 4(2): 162. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMashreky SR, Hossain MJ, Rahman A, et al.: Epidemiology of electrical injury: findings from a community based national survey in Bangladesh. Injury. 2012; 43(1): 113–6. PubMed Abstract | Publisher Full Text\n\nHossain J, Biswas A, Rahman F, et al.: Snakebite Epidemiology in Bangladesh — A National Community Based Health and Injury Survey. Health. 2016; 8: 479–86. Publisher Full Text\n\nBiswas A, Dalal K, Hossain J, et al.: Lightning Injury is a disaster in Bangladesh? - Exploring its magnitude and public health needs [version 1; referees: 3 approved, 1 approved with reservations]. F1000Res. 2016; 5: 2931. PubMed Abstract | Publisher Full Text | Free Full Text\n\nThe Royal Life Saving Society Australia: Swimming & lifesaving: water safety for all Australians. Sydney: Elsevier Mosby; 2004. Reference Source\n\nDonson H, Van Niekerk A: Unintentional drowning in urban South Africa: a retrospective investigation, 2001–2005. Int J Inj Contr Saf Promot. 2013; 20(3): 218–26. PubMed Abstract | Publisher Full Text\n\nMeel BL: Drowning deaths in Mthatha area of South Africa. Med Sci Law. 2008; 48(4): 329–32. PubMed Abstract | Publisher Full Text\n\nXu J: Unintentional drowning deaths in the United States, 1999–2010. NCHS Data Brief. 2014; (149): 1–8. 
PubMed Abstract\n\nRazzak JA, Kellermann AL: Emergency medical care in developing countries: is it worthwhile? Bull World Health Organ. 2002; 80(11): 900–5. PubMed Abstract | Free Full Text\n\nHossain M, Rahman A, Dalal K, et al.: Effects of Emergency Injury Care (EIC) Training for the Community Volunteers in the Rural Community of Bangladesh. Int J Trop Dis Heal. 2016; 19(1): 1–7. Publisher Full Text\n\nVähätalo R, Lunetta P, Olkkola KT, et al.: Drowning in children: Utstein style reporting and outcome. Acta Anaesthesiol Scand. 2014; 58(5): 604–10. PubMed Abstract | Publisher Full Text\n\nRahman A, Mashreky SR, Chowdhury SM, et al.: Analysis of the childhood fatal drowning situation in Bangladesh: exploring prevention measures for low-income countries. Inj Prev. 2009; 15(2): 75–9. PubMed Abstract | Publisher Full Text\n\nDandona R, Kumar GA, Ameer MA, et al.: Under-reporting of road traffic injuries to the police: results from two data sources in urban India. Inj Prev. 2008; 14(6): 360–5. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRahman A, Shafinaz S, Linnan M, et al.: Community perception of childhood drowning and its prevention measures in rural Bangladesh: A qualitative study. Aust J Rural Health. 2008; 16(3): 176–80. PubMed Abstract | Publisher Full Text"
}
|
[
{
"id": "22377",
"date": "09 May 2017",
"name": "William D. Ramos",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe article is timely and relevant. The authors are correct in stating that there is little attention given to adult drowning across the world. This type of epidemiological study is crucial as a first step to developing effective strategies for intervention.\nI would like to see more explanation on how rural versus urban settings were determined as well. In regards to distance of incident to homes, it should be stated that since it was measured visually by data collectors there may be some issues with accuracy.\nMore clarification on self-reported swimming ability would also be helpful to better understand the validity of that variable.\nI’m cautious about the use of the Wikipedia source cited in the discussion.\nOverall the article is well developed and methodologically sound. Conclusion drawn appropriately from the data.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Partly\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
},
{
"id": "22378",
"date": "10 May 2017",
"name": "Kazi Selim Anwar",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThe topic of the article: This is a well written epidemiologically sound and statistically valid original research article on an important topic of public health, titled ‘Epidemiology of adulthood drowning deaths in Bangladesh: Findings from a nationwide health and injury survey’ highlighting several crucial issues pertaining to drowning epidemiology in the country.\nThe quality & merit of this paper:\nThis research paper is based on the analysis of a nationwide cross-sectional survey conducted in Bangladesh in 2003. Data analysis that this study attempted highlights several scientific potentials, public health implications and policy issues. 
These important findings add values in the current knowledge-base on fatal adult drowning not only in Bangladesh but also in global science - which has the potentiality of being replicated by other researchers/scientists from other countries.\n\nOverall comment on this manuscript reviewed\nI recommend this paper for publication in F1000Research but it would definitely carry more value if the authors consider in bringing certain minor changes, as I suggested below:\n\nThe title:\nIt would have sound better as ‘Epidemiology of fatal adult drowning in Bangladesh: Findings from a nationwide health and injury survey’ – it the authors consider only as fine.\n\nThe Methodology part in abstract (on page 1 of 7): Right on the first line of the methodology section, it would have been more rationale to add… ’This updated paper based on a ..’ so as to read the first line of methodology as ‘This updated paper based on a nationwide cross-sectional survey was conducted… again only if the authors consider it as fine (Thus to make relevant corrections in methodology section on page 3, too)\n\nThe Introduction: (on page 3 of 7) Even there is no big mistake or major flaw in the introduction section it might have sound more logical & lucid if the authors consider the 2nd and 3rd paragraph of ‘Introduction’ section to be framed (re-written) as shown below (first paragraph of introduction looks fine):\nBangladesh is a low-lying, riverine country located in the subtropical region of South Asia and bordering with the Bay of Bengal. The country has a landmass of 147,570 square kilometers being world’s 8th-most densely populated countries in the world with a population of 160 million people.\nHaving a tropical monsoon climate it is characterized by heavy rainfall and melted-snow from the Himalayan territory, leading to three large rivers: the Ganges, the Brahmaputra and the Meghna. 
Daily life in Bangladesh exposes people to water bodies, such as ponds, ditches, rivers, canals and the ocean- which serve the daily household needs, particularly in rural areas including agriculture, fishing and transportation. In adjunct to country’s geographic & climatic phenomena drowning plausibly effects all ages of the Bangladeshi population compounded by round-the-year prevailing natural disasters like cyclone, flood, hurricane, tidal bore, etc.\nMost of the research conducted in Bangladesh on drowning remains focused on childhood drowning 8–10. In Bangladesh, there is no established routine mortality registration system 11, that also may have contributed in the inadequacy of research 12, resulting in ‘unknown adult drowning deaths’ often among the adult population. To design an appropriate preventive measure in reducing adult drowning, it is imperative to determine the nationwide burden of drowning. Further, drowning mostly occurs among rural populations in Bangladesh 8, so robust data from community-based household survey remains crucial.\nBased on the aforementioned facts & figures, this study was conducted with the objective of estimating gender-specific fatal adult drowning in Bangladesh including seasons and place of residence using the data of a nationally representative survey.\n\nData availability: (on page 6 of 7) It is very well referenced, adequately explained and logically utilized (Available data). Only that this section may be taken to methodology section to insert right between 'Statistical analysis' and ‘Results’ sections (on page on page 4 of 7)\n\nFinal comment on the quality of this manuscript reviewed to be published: I strongly recommend that this original article written well scientifically being very sound epidemiologically, pertinent methodologically and valid statistically to be published in F1000Research. 
This will add values in pertinent topic not only in national (Bangladesh) but also in global data-archiving system towards enhancing the current knowledge base on fatal adult drowning issues.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
},
{
"id": "22706",
"date": "15 May 2017",
"name": "Puspa Raj Pant",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nVery well composed article, very timely and it must appeal the decision makers and donors. The authors' contribution must be appreciated. As stated by the authors, this paper might draw the attention to the policy makers to design possible preventive measures. Therefore I would like to recommend for its publication. I would like to recommend the authors to look into following minor comments.\nIntroduction, second paragraph: it would be better to add - what proportion of the total area of the country is water i.e. roughly 7% of it is covered with water.\n\nIntroduction, third paragraph: Would it be possible to highlight the neglected importance of drowning research that the data collected in 2003 that hasn't yet been utilised to uncover the problem of drowning among adult population. Keeping in mind, children related findings were already published with the support of Unicef and other children's agencies.\n\nMethods: In the first paragraph, you can also refer previously published Methodological details.\n\nCase ascertainment: Please make it clear that the \"individuals 18 years and above who drowned resulting in a fatality were included as a case\" is for this paper. However, BHIS might have collected much more.\n\nAlthough case identification has been clearly described under \"Data collection and interview\", the 'mother' is the primary source of information. 
Therefore there are chances that this study under-reports the adult drowning rates (as compared to GBD estimates are nearly 4 times higher for the year 2003). This can be something that should be considered for adult injuries in future.\n\nDiscussion: the term 'Regular Travelling' should be replaced by 'Commuting' in the sentence - \"a large number of the population use water transport for regular travelling\"\n\nDiscussion: As indicated in #5 above, it is suspected that this study under-reports adult drowning mortality. But the estimates for children are closer to GBD estimates. Is it due to the fact - \"In this study, the main three causes of death due to drowning were bathing, working and travelling.\" or the authors' intention was to say the disaster related drownings for adults were not included.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
},
{
"id": "22872",
"date": "18 May 2017",
"name": "Mahfuzar Rahman",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nIn the Methods section: the author may consider providing reference of BHIS data from any previous published article. If no article published using this dataset, then the authors should be explaining the entire dataset in separate heading under the Methods section.\n\nIt would be good and understandable if authors consider providing a flow chart of their population selection by strata.\n\nIn the data collection and interview section: The authors mentioned that “Structured questionnaires were used to identify drowning death, and drowning related data was extracted for further analysis.” It is not clear how the authors confirmed that the reason of the death was due to drowning. However, it is possible that the primary respondents could confirm the death but it does not make a strong conclusion of the reason behind the death. Potential question could raise “how the primary respondents know that it was not suicide?”. Another clarification is required that in Bangladesh mostly the rural people - adults knows how to swim, particularly those live near the river bank. And also, those have ponds and/or shared ponds, they take baths and use for daily primary source of water for their household use. The author should mention if they use verbal autopsy in this case. If not, please mention it in the limitation.\n\nIn the swimming ability skill - how did the authors collected that information. 
Is that the primary respondents responded on the died person?\n\nIt's hard to believe that the person who knows how to swim died due to drowning? Please clarify in the discussion part. And also, it creates more confusion when the death occurs in the natural water bodies. What does the authors mean by natural water bodies? It would be also useful if the authors consider providing a brief context of Bangladeshi population using water bodies for their daily HH work. Map of Bangladesh water bodies would help other nationals to understand the context of natural water distribution of Bangladesh. It could make more sense.\n\nThe authors failed to mention their limitation and strength of the study. I believe there are good number of limitation which authors skipped.\n\nFurthermore, discussion part was poorly written and the authors failed to discuss their finding elaborately in this part\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Partly\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nPartly\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
},
{
"id": "22570",
"date": "18 May 2017",
"name": "Abbas Ali Keshtkar",
"expertise": [
"Reviewer Expertise Methodology of Observational and Interventional studies. Systematic review and meta-analysis in biomedical research"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nINTRODUCTION: If we accept the mentioned issues that presented in the last paragraph of INTRODUCTION (line 1: In Bangladesh, there is no established routine mortality registration system ....), then the line 4 issue (Drowning mostly occurs among the rural populations) is in opposite / contrast with the paragraph starting issue.\n1. Please delete the sentence of line 4 of last paragraph ( Drowning mostly occurs among the rural populations).\nMETHODS: Study Population: If the population size of each UNION was 20,000 persons and the study sample was all of the eligible persons of the selected UNION, then almost 88,000 persons were included from 4-5 UNIONs.\n2. Please clearly describe the sampling method and specify the numbers of Upzila and UNION (in total population) and the number of selected Upzila and UNION.\n3. Please specify the sampling frame or list for including the selected households and selected eligible persons.\nData collection and interview: The time-frame of this survey is unknown.\n4. Please report the study (data collection) time-frame (from data collection starting point to the ending)\n5. Why the investigators use 6-months period for assessing the drawing occurrence? 
As you know, we need the 12-months period for assessing the seasonality pattern.\nStatistical analysis: The investigators used Mixed sampling method (combining Cluster, Systematic and Simple random sampling methods) and this situation causes the estimator variance inflation (increasing) and the widening the 95% CIs for the prevalence, incidence, mean and relative risk indicators. It should be noted that cluster sampling method mainly leads to the phenomenon. In other hand, the common event in community survey is the different distribution of main demographic variables such as gender and age groups.\nThe \"Survey Data Analysis\" (SDA) or \"Complex Sampling Analysis\" method was developed for correcting or adjusting the two essential pitfalls as well as the finite population problem and the stratified random sampling consideration.\n\nUnfortunately, if the investigator(s) don't perform SDA in the mixed sampling methods (similar to the above paper), the point and interval estimation (95% CIs) of incidence or prevalence measures are not valid and also the effect size measures (Risk Ratio, Rate Ratio, ...) may be inaccurate an imprecise.\n6. I suggest that the authors/ investigators indicate to Survey Data Analysis (SDA) method as the statistical method for estimating the valid and reliable INCIDENCE data (point and 95% interval estimation). Please specify the important components of SDA method (PSU or Primary Sampling Units, Stratum/ strata, Sampling weights, ....).\n7. It is obvious the investigators should be carried out the data re-analysis using SDA method (based on previous item) by the relevant statistical package such as STATA.\n\nRESULTS:\n8. As I mentioned, all of the study findings should be corrected based on the above suggestions (items no 6 and 7).\n\n9. In reference to the item no 5 (Major limitation for assessing the incidence seasonality), please delete to the seasonality pattern. 
Of course, the investigators could report the time fluctuation of the incidence data based on the collected data and indicate to this limitation in DISCUSSION part.\nDISCUSSION:\n10. Unfortunately, the study limitations were not indicated. Please specify the study limitations.\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nIs the study design appropriate and is the work technically sound? Partly\n\nAre sufficient details of methods and analysis provided to allow replication by others? No\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nNo\n\nAre all the source data underlying the results available to ensure full reproducibility? Partly\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-589
|
https://f1000research.com/articles/6-389/v1
|
29 Mar 17
|
{
"type": "Research Note",
"title": "Wash-in and wash-out of sevoflurane in a test-lung model: A comparison between Aisys and FLOW-i",
"authors": [
"Petter Jakobsson",
"Madleine Lindgren",
"Jan G. Jakobsson"
],
"abstract": "Background: Modern anaesthesia workstations are reassuringly tight and are equipped with effective gas monitoring, thus providing good opportunities for low/minimal flow anaesthesia. A prerequisite for effective low flow anaesthesia is the possibility to rapidly increase and decrease gas concentrations in the circle system, thereby controlling the depth of anaesthesia. Methods: We studied the wash-in and wash-out of sevoflurane in the circle system with fixed fresh gas flow and vaporizer setting. We compared two modern anaesthesia work stations, the Aisys (GE, Madison, WI, USA) and FLOW-i (Maquet, Solna, Sweden) in a test lung model. Results: We found fresh-gas flow to have, as expected, a major influence on wash-in, as well as wash-out of sevoflurane. The wash-in time to reach a stable circle 1 MAC (2.1%) decreased from an average of 547 ± 83 seconds with a constant fresh gas flow of 300 ml/min and vaporizer setting of 8%, to a mean of 38 ± 6 seconds at a fresh gas flow of 4 L/min. There were only minor differences between the two work-stations tested; the Aisys was slightly faster at both 300 ml/min and 4 L/min flow. Time to further increase circle end-tidal concentration from 1-1.5 MAC showed likewise significant associations to fresh gas flow and decreased from 330 ± 24 seconds at 300 ml/min to less than a minute at constant 4 L/min (17 ± 11 seconds), without anaesthetic machine difference. Wash-out was also fresh gas flow dependent and plateaued at 7.5 L/min. Conclusions: Circle system wash-in and wash-out show clear fresh gas dependency and vary somewhat between the Aisys and FLOW-i. The circle saturation, reaching 1 MAC end-tidal or increasing from 1-1.5 MAC, can be achieved with both work-stations within 1.5 minutes at a constant fresh gas flow of 2 and 4 L/min. Wash-out plateaued at 7.5 L/min.",
"keywords": [
"wash-in",
"low-flow anaesthesia",
"MAC",
"End-tidal concentration",
"sevoflurane"
],
"content": "Introduction\n\nA rapid change in inspired anaesthetic agent is a prerequisite for the control of the depth of anaesthesia. Low flow anaesthesia has been increasingly adopted, as it is associated with several benefits, including conserving humidity and temperature, which improves the quality of anaesthesiai. Reducing the amount of anaesthetic agent consumed is of interest not only for reducing cost, but also for reducing environmental burden. The merits of reducing flow must not overrule safety: adequate oxygen content in the circle must be maintained, and hypoxic gas mixtures, inadequate anaesthesia control and too light anaesthesia with risk of awareness must be avoidedii,iii. Increasing and decreasing the circle system anaesthetic concentration during low and minimal flow anaesthesia requires knowledge of the technique and the kinetics of the anaesthetic gas used. We studied the wash-in, increase and decrease of the end-tidal gas concentration within the circle system with two anaesthetic machines, GE Aisys and Maquet FLOW-i, at different fixed fresh gas flows and fixed vaporizer settings in a test model. Our hypothesis was that the wash-in and wash-out would be fresh gas flow dependent, and that the time for reaching target end-tidal concentrations would be faster for the FLOW-i device, which does not have a classical reservoir.\n\n\nMethods\n\nThe two anaesthetic workstations, Aisys (GE Healthcare, Madison, WI, USA) and FLOW-i (Maquet, Solna, Sweden), including a standard CO2 absorber and a standard circle system (patient circuit, adult, disposable 1.8 m; GE Healthcare) and a Humid-Vent Filter (Teleflex, Wayne, PA, USA), were connected to a 2 L elastic test reservoir (Intersurgical Ltd., East Syracuse, NY, USA).\n\nWash-in: the time to reach a stable circle concentration (end-tidal concentration of 1 and 1.5 MAC, age-adjusted for a 40-year-old male; sevoflurane 2.1 and 3.1%) was studied with the circle connected to the test reservoir. 
The ventilation was set at tidal volume 500 ml, respiratory rate 10 and PEEP 5 cmH2O, for both devices. The oxygen fraction was set at 0.4. Fresh gas flow was fixed at 300, 500, 1000, 2000 and 4000 ml/minute. The vaporizer setting was fixed at 8% for both devices during the wash-in.\n\nThe time to reach 1 MAC and the time to increase the circle concentration from 1 to 1.5 MAC were recorded, based on the mean of 3 repeated tests.\n\nWash-out: the time to decrease from 1.5 MAC to 0 gas concentration with a fixed fresh gas setting of 2500, 5000, 7500 and 10000 ml/minute.\n\nAll data are presented as mean and standard deviation based on 3 – 5 repeats. The effects on time events (wash-in: increase from 0 – 1 MAC and further increase up to 1.5 MAC; wash-out: decrease from 1.5 MAC to 0 gas) between fresh gas flows and anaesthetic work-stations were calculated by ANOVA. P<0.05 was considered to be statistically significant. Data were analysed with StatView (v1.04) for Macintosh.\n\n\nResults\n\nFixed fresh gas flows had a significant impact on the speed of wash-in, i.e. the time to achieve a stable circle end-tidal sevoflurane concentration of 1 MAC age adjusted (2.1%). The mean time for both machines decreased from 547 ± 83 seconds, at a fixed fresh gas flow of 300 ml/min and a fixed vaporizer setting of 8%, to 38 ± 6 seconds at a fresh gas flow of 4000 ml/min. 
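The flow dependency reported above can be illustrated with a minimal single-compartment sketch, assuming ideal mixing and no uptake by the test lung. The effective system volume (`VOLUME_L`) is a hypothetical placeholder, not a value from the study, so the printed times are illustrative only:

```python
import math

def wash_in_time(target_pct, delivered_pct, volume_l, fgf_l_min):
    """Seconds for the circle concentration to rise from 0 to target_pct,
    assuming ideal mixing: C(t) = delivered_pct * (1 - exp(-FGF*t/V))."""
    return -60.0 * volume_l / fgf_l_min * math.log(1.0 - target_pct / delivered_pct)

def wash_out_time(start_pct, stop_pct, volume_l, fgf_l_min):
    """Seconds to decay from start_pct to a small detection threshold stop_pct
    with agent-free fresh gas: C(t) = start_pct * exp(-FGF*t/V)."""
    return 60.0 * volume_l / fgf_l_min * math.log(start_pct / stop_pct)

VOLUME_L = 5.0  # assumed effective volume of circle + absorber + test lung (hypothetical)

for fgf in (0.3, 0.5, 1.0, 2.0, 4.0):
    t = wash_in_time(2.1, 8.0, VOLUME_L, fgf)
    print(f"FGF {fgf:>4} L/min: ~{t:5.0f} s to reach 1 MAC (2.1%) at vaporizer 8%")
```

In this idealized model the time to any given concentration is proportional to V/FGF, so doubling the fresh gas flow halves the time; measured times will deviate wherever mixing is incomplete or vaporizer output is limited.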
The time to further increase the circle system end-tidal sevoflurane from 1 to 1.5 MAC also showed a significant dependency on fresh gas flow, decreasing from 330 ± 24 seconds, at a fixed fresh gas flow of 300 ml/min and a fixed vaporizer setting of 8%, to 17 ± 11 seconds at a fresh gas flow of 4000 ml/min.\n\nBoth anaesthetic work-stations showed the same fresh gas flow-dependent wash-in and wash-out pattern, but the Aisys showed an overall slightly faster wash-in time (Table 1 and Figure 1).\n\nVaporizer set at 8%, tidal volume 500 ml, respiratory rate 10, PEEP 5 and volume controlled ventilation.\n\nWash-in time (minutes) to increase to 1 MAC (sevoflurane 2.1%) for (A) Aisys and (B) FLOW-i anaesthetic work stations.\n\nWash-in (time to reach a stable 1 MAC circle sevoflurane concentration) was achieved within 1.5 minutes at a fixed fresh gas flow of 2000 ml/min for both machines tested, 48 ± 2 and 75 ± 2 seconds for the Aisys and FLOW-i, respectively (p<0.001), and within 1 minute, mean 33 ± 3 and 42 ± 3 seconds for the Aisys and FLOW-i, respectively, at 4000 ml/min (p<0.05). A further increase from 1 to 1.5 MAC was achieved within 1 minute for both machines (22 ± 3 and 46 ± 3 seconds for the Aisys and FLOW-i, respectively) at a fixed fresh gas flow of 2000 ml/min. When 4000 ml/min was used, the monitoring system was not fast enough to capture the increase for the Aisys, but recorded the increase as 25 ± 10 seconds for the FLOW-i.\n\nWash-out was likewise flow dependent, and plateaued at 7.5 L/min (Table 2 and Figure 2).\n\nTidal volume 500 ml, respiratory rate 10, PEEP 5 and volume controlled ventilation.\n\nWash-out time (minutes) for the decrease from 1.5 to 0 MAC with sevoflurane at 3.1-0% in (A) Aisys and (B) FLOW-i anaesthetic work stations.\n\n\nDiscussion\n\nThe present study was set up to evaluate the impact of fresh gas flow on saturation of and wash-out from the circle system/test-lung set-up, and whether the two modern anaesthesia machines performed differently. 
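The between-machine comparison above was tested with ANOVA in the paper. As an illustrative cross-check (not the authors' analysis), a Welch two-sample t statistic can be computed from the reported summary statistics, taking n = 3 repeats per condition from the Methods:

```python
import math

def two_sample_t(m1, sd1, n1, m2, sd2, n2):
    """Welch two-sample t statistic computed from summary statistics
    (means, standard deviations, group sizes)."""
    return (m1 - m2) / math.sqrt(sd1**2 / n1 + sd2**2 / n2)

# Wash-in to 1 MAC at 2000 ml/min: FLOW-i 75 +/- 2 s vs Aisys 48 +/- 2 s, n = 3 each.
t = two_sample_t(75, 2, 3, 48, 2, 3)
print(f"t = {t:.1f}")  # well beyond the two-tailed 0.1% critical value (~8.6 at 4 df)
```

A t statistic of this size with roughly 4 degrees of freedom is consistent with the reported p<0.001, though with only 3 repeats the estimate of variability is itself imprecise.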
We found, as expected, a clear fresh gas flow dependency for the times to saturate and wash out the circle and test-lung system. The times for wash-in to 1 MAC and for a further increase of the circle concentration to 1.5 MAC decreased with increasing fresh gas flow, and both were achieved within 1 minute at a fresh gas flow of 4 L/min. The wash-out was not further improved between 7.5 and 10 L/min fresh gas flow.\n\nSomewhat surprisingly, we found that the Aisys was slightly faster than the FLOW-i, although the FLOW-i has only a small internal gas reservoir. Lucangelo et al. studied the FLOW-i performance regarding tidal volume in case of minor leakage [iv]. They found the system to be highly accurate. They also described the gas flow control in detail, addressing the technical features of the flow regulators. Thus, our hypothesis was a faster saturation of the circle gas with the FLOW-i technology.\n\nDosch et al. [v] studied the change in circle gas composition in three anaesthetic machines and found that fresh gas flow and breathing system volume have the biggest effect on time to equilibrium. In a previous study, we analysed the wash-in of desflurane and sevoflurane during fixed fresh gas flow and vaporizer setting with the Aisys anaesthesia workstation [vi]. We found, as expected, desflurane to be associated with a significantly faster wash-in compared to sevoflurane, with a significant impact from the fresh gas flow: the increase from 0.5 L/min to 1.0 L/min in fresh gas flow reduced the time to reach 1 MAC age-adjusted end-tidal concentration from 15.2±2.4 minutes to 6.2±1.3 minutes. We found in that study a rather large variability for sevoflurane, which we considered was related to a combination of circle system gas saturation and uptake.\n\nKern et al. studied the saturation of neonatal anaesthesia systems [vii]. They found huge differences in the time to reach an end-tidal concentration above 95% of inspired. 
They also found wash-in times to decrease with higher fresh gas flows and higher minute ventilation rates; however, they saw that the effect of doubling fresh gas flow was variable and less than expected. Struys et al. made a study much like ours, comparing the Zeus apparatus with direct injection of inhaled anaesthetics and the Primus apparatus using a classical out-of-circle vaporizer [viii]. They found the Zeus to have a faster time course, but their study set-up was different from ours; they used fresh gas and auto control modes, providing a high initial fresh gas bolus. We compared the novel FLOW-i, with a similar injection technique and without a classical reservoir, and the Aisys, with a more classic design. Carette et al. studied the performance of the automatic control mode of the FLOW-i [ix]. The possibility to use an automatic algorithm to reach a desired circle end-tidal concentration is an interesting option, and we plan to do further studies assessing the automatic technique. One limitation is that this is an entirely experimental study.\n\nIn conclusion, wash-in, saturation of and wash-out of the circle system are fresh gas flow dependent. 1 MAC can be reached within 1 minute at a fixed vaporizer setting of 8% and a fresh gas flow of 4 L/min, and a further increase from 1 to 1.5 MAC can be reached within 1 minute at a fresh gas flow of 2 L/min. Wash-out was likewise flow dependent, but the time to reach a zero end-tidal concentration plateaued at 7.5 L/min.\n\n\nEthical statement\n\nThis is a test model study. The research does not involve human participants and/or animals, and thus no informed consent has been requested. The set-up is entirely experimental and no humans or animals have been exposed to anaesthetics, and thus no ethical review board assessment has been considered necessary.\n\n\nData availability\n\nDataset 1: Raw data from the wash-in increase of Et-sevoflurane at fixed fresh gas flow and vaporiser setting. 
doi: 10.5256/f1000research.11255.d156064 [x]\n\nDataset 2: Raw data from the wash-out of Et-sevoflurane at zero vaporiser setting and increasing fresh gas flow. doi: 10.5256/f1000research.11255.d156065 [xi]",
"appendix": "Author contributions\n\n\n\nAll authors contributed equally to study design, set-up, conduct of experiments, analysis, compilation, and preparation of the manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\nJan Jakobsson has, however, previously received research grants for previous research activities from Maquet, Abbott, Baxter, MSD, Pfizer, Nycomed, PhaseIn, Grunenthal. He has been lecturing and has taken part in advisory board activities for Maquet, Abbott, Baxter, MSD, Pfizer, Nycomed, PhaseIn, Masimo, Grunenthal. He has a paid consulting agreement with Linde Healthcare as safety physician.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nKleemann PP: Humidity of anaesthetic gases with respect to low flow anaesthesia. Anaesth Intensive Care. 1994; 22(4): 396–408. PubMed Abstract\n\nFeldman JM: Managing fresh gas flow to reduce environmental contamination. Anesth Analg. 2012; 114(5): 1093–101. PubMed Abstract | Publisher Full Text\n\nBrattwall M, Warrén-Stomberg M, Hesselvik F, et al.: Brief review: theory and practice of minimal fresh gas flow anesthesia. Can J Anaesth. 2012; 59(8): 785–97. PubMed Abstract | Publisher Full Text\n\nLucangelo U, Ajčević M, Accardo A, et al.: FLOW-i ventilator performance in the presence of a circle system leak. J Clin Monit Comput. 2017; 31(2): 273–280. PubMed Abstract | Publisher Full Text\n\nDosch MP, Loeb RG, Brainerd TL, et al.: Time to a 90% change in gas concentration: a comparison of three semi-closed anesthesia breathing systems. Anesth Analg. 2009; 108(4): 1193–7. PubMed Abstract | Publisher Full Text\n\nHorwitz M, Jakobsson JG: Desflurane and sevoflurane use during low- and minimal-flow anesthesia at fixed vaporizer settings. Minerva Anestesiol. 2016; 82(2): 180–5. 
PubMed Abstract\n\nKern D, Larcher C, Basset B, et al.: Inside anesthesia breathing circuits: time to reach a set sevoflurane concentration in toddlers and newborns: simulation using a test lung. Anesth Analg. 2012; 115(2): 310–4. PubMed Abstract | Publisher Full Text\n\nStruys MM, Kalmar AF, De Baerdemaeker LE, et al.: Time course of inhaled anaesthetic drug delivery using a new multifunctional closed-circuit anaesthesia ventilator. In vitro comparison with a classical anaesthesia machine. Br J Anaesth. 2005; 94(3): 306–17. PubMed Abstract | Publisher Full Text\n\nCarette R, De Wolf AM, Hendrickx JF: Automated gas control with the Maquet FLOW-i. J Clin Monit Comput. 2016; 30(3): 341–6. PubMed Abstract | Publisher Full Text\n\nJakobsson P, Lindgren M, Jakobsson JG: Dataset 1 in: Wash-in and wash-out of sevoflurane in a test-lung model: A comparison between Aisys and FLOW-i. F1000Research. 2017. Data Source\n\nJakobsson P, Lindgren M, Jakobsson JG: Dataset 2 in: Wash-in and wash-out of sevoflurane in a test-lung model: A comparison between Aisys and FLOW-i. F1000Research. 2017. Data Source"
}
|
[
{
"id": "21357",
"date": "13 Apr 2017",
"name": "Göran Hedenstierna",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis is a straightforward and simple, in a good sense, model study on the wash-in and wash-out of an anesthetic gas in a test lung model. Comparison has been made between two commercially available anesthesia machines, Aisys and Flow-i. A major finding was that with increasing gas flow, wash-in and wash-out times were reduced, as might be anticipated. The difference was small between the two anesthesia machines but there was a slight difference that in itself was unexpected, i.e. the Aisys being a little faster than the Flow-i. The Aisys uses a conventional reservoir bag whereas the Flow-i uses an internal reflector that, at least in theory, should reduce any dead space effect and thus shorten time constants in gas dynamics.\n\nI have some comments.\n\nFirstly, the wash-in and wash-out times have been measured in a lung model, the same for both anesthesia machines, but the gas concentration has been measured with built-in equipment in the Aisys and Flow-i, and thus differently for the two anesthesia machines. At least I interpret the results this way. This means that there might be differences in results that are not caused by the different techniques of internal gas reservoir. With different gas analyzers the results may be related to how the analyzers have been calibrated and what algorithms have been used. Ideally, one should use the same gas analyzer for both machines if the intention has been to test the effect of the design of the gas reservoir. 
This is my major comment and I suggest that the gas analyzers should be compared to an independent reference. Other comments follow below.\n\nAbstract line 14: 300 ml/min.\n\nMethods, wash-in: You might have used additional settings of the ventilator such as tidal volume, respiratory rate and PEEP. At least you might discuss this.\n\nResults, page 4 line 3 under figure 1: “was not fast enough”.\n\nDiscussion, page 5, third last line in the right column: Why “as expected”?\n\nPage 6, left column, second paragraph: Zeus.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Partly\n\nAre sufficient details of methods and analysis provided to allow replication by others? Partly\n\nIf applicable, is the statistical analysis and its interpretation appropriate? Yes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": [
{
"c_id": "2642",
"date": "13 Apr 2017",
"name": "Jan Jakobsson",
"role": "Author Response F1000Research Advisory Board Member",
"response": "Dear Referee,\n\nThank you for your comments.\n\nWe acknowledge the limitation of using the machine gas monitors. Both machines are equipped with \"standard\" side-stream multi-gas monitors. These instruments have an internal calibration at start-up, and we performed a start-up prior to each test. It should also be noted that we performed the study with standard anaesthetic machines used at our department, machines used for ordinary patient care. We performed the tests during late afternoons and evenings. Thus we expect the IR multi-gas readings to be adequate.\n\nAbstract line 14: There is indeed a text error in the abstract, line 14; please excuse. It should read 300 ml/min, no doubt.\n\nThe ventilator settings are presented in the methods section: the ventilation was set at tidal volume 500 ml, respiratory rate 10 and PEEP 5 cmH2O, for both devices.\n\nResults, page 4 line 3 under figure 1: We much agree, the \"was\" is indeed missing; please excuse.\n\nDiscussion, page 5, third last line in the right column: Why \"as expected\"? This is simply because all authors are clinically active and we are used to seeing the lower-solubility benefit and subsequently faster \"blood compartment wash-in\" associated with desflurane as compared to sevoflurane. This can indeed be argued when looking merely at circle system equilibration/wash-in. Thank you for a most adequate comment.\n\nPage 6, left column, second paragraph: Zeus refers to the Zeus anaesthesia apparatus (Dräger, Lübeck, Germany).\n\nI hope our comments/responses are acceptable; thank you for your adequate and effective review.\n\nBest regards,\nJan Jakobsson, on behalf of all authors"
}
]
},
{
"id": "21842",
"date": "24 Apr 2017",
"name": "Ian Smith",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThe authors have examined the effect of fresh gas flow rates on the wash-in and wash-out of a single volatile anaesthetic agent in two modern anaesthetic workstations under experimental conditions. The investigation is essentially in two parts. The first shows that wash-in and wash-out are flow dependent for both anaesthesia machines. As these are exponential processes, this finding is entirely predictable and rapid wash-in (or wash-out) will only be achieved at fairly high fresh gas flow rates, whatever the internal design of the anaesthetic machine. While it is still important to define optimal flow rates for new equipment, this part of the paper does not really provide any very new information.\nThe second part of the study is potentially far more interesting and this is the comparison between the two systems. The authors speculated, quite appropriately in my opinion, that wash-in and wash-out would be faster with the Flow-i system due to its lower internal volume. However, they did not find this to be the case, especially with regard to wash-in where, in fact, the Flow-i was actually the slower of the two systems. What I find very disappointing is that the authors do not really address why they did not obtain the results they anticipated. Although their discussion mentions a few related points, there is almost nothing concerning the differences between the two systems studied or what the implications of these findings might be. 
The authors also seem not to have commented on the fact that differences between the two machines were not consistent between wash-in and wash-out and whether this provides any insight into the observed results. For me, this would be a far more interesting paper if these differences were explored in a bit more detail. I would also be interested to know if I needed to change the way I altered vapouriser settings and flow rates in practice were I to switch from one of these systems to the other.\n\nOne possible explanation for the apparent differences has already been mentioned in a previous review, namely that there was not really a difference at all, but that using the in-built gas analysers of the machines resulted in apparent differences due to differences in sampling times/mechanisms/algorithms. I also wonder if differences in the vapouriser technology might explain the findings, at least in part. According to the Maquet user's manual, during controlled ventilation “a larger proportion of the fresh gas is added during the inspiration phase, also contributing to minimising agent consumption”. Our technical staff tell me that the Flow-i vapouriser only injects anaesthetic agent during inspiration and is inactive during the expiratory phase of the cycle. If this is correct, and if the Aisys delivers vapour continuously, like a conventional vapouriser, that might explain why wash-in to a test lung is slower than expected. It is also likely under those circumstances that the difference between the machines would be reduced if higher respiratory rates were used and might also be completely different during spontaneous ventilation. Do the authors have detailed knowledge of how the two vapouriser systems function?\n\nAlthough the Flow-i, unexpectedly, was slower than the Aisys, this was not consistently observed and again I think this deserves some discussion. 
For wash-in from 0–1 MAC, the Aisys was consistently faster, with times ranging from 64–79% those of the Flow-i. However, for wash-in from 1–1.5 MAC, the times were much less consistent, ranging from 109% to 40% those of the Flow-i. Neither was any pattern evident for the second stage wash-in, with the Aisys actually being slower at 0.3 and 1 l/min, but faster at 0.5, 2 & 4 l/min. Can the authors explain this at all? Could sampling error be enough to explain the differences?\n\nThe wash-out results are even more confusing. The Flow-i was slower from 5 to 10 litres/minute, but the wash-out times were so fast that the differences are small and probably within the limits of sampling error. However, at 2.5 l/min the Flow-i was 70% faster. This faster wash-out is entirely consistent with the lower internal volume of the Flow-i (as hypothesised by the authors) and, to me, lends support to the concept that the unexpected observations during wash-in are probably related to differences in the function of the vapourisers.\n\nAs the authors state, this was a test model study. However, the results are likely to be used to inform clinical practice. The problem is that the results are unlikely to be reproduced in clinical practice. As the authors used a test lung, there was no uptake of anaesthetic vapour from the breathing circuit. Adding a patient compartment is likely to delay the wash-in of the anaesthetic machine compartment due to uptake of anaesthetic. This should affect both machines to a similar degree, although if the Flow-i preferentially injects vapour during inspiration while the Aisys does not, it is possible that the addition of a patient may affect the systems differently. Wash-out will also be delayed by the addition of anaesthetic vapour to the breathing system from the patient’s lungs. This should affect both systems equally, but in this case will differ with the duration of anaesthesia delivery and hence the amount of agent taken up by the patient. 
Do the authors have any clinical data to indicate by how much their findings are actually altered in clinical practice?\n\nA few other comments: What determined the number of repeat measurements? In the methods it is stated wash-in times were determined “based on [the] mean of 3 repeated tests”, but in the statistics section it is said data are means “based on 3–5 repeats”.\n\nIt is stated that data are shown as means and standard deviation. However, all of the tables and figures show only mean data. Standard deviations are only given for the data specifically highlighted in the main text. Without standard deviations, the degree of imprecision of the data cannot be assessed.\n\nAlthough a statistically significant p value is defined, the authors do not state the minimum size of differences which they considered to be clinically important. In a few cases, the differences are only a few seconds and of little consequences, but in quite a few cases, the differences are 100 seconds or more and represent quite large percentage differences between the systems. An indication of a minimal clinically-important difference, in association with an indication of the degree of variability, would greatly aid interpretation of the results.\n\nStatistical tests have been applied to the differences in wash-in and wash-out for both systems (combined) at high versus low gas flows, even though these differences are large in magnitude and highly predictable. However, statistical tests do not appear to have been reported for the far more interesting differences between the anaesthesia machines at each flow rate.\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nIs the study design appropriate and is the work technically sound? Partly\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate? 
Partly\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-389
|
https://f1000research.com/articles/6-398/v1
|
30 Mar 17
|
{
"type": "Opinion Article",
"title": "Yellow fever in the Americas: the growing concern about new epidemics",
"authors": [
"Yeimer Ortiz-Martínez",
"Andrés Mauricio Patiño-Barbosa",
"Alfonso J. Rodriguez-Morales"
],
"abstract": "Yellow fever (YF) is a haemorrhagic viral disease with a high case fatality rate. It is considered a reemerging infectious disease of remarkable importance. During the last outbreaks in Angola (2015-2016) and Brazil (2016-2017), many cases of YF emerged despite high YF vaccination coverage, increasing the risk of major epidemics in the Americas. Several factors, including the vast border and migratory status of Brazil, the widespread distribution of Aedes mosquitoes and the lack of efficient health policies and surveillance systems, favour this complex epidemiological scenario of reemergence. Therefore, mass vaccination of the population at risk, public health awareness and preparedness are urgently needed in this region. This article describes the current global epidemiological situation of YF, focusing especially on the Americas, as well as the risks and vulnerabilities in the region that would be of concern for major expansion to other countries apart from Brazil.",
"keywords": [
"yellow fever",
"epidemics",
"Africa",
"Americas",
"Brazil",
"vector-borne disease",
"arbovirus"
],
"content": "Introduction\n\nYellow fever (YF) is a haemorrhagic viral, vector-borne disease with a high case fatality rate (CFR), spread by infected mosquitoes. It has reappeared as a threat to global public health, evidenced by new epidemics in several countries in Africa and South America through autochthonous transmission, and in Asia with imported cases1. Nevertheless, potential spread beyond the borders of the endemic countries, in Asia but also in Europe and North America, is a matter of global concern. Currently, there are around 1 billion people, from 49 endemic countries, who are considered at risk1,2.\n\n\nRecent outbreaks\n\nAlthough relatively wide-scale YF vaccination has been applied, a growing number of outbreaks have been documented in several African countries in the last decade. The most recent outbreak occurred in Angola, resulting in 7,344 suspected cases, 962 laboratory-confirmed cases and 137 deaths (with a CFR of 14.2%), and lasting from December 2015 to October 20162. In addition to spread of YF by autochthonous transmission, confirmed imported cases of YF were identified in China and Kenya1–3. Other countries, such as Chad, Ghana and Guinea, have also reported outbreaks or sporadic cases not linked to the outbreak in Angola1–3.\n\n\nThe concern raised from Brazil\n\nEven though no new cases have been confirmed since last year in Angola, the global threat continues, now with its epicentre in South America. An outbreak of YF has been ongoing in Brazil since December 1, 2016. Up to February 22, 2017, a total of 1,336 cases of YF infection have been reported (292 laboratory confirmed, 920 suspected and 124 ruled out), resulting in 215 deaths (101 confirmed, 109 suspected, 5 ruled out) across six states of the country (Bahia, Espírito Santo, Minas Gerais, Rio Grande do Norte, São Paulo and Tocantins). 
The current CFR is 35% (from confirmed cases) and 12% (from suspected cases)3.\n\nThe geographical spread of the cases in Brazil has led to major concern, because cases are no longer being reported just in the jungle, but also in the most densely populated cities and states such as Minas Gerais and São Paulo. Fortunately, these regions have a long history of high YF vaccination coverage in young people, in contrast with the low vaccination rates in other major urban centres of Brazil4.\n\nAlthough the epidemiology and clinical manifestations of YF should be familiar to healthcare workers in endemic countries, where clinical manifestations can overlap with other acute viral haemorrhagic fevers and other etiologies of the febrile syndrome, a rapid spread of misinformation about this harmful disease in social media and a lack of online training for healthcare workers have been reported in the recent outbreak of 2016–2017 in the Americas5,6. In addition to limited health resources, this highlights that early identification could be a challenge in Latin America, as has been observed in the past with Zika and chikungunya virus outbreaks in this region, particularly in countries such as Brazil and Colombia7,8.\n\n\nConclusions\n\nThere seems to be an almost imminent risk of YF outbreaks turning into a large epidemic9. Unvaccinated travelers heading to the affected states in Brazil are at risk of spreading the virus into areas where YF risk factors (human susceptibility, prevalence of competent vector, and animal reservoirs) are present. Ecological factors and enzootics would promote the necessary spillover that would lead to an epidemic10–12. 
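The case-fatality ratios quoted above can be recomputed from the reported counts. Pairing confirmed deaths with laboratory-confirmed cases and suspected deaths with suspected cases is an assumption about how the quoted figures were derived:

```python
def cfr(deaths, cases):
    """Case-fatality ratio as a percentage."""
    return 100.0 * deaths / cases

brazil_confirmed = cfr(101, 292)   # quoted as 35% (from confirmed cases)
brazil_suspected = cfr(109, 920)   # quoted as 12% (from suspected cases)
angola = cfr(137, 962)             # quoted as 14.2% for the Angola outbreak
print(f"Brazil confirmed: {brazil_confirmed:.1f}%, "
      f"suspected: {brazil_suspected:.1f}%, Angola: {angola:.1f}%")
```

The recomputed values round to the quoted figures, which supports that pairing of numerators and denominators.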
Moreover, the vast border of Brazil, with 10 neighboring countries/territories (Uruguay, Argentina, Paraguay, Bolivia, Peru, Colombia, Venezuela, Guyana, Suriname and French Guiana), the lack of efficient health policies and surveillance systems, and the distribution of Aedes vectors (as well as the uncontrollable sylvatic vector species in the genera Haemagogus and Sabethes), raise the possibility of widespread YF throughout the Americas, including the USA. The USA has suitable conditions for autochthonous cases in areas such as South Florida, where Aedes albopictus is present and has been linked to transmission of dengue virus (another flavivirus), chikungunya and possibly Zika.\n\nMass vaccination of the at-risk population13, and public health awareness and preparedness, are urgently needed to control the current 2016–2017 outbreak in Brazil and prevent a possible epidemic related to this deadly disease. More studies, as well as new innovative strategies for vector control (e.g. involving community participation), early prevention (e.g. sampling in risk areas to look for asymptomatic subjects), and warning and enhanced surveillance (using smartphones), are necessary in order to improve the scenario of this reemerging arboviral threat14,15.",
"appendix": "Author contributions\n\n\n\nYOM, AMPB and AJRM all participated in the writing and editing of the manuscript. All authors have agreed to the final content of this article.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nGardner CL, Ryman KD: Yellow fever: a reemerging threat. Clin Lab Med. 2010; 30(1): 237–260. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWorld Health Organization: Yellow fever situation report. 2016; [accessed 08/03/2017]. Reference Source\n\nWorld Health Organization: Yellow fever - Brazil. World Health Organization. 2017; [accessed 09/03/2017]. Reference Source\n\nDyer O: Yellow fever stalks Brazil in Zika's wake. BMJ. 2017; 356: j707. PubMed Abstract | Publisher Full Text\n\nOrtiz-Martínez Y, Jiménez-Arcia LF: Yellow fever outbreaks and Twitter: Rumors and misinformation. Am J Infect Control. 2017; pii: S0196-6553(17)30148-7. PubMed Abstract | Publisher Full Text\n\nOrtiz-Martinez Y: Yellow fever: Massive open online courses (MOOCs) in the outbreaks era. Travel Med Infect Dis. 2017; pii: S1477-8939(17)30034-0. PubMed Abstract | Publisher Full Text\n\nRodriguez-Morales AJ, Villamil-Gómez WE, Franco-Paredes C: The arboviral burden of disease caused by co-circulation and co-infection of dengue, chikungunya and Zika in the Americas. Travel Med Infect Dis. 2016; 14(3): 177–9. PubMed Abstract | Publisher Full Text\n\nRodríguez-Morales AJ: Zika: the new arbovirus threat for Latin America. J Infect Dev Ctries. 2015; 9(6): 684–685. PubMed Abstract | Publisher Full Text\n\nRodriguez-Morales AJ, Villamil-Gómez WE: Fiebre Amarilla: De nuevo, una preocupación global. Hechos Microbiol. 2014; 5(1): 1–3. Reference Source\n\nRifakis PM, Benitez JA, De-la-Paz-Pineda J, et al.: Epizootics of yellow fever in Venezuela (2004–2005): an emerging zoonotic disease. Ann N Y Acad Sci. 
2006; 1081(1): 57–60. PubMed Abstract | Publisher Full Text\n\nWeaver SC: Host range, amplification and arboviral disease emergence. Arch Virol Suppl. 2005; (19): 33–44. PubMed Abstract | Publisher Full Text\n\nBisanzio D, McMillan JR, Barreto JG, et al.: Evidence for West Nile virus spillover into the squirrel population in Atlanta, Georgia. Vector Borne Zoonotic Dis. 2015; 15(5): 303–10. PubMed Abstract | Publisher Full Text\n\nGrobusch MP, van Aalst M, Goorhuis A: Yellow fever vaccination - Once in a lifetime? Travel Med Infect Dis. 2017; 15: 1–2. PubMed Abstract | Publisher Full Text\n\nJaramillo-Martinez GA, Vasquez-Serna H, Chavarro-Ordoñez R, et al.: Ibagué Saludable: a novel tool of Information and Communication Technologies for surveillance, prevention and control of dengue, chikungunya, Zika and other vector-borne diseases in Colombia. J Infect Public Health. 2017; (accepted, in press # JIPH-D-17-00109).\n\nBenítez JA, Rodríguez Morales AJ, Salas MC, et al.: Puestos de Notificación de Triatominos (PNTs) como Alternativa de Vigilancia Epidemiológica No Convencional para la Enfermedad de Chagas en Venezuela. Acta Científica Estudiantil. 2007; 5(4): 147–163. Reference Source"
}
|
[
{
"id": "21529",
"date": "04 Apr 2017",
"name": "Jean-Paul J. Gonzalez",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nTitle:
Although the title is appropriate and clearly leads to the risk of YF re-emergence in the Americas, the manuscript is more centred to South and eventually Central Americas. Also, for this matter (i.e. maintain the title), the authors could emphasize strongly on the risk of imported cases in temperate zone (i.e. Central and North Americas) during the boreal summer and Aedes spp. activity in Central-North America (e.g. as it is documented elsewhere for Dengue virus - New Mexico or Texas – and, Airport malaria in the US).\n\nAbstract: “Angola” appears at first, also if the authors want to focus on the Americas, it will be better, in my opinion, to have a short sentence at the end of the abstract that focus on imported risk from endemic area outside of Americas (i.e. Africa).\n\nThe authors wrote “despite high YF vaccination coverage”, this is not accurate: Indeed, in many areas and populations worldwide, YF vaccination coverage is discouraging low for years (e.g. Nigeria). This needs to be clear: YF vaccine is certainly the best live attenuated vaccine among all, the less expensive and the first of its kind, consequently there is no reasons today – except politics and funding allocation - to have the people of endemic areas not yet entirely immunized with a real 100% vaccine coverage.\n\nIntroduction: For the reader, CFR needs to be expressed as a number of a general historical consensus. Needs also to document the historical dimension of multiple consistent re-emergence of Yellow fever since it discovery beside the excellence of the vaccine(s). While frequency and size of outbreaks are recently (a decade ago) increasing.\nRecent outbreaks: China emergence needs to be more specific (i.e. risk) from where (climatic zone) these imported cases were observed.\n“Concern raised from Brazil”:\nFrom the general title or this of such chapter section, one is misleading “Americas (title) or Brazil (this section)”? 
I suggest something like: “From Brazilian experience, a concern of YF risk for the Americas”\n\nLine 3: “epicentre:” this needs to be more precise geographically or the sentence clearly linked to the following one, starting by “Indeed, …“\n\nSecond section, line 4: “a long history of high YF vaccination coverage”, I am not sure this is applicable to Minas Gerais’s remote areas, at least for “long history”. The lack of YF vaccination coverage was raised several times by the Brazilian provincial health authorities back in the early 2000s, unable to reach the remote western zones of the province.\n\nSection 3, line 3, top of the page: to be politically correct we do not use anymore “Latin America” but “South America”.\n\nConclusions: Main concerns, regarding the YF risk of emergence/re-emergence, seems to be missing:\nVaccine: 1/ The recent outbreaks and the lack of Yellow fever vaccine stock piling (WHO). This needs to be strategized (YF vaccine availability) by the country health authorities and international community. 2/ Also the lifelong protection of the vaccine, its inocuity, and the reduction by 1/10 of the immunity dose are new and of extremely high importance (i.e. for the public & public health).\n\nBiosurveillance needs to be stressed: Mosquito biosurveillance is an important issue to control the epidemic risk, also Haemagogus and Sabethes are specific for South America and have well studied, the risk and ability of Aedes albopictus (expansion) to transmit the virus in the Americas needs to be assessed and an entomological priority set up when needed (i.e. Public health priority in at risk areas).\n\nTrans-border risk. Ultimately traveler’s from/to endemic areas need to be covered by a mandatory international certificate of vaccination to protect the borders (trans-border risk).\n\nThe long time mystery of the absence of YFV in South East Asia can be also stressed in term of global risk.",
"responses": [
{
"c_id": "2621",
"date": "04 Apr 2017",
"name": "Alfonso Rodriguez-Morales",
"role": "Author Response",
"response": "Dear Drs. Gonzalez and Richt Thanks for you valuable comments. The first thing that should be pointed out is that this is not a Review Article, is a short Opinion Article. Nevertheless, we fully agree to carefully revise our manuscript for a new version (Version 2), considering all the comments you made and based on that correct it accordingly base on each of them. Regard your comments: Title: Although the title is appropriate and clearly leads to the risk of YF re-emergence in the Americas, the manuscript is more centred to South and eventually Central Americas. Yes, certainly this was focused on the concern of expansion in Latin America beyond Brazil where currently is an epidemic situation, where, since the beginning of the outbreak in December 2016 up to 29 March 2017, there were 1,987 cases of yellow fever reported (574 confirmed, 926 discarded, and 487 suspected under investigation), including 282 deaths (187 confirmed, 24 discarded, and 71 under investigation). The case fatality rate (CFR) is 33% among confirmed cases. Also, for this matter (i.e. maintain the title), the authors could emphasize strongly on the risk of imported cases in temperate zone (i.e. Central and North Americas) during the boreal summer and Aedes spp. activity in Central-North America (e.g. as it is documented elsewhere for Dengue virus - New Mexico or Texas – and, Airport malaria in the US). We agree with this comment. This will be definitively included in our new version. Abstract: “Angola” appears at first, also if the authors want to focus on the Americas, it will be better, in my opinion, to have a short sentence at the end of the abstract that focus on imported risk from endemic area outside of Americas (i.e. Africa). Fully agree, we will change the abstract according those considerations. 
The authors wrote “despite high YF vaccination coverage”, this is not accurate: Indeed, in many areas and populations worldwide, YF vaccination coverage is discouraging low for years (e.g. Nigeria). This needs to be clear: YF vaccine is certainly the best live attenuated vaccine among all, the less expensive and the first of its kind, consequently there is no reasons today – except politics and funding allocation - to have the people of endemic areas not yet entirely immunized with a real 100% vaccine coverage. We would rephrase that, in order to make clear that although in some areas of some countries at risk, there is a high YF vaccination coverage, there are many areas and populations worldwide, where that is low for years (e.g. Nigeria). Introduction: For the reader, CFR needs to be expressed as a number of a general historical consensus. We will explain more about the CFR historical reports. Needs also to document the historical dimension of multiple consistent re-emergence of Yellow fever since it discovery beside the excellence of the vaccine(s). While frequency and size of outbreaks are recently (a decade ago) increasing. Ok. We will also comment on this, according to your recommendation. Recent outbreaks: China emergence needs to be more specific (i.e. risk) from where (climatic zone) these imported cases were observed. Ok. Now is more detailed available information about it, then we will address this in the revised version. “Concern raised from Brazil”: From the general title or this of such chapter section, one is misleading “Americas (title) or Brazil (this section)”? I suggest something like: “From Brazilian experience, a concern of YF risk for the Americas” Well, the concern is for Americas, Brazil is already with epidemics. Then, given that, we will change the title of section to “From Brazilian experience, a concern of YF risk for the Americas”. 
Line 3: “epicentre:” this needs to be more precise geographically or the sentence clearly linked to the following one, starting by “Indeed, …“ Ok, we will correct it. Second section, line 4: “a long history of high YF vaccination coverage”, I am not sure this is applicable to Minas Gerais’s remote areas, at least for “long history”. The lack of YF vaccination coverage was raised several times by the Brazilian provincial health authorities back in the early 2000s, unable to reach the remote western zones of the province. Agree; we will make this clarification. Section 3, line 3, top of the page: to be politically correct we do not use anymore “Latin America” but “South America”. Well, that is not really accurate; both terms are correct. But Latin America includes both Central and South America. You can consult any reference and you will realize this. South America is not interchangeable with Latin America; using it would exclude Central America and Mexico. Conclusions: Main concerns, regarding the YF risk of emergence/re-emergence, seems to be missing: Vaccine: 1/ The recent outbreaks and the lack of Yellow fever vaccine stock piling (WHO). This needs to be strategized (YF vaccine availability) by the country health authorities and international community. This will be included in our new revised version. 2/ Also the lifelong protection of the vaccine, its inocuity, and the reduction by 1/10 of the immunity dose are new and of extremely high importance (i.e. for the public & public health). This too. Biosurveillance needs to be stressed: Mosquito biosurveillance is an important issue to control the epidemic risk, also Haemagogus and Sabethes are specific for South America and have well studied, the risk and ability of Aedes albopictus (expansion) to transmit the virus in the Americas needs to be assessed and an entomological priority set up when needed (i.e. Public health priority in at risk areas). We will add comments about this. Trans-border risk. 
Ultimately traveler’s from/to endemic areas need to be covered by a mandatory international certificate of vaccination to protect the borders (trans-border risk). Ok, agree. We will include comments about this. The long time mystery of the absence of YFV in South East Asia can be also stressed in term of global risk. Ok, we will make also comments on this."
}
]
},
{
"id": "21389",
"date": "18 Apr 2017",
"name": "Paola Barato",
"expertise": [
"Reviewer Expertise Infectious diseases"
],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nIn general, it is an interesting and well written opinion article. Some comments:\nTITLE: Okay.\nABSTRACT: Okay.\nINTRODUCTION: 1. I consider it to be desirable to include in the introduction the name of the virus (species) which causes yellow fever.\n2. Please review the punctuation of this sentence: ..\"In Asia, but also Europe and North America, Nevertheless, potential spreads beyond the borders of the endemic countries is a matter of global concern.\"\n\nRECENT OUTBREAKS 3. Because this section is referring only to information outside of Americas, I respectfully suggest modified the subtitle as: RECENT OUTBREAKS OUTSIDE OF AMERICAS\n\nTHE CONCERN RAISED FROM BRAZIL In the introduction was stated..\"the lack of efficient health policies\", however in the development of this idea in this section there is very little (a sentence) about what are the Brazilian health policies for yellow fever or for vector-borne disease. This information could be very useful to go in deep to discussion about the \"lack of efficient health policies\"\nCONCLUSIONS Again, the statement \"the lack of efficient health policies\" needs a deeper discussion in the previous section to be included in the conclusions.\nI have read this submission. 
I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard, however I strongly recommend to include the comments outlined above.\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Partly\n\nAre arguments sufficiently supported by evidence from the published literature? Yes\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Yes",
"responses": [
{
"c_id": "2651",
"date": "19 Apr 2017",
"name": "Alfonso Rodriguez-Morales",
"role": "Author Response",
"response": "Dear Dr. BaratoThank you very much for your valuable assessment and comments. Regard the specific comments:INTRODUCTION:1. I consider it to be desirable to include in the introduction the name of the virus (species) which causes yellow fever.Agree, we will include it.2. Please review the punctuation of this sentence: ..\"In Asia, but also Europe and North America, Nevertheless, potential spreads beyond the borders of the endemic countries is a matter of global concern.\"Agree, we will review it.RECENT OUTBREAKS3. Because this section is referring only to information outside of Americas, I respectfully suggest modified the subtitle as: RECENT OUTBREAKS OUTSIDE OF AMERICASThanks for your comment, we will modify it.THE CONCERN RAISED FROM BRAZILIn the introduction was stated..\"the lack of efficient health policies\", however in the development of this idea in this section there is very little (a sentence) about what are the Brazilian health policies for yellow fever or for vector-borne disease.This information could be very useful to go in deep to discussion about the \"lack of efficient health policies\"We will include information about the Brazilian health policies for yellow fever and vector-borne diseases.CONCLUSIONSAgain, the statement \"the lack of efficient health policies\" needs a deeper discussion in the previous section to be included in the conclusions.Ok, we will go deeper in the discussion regarding that point."
}
]
}
] | 1
|
https://f1000research.com/articles/6-398
|
https://f1000research.com/articles/5-889/v1
|
13 May 16
|
{
"type": "Opinion Article",
"title": "Sequestering seawater on land: a water-based solution to global issues",
"authors": [
"Stéphane Boyer",
"Marie-Caroline Lefort",
"Marie-Caroline Lefort"
],
"abstract": "The ‘surplus’ of oceanic water generated by climate change offers an unprecedented opportunity to tackle a number of global issues through a very pragmatic process: shifting the excess water from the oceans onto the land. Here we propose that sea-level rise could be mitigated through the desalination of very large amounts of seawater in massive desalination plants. To efficiently mitigate sea-level rise, desalinized water could be stored on land in the form of crop, wetlands or new forests. Based on a US$ 500 million price to build an individual mega desalination plant with current technology, the cost of controlling current sea-level rise through water desalination approaches US$ 23 trillion. However, the economic, environmental and health benefits would also be immense and could contribute to addressing a number of global issues including sea-level rise, food security, biodiversity loss and climate change. Because these issues are intimately intertwined, responses should aim at addressing them all concurrently and at global scale.",
"keywords": [
"Biodiversity loss",
"Climate Change",
"Desalination",
"Food security",
"Sea-level rise",
"Sustainable Development Goals"
],
"content": "Introduction\n\nAlthough the impacts of climate change on the oceans are ‘harder to see than receding glaciers’1, the rise in sea-level and its economic and social consequences are already visible for people inhabiting low lying oceanic islands2. Seawater thermal expansion and the melting of glaciers and polar icecaps3,4 have led to an average sea-level rise of 3.2 [2.8 to 3.6] mm per year between 1993 and 20105. Rising oceans cause coastal land to be lost or become inhabitable6 and will likely generate millions of ‘climate change migrants’7 as well as major economic and environmental damage in the near future1,8. However, this ‘surplus’ of water offers an unprecedented opportunity to tackle a number of global issues through a very pragmatic process: shifting the excess water from the oceans onto the land.\n\n\nStoring desalinized water\n\nHere we propose that sea-level rise could be mitigated through the desalination of very large amounts of seawater in massive desalination plants.\n\nThe resulting economic, environmental and health benefits would be considerable. Desalinized seawater can be used to grow crops in desertified and drought-prone areas9. This can directly contribute to increased food security in countries where water resources for agriculture are limited, by the reliable production of local food. Water is also needed to refill lakes and river systems dried up from human consumption and rising temperatures, as is already done for the Jordan River in Israel10. The second largest reserve of freshwater after polar icecaps, is groundwater, but its depletion in recent years due to increased water demand (mainly for agriculture) and to a very long recycling time11, is also contributing to sea-level rise12. Desalinized water could be used to counterbalance groundwater depletion and maintain current levels. 
To ensure long-term ’storage’ on land, desalinized water could also be ‘captured’ in the form of restored wetland vegetation and novel forested areas. Wetlands provide essential ecosystem services, particularly relevant in a changing climate13, but 87% of wetland areas have been lost since 1700 AD14. Novel forests will not only capture and store water but also act as important carbon sinks (Figure 1), thereby supplementing existing forests, which may have reached saturation15, and mitigating climate change16. These restored or newly created habitats will also contribute to the conservation of particularly vulnerable and declining biodiversity14,17, thus tackling yet another major global issue.\n\n\nDesalinizing the excess seawater\n\nToday’s desalination plants are designed mainly to produce potable water for human consumption, but also to support agricultural activities. As freshwater resources are becoming more unreliable in many parts of the world, the number and the size of these facilities are rapidly increasing. The Sorek desalination plant built in Israel in 2013 reached full capacity in 2015 and is now producing up to 624,000 m3 of desalinized seawater per day, thereby providing potable water for 20% of Israeli households18. Although most of the 17,000 existing desalination plants are smaller than the Israeli mega plant, globally 80 million m3 of seawater were processed per day in 201319 and this figure was predicted to reach 97.5 million m3 in 201520. Yet this is still a drop in the ocean: the oceans expand every year by 9–12 trillion m3. To counteract such an increase, 46,000 mega plants like the one in Israel would be required. With an individual price tag of US$ 500 million, they would cost US$ 23 trillion to build. However, technological advances and even bigger plants could significantly reduce this cost. 
The methodology used in Sorek and most modern desalination facilities is reverse osmosis20, where seawater is forced through semi-permeable membranes at 27 times atmospheric pressure to overcome the osmotic pressure of seawater21. To achieve this, reverse osmosis requires large amounts of energy. Therefore, an important area of research and innovation is the production of renewable energy such as solar, wind and tide-generated electricity to power desalination plants. Another important consideration is the fact that water used in agriculture does not require the same quality as drinking water22. As a consequence, desalination for agricultural purposes is technically less challenging and significantly cheaper23.\n\n\nIncentives to engage on the proposed path\n\nThe projected acceleration of sea-level rise24 means that the desalinating capacity required for the proposed response is always increasing. A critical tipping point is the melting of Antarctica’s ice shelves, which is projected to become irreversible if atmospheric warming exceeds 1.5 to 2 degrees Celsius above current temperatures4. This point could be reached in less than 50 years under the current emission scenario5. Given the timeframe required to deploy a worldwide array of massive desalination facilities, as well as the means and infrastructures to redistribute desalinized water where needed, it may not be achievable in the next 50 years. However, benefits from engaging on the proposed path will be perceptible from the outset. The consequences of initiating the construction of massive desalination facilities will directly contribute to 9 of the United Nations’ 17 Sustainable Development Goals25 (Figure 2). The progression to these positive outcomes is stepwise. 
At a local scale, these include the creation of jobs, the development of infrastructure, the driving of innovation in water treatment methods, and the possibility of increasing food production in famine-prone areas through the reliable and sustainable provision of water for agriculture. These outcomes should be seen as incentives to engage in the development of large-scale seawater desalination facilities, particularly in areas where freshwater availability is unreliable and food security is poor.\n\n\nConclusion\n\nAlthough a radical option, the proposed strategy remains simple in principle, relies on existing and continuously improving technology and is scalable to mitigate sea-level rise and contribute to addressing a number of global issues including food security, climate change and biodiversity loss. Because these issues are intimately intertwined, solutions that only address one of them have a limited chance of success. There is a pressing need for proposing and testing more proactive and ambitious ways to address multiple global issues under one sole umbrella. The massive financial investment required to mitigate sea-level rise through seawater desalination is likely to be largely balanced by socio-economic and environmental benefits. This perspective raises a number of critical questions relating to the financing and ownership of thousands of mega desalination plants around the world; the mechanism for distributing desalinised water to agriculture and other uses; and the governance of such global cooperation. The other major hurdle is a global political plan to engage in a worldwide coordinated effort for mitigating sea-level rise.",
"appendix": "Author contributions\n\n\n\nS.B. developed the concept. S.B and M.-C.L. wrote the manuscript and prepared the figures.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nAllison EH, Bassett HR: Climate change in the oceans: Human impacts and responses. Science. 2015; 350(6262): 778–82. PubMed Abstract | Publisher Full Text\n\nMcCubbin S, Smit B, Pearce T: Where does climate fit? Vulnerability to climate change in the context of multiple stressors in Funafuti, Tuvalu. Glob Environ Chang. [Internet]. Elsevier Ltd; 2015; 30: 43–55. Publisher Full Text\n\nWouters B, Martin-Español A, Helm V, et al.: Glacier mass loss. Dynamic thinning of glaciers on the Southern Antarctic Peninsula. Science. 2015; 348(6237): 899–903. PubMed Abstract | Publisher Full Text\n\nGolledge NR, Kowalewski DE, Naish TR, et al.: The multi-millennial Antarctic commitment to future sea-level rise. Nature. 2015; 526(7573): 421–5. PubMed Abstract | Publisher Full Text\n\nIPCC: Climate Change 2014: Synthesis Report. Contribution of Working Groups I, II and III to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. IPCC. 2014; 151. Reference Source\n\nPopkin G: Breaking the Waves. Science. 2015; 350(6262): 756–9. PubMed Abstract | Publisher Full Text\n\nNicholls RJ, Hanson SE, Lowe JA, et al.: Sea-level scenarios for evaluating coastal impacts. WIREs Clim Chang. 2014; 5(1): 129–50. Publisher Full Text\n\nHinkel J, Lincke D, Vafeidis AT, et al.: Coastal flood damage and adaptation costs under 21st century sea-level rise. Proc Natl Acad Sci U S A. 2014; 111(9): 3292–7. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBurn S, Hoang M, Zarzo D, et al.: Desalination techniques — A review of the opportunities for desalination in agriculture. Desalination. [Internet]. Elsevier B.V.; 2015; 364: 2–16. 
Publisher Full Text\n\nBismuth C, Hansjürgens B, Yaari I: Technologies, Incentives and Cost Recovery: Is There an Israeli Role Model? In: Hüttl RF, Bens O, Bismuth C, Hoechstetter S, editors. Society - Water - Technology SE - 16 [Internet]. Cham: Springer International Publishing; 2016; 253–75. Publisher Full Text\n\nGleeson T, Befus KM, Jasechko S, et al.: The global volume and distribution of modern groundwater. Nat Geosci. 2016; 9: 161–167, In press. Publisher Full Text\n\nWada Y, Van Beek LPH, Sperna Weiland FC, et al.: Past and future contribution of global groundwater depletion to sea-level rise. Geophys Res Lett. 2012; 39(9): 1–6. Publisher Full Text\n\nCostanza R, de Groot R, Sutton P, et al.: Changes in the global value of ecosystem services. Glob Environ Chang. [Internet]. Elsevier Ltd; 2014; 26: 152–8. Publisher Full Text\n\nDavidson N: How much wetland has the world lost? Long-term and recent trends in global wetland area. Mar Freshw Res. 2014; 65(1981): 934–41. Publisher Full Text\n\nHedin LO: Biogeochemistry: signs of saturation in the tropical carbon sink. Nature. 2015; 519(7543): 295–6. PubMed Abstract | Publisher Full Text\n\nCanadell JG, Raupach MR: Managing forests for climate change mitigation. Science. 2008; 320(5882): 1456–7. PubMed Abstract | Publisher Full Text\n\nNewbold T, Hudson LN, Phillips HR, et al.: A global model of the response of tropical and sub-tropical forest biodiversity to anthropogenic pressures. Proc Biol Sci. 2014; 281(1792): pii: 20141371. PubMed Abstract | Publisher Full Text | Free Full Text\n\nIDE Technologies: Sorek Project. [Internet]. 2015; [cited 2015 Nov 22]. Reference Source\n\nTalbot D: Megascale Desalination. The world’s largest and cheapest reverse-osmosis desalination plant is up and running. MIT Technology Review. 2016. Reference Source\n\nGude VG: Desalination and Sustainability - An Appraisal and Current Perspective. Water Res. [Internet]. Elsevier Ltd; 2016; 89: 87–106. 
PubMed Abstract | Publisher Full Text\n\nKim SJ, Ko SH, Kang KH, et al.: Direct seawater desalination by ion concentration polarization. Nat Nanotechnol. Nature Publishing Group; 2010; 5(4): 297–301. PubMed Abstract | Publisher Full Text\n\nQuist-Jensen CA, Macedonio F, Drioli E: Membrane technology for water production in agriculture: Desalination and wastewater reuse. Desalination. 2015; 364: 17–32. Publisher Full Text\n\nZarzo D, Campos E, Terrero P, et al.: Spanish experience in desalination for agriculture. 2016; 3994(April): 52–66.\n\nWatson CS, White NJ, Church JA, et al.: Unabated global mean sea-level rise over the satellite altimeter era. Nat Clim Chang. 2015; 5: 1–5. Publisher Full Text\n\nUnited Nations: Resolution adopted by the General Assembly on 25 September 2015. 2015. Reference Source\n\nSaito T, Yasuda H, Sakurai M, et al.: Monitoring of Stem Water Content of Native and Invasive Trees in Arid Environments Using GS3 Soil Moisture Sensors. Vadose Zo J. 2016; 15(3). Publisher Full Text\n\nChapotin SM, Razanameharizaka JH, Holbrook NM: A biomechanical perspective on the role of large stem volume and high water content in baobab trees (Adansonia spp.; Bombacaceae). Am J Bot. 2006; 93(9): 1251–64. PubMed Abstract | Publisher Full Text\n\nCermák J, Kucera J, Bauerle WL, et al.: Tree water storage and its diurnal dynamics related to sap flow and changes in stem volume in old-growth Douglas-fir trees. Tree Physiol. 2007; 27(2): 181–98. PubMed Abstract | Publisher Full Text\n\nPire R, Ojeda M, Pereira A, et al.: Extracción de N, P y K en tres cultivares de vid en la zona de el Tocuyo, estado Lara. (Removal of N, P and K in three grape cultivars in El Tocuyo, Lara state, Venezuela.). Rev Fac Agron (LUZ). 2001; 18: 201–16. Reference Source"
}
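The scaling figures quoted in the article above (46,000 Sorek-scale plants to offset a 9–12 trillion m3 annual ocean volume increase, at US$ 500 million each, for a total near US$ 23 trillion) can be sanity-checked with a short sketch. All numbers come from the text; the assumption that a plant runs at full capacity 365 days a year is ours, not the authors'.

```python
# Back-of-envelope check of the desalination scaling claims in the article.
# Figures are taken from the text; continuous full-capacity operation
# (365 days/year) is an assumption of this sketch.

SOREK_M3_PER_DAY = 624_000               # Sorek plant output, m^3/day
OCEAN_GAIN_M3_PER_YEAR = (9e12, 12e12)   # annual ocean volume increase, m^3
PLANT_COST_USD = 500e6                   # quoted build cost per mega plant

# Yearly output of one Sorek-scale plant (~2.28e8 m^3/year)
per_plant_yearly = SOREK_M3_PER_DAY * 365

# Number of plants needed to absorb the low and high gain estimates
plants_needed = [gain / per_plant_yearly for gain in OCEAN_GAIN_M3_PER_YEAR]

# Build cost at the article's round figure of 46,000 plants
total_cost_usd = 46_000 * PLANT_COST_USD

print(plants_needed, total_cost_usd)
```

Under these assumptions the range works out to roughly 40,000–53,000 plants, consistent with the article's round figure of 46,000, and the build cost at US$ 500 million per plant is US$ 23 trillion, matching the abstract.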
|
[
{
"id": "15875",
"date": "25 Aug 2016",
"name": "J.H. Martin Willison",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis is an interesting conceptual paper. It is well written and the concept is clearly outlined, but the analysis is very incomplete. In its current form, the paper is just a trial balloon floating off into the unknown. For example, the paper briefly refers to a desalination plant in Israel that uses reverse osmosis (RO) technology. The plant is described as the largest in the world, yet the reference used to support this statement is undated. I found other sources indicating that it is indeed a very large RO plant and does supply 20% of Israel's potable water, but may no longer be the largest in the world. Remarkably, the paper lacks reference to an even larger desalination plant in Saudi Arabia at Ras al Khair which uses both RO and distillation. About 50% of Saudi Arabia's potable water needs are supplied by desalination (mostly obtained by distillation) and desalinated water has long been used in Saudi Arabia for growing crops. A more thorough analysis of the concept would examine both cases.\n\nClearly, sea levels are rising due to addition of water and thermal expansion. Also clearly, water removed by desalination temporarily reduces sea levels by moving water from the ocean phase to various freshwater phases (such as groundwater, lakes, bottles, land plants, and so on). In addition, there will probably be an increase in the amount of water in atmospheric phases (cloud, etc.). 
Huge problems with this proposal are not addressed, however, and are mostly not acknowledged. Monetary cost is addressed, but this should be the least of our concerns.\n\nWhere would the excess salt go and what would be the effects of local increases in ocean salinity? How much energy is consumed in the desalination, brine management, and water distribution processes and how much of that is generated by burning fossil fuels? Manufacture, maintenance and operation of the desalination plants involves mining, metal processing, transport of materials, and so on. Will the removal of water by desalination compensate for the impact of those processes on the climate system, or will this 'solution' simply add to the problem by creating more sea level rise than is reduced by desalination? How would the water be moved from its ocean-side sources to places where it would be useful for plant growth? How much energy will that cost, and how much of it would be fossil-fuel derived? What are the potential agricultural and forest productivities? How would food be delivered from production site to market? Given that the Mediterranean basin is likely to be a suitable initial site (noting that desalination in Israel is identified as a prime example), what are the regional socio-political implications of such a huge project on this region? Where are the production sites and the markets in the Mediterranean basin model? Would such a large-scale project relieve or exacerbate socio-political tensions in the region? How does the potential for desalination in the Mediterranean basin region compare with the experience of desalination in the Red Sea basin?\n\nWithout a deeper analysis to answer the wide range of questions that this concept invites, I am left unconvinced that the concept could be beneficial. 
A better start might be to do a full environmental and socio-economic cost-benefit analysis of the large desalination plants in Israel and Saudi Arabia and try scaling up the costs and benefits of the concept from there.\n\nGroundwater depletion is a serious global problem, but many of the places where this is happening are in the centres of large continents, far from any ocean. I suggest that moving water over long distances makes the concept unrealistic in full cost-benefit terms. Perhaps desalination could be used to create green corridors along routes of prevailing winds, and in that way help water to move further inland, but there is no hint of this large-scale geo-engineering idea in the paper, and I would anyway tend to dismiss it as not feasible without a detailed analysis of a specific site, such as the Arabian peninsula. Realistically, to make a difference at a global level, massive quantities of water would have to be desalinated and moved from the oceans towards the middle of large dry continents (Asia, Africa, Australia) using wholly renewable sources of energy. How much, how far, and at what environmental costs?\n\nAll in all, I think this paper is intellectually stimulating but too preliminary to be indexed as a scientific paper.",
"responses": [
{
"c_id": "2665",
"date": "25 Apr 2017",
"name": "Stéphane Boyer",
"role": "Author Response",
"response": "Thanks for the review.At the time of the original concept of the paper, Sorek was the biggest fully operational desalination plant. As pointed out by reviewer 1, another plant in Saudi Arabia: Ras al Khair is now the world’s biggest in terms of quantity of water produced. We have modified the manuscript to also mention that plant. Because the cost of building Ras al Khair is about 14 times that of Sorek, the latter remains a more economic solution for the proposed idea. Therefore, we maintained the focus on the Sorek model. We also provide new references regarding the data for the Sorek plant (Faigon 2016).We acknowledge that the proposed concept comes with a number of issues and not all of these are addressed in the paper. Our aim here is to propose a new potential solution to mitigate sea-level rise and foster new thinking in the scientific community. Overall there will probably be no excess salt because the aim is, at best, to maintain the existing volume of water in the ocean. However there will be significant local increases in ocean salinity, which are likely to cause environmental issues. This is now discussed. The amount of energy needed to power a large desalination plant is considerable. Powering 46,000 of them may seem difficult to conceive at the moment, but this would only happen through time. Desalination plants that are built today for potable water consumption or to support agricultural activities are already turning towards renewable sources of energy (see citations in the revised manuscript: Caldera et al. 2016; Shahabi et al. 2014). There is no reason why this would not be the case for the proposed plants. Again, the idea is not to build 46,000 plants at once. It would be a gradual process. 
As time goes by, it is likely that the costs of building desalination plants will decrease, as has happened with solar energy technologies over the past decade, while their capacity and the portion of energy from renewable sources will increase. For each plant, there should be careful consideration of sustainability and environmental issues, just as is the case for plants that are built today. The aim of the paper is to propose a global solution. It is not to select sites or analyse the socio-political implications for a particular region. It is obvious that every country engaged in this effort would face very specific social, political, economic and geographic issues, some of which may depend on when these countries engage in the process. It is therefore difficult, for example, to predict the socio-economic or political state of a particular country 5, 10 or 20 years ahead. We don’t think it is sensible to extrapolate from a full environmental, socio-economic, cost-benefit, geographic and political analysis of one region. For these reasons, a case study and extrapolation is really not the angle we want to take."
}
]
},
{
"id": "14081",
"date": "23 Jan 2017",
"name": "Tushaar Shah",
"expertise": [],
"suggestion": "Not Approved",
"report": "Not Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nTitle and Abstract: The main title is fine; the sub-title “a water-based solution to global issues’ is vague; desalination of sea water is the central argument of the article and should appear in the title somewhere.\nArticle Content: The article proposes a bold, nay sensational, solution to the potential consequences of seawater rise due to global warming. The proposal to the humankind is: begin by investing US $ 23 trillion in building 46,000 mega desalination plants to desalinate 9-12 trillion m3 of sea water every year, and use it for agriculture, wetland restoration, groundwater recharge, afforestation and in general, meeting fresh water scarcity around the world. Once this threshold capacity is commissioned, build more desalination plants to take care of increased global warming as it accelerates thermal expansion and further seawater rise. Doing this would protect low-lying island nations and coastal communities globally although other threats from climate change and warming will still challenge the humankind.\nI would have normally thought the proposal bizarre, even reckless. But global warming and sea water rise are formidable and complex challenges and the world needs to be receptive to all manner of ideas, even if they sound bizarre. 
I would have thought that a policy proposal of such gigantic economic and ecological dimensions should be advanced with analytical effort of commensurate magnitude and depth; but this is missing in this analysis, which I find a trifle casual in its approach. In fact, I am afraid the article has not subjected, to even a preliminary scrutiny, the various implications of its proposals using evidence freely available on the internet.\nFor example, there are important questions of technical feasibility that have not been touched, let alone addressed. If fossil energy is to be used for desalination, how much will the 46,000 mega-desalination plants themselves contribute to further global warming and seawater rise (unless, of course, they all use solar or wind power)[1]? At a very conservative 5 kWh/m3 [2], producing 9-12 trillion m3 of desalinated sea water per year would use 45-60 trillion kWh of thermal energy every year, over two times the 22.7 trillion kWh of total electricity the world generated in 2012 [3]. The article provides no hint of how much more energy would be needed to transport desalinated water from plants to places of use, with what further impact on global warming? Given that the world’s annual use of groundwater around 2010 was estimated at only 1.1 trillion m3 (Margat and van der Gun 2013 p.6), do depleted aquifers have empty storage to absorb a significant portion of the 9-12 trillion m3 that desalination plants will have to dispose of every year? Likewise, given that the world’s total annual water withdrawals for all uses are of the order of 4 trillion m3 [4], is it possible for the earth to absorb the 9-12 trillion m3 of desalinated sea water/year that will be made available from desalination plants on fields, forests and wetlands?\n\nA redeeming aspect, the authors suggest, is that agricultural water use would not require removing all salts from seawater to make it drinking water quality and therefore will cost less money and energy. 
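The energy arithmetic above can be reproduced directly (a minimal sketch; the 5 kWh/m3 intensity, the 9-12 trillion m3/year volumes and the 22.7 trillion kWh 2012 generation figure are the review's quoted assumptions, not measured values):

```python
# Sketch reproducing the review's energy arithmetic.
# All inputs are figures quoted in the review, not measurements.
ENERGY_PER_M3_KWH = 5.0                 # "very conservative" 5 kWh/m3
VOLUMES_M3_PER_YEAR = (9e12, 12e12)     # proposed annual desalination volume
WORLD_GENERATION_KWH_2012 = 22.7e12     # total electricity generated in 2012

energy_kwh = [v * ENERGY_PER_M3_KWH for v in VOLUMES_M3_PER_YEAR]
multiples = [e / WORLD_GENERATION_KWH_2012 for e in energy_kwh]
# energy_kwh -> 45 and 60 trillion kWh per year
# multiples  -> roughly 2.0x to 2.6x total 2012 world generation
```
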
This is a dangerous proposal. Dumping 9-12 trillion m3 of partially desalinated seawater with say 10,000 ppm salts (compared to 35,000 ppm in sea water) could, over a decade or two, salinize soils across the entire earth’s surface to the same level as the lower Indus basin in Pakistan, with frightening consequences.\nAre there competing ideas/proposals for coping with sea water rise that we can compare the present proposal with? The authors are silent on this. Would it not be far cheaper to build levees around low-lying island nations to counter sea level rise? Might it be more cost effective to desilt the seas to expand their storage and use the silt to build the levees? Many geoengineering proposals far less bizarre than the one proposed here have been dismissed; but injecting aerosols into the atmosphere to reflect sunlight away from the earth[5] would be far cheaper and a lot less risky than 46,000 mega desalination plants, and might resolve all problems of global warming rather than just seawater rise.\nWould it be cheaper to live with those impacts or use the money saved to compensate people-at-risk from seawater rise? Climate Central estimated that worldwide 147-216 million people live on land that will be below sea level or regular flood levels by the end of the century, assuming emissions of heat-trapping gases continue on their current trend [6]. The Boyer-Lefort article proposes a one-time investment in building desalination plants at US $ 106,500-156,000 per person-at-risk but glosses over US $ 42,000-82,000/year per person-at-risk ever after in operational cost of desalinating sea water at US $ 1/m3 [7]. Would it take a lot more money to help these communities to make a transition to the Netherlands-type dykes-and-levee ecosystem to cope with rising sea level?\nDeveloping countries have been, for over two decades, fighting hard to get the world to commit US $ 100 billion in assistance to help them move to low-carbon technologies; but without any success. 
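The per-person figures above follow from simple division (a sketch; the US $23 tn capital cost, US $1/m3 operating cost, 9-12 trillion m3/year volumes and 147-216 million people-at-risk are the quoted assumptions):

```python
# Sketch reproducing the review's per-person cost figures.
CAPITAL_USD = 23e12                   # quoted one-time cost of 46,000 plants
OPEX_USD_PER_M3 = 1.0                 # quoted desalination cost per m3
VOLUMES_M3_PER_YEAR = (9e12, 12e12)
PEOPLE_AT_RISK = (147e6, 216e6)       # Climate Central estimate

capital_per_person = sorted(CAPITAL_USD / p for p in PEOPLE_AT_RISK)
# -> roughly 106,500 to 156,500 USD per person-at-risk (one time)
opex_per_person = sorted(OPEX_USD_PER_M3 * v / p
                         for v, p in zip(VOLUMES_M3_PER_YEAR, (216e6, 147e6)))
# -> roughly 41,700 to 81,600 USD per person-at-risk per year
```
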
After all, US $ 100 billion to protect the global carbon sinks is just 1 percent of the US $ 9-12 trillion the idea proposed here would cost. If this has not clicked, what would it take to persuade the world to invest a third of its GDP (estimated in 2014 at US $ 78 trillion [8]) in desalination plants to mitigate the impacts of sea level rise?\nThere are also other important questions about economic feasibility. Does this proposal compare well with other comparable proposals doing the rounds? A 2014 report from the International Energy Agency estimated that it would cost US $ 44 trillion in investments, between now and 2050, for the world to change entirely to renewables and halt climate change and sea water rise altogether [9]. A similar analysis by IPCC estimated the investments required to stabilize GHG levels in the atmosphere at US $ 13 trillion. These clearly appear to be far better deals than the one proposed here since even after 46,000 mega desalination plants, most ill-effects of global warming, bar sea water rise, will continue to afflict humankind.\nConclusion: Overall, even a cursory examination suggests that the proposal advanced is unrealistic, even disingenuous. I would have expected the authors to undertake a modicum of scrutiny before offering their piece for publication. I do believe that its science is unacceptably poor.\nReferences:\nMargat, Jean and Jac van der Gun. 2013. 
Groundwater Around the World: A Geographic Synopsis, London: Taylor and Francis.\n\n[1] Which authors hint at but make no effort to cost.\n[2] https://en.wikipedia.org/wiki/Desalination\n[3] https://en.wikipedia.org/wiki/World_energy_consumption\n[4] http://www2.worldwater.org/data.html\n[5] https://royalsociety.org/topics-policy/publications/2009/geoengineering-climate/\n[6] http://www.climatecentral.org/news/new-analysis-global-exposure-to-sea-level-rise-flooding-18066\n[7] https://en.wikipedia.org/wiki/Desalination\n[8] https://en.wikipedia.org/wiki/Gross_world_product\n[9] https://www.technologyreview.com/s/527196/how-much-will-it-cost-to-solve-climate-change/",
"responses": [
{
"c_id": "2666",
"date": "25 Apr 2017",
"name": "Stéphane Boyer",
"role": "Author Response",
"response": "The reviewer seems to have misread the paper on two important points. First, the proposed strategy does not ‘begin’ by investing US $23 trillion. We propose a gradual process where the building of desalination megaplants would obviously take many years. The figure of $23 tn corresponds to the current price to build 46,000 clones of the Sorek plant. This is only a conservative estimate of what it could cost. It is very unlikely that all plants will be exactly the same and it is very unlikely that their cost will remain the same through time. Second, the paper does not mention the building of ‘more desalination plants to take care of increased global warming’. We simply stress the fact that the building of the proposed plants would inevitably have a positive impact on the mitigation of global warming. The original manuscript stated that ‘an important area of research and innovation is the production of renewable energy such as solar, wind and tide-generated electricity to power desalination plants’. We have now developed this into a new subsection focusing on the main technical limitations of operating the proposed plants (i.e. energy consumption and environmental impacts). We also mention the issue around transportation with support form the literature. The reviewer’s concern about ‘annual water withdrawal for all use’ being limited to 4 trillion m3 makes little sense because we propose to store a large proportion of the water in forest, wetlands and as groundwater. On the latter, although the annual use of groundwater was 1.1 trillion m3 in 2010, the renewing of this resource is very slow and as a result, the estimated global groundwater depletion during 1900–2008 alone was estimated to be ∼4.5 trillion m3 (Konikow 2011) and the rate of depletion likely increased after 2008. 
Our statement about agriculture was that ‘water used in agriculture does not require the same quality as drinking water’; the reviewer’s interpretation as ‘dumping 9-12 trillion m3 of partially desalinated seawater with say 10,000 ppm salts’ is quite a jump. An acceptable salinity for irrigation water is < 1600 ppm of total dissolved solids, as opposed to 400 ppm for drinking water (Sarai Atab et al. 2016). We have now added this information to the manuscript. This paper is meant to be an opinion, not a review. As such, we chose to focus on explaining what the core new idea is and discuss its implications rather than reviewing all other potential solutions, which have been published and discussed at length elsewhere. This is not the point of this manuscript. We also want to stress that the paper is limited in its length. With regards to alternative solutions involving renewable energies: all recent simulations show that if the world changed entirely to renewable energies by 2050, sea levels would continue to rise, possibly for centuries, due to sea-level rise commitment (Levermann et al. 2013), including the melting of Antarctic ice shelves (Golledge et al. 2015). Even under the rather unlikely IMAGE 2.6 scenario, which includes very aggressive emissions reductions early in the 21st century and deployment of negative emissions technologies later in the century to achieve radiative forcing of 2.6 W/m2 in 2100, the reviewer’s statement does not stand. Although 100% of renewable energy by 2050 would be a prodigious achievement, it would cost more than the investment we propose ($44 vs $23 tn), and it would most certainly not ‘halt climate change and sea water rise’ as claimed by the reviewer. We do not claim to propose a silver bullet that would surpass all other solutions proposed to date. 
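The salinity thresholds quoted above imply how much salt must still be removed for each use (a sketch; the ~35,000 ppm seawater figure is the reviewer's, the 1,600 and 400 ppm thresholds are from Sarai Atab et al. 2016):

```python
# Sketch: fraction of total dissolved solids (TDS) that must be removed
# from seawater to meet each water-quality threshold quoted above.
SEAWATER_TDS_PPM = 35_000

def removal_fraction(target_ppm, source_ppm=SEAWATER_TDS_PPM):
    """Fraction of dissolved solids removed to reach target_ppm."""
    return 1 - target_ppm / source_ppm

irrigation = removal_fraction(1_600)  # ~0.954: >95% of salts still removed
drinking = removal_fraction(400)      # ~0.989
```

So irrigation-grade water relaxes the requirement only from roughly 98.9% to roughly 95.4% salt removal; the saving in treatment effort is real but modest.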
However, we believe storing desalinised seawater on land is a valuable solution and one that is worth exploring because:\n- It is novel.\n- It brings specific advantages that could be complementary to other solutions presented to date.\n- It can create a number of positive outcomes from its inception, even if the building of 46,000 Sorek-like plants is not achieved.\n- It is easily reversible, as opposed to other geoengineering solutions. If need be, the plants can simply be turned off. As time goes by, our capacity to succeed is likely to increase (due to better performing and cheaper technology being developed).\n- It has the potential to be a long-term solution, as opposed to the temporary storing of water as snow in Antarctica proposed by Frieler et al. (2016), for example.\n\nReferences cited in this response:\nFrieler K, Mengel M, Levermann A. Delaying future sea-level rise by storing water in Antarctica. Earth Syst Dyn. 2016;7(1):203–10.\nKonikow LF. Contribution of global groundwater depletion since 1900 to sea-level rise. Geophys Res Lett. 2011;38(17):1–5.\nLevermann A, Clark PU, Marzeion B, Milne GA, Pollard D, Radic V, et al. The multimillennial sea-level commitment of global warming. Proc Natl Acad Sci. 2013;110(34):13745–13750. Available from: http://www.ncbi.nlm.nih.gov/pubmed/23858443\nGolledge NR, Kowalewski DE, Naish TR, Levy RH, Fogwill CJ, Gasson EGW. The multi-millennial Antarctic commitment to future sea-level rise. Nature. 2015;526(7573):421–5. Available from: http://www.nature.com/doifinder/10.1038/nature15706\nSarai Atab M, Smallbone AJ, Roskilly AP. An operational and economic study of a reverse osmosis desalination system for potable water and land irrigation. Desalination. 2016;397:174–84. Available from: http://dx.doi.org/10.1016/j.desal.2016.06.020"
}
]
}
] | 1
|
https://f1000research.com/articles/5-889
|
https://f1000research.com/articles/6-553/v1
|
24 Apr 17
|
{
"type": "Case Report",
"title": "Case Report: Novel mutations in TBC1D24 are associated with autosomal dominant tonic-clonic and myoclonic epilepsy and recessive Parkinsonism, psychosis, and intellectual disability",
"authors": [
"Erika Banuelos",
"Keri Ramsey",
"Newell Belnap",
"Malavika Krishnan",
"Chris D. Balak",
"Szabolcs Szelinger",
"Ashley L. Siniard",
"Megan Russell",
"Ryan Richholt",
"Matt De Both",
"Ignazio Piras",
"Marcus Naymik",
"Ana M. Claasen",
"Sampathkumar Rangasamy",
"Matthew J. Huentelman",
"David W. Craig",
"Philippe M. Campeau",
"Vinodh Narayanan",
"Isabelle Schrauwen",
"Erika Banuelos",
"Keri Ramsey",
"Newell Belnap",
"Malavika Krishnan",
"Chris D. Balak",
"Szabolcs Szelinger",
"Ashley L. Siniard",
"Megan Russell",
"Ryan Richholt",
"Matt De Both",
"Ignazio Piras",
"Marcus Naymik",
"Ana M. Claasen",
"Sampathkumar Rangasamy",
"Matthew J. Huentelman",
"David W. Craig",
"Philippe M. Campeau"
],
"abstract": "Mutations disrupting presynaptic protein TBC1D24 are associated with a variable neurological phenotype, including DOORS syndrome, myoclonic epilepsy, early-infantile epileptic encephalopathy, and non-syndromic hearing loss. In this report, we describe a family segregating autosomal dominant epilepsy, and a 37-year-old Caucasian female with a severe neurological phenotype including epilepsy, Parkinsonism, psychosis, visual and auditory hallucinations, gait ataxia and intellectual disability. Whole exome sequencing revealed two missense mutations in the TBC1D24 gene segregating within this family (c.1078C>T; p.Arg360Cys and c.404C>T; p.Pro135Leu). The female proband who presents with a severe neurological phenotype carries both of these mutations in a compound heterozygous state. The p.Pro135Leu variant, however, is present in the proband’s mother and sibling as well, and is consistent with an autosomal dominant pattern linked to tonic-clonic and myoclonic epilepsy. In conclusion, we describe a single family in which TBC1D24 mutations cause expanded dominant and recessive phenotypes. In addition, we discuss and highlight that some variants in TBC1D24 might cause a dominant susceptibility to epilepsy",
"keywords": [
"Autosomal dominant Epilepsy",
"Parkinsonism",
"psychosis",
"intellectual disability",
"TBC1D24"
],
"content": "Introduction\n\nMutations in the TBC1D24 gene are the cause of multiple rare disorders whose phenotype consists of varying degrees of intellectual disability, deafness, cortical malformations, and/or epilepsy1. To date, the disorders caused by TBC1D24 dysfunction make up a continuum of six distinct phenotypes that include DOORS syndrome (Deafness, Onochydystrophy, Osteodystrophy, mental Retardation and Seizures; autosomal recessive; AR), familial infantile myoclonic epilepsy (FIME; AR), progressive myoclonus epilepsy (PME; AR), early-infantile epileptic encephalopathy (EIEE16; AR), autosomal recessive non-syndromic hearing loss (DFNB86; AR), and autosomal dominant non-syndromic hearing loss (DFNA65; AD). TBC1D24 is highly expressed in the brain and can bind ADP ribosylation factor (ARF) 6, a small GTP-binding protein whose function serves to regulate vesicular trafficking2. Drosophila with mutations in the sky gene (TBC1D24 orthologue) have a larger readily releasable pool of synaptic vesicles and show a dramatic increase in basal neurotransmitter release3. Overall, evidence demonstrates that TBC1D24 is a critical player in synaptic vesicle endocytosis, neurotransmitter release and presynaptic function4.\n\nIn this report, we describe a single family in which TBC1D24 mutations cause both dominant and recessive phenotypes: dominant tonic-clonic and myoclonic epilepsy, and a recessive severe disorder with epilepsy, Parkinsonian tremor, intellectual disability and psychosis. 
We discuss and highlight for the first time that dominant inheritance of TBC1D24 mutations might be associated with epilepsy.\n\n\nCase presentation\n\nA 37-year-old Caucasian female (II:1; Figure 1A) with a complex neurological phenotype characterized by myoclonic epilepsy, cerebellar ataxia, cognitive limitation, fatigue, Parkinsonism, photo-sensitivity and psychosis was referred to TGen’s Center for Rare Childhood Disorders (Supplementary Figure 1).\n\nFigure 1. (A) Pedigree and inheritance of the variants in TBC1D24, p.Pro135Leu; c.404C>T in exon 2 and p.Arg360Cys; c.1078C>T in exon 4 (NM_001199107.1). (B) Amino acid sequence alignment of both variants, demonstrating that both amino acids are conserved between species. (C) T1-weighted sagittal MRI scan of the brain, illustrating mild cerebellar atrophy affecting the superior vermis. (D) T2-weighted axial MRI scan of the brain showing mild atrophy of the hemispheres. (E) T1-weighted coronal MRI scan of the brain showing mild atrophy of the hemispheres.\n\nThe patient is the oldest child of a non-consanguineous marriage, born full term by vaginal delivery. At three months of age, she had daily episodes of vomiting, flexing and extending her legs, and shaking her arms, lasting 30 minutes to 3 hours. At seven months, she developed a Parkinsonian-like tremor in both hands. She experienced focal episodes that included dystonic attacks: spasms of one side of the body, the neck, or one side of her face, during which she was fully conscious.\n\nAt 11 months, the patient experienced generalized seizures. At two years, the patient was able to walk, but did so with an unsteady, wide-based gait. She also experienced an upper extremity myoclonus with dystonic features. A neurological examination revealed ptosis, limited upward gaze, and left esotropia. At 2.5 years, a cranial CT scan suggested mild anterior cerebellar atrophy.\n\nThe patient’s current symptoms include severe fatigue and an unpredictable sleep cycle. 
Her speech was delayed (2.5 years onset) and is slow and slightly dysarthric. She was alert and oriented to person and place, but her attention and concentration were reduced. Pure-tone audiometry at 250–8,000 Hz at age 15 showed that she had normal hearing (0–15 dB), and tympanometry showed normal stapedial reflexes. She continually complained of intermittent tinnitus. Ophthalmologic exams at ages 13 and 36 showed reduced visual acuity (ETDRS and Snellen acuity testing; OD:40, Snellen 20/160; OS: 36; Snellen 20/200), and optical coherence tomography at age 36 showed significant retinal and optic nerve thinning (central macular thickness 191 OD, 205 OS, and average nerve rim thickness 71 OD, 67 OS). Intermittent horizontal nystagmus was also observed. Phalanges and nail beds were normal. A physical examination showed that the patient’s muscle tone, strength, and deep tendon reflexes were normal. Although she was able to walk, she had a broad-based ataxia (37 years old; Supplementary Video 1). She had dysmetria on the nose-to-finger test, as well as a coarse action tremor. She continues to have involuntary jerking of the upper more than the lower extremities that lasts up to an hour. The patient has not had any focal or generalized seizures since the age of 35.\n\nHer cognitive ability was impaired, with a significant decline observed during childhood but no significant change in cognitive ability since adolescence. At age 4.5, a Wechsler Intelligence Scale for Children (WISC-R) test showed an average overall IQ of 93, and at age 8, this was 815. A WISC-III test at 14 years resulted in an IQ score of 65, and at the age of 32 she scored an overall IQ of 67 measured by the Wechsler Adult Intelligence Scale - IV (WAIS-IV)6,7. Neuropsychological exams at ages 14 and 32 showed severe impairment in the area of visual scanning, working memory, and executive function. 
She was administered the trail making test and Stroop test at the ages of 14 and 32 and showed similar scores8,9: for the trail making test, she took 119 seconds to complete trail A, as compared to 102 seconds at age 14. She was unable to complete the trail B task on both occasions. Both of these suggest severe impairment in the areas of visual scanning, working memory, and set shifting. The Stroop test of executive functioning revealed a severely impaired range on both occasions (ages 14 and 32; T < 20). Some variability was seen in the area of working memory. The patient previously performed in the average range on the digit span (age 15), but scored in the mild deficit range on more recent testing (age 32). However, on the arithmetic test, which is also a measure of working memory, she previously performed in the severe deficit range, but currently scored in the moderate deficit range. Performance on measures of perceptual reasoning, processing speed and verbal abstraction was impaired as well, as measured by WISC-III and WAIS-IV at ages 14 and 32, and remained stable between both ages.\n\nSince 12 years of age, the patient has had episodes of visual and auditory hallucinations. In addition, she suffered from paranoia, agitated behavior, disinhibition and depression. Overall, the patient presents with intellectual disability with increasing symptoms of psychosis.\n\nThe patient has undergone two muscle biopsies, which showed little evidence of mitochondrial disease. Increased complex I and IV activities were noted using electron transport studies, although muscle and mitochondrial morphology were normal. Multiple tests for mitochondrial and nuclear DNA mutations were normal. 
A magnetic resonance imaging (MRI) scan in 2014 showed mild cerebellar atrophy in the superior cerebellar vermis and the cerebellar hemispheres (Figure 1C–D), and an overnight video EEG showed diffuse mild background slowing consistent with mild cerebral dysfunction, but no epileptiform activity was observed during the test.\n\nThe patient’s family history (Figure 1A) is notable in that the mother (I:2) experienced general tonic-clonic seizures before the age of 5, but was not treated with anticonvulsants. The mother also noted that her father had similar episodes. She also reported the presence of hearing loss starting at the age of 40, and needed hearing aids in both ears by the age of 60. Moreover, the patient’s older brother (II:2) experienced general myoclonic spells similar to those of the patient and was treated with anticonvulsants through the age of 18.\n\n\nMethods\n\nDNA was extracted from the blood of both parents and the proband. Exomic libraries were prepared with the SureSelect All Human XT v5 exome kits (Agilent Technologies, Santa Clara, CA, USA), following the manufacturer’s protocol. Sequencing was performed by 101bp paired-end sequencing on a HiSeq2000 instrument (Illumina Inc, San Diego, CA, USA). Filtered reads were aligned to the human genome (hg19/GRCh37) using the Burrows-Wheeler Aligner (BWA-MEM; v0.7.8). Reads were sorted and polymerase chain reaction (PCR) duplicates were removed using Picard (v1.111), and base quality recalibration and indel realignment were performed using the Genome Analysis Toolkit (GATK; v3.1-1). Variants were called jointly with HaplotypeCaller and recalibrated with GATK, annotated with dbNSFP (v2.9) and snpEff (3.5h) for protein-coding events. Prediction scores were loaded from dbNSFP (v2.9) and used for filtering.\n\nSanger sequencing was performed on the proband and brother by GeneDx (Gaithersburg, MD, USA), and on the parents by the authors. 
In short, the target areas of the gene were PCR amplified and capillary sequencing was performed. A bi-directional sequence was assembled, aligned to reference gene sequences based on human genome build GRCh37/UCSC hg19 and analyzed for known familial sequence variant(s) (Applied Biosystems Inc., Foster City, CA, USA).\n\n\nResults\n\nExome sequencing in the proband and parents led to the identification of compound heterozygous mutations in TBC1D24 in the patient (II:1), p.Pro135Leu; c.404C>T in exon 2 and p.Arg360Cys; c.1078C>T in exon 4 (NM_001199107.1) (Figure 1). These mutations were confirmed by Sanger sequencing in the entire pedigree (Figure 1). Neither variant was observed in the ExAC Browser database of 60,706 unrelated individuals10, and both were predicted to be damaging by PolyPhen-2 and MutationTaster, have a high combined annotation dependent depletion (CADD) score (26.5 and 17.7 for P135L and R360C respectively)11, and are conserved between species (Figure 1B). Both the mother and younger brother, affected by seizures at younger ages, are heterozygous for the p.Pro135Leu variant. The father is a carrier of the p.Arg360Cys variant and is asymptomatic (Figure 1A).\n\n\nDiscussion\n\nWe identified two previously unreported pathogenic variants in the TBC1D24 gene that segregate in a family with a severe complex neurological disorder and tonic-clonic and myoclonic epilepsy (p.Pro135Leu and p.Arg360Cys; Figure 1A). The p.Pro135Leu missense mutation follows an autosomal dominant pattern associated with mild seizures in the proband, mother and younger brother. The proband shows a severe but atypical recessive TBC1D24 neurological phenotype and is compound heterozygous for p.Pro135Leu and p.Arg360Cys. The p.Arg360Cys carrier shows no clinical symptoms.\n\nMutations in TBC1D24 have so far been shown to cause a variable autosomal recessive phenotype, ranging from non-syndromic hearing loss and epileptic disorders to DOORS syndrome1. 
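The segregation reported in the Results can be sketched as a simple trio-genotype check (a hypothetical helper for illustration only, not the authors' actual pipeline, which used GATK; genotypes are coded as alternate-allele counts):

```python
# Sketch: candidate compound heterozygosity from trio genotypes.
# A variant pair is compatible with compound heterozygosity (in trans)
# when the child is heterozygous for both variants and each parent
# carries exactly one of the pair.

def is_compound_het(child, mother, father):
    """Each argument maps the two variant names to alt-allele counts."""
    v1, v2 = sorted(child)
    if child[v1] != 1 or child[v2] != 1:
        return False
    maternal_v1 = mother.get(v1, 0) >= 1 and father.get(v1, 0) == 0
    paternal_v2 = father.get(v2, 0) >= 1 and mother.get(v2, 0) == 0
    maternal_v2 = mother.get(v2, 0) >= 1 and father.get(v2, 0) == 0
    paternal_v1 = father.get(v1, 0) >= 1 and mother.get(v1, 0) == 0
    return (maternal_v1 and paternal_v2) or (maternal_v2 and paternal_v1)

# Genotypes reported for this family (Figure 1A):
proband = {"p.Pro135Leu": 1, "p.Arg360Cys": 1}
mother = {"p.Pro135Leu": 1, "p.Arg360Cys": 0}
father = {"p.Pro135Leu": 0, "p.Arg360Cys": 1}
is_compound_het(proband, mother, father)  # True: one variant from each parent
```

Because each parent carries only one variant of the pair, the genotypes alone establish that the proband's two variants are in trans; real pipelines must also consider the possibility of de novo events and phasing by sequencing reads.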
An autosomal dominant inheritance pattern has only been described in cases with non-syndromic slowly progressive adult onset hearing impairment12. However, there are other reported cases of TBC1D24 mutation carriers who had seizures, such as the mother of a child with DOORS syndrome, who carries a frameshift mutation (c.1008delT, p.His336GlnfsTer12) and who had absence seizures as a child13. In another family with recessive deafness caused by a TBC1D24 mutation, a heterozygous carrier of a missense mutation (c.208G>T, p.Asp70Tyr) has had epilepsy since the age of 3, continuing into adulthood (individual IV-8 of pedigree PDKF799 in 14). In another family, the mother and half-sister of an individual carrying the NM_001199107.1:c.32A>G, p.Asp11Gly mutation had epilepsy beginning in adulthood (they were not tested for the mutation15). Finally, the father of a patient with the following TBC1D24 mutations, c.1460dupA, p.His487Glnfs*71 and c.313T>C, p.Cys105Arg, also had a history of seizures1. While it is difficult to attribute with certainty the epilepsy in these carriers or possible carriers to the TBC1D24 mutations, the incidence of epilepsy in carriers is higher than would be expected by chance co-occurrence, given that there are fewer than 40 families described thus far and the prevalence of epilepsy in the general population is 7 per 1000 individuals16. Moreover, as suggested by the family we describe here, some variants might be more likely to cause dominant epilepsy. With regard to Parkinsonism, it was previously described in one individual with DOORS syndrome who was later found to have two TBC1D24 mutations (c.619C>T, p.Gln207* and c.1126G>C, p.Gly376Arg, PMID: 27281533)1,17.\n\nThe behavior of the p.Pro135Leu variant is distinct from previously reported variants, and is of interest as it acts almost in a semi- or partial dominant manner. 
This variant occurs at a highly conserved position in the Rab-GAP N-terminal Tre2–Bub2–Cdc16 (TBC) domain of the protein. It was recently discovered that this domain directly binds phosphoinositides through a cationic pocket and that phosphoinositide binding is critical for presynaptic function4. A fly model carrying 3 clinically pathogenic mutations (the 3Glu mutant) in the phosphoinositide-binding pocket exhibits severe neurological defects, including impaired synaptic-vesicle trafficking and seizures4.\n\nThe p.Arg360Cys variant is located between the TBC and TBC–LysM (TLDc) domains. p.Arg360Leu, a change at the same site as p.Arg360Cys, has been described in a recessive state in a patient with progressive myoclonus epilepsy18. Functional studies in primary mouse cortical cells transfected with the p.Arg360Leu mutant showed significantly reduced induction of neurite outgrowth compared to wild-type1. Both p.Arg360Leu and p.Arg360Cys only seem to exhibit a clinical phenotype in a recessive or compound heterozygous state in this and previous studies18.\n\nIn conclusion, this family’s clinical presentation highlights the broad spectrum of both AD and AR TBC1D24 disorders, including Parkinsonism, psychiatric symptoms, and autosomal dominant tonic-clonic and myoclonic epilepsy.\n\n\nData availability\n\nExome data of individuals have been added to the Database of Genotypes and Phenotypes (dbGaP; http://www.ncbi.nlm.nih.gov/gap) under project phs000816. Both variants have been reported to ClinVar (http://www.ncbi.nlm.nih.gov/clinvar/) under variation IDs SCV000494664 (p.Arg360Cys), SCV000494665 and SCV000494666 (p.Pro135Leu; NM_001199107.1). 
The raw sequence data of the father (C4RCD_0194), mother (C4RCD_0193), and propositus (C4RCD_0192) were submitted to the Sequence Read Archive (SRA; http://www.ncbi.nlm.nih.gov/sra) with the respective Biosample ID numbers SAMN05687268, SAMN05687209 and SAMN05687491.\n\n\nConsent\n\nWritten informed consent for publication of the patient’s clinical details, (identifiable) clinical images and videos was obtained from the legally authorized representative (as the patient has diminished decision-making capacity, due to her intellectual disability and the disorder described here) and the patient’s family. The study was explained to the extent compatible with the subject’s understanding, and the patient was enrolled into the Center for Rare Childhood Disorders program at the Translational Genomics Research Institute (TGen). The study protocol and consent procedure were approved by the Western Institutional Review Board (study number, 20120789).",
"appendix": "Author contributions\n\n\n\nEB, IS and MK prepared the first draft of the manuscript. KR, NB, PMC and VN contributed towards the clinical summary and discussion, and assessed the patient. SS, ALS, CB, AMC performed the sequencing, and MR, RR, MDB, IP, and MN performed bioinformatics analysis. KR, NB, SS, ALS, MR, RR, MDB, CB, IP, MN, AMC, SR, DWG, MJH, IS and VN contributed to the experimental design. All authors provided expertise in genomics, were involved in the revision of the draft manuscript and have agreed to the final content.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis work was supported by private donations to TGen’s Center for Rare Childhood Disorders.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nThe authors thank the family for participating in this study and all the previous members of the C4RCD research group not included in the author list, including previous member Jason J. Corneveaux.\n\n\nSupplementary material\n\nSupplementary Figure 1: Proband at the age of 37 years.\n\nSupplementary Video 1: A video of the patient walking at the age of 37 years, showing an unsteady, wide-based gait.\n\n\nReferences\n\nBalestrini S, Milh M, Castiglioni C, et al.: TBC1D24 genotype-phenotype correlation: Epilepsies and other neurologic features. Neurology. 2016; 87(1): 77–85.\n\nFalace A, Buhler E, Fadda M, et al.: TBC1D24 regulates neuronal migration and maturation through modulation of the ARF6-dependent pathway. Proc Natl Acad Sci U S A. 2014; 111(6): 2337–42.\n\nUytterhoeven V, Kuenen S, Kasprowicz J, et al.: Loss of skywalker reveals synaptic endosomes as sorting stations for synaptic vesicle proteins. Cell. 
2011; 145(1): 117–32.\n\nFischer B, Lüthy K, Paesmans J, et al.: Skywalker-TBC1D24 has a lipid-binding pocket mutated in epilepsy and required for synaptic function. Nat Struct Mol Biol. 2016; 23(11): 965–973.\n\nWechsler D: Wechsler Intelligence Scale for Children-Revised. Psychological Corporation; 1974.\n\nWechsler D: WISC-III: Wechsler Intelligence Scale for Children: Manual. 3rd ed. Psychological Corporation, 1991; 294.\n\nWechsler D: Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV). 4th ed. Pearson Education, 2008.\n\nStroop JR: Studies of interference in serial verbal reactions. J Exp Psychol. 1935; 18(6): 643–62.\n\nTombaugh TN: Trail Making Test A and B: Normative data stratified by age and education. Arch Clin Neuropsychol. 2004; 19(2): 203–14.\n\nLek M, Karczewski KJ, Minikel EV, et al.: Analysis of protein-coding genetic variation in 60,706 humans. Nature. 2016; 536(7616): 285–91.\n\nKircher M, Witten DM, Jain P, et al.: A general framework for estimating the relative pathogenicity of human genetic variants. Nat Genet. 2014; 46(3): 310–5.\n\nAzaiez H, Booth KT, Bu F, et al.: TBC1D24 mutation causes autosomal-dominant nonsyndromic hearing loss. Hum Mutat. 2014; 35(7): 819–23.\n\nCampeau PM, Kasperaviciute D, Lu JT, et al.: The genetic basis of DOORS syndrome: an exome-sequencing study. Lancet Neurol. 2014; 13(1): 44–58. 
Rehman AU, Santos-Cortez RL, Morell RJ, et al.: Mutations in TBC1D24, a gene associated with epilepsy, also cause nonsyndromic deafness DFNB86. Am J Hum Genet. 2014; 94(1): 144–52.\n\nStražišar BG, Neubauer D, Paro Panjan D, et al.: Early-onset epileptic encephalopathy with hearing loss in two siblings with TBC1D24 recessive mutations. Eur J Paediatr Neurol. 2015; 19(2): 251–6.\n\nHirtz D, Thurman DJ, Gwinn-Hardy K, et al.: How common are the \"common\" neurologic disorders? Neurology. 2007; 68(5): 326–37.\n\nBilo L, Peluso S, Antenora A, et al.: Parkinsonism may be part of the symptom complex of DOOR syndrome. Parkinsonism Relat Disord. 2014; 20(4): 463–5.\n\nMuona M, Berkovic SF, Dibbens LM, et al.: A recurrent de novo mutation in KCNC1 causes progressive myoclonus epilepsy. Nat Genet. 2015; 47(1): 39–46."
}
|
[
{
"id": "22162",
"date": "03 May 2017",
"name": "Deepa S. Rajan",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nInteresting Case.\n\nIn the attached video, the phenotype would most likely fit into ataxia/wide-based gait/cerebellar origin in my opinion, and I think that the term parkinsonism might be misleading. Other than the mentioned tremor, were there any other features to suspect extrapyramidal disease?\n\nWere there any other variants of currently unknown significance on the exome?\n\nIs the background of the case’s history and progression described in sufficient detail? Yes\n\nAre enough details provided of any physical examination and diagnostic tests, treatment given and outcomes? Partly\n\nIs sufficient discussion included of the importance of the findings and their relevance to future understanding of disease processes, diagnosis or treatment? Yes\n\nIs the case presented with sufficient detail to be useful for other practitioners? Yes",
"responses": []
},
{
"id": "23438",
"date": "13 Jun 2017",
"name": "Brian Appavu",
"expertise": [
"My area of interest is in epilepsy and EEG"
],
"suggestion": "Approved",
"report": "Approved\n\nI read this case report with great interest. TBC1D24-related diseases represent a unique subset of conditions for which the rapid emergence of both clinical and basic science data has helped elucidate insight into the pathophysiology of this condition. This case report adds to the existing literature on TBC1D24-related diseases, and suggests a novel inheritance pattern in which an autosomal-dominant pattern of inheritance leads to a phenotype that includes epilepsy. This paper can benefit from minor revisions.\n\nWith regard to the case presentation, were any seizures or specific episodes of myoclonus captured on video EEG monitoring? If so, what did they show? If myoclonus was captured, did it appear either cortical or subcortical?\n\nThe terms variant and mutation are used at various points throughout the manuscript. To make the terminology clear, I would refer to those mutations that are pathogenic as pathologic variants, rather than mutations.\n\nDiscussion, paragraph 2 line 1: For grammatical purposes, change the following line to \"Mutations in TBC1D24 so far have been shown to cause an autosomal recessive variable phenotype, ranging from non-syndromic hearing loss to epileptic disorders and DOORS syndrome\"\n\nDiscussion, paragraph 2 line 7: Did epilepsy begin at 3 months or 3 years of age? 
Be specific.\n\nConclusion: Given that this is the first paper to suggest an autosomal-dominant mode of inheritance for TBC1D24-related epilepsy, commentary regarding the need for further functional studies of this and other related variants would help strengthen this manuscript.\n\nIs the background of the case’s history and progression described in sufficient detail? Yes\n\nAre enough details provided of any physical examination and diagnostic tests, treatment given and outcomes? Yes\n\nIs sufficient discussion included of the importance of the findings and their relevance to future understanding of disease processes, diagnosis or treatment? Yes\n\nIs the case presented with sufficient detail to be useful for other practitioners? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-553
|
https://f1000research.com/articles/5-2378/v1
|
26 Sep 16
|
{
"type": "Research Article",
"title": "Development of a clinical algorithm for treating urethral strictures based on a large retrospective single-center cohort",
"authors": [
"Yuri Tolkach",
"Thomas Herrmann",
"Axel Merseburger",
"Martin Burchardt",
"Mathias Wolters",
"Stefan Huusmann",
"Mario Kramer",
"Markus Kuczyk",
"Florian Imkamp"
],
"abstract": "Aim To analyze clinical data from male patients treated with urethrotomy and to develop a clinical decision algorithm. Materials and methods Two large cohorts of male patients with urethral strictures were included in this retrospective study, a historical cohort (1985-1995, n=491) and a modern cohort (1996-2006, n=470). All patients were treated with repeated internal urethrotomies (up to 9 sessions). Clinical outcomes were analyzed and systemized as a clinical decision algorithm. Results The overall recurrence rates after the first urethrotomy were 32.4% and 23% in the historical and modern cohorts, respectively. In many patients, the second procedure was also effective, and a third procedure was feasible in selected patients. Strictures with a length ≤ 2 cm should be treated according to their initial length. In patients with strictures ≤ 1 cm, a second session could be recommended, but not for penile strictures, strictures related to transurethral operations, or patients 31-50 years of age. A third session could be effective in selected cases of idiopathic bulbar strictures. For strictures with a length of 1-2 cm, a second operation is possible for solitary low-grade bulbar strictures, given that the age is > 50 years and the etiology is not post-transurethral resection of the prostate. For penile strictures that are 1-2 cm, urethrotomy could be attempted in solitary but not in high-grade strictures. Conclusions We present data on the treatment of urethral strictures with urethrotomy from a single center. Based on the analysis, a clinical decision algorithm is suggested, which could be a reliable basis for everyday clinical practice.",
"keywords": [
"stricture",
"urethra",
"endoscopic treatment",
"urethrotomy"
],
"content": "Introduction\n\nUrethral stricture disease is a common problem in urological practice1,2. In general, the following three main types of treatment are applied in patients with urethral stricture disease: urethral dilatation, endoscopic treatment (urethrotomy) and urethroplasty3, with urethrotomy being the most frequently applied and mastered by almost all urologists4,5. The reported success rates for endoscopic urethrotomy range widely from 32% to 73.1%, and the long-term success rate remains understudied2,6,7.\n\nThe relative ease of the procedure and its immediate initial effect in all patients may explain the overuse of urethrotomy, even in patients in whom recurrence after treatment is an obvious reality. The guidelines issued by professional organizations generally do not recommend urethrotomy in patients with strictures longer than 1 cm, or repeated urethrotomy sessions. Nevertheless, there is no firm evidence from prospective studies regarding patient selection, the role of repeated urethrotomy, or the best treatment for stricture disease in general3.\n\nThe aim of the current study was to analyze clinical data from more than 20 years of endoscopic treatment of strictures in a large cohort of male patients, as well as to develop a relevant and flexible clinical decision algorithm that could optimize the treatment of this patient group.\n\n\nMaterials and methods\n\nThe study was retrospective in nature. 
During the data acquisition period, clinical information was retrieved from the medical records of male patients who were initially treated in the urological clinic of Hannover Medical School with a diagnosis of urethral stricture using urethrotomy between 1985 and 2006.\n\nTwo large cohorts of male patients with urethral strictures were included in this study, one historical cohort (Cohort I, treatment years 1985–1995, n=491) and one contemporary cohort (Cohort II, years 1996–2006, n=470), with a total of 961 patients. The patients were divided into these two cohorts with regard to data quality (given that the data acquisition was retrospective), with more consistent and complete data in the “modern” Cohort II.\n\nClinical data, obtained from patient records, included the patient age at the time of the first and following operations, stricture etiology, stricture localization, stricture length, stricture grade and number of strictures in every patient. The results of the preoperatively performed urethrography were obtained whenever possible. The stricture length was calculated according to the urethrography images and partially derived from the urethroscopy protocols. A three-tier stricture grade classification was used. In all patients, the proportion of the minimal diameter in the stricture zone to the diameter of the normal urethra was calculated as a percent. Grade I was defined as lumen stenosis of 33% or less, Grade II as a 33–66% reduction in the lumen diameter, and Grade III as a reduction of 66% or greater. In some patients, the diameter of the urethra in the stricture zone was measured with a urethral catheter and further calculated as the percent of luminal stenosis. All information was entered in a database for subsequent statistical analysis.\n\nAll patients were treated for urethral stricture using internal cold-knife urethrotomy with the incision at 12 o’clock while they were under general anesthesia. 
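The grading scheme described in the methods is a simple thresholding of percent luminal stenosis. A minimal sketch in Python (the function name and the assignment of exact boundary values such as 33% are our assumptions, since the paper does not state how boundary cases were graded):

```python
def stricture_grade(stricture_diameter_mm: float, normal_diameter_mm: float) -> int:
    """Grade a urethral stricture from the percent reduction in lumen diameter.

    Grade I:   stenosis of 33% or less
    Grade II:  stenosis of 33-66%
    Grade III: stenosis of 66% or greater
    (Boundary values are assigned to the lower grade here -- an assumption.)
    """
    stenosis_pct = 100.0 * (1.0 - stricture_diameter_mm / normal_diameter_mm)
    if stenosis_pct <= 33.0:
        return 1
    elif stenosis_pct <= 66.0:
        return 2
    return 3
```

For example, a 5 mm lumen in a urethra with a normal diameter of 10 mm corresponds to 50% stenosis, i.e. Grade II under this encoding.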
Only patients without any prior treatment of urethral strictures were included in the study. Other urological conditions (e.g. benign prostatic hyperplasia, prostatitis, prostate cancer, medications) were not considered exclusion criteria. A substantial number of the patients received multiple treatment sessions (up to 9 urethrotomies). The duration of catheterization was documented for all procedures.\n\nPostoperative follow-up was performed in all patients by means of questionnaires and, in most patients (especially in the modern cohort), uroflowmetry. When stricture recurrence was suspected, ultrasound investigation, urethrography and urethroscopy were performed to aid the diagnosis. Stricture recurrence was defined as progressive deterioration of voiding, based on objective symptom assessment using the International Prostate Symptom Score (IPSS) questionnaire, together with visualization of the stricture with cystoscopy and cystography showing more than 30% urethral lumen obstruction.\n\nIRB approval was not required by our institution due to the retrospective nature of the study.\n\nStatistical analysis was performed using the STATISTICA 8.0 software (StatSoft, Tulsa, USA). All data samples were tested for normality. Pair-wise comparison of the different parameters among clinical groups was performed using parametric and non-parametric methods. A p-value < 0.05 was considered statistically significant. Correlation analysis was performed to identify the associations of clinical/perioperative variables with the outcome. 
Logistic regression, multiple regression and discriminant analysis were used to create a model for re-stricture rate prediction based on the clinical and perioperative variables.\n\nOne of the aims of our study was to develop a clinical decision algorithm based on the analysis of recurrence or success of urethrotomy in different categories of patients with different disease characteristics, which would incorporate the clinical information and allow for the selection of proper treatment in individual patients.\n\n\nResults\n\nThe patient demographics and stricture characteristics are listed in Table 1 and Table 2.\n\nComments: The age distribution difference between Cohorts I and II was not statistically significant (p>0.05, chi–square test)\n\nComments: TUR – transurethral resection, RPE – radical prostatectomy, * – combination means that more than 2 anatomical segments are affected. § – in our historical cohort, information on the number of strictures in patients with multiple strictures was available only in selected patients. Therefore, we show these cases jointly.\n\nThe age distribution of the patients in the two cohorts was comparable (p=0.16). More than 70% of patients in both cohorts were 41–80 years old, and most were in the 61–70-year-old group. This was reflected in the stricture etiology data (Table 2), with a prevalence of strictures related to prostatic operations (transurethral resection of the prostate and radical prostatectomy), which are usually performed in the aforementioned age group.\n\nOne of the main differences between the cohorts, which might influence the outcomes, is the catheterization time: most patients in the historical cohort were on a catheter for fewer than 3 days (67.4% for only 1 day), whereas approximately 90% of patients in the contemporary cohort were catheterized for 3 days or more. 
This parameter was further analyzed as a prognostic factor for the success/failure of urethrotomy in patients with urethral strictures.\n\nIn our analysis, especially in relation to the development of a clinical decision algorithm, we used the second cohort, which was better documented and fully supported by clinical and radiological data, while Cohort I was used as a reference and control for some critical issues that arose during the analysis of the modern cohort.\n\nThe overall recurrence rates for Cohort II, according to the number of consecutively performed urethrotomies, can be observed in Figure 1A. For this cohort, the first and second operations had similar recurrence rates. The recurrence rate significantly increased after the third procedure (p<0.001). The overall recurrence rate after the first operation in Cohort I was 32.4% (159 out of 491 patients), which was higher than for Cohort II (23%).\n\nA - Recurrence rate with respect to the number of urethrotomies in each patient. B - Recurrence rate with respect to the etiology after the 1st, 2nd and 3rd urethrotomies. C - Recurrence rate with respect to the patient age after the 1st and 2nd urethrotomies. D - Recurrence rate with respect to the stricture location and number of consecutive procedures. E - Recurrence rate with respect to the stricture length (≤1 cm and 1–2 cm) and number of urethrotomies. F - Recurrence rate with respect to the number of consecutive strictures in each patient after the 1st and 2nd attempts. G - Recurrence rate with respect to the catheterization duration after the 1st urethrotomy.\n\nWhen the stricture etiology was considered (Figure 1B), differences in the recurrence rates were identified in Cohort II, with the highest recurrence rates after the first operation in patients with traumatic lesions and strictures of infectious origin. 
Interestingly, only 1 out of 6 patients with failure of the first urethrotomy (initial n=16) recurred after the second treatment in the trauma group. No other etiological group was associated with an improved success rate following the second operation. In contrast with all other etiological groups, which had a generally unfavorable course, patients with idiopathic disease had a stable recurrence rate from the first to the third procedure (where the number of cases was sufficient to show a tendency). In Cohort I (Figure 3-A), a similar success level was detected for the first urethrotomy, with the exception of a low recurrence rate for infection-related strictures. For the second attempt, a substantial increase in the recurrence rate was observed in all etiology groups, except for strictures related to catheterization.\n\nA - Recurrence rate with respect to the etiology after the 1st and 2nd urethrotomies. B - Recurrence rate with respect to the patient age after the 1st urethrotomy. C - Recurrence rate with respect to the stricture location and number of consecutive procedures (1st, 2nd and 3rd).\n\nThe recurrence rates in Cohort II after the first and second urethrotomy were analyzed with respect to the age of the patients (Figure 1C). Due to inadequate numbers of patients, the age-dependent outcomes of further treatment attempts were not analyzed. Importantly, the lowest recurrence rates after the first procedure were in the 81–90 (only 2 out of 24 patients, 8%) and >90-year-old groups (0 out of 4 patients, 0%), compared with an overall rate of 19–29% in younger patients, without statistically significant differences between groups. However, significant differences that negatively affected the success rate after the second urethrotomy were observed for the 31–40 and 41–50-year-old groups, demonstrating that the second treatment attempt was far less successful in those patients. 
In the Cohort I controls (Figure 3-B), the same trend was observed favoring the 81–90-year-old group, but a significant difference negatively affecting the outcome of the 41–50-year-old group was evident compared to almost all other age groups (all p<0.05).\n\nThe location of the stricture influenced the outcomes of urethrotomy in Cohort II (Figure 1D), demonstrating an unfavorable course of penile strictures after the second treatment compared to bulbar and prebulbar strictures. The third operation in penile strictures failed in more than half of the patients. In Cohort I, a slight tendency toward increasing recurrence rates after the second treatment of bulbar and prebulbar strictures and a significant increase for penile strictures were observed. Generally, the third treatment attempt was unfavorable for all patients, and combination strictures had an intermediate position.\n\nInterestingly, in Cohort II, there were no significant differences in the recurrence rates for patients with a stricture length of 1 cm or less compared to strictures that were 1–2 cm in length (Figure 1E). Multiple strictures tended to be more recurrent than single ones after the first procedure, with no difference after the second (Figure 1F). In Cohort I, the length and multifocality of strictures showed no statistically demonstrable influence on the outcome (p>0.05).\n\nOne of the important findings in Cohort II is that prolonged catheterization (6–10 days) tended to be more favorable in terms of recurrence than ultrashort (1–2 days) and short (3–5 days) regimens (p<0.01) (Figure 1G). 
In contrast, 67.4% of all patients in Cohort I were postoperatively catheterized for only 1 day, and only a minority were catheterized for more than 5 days.\n\nThe stricture grade (calculated as the percent of urethral lumen obstruction), available for analysis in n=255 patients in Cohort I and n=176 in Cohort II, did not influence the outcome after the first urethrotomy (p>0.05).\n\nAssuming that the stricture length influences the operative outcomes of urethrotomy, we further analyzed the available data from Cohort II (Figure 4 and Figure 5).\n\nA - Recurrence rate with respect to the patient age (*n=5; p1 <0.01). B - Recurrence rate with respect to the stricture etiology (p2, p3 < 0.01 compared to the TURP, trauma, infection and catheterization groups). C - Recurrence rate with respect to the stricture localization. D - Recurrence rate with respect to the number of consecutive strictures in each of the patients. E - Recurrence rate with respect to the catheterization duration (p4 <0.05 compared to the other two groups).\n\nA - Recurrence rate with respect to the patient age (p1, p2 <0.05 compared to all other groups). B - Recurrence rate with respect to the stricture etiology (p3, p4 < 0.01 in comparison to other groups). C - Recurrence rate with respect to the stricture localization (p5 <0.01 to bulbar, p6 < 0.05 to prebulbar). D - Recurrence rate with respect to the number of consecutive strictures in each of the patients (p7 <0.001). E - Recurrence rate with respect to the catheterization duration (p8 <0.01 compared to the other two groups).\n\nIn the pairwise comparison of patients with or without recurrence after the first urethrotomy, only slight differences were identified. A lower recurrence rate was observed in patients older than 80 years (14% vs. 
24–44% in the other age groups, p<0.01; in the 31–40-year-old group, the recurrence rate was 10%, n=5), in patients with idiopathic strictures (19%) and post-radical prostatectomy strictures (18%), whereas higher recurrence rates were observed in patients with post-TURP strictures (39%), post-traumatic strictures (33%), post-infectious strictures (33%) and strictures related to catheterization (40%) (all p<0.01). The number of strictures and the stricture localization did not significantly influence the outcome. Patients with ultrashort (1–2 days) catheterization (n=8) had a better success rate (p<0.05). These tendencies held when all strictures, regardless of length, were considered for analysis.\n\nWhen patients with a urethral stricture length greater than 1 cm were considered as a separate group, new factors arose that were important for patient selection in this indication.\n\nThe 71–80 and 81–90-year-old groups showed a favorable trend in terms of recurrence after the first procedure, 16% and 0%, respectively, compared to 29–42% in the other groups (p<0.05), except for the 21–30-year-old group, in which 9 patients presented with a recurrence rate of 0%. Etiologically, there was no observed advantage for idiopathic and post-RPE strictures (as was the case for strictures < 1 cm), indicating that the length of a stricture represents a more important factor in these groups. Penile strictures presented with a higher recurrence rate of 37% compared to bulbar strictures (19%; p<0.01). Moreover, the number of strictures seemed to play an important role, with a > 2-fold increase in the failure rate in patients with more than 1 versus a single stricture (45% vs. 20%, p<0.001); this trend was not present in patients with short strictures (1 cm or less). The other important finding is the association between the length of catheterization and the success rate of the first procedure. 
Patients who stayed on a catheter for 6 days or more had a recurrence rate of 0% (n=23) compared to 27% (27 out of 99 patients) in patients who were on a catheter for 5 days or less (p<0.001), indicating that prolonged catheterization influences the outcome of strictures that are 1–2 cm long.\n\nThe main clinical questions arising in everyday practice are: Which patients should only be treated once? For which patients could two attempts be considered? And in which patients should urethrotomy never be performed? Further analysis focused on these questions (data from Cohort II) to develop a decision algorithm for patients with stricture disease.\n\nTo answer these clinical questions, patients with and without recurrence after the second stricture treatment were selected and compared to identify factors that were indicative of treatment failure. The most important finding was that patients in whom a second operation was successful had a predominance of bulbar and prebulbar strictures, implying that penile stricture cases are unfavorable for a second urethrotomy (recurrence rates of 33%, 7% and 16% for patients with penile, prebulbar and bulbar strictures, p<0.01). 
Moreover, post-TURP etiology tended to be a greater predictor of failure than other etiological groups (recurrence rates of 31%, 20% and 17% in patients with post-TURP, idiopathic and post-traumatic strictures; p<0.05 for the two latter groups versus post-TURP strictures).\n\nTherefore, men with penile strictures and post-TURP etiology are patients in whom any attempt beyond the first is generally not reasonable.\n\nWe selected patients from our cohort (n=16) in whom 3 consecutive urethrotomy attempts were performed and the stricture recurred each time, representing a group that should initially be treated with other treatment modalities.\n\nOur intention was to identify clinical factors that might be indicative of a successful initial internal urethrotomy. However, besides a trend toward a higher number of penile strictures (43.7% of patients in this group) and post-TURP etiology (50% of patients), the other parameters were distributed equally compared to the entire study population, providing no answer to this clinical question.\n\nWe attempted to develop a prediction model based on the database of the Cohort II patients, integrating multiple clinical parameters, such as the age, stricture etiology, length, grade, localization, number of strictures and length of catheterization, for predicting the risk of recurrence. Nevertheless, statistical analysis by logistic regression, multiple regression and discriminant analysis did not reveal clear discriminating factors.\n\n\nDiscussion\n\nCold-knife direct vision urethrotomy is a technically simple and easy procedure to perform in patients with urethral strictures. As a result, it is the default treatment approach for urethral strictures compared to long-lasting, complex open urethral reconstructions, which require experience, precise surgical technique, specific instruments and, often, additional materials1,4,5. 
However, the long-term results of urethrotomy are questionable, with convincing evidence of high recurrence rates2. Nevertheless, general recommendations about who should undergo urethrotomy and who should not are still lacking3.\n\nWe present the results of an analysis of our two consecutive cohorts of patients, who were repeatedly treated with urethrotomy. The high number of patients (n=961) and multiple treatment sessions provide sufficient data for clinical decision-making in patients with urethral strictures.\n\nIn the present cohort, some patients had strictures related to trauma (n=19 and n=16 in Cohorts I and II, respectively) and post-prostatectomy strictures of the vesico-urethral anastomosis (n=20, Cohort II). Both, in our opinion, have to be considered separately because different endoscopic and other treatment modalities apply. In the case of post-prostatectomy anastomotic strictures, internal urethrotomy or another endoscopic procedure (transurethral resection or laser incision) is the only available treatment modality. These well-established procedures can be combined with experimental techniques, such as glucocorticoid injection in the resection area, with very good overall results8–11. The data derived from Cohort II demonstrated that internal urethrotomy in patients with an anastomotic stricture achieves a relatively good success rate of 90% after the first procedure (Figure 1D).\n\nTrauma-related strictures represent a separate clinical problem. Open urethroplasty at a specialized center of excellence is considered to be the best treatment, owing to the high recurrence rates after endoscopic treatment. Moreover, any attempts at urethrotomy and other urethral manipulations substantially decrease the success rate of subsequent open urethroplasty12–15. Only a few patients with short and passable strictures without coarse scarring could be considered for direct vision internal urethrotomy. 
In our small group of patients with traumatic strictures, the failure rate of the first procedure was relatively high in Cohort II (n=16, 38%, p<0.05) and comparable in Cohort I (n=19, 26%, p>0.05).\n\nAccording to our analysis of all other strictures in the anterior urethra, a set of clinical factors influences the outcomes of internal urethrotomy, namely patient age, stricture etiology, stricture length, the number of consecutive strictures in one patient, stricture localization and catheterization duration. These considerations, derived from the Cohort II analysis of the probability of success or failure of urethrotomy in particular clinical settings (depending on the characteristics of the stricture disease), allowed us to formulate a clinical decision algorithm for patients with urethral strictures.\n\nPatients who are 70 years of age and older should be considered ideal candidates for urethrotomy. The length of the stricture should only be considered in relation to other factors. In patients with short strictures (<1 cm), the etiology, number of strictures and stricture localization did not influence the success rate. The ideal duration of catheterization in this group is 1–5 days (ultra-short catheterization of 1–2 days can be considered). For strictures that are 1–2 cm long, the number of strictures and etiology, as well as the duration of catheterization (optimally 6–10 days), significantly influenced the clinical outcomes. Penile strictures (>1 cm) could be treated endoscopically in the presence of a tender stricture. Other treatments should be considered if the number of strictures in those patients exceeds 1. Bulbar strictures with a length of 1–2 cm could be treated endoscopically at least once. Having more than 1 stricture is a predictor of failure. The stricture grade and other parameters should be considered cautiously. A second treatment attempt is generally not recommended in the 31–50-year-old age group. 
Penile strictures, as well as post-TURP strictures, should only be treated once. All other localizations or etiologies, except multiple long strictures, could be attempted twice. A third attempt should not be performed except in highly selected cases of idiopathic bulbar strictures. Strictures longer than 2 cm should only be considered for open reconstruction.\n\nMoreover, other factors influenced the outcomes. In the present study, we aimed to create a prognostic model based on the aforementioned clinical parameters. However, no reliably discriminating factors could be identified, despite the clinically significant stricture-related variables. This implies that these factors were randomly distributed throughout the cohort and that neither single nor combined factors were able to predict the outcome. Therefore, other factors (e.g., severity of spongiofibrosis and individual reactivity) that were not within the scope of this study might be useful for predicting treatment outcomes. Spongiofibrosis, which is believed to significantly limit the success of internal urethrotomy in patients with stricture disease, can, according to several promising exploratory studies, be detected pre-operatively by means of magnetic resonance imaging or ultrasound and could therefore be considered alongside other clinical variables, provided that the specificity and sensitivity of the diagnostic modality reach acceptable levels16–18.\n\nOur algorithm shows some discrepancies with other large published series. Our analysis shows that in many bulbar and prebulbar strictures, a second urethrotomy, even for long strictures of up to 2 cm, can be safely attempted with promising success rates. Other authors reported that repeated urethrotomy did not improve the success rate, concluding that only a single procedure should be considered in all patients6,19. 
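The decision rules summarised above can be encoded compactly. The function name and the simplified rule set below are this sketch's own; the published algorithm (Figure 2) contains additional branches and caveats, so this is an illustration, not the authors' exact algorithm.

```python
# Illustrative encoding of the main decision rules described above.
# Simplified sketch only; the published algorithm has more branches.

def urethrotomy_recommendation(length_cm, localization, etiology,
                               n_strictures, prior_attempts):
    """Return a coarse recommendation for direct vision internal urethrotomy.

    localization: 'penile', 'prebulbar' or 'bulbar'
    etiology: e.g. 'post-TURP', 'idiopathic', 'post-traumatic'
    """
    if length_cm > 2:
        return "open reconstruction"
    if prior_attempts >= 2:
        # a third attempt only in highly selected idiopathic bulbar cases
        if localization == "bulbar" and etiology == "idiopathic":
            return "consider third urethrotomy (highly selected)"
        return "open reconstruction"
    if prior_attempts == 1:
        # penile and post-TURP strictures should only be treated once
        if localization == "penile" or etiology == "post-TURP":
            return "other treatment"
        # multiple long strictures should not be attempted twice
        if length_cm > 1 and n_strictures > 1:
            return "other treatment"
        return "second urethrotomy possible"
    return "first urethrotomy"

print(urethrotomy_recommendation(1.5, "penile", "idiopathic", 1, 1))
# other treatment
```

Expressing the rules as code makes their precedence explicit: stricture length over 2 cm dominates, then the number of prior attempts, then localization and etiology.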
According to several studies, urethrotomy should be avoided for strictures longer than 1 cm because of the high failure rates7,20,21. The duration of postoperative bladder drainage is also a matter of controversy7,21. Nevertheless, in the majority of these studies, the overall success rate was approximately 60%, implying that a more flexible algorithm could extend the indications for direct vision internal urethrotomy, even for disease with recurrent strictures. Because our patients received repeated treatment sessions in case of recurrence, we were able to perform a thorough analysis of the cases in which repeated urethrotomies were successful, leading to the development of the aforementioned algorithm and providing a therapeutic reserve before these patients are subjected to open urethral reconstruction. Certainly, this algorithm needs further investigation in a prospective trial to confirm its applicability and reliability.\n\nAnother important issue to consider is that more than 50% of all strictures originate from iatrogenic manipulations (transurethral resection, prostatectomy and catheterization), which should be a serious alert for urologists. This finding substantiates that there are no entirely safe and easy manipulations of the urethra and that the urethra is very sensitive to trauma, warranting a careful approach.\n\nOur study is not free of limitations, related to the retrospective nature of data acquisition, possible biases, and absent or inaccurate data in some patients. 
Nevertheless, this retrospective design provided extensive valid information for a thorough statistical analysis, the findings of which were implemented in our clinical decision algorithm.\n\n\nConclusions\n\nBased on two cohorts of patients, we analyzed the clinical factors related to the efficacy of primary and repeated urethrotomy in male patients with urethral stricture disease. From these findings, a flexible clinical decision algorithm was developed for this group of patients, providing a rationale for the optimal selection of patients for endoscopic treatment.\n\n\nData availability\n\nF1000Research: Dataset 1. Database of 470 patients from Cohort II (the modern cohort of our study) with full raw data, 10.5256/f1000research.9427.d13546522",
"appendix": "Author contributions\n\n\n\nFI, YT and MB conceived the study. YT and FI carried out the research. AM, MK and TH contributed to the design of the study. YT prepared the first draft of the manuscript. TH, AM, MB, FI, MW, SJ, MK and MK managed the patients. All authors were involved in the revision of the draft manuscript and have agreed to the final content.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nFerguson GG, Bullock TL, Anderson RE, et al.: Minimally invasive methods for bulbar urethral strictures: a survey of members of the American Urological Association. Urology. 2011; 78(3): 701–706.\n\nHampson LA, McAninch JW, Breyer BN: Male urethral strictures and their management. Nat Rev Urol. 2014; 11(1): 43–50.\n\nWong SS, Aboumarzouk OM, Narahari R, et al.: Simple urethral dilatation, endoscopic urethrotomy, and urethroplasty for urethral stricture disease in adult men. Cochrane Database Syst Rev. 2012; 12: CD006934.\n\nPalminteri E, Maruccia S, Berdondini E, et al.: Male urethral strictures: a national survey among urologists in Italy. Urology. 2014; 83(2): 477–484.\n\nvan Leeuwen MA, Brandenburg JJ, Kok ET, et al.: Management of adult anterior urethral stricture disease: nationwide survey among urologists in the Netherlands. Eur Urol. 2011; 60(1): 159–166.\n\nHeyns CF, Steenkamp JW, De Kock ML, et al.: Treatment of male urethral strictures: is repeated dilation or internal urethrotomy useful? J Urol. 1998; 160(2): 356–358.\n\nNaudé AM, Heyns CF: What is the place of internal urethrotomy in the treatment of urethral stricture disease? Nat Clin Pract Urol. 
2005; 2(11): 538–545.\n\nGiannarini G, Manassero F, Mogorovich A, et al.: Cold-knife incision of anastomotic strictures after radical retropubic prostatectomy with bladder neck preservation: efficacy and impact on urinary continence status. Eur Urol. 2008; 54(3): 647–656.\n\nKravchick S, Lobik L, Peled R, et al.: Transrectal ultrasonography-guided injection of long-acting steroids in the treatment of recurrent/resistant anastomotic stenosis after radical prostatectomy. J Endourol. 2013; 27(7): 875–879.\n\nEltahawy E, Gur U, Virasoro R, et al.: Management of recurrent anastomotic stenosis following radical prostatectomy using holmium laser and steroid injection. BJU Int. 2008; 102(7): 796–798.\n\nBrodak M, Kosina J, Pacovsky J, et al.: Bipolar transurethral resection of anastomotic strictures after radical prostatectomy. J Endourol. 2010; 24(9): 1477–1481.\n\nCulty T, Boccon-Gibod L: Anastomotic urethroplasty for posttraumatic urethral stricture: previous urethral manipulation has a negative impact on the final outcome. J Urol. 2007; 177(4): 1374–1377.\n\nSingh BP, Andankar MG, Swain SK, et al.: Impact of prior urethral manipulation on outcome of anastomotic urethroplasty for post-traumatic urethral stricture. Urology. 2010; 75(1): 179–182.\n\nLevine J, Wessells H: Comparison of open and endoscopic treatment of posttraumatic posterior urethral strictures. World J Surg. 2001; 25(12): 1597–1601.\n\nGoel MC, Kumar M, Kapoor R: Endoscopic management of traumatic posterior urethral stricture: early results and followup. J Urol. 1997; 157(1): 95–97. 
Osman Y, El-Ghar MA, Mansour O, et al.: Magnetic resonance urethrography in comparison to retrograde urethrography in diagnosis of male urethral strictures: is it clinically relevant? Eur Urol. 2006; 50(3): 587–593; discussion 594.\n\nSung DJ, Kim YH, Cho SB, et al.: Obliterative urethral stricture: MR urethrography versus conventional retrograde urethrography with voiding cystourethrography. Radiology. 2006; 240(3): 842–848.\n\nOh MM, Jin MH, Sung DJ, et al.: Magnetic resonance urethrography to assess obliterative posterior urethral stricture: comparison to conventional retrograde urethrography with voiding cystourethrography. J Urol. 2010; 183(2): 603–607.\n\nPansadoro V, Emiliozzi P: Internal urethrotomy in the management of anterior urethral strictures: long-term followup. J Urol. 1996; 156(1): 73–75.\n\nIshigooka M, Tomaru M, Hashimoto T, et al.: Recurrence of urethral stricture after single internal urethrotomy. Int Urol Nephrol. 1995; 27(1): 101–106.\n\nAlbers P, Fichtner J, Brühl P, et al.: Long-term results of internal urethrotomy. J Urol. 1996; 156(5): 1611–1614.\n\nTolkach Y, Herrmann T, Merseburger A, et al.: Dataset 1 in: Development of a clinical algorithm for treating urethral strictures based on a large retrospective single-center cohort. F1000Research. 2016."
}
|
[
{
"id": "16943",
"date": "24 Oct 2016",
"name": "Peter F.W.M Rosier",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved: The paper is scientifically sound in its current form and only minor, if any, improvements are suggested.\n\nApproved with reservations: A number of small changes, sometimes more significant revisions, are required to address specific details and improve the paper's academic merit.\n\nNot approved: Fundamental flaws in the paper seriously undermine the findings and conclusions.\n\nThe paper provides a detailed overview of single-center data. I somewhat miss 'time to recurrence', and ultimately the endeavour does not deliver the result that was aimed for. A few other issues: all information about self-dilatation is missing, and it is unclear how 'traumatic stricture' was defined, as distinct from 'unknown'. Last but not least, the single-center results could be much better integrated into the existing body of evidence on this topic.",
"responses": [
{
"c_id": "2659",
"date": "24 Apr 2017",
"name": "Yuri Tolkach",
"role": "Author Response",
"response": "Dear Dr. Rosier, We are very thankful for your input and review. We have added the time-to-recurrence information to the Results section of the manuscript. We agree that self-dilatation is a very important parameter to analyse; however, in our cohort these data were only partially available, making their inclusion in the analysis impossible. We have stated this explicitly in the Materials and Methods and also in the discussion of the limitations of our article. This could indeed lead to somewhat higher success rates for urethrotomy, an effect which we also state as a limitation of our study. Moreover, we have provided the definition of \"traumatic stricture\" used in our study (in the Materials and Methods). Thank you very much for your efforts. With kind regards, on behalf of all authors, Yuri Tolkach and Florian Imkamp"
}
]
},
{
"id": "17960",
"date": "29 Nov 2016",
"name": "Bastian Amend",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThe presented study is based retrospectively on a considerable patient population. This makes conclusive statements possible.\nSome questions need to be revised or answered: The time to recurrence of the urethral stricture would be necessary to know. In addition, it might be interesting to see whether specialist status has an influence on the recurrence rate. Have patients with laser urethrotomy been excluded?\nOverall, the idea of a flow chart resulting from the data analysis is excellent and valuable for daily clinical practice.",
"responses": [
{
"c_id": "2658",
"date": "24 Apr 2017",
"name": "Yuri Tolkach",
"role": "Author Response",
"response": "Dear Dr. Amend, Thank you very much for your comment. Indeed, time to recurrence is a very important parameter. We have added the time-to-recurrence information to the Results section of the manuscript. Our study does not include patients with laser urethrotomy, concentrating on optical cold-knife urethrotomy. We have stated this explicitly in the Materials and Methods. In the Materials and Methods we have also clarified that specialist status was not separately assessed in our study. We agree that this is always a very interesting and understudied confounder of procedural success, although one that is very hard to define and evaluate. Once again, thank you for your time and critical review. On behalf of all authors, Yuri Tolkach and Florian Imkamp"
}
]
},
{
"id": "18345",
"date": "08 Dec 2016",
"name": "Margit Fisch",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThe article “Development of a clinical algorithm for treating urethral strictures based on a large retrospective single-center cohort” provides durable data based on a large number of patients treated by direct vision internal urethrotomy (DVIU) for urethral stricture disease. The aim of this study was to develop an algorithm for clinical decision-making.\n\nIntroduction:\nAs discussed, success rates given in the literature vary broadly. The authors state that they range between 32% and 73.1%. This leaves out the data of Santucci et al., with significantly lower success rates of only 0 to 9%; that said, this was a smaller patient cohort1. These data should be included as they mirror the variety in results given in the literature.\n\nThe authors stated: “The guidelines issued by the professional organizations do not generally recommend urethrotomy in patients with strictures longer than 1 cm or repeated urethrotomy sessions.” These guidelines should be quoted2.\n\nResults/discussion:\n\nThe time to recurrence of the urethral stricture would be interesting to know.\n\nVesicourethral anastomosis stenosis (VUS), bladder neck stenosis (BNS) and traumatic posterior urethral stenosis are different from anterior urethral strictures. Mixing these different causes of bladder outlet obstruction should be avoided. As partially discussed, the 28 (VUS and BNS) and 16 (trauma) patients in the modern cohort should be excluded. 
If they have been excluded, which remains unclear after restudying the materials and methods as well as the discussion, this needs to be stated more precisely.\n\nIn the materials and methods section it is stated: “Only patients without any prior treatment of urethral strictures were included in the study”. However, the results list 13 patients with prior urethroplasty in the modern cohort (Table 2). These patients should be excluded. Stricture recurrence after urethroplasty is a different situation and should be considered elsewhere3.\n\nFurther on, data as shown in Figure 1 should be supported by the total number of patients analyzed: data like declining recurrence rates in repeated DVIUs in patients with prebulbar strictures hint at small sample sizes. This issue is addressed in the results: “No other etiological group was associated with an improved success rate following the second operation.” But no conclusion has been drawn from this statement.\n\nThe authors stated: “One of the important findings in Cohort II is that a prolonged catheterization (6–10 days) tended to be more favorable in terms of recurrence than ultrashort (1–2 days) and short (3–5 days) regimens (p<0.01)”. This stands in contrast to published data by Albers et al. with a comparable patient cohort4. This issue needs to be discussed.\n\nThe significant differences negatively affecting the success rate after the second DVIU in the 31–50-year-old group, demonstrating that the second treatment attempt was far less successful in those patients, need to be discussed as well. As stated: “The number of strictures and their stricture localization did not significantly influence the outcome”. This again is in contrast to most published data. 
It needs to be discussed as well.\n\nIn Figure 2, some arrows illustrating the next suggested treatment (for example: after the first DVIU for a short stricture, in case of recurrence, an arrow to other treatments in case of younger age, penile or post-TURP strictures) would be helpful for understanding the figure more quickly.\n\nOverall, the effort made to develop a clinical algorithm for how and when to apply DVIU is excellent. The data from this large cohort seem reliable. Unfortunately, statistical analysis did not reveal clear discriminating factors. This weakens the power of the proposed algorithm and should be discussed more clearly. It seems obvious that other factors, as discussed by the authors, influence the outcome more strongly. As long as these factors are not clearly identified, an algorithm such as the one proposed by the authors seems to be the most applicable tool.\n\nBased on these findings, this reviewer considers this manuscript as a minor revision.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-2378
|
https://f1000research.com/articles/6-541/v1
|
21 Apr 17
|
{
"type": "Research Article",
"title": "Looking into Pandora's Box: The Content of Sci-Hub and its Usage",
"authors": [
"Bastian Greshake"
],
"abstract": "Despite the growth of Open Access, potentially illegally circumventing paywalls to access scholarly publications is becoming a more mainstream phenomenon. The web service Sci-Hub is amongst the biggest facilitators of this, offering free access to around 62 million publications. So far it is not well studied how and why its users are accessing publications through Sci-Hub. By utilizing the recently released corpus of Sci-Hub and comparing it to the data of ~28 million downloads done through the service, this study tries to address some of these questions. The comparative analysis shows that both the usage and complete corpus is largely made up of recently published articles, with users disproportionately favoring newer articles and 35% of downloaded articles being published after 2013. These results hint that embargo periods before publications become Open Access are frequently circumnavigated using Guerilla Open Access approaches like Sci-Hub. On a journal level, the downloads show a bias towards some scholarly disciplines, especially Chemistry, suggesting increased barriers to access for these. Comparing the use and corpus on a publisher level, it becomes clear that only 11% of publishers are highly requested in comparison to the baseline frequency, while 45% of all publishers are significantly less accessed than expected. Despite this, the oligopoly of publishers is even more remarkable on the level of content consumption, with 80% of all downloads being published through only 9 publishers. All of this suggests that Sci-Hub is used by different populations and for a number of different reasons, and that there is still a lack of access to the published scientific record. A further analysis of these openly available data resources will undoubtedly be valuable for the investigation of academic publishing.",
"keywords": [
"publishing",
"copyright",
"sci-hub",
"open access",
"intellectual property",
"piracy"
],
"content": "Introduction\n\nThrough the course of the 20th century, the academic publishing market has radically transformed. What used to be a small, decentralized marketplace, occupied by university presses and educational publishers, is now a global, highly profitable enterprise dominated by commercial publishers1. This development is seen as the outcome of a multifactorial process involving the inability of libraries to resist price increases, the passivity of researchers, who do not directly bear the costs, and the merging of publishing companies, leading to an oligopoly2.\n\nIn response to these developments and rising subscription costs, the Open Access movement set out to reclaim the process of academic publishing3. Besides the academic and economic impact, the potential societal impact of Open Access publishing is receiving more attention4,5, and large funding bodies seem to agree, as more and more are adopting Open Access policies6–8. These efforts seem to have an impact: a 2014 study of scholarly publishing in the English language found that, while the adoption of Open Access varies between scholarly disciplines, an average of around 24% of scholarly documents are freely accessible on the web9.\n\nAnother response to these shifts in the academic publishing world is what has been termed Guerilla Open Access1, Bibliogifts10 or Black Open Access11 — in short, the use of semi-legal or outright illegal ways of accessing scholarly publications, such as peer2peer file sharing (for example via #icanhazpdf on Twitter10) or centralized web services like Sci-Hub/LibGen12.\n\nSci-Hub in particular, which started in 2011, has moved into the spotlight in recent years. According to founder Alexandra Elbakyan, the website uses donated library credentials of contributors to circumvent publishers’ paywalls and thus downloads large parts of their collections13. 
This clear violation of copyright not only led to a lawsuit by Elsevier against Elbakyan14, but also to her being called \"the Robin Hood of Science\"15, with both sparking further interest in Sci-Hub.\n\nDespite this, there has been little research into how Sci-Hub is used and what kind of materials are being accessed through it. A 2014 study looked at content provided through LibGen10. In 2016 Sci-Hub released data on ~28 million downloads done through the service16. These data were subsequently analyzed to determine in which countries the website is used and which publishers are most frequently accessed13, how downloading publications through Sci-Hub relates to socio-economic factors such as being based at a research institution17, and how it impacts interlibrary loans12.\n\nIn March 2017 Sci-Hub released the list of ~62 million Digital Object Identifiers (DOIs) of the content they have stored. This study is the first to utilize both the data on which publications are downloaded through Sci-Hub and the complete corpus available through the service. This allows a data-driven approach to evaluate what is stored in the Sci-Hub universe, how the actual use of the service differs from that, and what different use cases people might have for Sci-Hub.\n\n\nMethods\n\nThe data on the around 62 million DOIs indexed by Sci-Hub was taken from the dataset released on 2017-03-1918. In addition, the data on the 28 million downloads done through Sci-Hub between September 2015 and February 201616 was matched to the complete corpus of DOIs. This made it possible to quantify how often each object listed in Sci-Hub was actually requested by its user base.\n\nThe corresponding publisher, year of publication and journal for each item were retrieved from doi.org, using the RubyGem Terrier (v1.0.2, https://github.com/Authorea/terrier). Acquiring the metadata for each of the 62 million DOIs in Sci-Hub was done between 2017-03-20 and 2017-03-31. 
In order to save time, the DOIs of the 28 million downloads were then matched to the superset of the already resolved DOIs of the complete Sci-Hub catalog. In both cases, DOIs that could not be resolved were excluded from further analysis, but they are included in the dataset released with this article.\n\nFor each publisher, the number of papers downloaded was compared to the expected number of downloads, given the publisher’s presence in the whole Sci-Hub database. For this, the relative contribution to the database was calculated for each publisher, excluding all missing data. The number of actual downloads was then compared to the expected number of downloads using a binomial test. All p-values were corrected for multiple testing with the False Discovery Rate19, and post-correction p<0.05 was accepted.\n\n\nResults\n\nFor the 61,940,926 DOIs listed in the Sci-Hub data dump, a total of 46,931,934 DOIs could be resolved (75.77%). Manual inspection of the unresolvable 25% shows that nearly all of these could not be resolved because they are not available via doi.org, rather than because of a technical error in the resolution procedure (e.g. a lack of internet connection). For the data on the downloads done through Sci-Hub, 21,515,195 downloads could be resolved out of 27,819,965 total downloads (77.34%).\n\nTo estimate the age distribution of the publications listed in Sci-Hub, and which fraction of these publications is actually requested by the people using Sci-Hub, the respective datasets were tabulated according to the year of publication (see Figure 1). While over 95% of the publications listed in Sci-Hub were published after 1950, there is nevertheless a long tail, reaching back to the 1619 edition of Descriptio cometæ20.\n\nFigure 1. Red bars denote the years 1914, 1918, 1939 and 1945. Bottom: number of publications downloaded by year of publication.\n\nAs a general trend, the number of publications listed in Sci-Hub increases from year to year. 
Two notable exceptions are the time periods of the two World Wars, at whose ends the number of publications dropped to pre-1906 and pre-1926 levels, respectively (red bars in Figure 1).\n\nWhen it comes to the publications downloaded by Sci-Hub users, the skew towards recent publications is even more extreme. Over 95% of all downloads are of publications published after 1982, with ~35% of the downloaded publications being less than 2 years old at the time they were accessed (i.e. published after 2013). Despite this, there is also a long tail of publications being accessed, with articles published as early as the 1600s being amongst the downloads, and 0.04% of all downloads being made for publications released prior to 1900.\n\nThe complete released database contains ~177,000 journals, with ~60% of these having at least a single paper downloaded. The number of articles per journal appears to follow an exponential distribution, for both the total number of publications listed on Sci-Hub and the number of downloaded articles (see Supplementary Figure S1), with <10% of the journals being responsible for >50% of the total content in Sci-Hub. The skew for the downloaded content is even more extreme, with <1% of all journals receiving over 50% of all downloads.\n\nContrasting the 20 most frequent journals in the complete database with the 20 most downloaded ones (Figure 2), one observes a clear shift not only in the distribution but also in the ranking, with the most abundant journal of the whole corpus not appearing among the 20 most downloaded journals. In addition, chemistry journals appear to be overrepresented in the downloads (12 journals) compared to the complete corpus (7 journals), with no other discipline showing an increase amongst the 20 most frequent journals.\n\nFigure 2. Bottom: the 20 journals with the most downloads. 
In both panels Chemistry journals are highlighted in red.\n\nLooking at the data on a publisher level, there are ~1,700 different publishers, with ~1,000 having at least a single paper downloaded. Both the corpus and the downloaded publications are heavily skewed towards a few publishers, with the 9 most prolific publishers accounting for ~70% of the complete corpus and ~80% of all downloads, respectively (see Supplementary Figure S2).\n\nGiven the background frequency in the complete corpus, the download numbers were compared to the expected numbers using a binomial test. After false discovery rate correction for multiple testing, 982 publishers differed significantly from the expected download numbers, with 201 publishers having more downloads than expected and 781 being underrepresented. Interestingly, while some big publishers like Elsevier and Springer Nature rank amongst the disproportionately downloaded publishers, many of the large publishers, like Wiley-Blackwell and the Institute of Electrical and Electronics Engineers (IEEE), are downloaded less than expected given their portfolio (Figure 3).\n\n\nDiscussion\n\nEarlier investigations into the data provided through Sci-Hub and LibGen focused largely on either the material being accessed13 or the data stored in these resources10. This study is the first to make use of both the metadata for the whole Sci-Hub corpus and data on how this corpus is being accessed by its users.\n\nComparing actual usage with the background set of articles shows that articles from recent history are highly sought after, suggesting that embargoes imposed before publications are made Open Access are becoming less effective. These findings are in line with prior research into the motivations for crowd-sourced, peer-to-peer academic file sharing21. 
While embargoes have an impact on the use of those publications22, these hurdles are increasingly being surpassed by Black Open Access11, as provided by Sci-Hub.\n\nWhile a good part of the literature available through Sci-Hub seems to be rarely accessed, the long tail of publications, especially older ones, does see use, albeit at a lower frequency. With DOIs that are unresolvable due to issues on publishers’ sides23, and with Open Access publications that disappear behind accidental paywalls24, this use of Black Open Access might play an important role and needs to be investigated more closely. It is worth noting that all analyses related to the number of downloads are limited to the six-month period between September 2015 and February 2016, and do not necessarily reflect the complete use of Sci-Hub.\n\nLooking at the disproportionately frequented journals, one finds that 12 of the 20 most downloaded journals can broadly be classified as being within the subject area of chemistry. This effect has also been seen in a prior study looking at the downloads made from Sci-Hub in the United States12. In addition, publishers with a focus on chemistry and engineering are also amongst the most highly accessed and overrepresented. While it is unclear whether this imbalance is due to a lack of access through university libraries, it is noteworthy that both disciplines traditionally have a high number of graduates who go into industry. The 2013 Survey of Doctorate Recipients of the National Center for Science and Engineering Statistics (NCSES) of the United States finds that 50% of chemistry graduates and 58% of engineering graduates move to private, for-profit industry, while only 32% and 27%, respectively, stay at educational institutions25. 
In comparison, in the life sciences these numbers are nearly reversed, with 52% of graduates staying at educational institutions, which presumably offer more access to the scientific literature.\n\nThe prior analysis of the roughly 28 million downloads made through Sci-Hub painted a bleak picture of the diversity of actors in the academic publishing space, with around 1/3 of all articles downloaded being published by Elsevier13. The analysis presented here puts this into perspective against the whole space of academic publishing available through Sci-Hub, in which Elsevier is also the dominant force, with ~24% of the whole corpus. The general picture of a few publishers dominating the market, with around 50% of all publications being published by only 3 companies, is even more pronounced at the usage level than in the complete corpus, perpetuating the trend of the rich getting richer. Only 11% of all publishers, amongst them the already dominant companies, are downloaded more often than expected, while publications from 45% of all publishers are downloaded significantly less often than expected.\n\n\nConclusions\n\nThe analyses presented here suggest that Sci-Hub is used for a variety of reasons, by different populations. While most usage is biased towards accessing recent publications, there is a subset of users interested in historical academic literature. Compared to the complete corpus, Sci-Hub seems to be a convenient resource especially for engineers and chemists, as their overrepresentation in the downloads shows. Lastly, when it comes to the representation of publishers, the Sci-Hub data shows that the academic publishing field is even more of an oligopoly in terms of actual usage than in terms of the amount of literature published. 
Further analysis of how, by whom and where Sci-Hub is used will undoubtedly shed more light on the practice of academic publishing around the globe.\n\n\nData availability\n\nAll the data used in this study, as well as the code to analyze the data and create the figures, is archived on Zenodo as Data and Scripts for Looking into Pandora’s Box: The Content of Sci-Hub and its Usage (DOI: 10.5281/zenodo.472493)26.\n\nIn addition, the analysis code can also be found on GitHub at http://www.github.com/gedankenstuecke/scihub.",
"appendix": "Competing interests\n\nThe author uses Sci-Hub regularly in his own research. Otherwise the author declares no competing financial, personal, or professional interests.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nAcknowledgements\n\nThe author wants to acknowledge Alexandra Elbakyan for releasing both datasets used in this study. Further thanks go to John Bohannon, who analyzed and helped release the initial data on downloads from Sci-Hub, and to Athina Tzovara and Philipp Bayer, for fruitful discussion of this manuscript as well as the statistics and analyses involved.\n\n\nSupplementary materials\n\nSupplementary Figure S1: Top: The distribution of publications per journal in the whole corpus, sorted in ascending order of articles. Bottom: The distribution of downloads per journal, sorted in ascending order of downloads.\n\nSupplementary Figure S2: The proportion of the whole content as aggregated by publisher, both for the corpus (top) and downloads (bottom). Sorted by number of publications in the respective dataset. Only the 9 most frequent publishers are listed, smaller ones are grouped as other.\n\n\nReferences\n\nBodó B: Pirates in the library – an inquiry into the guerilla open access movement. Paper prepared for the 8th Annual Workshop of the International Society for the History and Theory of Intellectual Property, CREATe, University of Glasgow, UK July 6–8, 2016. 2016. Reference Source\n\nLarivière V, Haustein S, Mongeon P: The Oligopoly of Academic Publishers in the Digital Era. PLoS One. 2015; 10(6): e0127502. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRoyster P: A brief history of open access. (accessed 4th of April, 2017), 2016. 
Reference Source\n\nTennant JP, Waldner F, Jacques DC, et al.: The academic, economic and societal impacts of Open Access: an evidence-based review [version 3; referees: 3 approved, 2 approved with reservations]. F1000Res. 2016; 5: 632. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWildschut D: The need for citizen science in the transition to a sustainable peer-to-peer society. Futures. 2017. Publisher Full Text\n\nButler D: Gates Foundation announces open-access publishing venture. Nature. 2017; 543(7647): 599. PubMed Abstract | Publisher Full Text\n\nJahn N, Tullney M: A study of institutional spending on open access publication fees in Germany. PeerJ. 2016; 4: e2323. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGiles J: Trust gives warm welcome to open access. Nature. 2004; 432(7014): 134. PubMed Abstract | Publisher Full Text\n\nKhabsa M, Giles CL: The number of scholarly documents on the public web. PLoS One. 2014; 9(5): e93949. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCabanac G: Bibliogifts in LibGen? A study of a text-sharing platform driven by biblioleaks and crowdsourcing. J Assoc Inf Sci Technol. 2015; 67(4): 874–884. Publisher Full Text\n\nBjörk BC: Gold, green, and black open access. Learned Publishing. 2017; 30(2): 173–175. Publisher Full Text\n\nGardner GJ, McLaughlin SR, Asher AD: Shadow libraries and you: Sci-Hub usage and the future of ILL. In ACRL 2017, Baltimore, Maryland, March 22–25, 2017. 2017. Reference Source\n\nBohannon J: Who’s downloading pirated papers? Everyone. Science. 2016; 352(6285): 508–12. PubMed Abstract | Publisher Full Text\n\nElsevier Inc. et al. v. Sci-Hub et al. Case No. 1:15-cv-04282. 2015. Reference Source\n\nOxenham S: Meet the Robin Hood of science. (accessed 4th of April, 2017). 2016. Reference Source\n\nBohannon J, Elbakyan A: Data from: Who’s downloading pirated papers? Everyone. 2016. 
Publisher Full Text\n\nGreshake B: Correlating the Sci-Hub data with World Bank indicators and identifying academic use. The Winnower. 2016. Publisher Full Text\n\nHahnel M: List of DOIs of papers collected by Sci-Hub. figshare. 2017. Publisher Full Text\n\nBenjamini Y, Hochberg Y: Controlling the false discovery rate: A practical and powerful approach to multiple testing. Journal of the Royal Statistical Society. Series B (Methodological). 1995; 57(1): 289–300. Reference Source\n\nSnell W: De cometarum materia, qui in solis vicinia non exarserunt. In Descriptio Cometæ. Elsevier BV, 1619. 53–57. Publisher Full Text\n\nGardner CC, Gardner GJ: Fast and furious (at publishers): The motivations behind crowdsourced research sharing. Coll Res Libr. 2017; 78(2): 131–149. Publisher Full Text\n\nOttaviani J: Correction: The Post-Embargo Open Access Citation Advantage: It Exists (Probably), It's Modest (Usually), and the Rich Get Richer (of Course). PLoS One. 2016; 11(10): e0165166. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMounce R: Comparing OUP to other publishers. (accessed 4th of April, 2017), 2017. Reference Source\n\nMounce R: Hybrid open access is unreliable. (accessed 4th of April, 2017), 2017. Reference Source\n\nNational Center for Science and Engineering Statistics: 2013 Survey of Doctorate Recipients. (accessed 4th of April, 2017), 2014. Reference Source\n\nGreshake B: Data and Scripts for Looking into Pandora’s Box: The Content of Sci-Hub and its Usage. Zenodo. 2017. Data Source"
}
|
[
{
"id": "22119",
"date": "25 Apr 2017",
"name": "April Hathcock",
"expertise": [
"Scholarly Communication"
],
"suggestion": "Approved",
"report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper’s academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis is a clear and well-researched paper on a very timely topic for science communication. I have just a few issues with some of the conclusions reached and with some of the literature represented in the review.\nSo far it is not well studied how and why its users are accessing publications through Sci-Hub.\nThis isn’t necessarily true. The last year has seen a lot of articles pop up in the science communication and library literature about Sci-Hub and the whys and hows of its use, including last year’s widely shared Science article by John Bohannon, which you briefly mention. This statement should be tempered a bit.\nSpeaking of the whys of Sci-Hub, you discuss the founder’s description of how it is done but did not include any discussion from her about why she chose to develop the database. Her main occupation is as a scientist and she chose to develop Sci-Hub because of being unable to access the literature in her field. I think that story is a compelling backdrop to your own research here.\nAgain, Bohannon’s Science article from April 2016 “Who’s downloading pirated papers? EVERYONE,” gets very little mention in your paper. In any case, it certainly warrants a bit more discussion in your work. What did Bohannon do right in his analysis? Wrong? How does your work build on or diverge from his findings? 
In addition to Bohannon’s work, there have been a number of scholarly communication experts who have explored and written about the hows and whys of Sci-Hub usage, particularly in the library and information science field. I think a review of some of that literature would really help to ground your work.\nThe analyses presented here suggest that Sci-Hub is used for a variety of reasons, by different populations. You argue that your study shows that users use Sci-Hub for a “variety of reasons” but I don’t know that your research really supports that. Certainly you’ve shown what is being accessed and revealed interesting findings in terms of disciplinary, publisher, and publication date distribution, but your results can hardly be said to reveal the underlying motivations of users accessing materials from Sci-Hub. You posit some interesting theories that could explain the numbers you found (lack of access because of lack of well-funded institutional affiliation, etc.), but they are just that: theories. I’d be a bit more cautious in the conclusions you draw from your data, as interesting as they may be.\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": []
},
{
"id": "22122",
"date": "02 May 2017",
"name": "Gabriel Gardner",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nIn general, this is a clearly written and argued paper on a developing topic affecting the scholarly communications ecosystem. The author has engaged with much of the recent literature on the topic of which we are aware. The underlying data is freely available and thus possible to replicate. The quantitative analysis proceeds logically and is easy to understand. There are a few areas where we would like to see discussion expanded (noted below), though overall this paper is a very valuable contribution to the literature on this topic.\n\nSpecific Criticisms: The abstract brings up the question of who uses Sci-Hub and why. However, there is relatively little discussion of this in the paper. By our reading of the literature, the question has not been rigorously addressed to date. But some have taken steps toward an answer. Specifically, Travis (2016) is a data point worth discussing <http://www.sciencemag.org/news/2016/05/survey-most-give-thumbs-pirated-papers>. (The survey had a large response rate but should be viewed with the skepticism that would normally apply to any “open” internet survey.)\nThe Supplementary Figures are worth incorporating into the text. S2, in particular, is an informative chart. It should be improved by matching the colors for each publisher in the legend. That is, “other” should appear as the darkest blue in both bars, rather than being assigned different shades of blue as it is presently. 
That will allow readers to observe the important differences easily.\n\nYour methods section should include some additional discussion of what you mean by “expected number of downloads for each publisher.” You are using “expected” in a mathematical sense that diverges from the word’s everyday meaning, so you should spell this out for the reader.\nWe find the use of the term “Black Open Access” in the discussion section puzzling. “Guerilla open access” is more widely used, as suggested by Google Trends <https://trends.google.com/trends/explore?q=%22black%20open%20access%22,%22guerilla%20open%20access%22>. Additionally, there are important issues of “respectability politics” to consider here; there are vocal OA advocates and practitioners who condemn Sci-Hub and do not want the OA movement to be associated with it or with copyright violation. Using the word “black” may be interpreted as implying that Sci-Hub is compatible with so-called green and gold OA publishing. Librarians in particular are loath to associate Sci-Hub with the OA movement, due to professional norms that often include upholding intellectual property restrictions on ethical grounds (e.g., <http://crln.acrl.org/content/78/2/86.full>, <https://thewinnower.com/papers/3489-signal-not-solution-notes-on-why-sci-hub-will-not-open-access>. On the other end of the spectrum, Sci-Hub’s supporters and sympathizers may object to negative connotations conjured by the term “black.” None of the above comments are meant to imply that your usage of “Black Open Access” is wrong. However, if you are going to use the less familiar term, you should explain why and note that this is a contested issue.\nIn the Introduction section, your remarks on Sci-Hub’s legal status are well made, but another aspect of this is the fact that credential sharing is explicitly prohibited by many publishers (and some libraries) in their terms of use. This is worth mentioning. Elsevier’s and Wiley’s Terms are clear on this issue. 
<https://www.elsevier.com/legal/elsevier-website-terms-and-conditions> <http://onlinelibrary.wiley.com/termsAndConditions> Due to the ambiguous legality of copying factual and educational works under various copyright regimes, we prefer the terms “potentially illegal” or “likely illegal” when describing Sci-Hub’s activities. A recent ruling in India, for instance, suggests that Sci-Hub may not violate the law in that country.\n\n<https://hughstephensblog.net/2016/09/27/the-indian-high-court-decision-on-delhi-universitys-copy-shop-a-pyrrhic-victory/>\nAlso in the Introduction, the citation for the sentence discussing #icanhazpdf refers to Cabanac, 2015. However, #icanhazpdf is mentioned in that article only in passing. A more thorough analysis can be found in Gardner & Gardner, 2015. <http://eprints.rclis.org/24847/>\nBodó deserves to be cited, but there are better sources on long-term changes in the academic publishing industry. Thompson (2005) is an especially good candidate. And Royster’s slides on the history of the OA movement [3] strikes us as insufficiently authoritative. Willinsky (2006) and/or Suber (2012) are potential alternatives.\nUnder “Data Sources,” you should credit Elbakyan (not Hahnel) with releasing the list of DOIs in Sci-Hub. <https://sci-hub.cc/downloads/doi.7z> <https://twitter.com/Sci_Hub/status/843546352219017218>\n\n---\nSuber, Peter. 2012. Open Access. MIT Press Essential Knowledge Series. Cambridge, Mass: MIT Press. Thompson, John B. 2005. Books in the Digital Age: The Transformation of Academic and Higher Education Publishing in Britain and the United States. Cambridge, UK ; Malden, MA: Polity Press. Willinsky, John. 2006. The Access Principle: The Case for Open Access to Research and Scholarship. 
Cambridge, MA: MIT Press.\nMinor corrections:\nPage 2, first sentence of 2nd paragraph: Change “was gotten from” to “was obtained from.” Page 2, last sentence of 2nd paragraph (and throughout): “peer-to-peer” is preferable to “peer2peer.”\n\nPage 2, first sentence of last paragraph: Change “publications is actually” to “publications are actually.” Page 2, last sentence of 3rd paragraph: Change “lead” to “led” (past tense). Page 2, first sentence of Data Sources (and throughout): Change “DOI” to “DOIs” for plural use. Page 2, second sentence of Data Sources: Change “downloads” to “download requests” Page 2, second sentence of Resolving DOIs: Change “meta data” to “metadata.” Page 2, second sentence of Results (and throughout): Insert comma after “i.e.” Page 3, first sentence of “Which Journals are Being Read?” and first sentence of “Are Publishers Created Equal?”: Change “at least a single paper downloaded” to clarify that you’re referring to the 6 months included in the log dataset. Page 3, first paragraph of the Discussion section: Change “large” to “largely.” Page 3, first paragraph of the Discussion section: “the whole corpus of Sci-Hub” implies you used the articles themselves. Change to “metadata for the whole corpus” or something similar. Page 3, second paragraph of the Discussion section: Change “more and more surpassed” to “more and more by.” Page 3, last paragraph: errant comma after ‘the long tail of’. Page 6, “Competing interests”: Change “SciHub” to “Sci-Hub.” Reference [1] should read “Balázs Bodó” instead of “Bodó Balázs.” “Bodó” is both his legal surname and his familiar name, so he occasionally flips the order.\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? 
Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": []
},
{
"id": "22123",
"date": "05 May 2017",
"name": "Jill Emery",
"expertise": [
"Scholarly communication and scholarly publishing"
],
"suggestion": "Approved",
"report": "Approved\n\nBastian Greshake has done a good job in presenting his argument and providing supporting documentation. He may want to consider Mark Ware's 2015 STM Report noted below1 in regard to research behaviour & motivation, as there may be information in this report that helps further explain why Sci-Hub is used & who is \"reading\". Greshake's graphs readily illustrate the points he is making regarding the represented journals & publishers. His use of the publicly available data, noting both where the data is located and the scripts used to perform his study, lends transparency to his study. Lastly, these findings are of use and interest to librarians and information scientists as well as to product and resource developers looking to develop mechanisms to counter the \"Sci-Hub phenomenon.\"\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
},
{
"id": "22120",
"date": "12 May 2017",
"name": "Balázs Bodó",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThe analysis of shadow library usage data is not a trivial matter, and requires some caution, especially when someone tries to understand the processes that produce these usage numbers. The article is very modest in its aims, and hopes to present only a very basic analysis of Sci-Hub usage, but I believe more could have been done in terms of the analysis, and more caution should have been used when offering explanations.\nDuring the analysis, I think the logic of Sci-Hub allows us to distinguish between two processes: one that produces the collection, and one that consumes the collection. Articles get into the Sci-Hub collection when someone bumps into a paywall, and turns to Sci-Hub to circumvent it. This means that the corpus of Sci-Hub is indicative of works that have limited accessibility. When analyzing the corpus, the distribution of publishers, and topics, one should look at it from this perspective, and check, for example, the open access policies of the most highly represented publishers, or journals, and analyse the results not just within the Sci-Hub universe, but against the whole population of articles/journals/publishers/topics, including those with widespread open access policies.\n\nThe download numbers, on the other hand, represent the demand for an article. 
I would argue that articles with only 1 download only inform about accessibility (someone met a paywall, and downloaded the article from Sci-Hub), while articles with more than 1 download actually suggest something about demand (how many individuals were interested in that article/discipline).\nOn that note, I missed the geographic analysis, especially as some data on the location of the download was also available in the original dataset.\nRegarding the interpretation of the data: I think the analysis in the Who’s reading? section is not substantiated by the data in any manner. On the contrary, while the data covers all downloads across the globe, the interpretation relies on a US census. I don't think that is appropriate. Local usage is structured and explained by local characteristics of higher education, research, and economy. One should not generalize a US explanation to the whole dataset.\nThe analysis in the Non solus section is also misleading. It makes claims about the academic publishing space in general, while the Sci-Hub data is biased, as it only contains articles with accessibility problems. Articles, journals and publishers with no accessibility problems are probably missing from, or are heavily underrepresented in, the dataset, thus one cannot come to any conclusion on the state of academic publishing. Take the case of PLOSone as an example of why the current analysis is flawed.\nAs a result, the validity of the overall conclusions is limited.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Partly\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nPartly\n\nAre all the source data underlying the results available to ensure full reproducibility? 
Yes\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-541
|
https://f1000research.com/articles/6-526/v1
|
20 Apr 17
|
{
"type": "Research Article",
"title": "The elephant shark methylome reveals conservation of epigenetic regulation across jawed vertebrates",
"authors": [
"Julian R. Peat",
"Oscar Ortega-Recalde",
"Olga Kardailsky",
"Timothy A. Hore"
],
"abstract": "Background: Methylation of CG dinucleotides constitutes a critical system of epigenetic memory in bony vertebrates, where it modulates gene expression and suppresses transposon activity. The genomes of studied vertebrates are pervasively hypermethylated, with the exception of regulatory elements such as transcription start sites (TSSs), where the presence of methylation is associated with gene silencing. This system is not found in the sparsely methylated genomes of invertebrates, and establishing how it arose during early vertebrate evolution is impeded by a paucity of epigenetic data from basal vertebrates. Methods: We perform whole-genome bisulfite sequencing to generate the first genome-wide methylation profiles of a cartilaginous fish, the elephant shark Callorhinchus milii. Employing these to determine the elephant shark methylome structure and its relationship with expression, we compare this with higher vertebrates and an invertebrate chordate using published methylation and transcriptome data. Results: Like higher vertebrates, the majority of elephant shark CG sites are highly methylated, and methylation is abundant across the genome rather than patterned in the mosaic configuration of invertebrates. This global hypermethylation includes transposable elements and the bodies of genes at all expression levels. Significantly, we document an inverse relationship between TSS methylation and expression in the elephant shark, supporting the presence of the repressive regulatory architecture shared by higher vertebrates. Conclusions: Our demonstration that methylation patterns in a cartilaginous fish are characteristic of higher vertebrates implies the conservation of this epigenetic modification system across jawed vertebrates separated by 465 million years of evolution. In addition, these findings position the elephant shark as a valuable model to explore the evolutionary history and function of vertebrate methylation.",
"keywords": [
"DNA methylation",
"epigenetics",
"elephant shark",
"vertebrate",
"evolution",
"cartilaginous fish",
"gene regulation"
],
"content": "Introduction\n\nThe methylation of DNA at cytosine bases constitutes an epigenetic regulatory system that is essential for the development of bony vertebrates1–3. Of particular significance is the modification of CG dinucleotides, whose symmetry allows methylation signals in this context to be perpetuated by maintenance methyltransferases following DNA replication4. CG methylation and the epigenetic memory encoded by it thus form a stable but flexible storage system for molecular information.\n\nThe methylomes of studied vertebrates – including bony fish, amphibians and mammals – exhibit similar global patterns in which the majority of CG sites are methylated in somatic tissues5–9. Regulatory elements such as promoters and enhancers are an important exception to this pervasive methylation landscape, particularly when associated with short CG-rich regions termed CpG islands. At the transcription start site (TSS), the presence of methylation is associated with transcriptional silencing, an effect achieved through the inhibition of transcription factor binding and the action of proteins that recognise methylated DNA and induce an inaccessible chromatin configuration10,11. The inverse relationship of TSS methylation with gene expression has been documented across a wide range of vertebrate taxa5,8,12–16, indicating an evolutionarily important function. The molecular machinery that invokes an inactive state in response to methylation signals also appears to be conserved10. Differences in methylation at regulatory regions are linked to the definition of cell fate during developmental progression and the stable maintenance of this identity in differentiated tissues5,17–19. 
Indeed, widespread erasure of methylation marks in the cells of humans and mice plays a prominent role in the reprogramming of fate specification in both natural and experimental systems17,20,21.\n\nHigh levels of methylation outside the TSS of genes also serve an important function in vertebrate genomes. A substantial fraction of vertebrate genomes is composed of repetitive transposable elements (TEs), whose activity must be repressed to safeguard genome integrity22,23. These elements are ubiquitously methylated in vertebrate somatic tissues8,9,16,24, and experiments performed in mammalian model systems have shown this to be critical for their transcriptional repression25. Hypermethylation of gene bodies is also a conserved feature of vertebrate genomes, and – unlike methylation at the TSS – this is compatible with active transcription in all species profiled to date5,8,12,14–16,26–28. Although the relationship of intragenic methylation with gene expression levels is complex and appears to vary across taxa and even cell type5,8,12–16,26,28,29, it has been shown to suppress spurious transcription30 and regulate exon splicing31,32 in mammalian systems.\n\nThe distribution and regulatory functions of methylation in vertebrates are unique amongst the metazoa, but the evolution of this system is poorly understood. In striking contrast to the pervasive hypermethylation that characterises vertebrates, invertebrate genomes are sparsely methylated and certain species such as the nematode Caenorhabditis elegans and fruit fly Drosophila melanogaster are apparently devoid of cytosine methylation14,33–36. Where present, the predominant pattern is a mosaic configuration, in which unmethylated regions are interspersed with hypermethylated sequences, the latter preferentially located in gene bodies and in loose positive association with transcription14,33–35,37. 
Significantly, invertebrates lack the inverse relationship between TSS methylation and expression that constitutes a key regulatory mechanism in vertebrates, and the low levels of methylation do not appear to act as a control against TE activity in their genomes14,35,38–41.\n\nMethylation in Ciona intestinalis, a sea squirt belonging to the subphylum tunicata, the chordate lineage most closely related to vertebrates42 (Figure 1), typifies the invertebrate mosaic pattern14,33,35. The methylation system present in higher vertebrates can thus be inferred to have evolved at some point after the divergence of tunicates from vertebrate progenitors (~680 Mya43) and before the radiation of bony fish and tetrapods (~430 Mya43; Figure 1). Understanding the timing of this progression at greater resolution and the factors that stimulated its development is hindered by the absence of methylation data from basal vertebrate classes.\n\nThe genomes of higher vertebrates are pervasively hypermethylated, with the exception of regulatory elements such as transcription start sites (TSSs), where the presence of methylation is associated with gene silencing (blue line). In contrast, invertebrate genomes are generally sparsely methylated in a mosaic pattern, and lack the inverse relationship between TSS methylation and expression that characterises vertebrates (green line). Certain invertebrate species appear to lack methylation altogether. Due to a paucity of data from basal vertebrate species, the evolutionary history of the CG methylation system present in higher vertebrates is unclear. Preprint methylation data from the sea lamprey Petromyzon marinus is not indicated here (see discussion). The names of organisms examined in this study are noted underneath the appropriate class. * The lobe-finned fish (sarcopterygii), as well as the cephalochordata (a basal chordate taxon), have been omitted for clarity. 
The following terms have been treated as equivalent: jawless fish and cyclostomata, jawed vertebrates and gnathostomata, cartilaginous fish and chondrichthyes, bony vertebrates and euteleostomi. Median divergence times from the TimeTree database43 were used to construct the tree.\n\nHere, we use whole-genome bisulfite sequencing to generate the first methylation profiles of a cartilaginous fish, the elephant shark Callorhinchus milii. Through detailed comparison with published methylation and expression datasets, we demonstrate that the elephant shark methylome is characteristic of vertebrates in its global hypermethylation – including at TEs and gene bodies – and, crucially, association with transcriptional silencing at the TSS. These findings indicate conservation of a complex methylation system across jawed vertebrates separated by 465 million years of evolution, and identify the elephant shark as an important model to examine the origins and function of methylation in vertebrates.\n\n\nMethods\n\nElephant shark tissue samples were sourced as a by-product of deceased animals harvested from commercial fishing in the Otago coastal region. As such, no animal ethics permission was applicable in this circumstance. No animal experimentation or manipulation was undertaken as defined by the Animal Welfare Act (2009, New Zealand), or according to guidelines issued by the New Zealand National Animal Ethics Advisory Committee (NAEAC, Occasional Paper No 2, 2009, ISBN 978-0-478-33858-4).\n\nDNA was purified using a modified magnetic bead approach44. Briefly, cells were first homogenised in “GITC” lysis buffer (4 M Guanidine thiocyanate, Sigma G6639; 50 mM Tris, Thermo 15568-025; 20 mM EDTA, Thermo 15575-020; 2% Sarkosyl, Sigma L9150-50G; 0.1% Antifoam, Sigma A8311-50ML), and this lysate mixture was then combined with TE-diluted Sera-Mag Magnetic SpeedBeads (GE Healthcare, GEHE45152105050250) and isopropanol in a volumetric ratio of 2:3:4, respectively. 
Following capture with a neodymium magnet, beads were washed once with isopropanol, twice with 70% ethanol and resuspended in filter-sterile milliQ water.\n\nWGBS-seq was undertaken using a post-bisulfite adapter tagging (PBAT) method adapted from Peat et al., 201445. Briefly, 50–100 ng of purified DNA was subjected to bisulfite conversion using the Imprint DNA modification kit (Sigma, MOD50). Converted DNA underwent first strand synthesis with a biotin-labelled adapter sequence possessing seven random nucleotides at its 3’ end (BioP5N7, biotin- ACACTCTTTCCCTACACGACGCTCTTCCGATCTNNNNNNN). The product of first strand synthesis was captured using streptavidin-coated Dynabeads (Thermo, 11205D) and magnetic immobilisation. Double-stranded DNA was created using the immobilised first strand as a template and an additional adapter that also possesses seven random nucleotides at its 3’ end (P7N7, GTGACTGGAGTTCAGACGTGTGCTCTTCCGATCTNNNNNNN). Unique molecular barcodes and sequences necessary for binding to Illumina flow-cells were added to libraries by PCR using 1X HiFi HotStart Uracil+ Mix (KAPA, KK2801) and 10 μM indexed TruSeq-type oligos, with thermal cycling as follows: 12× (94°C, 80 sec; 65°C, 30 sec; 72°C, 30 sec).\n\nFor deep sequencing, libraries were sequenced with a single-end 100bp protocol on a HiSeq 2500 instrument (Illumina) using rapid run mode. For low-coverage sequencing of additional samples, libraries were sequenced on a MiSeq instrument (Illumina) until the desired depth (at least 15,000 mapped CG calls) was attained.\n\nDetailed sequencing results are provided in Table S1.\n\nMapped CG methylated calls for mouse liver5 were downloaded from GEO (accession GSE42836, sample GSM1051157) and analysed directly. 
For zebrafish muscle8 (SRA study SRP020008, run SRR800081) and sea squirt muscle14 (GEO accession GSE19824, sample GSM497251), raw sequencing data was downloaded and processed along with elephant shark WGBS-seq data generated in this study as follows.\n\nTrimming was performed to remove both poor-quality calls and adapter sequences using TrimGalore (v0.4.0, default parameters). For the elephant shark data, 10bp were also removed from the 5’ end of reads to account for sequence biases associated with PBAT library construction.\n\nTrimmed reads were aligned using Bismark46 (v0.14.3, default parameters) with the --pbat option specified for elephant shark data. The following genome assemblies were used for alignment: zebrafish, GRCz10; elephant shark, 6.1.3; sea squirt, KH. For sea squirt and elephant shark, alignment was only performed against scaffolds larger than 277kb to avoid gene annotation issues and assembly artefacts. The deep-sequenced elephant shark data generated in this study was additionally mapped to the mitochondrial genome.\n\nBismark mapping reports were used to determine global methylation levels for low-coverage elephant shark data. All other datasets were deduplicated and CG methylation calls extracted using Bismark (--comprehensive and --merge_non_CG options specified).\n\nThe number of mapped cytosine calls for sequencing performed in this study is provided in Table S1. The frequency of non-CG methylation indicates the maximum rate of non-conversion during the bisulfite treatment step; by this measure, all libraries had a bisulfite conversion efficiency of at least 98.9%.\n\nIn order to determine the number of CG methylation calls required to accurately predict genome-wide methylation levels, bootstrap sampling of reads from the deep-sequenced male elephant shark dataset was performed to generate regular intervals of CG calls from approximately 100 to 30,000. 
These reads were trimmed, mapped and methylation quantified as described above, and following 1000 iterations, the proportion of data falling within the 0.5–99.5 percentiles was calculated to generate a 99% confidence interval. An asymptotic model described by the equation y = 2.208/√x was used to fit a curve to the data. At our minimum sequencing depth of 15,000 CG calls, bootstrap sampling predicts a margin of error (99% confidence interval) of approximately ±1.8 methylation percentage points.\n\nCG methylation calls were imported into the SeqMonk program (v1.37.1) for analysis. For elephant shark and sea squirt, custom SeqMonk genomes were built using GFF annotation files downloaded from NCBI and Ensembl, respectively.\n\nTo analyse methylation at the level of individual CG dinucleotides, we generated an annotation track of each CG site using Bowtie v1.1.247. A minimum of five methylation calls was required for inclusion of a CG site in analyses.\n\nFor mouse, zebrafish and elephant shark, precompiled annotation tracks of repetitive elements generated using the RepeatMasker program were downloaded from UCSC. For sea squirt, we generated these annotations by running the RepeatMasker program (v4.0.6) on the KH assembly with the -s option and specifying Ciona intestinalis as the species. The various classes of transposable elements were extracted from these annotation files and where indicated, merged for analysis. A minimum of five calls was applied as a threshold for inclusion when quantifying individual elements.\n\nTo examine methylation profiles across genes or TEs and neighbouring sequences, methylation was quantified at individual CGs and the mean plotted across a size-standardised gene or TE as well as 10kb upstream and downstream regions, using the quantitation trend plot function. 
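The percentile bootstrap used above for depth calibration (1000 iterations, with the 0.5–99.5 percentile range giving a 99% confidence interval) can be sketched in simplified form. This illustration resamples simulated, independent 0/1 CG calls rather than the study's read-level data, so its margin of error will differ from the published value:

```python
import random

def bootstrap_ci99(calls, n_iter=1000):
    """Percentile bootstrap: 99% confidence interval for global methylation.

    `calls` is a list of per-CG methylation calls (1 = methylated, 0 = not).
    """
    means = sorted(
        sum(random.choices(calls, k=len(calls))) / len(calls)
        for _ in range(n_iter)
    )
    # Take the 0.5 and 99.5 percentiles of the bootstrap distribution.
    return means[int(0.005 * n_iter)], means[int(0.995 * n_iter) - 1]

# Simulated depth of 15,000 independent CG calls at ~70% global methylation.
random.seed(42)
calls = [1 if random.random() < 0.70 else 0 for _ in range(15_000)]
low, high = bootstrap_ci99(calls)
half_width = (high - low) / 2  # margin of error, as a methylation fraction
```

For independent calls at this depth the interval half-width is roughly one percentage point; the wider ±1.8 point margin reported here comes from resampling real reads, plausibly because neighbouring CG calls from the same read are correlated.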
Figures were produced using Prism (GraphPad, v7), with smoothing applied to flanking regions by averaging 100 neighbours.\n\nTranscription start sites were defined as 200bp centred on the first nucleotide of an annotated mRNA, and a minimum of five methylation calls was applied as a threshold for inclusion in analyses. For analysis of gene bodies, 2kb running windows were quantified (with a minimum of 50 methylation calls applied for inclusion) within annotated mRNAs, excluding 1kb at the 5’ end, and the mean was reported for each mRNA.\n\nViolin plots and histograms were drawn using the ggplot2 package48 in R.\n\nWe downloaded raw sequencing data from previous studies as follows: sea squirt muscle14, GEO accession GSE19824, sample GSM497252; elephant shark liver49, SRA study SRP013772, run SRR513760; zebrafish muscle8, SRA study SRP020008, run SRR800045; mouse liver (ENCODE Consortium50,51), GEO accession GSE78583, sample GSM2072415.\n\nTrimming was performed to remove both poor-quality calls and adapter sequences using TrimGalore (v0.4.0, default parameters). In addition, 12bp were removed from the 5’ end of sea squirt reads and 10bp from the 5’ end of both elephant shark and mouse reads to avoid sequence biases.\n\nTrimmed reads were aligned to the reference genomes described above with HISAT252 (v2.0.5) using single-end or paired-end mode, as appropriate. Known splice sites were specified from a file built from GTF annotation files downloaded from Ensembl (release 87) using the HISAT2 python script. 
No GTF file was available for elephant shark, so a GFF annotation file downloaded from NCBI was first converted to GTF format using the gffread program (https://github.com/gpertea/gffread).\n\nAlignments from HISAT2 were imported into the SeqMonk program, specifying a minimum mapping quality of 60 to select only uniquely aligned reads.\n\nThe RNA-seq quantitation pipeline was used to generate raw read counts across the exons of nuclear protein-coding genes with a correction for any DNA contamination. Counts were corrected for transcript length and genes were divided into quintiles according to expression level.\n\n\nResults\n\nTo generate genome-wide methylation profiles, we extracted DNA from the liver tissue of one female and one male adult elephant shark and performed whole-genome bisulfite sequencing (WGBS-seq). Detailed sequencing results are provided in Table S1.\n\nAs in the somatic tissues of other vertebrates, we found that methylation is much more prevalent in nuclear DNA at CG dinucleotides (69 – 71.6%) than in non-CG context (0.8 – 1%) or mitochondrial DNA (1.6 – 2.5%; Figure 2A). Low-coverage WGBS-seq demonstrated similar global methylation levels in three additional individuals for liver, and in spleen and pancreas samples (Figure 2B). While we observed a small trend for lower methylation in female samples (Figure 2B; female mean 66.4%, male mean 68.6%), this was not significant according to a t-test (p=0.2308) and was within the margin of error expected at this sequencing depth (Figure S1).\n\nA: Global methylation levels of deep-sequenced liver samples in different contexts. ‘CG’ refers to symmetrical CG dinucleotides; ‘Non-CG’ indicates all other sequence contexts. B: Global CG methylation levels in elephant shark tissues examined by low-coverage sequencing. The horizontal bar indicates the mean; gold dots, female samples; blue dots, male samples. 
The difference between female and male liver samples is not significant according to a t-test, and within the technical margin of error expected at the threshold sequencing depth used (±1.8 methylation percentage points; Figure S1).\n\nWe proceeded with further analysis of CG methylation in deep-sequenced liver datasets as an example of the elephant shark somatic methylome, and combined male and female samples to enhance sequencing coverage.\n\nExisting data indicate that methylation patterns differ markedly between vertebrates and invertebrates. In order to delineate the characteristics of these disparate systems and establish their relationship to the elephant shark methylome, we reanalysed published WGBS-seq data from two vertebrates, mouse (Mus musculus)5 and zebrafish (Danio rerio)8, as well as an invertebrate from the closest chordate outgroup, the sea squirt Ciona intestinalis14 (Table 1A).\n\nAccession numbers are provided in the methods.\n\nAs expected from analysis of global levels, examination of methylation at individual CG dinucleotides in the elephant shark showed that the majority of sites are highly methylated (≥ 80%), and fewer than one tenth are unmethylated (Figure 3A). Both this pattern and the global methylation level are comparable to mouse and zebrafish (Figure 3A–B). In contrast, mean methylation in the invertebrate sea squirt is only 22.9%, and over two thirds of CG sites are unmethylated.\n\nA: Distribution of methylation at individual CG dinucleotides. ‘M’ denotes percentage CG methylation. B: Mean methylation of CG dinucleotides. C: Distribution of methylation within 2kb running windows covering the entire genome. Black dots denote the median. D: Genome screenshots of methylation quantified in 2kb running windows over the first 3Mb of chromosome 1 in sea squirt, zebrafish and mouse, and of the largest scaffold (NW_006890054.1) in elephant shark. 
These regions were arbitrarily chosen as an unbiased section of each genome.\n\nA further striking distinction is evident when the genome is profiled in 2kb running windows. High methylation levels are pervasive in the elephant shark genome (Figure 3C–D), resembling the structure of other vertebrate methylomes. In contrast, the sea squirt methylome is characterised by a bimodal but largely unmethylated distribution (Figure 3C), resulting from a mosaic pattern in which background hypomethylation is punctuated by shorter stretches of methylated sequences (Figure 3D). Interestingly, running windows show a broader distribution of methylation in elephant shark than in mouse or zebrafish (Figure 3C). Whether this is a feature of basal vertebrates generally or of elephant shark specifically will require analysis of methylation patterns in additional cartilaginous fish.\n\nHaving established that the global structure of the elephant shark methylome is characteristic of vertebrates, we sought to determine the profile and impact of methylation at specific functional elements.\n\nTransposable elements (TEs) are highly methylated in vertebrate genomes, a feature which is linked to the necessity of repressing their transcription to prevent destabilising transposase activity8,9,16,22–24. The generally low levels of methylation at TEs in invertebrates such as the sea squirt do not appear to regulate their activity14,38,39.\n\nExamination of methylation patterns at TEs and flanking sequences showed that the elephant shark exhibits hypermethylation at the large majority of TEs and a slight increase in mean methylation relative to adjacent regions (Figure 4A–B), conforming to the pattern of other vertebrates. While mean methylation levels of TEs in sea squirt are moderately elevated compared to flanking sequences, the large majority of TEs are hypomethylated. 
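The per-element quantification behind these TE distributions follows the methods section: mean methylation is computed over the individual CG calls falling within each element, and elements with fewer than five calls are excluded. A minimal sketch with made-up element IDs and calls (not the study's data):

```python
def mean_methylation_per_element(calls_by_element, min_calls=5):
    """Mean methylation per element, skipping elements with too few CG calls.

    `calls_by_element` maps an element ID to a list of methylation calls
    (1 = methylated, 0 = unmethylated) at CG sites within that element.
    """
    return {
        element: sum(calls) / len(calls)
        for element, calls in calls_by_element.items()
        if len(calls) >= min_calls
    }

# Hypothetical example: two LINEs, and a SINE with too few calls to include.
calls = {
    "LINE_1": [1, 1, 1, 0, 1, 1],  # 5/6 calls methylated
    "LINE_2": [1, 1, 1, 1, 1],     # fully methylated
    "SINE_1": [1, 0],              # below the five-call threshold, excluded
}
per_element = mean_methylation_per_element(calls)
```

The five-call threshold mirrors the inclusion criterion stated in the methods; the resulting per-element means are what the binned distributions in Figure 4 summarise.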
Little variation in methylation was observed between the two predominant TE classes in the elephant shark genome, long interspersed nuclear elements (LINEs) and short interspersed nuclear elements (SINEs; Figure 4C–D), indicating that – as in other vertebrates8,9,16,24 – hypermethylation of TEs is ubiquitous.\n\nA: Distribution of methylation at transposable elements. Mean methylation values are divided into 10 bins. B: Mean CG methylation across transposable elements and 10kb flanking regions. Quantification was performed at the level of individual CG dinucleotides. Flanking regions were smoothed by averaging 100 neighbours. C: Distribution of methylation at long interspersed nuclear elements (LINEs) and short interspersed nuclear elements (SINEs) in the elephant shark genome. Mean methylation values are divided into 10 bins. D: Methylation at long interspersed nuclear elements (LINEs) and short interspersed nuclear elements (SINEs) in the elephant shark genome, plotted as in (B).\n\nSilencing of gene expression through the deposition of methylation at transcription start sites (TSSs) constitutes an important regulatory mechanism in vertebrates, but appears to be absent from invertebrates5–10,12–16,35,40,41. To compare the relationship of methylation and transcription in elephant shark with higher vertebrates and the sea squirt, we made use of tissue-matched published RNA-seq datasets8,14,49,50 (Table 1B) to classify protein-coding genes into expression quintiles.\n\nHypomethylation at the TSS of expressed genes constitutes a conspicuous exception to the otherwise pervasively methylated elephant shark genome, matching the higher vertebrates examined (Figure 5A–C). Significantly, we document an inverse relationship between TSS methylation and expression level in the elephant shark (Figure 5A, Figure 5E). 
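The expression quintiles used in this comparison come from ranking protein-coding genes by length-corrected read counts and splitting them into five equal groups (see methods). A minimal sketch of that binning step, using hypothetical expression values:

```python
def expression_quintiles(expression):
    """Assign genes to quintiles (1 = lowest, 5 = highest) by expression.

    `expression` maps gene IDs to a length-corrected expression value.
    """
    ranked = sorted(expression, key=expression.get)  # ascending by expression
    n = len(ranked)
    # A gene at ascending rank r (0-based) falls in quintile 1..5.
    return {gene: min(5, (r * 5) // n + 1) for r, gene in enumerate(ranked)}

# Hypothetical length-corrected counts for ten genes.
expr = {f"gene{i}": v for i, v in enumerate([0, 1, 2, 5, 8, 13, 21, 34, 55, 89])}
quintile = expression_quintiles(expr)
```

With ten genes, each quintile receives two genes; the same ranking logic scales to genome-wide annotations.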
A bimodal distribution in which a large proportion of sequences are methylated at low expression levels contrasts with negligible methylation at most TSSs of intermediate and highly expressed genes. The association of TSS methylation with transcriptional silencing is a distinguishing feature of higher vertebrate methylomes5,8,12–16 that is recapitulated here for zebrafish and mouse (Figure 5B–C, Figure 5E), and its presence in the elephant shark indicates that methylation at the TSS induces repression in a similar manner. Consistent with reports showing that invertebrates lack this wide variation in TSS methylation as a function of expression level14,35,41, the large majority of sea squirt TSSs are hypomethylated at all expression levels and methylation levels at the TSS are comparable to intergenic sequences (Figure 5D–E).\n\nA – D: Mean CG methylation across genes and 10kb flanking regions, classified into quintiles according to expression level in RNA-seq datasets (5 = highest). Quintile 4 is omitted for clarity. Quantification was performed at the level of individual CG dinucleotides. Flanking regions were smoothed by averaging 100 neighbours. E – F: Distribution of methylation at the transcription start site (E) and within the body (F) of genes classified into quintiles according to expression level (5 = highest). Each violin is scaled to the same maximum width (total area is not constant between violins) to demonstrate distributions for each quintile. Black dots denote the median.\n\nInterestingly, a larger number of TSSs at highly expressed genes remain methylated in elephant shark compared to mouse and zebrafish. This may suggest that the association of methylation with repression is less absolute than in higher vertebrates, but could also be attributed to poorer TSS annotation in the less intensively-studied and incompletely assembled elephant shark genome.\n\nThe methylomes of higher vertebrates and invertebrates also differ within gene bodies. 
While intragenic methylation in sea squirt forms the bimodal distribution reported in invertebrates35,37, and most silenced genes lack methylation, vertebrate gene bodies are generally hypermethylated at all expression levels (Figure 5F). Intragenic methylation in the elephant shark is characteristic of this vertebrate pattern. In addition, higher expression levels are associated with moderately elevated gene body methylation in elephant shark liver, but not in zebrafish muscle or mouse liver. Given the limited understanding of the role played by intragenic methylation in the regulation of vertebrate gene expression, the functional relevance of this relationship is unclear.\n\n\nDiscussion\n\nMethylation of CG dinucleotides forms a heritable but flexible epigenetic memory that constitutes a critical regulatory system in bony vertebrates, where it is employed in the modulation of gene expression and suppression of transposable element activity. The genomes of studied vertebrates are pervasively hypermethylated, with the exception of regulatory elements such as transcription start sites (TSSs), where the presence of methylation is linked to transcriptional silencing1–10,12–16,22–25. These features are not found in the sparsely methylated genomes of invertebrates, including chordates closely related to vertebrates14,33–40, but establishing when this important regulatory system arose and the factors that drove its development has been impeded by a lack of methylation data from basal vertebrates (Figure 1).\n\nIn this study, we employ WGBS-seq to generate the first genome-wide methylation profiles of a cartilaginous fish, the elephant shark Callorhinchus milii. 
Through detailed comparison with published methylation and expression datasets, we demonstrate that the elephant shark methylome is characteristic of higher vertebrates and in clear contrast to the prevailing invertebrate configuration.\n\nWe first note that methylation in the elephant shark is primarily located in symmetric CG context, where comparable global methylation levels of approximately 65–70% were found by low-coverage WGBS-seq in the male and female liver, as well as in the spleen and pancreas (Figure 2). The similarity of male and female methylation indicates that, unlike certain bony fish species53, the uncharacterised sex-determination mechanism in the elephant shark is not associated with large differences in global methylation. Examination of liver profiles at higher resolution demonstrated that – like higher vertebrates – the majority of elephant shark CG sites are methylated, and this is ubiquitous throughout the genome rather than concentrated in short stretches in the invertebrate mosaic pattern, typified by the sea squirt (Figure 3). The global hypermethylation of the elephant shark genome includes both major transposon classes, LINEs and SINEs (Figure 4), whose transcriptional repression is thought to be an important function of vertebrate methylation systems as a safeguard against destabilising transposition activity.\n\nCrucially, the elephant shark mirrors higher vertebrates in the inverse relationship of methylation with expression at the TSS (Figure 5); most expressed genes are unmethylated while a large proportion of inactive genes are hypermethylated at the TSS. 
While the association of TSS methylation with silencing is conserved across the vertebrates examined, we also observe that a greater number of expressed genes are methylated at the TSS in elephant shark than in mouse or zebrafish. It will be important to clarify whether this arises from the poorer annotation of the less intensively studied elephant shark genome, or a meaningful biological difference in the repressive potency of methylation in this system.\n\nThe hypermethylation of most gene bodies at all levels of transcription is a feature of higher vertebrate methylomes that our data show is also shared by the elephant shark (Figure 5). We additionally document an interesting association between higher expression levels and elevated methylation in the elephant shark, a trend which is absent from the higher vertebrate tissues we examined. The relationship between intragenic methylation and expression is complex and appears to vary between vertebrate taxa and even within the tissues of a single species5,8,12–16,26,28,29. Indeed, although a variety of functions for intragenic methylation have been suggested, including suppression of spurious transcription and regulation of exon splicing30–32, their generality is poorly understood, particularly outside mammalian systems. Significant further research will be required to uncover the impact of intragenic methylation in vertebrate genomes and determine the biological relevance of its positive relationship with expression in the elephant shark.\n\nThe observation that methylation patterns in a cartilaginous fish are characteristic of higher vertebrates implies the conservation of a complex methylation system across jawed vertebrates separated by 465 million years of evolution (Figure 1). 
Of particular note, they support the common presence of a regulatory architecture that links methylation at the TSS to transcriptional repression.\n\nPreprint methylome data from the sea lamprey Petromyzon marinus, a basal jawless vertebrate, indicate that this species lacks the genome-wide hypermethylation and functional relationships of higher vertebrates (https://doi.org/10.1101/033233). While the data from this study has not yet been released, the authors state that methylation patterns in sea lamprey more closely resemble those of the sea squirt and appear to represent a transitional intermediate. In the context of our findings, this implies that the evolution of the higher vertebrate methylation system was achieved after the emergence of jawed vertebrates (~600 Mya43), but before the divergence of bony and cartilaginous fish (~465 Mya43; Figure 1). These data further identify cartilaginous fish as the most divergent class to possess a DNA modification system similar to our own, and position the elephant shark as a valuable model to examine the function and evolution of the vertebrate methylation system. As the slowest evolving vertebrate documented49, the elephant shark bears the closest resemblance to the most recent common ancestor of all jawed vertebrates, enhancing its appeal in this respect. Moreover, the extensive orthology of its small genome to those of tetrapods49 facilitates comparative studies.\n\nTransposon aggressiveness correlates with the degree of sexual outcrossing in the host, and repression of this destabilising activity has been proposed as a major reason for genome-wide hypermethylation in sexually-reproducing organisms such as plants and vertebrates14,38,54. This control mechanism appears to have been discarded as unnecessary in early asexual metazoans, and alternative suppression systems such as the piwi-piRNA pathway were developed in their sexually-reproducing invertebrate descendants54,55. 
The reason for the apparent reinvention of methylation-based silencing in vertebrates is unclear. Comparison of TE dynamics in the cells of elephant shark and basal chordates offers the opportunity to determine whether the need for additional control mechanisms was a primary driver for genome-wide hypermethylation in jawed vertebrates.\n\nWe note that in addition to substantial physiological changes, the emergence of jawed vertebrates was accompanied by major innovations in gene regulatory networks, notably non-coding RNA elements49. These advances may have facilitated, or conversely been enabled by, the development of a complex methylation system during the same time period. The role of the whole-genome duplications that occurred in vertebrate progenitors56 in the acquisition of components that act downstream of the methylation signal, or as a stimulus for new mechanisms of regulating gene dosage, also merits further investigation.\n\nMethylation of elements that modulate gene expression forms an epigenetic memory that plays an important role in defining and stabilising cell identity in higher vertebrates5,17–21. The reprogramming of this specification in the germline to regenerate full developmental competence after fertilisation, and the pathways employed to achieve this – such as active demethylation by ten-eleven translocation (TET) enzymes – vary considerably across vertebrates57. Examination of these phenomena in the elephant shark will provide insight into the evolutionary history of epigenetic control in the life cycle and its consequences for vertebrate development.\n\nOur findings provide a fresh perspective on an important epigenetic modification. 
The elephant shark methylome delineates the evolutionary extent of the complex methylation system found in higher vertebrates, and sets the scene for comparative studies that will address longstanding questions about the primary purpose of this system and how these functions evolved from the mosaic pattern of invertebrates. It will be particularly pertinent to understand the development of the mechanism that links TSS methylation to transcriptional repression. Epigenetic studies in the elephant shark also open promising avenues to explore the ways in which methylation is put to use during development and the specification of cell fate, and the conservation of these strategies amongst vertebrates.\n\n\nData availability\n\nAll raw WGBS-seq data (including low-coverage WGBS-seq data), as well as mapped CG call files for male and female liver deep-sequencing, are deposited in the GEO database under accession number GSE96683.\n\nSource for published WGBS-seq datasets:\n\nMapped CG methylated calls for mouse liver5 were downloaded from GEO, accession GSE42836, sample GSM1051157.\n\nRaw sequencing data for zebrafish muscle8 were downloaded from SRA, study SRP020008, run SRR800081.\n\nRaw sequencing data for sea squirt muscle14 were downloaded from GEO, accession GSE19824, sample GSM497251.\n\nSource for published RNA-seq datasets:\n\nRaw sequencing data for elephant shark liver49 were downloaded from SRA, study SRP013772, run SRR513760.\n\nRaw sequencing data for mouse liver (ENCODE Consortium50,51) were downloaded from GEO, accession GSE78583, sample GSM2072415.\n\nRaw sequencing data for zebrafish muscle8 were downloaded from SRA, study SRP020008, run SRR800045.\n\nRaw sequencing data for sea squirt muscle14 were downloaded from GEO, accession GSE19824, sample GSM497252.",
"appendix": "Author contributions\n\n\n\nTAH conceived the project and JRP and TAH designed the study. JRP performed data analysis and wrote the manuscript. OK prepared WGBS-seq libraries. OO-R performed bootstrap sampling. TAH supervised the study, assisted with data analysis and contributed to the manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis research was funded by a University of Otago Research Committee Grant (111899.01.R.LA).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nThe authors would like to acknowledge Andrew Grey (University of Otago) for kindly providing statistical advice. We are also grateful to Les McNoe and Aaron Jeffs (Otago Genomics) for assistance with high-throughput sequencing, and Felix Krueger and Simon Andrews (Babraham Institute) for bioinformatics advice.\n\n\nSupplementary material\n\nTable S1: Whole-genome bisulfite sequencing of elephant shark somatic tissues.\n\nThe table lists the number of cytosine calls at either symmetric CG dinucleotides ('CG') or in other sequence contexts ('non-CG'), mapped against the elephant shark 6.1.3 genome assembly or mitochondrial DNA. Details of bioinformatic processing are provided in the methods section. For deep-sequenced samples, the number of calls following deduplication are given. The frequency of non-CG methylation indicates the maximum rate of non-conversion during the bisulfite treatment step; by this measure, all libraries had a bisulfite conversion efficiency of at least 98.9%.\n\nClick here to access the data.\n\nFigure S1: Bootstrap sampling to determine margin of error in low-coverage WGBS-seq.\n\nEmpirical prediction of the margin of error (99% confidence interval) associated with low coverage WGBS-seq, as calculated by bootstrap sampling of the deep-sequenced male elephant shark liver dataset. 
Details of the sampling approach are provided in the methods section. An asymptotic model with the equation y=2.208/√x was used to fit a curve to the data. At our minimum sequencing depth of 15,000 CG calls, bootstrap sampling predicts a margin of error of approximately ±1.8 methylation percentage points.\n\nClick here to access the data.\n\n\nReferences\n\nLi E, Bestor TH, Jaenisch R: Targeted mutation of the DNA methyltransferase gene results in embryonic lethality. Cell. 1992; 69(6): 915–26. PubMed Abstract | Publisher Full Text\n\nOkano M, Bell DW, Haber DA, et al.: DNA methyltransferases Dnmt3a and Dnmt3b are essential for de novo methylation and mammalian development. Cell. 1999; 99(3): 247–57. PubMed Abstract | Publisher Full Text\n\nTittle RK, Sze R, Ng A, et al.: Uhrf1 and Dnmt1 are required for development and maintenance of the zebrafish lens. Dev Biol. 2011; 350(1): 50–63. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGoll MG, Bestor TH: Eukaryotic cytosine methyltransferases. Annu Rev Biochem. 2005; 74: 481–514. PubMed Abstract | Publisher Full Text\n\nHon GC, Rajagopal N, Shen Y, et al.: Epigenetic memory at embryonic enhancers identified in DNA methylation maps from adult mouse tissues. Nat Genet. 2013; 45(10): 1198–206. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLister R, Pelizzola M, Dowen RH, et al.: Human DNA methylomes at base resolution show widespread epigenomic differences. Nature. 2009; 462(7271): 315–22. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSession AM, Uno Y, Kwon T, et al.: Genome evolution in the allotetraploid frog Xenopus laevis. Nature. 2016; 538(7625): 336–343. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPotok ME, Nix DA, Parnell TJ, et al.: Reprogramming the maternal zebrafish genome after fertilization to match the paternal methylation pattern. Cell. 2013; 153(4): 759–72. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nMeissner A, Mikkelsen TS, Gu H, et al.: Genome-scale DNA methylation maps of pluripotent and differentiated cells. Nature. 2008; 454(7205): 766–70. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBogdanović O, Veenstra GJ: DNA methylation and methyl-CpG binding proteins: developmental requirements and function. Chromosoma. 2009; 118(5): 549–65. PubMed Abstract | Publisher Full Text | Free Full Text\n\nChen PY, Feng S, Joo JW, et al.: A comparative analysis of DNA methylation across human embryonic stem cell lines. Genome Biol. 2011; 12(7): R62. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLaurent L, Wong E, Li G, et al.: Dynamic changes in the human methylome during differentiation. Genome Res. 2010; 20(3): 320–331. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMcGaughey DM, Abaan HO, Miller RM, et al.: Genomics of CpG methylation in developing and developed zebrafish. G3 (Bethesda). 2014; 4(5): 861–869. PubMed Abstract | Publisher Full Text | Free Full Text\n\nZemach A, McDaniel IE, Silva P, et al.: Genome-wide evolutionary analysis of eukaryotic DNA methylation. Science. 2010; 328(5980): 916–919. PubMed Abstract | Publisher Full Text\n\nLaine VN, Gossmann TI, Schachtschneider KM, et al.: Evolutionary signals of selection on cognition from the great tit genome and methylome. Nat Commun. 2016; 7: 10474. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDerks MF, Schachtschneider KM, Madsen O, et al.: Gene and transposable element methylation in great tit (Parus major) brain and blood. BMC Genomics. 2016; 17: 332. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLee HJ, Hore TA, Reik W: Reprogramming the methylome: erasing memory and creating diversity. Cell Stem Cell. 2014; 14(6): 710–719. PubMed Abstract | Publisher Full Text | Free Full Text\n\nZiller MJ, Gu H, Müller F, et al.: Charting a dynamic DNA methylation landscape of the human genome. Nature. 
2013; 500(7463): 477–81. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHemberger M, Dean W, Reik W: Epigenetic dynamics of stem cells and cell lineage commitment: digging Waddington’s canal. Nat Rev Mol Cell Biol. 2009; 10(8): 526–37. PubMed Abstract | Publisher Full Text\n\nBagci H, Fisher AG: DNA demethylation in pluripotency and reprogramming: the role of Tet proteins and cell division. Cell Stem Cell. 2013; 13(3): 265–9. PubMed Abstract | Publisher Full Text\n\nSeisenberger S, Andrews S, Krueger F, et al.: The dynamics of genome-wide DNA methylation reprogramming in mouse primordial germ cells. Mol Cell. 2012; 48(6): 849–62. PubMed Abstract | Publisher Full Text | Free Full Text\n\nYoder JA, Walsh CP, Bestor TH: Cytosine methylation and the ecology of intragenomic parasites. Trends Genet. 1997; 13(8): 335–340. PubMed Abstract | Publisher Full Text\n\nO’Neill RJ, O’Neill MJ, Graves JA: Undermethylation associated with retroelement activation and chromosome remodelling in an interspecific mammalian hybrid. Nature. 1998; 393(6680): 68–72. PubMed Abstract | Publisher Full Text\n\nSmith ZD, Chan MM, Humm KC, et al.: DNA methylation dynamics of the human preimplantation embryo. Nature. 2014; 511(7511): 611–615. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSmith ZD, Meissner A: DNA methylation: roles in mammalian development. Nat Rev Genet. 2013; 14(3): 204–20. PubMed Abstract | Publisher Full Text\n\nZhong Z, Du K, Yu Q, et al.: Divergent DNA Methylation Provides Insights into the Evolution of Duplicate Genes in Zebrafish. G3 (Bethesda). 2016; 6(11): 3581–3591. pii: g3.116.032243. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBogdanovic O, Long SW, van Heeringen SJ, et al.: Temporal uncoupling of the DNA methylome and transcriptional repression during embryogenesis. Genome Res. 2011; 21(8): 1313–1327. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nJjingo D, Conley AB, Yi SV, et al.: On the presence and role of human gene-body DNA methylation. Oncotarget. 2012; 3(4): 462–474. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGuo JU, Ma DK, Mo H, et al.: Neuronal activity modifies the DNA methylation landscape in the adult brain. Nat Neurosci. 2011; 14(10): 1345–1351. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNeri F, Rapelli S, Krepelova A, et al.: Intragenic DNA methylation prevents spurious transcription initiation. Nature. 2017; 543(7643): 72–77. PubMed Abstract | Publisher Full Text\n\nMaunakea AK, Chepelev I, Cui K, et al.: Intragenic DNA methylation modulates alternative splicing by recruiting MeCP2 to promote exon recognition. Cell Res. 2013; 23(11): 1256–69. PubMed Abstract | Publisher Full Text | Free Full Text\n\nShukla S, Kavak E, Gregory M, et al.: CTCF-promoted RNA polymerase II pausing links DNA methylation to splicing. Nature. 2011; 479(7371): 74–9. PubMed Abstract | Publisher Full Text\n\nTweedie S, Charlton J, Clark V, et al.: Methylation of genomes and genes at the invertebrate-vertebrate boundary. Mol Cell Biol. 1997; 17(3): 1469–1475. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFeng S, Cokus SJ, Zhang X, et al.: Conservation and divergence of methylation patterning in plants and animals. Proc Natl Acad Sci U S A. 2010; 107(19): 8689–8694. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSuzuki MM, Kerr AR, De Sousa D, et al.: CpG methylation is targeted to transcription units in an invertebrate genome. Genome Res. 2007; 17(5): 625–631. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRaddatz G, Guzzardo PM, Olova N, et al.: Dnmt2-dependent methylomes lack defined DNA methylation patterns. Proc Natl Acad Sci U S A. 2013; 110(21): 8627–31. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nSarda S, Zeng J, Hunt BG, et al.: The Evolution of Invertebrate Gene Body Methylation. Mol Biol Evol. 2012; 29(8): 1907–1916. PubMed Abstract | Publisher Full Text\n\nSimmen MW, Leitgeb S, Charlton J, et al.: Nonmethylated Transposable Elements and Methylated Genes in a Chordate Genome. Science. 1999; 283(5405): 1164–1167. PubMed Abstract | Publisher Full Text\n\nHolland LZ, Gibson-Brown JJ: The Ciona intestinalis genome: when the constraints are off. Bioessays. 2003; 25(6): 529–532. PubMed Abstract | Publisher Full Text\n\nKeller TE, Han P, Yi SV: Evolutionary Transition of Promoter and Gene Body DNA Methylation across Invertebrate-Vertebrate Boundary. Mol Biol Evol. 2016; 33(4): 1019–1028. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSuzuki MM, Yoshinari A, Obara M, et al.: Identical sets of methylated and nonmethylated genes in Ciona intestinalis sperm and muscle cells. Epigenetics Chromatin. 2013; 6(1): 38. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDelsuc F, Brinkmann H, Chourrout D: Tunicates and not cephalochordates are the closest living relatives of vertebrates. Nature. 2006; 439: 965–968. Publisher Full Text\n\nHedges SB, Dudley J, Kumar S: TimeTree: A public knowledge-base of divergence times among organisms. Bioinformatics. 2006; 22(23): 2971–2972. PubMed Abstract | Publisher Full Text\n\nDeangelis MM, Wang DG, Hawkins TL: Solid-phase reversible immobilization for the isolation of PCR products. Nucleic Acids Res. 1995; 23(22): 4742–4743. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPeat JR, Dean W, Clark SJ, et al.: Genome-wide Bisulfite Sequencing in Zygotes Identifies Demethylation Targets and Maps the Contribution of TET3 Oxidation. Cell Rep. 2014; 9(6): 1990–2000. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKrueger F, Andrews SR: Bismark: a flexible aligner and methylation caller for Bisulfite-Seq applications. Bioinformatics. 
2011; 27(11): 1571–2. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLangmead B, Trapnell C, Pop M, et al.: Ultrafast and memory-efficient alignment of short DNA sequences to the human genome. Genome Biol. 2009; 10(3): R25. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWickham HM: ggplot2: Elegant Graphics for Data Analysis. Springer-Verlag New York, 2009. Reference Source\n\nVenkatesh B, Lee AP, Ravi V, et al.: Elephant shark genome provides unique insights into gnathostome evolution. Nature. 2014; 505(7482): 174–9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSloan CA, Chan ET, Davidson JM, et al.: ENCODE data at the ENCODE portal. Nucleic Acids Res. 2016; 44(D1): D726–D732. PubMed Abstract | Publisher Full Text | Free Full Text\n\nENCODE Project Consortium: An integrated encyclopedia of DNA elements in the human genome. Nature. 2012; 489(7414): 57–74. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKim D, Langmead B, Salzberg SL: HISAT: a fast spliced aligner with low memory requirements. Nat Methods. 2015; 12(4): 357–360. PubMed Abstract | Publisher Full Text | Free Full Text\n\nShao C, Li Q, Chen S, et al.: Epigenetic modification and inheritance in sexual reversal of fish. Genome Res. 2014; 24(4): 604–615. PubMed Abstract | Publisher Full Text | Free Full Text\n\nZemach A, Zilberman D: Evolution of eukaryotic DNA methylation and the pursuit of safer sex. Curr Biol. 2010; 20(17): R780–R785. PubMed Abstract | Publisher Full Text\n\nAravin AA, Hannon GJ, Brennecke J: The Piwi-piRNA pathway provides an adaptive defense in the transposon arms race. Science. 2007; 318(5851): 761–764. PubMed Abstract | Publisher Full Text\n\nSmith JJ, Kuraku S, Holt C, et al.: Sequencing of the sea lamprey (Petromyzon marinus) genome provides insights into vertebrate evolution. Nat Genet. 2013; 45(4): 415–21. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nBogdanović O, Gómez-Skarmeta JL: Embryonic DNA methylation: insights from the genomics era. Brief Funct Genomics. 2013; 13(2): 121–130. PubMed Abstract | Publisher Full Text"
}
|
[
{
"id": "22183",
"date": "11 May 2017",
"name": "Matthew M. Hindle",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nPeat et al. reveal the first methylome of a cartilaginous fish. A valuable contribution to comparative epigenetics that sheds light on the emergence of DNA methylation as a regulatory mechanism in vertebrate genomes. The lack of any published methylomes between Sea Squirts and Zebrafish emphasises just how important this paper is for understanding the evolution of epigenetics as a mechanism for transcript regulation.\nThe paper is a great read and an excellent example of how to present genome-wide methylation data in an interesting and compelling format. The main claim that the elephant shark genome has a “higher vertebrate” like methylation profile is convincingly made in Figure 3. I am also persuaded by the higher vertebrate like correlation of CpG methylation and transposable elements (Fig 4B) and TSS (Fig 5 A-D). It is particularly encouraging that they managed to show such a convincing correlation in a non-model species, where genomic annotation is often inaccurate and incomplete. It is tempting to infer that the differences between the elephant shark methylome and those of higher vertebrates indicate an evolutionary intermediate/transitionary stage in the functional importance of TSS methylation to repress transcription (authors indicate there may be \"less absolute repression than in higher vertebrates\"). 
However, the authors appear to be very cautious in limiting their claims and acknowledge that the reduced amplitude of the TSS methylation correlation in Fig 4B compared to mouse and zebrafish could be due to TSS mis-annotation in a non-model genome. It will be interesting to see if improvements of TSS annotation with CAGE or similar data alter the differences in the TSS methylation ratio. Given the authors carefully qualify their claims on the observed differences of elephant shark methylomes to higher vertebrates, it would be unreasonable to request that they improve on the reference annotation. Similarly, the transcript correlation differences between elephant shark and higher vertebrates in Figure 5 E-F could be the result of incomplete or erroneous transcript models. However, it is very interesting that compared to higher vertebrates there appears to be underrepresentation of CpG methylation for gene bodies in low-level expressed genes and, for TSS, a slight overrepresentation of CpG methylation in highly expressed genes. Again, it is a tantalising finding but difficult to draw conclusions because of the annotation quality differences and the differences in the RNASeq datasets.\n\nIt is a shame the WGBS Sea squirt and Zebrafish data are from muscle rather than liver (Table 1) but the authors made use of what was available.\nI use a similar Babraham pipeline for BS data and all the methods appear to be appropriate.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
},
{
"id": "22066",
"date": "18 May 2017",
"name": "Arthur Georges",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nIn this paper, the authors report use of whole-genome bisulfite sequencing to generate the first genome-wide methylation profiles for a cartilaginous fish, the elephant shark Callorhinchus milii. The results are preceded by an excellent introduction to the topic, pointing to the pertinent recent literature. The work is significant because the cartilaginous fish arise from a lineage that branches early from the vertebrate phylogeny and their methylation profiles have not before been examined. The authors are able to compare this methylation profile with that of other species – zebrafish, mouse and sea squirt -- to show that the cartilaginous fishes exhibit a methylation profile that is characteristic of vertebrates generally, and in contrast to the pattern shown by invertebrates, including the chordate sea squirt. They were also able to report an inverse relationship between TSS methylation and gene expression in the elephant shark, supporting the presence of the repressive regulatory architecture shared by other vertebrates so far examined. This study narrows considerably the evolutionary window in which the widespread methylation pattern characteristic of vertebrates evolved. 
They demonstrate conservation of a complex methylation system across jawed vertebrates separated by 465 million years of evolution.\n\nThe interpretation of the results is sound for the most part, and gives sufficient evidence to support their conclusion of conservation of methylation across jawed vertebrates, which fills a gap in our knowledge of the methylation system of vertebrates compared with invertebrates.\nWe have concerns about the detail presented on the methylation data to support the conclusions. Information on the basic methylation data is an omission that needs to be rectified, as are the reasons for focusing solely on CpG sites in further analyses. What quantity of methylome data was produced, what was the average depth per strand for each sample, and what was the mapping ratio? What was the density of methylated genomic cytosines and how did the detailed distribution pattern of mCs differ with context?\nThe most interesting point of the paper is that the elephant shark resembled the methylation pattern of other vertebrate lineages based on the characterization of genomic methylation. DNA methylation is catalyzed by the three DNA methyltransferases (DNMT1, 2 and 3). Is it possible to also add a comparative analysis of DNMT to see if the cartilaginous fishes also resemble other vertebrate lineages (multiple copies) in comparison with invertebrates (single copy), including the non-vertebrate chordates?\nThe authors interpret similarity of male and female methylation to indicate that, unlike some bony fish species, the uncharacterised sex-determination mechanism in the elephant shark is not associated with large differences in global methylation. We think the authors mean that the outcome of sexual differentiation in the elephant shark does not yield, or is not governed by, large differences in global methylation. 
Specifically, the reference to sex-related differences in methylation in the bony fish species is to work that demonstrates a difference between ovary and testes. As such it refers to differences in tissue-specific patterns of methylation, not necessarily related to sex determination itself.\nThere is a semantic point, and the authors may wish to word it out of the manuscript. It centres on the use of basal taxa and higher vertebrates. The elephant shark studied is extant and so has had as much time to evolve and diverge from the common ancestor as we have. We suggest replacing references to the elephant shark as basal by \"the elephant shark has arisen from an early branch of the vertebrate phylogeny\". The logic on the implications of the results remains unchanged. Also consider wording out \"higher vertebrates\" with \"other vertebrate lineages\".\nApart from those points, the authors might like to address some repetition of points made in the manuscript with a view to removing them and tightening up the manuscript. There appears to be an error in the caption of Figure 5 which states that Quintile 4 is omitted for clarity, but it appears to be included.\nWe really liked this paper. It is an excellent contribution to our understanding of the evolution of the distinctive global methylation pattern of vertebrates. The work is couched in an introduction that provides the reader with a good overview of global methylation patterns, by which to place the work in context.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? 
Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-526
|
https://f1000research.com/articles/4-42/v1
|
11 Feb 15
|
{
"type": "Research Article",
"title": "A double blinded, placebo-controlled pilot study to examine reduction of CD34+/CD117+/CD133+ lymphoma progenitor cells and duration of remission induced by neoadjuvant valspodar in dogs with large B-cell lymphoma",
"authors": [
"Daisuke Ito",
"Michael Childress",
"Nicola Mason",
"Amber Winter",
"Timothy O’Brien",
"Michael Henson",
"Antonella Borgatti",
"Mitzi Lewellen",
"Erika Krick",
"Jane Stewart",
"Sarah Lahrman",
"James Leary",
"Davis Seelig",
"Joseph Koopmeiners",
"Stephan Ruetz",
"Jaime Modiano"
],
"abstract": "We previously described a population of lymphoid progenitor cells (LPCs) in canine B-cell lymphoma defined by retention of the early progenitor markers CD34 and CD117 and “slow proliferation” molecular signatures that persist in the xenotransplantation setting. We examined whether valspodar, a selective inhibitor of the ATP binding cassette B1 transporter (ABCB1, a.k.a., p-glycoprotein/multidrug resistance protein-1) used in the neoadjuvant setting would sensitize LPCs to doxorubicin and extend the length of remission in dogs with therapy naïve large B-cell lymphoma. Twenty dogs were enrolled into a double-blinded, placebo controlled study where experimental and control groups received oral valspodar (7.5 mg/kg) or placebo, respectively, twice daily for five days followed by five treatments with doxorubicin 21 days apart with a reduction in the first dose to mitigate the potential side effects of ABCB1 inhibition. Lymph node and blood LPCs were quantified at diagnosis, on the fourth day of neoadjuvant period, and 1-week after the first chemotherapy dose. Valspodar therapy was well tolerated. There were no differences between groups in total LPCs in lymph nodes or peripheral blood, nor in event-free survival or overall survival. Overall, we conclude that valspodar can be administered safely in the neoadjuvant setting for canine B-cell lymphoma; however, its use to attenuate ABCB1+ cells does not alter the composition of lymph node or blood LPCs, and it does not appear to be sufficient to prolong doxorubicin-dependent remissions in this setting.",
"keywords": [
"canine",
"non-Hodgkin lymphoma",
"lymphoma progenitor cells",
"ABCB1/P-glycoprotein",
"valspodar"
],
"content": "Introduction\n\nThe importance of tumor-propagating cells in the pathogenesis of cancer is becoming increasingly well recognized1. However, there are only a few reports supporting the existence of such cells in human lymphoma cell lines or in transgenic lymphoma mouse models2–5. Our group identified a subset of lymphoid progenitor cells (LPCs) in primary canine B-cell lymphomas that were characterized by co-expression of hematopoietic progenitor antigens CD34, CD117, and CD133, the B-lymphoid lineage marker CD22, and the common leukocyte antigen CD456. These LPCs had phenotypic properties consistent with tumor-initiating or tumor-propagating cells (TIC/TPC); they also persisted in the xenotransplantation setting, suggesting that they were relevant to the biology of this disease in vivo6. When compared with the bulk of the tumor cells, LPCs showed significantly lower expression of 44 genes across the genome, mapping to cell cycle and transmembrane signaling pathways7. This indicated that LPCs exhibit the characteristic “slow proliferation” seen in normal bone marrow-derived hematopoietic stem cells and in TIC/TPC in other cancers.\n\nOne common feature of TIC/TPC in solid tumors is the expression of ATP binding cassette (ABC) transporter proteins such as ABCB1 (multidrug resistance protein-1 or P-glycoprotein) and ABCG2 (breast cancer resistance protein)8. ABC transporter proteins confer drug resistance by actively transporting drugs from the intracellular space to the extracellular space, thereby preventing the interaction of these drugs with their intracellular targets. In the case of ABCB1, expression has been shown to confer resistance to vinca alkaloids, anthracyclines, taxanes, epipodophyllotoxins, and other drugs9,10.\n\nGenome-wide gene expression profiling data showed that mRNAs for ABCB1 and ABCG2 were expressed in several types of spontaneous canine lymphomas, including diffuse large B cell lymphoma (DLBCL) and marginal zone lymphoma (MZL)11. 
Valspodar (PSC-833) is a selective ABC transporter inhibitor with an acceptable safety profile. Specifically, valspodar had acceptable toxicity when given alone and in combination with cytotoxic chemotherapy in Phase I/II clinical trials in humans with several types of cancer and in one study of dogs with naturally occurring osteosarcoma treated with doxorubicin12–16. These favorable toxicological and pharmacokinetic profiles made valspodar an attractive candidate for targeting LPCs, especially because a safe protocol had been previously established for its neoadjuvant use to inhibit ABCB1 in dogs receiving doxorubicin chemotherapy14. This precedent allowed us to test whether valspodar used in a comparable setting would enhance chemosensitivity of LPCs and extend the time in remission for dogs with spontaneous large B-cell lymphomas.\n\n\nMaterials and methods\n\nClinical grade valspodar (PSC-833) was provided by Novartis Pharma AG (Basel, Switzerland). Valspodar was compounded for use in pet dogs by Custom Rx Compounding Pharmacy (Roy D. Katz R. Ph., Richfield, MN). Capsules containing 100 mg valspodar or placebo (compounding materials without valspodar) were formulated with the same method used to compound cyclosporine-A for oral use in dogs, since these compounds share a high degree of structural similarity. Activity of the compounded valspodar was confirmed using the side population assays described below. Research grade valspodar and verapamil were purchased from Sigma-Aldrich (St. Louis, MO) and were diluted in dimethyl sulfoxide (DMSO; Sigma-Aldrich) for use in vitro. Lymphoma cells were maintained in short-term culture as described6,17,18. COSB hemangiosarcoma cells were maintained as adherent cultures as described19.\n\nThis was a double blinded, placebo-controlled trial with 10 dogs in each study arm. The main statistical endpoint was a change in LPCs following treatment. 
The hypothesis was that a significant reduction in the number of LPCs in blood and/or in lymph node cells would occur in dogs treated with valspodar, but not in dogs receiving the placebo. The sample size of 10 dogs per group was selected to provide 80% power to establish a difference of ± 2 S.D. in LPCs pre- and post-valspodar or placebo treatment within and between groups. The study was not powered to detect significant differences in duration of remission or overall survival. However, outcomes were recorded to evaluate trends that could be used to design future studies. Inclusion criteria included (1) clinical diagnosis of multicentric lymphoma (WHO stage I-V); (2) confirmed WHO classification of large B-cell lymphoma (DLBCL or MZL in transition)20; (3) favorable performance status with an expected survival time of > 30 days; (4) body weight over 15 kg (to allow adequate blood sampling) and less than 40 kg (to ensure dosing feasibility); (5) platelet count ≥100,000/µl and packed cell volume ≥30%; and (6) informed pet owner consent in writing. Exclusion criteria included (1) disease substage b; (2) any previous therapy for lymphoma, including corticosteroids; (3) lymphomas classified as other than DLBCL or MZL in transition; (4) dogs from herding breeds with high frequency of inactivating MDR-1 polymorphisms21,22; and (5) significant co-morbidities, such as renal or hepatic failure, congestive heart failure, or clinical coagulopathy. There were no restrictions based on age, gender, neuter status, or other physical parameters.\n\nTreatment costs for eligible participants up to $2500 were paid by study funds through the end of the chemotherapy protocol. The study was conducted with approval and under the oversight of the University of Minnesota Institutional Animal Care and Use Committee (IACUC Protocol 1011A92815 “Ablation of tumor initiating cells by P-glycoprotein inhibition: Proof of principle study in canine diffuse large B-cell lymphoma”). 
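As a rough check on the sample-size rationale stated above, the power of detecting a ± 2 S.D. shift with 10 subjects per arm can be sketched with a normal-approximation calculation for a two-sample comparison. This is an illustrative simplification, not the authors' actual calculation (their test choice and any paired structure are not specified):

```python
from math import sqrt
from statistics import NormalDist

def approx_power(effect_sd: float, n_per_group: int, alpha: float = 0.05) -> float:
    """Normal-approximation power of a two-sided, two-sample test for a mean
    difference of `effect_sd` pooled standard deviations with equal arm sizes."""
    z = NormalDist()
    z_crit = z.inv_cdf(1.0 - alpha / 2.0)        # two-sided critical value
    shift = effect_sd * sqrt(n_per_group / 2.0)  # standardised shift of the z statistic
    # Power = probability the statistic lands beyond either critical value.
    return z.cdf(shift - z_crit) + z.cdf(-shift - z_crit)

# An effect of 2 S.D. with 10 dogs per arm sits comfortably above 80% power.
print(round(approx_power(2.0, 10), 3))
```

Under this simplification the design exceeds the stated 80% power; a t-based or paired calculation, or an allowance for attrition, would give somewhat lower figures.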
The trial design and implementation conformed to the Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) guidelines23 where they apply to studies in companion animals. The flow of participants is provided in Figure 1. The demographic composition of the study population after unblinding is provided in Table 1. The timing of each procedure is shown in Table 2.\n\nFlow chart with details of dogs enrolled in the study and exclusions from each of the measured endpoints.\n\nIncisional wedge biopsies collected during eligibility screening before treatment (Day 0) and tru-cut biopsies collected on the fourth day of neoadjuvant treatment for enrolled dogs (Day 4) were processed as described24. Briefly, representative sections from each biopsy were fixed in 10% neutral buffered formalin for 24 hours and embedded in paraffin for routine histological analysis. Sample processing, staining, and immunohistochemical stains were done by the Comparative Pathology Shared Resource of the Masonic Cancer Center, University of Minnesota. Samples were classified according to the modified WHO scheme for canine lymphoma based on cell morphology, immunophenotyping using antibodies against human CD3 (AbD Serotec Cat# MCA1477T RRID:AB_10845948), human CD20 (Lab Vision Cat# RB-9013-P0 RRID:AB_149766), and CD79a (clone HM47/A9, Cat# CM 067 C RRID: pending), and available clinical history by two board certified veterinary pathologists (TDO and DMS). The remainder of the biopsy samples was used to prepare single cell suspensions to support the diagnoses through flow cytometry; these suspensions were cryopreserved in liquid nitrogen storage for the following analyses as described6,24.\n\nBlood samples were collected in evacuated EDTA tubes at Day 0, Day 4, and Day 11 to monitor toxicity and to evaluate blood LPCs. 
Adverse events were recorded and classified according to the Veterinary Cooperative Oncology Group (VCOG) criteria25.\n\nFlow cytometry analysis was performed as described6,17. Briefly, 5 × 10⁵ tumor cells were incubated with dog immunoglobulin G (IgG; Jackson ImmunoResearch, West Grove, PA) to prevent non-specific binding of antibodies to Fc receptors. Cells were stained using fluorescein isothiocyanate (FITC)-, phycoerythrin (PE)-, or allophycocyanin (APC)-conjugated antibodies against dog CD3 (clone CA17.2A12, AbD Serotec Cat# MCA1774F RRID:AB_2291174), dog CD4 (clone YKIX302.9, AbD Serotec Cat# MCA1038F RRID:AB_321271), dog CD5 (clone YKIX322.3, AbD Serotec Cat# MCA1037F RRID:AB_322643), dog CD8 (clone YCATE55.9, AbD Serotec Cat# MCA1039PE RRID:AB_322646), dog CD45 (clone YKIX716.13, AbD Serotec Cat# MCA1042F RRID:AB_324047, Cat# MCA1042PE RRID:AB_322644, and AbD Serotec Cat# MCA1042APC RRID:AB_324810), dog CD21 (clone CA2.1D6, AbD Serotec Cat# MCA1781PE RRID:AB_323238), human ABCB1 (clone UIC2, eBioscience Cat# 17-2439-42 RRID:AB_10736477), and human ABCG2 (clone 5D3, eBioscience Cat# 12-8888-82 RRID:AB_466219). Anti-human CD22 antibody (clone RFB4, Abcam Cat# ab23620 RRID:AB_447570) was labeled using the Zenon anti-mouse IgG1 Alexa-Fluor 647 labeling kit (Invitrogen-Molecular Probes, Carlsbad, CA). LPCs were detected by a cocktail of antibodies directed against human CD34 (clone 1H6, BD Biosciences Cat# 559369 RRID:AB_397238), human CD117 (clone YB5.B8, BD Biosciences Cat# 555714 RRID:AB_396058), and mouse CD133 (clone 13A4, eBioscience Cat# 12-1331-80 RRID:AB_465848), where the mix was designated as “Progenitor”6. The antibodies directed against human and mouse antigens have been shown to recognize the canine homologs6,18,26. Cells were gated based on their light scatter properties, and dead cells were excluded using 7-amino-actinomycin D (7-AAD; eBioscience) staining. 
Flow cytometry was performed using an LSRII cytometer (BD Immunocytometry Systems, San Jose, CA), and results were analyzed using FlowJo software (Tree Star, RRID:nif-0000-30575).\n\nSide populations were measured as described27. Briefly, DyeCycle Violet (DCV) (Life Technologies, Eugene, OR) was added to a final concentration of 10 μM, and 5 × 10⁵ cells were incubated for an additional 60 minutes at 37°C with intermittent mixing. Cells were washed, filtered, and maintained on ice until analysis. To exclude dead cells from analysis, 7-AAD was added to each sample immediately before collection. DCV emission was detected using a BD LSRII flow cytometer (BD Biosciences). Valspodar and verapamil were diluted in DMSO for use in this assay. Equivalent amounts of DMSO were added to control samples, and verapamil was used to determine the side population gates. Data were analyzed using FlowJo software (Tree Star, RRID:nif-0000-30575).\n\nRNA prepared from biopsies obtained at diagnosis (Day 0) and on the fourth day of neoadjuvant treatment for enrolled dogs (Day 4) was quantified and assessed for quality as described11,19. Briefly, total RNA was quantified using a fluorimetric RiboGreen assay and the total RNA integrity was assessed using capillary electrophoresis in the Agilent BioAnalyzer 2100 to generate RNA Integrity Numbers (RIN). Samples passed a QC step if they contained >1 µg of total RNA with a RIN >8. Next-generation RNA sequencing (RNAseq) was done in 14 paired (pre- and post-treatment) samples and two additional pre-treatment samples as described19. Each sample was sequenced to a targeted depth of ~20 million paired-end reads. Base call (.bcl) files for each cycle of sequencing were generated by the Illumina Real Time Analysis (RTA) software. Primary analysis and de-multiplexing were performed using Illumina’s CASAVA software 1.8.2 to verify the quality of the sequence data. The end result of the CASAVA workflow was a set of de-multiplexed FASTQ files for analysis. 
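The RNA QC rule above (samples proceed only with >1 µg of total RNA and a RIN above 8) can be written as a small filter; the sample names and values below are hypothetical and purely illustrative, not data from the study:

```python
def passes_rna_qc(total_rna_ug, rin, min_ug=1.0, min_rin=8.0):
    """QC rule described in the text: a sample proceeds to sequencing only if
    it yields more than `min_ug` micrograms of total RNA with a RIN above
    `min_rin`. Thresholds come from the text; the function is illustrative."""
    return total_rna_ug > min_ug and rin > min_rin

# Hypothetical (sample name: (total RNA in µg, RIN)) pairs:
samples = {"dog01_day0": (2.4, 9.1), "dog01_day4": (0.8, 8.6), "dog02_day0": (3.0, 7.5)}
passed = [name for name, (ug, rin) in samples.items() if passes_rna_qc(ug, rin)]
```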
Bioanalyzer quality control, RNA labeling, microarray hybridization and reading, and RNASeq were done at the University of Minnesota Genomics Center. Data will be available through the National Center for Bioinformatics (Submitted to Gene Expression Omnibus; GEO).\n\nFASTQ files were mapped to the CanFAM3 genome and the resulting BAM files were summarized to fragments per kilobase of exon per million fragments mapped (FPKM) values using CUFFDIFF. Sequences mapped to 13,952 annotated, named genes. Two-group t-tests were used to determine genes that were differentially expressed between the two groups (i.e., pre- and post-treatment). Expression differences with p-value and false discovery rate (FDR) of less than 0.05 were considered significant.\n\nEligible dogs were randomized into an experimental treatment group that was given encapsulated valspodar (7.5 mg/kg orally every 12 hours for 5 days) or a control group that was given the equivalent encapsulated placebo over the same schedule. Starting on Day 4, every dog received five doses of doxorubicin 21 days apart using a dosing schedule based on a previous study using valspodar in the neoadjuvant setting with single agent doxorubicin chemotherapy in dogs with osteosarcoma14. The first dose was reduced by 30% from the standard (from 30 mg/m2 to 21 mg/m2) to mitigate potential side effects of ABCB1 inhibition by the neoadjuvant valspodar. If no serious toxic effects of combined doxorubicin/valspodar were observed, subsequent doxorubicin treatments were dosed at 30 mg/m2. If toxic effects were observed, the dose remained at 21 mg/m2 and subsequent dose escalation to 30 mg/m2 only occurred if no serious adverse events were recorded following the previous dose. An overview of the treatment and collection of blood and tissue samples is provided in Table 2. The treatment responses were evaluated based on the VCOG criteria for lymphoma in dogs28. 
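The differential-expression criterion described above (per-gene two-group t-tests with both p-value and FDR below 0.05) can be illustrated with a stdlib-only sketch of the Benjamini-Hochberg adjustment; the study's actual values came from the CUFFDIFF workflow, so this is purely illustrative:

```python
def bh_fdr(pvals):
    """Benjamini-Hochberg adjusted p-values for a list of per-gene p-values
    (e.g., from two-group t-tests on FPKM values)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_min = 1.0
    for rank in range(m, 0, -1):      # walk from the largest p-value down
        i = order[rank - 1]
        running_min = min(running_min, pvals[i] * m / rank)
        adjusted[i] = running_min
    return adjusted

def significant_genes(pvals, alpha=0.05):
    """Indices of genes meeting both the p-value and the FDR criterion,
    as in the screen described in the text."""
    adj = bh_fdr(pvals)
    return [i for i, (p, q) in enumerate(zip(pvals, adj)) if p < alpha and q < alpha]
```

For example, `bh_fdr([0.01, 0.04, 0.03, 0.005])` returns adjusted values of approximately `[0.02, 0.04, 0.04, 0.02]`.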
The last treatment was given at 111 days; dogs were examined once more at 180 days, which was near the expected median survival for the single agent doxorubicin protocol29, and then released to their attending veterinarian. The status for each dog was ascertained by telephone or electronic mail communication with the attending veterinarians and/or the owners periodically thereafter until a death event was recorded or >500 days had elapsed. Relapse was determined using clinical parameters (generalized lymphadenopathy on physical exam) with conventional testing as needed (routine radiographs or ultrasound imaging, fine needle aspirate). Dogs were considered off-study at relapse and were then eligible to undergo rescue therapy (N=11) or enter other clinical studies (N=4).\n\nSerum samples collected on the fourth day of neoadjuvant treatment (Day 4) were stored at -80°C until analysis. Valspodar was quantified by liquid chromatography/tandem mass spectrometry (LC-MS/MS) using a high-performance liquid chromatograph (Agilent 1200 Series, Santa Clara, CA) coupled with a TSQ Quantum triple stage quadrupole mass spectrometer (Thermo-Electron, San Jose, CA) as described30.\n\nDescriptive statistics (mean, median, minimum, maximum) were recorded for age, gender, breed, and disease stage; for each variable, differences between groups were determined using Fisher’s exact test. Time to remission, duration of remission, and overall survival were recorded in days starting on the date that the dogs first received a clinical diagnosis. The percentage of LPCs in lymph node samples was calculated based on expression of relevant cell surface markers (CD34/CD117/CD133) as a proportion of live, large, CD22+ B cells6. The ΔLPC was calculated as the ratio of LPCs at Day 4 over LPCs at Day 0. The Mann-Whitney test (Prism 5, GraphPad Software, Inc., La Jolla, CA) was used to compare lymph node LPC numbers between dogs in the experimental treatment and control groups. 
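A minimal sketch of the ΔLPC calculation and of the U statistic underlying the Mann-Whitney comparison may clarify the analysis; the study used Prism for the actual test (including p-values), and the ΔLPC values below are hypothetical:

```python
def delta_lpc(day0_pct, day4_pct):
    """ΔLPC as defined in the text: the LPC fraction at Day 4 divided by the
    fraction at Day 0 (1.0 = no change over the neoadjuvant period)."""
    return day4_pct / day0_pct

def mann_whitney_u(xs, ys):
    """Mann-Whitney U statistic for sample xs versus ys (ties count 0.5).
    This is only the test statistic; a statistics package such as Prism or
    scipy would convert it to a p-value."""
    u = 0.0
    for x in xs:
        for y in ys:
            u += 1.0 if x > y else (0.5 if x == y else 0.0)
    return u

# Hypothetical (Day 0 %, Day 4 %) pairs for two groups of dogs:
valspodar = [delta_lpc(d0, d4) for d0, d4 in [(1.2, 0.9), (0.8, 0.7), (1.5, 1.6)]]
placebo   = [delta_lpc(d0, d4) for d0, d4 in [(1.0, 1.1), (0.9, 0.8), (1.3, 1.4)]]
u = mann_whitney_u(valspodar, placebo)
```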
The associations between variables were determined using the Pearson correlation. Differences between groups in duration of remission and overall survival were determined using Kaplan-Meier probability and log-rank tests.\n\n\nResults\n\nValspodar is a potent, selective inhibitor of the ABCB1 efflux transporter12,13. To confirm that the clinical grade compound retained potency after compounding, we examined its ability to inhibit DCV efflux using the flow cytometric side population assay. COSB canine hemangiosarcoma cells contain a subpopulation of cells that shows robust dye efflux in this assay27 (Figure 2, Dataset a). The compounded, clinical grade valspodar was as effective as the research grade valspodar in this assay, eliminating >90% of the side population (i.e., inhibiting dye efflux) at concentrations as low as 30 ng/ml (Figure 2, Dataset a). The effect of valspodar was comparable to that observed with verapamil (Figure 2, Dataset a), which inhibits both ABCB1 and ABCG2 at the 50–100 µM concentrations used in this assay.\n\nSide population analyses were done as described in Materials and methods using cultured COSB canine hemangiosarcoma cells. (A) Live cells were gated based on light scatter properties and exclusion of 7-AAD, and (B) the side populations were determined based on DyeCycle Violet (DCV) efflux. Verapamil was used to inhibit ABCB1 and ABCG2 at 50–100 µM concentrations. Clinical grade and research grade valspodar were used at concentrations that were achieved in the plasma of dogs in the study (30 – 600 ng/ml) as well as at the saturating dose of 1 µg/ml. The Y-axis is DCV-blue (450 ± 50 nm) emission while the X-axis is DCV-red (660 ± 40 nm) on the LSR-II. Data were analyzed and dot plots were created in FlowJo.\n\nExcluding dogs that had received previous chemotherapy, 40 dogs were screened for eligibility. Twenty dogs were eligible and enrolled in the trial. 
Of the 20 dogs that were excluded, 5 dogs had lymphomas classified as other than DLBCL or MZL in transition (specifically, three had T-cell lymphoma, one had an indolent type of lymphoma, and one had disease largely confined to spleen with minimal peripheral lymphadenopathy that precluded biopsy) and 15 dogs were excluded because of hypercalcemia (N=2), lymphoma in substage b or an ongoing co-morbidity (N=8), body weight above the maximum allowed (N=1), or owners declining participation (N=4).\n\nOf the twenty dogs enrolled, 10 were randomized to each group. The distribution of dogs according to demographic characteristics is shown in Table 1. The composition of the study population was as expected31,32, and there were no statistically significant differences in any category between the experimental treatment group and the control group. One dog in the placebo group did not receive doxorubicin chemotherapy after the neoadjuvant period per its owner’s decision. This dog was censored in the outcome assessments.\n\nSix dogs, including three in the placebo group and three in the experimental (valspodar) group, had reportable events during the study (Table 3). The most common toxicities observed in both groups were grade-1 and grade-2 inappetence, lethargy, vomiting, and diarrhea. No grade-4 or grade-5 toxicities were observed, although one event was potentially dose limiting. One dog had grade-2 hematological toxicity (neutropenia and thrombocytopenia) after the first administration of doxorubicin. The doxorubicin dose for the second administration was maintained at 21 mg/m2 and no toxicity was observed. However, the owner only permitted subsequent doxorubicin doses to be escalated to 24 mg/m2. 
The dog that was withdrawn after neoadjuvant placebo had grade-2 gastrointestinal toxicity and grade-1 lethargy.\n\n1Owner elected to withdraw dog from study prior to receiving doxorubicin\n\n2Vomiting and diarrhea\n\n3Dog’s second doxorubicin treatment was dosed at 21 mg/m2; similar toxic effects were not observed. However, the dog’s owner only permitted subsequent doxorubicin doses to be escalated to 24 mg/m2\n\n4Diarrhea\n\n5Neutropenia (grade 2) and thrombocytopenia (grade 1)\n\n6Vomiting\n\nBlood and lymph node LPCs were quantified for each dog at diagnosis (Day 0) and on the fourth day of neoadjuvant treatment (Day 4) as described in Materials and Methods. Table 4 shows that LPCs were detectable in every sample at a comparable frequency to what was previously reported6. The distribution of lymph node LPCs at diagnosis was narrower in the dogs that received valspodar than in the control dogs (Figure 3A), but the two groups were not significantly different, and neither group showed a statistically significant reduction in LPCs on the fourth day of the neoadjuvant period (ΔLPC). Similar results were observed for blood LPCs, with the exception that the variance in frequency of these cells in blood was noticeably increased on the fourth day of the neoadjuvant period (Figure 3B).\n\n(A) Box plots showing median (white line), 75% confidence intervals, and outliers of the percent LPCs in lymph nodes at diagnosis (top) and relative change in LPCs (bottom) from the time of diagnosis (Day 0) to the fourth day of the neoadjuvant period (Day 4) in each group of dogs. ΔLPC = 1.0 means no change in the percent LPCs measured at both time points. (B) Box plots showing median (white line), 75% confidence intervals, and outliers of the percent LPCs in peripheral blood at diagnosis (top) and relative change in LPCs (bottom) from the time of diagnosis (Day 0) to the fourth day of the neoadjuvant period (Day 4) in each group of dogs. 
Data were analyzed and graphs were assembled using MS Excel.\n\nThe absence of a treatment effect on total LPCs suggested that we could not reject the null hypothesis that neoadjuvant valspodar did not enhance chemosensitivity of LPCs, and could reflect variable expression of ABC transporters by these cells. Samples from 15 dogs in the study (six in the placebo group and nine in the valspodar group) had sufficient material for analysis of ABCB1 and ABCG2 expression in LPCs at diagnosis. The proportion of ABCB1+ LPCs and ABCG2+ LPCs was variable. In the placebo group, between 1.6% and 52.4% of lymph node LPCs expressed these proteins at the time of diagnosis; in the valspodar group, the range of ABCB1 and ABCG2 transporter expression in lymph node LPCs at the time of diagnosis was 10.0% to 72.7% (Table 5). When we examined the proportion of ABCB1+ LPCs and ABCG2+ LPCs in dogs from each treatment group, we saw an intriguing reversal in the trends with regard to event-free survival (Figure 4), although neither group showed a significant correlation between the number of ABCB1+ or ABCG2+ cells at diagnosis and survival (all the R² values were less than or equal to 0.42).\n\nDot plots showing the relationship between ABCB1 expression and event-free survival (EFS) in days (top) and between ABCG2 expression and EFS in days (bottom) in dogs treated with placebo (N = 9) or with neoadjuvant valspodar (N = 9) where samples were available for these measurements. The dashed lines represent linear regressions and their R² values are indicated on each graph. The Y-axis represents the % of ABC+/Progenitor+ lymph node B cells. Data were analyzed and graphs were assembled using MS Excel.\n\nSamples from four dogs (two in the placebo group and two in the valspodar group) had sufficient material for analysis of ABCB1 and ABCG2 to determine if valspodar specifically reduced the number of ABCB1+ and ABCG2+ LPCs in paired pre- and post-treatment samples. 
There was a quantifiable decrease in the frequency of ABCB1+ and ABCG2+ LPCs, but this change was comparable between the two dogs that received valspodar and the two dogs that received placebo (Table 5 and Supplementary Figures 1A–1D, Dataset b).\n\nWe examined whether the inhibition of ABCB1 activity with valspodar changed genome-wide patterns of gene expression in lymph nodes from dogs in both groups. Paired pre- (Day 0) and post- (Day 4) treatment samples were available from five dogs in the placebo group and from nine dogs in the valspodar group. One additional pre-treatment sample from dogs in each group was available and included in the analysis, making a total of 16 pre-treatment samples and 14 post-treatment samples. We did not identify any genes with significantly different expression between groups or between pre- and post-treatment samples in the placebo or the valspodar groups.\n\nThe observation that valspodar treatment did not specifically alter the total blood or lymph node LPCs or the frequency of ABCB1+ and ABCG2+ LPCs, and that it did not lead to significant changes in gene expression of lymph node cells, could be attributed to poor bioavailability. To evaluate this possibility, we examined the purity of the compounded, encapsulated drug and the levels of valspodar in serum samples obtained at Day 4 from seven dogs using LC-MS/MS. Valspodar was undetectable in placebo capsules, and the purity of the compounded capsules was 104% as compared to research grade valspodar.\n\nValspodar was also undetectable (<5 ng/ml) in dogs that received placebo, but it was present at detectable levels in each of four dogs that received compounded valspodar capsules (34, 63, 375, and 623 ng/ml, respectively). 
This is equivalent to levels between 0.025 and 0.5 µM on the fourth day of twice-daily administration, which is in the range seen in dogs where valspodar was given at the same dose in an oil-based drinking solution14.\n\nEighteen treated dogs achieved clinical remission, defined as a complete response (disappearance of all evidence of disease with all lymph nodes shrinking to non-pathologic size in the judgment of the evaluator) after the first dose of doxorubicin. One dog in the valspodar group did not achieve clinical remission, but survived with stable disease for 428 days. One dog in the placebo group never received doxorubicin and was censored from this analysis. This dog was treated with palliative intent using prednisone only; it failed to achieve remission and died 59 days after diagnosis.\n\nThe time to remission after the start of neoadjuvant treatment ranged from 7 to 106 days (after doxorubicin) in the placebo group, and from 7 to 105 days (after doxorubicin) in the valspodar group (excluding the dog that never achieved remission). There were no differences between groups with reference to the median time to remission, the median (or range) duration of remission, the number of dogs alive at the 180-day milestone, or the number of dogs alive at 500 days (Table 6). The event-free survival and overall survival times for each group are shown in Figure 5.\n\n*Excluding the dog that did not achieve remission\n\nKaplan–Meier analysis of event-free survival (top) and overall survival (bottom) in dogs treated with doxorubicin with the addition of neoadjuvant placebo or valspodar. The table below the graphs shows the median event-free and overall survival for each group. 
Data were analyzed and graphs were assembled using MS Excel.\n\nTo test the hypothesis that LPCs contribute to disease progression, we examined whether the proportion of LPCs at diagnosis or the ΔLPCs correlated, directly or inversely, with duration of remission and with overall survival for dogs in the valspodar and control groups, individually and for all of the dogs in the study. Figures 6A and 6B show scatterplots illustrating the absence of correlation between event-free survival (duration of remission) or overall survival and, respectively, the proportion of lymph node LPCs at diagnosis and the ΔLPCs (D4/D0). The results were similar when we analyzed correlations between the proportion of blood LPCs at diagnosis or ΔLPCs and survival outcomes (data not shown).\n\n(A) Dot plots showing the relationship between the percent of lymph node LPCs at diagnosis and EFS (N=9), and the relative change in LPCs from the time of diagnosis (Day 0) to the fourth day of the neoadjuvant period (Day 4) and EFS (N=8), in days in dogs treated with placebo (N = 9) or with neoadjuvant valspodar. (B) Dot plots showing the same relationships for overall survival (OS, N=9 and N=10 for LPCs at diagnosis and for ΔLPCs, respectively). Data were analyzed and graphs were assembled using MS Excel.\n\n\nConclusions and discussion\n\nWe conducted a double-blinded, placebo-controlled study in 20 dogs to determine whether valspodar used in the neoadjuvant setting would sensitize LPCs to doxorubicin and increase the length of remission in dogs with therapy-naïve large B-cell lymphoma. Our results confirmed the previous observation from Cagliero et al.14 showing that valspodar can be safely administered to dogs twice daily at a dose of 7.5 mg/kg. Furthermore, we verified that CD22+/CD34+/CD117+/CD133+ LPCs constitute between 0.3 – 2% of lymph node B cells and 0.001 – 3% of peripheral blood B cells in dogs with large cell B-cell lymphomas. 
The observation that these cells are virtually undetectable in lymph node samples from healthy dogs, while they exist in a steady state in canine B-cell lymphomas even in the xenotransplantation setting6, suggests that they contribute to the maintenance or propagation of the tumor population.\n\nUpregulation of ABC transporters is a well-described mechanism of acquired drug resistance in lymphoma and other cancers, making these proteins attractive targets for pharmacologic modulation33,34. These proteins are transport channels that extrude a variety of compounds, including xenobiotics, from cells. Cells expressing these proteins have been defined functionally as “side populations” based on their ability to exclude fluorescent dyes in flow cytometric assays. The possibility that increased expression of ABCB1 and other transporters was due to selection of cells intrinsically possessing this trait, as opposed to through de novo induction of expression, was proposed more than 20 years ago35 and recapitulated most recently in canine lymphomas in vitro through drug selection, with expansion of a valspodar-sensitive subclone that had increased expression of ABCB1 and ABCG236.\n\n“Side populations” are routinely detectable in canine lymphomas37. In that study, 0.1 to 4% of cells in the canine B-cell lymphoma cell lines GL-1 and 17-71 excluded Hoechst 33342 and expressed detectable levels of ABCB1 and ABCB2. A dye-excluding side population was also variably detectable in five primary lymphomas. GL-1 cells and one of the lymphoma samples expressed a form of ABCB1 with slower electrophoretic mobility, possibly representing the active, phosphorylated form of this transporter38. ABCG2 was expressed ubiquitously in GL-1 cells and in the five primary lymphomas. 
However, the side population identified by Kim and colleagues was insensitive to verapamil and to fumitremorgin-C37, suggesting that the dye exclusion activity might have been mediated by an ABC transporter distinct from ABCB1 and ABCG2.\n\nThe notion that cells expressing ABC transporters can behave like cancer stem cells in lymphomas is not universally accepted. Indeed, the existence of tumor-initiating or tumor-propagating cells (TIC/TPC) or of a hierarchical organization in lymphoid malignancies at all remains a matter of debate39. In acute lymphoblastic leukemias (ALL), models for cells of origin have been proposed, including common hematopoietic progenitors, common lymphoid progenitors, and committed B-lymphoid cells, depending largely upon the molecular subtype of ALL. In preliminary experiments, samples from two human patients with ALL included a subset of CD117+ cells that were present at a similar frequency to LPCs in canine lymphoma (D. Ito and J. Modiano, unpublished results); however, the functional significance of this finding remains to be determined. The evidence for TIC/TPC in solid lymphomas is even more sparse. Drug resistant TIC/TPCs were defined in follicular lymphoma using side population assays and increased expression of ABCG22. Tumor formation in these cells was limited by an obligate interaction with follicular dendritic cells in the microenvironment niche, which was mediated through the CXCR4 chemokine receptor. TIC/TPC were similarly identified using side population assays in a mouse model of mantle cell lymphoma4, and more recently in human anaplastic lymphoma kinase (ALK)-positive and -negative anaplastic large cell lymphomas40.\n\nNext generation sequencing and genome-wide epigenomic analyses of human DLBCL have revealed a potential mechanism to explain how lymphoid cells might acquire TIC/TPC properties and how this acquisition could be related to the expression of ABC transporters. 
The gene encoding the enhancer of zeste homolog 2 (EZH2) had gain of function mutations in 7/49 (14%) DLBCL patients sequenced41. EZH2 is a histone methyltransferase that functions as part of the polycomb group complex, which controls the balance between self-renewal and differentiation42. In germinal center (GC) B cells, EZH2 appears to suppress differentiation genes and favor behavior that resembles stem cells43. As in GC DLBCL cells, depletion of EZH2 in Bel/Fu hepatocellular carcinoma cells inhibited proliferation, but in Bel/Fu cells this depletion also increased methylation at the ABCB1 gene, reduced ABCB1 gene and protein expression44, and showed consequent sensitization of these cells to the cytotoxic effects of 5-fluorouracil45. Together, these findings provide a strong rationale for use of neoadjuvant therapies to sensitize TIC/TPCs in lymphoma using ABC transporter inhibitors, at least in a subset of GC DLBCL.\n\nOur data show that LPCs in canine large B-cell lymphoma were heterogeneous regarding expression of ABCB1 and ABCG2, with slightly fewer present in the dogs randomized to the placebo group. Such heterogeneity is consistent with previous observations in human lymphoma samples3. The apparent reversal in outcome trends between the placebo and valspodar groups as a function of the percent lymph node B-cell LPCs at diagnosis was intriguing, and while tempered by the small sample size, it suggests this approach merits additional investigation.\n\nThe proportion of ABCB1+ and ABCG2+ LPCs appeared to decrease in the samples from four dogs during the neoadjuvant period where we could perform the analysis; however, the change was unrelated to valspodar, since a reduction of similar magnitude occurred in the dogs assigned to both the placebo and the valspodar groups. 
Furthermore, statistically significant differences were not found in either the total number of LPCs or in the duration of remission (or overall survival) between groups of dogs treated with valspodar and placebo.\n\nIt is worth noting that the duration of remission and the overall survival of dogs in this study slightly exceeded the expectations based on previously published results using single agent doxorubicin29. This could be attributed to improved management of cancer patients over time, but it also could be due to recruitment of a relatively uniform population of dogs based on clinical and pathologic criteria20. The latter possibility highlights the benefits of study designs that narrow disease heterogeneity, particularly for canine lymphoma where each disease entity in this complex is considered as an individual disease.\n\nThere are several possible explanations for the absence of clinical improvement in dogs receiving valspodar vs. placebo. First, it is possible that this treatment would be most effective against a specific subset of DLBCL, such as EZH2-mutated GC DLBCL. It has been challenging to separate canine DLBCLs into activated B-cell (ABC) type and GC-type DLBCL11,46, although one study suggested canine DLBCL might be more similar to human ABC type DLBCL47. Second, it must be noted that the study was designed to address chemosensitization of LPCs by valspodar, and the sample size was not powered to reveal if this protocol would significantly improve survival outcomes. 
Based on our results, we estimate that a clinical trial designed to detect a doubling of the median overall survival (from 12 months to 24 months) in dogs receiving neoadjuvant valspodar would require 35 dogs in each of the treatment and placebo arms.\n\nNonetheless, we confirmed absorption and bioavailability of the drug on the fourth day of administration, and we showed that the drug was able to fully inhibit ABC transporter activity in a side population assay even at the lowest serum concentration detected. However, the levels of valspodar required for sustained, active inhibition of ABC transporter activity in vivo have not been conclusively established. For example, when valspodar (50 mg/kg) and paclitaxel (10 mg/kg) were administered concurrently to mice through the oral route, they passed rapidly through the stomach and reached the intestine together, resulting in enhanced uptake and higher plasma levels of paclitaxel48. In rats, oral valspodar was absorbed rapidly and had excellent bioavailability with low hepatic extraction49. In human patients with chemotherapy-resistant multiple myeloma, a dose escalation study showed similar pharmacokinetic properties. Orally administered valspodar combined with doxorubicin, vincristine and dexamethasone led to a doubling of area under the curve for doxorubicin levels in the plasma and reduced its clearance by half16. The concentration of valspodar in serum increased proportionately with doses of up to 15 mg/kg/day, although it reached its maximum effect on plasma doxorubicin at 5 mg/kg/day, where the median trough and peak levels (of valspodar) were 461 ng/ml and 1134 ng/ml, respectively. The treatment regimen was associated with increased toxicity and required dose reduction in more than 50% of the patients (13/22). 
Yet, 14 of the patients treated had either a partial response or stable disease, and ABCB1 expression in bone marrow plasma cells was reduced in four of the five responding patients examined.\n\nIn another study, valspodar was administered concurrently with doxorubicin to 31 cancer patients using an intravenous loading dose of 1–2 mg/kg and a continuous dose of 1–10 mg/kg over 24 hours. Doxorubicin was given immediately at the end of the loading dose and the treatment was repeated every 21 days until there was disease progression or unacceptable toxicity15. As noted in the Sonneveld study16, patients receiving valspodar showed a significantly increased area under the curve for doxorubicin, with a 50% reduction in doxorubicin clearance as compared to controls. The steady-state concentrations of valspodar over the time of continuous administration ranged from 190 ng/ml to 1383 ng/ml with unchanged rates of clearance, and serum from treated patients contained sufficiently high levels of valspodar to inhibit ABCB1 activity in an in vitro bioassay. Dose-limiting toxicities were observed only in patients treated with the highest dose of valspodar (2 mg/kg loading dose and 10 mg/kg continuous dose) and 50 mg/m2 doxorubicin. One patient (ovarian cancer) had a partial response, but none of the patients in this trial had non-Hodgkin lymphoma15.\n\nThe effective serum concentrations and positive bioassay results in these studies are in contrast to those in another series of experiments showing that the concentration required to inhibit ABC transporter activity in vitro under complete serum conditions (cells cultured in 100% fetal bovine serum) is almost a full order of magnitude (8–9 times) higher than the plasma concentrations achieved in clinical trials, probably due to binding of valspodar by serum lipoproteins50. Among the compounds examined, daunorubicin was the most relevant. 
In 100% serum, the half maximal concentration of valspodar required to inhibit ABCB1-mediated daunorubicin transport was approximately 1.5 µM (or approximately 1800 ng/ml), which is close to the peak levels achievable using continuous infusions15 and almost 3-fold higher than the levels we measured in our study.\n\nIt also is possible that inhibiting ABCB1 and ABCG2 in LPCs is insufficient to ablate the population. In our study, 30% to 90% of lymph node LPCs did not express ABCB1 or ABCG2. In addition, the variable sensitivity to verapamil and other ABC transporter inhibitors by LPCs and side population cells in leukemia and lymphoma suggests that these cells might rely on alternative mechanisms of drug export and/or drug resistance. Still, it has been shown that clinically relevant anti-lymphoma immunotherapies including rituximab51 and anti-CD19 antibodies52 induce ABCB1 to translocate out of lipid rafts, reducing its ability to extrude chemotherapy agents such as vincristine and doxorubicin and increasing the chemosensitivity of drug-resistant lymphoma cell lines. We propose that the totality of data continues to support the rationale for implementing treatment approaches for non-Hodgkin lymphoma that target ABCB1 and ABCG2 in the neoadjuvant or the adjuvant settings. These treatments might be most effective for patients with tumors that do not respond to other targeted agents, such as those diagnosed with EZH2-mutant GC DLBCL. Thus, additional work and diligently crafted clinical trials, as well as creative animal models of induced and spontaneous disease, will be needed to establish the significance of LPCs in the pathogenesis of lymphoid malignancies and the potential to improve patient outcomes by targeting the ABC transporter-enriched and the ABC transporter-deficient subsets of these cell populations.\n\n\nData availability\n\nF1000Research: Dataset 1. 
Data of pilot study on valspodar in neoadjuvant settings for canine B-cell lymphoma, 10.5256/f1000research.6055.d4289753",
"appendix": "Author contributions\n\n\n\nDI, MOC, NJM, KMS, TDO, MHS, AB, JCS, JFL, and JFM conceptualized and designed the study; MOC, NJM, KMS, JCS, and JFM assembled and integrated the study teams; DI and JCS performed flow cytometry in lymph node and blood samples, respectively; DI performed in vitro assays; MOC, ALW, KMS, MSH, AB, JCS, JFL, SR, and JFM developed standard operating protocols and implemented the study; MOC, NJM, ALW, KMS, MSH, AB, EK, SRL contributed to case accrual, acquisition of clinical data and medical care of study animals; KMS coordinated interactions with pharmacists, IACUC, and among study sites, managed study budgets, and dispensed drugs; KMS and SRL supervised assignment of study animals to treatment groups and maintained investigator blinding; ALW maintained all study records; ML was responsible for sample management and archiving, record keeping, and data validation; TDO was responsible for assigning WHO classification during screening; TDO and DMS reviewed and interpreted all pathological samples; JFL was responsible for training Purdue personnel and interpreting flow cytometric data from peripheral blood samples; JK contributed to statistical analysis; SR provided clinical grade valspodar; DI, MOC, NJM, JFL, and JFM assembled and interpreted data; JFM secured project funding; DI and JFM wrote the first draft of the manuscript; all authors edited the manuscript and approved the final version; DI and JFM are the corresponding authors.\n\n\nCompeting interests\n\n\n\nDr. 
Stephan Ruetz is employed by Novartis Pharma AG.\n\nAll other authors have no competing interest to declare.\n\n\nGrant information\n\nThis project was supported in part by grant VTM-1 from the Veterinary Translational Medicine program of the Clinical and Translational Science Institutes of the University of Minnesota and Indiana University and by grants UL1 RR033183 (University of Minnesota Clinical and Translational Science Award), P30 CA077598 (Comprehensive Cancer Center Support Grant, Masonic Cancer Center, University of Minnesota for support of the Comparative Pathology Shared Resource, the Flow Cytometry Shared Resource, and the Clinical Pharmacology Analytical Services); and P30 CA023168 (Purdue Center for Cancer Research for use of the Purdue Flow Cytometry and Cell Separation Shared Resource) from the National Institutes of Health. Clinical grade valspodar was provided for this study by Novartis Pharma AG. Dr. Daisuke Ito was supported in part by a FIRST Award from Morris Animal Foundation (D12CA-302). The authors gratefully acknowledge donations to the Animal Cancer Care and Research Program of the University of Minnesota that helped support this project.\n\nI confirm that the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgments\n\nThe authors acknowledge support from the Biomedical Genomics Center, University of Minnesota and the Minnesota Supercomputing Institute for RNA sequencing, data storage, and assistance with data analysis; Dr. Aaron Sarver for assistance mapping RNA sequencing data to the canine genome and for guidance with bioinformatic analyses; Dr. Aaron Becker for assistance with RNA sequencing design and implementation; Dr. Jong-Hyuk Kim for assistance with bioinformatic analysis; Drs. Jim Fisher and Pamala Jacobson and the staff at the Clinical Pharmacology Analytical Services of the University of Minnesota for assistance with serum measurements; Ms. 
Milcah Scott and Ms. Ashley Graef for assistance with program and budget management; Dr. Erin Dickerson for assistance with ABC transporter measurements and helpful discussions, and Dr. Sandra Wells for review of the manuscript.\n\n\nSupplementary materials\n\nSupplemental Figure 1. Expression of ABCB1 and ABCG2 in lymph node LPCs from dogs with large B-cell lymphoma at diagnosis and on the fourth day of the neoadjuvant period.\n\nSamples from two dogs in each group were available for analysis of ABCB1 and ABCG2 expression in LPCs prior to (Day 0) and on the fourth day (Day 4) of neoadjuvant treatment. Live lymphocytes were gated based on light scatter properties and exclusion of 7-AAD. T cells were excluded based on CD5 staining; progenitor cells were gated based on expression of CD34, CD117, and CD133 as described in Materials and methods. Dye exclusion was measured as described in Figure 2. (A) ABCB1 and ABCG2 expression at the time of diagnosis (Day 0) in two dogs treated with valspodar (MN02 and MN10). (B) ABCB1 and ABCG2 expression on the fourth day of neoadjuvant treatment (Day 4) in two dogs treated with valspodar (MN02 and MN10). (C) ABCB1 and ABCG2 expression at the time of diagnosis (Day 0) in two dogs treated with placebo (MN05 and MN09). (D) ABCB1 and ABCG2 expression on the fourth day of neoadjuvant treatment (Day 4) in two dogs treated with placebo (MN05 and MN09). Data were analyzed and dot plots were created in FlowJo.\n\n\nReferences\n\nNguyen LV, Vanner R, Dirks P, et al.: Cancer stem cells: an evolving concept. Nat Rev Cancer. 2012; 12(2): 133–43. PubMed Abstract | Publisher Full Text\n\nLee CG, Das B, Lin TL, et al.: A rare fraction of drug-resistant follicular lymphoma cancer stem cells interacts with follicular dendritic cells to maintain tumourigenic potential. Br J Haematol. 2012; 158(1): 79–90. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nLee MR, Ju HJ, Kim BS, et al.: Isolation of side population cells in B-cell non-Hodgkin's lymphomas. Acta Haematol. 2013; 129(1): 10–7. PubMed Abstract | Publisher Full Text\n\nVega F, Davuluri Y, Cho-Vega JH, et al.: Side population of a murine mantle cell lymphoma model contains tumour-initiating cells responsible for lymphoma maintenance and dissemination. J Cell Mol Med. 2010; 14(6B): 1532–45. PubMed Abstract | Publisher Full Text\n\nWang Y, Liu Y, Malek SN, et al.: Targeting HIF1alpha eliminates cancer stem cells in hematological malignancies. Cell Stem Cell. 2011; 8(4): 399–411. PubMed Abstract | Publisher Full Text | Free Full Text\n\nIto D, Endicott MM, Jubala CM, et al.: A tumor-related lymphoid progenitor population supports hierarchical tumor organization in canine B-cell lymphoma. J Vet Intern Med. 2011; 25(4): 890–6. PubMed Abstract | Publisher Full Text\n\nIto D, Frantz AM, Modiano JF: Canine lymphoma as a comparative model for human non-Hodgkin lymphoma: recent progress and applications. Vet Immunol Immunopathol. 2014; 159(3–4): 192–201. PubMed Abstract | Publisher Full Text\n\nDonnenberg VS, Donnenberg AD: Multiple drug resistance in cancer revisited: the cancer stem cell hypothesis. J Clin Pharmacol. 2005; 45(8): 872–7. PubMed Abstract | Publisher Full Text\n\nGottesman MM, Fojo T, Bates SE: Multidrug resistance in cancer: role of ATP-dependent transporters. Nat Rev Cancer. 2002; 2(1): 48–58. PubMed Abstract | Publisher Full Text\n\nUeda K, Cardarelli C, Gottesman MM, et al.: Expression of a full-length cDNA for the human \"MDR1\" gene confers resistance to colchicine, doxorubicin, and vinblastine. Proc Natl Acad Sci U S A. 1987; 84(9): 3004–8. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFrantz AM, Sarver AL, Ito D, et al.: Molecular profiling reveals prognostically significant subtypes of canine lymphoma. Vet Pathol. 2013; 50(4): 693–703. 
PubMed Abstract | Publisher Full Text\n\nNobili S, Landini I, Giglioni B, et al.: Pharmacological strategies for overcoming multidrug resistance. Curr Drug Targets. 2006; 7(7): 861–79. PubMed Abstract | Publisher Full Text\n\nTai HL: Technology evaluation: Valspodar, Novartis AG. Curr Opin Mol Ther. 2000; 2(4): 459–67. PubMed Abstract\n\nCagliero E, Ferracini R, Morello E, et al.: Reversal of multidrug-resistance using Valspodar (PSC 833) and doxorubicin in osteosarcoma. Oncol Rep. 2004; 12(5): 1023–31. PubMed Abstract | Publisher Full Text\n\nMinami H, Ohtsu T, Fujii H, et al.: Phase I study of intravenous PSC-833 and doxorubicin: reversal of multidrug resistance. Jpn J Cancer Res. 2001; 92(2): 220–30. PubMed Abstract | Publisher Full Text\n\nSonneveld P, Marie JP, Huisman C, et al.: Reversal of multidrug resistance by SDZ PSC 833, combined with VAD (vincristine, doxorubicin, dexamethasone) in refractory multiple myeloma. A phase I study. Leukemia. 1996; 10(11): 1741–50. PubMed Abstract\n\nIto D, Brewer S, Modiano JF, et al.: Development of a novel anti-canine CD20 monoclonal antibody with diagnostic and therapeutic potential. Leuk Lymphoma. 2015; 56(1): 219–225. PubMed Abstract | Publisher Full Text\n\nIto D, Frantz AM, Williams C, et al.: CD40 ligand is necessary and sufficient to support primary diffuse large B-cell lymphoma cells in culture: a tool for in vitro preclinical studies with primary B-cell malignancies. Leuk Lymphoma. 2012; 53(7): 1390–8. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGorden BH, Kim JH, Sarver AL, et al.: Identification of three molecular and functional subtypes in canine hemangiosarcoma through gene expression profiling and progenitor cell characterization. Am J Pathol. 2014; 184(4): 985–95. PubMed Abstract | Publisher Full Text | Free Full Text\n\nValli VE, San Myint M, Barthel A, et al.: Classification of canine malignant lymphomas according to the World Health Organization criteria. Vet Pathol. 2011; 48(1): 198–211. 
PubMed Abstract | Publisher Full Text\n\nMealey KL: Therapeutic implications of the MDR-1 gene. J Vet Pharmacol Ther. 2004; 27(5): 257–64. PubMed Abstract | Publisher Full Text\n\nMealey KL, Bentjen SA, Gay JM, et al.: Ivermectin sensitivity in collies is associated with a deletion mutation of the mdr1 gene. Pharmacogenetics. 2001; 11(8): 727–33. PubMed Abstract | Publisher Full Text\n\nChan AW, Tetzlaff JM, Altman DG, et al.: SPIRIT 2013 statement: defining standard protocol items for clinical trials. Ann Intern Med. 2013; 158(3): 200–7. PubMed Abstract | Publisher Full Text\n\nJubala CM, Wojcieszyn JW, Valli VE, et al.: CD20 expression in normal canine B cells and in canine non-Hodgkin lymphoma. Vet Pathol. 2005; 42(4): 468–76. PubMed Abstract | Publisher Full Text\n\nVail DM: Veterinary Co-operative Oncology Group - Common Terminology Criteria for Adverse Events (VCOG-CTCAE) following chemotherapy or biological antineoplastic therapy in dogs and cats v1.0. Vet Comp Oncol. 2004; 2(4): 195–213. PubMed Abstract | Publisher Full Text\n\nKhammanivong A, Gorden BH, Frantz AM, et al.: Identification of drug-resistant subpopulations in canine hemangiosarcoma. Vet Comp Oncol. 2014. PubMed Abstract | Publisher Full Text\n\nGorden BH, Saha J, Khammanivong A, et al.: Lysosomal drug sequestration as a mechanism of drug resistance in vascular sarcoma cells marked by high CSF-1R expression. Vasc Cell. 2014; 6: 20. PubMed Abstract | Publisher Full Text | Free Full Text\n\nVail DM, Michels GM, Khanna C, et al.: Response evaluation criteria for peripheral nodal lymphoma in dogs (v1.0)--a Veterinary Cooperative Oncology Group (VCOG) consensus document. Vet Comp Oncol. 2010; 8(1): 28–37. PubMed Abstract | Publisher Full Text\n\nChun R: Lymphoma: which chemotherapy protocol and why? Top Companion Anim Med. 2009; 24(3): 157–62. 
PubMed Abstract | Publisher Full Text\n\nBinkhathlan Z, Somayaji V, Brocks DR, et al.: Development of a liquid chromatography-mass spectrometry (LC/MS) assay method for the quantification of PSC 833 (Valspodar) in rat plasma. J Chromatogr B Analyt Technol Biomed Life Sci. 2008; 869(1–2): 31–7. PubMed Abstract | Publisher Full Text\n\nModiano JF, Breen M, Avery AC, et al.: Breed Specific Canine Lymphoproliferative Diseases. In: Ostrander EA, Giger U, Lindblad-Toh K editors. The Dog and its Genome. Cold Spring Harbor: CSH Press; 2005.\n\nModiano JF, Breen M, Burnett RC, et al.: Distinct B-cell and T-cell lymphoproliferative disease prevalence among dog breeds indicates heritable risk. Cancer Res. 2005; 65(13): 5654–61. PubMed Abstract | Publisher Full Text\n\nTan B, Piwnica-Worms D, Ratner L: Multidrug resistance transporters and modulation. Curr Opin Oncol. 2000; 12(5): 450–8. PubMed Abstract\n\nModok S, Mellor HR, Callaghan R: Modulation of multidrug resistance efflux pump activity to overcome chemoresistance in cancer. Curr Opin Pharmacol. 2006; 6(4): 350–4. PubMed Abstract | Publisher Full Text\n\nRodriguez C, Commes T, Robert J, et al.: Expression of P-glycoprotein and anionic glutathione S-transferase genes in non-Hodgkin's lymphoma. Leuk Res. 1993; 17(2): 149–56. PubMed Abstract | Publisher Full Text\n\nZandvliet M, Teske E, Schrickx JA: Multi-drug resistance in a canine lymphoid cell line due to increased P-glycoprotein expression, a potential model for drug-resistant canine lymphoma. Toxicol In Vitro. 2014; 28(8): 1498–506. PubMed Abstract | Publisher Full Text\n\nKim MC, D'Costa S, Suter S, et al.: Evaluation of a side population of canine lymphoma cells using Hoechst 33342 dye. J Vet Sci. 2013; 14(4): 481–6. PubMed Abstract | Publisher Full Text | Free Full Text\n\nIdriss HT, Hannun YA, Boulpaep E, et al.: Regulation of volume-activated chloride channels by P-glycoprotein: phosphorylation has the final say! J Physiol. 2000; 524(Pt 3): 629–36. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nBernt KM, Armstrong SA: Leukemia stem cells and human acute lymphoblastic leukemia. Semin Hematol. 2009; 46(1): 33–8. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMoti N, Malcolm T, Hamoudi R, et al.: Anaplastic large cell lymphoma-propagating cells are detectable by side population analysis and possess an expression profile reflective of a primitive origin. Oncogene. 2014. PubMed Abstract | Publisher Full Text\n\nLohr JG, Stojanov P, Lawrence MS, et al.: Discovery and prioritization of somatic mutations in diffuse large B-cell lymphoma (DLBCL) by whole-exome sequencing. Proc Natl Acad Sci U S A. 2012; 109(10): 3879–84. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLund K, Adams PD, Copland M: EZH2 in normal and malignant hematopoiesis. Leukemia. 2014; 28(1): 44–9. PubMed Abstract | Publisher Full Text\n\nVelichutina I, Shaknovich R, Geng H, et al.: EZH2–mediated epigenetic silencing in germinal center B cells contributes to proliferation and lymphomagenesis. Blood. 2010; 116(24): 5247–55. PubMed Abstract | Publisher Full Text | Free Full Text\n\nZhang Y, Liu G, Lin C, et al.: Silencing the EZH2 gene by RNA interference reverses the drug resistance of human hepatic multidrug-resistant cancer cells to 5–Fu. Life Sci. 2013; 92(17–19): 896–902. PubMed Abstract | Publisher Full Text\n\nTang B, Zhang Y, Liang R, et al.: RNAi-mediated EZH2 depletion decreases MDR1 expression and sensitizes multidrug-resistant hepatocellular carcinoma cells to chemotherapy. Oncol Rep. 2013; 29(3): 1037–42. PubMed Abstract | Publisher Full Text\n\nMudaliar MA, Haggart RD, Miele G, et al.: Comparative gene expression profiling identifies common molecular signatures of NF-κB activation in canine and human diffuse large B cell lymphoma (DLBCL). PloS one. 2013; 8(9): e72591. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nRichards KL, Motsinger-Reif AA, Chen HW, et al.: Gene profiling of canine B-cell lymphoma reveals germinal center and postgerminal center subtypes with different survival times, modeling human DLBCL. Cancer Res. 2013; 73(16): 5029–39. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBardelmeijer HA, Ouwehand M, Beijnen JH, et al.: Efficacy of novel P-glycoprotein inhibitors to increase the oral uptake of paclitaxel in mice. Invest New Drugs. 2004; 22(3): 219–29. PubMed Abstract | Publisher Full Text\n\nBinkhathlan Z, Hamdy DA, Brocks DR, et al.: Pharmacokinetics of PSC 833 (valspodar) in its Cremophor EL formulation in rat. Xenobiotica. 2010; 40(1): 55–61. PubMed Abstract | Publisher Full Text\n\nSmith AJ, Mayer U, Schinkel AH, et al.: Availability of PSC833, a substrate and inhibitor of P-glycoproteins, in various concentrations of serum. J Natl Cancer Inst. 1998; 90(15): 1161–6. PubMed Abstract\n\nGhetie MA, Crank M, Kufert S, et al.: Rituximab but not other anti-CD20 antibodies reverses multidrug resistance in 2 B lymphoma cell lines, blocks the activity of P-glycoprotein (P-gp), and induces P-gp to translocate out of lipid rafts. J Immunother. 2006; 29(5): 536–44. PubMed Abstract | Publisher Full Text\n\nGhetie MA, Marches R, Kufert S, et al.: An anti-CD19 antibody inhibits the interaction between P-glycoprotein (P-gp) and CD19, causes P-gp to translocate out of lipid rafts, and chemosensitizes a multidrug-resistant (MDR) lymphoma cell line. Blood. 2004; 104(1): 178–83. PubMed Abstract | Publisher Full Text\n\nIto D, Childress MO, Mason NJ, et al.: Data of pilot study on valspodar in neoadjuvant settings for canine B-cell lymphoma. F1000Research. 2015. Data Source"
}
|
[
{
"id": "8454",
"date": "01 May 2015",
"name": "Douglas H. Thamm",
"expertise": [],
"suggestion": "Not Approved",
"report": "Not Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis is a very well written and articulate manuscript exploring the effects of valspodar or placebo on putative tumor-initiating cell number and clinical outcome in dogs treated with doxorubicin. There is very encouraging preliminary data identifying these putative tumor-initiating cells and documenting their high expression of efflux pumps such as P-glycoprotein. Unfortunately there are 2 major issues with conception/design of the study, which are a cause for concern and need additional justification/clarification if the manuscript was to be considered for indexation.As written, it appears that changes in lymphoid progenitor cell percentages in patient dogs were assessed before and after valspodar treatment, but before any chemotherapy was given (“Trial design” section and Table 2). The role of P-GP in mediating chemotherapy sensitivity is through facilitating the cellular efflux of certain cytotoxic drugs. In the absence of these cytotoxic drugs, P-GP inhibitors would be expected to have no independent cytotoxic effect on any cell population. Thus, the absence of a change in LPC percentage (or gene expression) is intuitive based on the mechanism of the drug and the study as-designed. If LPC number was assessed following doxorubicin (+/- valspodar) treatment, a difference MIGHT have been observed. If I am somehow mistaken about the study design, then it needs substantial clarification. 
Extensive clinical evaluation of valspodar/chemotherapy combinations in humans has failed to demonstrate an improvement in outcome, even in very large randomized phase-3 trials. For example, CALGB 19808 randomized 302 patients with AML to chemotherapy or chemotherapy plus valspodar – response rates, DFS and OS were no different and were actually numerically shorter in the valspodar arm (Kolitz et al, 2010). A second study randomized 762 patients with ovarian cancer to carboplatin/paclitaxel +/- valspodar – no difference in outcome was observed (Lhommé et al., 2008). A third randomized study, ECOG E1A95, evaluated VAD +/- valspodar in 94 patients with refractory myeloma. No difference in outcome was observed (Friedenberg et al, 2006). Although an alternate mechanism for drug efficacy is invoked in this study, it seems counterintuitive to think that differences in outcome (although a secondary measure) would be observed in a study of 20 dogs with lymphoma. None of the above-mentioned studies were discussed or cited in the Introduction or Discussion.",
"responses": [
{
"c_id": "2436",
"date": "10 Feb 2017",
"name": "Jaime Modiano",
"role": "Author Response",
"response": "Comment: This is a very well written and articulate manuscript exploring the effects of valspodar or placebo on putative tumor-initiating cell number and clinical outcome in dogs treated with doxorubicin. There is very encouraging preliminary data identifying these putative tumor-initiating cells and documenting their high expression of efflux pumps such as P-glycoprotein. Response: We appreciate the reviewer’s positive comments Unfortunately there are 2 major issues with conception/design of the study, which are a cause for concern and need additional justification/clarification if the manuscript was to be considered for indexation. Comment: As written, it appears that changes in lymphoid progenitor cell percentages in patient dogs were assessed before and after valspodar treatment, but before any chemotherapy was given (“Trial design” section and Table 2). The role of P-GP in mediating chemotherapy sensitivity is through facilitating the cellular efflux of certain cytotoxic drugs. In the absence of these cytotoxic drugs, P-GP inhibitors would be expected to have no independent cytotoxic effect on any cell population. Thus, the absence of a change in LPC percentage (or gene expression) is intuitive based on the mechanism of the drug and the study as-designed. If LPC number was assessed following doxorubicin (+/- valspodar) treatment, a difference MIGHT have been observed. If I am somehow mistaken about the study design, then it needs substantial clarification. Response: We agree with the reviewer, and refer to our response to comments from reviewer 1 and our cover letter to the editors for a detailed explanation. Moreover, we re-analyzed the RNA sequencing data, and corrected a technical error in our previous statement, although it does not change the interpretation of the data. In fact, there were observed genes whose expression was significantly different in pre-treatment and post-treatment groups. 
However, we did not find consistent, genome-wide changes in gene expression that could be attributed to drug treatment (valspodar vs. placebo) or to time (Day 0 vs. Day 4). A more precise description of the methods used, the results, and the interpretation are included in the revised manuscript. Comment: Extensive clinical evaluation of valspodar/chemotherapy combinations in humans has failed to demonstrate an improvement in outcome, even in very large randomized phase-3 trials. For example, CALGB 19808 randomized 302 patients with AML to chemotherapy or chemotherapy plus valspodar – response rates, DFS and OS were no different and were actually numerically shorter in the valspodar arm (Kolitz et al, 2010). A second study randomized 762 patients with ovarian cancer to carboplatin/paclitaxel +/- valspodar – no difference in outcome was observed (Lhommé et al., 2008). A third randomized study, ECOG E1A95, evaluated VAD +/- valspodar in 94 patients with refractory myeloma. No difference in outcome was observed (Friedenberg et al, 2006). Although an alternate mechanism for drug efficacy is invoked in this study, it seems counterintuitive to think that differences in outcome (although a secondary measure) would be observed in a study of 20 dogs with lymphoma. None of the above-mentioned studies were discussed or cited in the Introduction or Discussion. Response: The reviewer is correct that valspodar had been evaluated previously in three large phase-3 clinical trials with no evidence of improved response rates. Our intent was to determine if inhibition of ABC transporters, and ABCB1 in particular, would sensitize LPCs to the cytotoxic effects of doxorubicin, presumably by increasing the retention time of the drug in the cells. 
Our study was not powered to detect differences in duration of remission or overall survival, but differences in LPCs might have supported a revised study design for new trials in cancers of dogs or humans that are presumed to be driven by tumor-initiating or tumor-propagating cells with elevated ABC transporter activity. We appreciate the reviewer’s point, that clarifying the previous use of valspodar in cancer patients and how this study was meant to build on the negative data, and specifically how the design differed from those studies, is valuable for context. We have added this information to the Introduction and Discussion sections."
}
]
},
{
"id": "8911",
"date": "05 Jun 2015",
"name": "Michael S. Kent",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe manuscript entitled “A double blinded, placebo-controlled pilot study to examine reduction of CD34+/CD117+/CD133+ lymphoma progenitor cells and duration of remission induced by neoadjuvant valspodar in dogs with large B-cell lymphoma” presents a prospective study evaluating the effects of valspodar in a single agent setting for effects on LPCs and as a secondary endpoint evaluates outcomes in dogs treated with valspodar and doxorubicin for B-cell lymphoma.The manuscript is well written and describes the study design and materials and methods well.My main concern with the manuscript is that the main outcome studied was a change in LPCs before and 4 days after starting valspodar. Given the mechanism of action of the drug in inhibiting the ABC transporter and not cytotoxcity of these cells specifically I am not surprised that the authors did not see a drop in their numbers. It is unfortunate that there were only a few samples with enough material to test if valspodar could inhibit the ABC transporters in vivo as this would have been a very useful component of the study. In my opinion these points should be in the discussion. While underpowered I think the length of remission and overall survival data are important as they show that there is not likely to be differences in these groups even in a larger study.",
"responses": [
{
"c_id": "2435",
"date": "10 Feb 2017",
"name": "Jaime Modiano",
"role": "Author Response",
"response": "Comment: The manuscript entitled “A double blinded, placebo-controlled pilot study to examine reduction of CD34+/CD117+/CD133+ lymphoma progenitor cells and duration of remission induced by neoadjuvant valspodar in dogs with large B-cell lymphoma” presents a prospective study evaluating the effects of valspodar in a single agent setting for effects on LPCs and as a secondary endpoint evaluates outcomes in dogs treated with valspodar and doxorubicin for B-cell lymphoma. The manuscript is well written and describes the study design and materials and methods well. Response: We appreciate the reviewer’s positive comments Comment: My main concern with the manuscript is that the main outcome studied was a change in LPCs before and 4 days after starting valspodar. Given the mechanism of action of the drug in inhibiting the ABC transporter and not cytotoxcity of these cells specifically I am not surprised that the authors did not see a drop in their numbers. It is unfortunate that there were only a few samples with enough material to test if valspodar could inhibit the ABC transporters in vivo as this would have been a very useful component of the study. In my opinion these points should be in the discussion. Response: This comment (as well as a comment from Reviewer #2) made it clear to us that we failed to explain clearly why we hypothesized that LPCs would be depleted in dogs treated with neoadjuvant valspodar followed by doxorubicin. This failure was partly due to circumstances explained in our cover letter to the editors, and our efforts to correct it were the main reason it took so long for us to submit a response. In fact, we agree with the reviewer that valspodar is not cytotoxic and should not cause a reduction in LPSc by itself. 
Rather, our prediction was that by reducing the activity of ATP-binding cassette transporter (ABC) proteins, including ABCB1 (also known as P-glycoprotein), valspodar would sensitize LPCs to the cytotoxic effects of doxorubicin. We expected doxorubicin treatment to induce rapid clinical remission with consequent lymphoid necrosis and lymphodepletion of malignant nodes. Thus, we felt that quantification of LPCs from this nodal environment would be challenging. Instead, we decided to measure depletion of LPCs in peripheral blood as a surrogate measure of valspodar-induced sensitization to doxorubicin. As noted in the materials and methods section, blood was collected from each dog on day 11 of the study, 7 days after administration of doxorubicin. We believe this was a reasonable time point for sampling blood to investigate whether circulating LPCs were sensitized by the neoadjuvant valspodar and depleted by doxorubicin treatment in treated dogs as compared to controls. Comment: While underpowered, I think the length of remission and overall survival data are important as they show that there is not likely to be differences in these groups even in a larger study. Response: We appreciate the reviewer’s comment. We agree, and that is the reason why we included the data in the original manuscript and kept this section unchanged."
}
]
}
] | 1
|
https://f1000research.com/articles/4-42
|
https://f1000research.com/articles/6-470/v1
|
12 Apr 17
|
{
"type": "Case Report",
"title": "Case Report: Making a diagnosis of familial renal disease – clinical and patient perspectives",
"authors": [
"Zahra Iqbal",
"John A. Sayer",
"Zahra Iqbal"
],
"abstract": "Background: A precise molecular genetic diagnosis has become the gold standard for the correct identification and management of many inherited renal diseases. Methods: Here we describe a family with familial focal segmental glomerulosclerosis, and include a clinical and patient perspective on the diagnostic workup and relaying of genetic results following whole exome sequencing. Results: Through next generation sequencing approaches, we identified a pathogenic mutation in TRPC6, the underlying cause of the phenotype. The identification of this mutation had important clinical consequences for the family, including allowing a living-unrelated kidney transplant to proceed in the index case. There are also wider ranging social and ethical dilemmas presented when reaching a genetic diagnosis like this one, which are explored here by both physicians and the index case. Conclusions: Through physician and patient perspectives in a family with inherited renal failure we explore the implications and the magnitude of a molecular genetic diagnosis.",
"keywords": [
"focal segmental glomerulosclerosis",
"genetics",
"whole exome sequencing",
"TRPC6",
"podocytye",
"proteinuria",
"ethics"
],
"content": "Introduction\n\nFamilial renal disease is a challenging problem, in terms of diagnosis, treatment and ethical decisions. Here we describe a family affected by a familial form of focal segmental glomerulosclerosis (FSGS), which has resulted in end stage renal disease (ESRD) in two family members, with other family members at risk of the same disease. We wished to explore the significance of making a genetic diagnosis of familial ESRD and the impact of such a diagnosis on the index patient and their family. We therefore outline both the clinical and patient perspective of the index patient and her family.\n\n\nClinical case report\n\nThe index case presented in 2003 at the age of 30 years to renal services after her first pregnancy in 2003. She had developed heavy proteinuria and hypoalbuminemia during her pregnancy. After delivery of a healthy son at 40+2 weeks, her proteinuria reduced from a urine protein/creatinine ratio (uPCR) of 1200mg/mmol to 350mg/mmol at 6 months post-partum (Figure 1A). Her serum creatinine and blood pressure values remained normal during this pregnancy.\n\nA. Clinical progression of index case over time with serum albumin (solid line), urine protein/creatinine ratio (dashed line) and estimated Glomerular Filtration Rate (eGFR). Pregnancies marked with shaded area (P). B. Family tree with index case arrowed. Males are squares, females are circles. Heterozygously affected individuals are semi-shaded. Sibling with proteinuria marked with “?”.\n\nA positive family history of renal disease was known (Figure 1B). The index case’s mother had presented similarly during her first pregnancy at age 30 in 1973. Her renal function steadily declined despite commencement of an ACE inhibitor and she reached ESRD in 2015 at the age of 70 years, and was commenced on peritoneal dialysis. In addition, a maternal grandmother had died in her 60s of “renal disease” but the exact diagnosis was unknown. 
The index case also had two maternal aunts who are not known to have renal disease. At this stage, no other family members had presented with symptoms consistent with renal disease.\n\nIn 2004, due to persistent proteinuria, 10 months after her first pregnancy, the index patient underwent a renal biopsy, which demonstrated FSGS. This was managed conservatively. At the age of 32 years and with careful pre-conception counselling, our patient conceived her second child. Her proteinuria again increased during this pregnancy. A healthy daughter was delivered successfully and post-partum, the proteinuria settled (uPCR = 650mg/mmol). During her third pregnancy aged 36, her proteinuria increased dramatically (uPCR = 1090mg/mmol at 20 weeks gestation and 1340mg/mmol at 33 weeks’ gestation). This was associated with other features of nephrotic syndrome, including serum albumin of 20g/L and estimated Glomerular Filtration Rate (eGFR) declining to 45ml/min/1.73m2 (Figure 1A). This prompted early delivery of her son at 36+5 weeks at a weight of 2.3kg, who then required a ten day stay at the specialist baby unit.\n\nFollowing this third and final pregnancy, blood pressure was optimised with a combination of angiotensin receptor blockers and thiazide diuretic within the renal clinic. Despite these measures, her eGFR continued to progressively decline (Figure 1A) and she received counselling and information regarding the various methods of renal replacement therapy, opting for peritoneal dialysis when required. Her husband offered to be a living-unrelated kidney donor and pre-emptive renal transplantation work-up was commenced.\n\nGiven the likelihood of familial FSGS leading to ESRD, based on renal histology and the clinical course of the index case and her mother, genetic studies were initiated, following informed consent from the index case and her mother. 
Targeted genetic studies excluded mutations in the WT1 and NPHS2 genes, and this was followed by whole exome sequencing, which identified a known pathogenic variant in TRPC6 (c.2683C>T; p.Arg895Cys) (Table 1), which segregated from the affected mother. The finding of a genetic mutation causing FSGS meant that the likelihood of recurrence of FSGS in a non-related donor kidney was low and her living-unrelated transplant surgery was expedited. There have been no known recurrences of FSGS after renal transplants in patients with underlying mutations in TRPC6 [1,2].\n\nIn 2014, at age 41, the index case received a pre-emptive living-unrelated renal transplant from her husband, with immediate graft function. Her transplant function remains excellent with no evidence of recurrent FSGS.\n\nMore recently, the index case’s brother was identified as having heavy proteinuria in 2016 at age 40, and is undergoing further investigations (Figure 1B). The three children of the index patient, who are fit and well, have not yet been tested for the disease-causing variant.\n\n\nGenetics and underlying mechanism of disease of familial FSGS\n\nThe first identification of human TRPC6 mutations was reported in 2005, when a point mutation in TRPC6 was identified in a family with autosomal dominant focal segmental glomerulosclerosis [1]. Since then, several other mutations in TRPC6 have been described. The TRPC6 p.R895C heterozygous mutation that we report here has been described previously in a large Mexican family [2], in which 9 of 25 family members were affected and presented between the ages of 18 and 46 years, and 6 of the family members reached ESRD. Of these, 2 had received renal transplants with no evidence of recurrent disease. TRPC6 is a non-selective cation channel [3] which is expressed in podocytes and glomerular endothelial cells [2]. TRPC6 channel activity at the slit diaphragm is required for the regulation of podocyte structure and function [2]. 
Biophysical analysis of the p.R895C mutant TRPC6 channel showed pathogenic changes in the current-voltage relationship which were suggestive of a gain-of-function [2], which in vivo would be predicted to increase calcium influx. Interestingly, podocytes express other TRPC channels, including TRPC1, TRPC2 and TRPC5, and an overlap in function may account for the usual adult onset of glomerular disease. Another level of complexity is that TRPC6 may also form heterotetramers with other TRPC channels [2]. The fact that the p.R895C mutation causes a gain-of-function means that selective TRPC6 inhibitors such as larixyl acetate may represent a pharmacological therapy for this form of FSGS [4]. More recently, a role for TRPC6 in renal fibrosis has been identified, which may spur on efforts for the clinical use of TRPC6 inhibition in other progressive renal diseases [5].\n\n\nPatient perspective\n\n“Even though my mother had a history of renal disease, and I had presented with proteinuria during my first pregnancy, there had been no suggestion made to me, or present in my mind, of a possible genetic renal condition. When, following my biopsy in 1999, I received a probable diagnosis of familial FSGS, it came as a huge shock, not only to hear I had FSGS, but also the rarer familial form. Furthermore, knowing you have a rare chronic illness is one thing, but more significantly, I was devastated about what the future might hold for our children.\n\nWith this in mind, we began enquiries about how to find out which faulty gene had caused the FSGS and it was decided to undertake genetic tests including sequencing my whole exome. Whilst the result might not help in the short term, it would be useful in terms of being able to test other family members in the future.\n\nI remember the consultant saying that finding the change in the faulty gene was like looking for 1 change in 6 billion pieces of genetic code and the expression ‘needle in a haystack’ was mentioned. 
Even after such detailed analysis I was told that the results are sometimes inconclusive. Not wanting to miss an opportunity, I flippantly mentioned screening for other faulty genes – by which I mean other non-renal conditions. I did not consider the possibility that our type of familial FSGS may be caused by more than one faulty gene, and this is a serious and worrying consideration for patients waiting for the results of any genetic sequencing, notwithstanding the added complications it implies for future research and potential management or cure.\n\nThe screening process took a long time (over 6 months) and several clinic appointments passed before we received the results. Fortunately, the investigations were positive and, as strange as it sounds, I was pleased to be told that I carried a variant in the TRPC6 gene. No other faulty genes were identified. My mother was extremely interested in the diagnosis given her condition has been termed “nephrotic syndrome” for thirty years, but this was tempered by an ill-founded sense of guilt that she had passed on her condition to her daughter. This is an emotion I can identify with in terms of my own children.\n\nConsequently, the excitement about a positive result naively produced a sense of hope about a potential innovation in the near future, given we had a precise genetic cause. TRPC6 encodes a calcium channel and based on current understanding the protein is expressed in the podocyte of the kidney, an area currently undergoing a lot of research. Whilst the condition is extremely complex, this form of FSGS may well be a candidate for clinical trials aiming to modify the faulty channel.\n\nAs a patient, having something concrete to hold on to, such as the likely cause of our condition, provided some comfort and a sense of empowerment. 
Receiving the news that the cause of our FSGS was genetic meant it was much less likely for proteinuria to reoccur in a transplant, whereas the risk of recurrence is high in patients with other forms of FSGS. The prospect of immediate kidney rejection is daunting even without the added anxiety of the disease reoccurring and causing rejection, and having this information was an enormous relief for our family. In addition, awareness of this mutation now means that other family members (should the need arise) need only have a blood test rather than a kidney biopsy.\n\nThe sting in the tail, in our particular case, is that the pathogenic variant of TRPC6 remains a very rare cause of familial FSGS, with only a small amount of published reports for doctors to refer to. The rate of deterioration in kidney function has been very different in myself and my mother, whose renal replacement therapy began at 70 years of age. In 2016, my brother presented with proteinuria and mildly raised blood pressure, and is awaiting the results of his genetic tests. He has two children and will no doubt have considered the possibility that they may also be vulnerable. We have all had unique experiences, and this does not make the analysis easier for the nephrologists, or give them the tools to predict future outcomes.\n\nIdentifying the variant in TRPC6 contributing to our form of familial FSGS, does however open up the opportunity to support directed research studies and help further the knowledge about this condition. At this stage, we have decided with the support of the nephrologists, not to test our children, and will do so when the time is right. 
Yet whilst the threat of this condition hangs over their heads, we continue to fundraise and support renal research in the hope that one day a cure may be found.”\n\n\nDiscussion and conclusions\n\nUsing a clinical case summary and a reflective patient perspective, we provide an example of how a molecular genetic diagnosis in a life threatening inherited renal disease may provide an explanation of the underlying disease process and offer the ability for screening of other family members without the need for invasive tests such as renal biopsy. A genetic diagnosis, by its very nature, also raises issues within the patient and their family members, which may be far reaching. Importantly, a genetic diagnosis often furthers our knowledge of disease phenotypes in rare inherited disorders, and hopefully provides momentum for future research into precision medicine therapies. Engagement of patients and their families in the importance and value of genetic and genomic data for diagnostic, therapeutic and prognostic use should be actively encouraged. Mainstreaming of genomic medicine into medical specialties such as nephrology needs to be embraced by patients and their physicians.\n\n\nConsent\n\nWritten informed consent was obtained from the patient and family for publication of this case report and any accompanying images and other details that could potentially reveal the family’s identity.",
"appendix": "Author contributions\n\n\n\nThe project was conceived and directed by JAS. ZI and JAS drafted the manuscript. All authors were involved in the revision of the draft manuscript and have agreed to the final content.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nWe thank Northern Counties Kidney Research Fund who supported this work.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nWe recognise a valuable contribution from the index case who provided an insightful patient perspective which is quoted in full. We thank all family members who contributed to the information provided within this report.\n\n\nReferences\n\nWinn MP, Conlon PJ, Lynn KL, et al.: A mutation in the TRPC6 cation channel causes familial focal segmental glomerulosclerosis. Science. 2005; 308(5729): 1801–4. PubMed Abstract | Publisher Full Text\n\nReiser J, Polu KR, Möller CC, et al.: TRPC6 is a glomerular slit diaphragm-associated channel required for normal renal function. Nat Genet. 2005; 37(7): 739–44. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHofmann T, Obukhov AG, Schaefer M, et al.: Direct activation of human TRPC6 and TRPC3 channels by diacylglycerol. Nature. 1999; 397(6716): 259–63. PubMed Abstract | Publisher Full Text\n\nUrban N, Wang L, Kwiek S, et al.: Identification and Validation of Larixyl Acetate as a Potent TRPC6 Inhibitor. Mol Pharmacol. 2016; 89(1): 197–213. PubMed Abstract | Publisher Full Text\n\nWu YL, Xie J, An SW, et al.: Inhibition of TRPC6 channels ameliorates renal fibrosis and contributes to renal protection by soluble klotho. Kidney Int. 2017; 91(4): 830–41. PubMed Abstract | Publisher Full Text | Free Full Text"
}
|
[
{
"id": "21847",
"date": "02 May 2017",
"name": "Moin A. Saleem",
"expertise": [
"Reviewer Expertise Paediatric nephrology",
"glomerular biology",
"genetics",
"podocyte biology"
],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis is an interesting way of presenting a familial case of renal disease from the patient perspective. The case illustrates how a new diagnosis of monogenic (Mendelian) disease can impact on the family, by firstly clarifying the underlying cause, and being able to discuss in more detail the prognostic and genetic testing implications.\nIt also illustrates the powerful impact of the rapid improvements in genetic sequencing technologies, and how these have translated into clinical practice. It is worth emphasising that this will have impact on all clinicians treating renal (and other) diseases, who will need to keep up to date with the current screening technologies and interpretation/limitations of the data generated. In the UK routine whole genome sequencing of patients with rare diseases is being rolled out, so there will be more cases like this being uncovered and requiring counselling.\n\nOne key issue touched upon is the ethical implications of testing younger family members, who may (or may not) develop disease much later in life. The consensus at the moment is to wait until the young person is of an age to make the decision to test independently, unless there are treatment implications of knowing earlier in life. 
With this particular mutation this is unlikely to be the case, though cases of TRPC6 mutations have been reported to present with proteinuria as early as 6 years of age.\nAnother ethical consideration of whole exome or genome sequencing is the possibility of finding completely unrelated, potentially pathogenic mutations. This needs careful pre-counselling before the test is done.\n\nIs the background of the case’s history and progression described in sufficient detail? Yes\n\nAre enough details provided of any physical examination and diagnostic tests, treatment given and outcomes? Yes\n\nIs sufficient discussion included of the importance of the findings and their relevance to future understanding of disease processes, diagnosis or treatment? Yes\n\nIs the case presented with sufficient detail to be useful for other practitioners? Yes",
"responses": []
},
{
"id": "21845",
"date": "19 May 2017",
"name": "Aoife M. Waters",
"expertise": [
"Reviewer Expertise Molecular mechanisms of glomerulosclerosis utilising inducible transgenic mouse models and molecular genetic studies of human disease."
],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nFamilial cases of rare diseases provide a remarkable opportunity to delineate important biological processes relevant to human disease. Of particular importance, are the implications for future therapeutic strategies for refractory and progressive diseases such as focal segmental glomerulosclerosis.\n\nIqbal and Sayer through their report elegantly present the natural history of TRPC6-associated glomerulopathy arising as a result of the mutation c.2683C>T; p.Arg895Cys which concurs with previous reports of disease manifestation of the same genotype by Reiser et al in 2005. Presentation tends to have an insidious onset in mid-adulthood and despite intervention with ACEIs, patients with this genotype progress towards end-stage renal disease. Functional characterisation of the TRPC6 variant, p.Arg895Cys, utilising electrophysiological studies, revealed the likelihood that this variant represented a gain of function in the encoded mutant protein. Increased intracellular calcium influx as a result of gain of function mutations in TRPC6 have been shown to lead to increased podocyte apoptosis, a common pathogenetic mechanism of FSGS. Therefore, downstream inhibition of increased TRPC6 activation, represents a therapeutic strategy for patients with TRPC6-associated glomerulopathy. As highlighted by Iqbal and Sayer, without effective treatment for this disease, progression to end-stage renal disease and renal transplantation occurs. 
For this particular variant, the risk of recurrence post renal transplant is negligible.\nIncluding the patient perspective is an innovative approach to the case reporting of interesting pedigrees of rare disease. By undertaking such a strategy, Iqbal and Sayer highlight the importance of patient engagement in genomics research by consent for testing of additional affected and unaffected family members to fully characterise the genotype-specific disease manifestations. Furthermore, it provides an answer for the affected patients and provides a rationale for clinical decisions relating to their care.\nImportantly, this report also provided perspective with regard to future testing of prospective asymptomatic younger family members and whether to burden families with advance knowledge of prospective disease susceptibility.\nWhat this knowledge does offer, of course, is the possibility to influence the disease course prior to manifestation of overt disease symptoms, and careful consideration of this prospect should be highlighted at time of consent.\n\nIs the background of the case’s history and progression described in sufficient detail? Yes\n\nAre enough details provided of any physical examination and diagnostic tests, treatment given and outcomes? Yes\n\nIs sufficient discussion included of the importance of the findings and their relevance to future understanding of disease processes, diagnosis or treatment? Yes\n\nIs the case presented with sufficient detail to be useful for other practitioners? Yes",
"responses": []
},
{
"id": "22625",
"date": "22 May 2017",
"name": "Larissa Kerecuk",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nExcellent case report highlighting an important area of genetic testing being very useful for managing the patient. The report shows the importance of taking a family history in any disease especially renal and exploring this further. Having the patient perspective adds a different angle which is very important for doctors to be aware of as lots of learning can happen from this.\n\nIs the background of the case’s history and progression described in sufficient detail? Yes\n\nAre enough details provided of any physical examination and diagnostic tests, treatment given and outcomes? Yes\n\nIs sufficient discussion included of the importance of the findings and their relevance to future understanding of disease processes, diagnosis or treatment? Yes\n\nIs the case presented with sufficient detail to be useful for other practitioners? Yes",
"responses": []
},
{
"id": "22500",
"date": "23 May 2017",
"name": "Beata S. Lipska-Ziętkiewicz",
"expertise": [
"Reviewer Expertise clinical genetics",
"genomic disorders",
"hereditary kidney disease"
],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe authors present the insidious course of an infrequent subtype of FSGS, the TRPC6-related glomerulopathy, that is an extremely rare renal disorder with ca.100 families diagnosed worldwide. Most likely an individual nephrologist will see no more than one-two such cases during his/her years of practice. The presented paper is an excellent source of information on how to comprehensively handle such patients, not only from medical but also emotional and ethical perspective.\n\nThe strong points of the work are 1) presenting the recent advances in diagnostics resulting from rapid improvements in genetic sequencing technologies making kidney biopsy (almost) obsolete; 2) including the patients perspective and highlighting their engagement not only in clinical management but also in research .\nThe weak point is too superficial presentation of the current standards of preemptive genetic testing in minors for late-onset conditions. I would recommend at least adding a reference to European Society of Human Genetics position on the issue (for details see: https://www.eshg.org/eshgdocs.0.html) and/or paragraph discussing the ethical, legal, and psychosocial implications of such genetic testing.\n\nI would also suggest to modify Figure 1B (family pedigree). Currently it seems to report results of genetic testing in the entire family, but most of the family members were not subject to any genetic testing, the affected grandmother included. 
Therefore, I recommend that the graph report phenotype rather than genotype, i.e. individuals expressing the phenotype should be marked as “fully filled-in”, while information on mutational status is provided below where available.\n\nIs the background of the case’s history and progression described in sufficient detail? Yes\n\nAre enough details provided of any physical examination and diagnostic tests, treatment given and outcomes? Yes\n\nIs sufficient discussion included of the importance of the findings and their relevance to future understanding of disease processes, diagnosis or treatment? Yes\n\nIs the case presented with sufficient detail to be useful for other practitioners? Partly",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-470
|
https://f1000research.com/articles/6-463/v1
|
11 Apr 17
|
{
"type": "Software Tool Article",
"title": "GenRank: a R/Bioconductor package for prioritization of candidate genes",
"authors": [
"Chakravarthi Kanduri",
"Irma Järvelä",
"Irma Järvelä"
],
"abstract": "Modern high-throughput studies often yield long lists of genes, a fraction of which are of high relevance to the phenotype of interest. To prioritize the candidate genes of complex genetic traits, our R/Bioconductor package GenRank ranks genes based on convergent evidence obtained from multiple layers of independent evidence. We implemented three methods to rank genes that integrate gene-level data generated from multiple layers of evidence: (a) the convergent evidence (CE) method aggregates evidence based on a weighted vote counting method; (b) the rank product (RP) method performs a meta-analysis of microarray-based gene expression data, and (c) the traditional method combines p-values. The methods are implemented in R and are available as a package in the Bioconductor repository (http://bioconductor.org/packages/GenRank/).",
"keywords": [
"ranking genes",
"candidate gene prioritization",
"convergent evidence",
"rank product",
"combine p-values",
"bioconductor",
"R package"
],
"content": "Introduction\n\nGenetic studies employ multiple independent lines of investigation spanning pan-omics approaches to holistically understand the molecular background of complex genetic traits. This includes studying the roles of various forms of genomic variation (e.g. SNPs, InDels, and CNVs) and gene expression in multiple tissues, and the regulation of a single phenotype across single or multiple species (e.g, humans and other relevant model organisms). One of the common objectives of performing such diverse experimental assays across multiple types of cells, tissues, treatments, time-points and species is to find the causal genes underlying a specific disease or trait. Integration of data from such diverse experimental assays (hereafter referred to as evidence layers) would enable prioritization of genes that are most relevant to the phenotype. Meta-analytic approaches that integrate gene-level data from multiple evidence layers have been shown to be successful in identifying and prioritizing candidate genes for complex genetic traits (Ayalew et al., 2012). However, no implementation of candidate gene prioritization methods existed in the Bioconductor project at the time this package was written, which otherwise offers a seamless framework to perform various statistical analyses in biomedical research. The majority of the existing meta-analysis related packages in Bioconductor have been exclusively developed to integrate microarray gene expression data, but do not serve the purpose of integrating gene-level data from multiple study types. Here, we implemented three methods to rank genes by integrating gene-level data generated from multiple evidence layers.\n\n\nMethods\n\nThe methods are implemented in R and available as a package in the Bioconductor repository (http://bioconductor.org/packages/GenRank/). The package requires R version 3.2.3 or later versions and runs on all operating systems. 
Figure 1 shows an overview of the workflow of the GenRank package.\n\nTo obtain convergent evidence for the molecular basis of phenotypes, the GenRank Bioconductor package implements three methods to integrate gene-level data generated from multiple independent experiments. Examples of evidence layers are experiment assay-type (e.g., GWAS, RNAseq, ChIPseq), tissue-type (e.g., blood, liver, intestine), cell-type (e.g., neutrophils, lymphocytes), time-series (e.g., 0h, 2h, 6h), species-type (e.g., human, mouse, Drosophila), and treatment-type (e.g., control, dexamethasone, lipopolysaccharide).\n\nGenRank provides three methods to prioritize gene-level data obtained through multiple independent evidence layers. It requires a tab-delimited text file with three fields: gene symbols or IDs, type of evidence layer, and a significance statistic (e.g., p-value or effect-size). The first two fields are sufficient for the convergent evidence method. Summary statistics to prioritize the genes are computed as follows.\n\n\nThe convergent evidence (CE) method\n\nThe convergent evidence (CE) method aggregates ranks of genes based on a weighted vote counting method. A conceptually similar gene-level integration has been successfully used to prioritize candidate genes in neuropsychiatric diseases (Ayalew et al., 2012).\n\nHere, to rank genes, we compute convergent evidence scores. The convergent evidence score of gene G is given by\n\nCE(G) = CE(G_L1)/n(L1) + ... + CE(G_Ln)/n(Ln)\n\nHere CE(G_Li) refers to the self-importance of evidence layer-i, while n(Li) refers to the number of genes within evidence layer-i. Additionally, we propose two other ways to compute convergent evidence scores. One of them is to ignore the number of genes within each layer, thus\n\nCE(G) = CE(G_L1) + ... + CE(G_Ln)\n\nIn this case, the convergent evidence score would be equivalent to primitive vote counting. 
Another alternative enables researchers to determine the importance of each layer based on their own intuition. This involves assigning custom weights to each evidence layer based on their expert knowledge in the field. For example, when a researcher knows that a specific technology could yield less reproducible findings, that evidence layer can be given relatively less weight than the other evidence layers. A more objective way of assigning custom weights to each evidence layer could be based on the sample size of each evidence layer. In this case the convergent evidence score is\n\nCE(G) = CE(G_L1) * w(L1) + ... + CE(G_Ln) * w(Ln)\n\nwhere w(Li) refers to the custom weight assigned to evidence layer-i. Figure 2 shows an illustration of how CE scores are computed.\n\nThis illustration shows six evidence layers (Layer.1–Layer.6). A point indicates the detection of a gene in an evidence layer, while the size of the point indicates the importance of that evidence layer (custom weights assigned by the user). Here, genes A, B and D are each detected twice. However, based on a weighted vote counting method, gene D would get a better rank than genes A and B.\n\n\nThe rank product (RP) method\n\nThe rank product (RP) method has been widely used to perform differential expression analysis in microarray-based gene expression datasets. This biologically motivated method is simple yet powerful, and ranks genes that are consistently ranked highly in replicated experiments, based on the geometric mean (Breitling et al., 2004). This method has been implemented earlier as a Bioconductor package for meta-analysis of gene expression experiments (Hong et al., 2006). We adapted the rank product method to identify genes that are consistently highly ranked across evidence layers. 
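GenRank itself is written in R; purely to illustrate the arithmetic of the three CE weighting variants described above, the following is a minimal language-agnostic sketch in Python. The function name ce_scores and the toy data are hypothetical, not part of the package's API.

```python
from collections import defaultdict

def ce_scores(evidence, layer_weights=None, normalize=True):
    """Toy convergent-evidence (CE) scores.

    evidence: (gene, layer) pairs -- detection of a gene in an evidence layer.
    layer_weights: optional dict layer -> custom weight w(L_i);
                   omitted layers default to weight 1.
    normalize: if True, divide each vote by n(L_i), the number of
               genes detected in that layer.
    """
    votes = set(evidence)  # duplicated genes vote only once per layer
    layer_sizes = defaultdict(int)
    for _gene, layer in votes:
        layer_sizes[layer] += 1
    scores = defaultdict(float)
    for gene, layer in votes:
        w = layer_weights.get(layer, 1.0) if layer_weights else 1.0
        n = layer_sizes[layer] if normalize else 1
        scores[gene] += w / n
    return dict(scores)

# Hypothetical toy data: genes A and B are each detected in two layers.
ev = [("A", "L1"), ("B", "L1"), ("A", "L2"), ("C", "L2"), ("B", "L3")]
print(ce_scores(ev, normalize=False))  # primitive vote counting
print(ce_scores(ev))                   # votes scaled by layer size
```

With normalize=False every detection counts one vote; with the default normalization, a vote from a small layer counts for more, mirroring the n(Li) term above.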
The rank product is computed and compared to a permutation-based distribution of rank product values to estimate the proportion of false predictions (pfp; equivalent to FDR).\n\n\nCombining p-values\n\nCombining p-values has been one of the traditional methods of meta-analysis. To combine the p-values of a gene from multiple evidence layers, the p-values must have been estimated under the same null hypothesis. Popular methods to combine p-values include Fisher’s and Stouffer’s methods, of which the latter incorporates custom weights (e.g. sample sizes). These methods have already been implemented in the Bioconductor package survcomp (Schröder et al., 2011). Here, we built a wrapper around those methods to suit the overarching theme of this package (integrating gene-level data from multiple evidence layers). Missing p-values in some evidence layers could lead to a potential bias when combining p-values. To handle this issue, our implementation returns combined p-values only for those genes whose p-values are available in at least half of the evidence layers. However, the ideal scenario is to have p-values available across all evidence layers.\n\nTo avoid a potential bias owing to duplicated genes, duplicated genes are counted only once (as a single vote) within each evidence layer in all three methods implemented in this package. When collapsing duplicated genes, the entry with the most significant test statistic (e.g. lowest p-value or highest effect-size) is retained.\n\n\nUse cases\n\nThe use cases are explained in detail, with example data, in the package vignette available at the package webpage:\n\nhttps://www.bioconductor.org/packages/devel/bioc/vignettes/GenRank/inst/doc/GenRank_Vignette.html\n\nOikkonen et al. 
(2016) provides an interesting use case, in which convergent evidence scores were used to prioritize candidate genes obtained from diverse experiment types for a complex genetic trait.\n\n\nbioRxiv\n\nAn earlier version of this article is available on bioRxiv at http://biorxiv.org/content/early/2016/04/12/048264\n\n\nSoftware availability\n\nThe GenRank package is hosted on Bioconductor at:\n\nhttp://bioconductor.org/packages/GenRank/\n\nLatest source code:\n\nhttps://github.com/Bioconductor-mirror/GenRank\n\nArchived source code at the time of publication:\n\nhttp://doi.org/10.5281/zenodo.439738 (Kanduri & Järvela, 2017)\n\nLicense: Artistic-2.0 license.",
"appendix": "Author contributions\n\n\n\nCK and IJ conceived the study and drafted the manuscript. CK carried out the implementation.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe study has been funded by the University of Helsinki (Grant number: 73603104).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nReferences\n\nAyalew M, Le-Niculescu H, Levey DF, et al.: Convergent functional genomics of schizophrenia: from comprehensive understanding to genetic risk prediction. Mol Psychiatry. 2012; 17(9): 887–905. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBreitling R, Armengaud P, Amtmann A, et al.: Rank products: a simple, yet powerful, new method to detect differentially regulated genes in replicated microarray experiments. FEBS Lett. 2004; 573(1–3): 83–92. PubMed Abstract | Publisher Full Text\n\nHong F, Breitling R, McEntee CW, et al.: RankProd: a bioconductor package for detecting differentially expressed genes in meta-analysis. Bioinformatics. 2006; 22(22): 2825–2827. PubMed Abstract | Publisher Full Text\n\nKanduri C, Järvela I: GenRank: Bioconductor package for candidate gene prioritization based on convergent evidence [Data set]. Zenodo. 2017. Data Source\n\nOikkonen J, Onkamo P, Järvelä I, et al.: Convergent evidence for the molecular basis of musical traits. Sci Rep. 2016; 6: 39707. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSchröder MS, Culhane AC, Quackenbush J, et al.: survcomp: an R/Bioconductor package for performance assessment and comparison of survival models. Bioinformatics. 2011; 27(22): 3206–8. PubMed Abstract | Publisher Full Text | Free Full Text"
}
|
[
{
"id": "21804",
"date": "19 Apr 2017",
"name": "Joshua W. K. Ho",
"expertise": [
"Reviewer Expertise Bioinformatics"
],
"suggestion": "Not Approved",
"report": "Not Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved: The paper is scientifically sound in its current form and only minor, if any, improvements are suggested.\n\nApproved with reservations: A number of small changes, sometimes more significant revisions, are required to address specific details and improve the paper's academic merit.\n\nNot approved: Fundamental flaws in the paper seriously undermine the findings and conclusions.\n\nIn this manuscript, Kanduri and Jarvela present a bioconductor R package that facilitates integration of multiple layers of experimental data to prioritise disease- or phenotype-associated genes. This fairly simple package contains three methods: convergent evidence (CE), rank product (RP) and combination of p-values (p). These methods were intended to combine multiple sources of experimental evidence when the evidence is in the form of presence/absence of detection of genes (CE), rankings of genes (RP) or p-values of genes (p).\nThis paper did not present any theoretical justification or empirical evaluation of these methods, and the 'Use cases' presented in their R package's vignette are based on some very simple toy examples. There is no evidence in this paper or on the github repository that directly supports their claim that these methods can 'prioritize the candidate genes of complex genetic traits' (Abstract).\nAfter further careful examination of their source code, I believe their methods have important flaws, and their description in the text contains errors.\nThe major flaw is that they fail to consider two important implicit assumptions: (1) each evidence layer is independent, and (2) the same number of genes are tested in each evidence layer. All the methods described in this manuscript are only potentially valid if these two assumptions are satisfied. 
Nonetheless, considering the wide range of applications described in their Introduction, it is very easy to imagine these assumptions will be violated in practice. In fact, the failure to consider differences in the gene universe in different evidence layers (assumption 2) is a particularly problematic issue. For example, when combining data from different detection platforms (custom microarrays, targeted or non-targeted proteomic experiments, and NGS-based data), the number of genes that are probed in each experiment can vary a lot. Their CE method implicitly assumes the gene universe to be identical. Their RP method assumes that any missing genes are imputed with rank (n+1), where n is the number of detectable genes in that evidence layer (described in the online Vignette of the package). Their p method excludes genes that have too many missing entries. None of these approaches is entirely appropriate to address the issues related to the violation of these assumptions.\nBoth RP and the p-value combination methods were designed for other more specific purposes, and have been implemented in other bioconductor packages. They were not specifically designed for performing the type of integrative meta-analysis proposed by the authors in this manuscript.\nThe CE method is essentially a very simple weighted sum of presence/absence detection across multiple layers. Even if the two implicit assumptions are satisfied, I still find this CE method rather useless. There is no statistical significance associated with the CE score, and the inclusion of a 'custom weight' is rather arbitrary. In essence, the entire method can be implemented in 2-3 lines of R code. It does not seem necessary to develop a whole bioconductor package for this.\n\nI also found a technical error in the weighted CE equation on page 3. 
Based on their source code (https://github.com/KanduriC/GenRank/blob/master/R/compute_CE.R), the equation should have been:\nCE(G) = [CE(G_L1)*w(L1)+...+CE(G_Ln)*w(Ln)]/ [w(L1)+...+w(Ln)].\n\nIs the rationale for developing the new software tool clearly explained? Yes\n\nIs the description of the software tool technically sound? No\n\nAre sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? Yes\n\nIs sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? Yes\n\nAre the conclusions about the tool and its performance adequately supported by the findings presented in the article? No",
"responses": []
},
{
"id": "21806",
"date": "03 May 2017",
"name": "Emma E. Laing",
"expertise": [],
"suggestion": "Not Approved",
"report": "Not Approved\n\nBeing able to combine evidence from multiple sources to prioritize genes associated with a particular scientific question is very desirable. GenRank is a Bioconductor package that aims to integrate gene-level data generated from multiple layers of evidence (e.g. multiple study types, tissues, analysis tools) to prioritize candidate genes (it is not clear for what). Three methods are implemented via easy-to-use functions. The Convergent Evidence (CE) method counts the number of times a gene is present in each layer of evidence. Counts can be weighted relative to the total number of genes per layer of evidence and further weighted by the type of evidence. The Rank Product (RP) method applies the rank product strategy originally developed for the analysis of microarray data to the p-values or effect sizes across layers of evidence for a set of genes. The third method is a wrapper of the combine.test function from the survcomp package, which combines p-values estimated from the same null hypothesis in different studies.\nThe manuscript and package, in their current form, are of limited value. Each of the functions ‘wraps’ an existing method, a task easily achieved by any proficient bioinformatician, i.e. a bioinformatician would simply use the existing packages. Thus, the likely users of the package are biologists with limited experience of R/programming who want simple-to-use tools. 
Whilst the package offers simple-to-use functions, there is limited discussion of how to approach such an analysis, the ‘weight’ of each parameter, how to merge the data (so there are no missing data), etc. For example, no advice is offered on the ’z.transform’ or ‘logit’ parameters for combining p-values. Whilst this information may be available in the original survcomp package, its absence defeats the idea of having an easy ‘out-of-the-box’ package, which GenRank aims to be. Without such information it is not easy to see the contribution of this work to the field.\nIn light of the above, and the technical aspects picked up below, we believe this manuscript and package require a substantial amount of work before they can be indexed and make a contribution to the scientific community.\nTechnical aspects: The tool was installed, the manual and vignette were read, and all examples were successfully run. The tool was also tested with in-house data and there were no problems. Technical issues are:\n\nThe GenRank package indicates a dependency on R (>= 3.2.3); however, I could only install GenRank on R 3.3.3. It looks like GenRank depends on survcomp, which depends on SuppDists, which is only available for R 3.3.3.\nDetails: In R 3.2.3, I tried to install GenRank as follows:\nsudo R CMD INSTALL GenRank_1.2.0.tar.gz\nI got an error indicating that I needed the dependency survcomp. I tried to install survcomp but got an error indicating that I needed several dependencies. I was able to install all dependencies successfully (except for SuppDists) by running:\ninstall.packages(\"package_name\", repos=\"http://cran.cnr.berkeley.edu\", dependencies=TRUE)\nFor SuppDists I then tried\nsudo R CMD INSTALL SuppDists_1.1-9.4.tar.gz\nwhich produced the error\nERROR: this R is version 3.2.3, package 'SuppDists' requires R >= 3.3.0\nIn addition, the SuppDists package’s maintenance status is orphaned, i.e. 
the maintainer is unresponsive (dated 2013-03-22).\n\nIn the abstract the RP method is described as: “the rank product (RP) method performs a meta-analysis of microarray-based gene expression data”; however, in the context of the manuscript, the method is not restricted to microarray data. The reference manual has a more suitable description under the ComputeRP function: “the rank product (RP) method returns ranks of the genes based on rank product method”.\n\nThe writing style of the Vignette could be improved, in particular the RP tutorial section.\n\nThe PC argument of the ComputeCE function can have three different values, which correspond to each of the three different ways of computing the CE score. It would be very useful if the meaning of these options were included in the R help documentation.\n\nThe method argument of the ComputeP function can have three different values, which correspond to each of the three different ways of computing the combined p-values. It would be very useful if a short description of each method were included in the R help documentation. Please see the description provided in the combine.test function.\n\nThere are a few assumptions and prerequisites described in the vignette that could be included in the R help documentation, for example:\n\nIn the RP method, the gene list or the number of genes should be the same across all evidence layers. In the Combining p-values method, the p-values must be one-sided.\n\nIs the rationale for developing the new software tool clearly explained? Partly\n\nIs the description of the software tool technically sound? Partly\n\nAre sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? No\n\nIs sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? 
No\n\nAre the conclusions about the tool and its performance adequately supported by the findings presented in the article? Partly",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-463
|
https://f1000research.com/articles/6-460/v1
|
11 Apr 17
|
{
"type": "Research Article",
"title": "Defining the inflammatory signature of human lung explant tissue in the presence and absence of glucocorticoid",
"authors": [
"Tracy L Rimington",
"Emily Hodge",
"Charlotte K Billington",
"Sangita Bhaker",
"Binaya K C",
"Iain Kilty",
"Scott Jelinsky",
"Ian P Hall",
"Ian Sayers"
],
"abstract": "Background: Airway inflammation is a feature of many respiratory diseases and there is a need for newer, more effective anti-inflammatory compounds. The aim of this study was to develop an ex vivo human lung explant model which can be used to help study the mechanisms underlying inflammatory responses and which can provide a tool to aid drug discovery for inflammatory respiratory diseases such as asthma and COPD. Method: Parenchymal lung tissue from 6 individual donors was dissected and cultured with two pro-inflammatory stimuli, lipopolysaccharide (LPS) (1 µg/ml) and interleukin-1 beta (IL-1β) (10 ng/ml), in the presence or absence of dexamethasone (1 µM). Inflammatory responses were assessed using Luminex analysis of tissue culture supernatants to measure levels of 21 chemokines, growth factors and cytokines. Results: A robust and reproducible inflammatory signal was detected across all donors for 12 of the analytes measured following LPS stimulation, with a modest fold increase (<2-fold) in levels of CCL22, IL-4 and IL-2; increases of 2–4-fold in levels of CXCL8, VEGF and IL-6; and increases of >4-fold in CCL3, CCL4, GM-CSF, IL-10, TNF-α and IL-1β. The inflammatory signal induced by IL-1β stimulation was less than that observed with LPS but resulted in elevated levels of 7 analytes (CXCL8, CCL3, CCL4, GM-CSF, IL-6, IL-10 and TNF-α). The inflammatory responses induced by both stimuli were suppressed by dexamethasone for the majority of analytes. Conclusions: These data provide proof of concept that this ex vivo human lung explant model is responsive to inflammatory signals and could be used to investigate the anti-inflammatory effects of existing and novel compounds. In addition, this model could be used to help define the mechanisms and pathways involved in the development of inflammatory airway disease. 
Abbreviations: COPD: Chronic Obstructive Pulmonary Disease; ICS: inhaled corticosteroids; LPS: lipopolysaccharide; IL-1β: interleukin-1 beta; PSF: penicillin, streptomycin and fungizone",
"keywords": [
"COPD",
"asthma",
"chemokines",
"inflammation",
"lung",
"multiplex",
"luminex",
"tissue explant",
"ex-vivo"
],
"content": "Introduction\n\nObstructive lung diseases such as asthma and Chronic Obstructive Pulmonary Disease (COPD) are characterised by inflammation which can affect both large and small airways1. Treatment options for these inflammatory lung diseases remain limited and not all patients respond to the most commonly used medicines, including inhaled corticosteroids (ICS) and β-2 adrenergic receptor agonists2–5. There is a need for new treatments for both asthma and COPD, and particularly for approaches which target inflammation4, especially in the small airways, which have been increasingly recognised as an important site of inflammation6,7.\n\nWhilst some studies have used ex vivo cells or tissues to examine inflammatory responses, the lack of a robust human tissue system has to some extent hindered pre-clinical drug development and mechanistic studies in these diseases. Animal models have long been used to try to predict efficacy in human disease, but findings in animal models often fail to predict responses in humans. This is particularly true for diseases such as asthma and COPD, for which animal models are only able to recapitulate some of the features of the human disease2,8,9. A human tissue explant model would therefore complement those in vivo mouse models which currently exist.\n\nPreliminary data exist demonstrating that ex vivo human lung tissue models can be used to study the effect of allergens and other inflammatory stimuli on selected cytokine responses10,11. Our aim was to develop a reproducible human lung tissue explant model which could be used for target validation and to help investigate mechanisms underlying inflammation relevant to airway disease.\n\nIn this study, we assessed human lung tissue explants ex vivo to define inflammatory signalling using multiplex cytokine assays. In order to elicit an inflammatory response in human lung tissue, bacterial lipopolysaccharide (LPS) and interleukin-1 beta (IL-1β) were used12–14. 
We defined the cytokine and chemokine signature of this tissue in response to these stimuli and also provide data on the reproducibility of this model by assessing responses to 21 chemokines, growth factors and cytokines. To determine the usefulness of the model to identify anti-inflammatory mechanisms we also examined the effect of potential inhibitory responses using dexamethasone.\n\n\nMethods\n\nHuman parenchymal lung tissue was obtained from the Nottingham Research Biorepository from patients undergoing lung resection surgery at Nottingham University Hospitals, UK. Written consent was obtained from all patients and the study was approved by North West 7 REC – Greater Manchester Central (ethics reference 10/H1008/72). The patient demographics of the six donor subjects used in the current study are shown in Supplementary table 1. The mean age of donors was 75.5 ± 10.5 years (4 females and 2 males). In total, three individuals were ex-smokers (stopped ≥ 3 years), two were recent smokers (stopped ≤ 3 years) and one was never a smoker. Three subjects had spirometry suggesting the presence of COPD.\n\nLung tissue was dissected into 30–100 mg (wet weight) pieces and incubated for 24h in RPMI 1640 (with 2.05 mM L-glutamine and 25 mM HEPES) (Sigma, 51536C) containing Antibiotic Antimycotic Solution (PSF, penicillin, streptomycin and fungizone) (Sigma, A5955). Following initial incubation, media was replaced, and following the addition of LPS or IL-1β (1 µg/ml or 10 ng/ml respectively) or vehicle controls in the presence or absence of 1 µM dexamethasone, the tissue was incubated for a further 24h, followed by the collection of supernatants. All experimental conditions were prepared in duplicate.\n\nWe designed a custom multiplex panel of 21 Luminex assays to provide comprehensive information on the protein secretory profile of the human lung tissue. 
This panel was designed to encompass the main inflammatory pathways activated in the lung, including chemokine, cytokine and growth factor pathways (Supplementary table 2).\n\nLuminex assays (supplied by R&D, product code LXSAHM) were performed according to the manufacturer’s recommendations using a custom Magnetic Luminex Screening Assay with a Human Premixed Multi-Analyte Kit (R&D systems). Each duplicate supernatant from the lung tissue explant experiment was assayed in duplicate.\n\nResults were normalised using wet tissue weights in individual experiments and data were normalised to maximal inflammatory stimulus level (i.e. LPS or IL-1β, 100%) in each experiment prior to combining data. Statistical analysis was performed using ANOVA and post-hoc Dunnett’s multiple comparisons test. Statistical analysis was performed using GraphPad Prism software (Version 6, GraphPad Software Inc.).\n\n\nEthics approval and consent statement\n\nWritten consent was obtained from all patients and the study was approved under ethics reference 10/H1008/72, 12/SC/0526 and 08/H0304/56+5. All samples were obtained and research conducted under the approval of the Nottingham Health Science Biobank, Arden Tissue Bank and Papworth Hospital Research Tissue Bank. Written consent was obtained from all patients to publish research findings obtained from the use of patient samples under the approval of the Research Tissue Banks.\n\n\nResults\n\nUsing the multiplex approach, 12 of the 21 inflammatory analytes assayed generated quantifiable signals in the Luminex assay across all donors following 24h incubation under baseline (unstimulated) conditions (Figure 1 and Supplementary table 3). 
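The two-step normalisation described in the Methods (correcting each supernatant concentration for wet tissue weight, then expressing each condition as a percentage of the maximal LPS or IL-1β response within the same donor) can be sketched as follows. This is a hypothetical illustration, not the authors' analysis script; all concentrations, weights and names are invented.

```python
def per_mg(conc_pg_ml, tissue_mg):
    # Step 1: correct the supernatant concentration for wet tissue weight.
    return conc_pg_ml / tissue_mg               # pg/ml per mg tissue

def percent_of_max(value, max_value):
    # Step 2: express the weight-corrected value as a percentage of the
    # maximal stimulus (LPS or IL-1β = 100%) within the same donor.
    return 100.0 * value / max_value

# One hypothetical donor: (concentration in pg/ml, explant wet weight in mg).
donor = {"basal": (500.0, 40.0), "LPS": (6000.0, 50.0), "LPS+dex": (2400.0, 48.0)}
corrected = {k: per_mg(c, w) for k, (c, w) in donor.items()}
normalised = {k: percent_of_max(v, corrected["LPS"]) for k, v in corrected.items()}
# normalised["LPS"] is 100% by construction; "LPS+dex" < 100% reflects attenuation.
```

Only after this within-donor normalisation are values combined across donors (mean ± SEM) and compared by ANOVA with Dunnett's multiple comparisons test.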
The analytes detected were a range of chemokines (including CCL3 and CCL4), cytokines (including IL-6 and CXCL8) and several growth factors (including VEGF) (Figure 1 and Supplementary table 3).\n\nLPS significantly induced the release of 12 analytes in cultured lung tissue explants, including chemokines and other factors (e.g. growth factors) (A) and cytokines (B). For 10 of these analytes, this response was attenuated with dexamethasone treatment. Results were normalised using tissue mass and data were then normalised to the LPS stimulation (100%) from each donor and are presented as mean (±SEM, n=6). IL-1β significantly induced the release of 7 analytes in cultured lung tissue explants, including chemokines and other factors (C) and cytokines (D). For 6 of these analytes, this response was attenuated with dexamethasone treatment. Results were normalised using wet tissue mass and data were then normalised to the IL-1β stimulation (100%) from each donor and are presented as mean (±SEM, n=4). Due to limited tissue availability for two donors, it was not possible to obtain tissue from all six donors for the IL-1β experiments.\n\nWith the exception of IL-2, there was a significant induction of levels of all analytes detectable following LPS stimulation (Figure 1A and 1B). The fold stimulation within donor samples was reasonably reproducible for CXCL8 (3.5-fold), CCL3 (~33-fold), CCL4 (~18-fold), CCL22 (1.6-fold), GM-CSF (~25-fold) and VEGF (1.8-fold) (Figure 1A). There was also a significant cytokine induction in the tissue, characterised by elevated levels of IL-4 (1.8-fold), IL-6 (3.8-fold), IL-10 (~96-fold), TNF-α (~600-fold), IL-1β (~30-fold) and IL-2 (1.3-fold) (Figure 1B). The absolute values (as opposed to the fold stimulations) varied to some extent across donors even when corrected for tissue wet weight.\n\nOf the 12 analytes that exhibited a significant LPS-driven response, pre-treatment with dexamethasone (1 µM) attenuated this response by >50% for 9 of the analytes. 
Dexamethasone was unable to significantly attenuate the stimulation of CCL22 or IL-2 production, suggesting this induction was steroid insensitive (Figure 1A and 1B).\n\nIL-1β was also able to induce an inflammatory response in the human lung tissue; however, both the magnitude and diversity of the responses observed across the 21 analytes were diminished in comparison to LPS. IL-1β stimulated production of 7 of the analytes, and this response was attenuated by dexamethasone treatment for 6 of these targets (Figure 1C and 1D). The greatest level of induction was observed for TNF-α (~35-fold), followed by GM-CSF (~32-fold), IL-10 (~10-fold), CCL3 (~5-fold), CCL4 (2.3-fold), CXCL8 (2.3-fold) and IL-6 (1.8-fold) (Figure 1C and 1D). Treatment with dexamethasone attenuated these inflammatory responses to varying degrees, with the greatest reduction being for TNF-α (~70%), although the actual concentration of this analyte was low (~0.8 pg/ml/mg tissue) compared to the LPS-stimulated sample (~30 pg/ml/mg tissue) (Supplementary table 3). Attenuation of the inflammatory response was >35% for the remaining 5 analytes, with statistically significant reductions seen for IL-6, CCL3, GM-CSF, CXCL8 and IL-10. Although there was a 33% reduction in CCL4 following treatment with dexamethasone, this was not statistically significant (Figure 1C and 1D).\n\nLPS induced a more pronounced inflammatory response than IL-1β when comparing absolute concentrations of the analytes measured (Supplementary table 3). Figure 2 allows direct comparisons to be made between the two pro-inflammatory stimuli, and provides insight into the degree of inter-donor variability observed in the model. Some donor variation was apparent for both CXCL8 and CCL3 at both basal levels and following stimulation (Figure 2A–D).\n\nLPS and IL-1β both induced an inflammatory response in ex-vivo lung tissue, although the response with LPS was greater than with IL-1β. 
Compared to the IL-1β stimulation, there was an overall 1.8-fold increase in CXCL8 concentration (A and B) following LPS stimulation and an overall 11-fold increase in levels of CCL3 (C and D).\n\nIn order to further explore the degree of variability in responses, we measured CXCL8 production in tissue obtained from 5 additional subjects. The mean basal levels of CXCL8 produced (total of n=11 donors) were 1941 (range 232–7927) pg/ml/mg tissue and the fold stimulation observed with LPS was 3.9-fold (range 2.2–12.7).\n\n\nDiscussion\n\nThere is a need for well-characterised human lung tissue models to assess pro- and anti-inflammatory responses in the lung and to help with target validation during the drug development process. We have therefore developed an explant model using ex-vivo human lung tissue to investigate the inflammatory responses induced using two physiologically relevant stimuli. We characterised responses using Luminex assays to permit simultaneous analysis of a range of cytokines and other mediators. We chose LPS as a stimulus to mimic bacterial infection and IL-1β as a more selective pro-inflammatory signal. The data presented demonstrate that reasonably reproducible responses can be obtained in this model despite there being an inevitable element of heterogeneity in the tissue obtained from each donor. We also used pre-treatment with dexamethasone as proof of concept to identify anti-inflammatory effects in this model. The reduction in inflammatory mediator responses observed after dexamethasone pre-treatment supports the use of this model for investigation of the potential anti-inflammatory effects of novel compounds in the human lung.\n\nModels currently used in airway disease research have limitations. For instance, rodent in vivo models have been heavily relied upon and, whilst these can provide useful mechanistic insights, they do not always translate well when assessing efficacy in human disease8,9. 
Human tissue-based models should enhance mechanistic and pre-clinical studies and will hopefully prove more predictive for target validation for diseases such as asthma and COPD.\n\nWe describe here the inflammatory secretory profiles obtained using the pro-inflammatory stimuli LPS and IL-1β in this model. Both induced release of a range of chemokines, cytokines and growth factors. As would be expected, the magnitude of effect was greater with LPS than with IL-1β. Appropriate vehicle controls were included in all experiments and did not produce responses. Some variability in both basal and stimulated levels of mediators was seen between donors, although within-donor reproducibility of responses was generally good (Figure 2).\n\nBacterial infection and exacerbation are common in COPD and asthma patients12,13,15–17. The broad secretory profile that is obtained following LPS stimulation supports its role as a broad activator of intracellular signalling pathways. The responses we observed in the ex vivo model broadly mirror observations in the clinical setting; for example, IL-6, CXCL8 and TNF-α are elevated following COPD exacerbation in induced sputum or bronchoalveolar lavage samples15–17. The data presented here also agree with previous work assessing cytokine responses in a less extensively characterised human lung tissue model, in which TNF-α, IL-1β, IL-6, CXCL8 and IL-10 production was observed following LPS exposure or following influenza virus-induced inflammation18,19.\n\nIL-1β stimulation resulted in an induction of mediators in which only 7 of the analytes measured increased significantly, and the magnitude of effect was lower than that seen with LPS stimulation, reflecting the more selective induction of signalling pathways with this agonist.\n\nThere is a pressing need for the development of new human disease models, in particular those which help reduce the need for animal models2,8,20. 
There are intensive efforts to reconstitute the key components of the airway to generate clinically relevant in vitro models to be used in basic research and compound evaluation, including lung-on-a-chip21, dendritic cell-epithelium-fibroblast scaffolds22 and differentiated epithelial cell layers23. These approaches have both strengths and weaknesses; applicability to scale-up is a strength, but none is fully representative of an in vivo human lung. One of the advantages of the lung explant model over models such as air-liquid interface culture of epithelial cells is maintenance of in vivo cell architecture without the need to induce differentiation in culture. Another approach that is growing in popularity and shares many of the advantages of an ex vivo tissue model is precision-cut lung slices, which provide a scaled-down model of the explant approach24,25. However, preparing precision-cut lung slices from human tissue is technically much more difficult than from mouse tissue, and for the study of inflammatory responses (as opposed to contractile responses) the approach offers no real advantages.\n\nIt is also possible that the use of a human tissue-based approach could reduce the use of animals in target validation and overcome some of the obstacles and pitfalls faced when progressing from pre-clinical studies with animals to human trials2. However, it is also important to note that there are limitations, including limited access to human tissue, the relatively heterogeneous nature of resection samples, and natural donor variation in responses.\n\n\nConclusions\n\nIn summary, we have demonstrated proof of concept that an ex vivo human lung tissue explant model can be used to mimic airway inflammation and provide low/medium-throughput screening of anti-inflammatory properties of candidate drugs for the treatment of airway disease. 
This model should also help with target validation and reduce the reliance on animal models, thus reducing animal usage in the drug development process.\n\n\nData availability\n\nDataset 1: LPS raw data. Luminex raw data for LPS-stimulated tissue (Donors n=6).\n\nDOI: 10.5256/f1000research.10961.d157792\n\nDataset 2: IL-1b raw data. Luminex raw data for IL-1b-stimulated tissue (Donors n=4).\n\nDOI: 10.5256/f1000research.10961.d157794",
"appendix": "Author contributions\n\n\n\nIS, IPH, IK and SJ designed the study. TLR assisted with study design, performed experiments and completed data analyses. EH assisted with study design and performed experiments. CKB assisted with study design and experiments. SB and BKC assisted with experiments. TLR, IS and IPH drafted the manuscript. All authors approved the final manuscript.\n\n\nCompeting interests\n\n\n\nIK and SJ are employees of Pfizer Inc. Pfizer employees, IK and SJ were involved in study design, decision to publish and approved the final manuscript.\n\n\nGrant information\n\nThe study was funded by Pfizer Inc.\n\n\nSupplementary material\n\nSupplementary Table 1: Patient demographics.\n\nClick here to access the data.\n\nSupplementary Table 2: Custom Luminex panel design and standard curve range of analytes measured.\n\nClick here to access the data.\n\nSupplementary Table 3: Concentration of analytes determined by Luminex.\n\nClick here to access the data.\n\n\nReferences\n\nBrusselle G, Bracke K: Targeting immune pathways for therapy in asthma and chronic obstructive pulmonary disease. Ann Am Thorac Soc. 2014; 11(Suppl 5): S322–328. PubMed Abstract | Publisher Full Text\n\nEdwards J, Belvisi M, Dahlen SE, et al.: Human tissue models for a human disease: what are the barriers? Thorax. 2015; 70(7): 695–697. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHall IP, Sayers I: Pharmacogenetics and asthma: false hope or new dawn? Eur Respir J. 2007; 29(6): 1239–1245. PubMed Abstract | Publisher Full Text\n\nPortelli M, Sayers I: Genetic basis for personalized medicine in asthma. Expert Rev Respir Med. 2012; 6(2): 223–236. PubMed Abstract | Publisher Full Text\n\nSayers I, Hall IP: Pharmacogenetic approaches in the treatment of asthma. Curr Allergy Asthma Rep. 2005; 5(2): 101–108. PubMed Abstract | Publisher Full Text\n\nGentile DA, Skoner DP: New asthma drugs: small molecule inhaled corticosteroids. Curr Opin Pharmacol. 2010; 10(3): 260–265. 
PubMed Abstract | Publisher Full Text\n\nLahzami S, King GG: Targeting small airways in asthma: the new challenge of inhaled corticosteroid treatment. Eur Respir J. 2008; 31(6): 1145–1147. PubMed Abstract | Publisher Full Text\n\nHolmes AM, Solari R, Holgate ST: Animal models of asthma: value, limitations and opportunities for alternative approaches. Drug Discov Today. 2011; 16(15–16): 659–670. PubMed Abstract | Publisher Full Text\n\nWenzel S, Holgate ST: The mouse trap: It still yields few answers in asthma. Am J Respir Crit Care Med. 2006; 174(11): 1173–1176; discussion 1176–1178. PubMed Abstract | Publisher Full Text\n\nHackett TL, Scarci M, Zheng L, et al.: Oxidative modification of albumin in the parenchymal lung tissue of current smokers with chronic obstructive pulmonary disease. Respir Res. 2010; 11: 180. PubMed Abstract | Publisher Full Text | Free Full Text\n\nChang Y, Al-Alwan L, Alshakfa S, et al.: Upregulation of IL-17A/F from human lung tissue explants with cigarette smoke exposure: implications for COPD. Respir Res. 2014; 15: 145. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMurakami D, Yamada H, Yajima T, et al.: Lipopolysaccharide inhalation exacerbates allergic airway inflammation by activating mast cells and promoting Th2 responses. Clin Exp Allergy. 2007; 37(3): 339–347. PubMed Abstract | Publisher Full Text\n\nLowe AP, Thomas RS, Nials AT, et al.: LPS exacerbates functional and inflammatory responses to ovalbumin and decreases sensitivity to inhaled fluticasone propionate in a guinea pig model of asthma. Br J Pharmacol. 2015; 172(10): 2588–2603. PubMed Abstract | Publisher Full Text | Free Full Text\n\nZhang H, Kim YK, Govindarajan A, et al.: Effect of adrenoreceptors on endotoxin-induced cytokines and lipid peroxidation in lung explants. Am J Respir Crit Care Med. 1999; 160(5 Pt 1): 1703–1710. 
PubMed Abstract | Publisher Full Text\n\nHacievliyagil SS, Gunen H, Mutlu LC, et al.: Association between cytokines in induced sputum and severity of chronic obstructive pulmonary disease. Respir Med. 2006; 100(5): 846–854. PubMed Abstract | Publisher Full Text\n\nPatel IS, Seemungal TA, Wilks M, et al.: Relationship between bacterial colonisation and the frequency, character, and severity of COPD exacerbations. Thorax. 2002; 57(9): 759–764. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTumkaya M, Atis S, Ozge C, et al.: Relationship between airway colonization, inflammation and exacerbation frequency in COPD. Respir Med. 2007; 101(4): 729–737. PubMed Abstract | Publisher Full Text\n\nNicholas B, Staples KJ, Moese S, et al.: A novel lung explant model for the ex vivo study of efficacy and mechanisms of anti-influenza drugs. J Immunol. 2015; 194(12): 6144–6154. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHackett TL, Holloway R, Holgate ST, et al.: Dynamics of pro-inflammatory and anti-inflammatory cytokine release during acute inflammation in chronic obstructive pulmonary disease: an ex vivo study. Respir Res. 2008; 9: 47. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHolgate S, Agusti A, Strieter RM, et al.: Drug development for airway diseases: looking forward. Nat Rev Drug Discov. 2015; 14(6): 367–368. PubMed Abstract | Publisher Full Text\n\nHuh D, Matthews BD, Mammoto A, et al.: Reconstituting organ-level lung functions on a chip. Science. 2010; 328(5986): 1662–1668. PubMed Abstract | Publisher Full Text\n\nHarrington H, Cato P, Salazar F, et al.: Immunocompetent 3D model of human upper airway for disease modeling and in vitro drug evaluation. Mol Pharm. 2014; 11(7): 2082–2091. PubMed Abstract | Publisher Full Text | Free Full Text\n\nStewart CE, Torr EE, Mohd Jamili NH, et al.: Evaluation of differentiated human bronchial epithelial cell culture systems for asthma research. J Allergy (Cairo). 2012; 2012: 943982. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nNeuhaus V, Schwarz K, Klee A, et al.: Functional testing of an inhalable nanoparticle based influenza vaccine using a human precision cut lung slice technique. PLoS One. 2013; 8(8): e71728. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLauenstein L, Switalla S, Prenzler F, et al.: Assessment of immunotoxicity induced by chemicals in human precision-cut lung slices (PCLS). Toxicol In Vitro. 2014; 28(4): 588–599. PubMed Abstract | Publisher Full Text\n\nRimington T, Hodge E, Billington C, et al.: Dataset 1 in: Defining the inflammatory signature of human lung explant tissue in the presence and absence of glucocorticoid. F1000Research. 2017. Data Source\n\nRimington T, Hodge E, Billington C, et al.: Dataset 2 in: Defining the inflammatory signature of human lung explant tissue in the presence and absence of glucocorticoid. F1000Research. 2017. Data Source"
}
|
[
{
"id": "21786",
"date": "26 Apr 2017",
"name": "Yassine Amrani",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis interesting report describes the use of human lung tissue explants as an in vitro model to study responses to various proinflammatory stimuli. The authors showed that the model can be cultured for 24 hr and retains responsiveness to LPS and IL-1β with the production of different cytokines and growth factors. Interestingly, the proinflammatory potential of cultured human lung tissue explants can still be modulated by dexamethasone, suggesting that the model would be ideal to test novel anti-inflammatory drugs. Overall the manuscript is clearly written and data are novel. There are some minor points that need clarifications.\n\nIt would important to state how tissue viability was assessed following the 2 days culture\n\nPlease also provide, if possible, some representative picture of the histology of human lung tissue explants before and following the 2 day culture\n\nIt would better to express the data in Figure 1 as net cytokine increase rather than % of LPS.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
},
{
"id": "21789",
"date": "27 Jun 2017",
"name": "Colin D. Bingle",
"expertise": [
"Reviewer Expertise My research focus is on pulmonary cell and molecular biology."
],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nTracy Rimington and colleagues present a novel preliminary analysis of the inflammatory signature of human lung explant tissue. They correctly identify that limitations in human lung tissue models has hindered pre-clinical and mechanistic studies and provide data to suggest that a simple explanted lung tissue model may have some utility in this area.\nA significant advantage of this model is the lack of requirement for complex model development and should allow the technique to be utilised in any tissue culture facility with ease. The model takes multiple small pieces of human lung tissue and cultures them in a standard growth media for periods of up to 24 hours. Media is removed and then can be used for downstream assays. In this case media was used in a custom Luminex assay to detect a range of inflammatory mediators.\nThe results suggest that the model allows for the detection if secretion of 12/21 analytes chosen and shows that secretion of these was modified by inclusion of the pro-inflammatory mediators, bacterial LPS and interleukin beta into the media.\nThis preliminary data suggests that this type of model could be further developed and may become a valuable tool for pulmonary research.\nThe current data shows how there is a high level of variability between different explants, both in terms of basal and stimulated levels. The reasons for this will need to be explored. 
Does it represent true variability between different donors or could it be due to the way in which individual donor explants are generated? Maybe the size of each tissue fragment will have a significant influence on responses?\nIt will also be helpful to explore the tissue viability and to investigate extending the time frame of the explant cultures? Again perhaps tissue fragment size will be an important variable.\nOverall, the data provide a useful primer to the further development of lung tissue explants as a research tool.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-460
|
https://f1000research.com/articles/6-455/v1
|
10 Apr 17
|
{
"type": "Research Note",
"title": "Cell signaling promoting protein carbonylation does not cause sulfhydryl oxidation: Implications to the mechanism of redox signaling",
"authors": [
"Yuichiro J. Suzuki",
"Faisal Almansour",
"Camilla Cucinotta",
"Vladyslava Rybka",
"Lucia Marcocci",
"Faisal Almansour",
"Camilla Cucinotta",
"Vladyslava Rybka",
"Lucia Marcocci"
],
"abstract": "Reactive oxygen species (ROS) have been recognized as second messengers, however, targeting mechanisms for ROS in cell signaling have not been defined. While ROS oxidizing protein cysteine thiols has been the most popular proposed mechanism, our laboratory proposed that ligand/receptor-mediated cell signaling involves protein carbonylation. Peroxiredoxin-6 (Prx6) is one protein that is carbonylated at 10 min after the platelet-derived growth factor (PDGF) stimulation of human pulmonary artery smooth muscle cells. In the present study, the SulfoBiotics Protein Redox State Monitoring Kit Plus (Dojindo Molecular Technologies) was used to test if cysteine residues of Prx6 are oxidized in response to the PDGF stimulation. Human Prx6 has a molecular weight of 25 kDa and contains two cysteine residues. The Dojindo system adds the 15 kDa Protein-SHifter if these cysteine residues are reduced in the cells. Results showed that, in untreated cells, the Prx6 molecule predominantly exhibited the 55 kDa band, indicating that both cysteine residues are reduced in the cells. Treatment of cells with 1 mM H2O2 caused the disappearance of the 55 kDa band and the appearance of a 40 kDa band, suggesting that the high concentration of H2O2 oxidized one of the two cysteine residues in the Prx6 molecule. By contrast, PDGF stimulation had no effects on the thiol status of the Prx6 molecule. We concluded that protein carbonylation is a more sensitive target of ROS during ligand/receptor-mediated cell signaling than sulfhydryl oxidation.",
"keywords": [
"cell signaling",
"protein oxidation",
"reactive oxygen species",
"redox signaling"
],
"content": "Introduction\n\nReactive oxygen species (ROS) have been shown to play important roles in cell signaling (Finkel, 2011; Suzuki et al., 1997). In particular, the roles of ROS in cell growth signaling have been well documented (Rao & Berk, 1992; Sundaresan et al., 1995). For the mechanism of ROS signaling, the receptor activation producing ROS via NAD(P)H oxidase is a widely accepted concept (Griendling et al., 1994). However, molecular targeting mechanisms for ROS in cell signaling have been unclear. ROS targeting protein cysteine thiols has been the most popular proposed mechanism (D’Autreaux & Toledano, 2007; Forman et al., 2010; Moran et al., 2001; Rhee et al., 2000; Sen, 2000; Truong & Carroll, 2012; Veal et al., 2007), yet the occurrence of thiol oxidation requires levels of ROS that are much higher than what is expected to occur during cell signaling (Burgoyne et al., 2007).\n\nOur laboratory has proposed that ligand/receptor-mediated cell signaling involves protein carbonylation (Wong et al., 2008; Wong et al., 2010), which occurs on four susceptible amino acid residues: proline, arginine, lysine, and threonine (Amici et al., 1989; Berlett & Stadtman, 1997). Notably, in cultured cells, hydrogen peroxide (H2O2) as low as 0.5 µM was found to promote protein carbonylation (Wong et al., 2008).\n\nMore recently, we identified proteins that are carbonylated in response to the platelet-derived growth factor (PDGF) stimulation. Among them, peroxiredoxin-6 (Prx6) was found to be carbonylated in response to a 10-min treatment of human pulmonary artery smooth muscle cells with PDGF (Wong et al., 2013). Peroxiredoxins have been shown to regulate cell signaling (Woo et al., 2010). 
The present study tested whether this signaling mechanism also promotes sulfhydryl oxidation within the Prx6 molecule.\n\n\nMethods\n\nHPASMCs (ScienCell Research Laboratories, Carlsbad, CA, USA) were serum-starved overnight and treated with recombinant human PDGF-BB or H2O2 for 10, 15 or 30 min. Protein thiol states were monitored using SulfoBiotics Protein Redox State Monitoring Kit Plus (Dojindo Molecular Technologies, Rockville, MD, USA) in accordance with the manufacturer’s instructions. Briefly, cells were washed, proteins precipitated with trichloroacetic acid and “Protein-SHifters” were added to each sample. Samples were then loaded onto a sodium dodecyl sulfate polyacrylamide gel and electrophoresed. The gel was exposed to UV light to cut the “Protein-SHifters.” The resultant non-reducing SDS polyacrylamide gel was electroblotted to a nitrocellulose membrane (Bio-Rad Laboratories, Hercules, CA, USA). The membrane was blocked with 5% milk for 30 min at room temperature and incubated with the anti-Prx6 antibody produced in rabbit (Sigma-Aldrich Chemical Company, St. Louis, MO, USA; Catalogue no. P0058; 1:1,000 dilution) at 4°C overnight. The membrane was then washed three times and incubated with goat anti-rabbit IgG-horseradish peroxidase conjugate (Bio-Rad; Catalogue no. 1706515; 1:3,000 dilution) for 45 min at room temperature. After washing three times, signals were obtained using an Enhanced Chemiluminescence System (GE Healthcare Bio-Sciences, Pittsburgh, PA, USA).\n\n\nResults\n\nThe technology developed for SulfoBiotics Protein Redox State Monitoring Kit Plus, by Dojindo Molecular Technologies adds a 15 kDa Protein-SHifter on free sulfhydryl groups, allowing the visualization of the thiol status of a given protein by coupling with immunoblotting. The human Prx6 molecule with a molecular weight of 25 kDa has two cysteine residues. 
Our results indicated that untreated human pulmonary artery smooth muscle cells predominantly contain the 55 kDa species, consistent with the Prx6 molecule, which has two Protein-SHifters incorporated, indicating that both cysteine residues occur in the reduced form in the cells (Figure 1A, lane 1). Treatment of cells with PDGF (10 ng/ml) for 10 min, which promoted protein carbonylation of Prx6 (Wong et al., 2013), did not alter the thiol state of Prx6 (Figure 1A, lane 1 and lane 2). The PDGF treatment for 30 min did not alter the thiol state of Prx6 either (Figure 1A, lane 1 and lane 3). By contrast, treatment of H2O2 at a high concentration (1 mM) eliminated the 55 kDa band and generated a 40 kDa band that is consistent with one sulfhydryl group being oxidized (Figure 1A, lane 4). These results were reproduced at least five times. Dataset 1 (Suzuki et al., 2017) contains the uncropped version of Figure 1A and the uncropped repeats. The bar graph shows the data from five separate experiments with five separate cell treatments. Control experiments were performed to ensure that PDGF stimulated protein phosphorylation as well as carbonylation.\n\nHuman pulmonary artery smooth muscle cells were treated with PDGF (10 ng/ml) for 10 or 30 min as described in Wong et al. (2013), or with H2O2 (1 mM) for 15 min. Cellular proteins were precipitated with trichloroacetic acid and lysate samples were prepared in accordance with the manufacturer’s instructions for SulfoBiotics Protein Redox State Monitoring Kit Plus (Dojindo). The Protein-SHifter Plus that covalently binds to reduced protein thiols was added and the samples were subjected to electrophoresis through a 12% polyacrylamide gel. Each Protein SHifter Plus causes ~15 kDa shift of the protein bands. After electrophoresis, the gel was exposed to UV irradiation to excise the Protein-SHifter Plus moiety, and then subjected to electrotransfer to a nitrocellulose membrane and Western blotting with the Prx6 antibody. 
(A) Representative Western blotting image of six experiments. (B) Diagram of the native 25 kDa Prx6 molecule, the 40 kDa Prx6 molecule with one Protein-SHifter attached, and the 55 kDa Prx6 molecules with two Protein-SHifters attached. (C) The bar graph represents means (± SEM) of the intensity of the 55 kDa band (N = 5). The symbol (*) denotes that the value is significantly different from all other values.\n\n\nDiscussion\n\nUnlike protein carbonylation of Prx6, which is promoted in response to PDGF-treatment of human pulmonary artery smooth muscle cells (Wong et al., 2013), PDGF stimulation of cells does not cause the oxidation of two cysteine residues within the human Prx6 molecule. By contrast, cysteine oxidation within the Prx6 molecule can be promoted by treating cells with mM concentrations of H2O2 that are not likely to be generated in ligand/receptor-mediated cell signaling. We conclude that protein carbonylation, but not sulfhydryl oxidation, is a likely ROS-targeting mechanism for growth factor stimulation and cell signaling.\n\nProtein carbonylation is promoted by metal-catalyzed generation of hydroxyl radicals, which are known to promote oxidation indiscriminately. However, the caged and site-directed production of hydroxyl radicals via metals could confer specificity (Stadtman & Berlett, 1991; Wong et al., 2010).\n\n\nData availability\n\nDataset 1. The uncropped version of Figure 1A and the uncropped repeats.\n\nDOI, 10.5256/f1000research.11296.d157362 (Suzuki et al., 2017)",
"appendix": "Author contributions\n\n\n\nYJS conceived the study and designed the experiments. CC, FA, LM, VR, and YJS carried out the research. YJS prepared the first draft of the manuscript. All authors were involved in the revision of the draft manuscript and have agreed to the final content.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis work was supported by the National Institutes of Health, National Heart, Lung, and Blood Institute and National Institute of Aging (Grants R01 HL72844 and R03 AG047824) to YJS. The content is solely the responsibility of the authors and does not represent the official views of the National Institutes of Health.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nReferences\n\nAmici A, Levine RL, Tsai L, et al.: Conversion of amino acid residues in proteins and amino acid homopolymers to carbonyl derivatives by metal-catalyzed oxidation reactions. J Biol Chem. 1989; 264(6): 3341–3346. PubMed Abstract\n\nBerlett BS, Stadtman ER: Protein oxidation in aging, disease, and oxidative stress. J Biol Chem. 1997; 272(33): 20313–20316. PubMed Abstract | Publisher Full Text\n\nBurgoyne JR, Madhani M, Cuello F, et al.: Cysteine redox sensor in PKGIa enables oxidant-induced activation. Science. 2007; 317(5843): 1393–1397. PubMed Abstract | Publisher Full Text\n\nD’Autréaux B, Toledano MB: ROS as signalling molecules: mechanisms that generate specificity in ROS homeostasis. Nature Rev Mol Cell Biol. 2007; 8(10): 813–824. PubMed Abstract | Publisher Full Text\n\nFinkel T: Signal transduction by reactive oxygen species. J Cell Biol. 2011; 194(1): 7–15. PubMed Abstract | Publisher Full Text | Free Full Text\n\nForman HJ, Maiorino M, Ursini F: Signaling functions of reactive oxygen species. Biochemistry. 2010; 49(5): 835–842. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nGriendling KK, Minieri CA, Ollerenshaw JD, et al.: Angiotensin II stimulates NADH and NADPH oxidase activity in cultured vascular smooth muscle cells. Circ Res. 1994; 74(6): 1141–1148. PubMed Abstract | Publisher Full Text\n\nMoran LK, Gutteridge JM, Quinlan GJ: Thiols in cellular redox signalling and control. Curr Med Chem. 2001; 8(7): 763–772. PubMed Abstract | Publisher Full Text\n\nRao GN, Berk BC: Active oxygen species stimulate vascular smooth muscle cell growth and proto-oncogene expression. Circ Res. 1992; 70(3): 593–599. PubMed Abstract | Publisher Full Text\n\nRhee SG, Bae YS, Lee SR, et al.: Hydrogen peroxide: a key messenger that modulates protein phosphorylation through cysteine oxidation. Sci STKE. 2000; 2000(53): pe1. PubMed Abstract | Publisher Full Text\n\nSen CK: Cellular thiols and redox-regulated signal transduction. Curr Top Cell Regul. 2000; 36: 1–30. PubMed Abstract | Publisher Full Text\n\nStadtman ER, Berlett BS: Fenton chemistry. Amino acid oxidation. J Biol Chem. 1991; 266(26): 17201–17211. PubMed Abstract\n\nSundaresan M, Yu ZX, Ferrans VJ, et al.: Requirement for generation of H2O2 for platelet-derived growth factor signal transduction. Science. 1995; 270(5234): 296–299. PubMed Abstract | Publisher Full Text\n\nSuzuki Y, Almansour F, Cucinotta C, et al.: Dataset 1 in: Cell signaling promoting protein carbonylation does not cause sulfhydryl oxidation:Implications to the mechanism of redox signaling. F1000Research. 2017. Data Source\n\nSuzuki YJ, Forman HJ, Sevanian A: Oxidants as stimulators of signal transduction. Free Radic Biol Med. 1997; 22(1–2): 269–285. PubMed Abstract | Publisher Full Text\n\nTruong TH, Carroll KS: Redox regulation of epidermal growth factor receptor signaling through cysteine oxidation. Biochemistry. 2012; 51(50): 9954–9965. PubMed Abstract | Publisher Full Text | Free Full Text\n\nVeal EA, Day AM, Morgan BA: Hydrogen peroxide sensing and signaling. 
Mol Cell. 2007; 26(1): 1–14. PubMed Abstract | Publisher Full Text\n\nWong CM, Cheema AK, Zhang L, et al.: Protein carbonylation as a novel mechanism in redox signaling. Circ Res. 2008; 102(3): 310–318. PubMed Abstract | Publisher Full Text\n\nWong CM, Marcocci L, Das D, et al.: Mechanism of protein decarbonylation. Free Radic Biol Med. 2013; 65: 1126–1133. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWong CM, Marcocci L, Liu L, et al.: Cell signaling by protein carbonylation and decarbonylation. Antioxid Redox Signal. 2010; 12(3): 393–404. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWoo HA, Yim SH, Shin DH, et al.: Inactivation of peroxiredoxin I by phosphorylation allows localized H2O2 accumulation for cell signaling. Cell. 2010; 140(4): 517–528. PubMed Abstract | Publisher Full Text"
}
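Editor's note: the band-shift logic described in the Results above (native Prx6 at 25 kDa, ~15 kDa added per reduced cysteine by each Dojindo Protein-SHifter, giving 40 kDa and 55 kDa species) can be sketched as simple arithmetic. This is an illustrative reading aid only, not part of the published record; the constants are taken from the article text.

```python
# Expected apparent molecular weights in the Protein-SHifter assay,
# per the article: native human Prx6 is 25 kDa, has two cysteines,
# and each SHifter adds ~15 kDa per reduced (free) thiol.
PRX6_KDA = 25      # native Prx6 mass (from the text)
SHIFTER_KDA = 15   # mass added per reduced cysteine (from the text)
TOTAL_CYS = 2      # human Prx6 cysteine count (from the text)

def band_kda(reduced_cysteines: int) -> int:
    """Apparent band size when `reduced_cysteines` thiols carry a SHifter."""
    if not 0 <= reduced_cysteines <= TOTAL_CYS:
        raise ValueError("reduced_cysteines must be between 0 and 2")
    return PRX6_KDA + SHIFTER_KDA * reduced_cysteines

# Untreated or PDGF-treated cells: both thiols reduced -> 55 kDa band.
# 1 mM H2O2: one thiol oxidised -> 40 kDa band; both oxidised -> 25 kDa.
print(band_kda(2), band_kda(1), band_kda(0))  # 55 40 25
```

So the disappearance of the 55 kDa band and appearance of a 40 kDa band after 1 mM H2O2 corresponds to the loss of exactly one SHifter-accessible thiol.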
|
[
{
"id": "22184",
"date": "25 Apr 2017",
"name": "Sabah N.A. Hussain",
"expertise": [
"Reviewer Expertise Angiogenesis",
"ROS signaling",
"NO biology"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe authors provided indirect evidence that peroxiredoxin-6 does not undergo sulfhydryl oxidation when human pulmonary artery smooth muscle cells are exposed to PDGF but this protein undergo sulfhydryl oxidation when these cells were exposed to H2O2. It was concluded that protein carbonylation is more sensitive target of ROS during ligand/receptor-mediated cell signaling than sulfhydrul oxidation.\n\nMajor comments: I believe that the conclusion of this study is too general and the authors should restrict themselves to the main findings of this study and do not extend their observation beyond one type of cells exposed to one growth factor (PDGF).\n\nIn addition, the authors used an indirect method to assess sulfydryl oxidation rather than a direct measurement. Moreover, the authors did not provide evidence in the current study that PDGF actually produced carbonylation of Prx6. This data is required to document the differential oxidation response of this protein to these two interventions (H2O2 vs. PDGF exposure).\n\nFinally, the authors need to provide data as to the time course of Prx6 oxidation in response to H2O2 exposure. They have only shown one time point.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? 
Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate? Yes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? No",
"responses": [
{
"c_id": "2922",
"date": "31 Jul 2017",
"name": "Yuichiro Suzuki",
"role": "Author Response",
"response": "The authors provided indirect evidence that peroxiredoxin-6 does not undergo sulfhydryl oxidation when human pulmonary artery smooth muscle cells are exposed to PDGF but this protein undergo sulfhydryl oxidation when these cells were exposed to H2O2. It was concluded that protein carbonylation is more sensitive target of ROS during ligand/receptor-mediated cell signaling than sulfhydrul oxidation.[RESPONSE: Please note that, in this study, a high concentration of H2O2 was merely used as a positive control to ensure that our experimental system works in accordance with the instruction for the Dojindo SulfoBiotics Protein Redox State Monitoring Kit Plus.]Major comments: I believe that the conclusion of this study is too general and the authors should restrict themselves to the main findings of this study and do not extend their observation beyond one type of cells exposed to one growth factor (PDGF). [RESPONSE: The reviewer is correct that we herein report findings concerning PDGF-signaling in human pulmonary artery smooth muscle cells, as indicated in Abstract, Introduction, Methods, Results, and Discussion sections. ]In addition, the authors used an indirect method to assess sulfydryl oxidation rather than a direct measurement. Moreover, the authors did not provide evidence in the current study that PDGF actually produced carbonylation of Prx6. This data is required to document the differential oxidation response of this protein to these two interventions (H2O2 vs. PDGF exposure).[RESPONSE: Using mass spectrometry, we have recently identified the formation of glutamic semialdehyde on the Prx6 protein molecule in response to the PDGF treatment of cultured human pulmonary artery smooth muscle cells, confirming the induction of protein carbonylation. This work is currently ongoing, and we wish to publish these results soon. 
Also, please note that, in this Research Note, a high concentration of H2O2 was merely used as a positive control to ensure that our experimental system works in accordance with the instructions for the Dojindo SulfoBiotics Protein Redox State Monitoring Kit Plus.]Finally, the authors need to provide data as to the time course of Prx6 oxidation in response to H2O2 exposure. They have only shown one time point.[RESPONSE: Please note that, in this reported study, H2O2 (1 mM, 15 min) was merely used as a positive control to ensure that our experimental system works in accordance with the instructions for the Dojindo SulfoBiotics Protein Redox State Monitoring Kit Plus. For the subsequent study, our laboratory has performed time course and dose response experiments with H2O2. We found that the appearance of the 40 kDa band occurs as early as 5 min, and the level is sustained up to at least 30 min. We wish to publish these results along with other new findings concerning the redox regulation of Prx6 soon.]"
}
]
},
{
"id": "22403",
"date": "02 May 2017",
"name": "Tanea T. Reed",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe authors studied the response of the antioxidant protein, peroxiredoxin-6 to treatment with PDGF and hydrogen peroxide. By using a commercially available kit, the authors discovered oxidation in one of the cysteine residues at high concentrations of H2O2. My only issue with this work is for Figure 1A. The authors state that they tested three time points of hydrogen peroxide, but only one is shown in the figure. By showing all three time points could further verify the finding of this report as the 40kD would be most potentially pronounced at 30 min.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate? Yes\n\nAre all the source data underlying the results available to ensure full reproducibility? Partly\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": [
{
"c_id": "2782",
"date": "12 Jun 2017",
"name": "Yuichiro Suzuki",
"role": "Author Response",
"response": "Referee: My only issue with this work is for Figure 1A. The authors state that they tested three time points of hydrogen peroxide, but only one is shown in the figure. By showing all three time points could further verify the finding of this report as the 40kD would be most potentially pronounced at 30 min. Authors' Response: The reviewer is correct that the statement in the Methods section \"HPASMCs were serum-starved overnight and treated with recombinant human PDGF-BB or H2O2 for 10, 15 or 30 min.\" is confusing. More precisely, it should have stated \"HPASMCs were serum-starved overnight and treated with recombinant human PDGF-BB for 10 or 30 min and H2O2 for 15 min.\" Experimental design was based on our previous report (Wong et al., 2013), showing that PDGF causes carbonylation of Prx6 at 10 min and decarbonyltion at 30 min. H2O2 was merely used as a positive control in accordance with the instruction for the Dojindo SulfoBiotics Protein Redox State Monitoring Kit Plus. We have performed time course experiments with H2O2 and found that the appearance of the 40 kD band occurs as early as 5 min and the level is sustained up to at least 30 min."
}
]
},
{
"id": "22633",
"date": "15 May 2017",
"name": "Brian McDonagh",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe authors describe the effects of PDGF and H2O2 treatment on the oxidation state of Prdx6 using a thiol probe, that when attached to free thiols increases the molecular weight of the protein by 15 kDa for each probe attached to the protein. The authors demonstrate that H2O2 treatment causes a change in the redox status of Prdx6 as compared to PDGF treatment. There are a number of issues that need to be resolved and validated by the authors before they can make some of the statements made within the manuscript.\nIt is essential that the authors fully describe the sample preparation before analysis as this could greatly affect the results and interpretations made. In Figure 1 the authors describe that Prdx6 when the “Protein-SHifter” is added the protein has a mol weight of 55 kDa in controls and the PDGF treatments and one free thiol with the H2O2 treatment, but in Fig1B they show the native state of Prdx6 forming an intra- disulphide, was a reducing agent used in the sample preparation to reduce this disulphide? Does the catalytic Cys47 of this 1-Cys peroxiredoxin form an intra-disulphide with Cys91? It would also be helpful if a non “Protein-SHifter” treated sample was included in the blot to demonstrate the native band at 25 kDa. 
From Fig1A it would appear that there is a much more intense band for Prdx6 in the H2O2 treated samples, is there a loading control that can be included for this blot?\nCarbonylation usually refers to the introduction of an aldehyde or ketone group on an amino acid, I am not sure if this is what the authors are referring to in the title and throughout the manuscript. It is well known that Cys47 of Prdx6 forms a sulphinic (-SO2H) and/or sulphonic (-SO3H) acid. Indeed Prdx6 has been described as having quite a number of various modifications (Jeong, J et al, Proteomics, 2012) so the authors need to confirm the carbonylation or other modifications by mass spectrometry. It is clear that one of the Cys residues is not amenable to “Protein-SHifter” after H2O2 treatment, it would be helpful if they could identify which cysteine residue is susceptible to oxidation.\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nIs the study design appropriate and is the work technically sound? Partly\n\nAre sufficient details of methods and analysis provided to allow replication by others? No\n\nIf applicable, is the statistical analysis and its interpretation appropriate? Yes\n\nAre all the source data underlying the results available to ensure full reproducibility? Partly\n\nAre the conclusions drawn adequately supported by the results? No",
"responses": [
{
"c_id": "2761",
"date": "07 Jun 2017",
"name": "Yuichiro Suzuki",
"role": "Author Response",
"response": "Referee: It is essential that the authors fully describe the sample preparation before analysis as this could greatly affect the results and interpretations made. Authors’ response: As stated in the Methods section “Protein thiol states were monitored using SulfoBiotics Protein Redox State Monitoring Kit Plus (Dojindo Molecular Technologies, Rockville, MD, USA) in accordance with the manufacturer’s instructions.” The instructions for SulfoBiotics Protein Redox State Monitoring Kit Plus including the sample preparations can be viewed at http://www.dojindo.com/store/p/942-SulfoBiotics-Protein-Redox-State-Monitoring-Kit-Plus.html. Referee: In Figure 1 the authors describe that Prdx6 when the “Protein-SHifter” is added the protein has a mol weight of 55 kDa in controls and the PDGF treatments and one free thiol with the H2O2 treatment, but in Fig1B they show the native state of Prdx6 forming an intra- disulphide, was a reducing agent used in the sample preparation to reduce this disulphide? Does the catalytic Cys47 of this 1-Cys peroxiredoxin form an intra-disulphide with Cys91? Authors’ response: The referee is correct that the scheme in Fig. 1B is confusing. In this figure, we did not imply that the 25kD species actually has a disulfide bond but the cartoon merely depicts that both sulfhydryl groups are oxidized with “0 SH”. Referee: It would also be helpful if a non “Protein-SHifter” treated sample was included in the blot to demonstrate the native band at 25 kDa. Authors’ response: We have done these control experiments many times. Without “Protein-SHifter”, Prx6 gives a band at 25 kDa. Referee: From Fig1A it would appear that there is a much more intense band for Prdx6 in the H2O2 treated samples, is there a loading control that can be included for this blot? 
Authors’ response: Other than that BCA protein assay can be used to monitor total protein levels in the cell lysates prepared using Dojindo SulfoBiotics Protein Redox State Monitoring Kit Plus, neither Dojindo Molecular Technology, Inc nor our laboratory have yet developed loading control to be used in this system. Thus, we rely on performing multiple experiments to make appropriate conclusions. Referee: Carbonylation usually refers to the introduction of an aldehyde or ketone group on an amino acid, I am not sure if this is what the authors are referring to in the title and throughout the manuscript. Authors’ response: The referee is correct that we refer protein carbonylation as a process that forms reactive ketones or aldehydes that can be reacted by 2,4-dinitrophenylhydrazine (DNPH) to form hydrazones. Referee: It is well known that Cys47 of Prdx6 forms a sulphinic (-SO2H) and/or sulphonic (-SO3H) acid. Indeed Prdx6 has been described as having quite a number of various modifications (Jeong, J et al, Proteomics, 2012) so the authors need to confirm the carbonylation or other modifications by mass spectrometry. It is clear that one of the Cys residues is not amenable to “Protein-SHifter” after H2O2 treatment, it would be helpful if they could identify which cysteine residue is susceptible to oxidation. Authors’ response: This particular Research Note was intended to communicate with the scientific community that, under the condition where protein carbonylation is elicited as previously described by our laboratory (Wong et al., 2013), thiol oxidation was not detected by using a novel method of Dojindo SulfoBiotics Protein Redox State Monitoring Kit Plus. The referee raises exciting and important questions. Based on some of the results obtained while performing experiments for the present study, our laboratory is further investigating the redox biology of peroxiredoxin 6 and we wish to publish a full paper in the near future."
}
]
}
] | 1
|
https://f1000research.com/articles/6-455
|
https://f1000research.com/articles/6-197/v1
|
28 Feb 17
|
{
"type": "Opinion Article",
"title": "hackseq: Catalyzing collaboration between biological and computational scientists via hackathon",
"authors": [
"hackseq Organizing Committee 2016"
],
"abstract": "hackseq (http://www.hackseq.com) was a genomics hackathon with the aim of bringing together a diverse set of biological and computational scientists to work on collaborative bioinformatics projects. In October 2016, 66 participants from nine nations came together for three days for hackseq and collaborated on nine projects ranging from data visualization to algorithm development. The response from participants was overwhelmingly positive with 100% (n = 54) of survey respondents saying they would like to participate in future hackathons. We detail key steps for others interested in organizing a successful hackathon and report excerpts from each project.",
"keywords": [
"Hackathon",
"Genomics",
"Bioinformatics",
"Open Science",
"Diversity in Science"
],
"content": "Introduction\n\nTechnological advances in the biological sciences have led to an increasing availability of so-called ‘-omic’ datasets, allowing fundamental questions in biology to be answered at an unprecedented rate1. However, these datasets are complex, requiring novel and specialized informatics tools for proper analysis and to overcome the computational bottleneck in research. Open-source bioinformatics tools and pipeline development accelerates the rate of research by allowing the community to both reuse and thoroughly assess such methods. Thus, by solving biological problems in an open and collaborative manner, the field can progress at a faster rate than if code remains unavailable to the larger community2.\n\nHackathons offer a solution to catalyze tool and pipeline development for biological data science, as well as foster interdisciplinary collaborations3. These events aim to solve defined computational problems over a period of days by bringing together small teams of individuals with different and diverse skillsets. Although frequently valuable for the outputs they generate, hackathons have faced criticism due to low levels of diversity amongst participants4. We therefore established hackseq, a genomics hackathon collective (http://www.hackseq.com) that aims to promote open science, collaboration and diversity. We placed special emphasis in promoting leadership amongst women, minorities and early-career scientists. The inaugural hackseq event took place over three days in October 2016 in Vancouver (British Columbia, Canada) and was a satellite event to the annual American Society of Human Genetics (ASHG) meeting. Here we report a summary of this hackathon in the hopes of promoting similar events in the future.\n\n\nhackseq format\n\nhackseq was the first genomics hackathon in Vancouver and was based on the NCBI hackathon format3. 
Some hackathons can be perceived as high-pressure events exclusive to technically inclined and experienced individuals. We thus took measures to ensure that people of all skill levels and backgrounds were encouraged to apply. We structured hackseq as a three-day event that ran primarily from 8AM – 5PM on the Saturday/Sunday/Monday prior to the 2016 ASHG meeting. The hackseq itinerary is accessible on the hackseq github repository (https://github.com/hackseq/October_2016/blob/master/hackseq_2016_schedule.md).\n\nFirst, we opened a call for ‘team leaders’ to propose a project and lead a small team at hackseq, advertising through social media such as Twitter, announcements at the Vancouver Bioinformatics User Group (VanBUG) and bioinformatics.ca, and direct email contact with potential leaders. We screened the projects to confirm that their aims and scope would be appropriate for a 72-hour hackathon. For the ten accepted projects we used GitHub as a discussion board, creating issue threads (https://github.com/hackseq/hackseq_projects_2016/issues) for each project, allowing prospective participants to view and discuss the details of each project before applying to join a particular team.\n\nOnce we established the ten hackseq projects, we opened the call for participants. Our main goal in recruiting participants was to reach out to a diverse group of individuals and to promote participation of women, minorities and early-career scientists. To this end, we partnered with organizations, such as the Society for Canadian Women in Science and Technology (SCWIST) and VanBUG, to attract local participants. To encourage early-career scientist involvement, we contacted undergraduate and graduate-level computational sciences and bioinformatics programs at regional universities. 
To reach the global scientific audience, we contacted several human genetics societies around the world, asking them to email participant application information to their respective mailing lists.\n\nTo promote economic diversity and lower the barrier to entry for international participants, we partnered with ASHG to create travel awards based on financial need and/or minority status. hackseq had no registration fee. Lastly, we made announcements on Twitter, on the Galaxy Project’s events calendar, and at various international conferences, such as the Bioinformatics Open Source Conference 2016 and the 13th International Congress of Human Genetics, leading up to the hackathon.\n\nIn the participant application form, prospective participants ranked the top three projects on which they would like to work. Participants could apply for travel awards from ASHG and request child care, covered by our budget, to promote participation amongst parents. The organizing committee and the team leaders evaluated the applications based not only on applicants’ skill levels, but also on their interests and passion for genomics. To ensure well-rounded teams, we considered both project preferences and skill levels during the team assignment phase, ensuring a balance between novices and expert coders, and between biological and computational expertise. All forms developed for hackseq are available online (https://github.com/hackseq/October_2016/blob/master/Forms.md).\n\nBecause the projects and teams were defined beforehand, participants could get to know their team members and prepare technical infrastructure in advance. Teams hit the ground running, beginning work unprompted by the organizers at 8AM on the first day.\n\nhackseq had 66 participants in attendance from nine nations, divided into nine teams ranging from 3 to 10 individuals. Of the ten accepted projects, two team leaders withdrew prior to the hackathon for personal reasons, and one popular project split into two teams, resulting in nine teams. 
The mode age-category was 30–34 years old (62.5%) for team leaders, and 25–29 years old (58%) for participants (Figure 1A). Graduate students made up the largest fraction of participants with 48.2%, followed by academic staff (15.5%), industrial scientists (13.8%), undergraduates (10.3%), postdoctoral fellows (6.9%) and academic faculty (5.2%). Notably, the team leaders were more likely to be industry scientists (44.4%) or young faculty (22.2%) (Figure 1B). In total, 22 of 62 (35.5%) participants identified as female and 40 as male. A total of 41% self-identified as Caucasian, 40% as Asian or Pacific Islander, and 19% as Arab, Latin American or unspecified (Figures 1C and D).\n\nTo measure the diversity of hackseq participants, we asked team leaders and participants to self-report their (A) age, (B) current occupation, (C) ethnicity and (D) gender. Data are shown for team leaders (yellow) and participants (blue).\n\n\nTechnical and logistical requisites\n\nHackathons have few essential resource requirements. In this section, we outline the core logistics and technical infrastructure we employed. While these requisites could be stripped down, our experience was that attention and planning for these details maximized the ability of our teams to focus on coding and development.\n\nTo encourage participation, hackseq had no entry cost for participants. To ensure teams could focus on the hackathon and not technical or logistical issues, we secured funding for the venue, technical infrastructure, food, transportation and stationary by partnering with different organizations.\n\nA sponsorship package was created to approach different academic, non-profit and industry organizations. Besides asking for financial support, we also made communication and marketing requests, given that one of hackseq’s goals was to recruit a diverse pool of participants. 
A strong emphasis was placed on women’s groups in science and technology.\n\nIn November 2015, we contacted ASHG to ask if we could be a satellite event for their meeting. Given that the ASHG 2016 conference was planned to be held in Vancouver, hackseq gained exposure from the ASHG's communication strategy. The ASHG also provided three travel grants to participants based on financial need and diversity.\n\nThese partnerships allowed hackseq to take place in a large, bright atrium at the University of British Columbia, allowing all the teams to be in a single space and interact with one another. Food was provided to minimize distraction, and two social events were hosted, one on the first night and one on the last night, to encourage collaboration and networking amongst participants.\n\nReliable technical infrastructure is necessary for organizing a successful hackathon; primarily, electrical power, Internet access and computing resources. We ensured the venue had adequate electrical outlets for the participants’ laptops and arranged for a dedicated Wi-Fi network connection to be established for the event through the university's information technology office.\n\nUnlike many hackathons, hackseq was not restricted to coding. It also included genomic data analyses, which required additional computational resources. To promote reproducibility and collaboration, all the projects were based on pre-organized GitHub teams and repositories (see the hackseq organization on GitHub; https://github.com/hackseq). To provide teams with reliable and powerful computation, we secured in-kind donations of cloud computing from Amazon Web Services Elastic Compute Cloud (AWS-EC2), and Canada’s Michael Smith Genome Science Centre genOmics Research Container Architecture (ORCA). 
We used Linuxbrew, a cross-platform package management tool, to install bioinformatics software on ORCA5.\n\nAWS-EC2 and ORCA saw equal usage amongst the participants (43% each, not mutually exclusive), with an additional 12% using high-performance computing resources from their home institutions. Users showed a preference for resources with which they were previously experienced, and reported that it was not feasible to learn to use a new computing resource in the given time. In future, allowing team leaders and participants access to computing resources ahead of time to ‘experiment’ and familiarize themselves with the different resources would be advisable.\n\nEach team chose which programming language and software they used. The majority of participants relied on the Python (82.6%) and R (53.8%) programming languages, and also used specialized software related to their particular projects (Figure 2).\n\nAt the conclusion of hackseq, we asked participants to complete a survey on their experience at hackseq. There were 52 responses to the question, “Which programming languages and tools did you and your team use during the course of hackseq? (Comma delimited please).” These responses were parsed and the number of unique instances is reported. Languages or software mentioned fewer than two times are reported as ‘Other’.\n\nIn summary, the infrastructure requisites for running a successful hackathon are minimal and many can be acquired as in-kind donations from related organizations. In highlighting the essentials and key lessons, we hope to encourage the motivated reader to run a local scientific hackathon.\n\n\nResearch project summaries\n\nThe projects undertaken during hackseq were from diverse fields within bioinformatics, ranging from human genomic variation analysis, microbial ecology and transcriptomics, to bioinformatic algorithm development. 
The projects were proposed by the team leaders, who defined the scope of the work, with the idea that by the end of the 72 hours each team would have developed a working prototype. Here we provide brief summaries of the projects. Scientific abstracts, videos of final presentations and updated information on each project can be found at www.hackseq.com/projects16.\n\nModern transcriptomics analysis tools have limited capacity for analyzing single-cell RNA-sequencing (scRNA-seq) data from thousands of cells. VASCO is an intuitive user interface to visualize gene-cell expression and cell clustering data to explore the relationship between populations of cells and gene expression, including cell cluster of differentiation markers (CD-markers). This project was awarded the “People’s Choice” for the most outstanding project developed at hackseq.\n\nHuman sex chromosomes violate typical ploidy assumptions made for NGS autosome copy number and variant measurement, which is further confounded by mis-alignment between the X and Y chromosomes. XYalign was developed to measure sex chromosome ploidy and remap reads based on the inferred sex for downstream analysis.\n\nMany bioinformatics tools, such as genome sequence aligners and assemblers, require optimization of several input parameters to maximize a target metric. ParetoParrot measured the performance of several ‘black-box’ optimization algorithms to improve the performance of genome sequence assembly software.\n\nThere is a wealth of sequencing datasets for cell types that have helped to understand and prioritize non-coding variants. Unfortunately, for many of those cell types we still don't have complete genotype information. BaklavaWGS recovers genotype data from cell lines by aggregating sequencing data to aid downstream allele-specific analysis. A preliminary analysis is available at http://www.baklavawgs.com/.\n\nA variety of datasets and approaches were investigated for analyzing cell type and state-specific genome regulation. 
The outcome of the experimental work in exploring differentially methylated regions from different epigenomic data and public databases, such as ENCODE ChIP-seq, IHEC and JASPAR, is presented.\n\nCommercial SNP arrays fail to capture the diversity of African populations and limit the capacity to conduct large-scale medical genetic studies. Using African whole genome sequencing (WGS) data, an algorithm was developed to quickly identify SNP tags for this population. This will be used to improve upon SNP arrays for this richly diverse continent.\n\nCalling somatic mutations relies on matched tumour and normal DNA sequencing, but a matched normal sample is often not available. The SMUSH algorithm was developed to differentiate wild type, germline and somatic mutations from linked-read DNA sequencing libraries.\n\nAnalysis of shotgun metagenomic sequencing data is limited in its capacity to assemble over homologous sequences. MetaGenius uses linked-read DNA sequencing to improve the assembly of a mixture of five bacterial species.\n\nMetagenomic sequencing has largely focused on 16S rRNA amplicons. This mICP strategy uses a mixture of long PacBio and short Illumina reads to identify contigs from environmental sequencing samples, which predict the environmental state from which they were found.\n\n\nDiscussion\n\nThe overarching themes of hackseq were inclusivity, open science and collaboration. To gauge the extent to which we were successful in delivering on these themes, we performed a final survey at the conclusion of hackseq. Participants overwhelmingly described their experience as positive (Figure 3), with 100% (n = 54) of the survey respondents indicating that they would participate in an event like hackseq again and a further 80% indicating that they would like to take on an organizational/leadership role in future hackseq events. 
Participants specifically highlighted that hackseq created ample recruitment, employment and collaborative opportunities, while also exposing participants to different datasets and analyses. We believe this reflects the underlying desire amongst young scientists to share, collaborate and learn from one another. They only need be given the opportunity to do so.\n\nTo measure the quality of the experience hackseq participants had after the event, we asked (A) “Please write three single word adjectives to describe your experience at hackseq?” Responses were parsed and used to make a word-cloud (www.wordle.net), where the size of the word is proportional to the number of occurrences of that word in the survey responses. For scale, in 50 responses: ‘fun’ was mentioned 26 times; ‘exciting’ 6 times; and ‘supercalifragilisticexpialidocious’ once. (B) Additionally, we asked participants to rate four dimensions of their experience on a linear scale from 1 to 5. The kernel density of responses for these dimensions are shown, with a red dotted line showing the mean value of the responses.\n\nBy organizing hackseq as a satellite meeting of an international conference like ASHG, we were able to attract team leaders and participants from around the world, including a large proportion of young investigators and female participants (Figure 1). 
There was a higher proportion of females at hackseq (35.5%), than reported ratios at hackathons for which data is available, 20% at NASA’s Space Apps Challenge (https://www.fastcompany.com/3059036/most-creative-people/what-do-women-want-at-hackathons-nasa-has-a-list) or 15% at Spotify-organized hackathons (https://labs.spotify.com/2015/01/13/diversify-how-we-created-a-hackathon-with-50-50-female-male-participants/), which we believe to be a consequence of starting with a representative organizing committee and specifically encouraging female participation during recruitment.\n\nTo further increase global representation at future hackseq events, we recommend providing additional targeted travel awards or remote participation options to reduce proximity/cost restrictions. Further improvements could include educational resources to address common technical issues, the provision of an overnight area for participants who would like to continue to work after hours and additional activities to encourage interaction with members from different teams.\n\n\nConclusion\n\nThe nature of biological sciences has shifted to an increasing emphasis on computational analysis. Collaborative events, such as hackseq, offer an exciting platform to bring together a wide spectrum of scientists to work together and innovate. We present demographic information about the first hackseq hackathon and encourage future organizers to do likewise, to quantify social inequalities that may be present in such events, and strive to achieve equal representation in the sciences. It’s our hope that the information presented here will aid and encourage others in organizing genomics hackathons.\n\n\nData availability\n\nDataset 1: hackseq demographics: De-identified demographic data from hackseq participants in the pre-meeting survey/confirmation of attendance. 
doi, 10.5256/f1000research.10964.d1528026\n\nDataset 2: Post-hackseq survey responses: De-identified post-hackseq survey response data for the figures. doi, 10.5256/f1000research.10964.d1528037",
"appendix": "Author contributions\n\n\n\nAll members of the hackseq Organizing Committee 2016 contributed equally to hackseq and participated in the discussions expressed in this manuscript:\n\nArtem Babaian, Terry Fox Laboratory, BC Cancer Agency, Vancouver, BC, Canada\n\nBritt Drögemöller, Faculty of Pharmaceutical Sciences, University of British Columbia\n\nBruno M Grande, Department of Molecular Biology and Biochemistry, University of British Columbia\n\nShaun D Jackman, BC Cancer Agency Genome Sciences Centre\n\nAmy Huei-Yi Lee, Department of Microbiology and Immunology, University of British Columbia\n\nSantina Lin, Bioinformatics Training Program, University of British Columbia\n\nCatrina Loucks, Department of Molecular Biology and Biochemistry, Simon Fraser University\n\nAdriana Suarez-Gonzalez, Department of Botany, University of British Columbia\n\nTiffany Timbers, Masters of Data Science & Department of Statistics, University of British Columbia\n\nGalen Wright, Centre for Molecular Medicine and Therapeutics, BC Children's Hospital Research Institute, University of British Columbia\n\nAB, BD, BG, AL, SL, ASG and GW wrote the first draft of the manuscript. All authors were involved in the revision of the manuscript and agreed to the final version.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nAcknowledgements\n\nWe would first and foremost thank the hackseq participants without which this event would not have happened. By team we’d like to thank, Jean-Christophe Berube, Ogan Mancarci, Erin Marshall, Edward Mason, Celia Siu, Ben Weisburd, Shing Hei Zhan, Grace X.Y. Zheng; Madeline Couse, Bruno Grande, Eric Karlins, Tanya Phung, Phillip Richmond, Timothy H. Webster, Whitney Whitford, Melissa A. 
Wilson Sayres; Craig Glastonbury, Daisie Huang, Hamid Younesy, Jasleen Grewal, Laura Gutierrez Funderburk, Lisa Bang, Shaun Jackman, Veera Manikandan Rajagopal, Y. Brian Lee; Carolyn Ch'ng, David Brazel, Karthigayini Sivaprakasam, Jill Moore, Shobhana Sekar, Stephen Kan, Jing Yun Alice Zhu, Ka-Kyung Kim, Luca Pinello; Fotis Tsetsos, Kieran O'Neill, Shreejoy Tripathy, Manuel Belmadani; Ayton Meintjes, Scott Hazelhurst, Vincent Montoya, Marcia MacDonald, Jocelyn Lee, Dan Fornika, Brian Lee, Austin Reynolds, Tommy Carstensen; Amanjeev Sethi, Eric Zhao, Hua Ling, Patrick Marks, Peng Zhang, Samantha Kohli; Erik Gafni, Dan Kvitek, Jake Lever and Michael Schnall-Levin; Ben Busby, Justin Chu, Jessica Hardwicke, Sean La and Feng Xu.\n\nWe would like to thank our sponsorship partners 10X Genomics, ECOSCOPE, Amazon AWS, American Society of Human Genetics, Vancouver Tourism, Genome British Columbia, Association for Computing Machinery – Women, Affymetrix, bioinformatics.ca, and GitHub. We also partnered with local organizations for logistical support: Society for Canadian Women in Science and Technology, BC Cancer Agency Graduate Student and Post-Doctoral Fellow Society, National Center for Biotechnology Information and the Vancouver Bioinformatics User Group. Sponsorship partners had no role in data collection and analysis or preparation of the manuscript.\n\n\nReferences\n\nStephens ZD, Lee SY, Faghri F, et al.: Big Data: Astronomical or Genomical? PLoS Biol. Public Library of Science; 2015; 13(7): e1002195. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPrins P, de Ligt J, Tarasov A, et al.: Toward effective software solutions for big biology. Nat Biotechnol. 2015; 33(7): 686–687. PubMed Abstract | Publisher Full Text\n\nBusby B, Lesko M; August 2015 and January 2016 Hackathon participants, et al.: Closing gaps between open software and public data in a hackathon setting: User-centered software prototyping [version 2; referees: not peer reviewed]. F1000Res. 
2016; 5: 672. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRichard GT, Kafai YB, Adleberg B, et al.: StitchFest: Diversifying a College Hackathon to Broaden Participation and Perceptions in Computing. Proceedings of the 46th ACM Technical Symposium on Computer Science Education - SIGCSE ’ 15. New York, USA: ACM Press; 2015; 114–119. Publisher Full Text\n\nJackman S, Birol I: Linuxbrew and Homebrew for cross-platform package management [version 1; not peer reviewed]. F1000Res. 2016; 5(ISCB Comm J): 1795 (poster). Publisher Full Text\n\nhackseq Organising Committee (2016): Dataset 1 in: hackseq: Catalyzing collaboration between biological and computational scientists via hackathon. F1000Research. 2017. Data Source\n\nhackseq Organising Committee (2016): Dataset 2 in: hackseq: Catalyzing collaboration between biological and computational scientists via hackathon. F1000Research. 2017. Data Source"
}
|
[
{
"id": "20626",
"date": "31 Mar 2017",
"name": "Kate L. Hertweck",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThank you to the authors for writing a summary of what seems to be a very successful collaborative coding event, with this manuscript in particular focused on preparation for the event, managing logistic concerns during the event, and an overview of the projects supported. The manuscript is quite well written, and I have no concerns about the content presented therein.\nI especially appreciate the recommendations for how to solicit diverse leaders/participants, engage with partner organizations, and carefully craft a sense of community among attendees. Moreover, the authors include suggestions on how to improve similar events in the future. The data reported here provide an important context for comparison for events which continue to encourage participation from underrepresented groups.\n\nAlthough not highlighted in the paper, the itinerary for the three-day meeting described here includes a number of additional details which would be useful to other coding event organizers. For example, while the majority of meeting time was dedicated to team work, extra workshops and talks were offerred (e.g., introduction to git) that would be encourage skills development for students or other participants new to the field. I'll be interested to see whether this model for hosting a hackathon continues at ASHG or other meetings.",
"responses": []
},
{
"id": "20964",
"date": "31 Mar 2017",
"name": "Jiarong Guo",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe authors reported a detailed summary of their genomic hackathon, hackseq, to help those interested in organizing similar hackathons in future. The hackseq brought together 66 biological and computational scientists with diverse demographic background to collaborate on nine projects on genomics ranging from data visualization to algorithm development. All the participants had positive responses in post assessment and showed interests in future hackathons.\nThe background is clearly articulated. There are detailed descriptions on hackseq format and technical and logistical requisites, which are useful for future hackathons. Brief research project summaries are also described with more information available on GitHub. The data for reproducing the figures are made available on F1000 and schedules and application forms are available on GitHub.\nMajor comments: Strengths: Overall, sharing details and experiences of the hackseq such as recruiting project leaders and participants, assigning teams, logistical and technical requisites, and post assessment is valuable for the open science community to organize future hackathons.\nIt is a great idea to organize the hackathon as a satellite event of bigger events. 
The bigger events can help on travel cost of participants and more importantly promote hackathon participations.\n\nThis hackseq is very successful at recruiting diverse participants, because it has a representative committee that encourages female, minority and early career scientist participations and also good promotion strategies that it partners with organizations such as Society for Canadian Women in Science.\nWeakness: The team leaders seem to have a critical role in each project, but their roles and responsibilities during the hackathon are not clearly mentioned in the manuscripts.\nMinor comments:\n\nLast paragraph in “Hackseq format” section: it states that participants ranked top three projects on which they would like to work, but there are four projects of choice in the application form. First paragraph in “Core logistics” section: stationary -> stationery. Second paragraph in “Discussion”: female percentage in biology is significantly higher than in engineering. Thus the direct comparison of female participation rate of hackseq to engineering (NASA and Spotify) hackathons is not meaningful. Discussion: assessment is difficult with participants from diverse background. Some discussion on the current assessment and possible improvement would be useful.",
"responses": [
{
"c_id": "2615",
"date": "03 Apr 2017",
"name": "Shaun Jackman",
"role": "Reader Comment",
"response": "Thank you for your review, Jiarong. > The team leaders seem to have a critical role in each project, but their roles and responsibilities during the hackathon are not clearly mentioned in the manuscripts. I helped organize Hackseq and was also a project leader. Key responsibilities of the team leaders are: Prior to the event 1. Describing the proposed project. 2. Describing to the organizers: 1. desired number of participants and skill set 2. required compute resources 3. required software 4. required data 5. logistical requirements 3. Discussing the project with interested participants. 4. Discussing the suitability of the participants assigned to the project with the organizers. 5. Confirming that the required software is installed and works. 6. Downloading the required data. 7. Planning the scope and strategy of the project. 8. Dividing the project into separable components. During the event 1. Introducing the participants to each other. 2. Introducing the project and necessary background information to the participants. 3. Describing the components of the project to the participants. 4. Assigning those components to participants based on their interest. 5. Periodically discussing progress with the participants. 6. Troubleshooting technical issues with the help of organizers when needed. 7. Organizing the final report and presentation."
}
]
}
] | 1
|
https://f1000research.com/articles/6-197
|
https://f1000research.com/articles/3-169/v1
|
24 Jul 14
|
{
"type": "Research Article",
"title": "Myelin-specific T helper 17 cells promote adult hippocampal neurogenesis through indirect mechanisms",
"authors": [
"Johannes Niebling",
"Annette E. Rünker",
"Sonja Schallenberg",
"Karsten Kretschmer",
"Gerd Kempermann",
"Johannes Niebling",
"Annette E. Rünker",
"Sonja Schallenberg",
"Karsten Kretschmer"
],
"abstract": "CD4+ T cells provide a neuro-immunological link in the regulation of adult hippocampal neurogenesis, but the exact mechanisms underlying enhanced neural precursor cell proliferation and the relative contribution of different T helper (Th) cell subsets have remained unclear. Here, we explored the proneurogenic potential of interleukin 17-producing T helper (Th17) cells, a developmentally and functionally distinct Th cell subset that is a key mediator of autoimmune neurodegeneration. We found that base-line proliferation of hippocampal precursor cells in a T cell-deficient mouse model of impaired hippocampal neurogenesis can be restored upon adoptive transfer with homogeneous Th17 populations enriched for myelin-reactive T cell receptors. In these experiments, enhanced proliferation was independent of direct interactions of infiltrating Th17 cells with precursor cells or neighboring cells in the hippocampal neurogenic niche. Complementary studies in immunocompetent mice identified several receptors for Th17 cell-derived cytokines with mRNA expression in hippocampal precursor cells and dentate gyrus tissue, suggesting that Th17 cell activity in peripheral lymphoid tissues might promote hippocampal neurogenesis through secreted cytokines.",
"keywords": [
"adult neurogenesis",
"hippocampus",
"stem cells",
"immune deficiency",
"regulatory T cells",
"cytokines",
"plasticity",
"learning and memory"
],
"content": "Background\n\nIn the hippocampus of the adult brain, neurogenesis originates from neural precursor cells residing in the subgranular zone of the dentate gyrus that proliferate and differentiate in response to intrinsic and extrinsic stimuli, allowing adaptation of the neuronal network to changing needs throughout life1,2. Besides innate immune mechanisms3,4, CD4+ T cells of the adaptive immune system promote adult hippocampal neurogenesis and convey functional benefits in reversal learning that have been related to adult neurogenesis5–7. However the molecular and cellular mechanisms underlying CD4+ T cell-mediated enhancement of adult neurogenesis have largely remained unclear. For example, whether or not the infiltration of myelin-reactive T cells into the CNS is a prerequisite for the proneurogenic activity of CD4+ T cells has been controversially discussed5,6,8,9. Additionally, the relative contribution of the activation or differentiation status of the CD4+ T cells to their proneurogenic activity remains to be determined. This includes CD4+ T cell-derived soluble factors that could either act directly on hippocampal precursor cells or promote precursor cell activity through indirect mechanisms, e.g. by acting on neighbouring cells within the neurogenic niche of the adult hippocampus.\n\nUpon appropriate T cell and cytokine receptor signals, initially naïve CD4+ T cells can differentiate into different T helper (Th) cell subsets with distinct cytokine profiles and effector functions10–12. This includes interleukin-17 (IL-17)-producing Th17 cells that additionally express the orphan nuclear receptor ROR-γt. 
Besides mediating anti-microbial immunity at epithelial barriers13–17, ROR-γt+ Th17 cells have been broadly linked to the pathogenesis of various autoimmune and chronic inflammatory conditions18–23, most notably demyelinating inflammatory disorders of the CNS, such as multiple sclerosis in humans and experimental autoimmune encephalomyelitis (EAE) in rodents. In EAE, a local reactivation of myelin-reactive Th17 cells that have crossed the blood-brain barrier initiates a cascade of neuroinflammatory responses, ultimately leading to demyelination in the CNS and neurodegeneration. More recent evidence suggests that there are different subsets of Th17 cells comprising a wide spectrum of effector phenotypes. Among these are nonpathogenic Th17 cells with regulatory properties that restrict tissue destruction during inflammatory responses and promote tissue remodeling and repair14,24–29. This together with the broad expression of surface receptors for Th17-derived cytokines on both immune and non-immune cells15,16,30, prompted us to assess the capacity of myelin-reactive Th17 cells to enhance precursor cell proliferation in an αβ T cell-deficient mouse model of impaired hippocampal neurogenesis.\n\n\nMethods\n\nC57BL/6 mice were purchased from Janvier. NestinGFP31, TCRα−/−32, and 2D2 mice33 expressing a transgenic TCR recognizing amino acids 35-55 of myelin oligodendrocyte glycoprotein (MOG35-55), were on the C57BL/6 background. 2D2 mice additionally expressed a transgenic Foxp3GFP reporter (2D2 × Foxp3GFP34). C57BL/6 and TCRα–/– mice were intercrossed to obtain heterozygous TCRα+/– F1 mice. All mice were housed at the Experimental Center of the Medizinisch-Theoretisches Zentrum (Technische Universität Dresden, Germany) under specific pathogen-free (SPF) conditions. They received food (standard mouse food „R/M-H“ from Ssniff Spezialitäten GmbH, Soest, Germany) and water ad libitum and lived on a light/dark cycle of 12 h/12 h with lights on at 8 am. 
Animal experiments were approved by the responsible regulatory authority at Regierungspräsidium Dresden (Approval numbers 24-9168.24-1/2008-5 and 24-9168.11-1/2008-12).\n\nSingle cell suspensions of pooled spleen and lymph nodes (mesenteric and subcutaneous) from 2D2 × Foxp3GFP mice were prepared using 70 µm cell strainers (BD). Monoclonal antibodies (mAbs) to CD4 (Monoclonal Rat IgG, GK1.5, BD Biosciences, Cat. No. 5532728), CD25 (Monoclonal Rat IgG, PC61, BD Biosciences, Cat. No. 551071) and CD62L (Monoclonal Rat IgG, MEL-14, eBioscience, Cat. No. 17-0621) and Pacific Blue-conjugated streptavidin were purchased from eBioscience or BD Biosciences. Before FACS, for some experiments, CD4+ cells were enriched using biotinylated mAbs against CD4, streptavidin-conjugated microbeads and an AutoMACS (Miltenyi Biotec). Intracellular ROR-γt expression was analyzed using the Foxp3 staining buffer set (eBioscience) and an anti-ROR-γt mAb (Monoclonal Rat IgG, AFKJS-9, eBioscience, Cat. No. 12-6988). Intracellular cytokine staining was performed using the Cytofix/Cytoperm kit and mAbs to IL-17 (Monoclonal Rat IgG, TC11-18H10.1, eBioscience, Cat. No. 51-7172-80) and IFN-γ (Monoclonal Rat IgG, XMG1.2, BD Biosciences, Cat. No. 554412). Samples were analyzed on a FACSCalibur or sorted using a FACSAria II or III (BD Biosciences). Data were analyzed using FlowJo software (Tree Star, Inc.).\n\nT cells were cultured in IMDM medium, supplemented with 10% FCS (v/v), 1 mM sodium pyruvate, 1 mM HEPES, 2 mM Glutamax, 100 U/ml Penicillin-Streptomycin, 0.1 mg/ml Gentamicin, 0.1 mM non-essential amino acids, and 0.55 mM β-mercaptoethanol (all Invitrogen), at 37°C and 5% CO2. 
For Th17 differentiation in vitro, FACS-purified naïve CD4+ T cells (CD4+CD62LhighCD25−Foxp3GFP−) were cultured for one week in 24-well plates (0.5 × 10^6 cells/ml) together with 20 Gy irradiated T cell-depleted C57BL/6 splenocytes at a 1:5 ratio, in the presence of soluble anti-CD3ε (2 μg/ml, Monoclonal Armenian Hamster IgG, 145-2C11, BD Biosciences, Cat. No. 550275), recombinant human TGF-β1 (1 ng/ml), murine IL-6 (50 ng/ml) (PeproTech), and neutralizing mAbs to IL-4 (10 μg/ml, Monoclonal Rat IgG, 11B11, eBioscience, Cat. No. 14-7041) and IFN-γ (10 μg/ml, Monoclonal Rat IgG, XMG1.2, eBioscience, Cat. No. 16-7311). After 2–3 days, cultures were supplemented with fresh cytokines. Murine IL-23 (10 ng/ml; R&D Systems) was added on day 4. Prior to flow cytometry of cytokine expression, Th17 differentiation cultures were briefly (4 h) restimulated on day 7 with 50 ng/ml Phorbol 12-myristate 13-acetate (PMA; Sigma-Aldrich) and 200 ng/ml Ionomycin (Iono; Calbiochem), in the presence of 10 μg/ml brefeldin A (BFA; Sigma-Aldrich). On day 7 of Th17 differentiation cultures, 4 × 10^6 cells/200 μl PBS were injected i.v. into TCRα−/− recipients. Control mice received PBS only. Adoptively transferred CD4+ T cells were tracked by flow cytometry after 2 weeks in the peripheral blood of recipients, as indicated.\n\nMice received 3 consecutive i.p. injections of BrdU (50 mg/kg body weight in 100 μl NaCl; Sigma-Aldrich) at intervals of 6 hours. Twenty-four hours after the first injection, mice were killed with an overdose of anesthetics and perfused transcardially, first with ice-cold saline and then with 4% paraformaldehyde (Sigma-Aldrich). The brains were removed from the skull, postfixed overnight, washed with PBS and cryoprotected for ≥ 3 days in a 30% sucrose solution. Free-floating, 40 μm coronal sections were obtained on a freezing microtome (Leica SM2010R) and stored at 4°C. 
Immunohistochemistry was performed on 1-in-6 series of free-floating sections of each brain as previously described35. To visualize the immune reaction we used the peroxidase method (ABC-Elite; Vector Laboratories) with biotinylated anti-rat and anti-rabbit antibodies (Jackson ImmunoResearch) and nickel-intensified diaminobenzidine (DAB; Sigma-Aldrich) as chromogen. Primary antibodies were rat anti-BrdU (Monoclonal Rat IgG, BU1/75 (ICR1), AbD Serotec, Cat. No. MCA2060) or polyclonal rabbit anti-CD3 (Abcam, Anti-CD3 antibody, ab 5690). Sections were mounted on gelatine-coated slides, air-dried, incubated in Neoclear (Merck) for 90 min and coverslipped. BrdU+ cells in the granule cell layers and within two cell diameters below in the subgranular zone of the dentate gyrus on both sides were counted exhaustively throughout the rostro-caudal extension of the dentate gyrus by an observer blind to the treatment conditions on a light microscope (Leica DM750, 40x objective). Numbers of BrdU+ cells in the selected coronal sections of each brain were multiplied by 6 as an estimate of total BrdU+ cell numbers in both dentate gyri.\n\nFor RNA isolation, dentate gyri of NestinGFP mice were dissected as described before36. For hippocampal precursor cell cultures from microdissected dentate gyri of adult C57BL/6 mice, tissue dissection, digestion and cell enrichment were performed as previously described37,38. After enrichment, 1 × 10^4 cells/cm^2 were cultured in poly-D-lysine- and laminin-coated (Sigma-Aldrich and Roche, respectively) T25 cell culture flasks (TPP) in proliferation medium, consisting of Neurobasal Medium supplemented with B27, Glutamax and 50 U/ml Penicillin-Streptomycin (all Invitrogen), as well as 20 ng/ml human Fibroblast Growth Factor-basic (FGF-2) and 20 ng/ml human Epidermal Growth Factor (EGF; both PeproTech). Every other day, 75% of the medium was replaced by fresh medium. 
Cells were passaged when 80% confluence was reached.\n\nFor mRNA expression analysis of cells from microdissected dentate gyri, the tissue was passed several times through a 25-gauge needle in RLT buffer (QIAGEN) supplemented with 1% β-Mercaptoethanol (Bio-Rad). For mRNA expression analysis of isolated neural precursor cells, cultured cells were detached from the flask surface with Accutase (PAA) and washed with PBS prior to lysis in RLT buffer. Total RNA was extracted using the RNeasy Mini kit according to the manufacturer’s protocol (Qiagen), including on-column DNase I digestion to minimize genomic DNA contamination. For real-time RT-PCR, cDNA was synthesized using SuperScript II reverse transcriptase (Invitrogen). cDNA was analyzed in duplicates using a Mastercycler ep realplex thermal cycler (Eppendorf), the QuantiFast SYBR Green PCR kit (Qiagen), and primers listed in Table 1. With the exception of GAPDH39, primers were designed using NCBI Primer-BLAST (http://www.ncbi.nlm.nih.gov/tools/primer-blast/). Relative mRNA expression was calculated using the ΔCt method and GAPDH as housekeeping gene. Only mRNAs with a ΔCt below 15 were considered to be expressed. PCR specificity was confirmed by melting curve analysis and gel electrophoresis of PCR products.\n\nStatistical analysis was performed with GraphPad Prism 5 software and the GraphPad web calculator (http://www.graphpad.com/quickcalcs/). A two-tailed unpaired Student’s t-test was used for analysis of the experiments shown in Figure 2. Data from the experiments shown in Figure 1 were analyzed with ANOVA followed by Dunnett’s Multiple Comparison Test. Differences were considered statistically significant at p < 0.05.\n\n(A) Representative BrdU immunohistochemistry of the hippocampal dentate gyrus from eight week-old wild-type (A1), TCRα+/− (A2) and TCRα−/− (A3) mice, 24 hours after the first of 3 consecutive BrdU injections. Scale bar, 100 μm. 
(B) Quantification of BrdU+ cells in the dentate gyrus of wild-type (n = 6), TCRα+/− (n = 7) and TCRα−/− (n = 8) mice. All numbers are mean ± SEM. ANOVA, * p < 0.05.\n\n(A–C) Th17 polarization in vitro. (A) Flow cytometry of encephalitogenic CD4+ T cells. Dot plots show pre-sort (left) and post-sort (right) analysis of naïve T cells (CD4+CD62LhighCD25−Foxp3GFP−) from pooled spleen and lymph nodes of 2D2 × Foxp3GFP mice. FACS-purified T cell populations were cultured under Th17-polarizing conditions, as described in the Methods section. On day 7, efficiency of Th17 cell differentiation was confirmed by intracellular flow cytometry of (B) the Th17 transcription factor ROR-γt and (C) the Th17 and Th1 signature cytokines IL-17 and IFN-γ, respectively. (D–I) Impact of adoptive Th17 cell transfer on hippocampal precursor cell proliferation in TCRα−/− mice. (D) On day 7, 4 × 10^6 total cells from Th17 polarization cultures were injected i.v. into adult TCRα−/− mice. (D1) Dot plots show representative flow cytometry of CD4+ T cells in peripheral blood of recipient mice that express the Vα3.2 chain of the transgenic 2D2 TCR, two weeks after adoptive transfer. (D2) and (D3) Graphs show composite percentages of total CD4+ T cells (D2) and MOG35-55-reactive Vα3.2+ T cells among gated CD4+ T cell populations (D3) from peripheral blood of recipient mice. The arrowheads in (D2) and (D3) highlight an individual mouse that exhibited immune cell infiltrations in the brain (see below). Numbers in dot plots in (A–D) indicate the percentage of cells in the respective quadrant or gate. (E,F) Anti-CD3 immunohistochemistry. (E) Immunohistochemistry of the dentate gyrus of TCRα−/− recipient mice for the pan-T cell marker CD3, two weeks after adoptive Th17 cell transfer. 
Infiltrating CD3+ T cells were found to be below the level of detection in all mice analyzed (E1, scale bar, 100 μm), with the exception of an individual recipient mouse that exhibited CD3+ T cell and other immune cell infiltrations in some brain areas, including the hippocampus (E2, scale bar, 100 μm). (F) Anti-CD3 immunohistochemistry of the spleen from wild-type C57BL/6 mice was included as a positive control (F1, scale bar, 100 μm; F2, scale bar, 25 μm). The arrowhead in (F2) indicates an individual CD3+ T cell. (G,H) Quantification of hippocampal cell proliferation. (G) BrdU immunohistochemistry of the dentate gyrus of TCRα−/− mice, which had been injected with either (G1) PBS or (G2) Th17 cells two weeks earlier, was performed 24 hours after the first of 3 consecutive BrdU injections. (G3) depicts the dentate gyrus of the mouse exhibiting immune cell infiltrations (see E2). Scale bar, 100 μm. (H) Quantification of BrdU+ cells in the dentate gyrus of TCRα−/− mice injected with either PBS (n = 7) or Th17 cells (n = 7). All numbers are mean ± SEM. t-test, * p < 0.05. (I) Scatter diagram to visualize a possible relationship between cell proliferation in the dentate gyrus and the percentage of MOG35-55-reactive Vα3.2+ T cells among CD4+ T cells in the peripheral blood of recipient mice two weeks after adoptive transfer. No statistically significant correlation was found.\n\n\nResults\n\nFreshly microdissected tissue and cultured neural precursor cells from the dentate gyrus of adult NestinGFP and C57BL/6 wild-type mice were subjected to mRNA expression analysis of T cell-relevant cytokine receptor genes by real-time RT-PCR, as indicated. (A) Gel electrophoresis of RT-PCR duplicate samples. For details on indicated cytokine receptor genes, see below. Total cells from spleen and pooled lymph nodes were included as positive control. (B) Quantification by real-time RT-PCR. 
Relative mRNA expression values of indicated genes encoding cytokine receptor chains, as revealed by quantitative RT-PCR using GAPDH for normalization. Only mRNAs with a ΔCt below 15 were considered to be expressed (n.d., not detected). Shown are mean values ± range of duplicate samples. (B1) Receptor subunits for essential T cell effector cytokines, including Th17 cell-derived cytokines, were expressed in the dentate gyrus as well as isolated precursor cells. TNFR1, IFN-γR1, IL-10Rβ and IL-17RC are components of the receptors for TNF-α, IFN-γ (Th1), IL-10 (Treg/Th2) as well as IL-22, IL-17A and IL-17F (Th17). (B2) Shared receptor subunits of class I cytokine receptors (glycoprotein 130, gp130; common beta chain, βc; common gamma chain, γc) were expressed in the dentate gyrus but, with the exception of gp130, not by the isolated precursor cells. Similar results were obtained for TGF-βR1, TGF-βR2, IL-2Rα (ΔCt = 17.86), IL-2Rβ (ΔCt = 16.75), IL-4Rα, IL-6Rα, IL-10Rα, IL-13Rα1, IL-21R, IL-22Rα1 (ΔCt = 18.78) and GM-CSFRα (ΔCt = 19.22). As a representative example, relative mRNA expression values for TGF-βR1 are shown (B3). Expression of the cytokine receptor mRNA for IL-5Rα could be detected neither in microdissected tissue nor in cultured precursor cells from the dentate gyrus.\n\nWe first assessed steady-state levels of cell proliferation in the hippocampal dentate gyrus of adult, 8-week-old TCRα–/– mice, which are characterized by a complete lack of αβ T cells (both CD4+ and CD8+) due to targeted deletion of the gene encoding the TCRα chain (TCRα–/–). For in vivo labeling of dividing cells, TCRα–/– mice received three consecutive i.p. injections of the thymidine analog bromodeoxyuridine (BrdU) at intervals of six hours. In these experiments, age-matched cohorts of fully immunocompetent TCRα+/– and C57BL/6 wild-type mice were included for comparison. 
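For readers who want to reproduce the expression calls above, the ΔCt arithmetic described in the Methods (GAPDH normalization, with ΔCt = 15 as the expression cut-off) can be sketched in a few lines of Python; the Ct values below are hypothetical and purely illustrative:

```python
def delta_ct(ct_target: float, ct_gapdh: float) -> float:
    """Ct of the target gene minus Ct of the GAPDH housekeeping gene."""
    return ct_target - ct_gapdh

def relative_expression(ct_target: float, ct_gapdh: float) -> float:
    """Relative mRNA expression as 2^-deltaCt (a higher Ct means less mRNA)."""
    return 2.0 ** -delta_ct(ct_target, ct_gapdh)

def is_expressed(ct_target: float, ct_gapdh: float, cutoff: float = 15.0) -> bool:
    """The study's call: only deltaCt values below 15 count as expressed."""
    return delta_ct(ct_target, ct_gapdh) < cutoff

# Hypothetical Ct values for illustration only.
assert is_expressed(30.0, 20.0)        # deltaCt = 10, called expressed
assert not is_expressed(38.0, 20.0)    # deltaCt = 18, above the cut-off
```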
Twenty-four hours after the first BrdU injection, experimental mice were subjected to immunohistochemical quantification of BrdU+ cells in the dentate gyrus (Figure 1A). Consistent with a published study in TCRα–/– mice based on endogenous Ki67 expression as proliferation marker7, we found that TCRα–/– mice (10205 ± 492 BrdU+ cells, n = 8) exhibited significantly reduced levels of proliferation compared to C57BL/6 wild-type mice (11843 ± 556, n = 6; ANOVA, F(2, 18) = 3.698, p < 0.05; Figure 1B). TCRα+/– mice (11469 ± 273, n = 7) fell between controls and knockouts. Overall, these results are in agreement with our previous observation that CD4+ T cells provide a neuro-immunological link in the base-line regulation of hippocampal precursor cell activity6.\n\nTo assess the impact of myelin-reactive Th17 cells on proliferation in vivo, we employed adoptive T cell transfers and quantified BrdU+ cells in the hippocampus of TCRα–/– recipients two weeks later. For the generation of Th17 cells, naïve CD4+ T cells (CD4+CD62LhighCD25−Foxp3GFP−) carrying the MOG35-55-specific 2D2 TCR as a transgene were FACS-purified from peripheral lymphoid tissues of 2D2 × Foxp3GFP mice (Figure 2A) and cultured under T cell stimulatory conditions that promote efficient differentiation into Th17 cells with a ROR-γt+IL-17+ phenotype (Figure 2B and C). As expected based on previous observations with differentiated Th17 cells in vitro, these cultures exhibited limited IFN-γ production (Figure 2C).\n\nOn day 7 of Th17 differentiation cultures, 4 × 10^6 total cells were injected i.v. into adult, age-matched cohorts of TCRα–/– mice. TCRα–/– mice that received PBS only were included as controls. Two weeks later, small populations of adoptively transferred CD4+ T cells could be detected in the peripheral blood of recipient mice (Figure 2D). 
In these experiments, significant proportions of CD4+ Th17 cells expressed the MOG35-55-specific 2D2 TCR transgene (ranging from 15.6% to 56.1%), as judged by flow cytometry of the TCRα subunit of the transgenic 2D2 TCR, employing anti-Vα3.2 mAbs (Figure 2D). Importantly, throughout the observation period, TCRα–/– recipients of in vitro generated Th17 cells appeared phenotypically healthy and exhibited no clinical symptoms of EAE. Consistently, immunohistochemistry for the pan-T cell marker CD3 revealed that infiltrating Th17 cells in the dentate gyrus were below the detection level in all mice analyzed (Figure 2E and F), with the exception of an individual recipient with immune cell infiltrates, including some CD3+ T cells (Figure 2E2).\n\nTwo weeks after adoptive T cell transfer, cell proliferation in the dentate gyrus of TCRα–/– recipients was assessed by immunohistochemical quantification of BrdU+ cells, as described above. After applying Grubbs’ outlier test, one animal in the control group with exceptionally high numbers of BrdU-positive cells was excluded from further analysis. The results showed that TCRα–/– recipients of Th17 cells exhibited significantly increased numbers of BrdU+ cells in the hippocampal dentate gyrus (11758 ± 347, n = 7), as compared to control-injected TCRα–/– mice (10602 ± 214, n = 7; t-test, p < 0.05; Figure 2G and H). Thus, populations of Th17 cells enriched for myelin-reactive TCRs appear sufficient to restore impaired base-line proliferation of hippocampal precursor cells in adult TCRα–/– recipient mice, in the absence of T cell infiltration and direct interaction with neural precursor cells or cellular components of the hippocampal neurogenic niche such as microglia. 
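The statistical path described above (a Grubbs' screen for a single outlier, followed by a two-tailed unpaired t-test) was run in GraphPad Prism; a minimal pure-Python sketch of the two underlying statistics, on made-up BrdU+ counts rather than the study's data, might look like this:

```python
from statistics import mean, stdev

def grubbs_g(values):
    # Grubbs' test statistic: the largest absolute deviation from the
    # sample mean, scaled by the sample standard deviation. In practice
    # G is compared against a critical value for the given n and alpha
    # (omitted here for brevity).
    m, s = mean(values), stdev(values)
    return max(abs(x - m) for x in values) / s

def t_statistic(a, b):
    # Two-sample Student's t statistic with pooled variance; the p-value
    # is then read off the t distribution with len(a) + len(b) - 2
    # degrees of freedom.
    na, nb = len(a), len(b)
    pooled = ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    return (mean(a) - mean(b)) / (pooled * (1 / na + 1 / nb)) ** 0.5

# Made-up BrdU+ counts: the last control animal is an obvious outlier.
control = [10400, 10700, 10500, 10600, 10300, 10650, 16500]
g = grubbs_g(control)        # a large G flags the 16500 count
trimmed = control[:-1]       # drop the flagged value before the t-test
t = t_statistic([11600, 11900, 11750, 11800], trimmed[:4])
```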
Consistently, the enhanced proliferative activity of precursor cells did not correlate with the proportion of Vα3.2+ Th17 cells that accumulated in recipient mice (Pearson’s r = –0.28, p = 0.538, 95% CI –0.85 to 0.60, n = 7; Figure 2I).\n\nBesides the signature cytokines IL-17A and IL-17F, ROR-γt+ Th17 cells have been reported to produce a variety of cytokines such as TNF-α, IFN-γ, IL-9, IL-10, IL-21 and IL-22. In a first attempt to provide insight into possible mechanisms underlying the enhancement of hippocampal precursor cell proliferation by Th17 cell-secreted cytokines, we assessed expression levels of mRNAs encoding relevant cytokine receptors in the neurogenic niche. To this end, we performed quantitative RT-PCR analysis of freshly microdissected tissue as well as isolated precursor cells from the dentate gyrus of adult, immunocompetent C57BL/6 mice (Figure 3).\n\nThis approach identified several subunits of receptors for Th17-derived cytokines with detectable mRNA expression levels in both total dentate gyrus tissue and isolated precursor cells (Figure 3A and B), namely IL-17 receptor C (IL-17RC), tumor necrosis factor R1 (TNFR1), interferon-gamma R1 (IFN-γR1) as well as IL-10R beta (IL-10Rβ), a common subunit involved in the formation of the receptors for IL-10 and IL-2240. Next, we extended our analysis to the type I cytokine receptor family (glycoprotein 130: gp130, CD130; common γ chain: γc, CD132; common β chain: βc, CD131), which is involved in the formation of more than 20 different cytokine receptors. In these experiments, mRNA expression of all three receptor family members (gp130, βc, γc) could be detected in total dentate gyrus tissue (Figure 3A and B). Furthermore, mRNA encoding gp130, a subunit shared between the receptors for cytokines such as IL-6, leukemia inhibitory factor (LIF) and ciliary neurotrophic factor (CNTF), was also expressed in isolated neural precursor cells. 
Interestingly, IL-6, LIF and CNTF directly affect the differentiation of adult hippocampal precursor cells in vitro41–44.\n\nCytokine receptor subunits with detectable mRNA expression levels in microdissected total tissue but not in isolated precursor cells from the dentate gyrus included transforming growth factor beta receptor 1 and 2 (TGF-βR1 and TGF-βR2), IL-2R alpha and beta (IL-2Rα and IL-2Rβ), IL-4Rα, IL-10Rα, IL-13Rα1, IL-21R, IL-22Rα1 and granulocyte-macrophage colony stimulating factor receptor alpha (GM-CSFRα). Among these, the ΔCt values for IL-2Rα (ΔCt = 17.86), IL-2Rβ (ΔCt = 16.75), IL-22Rα1 (ΔCt = 18.78) and GM-CSFRα (ΔCt = 19.22) exceeded the chosen cut-off of ΔCt = 15. Nevertheless, in all of these cases a specific product could be detected by gel electrophoresis of RT-PCR samples (Figure 3A). Additionally, and in contrast to previous reports on IL-6Rα mRNA expression in dentate gyrus-derived precursor cells42, we found IL-6Rα mRNA to be expressed in whole dentate gyrus but not in isolated neural precursor cells. The underlying reason for this apparent discrepancy between studies remains to be determined, but may include methodological differences in the preparation and/or purity of isolated precursor cells. Lastly, among the cytokine receptor subunits whose expression was analyzed in the present study, we failed to detect mRNA expression for IL-5R alpha (IL-5Rα) in either microdissected tissue or isolated precursor cells (Figure 3A).\n\n\nDiscussion\n\nPrevious studies on mice with transgenic expression of a myelin-specific TCR on CD4+ T cells5 and a non-TCR transgenic mouse model of MOG-inducible EAE9 have provided the first evidence that encephalitogenic CD4+ T cell activity can promote hippocampal precursor cell proliferation and adult neurogenesis. 
Here, we have extended these observations and show that, in the absence of autoimmune neuroinflammation, small numbers of myelin-reactive CD4+ T cells with a ROR-γt+IL-17+ phenotype are sufficient to restore base-line proliferation of hippocampal precursor cells in TCRα–/– mice that lack endogenous αβ T cells.\n\nMechanistically, and consistent with the proneurogenic activity of non-infiltrating CD4+ T cells with a polyclonal TCR repertoire6,7, the overall absence of immune cell infiltrations in our Th17 adoptive transfer model emphasizes that direct cell-cell interaction is not a prerequisite of enhanced Th17-mediated hippocampal precursor cell proliferation. Alternatively, Th17 cells residing in peripheral lymphoid tissues outside the brain may secrete cytokines that are actively transported across the blood-brain barrier45,46 and act on the hippocampal neurogenic niche to promote precursor cell proliferation. Indeed, it is becoming increasingly clear that the impact of inflammatory cytokines on hippocampal neurogenesis appears much more context-dependent than anticipated based on previous studies highlighting overall detrimental effects3,4,47–49. Factors that influence the impact of inflammatory cytokines on neurogenesis include the administration route and local cytokine concentrations, the strength and duration of enhanced cytokine receptor signalling as well as the target cell within the neurogenic niche. While the present study suggests that several receptors for Th17 cell-derived cytokines (TNF-α, IFN-γ, IL-17, IL-22) are expressed on hippocampal precursor cells as well as neighbouring cells in the dentate gyrus, it will be important to investigate whether the pattern of expressed cytokine receptors observed in mice under physiological baseline conditions is subject to differential regulation in response to intrinsic or extrinsic stimuli. 
Another important, unresolved question is whether the proneurogenic effect of Th17 cells can be attributed to an individual inflammatory cytokine or is rather mediated by the combined action of different Th17 cell-derived factors.\n\nAt present, cytokines with reported proneurogenic potential in hippocampal neurogenesis include IFN-γ50,51, TNF-α52,53, TGF-β54, CNTF44 as well as IL-1β and IL-642,55. Interestingly, the Th17 signature cytokine IL-17 has recently been found to increase neurite outgrowth from adult postganglionic sympathetic neurons, a process that required NFkB activation28. Importantly, the NFkB pathway, which is shared between many cytokine receptor signaling pathways, has previously been implicated in the regulation of neural precursor cell proliferation and differentiation56,57. Clearly, future studies are warranted to directly address a putative role of IL-17 and the NFkB pathway in hippocampal proliferation and neurogenesis.\n\nIn summary, the present study demonstrates that TCRα–/– mice represent a suitable experimental model to assess the proneurogenic potential of homogeneous Th cell populations generated under well-defined in vitro conditions. This is likely to facilitate mechanistic studies on the relative contribution of various CD4+ Th cell subsets (Th1, Th2, Th17 etc.) to the regulation of adult hippocampal neurogenesis.\n\n\nData availability\n\nF1000Research: Dataset 1. Hippocampal neurogenesis data in T helper 17 cell-deficient and control mice, http://dx.doi.org/10.5256/f1000research.4439.d3170858",
"appendix": "Author contributions\n\n\n\nJN performed and analyzed the experiments and contributed to the data interpretation and wrote the manuscript. AR and SS performed experiments, contributed to the research design and the analysis and interpretation of data. KK and GK conceived the research, guided its design, analysis and interpretation, and wrote the manuscript.\n\n\nCompeting interests\n\n\n\nNo relevant competing interests were disclosed.\n\n\nGrant information\n\nThis work was supported by the CRTD, Center for Regenerative Therapies Dresden and Cluster of Excellence (Deutsche Forschungsgemeinschaft FZT 111) and a CRTD Seed Grant to K.K. and G.K.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgments\n\nThe authors are indebted to Marie Boernert, Tina Koenig, Carmen Friebel (Kretschmer Group) and Daniela Lasse (Kempermann Group) for excellent technical assistance. We would like to thank Marcin Dembinski, Pei-Yun Tsai, Cathleen Petzold (Kretschmer Group) and Rupert Overall (Kempermann Group) for help with some experiments and fruitful discussions.\n\n\nReferences\n\nRömer B, Krebs J, Overall RW, et al.: Adult hippocampal neurogenesis and plasticity in the infrapyramidal bundle of the mossy fiber projection: I. Co-regulation by activity. Front Neurosci. 2011; 5(107). PubMed Abstract | Publisher Full Text | Free Full Text\n\nKrebs J, Römer B, Overall RW, et al.: Adult Hippocampal Neurogenesis and Plasticity in the Infrapyramidal Bundle of the Mossy Fiber Projection: II. Genetic Covariation and Identification of Nos1 as Linking Candidate Gene. Front Neurosci. 2011; 5(106). PubMed Abstract | Publisher Full Text | Free Full Text\n\nMonje ML, Toda H, Palmer TD: Inflammatory blockade restores adult hippocampal neurogenesis. Science. 2003; 302(5651): 1760–5. 
PubMed Abstract | Publisher Full Text\n\nEkdahl CT, Claasen JH, Bonde S, et al.: Inflammation is detrimental for neurogenesis in adult brain. Proc Natl Acad Sci U S A. 2003; 100(23): 13632–7. PubMed Abstract | Publisher Full Text | Free Full Text\n\nZiv Y, Ron N, Butovsky O, et al.: Immune cells contribute to the maintenance of neurogenesis and spatial learning abilities in adulthood. Nat Neurosci. 2006; 9(2): 268–75. PubMed Abstract | Publisher Full Text\n\nWolf SA, Steiner B, Akpinarli A, et al.: CD4-positive T lymphocytes provide a neuroimmunological link in the control of adult hippocampal neurogenesis. J Immunol. 2009; 182(7): 3979–84. PubMed Abstract | Publisher Full Text\n\nHuang GJ, Smith AL, Gray DH, et al.: A genetic and functional relationship between T cells and cellular proliferation in the adult hippocampus. PLoS Biol. 2010; 8(12): e1000561. PubMed Abstract | Publisher Full Text | Free Full Text\n\nZiv Y, Schwartz M: Immune-based regulation of adult neurogenesis: implications for learning and memory. Brain Behav Immun. 2008; 22(2): 167–76. PubMed Abstract | Publisher Full Text\n\nHuehnchen P, Prozorovski T, Klaissle P, et al.: Modulation of adult hippocampal neurogenesis during myelin-directed autoimmune neuroinflammation. Glia. 2011; 59(1): 132–42. PubMed Abstract | Publisher Full Text\n\nMosmann TR, Coffman RL: TH1 and TH2 cells: different patterns of lymphokine secretion lead to different functional properties. Annu Rev Immunol. 1989; 7: 145–73. PubMed Abstract | Publisher Full Text\n\nMoss RB, Moll T, El-Kalay M, et al.: Th1/Th2 cells in inflammatory disease states: therapeutic implications. Expert Opin Biol Ther. 2004; 4(12): 1887–96. PubMed Abstract | Publisher Full Text\n\nHarrington LE, Hatton RD, Mangan PR, et al.: Interleukin 17-producing CD4+ effector T cells develop via a lineage distinct from the T helper type 1 and 2 lineages. Nat Immunol. 2005; 6(11): 1123–32. 
PubMed Abstract | Publisher Full Text\n\nBasu R, Hatton RD, Weaver CT: The Th17 family: flexibility follows function. Immunol Rev. 2013; 252(1): 89–103. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHuber S, Gagliani N, Flavell RA: Life, death, and miracles: Th17 cells in the intestine. Eur J Immunol. 2012; 42(9): 2238–45. PubMed Abstract | Publisher Full Text\n\nKorn T, Bettelli E, Oukka M, et al.: IL-17 and Th17 Cells. Annu Rev Immunol. 2009; 27: 485–517. PubMed Abstract | Publisher Full Text\n\nMiossec P, Korn T, Kuchroo VK: Interleukin-17 and type 17 helper T cells. N Engl J Med. 2009; 361(9): 888–98. PubMed Abstract | Publisher Full Text\n\nWilson NJ, Boniface K, Chan JR, et al.: Development, cytokine profile and function of human interleukin 17–producing helper T cells. Nat Immunol. 2007; 8(9): 950–7. PubMed Abstract | Publisher Full Text\n\nBettelli E, Oukka M, Kuchroo VK: T(H)-17 cells in the circle of immunity and autoimmunity. Nat Immunol. 2007; 8(4): 345–50. PubMed Abstract | Publisher Full Text\n\nDardalhon V, Korn T, Kuchroo VK, et al.: Role of Th1 and Th17 cells in organ-specific autoimmunity. J Autoimmun. 2008; 31(3): 252–6. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMaddur MS, Miossec P, Kaveri SV, et al.: Th17 cells: biology, pathogenesis of autoimmune and inflammatory diseases, and therapeutic strategies. Am J Pathol. 2012; 181(1): 8–18. PubMed Abstract | Publisher Full Text\n\nOkada H, Khoury SJ: Type17 T-cells in central nervous system autoimmunity and tumors. J Clin Immunol. 2012; 32(4): 802–8. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWilke CM, Bishop K, Fox D, et al.: Deciphering the role of Th17 cells in human disease. Trends Immunol. 2011; 32(12): 603–11. PubMed Abstract | Publisher Full Text | Free Full Text\n\nZepp J, Wu L, Li X: IL-17 receptor signaling and T helper 17-mediated autoimmune demyelinating disease. Trends Immunol. 2011; 32(5): 232–9. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nSugimoto K, Ogawa A, Mizoguchi E, et al.: IL-22 ameliorates intestinal inflammation in a mouse model of ulcerative colitis. J Clin Invest. 2008; 118(2): 534–44. PubMed Abstract | Free Full Text\n\nPickert G, Neufert C, Leppkes M, et al.: STAT3 links IL-22 signaling in intestinal epithelial cells to mucosal wound healing. J Exp Med. 2009; 206(7): 1465–72. PubMed Abstract | Publisher Full Text | Free Full Text\n\nZenewicz LA, Yancopoulos GD, Valenzuela DM, et al.: Innate and adaptive interleukin-22 protects mice from inflammatory bowel disease. Immunity. 2008; 29(6): 947–57. PubMed Abstract | Publisher Full Text | Free Full Text\n\nEsplugues E, Huber S, Gagliani N, et al.: Control of TH17 cells occurs in the small intestine. Nature. 2011; 475(7357): 514–8. PubMed Abstract | Publisher Full Text | Free Full Text\n\nChisholm SP, Cervi AL, Nagpal S, et al.: Interleukin-17A increases neurite outgrowth from adult postganglionic sympathetic neurons. J Neurosci. 2012; 32(4): 1146–55. PubMed Abstract | Publisher Full Text\n\nMcGee HM, Schmidt BA, Booth CJ, et al.: IL-22 promotes fibroblast-mediated wound repair in the skin. J Invest Dermatol. 2013; 133(5): 1321–9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nEyerich S, Eyerich K, Cavani A, et al.: IL-17 and IL-22: siblings, not twins. Trends Immunol. 2010; 31(9): 354–61. PubMed Abstract | Publisher Full Text\n\nYamaguchi M, Saito H, Suzuki M, et al.: Visualization of neurogenesis in the central nervous system using nestin promoter-GFP transgenic mice. Neuroreport. 2000; 11(9): 1991–6. PubMed Abstract | Publisher Full Text\n\nMombaerts P, Clarke AR, Rudnicki MA, et al.: Mutations in T-cell antigen receptor genes alpha and beta block thymocyte development at different stages. Nature. 1992; 360(6401): 225–31. 
PubMed Abstract | Publisher Full Text\n\nBettelli E, Pagany M, Weiner HL, et al.: Myelin oligodendrocyte glycoprotein-specific T cell receptor transgenic mice develop spontaneous autoimmune optic neuritis. J Exp Med. 2003; 197(9): 1073–81. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFontenot JD, Rasmussen JP, Williams LM, et al.: Regulatory T cell lineage specification by the forkhead transcription factor foxp3. Immunity. 2005; 22(3): 329–41. PubMed Abstract | Publisher Full Text\n\nKronenberg G, Reuter K, Steiner B, et al.: Subpopulations of proliferating cells of the adult hippocampus respond differently to physiologic neurogenic stimuli. J Comp Neurol. 2003; 467(4): 455–63. PubMed Abstract | Publisher Full Text\n\nHagihara H, Toyama K, Yamasaki N, et al.: Dissection of hippocampal dentate gyrus from adult mouse. J Vis Exp. 2009; (33): 1–6. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBabu H, Cheung G, Kettenmann H, et al.: Enriched monolayer precursor cell cultures from micro-dissected adult mouse dentate gyrus yield functional granule cell-like neurons. PLoS One. 2007; 2(4): e388. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBabu H, Claasen JH, Kannan S, et al.: A protocol for isolation and enriched monolayer cultivation of neural precursor cells from mouse dentate gyrus. Front Neurosci. 2011; 5(89): 1–10. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLluís F, Roma J, Suelves M, et al.: Urokinase-dependent plasminogen activation is required for efficient skeletal muscle regeneration in vivo. Blood. 2001; 97(6): 1703–11. PubMed Abstract | Publisher Full Text\n\nOzaki K, Leonard WJ: Cytokine and cytokine receptor pleiotropy and redundancy. J Biol Chem. 2002; 277(33): 29355–8. PubMed Abstract | Publisher Full Text\n\nNakanishi M, Niidome T, Matsuda S, et al.: Microglia-derived interleukin-6 and leukaemia inhibitory factor promote astrocytic differentiation of neural stem/progenitor cells. Eur J Neurosci. 
2007; 25(3): 649–58. PubMed Abstract | Publisher Full Text\n\nBarkho BZ, Song H, Aimone JB, et al.: Identification of astrocyte-expressed factors that modulate neural stem/progenitor cell differentiation. Stem Cells Dev. 2006; 15(3): 407–421. PubMed Abstract | Publisher Full Text | Free Full Text\n\nOh J, McCloskey MA, Blong CC, et al.: Astrocyte-derived interleukin-6 promotes specific neuronal differentiation of neural progenitor cells from adult hippocampus. J Neurosci Res. 2010; 88(13): 2798–809. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMüller S, Chakrapani BP, Schwegler H, et al.: Neurogenesis in the dentate gyrus depends on ciliary neurotrophic factor and signal transducer and activator of transcription 3 signaling. Stem Cells. 2009; 27(2): 431–41. PubMed Abstract | Publisher Full Text\n\nBanks WA, Farr SA, Morley JE: Entry of blood-borne cytokines into the central nervous system: effects on cognitive processes. Neuroimmunomodulation. 2002–2003; 10(6): 319–27. PubMed Abstract | Publisher Full Text\n\nBanks WA, Erickson MA: The blood-brain barrier and immune function and dysfunction. Neurobiol Dis. 2010; 37(1): 26–32. PubMed Abstract | Publisher Full Text\n\nKaneko N, Kudo K, Mabuchi T, et al.: Suppression of cell proliferation by interferon-alpha through interleukin-1 production in adult rat dentate gyrus. Neuropsychopharmacology. 2006; 31(12): 2619–26. PubMed Abstract | Publisher Full Text\n\nKoo JW, Duman RS: IL-1beta is an essential mediator of the antineurogenic and anhedonic effects of stress. Proc Natl Acad Sci U S A. 2008; 105(2): 751–6. PubMed Abstract | Publisher Full Text | Free Full Text\n\nVallières L, Campbell IL, Gage FH, et al.: Reduced hippocampal neurogenesis in adult transgenic mice with chronic astrocytic production of interleukin-6. J Neurosci. 2002; 22(2): 486–92. PubMed Abstract\n\nBaron R, Nemirovsky A, Harpaz I, et al.: IFN-gamma enhances neurogenesis in wild-type mice and in a mouse model of Alzheimer’s disease. 
FASEB J. 2008; 22(8): 2843–52. PubMed Abstract | Publisher Full Text\n\nWong G, Goldshmit Y, Turnley AM: Interferon-gamma but not TNF alpha promotes neuronal differentiation and neurite outgrowth of murine adult neural stem cells. Exp Neurol. 2004; 187(1): 171–7. PubMed Abstract | Publisher Full Text\n\nIosif RE, Ekdahl CT, Ahlenius H, et al.: Tumor necrosis factor receptor 1 is a negative regulator of progenitor proliferation in adult hippocampal neurogenesis. J Neurosci. 2006; 26(38): 9703–12. PubMed Abstract | Publisher Full Text\n\nChen Z, Palmer TD: Differential roles of TNFR1 and TNFR2 signaling in adult hippocampal neurogenesis. Brain Behav Immun. 2013; 30: 45–53. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBattista D, Ferrari CC, Gage FH, et al.: Neurogenic niche modulation by activated microglia: transforming growth factor beta increases neurogenesis in the adult dentate gyrus. Eur J Neurosci. 2006; 23(1): 83–93. PubMed Abstract | Publisher Full Text\n\nSeguin JA, Brennan J, Mangano E, et al.: Proinflammatory cytokines differentially influence adult hippocampal cell proliferation depending upon the route and chronicity of administration. Neuropsychiatr Dis Treat. 2009; 5: 5–14. PubMed Abstract | Publisher Full Text | Free Full Text\n\nYoung KM, Bartlett PF, Coulson EJ: Neural progenitor number Is regulated by nuclear factor-kappaB p65 and p50 subunit-dependent proliferation rather than cell survival. J Neurosci Res. 2006; 83(1): 39–49. PubMed Abstract | Publisher Full Text\n\nZhang Y, Liu J, Yao S, et al.: Nuclear factor kappa B signaling initiates early differentiation of neural stem cells. Stem Cells. 2012; 30(3): 510–24. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNiebling J, Rünker A, Schallenberg S, et al.: Dataset 1. Hippocampal neurogenesis data in T helper 17 cell-deficient and control mice. F1000Research. 2014. Data Source"
}
|
[
{
"id": "5569",
"date": "30 Jul 2014",
"name": "Stefano Pluchino",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nStudy design, methods are adequate. Results are generally sound. A few claims are only in part justified by the proposed approach and controls. The importance or relevance of the myelin-specificity/reactivity in the observed promotion of endogenous hippocampal neurogenesis by adoptively transferred CD4+CD62LhighCD25-FoxP3GFP- Th17 cells is not fully addressed in this paper. It is in fact quite clear that adoptive transfer of Th17 cells increases BrdU incorporation (and hence proliferation) at the level of the DG of the hippocampus. What is instead much less clear, and only speculated, is that higher proliferation leads to more neurons, and that reactivity to MOG35-55 (via the expression of the Vα3.2 chain of the transgenic 2D2 TCR) is indispensable for promoting adult hippocampal neurogenesis. Main commentsNewly generated neurons should be quantified. Experiments in Figure 2 and 3 should include also mice adoptively transferred with non-myelin reactive Th17 cells. This is a key aspect of this paper, especially in the perspective of proposing a mechanism of brain homeostasis that is promoted by a specific subset of immune cells acting in periphery (e.g. peripheral lymphoid tissues; as anticipated in the abstract). As such, also experiments in Figure 2B should include the very same positive controls as those in Figure 2A. 
Ideally, one would like to see at least what is the contribution of the Th17 cell adoptive transfer to the cytokine milieu of the host, both at the level of the DG and of peripheral lymphoid tissues.\n\nMinor comments\n\nWondering whether it would read better if the characterization of the pattern of expression for cytokine receptors were shown prior to the transfer experiments. Nice to briefly show or comment on the clinical phenotype of that individual mouse showing remarkable accumulation of adoptively transferred Th17 cells in the brain (did it develop EAE-like signs?).",
"responses": [
{
"c_id": "2618",
"date": "07 Apr 2017",
"name": "Gerd Kempermann",
"role": "Author Response",
"response": "1. Newly generated neurons should be quantified. It is correct that we quantified neural precursor cell proliferation as one component of adult hippocampal neurogenesis. Indeed, the term neurogenesis is inclusive of the entire process from cell division to full maturity of new-born neurons after about 4 weeks (Kempermann et al., 1997). Beside proliferation, quiescence, self-renewal vs. cell fate decision (neuron vs. astrocyte, microglia; mostly few days after birth) and survival of new-born cells (30 – 70% die mainly between their first and second week) are further determinants of neurogenesis. Neurogenesis can be regarded as affected once any of the mentioned sub-processes is altered. Changes in individual sub-processes will inevitably affect the outcome, i.e. the number of new mature neurons (“net neurogenesis”), unless additional, counteracting changes occur in additional sub-processes. We, as many authors, use the term neurogenesis in this “inclusive” sense, for example in the title. Of note, assessing new-born cells or neurons at a later stage of neurogenesis allows a better estimate of the resulting number of newly generated mature neurons (most precise at 4 weeks), but give a “collapsed” view of preceding sub-processes, i.e. assessed differences can not be assigned to a specific underlying cellular mechanism. Obviously, the reverse argumentation has to be considered as well. Others and we have previously shown that lack of (CD4+) T cells (depletion or deficiency) negatively impacts neurogenesis by lowering baseline proliferation and result in fewer mature neurons. Changes in further sub-processes, such as cell fate decision/differentiation (7d after division) or survival (up to 4 weeks) have not consistently been reported5,6,7. 
In line with this, reconstitution of T-cell deficiency with (preparations that contain) CD4+ T-cells led to increased proliferation of neural precursor cells (BrdU+) and/or neuronally committed cells (BrdU+ after 7d and Dcx+)5,6,7. One aim of our study was to follow up on these studies and further define whether CD4+ T-cells with a specific status of activation or differentiation are responsible for the reported pro-proliferative effect. Therefore, we concentrated on the investigation of proliferation as the most validated affected measure of neurogenesis. We state this now in the manuscript (Background, last sentence). We agree that this does not allow a direct conclusion on the number of newly generated neurons and we did not intend to give this impression.\n\n2. Experiments in Figure 2 and 3 should include also mice adoptively transferred with non-myelin reactive Th17 cells. This is a key aspect of this paper, especially in the perspective of proposing a mechanism of brain homeostasis that is promoted by a specific subset of immune cells acting in periphery (e.g. peripheral lymphoid tissues; as anticipated in the abstract). As such, also experiments in Figure 2B should include the very same positive controls as those in Figure 2A.\n\nMyelin-specific T cells seem to possess a pronounced ability to promote the proliferation of neural precursor cells in the adult hippocampus5. The authors of this initial report therefore suspected an underlying mechanism based on the infiltration of these self-reactive T cells into the CNS in order to directly interact with cellular components of the neurogenic niche of the dentate gyrus. In our manuscript, we report that adoptively transferred myelin-specific Th17 cells promote hippocampal cell proliferation in TCRα-/- mice evidently without entering the brain in significant numbers. Therefore, we concluded that peripheral effector functions must be responsible for this Th17 cell-mediated effect on hippocampal precursor cells. 
We do not, however, claim that specificity for a CNS-antigen would be a prerequisite for this effect. In fact, we do believe that Th17 cell populations with a polyclonal TCR repertoire would potentially have a similar effect on the proliferative activity of hippocampal precursor cells. We decided to use generated Th17 cells with a transgenic TCR recognizing a CNS-antigen, since we wanted to point out that under physiological conditions T cell subpopulations can affect the neurogenic region of the adult hippocampus and, despite their specificity, do so without infiltrating the CNS. The crucial point may simply be that the T cells become activated in the peripheral lymphoid tissues upon encounter of their cognate antigen. This should be the case for both self-reactive and polyreactive T cell populations.\n\n3. Ideally, one would like to see at least what is the contribution of the Th17 cell adoptive transfer to the cytokine milieu of the host, both at the level of the DG as well as of peripheral lymphoid tissues.\n\nAnalyzing the cytokine milieu after adoptive T cell transfer could indeed help decipher underlying clues of a Th17 cell-mediated regulatory mechanism of hippocampal cell proliferation. By additionally using neutralizing antibodies against certain Th17 cell-derived cytokines, one could potentially obtain even more precise results in this regard. However, the transfer experiments turned out to involve high costs and considerable expenditure of time and effort. An assay using our cultured precursor cells might be a more cost-efficient alternative in the first instance (see below). We consider such experiments beyond the scope of the present manuscript.\n\nMinor comments\n\n1. Wondering whether it would read better if the characterization of the pattern of expression for cytokine receptors is showed prior to the transfer experiments. 
Our intention was to use our main finding of an increased hippocampal cell proliferation after adoptive Th17 cell transfer as a starting point and subsequently descend to the molecular level in order to find first clues for potential underlying mechanisms and provide a basis for further experimentation.\n\n2. Nice to briefly show or comment on the clinical phenotype of that individual mouse showing remarkable accumulation of adoptively transferred Th17 cells in the brain (did it develop EAE-like signs?).\n\nThe affected animal did not develop clinical signs of EAE during the observation period. The cellular infiltrations found in the brain of this animal, while thwarting a quantification of hippocampal cell proliferation, were found to be limited rather than widespread, which is probably why they did not result in a clinical phenotype."
}
]
},
{
"id": "5572",
"date": "05 Aug 2014",
"name": "Carlos P. Fitzsimons",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nIn this paper, Niebling et al., study the regulation of hippocampal neurogenesis by interleukin 17-producing T helper (Th17) cells. Using a T-cell deficient mouse model with impaired base-line levels of hippocampal neurogenesis, they found that the neurogenesis impairment present in these animals can be restored after transfer of homogenous populations of Th17 cells, in a way that was independent of direct interaction between the transferred Th17 cells and cells of the dentate gyrus. From these results, they conclude that Th17 cells may promote hippocampal neurogenesis via secreted cytokines.The manuscript is well written and easy to read, the title is descriptive of the content and the abstract provides a clear summary of the article.The experiments are clearly designed and information on how the data has been analyzed is provided.The main conclusions made by the authors are supported by the data presented in the manuscript’s figures. Main Comments:Some details in the description of the methods are unclear. Specifically, the age of the animals used in the experiments is not clearly stated in the corresponding methods section. Although eight weeks old is mentioned in legend to figure 1 this should be clarified in the methods section because age is a strong determinant of base-line hippocampal neurogenesis. Were animals transcardially perfused until complete elimination of blood cells before fixation? 
The authors should clarify this point to facilitate the reader’s interpretation of the data presented in figure 3, discarding blood cell contamination in dentate gyrus microdissected tissue samples, which could be an alternative interpretation for the differences in cytokine receptor expression in tissue vs. cultured precursor cells (last paragraph of the results section). The authors suggest that one of the limitations of their study is the lack of direct characterization of the specific factor/cytokine(s) responsible for the cell-cell interaction-independent promotion of hippocampal neurogenesis they observe. However, it could have been very interesting to see some experimental effort made to support the hypothesis that the effects of Th17 cells are mediated by specific cytokines produced by these cells. This could have been done in vitro, using their cultured precursor cells, providing a stronger first proof-of-principle for the important concept they propose. In the adoptive T cell transfer experiments, control mice were injected with PBS only. Although this experimental approach provides a suitable negative control, one wonders whether the injection of other non-IL17 producing T cells (e.g. naive CD4+ differentiated into Th1 or Th2 cells) could have been a more sensible negative control for these experiments, given the specific immune functions of these different T cell sets, which are highlighted by the authors in the last sentence of their discussion. Perhaps the authors could comment on the limitations of their experimental approach in this respect? The discussion could benefit from a short paragraph including a more detailed discussion of possible pathological implications of the observations described in the paper. For example, other authors have recently found that hippocampal neurogenesis is enhanced in an arthritis model in rat (Leuchtweis et al., 2014). 
Perhaps another interesting point of discussion could be the possible implications for depression, a disease commonly linked to alterations in hippocampal neurogenesis and possibly linked to immune function (Eyre and Baune, 2012). The data presented in dataset 1 seems compatible with the figures presented with the article. However, the data could be presented in a format more easily understandable for the reader. For example, Dataset 1a is labelled “C57BL/6,TCR?+/?,TCR??/?”, while it should presumably be “C57BL/6, TCRα+/-, TCRα−/−”, and the data in rows 8 and 9 seems to be in an odd format compared to the other data rows.\n\nMinor comments:\n\nIn \"BrdU administration and immunohistochemistry\", “scull” should be “skull”. Abbreviations should be defined the first time they are used in the text.",
"responses": [
{
"c_id": "2619",
"date": "07 Apr 2017",
"name": "Gerd Kempermann",
"role": "Author Response",
"response": "1.Some details in the description of the methods are unclear. Specifically, the age of the animals used in the experiments is not clearly stated in the corresponding methods section. Although eight weeks old is mentioned in legend to figure 1 this should be clarified in the methods section because age is a strong determinant of base-line hippocampal neurogenesis.Were animals transcardially perfused until complete elimination of blood cells before fixation? The authors should clarify this point to facilitate the reader’s interpretation of the data presented in figure 3, discarding blood cell contamination in dentate gyrus microdissected tissue samples, which could be an alternative interpretation for the differences in cytokine receptor expression in tissue vs. cultured precursor cells (last paragraph of the results section). Microdissection of DGs was done without prior perfusion of NestinGFP mice (according to Ref.36). Even though the dissected tissue was thoroughly washed before RNA extraction, we cannot exclude the possibility that residues of blood cells might have contributed to our quantitative RT-PCR results in whole DG samples. Still, perfusion before microdissection cannot be regarded as a guaranteed way to exclude blood cell contamination, but rather as another measure to reduce its likelihood. Apart from that, we believe that the differences in cytokine receptor expression in tissue vs. cultured precursor cells can be very well explained by the absence of virtually all bystander cells in samples of neural precursor cell (NPC) cultures. As mentioned in the manuscript, neighbouring cells together with systemic factors and components of the extracellular matrix comprise a neurogenic niche, which provides a supportive microenvironment for the activity of the residing precursor cells whilst at the same time it regulates their expansion, differentiation and migration (Li & Xie, 2005). 
In this regard, several studies particularly emphasized the role of microglia as important mediators of immune-based regulatory mechanisms of adult hippocampal neurogenesis3,4,5 (also: Cacci et al, 2005; Butovsky et al., 2006). The expression of several cytokine receptor molecules assessed in our study has been repeatedly demonstrated previously for cells that contribute to the neurogenic niche such as microglia, astrocytes, brain endothelial cells and mature neurons (Sawada et al., 1993; Rock et al., 2004; Mathieu et al., 2010; Szelényi, 2001). Taken together, we consider the differences in cytokine receptor expression between cultured NPCs and microdissected DG tissue to be mainly caused by the heterogeneous cellular composition of the DG and not by blood cell contamination, which rather has to be regarded as a potential confounding factor that could only be partially prevented by prior perfusion of the experimental animals.\n\n2. The authors suggest that one of the limitations of their study is the lack of direct characterization of the specific factor/cytokine(s) responsible for the cell-cell interaction-independent promotion of hippocampal neurogenesis they observe. However, it could have been very interesting to see some experimental effort made to support the hypothesis that the effects of Th17 cells are mediated by specific cytokines produced by these cells. This could have been done in vitro, using their cultured precursor cells, providing a stronger first proof-of-principle for the important concept they propose.\n\nOne possible next approach is indeed to test whether cultured NPCs are in principle responsive to the candidate cytokines (TNF-α, IFN-γ, IL-10, IL-17, IL-6). However, a possible NPC response to particular cytokines in culture will have limited value as supporting evidence of an influence of Th17 cell-secreted cytokines on hippocampal NPCs in vivo. 
In fact, in the brain, activated microglia are the main source of several cytokines, including TNFa, IL-6, IL-10 and IFNg. Activated microglia are mostly known as suppressors, but might also act as enhancers of adult neurogenesis, depending on their phenotype or mode of activation (i.e. inflammation-associated activation3,4,52 versus Th cell-associated IL-4 or IFNg activation5 (also: Butovsky et al., 2006)). In addition, some cytokines such as TNFa, IFNg, and IL-6 are expressed by astrocytes49 or adult hippocampal NPCs themselves (Klassen et al., 2003). Finally, the impact of Th17 cell-derived cytokines on adult neurogenesis might be indirect, i.e. mediated by cells of the neurogenic niche, such as astrocytes, microglia or endothelial cells. In summary, Th17 influences on hippocampal neurogenesis are presumably complex and highly context-dependent, and are thus ideally addressed in vivo. 3. In the adoptive T cell transfer experiments, control mice were injected with PBS only. Although this experimental approach provides a suitable negative control, one wonders whether the injection of other non-IL17 producing T cells (e.g. naive CD4+ differentiated into Th1 or Th2 cells) could have been a more sensible negative control for these experiments, given the specific immune functions of these different T cell sets, which are highlighted by the authors in the last sentence of their discussion. Perhaps the authors could comment on the limitations of their experimental approach in this respect? Our observation of increased hippocampal precursor cell proliferation via Th17 cell-mediated indirect mechanisms provides the first evidence of a distinct CD4+ T helper subset affecting adult hippocampal neurogenesis. Certainly, more work will have to be done in order to present a cohesive picture of the particular contribution of different Th cell subtypes to the pro-proliferative effect on adult NPCs.
In fact, our original objective was to expand our adoptive transfer model in TCRα-/- mice towards Th1 and Th2 lineages as well. Indeed, we consider the Th1 subtype to be an obvious and promising candidate. The receptor for IFN-γ, the signature cytokine of Th1 cells, is also expressed on adult neural precursor cells, and several studies have demonstrated that IFN-γ can influence their proliferation both in vitro and in vivo50,51 (also: Butovsky et al., 2006). Moreover, Th1 cells together with Th17 cells are key factors in the pathogenesis of EAE, during which modulations of adult hippocampal neurogenesis have been observed9. Unfortunately, the transfer experiments involved greater effort and expense than initially assumed. In particular, the differentiation of naïve T cells into effector cells with a ROR-γt+ IL-17+ phenotype could, owing to the small yields, only be achieved by including high numbers of mice. Therefore, we could not follow up on our initial goals within the scope of the present paper. 4. The discussion could benefit from a short paragraph including a more detailed discussion of possible pathological implications of the observations described in the paper. For example, other authors have recently found that hippocampal neurogenesis is enhanced in a rat arthritis model (Leuchtweis et al., 2014). Perhaps another interesting point of discussion could be the possible implications for depression, a disease commonly linked to alterations in hippocampal neurogenesis and possibly linked to immune function (Eyre and Baune, 2012). In a previous study, our group had already established a relationship between adaptive peripheral immune responses and enhanced neurogenesis in the adult hippocampus (Wolf et al., 2009b). Similar to the work by Leuchtweis and colleagues, a T cell-dominated adaptive immune response was generated either by intraperitoneal injection of staphylococcal enterotoxin B or the induction of adjuvant-induced rheumatoid arthritis in C57BL/6 mice.
Under both conditions a transient increase in hippocampal precursor cell proliferation and neurogenesis could be observed, whereas the intraperitoneal administration of lipopolysaccharide (LPS), an extremely potent activator of innate immune responses, had the opposite effect. Reduced levels of hippocampal precursor cell proliferation in different animal models of adaptive immunodeficiency, and its restoration upon adoptive T cell transfer, observed in previous studies5,6,7 and in the present paper, underline once again the potential role of T cell-mediated immune responses in brain homeostasis and neurogenesis. Several immune mediators have been proposed in the pathogenesis of depression and the associated changes in adult hippocampal neurogenesis. Among them, the proinflammatory cytokine IL-1β, which is known to be a key molecular mediator of innate immune responses, has repeatedly been shown to be partly responsible for the anti-neurogenic effects and depression-like behaviour in mice exposed to acute or chronic stress48 (also: Goshen et al., 2008). Given the opposite effects of innate versus adaptive immune responses on adult hippocampal neurogenesis, these observations could eventually form the basis for new therapeutic approaches. By steering peripheral immune responses towards an adaptive, more T cell-dominated phenotype, it might be possible to modulate the cytokine milieu in the adult brain in a way that supports neurogenesis and reduces anhedonic behaviour in patients suffering from depression. We believe, however, that such forward-looking statements would potentially oversimplify the underlying mechanisms of CD4+ T cell-mediated neuro-immunoregulation, which still remain largely unclear. In this respect, our manuscript has to be considered only as a starting point for future research. We would therefore prefer to keep the previous paragraphs out of the manuscript in order not to overstretch the current data. 5.
The data presented in Dataset 1 seem compatible with the figures presented with the article. However, the data could be presented in a format more easily understandable for the reader, i.e. Dataset 1a is labelled “C57BL/6,TCR?+/?,TCR??/?”, while it should presumably be “C57BL/6, TCRα+/-, TCRα−/−”, and the data in rows 8 and 9 seem to be in an odd format compared to the other data rows? It appears that there have been problems with the formatting. The datasets will therefore be converted into PDF for easier accessibility. Minor comments:
• In\"BrdU administration and immunohistochemistry\" “scull” should be “skull”.• Abbreviations should be defined the first time they are used in the text. These points will be addressed."
}
]
},
{
"id": "5570",
"date": "26 Aug 2014",
"name": "Francis G. Szele",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe link between the immune system and neurogenesis has become increasingly interesting with some papers showing positive and others showing negative effects in health and disease. It is likely that different subsets of immune cells, and different states of activation are in part responsible for these discrepancies. This short paper begins to address the need for detailed examination of these immune cell subsets by showing that systemic adoptive transfer of myelin reactive T17 cells into a mouse model with T cell deficiency rescues proliferation in the hippocampal dentate gyrus. This appears to occur without the need for direct cell contact since the majority of mice did not have significant numbers of transplanted T cells in the hippocampus.The decrease in TCRa-/- DG proliferation, though statistically significant, is rather small and one wonders whether these differences would translate into functional differences. Is there also increased neurogenesis? The paper would benefit from additional experiments showing neurogenic changes are paralleled by behavioral changes. Could it be that T17 cells migrated into and then left the DG within the first week after adoptive transfer? What are the kinetics of T cell immigration and emigration into the DG? Did you exclude the animal (Fig. 2E2) with significant T cell infiltration from your analysis? The authors present a hypothesis that T cell derived cytokines affect DG neurogenesis. 
It would be interesting to examine cytokine receptor levels in TCRa-/- mice. Minor Points: Please spell out TCR the first time it is used. “Scull” should be “skull”. Fig. 3 title: “mRNA of cytokine receptor chains in the dentate gyrus and precursor cells”; I recommend substituting “chain” with “subunit”.",
"responses": [
{
"c_id": "2620",
"date": "07 Apr 2017",
"name": "Gerd Kempermann",
"role": "Author Response",
"response": "1. The decrease in TCRa-/- DG proliferation, though statistically significant, is rather small and one wonders whether these differences would translate into functional differences. Is there also increased neurogenesis? The paper would benefit from additional experiments showing neurogenic changes are paralleled by behavioral changes. Impaired precursor cell proliferation has been observed in different animal models of adaptive immune deficiency. While this effect appeared to be rather discrete using TCRα-/-mice in the present study and the work by Huang and colleagues7, more pronounced results could be obtained in severe combined immune deficiency (SCID) and RAG1-/-mice, respectively5,6. Since the relative decrease in proliferation in RAG1-/-and TCRα-/-mice was fairly comparable using Ki67 as a marker of proliferation7, it is possible that the different methodologies in addition to the genetic background of the animals caused the discrepancies between the mentioned studies. The question on whether our observed increased proliferation after adoptive transfer results in more new-born neurons together with the rational to restrict our analysis to proliferation as one component of adult neurogenesis has been discussed above (our response to the 1. main comment of the first Reviewer, S. Pluchino). In brief, our aim was to follow up on previous studies showing increased proliferation and, likely as a consequence, more neurons after CD4+ T-cell reconstitution5,6,7 and further define the relevant T-cell subtypes. The functional relevance of T cells as a positive regulation factor of hippocampal neurogenesis is still a matter of debate. Due to resource limitations, we were not able to extend our observations on hippocampus-dependent learning processes, which can be objectified in the Morris water maze (MWM). 
In this regard, we can only refer to previous work done by our group, in which CD4-depleted mice showed impaired performance, at least in the reversal learning phase of the MWM6. 2. Could it be that T17 cells migrated into and then left the DG within the first week after adoptive transfer? What are the kinetics of T cell immigration and emigration into the DG? Clearly we cannot exclude a transient infiltration of Th17 cells into the CNS during the two weeks between adoptive transfer and analysis. The absence of clinical symptoms during the observation period, however, argues against a neuroinflammatory process. As already stated in the manuscript, it is per se unlikely that relevant numbers of immune cells infiltrate the brain parenchyma across an intact BBB under physiological conditions (Engelhardt & Ransohoff, 2005; Prendergast & Anderton, 2009). In murine EAE the first signs of clinical disease can be observed within a period of four to >100 days post-induction, depending on the genetic background of the host animals and the mode of disease induction (Miller et al., 2010). Demyelination and inflammation in the CNS thereby largely reflect the clinical expression of the disease (Pachner, 2011). Using intravital imaging in a rat model of passive EAE, an infiltration of transferred T cells into the brain parenchyma could be seen as early as day three after transfer (Bartholomäus et al., 2009). By the adoptive transfer of polyclonal as well as non-activated myelin-specific T cell populations into immunodeficient host animals and subsequent CD3 immunohistochemistry, our group could, however, exclude an early immigration of T cells into the brain within the first four days following injection, at least under the chosen experimental conditions (Wolf et al., 2009b; unpublished data). 3. Did you exclude the animal (Fig. 2E2) with significant T cell infiltration from your analysis?
Although the cell infiltrations found in the CNS of this specific mouse were limited to only a few brain regions, they affected some parts of the hippocampus as well. As can be seen in Figure 2G3, this would have certainly prevented a correct and unbiased quantification of cellular proliferation, which is why the animal concerned was excluded from our analysis. 4. The authors present a hypothesis that T cell derived cytokines affect DG neurogenesis. It would be interesting to examine cytokine receptor levels in TCRa-/- mice. Even though several other cell types within the brain, such as astrocytes and microglia, were found to produce a plethora of different effector cytokines (Sawada et al., 1995; Gonzalez-Perez et al., 2012), there seems to be no doubt that an absence of T cells, as a major source of cytokines in the organism, could potentially lead to differential expression of cytokine receptors in the putative target cells of the neurogenic niche. Quantitative RT-PCR could be employed to verify differences in the mRNA expression levels of cytokine receptor genes between TCRα-/- and immunocompetent control mice. Still, although this kind of experimental approach might eventually contribute to a better understanding of the general picture of a T cell-mediated regulation of adult hippocampal neurogenesis, it would probably only allow very imprecise interpretation of the underlying molecular signalling, which is why we do not consider it to be a prerequisite for the conclusions drawn in the current manuscript. Minor Points: • Please spell out TCR the first time it is used. • “Scull” should be “skull”. • Fig. 3 title “mRNA of cytokine receptor chains in the dentate gyrus and precursor cells”; I recommend substituting “chain” with “subunit”. These points have been addressed."
}
]
}
] | 1
|
https://f1000research.com/articles/3-169
|
https://f1000research.com/articles/6-448/v1
|
07 Apr 17
|
{
"type": "Review",
"title": "Last rolls of the yoyo: Assessing the human canonical protein count",
"authors": [
"Christopher Southan"
],
"abstract": "In 2004, when the protein estimate from the finished human genome was only 24,000, the surprise was compounded as reviewed estimates fell to 19,000 by 2014. However, variability in the total canonical protein counts (i.e. excluding alternative splice forms) of open reading frames (ORFs) in different annotation portals persists. This work assesses these differences and possible causes. A 16-year analysis of Ensembl and UniProtKB/Swiss-Prot shows convergence to a protein number of ~20,000. The former had shown some yo-yoing, but both have now plateaued. Nine major annotation portals, reviewed at the beginning of 2017, gave a spread of counts from 21,819 down to 18,891. The 4-way cross-reference concordance (within UniProt) between Ensembl, Swiss-Prot, Entrez Gene and the Human Gene Nomenclature Committee (HGNC) drops to 18,690, indicating methodological differences in protein definitions and experimental existence support between sources. The Swiss-Prot and neXtProt evidence criteria include mass spectrometry peptide verification and also cross-references for antibody detection from the Human Protein Atlas. Notwithstanding, hundreds of Swiss-Prot entries are classified as non-coding biotypes by HGNC. The only inference that protein numbers might still rise comes from numerous reports of small ORF (smORF) discovery. However, while there have been recent cases of protein verifications from previous miss-annotation of non-coding RNA, very few have passed the Swiss-Prot curation and genome annotation thresholds. The post-genomic era has seen both advances in data generation and improvements in the human reference assembly. Notwithstanding, current numbers, while persistently discordant, show that the earlier yo-yoing has largely ceased. 
Given the importance to biology and biomedicine of defining the canonical human proteome, the task will need more collaborative inter-source curation combined with broader and deeper experimental confirmation in vivo and in vitro of proteins predicted in silico. The eventual closure could well be below ~19,000.",
"keywords": [
"proteins",
"genes",
"human genome",
"proteomics",
"mass spectrometry"
],
"content": "Introduction\n\nWhile hypothesis-neutral scientific endeavours are sometimes referred to in derogatory terms as “stamp collecting”, the collation of molecular part lists (e.g. genes, transcripts, proteins and metabolites) remains a crucially important exercise, not only for many aspects of basic biology, but also for application to the biomedical sciences and drug discovery. Paradoxically, however, despite technical advances in analytical experimentation that should be making them easier to verify and quantify, definitive (or “closed”) counts of even just these four entities for key species remain largely refractive. This is particularly so for proteins, as the most demonstrably biologically functional of these entity sets, even though they were the first to emerge historically by many decades1. In 2001, an analysis of the first public version of the draft human genome included an estimate of ∼24,500 protein-coding genes2. The general opinion at that time was that this was lower than expected and would thus probably rise above 30,000. Notwithstanding, when the more complete first reference assembly (92% euchromatic coverage at 99.99% accuracy) was released in May 2004, the estimate was revised slightly downwards to ∼24,0003. In the same year a detailed review appeared supporting a lower bound of ∼25,0004. This latter publication alluded to a “yoyo” effect that persisted in subsequent reviews by falling to ∼20,500 in 20075, rising to 22,333 in 20106, but then dropping to ∼19,000 by 20147. Those accepting the latter estimate may have felt a touch of chagrin as the count thereby fell to ∼ 1000 below the model worm Caenorhabditis elegans. 
While we humans were still, reassuringly perhaps, ∼ 7000 proteins ahead of the model fly Drosophila melanogaster, we remain ∼20,000 behind the lowly Paramecium (see Table 1).\n\nEnsembl numbers for the yeast, worm, fly and a protozoan are included for comparison (abbreviations are defined in the text).\n\nThis article will compare and discuss the current numbers (as of 1Q 2017) from major sources. The evidence types and theory behind protein counting have been described in many publications and documentation from the individual database portals, but the reviews referenced above provide complementary background. It needs to be stated that the numbers used herein refer to what can be termed the “canonical” human proteome. This has its origins in the Swiss-Prot approach to protein annotation, whereby protein sequence differences arising from the same genomic locus either by alternative splicing or alternative initiations (or permutations of both) and/or genetic variants are all cross-referenced to a single, maximal-length protein entry8. Importantly, while this was originally introduced as the curatorial strategy of choosing the longest mRNA for an entry, it actually turns out to have post-genomic data support, not only in the form of evidence that coding loci express a single main protein (i.e. that most predicted alternative transcripts may not be translated), but also that in most cases this is the max-exon form (i.e. the curatorial choice actually seems to be the biological “default”)9.\n\n\nHistorical growth\n\nThe set of open reading frames (ORFs) constituting the canonical human proteome can be historically followed in Ensembl and Swiss-Prot (as the manually reviewed and expert-annotated sub-set of UniProtKB). Both of these are very different pipelines, but are partially coupled in the sense that the latter is one of the inputs to the automated ORF-building algorithms of the former.
We can assess the progress of Ensembl first, since it has been compiling an approximation to the human proteome based on genomic predictions since 200110. A 2004 review assessed historical figures from the first three years, over which the total shifted only marginally from 24,037 to 24,0464. While a maximum of 29,181 was reached in January 2002, this was an artefact associated with clone orientation changes caused by a switch in the assembly source, and this number had dropped back to 24,179 by the next release. Despite some year gaps (not covered by the current archived data sets), the older figures can be plotted with the most recent ones to give a 15-year span (Figure 1).\n\nThe latter are only those from the current archive that have protein rebuilds rather than maintenance/patch releases with nearly identical numbers.\n\nIt is important to note that, for technical reasons, the longitudinal Ensembl protein numbers are not strictly comparable, since the pipeline model, its parameterisations and data feeds have, as one might expect, evolved considerably over the years (e.g. the assembly source change mentioned above). This has included incremental improvements of various kinds (e.g. in the quality of the reference genome), but some changes have altered the exact definitions of the headline protein numbers. For example, the pseudogene figures given in the early 2001–3 releases needed to be subtracted from the totals. Those earlier numbers also specified a proportion of novel genes (defined as not having an exact match to RefSeq or UniProt entries at build time), but these tailed off from a maximum of 12,398 in November 2001 to only 46 by 2009 (release 54).\n\nThe most recent releases have other changes that complicate protein counts. One of these is the inclusion of “alternative sequence”, referring to genomic sections that differ from the primary contiguous assembly.
The current release of Ensembl (87.38) specifies 2,541 proteins in this category, but it is not clear which of these are just variants of those derived from the primary assembly. Another somewhat enigmatic aspect is the appearance in the protein count of so-called “read-through” genes. These are defined as transcripts connecting two independent loci on the same strand. These debuted at 463 in release 74, via manual annotation, climbing slowly to the current total of 526. While they are also included in the NCBI genome annotation, these have not been included in the Figure 1 counts because, if they exist at all as translated chimeric proteins, they are non-canonical by definition. Despite these shifts in exactly what the protein numbers represent, we can draw three principal conclusions from Figure 1. These are: a) yo-yoing has at least subsided, if not ceased; b) the number has plateaued at just below 20,000; and c) the pipeline has ceased to spawn significant numbers of novel proteins (i.e. they are now predominantly “seen before”).\n\nOne of the core operations for Ensembl is resolving transcripts and their mRNA coding sequences (CDSs) against ORFs predicted ab initio. Swiss-Prot, on the other hand, has historically been doing this for mRNA-to-protein independently of genomic coordinates (although it increasingly now maps the two together where possible). Over the years, the criteria and manual triage for defining canonical ORFs have been consistently applied in Swiss-Prot. This means the growth rate can be straightforwardly recorded by slicing Swiss-Prot human proteins by “create date” (Figure 2). The pattern is interpretable as a concerted effort towards provisional closure of the proteome at 19,658 by 2008.
Subsequent increases were essentially incremental, climbing slowly to 20,168 by 2017.\n\nThe blue columns include the additional selection for existence evidence at the protein or transcript levels (note the date is just for the entry into Swiss-Prot, not the first appearance of the sequence in TrEMBL, which can be many years earlier).\n\nWhile issues around evidence types will be addressed later, a simple filter can be applied to count just those proteins with either transcript and/or other forms of experimental support for their existence. The result, in the Figure 2 plot, shows this difference to be fairly constant (i.e. in the order of ∼1,400 sequences remain experimentally unsupported). There are three other salient features. The first is that the total has only increased by a modest 516 since 2009, whereas Ensembl shrunk by 1,455 over the same period. They have thus both converged towards ∼20,000 (it is not clear if the two sets are congruent for the same ORFs, but this question will be addressed later). However, there were already indications of approximate concordance as early as 2001, where adding the Ensembl novels to the Swiss-Prot knowns reached 18,191. The inference is that the number of novel proteins confirmed since 2001 is less than 2000. Note also that many are TrEMBL-to-Swiss-Prot promotions (i.e. with data already surfaced) rather than de novo deposited protein sequences. By comparing 2009 with the subsequent seven years we can also infer that Swiss-Prot has not purged significant numbers of accessions (i.e. they have revised sequences but generally not removed them).\n\n\nCurrent counts\n\nWe can move on from tracking historical numbers to taking a contemporary snapshot of major sources (including the two already described) that are well established and regularly declare revised protein counts (Table 1).
There are many aspects that could be expanded on from this set, but the feature that immediately stands out is the difference of nearly 3000 between highest and lowest (i.e. 13%). The highest figure comes from what can be considered a meta-source, GeneCards, which merges different pipeline outputs, so this could be expected to be an upper bound11. The protein-coding set from the NCBI genome annotation pipeline ranks second, but there are some caveats regarding comparability with the other sources12. One of these is the inclusion of 1235 “LOC” entries with low homology support. Although 107 of these do have Ensembl gene IDs, none have been assigned Human Gene Nomenclature Committee (HGNC) symbols. Removing LOCs from the NCBI protein set would drop them down to seventh at 19,436.\n\nThe next two sources are related in that neXtProt takes the human Swiss-Prot set as a starting point for evidence expansion and interrogation enhancements. This is why these have (almost) the same count (the residual differences being due to synchronisation timings)13. The next three sources are also coupled in the sense that not only are GENCODE and Vega marked up in Ensembl, but there are plans to merge the three. However, they do show a small difference of 182, with the lowest being the Vega pipeline (as Havana manual curation). But even from Vega, there is a substantial drop of 735 to the stringently reviewed, approved protein-coding gene assignments from the HGNC. The lowest number in Table 1, coming in at just below 19,000, comes from the Consensus Coding Sequence (CCDS) project. These correspond to a core set of proteins annotated as having full-length transcripts that exactly match reference genome coordinates.\n\nSome sources have invested effort into mapping between each other’s identifiers. This can establish if the protein sequence in pipeline output A is the same as in pipeline B.
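In outline, such pairwise and multi-way comparisons reduce to set operations over each source’s identifier mapping. A minimal sketch (the four source labels follow the Figure 3 abbreviations; the accessions are invented toy data, not real mappings):

```python
# Sketch of multi-source concordance counting via set operations.
# Each source is modelled as the set of Swiss-Prot accessions it maps to;
# the toy accessions below are illustrative, not real cross-references.

def concordance(mappings):
    """Return (core, only): the accessions present in every source
    (the 1:1:1:1 concordant set) and, per source, the accessions
    that only that source covers (source-specific discordance)."""
    sources = list(mappings)
    core = set.intersection(*(mappings[s] for s in sources))
    only = {
        s: mappings[s] - set.union(*(mappings[t] for t in sources if t != s))
        for s in sources
    }
    return core, only

tables = {
    "EN": {"P1", "P2", "P3"},
    "SP": {"P1", "P2", "P3", "P4", "P5"},
    "HG": {"P1", "P2", "P4"},
    "GI": {"P1", "P3", "P4"},
}
core, only = concordance(tables)
print(sorted(core))        # four-way concordant accessions
print(sorted(only["SP"]))  # the "SP"-only segment
```

With real mapping tables loaded into these sets, `len(core)` would correspond to the four-way concordant count and each `only[...]` segment to a source-specific slice of the Venn diagram.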
However, the fidelity of such a mapping (and consequent cross-reference reciprocity) depends on differences in methods and stringencies. For example, for all intents and purposes the beta-secretase 1 entry (BACE1) is the same across all 9 pipelines. However, a different population variant was chosen on each side of the Atlantic. Therefore, the RefSeq and Gene ID sequence NP_036236 differs by one residue (481 Cys → Arg) from the Swiss-Prot and Ensembl sequence P56817. Note also that HGNC does not instantiate sequence entries in the way that the other pipelines do, but collates cross-references, so in this case HGNC 933/BACE1 points to both sequences. The process of cross-referencing between multiple annotation sources allows the generation of both intersects and differences. Crucially, in terms of protein counting, this gives us the possibility to discern where they are concordant or discordant and (on a good day) we may be able to identify causes for the latter.\n\n\nCross-reference counting\n\nAll nine sources in Table 1 provide some degree of cross-referencing between what should be the same protein in different sources (also referred to as cross-mapping). However, the choice was made here to exemplify just four identifiers: Swiss-Prot accession numbers, HGNC IDs (directly, or via the current gene symbols), Ensembl gene IDs and NCBI Entrez Gene IDs. These were chosen for their global prominence but also for their methodological complementarity. This derives from the fact that the first two are primarily manual expert annotation operations (but different), while the second two are essentially automated pipelines (but also different). Each of the four offers its own internal ways of querying cross-references, including BioMart installations14 or downloadable mapping tables for this to be done extrinsically.
However, because it has the largest number of selectable cross-references, as well as extended options for live-linked result displays and filtered downloading, the UniProt interface was used here. Intersects for the four sources can be seen in Figure 3.\n\nThe results are generated via cross-reference totals according to UniProt, not from the sources in situ. EN, Ensembl; SP, Swiss-Prot; HG, Human Gene Nomenclature Committee; GI, NCBI Entrez Gene.\n\nFigure 3 can be explained as follows: the queries executed gave the totals indicated in the segments. Note that some segments are empty because, by definition, the identifier mapping has been done “inside” Swiss-Prot (even if in some cases the external sources collaborated in generating the mappings). By comparing with Table 1, we can thus see that 2,923 NCBI proteins did not map at all (which includes most of the LOCs). Similarly, 834 Ensembl gene IDs also did not map. For HGNC, on the other hand, we see the cross-reference result is actually 905 higher than the distinct identifier count at source. One explanation could be a proportion of one-to-many relationships (e.g. Swiss-Prot entries with more than one HGNC ID). Some were identified, such as haemoglobin subunit alpha (P69905), which maps to HGNC HBA1 and HBA2.\n\nA notable result from Figure 3 is that a 1:1:1:1 mapping (i.e. four-way concordance) is achieved for only 18,690 proteins, lower than any of the totals from Table 1. Detailed analysis of all the segments cannot be presented here but some trends can be noted. Starting with the 187 in the “SP” segment (i.e. Swiss-Prot only, absent from the other three), the majority of the protein names are given as “putative” or “uncharacterised”. The 391 common elements in \"SP\", \"EN\" and \"HG\" (i.e.
missing in NCBI Gene) are clearly dominated by variable domains of immunoglobulin light chains and HLA class I histocompatibility antigen alpha chains, the polymorphic nature of which necessitates a level of manual annotation that may not have been compatible with the NCBI pipeline automation. The 179 common elements in \"SP\", \"HG\" and \"GI\" (i.e. missing in Ensembl) are enriched for “Uncharacterized protein” entries from the so-called Chromosome ORF predictions. The large set of 697 common elements in \"SP\" and \"HG\" (i.e. missing in NCBI Gene and Ensembl) are heterogeneous, but show enrichment for translated endogenous retrovirus transcripts and putative uncharacterized proteins encoded by LINC loci, and include 41 odour receptors. Notably, in these three sets, the HGNC cross-references classify them as not being within their own protein-coding set of 19,033, but rather as endogenous retroviruses, long non-coding RNAs and pseudogenes, respectively. This particular discordance (i.e. in UniProt but not a protein according to the HGNC) explains the 1:many cross-references mentioned at the start of this section. A duplicate check on the 960 indicated that only 152 could be ascribed to Swiss-Prot entries with multiple HGNCs. It can also be seen in Figure 3 that two of the Swiss-Prot intersects are empty. The explanation is that Ensembl and NCBI Gene have consolidated mapping reciprocity for proteins in Swiss-Prot (but, as mentioned above, many proteins from these two sources are still nominally “outside” Swiss-Prot).\n\nAs one of its powerful utilities, UniProt allows us to interrogate ∼90 cross-references. While not all of these are human-relevant, we can choose those that compare with Table 1. This has already been done for the four above but can be extended. For example, we can determine counts of 18,384 from CCDS and 19,940 for GeneCards.
Note that both the CCDS and GeneCards counts are below the in situ counts, by 510 and 1,871 respectively (GENCODE and Vega do not currently have cross-references inside Swiss-Prot). In some cases it may be possible to investigate counts reciprocally. For example, from the HGNC protein-coding download table we can establish that the 19,035 rows in the UniProt mapping column contained 18,997 distinct Swiss-Prot IDs. The same table includes 19,035 Vega Gene ID mappings that also collapse to 18,973 distinct entries. This confirms what was already implied above: a small proportion of multiple Swiss-Prot < > HGNC mappings also occurs for HGNC < > Vega. Cross-mapping counts can similarly be explored via other sources for comparison, depending on what query and/or download options are available. However, accumulating such results can quickly generate large Venn-type sets that generally end up being more confusing than illuminating.\n\nFollowing on from the above: since they are derived from structured data sources, cross-references give precise protein counts, but they also have associated equivocality (even though they will be used further in this report). For this reason, it is important to understand (e.g. via source documentation) technical differences in exactly how the mappings are determined. A second problem is that they may be circular (i.e. source B may collegially accept A < > B mappings from source A without independently verifying the reciprocity of B > A). The third problem is synchronisation, where release dates are at different intervals (and may not always include mapping refreshes). The fourth problem is the “churn” rate (appearance and/or disappearance of protein records) in genome resources. This is much lower than it was some years ago, but can still be an issue.\n\n\nExistence evidence\n\nIn the context of advancing towards proteomic “closure”, the imperative to verify the existence of an in silico database ORF as an in vivo protein translation product is obvious. 
By definition, the prerequisite mRNA transcription also needs experimental verification, especially if the ORF is only a genomic DNA prediction. However, on its own, active transcription is insufficient to prove translation, even with a predicted CDS, and it is established that pseudogenes can exhibit low-level transcription15. While it has inherited the categorisations from UniProt, the neXtprot database has a particular focus on the evidence code system and has set up collaborations to extend experimental support in general13. The outlines of this can be seen in Figure 4.\n\nThe categories (expanded on in the neXtprot documentation) are as follows:\n\n1. PE1: evidence that includes at least partial Edman sequencing, mass spectrometry (MS) with a threshold of 2 peptides of at least 9 amino acids, X-ray or NMR structure, protein-protein interaction data or detection by antibodies (Abs).\n\n2. PE2: not proven at protein level but has transcription data (e.g. cDNA, RT-PCR or Northern blots).\n\n3. PE3: probable existence based on orthologues with high similarity scores being found in related species.\n\n4. PE4: no evidence at the protein, transcript, or homology levels.\n\n5. PE5: may be a spurious in silico translation of a non-coding transcript.\n\nThere is now a community effort to promote more proteins to PE1 using both MS and Abs, so we can go into these in more detail. The former has a long history, with a proprietary project reporting MS identification of 14,223 human proteins as early as 200416. An analogous public effort described the verification of 11,115 Ensembl coding sequences, made available in the first data release of the PeptideAtlas (PA) in 200517. By 2017, the Human Proteome Organisation had become extensively engaged in MS initiatives, particularly in regard to the “missing proteins” (i.e. those still in PE2 to PE5) that remain refractory to tryptic peptide verification at the necessary stringency. 
This aspect has been the subject of several recent reviews and so does not need expanding here18,19.\n\nAs another important methodological push, antibody-based proteomics has developed more recently into a large-scale enterprise. This was first described in 2014 as the Human Protein Atlas project with its own associated database20. This has now been extended with the setting up of an International Working Group for Antibody Validation and the accompanying Antibodypedia database21. These have the objective of increasing the reproducibility of protein identification and ultimately, as with the MS initiatives, of moving more sequences up to the PE1 evidence code.\n\nWe can use the categories above to further “slice and dice” cross-referencing to gain more insight into particular subsets (e.g. via downloadable identifier sets for PE1 to PE5). The possible query combinations are many, so we need to frame useful questions. Notably, it is now possible to select proteins supported by PA MS entries (17,084), by HPA (16,800), or by both (15,189) (n.b. these numbers differ slightly from those in neXtprot of 18,083 for PA and 16,473 for HPA). In terms of questions, an example that can be posed is “how many proteins, supported by either HPA or PA, overlap with the 4-database consensus set generated in Figure 3?” The result (Figure 5) effectively intersects the in silico with the in vivo evidence sets.\n\nAs was done for Figure 3, lists from the Venn sections were input to the UniProt ID mapping interface to examine trends. Not all of these can be discussed here, but looking at the unique sets exposed some initially counter-intuitive results. For example, the 4-way-only set (734) included 214 PE1s that lacked HPA or PA cross-references. This is because PE1 also includes 3D structures and interaction data. The 152 HPA-only set included 101 at PE4 or PE5 levels (i.e. unexpectedly high for the implied Ab confirmation, which might be expected to push them up to PE1). 
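Tallying the protein-existence levels within such a downloaded subset is a one-liner once the PE codes are in hand. A minimal sketch (the pe_level mapping and the "X…" accessions are invented for illustration, except Q05BU3, which the text flags as PE5; real values would come from a neXtprot or UniProt download):

```python
# Sketch of tallying protein-existence (PE) codes within an ID subset,
# analogous to finding the 101 PE4/PE5 entries in the 152 HPA-only set.
from collections import Counter

pe_level = {"Q05BU3": "PE5", "X00001": "PE4", "X00002": "PE4", "X00003": "PE1"}
hpa_only = ["Q05BU3", "X00001", "X00002"]  # hypothetical HPA-only subset

tally = Counter(pe_level[acc] for acc in hpa_only)
low_evidence = tally["PE4"] + tally["PE5"]  # the "unexpectedly high" fraction
```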
For the HPA-only set, it turns out there is a cross-reference specificity problem arising from the inclusion of uncertain results. The HPA link (for the 16,800) actually means the protein has been tested (i.e. had an antibody raised against peptide sections) but is not necessarily confirmed. The histochemistry support status, including consistency with two sources of transcript data, is commented on in each HPA entry. However, from the HPA download for 16.1, only 10,230 (with Ensembl proteins as the primary identifier) are designated as “approved” or “supported” at the histochemistry level. Examples of evidence complications include the 40-residue putative protein FAM86JP, the Swiss-Prot entry Q05BU3. Flagged as PE5, this shows anomalies including designation as a pseudogene by HGNC (n.b. it has neither a GeneID nor an Ensembl cross-reference, which excluded it from the 4-way set) and the HPA entry ENSG00000186523-FAM86B1 was flagged as uncertain based on two antibodies. A second example exposes a different problem. The putative uncharacterized protein C7orf76 (Q6ZVN7) is mapped from UniProt to a different protein in HPA as ENSG00000127922-SHFM1 (i.e. P60896). The mis-mapping appears to be extrinsic to HPA and in this case could be a UniProt < > Ensembl problem (which is why this is not in the 4-way set). It is important to emphasise that none of this is about fault-finding, but these examples attest to the technical challenges of evidence classification and mapping fidelity.\n\nInspecting the 360 “PepAt” (i.e. PeptideAtlas only) set reveals a different set of interpretive challenges. An example is the smallest of the set, the 11-residue morphogenetic neuropeptide (P69208). This has no genomic annotation, but does have an apparent match in PeptideAtlas for the peptide QPPGGSKVILF. The Swiss-Prot entry has its origins in an Edman sequencing result from 1986 and is consequently indicated as “Experimental evidence at protein level”, but has been dropped from neXtprot. 
A large proportion of the rest of the 360 are immunoglobulin heavy variable and HLA class I histocompatibility antigen chains, for which the ability of the PeptideAtlas system to resolve them into separate proteins is unclear.\n\n\nSmall proteins\n\nBack in 2004, it was already mooted that a significant expansion in protein number was likely to occur via the discovery of small ORFs (smORFs). However, this was not supported by Swiss-Prot statistics at that time4. In the intervening decade, the smORF question has surfaced regularly22 and it now overlaps with the two closely related themes of de novo protein evolution (i.e. recent non-coding to coding transitions)23 and ribosomal profiling experiments attempting to define the translation of novel smORFs from what was hitherto classified as non-coding RNA24. In addition, the theme of existence evidence discussed above is also relevant, since whatever data support type is being sought (e.g. active transcription plus detection by MS or Abs), the experimental verification of smORFs becomes more difficult.\n\nAn obvious approach to this topic is to repeat the exercise first performed in 20044, namely splitting the smORF count in Swiss-Prot by create date. By setting a cut-off of 100 residues, the current total is 682/20,168. This can be compared with the corresponding 2009 totals of 612/19,675. This establishes that the proportional smORF content has only risen from 3.1% to 3.3%. In addition, from the latest 2017 size cut, 161 of the 682 do not have an HGNC biotype designation as protein-coding. Many also only have protein existence support as Edman sequencing reads from the earliest Swiss-Prot releases. These short sequences are difficult to genome-map and/or re-confirm by MS, which is why six were recently purged from neXtprot (P. Gaudet, personal communication). 
We are thus presented with a paradox that, despite many reports of putative novel human smORF discovery, very few are crossing the Swiss-Prot evidence threshold for becoming new protein entries.\n\nNotwithstanding, recently confirmed smORF examples have surfaced that are informative from the protein-counting viewpoint. The first of these, the apelin receptor early endogenous ligand, was integrated into Swiss-Prot in 2014 (HGNC symbol APELA, synonyms Elabela, Toddler; see Swiss-Prot P0DMC3 for cross-references including links to the discovery papers). It was in fact “hiding in plain sight”, insofar as its full-length cDNA (AK092578) had been in GenBank since 2008. However, since this sequence translates into eight possible smORFs, the submission process for the high-throughput cloning project (sensibly) chose not to annotate a CDS in the feature lines of this prostate library entry, since there was no basis on which to choose any of the possible translations by protein similarity at that time (although arguably, manual sequence analysis, including TBLASTX, might have given clues). Significantly though, this transcript had originally been annotated in Vega as a long non-coding RNA (LncRNA), giving rise to speculation that additional cryptic smORFs could be “hiding” in other LncRNAs. A second such case was in fact described in 2016 in a paper entitled “A peptide encoded by a transcript annotated as long noncoding RNA enhances SERCA activity in muscle”, although the work was done in mouse25. The publication was processed by Swiss-Prot in March 2016 to generate P0DN83 and P0DN84 for the 34-residue mouse and human proteins, respectively.\n\nThese two smORFs illustrate a spectrum of evidence differences as follows:\n\nIn terms of transcript support, APELA has been re-cloned as KJ158076 with a submitted CDS, but this is not yet incorporated in the Swiss-Prot annotation. 
The DWORF authors mention obtaining cDNAs but have deposited neither human nor mouse mRNA accession numbers. There are many TBLASTN matches as supporting evidence for the protein (notwithstanding mismatches, see below), both to mammalian sequences designated as LOC non-coding RNAs and to over 30 human expressed sequence tag (EST) mRNAs.\n\nAPELA has three-way genomic support and a CCDS, while DWORF has no human genome cross-reference in Swiss-Prot. The mouse paralogue does have an Ensembl protein mapping (ENSMUSG00000103476), despite still being flagged as an LncRNA gene in the Mouse Genome Atlas. However, multiple lines of evidence (Southan, unpublished observations) indicate the correct human sequence is the 35 residues represented in ENSG00000240045 (via Vega) as TrEMBL A0A1B0GTW0 (but circularly, as this was picked up from Ensembl) and independently as ACT64388 from 2009. The predicted transcript is classified by NCBI as a non-coding LOC100507537.\n\nNeither APELA nor DWORF has any cross-references in the seven MS sources in UniProt. Note that APELA cannot pass the double 9-mer criteria for neXtprot, and DWORF only has a single predicted tryptic peptide. Whether either protein passes the verification threshold for MS datasets in the future remains to be seen.\n\nPublications for both APELA and DWORF have included Western blots from Abs raised against peptides (but only mouse for the latter). However, neither yet has an HPA entry. While the possibility of inclusion in a future update is clear for APELA, there may not only be technical challenges from the small size of DWORF, but also, since HPA uses Ensembl IDs for its primary identifiers, this protein and its transcript would need first to be resolved in a future Ensembl release (n.b. 
LOC100507537 appears to have somehow been parsed into HPA transcript data, but this may be a mis-mapping).\n\nReplication of the basic findings and expanded aspects of in vivo function have been consolidated in numerous publications for APELA, including a 2017 paper26. While the experimental characterisation of DWORF so far rests on one study done in mouse25, consolidation of the human protein evidence is to be expected in forthcoming work.\n\nTo summarise the implications: the discovery of additional smORFs seems certain, especially given that the putative LncRNA gene count has recently risen to 27,91927. However, the question remains as to how many will be verified to the evidence level sufficient to enter the major genome and protein portals (even though it will be challenging to obtain Ab and MS verification data). On a continuum of what we might expect between 10, 100 or 1000, the middle estimate seems most likely.\n\n\nPharmacological interaction intersects\n\nThis last section assesses the corroboration of data linkages by existence evidence and other types of concordance. Many of the Swiss-Prot cross-references are related to protein function and other attributes such as tissue distribution or post-translational modification. Others include pathway membership, protein-protein interactions, Gene Ontology categorisation, disease associations, interactions between enzymes and substrates, drugs and their targets, as well as endogenous ligands for receptor proteins. The advantage of the analyses described above is that results centred on functional categories can be intersected with independent cross-references. This can be exemplified by selecting the curated ligand interactions in the IUPHAR/BPS Guide to PHARMACOLOGY28 (GtoPdb) that are included in the set of five chemistry (interaction) cross-references. 
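The corroboration analysis reported below (Figure 6) again reduces to set intersections over three identifier lists. A minimal sketch, with invented "B…" accessions standing in for the real downloads (the PE1/PE2 list, the four-source consensus list and the GtoPdb-linked entries):

```python
# Three-way corroboration sketch: intersect an existence-evidence set,
# a four-source consensus set and a GtoPdb-linked set. All accessions
# below are invented for illustration.
prot_exist = {"B1", "B2", "B3", "B4"}   # PE1/PE2 evidence ("Prot Exist")
four_way   = {"B1", "B2", "B3", "B5"}   # 4-source consensus (Figure 3 centre)
gtopdb     = {"B1", "B2", "B4", "B5"}   # GtoPdb-linked Swiss-Prot entries

fully_corroborated = gtopdb & prot_exist & four_way    # cf. the 1,450
evidence_only      = (gtopdb & prot_exist) - four_way  # cf. the nine
four_source_only   = (gtopdb & four_way) - prot_exist  # cf. the one
```

The small residual sets (`evidence_only` and `four_source_only`) are exactly the entries worth manual follow-up, as done below for the 10 GtoPdb proteins.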
The current UniProt has 1,460 human Swiss-Prot records (as defined by the GtoPdb criteria for submitting the links) that have publication-supported molecular interactions. The majority are pharmacologically active small molecules, but the curated relationships include some protein-protein interactions, for example, antibody ligands directed against cytokine targets (n.b. a proportion of these proteins are derived from a new project, the Guide to Immunopharmacology). The result of the corroboration analysis is shown in Figure 6.\n\nThe two intersected lists are labelled as “Prot Exist”, comprising proteins with evidence at the protein and transcript levels (i.e. PE1 and PE2 from Figure 4), and the 4-way major-source consensus set (i.e. the central panel of Figure 3).\n\nWe can see the results of a three-way comparison in Figure 6 between existence evidence, four-source convergence and GtoPdb entries. The first feature to note is that not all proteins with existence evidence are in the four-source set, and vice versa. Possible systematic reasons behind this cannot be explored here, but may be related to the points discussed for Figure 5. The key observation for GtoPdb is that, reassuringly, 1,450 entries intersect with both existence evidence and four-source identifiers. Notwithstanding, nine entries intersect with the existence set but are not four-source corroborated, with one vice versa (i.e. in the four-source set but not evidence-supported). Given that GtoPdb interactions are expert-curated, the result from Figure 6 raises questions about the annotation of these 10 protein entries. These were followed up to establish that the lack of evidence support for P0C264 arises from the absence of an mRNA entry (i.e. it remains a genomic prediction). The existence of this kinase seems well supported (e.g. via CCDS74457), but a cloned cDNA would be an important consolidation. 
Inspection of the other nine sequences also supported their existence, but they all had a mixture of cross-referencing failures that had excluded them from the four-source set. For example, for the aspartyl aminopeptidase, DNPEP (Q9ULA0), the protein is solidly supported, even to the extent of a PDB structure, but the Entrez GeneID is missing (although this is cross-referenced by HGNC). Likewise, the alpha-2B adrenergic receptor, ADRA2B (P18089), is solidly supported, but in this case was missing the Ensembl cross-reference (an enquiry with the Swiss-Prot update team established this was due to an unusual accession-number change associated with a TrEMBL-to-Swiss-Prot transition; Gasteiger, personal communication). In both cases GtoPdb had in fact manually curated the correct links, the Entrez Gene ID in Target ID 1559 and the Ensembl Gene ID in Target ID 26, respectively (n.b. the appropriate UniProt corrections have been suggested via the feedback form). This cross-checking for GtoPdb targets thus proved a useful exercise that will be re-visited as our protein content expands.\n\n\nConclusions\n\nDespite over 16 years having elapsed since the first draft human genome, the diversity of current counts indicates that progress towards what the community might consider a gold-standard set of canonical protein sequences remains frustratingly slow. This is especially so considering that the “zone of equivocality” lies only between an upper bound of ∼20,000 and a lower one of ∼18,500. The slow progress towards closure is clearly a reflection both of the inherent biological complexity of protein translation and of the challenges of combining automated annotation with the various proportions of expert curation needed to define the entire expressed genomic landscape29. There are of course caveats, even with the concept of closure, in so far as recent evidence indicates that each of us, on average, has at least 100 protein loss-of-function variants (i.e. 
proteomes are “personal”)30.\n\nThe wider bioscience community could be forgiven for being puzzled that major global efforts continue to produce different sets of canonical proteins at roughly the same time from the same primary data (leaving aside another layer of yet more inter-source differences in alternative splice and/or initiation forms). Those of us with some insight into the bioinformatic, genomic and proteomic challenges might be more sanguine in our judgment, but the criticism still stands (note also that human is the testbed from which the community needs to progress towards analogous proteomic closure for at least mouse, rat and zebrafish). Addressing the question of why this situation persists, and its possible solutions, would necessitate a detailed comparison of the underlying assumptions, data processing models and pipeline parameterisations. However, inter-source clustering of explicit protein sequences could identify differences more effectively than cross-references alone (e.g. via a possible resurrection of the Human Protein Index initiative31).\n\nRegardless of the technical options for solving the problem, substantial resources have been committed over decades by the major gene and protein annotation resources globally. We should thus expect more inter-team collaboration dedicated to harmonising amongst themselves the mere ∼2,000 protein sequences in question (i.e. not many compared to the 0.55 million and 77 million processed in Swiss-Prot and TrEMBL, respectively). It could be argued that additional (collective) manual curation would be needed to accomplish this, but the consequent improvement in in silico concordance could then be consolidated by an expansion of experimental existence verification, both in vitro and in vivo. 
This could include a supply of expressed protein standards, advances in MS-based proteomics, including sets of synthetic proteotypic peptides for spiking experiments32, deep transcript profiling by RNA-seq and the increased availability of validated antibody reagents.\n\n\nData availability\n\nThese statistics on protein numbers are presented and compared here in good faith and with the implicit expectation that they should be reproducible, including by others who may want to repeat and/or extend these types of analyses. Notwithstanding, this may be confounded by several factors that could give rise to slightly different results (but, it is hoped, not major discrepancies). The most obvious is data updates, which can be as frequent as monthly for some sources (e.g. since the completion of this work, UniProt notched up to release 2017_03 on March 15, 2017, with the human Swiss-Prot count increasing, from Table 1, by 13 proteins to 20,184). Another is the exact form of the queries, which varies between resources, particularly when each selection interface has a different look and feel, different syntactic formats of execution, and download lists with different formats of cross-referenced identifier columns. One example is the need to convert UniProt interface queries into the equivalent SPARQL queries in neXtprot, as shown below. The UniProt syntax to count HGNC cross-references, as entered in the web query box, is:\n\ndatabase:(type:hgnc) AND reviewed:yes AND organism:\"Homo sapiens (Human) [9606]\"\n\nThe answer was 19,967 (March 2017), but note that we need to make two pre-selects, for a) species/organism and b) “reviewed”, to select Swiss-Prot over TrEMBL. For the neXtprot equivalent cross-reference query, these two pre-selects are not necessary since neXtprot is human Swiss-Prot-derived anyway. 
The HGNC select has the form below:\n\nselect distinct ?entry where {\n\n?entry :reference ?ref .\n\n?ref :provenance db:HGNC ;\n\n:accession ?ac.\n\nfilter (regex(?ac,'^HGNC'))\n\n}\n\nIn this case the result was 19,956. The basic listings from the sources used and some of the result sets have been made available as a Figshare data collection (https://figshare.com/collections/Supplementary_data_for_assessing_the_human_canonical_protein_count/371641333). If any reproducibility issues do arise, interested parties are welcome to contact the author.",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author was supported for part of this work by the Wellcome Trust (grant number, 108420/Z/15/Z).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nThe efforts of all the genomic and protein annotation teams referred to in this work are much appreciated. Discussions of discordances and other quirks should not be misinterpreted as criticism of the resources concerned. Thanks are due to those who answered questions on this topic on BioStars, various database helpdesks and Twitter, as well as Dr Pascale Gaudet for help with neXtprot queries.\n\n\nReferences\n\nSanger F: The arrangement of amino acids in proteins. Adv Protein Chem. 1952; 7: 1–67. PubMed Abstract | Publisher Full Text\n\nLander ES, Linton LM, Birren B, et al.: Initial sequencing and analysis of the human genome. Nature. 2001; 409(6822): 860–921. PubMed Abstract | Publisher Full Text\n\nInternational Human Genome Sequencing Consortium: Finishing the euchromatic sequence of the human genome. Nature. 2004; 431(7011): 931–945. PubMed Abstract | Publisher Full Text\n\nSouthan C: Has the yo-yo stopped? An assessment of human protein-coding gene number. Proteomics. 2004; 4(6): 1712–1726. PubMed Abstract | Publisher Full Text\n\nClamp M, Fry B, Kamal M, et al.: Distinguishing protein-coding and noncoding genes in the human genome. Proc Natl Acad Sci U S A. 2007; 104(49): 19428–19433. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPertea M, Salzberg SL: Between a chicken and a grape: estimating the number of human genes. Genome Biol. 2010; 11(5): 206. PubMed Abstract | Publisher Full Text | Free Full Text\n\nEzkurdia I, Juan D, Rodriguez JM, et al.: Multiple evidence strands suggest that there may be as few as 19,000 human protein-coding genes. Hum Mol Genet. 2014; 23(22): 5866–5878. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nThe UniProt Consortium: UniProt: the universal protein knowledgebase. Nucleic Acids Res. 2017; 45(D1): D158–D169. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTress ML, Abascal F, Valencia A: Alternative Splicing May Not Be the Key to Proteome Complexity. Trends Biochem Sci. 2017; 42(2): 98–110. PubMed Abstract | Publisher Full Text\n\nAken BL, Achuthan P, Akanni W, et al.: Ensembl 2017. Nucleic Acids Res. 2017; 45(D1): D635–D642. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFishilevich S, Zimmerman S, Kohn A, et al.: Genic insights from integrated human proteomics in GeneCards. Database (Oxford). 2016; 2016: pii: baw030. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNCBI Resource Coordinators: Database Resources of the National Center for Biotechnology Information. Nucleic Acids Res. 2017; 45(D1): D12–D17. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGaudet P, Michel PA, Zahn-Zabal M, et al.: The neXtProt knowledgebase on human proteins: 2017 update. Nucleic Acids Res. 2017; 45(D1): D177–D182. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSmedley D, Haider S, Durinck S, et al.: The BioMart community portal: an innovative alternative to large, centralized data repositories. Nucleic Acids Res. 2015; 43(W1): W589–W598. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGuo X, Lin M, Rockowitz S, et al.: Characterization of Human Pseudogene-Derived Non-Coding RNAs for Functional Potential. PLoS One. 2014; 9(4): e93972. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMcGowan SJ, Terrett J, Brown CG, et al.: Annotation of the human genome by high-throughput sequence analysis of naturally occurring proteins. Curr Proteomics. 2004; 1(1): 41–48. Publisher Full Text\n\nDesiere F, Deutsch EW, King NL, et al.: The PeptideAtlas project. Nucleic Acids Res. 2006; 34(Database issue): D655–8. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nOmenn GS, Lane L, Lundberg EK, et al.: Metrics for the Human Proteome Project 2016: Progress on Identifying and Characterizing the Human Proteome, Including Post-Translational Modifications. J Proteome Res. 2016; 15(11): 3951–3960. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSegura V, Garin-Muga A, Guruceaga E, et al.: Progress and pitfalls in finding the 'missing proteins' from the human proteome map. Expert Rev Proteomics. 2017; 14(1): 9–14. PubMed Abstract | Publisher Full Text\n\nFagerberg L, Hallström BM, Oksvold P, et al.: Analysis of the human tissue-specific expression by genome-wide integration of transcriptomics and antibody-based proteomics. Mol Cell Proteomics. 2014; 13(2): 397–406. PubMed Abstract | Publisher Full Text | Free Full Text\n\nUhlen M, Bandrowski A, Carr S, et al.: A proposal for validation of antibodies. Nat Methods. 2016; 13(10): 823–7. PubMed Abstract | Publisher Full Text\n\nPueyo JI, Magny EG, Couso JP: New Peptides Under the s(ORF)ace of the Genome. Trends Biochem Sci. 2016; 41(8): 665–678. PubMed Abstract | Publisher Full Text\n\nSchmitz JF, Bornberg-Bauer E: Fact or fiction: updates on how protein-coding genes might emerge de novo from previously non-coding DNA [version 1; referees: 3 approved]. F1000Res. 2017; 6(F1000 Faculty Rev): 57. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMumtaz MA, Couso JP: Ribosomal profiling adds new coding sequences to the proteome. Biochem Soc Trans. 2015; 43(6): 1271–1276. PubMed Abstract | Publisher Full Text\n\nNelson BR, Makarewich CA, Anderson DM, et al.: A peptide encoded by a transcript annotated as long noncoding RNA enhances SERCA activity in muscle. Science. 2016; 351(6270): 271–5. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nYang P, Read C, Kuc RE, et al.: Elabela/Toddler Is an Endogenous Agonist of the Apelin APJ Receptor in the Adult Cardiovascular System, and Exogenous Administration of the Peptide Compensates for the Downregulation of its Expression in Pulmonary Arterial Hypertension. Circulation. 2017; 135(12): 1160–1173. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHon CC, Ramilowski JA, Harshbarger J, et al.: An atlas of human long non-coding RNAs with accurate 5′ ends. Nature. 2017; 543(7644): 199–204. PubMed Abstract | Publisher Full Text\n\nSouthan C, Sharman JL, Benson HE, et al.: The IUPHAR/BPS Guide to PHARMACOLOGY in 2016: towards curated quantitative interactions between 1300 protein targets and 6000 ligands. Nucleic Acids Res. 2016; 44(D1): D1054–68. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMudge JM, Harrow J: The state of play in higher eukaryote gene annotation. Nat Rev Genet. 2016; 17(12): 758–772. PubMed Abstract | Publisher Full Text\n\nNarasimhan VM, Hunt KA, Mason D, et al.: Health and population effects of rare gene knockouts in adult humans with related parents. Science. 2016; 352(6284): 474–477. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGriss J, Martín M, O’Donovan C, et al.: Consequences of the discontinuation of the International Protein Index (IPI) database and its substitution by the UniProtKB “complete proteome” sets. Proteomics. 2011; 11(22): 4434–4438. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPerez-Riverol Y, Vizcaíno JA: Synthetic human proteomes for accelerating protein research. Nat Methods. 2017; 14(3): 240–242. PubMed Abstract | Publisher Full Text\n\nSouthan C: Supplementary data for assessing the human canonical protein count. figshare. 2017. Data Source"
}
|
[
{
"id": "21691",
"date": "18 Apr 2017",
"name": "Michael Tress",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nAbstract: “In 2004, when the protein estimate from the finished human genome was only 24,000, the surprise was compounded as reviewed estimates fell to 19,000 by 2014. “ This makes no sense; it seems to be missing a large chunk.\n“miss-annotation“\nIntroduction: “This has its origins in the Swiss-Prot approach to protein annotation whereby protein sequence differences arising from the same genomic locus either by alternative splicing or alternative initiations (or permutations of both) and/or genetic variants, are all cross referenced to a single, maximal length, protein entry8.“\nThis is not strictly true, SwissProt does NOT divide up all proteins from the same gene in different entries (TMPO for example). Here you have to be clear that SwissProt does this most of the time.\n“Importantly, while this was originally introduced as the curatorial strategy of choosing the longest mRNA for an entry, it actually turns out to have post- genomic data support, not only in the form that coding-loci express a single main protein (i.e. that most predicted alternative transcripts may not be translated), but also that in most cases this is the max-exon form (i.e. the curatorial choice actually seems to be the biological “default”)9.”\nStrictly speaking this is true, the longest SwissProt form is the biological default in most cases. But it is purely technical and is not the best way of selecting the biological default. The way this paragraph is written makes it sound like it is. 
Better to say:\n“not only in the form that coding-loci express a single main protein [ref to Ezkurdia et al, JPR] (i.e. that most predicted alternative transcripts may not be translated), but also that in most cases this max-exon form (i.e. the curatorial choice) actually coincides with the biological “default”)9.”\nRef: Ezkurdia I, Rodriguez JM, Carrillo-de Santa Pau E, Vázquez J, Valencia A, Tress ML. Most highly expressed protein-coding genes have a single dominant isoform. J Proteome Res. 2015 Apr 3;14(4):1880-7. doi: 10.1021/pr501286b.\nHistorical Growth: “One of these is the inclusion of “alternative sequence”, referring to genomic sections that differ from the primary contiguous assembly. The current release of Ensembl (87.38) species 2,541 proteins in this category, but it is not clear which of these are just variants of those derived from the primary assembly.”\nAlternative sequence genes are not included in the Ensembl reference counts.\nprinciple = principal\nGENCODE not GENECODE!\nAnd GENCODE, VEGA and Ensembl ARE merged and have been for a number of years.\nVEGA is annotated by the HAVANA group (part of the GENCODE Consortium), not Havanna.\n\nIt’s also worth pointing out that Ensembl (since it is now merged with GENCODE) is essentially a manually curated annotation too with manual curations coming from the HAVANA team.\n“Consensus Coding Sequence (CCDS) project. These correspond to a core set of proteins annotated as having full length transcripts that exactly match reference genome coordinates.”\nIn fact CCDS transcript models need to exactly match between RefSeq and Ensembl/GENCODE, which explains why CCDS is the smallest set. 
This is actually an important caveat for the next paragraph, as might be imagined.\nCross-reference counting: It is also worth mentioning that these are the only four independent sets, in that Vega and GENCODE merge into Ensembl, NextProt is UniProt and GeneCards and CCDS are essentially intersections and unions of different subsets.\nEnsembl, not Ensemble.\n“The explanation is that Ensemble and NCBI Gene have consolidated mapping reciprocity for proteins in Swiss-Prot (but, as mentioned above, many proteins from these two sources are still nominally “outside” Swiss-Prot).”\nI think what it really says is that genes annotated in both Ensembl and SwissProt are automatically included in HGNC.\n“GENCODE and Vega do not currently have cross-references inside Swiss-Prot”\nBecause GENCODE/VEGA == Ensembl\n“forth” - fourth\nExistence evidence: “However, on its own, active transcription is insuficient to prove translation, even with a predicted CDS”\nMaybe given the proliferation of such papers it might be worth pointing out that neither is ribosome profiling evidence …\n“in regarded to” in regard to\n“As was done for Figure 3, “ I think this whole paragraph could be written more carefully. I can follow it, but I suspect most people wouldn't. The data sets being compared need to be introduced specifically (again) and the numbers cross-checked. The examples are interesting, but:\n“A second example exposes a different problem. The putative uncharacterized protein C7orf76 (Q6ZVN7) is mapped from UniProt to a different protein in HPA as ENSG00000127922- SHFM1 (i.e. P60896). The miss-mapping appears to be extrinsic to HPA and in this case could be a UniProt < > Ensembl problem (which is why this is not in the 4-way set).”\nActually the problem stems from the fact that Ensembl annotates a single gene (now called SEM1) for these coordinates, while RefSeq has two (those listed in the paper). 
I have looked at this case before and wrote “RefSeq has two genes for SHFM1; RefSeq is right”. I am not 100% sure that it is, but if it is one gene, it looks to be a gene that has two ORFs and hence it makes sense that UniProt has two entries.\nSmall proteins “APLEA” = APELA\nAlso worth pointing out the conservation all the way back to Danio for APELA. In fact cross-species conservation studies currently being undertaken by Ensembl may unearth some “missing” smORFs.\n“not withstanding miss-matches”\n“miss-mapping”\n“However, inter-source clustering of explicit protein sequences could make identifying difference more effectively than cross- references alone (e.g. a possible resurrection of the Human Protein Index initiative31).”\nNooooo, do not resurrect the IPI, it died for good reasons. I don’t believe that we need any more competing (and only superficially communicating) bodies in the field.\n“It could be argued that additional (collective) manual curation would be needed to accomplish this”\nThis is nice in principle, but manual curation is VERY subjective. For many genes whether it is annotated as coding or not is based on the balance of probabilities and each annotator has his/her own balance of probabilities. What is needed is more and better information for the corner cases.\nGeneral Comment: There are a lot of distinct sets being compared in the figures. I would name and define the sets clearly in the text when possible, otherwise readers will struggle to see what is being compared.\n\nIs the topic of the review discussed comprehensively in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Partly\n\nIs the review written in accessible language? Yes\n\nAre the conclusions drawn appropriate in the context of the current research literature? Yes",
"responses": [
{
"c_id": "3949",
"date": "03 Sep 2018",
"name": "Christopher Southan",
"role": "Author Response",
"response": "1) While this is over a year after the fact, all the Referee's comments were appreciated :) 2) According to the FAQ on this site, this had (just) passed peer review hence the entitlement to PMID:28529709. Thus, for the record, it is citable as such, despite a certain NAR Editor wrongly insisting this had to be cited as a DOI in the text :( 3) Believe it or not, I had intended to make a revision. However, a recent more detailed analysis and review (by one of the Referees in fact) PMID:29982784 renders such a revision effectively redundant. 4) I have accepted this new review in good faith (regardless of the DOI cite) even to the extent of giving it an F1000 recommendation https://f1000.com/prime/contributor/evaluate/article/733610726"
}
]
},
{
"id": "21692",
"date": "05 May 2017",
"name": "Elspeth A. Bruford",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nGeneral remarks:\nThe author has investigated the question of how many protein coding genes are encoded in the human genome, and come to the conclusion that while protein coding gene counts from a variety of resources do appear to be converging, there are still significant differences. One key aspect the author has maybe not fully appreciated is the considerable level of collaboration already occurring between the cited resources, which can be both advantageous - by reaffirming correct annotations - and disadvantageous - by perpetuating erroneous annotations through multiple resources. At the same time, definitions of biotypes, and membership within each biotype, do still present differences which groups, including members of the CCDS collaboration, are looking to unify. Furthermore, while different interpretations of available data can of course cause discrepancies (and this is perhaps an area where more work is required by the community to reach agreed standards, for example see PMID 26367542), unsurprisingly some resources access different datasets which cause further differences. 
While collaborations, definitions and data-sharing could be tightened up, there is no doubt that what is most needed is concerted experimental investigation of the remaining putative/hypothetical/dubious protein coding loci that remain within the genome, so that the resulting data can be used to decide upon a definitive biotype for these loci.\nOverall this is discussing an important question but there is a tendency to be rambling in sections and I think the paper needs better organising to highlight some interesting questions the author raises. Some assumptions made about the various projects also need to be corrected, and more attention to detail is required for the numbers quoted to avoid confusing readers.\nSpecific remarks:\nuse of \"miss-\" throughout instead of mis- Ensembl, not Ensemble Entrez Gene is more widely referred to now as \"NCBI Gene\" - but variously referenced throughout the ms as \"GeneID\", \"NCBI genome annotation\", \"RefSeq and Gene\", \"NCBI Entrez Gene\", \"GI\", \"NCBI\", \"NCBI pipeline automation\" etc... GENCODE, not GENECODE HGNC is HUGO Gene Nomenclature Committee (not Human) neXtProt, not neXtprot\nAbstract:\nI disagree that the only suggestion that total numbers of protein coding genes may rise is from reports of smORFS - as the author discusses later in the paper, even the very few genes reported to date to encode \"smORFs\" have limited evidence. 
I would anticipate most increase would come from careful re-annotation using increasing amounts of data (conservation, RNAseq, etc) and from annotation of multiple haplotypes that may cover regions of the human genome that are not currently included in the reference assembly and could be included in the future as alternate loci by the GRC.\nIntroduction:\nWhile saying that the longest mRNA strategy has data support, it would also be worth mentioning the exception of read-through transcripts which can confuse this strategy significantly.\nHistorical Growth:\nIt could also be worth noting that Ensembl and Swiss-Prot/UniProtKb are also coupled as Ensembl sequences that are absent from UniProtKB are imported into UniProtKB/TrEMBL and tagged as part of their human proteome.\nEnsembl's statistics make it very clear the number of proteins encoded by readthrough transcripts and on \"alternative sequence\", so I don't see how these could be said to \"complicate\" the figures. The issue of how many of the proteins (and protein coding genes) included on the alt loci are not represented in the primary assembly is however an interesting question.\nFigure 2 shows the Swiss-Prot protein counts divided into total in red and those with protein or transcript evidence in blue - it would be nice to have the 2017 figures actually stated as opposed to having to guesstimate them from the graph.\nCurrent Counts:\nThe author does not seem to understand the relationship between GENCODE (not GENECODE), Vega/Havana and Ensembl. 
It is nicely explained on the GENCODE site: https://www.gencodegenes.org/faq.html\nHence none of these figures are truly independent at all, and any differences between Ensembl and GENCODE figures are likely due to release asynchrony.\nAs it is unsurprising that GeneCards, which combines data from a variety of resources, has the largest \"protein coding gene count\" it is equally unsurprising that the CCDS consortium has the lowest as they are looking for the consensus CDS from Ensembl/Havana (=GENCODE) and RefSeq.\nI disagree with the statement that mapping identifiers across sources can \"establish if the protein sequence in pipeline output A is the same as pipeline B\", and indeed the author discusses the example of BACE1 which shows this is not necessarily true; however, this discrepancy is not due to the mappings themselves or how they are made, but simply due to the methods of protein prediction/selection used in each resource. The mapping may correctly suggest that both pipelines are considering the same genomic locus (in this case the BACE1 gene), but agreement on the encoded protein(s) is not guaranteed. This paragraph would be better rephrased to make this clear.\nTypo: if HGNC instantiate then they should also collate (not collates)\nCross-reference counting:\n\"However, the choice was made here to exemplify just four identifiers, Swiss-Prot accession numbers, HGNC IDs (directly, or via the current gene symbols) Ensembl gene IDs and NCBI Entrez Gene IDs. These were chosen for their global prominence but also methodological complementarity. 
This derives from the fact that that the first two are essentially automated pipelines (but different), while the second two are primarily manual expert annotation operations (but also different)\"\n\nThe first two resources in the list are Swiss-Prot and HGNC, and neither are \"essentially automated\"; I think the author meant to say \"last two\" as nowhere else in this paper does he suggest either of these resources rely heavily on automation. As discussed earlier, the Ensembl gene set is a merge of their automated predictions with Havana manual annotations, and the manual annotations make up the vast majority of the protein coding genes. Likewise, the NCBI \"Entrez Gene IDs\" undergo extensive manual curation, especially for the human set. Therefore I would disagree that any of these four resources are \"essentially automated pipelines\".\nFigure 3 is very confusing with all of the \"zero\" segments - this figure would make more sense if it was an \"all by all\" comparison, as opposed to being based solely on the Swiss-Prot dataset. In fact I would venture that a simple table would be more readable, and as stated later in the paper this is a \"Venn-type set(s) that generally end(s) up being more confusing than illuminating.\" Further, the numbers listed in Fig 3 do not correspond with those in the text - in Table 1 20,617 mappings were listed for \"GeneID\"/NCBI, whereas here there are 18,896, a difference of 1,721, not 2,923 as stated in the text. And for HGNC 19,957 is 924 higher than the 19,033 listed in Table 1, not 905 higher as stated.\nThe reason for the increase in mappings to HGNC IDs is more likely the inclusion of mappings to loci that HGNC do not regard as protein-coding, such as immunoglobulin light chain segments, than it is due to Swiss-Prot having more than one HGNC ID in any given record. 
In fact this explanation is discussed in the next paragraph where the types of loci enriched in specific segments are discussed, such as immunoglobulin light chain segments and endogenous retroviruses (note again the NCBI pipeline is referred to as \"automation\" which I do not think is a fair representation). I think it would make more sense for these two paragraphs to be rewritten to present the reasons more coherently.\n\nPerhaps it also would have made a better comparison to limit the Swiss-Prot data to loci that ALL four resources regard as protein coding, or to at least present how many of the loci in some segments of the diagram each resource individually considers as protein coding? This would also have made a comparison with the figures in Table 1 more valid, as currently the figures in Table 1 represent a different set to those being compared in Figure 3. Finally, the zero figures in the Ensembl (not Ensemble!)/Swiss-Prot and NCBI/Swiss-Prot segments must reflect their efforts to map between resources, though I am surprised there are no differences at all, even due to update cycle asynchrony? These figures certainly do not result from HGNC importing everything that NCBI and Ensembl annotate automatically (which I note Michael has suggested in his review), as this is definitely not the case. In the next paragraph 19,035 rows are quoted (twice) for HGNC data, but again this does not tally with the figure of 19,033 quoted in Table 1 for HGNC protein coding loci.\nExistence Evidence:\nIn the figures quoted with evidence from Peptide Atlas for Swiss-Prot and neXtProt (17,084 vs 18,083) I would disagree that this could be described as a \"slight\" difference; this is nearly 1000 loci, which is at least 5% of the protein coding loci in the genome, even using the highest of the counts cited in this paper. What is the reason for this difference, it would be interesting to know. 
In the next paragraph the figure of 152 is quoted for the HPA-only set, but from Figure 5 this looks to be 158. Which is correct?\nTypo - \"...complications include the 40-residue of putative protein FAM86JP...\".\nAlso note that while FAM86JP does not have cross-references to NCBI Gene or Ensembl from Swiss-Prot these can be found in HGNC and NCBI Gene. The last paragraph of this section again mentions the issue of IG chains, which most resources do not class as \"protein coding\".\nSmall Proteins:\nHGNC symbol is APELA (not APLEA). Typo: \"...to generate P0DN83 and P)DN84 for a 34 residue mouse and human proteins...\"\nThe name of the second smORF (DWORF) is not actually mentioned until it is listed in the bullet points; it would be good to introduce \"DWORF\" by name in the paragraph above. I do not agree with the author that from the (paucity of) evidence cited for these examples it is therefore \"certain\" that additional smORFs will be discovered, I think \"likely\" would be more appropriate.\n\nData Availability:\nWhen discussing data update cycles, note that HGNC have daily updates and I think the same can be said of NCBI Gene, so these are both far more frequent than monthly.\nAgain the figure quoted in the text does not match numbers given in Figure 3: this section says that the query for HGNC cross-references gave 19967, while Fig. 3 quotes 19957.\nTypo: \"...pre-selects are not necessary since is human Swiss-Prot derived...\"\n\nIs the topic of the review discussed comprehensively in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Partly\n\nIs the review written in accessible language? Yes\n\nAre the conclusions drawn appropriate in the context of the current research literature? Partly",
"responses": [
{
"c_id": "2716",
"date": "18 May 2017",
"name": "Michael Tress",
"role": "Reviewer Response",
"response": "I really must get more sleep before I send referee comments. I wrote: \"I think what it really says is that genes annotated in both Ensembl and SwissProt are automatically included in HGNC.\" What I meant to suggest was that since the flow of evidence is often Ensembl ---> UniProt and RefSeq ---> UniProt, it would not be entirely surprising that genes were already annotated in HGNC by the time they were accepted by SwissProt. That could explain the low numbers (well, zero in both cases) in the HGNC/SwissProt/coordinate-based reference overlap. I meant \"automatically\" in the sense of \"inevitably\", rather than computer annotation. Sorry for the confusion. Michael"
}
]
},
{
"id": "22248",
"date": "09 May 2017",
"name": "Sylvain Poux",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThe article compares the number of canonical proteins encoded by the human genome in different resources, including UniProtKB/Swiss-Prot, HGNC, neXtProt, GeneID, Ensembl or CCDS. The major conclusion is that the number of canonical proteins should be around 19,000 and that, while numbers converge across resources, the full canonical human proteome is still not defined.\nThis is a good description of the current situation and the article is therefore interesting even if there could be confusion between protein-coding genes and canonical proteins. If the author assumes these are the same, for consistency reasons it would be good to mention only protein-coding genes or to explain what the differences are. The author also suggests that an inter-team collaboration could come up with a finished canonical proteome and seems to ignore the ways the different resources already collaborate. As this has already been raised in the review of E. Bruford, we will not enter into details. The question of the release cycle is also important and should be developed in more detail. Many discrepancies are only transitional and due to the release cycles of the resources compared. As mentioned in the article, neXtProt is built on UniProtKB/Swiss-Prot and differences between these resources are only due to release schedule. But this also holds true for the other resources and should be emphasized.\nAnother issue concerns the methodology of the study. 
A number of resources compared in this study do not have the same primary mission and it is therefore normal to have discrepancies between them. For example, HGNC is a nomenclature committee and official gene names are assigned when a consensus name is reached in the community. As a consequence, some clear protein-coding genes, such as NSG1 and NSG2 (UniProt P42857 and Q9Y328, respectively) are not yet present in HGNC, because no consensus has been found for these genes. The same is true for CCDS, which aims to provide a consensus sequence for all protein-coding genes: some protein-coding genes are absent from the CCDS set because no consensus has been found for the sequence (for example ELOA3C; UniProt A0A087WX78).\nAn alternative approach to assess the number of human protein-coding genes might be to compare portals described in this article with proteomics resources: it might be interesting to investigate the number of peptides that do not match to protein-coding genes in HGNC, UniProtKB/Swiss-Prot or GeneID. We think that the article would benefit from developing these different points in the discussion.\nThere are a number of typos and imprecisions in the text listed below that alter the quality of the manuscript and should be reviewed:\n“are all cross referenced to a single, maximal length, protein entry.” This is not absolutely true, since maximal length is only one of the criteria; the relevance of the selected canonical protein in Swiss-Prot, in terms of expression and biological relevance, is also considered among other criteria.\n\n“the fact that that”\n“the first two are essentially automated pipelines” It is not clear what the author is referring to – Swiss-Prot and HGNC?\n“There is now a community effort to promote more proteins to P1” The author uses PE1 to PE5 and P1 to P5 interchangeably. 
This could be misleading.\n“indicate the correct human sequence is the 35 resides represented in”\nOne should read residues instead of resides.\n“The current UniProt has” When mentioning the database, prefer UniProtKB\n\nIs the topic of the review discussed comprehensively in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Partly\n\nIs the review written in accessible language? Yes\n\nAre the conclusions drawn appropriate in the context of the current research literature? Partly",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-448
|
https://f1000research.com/articles/5-2897/v1
|
21 Dec 16
|
{
"type": "Opinion Article",
"title": "Building the infrastructure to make science metrics more scientific",
"authors": [
"Jennifer Lin",
"Fiona L. Murphy",
"Mike Taylor",
"Liz Allen"
],
"abstract": "Research leaders, policy makers and science strategists need evidence to support decision-making around research funding investment, policy and strategy. In recent years there has been a rapid expansion in the data sources available that shed light onto aspects of research quality, excellence, use, re-use and attention, and engagement. This is at a time when the modes and routes to share and communicate research findings and data are also changing.\n\nIn this opinion piece, we outline a series of considerations and interventions that are needed to ensure that research metric development is accompanied by appropriate scrutiny and governance, to properly support the needs of research assessors and decision-makers, while securing the confidence of the research community. Key among these are: agreed ‘gold standards’ around datasets and methodologies; full transparency around the calculation and derivation of research-related indicators; and a strategy and roadmap to take the discipline of scientific indicators and research assessment to a more robust and sustainable place.",
"keywords": [
"Altmetrics",
"scientometrics",
"science policy",
"research indicators",
"research evaluation",
"impact",
"research policy",
"funding"
],
"content": "Introduction\n\nIt is an exciting and challenging time for research evaluators and strategists; in the post-digital era, technical limitations around what can be used to assess different aspects of research are falling away. The availability of article-based citation metrics and indicators that capture research article reach, attention, and engagement is helping to reduce a reliance on misleading journal-based assumptions of scientific quality and importance. Many researchers now openly share components of their research – often within a research article, but increasingly outwith. For example, databases, datasets, software, and artistic outputs are often now on a range of platforms (e.g. Figshare, Zenodo) and independently citable (through the use of a digital identifier, such as a DOI). In addition, many researchers share analysis through non-traditional media (e.g. preprints, blog posts and policy documents).\n\nAt their essence, research metrics are designed to shed light on a range of attributes of research to support decision-making around resource allocation and research funding strategy (including tenure, career appointments and grant applications). In addition, metrics today routinely support national research assessment exercises, as exemplified by REF2014 in the UK and ERA2015 in Australia. Despite this, there continues to be limited investment in either research on the quality and validity of the indicators or the governance and stewardship of the data upon which indicators are derived.\n\nPolicy experts and researchers have long petitioned to make research metrics more robust, evidence-based and scientific (Lane, 2010) and therefore acceptable to the community they are meant to serve. Recent analyses have also reported on the current limitations of research metrics, calling for more research on, and improvements in, the infrastructure to support science indicators (Hicks et al., 2015; Wilsdon et al., 2015). 
The EU also recently issued a consultation to put ‘alternative’ metrics on firmer footing as part of its drive to encourage open science approaches and robust ways to evaluate research (Amsterdam Call for Action on Open Science, 2016). However, the ‘science’ of research metrics (scientometrics) paradoxically remains an orphan discipline given that more effective and accurate science metrics could make science more effective.\n\n\nBuilding an evidence base for metrics\n\nWe are now at a pivotal point of the research indicator story where a political and administrative appetite for research metrics to build and sustain efficient and effective research systems co-exists with a burgeoning of sources of intelligence about research outputs. What is needed to harness this momentum is cross-sector agreement on the next best steps and actions to make research metrics more robust, transparent and empowered to work for the whole research community.\n\nSeveral initiatives are underway whose aim is, at least in part, to consider how to improve the evidence base upon which science is evaluated and make science more effective (see for example, the EU Open Science Policy Platform, and the UK Forum for Responsible Research Metrics [announced in September 2016]). The key ways in which such initiatives will be able to make a real difference are four-fold. First, ensure active participation from across the whole scientific research community in a broad way to include researchers, institutions and funding agencies, alongside scientific publishers, learned societies and technology platform providers. Second, deliver a roadmap for the key requirements needed to build and assure quality science metrics for the benefit of science. Third, question existing assumptions around how we conduct and reward research, and test out new approaches and ways of working. 
Fourth, secure access to resources and influence, as well as make actionable decisions.\n\nAgainst this backdrop, we believe that there are now a number of very practical ingredients that can potentially act as part of a roadmap to ensure the development of robust and fair science indicators that have community support. We outline these below.\n\nFor research metrics to be understood and used consistently there needs to be agreement around common vocabulary and descriptors of terms. As an example, CASRAI is building a dictionary of scholarly research output terminology. This dictionary has multiple users, including groups involved in the development of research metrics.\n\nThe definitions themselves need to be definitive, openly sourced, managed, curated, versionable and quality assured. Additionally, the data upon which the indicator is best derived need to be identified. One of the challenges around research indicator derivation to date is that many of those in common usage are based upon opaque methodologies and proprietary datasets. This has eroded trust among the user base - many of whom don’t have access to the data - and pragmatically makes it difficult for particular metrics to be reproduced and explained.\n\nAn important concern around current research metrics is that they are often compiled and enabled through proprietary databases with locked access to the underlying data. This creates challenges for third parties wanting to replicate a metric, apply it in a different context or produce aggregate datasets from multiple sources. It also leads to mistrust and scepticism among users and those whose research is described (Wilsdon et al., 2015).\n\nThe community needs a reference set – a Gold Standard (GS) dataset – for proper metrics development. A GS dataset would also enable an ongoing appraisal of best practice for a particular metric’s use and application – and potential inter-relationship with other metrics. 
Currently, a wide array of metrics is available. These make similar claims, but derive from different formulations. By correlating these options against a GS dataset, analysts can conduct systematic and rigorous testing and benchmarking to surface the ones most useful across different applications. In short, while the open availability of raw metrics data is critical to transparency and to support innovation in metrics development and provisioning, we need a separate reference dataset that ensures the raw data which underlie a specific metric or metrics are properly preserved and audited.\n\nIn addition to the raw data, required analytical tools also need to be made available for true transparency and reproducibility (and thereby trust in the metrics). This includes products, such as a defined (minimum core) dataset, and open source standards on how the data are derived and defined (perhaps through an intermediary such as Crossref or by a cross-functional stakeholder group). The National Information Standards Organization’s work in this area can be built upon in future research. Commercial entities might also serve as potential sources where available to the broader community.\n\nPerhaps most importantly given the stakes involved, we need greater consensus around how science and research-related metrics are best used to support decision making in science. As noted earlier, metrics need to be created to answer specific research evaluation questions. Research on research (science of science) is needed to help answer the important research evaluation questions and determine which metrics are useful and have the potential to provide insight into these research questions. As researchers adopt new ways to share and publish their research at speed, metrics and indicators that track and assess the value, quality and utility of those activities need to keep pace.\n\nWe see a valuable role for funders to play in supporting this particular research area. 
The community working in the field is small and funding can be difficult to allocate even where funding for research evaluation studies is available (such as the UK’s Medical Research Council’s report on how science is funded). Focused funding is also needed to train a cadre of researchers to conduct experiments around what works for science and research, and this includes analyses of research assessment and metrics. Additionally, they (along with policy-makers) can contribute use cases and research questions to the researchers developing metrics to ensure that the outputs are practical and meet real needs. Simply by taking additional notice of this field, funders will be making a critical contribution towards highlighting its significance and expediting progress. Having key leverage on the drivers, incentives and value systems of the research ecosystem, they can enable a shift in behaviours and culture.\n\nAs noted in Wilsdon et al., 2015, the digital infrastructure underpins not only the research enterprise but also the creation of metrics. Scholarly outputs of all stripes – articles, pre-prints, datasets, software, and peer review reports – need identifiers (such as DOIs) within this networked ecosystem to facilitate the derivation of metrics. This need extends beyond research artefacts: identifiers for researchers (ORCIDs), funders (Open Funder Registry), as well as research institutions. For research metrics to be open, trusted and useful, research objects need to be reliably and meaningfully linked to each other, as well as to researchers, institutions and funding agencies to support strategy and decision-making (see for example Amsterdam Call for Action on Open Science, 2016).\n\nCurrently, research and documentation on metrics is dispersed. As a non-disciplinary grouping, not a single scholarly community or society spans all the relevant groups working on theory, advancing analytics, data quality, visualisation, policy (and economics). 
No single party takes responsibility for collecting or documenting process, evidence of good or bad practice, or any other significant issues. The value of these resources may not be immediately obvious, but their absence can stunt the progress of metrics utility, innovation, transparency and dependability.\n\n\nA path to fulfil these needs\n\nAs researchers adopt new ways to share their scholarly contributions at speed, metrics which describe and provide insight into that work need to keep pace. Different metrics are likely to have different value across output types, research fields and in different circumstances. Yet we believe that a coordinated, cross community effort to enhance our knowledge and application of research metrics is both the timely and sensible route to take. By leveraging the capacity of multiple sectors, we can more effectively create the evidence base context needed to develop metrics able to serve a modern, vibrant research enterprise.\n\nRecent and current initiatives to study and report on scientometrics are evidence of the growing urgency of this issue, but do not so far encompass a sufficient range of functions, regions, technologies or the wider community. For this discipline to be able to progress towards its true potential, a global, cross-stakeholder and truly open project and consultation process needs to be devised. Governance and consultation processes will be critical in order to build trust amongst a wide range of users. Mediation between non-profit and commercial entities, funders, researchers and institutions will need to be baked into the project’s fundamental structure.\n\n\nConclusion\n\nThis piece is the result of a number of conversations between the authors and others operating in the metrics field. The writing process was punctuated by the EU Consultation on Metrics and, more recently, the announcement of the UK’s Responsible Metrics Forum. 
These initiatives informed our thinking but, as outlined above, did not fully encompass the scale of action and community involvement we argue is necessary for the paradigmatic change required.\n\nWe propose that a coordinated, cross-sector and international effort is required, which operates openly and shares data, resources and expertise across the stakeholders represented. As a next step, we call for the establishment of a group (and adequate funding) to develop the scope and structure of the major project outlined above through community consultations. We hope to see consensus build around a reputable, transparent entity representing and spearheading the further development and safeguarding of scientometrics. Powered by the community, this entity would bear the responsibility to take actions that would address the range of concerns and requirements outlined above.\n\nThis community entity might take any number of forms. A few examples include:\n\n1. an independent non-profit membership organisation (e.g. ORCID) managed by a cross-sector board and executive.\n\n2. an independent research metrics foundation – funded by a consortium of national and independent research funding agencies, whose aim would be to deliver establishment of\n\n3. an independent, international office of research metrics – funded by national governments and organisations, whose remit would be to develop standards and deliver research metrics – including providing a ‘Frascati Manual’ of definitions and standards for research/science metrics. This could include an ongoing programme of research (including the ability to commission research) to keep pace with developments in science and research practice.\n\n4. 
an international, distributed hub of experts (similar to a learned society) that could both deliver and advise on scientific indicators, and either commission work itself or work with an existing independent funding agency to support a research programme.\n\nMore than ever, scholarly research needs effective, trusted research metrics in today’s dynamic communications environment. Each of the actions proposed here is concrete and practical. Yet they are all united in service of this broad and ambitious goal, so fundamental to the support of the scholarly enterprise at large.",
"appendix": "Author contributions\n\n\n\nAll the authors contributed equally to this article.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nHicks D, Wouters P, Waltman L, et al.: Bibliometrics: The Leiden Manifesto for research metrics. Nature. 2015; 520(7548): 429–431.\n\nLane J: Let's make science metrics more scientific. Nature. 2010; 464(7288): 488–489.\n\nWilsdon J, Allen L, Belfiore E, et al.: The Metric Tide: Report of the Independent Review of the Role of Metrics in Research Assessment and Management. 2015."
}
|
[
{
"id": "18877",
"date": "03 Jan 2017",
"name": "David J. Currie",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nIn an opinion piece, Lin, Murphy, Taylor and Allen point out that: 1) researchers’ contributions to their fields are evaluated for purposes of research grant allocations, career advancement, prizes, etc.; and 2) the quality and validity of metrics that underlie these decisions are not well studied.\nLin and colleagues call for the development of a discipline that will improve the evidence and infrastructure with which science is evaluated. Their point is well-taken. The manuscript was useful (to me) in pointing out some references and links to initiatives that are now underway in this field. However, to tell the reader what needs to be done is much less useful than actually doing something. This manuscript offers some reasonable suggestions about steps that might improve the evaluation of science; the difficulty is that the article does not present any evidence of an advance. More than opinion is necessary to advance the field.",
"responses": []
},
{
"id": "19300",
"date": "13 Jan 2017",
"name": "Maria Nedeva",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nI believe that the title is appropriate and the abstract captures the essence of this opinion piece. I’m emphasising that this is an opinion piece because judgement regarding sources and data would be very different. There is much that I like about this piece and one of the main things is that it puts out there a very important at present discussion: how we use indicators in research evaluation and how we can do this better (or at least in a way that doesn’t disadvantage the development of science). The authors are well informed about the state of play and have given serious consideration to what can be done.\n\nI also can see how the very practical proposals in this piece could be implemented and yield some results.\nMy reservations are about the failure to reach beyond the ‘technical’ – this is very needed though probably outside of what the authors have set out to achieve here. This is why, I believe that this piece should be published and, possibly, scholars in the UK and beyond encouraged to take part in this kind of discussion.\n\nHope this helps.",
"responses": []
},
{
"id": "19504",
"date": "13 Feb 2017",
"name": "Ivan Oransky",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThank you for the opportunity to review this manuscript. We should note, however, that given the nature of the manuscript, which is really an editorial, we are of the opinion that it probably does not require the kind of rigorous peer review usually applied to original research.\nThe manuscript presents a good summary of many of the issues facing research into metrics, and offers a plan for addressing them, at a very high level. We would suggest a few additions for improvement:\n\nA nod to the fact that metrics can always be gamed, and that while making them more scientific could cut down on this risk, it will likely always be possible. It might also bear mentioning that no matter what metrics we end up with, they are no substitute for reading a particular paper. Put another way, metrics may be useful for certain things (eg large-scale productivity) but not others (eg quality).\n\nA more specific set of next steps. What first steps might funders, scientists, and administrators take? A systematic review and meta-analysis? A gathering to frame the questions and identify funding priorities? Etc.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-2897
|
https://f1000research.com/articles/6-443/v1
|
07 Apr 17
|
{
"type": "Software Tool Article",
"title": "DangerTrack: A scoring system to detect difficult-to-assess regions",
"authors": [
"Igor Dolgalev",
"Fritz Sedlazeck",
"Ben Busby"
],
"abstract": "Over recent years, multiple groups have shown that a large number of structural variants, repeats, or problems with the underlying genome assembly have dramatic effects on the mapping, calling, and overall reliability of single nucleotide polymorphism calls. This project endeavored to develop an easy-to-use track for looking at structural variant and repeat regions. This track, DangerTrack, can be displayed alongside the existing Genome Reference Consortium assembly tracks to warn clinicians and biologists when variants of interest may be incorrectly called, of dubious quality, or on an insertion or copy number expansion. While mapping and variant calling can be automated, it is our opinion that when these regions are of interest to a particular clinical or research group, they warrant a careful examination, potentially involving localized reassembly. DangerTrack is available at https://github.com/DCGenomics/DangerTrack.",
"keywords": [
"Breakpoint",
"Structural Variants",
"SNP",
"CNV",
"Clinical Genetics"
],
"content": "Introduction\n\nThe advent of next generation sequencing has enabled the comparison of cells, organisms, and even populations at the genomic level. Whole genome sequencing experiments are run worldwide on a daily basis with various aims, from exploring novel genomes to diagnosing complex variations in high-ploidy cancer samples. A common step in all of these studies is the mapping of the sequence to a reference genome or assembly to identify variations (whole genome sequencing) or expression (RNA sequencing) of the sample.\n\nMultiple studies so far have suffered from mapping artifacts typically occurring in highly variable regions, including single nucleotide polymorphisms (SNPs) and structural variants (SVs), which may be repetitive regions or regions that are not correctly represented by the reference genome (Degner et al., 2009). Multiple methods have been suggested to overcome this bias, including constructing a personalized reference genome (Satya et al., 2012), sequencing the parental genomes (Graze et al., 2012), building graph genomes over all known variants (Dilthey et al., 2015), or carefully reconciling particular subregions. The latter includes discarding reads using a mapping quality filter, realigning reads locally, or computing a localized de novo assembly using the Genome Analysis Toolkit to improve the quality of SNP calls. However, all these methods often depend on the sample quality (e.g. coverage, error rate), may result in additional expenses, and are often optimized only for human genome data.\n\nHere, we present DangerTrack, the first approach to automatically classify difficult-to-assess regions by combining annotated features, such as mappability and SV calls. DangerTrack can be applied to any genome and organism of interest. It runs within minutes and provides a Browser Extensible Data (BED) file with a score for every 5 kb region. 
The magnitude of the score indicates how problematic a region is for SNP calling, and thus how difficult accurate mapping can be. DangerTrack represents a flexible and easy-to-use method to detect hard-to-analyze regions with a pure mapping approach. We compared the results of DangerTrack to the blacklisted regions of ENCODE (https://personal.broadinstitute.org/anshul/projects/encode/rawdata/blacklists/hg19-blacklist-README.pdf), as well as to the list of problematic regions from the National Center for Biotechnology Information (NCBI).\n\n\nMethods\n\nWe downloaded the SV dataset from the 1000 Genomes Project (Sudmant et al., 2015) (1KG) from dbVar (estd219; https://www.ncbi.nlm.nih.gov/dbvar/studies/estd219/), as well as a 16-candidate SV callset from the Genome in a Bottle (GIAB; Zook et al., 2016) (Ashkenazi son dataset available at ftp://ftp-trace.ncbi.nlm.nih.gov/giab/ftp/release/AshkenazimTrio/HG002_NA24385_son/latest; NIST Reference Material 8391: HG002 and NA24385). These Variant Call Format (VCF) datasets were converted into BED files using SURVIVOR (Jeffares et al., 2017) (available from https://github.com/fritzsedlazeck/SURVIVOR). Each SV was represented by two entries in the BED file, listing the breakpoints of each reported SV.\n\nNext, we binned the breakpoints into 5 kb windows and counted the number of SVs in these windows. The number of SV breakpoints per window was normalized by the 99% quantile number of breakpoints within a window across the whole genome. Thus, the higher the ratio, the more SV breakpoints are in a given window, and therefore the less trustworthy the reference seems to be.\n\nWe downloaded the 50 bp and 100 bp mappability tracks from UCSC (http://hgdownload.cse.ucsc.edu/goldenPath/hg19/encodeDCC/wgEncodeMapability/).\n\nThese mappability tracks contain a measurement for each base of the reference genome. 
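The breakpoint binning and quantile normalisation described above can be sketched as follows. This is a minimal illustration only: the actual DangerTrack pipeline uses bash and R scripts, and the function names here (`bin_breakpoints`, `normalise`) are ours, not part of the repository.

```python
from collections import Counter

WINDOW = 5000  # 5 kb bins, as used by DangerTrack

def bin_breakpoints(breakpoints, window=WINDOW):
    """Count SV breakpoints per fixed-size genomic window.

    `breakpoints` is an iterable of (chromosome, position) pairs;
    each SV contributes two breakpoints, as in the BED conversion above.
    """
    counts = Counter()
    for chrom, pos in breakpoints:
        counts[(chrom, pos // window)] += 1
    return counts

def quantile(values, q):
    """Simple nearest-rank empirical quantile (avoids external dependencies)."""
    s = sorted(values)
    idx = min(len(s) - 1, int(q * len(s)))
    return s[idx]

def normalise(counts, q=0.99):
    """Scale window counts by the 99% quantile count, capping at 1."""
    ref = quantile(list(counts.values()), q)
    return {w: min(1.0, c / ref) for w, c in counts.items()}
```

With this scheme, a window at or above the 99% quantile of breakpoint density scores 1 (least trustworthy), matching the normalisation described in the Methods.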
These tracks were generated using different window sizes, with high signals indicating areas where the sequence is unique. The GEM (GEnome Multitool) mapper (http://big.crg.cat/services/gem_aligner) was used to generate CRG k-mer alignability. The method is equivalent to mapping sliding windows of k-mers back to the genome. For each window, a mappability score is computed as 1 divided by the number of matches found in the genome. Thus, a score of 1 indicates one match in the genome, 0.5 indicates two matches in the genome, and so on.\n\nNext, we computed the score for uniqueness of regions. This was done by subtracting the average mappability value from 1. Thus, a value of 0 represents a unique region. Similarly to the SVs computation method, we summarized the average uniqueness score per 5 kb window, obtained by simple average across the window for both 50 bp and 100 bp tracks.\n\nWe computed the DangerTrack score by combining all four features with a uniform weighting schema. Note that our score operates between 0 and 1, where 0 means a unique, easy-to-assess region, and 1 means a region that is repetitive and enriched for structural variations.\n\nThe resulting genome-wide DangerTrack score in BED and bedGraph formats are available at: https://github.com/DCGenomics/DangerTrack. The repository also contains the bash and R scripts for downloading, cleaning, and summarizing the data, so the score can be computed independently or for different window sizes. 
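The uniqueness transform and uniform-weight combination just described amount to the following. Again this is an illustrative sketch in Python rather than the repository's own bash/R code, and the function names are assumptions.

```python
def uniqueness(mappability_values):
    """Per-window uniqueness: 1 minus the mean mappability (0 = fully unique)."""
    return 1.0 - sum(mappability_values) / len(mappability_values)

def dangertrack_score(sv_1kg, sv_giab, uniq_50bp, uniq_100bp):
    """Combine the four per-window features with uniform weights.

    All inputs are assumed pre-scaled to [0, 1]; the result is 0 for a
    unique, easy-to-assess window and 1 for a repetitive, SV-enriched one.
    """
    return (sv_1kg + sv_giab + uniq_50bp + uniq_100bp) / 4.0
```

Because the weighting is uniform, a window flagged by only one feature can score at most 0.25; the Conclusions note that revising these weights is planned future work.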
The code can be adapted for use with other genomes assuming comparable mappability and structural variation data sets are available.\n\nWe downloaded the Blacklisted Regions that are defined as problematic by the ENCODE Data Analysis Center (https://www.encodeproject.org/annotations/ENCSR636HFF/), as well as the list from the Genome Reference Consortium (ftp.ncbi.nlm.nih.gov/pub/grc/human/GRC/Issue_Mapping/) of regions that either underwent manual curation from GRCh37 to GRCh38 or are listed as problematic for future versions of the human genome. For the comparison, we binned these regions into 5 kb windows, as in our approach. Next, we compared the values between our track and the generated ENCODE and GRC tracks.\n\n\nResults\n\nTo assess the ability of DangerTrack to highlight suspicious regions, we computed the DangerTrack score over the human reference genome (hg19) using data from the 1000 Genomes Project and GIAB, as well as mappability tracks from UCSC. We downloaded 72,432 SVs from the 1000 Genomes Project data and 135,653 SVs from the GIAB database, for a total of 363,234 breakpoints. Figure 1 shows the histogram of the number of SV breakpoints within each 5 kb bin. As expected, the SV events predicted by the 1000 Genomes Project (labeled 1KG in Figure 1) and GIAB data highlight regions in the genome with high structural variability. However, very few regions exist that incorporate more than 20 events within 5 kb. Interestingly, these two tracks are not very similar, with a correlation of only 0.06 over a subsample of 10,000 bins.\n\nFigure 1. (A) Histogram over the hg19 genome of mappability with respect to 50 bp. (B) Histogram over the hg19 genome of mappability with respect to 100 bp. (A and B) are obviously closely related, with the exception that (A) (50 bp regions) includes more regions that have on average a lower score. (C) Distribution of SVs across the hg19 genome based on 16 SV data sets. 
(D) Distribution of SVs across the hg19 genome based on the 1000 Genomes Project call set.\n\nFor the mappability data, we naturally expect a high correlation, since the regions that are not unique within a 100 bp region will also not be unique given a 50 bp sequence. The correlation over the 10,000 subsampled 5 kb regions is therefore high (0.95). We chose these two mappability tracks as they resemble the often-used read lengths and also take into account local alignment-based clipping of reads.\n\nNext, we compared the DangerTrack score to manually curated regions from ENCODE and NCBI. These regions represent areas along the genome that are either discarded due to their problematic mapping from previous experiences during the ENCODE project, updated in GRCh38, or still under manual curation for future genome releases. Figure 2 and Figure 3 represent the comparison between the DangerTrack score and the listed regions for ENCODE and NCBI, respectively. We observe a very high correlation for both tracks, highlighting that the DangerTrack score captures these regions. Figure 4 represents the overlap of the DangerTrack score and the annotated regions from ENCODE and NCBI for chromosomes 1–8.\n\n\nConclusions and next steps\n\nThe results of DangerTrack overlap with previously established problematic regions from the ENCODE blacklist and with regions of assembly error identified by the Genome Reference Consortium. Furthermore, we identified 48,891 5 kb regions (7.9% of all regions) that are not trustworthy. Thus, the mappability score and the concentration of SV breakpoints in a region indicate that the region is less reliable for SNP calling alone. This difficulty may be due to a high degree of difference in the reference sequence or the number of unresolved regions. While we showed that DangerTrack is capable of capturing these challenges for hg19, this method is universally applicable regardless of organism. 
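The pairwise track comparisons reported above (correlations computed over 10,000 subsampled 5 kb bins) can be reproduced with a sketch along the following lines. `track_correlation` and `pearson` are our names for illustration; they are not part of the DangerTrack code, which performs the equivalent computation in R.

```python
import math
import random

def pearson(x, y):
    """Plain Pearson correlation coefficient, avoiding external dependencies."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def track_correlation(track_a, track_b, n_bins=10_000, seed=0):
    """Correlate two per-window tracks over a random subsample of shared bins.

    Tracks are dicts mapping window identifiers to scores, as produced by
    any per-window binning of the kind described in the Methods.
    """
    shared = sorted(set(track_a) & set(track_b))
    random.seed(seed)
    sample = random.sample(shared, min(n_bins, len(shared)))
    return pearson([track_a[w] for w in sample], [track_b[w] for w in sample])
```

Subsampling keeps the comparison tractable genome-wide while still giving a stable correlation estimate; fixing the seed makes the subsample reproducible.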
The mappability tracks can be established easily and SV calls from other organisms can be incorporated. Nevertheless, DangerTrack is only a first step in understanding the underlying complexity of certain regions. Future work will include a revised weighting of the individual tracks.\n\n\nSoftware availability\n\nThe code for the pipeline and the resulting genome-wide DangerTrack score are publicly available at: https://github.com/DCGenomics/DangerTrack\n\nArchived source code as at time of publication: doi: 10.5281/zenodo.438344 (igor & DCGenomics, 2017).\n\nLicense: MIT",
"appendix": "Author contributions\n\n\n\nI.D., F.J.S., and B.B. participated in designing the study, carrying out the research, and preparing the manuscript. I.D., F.J.S., and B.B. were involved in the revision of the draft manuscript and have agreed to the final content.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nF.S. was supported through a National Science Foundation award (DBI-1350041) and National Institutes of Health award (R01-HG006677). B.B. was supported by the Intramural Research Program of the National Library of Medicine.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nThe authors wish to thank NCBI Hackathon organizers, Cold Spring Harbor Labs, Mike Schatz, Vamsi Kodapalli. The authors also thank Lisa Federer, NIH Library Writing Center, for manuscript editing assistance.\n\n\nReferences\n\nDegner JF, Marioni JC, Pai AA, et al.: Effect of read-mapping biases on detecting allele-specific expression from RNA-sequencing data. Bioinformatics. 2009; 25(24): 3207–3212.\n\nDilthey A, Cox C, Iqbal Z, et al.: Improved genome inference in the MHC using a population reference graph. Nat Genet. 2015; 47(6): 682–688.\n\nGraze RM, Novelo LL, Amin V, et al.: Allelic imbalance in Drosophila hybrid heads: exons, isoforms, and evolution. Mol Biol Evol. 2012; 29(6): 1521–1532.\n\nIgor; DCGenomics: NCBI-Hackathons/DangerTrack: DangerTrack Release 1.1 [Data set]. Zenodo. 2017.\n\nJeffares DC, Jolly C, Hoti M, et al.: Transient structural variations have strong effects on quantitative traits and reproductive isolation in fission yeast. Nat Commun. 2017; 8: 14061.\n\nSatya RV, Zavaljevski N, Reifman J: A new strategy to reduce allelic bias in RNA-Seq read mapping. Nucleic Acids Res. 2012; 40(16): e127.\n\nSudmant PH, Rausch T, Gardner EJ, et al.: An integrated map of structural variation in 2,504 human genomes. Nature. 2015; 526(7571): 75–81.\n\nZook JM, Catoe D, McDaniel J, et al.: Extensive sequencing of seven human genomes to characterize benchmark reference materials. Sci Data. 2016; 3: 160025."
}
|
[
{
"id": "21625",
"date": "24 Apr 2017",
"name": "Melissa A. Gymrek",
"expertise": [
"Reviewer Expertise Bioinformatics"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThe authors present DangerTrack, a method to score regions of the genome that are likely to be problematic for variant calling. The method combines information from structural variant catalogs and mappability scores across the genome into a single track whose score is meant to correlate with the “trustworthiness” of a region.\nAssessing which regions of the genome are likely to be error-prone for variant calling is indeed an important issue, especially for the use case mentioned of clinicians and biologists interested in particular variants. The manuscript is for the most part well-written and the method is clearly described. However, the rationale for developing a new annotation track on top of existing annotations such as the ENCODE “black list” is not well described, and the authors do not provide sufficient evidence of the claim that DangerTrack successfully classifies “difficult” regions of the genome for variant calling. These and other concerns are outlined in more detail below.\nMajor comments:\nThe rationale and goal are not clearly defined: What was the primary rationale for developing DangerTrack? It was not immediately clear why another annotation is needed, given that tracks such as the ENCODE blacklist and the NCBI problematic region list already exist. One potential reason is that those lists are inadequate, and we need a list that is better at picking out truly problematic regions for variant calling. 
If that was indeed the goal, the authors do not present sufficient evidence that their annotation is any better than the existing lists. On the other hand, another reasonable motivation to create this tool is that the ENCODE/NCBI lists were created manually, and could not be easily constructed for a new genome or a new individual. If that is the case, the authors should explicitly state early on that this was their primary motivation. Finally, there are other automated tools/tracks such as RepeatMasker and dustmasker that might be used to filter likely low quality variants. How does DangerTrack compare to those?\n\nInsufficient evaluation: The authors claim that their score, based on the # SV breakpoints/5kb, tracks with SNP call quality. However, this is never backed up with any evidence. Thus, it is impossible to tell whether this track actually adds any value in filtering low quality SNP calls. One potential validation would be to look at SNP quality scores or SNP call accuracy stratified by DangerTrack value, and show a relationship. If DangerTrack does a better job of classifying incorrect vs. true SNP calls than other tracks, then that would be clear evidence that it gives value added over existing tools. 
Similarly the discussion states that the authors identify ~48K “untrustworthy” regions, but there is no data to back up the statement that those regions are indeed enriched for incorrect calls.\n\nMinor comments:\nLast paragraph of introduction, suggest to change “height of the score” to “magnitude of the score”\n\nLast sentence before “Evaluation of DangerTrack”, change “reassemble” to “resemble”\n\nSame sentence, how are the mappability tracks related to base-clipped reads?\n\nHow did the authors decide on the weighting scheme to combine different features?\n\nThe low overlap between 1000 Genomes and GIAB breakpoints raises concerns over how reproducible DangerTrack will be and how sensitive it is to the quality of the SV catalog used as input.\n\nFigures 2 and 3 are not well described, it was not clear what is being depicted.\n\nThe authors indicate that there is “very high correlation” with the ENCODE blacklist track. This should be stated more quantitatively, in terms of e.g. correlation or % overlap.\n\nIs the rationale for developing the new software tool clearly explained? No\n\nIs the description of the software tool technically sound? Yes\n\nAre sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? Yes\n\nIs sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? Partly\n\nAre the conclusions about the tool and its performance adequately supported by the findings presented in the article? No",
"responses": []
},
{
"id": "21624",
"date": "24 Apr 2017",
"name": "Justin M. Zook",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThe authors present a tool that finds potentially problematic regions for small variant calling, based on mappability and previously called SVs. It's an interesting and useful tool, and I have a few suggestions for improvement:\nIt is worth noting that the 1000 Genomes SVs were discovered using only short reads in thousands of individuals, and the GIAB SVs were discovered using short and long reads in only one mother-father-son trio. There are nuances to this that would be useful to discuss. In particular, GIAB SV locations may or may not be SVs in other individuals, and because they are not highly curated, they may contain FPs around repetitive regions, but these are still good regions to identify as problematic for small variant calling\n\nI don't see the script for running SURVIVOR commands and generating the dangertrack bed files in the GitHub site, which would be useful to include to reproduce the results.\n\nWhen the authors say \"Furthermore, we identified 48,891 5 kb regions (7.9% of all regions) that are not trustworthy,\" what dangertrack score threshold do they use (e.g., ==1, >0.9, >0.5, ...)?\n\nIt may be interesting to compare to the GIAB high-confidence bed file for NA12878 (ftp://ftp-trace.ncbi.nlm.nih.gov/giab/ftp/release/NA12878_HG001/NISTv3.3.2/GRCh37/HG001_GRCh37_GIAB_highconf_CG-IllFB-IllGATKHC-Ion-10X-SOLID_CHROM1-X_v.3.3.2_highconf_nosomaticdel.bed) and the Platinum Genomes bed file for NA12878 
(ftp://ftp-trace.ncbi.nlm.nih.gov/giab/ftp/technical/platinum_genomes/2016-1.0/hg19/small_variants/ConfidentRegions.bed.gz), since these are both regions where they purport to make high-confidence small variant calls.\n\nIs the rationale for developing the new software tool clearly explained? Yes\n\nIs the description of the software tool technically sound? Yes\n\nAre sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? Partly\n\nIs sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? Yes\n\nAre the conclusions about the tool and its performance adequately supported by the findings presented in the article? Yes",
"responses": []
},
{
"id": "21627",
"date": "24 Apr 2017",
"name": "Andrew Carroll",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis work uses the frequency of detected structural variant events as a proxy for reference quality and thus as a measure of trustworthiness for variant calls in a region. The foundation for the work is a consensus set of multiple SV methods applied in both Genome in a Bottle and 1000 genomes.\nThe concept is interesting and well applied. The analysis is clearly communicated, and the code is both openly available and interpret-able.\nI do have a minor issue with the communication of the issue. The article doesn't get specific about why performing analysis in these genomic regions might be \"dangerous\", which I would categorize as:\nRegions which represent a problem in the reference itself (either regions of mis-assembly, or locations where a rare SV event was present in the sequence used which the rest of the population doesn't have)\n\nRegions which may contain \"SV hotspots\"\n\nLow complexity regions, centromeric regions, telomeric regions.\n\nSegmental duplications with insufficient divergence, mobile elements.\n\nRegions of high heterozygosity in the population - which may or may not be SV in origin but due to their diversity may manifest as SV events.\n\nSince the paper doesn't specifically break down these possibilities, I feel that the tone of the discussion would imply to a naive reader that these regions are \"dangerous\" because they are SV hotspots, when I think the authors would agree with me that reference assembly issues, low complexity regions, and segmental 
duplications are probably responsible for most of the \"dangerous\" regions.\nIt might be worthwhile to spend a few sentences to explain why a region might be dangerous, though the paper is written so tightly around the problem that it might distract the paper from its core point.\nSeparately, another distinction it might be worthwhile to make is that the SVs from the 1000 genomes project are quite different than the HG002 SVs. In one case, we have SVs generated over a population using more limited sequencing technology. HG002 is very different, where we are instead generating a wealth of calls from a single sample. Both approaches are worthwhile and it is appropriate to combine them. It would be interesting to note whether there are differences between the types of regions the two approaches identify.\nFrom this point, one could ask a number of additional questions beyond the scope of the manuscript. For example - which of these regions are no longer SV hotspots when using longer reads.\nHopefully authors of SV tools will begin to take this DangerTrack into account when developing their methods - either to give quality values to their calls, or as a means to separate calling on hard and easy regions of the genome.\nOverall, the work is sound and the paper very well written.\n\nIs the rationale for developing the new software tool clearly explained? Yes\n\nIs the description of the software tool technically sound? Yes\n\nAre sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? Yes\n\nIs sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? Yes\n\nAre the conclusions about the tool and its performance adequately supported by the findings presented in the article? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-443
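Zook's question above, namely which DangerTrack score cutoff defines the "48,891 5 kb regions (7.9% of all regions) that are not trustworthy", comes down to thresholding per-window scores read from a BED-style track. A minimal Python sketch of that thresholding step, with invented scores and an assumed 0.5 cutoff (not DangerTrack's actual output, scoring scale, or API):

```python
# Sketch: flag fixed-size windows whose danger score exceeds a threshold,
# as one would when reading a DangerTrack-style BED file.
# All regions, scores and the 0.5 cutoff below are hypothetical.

def flag_untrustworthy(regions, threshold):
    """regions: iterable of (chrom, start, end, score); return flagged regions."""
    return [r for r in regions if r[3] > threshold]

regions = [
    ("chr1", 0, 5000, 0.05),
    ("chr1", 5000, 10000, 0.92),   # e.g. a window overlapping an SV call
    ("chr1", 10000, 15000, 0.40),
    ("chr1", 15000, 20000, 0.71),
]

flagged = flag_untrustworthy(regions, threshold=0.5)
fraction = len(flagged) / len(regions)
print(len(flagged), fraction)  # prints: 2 0.5
```

The point of the sketch is that the reported 7.9% figure is only meaningful once the cutoff is stated; moving the hypothetical threshold from 0.5 to 0.9 here would drop the flagged fraction from 50% to 25%.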
|
https://f1000research.com/articles/5-2790/v2
|
25 Jan 17
|
{
"type": "Opinion Article",
"title": "Animating and exploring phylogenies with fibre plots",
"authors": [
"William D. Pearse"
],
"abstract": "Despite the progress that has been made in the visualisation of information since Haeckel's time, phylogenetic visualisation has moved forward remarkably little. In this brief essay, I give a short review of what I consider to be some recent major advances, and outline a new kind of phylogenetic visualisation. This new graphic, the fibre plot, uses the metaphor of sections through a tree to describe change in a phylogeny. As an animation, I suggest it is a powerful method to help interpret large phylogenetic hypotheses, although snapshots of it can also be displayed. As we enter the Anthropocene, I argue there has never been a greater need to know humanity's true place in the world, as depicted in the tree of life.",
"keywords": [
"phylogeny",
"visualisation",
"3D",
"fractal",
"animation",
"tree of life"
],
"content": "\n\nA new generation of phylogeneticists are piecing together the entire tree of life, making vast phylogenies of millions of taxa1,2. Many have produced tree-like depictions of the relationships among species, both before and after Darwin described the origin of species3,4, but Haeckel’s drawings are perhaps the most well-known. As our phylogenies become larger, a problem has emerged: humans cannot easily interpret phylogenies with millions of tips. In this brief essay, I will describe recent progress in the visualisation of phylogenies, and outline a new kind of plot in development (the fibre plot). My aim is not to write a review [c.f. 5], but rather to provide an opinionated commentary on some major milestones in the progress of phylogenetic visualisation.\n\nHaeckel’s phylogenies are beautiful to look at, and convey the overall structure of a phylogeny well. Each minor branch rarely maps onto a particular species, but their presence reminds the reader of the ever-changing nature of diversification. Both Haeckel and Darwin convey two kinds of information in their visualisations: time through depth on the page, and relatedness through the branching structure itself. Haeckel is also notable for producing a series of phylogenies, each examining a finer phylogenetic scale. Haeckel grasped that humans cannot process the fine details of all species without becoming lost, and that a series of phylogenies provides the same information in a more digestible format than a single, large, fully-resolved tree.\n\nThe last one hundred years have seen transformative changes to phylogenetic inference [see 6, but the same is not true of phylogenetic visualisation. The pace of change of phylogenetic visualisation has not matched that of other aspects of statistical visualisation. A time-traveller from 1859 could decipher a phylogeny from 2016 with On the Origin of Species3 as a guide, but the box-plots7 and histograms8 we rely on today would be foreign to them. 
Circular (“radial”) phylogenies are sometimes preferred when space is limited, and “magnifiers” in some computer programs highlight certain parts of the tree in more detail [e.g. 9], but for the most part any advances have been relatively minor.\n\nTo my knowledge, Paloverde10 was the first openly-released 3D phylogeny viewer. While it offered the user the ability to explore the most tree-like phylogenetic depiction to date, it also permitted the user to fly around a circular phylogeny rendered in a virtual space. Paloverde is notable for its claim that phylogeny is something to be explored, not merely viewed, and that “a 3D world, offers visual cues that aid in navigation and display that is unavailable in strictly 2D versions of the same layout.”10 The author of Paloverde, like Haeckel, recognised that scientists need to shift between finer and coarser phylogenetic scales when examining data, and so allowed users to collapse nodes at will. The program was a major advance in helping phylogeneticists conceptualise their own phylogenetic hypotheses. At least as transformative was the release of OneZoom11: a fractal phylogeny representation capable (theoretically) of displaying the entire tree of life on one page. OneZoom also requires the user to explore the tree, scanning up and down between finer and coarser details to make sense of the entire tree. Critically, OneZoom’s authors recognised that we are reaching the limits of what can be displayed in books: “[w]e now need to take the next step with a transition to data visualization that is optimized for interactive displays rather than printed paper.”11 They suggest that the way to display the next generation of data is to use the next generation of technology. A common thread running through these developments is their capacity to change the information displayed to the viewer, to better emphasise difference in structure across different phylogenetic depths. 
Consequently, I suggest the use of a new visualisation, the “fibre plot”, which is intended to leverage our natural ability to detect visual change through time. The fibre plot may be considered a horizontal slice through the tree of life, taken at whatever height (depth) the viewer requires (Figure 1). By moving along the tree, from the root to the tip, viewers will see the relative width of each fibre, and so gauge the number of terminal tips subtending that clade. I suggest an animation, with frames recorded at equal intervals along that trunk, which will provide the viewer with an intuitive sense of the timing of the diversification of major clades. I emphasise that, while Figure 1 shows the underlying logic behind the plot, the “plot” should really be called an animation - it is most readily interpretable when the user watches a video composed of successive slices through the trunk of the tree. I have included a toy implementation of this preliminary idea in R (Supplementary File 1), and an example of how it can be used to visualise the entire mammal tree of life12 (Supplementary File 2). Picking an ideal layout of the tips in the two-dimensional plane at the top of the tree is complex; a colleague has suggested using Hilbert curves based on the phylogeny’s distance matrix as an alternate solution to the one I have implemented.\n\nOn the left, I show a phylogeny (in grey) with a series of slices cut through it (in black). To the right, I show views through those slices surrounded in black outlines: each of these slices forms the basis of a fibre plot. Within each slice, a square represents descendent tips, and colours of those squares represent the composition of clades within a particular time slice. Squares of the same colour form a “fibre” in the tree of life. A true fibre plot would be an animation of the transition between these slices, showing how the clades (fibres) that make up the tree split as diversification takes place. 
Alternate colouring schemes are possible for the fibres (e.g., representing clade age), and I suggest displaying the age of each slice as the animation progresses (see Supplementary files for examples).\n\nDespite humanity being closer than ever to a true tree of all life on Earth1,2, phylogenetic visualisation may seem like a niche topic. I strongly feel that phylogenetic visualisation is critical to grasp the full extent of our planet’s biodiversity. Human activity has carelessly altered almost every aspect of our planet, and we must now live with the shame and hubris of a geologic age we named after ourselves13. There has never been a greater need to find a way to show humanity our true place in the world. In whatever sense a phylogeneticist can have a duty, I believe it is ours to show the world that we are nothing more than a twig on a tree that we are cutting down.",
"appendix": "Author contributions\n\n\n\nWDP was responsible for all aspects of this work.\n\n\nCompeting interests\n\n\n\nThe author has no competing interests.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nAcknowledgements\n\nI am grateful to Bonnie G. Waring for constructive comments and suggestions, and Eric Talevich for suggesting using Hilbert curves.\n\n\nSupplementary files\n\nSupplementary File 1: R code for fibre plot.\n\nClick here to access the data.\n\nSupplementary File 2: Mammals.gif. Animated fibre plot in Graphics Interchange Format (GIF) of a phylogeny of all extant mammals12.\n\nClick here to access the data.\n\n\nReferences\n\nHedges SB, Marin J, Suleski M, et al.: Tree of life reveals clock-like speciation and diversification. Mol Biol Evol. 2015; 32(4): 835–845. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHinchliff CE, Smith SA, Allman JF: Synthesis of phylogeny and taxonomy into a comprehensive tree of life. Proc Natl Acad Sci U S A. 2015; 112(41): 12764–12769. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDarwin C: On the origin of species. John Murray, London; 1859. Reference Source\n\nPietsch TW: Trees of life: a visual history of evolution. Syst Biol. 2012; 61(6): 1083. Publisher Full Text\n\nPage RD: Space, time, form: viewing the Tree of Life. Trends Ecol Evol. 2012; 27(2): 113–120. PubMed Abstract | Publisher Full Text\n\nFelsenstein J: Inferring phylogenies. Sinauer Associates; 2004. Reference Source\n\nTukey JW: Exploratory data analysis. Reading, Mass; 1977. Reference Source\n\nIoannidis Y: The history of histograms (abridged).Proceedings of the 29th international conference on Very large data bases-Volume 29. VLDB Endowment; 2003; 19–30. Reference Source\n\nHuson DH, Scornavacca C: Dendroscope 3: an interactive tool for rooted phylogenetic trees and networks. Syst Biol. 2012; 61(6): 1061–1067. 
PubMed Abstract | Publisher Full Text\n\nSanderson MJ: Paloverde: an OpenGL 3D phylogeny browser. Bioinformatics. 2006; 22(8): 1004–1006. PubMed Abstract | Publisher Full Text\n\nRosindell J, Harmon LJ: OneZoom: a fractal explorer for the tree of life. PLoS Biol. 2012; 10(10): e1001406. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBininda-Emonds OR, Cardillo M, Jones KE, et al.: The delayed rise of present-day mammals. Nature. 2007; 446(7135): 507–12. PubMed Abstract | Publisher Full Text\n\nCrutzen PJ: Geology of mankind. Nature. 2002; 415(6867): 23. PubMed Abstract | Publisher Full Text"
}
|
[
{
"id": "19939",
"date": "10 Feb 2017",
"name": "Rafael Zardoya",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe paper entitled “Animating and exploring phylogenies with fibre plots” by Pearse is an interesting contribution that proposes a new and distinct way to visualize phylogenetic trees. The new method propose by the author uses fibre plots to slice a phylogenetic tree from root to tips and visualize, as an animation, the cladogenetic process in time.\n\nAs the author correctly argues, while it is now possible to reconstruct phylogenetic trees involving tens of thousands of species, visualization of such trees is complex and has not advanced at the same pace as probabilistic inference methodology. Hence, the challenge is set. There are many programs for visualizing trees but few have explored the need of dealing with large phylogenies. Different strategies have been proposed to represent phylogenies including the collapse of certain nodes, distortion of the view, and representation in 3D, but thus far, the most popular approach probably consists on zooming in and out the phylogeny (OneZoom, Rosindell et al. 2012) using appropriate tools (e.g., a tablet). These viewers are complemented with others that allow incorporating other information pertinent to the phylogeny (e.g., iTOL, Letunic and Bork 2016).\n\nThe proposal here presented explores in a very different direction. 
While the idea of looking at different temporal slices in the phylogeny to get a feeling of the timing of diversification of the different clades is original, I think it is too preliminary in the present contribution. The video composed of successive slices shows in different colors how a single (ancestral) lineage is successively split into many but the viewer is unable to discern which exact descendant lineages they are looking at, as there are no labels. Moreover, at some point the number of splits (and colors) is too large to obtain useful information from the animation. As presently devised, the analysis of different clades will render very similar plots, which will be difficult to interpret (beyond seeing an increase in the number of lineages) and compare. If the author wants this tool to be widely used, he should make the final outcome more appealing and understandable (e.g., perhaps a grid plot with labels of each lineage in the corresponding axis would help following which lineages and their ancestors are diverging) by peers from fields other than phylogenetics and by the general public.\n\nMinor changes:\nThe author mentions that PaloVerde in 2006 was the first 3D phylogeny viewer to his knowledge. I think he should check the Walrus graph visualisation tool by Hughes et al. 2004.\n\nClose brackets after [see 6].",
"responses": [
{
"c_id": "2605",
"date": "05 Apr 2017",
"name": "Will Pearse",
"role": "Author Response",
"response": "Thank you for these comments. I hope the changes I have made address some of your concerns about interpretation, all of which I think are legitimate. I have added the capacity to label (and track) particular species and clades through the animation, and have added an optional display of the phylogeny to the side of the animation. I hope the reviewer agrees that this addresses their concerns. Your comments about the colouring scheme, in particular, were very useful - I now colour everything according to clade age, which I hope you will agree makes for a much more informative plot. Thank you also for mentioning Walrus - omitting this was a huge mistake, and I'm grateful you've corrected me."
}
]
},
{
"id": "19876",
"date": "13 Feb 2017",
"name": "Stephen A. Smith",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nPearse presents a new means for visualizing large phylogenies called the fibre plot. The purpose of this plot is to better represent splits by different colors and shades. This is an interesting idea and is demonstrated with an accompanying animated gif. However, I am left wondering if there are significant insights gathered from this view of the mammal tree. The animation proceeds and areas of the graph change color. I understand why they change but don’t know where I am in the tree and what the significance of the change is in speed or area of the graph. The figure presented (Figure 1) shows a somewhat different view of the fibre plot as presented alongside the phylogeny. This makes me think that perhaps a more informative presentation would be the view of the phylogeny along with the fibre plot. Then the animation would follow a line that moves in a preorder fashion from the root to the tips. This would allow for a more direct comparison of the tree and the plot. Without this additional guide, I am not sure what to make of the animation. I don’t know where I am in the tree (in time or place) and I can’t “move around” in any particular way. I can also envision any number of statistics presented with the plot.\n\nThis is an interesting start of an idea but I think it needs a little more development before it would be useful for navigating the size of the tree intended by the author. 
However, there may be some interesting uses for this or something like it in the future.\n\nEditorial comments I recommend that the author edit the abstract. For example, the sentence “Despite the progress that has been made in the visualisation of information since Haeckel's time, phylogenetic visualisation has moved forward remarkably little.” seems to suggest that Haeckel was the first person to try and visualize data. While this may be accurate for some biological data, it is not true for data in general as cartographers have been trying to visualize information and data for centuries. The final sentence in the paragraph could also use some adjustments. While the statement is trying to convey a general sense of the importance of phylogenies, I am not certain that “our place” in the tree of life will dramatically change as a result of visualization of the data. I would also recommend changes to the intervening sentences.\nI would recommend some changes to the remaining text but won’t outline all of those here.",
"responses": [
{
"c_id": "2604",
"date": "05 Apr 2017",
"name": "Will Pearse",
"role": "Author Response",
"response": "Thank you for these suggestions. I have now altered the fibre plot code in exactly the way you suggested, adding a traditional phylogeny to the right-hand side of the animation that grows and matches colour with the fibre plot. I think it makes the plot much easier to interpret - thank you for this excellent suggestion! I have edited the abstract following your suggestions, and removed the final sentence from it entirely."
}
]
},
{
"id": "19938",
"date": "17 Feb 2017",
"name": "Diego San Mauro",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis paper presents a brief overview on phylogenetic visualization and introduces a novel approach for visualizing phylogenies (timetrees) using fibre plots. Given the rapid accumulation of phylogenetic information over the last years that has enabled the construction of massive trees (mega-phylogenies) containing millions of branches and leaves (taxa), the new visualization method appears to be interesting and with some potential. However, it is outlined only succinctly in the paper, and we feel that there are a few issues that require further attention. More discussion is needed on the specific applications and/on implementations of phylogenetic fibre plots compared to the other visualization approaches already available. For instance, what are the advantages of fiber plots over conventional phylogenetic plots in terms of comparing e.g. different topology sets? (as used for example in hypothesis testing). Also, what is the applicability (if any) of fiber plots for visualizing phylogenetic trees whose branches represent rate of evolution (e.g., substitutions/site) instead of time? (as in phylograms). Or, how do fibre plots deal with extinct branches? (as those displayed by extinct fossil lineages). Discussing these issues (among others) more in detail would make it easier for the reader to assess the breadth of novelty and usefulness of the new method for the general field of phylogenetics, and its applicability beyond the reconstruction of the timetree of life. 
As described in the current paper, it seems that fibre plots could be a complement to, but not a substitute for, the other (more conventional) phylogenetic visualization approaches. The output of the fibre plot is colorful, but in general very difficult to interpret. In fact, interpreting the fibre plot output of very large phylogenies or even the tree of all life would be more difficult than interpreting more conventional approaches (those zooming in and out of the phylogeny). Implementing some sort of labeling/cross-referencing with lists of taxa or even conventional phylogenetic trees live on the side could help in the precise interpretation of what is being displayed at each timeframe.\nThere are also some additional issues that we want to mention:\nFirst paragraph: The sentence beginning \"Many have...\" needs some rewording... It is true that many have produced tree-like depictions of the relationships among species, but certainly not many before Darwin. So, please reword.\n\nSecond paragraph: Please provide a reference for the sentence beginning \"Haeckel grasped that humans...\".\n\nThird paragraph: Besides Dendroscope, it would be fair to cite FigTree (http://tree.bio.ed.ac.uk/software/figtree/) as well in the last sentence.\n\nFourth paragraph: It would be good to cite and discuss Walrus (http://www.caida.org/tools/visualization/walrus/) here as well. It appeared in 2001 (earlier than Paloverde) and allowed interactive 3D visualization of hierarchical graphs.\n\nFifth paragraph: Please add references and expand the last statement about using Hilbert curves.\n\nLast paragraph: The last paragraph of the paper appears unnecessary and probably should be removed. Only the first sentence could be kept as part of the previous paragraph (as closing statement). If this sentence is retained, please keep in mind that phylogenies (e.g., the tree of life) are hypotheses. 
Therefore, it would be more appropriate to say \"...being closer than ever to a reliable tree of all life\", rather than \"...being closer than ever to a true tree of all life\".",
"responses": [
{
"c_id": "2603",
"date": "05 Apr 2017",
"name": "Will Pearse",
"role": "Author Response",
"response": "I thank you both for your comments, which have greatly improved the article. I'm particularly grateful that you mentioned Walrus; this was a huge oversight on my part, and I'm glad to have an opportunity to correct it! I apologise for that, due to space limitations of an opinion article (limited to 1000 words), I am not able to go into as much detail as I would like on some of the broader topics you raise. I have, however, significantly altered the code of the fibre plot following your suggestion about non-ultrametric phylogenies and highlighting particular taxa. In particular, your suggest of a phylogeny to the side of the plot, mirroring reviewer 2's suggestion, has greatly improved the figure. Thank you! Responding to each of your comments in turn: Branch lengths and extinct taxa. I have re-written the function so that it supports dated and undated trees, and highlights extinct taxa to show the time period within which they went extinct. I describe this in the penultimate paragraph of the manuscript. Ease of interpretation and suggestion of replacement of other phylogenies. I agree with the reviewers that this is not a replacement for a traditional visualisation; as I discuss in the text I find the visualisation captures well changes in timing and diversification more readily in extremely large phylogenies (e.g., the ~5000 taxon example I provide). I have followed the reviewers' suggestions and allowed the user to highlight clades and taxa of interest, which, along with the comments of reviewer 3, I hope make the plot easier to interpret. First paragraph: I respectfully disagree with your comment; the book by Pietsch I reference contains many examples of tree-like structures preceeding Darwin. I have altered the text to make my meaning clearer. Second paragraph: Thank you for this; I have added a reference earlier in the paragraph. Third paragraph: Thank you for this; I now cite FigTree and the R package ape. 
Fourth paragraph: Thank you for this; I now mention Walrus (citing a 1997 conference paper that describes what is essentially the same software under the name 'H3'), and cite another software package that converts phylogenies into Walrus format. Fifth paragraph: Thank you for this; having now experimented more thoroughly with the approach, I didn't find it aided interpretation. I have changed the code to alter the layout of the fibres, but I have dropped this reference from the text. Final paragraph: Thank you for this; I have made the changes you suggested."
}
]
}
] | 2
|
https://f1000research.com/articles/5-2790
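The fibre plot discussed in the record above slices a dated phylogeny at successive depths, with each fibre's width giving the number of tips its clade subtends. A minimal Python sketch of one such slice (a toy tree with invented names and times; the published implementation is in R, and nothing here reproduces its code):

```python
# Toy illustration of a fibre-plot time slice: at depth t, each branch that
# spans t is a "fibre" whose width is the number of tips it subtends.
# The tree, names and times below are invented for illustration.

# (name, start_time, end_time, n_descendant_tips); root at t=0, tips end at t=3.
# cladeB's two tips are left unresolved in this toy tree.
branches = [
    ("root",   0, 1, 4),
    ("cladeA", 1, 2, 2),
    ("cladeB", 1, 3, 2),
    ("tip1",   2, 3, 1),
    ("tip2",   2, 3, 1),
]

def slice_at(branches, t):
    """Return {fibre_name: width} for the branches alive at time t."""
    return {name: tips for name, start, end, tips in branches if start <= t < end}

print(slice_at(branches, 0.5))  # {'root': 4} - one fibre, full width
print(slice_at(branches, 1.5))  # {'cladeA': 2, 'cladeB': 2} - root has split
print(slice_at(branches, 2.5))  # {'cladeB': 2, 'tip1': 1, 'tip2': 1}
```

Stepping t from root towards the tips yields the animation frames; in an ultrametric tree like this toy example, the total width of every slice stays equal to the number of tips, so fibres only ever split as diversification proceeds.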
|
https://f1000research.com/articles/5-1281/v1
|
08 Jun 16
|
{
"type": "Method Article",
"title": "A cross-package Bioconductor workflow for analysing methylation array data",
"authors": [
"Jovana Maksimovic",
"Belinda Phipson",
"Alicia Oshlack",
"Alicia Oshlack"
],
"abstract": "Methylation in the human genome is known to be associated with development and disease. The Illumina Infinium methylation arrays are by far the most common way to interrogate methylation across the human genome. This paper provides a Bioconductor workflow using multiple packages for the analysis of methylation array data. Specifically, we demonstrate the steps involved in a typical differential methylation analysis pipeline including: quality control, filtering, normalization, data exploration and statistical testing for probe-wise differential methylation. We further outline other analyses such as differential methylation of regions, differential variability analysis, estimating cell type composition and gene ontology testing. Finally, we provide some examples of how to visualise methylation array data.",
"keywords": [
"methylation",
"bioconductor",
"workflow",
"array"
],
"content": "Introduction\n\nDNA methylation, the addition of a methyl group to a CG dinucleotide of the DNA, is the most extensively studied epigenetic mark due to its role in both development and disease (Bird, 2002; Laird, 2003). Although DNA methylation can be measured in several ways, the epigenetics community has enthusiastically embraced the Illumina HumanMethylation450 (450k) array (Bibikova et al., 2011) as a cost-effective way to assay methylation across the human genome. More recently, Illumina has increased the genomic coverage of the platform to >850,000 sites with the release of their MethylationEPIC (850k) array. As methylation arrays are likely to remain popular for measuring methylation for the foreseeable future, it is necessary to provide robust workflows for methylation array analysis.\n\nMeasurement of DNA methylation by Infinium technology (Infinium I) was first employed by Illumina on the HumanMethylation27 (27k) array (Bibikova et al., 2009), which measured methylation at approximately 27,000 CpGs, primarily in gene promoters. Like bisulfite sequencing, the Infinium assay detects methylation status at single base resolution. However, due to its relatively limited coverage the array platform was not truly considered “genome-wide” until the arrival of the 450k array. The 450k array increased the genomic coverage of the platform to over 450,000 gene-centric sites by combining the original Infinium I assay with the novel Infinium II probes. Both assay types employ 50bp probes that query a [C/T] polymorphism created by bisulfite conversion of unmethylated cytosines in the genome, however, the Infinium I and II assays differ in the number of beads required to detect methylation at a single locus. Infinium I uses two bead types per CpG, one for each of the methylated and unmethylated states (Figure 1a). 
In contrast, the Infinium II design uses one bead type and the methylated state is determined at the single base extension step after hybridization (Figure 1b). The 850k array also uses a combination of the Infinium I and II assays but achieves additional coverage by increasing the size of each array; a 450k slide contains 12 arrays whilst the 850k has only 8.\n\n(a) Infinium I assay. Each individual CpG is interrogated using two bead types: methylated (M) and unmethylated (U). Both bead types will incorporate the same labeled nucleotide for the same target CpG, thereby producing the same color fluorescence. The nucleotide that is added is determined by the base downstream of the ‘C’ of the target CpG. The proportion of methylation can be calculated by comparing the intensities from the two different probes in the same color. (b) Infinium II assay. Each target CpG is interrogated using a single bead type. Methylation state is detected by single base extension at the position of the ‘C’ of the target CpG, which always results in the addition of a labeled ‘G’ or ‘A’ nucleotide, complementary to either the ‘methylated’ C or ‘unmethylated’ T, respectively. Each locus is detected in two colors, and methylation status is determined by comparing the two colors from the one position.\n\nRegardless of the Illumina array version, for each CpG, there are two measurements: a methylated intensity (denoted by M) and an unmethylated intensity (denoted by U). These intensity values can be used to determine the proportion of methylation at each CpG locus. Methylation levels are commonly reported as either beta values (β = M/(M+U+α)) or M-values (Mvalue = log2(M/U)). Beta values and M-values are related through a logit transformation. Beta values are generally preferable for describing the level of methylation at a locus or for graphical presentation because percentage methylation is easily interpretable. 
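For illustration, the two scales can be computed directly from a pair of made-up intensities (the values below are arbitrary; α is an offset commonly set to 100):

```r
# Illustrative intensities for a single CpG (arbitrary values)
M <- 6000       # methylated signal
U <- 2000       # unmethylated signal
alpha <- 100    # offset stabilising beta values when intensities are low

beta <- M / (M + U + alpha)   # beta value, bounded between 0 and 1
mval <- log2(M / U)           # M-value, unbounded

# Ignoring the offset, the two scales are linked by a logit transformation:
# mval = log2(beta / (1 - beta))
```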
However, due to their distributional properties, M-values are more appropriate for statistical testing (Du et al., 2010).\n\nIn this workflow, we will provide examples of the steps involved in analysing methylation array data using R (R Core Team, 2014) and Bioconductor (Huber et al., 2015), including: quality control, filtering, normalization, data exploration and probe-wise differential methylation analysis. We will also cover other approaches such as differential methylation analysis of regions, differential variability analysis, gene ontology analysis and estimating cell type composition. Finally, we will provide some examples of useful ways to visualise methylation array data.\n\n\nDifferential methylation analysis\n\nTo demonstrate the various aspects of analysing methylation data, we will be using a small, publicly available 450k methylation dataset (Zhang et al., 2013). The dataset contains 10 samples in total; there are 4 different sorted T-cell types (naive, rTreg, act_naive, act_rTreg), collected from 3 different individuals (M28, M29, M30). For details describing sample collection and preparation, see Zhang et al. (2013). An additional birth sample (individual VICS-72098-18-B) is included from another study (Cruickshank et al., 2013) to illustrate approaches for identifying and excluding poor quality samples.\n\n\n\n\n\nThere are several R Bioconductor packages available that have been developed for analysing methylation array data, including minfi (Aryee et al., 2014), missMethyl (Phipson et al., 2016), wateRmelon (Pidsley et al., 2013), methylumi (Davis et al., 2015), ChAMP (Morris et al., 2014) and charm (Aryee et al., 2011). Some of the packages, such as minfi and methylumi, include a framework for reading in the raw data from IDAT files and various specialised objects for storing and manipulating the data throughout the course of an analysis. 
Other packages provide specialised analysis methods for normalisation and statistical testing that rely on either minfi or methylumi objects. It is possible to convert between minfi and methylumi data types; however, this is not always trivial. Thus, it is advisable to consider the methods that you are interested in using and the data types that are most appropriate before you begin your analysis. Another popular method for analysing methylation array data is limma (Ritchie et al., 2015), which was originally developed for gene expression microarray analysis. As limma operates on a matrix of values, it is easily applied to any data that can be converted to a matrix in R.\n\nWe will begin with an example of a probe-wise differential methylation analysis using minfi and limma. By probe-wise analysis we mean each individual CpG probe will be tested for differential methylation for the comparisons of interest and p-values and moderated t-statistics will be generated for each CpG probe.\n\nIt is useful to begin an analysis in R by loading all the package libraries that are likely to be required.\n\n\n\nThe minfi package provides the Illumina manifest as an R object which can easily be loaded into the environment. The manifest contains all of the annotation information for each of the CpG probes on the 450k array. This is useful for determining where any differentially methylated probes are located in a genomic context.\n\n\n\n\n\nThe simplest way to read the raw methylation data into R is using the minfi function read.450k.sheet, along with the path to the IDAT files and a sample sheet. The sample sheet is a CSV (comma-separated) file containing one line per sample, with a number of columns describing each sample. The format expected by the read.450k.sheet function is based on the sample sheet file that usually accompanies Illumina methylation array data. It is also very similar to the targets file described by the limma package. 
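A sketch of these import steps is given below; the data directory, sample sheet file name and choice of the 450k annotation package are assumptions that should be adapted to your own experiment:

```r
# Core packages used throughout this workflow
library(limma)
library(minfi)
library(IlluminaHumanMethylation450kanno.ilmn12.hg19)

# Probe annotation for the 450k array (chromosome, position, gene context, ...)
ann450k <- getAnnotation(IlluminaHumanMethylation450kanno.ilmn12.hg19)

# Read the sample sheet; read.450k.sheet adds a Basename column that
# locates the IDAT files for each sample
dataDirectory <- "./data"                              # hypothetical path
targets <- read.450k.sheet(dataDirectory, pattern = "SampleSheet.csv")
```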
Reading the sample sheet into R creates a data.frame with one row for each sample and several columns. The read.450k.sheet function uses the specified path and other information from the sample sheet to create a column called Basename which specifies the location of each individual IDAT file in the experiment.\n\n\n\n\n\n\n\n\n\n\n\nNow that we have imported the information about the samples and where the data is located, we can read the raw intensity signals into R from the IDAT files. This creates an RGChannelSet object that contains all the raw intensity data, from both the red and green colour channels, for each of the samples. At this stage, it can be useful to rename the samples with more descriptive names.\n\n\n\n\n\n\n\n\n\nOnce the data has been imported into R, we can evaluate its quality. Firstly, we need to calculate detection p-values. We can generate a detection p-value for every CpG in every sample, which is indicative of the quality of the signal. The method used by minfi to calculate detection p-values compares the total signal (M + U) for each probe to the background signal level, which is estimated from the negative control probes. Very small p-values are indicative of a reliable signal whilst large p-values, for example >0.01, generally indicate a poor quality signal.\n\nPlotting the mean detection p-value for each sample allows us to gauge the general quality of the samples in terms of the overall signal reliability (Figure 2). Samples that have many failed probes will have relatively large mean detection p-values.\n\n\n\n\n\n\n\nThe minfi qcReport function generates many other useful quality control plots. The minfi vignette describes the various plots and how they should be interpreted in detail. 
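These quality control steps might be sketched as follows (the Sample_Name and Sample_Group columns are assumed to exist in your sample sheet):

```r
# Read the raw intensities from both colour channels into an RGChannelSet
rgSet <- read.450k.exp(targets = targets)
sampleNames(rgSet) <- targets$Sample_Name   # more descriptive sample names

# Detection p-values: small values indicate a reliable signal
detP <- detectionP(rgSet)

# Mean detection p-value per sample (cf. Figure 2)
barplot(colMeans(detP), las = 2, cex.names = 0.8,
        ylab = "Mean detection p-values")
abline(h = 0.05, col = "red")

# Additional minfi QC plots written to a PDF report
qcReport(rgSet, sampNames = targets$Sample_Name,
         sampGroups = targets$Sample_Group, pdf = "qcReport.pdf")
```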
Generally, samples that look poor based on mean detection p-value will also look poor using other metrics and it is usually advisable to exclude them from further analysis.\n\n\n\nPoor quality samples can be easily excluded from the analysis using a detection p-value cutoff, for example >0.05. For this particular dataset, the birth sample shows a very high mean detection p-value, and hence it is excluded from subsequent analysis (Figure 2).\n\n\n\n\n\n\n\n\n\n\n\n\n\nTo minimise the unwanted variation within and between samples, various data normalizations can be applied. Many different types of normalization have been developed for methylation arrays and it is beyond the scope of this workflow to compare and contrast all of them (Fortin et al., 2014; Maksimovic et al., 2012; Mancuso et al., 2011; Pidsley et al., 2013; Sun et al., 2011; Teschendorff et al., 2013; Touleimat & Tost, 2012; Triche et al., 2013; Wang et al., 2012; Wu et al., 2014). Several methods have been built into minfi and can be directly applied within its framework (Fortin et al., 2014; Maksimovic et al., 2012; Touleimat & Tost, 2012; Triche et al., 2013), whilst others are methylumi-specific or require custom data types (Mancuso et al., 2011; Pidsley et al., 2013; Sun et al., 2011; Teschendorff et al., 2013; Wang et al., 2012; Wu et al., 2014). Although there is no single normalisation method that is universally considered best, a recent study by Fortin et al. (2014) has suggested that a good rule of thumb within the minfi framework is that the preprocessFunnorm (Fortin et al., 2014) function is most appropriate for datasets with global methylation differences such as cancer/normal or vastly different tissue types, whilst the preprocessQuantile function (Touleimat & Tost, 2012) is more suited for datasets where you do not expect global differences between your samples, for example a single tissue. 
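A sketch of the exclusion and normalisation steps (rgSet and detP denote the raw data and detection p-values from the QC step above; for a dataset with global methylation differences, preprocessFunnorm would be substituted for preprocessQuantile):

```r
# Exclude samples whose mean detection p-value exceeds the cutoff
keep <- colMeans(detP) < 0.05
rgSet <- rgSet[, keep]
targets <- targets[keep, ]
detP <- detP[, keep]

# Quantile normalisation, suitable when no global differences are expected;
# the result is a GenomicRatioSet rather than an RGChannelSet
mSetSq <- preprocessQuantile(rgSet)
```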
As we are comparing different blood cell types, which are globally relatively similar, we will apply the preprocessQuantile method to our data (Figure 3). Note that after normalization, the data is housed in a GenomicRatioSet object. This is a much more compact representation of the data as the colour channel information has been discarded and the M and U intensity information has been converted to M-values and beta values, together with associated genomic coordinates.\n\n\n\n\n\n\n\n\n\n\n\n\n\nMulti-dimensional scaling (MDS) plots are excellent for visualising data, and are usually some of the first plots that should be made when exploring the data. MDS plots are based on principal components analysis and are an unsupervised method for looking at the similarities and differences between the various samples. Samples that are more similar to each other should cluster together, and samples that are very different should be further apart on the plot. Dimension one (or principal component one) captures the greatest source of variation in the data, dimension two captures the second greatest source of variation in the data and so on. Colouring the data points or labels by known factors of interest can often highlight exactly what the greatest sources of variation are in the data. It is also possible to use MDS plots to decipher sample mix-ups.\n\n\n\nExamining the MDS plots for this dataset demonstrates that the largest source of variation is the difference between individuals (Figure 4). The higher dimensions reveal that the differences between cell types are largely captured by the third and fourth principal components (Figure 5). This type of information is useful in that it can inform downstream analysis by including obvious sources of unwanted variation in our statistical model to account for them, in this case individual to individual variation.\n\n\n\nPoor performing probes are generally filtered out prior to differential methylation analysis. 
As the signal from these probes is unreliable, by removing them we perform fewer statistical tests and thus incur a reduced multiple testing penalty. We filter out probes that have failed in one or more samples based on detection p-value.\n\n\n\n\n\n\n\n\n\n\n\nDepending on the nature of your samples and your biological question you may also choose to filter out the probes from the X and Y chromosomes or probes that are known to have common SNPs at the CpG site. As the samples in this dataset were all derived from male donors, we will not be removing the sex chromosome probes as part of this analysis; however, example code is provided below. A different dataset, which contains both male and female samples, is used to demonstrate a differential variability analysis and provides an example of when sex chromosome removal is necessary (Figure 13).\n\n\n\nThere is a function in minfi that provides a simple interface for the removal of probes where common SNPs may affect the CpG. You can either remove all probes affected by SNPs (default), or only those with minor allele frequencies greater than a specified value.\n\n\n\n\n\nWe will also filter out probes that have been shown to be cross-reactive, that is, probes that have been demonstrated to map to multiple places in the genome. This list was originally published by Chen et al. (2013) and can be obtained from the authors’ website.\n\n\n\n\n\n\n\n\n\nOnce the data has been filtered and normalised, it is often useful to re-examine the MDS plots to see if the relationship between the samples has changed. It is apparent from the new MDS plots that much of the inter-individual variation has been removed as this is no longer the first principal component (Figure 6), likely due to the removal of the SNP-affected CpG probes. 
However, the samples do still cluster by individual in the second dimension (Figure 6 and Figure 7) and thus a factor for individual should still be included in the model.\n\n\n\n\n\nThe next step is to calculate M-values and beta values (Figure 8). As previously mentioned, M-values have nicer statistical properties and are thus better for use in statistical analysis of methylation data whilst beta values are easy to interpret and are thus better for displaying data. A detailed comparison of M-values and beta values was published by Du et al. (2010).\n\n\n\n\n\n\n\n\n\n\n\nThe biological question of interest for this particular dataset is to discover differentially methylated probes between the different cell types. However, as was apparent in the MDS plots, there is another factor that we need to take into account when we perform the statistical analysis. In the targets file, there is a column called Sample_Source, which refers to the individuals that the samples were collected from. In this dataset, each of the individuals contributes more than one cell type. For example, individual M28 contributes naive, rTreg and act_naive samples. Hence, when we specify our design matrix, we need to include two factors: individual and cell type. This style of analysis is called a paired analysis; differences between cell types are calculated within each individual, and then these differences are averaged across individuals to determine whether there is an overall significant difference in the mean methylation level for each CpG site. The limma User’s Guide extensively covers the different types of designs that are commonly used for microarray experiments and how to analyse them in R.\n\nWe are interested in pairwise comparisons between the four cell types, taking into account individual to individual variation. We perform this analysis on the matrix of M-values in limma, obtaining moderated t-statistics and associated p-values for each CpG site. 
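The filtering and modelling steps described above can be pulled together in a sketch like the following (mSetSq denotes the normalised GenomicRatioSet, detP the detection p-values and targets the sample sheet; the cross-reactive probe list is assumed to be a local CSV copy of the Chen et al. (2013) list, and the file and column names are hypothetical):

```r
# Remove probes that failed (detection p >= 0.01) in one or more samples
detP <- detP[match(featureNames(mSetSq), rownames(detP)), ]
keep <- rowSums(detP < 0.01) == ncol(mSetSq)
mSetSqFlt <- mSetSq[keep, ]

# Remove probes with common SNPs at the CpG or single base extension site
mSetSqFlt <- dropLociWithSnps(mSetSqFlt)

# Remove cross-reactive probes (assumed local copy of the published list)
xReactiveProbes <- read.csv("48639-non-specific-probes-Illumina450k.csv")
mSetSqFlt <- mSetSqFlt[!(featureNames(mSetSqFlt) %in% xReactiveProbes$TargetID), ]

# M-values for statistical testing, beta values for visualisation
mVals <- getM(mSetSqFlt)
bVals <- getBeta(mSetSqFlt)

# Paired design: cell type is the factor of interest, individual a blocking factor
cellType <- factor(targets$Sample_Group)     # naive, rTreg, act_naive, act_rTreg
individual <- factor(targets$Sample_Source)  # M28, M29, M30
design <- model.matrix(~0 + cellType + individual)
colnames(design) <- c(levels(cellType), levels(individual)[-1])

# Pairwise contrasts between the cell types
contMatrix <- makeContrasts(naive - rTreg,
                            naive - act_naive,
                            rTreg - act_rTreg,
                            act_naive - act_rTreg,
                            levels = design)

fit <- lmFit(mVals, design)
fit2 <- eBayes(contrasts.fit(fit, contMatrix))
summary(decideTests(fit2))   # significant CpGs per comparison at 5% FDR
```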
The comparison that has the most significantly differentially methylated CpGs is naive vs rTreg (n=3021 at 5% false discovery rate (FDR)), while rTreg vs act_rTreg does not show any significant differential methylation.\n\n\n\n\n\n\n\n\n\nWe can extract the tables of differentially methylated CpGs for each comparison, ordered by B-statistic by default, using the topTable function in limma. The results of the analysis for the first comparison, naive vs. rTreg, can be saved as a data.frame by setting coef=1.\n\n\n\n\n\nThe resulting data.frame can easily be written to a CSV file, which can be opened in Excel.\n\n\n\nIt is always useful to plot sample-wise methylation levels for the top differentially methylated CpG sites to quickly ensure the results make sense (Figure 9). If the plots do not look as expected, it is usually an indication of an error in the code, or in setting up the design matrix. It is easier to interpret methylation levels on the beta value scale, so although the analysis is performed on the M-value scale, we visualise data on the beta value scale. The plotCpg function in minfi is a convenient way to plot the sample-wise beta values stratified by the grouping variable.\n\n\n\n\n\nAlthough performing a probe-wise analysis is useful and informative, sometimes we are interested in knowing whether several proximal CpGs are concordantly differentially methylated, that is, we want to identify differentially methylated regions. There are several Bioconductor packages that have functions for identifying differentially methylated regions from 450k data. Some of the most popular are the dmrFind function in the charm package, which has been somewhat superseded for 450k arrays by the bumphunter function in minfi (Aryee et al., 2014; Jaffe et al., 2012), and the recently published dmrcate function in the DMRcate package (Peters et al., 2015). They are each based on different statistical methods. 
In our experience, the bumphunter and dmrFind functions can be somewhat slow to run unless you have the computer infrastructure to parallelise them, as they use permutations to assign significance. In this workflow, we will perform an analysis using the dmrcate function. As it is based on limma, we can directly use the design and contMatrix we previously defined.\n\nFirstly, our matrix of M-values is annotated with the relevant information about the probes such as their genomic position, gene annotation, etc. By default, this is done using the ilmn12.hg19 annotation, but this can be substituted for any argument compatible with the interface provided by the minfi package. The limma pipeline is then used for differential methylation analysis to calculate moderated t-statistics.\n\n\n\n\n\n\n\n\n\nOnce we have the relevant statistics for the individual CpGs, we can then use the dmrcate function to combine them to identify differentially methylated regions. The main output table DMRs$results contains all of the regions found, along with their genomic annotations and p-values.\n\n\n\n\n\n\n\n\n\nAs for the probe-wise analysis, it is advisable to visualise the results to ensure that they make sense. The regions can easily be viewed using the DMR.plot function provided in the DMRcate package (Figure 10).\n\n\n\n\n\nThe Gviz package offers powerful functionality for plotting methylation data in its genomic context. The package vignette is very extensive and covers the various types of plots that can be produced using the Gviz framework. 
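Before turning to Gviz, the DMRcate steps just described might look like this (mVals, design and contMatrix denote the M-value matrix, design matrix and contrast matrix from the probe-wise analysis; the coef string names the contrast of interest and is an assumption based on how the contrasts were defined):

```r
library(DMRcate)

# Annotate the M-values and compute moderated t-statistics via limma
myAnnotation <- cpg.annotate(object = mVals, datatype = "array", what = "M",
                             analysis.type = "differential", design = design,
                             contrasts = TRUE, cont.matrix = contMatrix,
                             coef = "naive - rTreg")

# Combine the individual CpG statistics into differentially methylated regions
DMRs <- dmrcate(myAnnotation, lambda = 1000, C = 2)
head(DMRs$results)   # regions with genomic annotations and p-values
```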
We will re-plot the top differentially methylated region from the DMRcate regional analysis to demonstrate the type of visualisations that can be created (Figure 11).\n\nWe will first set up the genomic region we would like to plot by extracting the genomic coordinates of the top differentially methylated region.\n\n\n\nNext, we will add some genomic annotations of interest such as the locations of CpG islands and DNAseI hypersensitive sites; this can be any feature or genomic annotation of interest that you have data available for. The CpG islands data was generated using the method published by Wu et al. (2010); the DNAseI hypersensitive site data was obtained from the UCSC Genome Browser.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nNow, set up the ideogram, genome and RefSeq tracks that will provide context for our methylation data.\n\n\n\nEnsure that the methylation data is ordered by chromosome and base position.\n\n\n\n\n\n\n\n\n\nCreate the data tracks using the appropriate track type for each data type.\n\n\n\nSet up the track list and indicate the relative sizes of the different tracks. Finally, draw the plot using the plotTracks function (Figure 11).\n\n\n\n\nAdditional analyses\n\nOnce you have performed a differential methylation analysis, there may be a very long list of significant CpG sites to interpret. One question a researcher may have is, “which gene pathways are over-represented for differentially methylated CpGs?” In some cases it is relatively straightforward to link the top differentially methylated CpGs to genes that make biological sense in terms of the cell types or samples being studied, but there may be many thousands of CpGs significantly differentially methylated. 
In order to gain an understanding of the biological processes that the differentially methylated CpGs may be involved in, we can perform gene ontology or KEGG pathway analysis using the gometh function in the missMethyl package (Phipson et al., 2016).\n\nLet us consider the first comparison, naive vs rTreg, with the results of the analysis in the DMPs table. The gometh function takes as input a character vector of the names (e.g. cg20832020) of the significant CpG sites, and optionally, a character vector of all CpGs tested. This is recommended particularly if extensive filtering of the CpGs has been performed prior to analysis. For gene ontology testing (the default), the user can specify collection=\"GO\"; for KEGG pathway testing, specify collection=\"KEGG\". In the DMPs table, the Name column corresponds to the CpG name. We will select all CpG sites that have an adjusted p-value of less than 0.05.\n\n\n\n\n\n\n\n\n\n\n\n\n\nThe gometh function takes into account the varying numbers of CpGs associated with each gene on the Illumina methylation arrays. For the 450k array, the numbers of CpGs mapping to genes can vary from as few as 1 to as many as 1200. The genes that have more CpGs associated with them will have a higher probability of being identified as differentially methylated compared to genes with fewer CpGs. We can look at this bias in the data by specifying plot.bias=TRUE in the call to gometh (Figure 12).\n\n\n\n\n\nThe gst object is a data.frame with each row corresponding to the GO category being tested. The top 20 gene ontology categories can be displayed using the topGO function. 
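A sketch of the gene ontology testing (DMPs denotes the probe-wise results table described above; its Name and adj.P.Val columns hold the CpG identifiers and FDR-adjusted p-values, and the plot.bias argument name is assumed from recent missMethyl versions):

```r
library(missMethyl)

# Significant CpGs (adjusted p < 0.05) and the background of all tested CpGs
sigCpGs <- DMPs$Name[DMPs$adj.P.Val < 0.05]
all <- DMPs$Name

# GO testing, accounting for the varying numbers of probes per gene;
# plot.bias = TRUE displays the probe-number bias (cf. Figure 12)
gst <- gometh(sig.cpg = sigCpGs, all.cpg = all, collection = "GO",
              plot.bias = TRUE)
topGO(gst, number = 20)   # top 20 GO categories
```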
For KEGG pathway analysis, the topKEGG function can be called to display the top 20 enriched pathways.\n\n\n\n\n\nFrom the output we can see many of the top GO categories correspond to immune system and T cell processes, which is unsurprising as the cell types being studied form part of the immune system.\n\nFor a more generalised version of gene set testing for methylation data where the user can specify the gene set to be tested, the gsameth function can be used. To display the top 20 pathways, topGSA can be called. gsameth accepts a single gene set, or a list of gene sets. The gene identifiers in the gene set must be Entrez Gene IDs. To demonstrate gsameth, we are using the curated genesets (C2) from the Broad Institute Molecular signatures database. These can be downloaded as an RData object from the WEHI Bioinformatics website.\n\n\n\n\n\n\n\n\n\nRather than testing for differences in mean methylation, we may be interested in testing for differences between group variances. For example, it has been hypothesised that highly variable CpGs in cancer are important for tumour progression. Hence we may be interested in CpG sites that are consistently methylated in one group, but variably methylated in another group.\n\nSample size is an important consideration when testing for differentially variable CpG sites. In order to get an accurate estimate of the group variances, larger sample sizes are required than for estimating group means. A good rule of thumb is to have at least ten samples in each group (Phipson & Oshlack, 2014). To demonstrate testing for differentially variable CpG sites, we will use a publicly available dataset on ageing, where whole blood samples were collected from 18 centenarians and 18 newborns and profiled for methylation on the 450k array (Heyn et al., 2012). 
We will first need to load, normalise and filter the data as previously described.\n\n\n\n\n\n\n\n\n\n\n\n\n\nAs this dataset contains samples from both males and females, we can use it to demonstrate the effect of removing sex chromosome probes on the data. The MDS plots below show the relationship between the samples in the ageing dataset before and after sex chromosome probe removal (Figure 13). It is apparent that before the removal of sex chromosome probes, the samples cluster by sex in the second principal component. When the sex chromosome probes are removed, age is the largest source of variation present and the male and female samples no longer form separate clusters.\n\n\n\n\n\nWe can test for differentially variable CpGs using the varFit function in the missMethyl package. The syntax for specifying which groups we are interested in testing is slightly different to the standard way a model is specified in limma, particularly for designs where an intercept is fitted (see the missMethyl vignette for further details). For the ageing data, the design matrix includes an intercept term, and a term for age. The coef argument in the varFit function indicates which columns of the design matrix correspond to the intercept and grouping factor. Thus, for the ageing dataset we set coef=c(1,2). 
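A sketch of the variability testing; ageMVals and ageTargets are hypothetical names for the filtered M-values and sample information of the ageing dataset, and the group labels are assumptions:

```r
library(missMethyl)

# Design with an intercept term and a term for age group
age <- factor(ageTargets$Sample_Group)   # e.g. newborn vs centenarian
design.age <- model.matrix(~age)

# coef = c(1, 2): the design columns for the intercept and the grouping factor
fitvar <- varFit(ageMVals, design = design.age, coef = c(1, 2))
summary(decideTests(fitvar))

# Top differentially variable CpGs for the age group term
topDV <- topVar(fitvar, coef = 2)
```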
Note that design matrices without intercept terms are permitted, with specific contrasts tested using the contrasts.varFit function.\n\n\n\n\n\n\n\n\n\nAs for the differential methylation analysis, it is useful to plot sample-wise beta values for the differentially variable CpGs to ensure the significant results are not driven by artifacts or outliers (Figure 14).\n\n\n\nAn example of testing for differential variability when the design matrix does not have an intercept term is detailed in the missMethyl vignette.\n\nAs methylation is cell type specific and methylation arrays provide CpG methylation values for a population of cells, biological findings from samples that are composed of a mixture of cell types, such as blood, can be confounded with cell type composition (Jaffe & Irizarry, 2014). The minfi function estimateCellCounts facilitates the estimation of the level of confounding between phenotype and cell type composition in a set of samples. The function uses a modified version of the method published by Houseman et al. (2012) and the package FlowSorted.Blood.450k, which contains 450k methylation data from sorted blood cells, to estimate the cell type composition of blood samples.\n\n\n\n\n\n\n\nAs reported by Jaffe & Irizarry (2014), the plot demonstrates that differences in blood cell type proportions are strongly confounded with age in this dataset (Figure 15). Performing cell composition estimation can alert you to potential issues with confounding when analysing a mixed cell type dataset. Based on the results, some type of adjustment for cell type composition may be considered, although a naive cell type adjustment is not recommended. Jaffe & Irizarry (2014) outline several strategies for dealing with cell type composition issues.\n\n\nDiscussion\n\nHere we present a commonly used workflow for methylation array analysis based on a series of Bioconductor packages. 
While we have not included all the possible functions or analysis options that are available for detecting differential methylation, we have demonstrated a common and well used workflow that we regularly use in our own analysis. Specifically, we have not demonstrated more complex types of analyses such as removing unwanted variation in a differential methylation study (Leek et al., 2012; Maksimovic et al., 2015; Teschendorff et al., 2011), block finding (Aryee et al., 2014; Hansen et al., 2011) or A/B compartment prediction (Fortin & Hansen, 2015). Our differential methylation workflow presented here demonstrates how to read in data, perform quality control and filtering, normalisation and differential methylation testing. In addition we demonstrate analysis for differential variability, gene set testing and estimating cell type composition. One important aspect of exploring results of an analysis is visualisation and we also provide an example of generating region-level views of the data.\n\n\nSoftware availability\n\nThis workflow uses the following packages available from Bioconductor (version 3.2):\n\n\n\n",
"appendix": "Author contributions\n\n\n\nJM and BP designed the content and wrote the paper. AO oversaw the project and contributed to the writing and editing of the paper.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nAO was supported by an NHMRC Career Development Fellowship APP1051481.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nReferences\n\nAryee MJ, Jaffe AE, Corrada-Bravo H, et al.: Minfi: a flexible and comprehensive Bioconductor package for the analysis of Infinium DNA methylation microarrays. Bioinformatics. 2014; 30(10): 1363–9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAryee MJ, Wu Z, Ladd-Acosta C, et al.: Accurate genome-scale percentage DNA methylation estimates from microarray data. Biostatistics. 2011; 12(2): 197–210. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBibikova M, Barnes B, Tsan C, et al.: High density DNA methylation array with single CpG site resolution. Genomics. 2011; 98(4): 288–95. PubMed Abstract | Publisher Full Text\n\nBibikova M, Le J, Barnes B, et al.: Genome-wide DNA methylation profiling using Infinium® assay. Epigenomics. 2009; 1(1): 177–200. PubMed Abstract | Publisher Full Text\n\nBird A: DNA methylation patterns and epigenetic memory. Genes Dev. 2002; 16(1): 6–21. PubMed Abstract | Publisher Full Text\n\nChen YA, Lemire M, Choufani S, et al.: Discovery of cross-reactive probes and polymorphic CpGs in the Illumina Infinium HumanMethylation450 microarray. Epigenetics. 2013; 8(2): 203–9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCruickshank MN, Oshlack A, Theda C, et al.: Analysis of epigenetic changes in survivors of preterm birth reveals the effect of gestational age and evidence for a long term legacy. Genome Med. 2013; 5(10): 96. 
\n\nDavis S, Du P, Bilke S, et al.: Methylumi: Handle Illumina Methylation Data. 2015.\n\nDu P, Zhang X, Huang CC, et al.: Comparison of Beta-value and M-value methods for quantifying methylation levels by microarray analysis. BMC Bioinformatics. 2010; 11(1): 587.\n\nFortin JP, Hansen KD: Reconstructing A/B compartments as revealed by Hi-C using long-range correlations in epigenetic data. Genome Biol. 2015; 16(1): 180.\n\nFortin JP, Labbe A, Lemire M, et al.: Functional normalization of 450k methylation array data improves replication in large cancer studies. Genome Biol. 2014; 15(12): 503.\n\nHansen KD, Timp W, Bravo HC, et al.: Increased methylation variation in epigenetic domains across cancer types. Nat Genet. 2011; 43(8): 768–75.\n\nHeyn H, Li N, Humberto HJ, et al.: Distinct DNA methylomes of newborns and centenarians. Proc Natl Acad Sci U S A. 2012; 109(26): 10522–7.\n\nHouseman EA, Accomando WP, Koestler DC, et al.: DNA methylation arrays as surrogate measures of cell mixture distribution. BMC Bioinformatics. 2012; 13(1): 86.\n\nHuber W, Carey VJ, Gentleman R, et al.: Orchestrating high-throughput genomic analysis with Bioconductor. Nat Methods. 2015; 12(2): 115–21.\n\nJaffe AE, Irizarry RA: Accounting for cellular heterogeneity is critical in epigenome-wide association studies. Genome Biol. 2014; 15(2): R31.\n\nJaffe AE, Murakami P, Lee H, et al.: Bump hunting to identify differentially methylated regions in epigenetic epidemiology studies. 
Int J Epidemiol. 2012; 41(1): 200–209. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLaird PW: The power and the promise of DNA methylation markers. Nat Rev Cancer. 2003; 3(4): 253–66. PubMed Abstract | Publisher Full Text\n\nLeek JT, Johnson WE, Parker HS, et al.: The sva package for removing batch effects and other unwanted variation in high-throughput experiments. Bioinformatics. 2012; 28(6): 882–3. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMaksimovic J, Gagnon-Bartsch JA, Speed TP, et al.: Removing unwanted variation in a differential methylation analysis of Illumina HumanMethylation450 array data. Nucleic Acids Res. 2015; 43(16): e106. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMaksimovic J, Gordon L, Oshlack A: SWAN: Subset-quantile within array normalization for illumina infinium HumanMethylation450 BeadChips. Genome Biol. 2012; 13(6): R44. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMancuso FM, Montfort M, Carreras A, et al.: HumMeth27QCReport: an R package for quality control and primary analysis of Illumina Infinium methylation data. BMC Res Notes. 2011; 4: 546. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMorris TJ, Butcher LM, Feber A, et al.: ChAMP: 450k Chip Analysis Methylation Pipeline. Bioinformatics. 2014; 30(3): 428–30. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPeters TJ, Buckley MJ, Statham AL, et al.: De novo identification of differentially methylated regions in the human genome. Epigenetics Chromatin. 2015; 8(1): 6. PubMed Abstract | Free Full Text\n\nPhipson B, Oshlack A: DiffVar: a new method for detecting differential variability with application to methylation in cancer and aging. Genome Biol. 2014; 15(9): 465. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPhipson B, Maksimovic J, Oshlack A: missMethyl: an R package for analyzing data from Illumina’s HumanMethylation450 platform. Bioinformatics. 2016; 32(2): 286–88. 
PubMed Abstract | Publisher Full Text\n\nPidsley R, Chloe CY, Volta M, et al.: A data-driven approach to preprocessing Illumina 450K methylation array data. BMC Genomics. 2013; 14(1): 293. PubMed Abstract | Publisher Full Text | Free Full Text\n\nR Core Team: R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing. 2014. Reference Source\n\nRitchie ME, Phipson B, Wu D, et al.: limma powers differential expression analyses for RNA-sequencing and microarray studies. Nucleic Acids Res. 2015; 43(7): e47, gkv007. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSun Z, Chai HS, Wu Y, et al.: Batch effect correction for genome-wide methylation data with Illumina Infinium platform. BMC Med Genomics. 2011; 4: 84. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTeschendorff AE, Marabita F, Lechner M, et al.: A beta-mixture quantile normalization method for correcting probe design bias in Illumina Infinium 450 k DNA methylation data. Bioinformatics. 2013; 29(2): 189–96. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTeschendorff AE, Zhuang J, Widschwendter M: Independent surrogate variable analysis to deconvolve confounding factors in large-scale microarray profiling studies. Bioinformatics. 2011; 27(11): 1496–1505. PubMed Abstract | Publisher Full Text\n\nTouleimat N, Tost J: Complete pipeline for Infinium® Human Methylation 450K BeadChip data processing using subset quantile normalization for accurate DNA methylation estimation. Epigenomics. 2012; 4(3): 325–41. PubMed Abstract | Publisher Full Text\n\nTriche TJ Jr, Weisenberger DJ, Van Den Berg D, et al.: Low-level processing of Illumina Infinium DNA Methylation BeadArrays. Nucleic Acids Res. 2013; 41(7): e90. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWang D, Zhang Y, Huang Y, et al.: Comparison of different normalization assumptions for analyses of DNA methylation data from the cancer genome. Gene. 2012; 506(1): 36–42. 
PubMed Abstract | Publisher Full Text\n\nWu H, Caffo B, Jaffee HA, et al.: Redefining CpG islands using hidden Markov models. Biostatistics. 2010; 11(3): 499–514. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWu MC, Joubert BR, Kuan PF, et al.: A systematic assessment of normalization approaches for the Infinium 450K methylation platform. Epigenetics. 2014; 9(2): 318–29. PubMed Abstract | Publisher Full Text | Free Full Text\n\nZhang Y, Maksimovic J, Naselli G, et al.: Genome-wide DNA methylation analysis identifies hypomethylated genes regulated by FOXP3 in human regulatory T cells. Blood. 2013; 122(16): 2823–36. PubMed Abstract | Publisher Full Text | Free Full Text"
}
|
[
{
"id": "14245",
"date": "15 Jun 2016",
"name": "Timothy J Peters",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis paper describes a workflow for processing, filtering and analysis of Illumina Infinium methylation array data. It showcases a reproducible pipeline integrating a suite of tools from Bioconductor for multi-faceted genomic insights. While none of the tools individually are novel, their integration into a sensible, reproducible pipeline is. I am recommending this manuscript for indexationfor 3 main reasons:\nThe tools contained therein and their application are in line with, or near, best practice. The workflow itself contains all the major steps that this reviewer usually uses for their methylation array processing.\n\nAn integrated workflow such as this will be valuable for novice and intermediate bioinformaticians who are tasked with processing methylation data. The number of caveats and sanity checks needed for appropriate biological interpretation is not trivial, and this workflow does a satisfactory job of outlining them.\n\nThe reproducible nature of this manuscript is a strength; it is very \"coalface bioinformatics\". Many published methods have very poor or buggy implementations and no effort is made to contextualise them in a given pipeline. Publication may set a precedent for other authors to give worked examples and context, which in this reviewer's opinion accelerates the path to best practice.\n\nMinor amendments needed:\nI could not find any public links to the data files imported into this workflow. 
These ought to be provided.\n\na) The mathematical definition of β is given as β = M/(M + U + α). While I realise α is a fudge factor for offset purposes, this is not clear to the lay reader and needs to be made so.\nb) Why are there no offsets for M or U in the calculation of M values, especially since there is one for the calculation of β? On (admittedly) rare occasions M or U will be exactly zero and hence offsets need to be put in both the numerator and denominator of the ratio to be log-transformed, else a non-number will result.\n\nA justification for the preference of M values over β values for use in the MDS plots is needed, especially since the statement is made that \"Beta values are generally preferable ... for graphical presentation\". This reviewer's experience is that β is much more common for use in PCA/MDS, and is certainly the standard for other methylation platforms e.g. bisulfite sequencing data.\n\nLegends are needed for density plots in Figs. 3 and 8. I appreciate minfi annoyingly puts the default legend in the top right, obscuring the hypermethylated mode, but a custom call to legend() ought to fix this.\n\nAppropriate Y-axis labels are needed for Figs. 9 and 14.",
"responses": [
{
"c_id": "2082",
"date": "26 Jul 2016",
"name": "Jovana Maksimovic",
"role": "Author Response",
"response": "Thanks Tim for taking the time to review our paper. In response to your comments/suggestions: I could not find any public links to the data files imported into this workflow. These ought to be provided. In addition to the references, we have now included links to GEO for the data used and have also made a bundle of all the data available on Figshare which can now be used directly from within R to download the data and complete the workflow. a) The mathematical definition of β is given as β = M/(M +U + α). While I realise α is a fudge factor for offset purposes this is not clear to the lay reader and needs to be made so. b) Why are there no offsets for M or U in the calculation of M values, especially since there is one for the calculation of β? On (admittedly) rare occasions M or U will be exactly zero and hence offsets need to be put in both the numerator and denominator of the ratio to be log-transformed, else a non-number will result. This has been clarified in the text. See also response to Davide Risso. A justification for the preference of M values over β values for use in the MDS plots is needed, especially since the statement is made that \"Beta values are generally preferable ... for graphical presentation\". This reviewer's experience is that β is much more common for use in PCA/MDS, and is certainly the standard for other methylation platforms e.g. bisulfite sequencing data. We disagree that beta values should be used in principal components analysis. While plotMDS does produce a graphic, the function is performing a statistical analysis (i.e. principal components analysis), which is based on normal distribution theory. The same reasons for not performing differential methylation analysis on the beta values apply in this case (i.e. heteroscedasticity of the beta values). Legends are needed for density plots in Figs. 3 and 8. 
I appreciate minfi annoyingly puts the default legend in the top right, obscuring the hypermethylated mode, but a custom call to legend() ought to fix this. These legends have been added as suggested. Appropriate Y-axis labels are needed for Figs. 9 and 14. The Y-axis labels have been added."
}
]
},
{
"id": "14425",
"date": "17 Jun 2016",
"name": "Peter F. Hickey",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis paper is a well-written workflow for analysing DNA methylation microarrays using Bioconductor packages. A challenge in writing these workflows is to produce something that is opinionated enough to be useful and balanced enough to be fair to packages developed by other people; I believe the authors have struck the right balance.\n\nHowever, my overall assessment is \"Approved With Reservations\" because the data used in the workflow is not easily available and therefore the workflow cannot be tested out by the interested reader.\n\nI spent some time trying to compile the raw data from GEO, but to me this feels a bit too much to expect of the reader, especially when it is likely that the interested reader is a beginner or intermediate user of bioinformatics software. I strongly believe the workflow should either include code to curate/construct/download the necessary files such as SampleSheet.csv and the IDAT files or include a link to prepared example data that can be used right from the 'Loading the data' section of the workflow. For example, http://f1000research.com/articles/4-1070/v1 uses data from the airway Bioconductor package that can easily be installed by the reader to follow along with the workflow.\n\nMy other main suggestion would be to re-run the code using the recently published Bioconductor version 3.3. 
I expect this might require some minor changes to the code, e.g., the minfi::read.450k* functions have been deprecated in favour of minfi::read.metharray* functions.\n\nI have some additional minor comments and suggestions that I will include once I'm able to run through and review the workflow from beginning to end.",
"responses": [
{
"c_id": "2057",
"date": "07 Jul 2016",
"name": "Peter Hickey",
"role": "Reviewer Response",
"response": "p2: beta = M / (M + U + alpha), the alpha parameter should be explained. Also, both the definition of beta and Mvalue differ slightly from that given in the cited Du, P. et al. Comparison of Beta-value and M-value methods for quantifying methylation levels by microarray analysis. BMC Bioinformatics 11, 587 (2010). p3: Perhaps worth mentioning that a complete list of packages for analysing DNA methylation data can be accessed using BiocViews (https://www.bioconductor.org/packages/release/bioc/html/biocViews.html and https://www.bioconductor.org/packages/release/BiocViews.html#___DNAMethylation) p4: \"...loading all the package libraries...\" should be \"...loading all the packages...\" p4: Perhaps worth commenting on which of the loaded packages are methylation-focused and/or purpose of other packages, e.g., stringr, Gviz. p4: This is *super* pedantic (sorry!): strictly speaking the IlluminaHumanMethylation450kmanifest package provides the Illumina manifest for the 450k array, which can then be accessed by using `minfi::getAnnotation()` Figure 2: Not immediately obvious that righthand plot is zoomed in version of lefthand plot. The caption could better explain this. p10: The code produces a warning. Would be helpful to the reader to comment on whether this is cause for concern in this case. Figure 9: Wondering whether helpful to have each panel with y-axis = [0, 1] p25: `islandData` apparently contains 0 ranges. This looks like a bug in the code. p28: \"For gene ontology testing (default), the user can specific collection = \"GO\" for KEGG testing collection = \"KEGG\"\"; this sentence seems incomplete or is perhaps missing a word p29 and p30: The code produces a warning. Would be helpful to the reader to comment on whether this is cause for concern in this case. The workflow uses multiple packages and it's not always clear where each function comes from. 
This could be clarified e.g., by namespacing functions such as `limma::plotMDS()` instead of `plotMDS()`"
},
{
"c_id": "2081",
"date": "26 Jul 2016",
"name": "Jovana Maksimovic",
"role": "Author Response",
"response": "Thanks for taking the time to review our workflow, Peter. In response to your suggestion we have made the data available and rerun the workflow using the latest R and Bioconductor. In response to your other comments: p2: beta = M / (M + U + alpha), the alpha parameter should be explained. Also, both the definition of beta and Mvalue differ slightly from that given in the cited Du, P. et al. Comparison of Beta-value and M-value methods for quantifying methylation levels by microarray analysis. BMC Bioinformatics 11, 587 (2010). This has been addressed. See response to Davide Risso. p3: Perhaps worth mentioning that a complete list of packages for analysing DNA methylation data can be accessed using BiocViews (https://www.bioconductor.org/packages/release/bioc/html/biocViews.html and https://www.bioconductor.org/packages/release/BiocViews.html#___DNAMethylation) This has been added to the paper. p4: \"...loading all the package libraries...\" should be \"...loading all the packages...\" The text has been modified accordingly. p4: Perhaps worth commenting on which of the loaded packages are methylation-focused and/or purpose of other packages, e.g., stringr, Gviz. The text has been modified accordingly. p4: This is *super* pedantic (sorry!): strictly speaking the IlluminaHumanMethylation450kmanifest package provides the Illumina manifest for the 450k array, which can then be accessed by using `minfi::getAnnotation()` The text has been modified accordingly. Figure 2: Not immediately obvious that righthand plot is zoomed in version of lefthand plot. The caption could better explain this. This has been clarified in the figure caption. p10: The code produces a warning. Would be helpful to the reader to comment on whether this is cause for concern in this case. A sentence has been included that explains the reason for the waring. 
Figure 9: Wondering whether helpful to have each panel with y-axis = [0, 1] As we are trying to highlight the differences between the groups tested for individual CpGs and not comparing between CpGs, we feel that the axes are appropriate for the purposes of \"sanity checking\" the results of the statistical analysis. p25: `islandData` apparently contains 0 ranges. This looks like a bug in the code. This was due to the fact that there were not any CpG islands present in the region being plotted; we have selected another region to plot that does have a CpG island so that islandData is no longer empty. p28: \"For gene ontology testing (default), the user can specific collection = \"GO\" for KEGG testing collection = \"KEGG\"\"; this sentence seems incomplete or is perhaps missing a word This sentence has been modified. p29 and p30: The code produces a warning. Would be helpful to the reader to comment on whether this is cause for concern in this case. Added a sentence to the text to explain the warning. The workflow uses multiple packages and it's not always clear where each function comes from. This could be clarified e.g., by namespacing functions such as `limma::plotMDS()` instead of `plotMDS()` We don’t feel it is a particularly useful exercise to change every function to include the package name. Searching the help for any of the functions will inform users which package the function comes from. For example ?plotMDS."
}
]
},
{
"id": "14248",
"date": "21 Jun 2016",
"name": "Michael I. Love",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nI am not an expert in analysis of methylation data, and have not used the methylation packages mentioned in this workflow, so I reviewed the workflow as an uninitiated reader might approach it.\nMajor comments:\nI found the workflow to be easy to follow and informative. The authors have done a good job summarizing a large and complex topic into an reasonable size for a workflow article, while still mentioning the various alternatives that are possible at each step. I appreciated the focus on EDA and checking the quality of results by eye, for example the M-values for the most significant tests and the MDS plots colored by different variables.\nI did not try to run the code, and I agree with the other two reviewers that the code and datasets should be made available and linked to from this workflow.\nMinor comments:\nThe first time “moderated t-statistics” is mentioned, it would benefit to have a citation so that a reader who hasn’t encountered this method before can read the reference, e.g. Smyth 2004.\nThe first or second time IDAT files are mentioned, a small description of these would be useful, a little more than just that these are the raw files. Which platforms produce IDAT files? Are they compressed files? 
About how large are they?\nFigure 2: It wasn’t obvious at first that the plot on the right is the same as the left but zoomed in.\nWhen discussing the choice of normalization depending on whether or not there are global changes across samples due to underlying biology, the authors might consider referencing the quantro article and Bioconductor package by Stephanie Hicks for determining whether there are global changes in genomic datasets across samples, and therefore whether quantile normalization is appropriate. Hicks has an example of whether or not to use quantile normalization for methylation data in the article.\nhttps://www.bioconductor.org/packages/quantro\nPrincipal components is misspelled in the text: “principle components”\nIn the paragraph above the call to makeContrasts, it would be good to state in the text in one sentence what it is this function does, for the benefit of someone who has never performed linear modeling before. Likewise, to explicitly state that coef=1 is referencing the first column of the contrast matrix. It should be stated what is the B-statistic which orders the topTable.\nThe authors should explain a bit more what is being shown in Figure 10 in the caption.\nIn the text and code the authors have written DNAseI, but I believe the more common capitalization is DNaseI.\nThe authors might consider commenting on the top GO categories and the associated FDR values. How far down the list should one look? Can the authors advise the reader how GO results should be reported? Is it fair to pick out the most relevant categories from this list and only report them?\nIt wasn’t clear to me the difference between the gometh and gsameth approaches.\nIt would be good to provide references to literature for “it has been hypothesised that highly variable CpGs in cancer are important for tumour progression”.",
"responses": [
{
"c_id": "2080",
"date": "26 Jul 2016",
"name": "Jovana Maksimovic",
"role": "Author Response",
"response": "Thank you for reviewing our paper, Michael. In response to your comments/suggestions we have made the following changes: The Smyth 2004 citations have been added the first time “moderated t-statistics” is mentioned A description of IDAT files has been added to the text along with a reference to a Bioconductor package that is specifically for reading IDAT files. We have added to the legend for Figure 2 to clarify that the plot on the right is the same as the left but zoomed in. There is now a reference to Hicks and quantro in included in the Normalisation section. Spelling mistakes and typos have been fixed. Function of makeContrasts is described: See response to Davide Risso. We now explicitly state that coef=1 is referencing the first column of the contrast matrix. Included explanation for B-statistic and citation. More detail about the plot has been added to the figure caption for Figure 10. Changed DNAseI to DNasel Typically we would consider GO categories that have associated FDRs less than 5% as significant. Some discussion of these points has been added to the gene set testing section. The gometh function specifically tests only GO and KEGG pathways, whereas the gsameth is a more general function that requires the user to supply their own gene sets for testing. We have changed the sentence “it has been hypothesised that highly variable CpGs in cancer are important for tumour progression” to “it has been hypothesised that highly variable CpGs in cancer may contribute to tumour heterogeneity” and included the following reference: Hansen KD, Timp W, Bravo HC, Sabunciyan S, Langmead B, McDonald OG, Wen B, Wu H, Liu Y, Diep D, Briem E, Zhang K, Irizarry RA, Feinberg AP: Increased methylation variation in epigenetic domains across cancer types. Nat Genet. 2011, 43: 768-775"
}
]
},
{
"id": "14247",
"date": "22 Jun 2016",
"name": "Davide Risso",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nAs someone who has experience with R/Bioconductor and with genomics data, but not direct experience analyzing methylation array data, I found the workflow very useful and I would suggest it to anyone wanting to start analyzing this type of data.\nI do agree with the other reviewers that the value of the workflow will be greatly increased if the dataset used was available as an R object. The authors should consider submitting an experiment data package to Bioconductor to accompany the workflow. Alternatively, they could provide the dataset as a supplementary file.\nAs for the analysis itself, I only have one major question. Note that I do not have direct experience analyzing methylation array, so this is a genuine question rather than a criticism.\nIn gene expression analysis, we tend to perform filtering prior to normalization, while the authors here first normalize the data by quantile normalization and then filter out probes that are low quality and/or affected by SNPs. Wouldn't it be safer to perform filtering before normalization? I understand that given the few probes affected, the order has likely very little effect in this dataset. 
But I naively imagine that if there are many problematic probes and, say, the quality of the samples is confounded with the biology, there could be issues in using low quality probes for normalization.\nOther minor points:\nI agree that the code should be re-run with the latest release of R and Bioconductor.\n\nIn the definition of \\beta, \\alpha should be defined, and its default value in getBeta() should be specified.\n\nSpelling: most of the article uses British English spelling, but the word \"normalization\" is sometimes (but not always) spelled in American English.\n\nA sentence describing what is the procedure implemented in preprocessQuantile() is needed for people not familiar with normalization.\n\nI agree that it would be useful to provide a brief description of what is a contrasts matrix as this section could be confusing for people unfamiliar with statistical models.\n\nFor the same reason, the authors should add a brief explanation of the problem of multiple testing and what is the false discovery rate. Or at least provide references to the appropriate literature.",
"responses": [
{
"c_id": "2079",
"date": "26 Jul 2016",
"name": "Jovana Maksimovic",
"role": "Author Response",
"response": "Thanks for your review, Davide. While we agree that normalisation post-filtering makes sense, there are some practical aspects with the data objects that minfi uses which makes this difficult. Many (but not all) normalisation procedures in minfi accept an rgSet object, which can be thought of as a raw data object, which cannot easily be subset by CpG site. These normalisation procedures then output a different type of data object, such as MethylSet or GenomicRatioSet, which are much easier to work with in terms of filtering out problematic CpG sites. Due to the sheer number of CpG sites observed per sample (>450,000) we believe it shouldn’t make too much difference for most datasets, especially if very poor quality samples are excluded prior to normalisation, although it is possible that there are exceptions to this. Response to minor points: We have spent some time modifying the workflow to run with the latest R and Bioconductor. We have added additional details regarding beta values, M-values and the offset in the paper. We have changed \"normalization\" to \"normalisation\" throughout the text. A sentence has been added about preprocessQuantile in the normalisation section. We have included some additional explanation of contrast matrices. An additional paragraph was added explaining about the issues of multiple testing in very high dimensional data."
}
]
}
] | 1
|
https://f1000research.com/articles/5-1281
|
https://f1000research.com/articles/6-430/v1
|
05 Apr 17
|
{
"type": "Case Report",
"title": "Case Report: Orbital metastasis as the presenting feature of lung cancer",
"authors": [
"Sunil Munakomi",
"Samrita Chaudhary",
"Pramod Chaudhary",
"Jagdish Thingujam",
"Bijoy Mohan Kumar",
"Iype Cherian",
"Samrita Chaudhary",
"Pramod Chaudhary",
"Jagdish Thingujam",
"Bijoy Mohan Kumar",
"Iype Cherian"
],
"abstract": "Orbital metastasis from lung cancer as an initial presenting symptom is a rare entity, which may paradoxically delay the diagnosis and initiation of correct management, due to the confusion of it being primary orbital pathology. Herein we report a case of a 58 year old woman, who presented with painful orbital swelling along with diminution in her vision. The patient was initially thought to have a primary eye lesion; however chest X-ray was suggestive of a lung mass, which was confirmed by chest computed topography followed by ultrasound guided fine needle aspiration cytology. The patient was then referred to a cancer centre for further management. This case report aims to increase the knowledge about this metastasis as a probable cause of orbital symptoms in certain subsets of patients, so that correct therapeutic decisions may be made in the future.",
"keywords": [
"orbit",
"metastasis",
"lung cancer"
],
"content": "Introduction\n\nOrbital metastatis as the initial presenting symptom from a metastatic lung lesion is a rare entity, occurring at an incidence of approximately 7%1,2. However, this should be kept as one of the differentials in any patients presenting with orbital symptoms, so as to frame an accurate and effective plan of management. Occasionally such rare presentations would invariably lead to a delay in the correct diagnosis, thereby increasing the risk of loss of vision, which decreases the quality of life of patients3. Poor management also increases the odds of progressing the tumor stage. Herein, we report one such case in a 58 year old woman, who presented with unilateral peri-orbital swelling and diminution of vision. Following detailed examination and investigations, the patient was found to harbor a malignant lung lesion.\n\n\nCase report\n\nA 58 year old woman from central Nepal presented to our outpatient clinic with a history of painful swelling around her right eye for two months. The patient also complained of diminishing vision in the same eye. The vision in the patient’s left eye had been previously lost following an injury during childhood. There was no other relevant family information or any significant past medical or surgical illnesses of the patient. Local examination revealed presence of peri-orbital swelling in the right eye with restricted eye movements (Figure 1). The patient’s visual acuity in the same eye was restricted to only perception to light. Funduscopy revealed the presence of papilledema. Remaining physical examinations were normal.\n\nRadio-imaging of the patient’s orbits revealed the presence of hyperostotic changes in theright orbit, with presence of enhancing lesions on the right globe with extension to the para-nasal sinuses and also invasion along the dural base in the anterior cranial fossa (Figure 2 and Figure 3). The initial differential diagnosis was an infective pathology. 
However, the patient was not immuno-compromised.\n\nA chest X-ray was performed as routine work up, which inadvertently revealed an elevated right hemi-diaphragm with a right para-hilar mass (Figure 4). Further evaluation through chest computed tomography confirmed the finding of a right para-hilar mass (Figure 5).\n\nWe discussed with the patient and her relatives the possibility of the eye findings being related to the lung lesion and recommended approaches to obtain a definitive diagnosis. Ultrasound guided fine needle aspiration cytology (FNAC) from the lung lesion revealed findings suggestive of a malignant lung disease (Figure 6). Diagnostic biopsy via nasal endoscopy confirmed the metastatic nature of the disease from the lung (Figure 7). Therefore, a diagnosis of metastatic lung disease to the orbit was finally confirmed.\n\nThe patient was started on steroid therapy (injection dexamethasone at 8 mg stat followed by 4 mg every eight hours), which decreased the swelling of the patient’s eye and improved visual acuity to finger counting within a period of 1 week. This further hinted at a compressive rather than an infiltrative effect of the lesion on the optic nerve. The patient was counseled and then immediately referred to the National Cancer Centre, Kathmandu, Nepal for further management with systemic chemo-radiation therapy after evaluation. Since the patient had only a single, minimally functioning eye left, the decision was taken not to surgically decompress the lesion from the orbit. The patient was initially started on chemotherapy, with the further plan of management to be tailored as per the clinical response seen in the patient.\n\nInitially, metastatic deposits causing eye swelling in the patient were not suspected. It was serendipity that the routine chest X-ray gave a clue to the presence of a lung mass. 
Even a small delay may have had a disastrous impact on the visual outcome in this patient.\n\n\nDiscussion\n\nMetastatic disease to the orbit is a rare phenomenon occurring in only 7% of all cancers1–2. Of these, symptoms related to the orbital metastasis present earlier than those of the primary lesion in around 20% of patients2. Breast, prostate and lung carcinomas are the usual primaries in most cases of metastatic lesions to the orbit4–5. Lid swelling is a common presentation in such metastatic lesions5, which can paradoxically delay the actual diagnosis, as the appearance may be attributed to benign orbital lesions. Diplopia is the most common presenting symptom in metastatic lesions, while proptosis or visual loss is seen in patients with primary orbital neoplasms6. Loss of vision can be due either to direct infiltration of the optic nerve or to mass effect. Rarely, it is subsequent to a paraneoplastic phenomenon, mainly from lung carcinoma. Pain resulting from perineural invasion is typical for metastatic orbital lesions6.\n\nDiagnosis can be confirmed with fine needle aspiration, which has a diagnostic accuracy of more than 90%7. Further investigations need to be carried out to stage the tumor before embarking on a management option; PET scanning is a rapid, viable modality for assessing tumor stage8.\n\nSurgical debulking is the cornerstone of management in patients with diminished vision subsequent to optic nerve compression. This was not attempted in our case, since it was the patient’s only functioning eye, and even that was functionally impaired. Surgical removal of the lesion may be locally effective in the few patients whose symptoms are due to compression of the optic nerve from raised intra-orbital pressure6. However, chemo-radiation is usually preferred to surgery because it is non-invasive. Chemotherapy, especially platinum-based regimens, is chosen over radiation for small cell lung cancer because of the risk of damage to the eye lens. 
For non-small cell cancers, either photon radiation of 30–40 Gy, or newer frontiers, such as tyrosine kinase inhibitors, are the mainstay of treatment. However, overall prognosis, despite systemic therapy, is poor, with a median survival of little over 1 year, and only 27% of patients surviving for more than two years6,9–13. Compared to breast cancer, lung cancers metastasize earlier to the orbit and also have a shorter median survival time14.\n\n\nConclusions\n\nIt is prudent to have a strategy for the management of cases presenting with eye symptoms, so that rare causes, such as metastatic lesions, are not overlooked. Such a strategy would help in providing an early and effective treatment plan for patients with metastatic orbital lesions. This would increase the chance of improving vision, enhance quality of life and also allow early initiation of cancer therapy following appropriate work up and staging.\n\n\nConsent\n\nBoth written and verbal informed consent for publication of images and clinical data related to this case was sought and obtained from the patient.",
"appendix": "Author contributions\n\n\n\nSC, PC and JT prepared the manuscript, did the literature review and collected the data. SM, IC and BMK revised, edited and approved the final manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nShields JA, Shields CL, Brotman HK, et al.: Cancer metastatic to the orbit: the 2000 Robert M. Curts Lecture. Ophthal Plast Reconstr Surg. 2001; 17(5): 346–354. PubMed Abstract\n\nMacedo JE, Machado M, Araújo A, et al.: Orbital metastasis as a rare form of clinical presentation of non-small cell lung cancer. J Thorac Oncol. 2007; 2(2): 166–167. PubMed Abstract | Publisher Full Text\n\nHolland D, Maune S, Kovács G, et al.: Metastatic tumors of the orbit: a retrospective study. Orbit. 2003; 22(1): 15–24. PubMed Abstract | Publisher Full Text\n\nEliassi-Rad B, Albert DM, Green WR: Frequency of ocular metastases in patients dying of cancer in eye bank populations. Br J Ophthalmol. 1996; 80(2): 125–128. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNelson CC, Hertzberg BS, Klintworth GK: A histopathologic study of 716 unselected eyes in patients with cancer at the time of death. Am J Ophthalmol. 1983; 95(6): 788–793. PubMed Abstract | Publisher Full Text\n\nDe Potter P: Ocular manifestations of cancer. Curr Opin Ophthalmol. 1998; 9(6): 100–4. PubMed Abstract | Publisher Full Text\n\nTijl JW, Koornneef L: Fine needle aspiration biopsy in orbital tumours. Br J Ophthalmol. 1991; 75(8): 491–492. PubMed Abstract | Publisher Full Text | Free Full Text\n\nManohar K, Mittal BR, Bhattacharya A, et al.: Orbital Metastases as Presenting Sign of Lung Carcinoma: Detection of Primary Malignancy and Disease Burden by F-18 FDG PET/CT. Nucl Med Mol Imaging. 2012; 46(1): 73–75. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nGlasburn JR, Klionsky M, Brady LW: Radiation therapy for metastatic diseases involving the orbit. Am J Clin Oncol. 1984; 7: 145–148. Publisher Full Text\n\nFerry AP, Font RL: Carcinoma metastatic to the eye and orbit. I. A clinicopathologic study of 227 cases. Arch Ophthalmol. 1974; 92(4): 276–286. PubMed Abstract | Publisher Full Text\n\nSun L, Qi Y, Sun X, et al.: Orbital metastasis as the initial presentation of lung adenocarcinoma: a case report. Onco Targets Ther. 2016; 9: 2743–2748. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKoma Y, Goto K, Yoshida C, et al.: Orbital metastasis secondary to pulmonary adenocarcinoma treated with gefitinib: a case report. J Med Case Rep. 2012; 6: 353. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMori H, Maekawa N, Satoda N, et al.: [A case of primary lung cancer with initial symptoms due to orbital metastases]. Nihon Kokyuki Gakkai Zasshi. 2003; 41(1): 19–24. PubMed Abstract\n\nFreedman MI, Folk JC: Metastatic tumors to the eye and orbit. Patient survival and clinical characteristics. Arch Ophthalmol. 1987; 105(9): 1215–9. PubMed Abstract | Publisher Full Text"
}
|
[
{
"id": "22210",
"date": "02 May 2017",
"name": "Matteo Giaj Levra",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis is an interesting case report, about an unusual site of metastasis in lung cancer patient. There are details about the symptoms, diagnosis and symptomatic treatment of the orbital metastasis.\n\nI would only add if there was a history of tobacco exposure (not cited in the text).\n\nIs the background of the case’s history and progression described in sufficient detail? Yes\n\nAre enough details provided of any physical examination and diagnostic tests, treatment given and outcomes? Yes\n\nIs sufficient discussion included of the importance of the findings and their relevance to future understanding of disease processes, diagnosis or treatment? Yes\n\nIs the case presented with sufficient detail to be useful for other practitioners? Yes",
"responses": []
},
{
"id": "22208",
"date": "04 May 2017",
"name": "Trevor K. Rogers",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nA major deficiency of the report is the lack of characterization of the tumour. Whilst the CT imaging is convincing for a lung primary it should at the very least be commented whether the tumour was non-small cell (and ideally whether adeno-, squamous or neuroendocrine) or small cell -the H&E slide is suspicous of the latter, although I am not a pathologist. Ideally it would have been established if it were TTF1 positive supporting a lung origin.\nI am surprised that the smoking status of the patient is not reported.\nThe sentence: \"Orbital metastatis as the initial presenting symptom from a metastatic lung lesion is a rare entity, occurring at an incidence of approximately 7%\" - the denominator 7% of what? - I suspect orbital tumours. Metastasis spelt incorrectly\nThe sentence: \"Poor management also increases the odds of progressing the tumor stage.\" is incorrect: the disease stage is already IV. What I think they mean is that patients may become too unwell for anticancer treatments and the visual impairment worse and harder to palliate.\nFunduscopy should be fundoscopy\n\"PET scan is a rapid viable model for assessment tumor staging8\" PET scanning would only be considered if simpler imaging modalities failed to identify a primary site, as in a lung cancer, staging is already M1b\nI remain troubled by the 7% figure which is repeated in the discussion. If orbital metastasis really occurred in 7% of all cancers it would not be regarded as that rare. 
Reference 2 does not give any evidence for the 7% figure quoted, does not give the denominator (either) and I can't see any evidence for the 7% figure quoted in reference 1.\nThe outcome for the patient is not described with sufficient detail, including length of survival.\n\nIs the background of the case’s history and progression described in sufficient detail? No\n\nAre enough details provided of any physical examination and diagnostic tests, treatment given and outcomes? No\n\nIs sufficient discussion included of the importance of the findings and their relevance to future understanding of disease processes, diagnosis or treatment? Yes\n\nIs the case presented with sufficient detail to be useful for other practitioners? No",
"responses": []
},
{
"id": "22711",
"date": "17 May 2017",
"name": "Luis Rafael Moscote-Salazar",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nAn interesting case of orbital metastasis of lung adenocarcinoma is presented. Lung cancer is one of the leading causes of death globally. Adenocarcinoma occupies the first place in epidemiological frequency (50%) is also one of the most frequent types of tumors in non-smokers. They are classified into 4 histological types: acinar, papillary, bronchialveolar and the mucin secretory variety.\nThe adenocarcinoma originates from mucoproductive cells. Adenocarcinoma metastasis to the orbit is infrequent. In the orbits most of the tumors are primary, but they can also reach the orbit by contiguity. At least 50% of patients with orbital metastasis are unaware of the existence of a primary tumor. Metastasis is less frequent than ocular metastasis. Apparently, there is no predilection for any specific orbit and their bilateral appearance is rare. The most frequent orbital metastasis are breast, lung and orbit. They have been described a 5 types of clinical syndromes associated with orbital metastasis. The first type is mass syndromes.\nIn more than 50% of cases. This syndrome causes displacement of the eyeball. The second type is infiltrative. The third type is an inflammatory type. The fourth type of metastasis is called functional and is frequently located in apex orbital. The last type is silent, does not produce symptomatology. The use of fine needle aspiration biopsy is an excellent option when orbital metastasis is suspected. 
Finally, the patients with orbital metastasis are not candidates for orbit surgery for extirpation of the tumor mass. The realization of surgery does not offer a cure. In cases of slow-growing tumors, the extirpation of metastasis and the primary tumor may improve the prognosis. Management strategies include radiation therapy. Sometimes the use of radiation therapy results in vision recovery.\n\nIs the background of the case’s history and progression described in sufficient detail? Yes\n\nAre enough details provided of any physical examination and diagnostic tests, treatment given and outcomes? Yes\n\nIs sufficient discussion included of the importance of the findings and their relevance to future understanding of disease processes, diagnosis or treatment? Yes\n\nIs the case presented with sufficient detail to be useful for other practitioners? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-430
|
https://f1000research.com/articles/6-423/v1
|
04 Apr 17
|
{
"type": "Software Tool Article",
"title": "WTFgenes: What's The Function of these genes? Static sites for model-based gene set analysis",
"authors": [
"Christopher J. Mungall",
"Ian H. Holmes",
"Christopher J. Mungall"
],
"abstract": "A common technique for interpreting experimentally-identified lists of genes is to look for enrichment of genes associated with particular ontology terms. The most common test uses the hypergeometric distribution; more recently, a model-based test was proposed. These approaches must typically be run using downloaded software, or on a server. We develop a collapsed likelihood for model-based gene set analysis and present WTFgenes, an implementation of both hypergeometric and model-based approaches, that can be published as a static site with computation run in JavaScript on the user's web browser client. Apart from hosting files, zero server resources are required: the site can (for example) be served directly from Amazon S3 or GitHub Pages. A C++11 implementation yielding identical results runs roughly twice as fast as the JavaScript version. WTFgenes is available from https://github.com/evoldoers/wtfgenes under the BSD3 license. A demonstration for the Gene Ontology is usable at https://evoldoers.github.io/wtfgo.",
"keywords": [
"Gene Ontology",
"Graphical Model",
"Gene Set Enrichment Analysis"
],
"content": "Introduction\n\nTerm Enrichment Analysis (TEA) is a common technique for finding functional patterns, specifically overrepresented ontology terms, in a set of experimentally identified genes1. The most common approach, which we refer to as Frequentist TEA, is a one-tailed Fisher’s Exact Test (based on the hypergeometric distribution, which models the number of term-associations if the gene set was chosen by chance), with a suitable correction for multiple hypothesis testing. Frequentist TEA has been implemented many times on various platforms1–8.\n\nA model-based alternative to Frequentist TEA, which more directly addresses some of the multiple testing issues (for example, by modeling the ways that an observed gene list can be broken down into complementary gene sets), is Bayesian TEA. In contrast to Frequentist TEA, which just rejects a null hypothesis that genes are chosen by chance, the Bayesian TEA explicitly models the alternative hypothesis that the gene set was generated from a few random ontology terms. This approach was introduced by 9 and further developed by 10, who implemented model-based testing in Java and R11. However, the model-based approach remains significantly less well-explored than frequentist approaches.\n\nThe graphical model underpinning Bayesian TEA is sketched in Figure 1. For each of the m terms there is a boolean random variable Tj (“term j is activated”). For each of the n genes there is a directly-observed boolean random variable Oi (“gene i is observed in the gene set”), and one deterministic boolean variable Hi (“gene i is activated”) defined by Hi = 1 − Πj∈Gi (1 − Tj), where Gi is the set of terms associated with gene i (including directly annotated terms, as well as ancestral terms implied by transitive closure of the directly annotated terms). 
The probability parameters are π (term activation), α (false positive) and β (false negative), and the respective hyperparameters are p = (p0, p1), a = (a0, a1) and b = (b0, b1).\n\nOther variables and hyperparameters are defined in the text. Circular nodes indicate continuous-valued variables or hyperparameters; square nodes indicate discrete-valued (boolean) variables. Dashed lines indicate deterministic relationships; shaded nodes indicate observations. Plates (rounded rectangles) indicate replicated subgraph structures.\n\nThe model is\n\nTj ∼ Bernoulli(π),  P(Oi = 1 | Hi = 0) = α,  P(Oi = 0 | Hi = 1) = β\n\nwith π ∼ Beta(p), α ∼ Beta(a) and β ∼ Beta(b). The model of Bauer et al.10 is similar, but uses an ad hoc discretized prior for π, α and β.\n\nMost Bayesian and Frequentist TEA implementations are designed for desktop use. Several Frequentist TEA implementations are designed for the web, such as DAVID-WS6 and Enrichr8,12,13, which has a rich dynamic web front-end. However, web-facing Frequentist TEA implementations generally require a server-hosted back end that executes code. Further, there are no JavaScript-based Bayesian TEA implementations, and no web-facing implementations other than the Java-based Ontologizer, which can be loaded via Java Web Start.\n\nIn order to further explore model-based TEA and compare it to Frequentist TEA, and to make these investigations accessible to researchers in a way that would be easily embeddable in static websites, we developed WTFgenes, a JavaScript implementation of both approaches with (for time-sensitive applications) a parallel C++ implementation that is numerically identical.\n\nWe note in passing that Fisher’s Exact Test—which we call Frequentist TEA—was originally motivated by a blind tea-tasting challenge14.\n\n\nMethods\n\nIn developing our Bayesian TEA sampler, we introduce a collapsed version of the model in Figure 1 by integrating out the probability parameters. 
Let cp = ∑j=1..m Tj count the number of activated terms, cg = ∑i=1..n Hi the activated genes, ca = ∑i=1..n Oi (1 − Hi) the false positives and cb = ∑i=1..n (1 − Oi) Hi the false negatives.\n\nThen\n\nP(T, O | a, b, p) = Z(cp; m, p) Z(ca; n − cg, a) Z(cb; cg, b)\n\nwhere\n\nZ(k; N, A) = B(A1 + k, A0 + N − k) / B(A1, A0)\n\nis the beta-Bernoulli distribution for k ordered successes in N trials with hyperparameters A = (A0, A1), using the beta function\n\nB(x, y) = Γ(x) Γ(y) / Γ(x + y)\n\nIntegrating out probability parameters improves sampling efficiency and allows for higher-dimensional models where, for example, we observe multiple gene sets and give each term its own probability πj or each gene its own error rates (αi, βi). Our implementation by default uses uninformative priors with hyperparameters a = b = p = (1, 1), but this can be overridden by the user.\n\nThe MCMC sampler uses a Metropolis-Hastings kernel15. Each proposed move perturbs some subset of the term variables. The moves include flip, where a single term is toggled; step, where any activated term and any one of its unactivated ancestors or descendants are toggled; jump, where any activated term and any unactivated term are toggled; and randomize, where all term variables are uniformly randomized. The relative rates of these moves can be set by the user.\n\nThe sampler of Bauer et al.10 implemented only the flip move. To test the relative efficacy of the newly-introduced moves we measured the autocorrelation of the term variables for a dataset of 17 S.cerevisiae genes involved in mating (the gene IDs used in this evaluation, for purposes of reproduction, were: STE2, STE3, STE5, GPA1, SST2, STE11, STE50, STE20, STE4, STE18, FUS3, KSS1, PTP2, MSG5, DIG1, DIG2, STE12; other representative gene sets for yeast may be obtained from the Gene Ontology website at http://geneontology.org/experimental/enrichment-genesets/yeast/ and several of these are bundled with the example dataset in the WTFgenes repository). 
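As an aside, the collapsed likelihood above can be sketched numerically as follows. This is an illustrative sketch, not the WTFgenes source: the Lanczos log-gamma helper and the convention that A1 in A = (A0, A1) is the success pseudo-count are assumptions of the sketch.\n\n```javascript\n// Illustrative sketch of the collapsed likelihood (not the WTFgenes code itself).\n// Standard Lanczos approximation (g = 7) to the log-gamma function.\nconst LANCZOS = [0.99999999999980993, 676.5203681218851, -1259.1392167224028,\n  771.32342877765313, -176.61502916214059, 12.507343278686905,\n  -0.13857109526572012, 9.9843695780195716e-6, 1.5056327351493116e-7];\nfunction lgamma(x) {\n  if (x < 0.5)  // reflection formula for small arguments\n    return Math.log(Math.PI / Math.sin(Math.PI * x)) - lgamma(1 - x);\n  x -= 1;\n  let a = LANCZOS[0];\n  const t = x + 7.5;\n  for (let i = 1; i < 9; i++) a += LANCZOS[i] / (x + i);\n  return 0.5 * Math.log(2 * Math.PI) + (x + 0.5) * Math.log(t) - t + Math.log(a);\n}\nconst logBeta = (x, y) => lgamma(x) + lgamma(y) - lgamma(x + y);\n\n// Beta-Bernoulli probability Z(k; N, A) of k ordered successes in N trials,\n// with hyperparameters A = [A0, A1] (A1 assumed to be the success pseudo-count).\nfunction Z(k, N, A) {\n  return Math.exp(logBeta(A[1] + k, A[0] + N - k) - logBeta(A[1], A[0]));\n}\n\n// Collapsed likelihood P(T, O | a, b, p) from the counts defined in the text:\n// cp activated terms of m; ca false positives of n - cg; cb false negatives of cg.\nfunction collapsedLikelihood(cp, m, cg, n, ca, cb, a, b, p) {\n  return Z(cp, m, p) * Z(ca, n - cg, a) * Z(cb, cg, b);\n}\n\nconsole.log(Z(1, 2, [1, 1])); // ≈ 1/6\n```\n\nA useful sanity check on the sketch: with the uniform hyperparameters (1, 1) used as defaults in the text, Z(k; N, A) reduces to k! (N − k)! / (N + 1)!.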
The results, shown in Figure 2, led us to set the MCMC defaults, such that the flip, step, and jump moves are equiprobable, while randomize is disabled.\n\nA rapidly-decaying curve indicates an efficiently-mixing kernel. The kernel incorporating flip, step and jump moves (defined in the text) mixes most efficiently.\n\nWe have implemented both Frequentist TEA (with Bonferroni correction) and Bayesian TEA (as described above), in both C++11 and JavaScript. The JavaScript version can be run as a command-line tool using node, or via a web interface in a browser, and includes extensive unit tests. The two implementations use the same random number generator and yield numerically identical results.\n\nOur JavaScript software, when used as a web application, offers a “quick report” view using Frequentist TEA. For the slower-running, but more powerful, Bayesian TEA, the software plots the log-likelihood during an MCMC sampling run, for visual feedback. The repository includes setup scripts allowing the tool to be deployed as a “static site”, i.e. consisting only of static files (HTML, CSS, JSON, and JavaScript) that can be hosted via a minimal web server with no need for dynamic code execution. 
This has considerable advantages: static web hosting is generally much cheaper, and far more secure, than running server-hosted web applications.\n\nAn example WTFgenes static site, configured for the GO-basic ontology and GO-annotated genomes from the Gene Ontology website, can be found at https://evoldoers.github.io/wtfgo.\n\nAn earlier version of this article can be found on bioRxiv (doi: 10.1101/114785).\n\n\nResults\n\nWhen compiled using clang, the C++ version of WTFgenes is about twice as fast as the JavaScript version: a benchmark of Bayesian TEA on a late-2014 iMac (4GHz Intel Core i7), using the above mentioned 17 yeast mating genes and the relevant subset of 518 GO terms, run for 1,000 samples per term, took 37.6 seconds of user time for the C++ implementation and 79.8 seconds in JavaScript.\n\nBy contrast, the Frequentist TEA approach is almost instant. However, its weaker statistical power is apparent from Figure 3, which compares the recall vs specificity of Bayesian and Frequentist methods on simulated datasets (The full workflow for this simulation is available at http://doi.org/10.5281/zenodo.40060816). For values of N from 1 to 4, we sampled N terms from the S.cerevisiae subset of the Gene Ontology, and generated a corresponding set of yeast genes with false positive rate 0.1% and false negative rate 1%. The MCMC sampler was run for 100 iterations per term, and this experiment was repeated 100 times. The model-based approach has vastly superior recall to the Fisher exact test, and the difference grows with the number of terms.\n\nThe axes are scaled per term. There are 5,919 ontology terms annotated to S.cerevisiae genes, so (for example) a false discovery rate of 0.001 corresponds to about 6 falsely reported terms.\n\n\nDiscussion\n\nJavaScript genome browsers, such as JBrowse17, represent a broader web trend of producing static sites where possible, for reasons of security and performance. 
We have implemented such a static site generator for ontological term enrichment analysis of gene sets that offers both Bayesian and frequentist tests. In contrast with existing web services for Frequentist TEA, such as DAVID-WS or Enrichr, it requires no server resources and allows comparison of Bayesian and Frequentist approaches.\n\nModel-based TEA is versatile: it can readily be extended to allow for datasets that are structured temporally18, spatially19, or by genomic region20; to use domain-specific biological knowledge21; or to incorporate additional lines of evidence such as quantitative data22. We hope our development of a collapsed likelihood, and evaluation of different MCMC kernels, will assist these efforts.\n\n\nSoftware and data availability\n\nLatest source code: https://github.com/evoldoers/wtfgenes\n\nArchived source code as at time of publication: http://doi.org/10.5281/zenodo.40060623\n\nSoftware license: BSD3\n\nA demonstration for the Gene Ontology is usable at https://evoldoers.github.io/wtfgo.\n\nA Makefile-driven simulation study underpinning results reported in this paper is available at http://doi.org/10.5281/zenodo.40060816.",
"appendix": "Author contributions\n\n\n\nIH designed the method, wrote the software, performed the analyses, and wrote the manuscript. CM suggested the idea, consulted on the design of the software and corrected errors in the manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nIHH was partially supported by NHGRI (grant HG004483). CJM was partially supported by Office of the Director (R24-OD011883) and the Director, Office of Science, Office of Basic Energy Sciences, of the US Department of Energy (Contract No. DE-AC02-05CH11231).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nReferences\n\nBoyle EI, Weng S, Gollub J, et al.: GO::TermFinder--open source software for accessing Gene Ontology information and finding significantly enriched Gene Ontology terms associated with a list of genes. Bioinformatics. 2004; 20(18): 3710–3715. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRobinson MD, Grigull J, Mohammad N, et al.: FunSpec: a web-based cluster interpreter for yeast. BMC Bioinformatics. 2002; 3: 35. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKhatri P, Draghici S, Ostermeier GC, et al.: Profiling gene expression using onto-express. Genomics. 2002; 79(2): 266–270. PubMed Abstract | Publisher Full Text\n\nZeeberg BR, Feng W, Wang G, et al.: GoMiner: a resource for biological interpretation of genomic and proteomic data. Genome Biol. 2003; 4(4): R28. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBauer S, Grossmann S, Vingron M, et al.: Ontologizer 2.0--a multifunctional tool for GO term enrichment analysis and data exploration. Bioinformatics. 2008; 24(14): 1650–1651. PubMed Abstract | Publisher Full Text\n\nJiao X, Sherman BT, Huang da W, et al.: DAVID-WS: a stateful web service to facilitate gene/protein list analysis. Bioinformatics. 2012; 28(13): 1805–1806. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nMi H, Muruganujan A, Casagrande JT, et al.: Large-scale gene function analysis with the PANTHER classification system. Nat Protoc. 2013; 8(8): 1551–1566. PubMed Abstract | Publisher Full Text\n\nChen EY, Tan CM, Kou Y, et al.: Enrichr: interactive and collaborative HTML5 gene list enrichment analysis tool. BMC Bioinformatics. 2013; 14: 128. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLu Y, Rosenfeld R, Simon I, et al.: A probabilistic generative model for GO enrichment analysis. Nucleic Acids Res. 2008; 36(17): e109. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBauer S, Gagneur J, Robinson PN: GOing Bayesian: model-based gene set analysis of genome-scale data. Nucleic Acids Res. 2010; 38(11): 3523–3532. PubMed Abstract | Free Full Text\n\nBauer S, Robinson PN, Gagneur J: Model-based gene set analysis for Bioconductor. Bioinformatics. 2011; 27(13): 1882–1883. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGundersen GW, Jones MR, Rouillard AD, et al.: GEO2Enrichr: browser extension and server app to extract gene sets from GEO and analyze them for biological functions. Bioinformatics. 2015; 31(18): 3060–3062. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKuleshov MV, Jones MR, Rouillard AD, et al.: Enrichr: a comprehensive gene set enrichment analysis web server 2016 update. Nucleic Acids Res. 2016; 44(W1): W90–97. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFisher RA: Mathematics of a lady tasting tea. In The Design of Experiments. Oliver and Boyd, Edinburgh, 1935.\n\nGilks WR, Richardson S, Spiegelhalter DJ: Markov Chain Monte Carlo in Practice. Chapman & Hall, London, UK, 1996. Reference Source\n\nHolmes IH, Mungall C: ihh/wtfgenes-paper: 0.1.0 release [Data set]. Zenodo. 2017. Data Source\n\nBuels R, Yao E, Diesh CM, et al.: JBrowse: a dynamic web platform for genome visualization and analysis. Genome Biol. 2016; 17: 66. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nHejblum BP, Skinner J, Thiébaut R: Time-Course Gene Set Analysis for Longitudinal Gene Expression Data. PLoS Comput Biol. 2015; 11(6): e1004310. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLin Z, Sanders SJ, Li M, et al.: A Markov Random Field-Based Approach to Characterizing Human Brain Development Using Spatial-Temporal Transcriptome Data. Ann Appl Stat. 2015; 9(1): 429–451. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMcLean CY, Bristor D, Hiller M, et al.: GREAT improves functional interpretation of cis-regulatory regions. Nat Biotechnol. 2010; 28(5): 495–501. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSzczurek E, Beerenwinkel N: Modeling mutual exclusivity of cancer mutations. PLoS Comput Biol. 2014; 10(3): e1003503. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKalaitzis AA, Lawrence ND: A simple approach to ranking differentially expressed gene expression time courses through Gaussian process regression. BMC Bioinformatics. 2011; 12: 180. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHolmes IH, Mungall C: evoldoers/wtfgenes: 0.1.0 release [Data set]. Zenodo. 2017. Data Source"
}
|
[
{
"id": "22056",
"date": "05 May 2017",
"name": "Cedric Simillion",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe authors present a novel \"Term Enrichment Analysis\" algorithm, which expands on previous work by Bauer et al. (2010). The provided implementation of the algorithm as a stand-alone web interface is very well-designed and user-friendly. The availability of a command-line implementation in C++ ensures that the method can be incorporated in diverse workflows.\n\nI do, however, have some major criticisms about the presentation of the method in the manuscript as well as the validation method used.\nMajor points:\nMy main problem with this manuscript is that the description of the algorithm is very terse and hard to understand. In particular, the following points need clarification:\n\nThe algorithm model needs to be described in less mathematical terms. The present description makes it very hard for a biologist to understand the merits of the algorithm.\n\nThe biological meaning or impact of the mentioned hyperparameters A0 and A1 needs to be added.\n\nThe authors claim as one of the advantages of their algorithm that \"Integrating out probability parameters improves sampling efficiency and allows for higher-dimensional models where, for example, we observe multiple gene sets and give each term its own probability πj or each gene its own error rates (αi, βi)\" However, they do not mention any procedure for estimating these parameter values. 
A detailed example of such a procedure would greatly benefit the manuscript.\n\nRelated to the previous point: It seems that there are quite a few parameters in this algorithm that can be adjusted. While the implementation provided does seem to suggest sensible default values, it would be good if the authors could prove the robustness of their method by validating a test set against a range of parameter values.\n\nThe second major concern I have with this manuscript is lack of rigour and detail in the applied validation procedures.\n\nIt is not clear at all to me what is meant with \"the autocorrelation of the term variables for a dataset\". This concept needs to be explained in more detail, ideally with an example.\n\nIn the tuning step of the MCMC kernels, the authors used a test set of only 17 genes. Typical transcriptomics experiments yield, especially in mammals, up to thousands of differentially expressed genes. It would therefore be good to repeat this analysis with increasing test set sizes (e.g. 10 - 100 - 1000).\n\nPossibly the biggest issue I have with this manuscript is that the authors compare the performance of their algorithm to that of a simple hypergeometric test, using simulated data. As several authors have already pointed out before, the hypergeometric approach is a poor strategy for doing gene set analysis1. Validation should be against more sophisticated \"frequentist\" algorithms such as TopGO2, PADOG3, SetRank4, ... as these algorithms also deal with the multiple hypothesis testing problem by considering the overlap between different term gene sets. Ideally, a benchmarking strategy on real biological data, such as the one suggested by Tarca et al.5 would be used.\n\nMinor Point:\nMost of the literature refers to this type of analysis as \"Gene Set Enrichment Analysis\" GSEA. It would be good if the authors at least refer to this term as well.\n\nIs the rationale for developing the new software tool clearly explained? 
Partly\n\nIs the description of the software tool technically sound? Yes\n\nAre sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? Partly\n\nIs sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? Partly\n\nAre the conclusions about the tool and its performance adequately supported by the findings presented in the article? No",
"responses": []
},
{
"id": "23167",
"date": "23 Jun 2017",
"name": "Ruth Isserlin",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nIn the article \"WTFgenes: What's The Function of these genes? Static sites for model-based gene set analysis\" Mungall and Holmes introduce a java script static site implementation of a model based Bayesian method to calculate functional enrichment. Included in this is an implementation of the current standard method, fisher exact test.\n\nIn the paper the method model is very well explained but given that the authors are introducing a tool to access this model very little was discussed about how the software works. For example, the author states that other front end tools \"require a server-hosted back end that executes code\" but it is not clear how WTFgenes work in a way that it doesn't require a back end that executes code. I think it would be helpful to clearly outline (in a figure) the different implementation of WTFgenes, and how a user can access/set up the different parts available, required inputs and generated outputs.\n\nAlso, it would be helpful if you expand the example that are presented in the paper to something larger than 17 yeast genes. It is not discussed in the paper but I presume that there are performance limitations which is why there is both javascript and C++ versions. It would be helpful if this was stated. For example, using X number of genes would take Y time in javascript but Z using the C++ implmentation. (I am also not sure how you could switch between these two implementations. 
It states in the paper that the JavaScript version can be run by command line or via the web, but it doesn't say how to run the C++ version). What is the benefit of running the JavaScript version by command line?\n\nFor the yeast example in the paper do you use all of Gene Ontology (CC, BP, MF) or just a subset of terms?\nIs there a way to output the results of the enrichment analysis so you can use the results in downstream analyses?\nMinor comments: In the paper it states that for frequentist enrichment analysis you use Bonferroni correction. Under the tab \"Quick report\" which contains these results I see a p-value. Is this the corrected p-value or the nominal p-value? If it is the nominal p-value, how do we find the corrected p-value?\nSome general comments/questions: Can WTFgenes only work with Gene Ontology?\n\nGiven that there is no back-end web server, is it easier to update the annotation that you use? It looks like it requires OBO and GAF files, but can it also support generic gene set files? It might be beneficial to create a Docker image of WTFgenes for easy installation.\n\nIs the rationale for developing the new software tool clearly explained? Yes\n\nIs the description of the software tool technically sound? Yes\n\nAre sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? Partly\n\nIs sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? Partly\n\nAre the conclusions about the tool and its performance adequately supported by the findings presented in the article? Partly",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-423
|
https://f1000research.com/articles/6-30/v1
|
10 Jan 17
|
{
"type": "Software Tool Article",
"title": "AR2, a novel automatic artifact reduction software method for ictal EEG interpretation: Validation and comparison of performance with commercially available software",
"authors": [
"Shennan Aibel Weiss",
"Ali A Asadi-Pooya",
"Sitaram Vangala",
"Stephanie Moy",
"Dale H Wyeth",
"Iren Orosz",
"Michael Gibbs",
"Lara Schrader",
"Jason Lerner",
"Christopher K Cheng",
"Edward Chang",
"Rajsekar Rajaraman",
"Inna Keselman",
"Perdro Churchman",
"Christine Bower-Baca",
"Adam L Numis",
"Michael G Ho",
"Lekha Rao",
"Annapoorna Bhat",
"Joanna Suski",
"Marjan Asadollahi",
"Timothy Ambrose",
"Andres Fernandez",
"Maromi Nei",
"Christopher Skidmore",
"Scott Mintzer",
"Dawn S Eliashiv",
"Gary W Mathern",
"Marc R Nuwer",
"Michael Sperling",
"Jerome Engel Jr",
"John M Stern",
"Ali A Asadi-Pooya",
"Sitaram Vangala",
"Stephanie Moy",
"Dale H Wyeth",
"Iren Orosz",
"Michael Gibbs",
"Lara Schrader",
"Jason Lerner",
"Christopher K Cheng",
"Edward Chang",
"Rajsekar Rajaraman",
"Inna Keselman",
"Perdro Churchman",
"Christine Bower-Baca",
"Adam L Numis",
"Michael G Ho",
"Lekha Rao",
"Annapoorna Bhat",
"Joanna Suski",
"Marjan Asadollahi",
"Timothy Ambrose",
"Andres Fernandez",
"Maromi Nei",
"Christopher Skidmore",
"Scott Mintzer",
"Dawn S Eliashiv",
"Gary W Mathern",
"Marc R Nuwer",
"Michael Sperling",
"Jerome Engel Jr",
"John M Stern"
],
"abstract": "Objective: To develop a novel software method (AR2) for reducing muscle contamination of ictal scalp electroencephalogram (EEG), and validate this method on the basis of its performance in comparison to a commercially available software method (AR1) to accurately depict seizure-onset location. Methods: A blinded investigation used 23 EEG recordings of seizures from 8 patients. Each recording was uninterpretable with digital filtering because of muscle artifact and processed using AR1 and AR2 and reviewed by 26 EEG specialists. EEG readers assessed seizure-onset time, lateralization, and region, and specified confidence for each determination. The two methods were validated on the basis of the number of readers able to render assignments, confidence, the intra-class correlation (ICC), and agreement with other clinical findings. Results: Among the 23 seizures, two-thirds of the readers were able to delineate seizure-onset time in 10 of 23 using AR1, and 15 of 23 using AR2 (p<0.01). Fewer readers could lateralize seizure-onset (p<0.05). The confidence measures of the assignments were low (probable-unlikely), but increased using AR2 (p<0.05). The ICC for identifying the time of seizure-onset was 0.15 (95% confidence interval (CI), 0.11-0.18) using AR1 and 0.26 (95% CI 0.21-0.30) using AR2. The EEG interpretations were often consistent with behavioral, neurophysiological, and neuro-radiological findings, with left sided assignments correct in 95.9% (CI 85.7-98.9%, n=4) of cases using AR2. Conclusions: EEG artifact reduction methods for localizing seizure-onset does not result in high rates of interpretability, reader confidence, and inter-reader agreement. However, the assignments by groups of readers are often congruent with other clinical data. Utilization of the AR2 software method may improve the validity of ictal EEG artifact reduction.",
"keywords": [
"scalp EEG",
"electroencephalogram",
"muscle artifact",
"independent component analysis",
"seizure"
],
"content": "Introduction\n\nThe scalp electroencephalogram (EEG) is a critical diagnostic tool in the evaluation of seizures, but artifact from muscle contraction often limits its use because of the obscuring of the cerebrally generated potentials. This problem is present in 11% of ictal EEGs overall and up to 70% of frontal lobe seizures1–3. The inability to discern the seizure-onset zone from scalp EEG often necessitates additional testing, including (positron emission tomography) PET, magnetoencephalography, ictal Single-photon emission computed tomography (SPECT), and intracranial EEG4. Each of these tests adds undesired time and cost to the evaluation.\n\nDigital filters are the common approach to maximizing the likelihood of identifying a seizure-onset zone from EEG with muscle artifact. This filtering reduces muscle artifact by attenuating all frequencies beyond a selected value5, but it may impair the integrity of the EEG recording since brain-generated potentials may be in the same frequency band6,7. Recently, new technologies to reduce muscle artifact based on independent component analysis (ICA)8–10 have become available. ICA removes artifacts based on source-related features instead of frequencies11–14. Prior studies have demonstrated that ICA-based methods improve the interpretation of artifact-laden ictal EEG recordings; in these studies researchers manually performed the ICA analysis prior to performing the EEG interpretation15,16. Automatic artifact reduction using ICA8 has become commercially available and is included in the latest versions of popular EEG viewer software17.\n\nDespite the utilization of these software products by neurologists around the globe, the clinical benefit has not been established. 
It is also unknown whether the new approaches introduce confounding artifacts that may lead to erroneous interpretations.\n\nThe goal of this study was to assess the validity of a commercially available EEG artifact reduction tool (AR1)17 and to compare it with a novel automatic artifact reduction tool (AR2), developed at the University of California Los Angeles and described here, on the basis of inter-reader agreement, confidence, and congruence with other clinical findings.\n\n\nMethods\n\nThe custom software algorithm involved importing EEG scalp recordings as European Data Format (EDF) files in Matlab 8.4 (Mathworks, Natick, MA). The imported EEG was band-pass filtered (16–70 Hz) in referential montage using a 500th order finite impulse response filter (FIR1). We then applied a power spectral density algorithm to find extended intervals of elevated high-frequency power across electrodes. We next calculated the normalized mutual information (MI)18 adjacency matrix across all scalp electrode contacts during the (16–70 Hz) band-pass filtered artifact epoch of greatest duration and assigned each scalp EEG electrode a single MI value derived from the maximum pairwise MI values in the adjacency matrix. We then determined whether this maximum mutual information value exceeded a threshold value, and hence whether that electrode should be included in subsequent artifact reduction processing. If the recording lacked an artifact epoch, or all channels were excluded, artifact reduction was applied to the referential recordings from all recording electrodes.\n\nThe high-pass filtered (>16 Hz) scalp EEG was then separated into consecutive 120-second trials and each trial was processed using CUDAICA19. The purpose of the ICA was to separate the (>16 Hz) seizure activity from the (>16 Hz) muscle artifact. The 16 Hz cut-off for the filter was chosen to isolate the vast majority of the muscle artifact. 
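The band-pass filtering and mutual-information channel screening described above can be sketched as follows. This is an illustrative Python reimplementation, not the study's code (which was compiled Matlab); the histogram MI estimator and the 0.2 exclusion threshold are placeholder assumptions, not the authors' values.

```python
import numpy as np
from scipy.signal import firwin, lfilter

def bandpass_fir(eeg, fs, lo=16.0, hi=70.0, order=500):
    """Band-pass each channel (rows) with a 500th-order linear-phase FIR
    filter, mirroring the 16-70 Hz pre-processing described in the text."""
    taps = firwin(order + 1, [lo, hi], fs=fs, pass_zero=False)
    return lfilter(taps, 1.0, eeg, axis=1)

def normalized_mi(x, y, bins=16):
    """Histogram estimate of mutual information between two channels,
    normalized by the mean of the marginal entropies (roughly 0-1)."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    mi = np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz]))
    hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
    hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
    return mi / max(0.5 * (hx + hy), 1e-12)

def channel_quality(eeg, threshold=0.2):
    """Assign each electrode its maximum pairwise normalized MI with any
    other electrode during the artifact epoch; electrodes below the
    (placeholder) threshold would be excluded from artifact reduction."""
    n = eeg.shape[0]
    adj = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            adj[i, j] = adj[j, i] = normalized_mi(eeg[i], eeg[j])
    max_mi = adj.max(axis=0)
    return max_mi, max_mi >= threshold
```

The intuition is that an electrode dominated by independent noise (e.g. a poor skin contact) shares little information with its neighbours and falls below the threshold, while electrodes recording a common muscle source score highly.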
Independent components that explained an amount of variance above a particular threshold were excluded from the signal. The threshold was selected on the basis of the values of the raw and normalized mixing matrix (i.e. inverse weight matrix) calculated in each of the ICA iterations. We assumed that the last myogenic component and the first neurogenic component can be differentiated on the basis of the inverse weight matrix, which provides the spatial distribution of each component, by identifying the independent component of greatest order with a focal spatial topography, defined as exceeding a normalized threshold in at least one electrode of the inverse weight matrix.\n\nThe pruned EEG calculated for each 120-second trial of EEG (i.e. each iteration of CUDAICA) was concatenated; subsequently, the entire raw ictal EEG was low-pass filtered (<16 Hz) using a 500th order symmetric digital FIR filter, and the resulting low-pass filtered EEG was reconstituted with the high-pass (>16 Hz) filtered EEG, following the exclusion of the independent components suspected to represent muscle artifact. The reconstituted and modified ictal EEG was exported from Matlab format to EDF for subsequent visual analysis.\n\nAll computations were carried out using compiled Matlab 8.4 custom scripts on a cluster of HP SL230s Gen 8 E5-2670 nodes with dual eight-core 2.6 GHz Intel E5-2670 central processing units, 4 GB of memory per core, and NVIDIA Tesla graphics processing units. Minimal system requirements for operating AR2 include Matlab v8.4 or above, an Intel Xeon CPU, 2 GB of memory, a CUDA-compatible NVIDIA GPU, and CUDAICA. For scalp EEG files exported from Neuroworkbench (Nihon-Kohden, Irvine, CA, USA), executing the AR2 software method requires only inputting the file name of the EDF file of interest at the command line. 
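The component-pruning and reconstitution steps can be illustrated with a simplified sketch. This is hypothetical Python, not the authors' Matlab/CUDAICA implementation; the focality index and its 0.5 threshold are stand-ins for the normalized inverse-weight-matrix criterion described above.

```python
import numpy as np
from scipy.signal import firwin, lfilter

def lowpass(eeg, fs, cutoff=16.0, order=500):
    """Low-pass (<16 Hz) linear-phase FIR filter applied per channel."""
    taps = firwin(order + 1, cutoff, fs=fs)
    return lfilter(taps, 1.0, eeg, axis=1)

def focality(mixing):
    """Per-component focality: the largest absolute electrode weight divided
    by the summed absolute weights (1/n_channels = diffuse, 1.0 = confined
    to a single electrode)."""
    w = np.abs(mixing)
    return w.max(axis=0) / np.maximum(w.sum(axis=0), 1e-12)

def prune_and_reconstitute(raw, mixing, sources, fs, focal_thresh=0.5):
    """Drop focal (putative myogenic) components and add the remaining
    high-pass components back onto the low-pass-filtered raw EEG.
    `mixing` is channels x components; `sources` is components x samples."""
    keep = focality(mixing) < focal_thresh
    neurogenic = mixing[:, keep] @ sources[keep]
    return lowpass(raw, fs) + neurogenic, keep
```

In this toy criterion a component spread evenly over all electrodes is retained as neurogenic, while one loading onto a single electrode is pruned as putative muscle; the study's actual rule also uses component order and explained variance.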
For EDF files exported from other equipment manufacturers, full automation of the AR2 software method can be easily accomplished with simple modifications of the input parameters.\n\nWe tested AR2 retrospectively using 23 seizures from eight adult patients with suspected focal-onset seizures undergoing evaluation at the UCLA Seizure Disorder Center. The patients and seizures were selected by S.A.W., who was not a reviewer, from a review of consecutive clinical neurophysiology case conference presentations between January 1, 2014 and December 1, 2015, and based on case conference consensus that the ictal EEG records were uninterpretable due to muscle artifact contamination when reviewed with conventional digital filtering. For each of these patients, between 1–4 uninterpretable seizures were selected for inclusion in the study by S.A.W. on the basis of a lack of ECG, electrode, and salt bridge artifact. Clinical data for each patient included seizure semiology, inter-ictal epileptiform abnormality, unobscured findings, and radiological reports from MRI, PET, and SPECT scans. The EEG and clinical records were deidentified and research informed consent was not required. This study was approved under UCLA IRB#15-001481. The video EEGs were acquired using an EEG-1200 amplifier (Nihon-Kohden, Irvine, CA) at a sampling rate of 200 Hz. Electrodes were placed according to the 10–20 international system with additional anterotemporal electrodes at T1/T2. The duration of the exported EEG recording included the entire seizure and a 2-3 minute peri-ictal epoch.\n\nAR1 was the commercially available Persyst v12 artifact reduction software17 (Persyst Development, San Diego, CA). The methods are proprietary. AR2 was developed by S.A.W. and involved a two-step procedure based on a custom algorithm. 
EEG processed by AR2 was also interpreted using the Persyst v12 artifact reduction software.\n\nThe ictal recordings for AR1 and AR2 were reviewed in Persyst v12 without video by 26 neurologists with a specialization in EEG. The readers were blinded to which records received AR1 or AR2, and each reader reviewed the 46 seizures in random order. Following review of each ictal record, the reader completed a multiple choice questionnaire (Supplementary File 1), which assessed the ability to visualize seizure-onset (Y,N), lateralize seizure-onset (L,R,N), locate the region of ictal onset (anterior temporal, anterior frontal, mid-temporal, temporal-parietal-occipital, occipital, none), and self-rate confidence of interpretation on a 5 point scale [(5) entirely confident (4) somewhat sure (3) probable (2) not confident (1) unlikely i.e. slight probability] for each measure. When time of onset, laterality, or the seizure onset region was not assigned, the confidence was taken as (0). Readers were not provided with a definition of seizure-onset.\n\nDuring the interpretation of the ictal EEG processed by AR1 or AR2, no restrictions were placed on the use of Persyst v12 built-in EEG filters (low-pass, high-pass, band-pass), or changes to montage. A comment in each recording was used to demarcate the time prior to the clinical seizure but not the EEG onset. The assessment was not time limited.\n\nDifferences in EEG interpretation utilizing AR1 and AR2 were assessed using the Wilcoxon signed rank test and the McNemar test on paired nominal data. Agreement across readers (Y,N,L,R), using either AR1 or AR2, was calculated using the intra-class correlation coefficient (ICC). For these outcomes, missing values were imputed to be in between non-missing values, and were analyzed using cumulative logit mixed effects models, which capture this ordering in the values and account for the clustering of readings into patients, and seizures within patients. 
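As an illustration of the kind of chance-corrected agreement statistic reported in this study, a minimal ICC(2,1) computation on a subjects-by-raters matrix is sketched below. This is an independent sketch; the study's actual ICC estimates came from mixed-effects models fitted to the full reader data in dedicated statistical software.

```python
import numpy as np

def icc2_1(ratings):
    """Two-way random-effects, single-rater intra-class correlation
    ICC(2,1) for an (n subjects x k raters) matrix of scores."""
    Y = np.asarray(ratings, dtype=float)
    n, k = Y.shape
    grand = Y.mean()
    row_m, col_m = Y.mean(axis=1), Y.mean(axis=0)
    ss_rows = k * np.sum((row_m - grand) ** 2)   # between-subject SS
    ss_cols = n * np.sum((col_m - grand) ** 2)   # between-rater SS
    ss_err = np.sum((Y - row_m[:, None] - col_m[None, :] + grand) ** 2)
    msr = ss_rows / (n - 1)                      # subject mean square
    msc = ss_cols / (k - 1)                      # rater mean square
    mse = ss_err / ((n - 1) * (k - 1))           # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

For example, two raters who always differ by a constant offset agree perfectly in rank ordering but not in absolute score, and ICC(2,1) penalizes that offset (yielding 2/3 for the ratings [[1,2],[2,3],[3,4]]), whereas identical ratings give an ICC of exactly 1.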
Agreement across readers for onset region was calculated using a Fleiss kappa, treating the missing values as a category of response. Errors are given as standard error of the mean (s.e.m), unless otherwise specified.\n\n\nResults\n\nWe applied the AR2 method developed at UCLA to the 23 seizures in the dataset. The method was automatic and unsupervised and separated the high-pass filtered (> 16 Hz) scalp EEG recordings into putative neurogenic and myogenic components (Figure 1). After pruning the putative myogenic components, the putative neurogenic components were reconstituted with the low-pass filtered (< 16 Hz) scalp EEG (Figure 2). The AR2 and AR1 processed scalp EEG recordings were subsequently inspected by the 26 specialists (Figure 3).\n\nThe AR2 method automatically separates independent components containing myogenic potentials from those containing neurogenic potentials in the beta and gamma band on the basis of spatial topography and explained variance. A. Unprocessed scalp ictal EEG recording that was deemed uninterpretable. B. The same epoch after applying a low pass (<16 Hz) filter demonstrating a lack of a convincing ictal rhythm. C. The ictal epoch after applying a high pass (> 16 Hz) filter demonstrating dense muscle artifact. D. An example of a mutual information adjacency matrix calculated during an epoch of artifact in the high pass (> 16 Hz) filtered scalp EEG recording. Three scalp electrode recordings exhibited relatively low mutual information with all other electrodes and were designated poor quality and excluded from further processing to optimize INFO-MAX ICA based artifact reduction. E. The inverse weight matrix, and normalized inverse weight matrix (panel F), of all independent components across scalp electrode recordings for the seizure in panel A. 
Independent components 1-13 exhibited strong focality and were designated as containing myogenic potentials, while independent components 14 and above were designated neurogenic.\n\nReconstitution of the low pass (<16 Hz) ictal scalp EEG with the high pass (>16 Hz) neurogenic independent components reveals a clear ictal onset. A. The tentative neurogenic independent components (A1) and myogenic independent components (A2) derived from INFO-MAX ICA processing of the high pass (> 16 Hz) filtered ictal scalp EEG recording. The largest amplitude activity in the neurogenic components is evident frontally and in the left hemisphere. B. The low pass filtered ictal scalp EEG suggests a possible left frontal onset but a convincing ictal rhythm is lacking. C. Reconstitution of the low pass EEG with the neurogenic high pass (> 16 Hz) independent components results in an ictal EEG that demonstrates a more convincing left frontal onset consisting of beta-gamma oscillations with some clear phase reversals in F3 and F7.\n\nIctal scalp EEG recording from seizure 18 prior to artifact reduction processing (top), after processing with artifact reduction methodology 1 (AR1, middle), and after processing with artifact reduction methodology 2 (AR2, bottom). Only processing with AR2 reveals a right hemispheric onset followed by clear spread to right frontal regions.\n\nAcross the 23 seizures considered previously uninterpretable with digital filtering (Table 1), two-thirds of the readers were able to delineate the time of seizure-onset for 10 of the 23 using AR1, and 15 of the 23 using AR2 (Figure 4A, n=23, p<0.01). Across the 23 seizures, the mean confidence measure for the determination of seizure-onset was 2.68 +/- 0.19 (probable-not confident) when AR2 was utilized and 2.19 +/- 0.18 (not confident) with AR1 (Figure 5A, p<0.01). 
The intra-class correlation coefficient (ICC) was 0.26 (95% Confidence Interval (CI) 0.21-0.30) with AR2, and 0.15 (95% CI 0.11-0.18) with AR1 (p=0.333).\n\nClinical description of patients and ictal EEG laterality and focus assignments with AR1 and AR2. Abbreviations (L:left, R:right); PET findings refer to hypometabolism, SPECT findings to hyperperfusion. The focus was determined on a majority basis across all the assignments made by the readers for a subject’s seizure(s).\n\nMore readers could visualize the time of seizure onset and assign laterality to seizure onset utilizing AR2 as compared to AR1, and the assigned laterality of seizure onset sometimes differed between the two methods. A. Bar plot of the number of readers who visualized the time of onset for each seizure utilizing AR1 (blue) or AR2 (red). Across seizures more readers visualized seizure onset utilizing AR2 compared with AR1 (p<0.01). Asterisks indicate statistically significant differences between the two methods in individual seizures (McNemar, p<0.05). B. Stacked bar plot of the number of readers selecting a left- or right-sided seizure onset utilizing AR1 (light blue, left; light yellow, right) or AR2 (dark blue, left; yellow, right). Across seizures more readers lateralized seizure onset utilizing AR2 compared with AR1 (p<0.01). Asterisks indicate statistically significant differences in individual seizures (McNemar, p<0.05); the number sign indicates a significant change in the determination of laterality utilizing AR2 compared to AR1 (McNemar, p<0.05).\n\nA. Bar plot of the mean confidence scale values for visualizing the time of seizure onset for the 23 seizures interpreted utilizing AR1 (blue), and AR2 (red). Across seizures, confidence scale values were greater when AR2 was utilized as compared with AR1 (p<0.01). Asterisks indicate differences in confidence values in individual seizures (p<0.05). Error bars are calculated as s.e.m. B. 
The respective mean confidence scale values for seizure onset lateralization. C. The respective mean confidence scale values for seizure focus localization. Across seizures, confidence scale values for lateralizing seizure onset and identifying the seizure focus were greater when AR2 was utilized as compared with AR1 (p<0.05).\n\nCompared with identifying the time of seizure-onset, fewer readers could lateralize seizure-onset after either AR1 or AR2 (Figure 4B, p<0.05). However, more readers were able to lateralize seizure-onset using AR2 compared to AR1 (Figure 4B, p<0.05) and readers were more confident with AR2, although neither method produced high levels of confidence. The mean confidence measure for seizure-onset lateralization was 1.87 +/- 0.198 (not confident-unlikely) for AR2 and 1.54 +/- 0.176 (not confident-unlikely) for AR1 (Figure 5B, p<0.01). The ICC was similar (p=0.501) for AR1 (ICC=0.33, 95% CI 0.30-0.37) and AR2 (ICC=0.28, 95% CI 0.25-0.31). For localizing the region of seizure-onset, reader confidence (Figure 5C) and agreement were very low (Figure 6, AR1 Fleiss’ kappa = 0.1199, 95% CI = 0.116-0.124; AR2 Fleiss’ kappa = 0.121, 95% CI = 0.118-0.125). For two of the seizures, the laterality assignments were different when AR2 was used as compared to AR1 (Figure 4B, McNemar p<0.05).\n\nStacked bar plot of the ictal onset region assignments using either AR1 (lighter colors) or AR2 (darker colors) for all 23 seizures. Overall, across seizures, more readers were able to render an assignment using AR2 as compared to AR1 (p<0.05). Inter-reader agreement for assigning the ictal onset region was marginal using either AR1 or AR2.\n\nWe identified the patients with at least two consistent clinical findings that lateralized the suspected seizure-onset zone (SOZ). Compared to AR1, more readers were able to render seizure-onset laterality assignments using AR2, and these assignments were more often congruent with other clinical data (Table 2). 
These clinical findings included seizure semiology, onset of seizures without EEG obscuration, and structural MRI, PET, or SPECT findings. If any of the clinical findings were contradictory with respect to the laterality of the suspected SOZ, the SOZ was designated unknown. Overall, 4 patients (#1,4,5,6) had clinical findings that supported a left-hemispheric SOZ, and 1 patient (#7) had clinical findings that supported a right-hemispheric SOZ (Table S1). Among the 8 patients, if the reader lateralized the seizure-onset to the left using AR2 they were correct in 95.9% (95% CI 85.7-98.9%) of cases, but using AR1 they were correct in 91.9% (95% CI 77.0-97.5%) of cases (Table 3, p=0.0607).\n\nContingency table of the agreement between seizure-onset laterality using AR1 (left), and AR2 (right) and the laterality of seizure-onset assigned on the basis of other clinical data for all the study patients and seizures. Note that clinical seizure-onset lateralization was not available for all patients, and when readers rendered a laterality decision that matched the laterality based on other clinical data, the assignments “agreed”.\n\nAgreement between seizure-onset laterality assignments using either AR1 or AR2 and the suspected laterality of the SOZ assigned on the basis of other clinical data. Parentheses indicate the 95% confidence interval. “n” refers to the number of subjects.\n\n\nDiscussion\n\nIn this study, we present a new artifact reduction software, AR2, and its application compared with a commercially available tool, AR1. 26 neurologists used the two methods to interpret 23 ictal EEG recordings that were uninterpretable due to muscle artifact when reviewed with conventional filtering. 
The major findings from this study include: 1) the utilization of artifact reduction software results in non-uniform interpretation of ictal EEG, with many readers unable to render assignments; 2) when readers did render seizure-onset laterality assignments, these often agreed with other clinical findings; 3) although the study size was small, the AR2 software method increased the number of readers who rendered assignments and increased reader confidence, suggesting that it aids diagnosis.\n\nBoth AR1 and AR2 are digital signal processing software tools8,15,17 that may confound accurate ictal EEG interpretation by altering the appearance of the EEG. Digital filtering also can mislead5. One concern about AR1 and AR2 relates to the lack of understanding of the waveform alteration. Specifically, the readers were not confident in their interpretations, and the determination of seizure lateralization sometimes differed between the AR1 and AR2 methods. As such, the artifact reduction methods may introduce false positive findings. This demonstrates the limits of EEG artifact reduction approaches and puts the advantages into perspective.\n\nNeurologists often disagree on the interpretation of ictal EEG processed with artifact reduction software; however, the seizure-onset laterality assignments rendered by a quorum are often correct. 
Further refinement of this technology may successfully improve the efficiency of video-EEG monitoring and the utilization of epilepsy surgery; however, correlation with epilepsy resective surgery outcomes will be required for further validation.\n\nWith regard to AR2, the novel software method developed for this study, the slight improvement seen in ictal EEG interpretability after applying the method suggests that the algorithm can (1) reliably produce signals that are, exclusively or mainly, EEG or EMG, and (2) identify which signals are of brain origin and which are contaminant.\n\nOne explanation for AR2’s ability to isolate myogenic from neurogenic independent components may be that scalp EEG electrodes record weighted and summated far-field signals from all brain and muscle sources, as well as near-field electrode noise generated at the electrode/skin interface. The decomposition of scalp EEG data into components with maximally independent time courses using independent component analysis results in time series that may resemble single equivalent dipoles because of the bias towards increased local connectivity in neurons and myocytes as compared to long distance connectivity14.\n\n\nData and software availability\n\nAll software code for the new AR2 software developed by S.A.W. is openly and permanently available at https://github.com/shennanw/AR2.\n\nArchived source code as at time of publication: doi, 10.5281/zenodo.22989321\n\nLicense: GNU Public License 3.\n\nThe raw scalp ictal EEG files that were analyzed in this study using AR2, as well as the scalp ictal EEG files following processing using AR2 are available from Zenodo: Dataset 1. Validity of two automatic artifact reduction software methods in ictal EEG interpretation. Doi, 10.5281/zenodo.22109522 (https://www.zenodo.org/record/221095#.WF63m7YrLdR)\n\nThe raw data used for the comparative assessments are available from Zenodo: Dataset 2. 
Validity of two automatic artifact reduction software methods in ictal EEG interpretation. DOI: 10.5281/zenodo.223329 (https://zenodo.org/record/223329#.WHN-HLYrLdQ)",
"appendix": "Author contributions\n\n\n\nS.A.W designed the study, analyzed the data, drafted and revised the manuscript, A.A.P analyzed the data, and revised the manuscript, S.V analyzed the data, and revised the manuscript, S.M revised the manuscript, D.H.W revised the manuscript, I.O analyzed the data, and revised the manuscript, M.G analyzed the data, and revised the manuscript, L.S analyzed the data, and revised the manuscript, J.L analyzed the data, and revised the manuscript, C.K.C analyzed the data, and revised the manuscript, E.C analyzed the data, and revised the manuscript, R.R analyzed the data, and revised the manuscript, I.K analyzed the data, and revised the manuscript, P.C analyzed the data, and revised the manuscript, C.B.B analyzed the data, and revised the manuscript, A.L.N analyzed the data, and revised the manuscript, M.G.H analyzed the data, and revised the manuscript, L.R analyzed the data, and revised the manuscript, A.B analyzed the data, and revised the manuscript, J.S analyzed the data, and revised the manuscript, M.A analyzed the data, and revised the manuscript, T.A analyzed the data, and revised the manuscript, A.F analyzed the data, and revised the manuscript, M.N analyzed the data, and revised the manuscript, C.S analyzed the data, and revised the manuscript, S.M analyzed the data, and revised the manuscript, D.S.E analyzed the data, and revised the manuscript’, G.W.M analyzed the data, and revised the manuscript, M.R.N analyzed the data, and revised the manuscript, M.S analyzed the data, and revised the manuscript, J.E analyzed the data, and revised the manuscript, J.S designed the study, analyzed the data, revised the manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were declared.\n\n\nGrant information\n\nDr. 
Weiss was supported by an Epilepsy Foundation Award Research and Training Fellowship for Clinicians.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nThe authors would like to thank Mrs. Sandra Dewar for her administrative assistance, and Mr. Kirk Shattuck for his technical support.\n\n\nSupplementary material\n\nSupplementary File 1: Multiple choice questionnaire, completed by the reader after review of the ictal record.\n\nClick here to access the data.\n\n\nReferences\n\nWilliamson PD, Spencer DD, Spencer SS, et al.: Complex partial seizures of frontal lobe origin. Ann Neurol. 1985; 18(4): 497–504. PubMed Abstract | Publisher Full Text\n\nLaskowitz DT, Sperling MR, French JA, et al.: The syndrome of frontal lobe epilepsy: characteristics and surgical management. Neurology. 1995; 45(4): 780–7. PubMed Abstract | Publisher Full Text\n\nFoldvary N, Klem G, Hammel J, et al.: The localizing value of ictal EEG in focal epilepsy. Neurology. 2001; 57(11): 2022–8. PubMed Abstract | Publisher Full Text\n\nKnowlton RC: The role of FDG-PET, ictal SPECT, and MEG in the epilepsy surgery evaluation. Epilepsy Behav. 2006; 8(1): 91–101. Review. PubMed Abstract | Publisher Full Text\n\nGotman J, Ives JR, Gloor P: Frequency content of EEG and EMG at seizure onset: possibility of removal of EMG artefact by digital filtering. Electroencephalogr Clin Neurophysiol. 1981; 52(6): 626–39. PubMed Abstract | Publisher Full Text\n\nBautista RE, Spencer DD, Spencer SS: EEG findings in frontal lobe epilepsies. Neurology. 1998; 50(6): 1765–71. PubMed Abstract | Publisher Full Text\n\nWorrell GA, So EL, Kazemi J, et al.: Focal ictal beta discharge on scalp EEG predicts excellent outcome of frontal lobe epilepsy surgery. Epilepsia. 2002; 43(3): 277–82. PubMed Abstract | Publisher Full Text\n\nMakeig S, Bell AJ, Jung TP: Independent component analysis of electroencephalographic data. 
Advances in neural information processing systems.1996; 145–151. Reference Source\n\nMakeig S, Jung TP, Bell AJ, et al.: Blind separation of auditory event-related brain responses into independent components. Proc Natl Acad Sci U S A. 1997; 94(20): 10979–84. PubMed Abstract | Publisher Full Text | Free Full Text\n\nJung TP, Makeig S, Humphries C, et al.: Removing electroencephalographic artifacts by blind source separation. Psychophysiology. 2000; 37(2): 163–78. PubMed Abstract | Publisher Full Text\n\nIlle N, Berg P, Scherg M: Artifact correction of the ongoing EEG using spatial filters based on artifact and brain signal topographies. J Clin Neurophysiol. 2002; 19(2): 113–24. Review. PubMed Abstract | Publisher Full Text\n\nDelorme A, Sejnowski T, Makeig S: Enhanced detection of artifacts in EEG data using higher-order statistics and independent component analysis. Neuroimage. 2007; 34(4): 1443–9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nViola FC, Thorne J, Edmonds B, et al.: Semi-automatic identification of independent components representing EEG artifact. Clin Neurophysiol. 2009; 120(5): 868–77. PubMed Abstract | Publisher Full Text\n\nDelorme A, Palmer J, Onton J, et al.: Independent EEG sources are dipolar. PLoS One. 2012; 7(2): e30135. PubMed Abstract | Publisher Full Text | Free Full Text\n\nUrrestarazu E, Iriarte J, Alegre M, et al.: Independent component analysis removing artifacts in ictal recordings. Epilepsia. 2004; 45(9): 1071–8. PubMed Abstract | Publisher Full Text\n\nVergult A, De Clercq W, Palmini A, et al.: Improving the interpretation of ictal scalp EEG: BSS-CCA algorithm for muscle artifact removal. Epilepsia. 2007; 48(5): 950–8. PubMed Abstract | Publisher Full Text\n\nNierenberg N, Wilson SB, Scheuer ML: Method And System For Detecting And Removing EEG Artifacts. U.S. Patent Application No. 13/684,556. Published; 2013. 
Reference Source\n\nStrehl A, Ghosh J: Cluster Ensembles – A Knowledge Reuse Framework for Combining Multiple Partitions. J Mach Learn Res. 2002; 3: 583–617. Reference Source\n\nRaimondo F, Kamienkowski JE, Sigman M, et al.: CUDAICA: GPU optimization of Infomax-ICA EEG analysis. Comput Intell Neurosci. 2012; 2012: 206972. 1–8. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLee TW, Girolami M, Sejnowski TJ: Independent component analysis using an extended infomax algorithm for mixed subgaussian and supergaussian sources. Neural Comput. 1999; 11(2): 417–441. PubMed Abstract | Publisher Full Text\n\nshennanw : shennanw/AR2: AR2 [Data set]. Zenodo. 2017. Data Source\n\nWeiss S: Validity of two automatic artifact reduction software methods in ictal EEG interpretation. Dataset 1 [Data set]. Zenodo. 2016. Data Source"
}
|
[
{
"id": "20461",
"date": "22 Feb 2017",
"name": "Patrícia Figueiredo",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nReferee report The manuscript needs careful revision by a native English speaker within the scientific community. Although I feel that the performance measures used by the authors are adequate, and that a substantial number of EEG specialists quantified them, the overall results are poor, particularly in terms of the specialists’ confidence in their assessment. It would be therefore useful to compare the performance with additional methods, for instance, as to understand the behavior of the proposed performance measures across a larger set of methods.\nSpecific Comments\nTitle:\n1. The title should mention that this novel method specifically addresses EEG artifacts induced by myogenic activity.\nAbstract:\n2. (Results) The authors should include the consistency value also for AR1.\nIntroduction:\n3. (page 3, first paragraph) “Each of these tests adds undesired time and cost to the evaluation”. I would say that the necessity of using additional imaging techniques depends on how precise one wants seizure-onset zone delineation to be, as scalp EEG has a poor spatial resolution and localization power. Please elaborate and/or re-phrase the sentence accordingly.\n\n4. (page 3, second paragraph) “ICA removes artifacts based on source-related features instead of frequencies”. What do Authors mean with “source-related features”? 
Actually, there are several studies that use frequency-based criteria for the selection and subsequent removal of artifact-related sources…Please explain.\n\n5. (page 3, second paragraph) Please add more recent reviews/papers on the automatic IC selection for EEG cleaning, such as: Chaumon, M., Bishop, D.V.M., Busch, N. a., 2015. A Practical Guide to the Selection of Independent Components of the Electroencephalogram for Artifact Correction. J. Neurosci. Methods. or Urigüen, J.A., Garcia-Zapirain, B., 2015. EEG artifact removal—state-of-the-art and guidelines. J. Neural Eng. 12, 31001.1\n\n6. (page 3, fourth paragraph) Authors refer to AR1 as a commercially available software, and in fact, detailed information about it is provided in reference [17]. However, the Authors should provide a brief description of the method because: 1) it is the only method which they compare their novel one with; and 2) so that future readers do not need to go through [17] in order to understand the overall rationale of AR1.\nMethods:\n7. (page 3, Implementation, first paragraph) “(…) a power spectral density algorithm to find extended intervals of elevated high frequency power across electrodes”. The Authors provide no information about how this algorithm works, nor references; thus, it is presently not possible to reproduce this part of the study. 8. (page 3, Implementation, first paragraph) The Authors need to justify their choices in general; particularly, why only compute the adjacency matrix between the epoch of greatest duration across all electrodes? Why compute the adjacency matrix in the first place, and not any other discriminative feature for the presence of muscle artifacts? Why only assign the maximum pairwise MI value in the adjacency matrix to a given electrode and ignore all the rest? How was the MI threshold determined?\n\n9. (page 3, Implementation, second paragraph) Again, the Authors need to provide more details overall. 
Why segment EEG into consecutive epochs of 120 s? How exactly was the variance threshold derived? Also, I did not understand why there should be any order associated with myogenic and neurogenic components (“We assumed that the last myogenic component and first neurogenic component (…)”).\n\n10. (page 3, Implementation, second paragraph) I understand that one of the expected features of ICs reflecting muscle artifacts is having a focal spatial topography; however, bad channels are also reflected in ICs exhibiting this feature. Thus, I have severe concerns about false positives when using this criterion, as other myogenic-unrelated ICs are probably being selected as well, which may hinder a true assessment of the impact of muscle artifact correction.\n\n11. (page 3, Implementation, third paragraph) What does reconstitute mean in this context?\n\n12. (page 4, Statistical analysis) Since the performance of AR1 and AR2 is being assessed by 4 different performance measures obtained from 26 EEG specialists, it would be more accurate to use a 2-way repeated measures ANOVA (or its non-parametric equivalent, in the case of the samples not being normally distributed), followed by multiple comparison testing if necessary.\nResults:\n13. The first three figures have very poor quality. In particular, it is nearly impossible to follow the overall (quite detailed) description of Figure 2 (and it is panel A1 on the top, left hand-side, and not A2). Also, the three panels in Figure 3 should be overlaid to facilitate the direct comparison between the two algorithms.\n\n14. Although AR2 outperforms AR1 for most of the performance measures, the results are still poor, making me wonder if either of these methods is suitable for EEG muscle artifact correction.\nDiscussion:\n15. (page 12, first paragraph) What do the Authors mean with “One concern about AR1 and AR2 relates to the lack of understanding of the waveform alteration”?\n\n16. 
(page 12, fourth paragraph) “(…) (1) reliably produce signals that are, exclusively or mainly, EEG or MEG (…)”. Please clarify and elaborate on this claim.",
"responses": [
{
"c_id": "2594",
"date": "04 Apr 2017",
"name": "Shennan Weiss",
"role": "Author Response",
"response": "Dear Dr. Figueiredo and Dr. Abreu, Thank you very much for your thoughtful and helpful comments and suggestions. We have substantially revised the manuscript according to your feedback as follows: 1. The title should mention that this novel method specifically addresses EEG artifacts induced by myogenic activity. -- The title of the paper has been modified to “AR2, a novel automatic muscle artifact reduction software method for ictal EEG interpretation: Validation and comparison of performance with commercially available software” Abstract: 2. (Results) The authors should include the consistency value also for AR1. -- As you suggested we have added the consistency values for AR1 to the abstract. Introduction: 3. (page 3, first paragraph) “Each of these tests adds undesired time and cost to the evaluation”. I would say that the necessity of using additional imaging techniques depends on how precise one wants seizure-onset zone delineation to be, as scalp EEG has a poor spatial resolution and localization power. Please elaborate and/or re-phrase the sentence accordingly. -- We have modified the introduction as follows (pg. 3): The inability, or lack of precision, to discern the seizure-onset zone from scalp EEG often necessitates additional testing, … 4. (page 3, second paragraph) “ICA removes artifacts based on source-related features instead of frequencies”. What do Authors mean with “source-related features”? Actually, there are several studies that use frequency-based criteria for the selection and subsequent removal of artifact-related sources…Please explain. -- Thank you for this instructive feedback. We have modified the introduction as follows (pg.3): ICA derives spatial features that can remove artifacts that have static scalp topographies and time courses of activity that are distinct from that of EEG sources. 5. 
(page 3, second paragraph) Please add more recent reviews/papers on the automatic IC selection for EEG cleaning, such as: Chaumon, M., Bishop, D.V.M., Busch, N. a., 2015. A Practical Guide to the Selection of Independent Components of the Electroencephalogram for Artifact Correction. J. Neurosci. Methods. or Urigüen, J.A., Garcia-Zapirain, B., 2015. EEG artifact removal—state-of-the-art and guidelines. J. Neural Eng. 12, 31001.1 -- Thank you for suggesting the inclusion of this important methods article. We now cite this article in the introduction and discussion. 6. (page 3, fourth paragraph) Authors refer to AR1 as a commercially available software, and in fact, detailed information about it is provided in reference [17]. However, the Authors should provide a brief description of the method because: 1) it is the only method which they compare their novel one with; and 2) so that future readers do not need to go through [17] in order to understand the overall rationale of AR1. -- Although the complete methods for AR1 are not included in reference 17 we have modified the introduction as follows (pg.3): The goal of this study was to assess the validity of a commercially available EEG artifact reduction tool (AR1) that uses different montages and within electrode analysis to identify artefactual independent components20, and compare its validity to a novel automatic artifact reduction tool (AR2)… Methods: 7. (page 3, Implementation, first paragraph) “(…) a power spectral density algorithm to find extended intervals of elevated high frequency power across electrodes”. The Authors provide no information about how this algorithm works, nor references; thus, it is presently not possible to reproduce this part of the study. 8. (page 3, Implementation, first paragraph) The Authors need to justify their choices in general; particularly, why only compute the adjacency matrix between the epoch of greatest duration across all electrodes? 
Why compute the adjacency matrix in the first place, and not any other discriminative feature for the presence of muscle artifacts? Why only assign the maximum pairwise MI value in the adjacency matrix to a given electrode and ignore all the rest? How was the MI threshold determined? -- We agree with your comments #7 and #8. We now specify in the methods that the reason we performed this analysis was (pg.3): “Prior to performing ICA to remove muscle artifact, the algorithm first identified epochs of the scalp EEG record contaminated by muscle artifact and determined the electrodes that were suspected of having high recording impedance during that epoch. The purpose of these calculations was to exclude these electrodes from the ICA calculations.” The method used to determine the artifact epoch had actually been modified prior to submission of version 1 of the manuscript. We now better describe this algorithm as “We then calculated the normalized instantaneous amplitude of the band-pass filtered signal using a Hilbert transform. This signal was smoothed using moving averaging, and the algorithm identified the longest epoch in which the time series remained greater than one standard deviation.” 9. (page 3, Implementation, second paragraph) Again, the Authors need to provide more details overall. Why segment EEG into consecutive epochs of 120 s? How exactly was the variance threshold derived? Also, I did not understand why should be there any order associated with myogenic and neurogenic components (“We assumed that the last myogenic component and first neurogenic component (…)”). -- We agree with your comment and apologize for the lack of clarity. We now specify that (pg.4): A 120 second trial length was chosen to optimize processing time. In addition, the method have been modified as follows (pg. 
4): “We assumed that the last myogenic component and first neurogenic component can be differentiated on the basis of the inverse weight matrix, which provides the spatial distribution of each component, and identifying the independent component that account for the most variance with a focal spatial topography17 defined on the basis of exceeding a normalized threshold of two standard deviations in at least one electrode of the inverse weight matrix. This threshold was chosen on the basis of visual inspection of the EEG in the experimental dataset and resulting independent components.” 10. (page 3, Implementation, second paragraph) I understand that one of the expected features of ICs reflecting muscle artifacts is having a focal spatial topography; however, bad channels are also reflected in ICs exhibiting this feature. Thus, I have severe concerns about false positives when using this criterion, as other myogenic-unrelated ICs are probably being selected as well, which may hinder a true assessment of the impact of muscle artifact correction. -- We agree with your concerns however in the algorithm we already excluded bad channels using the algorithm described with reference to comments #7 and #8. 11. (page 3, Implementation, third paragraph) What does reconstitute mean in this context? -- We now specify in implementation (pg.4) that: the resulting low pass filtered EEG was reconstituted by addition of the waveforms with the high pass (>16 Hz) filtered EEG 12. (page 4, Statistical analysis) Since the performance of AR1 and AR2 is being assessed by 4 different performance measures obtained from 26 EEG specialists, it would be more accurate to use a 2-way repeated measures ANOVA (or its non-parametric equivalent, in the case of the samples not being normally distributed), followed by multiple comparison testing if necessary. -- We appreciate this helpful feedback. Dr. 
David Groppe the other reviewer of the manuscript suggested that we use paired t-tests and provide the t-value in order to convey effect size to the reader. We have followed his recommendations. Including both 2-way repeated measures ANOVA and paired t-tests would confuse the reader. Results: 13. The first three figures have very poor quality. In particular, it is nearly impossible to follow the overall (quite detailed) description of Figure 2 (and it is panel A1 on the top, left hand-side, and not A2). Also, the three panels in Figure 3 should be overlaid to facilitate the direct comparison between the two algorithms. -- a) We have made grammatical changes to figure 2, and corrected the figure A1 vs. A2 labeling. We apologize for this oversight. b) We attempted to overlay the panels of figure 3 but the result was confusing and not visually appealing. Therefore, we cannot provide this suggested change. 14. Although AR2 outperforms AR1 for most of the performance measures, the results are still poor, making me wonder if either of these methods is suitable for EEG muscle artifact correction. -- We agree and point out in the discussion that the readers were not confident in their interpretations using either AR1 or AR2 in the discussion (pg. 13). Discussion 15. (page 12, first paragraph) What do the Authors mean with “One concern about AR1 and AR2 relates to the lack of understanding of the waveform alteration”? -- This sentence has been modified to provide more clarity (pg.13): “One concern about AR1 and AR2 relates to the uncertainty that myogenic activity was fully removed, and neurogenic components were unaffected during waveform alteration.” 16. (page 12, fourth paragraph) “(…) (1) reliably produce signals that are, exclusively or mainly, EEG or MEG (…)”. Please clarify and elaborate on this claim. -- We agree with your comment that this sentence is unclear. 
This paragraph has been modified in the revision and now reads as follows: “One explanation for AR2’s ability to isolate myogenic from neurogenic activity may be related to the respective dipole generators of each. ICA produces independent components that may resemble single equivalent dipoles14. Presumably, networks of myocytes exhibit shorter-distance connectivity than networks of neurons that produce beta and gamma oscillations, and thus the two generators can be distinguished on the basis of the focality17 of the independent components’ topography.”\nReferences\n1. Chaumon M, Bishop DV, Busch NA: A practical guide to the selection of independent components of the electroencephalogram for artifact correction. J Neurosci Methods. 2015; 250: 47-63. PubMed Abstract | Publisher Full Text"
}
]
},
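The epoch-detection step that the author response above describes for AR2 (band-pass filtering, normalized instantaneous amplitude via a Hilbert transform, moving-average smoothing, and selection of the longest epoch that stays above one standard deviation) can be sketched as follows. This is a minimal illustration of that description, not the authors' AR2 code: the filter band, smoothing window, FFT-based envelope, and function names are assumptions.

```python
import numpy as np

def band_envelope(x, fs, lo, hi):
    """Envelope of the band-limited analytic signal: keep only positive
    in-band frequencies (doubled), zero the rest, inverse FFT, take |.|."""
    n = len(x)
    X = np.fft.fft(x)
    f = np.fft.fftfreq(n, 1.0 / fs)
    mask = (f >= lo) & (f <= hi)          # positive in-band frequencies only
    Xa = np.zeros_like(X)
    Xa[mask] = 2.0 * X[mask]
    return np.abs(np.fft.ifft(Xa))

def longest_artifact_epoch(x, fs, band=(40.0, 100.0), smooth_s=1.0, z_thresh=1.0):
    """Return (start, end) sample indices of the longest epoch whose
    smoothed, normalized high-frequency envelope exceeds z_thresh SDs."""
    amp = band_envelope(x, fs, band[0], band[1])
    win = max(1, int(smooth_s * fs))
    amp = np.convolve(amp, np.ones(win) / win, mode="same")  # moving average
    z = (amp - amp.mean()) / amp.std()    # normalize the envelope
    above = z > z_thresh
    # Scan for the longest supra-threshold run.
    best = cur = start = best_start = 0
    for i, flag in enumerate(above):
        if flag:
            if cur == 0:
                start = i
            cur += 1
            if cur > best:
                best, best_start = cur, start
        else:
            cur = 0
    return best_start, best_start + best
```

On a synthetic trace with a two-second burst of high-amplitude broadband noise, the function recovers approximately that interval.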
{
"id": "20245",
"date": "16 Mar 2017",
"name": "David M. Groppe",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nIn this manuscript, Weiss and colleagues present a novel algorithm for removing electromyographic (EMG) artifacts from ictal EEG recordings, called AR2. Moreover, they evaluate the performance of the algorithm on data from 8 patients and compare it to a similar commercial algorithm, AR1 (i.e., Persyst v12’s artifact correction software), using readings by 26 neurologists. The data chosen were so corrupted by EMG artifacts that they were not interpretable using conventional frequency-based filtering. Both AR1 and AR2 rely on independent components analysis (ICA) to remove EMG artifacts via spatial filters that are learned from the data. There is strong evidence that ICA is effective at removing EMG (and other EEG-artifacts) from data acquired in controlled, research settings[ref1]-2. However, there may be too many EMG sources in highly polluted ictal recordings for ICA to work.\nIn general, the authors found that both algorithms (1) made around 50% of the seizures interpretable with typically low levels of rater confidence and (2) produced very low-levels of inter-rater agreement. Nonetheless, when compelling seizure-onset lateralization was available from other sources of data (e.g., PET, SPECT), the algorithms led to EEG interpretations that were in concordance in about 80% of seizures (Table 2). Moreover, AR2 tended to slightly outperform AR1. 
Specifically, neurologists could interpret more seizures and tended to have more confidence in their interpretations following AR2 artifact correction. However, there was no statistically significant difference in inter-rater agreement between algorithms. The authors conclude from this that their AR2 algorithm “may improve the validity of ictal EEG artifact reduction.”\nIn general, I think the authors’ work is laudable and that it is a valuable contribution to the literature. AR2 is well motivated given the evidence that ICA is successful at removing EMG (and other EEG artifacts) from data acquired in controlled research settings and the approach they have taken to validate their algorithm is generally sound. Moreover it is impressive that all of the seizures were read by a large number of neurologists, (26; although it is not clear how many were board certified in epilepsy or clinical neurophysiology) and that they have made all of their code and data public.\nHowever, there are some significant issues with this work that qualify their findings and should be addressed in revisions or future work:\n-As the authors note, the data for this study was obtained from a small number of patients (8, only 5 of whom had lateralized seizure foci based on independent data). Thus, it is not clear how robust some of their findings are (e.g., the small differences between AR1 and AR2 performance). -Although AR2 is a fully automatic algorithm, there are some arbitrary parameters of the algorithm (e.g., the mutual information threshold used to include an electrode in the artifact correction procedure) that must have been set based on exploratory analyses. If the data used to set these parameters are the same data used to validate the algorithm, then the authors are surely over-estimating, to some extent, the automatic performance of the algorithm. The authors need to specify what data were used to fix the parameters of AR2. 
-It is important to note that the authors chose extremely contaminated data to evaluate AR1 and AR2 and that these algorithms might be more useful when applied to less contaminated data. -If I understand the text correctly, AR2 excludes non-artifact contaminated electrodes from its analysis. You should include these electrodes in the ICA decompositions because they will help capture the neurogenic signal you are trying to preserve. -Since ICA necessarily removes some neurogenic signal along with EEG artifacts, it can help to quantify this by applying your algorithm to non-artifact polluted data 2. Adding such an analysis to these findings would help us to understand how and how much AR2 might be distorting EEG seizure activity. Electrodes closest to muscles are likely most affected. -For many statistical hypothesis tests the authors provide only p-values. It would be much more informative if the authors provided test statistics (e.g., t-scores, degrees of freedom), named the type of test (e.g., cumulative logit mixed effect model) and confidence intervals. In particular, confidence intervals will be much better than p-values at communicating how important and robust these effects are 3. -Figures 4-5 report p<0.05 for the results of a large number of statistical tests (23 per subfigure) with no correction for multiple comparisons. You should perform some type of correction (e.g., Bonferroni-Holm or Benjamini & Hochberg’s false discovery rate control algorithm). -To interpret these results, it would greatly help to have inter-reader reliability and reader confidence values for non-artifact contaminated data. Can you get these from the existing literature? -I think the primary finding of this work is that neither AR1 nor AR2 provide robust artifact correction when applied to such heavily contaminated data and need to be improved. You should discuss what improvements (if any) you think could be made. For example, using higher-density EEG recordings could greatly help. 
With more electrodes, ICA’s performance should improve (given sufficient training data).\n\nIn addition to those major points, here are some additional suggestions and points of consideration/clarification:\nThe abstract should specify the consistency of AR1-derived lateralization with behavioural, neurophysiological, and neuro-radiological findings. Currently, only the consistency with AR2-derived lateralization is reported. -[pg 3]: Saying “ICA removes artifacts based on source-related features instead of frequencies.” is too vague to be informative. You might consider providing more details, such as “ICA derives spatial filters that can remove artifacts that have static scalp topographies and time courses of activity that are distinct from that of EEG sources. ICA artifact correction is necessarily imperfect and will remove some neurogenic components of the EEG as well4. However, the degree of EEG distortion may be negligible and ICA has proven effective at removing EMG and ocular artifacts from EEG data recorded from neuronormal individuals in laboratory settings2.” -The introduction should note why ICA might not be able to correct for EMG-ictal artifact, even though it has proven useful for less artifact-polluted research data. Specifically, it may fail because the number of EEG artifact sources may be much greater in ictal data. -You should include the article by De Vos et al. (2011)5 in your review of previous algorithms for correcting EEG artifacts in clinical epilepsy data with ICA. -You say that EEG readings were provided by “26 neurologists with a specialization in EEG.” Please specify how many were board certified in epilepsy or clinical neurophysiology. -It appears that AR2 is applied to epochs that are not contaminated with EMG (pg 3, bottom left). Why try to correct artifacts that aren’t there? 
-Instead of saying “independent components of greatest order,” I think it is more conventional to say “independent components that account for the most variance.” -Please provide the specifications of the analog filter used to acquire the data. It would help to explicitly report the number of data points per electrode fed to ICA. The reliability of ICA is a function of this6. -It might help to clearly state that the AR1 and AR2 processed data were both read using the same graphical user interface (i.e., Persyst’s). It took me a little while to figure this out and it’s great that you did this. -It would help to add titles to subfigures (if it is permitted by F1000’s formatting guidelines). -In Figure 1 there is no point to showing both the non-normalized ICA and normalized mixing matrix since the mixing matrix column scale is arbitrary. Just show the normalized mixing matrix. It would also help to view the mixing matrix weights as scalp topographies to see both the quality of the putative neurogenic and EMG ICs. -[pg 4] You say “Compared to AR1, more readers were able to render seizure-onset laterality assignments using AR2, and these assignments were more often congruent with other clinical data (Table 2).” However in Table 2, 82% of the seizures that were lateralizable with AR1 (i.e., 145/177) agree with clinical findings in contrast to 81% of seizures using AR2 (i.e., 171/210). I think percentage of agreement is more important than the number of seizures in agreement. -[pg 11] You say “Among the 8 patients, if the reader lateralized the seizure-onset to the left using AR2 they were correct in 95.9%….”. Do you mean “Among the 5 patients” with clinical seizure onset lateralization based on independent data? 
-I think your statement “With regard to AR2, the novel software method developed for this study, the slight improvement seen in ictal EEG interpretability after applying the method suggests that the algorithm can (1) reliably produce signals that are, exclusively or mainly, EEG or EMG, and (2) identify which signals are of brain origin and which are contaminant.” is overly strong. I think “sometimes” is more accurate than “reliably” given the low reader confidence and inter-reader agreement. -I don’t understand your statement “One explanation for AR2’s ability to isolate myogenic from neurogenic independent components may be that scalp EEG electrodes record weighted and summated far-field signals from all brain and muscle sources, as well as near-field electrode noise generated at the electrode/skin interface.” ICA can separate myogenic from neurogenic activity because they have distinct scalp topographies and largely independent time courses of activity. -It is fantastic that you have made both AR2’s code and your data publicly available. However, there is not enough documentation on your GitHub repo for me to be able to easily understand how to use it (what is scalp_input_matrix.mat?). A little bit more documentation would greatly help.",
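The focal-topography criterion discussed in the reports and responses above (normalize each column of the ICA inverse weight/mixing matrix, flag components whose weights exceed two standard deviations at one or more electrodes, and rank candidates by the variance they account for) might look roughly like the following numpy sketch. The function names, the z-scoring of topographies, and the back-projection used to estimate variance are illustrative assumptions, not the AR1 or AR2 implementation.

```python
import numpy as np

def focal_components(mixing, z_thresh=2.0):
    """Indices of components whose normalized scalp topography exceeds
    z_thresh standard deviations at one or more electrodes.

    mixing: (n_channels, n_components) array; column k is the scalp
    topography of independent component k."""
    focal = []
    for k in range(mixing.shape[1]):
        w = mixing[:, k]
        z = (w - w.mean()) / w.std()       # normalized topography
        if np.any(np.abs(z) > z_thresh):   # dominated by a few electrodes
            focal.append(k)
    return focal

def variance_accounted(mixing, sources):
    """Variance each component contributes when back-projected alone."""
    return np.array([np.var(np.outer(mixing[:, k], sources[k]))
                     for k in range(mixing.shape[1])])
```

A component with a single dominant electrode is flagged as focal, whereas a smoothly varying topography is not; this also illustrates the reviewer's concern that bad channels produce equally focal topographies.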
"responses": [
{
"c_id": "2593",
"date": "04 Apr 2017",
"name": "Shennan Weiss",
"role": "Author Response",
"response": "Dear Dr. David Groppe, We are grateful for your insightful and thoughtful comments and suggestions. Appended below are answers to your inquiries, and changes we have made to the manuscript. -As the authors note, the data for this study was obtained from a small number of patients (8, only 5 of whom had lateralized seizure foci based on independent data). Thus, it is not clear how robust some of their findings are (e.g., the small differences between AR1 and AR2 performance). -- The authors agree that this study is underpowered. Our findings are exploratory at best. -Although AR2 is a fully automatic algorithm, there are some arbitrary parameters of the algorithm (e.g., the mutual information threshold used to include an electrode in the artifact correction procedure) that must have been set based on exploratory analyses. If the data used to set these parameters are the same data used to validate the algorithm, then the authors are surely over-estimating, to some extent, the automatic performance of the algorithm. The authors need to specify what data were used to fix the parameters of AR2. -- You are correct that we used the experimental dataset to define the threshold values and thus we are likely over-estimating the performance of the algorithm. We clarify on (pg.3) and (pg.4) that the thresholds were defined using visual inspection of the experimental dataset in the revised manuscript. -It is important to note that the authors chose extremely contaminated data to evaluate AR1 and AR2 and that these algorithms might be more useful when applied to less contaminated data. -- On (pg.2) we now specify “Ictal scalp EEG recordings present extraordinary challenges to ICA artifact reduction algorithms because the number of EMG artifact sources increases.” -If I understand the text correctly, AR2 excludes non-artifact contaminated electrodes from its analysis. 
You should include these electrodes in the ICA decompositions because they will help capture the neurogenic signal you are trying to preserve. -- We apologize for the lack of clarity. We only excluded electrodes that had suspected increases in impedance. We specify on (pg.3) “Prior to performing ICA to remove muscle artifact, the algorithm first identified epochs of the scalp EEG record contaminated by muscle artifact and determined the electrodes that were suspected of having high recording impedance during that epoch. The purpose of these calculations was to exclude these electrodes from the ICA calculations.” -Since ICA necessarily removes some neurogenic signal along with EEG artifacts, it can help to quantify this by applying your algorithm to non-artifact polluted data 2. Adding such an analysis to these findings would help us to understand how and how much AR2 might be distorting EEG seizure activity. Electrodes closest to muscles are likely most affected. -- We agree that this analysis would be helpful and should be a focus of future study. Unfortunately, the EEG reviewers who participated in this study are not available to review non-ictal scalp EEG recordings. -For many statistical hypothesis tests the authors provide only p-values. It would be much more informative if the authors provided test statistics (e.g., t-scores, degrees of freedom), named the type of test (e.g., cumulative logit mixed effect model) and confidence intervals. In particular, confidence intervals will be much better than p-values at communicating how important and robust these effects are 3. -- As you suggested we now provide t-scores, degrees of freedom, and have named the type of the test in the results. We provide confidence intervals for the cumulative logit mixed effects models results, and the correlation with other clinical data. S.E.M values are provided for the other comparisons in the figures included in the manuscript. 
The authors are in agreement that confidence intervals are essential to convey effect size3. -Figures 4-5 report p<0.05 for the results of a large number of statistical tests (23 per subfigure) with no correction for multiple comparisons. You should perform some type of correction (e.g., Bonferroni-Holm or Benjamini & Hochberg’s false discovery rate control algorithm). -- We have used your Matlab code to perform the Bonferroni-Holm correction on the p-values obtained for the individual seizures. The results have been revised accordingly (see methods, statistical analysis). -To interpret these results, it would greatly help to have inter-reader reliability and reader confidence values for non-artifact contaminated data. Can you get these from the existing literature? -- We agree and the following sentence has been added to the discussion (pg. 13): The reliability of localization by ictal scalp EEG in the absence of artifact is between 65-75% for lateralization26. -I think the primary finding of this work is that neither AR1 nor AR2 provides robust artifact correction when applied to such heavily contaminated data and both need to be improved. You should discuss what improvements (if any) you think could be made. For example, using higher-density EEG recordings could greatly help. With more electrodes, ICA’s performance should improve (given sufficient training data). -- Thank you for this helpful suggestion; we have added the following sentence to the discussion (pg. 13): The effectiveness of AR2 could possibly be improved by utilizing autocorrelations to identify the myogenic independent components. We hope that this method can be optimized for 10/20 standard scalp EEG. In addition to those major points, here are some additional suggestions and points of consideration/clarification: The abstract should specify the consistency of AR1-derived lateralization with behavioural, neurophysiological, and neuro-radiological findings. 
Currently, only the consistency with AR2-derived lateralization is reported. -- We have provided the results for AR1 in the abstract as you suggested. -[pg 3]: Saying “ICA removes artifacts based on source-related features instead of frequencies.” is too vague to be informative. You might consider providing more details, such as “ICA derives spatial filters that can remove artifacts that have static scalp topographies and time courses of activity that are distinct from that of EEG sources. ICA artifact correction is necessarily imperfect and will remove some neurogenic components of the EEG as well4. However, the degree of EEG distortion may be negligible and ICA has proven effective at removing EMG and ocular artifacts from EEG data recorded from neuronormal individuals in laboratory settings2.” -- Thank you for your suggestion; we have made these verbatim changes to the introduction (pg. 2). -The introduction should note why ICA might not be able to correct for EMG-ictal artifact, even though it has proven useful for less artifact-polluted research data. Specifically, it may fail because the number of EEG artifact sources may be much greater in ictal data. -- We have addressed this issue as mentioned in a prior comment to you. -You should include the article by De Vos et al. (2011)5 in your review of previous algorithms for correcting EEG artifacts in clinical epilepsy data with ICA. -- Done as suggested. -You say that EEG readings were provided by “26 neurologists with a specialization in EEG.” Please specify how many were board certified in epilepsy or clinical neurophysiology. -- 20 of the readers were board certified, as now specified on (pg. 4). -It appears that AR2 is applied to epochs that are not contaminated with EMG (pg 3, bottom left). Why try to correct artifacts that aren’t there? -- As specified in the methods, we performed the ICA on 120-second trials irrespective of the beginning and end of the ictal EMG artifact. 
We used this approach in order to allow the algorithm to function in an automated and unsupervised manner. -Instead of saying “independent components of greatest order,” I think it is more conventional to say “independent components that account for the most variance.” -- We have made this modification as you suggested (pg. 3) -Please provide the specifications of the analog filter used to acquire the data. It would help to explicitly report the number of data points per electrode fed to ICA. The reliability of ICA is a function of this6. -- We now specify 24,000 data points in the methods (pg.4) -It might help to clearly state that the AR1 and AR2 processed data were both read using the same graphical user interface (i.e., Persyst’s). It took me a little while to figure this out and it’s great that you did this. -- We have modified the methods as follows (pg. 5): The AR1 and AR2 processed data were reviewed in Persyst v12 without video by 26 neurologists with a specialization in EEG, 20 of whom were board certified. -It would help to add titles to subfigures (if it is permitted by F1000’s formatting guidelines). -- As far as I know this is not possible. -In Figure 1 there is no point to showing both the non-normalized ICA and normalized mixing matrix since the mixing matrix column scale is arbitrary. Just show the normalized mixing matrix. It would also help to view the mixing matrix weights as scalp topographies to see both the quality of the putative neurogenic and EMG ICs. -- We have changed the figure as you suggested and modified the legend. -[pg 4] You say “Compared to AR1, more readers were able to render seizure-onset laterality assignments using AR2, and these assignments were more often congruent with other clinical data (Table 2).” However in Table 2, 82% of the seizures that were lateralizable with AR1 (i.e., 145/177) agree with clinical findings in contrast to 81% of seizures using AR2 (i.e., 171/210). 
I think percentage of agreement is more important than the number of seizures in agreement. -- Thank you for this insightful point. The numbers do not refer to the number of seizures in agreement but rather to the number of observations, i.e., assignments made that agreed with the laterality defined by other clinical data. Thus, more readers were able to render observations that agreed with other clinical data using AR2 as compared to AR1. However, as you point out, the percentage of readers that rendered a laterality decision that did not agree with the other clinical data using AR2 was comparable to AR1. -[pg 11] You say “Among the 8 patients, if the reader lateralized the seizure-onset to the left using AR2 they were correct in 95.9%….”. Do you mean “Among the 5 patients” with clinical seizure onset lateralization based on independent data? -- You are correct and we apologize for the lack of clarity. We have modified the results as follows (pg. 12): Among the 5 patients with clinical seizure onset lateralization based on independent data, … -I think your statement “With regard to AR2, the novel software method developed for this study, the slight improvement seen in ictal EEG interpretability after applying the method suggests that the algorithm can (1) reliably produce signals that are, exclusively or mainly, EEG or EMG, and (2) identify which signals are of brain origin and which are contaminant.” is overly strong. I think “sometimes” is more accurate than “reliably” given the low reader confidence and inter-reader agreement. -- We agree and have modified the sentence as you suggested (pg.13). 
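To make the agreement comparison above concrete, the Table 2 proportions (145/177 congruent observations for AR1, 171/210 for AR2) can be recomputed together with Wilson score 95% confidence intervals, in the spirit of the reviewer's earlier point that confidence intervals convey effect size better than p-values. This is an illustrative sketch only, not part of the study's analysis code:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score confidence interval for a binomial proportion (z=1.96 -> ~95%)."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# Reader laterality assignments congruent with other clinical data (Table 2)
for label, agree, total in [("AR1", 145, 177), ("AR2", 171, 210)]:
    lo, hi = wilson_ci(agree, total)
    print(f"{label}: {agree}/{total} = {agree/total:.1%}, 95% CI [{lo:.1%}, {hi:.1%}]")
```

The intervals (roughly 75.6-86.9% for AR1 and 75.6-86.1% for AR2) overlap almost entirely, which illustrates the reviewer's observation that the percentage of agreement is comparable between the two methods.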
-I don’t understand your statement “One explanation for AR2’s ability to isolate myogenic from neurogenic independent components may be that scalp EEG electrodes record weighted and summated far-field signals from all brain and muscle sources, as well as near-field electrode noise generated at the electrode/skin interface.” ICA can separate myogenic from neurogenic activity because they have distinct scalp topographies and largely independent time courses of activity. -- Thank you for pointing out that this sentence lacks clarity. We have modified this paragraph as follows (pg.13): One explanation for AR2’s ability to isolate myogenic from neurogenic activity may be related to the respective dipole generators of each. ICA produces independent components that may resemble single equivalent dipoles14. Presumably, networks of myocytes exhibit shorter-distance connectivity than networks of neurons that produce beta and gamma oscillations, and thus the two generators can be distinguished on the basis of the focality17 of the independent components’ topography. -It is fantastic that you have made both AR2’s code and your data publicly available. However, there is not enough documentation on your GitHub repo for me to be able to easily understand how to use it (what is scalp_input_matrix.mat?). A little bit more documentation would greatly help. -- We are in the process of improving the documentation of AR2 on GitHub; thank you for reviewing the source code. References 1. Jung TP, Makeig S, Humphries C, Lee TW, McKeown MJ, Iragui V, Sejnowski TJ: Removing electroencephalographic artifacts by blind source separation. Psychophysiology. 2000; 37 (2): 163-78 PubMed Abstract 2. Mognon A, Jovicich J, Bruzzone L, Buiatti M: ADJUST: An automatic EEG artifact detector based on the joint use of spatial and temporal features. Psychophysiology. 2011; 48 (2): 229-40 PubMed Abstract | Publisher Full Text 3. 
Groppe DM: Combating the scientific decline effect with confidence (intervals). Psychophysiology. 2017; 54 (1): 139-145 PubMed Abstract | Publisher Full Text 4. Groppe DM, Makeig S, Kutas M: Independent component analysis of event-related potentials. Cognitive Science Online. 2008; 6 (1): 11-44 Reference Source 5. De Vos M, Deburchgraeve W, Cherian PJ, Matic V, Swarte RM, Govaert P, Visser GH, Van Huffel S: Automated artifact removal as preprocessing refines neonatal seizure detection. Clin Neurophysiol. 2011; 122 (12): 2345-54 PubMed Abstract | Publisher Full Text 6. Groppe DM, Makeig S, Kutas M: Identifying reliable independent components via split-half comparisons. Neuroimage. 2009; 45 (4): 1199-211 PubMed Abstract | Publisher Full Text"
}
]
}
] | 1
|
https://f1000research.com/articles/6-30
|
https://f1000research.com/articles/6-413/v1
|
03 Apr 17
|
{
"type": "Opinion Article",
"title": "Midlife interventions are critical in prevention, delay, or improvement of Alzheimer’s disease and vascular cognitive impairment and dementia",
"authors": [
"Sam Gandy",
"Tamas Bartfai",
"Graham V. Lees",
"Mary Sano"
],
"abstract": "The basic strategy for focusing exclusively on genetically identified targets for intervening in late life dementias was formulated 30 years ago. Three decades and billions of dollars later, all efforts at disease-modifying interventions have failed. Over that same period, evidence has accrued pointing to dementias as late-life clinical phenotypes that begin as midlife pathologies. Effective prevention therefore may need to begin in midlife, in order to succeed. No current interventions are sufficiently safe to justify their use in midlife dementia prevention trials. Observational studies could be informative in testing the proposal that amyloid imaging and APOEε4 genotype can predict those who are highly likely to develop Alzheimer’s disease and in whom higher risk interventions might be justifiable. A naturally occurring, diet-responsive cognitive decline syndrome occurs in canines that closely resembles human Alzheimer’s. Canine cognitive dysfunction could be useful in estimating how early intervention must begin in order to succeed. This model may also help identify and assess novel targets and strategies. New approaches to dementia prevention are urgently required, since none of the world’s economies can sustain the costs of caring for this epidemic of brain failure that is devastating half of the over 85-year-olds globally.",
"keywords": [
"dementia",
"cognition",
"cognitive decline",
"cognitive impairment",
"amyloid",
"tau"
],
"content": "We are not winning the fight against Alzheimer’s disease\n\nThe first generation of the “amyloidocentric” approach to Alzheimer’s has recently drawn to a close, and we are left with the same approved symptomatic treatments that we have had for the past 30 years (i.e., cholinesterase inhibitors). The Alzheimer’s disease-modifying drug discovery field remains at a perfect 100% failure rate when it comes to new approvable disease-modifying interventions. The two most promising amyloid-reducing interventions of 2016, solanezumab1 and verubacestat2, recently failed to modify decline in mild Alzheimer’s disease (AD) and were abandoned.\n\nThis is a very serious situation for society, as the disease burden of Alzheimer’s continues to skyrocket globally, yet the private profit-based pharmaceutical companies cannot, and will not, continue working on disease modifying drugs for Alzheimer’s unless convincing new scientific avenues are opened. What this means is that new drug targets must be identified, but for this to happen, we must be open-minded about the trap laid by the early success of identifying genes and molecular mechanisms of the familial Alzheimer’s patients. These early targets presented challenges in druggability, but for many years, these targets seemed scientifically rock solid. Now, this certainty and billions of dollars in trial funding are gone, and many large, prestigious pharmaceutical companies have closed their Alzheimer’s drug discovery programs.\n\nIt is time to admit that we are experiencing a rare phenomenon in drug development where the molecular mechanisms uncovered in familial cases of the disease have not helped us manage the sporadic form of the disease. In order to explain our repeated failures, we first blamed the quality of the drug candidates. 
More recently, we put forward a “kinetic argument”, causing trials to begin earlier and earlier in the course of the disease whilst making little or no effort to identify alternative or additional pathophysiologies beyond amyloidosis and tauopathy. The “kinetic argument” permitted us to remain focused on genetically-predicted drug targets as the most important drug targets, bringing to a halt virtually all other research. As those genetically derived targets continue to fail, experts are commonly overheard to say, “Of course that drug failed: the trial started too late in the progression of the disease; no one expected that to work.” This is revisionist history; it was only five years ago when scientists at several major pharma houses were convinced that the odds of success were high enough to invest $50–100 million for phase 1 and 2 clinical trials, or up to $1 billion each for phase 3 trials, even though each will take 4 years and at least 3 separate successful iterations are required. What was truly surprising to the pharmaceutical industry was that immunotherapies and enzyme inhibitors converging on the same target through entirely independent mechanisms yielded failure after failure, with no new insights. The situation was so surprising that most experts hid their disappointment. As recently as last year, clinicians were telling patients and families how these \"anti-amyloid\" antibodies and BACE inhibitors were “the most promising drugs ever to enter the pipeline”. To now turn around and say, “no one ever expected these drugs to work” is both disingenuous and hurtful to patients, who respond saying, “If no one expected that drug to work, then why did you recommend the trial to me?”\n\nEach successively earlier trial failure condemns doctors and patients to another 5 to 7 years of purgatory while the next trial iteration moves five clicks earlier in enrollment age. 
Yet these trial designs move forward without sufficient ability to define the molecular pathology with precision at the level of the individual patient. This argues not only for diversifying the disease-modifying portfolio but also for redoubling efforts on symptomatic interventions that are also easier to get through the regulatory process.\n\n\nCan we pinpoint how early is “early enough”?\n\nPerhaps we should seek specific, empirical data about how early is “early enough” in humans. Patients with epilepsy and APOEε4 alleles do not develop Alzheimer’s, yet they deposit plaques in their early 40s3. This rare but surprisingly early phenomenon argues strongly that we should initiate intervention to prevent amyloidosis or tauopathy not just a little earlier, as we are doing now, but far earlier; in other words, we should intervene in midlife, not later in life. This dovetails well with evidence that midlife risk factors lead to late-life phenotypes. Midlife-onset hypertension is a risk factor for AD in late life; late life-onset hypertension appears to be protective of cognition4. What sorts of interventions might these be? Dietary interventions with small effect size but employed over decades might have important cumulative effects. Vaccines may provide lifelong or very long interventions when the proper antigens and adjuvants are identified and employed.\n\nWhat might we do to refine our guess at what might be truly “early enough” for us to intervene? One might design an observational study of APOEε4 carriers in their 40s3,5. Annual amyloid and tau imaging of APOEε4 carriers could be used to identify those with evidence, or at highest risk, of progression5. Once an APOEε4 carrier becomes amyloid-imaging positive, one could imagine entering them either into an observational study or into an authentic trial employing reducers of amyloid, tau, or both. 
An advantage of this design is the possibility that for serially imaged subjects observed to change from negative to positive proteinopathy imaging, it will be possible to know that the proteinopathy has been present for 12 months or less. If we jump back in age as far as the mid 40s, and show that we can engage the proteinopathy targets at that early point, yet still fail clinically, that will be a strong indication that “anti-proteinopathy-only” will never succeed.\n\n\nMixed pathology may be the most common underpinning for dementia\n\nIt is worth keeping in mind that even if we have an impact on Alzheimer’s pathology, the frequent concurrent presence of multiple pathologies will continue to confound. These days, there is more research on the relationship between vascular cognitive impairment and dementia (VCID) and Alzheimer’s dementia, but not yet enough, even though defining this is probably profoundly important. Among African Americans with clinical Alzheimer’s, 70% have mixed pathology at autopsy6. Despite much attention being given to insulin signaling and brain proteinopathy, virtually all the accumulated data indicate that the dementia of type-2 diabetes is not Alzheimer’s but is primarily vascular in origin7. Synucleinopathy, which is present in about 1/3 of Alzheimer’s patients, induces a plethora of epigenetic changes in the brain transcriptome8. This means that, currently, perfect antemortem diagnosis is sometimes impossible. Certainly, the more mixed dementia subjects there are in a trial intended to assess the efficacy of a drug that only treats Alzheimer’s, the lower the sensitivity to see a signal will be. There is a good chance that each of the various underlying pathogenetic mechanisms will have their own optimum time window for intervention, and that this will have to be factored in as well. 
On the other hand, improved midlife cardiovascular health could benefit cognition and may delay all causes of dementia.\n\n\nImmunology of cognitive decline in Alzheimer’s disease\n\nWhile genetics and, more recently, multi-scale network-level genomics9 have taught us, and continue to teach us, much about the molecular pathology of Alzheimer’s, imaging and pathology have taught us just as much about the disappointingly poor clinicopathological and clinicoradiological correlation in this illness. Cognitive decline is poorly predicted by amyloid imaging in isolation, as was foretold by the Religious Orders study some 20 years ago10.\n\nAre there truly new drug targets? What might we be missing in our formulation of the pathogenesis of sporadic Alzheimer’s? Of the two dozen genes linked by genome-wide association studies (GWAS), two thirds are lipid- or immune-related11, raising the possibility of a neuroinflammatory/neurodegenerative dementia-causing pathway that might be worthy of further research. DeStrooper has recently proposed an alternative formulation of AD as a clinical umbrella under which “feed-forward” pathogenesis scenarios lie (e.g., inflammation causes or exacerbates tauopathy; then, in turn, tauopathy aggravates inflammation)12. This model fits the existing data at least as well as the classical linear amyloid hypothesis and explains some of the holes in the current predictive models. DeStrooper’s formulation12 includes scenarios where anti-proteinopathy drugs alone would probably be inadequate. Along the same lines, the CR1 risk polymorphism is associated with increased risk for clinical Alzheimer’s but in the setting of progression-related reduction in amyloidosis13,14.\n\nOne other model might be worth investigating: Canine cognitive dysfunction (CCD)15 is the only naturally occurring mammalian dementia to mimic Alzheimer’s disease. In CCD, diet and lifestyle have measurable impact on disease progression15. 
The CCD model could contribute to our understanding of how early intervention must begin in order to be effective. And with CCD, one could test drugs as prophylaxis in vivo. In the meantime, the importance and potential benefit of safe and easy diet and lifestyle changes for humans should be a topic of much stronger advocacy. In fact, strong evidence that better cardiovascular health reduces prevalence of both Alzheimer’s and VCID is already beginning to emerge and must be a clear part of patient education by clinicians16–23. To its credit, the April 2017 issue of Scientific American heralds “Success in the Fight Against Alzheimer’s”, a review of dementia risk reduction benefit that can be realized through modification of diet and lifestyle beginning in midlife24.\n\n\nThere may be more “unknown unknowns” yet to be revealed\n\nFinally, it is important to emphasize that researchers tackling Alzheimer’s and other dementias must remain clear-eyed, challenged, and worried that there are still many gaps in our understanding. A simple backward jump to earlier intervention may require 5 more iterations at the current pace, if indeed we should be intervening when subjects are in their 40s. If the disease can be primarily driven by lipid pathology or inflammatory pathology even with little or no protein aggregate pathology as could be inferred from the nature of the GWAS hits11, and from the variability in amyloid-first phenotypic sequences vs. neurodegeneration-first phenotypic sequences25, then efforts focused solely on purging clumped proteins from brains in which they may have lingered silently for decades may always come up short.",
"appendix": "Author contributions\n\n\n\nAll authors contributed to drafting and editing.\n\nSG prepared the first draft and the final submitted version.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed. The opinions expressed here are exclusively those of the authors.\n\n\nGrant information\n\nMS and SG would like to acknowledge the support of National Institute on Aging P50 Alzheimer’s Disease Research Center (AG05138) to MS. SG would also like to acknowledge the support of the Louis B. Mayer Foundation, the Georgianne and Doctor Reza Khatib Foundation, VA MERIT (I01RX002333 and I01RX000684), and NIH Accelerating Medicines Partnership (U01AG046170) to Eric Schadt, Icahn School of Medicine.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nDedicated to the memory of Kerstin Iverfeldt, PhD (1957–2017) who made important contributions to our understanding of the biology of APP and the amyloid β peptide.\n\n\nReferences\n\nThe Lancet Neurology: Solanezumab: too late in mild Alzheimer's disease? Lancet Neurol. 2017; 16(2): 97. PubMed Abstract | Publisher Full Text\n\nHawkes N: Merck ends trial of potential Alzheimer's drug verubecestat. BMJ. 2017; 356: j845. PubMed Abstract | Publisher Full Text\n\nGouras GK, Relkin NR, Sweeney D, et al.: Increased apolipoprotein E epsilon 4 in epilepsy with senile plaques. Ann Neurol. 1997; 41(3): 402–4. PubMed Abstract | Publisher Full Text\n\nCorrada MM, Hayden KM, Paganini-Hill A, et al.: Age of onset of hypertension and risk of dementia in the oldest-old: The 90+ Study. Alzheimers Dement. 2017; 13(2): 103–110, pii: S1552-5260(16)32962-4. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLim YY, Villemagne VL, Pietrzak RH, et al.: APOE ε4 moderates amyloid-related memory decline in preclinical Alzheimer's disease. Neurobiol Aging. 2015; 36(3): 1239–44. 
PubMed Abstract | Publisher Full Text\n\nBarnes LL, Leurgans S, Aggarwal NT, et al.: Mixed pathology is more likely in black than white decedents with Alzheimer dementia. Neurology. 2015; 85(6): 528–34. PubMed Abstract | Publisher Full Text | Free Full Text\n\nStoeckel LE, Arvanitakis Z, Gandy S, et al.: Complex mechanisms linking neurocognitive dysfunction to insulin resistance and other metabolic dysfunction [version 2; referees: 2 approved]. F1000Res. 2016; 5: 353. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDesplats P, Spencer B, Coffee E, et al.: Alpha-synuclein sequesters Dnmt1 from the nucleus: a novel mechanism for epigenetic alterations in Lewy body diseases. J Biol Chem. 2011; 286(11): 9031–7. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWang M, Roussos P, McKenzie A, et al.: Integrative network analysis of nineteen brain regions identifies molecular signatures and networks underlying selective regional vulnerability to Alzheimer's disease. Genome Med. 2016; 8(1): 104. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSnowdon DA: Aging and Alzheimer’s disease: lessons from the Nun Study. Gerontologist. 1997; 37(2): 150–6. PubMed Abstract | Publisher Full Text\n\nLambert JC, Ibrahim-Verbaas CA, Harold D, et al.: Meta-analysis of 74,046 individuals identifies 11 new susceptibility loci for Alzheimer's disease. Nat Genet. 2013; 45(12): 1452–8. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDe Strooper B: After Solanezumab: Where Should Alzheimer’s Research Go? Alzforum. 2017. Reference Source\n\nThambisetty M, An Y, Nalls M, et al.: Effect of complement CR1 on brain amyloid burden during aging and its modification by APOE genotype. Biol Psychiatry. 2013; 73(5): 422–8. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGandy S, Haroutunian V, DeKosky ST, et al.: CR1 and the \"vanishing amyloid\" hypothesis of Alzheimer's disease. Biol Psychiatry. 2013; 73(5): 393–5. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nPop V, Head E, Hill MA, et al.: Synergistic effects of long-term antioxidant diet and behavioral enrichment on beta-amyloid load and non-amyloidogenic processing in aged canines. J Neurosci. 2010; 30(29): 9831–9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNgandu T, Lehtisalo J, Solomon A, et al.: A 2 year multidomain intervention of diet, exercise, cognitive training, and vascular risk monitoring versus control to prevent cognitive decline in at-risk elderly people (FINGER): a randomised controlled trial. Lancet. 2015; 385(9984): 2255–63. PubMed Abstract | Publisher Full Text\n\nLobo A, Saz P, Marcos G, et al.: Prevalence of dementia in a southern European population in two different time periods: the ZARADEMP Project. Acta Psychiatr Scand. 2007; 116(4): 299–307. PubMed Abstract | Publisher Full Text\n\nSekita A, Ninomiya T, Tanizaki Y, et al.: Trends in prevalence of Alzheimer's disease and vascular dementia in a Japanese community: the Hisayama Study. Acta Psychiatr Scand. 2010; 122(4): 319–25. PubMed Abstract | Publisher Full Text\n\nRocca WA, Petersen RC, Knopman DS, et al.: Trends in the incidence and prevalence of Alzheimer's disease, dementia, and cognitive impairment in the United States. Alzheimers Dement. 2011; 7(1): 80–93. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSchrijvers EM, Verhaaren BF, Koudstaal PJ, et al.: Is dementia incidence declining?: Trends in dementia incidence since 1990 in the Rotterdam Study. Neurology. 2012; 78(19): 1456–63. PubMed Abstract | Publisher Full Text\n\nWu YT, Lee HY, Norton S, et al.: Prevalence studies of dementia in mainland china, Hong Kong and taiwan: a systematic review and meta-analysis. PLoS One. 2013; 8(6): e66252. PubMed Abstract | Publisher Full Text | Free Full Text\n\nQiu C, von Strauss E, Bäckman L, et al.: Twenty-year changes in dementia occurrence suggest decreasing incidence in central Stockholm, Sweden. Neurology. 
2013; 80(20): 1888–94. PubMed Abstract | Publisher Full Text\n\nWimo A, Sjölund BM, Sköldunger A, et al.: Cohort Effects in the Prevalence and Survival of People with Dementia in a Rural Area in Northern Sweden. J Alzheimers Dis. 2016; 50(2): 387–96. PubMed Abstract | Publisher Full Text\n\nKivipelto M, Håkansson K: A Rare Success against Alzheimer’s. Sci Am. 2017; 316(4): 32–37. PubMed Abstract | Publisher Full Text\n\nJack CR Jr, Wiste HJ, Weigand SD, et al.: Different definitions of neurodegeneration produce similar amyloid/neurodegeneration biomarker group findings. Brain. 2015; 138(Pt 12): 3747–59. PubMed Abstract | Publisher Full Text | Free Full Text"
}
|
[
{
"id": "21939",
"date": "18 Apr 2017",
"name": "W.Sue T Griffin",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nDr. Gandy and his colleagues have come to the heart of the matter – current anti-amyloid strategies have failed to change the course of Alzheimer’s disease (AD). This important article is written as an “Aha, we can now address the question of what other strategies might hold promise toward preventing or delaying onset of Alzheimer’s disease (AD).” They raise a number of very important questions, including how early should we treat; what is the role of multiple pathologies in Alzheimer pathogenesis; are the pathways that give rise to the neuronal dysfunction and loss noted in AD the same ones that obtain in the pathogenesis of other neurodegenerative diseases; are such pathways known and, if so, have the drivers of these pathways been identified? Most of these questions have been addressed in animal models and some have included evidence of the existence of such pathways in brain tissue from patients vs controls. A great deal of effort has been given to one of the areas of interest noted by the authors “Immunology of cognitive decline in AD.” The authors’ discussion of a recent review of the potential relationship between lifestyle changes such as diet and exercise and how this may have a beneficial effect as a middle-age combatant of cognitive decline is laudable. However, implementation of such lifestyle changing practices would require individuals to be either coerced or otherwise convinced to adhere to such practices. 
At present, even though there is convincing evidence of tangible benefits of exercise and diet1, the data is not with us regarding adherence to such regimens. Therefore, we would like to suggest inflammation, both in the brain and in the periphery, as a prime target for early intervention in Alzheimer pathogenesis through the use of currently available drugs or drugs developed so as to have fewer or more tolerable side effects. Data from the Framingham study of 691 cognitively intact individuals provide evidence of spontaneous increases in production by blood monocytes of two pluripotent proinflammatory cytokines, interleukin-1 (IL-1) and TNFα2; these increases were suggested to serve as markers that “strengthen” a link between inflammation and development of AD. In addition, an accumulation of epidemiological data has consistently sided with use of anti-inflammatory compounds as increasing the odds against being in the “Alzheimer group” and in favor of being in the non-demented, non-neuropathologically confirmed group. Data from the Rotterdam study of non-NSAID users vs NSAID users reported an adjusted relative risk ratio of 0.54 among NSAID users as support for the potential of NSAIDs to protect against development of AD3. In a larger study of the VA database (~50,000 clinically and neuropathologically confirmed AD patients vs ~200,000 non-AD patients), 5 or more years of ibuprofen use reduced the adjusted odds ratio to 0.564. 
Further, in the extended results of the ADAPT study, asymptomatic subjects who received naproxen had a reduced incidence of AD compared to those who received either celecoxib or placebo5.\nIn view of the evidence implicating inflammation as increasing risk for later development of AD, it seems prudent to first explore what we know at present regarding early events in Alzheimer pathogenesis, as such events may presage development, or help us pinpoint “when is the best time to intervene” to forestall formation of the aggregate defining neuropathological hallmarks of AD. In the McGeers’ early studies of neuroinflammation in AD, immune markers were identified on plaque-associated microglia in AD brain6. Three discoveries opened a new field of investigation, viz., the potential of IL-1-directed pathways to promote neuropathogenesis: the first showing that activated glia in AD brain overexpress IL-1, the second showing that microglial activation with overexpression of interleukin-1 (IL-1) as well as astrocyte activation and overexpression of the neurite-growth-stimulating cytokine S100B are prominent in fetuses, newborns, children, and adults7, and the third showing that IL-1 induces synthesis of APP in human cell cultures8. These two cytokines, IL-1 and S100B, upregulate each other, and both induce synthesis of βAPP in vitro and in vivo. This βAPP may then be cleaved to β-amyloid (Aβ) for plaque formation or to sAPPα for further microglial activation and overexpression and release of IL-1β9-10. Moreover, IL-1β, via its ability to induce synthesis as well as activation of MAPK-p38, increases production of hyperphosphorylated tau, for formation of the paired helical filaments in neurofibrillary tangles, as well as α-synuclein for formation of Lewy bodies11. 
If such neuroinflammation, which has now been associated with immune challenges in the periphery, is associated with risk for development of AD, it would seem prudent to develop an inflammatory index in order to decide when treatment should begin. In view of the appearance of neuroinflammation, noted as increased activation of glia and overexpression of IL-1 decades before frank pathology in Down’s syndrome, erring on the side of caution and beginning treatment sooner rather than later may be preferable.\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Yes\n\nAre arguments sufficiently supported by evidence from the published literature? Yes\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Partly",
"responses": [
{
"c_id": "2706",
"date": "16 May 2017",
"name": "William Grant",
"role": "Reader Comment",
"response": "The opinion article by Gandy et al. discusses some important issues regarding preventing, delaying, and /or preventing Alzheimer's disease (AD).1 Near the end, they discuss the role of diet and lifestyle in affecting risk of AD, a topic I have studied for 20 years. In this commentary I add a few observations regarding that point.From the study of AD prevalence rates with respect to national diets, it is found that diet appears to be the most important risk-modifying factor. For example, AD prevalence rates for elderly in Japan increased from 1% in 1985 to 7% in 2008, lagging the nutrition transition from the traditional Japanese diet to the Western diet by 15-25 years.2 The major changes were a 5-fold increase in meat consumption and a 50% reduction in rice consumption. In a recent 10-country study, it was found that meat, eggs, fish, and cheese were highly correlated with AD prevalence.3The Mediterranean diet, which is touted as reducing risk of AD by about 50%, was found to lie in the middle of the regression fit to AD prevalence vs. meat consumption. The mechanisms whereby eating animal products increases risk of AD include the effects of cholesterol and increasing the intake of heavy metals while reducing intake of beneficial trace minerals (calcium, magnesium, potassium), thereby leading to increased destruction of neurons by free radicals. Cooking meat at high, dry temperatures also increases production of advanced glycation end products (AGEs), which also increase risk of AD.4 The dietary links to AD are somewhat similar to those for coronary heart disease (CHD).5 Thus, the experience in reducing risk of CHD through diet and lifestyle changes should provide some guidance for preventing AD. A good example comes from Finland. 
A young physician, Pekka Puska, observed that the highest cardiovascular disease mortality rates in the world were in North Karelia and were associated with high intake of saturated fats and high serum cholesterol levels.6 He organized a community-based program that reduced intake of saturated fats from 20% to 12% in 2007. This project led to large reductions in CHD mortality rates, from 647/100,000/yr for men and 114/100,000/yr for women in 1969-72 to 289/100,000/yr for men and 36/100,000/yr for women in 1992.7 Higher sun exposure and vitamin D status are also associated with reduced risk of AD.8 Interestingly, a study in New York state found that those who had non-melanoma skin cancer had a greatly reduced incidence of AD.9 It is unlikely that public health policies in the U.S. and many other countries will lead to reduced consumption of animal products among those middle aged or older anytime soon. However, individuals can learn the advantages of whole-plant based diets and reduce their personal risk of AD, CHD, and, as an added bonus, many types of cancer.10,11\n\nReferences\n1. Gandy S, Bartfai T, Lees GV, Sano M. Midlife interventions are critical in prevention, delay, or improvement of Alzheimer's disease and vascular cognitive impairment and dementia. F1000Res. 2017 Apr 3;6:413.\n2. Grant WB. Trends in diet and Alzheimer’s disease during the nutrition transition in Japan and developing countries. J Alz Dis. 2014;38(3):611-20.\n3. Grant WB. Using Multicountry Ecological and Observational Studies to Determine Dietary Risk Factors for Alzheimer’s Disease. J Am Coll Nutr. 2016;35(5):476–489. http://www.tandfonline.com/doi/full/10.1080/07315724.2016.1161566\n4. Perrone L, Grant WB. Observational and ecological studies of dietary advanced glycation end products in national diets and Alzheimer’s disease incidence and prevalence. J Alz Dis. 2015;45:965–79.\n5. Rafique R, Amjad N. Dietary predictors of early-onset ischaemic heart disease in a sample drawn from a Pakistani population. Heart Asia. 2012;4(1):129-34.\n6. Vartiainen E, Laatikainen T, Tapanainen H, Puska P. Changes in Serum Cholesterol and Diet in North Karelia and All Finland. Glob Heart. 2016;11(2):179-84.\n7. Vartiainen E, Puska P, Pekkanen J, et al. Changes in risk factors explain changes in mortality from ischaemic heart disease in Finland. BMJ. 1994;309(6946):23-7.\n8. Afzal S, Bojesen SE, Nordestgaard BG. Reduced 25-hydroxyvitamin D and risk of Alzheimer's disease and vascular dementia. Alzheimers Dement. 2014;10(3):296-302.\n9. White RS, Lipton RB, Hall CB, Steinerman JR. Nonmelanoma skin cancer is associated with reduced Alzheimer disease risk. Neurology. 2013;80(21):1966-72.\n10. Aune D, De Stefani E, Ronco A, Boffetta P, Deneo-Pellegrini H, Acosta G, Mendilaharsu M. Meat consumption and cancer risk: a case-control study in Uruguay. Asian Pac J Cancer Prev. 2009;10(3):429-36.\n11. Grant WB. A multicountry ecological study of cancer incidence rates in 2008 with respect to various risk-modifying factors. Nutrients. 2013;6(1):163-189."
}
]
},
{
"id": "21956",
"date": "19 Apr 2017",
"name": "Domenico Pratico",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis is a very interesting and stimulating opinion article in which the authors try to make sense of the current state of the Alzheimer’s disease (AD) pharmacotherapy. They present in a very objective fashion the disappointing scenario we are facing in this area, and at the same time they provide very insightful food for thought on how the research should move forward. I would like the authors to spend some additional words on the followings: 1.\n\nThe multifactorial nature of sporadic AD, and because of that a multi-target and may be personalized approach versus a one-size-fits-all solution is more likely to work. 2.\n\nWhile the large GWAS studies have provided evidence for good genetic leads, there is strong evidence indicating that environmental factors (i.e., lifestyle, diet) can ultimately influence the clinical phenotype.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-413
|
https://f1000research.com/articles/6-412/v1
|
03 Apr 17
|
{
"type": "Research Note",
"title": "Patterns of ocular inflammation in patients with miliary tuberculosis",
"authors": [
"Salil Mehta"
],
"abstract": "Background: Ocular morbidity associated with systemic tuberculosis is common. The clinical picture varies from anterior uveitis, intermediate uveitis and posterior uveitis to even panuveitis. There is little data on the correlation between specific systemic presentations and the ocular inflammation. We conducted a retrospective review of the ocular findings in the case records of patients admitted with a diagnosis of miliary tuberculosis. These patients were then referred for a more detailed ophthalmic evaluation. Methods: We analysed the case records of patients with a clinical diagnosis of miliary tuberculosis over a 10-year period at Lilavati Hospital and Research Center, Mumbai. Results: In total, 11 immunocompetent patients were identified. All 22 eyes showed normal findings on slit lamp examination. Dilated fundus examination showed single or multiple tubercles. In our cohort, the ocular findings were exclusively in the form of choroidal tuberculosis, either unilaterally or bilaterally. Slit lamp examination revealed no anterior segment inflammation Conclusions: We suggest that this pattern of choroidal/retinal tuberculosis in the absence of anterior and intermediate segment inflammation is specific for miliary tuberculosis and may be related to a specific immune response.",
"keywords": [
"ocular tuberculosis",
"miliary tuberculosis",
"tubercles",
"ocular inflammation"
],
"content": "Introduction\n\nTuberculosis is a significant cause of uveitis, with published literature describing a spectrum of ocular inflammation that includes anterior uveitis, intermediate uveitis and posterior uveitis or even panuveitis in patients with different presentations of systemic tuberculosis. However, little data exists on the correlation between specific systemic presentations and any ocular inflammation that may co-exist.\n\nMiliary tuberculosis is a specific systemic presentation that is commonly associated with ocular inflammation. We conducted a retrospective observational study of patients admitted with a diagnosis of miliary tuberculosis, to assess the specific patterns of any associated ocular inflammation.\n\n\nMethods\n\nThe study was conducted at Lilavati Hospital (Mumbai, India), which is a private tertiary healthcare facility. The Institutional Ethics Committee of Lilavati Kirtilal Mehta Medical Trust Research Centre approved this study for publication (09/02/2017).\n\nWe defined miliary tuberculosis as “tiny, discrete, widespread and uniform-sized lung opacities 2 mm or less in diameter (millet grains) on X ray or CT scan”. We retrieved the records of matching patients from 2006–2016 and excluded records of patients with HIV infection, autoimmune disease or on immunosuppressive therapy.\n\nAs part of a regular protocol that recognizes the diagnostic value of fundoscopy, all patients with a probable diagnosis of miliary tuberculosis were referred for an ophthalmic evaluation. All patients or their next of kin provided written informed consent for an ocular evaluation. Following this consent, patients underwent assessment of best-corrected visual acuity, slit lamp examination, and dilated indirect fundus examination and intraocular pressure assessment with an applanation tonometer. 
Patients unable to undergo a full evaluation underwent dilated indirect fundus examination at the bedside but were scheduled to complete the evaluation once their systemic status had improved. All patients who gave their written informed consent underwent fundus photography and optical coherence tomography (OCT) studies for documentation; all consenting patients underwent both tests.\n\nThe following additional data was retrieved from patient records: age, sex, findings of chest X-rays, CT scans (chest, brain or abdominal) and laboratory data (complete and differential blood counts, Mantoux testing, renal and liver function tests at the least). Microbiological data included blood cultures (aerobic, anaerobic, mycobacterial cultures) and sputum cultures.\n\n\nResults\n\nIn total, 11 immunocompetent patients were identified. These included 5 males and 6 females with ages ranging from 4 to 73 years (mean 42.5). All were ethnically Indian and their socio-economic status varied from the indigent residing in high-density tenements/slums to the affluent. Sources of referral included transfer from neighborhood facilities or from family physicians.\n\nThe common modes of clinical presentation of miliary tuberculosis included persistent fever (7 patients: 4 males and 3 females with ages ranging from 4–73 years) or sepsis (4 patients: 3 females and 1 male with ages ranging from 16–71 years). 6 patients underwent a detailed evaluation soon after admission. The remaining 5, who were significantly ill, underwent only dilated fundus examination at that time.\n\nThe eyes of all 11 patients were analyzed (22 eyes in total). All patients were visually asymptomatic. Visual acuity studies were available for 6 of the 11 patients and were normal with 6/6 best-corrected visual acuity. All 22 eyes gave normal findings (no cells/flare) on slit lamp examination. Dilated fundus examination showed single or multiple tubercles bilaterally in 7 patients and unilaterally in 4 patients. 
No vitritis or raised intraocular pressure was seen in any patient (Table 1).\n\nARDS: Acute Respiratory Distress Syndrome; CNS: Central Nervous System; RE: right eye; LE: left eye; BE: both eyes.\n\n4 patients (3 females and 1 male, ages ranging from 16–71 years) gave consent for both fundus photography and OCT to be performed, and both tests were carried out.\n\nAdditionally, 2 patients (females, aged 42 and 44 years) had signs of acute respiratory distress syndrome (ARDS). 4 patients (3 men and 1 woman; ages ranging from 4–71) had central nervous system (CNS) granulomas found in the frontal, parietal or temporal regions.\n\nA standard therapy of INH, rifampicin, ethambutol and pyrazinamide was given. Systemic steroids were used at the discretion of the treating physician. Follow-up was available for 3–12 months for 4 patients (3 female and 1 male, ages ranging from 16 to 71, mean 30.5) until the choroidal tubercles were healed.\n\nThe clinical and ocular data of these patients is available in Dataset 11.\n\n\nDiscussion\n\nOf the 1.7 billion individuals infected with tuberculosis, only 10% will develop an active infection in their lifetime, due to a protective immune response that can also be damaging to the tissues and is responsible for the clinical picture seen during active disease. The various clinical presentations are the result of a complex interaction between immune cells, secreted cytokines and varying combinations of systemic Th1 and/or Th2 responses.\n\nMiliary tuberculosis accounts for 2% of all new cases of tuberculosis and approximately 20% of all extrapulmonary tuberculosis cases2. It is a potentially fatal form of disseminated TB that follows from massive hematogenous dissemination. 
Its etiopathogenesis involves immune responses skewed towards a Th2 response that inhibits protective responses (granuloma formation), and this may permit widespread dissemination3.\n\nThe ocular correlates of the systemic picture have been less well studied. Mehta analyzed the PET/CT scans of 27 patients in total: 13 with anterior uveitis, 7 with intermediate uveitis, 6 with pan-uveitis, 2 with vasculitis and 1 with multifocal serpiginous-like choroidopathy. 14 showed metabolically active, largely mediastinal, lymphadenopathy, and lung parenchymal disease was only rarely seen. The author postulated that a specific immune response to mycobacteria in the target tissues was responsible for this pattern of disease; i.e. systemic lymph node tuberculosis with its ocular correlate in the form of uveitis, with marked anterior and intermediate inflammation4.\n\nIn our cohort, which differs significantly in its systemic presentation from the previously mentioned study, the ocular findings were exclusively in the form of choroidal tuberculosis, either unilaterally or bilaterally. Slit lamp examination revealed a marked absence of anterior or intermediate segment inflammation. All the patients had evidence of tubercles, thus confirming the diagnostic role of fundoscopy, but a larger cohort is needed to confirm the absence of anterior segment inflammation.\n\nWe suggest that this pattern of choroidal/retinal tuberculosis in the absence of anterior and intermediate segment inflammation is specific for miliary tuberculosis and may be due to a specific immune response. A larger study that assesses the CD4 and CD8 counts and the cytokine profile is needed to elucidate the exact nature of the immune response responsible.\n\n\nData availability\n\nDataset 1: Clinical and ocular data of study patients.\n\nDOI: 10.5256/f1000research.11035.d1553101",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nMehta S: Dataset 1 in: Patterns of ocular inflammation in patients with miliary tuberculosis: An observational study. F1000Research. 2017. Data Source\n\nSharma SK, Mohan A, Sharma A: Miliary tuberculosis: A new look at an old foe. J Clin Tuberc Other Mycobact Dis. 2016; 3: 13–27. Publisher Full Text\n\nSharma SK, Mohan A, Sharma A, et al.: Miliary tuberculosis: new insights into an old disease. Lancet Infect Dis. 2005; 5(7): 415–30. PubMed Abstract | Publisher Full Text\n\nMehta S: Observed Patterns of Systemic Disease on PET/CT Scan in Patients with Presumed Ocular Tuberculosis: Findings and Hypotheses. Ocul Immunol Inflamm. 2016; 1–3. PubMed Abstract | Publisher Full Text"
}
|
[
{
"id": "21905",
"date": "18 Apr 2017",
"name": "Jyotirmay Biswas",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nAuthor reported the pattern of ocular lesions in patients with miliary tuberculosis. He analysed the record of the patients with the clinical diagnosis of miliary tuberculosis over ten years period seen in a hospital of Western India.\nAll the patients were immunocompetent, had normal anterior segment findings but fundus examination showed single or multiple tubercles in the choroid. This is a rare cohort of patients of miliary tuberculosis who have got an ophthalmic examination done.\nIt would have been better if authors could put some photograph of the fundus of these patients showing choroidal tubercles. We have studied 1005 patient of systemic tuberculosis in TB hospital in 1985 and found in 1.39% patients had ocular tuberculosis and only one patient had miliary tubercles in choroid (Biswas et al. 1996). New imaging system like optical coherence tomography particularly enhanced-depth and swept source of optical coherence tomography could show, the lesion location elegantly.\nInterestingly these lesions are in the periphery, not in the macular area and the patients preserve good visual function. I feel this article will be good addition to ophthalmic tuberculosis and systemic tuberculosis literature.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? 
Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate? Yes\n\nAre all the source data underlying the results available to ensure full reproducibility? Partly\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
},
{
"id": "21512",
"date": "21 Apr 2017",
"name": "Reema Bansal",
"expertise": [
"Reviewer Expertise Uveitis and Retinal diseases"
],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe authors report the ocular profile of a rare but an important systemic disease, which adds relevant information to the literature. The study highlights the choroidal tubercles as hallmarks of miliary TB.\n\nProviding fundus photographs would add value to the manuscript.\n\nFour patients did undergo both fundus photography and OCT. But the authors haven’t described the OCT findings in any of these.\n\nAnother reference may be added; Sharma et al. 2012.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate? Not applicable\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
},
{
"id": "22605",
"date": "10 May 2017",
"name": "Somasheila I. Murthy",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe study provides information on the patterns of ocular inflammation in patients with milliary tuberculosis over a 10-year period. The finding of choroidal tubercles in the presence of normal visual acuity and in the absence of associated anterior chamber inflammation or significant vitritis highlights the importance of screening posterior segment in patients with milliary tuberculosis. The examination can be easily done as a bed side procedure.\nA few points mentioned below, if further analyzed, can help to improve the understanding of disease behavior in these patients.\n\nAll patients had choroidal lesions detected on fundus examination. It would have been better to know at what time points the patients were examined. Were any of these patients already on anti-tubercular therapy (ATT) at the time of presentation or were started on it only after the presentation.\n\nThe course and time of resolution of choroidal tubercles following initiation of ATT using OCT imaging at least in patients who were ambulatory will be an important addition.\n\nDid any of the patients have persistent lesions after the completion of ATT requiring prolonged continuation of ATT or some additional form of treatment?\n\nDid any of the patient have a resurgence of the disease following discontinuation of ATT, after an initial cure. 
Information regarding the recurrence pattern can further provide an insight into the etiopathogenesis of this manifestation.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate? Not applicable\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
},
{
"id": "21514",
"date": "11 May 2017",
"name": "Padmamalini Mahendradas",
"expertise": [
"Reviewer Expertise Uveitis and ocular immunology"
],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe author has reported that the choroidal tubercle is a characteristic finding in cases of military tuberculosis in an immunocompetent cases. Unlike other types of systemic tuberculosis (for example, pulmonary, lymphadenitis) which could be associated with several different ocular manifestations, it is interesting that choroidal tubercle is the only presentation observed in military tuberculosis.\n\nProviding the fundus photographs and optical coherence tomography images of choroidal tubercles along with radiological images of military tuberculosis cases would add additional value to the manuscript.\n\nThe author has mentioned that systemic steroids were used at the discretion of the treating physician. Addition of treatment details in the Data set 1 will give a more clear picture regarding the management of individual cases to the readers.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate? Not applicable\n\nAre all the source data underlying the results available to ensure full reproducibility? Partly\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-412
|
https://f1000research.com/articles/6-410/v1
|
03 Apr 17
|
{
"type": "Clinical Practice Article",
"title": "Treatment with penicillin G and hydrocortisone reduces ALS-associated symptoms: a case series of three patients",
"authors": [
"Bert Tuk",
"Harmen Jousma",
"Pieter J. Gaillard",
"Harmen Jousma",
"Pieter J. Gaillard"
],
"abstract": "Three male Caucasian patients with ALS were admitted to the hospital due to progressive dysphagia and dysarthria. During two 21-day courses of penicillin G and hydrocortisone, these patients’ dysphagia and dysarthria resolved. The patient’s other ALS-associated symptoms also improved, including respiratory function, coordination, walking, and muscle strength. This is the first report of a treatment with a protocol for treating dysphagia, dysarthria, respiratory depression and other ALS-related symptoms. Furthermore, the observations are consistent with the recent hypothesis that the successful treatment of ALS symptoms with this treatment course in six patients with syphilitic ALS was not directly due to the treatment of syphilis; but that the administered penicillin G and/or hydrocortisone treated these patients’ ALS symptoms due the off-target pharmacological activity of penicillin G and/or hydrocortisone. This report therefore underscores the need to evaluate the efficacy of this treatment course in a clinical trial.",
"keywords": [
"Amyotrophic lateral sclerosis",
"dysphagia",
"dysarthria",
"penicillin G",
"hydrocortisone",
"GABA",
"neuromuscular disease",
"respiratory depression"
],
"content": "Introduction\n\nAmyotrophic lateral sclerosis (ALS, also known as Lou Gehrig’s disease) is a rapidly progressive devastating disease with an average life expectancy of only 3–5 years after diagnosis1–3. The cumulative lifetime risk of ALS is approximately 1:350–4003. Moreover, the total cost associated with ALS—excluding the cost of medication—has been estimated to exceed $1.4 million per patient4. The clinical manifestations of ALS include progressive wasting of muscle mass, reduced muscle coordination, dysphagia, dysarthria, and fatal respiratory depression1–3. Several observations regarding the putative pathogenesis of ALS have been reported1–3; however, although more than 140 years have passed since ALS was first described, its pathogenesis remains poorly understood, and no disease-modifying treatment is available.\n\nDysphagia-associated aspiration pneumonia is the leading cause of death in many neuromuscular and neurological diseases, including ALS, Parkinson’s disease, and Alzheimer’s disease5,6. Remarkably, however, no effective treatment for dysphagia is currently available.\n\nHere, we report that dysphagia, dysarthria, and other ALS-related symptoms resolved during two 21-day courses of penicillin G and hydrocortisone. This treatment course was previously reported to be efficacious in six patients with so-called syphilitic ALS in which syphilis was hypothesized to cause ALS7,8. However, we recently proposed that the treatment effect was not due to the treatment of syphilis, but rather was a consequence of the multifaceted pharmacology of penicillin G and/or hydrocortisone9. Given that our three patients presented with no evidence of syphilis, our results provide evidence that treating ALS patients with a course of penicillin G and hydrocortisone—regardless of whether they present with syphilitic ALS or non-syphilitic ALS—may effectively treat the symptoms of this rapidly progressive disease. 
These three cases warrant further study of this treatment course.\n\n\nCase presentation: Patient 1\n\nA 42-year-old Caucasian male with ALS was admitted to the hospital with complaints of progressive swallowing and speech difficulties. Three years earlier, this patient had been diagnosed with limb-onset ALS and presented with other symptoms typical of ALS, including dysphagia, dysarthria, difficulty with coordination, and muscle wasting. The patient had no history of syphilis or other systemic infection, and the diagnosis of ALS was confirmed by the Netherlands National ALS Center.\n\nUpon admission to the hospital, the patient was only able to take a few steps and had been wheelchair-bound for the past four months. In addition, his upper extremities had been paralyzed for over twelve months (see Figure 1A). In the preceding months, the patient’s speech had degenerated, and the patient had difficulty swallowing both solid food and liquids, including saliva.\n\nA: Upon admission to the hospital, patient 1 was only able to take a few steps and had been wheelchair-bound for the past four months. In addition, his upper extremities had been paralyzed for over twelve months. In the preceding months, the patient’s speech had degenerated, and the patient had difficulty swallowing both solid food and liquids, including saliva. B: On the 2nd and 3rd treatment days, patient 1 reported that he was able to lie in bed without experiencing muscle pain in the neck, shoulders, or back. On day 4, the patient was able to stand from a sitting position. On day 5, the patient was able to walk unaided a distance of approximately 100 meters. During days 76 to 91, the patient experienced increasing muscle pain in the neck, shoulders, and back; his walking ability regressed, and the patient became wheelchair-bound again. His swallowing and speech remained functional.\n\nFigure 2 shows the progression of symptoms and the effect of treatment. 
Dysphagia was confirmed at the time of admission by performing a fiber-optic endoscopic evaluation of swallowing (FEES) examination10 (Movie 1). Physical examination and laboratory blood analysis revealed no other clinical pathology, and renal function was normal. The patient was not taking any prescription medications.\n\nThe patient had no history of seizures and was therefore eligible to receive high doses of penicillin G. After confirming that the patient was not allergic to penicillin (by administering a daily dose of amoxicillin for six days), the patient was started on a 21-day course of penicillin G and hydrocortisone (Table 1) delivered via midline catheter infusion. This treatment course was recently postulated to be efficacious for treating dysphagia, dysarthria, and other ALS-related symptoms9.\n\nOn the 2nd and 3rd treatment days, the patient reported that he was able to lie in bed without experiencing muscle pain in the neck, shoulders, or back. On day 4, the patient was able to stand from a sitting position. On day 5, the patient was able to walk unaided a distance of approximately 100 meters (see Figure 1B). In addition, the patient’s speech and swallowing improved.\n\nAt the end of the first week, the patient’s dysphagia and dysarthria symptoms had resolved fully, and his walking had improved further. On day 9, the patient was able to move the fourth and fifth fingers on his right hand for the first time in a year, and by day 11 he had regained control of these two fingers (Movie 2). On day 11, a repeat FEES examination confirmed that the patient’s dysphagia had resolved. On day 12, the patient was able to move all of the fingers on his right hand (Movie 3) and grasp objects with his left hand, and he could once again operate the mouse attached to his computer. Furthermore, the patient’s voice recognition software was able to interpret the speech of the patient for the first time in months. 
On day 14, it was possible to sample venous blood from the patient’s forearm for the first time since he was admitted to the hospital. On day 18, the patient’s motor function further improved (Movie 4).\n\nAt the end of day 21, physiological and FEES examinations revealed that swallowing function remained intact (Movie 5), and the patient reported that his breathing and sleep quality had improved markedly during the treatment course. Therefore, in accordance with the defined treatment protocol, the midline catheter was removed and the patient was discharged. Upon discharge, the patient’s speech was restored to nearly pre-ALS levels, and he had regained the ability to stand from his wheelchair and walk unaided. The patient had also regained control over his fingers and had regained the ability to grasp light objects. Overall muscle function and strength were also improved as evidenced by increased power in his arms and legs and his renewed ability to stand, walk, and bend at the waist. Furthermore, his respiratory function had improved. Lastly, the patient reported that his general muscle pain had regressed, and the pain in his shoulder muscles had resolved completely.\n\nAfter returning home, the patient continued to improve. On day 22 (the first day following the end of the treatment course), the patient was able to lie in a dentist’s chair for 40 minutes without muscle pain, which had not been possible prior to receiving the treatment, and the patient was able to complete the dental procedure. On day 25, the patient was able to walk unaided a distance of approximately 650 meters. During the follow-up period from day 25 through day 75, the patient had generally stabilized. 
During days 76 to 91, the patient experienced increasing muscle pain in the neck, shoulders, and back; he could no longer operate the mouse attached to his computer; his voice recognition software was no longer able to interpret his speech; his walking ability regressed to the point that he could not even take a few steps; and the patient became wheelchair-bound again. His swallowing and speech remained functional.\n\nOn day 92, the patient was readmitted to the hospital and started on a second 21-day course of penicillin G and hydrocortisone (Table 1) delivered via midline catheter infusion. Physiological and FEES examination on day 92 revealed that swallowing function had remained stable relative to day 21. During the 2nd 21-day course (days 92 to 113), the patient reported that he was able to lie in bed without experiencing muscle pain in the neck, shoulders, or back, and that his walking ability had slightly improved. Furthermore, his speech and swallowing function had improved. During days 114 to 150, the patient remained wheelchair-bound; however, his regained finger movement, swallowing, and speech remained functional.\n\n\nCase presentation: Patient 2\n\nA 51-year-old Caucasian male with ALS was admitted to the hospital with complaints of progressive swallowing and speech difficulties. One year earlier, this patient had been diagnosed with bulbar-onset ALS and presented with symptoms typical of bulbar-onset ALS, including dysphagia and dysarthria. Over the previous year, the patient also developed other symptoms typical of ALS, including fasciculations, coordination problems, and muscle weakness. Furthermore, the patient experienced tremor-like movements when lying in bed and when getting up in the morning. These transient muscle tremors prevented the patient from standing and walking directly after getting out of bed. The patient had also experienced bladder control problems for the previous two years, but tested negative for bladder infection. 
The patient had no history of syphilis or other systemic infection, and the diagnosis of ALS was confirmed by the Netherlands National ALS Center.\n\nIn the year preceding admission to the hospital, the patient’s speech had degenerated, and the patient had difficulty swallowing both solid food and liquids, including saliva, leading to frequent coughing. Figure 3 shows the progression of symptoms and the effect of treatment in Patient 2. Dysphagia was confirmed by FEES examination at the time of admission (Movie 6), and speech impairment was confirmed by a speech therapist. Physical examination and laboratory blood analysis revealed no other clinical pathology and normal renal function; a test for syphilis was negative. The only prescription medication taken by the patient was Riluzole (100 mg/day).\n\nThe patient had no history of seizures and was therefore eligible to receive high doses of penicillin G. After confirming that the patient was not allergic to penicillin (as described for Patient 1), the patient was started on a 21-day course of penicillin G and hydrocortisone (Table 1) delivered via midline catheter infusion.\n\nOn the 4th treatment day, the frequency and severity of coughing had decreased markedly, and a slight improvement in speech was noted. Furthermore, bladder function had improved. On day 5, the patient’s speech further improved, and the patient had stopped coughing completely, indicating improved swallowing function. From day 6 onwards, swallowing and speech function were further improved, as confirmed by FEES examination (Movie 7) and evaluation by a speech therapist, both on day 20. Furthermore, respiratory function improved. On days 11–12, the patient’s blood pressure had increased, and the patient reported a severe headache, both of which resolved after starting treatment with a blood pressure medication (nifedipine). 
After the discontinuation of hydrocortisone on day 15, the patient no longer reported headache symptoms, and his blood pressure had normalized. On days 16 and 17, the patient reported slight muscle weakness in his legs, which resolved within a few days. At the end of day 21, the midline catheter was removed and the patient was discharged. During the follow-up period from day 21 through day 91, the patient had generally stabilized. On day 92, the patient was readmitted to the hospital. Physiological and FEES examination on day 92 revealed that swallowing function had remained stable relative to day 21. The patient was started on a second 21-day course of penicillin G and hydrocortisone (Table 1) delivered via midline catheter infusion. Because of the observed blood pressure increase during the first course, the patient was administered 30 mg nifedipine retard once daily, starting from the first day of the 2nd treatment course. On the 2nd day of the 2nd 21-day course (day 93), the patient’s blood pressure had increased, which resolved after increasing the nifedipine dose to 90 mg daily. During the second 21-day course, and during the follow-up period from day 112 to 122, the patient remained stable.\n\n\nCase presentation: Patient 3\n\nA 65-year-old Caucasian male with ALS was admitted to the hospital with severely impaired swallowing and speech. One year earlier, this patient had been diagnosed with bulbar-onset ALS and presented with symptoms typical of bulbar-onset ALS, including dysphagia and dysarthria. Over the course of the disease, the dysphagia had progressed to the level that the patient’s oral intake had become inadequate, and the patient was scheduled to undergo a percutaneous endoscopic gastrostomy (PEG). In the year prior to admission to the hospital, the patient’s speech also had become severely impaired, and the patient developed other symptoms typical of ALS, including coordination problems, muscle weakness, cramps, and leg resting tremors. 
A few months before admission, the patient could no longer walk unaided and began using a walker. The patient had no history of syphilis or other systemic infection, and the diagnosis of ALS was confirmed by the Netherlands National ALS Center.\n\nFigure 4 shows this patient’s progression of symptoms and the effect of treatment. Dysphagia was confirmed by FEES examination at the time of admission (Movie 8), and the speech impairment was confirmed by a speech therapist. Physical examination and laboratory blood analysis revealed no other clinical pathology and normal renal function. The only prescription medication taken by the patient was losartan (100 mg/day) for high blood pressure.\n\nThe patient had no history of seizures and was therefore eligible to receive high doses of penicillin G. After confirming that the patient was not allergic to penicillin (as described for Patient 1), the patient was started on a 21-day course of penicillin G and hydrocortisone (Table 1) delivered via midline catheter infusion.\n\nOn the 3rd treatment day, the patient was able to drink a glass of water for the first time in months. On day 4, swallowing function improved further, and the patient was able to swallow solid food for the first time in months. Furthermore, the patient’s speech improved, his muscle stiffness had diminished, and he no longer experienced cramps or leg resting tremors. Over the following days, both swallowing and speech function continued to improve, as confirmed by FEES examination (Movie 9) and evaluation by a speech therapist. From day 7 onwards, the patient’s swallowing function continued to improve to the point that a PEG procedure was no longer necessary, and his speech further improved. On day 8, the patient reported increased muscle strength in his arms and legs, and the patient’s walking ability had improved. Furthermore, his weight had increased by 3 kg, and his respiratory function improved. 
On day 9, the patient’s blood pressure had increased, and the patient reported a headache. He was therefore placed on blood pressure medication (nifedipine), and the hydrocortisone was discontinued on day 10 (rather than on day 14 as indicated in the protocol). After hydrocortisone was discontinued, the patient no longer reported headache symptoms, and his blood pressure normalized. On days 11 and 12, the patient reported slight muscle weakness in his legs, which resolved within a few days. At the end of day 21, the midline catheter was removed and the patient was discharged. During the follow-up period from day 21 through day 90, the patient had generally stabilized. On day 91, the patient was readmitted to the hospital and was started on a second 21-day course of penicillin G and hydrocortisone (Table 1) delivered via midline catheter infusion. Because of the observed blood pressure increase during the first course, the patient was administered 30 mg nifedipine retard once daily. During the second 21-day course (days 91 to 112), the patient remained stable.\n\n\nDiscussion\n\nHere, we report that two 21-day courses of penicillin G and hydrocortisone treated symptoms typical of ALS in three patients with no history of syphilis. After only four days of treatment, Patient 1—who had been wheelchair-bound for four months—was able to walk unaided approximately 100 meters. In addition, this patient rapidly regained movement of his fingers and could grasp items with his left hand, functions that had been completely absent for eight months prior to treatment. Furthermore, the symptoms in Patient 2—who had experienced progressive dysarthria, dysphagia, and bladder dysfunction over the preceding year—also regressed during the first week of treatment. 
Lastly, in Patient 3—who presented with dysphagia so severe that PEG was indicated—both the dysphagia and dysarthria regressed during the first week of treatment.\n\nThe treatment protocol was generally well tolerated by all three patients. The only side effect observed in Patient 1 was slight, transient edema of the arm and hand at the site where the midline catheter was placed. Patient 2 experienced high blood pressure and a severe headache beginning on day 11; these symptoms can be attributed to the administration of hydrocortisone, and they resolved after the discontinuation of hydrocortisone. Similarly, Patient 3 experienced high blood pressure on day 9, which resolved after the discontinuation of hydrocortisone on day 10. During the second course of treatment, the increase in blood pressure in these patients was effectively treated with nifedipine. These observations indicate that blood pressure should be monitored closely during the treatment protocol. Patients 2 and 3 experienced temporary muscle weakness in their legs in the days following the discontinuation of hydrocortisone. Because Patients 2 and 3 were not in the late stages of ALS, in which respiratory muscle function is often severely affected, the hydrocortisone withdrawal–induced temporary muscle weakness was well tolerated. However, in late-stage ALS, in which muscle function is often reduced to minimal functional levels, hydrocortisone withdrawal–induced muscle weakness may lead to serious side effects, including respiratory depression.\n\nThe absence of seizure activity in these patients—even on high-dose penicillin G—is consistent with our recent observation that seizures are often absent in patients with ALS11. In the treatment of syphilis, for which this treatment protocol was originally developed, increasing doses of penicillin G are given in order to minimize the risk of inducing a Jarisch-Herxheimer reaction. 
This titration of penicillin G is also recommended for ALS patients, as these patients may have impaired blood-brain barrier (BBB) function12,13, potentially resulting in high levels of penicillin G in the CSF, which could induce seizure activity. In this respect, including hydrocortisone in the treatment course may provide additional benefits, as hydrocortisone has been reported to maintain BBB integrity14.\n\nFour courses of the 21-day treatment protocol reported here were previously reported to be efficacious in treating six cases of so-called syphilitic ALS7,8. Syphilitic ALS is an intriguing clinical phenomenon, as it is the only form of ALS ever reported to have been cured9. Recently, we proposed that the successful treatment of ALS symptoms in these six patients with syphilitic ALS was not directly due to the treatment of syphilis; specifically, we proposed that the penicillin G and/or hydrocortisone treated these patients’ ALS symptoms due to the off-target pharmacological activity of penicillin G (e.g., as a GABA receptor antagonist) and/or the multifaceted pharmacological activity of hydrocortisone (e.g., as an immunosuppressant)9. This notion is supported by the three cases reported here, as our patients had no syphilis-related symptoms or history of syphilis.\n\nIt is important to note that either penicillin G or hydrocortisone—or both—may have contributed to the observed effects. Penicillin G is a GABA receptor antagonist15 that can reduce GABAergic overstimulation. At high doses, penicillin G can also affect other major bodily functions and/or systems, including the immune system, the cardiovascular system, metabolic function, renal function, liver function, the hematological system, and the urogenital system (penicillin G summary of product characteristics, see https://www.medicines.org.uk/emc/medicine/2962. 
Accessed July 27, 2016); these pharmacological activities may be associated with the clinical benefits reported here.\n\nHydrocortisone has immunomodulatory and anti-inflammatory properties (hydrocortisone summary of product characteristics. https://www.medicines.org.uk/emc/medicine/10815, accessed July 27, 2016, and hydrocortisone sodium succinate product monograph. https://www.drugs.com/monograph/hydrocortisone-sodium-succinate.html, accessed November 16, 2016); therefore, it may also affect systems involved in the pathogenesis of ALS, including inflammatory processes1–3. Moreover, hydrocortisone has reported efficacy in treating multiple sclerosis and respiratory diseases (hydrocortisone summary of product characteristics. https://www.medicines.org.uk/emc/medicine/10815, accessed July 27, 2016), conditions that have clinical overlap with ALS. Furthermore, like penicillin G, hydrocortisone can affect several bodily functions and/or systems, including the endocrine system, the immune system, inflammatory function, the respiratory system, the hematological system, and the gastrointestinal system (hydrocortisone summary of product characteristics. https://www.medicines.org.uk/emc/medicine/10815, accessed July 27, 2016). In addition, glucocorticoids have been found to be efficacious in preclinical models of ALS16.\n\nOther explanations may account for the observations reported here. First, the patients may have improved due to a placebo effect. However, this may be deemed unlikely, as the full range of ALS symptoms resolved in these patients. Nevertheless, the possibility that the patients improved due to a placebo effect should be investigated in a clinical trial setting. Second, the patients may have been incorrectly diagnosed as having ALS and/or incorrectly diagnosed as not having neurosyphilis. 
However, this is also unlikely, given that Patient 2 tested negative for syphilis before starting treatment, and given that all three patients were diagnosed at a leading neurology center with extensive experience diagnosing both ALS and neurosyphilis. Furthermore, syphilis has an extremely low prevalence in the Netherlands (present in only 0.15% of the population)17, and the symptoms associated with ALS generally do not overlap with the symptoms associated with syphilis18. Third, it is possible that the patients’ ALS symptoms were caused by an infection other than syphilis, which was then treated by the penicillin and hydrocortisone. The possible presence of an unidentified infection—and its treatment with penicillin G—may explain the observation that the benefits of treatment remained even after treatment was discontinued. This is an interesting point, as the elimination half-lives of penicillin G and hydrocortisone are 0.5–1.0 hour (penicillin G summary of product characteristics, see https://www.medicines.org.uk/emc/medicine/2962, accessed July 27, 2016) and 1.5–3.5 hours (hydrocortisone sodium succinate product monograph. https://www.drugs.com/monograph/hydrocortisone-sodium-succinate.html, accessed November 16, 2016), respectively; thus, both compounds would have been cleared from the body within hours after treatment was discontinued. Nevertheless, given the apparent absence of infection in all three patients, we believe that the post-treatment effects were due to the treatment’s effects on ALS rather than its antibacterial activity. Finally, it is formally possible that Patient 2 may have improved because of the concomitant administration of Riluzole. However, this is unlikely, as this patient’s symptoms had been progressing steadily since he was first diagnosed with ALS, and he had been on Riluzole since he was diagnosed. 
Moreover, this patient began to improve only after he started on the penicillin and hydrocortisone protocol.\n\n\nConclusions\n\nThis is the first report of treating dysphagia, dysarthria, and other ALS-related symptoms using a 21-day course of penicillin G and hydrocortisone in three patients with ALS and no history or symptoms of syphilis. This is an important clinical observation, as no treatment is currently available for dysphagia, dysarthria, or ALS. Furthermore, the findings support the recent hypothesis that the successful treatment of ALS symptoms with this treatment course in six patients with syphilitic ALS was not directly due to the treatment of syphilis, but rather that the administered penicillin G and/or hydrocortisone treated these patients’ ALS symptoms due to the off-target pharmacological activity of penicillin G and/or hydrocortisone. In view of the devastating, rapidly progressive nature of ALS, this treatment protocol should be evaluated further in a clinical trial.\n\n\nConsent\n\nWritten informed consent for publication was obtained from the patients. The requirement for ethical approval was waived by the Medical Ethics Review Committee of the Academic Medical Center. Patients were treated according to the guidelines of the Royal Dutch Medical Association (KNMG) for off-label prescribing (see https://protect-eu.mimecast.com/s/DdxhBlJkYHQ). Medication was administered under the supervision of E.W.J. Wielinga, M.D., Ph.D., consultant ENT-surgeon, who also validated the results regarding swallowing and speech.",
"appendix": "Author contributions\n\n\n\nBT, EW, HJ and PG were responsible for the design of the treatment protocol and contributed to the design and the preparation of the manuscript. All authors were involved in the revision of the draft manuscript and have agreed to the final content.\n\n\nCompeting interests\n\n\n\nBT is the inventor listed on a patent application for the treatment of neuromuscular and neurological diseases using the therapies described in this manuscript. BT is also the founder of Ry Pharma, a company that develops the therapy described in the manuscript. HJ is an unsalaried volunteer at Ry Pharma. PG has no competing interests.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nAcknowledgments\n\nThe authors wish to thank Jent Zijlstra, Marc ter Haar, Mikkel Hofstee, and Nico Zandman for their support and critical review of the manuscript.\n\nThe authors also thank SEVBI, a Dutch foundation that promotes the effective use of medications outside of their registered indication, for providing EUR 800 for the cost of making this publication open-access.\n\n\nSupplementary material\n\nMovie 1: FEES in Patient 1 prior to the start of the treatment, showing impaired swallowing function while attempting to swallow two pieces of bread.\n\nClick here to access the data.\n\nMovie 2: On day 11, Patient 1 had regained movement of the fourth and fifth fingers on his right hand.\n\nClick here to access the data.\n\nMovie 3: On day 12, Patient 1 had regained movement of all four fingers on his right hand.\n\nClick here to access the data.\n\nMovie 4: On day 18, Patient 1 could grasp objects.\n\nClick here to access the data.\n\nMovie 5: FEES in Patient 1 on day 21, showing restored swallowing function. 
Note that the patient fully swallowed the piece of bread, and the patient could swallow faster than on day 0 (compare with Movie 1).\n\nClick here to access the data.\n\nMovie 6: FEES in Patient 2 prior to the start of the treatment, showing impaired swallowing function while attempting to swallow two pieces of bread. Note that residue of bread is still present in the vallecula after four swallowing attempts.\n\nClick here to access the data.\n\nMovie 7: FEES in Patient 2 on day 20, showing improved swallowing function. After swallowing a piece of bread in a single attempt, no residue can be discerned in the vallecula (compare with Movie 6).\n\nClick here to access the data.\n\nMovie 8: FEES in Patient 3 prior to the start of the treatment, showing that drinking water was impossible without choking and coughing.\n\nClick here to access the data.\n\nMovie 9: FEES in Patient 3 on day 16, showing functional swallowing of a piece of bread (compare with Movie 8).\n\nClick here to access the data.\n\n\nReferences\n\nKiernan MC, Vucic S, Cheah BC, et al.: Amyotrophic lateral sclerosis. Lancet. 2011; 377(9769): 942–55. PubMed Abstract | Publisher Full Text\n\nSilani V, Messina S, Poletti B, et al.: The diagnosis of Amyotrophic lateral sclerosis in 2010. Arch Ital Biol. 2011; 149(1): 5–27. PubMed Abstract | Publisher Full Text\n\nTurner MR, Hardiman O, Benatar M, et al.: Controversies and priorities in amyotrophic lateral sclerosis. Lancet Neurol. 2013; 12(3): 310–22. PubMed Abstract | Publisher Full Text | Free Full Text\n\nObermann M, Lyon M: Financial cost of amyotrophic lateral sclerosis: a case study. Amyotroph Lateral Scler Frontotemporal Degener. 2015; 16(1–2): 54–7. PubMed Abstract | Publisher Full Text\n\nTjaden K: Speech and Swallowing in Parkinson’s Disease. Top Geriatr Rehabil. 2008; 24(2): 115–126. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKalia M: Dysphagia and aspiration pneumonia in patients with Alzheimer's disease. Metabolism. 
2003; 52(10 Suppl 2): 36–8. PubMed Abstract | Publisher Full Text\n\nel Alaoui-Faris M, Medejel A, al Zemmouri K, et al.: [Amyotrophic lateral sclerosis syndrome of syphilitic origin. 5 cases]. Rev Neurol (Paris). 1990; 146(1): 41–4. PubMed Abstract\n\nChraa M, Mebrouk Y, McCaughey C, et al.: Amyotrophic lateral sclerosis mimic syndrome due to neurosyphilis. Amyotroph Lateral Scler Frontotemporal Degener. 2013; 14(3): 234. PubMed Abstract | Publisher Full Text\n\nTuk B: Syphilis may be a confounding factor, not a causative agent, in syphilitic ALS [version 1; referees: 2 approved]. F1000Res. 2016; 5: 1904. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNacci A, Ursino F, La Vela R, et al.: Fiberoptic endoscopic evaluation of swallowing (FEES): proposal for informed consent. Acta Otorhinolaryngol Ital. 2008; 28(4): 206–11. PubMed Abstract | Free Full Text\n\nTuk B: Overstimulation of the inhibitory nervous system plays a role in the pathogenesis of neuromuscular and neurological diseases: a novel hypothesis [version 2; referees: 2 approved]. F1000Res. 2016; 5: 1435. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGarbuzova-Davis S, Rodrigues MC, Hernandez-Ontiveros DG, et al.: Amyotrophic lateral sclerosis: a neurovascular disease. Brain Res. 2011; 1398: 113–25. PubMed Abstract | Publisher Full Text\n\nEvans MC, Couch Y, Sibson N, et al.: Inflammation and neurovascular changes in amyotrophic lateral sclerosis. Mol Cell Neurosci. 2013; 53: 34–41. PubMed Abstract | Publisher Full Text\n\nGaillard PJ, van Der Meide PH, de Boer AG, et al.: Glucocorticoid and type 1 interferon interactions at the blood-brain barrier: relevance for drug therapies for multiple sclerosis. Neuroreport. 2001; 12(10): 2189–93. PubMed Abstract\n\nRossokhin AV, Sharonova IN, Bukanova JV, et al.: Block of GABAA receptor ion channel by penicillin: electrophysiological and modeling insights toward the mechanism. Mol Cell Neurosci. 2014; 63: 72–82. 
PubMed Abstract | Publisher Full Text\n\nEvans MC, Gaillard PJ, de Boer M, et al.: CNS-targeted glucocorticoid reduces pathology in mouse model of amyotrophic lateral sclerosis. Acta Neuropathol Commun. 2014; 2: 66. PubMed Abstract | Publisher Full Text | Free Full Text\n\nVan der Ploeg CP, van der Pal SM, Oomen P: Procesmonitoring prenatale screening infectieziekten en erytrocytenimmunisatie 2007–2009. RIVM. Accessed November 1, 2016. Reference Source\n\nSingh AE, Romanowski B: Syphilis: Review with Emphasis on Clinical, Epidemiologic, and Some Biologic Features. Clin Microbiol Rev. 1999; 12(2): 187–209. PubMed Abstract | Free Full Text"
}
|
[
{
"id": "21563",
"date": "05 Apr 2017",
"name": "Rien Vermeulen",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThis is a remarkable report on three patients with ALS who improved after treatment with penicillin G and hydrocortisone. This treatment was based on the observed beneficial effects of this treatment in patients with ALS and syphilis. The authors hypothesize that this effect of treatment is a result of reduction of GABAergic overstimulation by penicillin G.\nHowever, all too often neurologists have seen astonishing improvements in uncontrolled studies. Therefore this treatment should be tested in a randomized controlled trial. Before embarking on a large-scale trial, I would like to suggest starting with a small efficacy study in patients with ALS in whom respiratory function is decreasing. The primary outcome in this trial could be the maximal inspiratory pressure, which is a well-known surrogate marker for disease progression in patients with ALS. In trials in patients with ALS few placebo effects have been observed, which means that a quick answer is possible to the question of whether or not this treatment improves respiratory function. Moreover, there are no safety issues in such a study.\nClinicians owe it to their patients with this devastating disease to test the effects of this treatment with high priority, since there are presently no other promising therapies.",
"responses": []
},
{
"id": "21581",
"date": "05 Apr 2017",
"name": "Alan Gill",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nInterpreting any therapeutic benefit of Penicillin G plus hydrocortisone on ALS symptom severity and progression does require a careful examination of drug effects that occur outside of the intended general GABA receptor antagonism and anti-inflammatory actions, as well as the effects of additional drugs the patients take, including those needed to offset untoward effects of the primary treatment, e.g., nifedipine. Plasma volume expansion by hydrocortisone might be expected to make venous blood sampling easier, but also to increase blood pressure to levels that need attention. While direct vasodilation by nifedipine increased the size of the circulation’s container enough to permit blood pressure to normalize, both nifedipine and hydrocortisone soon promote additional plasma volume expansion to fill the larger container. The authors do carefully document the application of all drugs in a way that permits therapists to anticipate managing these side-effects.\n\nThe authors provide solid safety recommendations about dosing with both the Penicillin G and with the hydrocortisone. These recommendations would be important even without the potential for blood brain barrier leakiness and for adrenal dysregulation that can occur in ALS in order to prevent seizures and unexpected responses by the pituitary-adrenal axis. 
The authors formally recognize and take these potential risks into account.\n\nWe do believe that the apparent therapeutic benefit of treatment shown by patients in the current report is interesting at face value. However, if the objective of the report is to convince ALS clinicians and patients to consider trying this off-label therapeutic approach, then the manuscript would be better served to include typical ALS endpoint data, e.g., ALS FRS scores, forced vital capacity (or slow vital capacity) measurements, and patient body weight, alongside the clinical description and videos. While ALS FRS is subjective, it is familiar to neurologists and patients and could provide important context for the magnitude of benefit observed. FVC/SVC and body weights are more objective measures that would provide indications about the patients’ well-being, although body weights would be influenced in a potentially confounding way by the hydrocortisone during, but perhaps not after, treatment. All of these endpoints are standard in ALS clinics, both in the United States and in Europe.\n\nOne can hope that careful clinical testing of the current regimen might encourage efforts to try to identify an even safer, more predictable, therapeutic regimen than PENG/hydrocortisone itself. Such efforts might try to understand which inhibitory interneuron networks require interventions and which do not, and which specific antagonist (or agonist) drugs would optimize our test of the hypothesis. Can specific peptide neuromodulators contribute to improving the regimen? These considerations do not preclude the importance of carefully testing the PENG/hydrocortisone regimen itself. The complexity of this task alone is significant. Refinements of the regimen only add complexity, but could lead to more predictable, safer, and even more effective ALS treatment.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-410
|
https://f1000research.com/articles/4-480/v1
|
05 Aug 15
|
{
"type": "Software Tool Article",
"title": "PSFC: a Pathway Signal Flow Calculator App for Cytoscape",
"authors": [
"Lilit Nersisyan",
"Graham Johnson",
"Megan Riel-Mehan",
"Alexander R Pico",
"Arsen Arakelyan",
"Lilit Nersisyan",
"Graham Johnson",
"Megan Riel-Mehan",
"Alexander R Pico"
],
"abstract": "Cell signaling pathways are sequences of biochemical reactions that propagate an input signal, such as a hormone binding to a cell-surface receptor, into the cell to trigger a reactive process. Assessment of pathway activities is crucial for determining which pathways play roles in disease versus normal conditions. To date various pathway flow/perturbation assessment tools are available, however they are constrained to specific algorithms and specific data types. There are no accepted standards for evaluation of pathway activities or simulation of flow propagation events in pathways, and the results of different software are difficult to compare. Here we present Pathway Signal Flow Calculator (PSFC), a Cytoscape app for calculation of a pathway signal flow based on the pathway topology and node input data. The app provides a rich framework for customization of different signal flow algorithms to allow users to apply various approaches within a single computational framework.",
"keywords": [
"pathway signal flow",
"cytoscape",
"systems biology",
"networks",
"scoring algorithms",
"gene expression"
],
"content": "Introduction\n\nCell signaling pathways are sets of directed interactions between biological molecules, that are initiated by a particular signal (e.g. a ligand binding to a receptor) and result in realization of certain target processes (e.g. transcription of genes). Pathways can be represented as graphs, with nodes as biological entities (proteins, other biomolecules, chemical compounds, other pathways), and edges as physical or regulatory interactions between them. In contrast to protein-protein interaction networks, biomolecular pathways have directionality, input nodes, intermediate nodes and branches, and output or sink nodes.\n\nPathway Signal Flow (PSF), or perturbation, is the flux generated by propagation of the signal starting from input nodes, flowing through intermediate nodes in branches and accumulating at sink nodes. Thus, PSF can be an indicator of pathway activity state. Assessment of changes in pathway activity is of major interest for identification of processes involved in the formation of certain phenotypes (healthy and diseased states), and assessment of cell response to drugs and other stimuli. First attempts to globally evaluate the pathway activity changes were performed in parallel with the appearance of high-throughput gene expression measurement experiments. Pathway involvement is typically analyzed by over-representation analysis (ORA)1 or gene set enrichment analysis (GSEA)2. The major drawback of these widely used approaches is that they operate on gene sets involved in the pathway, but do not account for the pathway topology and ignore the interactions between the nodes.\n\nA number of techniques and tools have recently emerged, aimed at determining pathway activities based on topological information of pathways and gene expression/protein activity levels. One of the pioneering papers in this direction was the Pathway Impact Analysis algorithm, which combines GSEA with gene position in the network3. 
Other approaches apply specific rules to model flow or signal propagation through the pathway and evaluate the amount of the signal reaching the sink nodes4–6.\n\nThe above-mentioned algorithms and tools are implemented using various programming and scripting languages, making their use and the comparison of their results in a common context difficult. Moreover, they often work with programming-environment-specific objects, and are not flexible enough to use biological pathways that appear in various formats. Cytoscape, on the other hand, is a powerful and flexible platform that, together with its diverse collection of available apps, provides a rich environment for parsing, visualization and analysis of networks7.\n\nHerein we present Pathway Signal Flow Calculator (PSFC), a Cytoscape app for computation of pathway signal flow based on input data and pathway topology. PSFC provides a variety of options for signal propagation, both those used in already published signal flow algorithms3–6 and new ones. Thus, it allows experimenting with the results obtained by various (existing and customizable) approaches within a single framework, and evaluating their ability to simulate real-life situations.\n\n\nMethods\n\nPSFC packages and data structures. PSFC is implemented in Java and is available as an app for Cytoscape 3. The main module consists of two main packages, logic and gui. The package logic is designed to handle PSFC-inherent structures and algorithms, while the gui package is responsible for user communication via the PSFC tab in the Cytoscape GUI west panel, and for mapping Cytoscape-inherent data structures to PSFC data structures (Graph, Node and Edge) contained in the logic package (Figure 1).\n\nOnly the main packages are displayed. In the right-hand diagram, the packages logic.algorithms and logic.structures are extended to respective classes. Communication to and from the user is performed via the PSFCPanel class and the gui.actions package.
PSFC-inherent structures and algorithms are implemented in the packages logic.algorithms and logic.structures. The classes extended from the packages logic.structures and logic.algorithms are shown with green and blue arrows, respectively. Class dependencies are indicated with grey arrows.\n\nGraph sorting. Graph sorting is the first step before proceeding to signal flow calculation. The aim of sorting is to assign levels to the nodes, so that the signal can be propagated from lower- to higher-level nodes.\n\nWe have modified the topological sort algorithm implemented in the Java JGraphT library [http://jgrapht.org/] to handle graphs containing multiple input nodes.\n\nNote that biological networks often contain feedback loops, which create cycles in graphs. PSFC first performs a depth-first search traversal and removes backward edges from the graph, then performs topological sorting on the resulting acyclic graph, after which the backward edges are restored. Finally, node levels of the sorted graph are mapped to the Cytoscape node attributes table.\n\nPathway signal flow calculation. In biological signaling networks, the signal is propagated via interactions between source-target node pairs. The outcome of signal propagation events is the signal (PSF value) accumulated at each network node. Figure 2 provides an example of how the signal propagates through a sample network, with various signal propagation options applied.\n\nRed and blue edges are of types “activation” and “inhibition”, respectively. Multiple signals at a target node are computed by addition (Add), multiplication (Mult), or by updating target node signals (Update). Signal splitting is set either to “none” or to the “equal” (Equal) or “proportional” (Prop) rules, and is performed either on multiple outgoing edges (Out) or on multiple incoming edges (In).\n\nRules for simple source-target interactions.
Functional interaction types can be broadly defined as activation or inhibition, while the range of physical and regulatory interactions is much wider (phosphorylation, binding, dimerization, ubiquitination, etc.). An edge in a graph carries a signal transfer function, which depends on the interaction type. PSFC allows the user to define the interaction type of each edge in the network, as well as define the edge-type specific mathematical functions of wide complexity. These functions should have the form f (source, target), where the source and target variables stand for the source node signal and the target node value. The functions are parsed with Exp4j Java library for symbolic operations [http://www.objecthunter.net/exp4j/]. Function assignment for different edge types is shown in Figure 2.\n\nRules for multiple incoming and outgoing signals. Generally, the intensity of interactions between molecules largely depends on their concentration and activation state. However, if a node has several interacting partners, those may compete with each other, and the interaction capacity of the node may be “split” between those partners. Thus, there is the option to proportionally split the signal among multiple edges starting from a single source or ending on a single target node. The signals on multiple edges ending on a single target node may be processed in one of the following three ways: the signals may be computed separately at each edge and added (i) or multiplied (ii) to each other, or they may be processed in order (iii), by updating the signal at a target node each time a single edge is processed. The order, in which the edges are processed in the last case, may be adjusted by user defined edge ranks (Figure 2).\n\nHandling of feedback loops. The presence of negative and positive feedback loops in biomolecular pathways is of paramount importance for pathway functionality and regulation. 
However, currently it is a major obstacle for developing optimal algorithms for pathway activity assessment. To our knowledge, there is no single solution for treatment of loops in signal propagation algorithms, thus PSFC provides several options for loop handling:\n\nIgnore feedback loops: In this case cycle-forming backward edges are ignored during PSF calculations (Figure 3B).\n\nPrecompute signals at loops: In this mode, the algorithm firstly finds cycle-forming backward edges, computes their signals, and updates their target node values. Afterwards, the algorithm runs on the whole graph in the “ignore feedback loops” mode (Figure 3C).\n\nIterate until convergence: The algorithm runs for several rounds, until convergence of signal flow values is reached (Figure 3D–F). Convergence is reached if the percentage of signal changes between two iterations is less than the specified convergence threshold at all the nodes. If convergence is not achieved, the algorithm stops after running for a defined number of iterations. The user may check the convergence status of the calculations in the PSFC log file and in the command prompt.\n\nThe PSF flow rules are: for red edges of type activation (*): source * target; for blue edges of type inhibition (/): 1/source * target; Multiple input signals: Addition; Splitting: Proportional; Split on: Incoming edges. The “Ignore feedback loops” mode (B) does not account for backward edges. In the “Precompute loops” mode (C), the backward edge signal first updates the target node value, and is ignored in the following single iteration. (D–E): signal flow at different iterations with “Iterate until convergence” mode. The network converges at iteration 10. The dynamics of flow changes at each node during 10 iterations are shown as line charts in (F). Edges ignored during the computation are indicated by red X symbols.\n\nSignificance calculation. The significance value of signal flows at each node is computed using bootstrapping. 
The user may choose between sample-centric or gene-centric bootstrapping modes. In the sample-centric mode, the values of the nodes in the network will be reshuffled among each other during resampling. In the gene-centric mode, the value of each node is randomly chosen from a supplied distribution of node values, e.g., from measurements of a given gene’s expression across multiple samples.\n\nPSFC is implemented for Cytoscape version 3.2 and higher, with Java 1.7 or higher. PSFC may be installed with either the Cytoscape App Manager or by direct download of the jar file from http://apps.cytoscape.org/apps/psfc. The whole functionality of PSFC is accessible to users via a single tab in the Cytoscape GUI west panel.\n\nGeneral use case of PSFC. The main use case of the app is presented in Figure 4. PSFC operates on any network loaded into the Cytoscape environment. Node data and edge types should be loaded into Cytoscape attribute tables, while signal propagation rules should be set in respective PSFC GUI tabs (Figure 4). PSF computation is performed with the “Compute flow” button. The resulting PSF values are stored both in Cytoscape attribute tables, and in PSFC output files (the score backup file and psfc.log file in text formats). The signal propagation may be visualized via node color and edge width mapping, where continuous values are mapped to color gradients and width ranges at a chosen level or across all levels in a sequence.\n\nThe user should load the network into Cytoscape, and import node and edge attributes using the Cytoscape environment. Further, the user sets the rules for signal propagation, loop handling and significance calculation. After PSF calculation, the signal propagation may be visualized in Cytoscape. The dashed rectangles are optional.\n\nPSF calculation on MAPK signaling pathway: a use case. We evaluated signal flow changes in the MAPK signaling pathway network taken from previously published papers8,9. In their paper, Nelander et al. 
have performed a series of experiments in which they downregulated one or more of the MAPK pathway proteins, and measured the changes in protein phosphorylation levels and the states of G1-arrest and apoptosis8. Feiglin et al.9 have compared the experimental data with their predictions, based on a wiring algorithm described in their paper9. We have repeated the same experimental simulations to compare the performance of PSFC with the wiring algorithm and with the experimental data. The node values were presented as gene expression fold change (FC) values that show the relative increase or decrease of gene expression compared to the reference state. In the reference state, the amount of PSF should be 1, corresponding to the normal level of pathway activity necessary to realize the target biological process. Departure of PSF values from 1 indicates an up- or down-regulation of the pathway. To simulate this situation, we have applied the following rules for signal propagation. The single edges were treated with (source*target) and (1/source*target) functions for edges of types activation and inhibition, respectively, ensuring that an FC change on a node propagates proportionally via signal perturbations to downstream nodes. Furthermore, we have applied splitting on incoming edges and addition of multiple incoming signals on a single target node. This is based on the speculation that, in signaling pathways, the capacity of a protein to interact with upstream agents depends on the relative frequency of co-occurrence and the interaction strength with those agents, which, in this case, is represented by the PSF signals of the source nodes. Finally, loop handling was in “iterate until convergence” mode, since the absence of positive feedback loops in the MAPK pathway and the FC representation of the node values ensure that the algorithm will converge.\n\nWe have performed PSF calculations in 6 different experiments.
In each of these experiments, one of the IGF1R, PI3K, mTOR, PKCdelta, p-MEK, or EGFR nodes was assigned a value of 0.1 (down-regulated), while the rest of the nodes had “fc” values of 1, which corresponds to the unchanged state compared to the control (Figure 5A).\n\nThe MAPK signaling pathway network is generated from the model used in 8. Part A shows the pathway, as visualized in Cytoscape after PSF calculations. Edges of activation and inhibition types have delta- and T-shaped target arrows. The darker colors of nodes indicate greater PSF values, and the line width corresponds to the signal intensities at the edges. The prediction results of six perturbation experiments by PSFC and the pathway wiring algorithm9 are presented in part B, similar to the representation in 9. In each of the six experiments, one node was down-regulated, as indicated by the “x” sign. The predicted perturbations at the “G1-arrest” and “apoptosis” nodes are shown with up and down arrows. Green and red colors indicate consistency and inconsistency, respectively, of the predictions with the experimental results presented in 8.\n\nUnder all the experimental settings, we predicted up-regulation of the “G1-arrest” and “apoptosis” nodes, which is in full accordance with the predictions of Feiglin et al.9. These predictions deviated from the experimental outcomes8 in only one case (Figure 5B). The network xml file and all configuration files are available in Supplementary material, MAPK_psfc_configurations.rar.\n\n\nSummary\n\nWe have developed PSFC, a Cytoscape app for PSF calculation. The main purpose of the app is to evaluate signal flow propagation in pathways and assess activity states of pathway components, based on input data and the topology.
PSFC may be used to assess pathway activity deregulation in different conditions, for simulation studies of network dynamics, etc.\n\nCompared to other similar software, PSFC stands out for its wide set of rules and options for signal propagation, which makes it possible to use the app for the majority of algorithms that could be applied for pathway flow calculations in different biological contexts. It is thus not constrained by a preset algorithm design, but allows users to apply their own algorithms. PSFC can therefore be used in routine data analysis by bench biologists via the available presets, but can also become a powerful tool for sophisticated pathway analyses in the hands of a bioinformatics-skilled user.\n\n\nSoftware availability\n\nSoftware available from: http://apps.cytoscape.org/apps/psfc\n\nhttps://github.com/lilit-nersisyan/psfc/\n\nhttps://github.com/F1000Research/PSFC\n\nhttp://dx.doi.org/10.5281/zenodo.1946510\n\nPSFC is free software; it can be distributed and/or modified under the terms of the GNU General Public License version 3. The license can be found at http://www.gnu.org/licenses/gpl.html. The exp4j library is distributed under Apache License 2.0, while the JGraphT library is dual-licensed GNU Lesser General Public License and Eclipse Public License.",
"appendix": "Author contributions\n\n\n\nLN and AA performed software design and development, testing and analyses, and manuscript preparation. GJ, MR-M, and AP contributed to software design and manuscript preparation. All the authors have read and approved the final manuscript.\n\n\nCompeting interests\n\n\n\nThe authors disclose no competing interests.\n\n\nGrant information\n\nThis work was funded by Google Summer of Code 2014 (Student: LN, Mentor: GJ), and the Armenian National Science and Educational Foundation research grant molbio-3818 (PI: LN).\n\nI confirm that the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nSupplementary material\n\nThe archive contains source and configuration files used to generate the data described in the “PSF calculation on MAPK signaling pathway: a use case” subsection of the manuscript. The MAPK_network.xml file contains the network model, which should be imported into Cytoscape. The EdgeTypeRuleName.config and RuleNameRule.config files are configuration files defining simple rules for flow propagation. The psfc.props contains the properties used by PSFC during calculations. The Flow_propagation_rules.pdf describes the applied rules and options.\n\nClick here to access the data\n\nThe user manual covers installation notes, a generic use case and a specific example, in addition to an overview of the graphical user interface and details about network sorting, flow rule configurations, pathway flow calculation and flow visualization. This supplemental file is a static version of the manual. For the latest online version, click on the Tutorial link on the PSFC app page: http://apps.cytoscape.org/apps/psfc.\n\nClick here to access the data\n\n\nReferences\n\nKhatri P, Drăghici S: Ontological analysis of gene expression data: current tools, limitations, and open problems. Bioinformatics. 2005; 21(18): 3587–3595. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nHung JH: Gene Set/Pathway enrichment analysis. Methods Mol Biol. 2013; 939: 201–13. PubMed Abstract | Publisher Full Text\n\nDraghici S, Khatri P, Tarca AL, et al.: A systems biology approach for pathway level analysis. Genome Res. 2007; 17(10): 1537–45. PubMed Abstract | Publisher Full Text | Free Full Text\n\nIsik Z, Ersahin T, Atalay V, et al.: A signal transduction score flow algorithm for cyclic cellular pathway analysis, which combines transcriptome and ChIP-seq data. Mol Biosyst. 2012; 8(12): 3224–31. PubMed Abstract | Publisher Full Text\n\nArakelyan A, Aslanyan L, Boyajyan A: High-throughput Gene Expression Analysis Concepts and Applications. Sequence and Genome Analysis II – Bacteria, Viruses and Metabolic Pathways. iConcept Press. 2013; ISBN: 978-1-480254-14-5. Reference Source\n\nHaynes WA, Higdon R, Stanberry L, et al.: Differential expression analysis for pathways. PLoS Comput Biol. 2013; 9(3): e1002967. PubMed Abstract | Publisher Full Text | Free Full Text\n\nShannon P, Markiel A, Ozier O, et al.: Cytoscape: a software environment for integrated models of biomolecular interaction networks. Genome Res. 2003; 13(11): 2498–504. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNelander S, Wang W, Nilsson B, et al.: Models from experiments: combinatorial drug perturbations of cancer cells. Mol Syst Biol. 2008; 4: 216. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFeiglin A, Hacohen A, Sarusi A, et al.: Static network structure can be used to model the phenotypic effects of perturbations in regulatory networks. Bioinformatics. 2012; 28(21): 2811–8. PubMed Abstract | Publisher Full Text\n\nNersisyan L, Johnson G, Riel-Mehan M, et al.: F1000Research/PSFC. ZENODO. 2015. Data Source"
}
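The propagation scheme described in the PSFC article (nodes sorted into topological levels, per-edge transfer functions such as source*target for activation and 1/source*target for inhibition, and addition of multiple incoming signals) can be sketched as follows. This is a minimal illustrative re-implementation in Python under those assumptions, not the PSFC Java code itself; the node names and the toy three-node pathway are hypothetical.

```python
# Minimal sketch of level-by-level pathway signal flow (PSF) propagation,
# following the rules described in the PSFC article: activation edges use
# source * target, inhibition edges use 1/source * target, and multiple
# incoming signals at a node are added ("Add" rule). Illustrative only --
# this is not the PSFC Java implementation, and the toy pathway below is
# hypothetical.

def propagate_psf(values, edges, levels):
    """values: {node: input value}; edges: [(source, target, kind)];
    levels: {node: topological level}. Returns {node: PSF signal}."""
    psf = dict(values)  # level-0 (input) nodes keep their input values
    for level in range(1, max(levels.values()) + 1):
        for tgt in (n for n in values if levels[n] == level):
            incoming = []
            for src, t, kind in edges:
                if t != tgt:
                    continue
                if kind == "activation":   # rule: source * target
                    incoming.append(psf[src] * values[tgt])
                else:                      # inhibition rule: 1/source * target
                    incoming.append(values[tgt] / psf[src])
            if incoming:
                psf[tgt] = sum(incoming)   # "Add" rule for multiple inputs
    return psf

# Toy pathway: a receptor activates a kinase, which inhibits an effector.
values = {"receptor": 2.0, "kinase": 1.0, "effector": 1.0}
edges = [("receptor", "kinase", "activation"),
         ("kinase", "effector", "inhibition")]
levels = {"receptor": 0, "kinase": 1, "effector": 2}
print(propagate_psf(values, edges, levels))
```

As in the MAPK use case, node inputs of 1 represent the reference state, so a fold change of 2 on the receptor doubles the kinase signal and, through the inhibition rule, halves the effector signal relative to baseline.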
|
[
{
"id": "9853",
"date": "20 Aug 2015",
"name": "David Ruckerbauer",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe authors introduce an app for Cytoscape called 'Pathway Signal Flow Calculator' (PSFC). PSFC allows to simulate and visualize the propagation of signals through a pathway, considering both the pathway topology and input data for 'node activity', such as protein phosphorylation. The aim is to provide a flexible environment that supports the integration of various custom algorithms and data sets.The paper describes several rules and algorithms for signal flow and demonstrates their effect on two small toy models as well as a bigger use case (MAPK signaling pathway), all in context of using PSFC.Furthermore, the authors provide the use case with the corresponding input files as supplementary material, a very detailed step-by-step manual for the different options of PSFC and the source code of the app.I have tested the app under Linux Mint 17.2 (64-bit), Mac OS X 10.7.5 (64-bit) and Windows Vista (32-bit) (see minor issues below).While my assessment of this paper is 'Approved', I have come across some issues that are detailed below. Additionally to major and minor issues I have added a section with suggestions that I personally feel might improve PSFC. I would like to stress, however, that PSFC has full usability without considering any of the points in the latter section.Major IssuesFigure 2: The definition of inhibition as 'source – target' seems counterintuitive, as a strong signal from the source should decrease the result in the target node. Should it be 'target – source'? 
This definition is also used by Isik et al. (Reference 4 in the paper) as well as in the document 'rules_presets.pdf' provided by the authors. Please be aware that both the text and the figures would need to be updated.Figure 2 (F,G,H): 'Multiple input fields' is set to 'Update', the figures, however, correspond to 'Add'.If it should indeed be 'Update' the values for the bottom right box are wrong in all three figures.While all necessary input files for the use case are provided in the supplementary material the output is missing (apart from the screenshot of a log-file in the manual, which is not easy to read). This information should definitely be provided for all perturbations in order to be able to compare numerical values; a qualitative comparison is not enough, especially since all results are the same (up-regulation).Ideally the toy model from Figure 2 would also be provided as an .xgmml-file, as this would enable readers to test and verify different settings.Minor IssuesClicking the link for reference 5 gives an error (page not found).It is possible to install PSFC in Cytoscape running on Windows Vista 32-bit (with current versions of Cytoscape and Java), but running the calculation gives an error. I am aware that Cytoscape generally recommends using a 64-bit system, but if PSFC is for 64-bit systems only then this information could be added to the manual.Redoing a calculation shuffles the sorting of the columns in the node / edge attribute table (in terms of the order of psf_l0, psf_l1, etc.), in Linux Mint.Setting mTOR to an fc of 2 in the MAPK-example will lead to a final state where the box 'apoptosis' is completely white, even though the value is 0.75, and the box for 'p-RAF' is completely black (with a value of 4.59). This may or may not be an issue with the Cytoscape installation under Linux Mint.I don't understand the purpose of the 'level textbox' in the flow visualization window. 
Does it only show the current maximum level?Page numbers should be added to the manual.SuggestionsUsing a two color scheme (or maybe an easily customizable one) would improve the visualization. Right now even negative values are shown in the same color as positive ones and they are indistinguishable from low positive numbers.An option to visualize the values for nodes and edges might be helpful.Clicking the 'Play flow' button in the flow visualization window causes a pop-up which covers part of the network visualization. Maybe this pop-up could be forced to a bottom corner of Cytoscape.The 'play flow' feature is on a very high speed if there is only a small number of levels. Ideally either the speed should be easily adjustable (and an endless mode available) or maybe the 'Show state' button could be removed and the visualization updated directly via the slider. This would allow to control the speed and direction of the visualization very easily. It would also solve the above mentioned issue of the pop-up.Is the use of two config files (Edge Types config file and Rule config file) a requirement of Cytoscape or the parser? If not it would seem simpler to merge them and remove the redundant 'Function names'.The file 'rules_presets.pdf' could be added to the manual instead of being a separate file, especially since there already is an extra pdf for the use case.Additional toy models with the relevant input/output in order to test and verify 'significance calculation', handling feedback loops and edge weights might be helpful.The paper explicitly mentions 'bench biologists' as potential users of PSFC. I feel that the app is described sufficiently in order to allow a user unfamiliar with modeling and/or Cytoscape to use it successfully. The bottleneck would probably be generating the necessary input model. 
Maybe a short paragraph in the manual pointing to the relevant sections in the Cytoscape manual as well as describing the creation and changing of node and edge attribute tables would remove this obstacle.",
"responses": [
{
"c_id": "2601",
"date": "03 Apr 2017",
"name": "Lilit Nersisyan",
"role": "Author Response",
"response": "Response to the reviewer We would like to thank the reviewer for their thorough revision of the manuscript, and for the useful suggestions. We have addressed the issues and suggestions and have incorporated them in the version 2 of the manuscript and in version 1.1.2 of PSFC. Here, the paragraphs containing responses to the reviewer’s comments are indicated with “Response:” in the beginning. Major Issues Figure 2: The definition of inhibition as 'source – target' seems counterintuitive, as a strong signal from the source should decrease the result in the target node. Should it be 'target – source'? This definition is also used by Isik et al. (Reference 4 in the paper) as well as in the document 'rules_presets.pdf' provided by the authors. Please be aware that both the text and the figures would need to be updated. Response: We agree with the reviewer. Although the user may set any rule they desire, “target – source” would be more intuitive for inhibition type of edges. We have updated the example and the figures respectively. Figure 2 (F,G,H): 'Multiple input fields' is set to 'Update', the figures, however, correspond to 'Add'. If it should indeed be 'Update' the values for the bottom right box are wrong in all three figures. Response: We agree with the reviewer. The rule for multiple inputs is addition, while its mentioned as “update” in the figures 2 F,G,H. We have made respective changes. While all necessary input files for the use case are provided in the supplementary material the output is missing (apart from the screenshot of a log-file in the manual, which is not easy to read). This information should definitely be provided for all perturbations in order to be able to compare numerical values; a qualitative comparison is not enough, especially since all results are the same (up-regulation). Response: We have presented the PSF scores of all the experiments in a separate document in the supplementary material. 
Ideally the toy model from Figure 2 would also be provided as an .xgmml-file, as this would enable readers to test and verify different settings. Response: The example network in Figure 2 is provided as an xgmml file. In addition, we have provided instructions to replicate the settings applied in the different subfigures. Minor Issues Clicking the link for reference 5 gives an error (page not found). Update the link with https://www.iconceptpress.com/books/genomics-ii--bacteria-viruses-and-metabolic-pathways/11000061/1205000491.pdf. It is possible to install PSFC in Cytoscape running on Windows Vista 32-bit (with current versions of Cytoscape and Java), but running the calculation gives an error. I am aware that Cytoscape generally recommends using a 64-bit system, but if PSFC is for 64-bit systems only then this information could be added to the manual. Response: This has been explicitly mentioned in the Manual. Redoing a calculation shuffles the sorting of the columns in the node / edge attribute table (in terms of the order of psf_l0, psf_l1, etc.), in Linux Mint. Response: Unfortunately, it is currently impossible to order the columns in Cytoscape. This is an ongoing task and will soon be addressed by the Cytoscape development team. Setting mTOR to an fc of 2 in the MAPK-example will lead to a final state where the box 'apoptosis' is completely white, even though the value is 0.75, and the box for 'p-RAF' is completely black (with a value of 4.59). This may or may not be an issue with the Cytoscape installation under Linux Mint. Response: Versions 1.0.2 and 1.1.2 of PSFC allow for setting the color scheme based on custom preferences, so the user can avoid completely white or black nodes. I don't understand the purpose of the 'level textbox' in the flow visualization window. Does it only show the current maximum level? 
Response: The flow visualization is based on sequential mapping of the node and edge signals, from the input nodes (level 0) to the sink nodes (highest level). The level textbox may thus be considered an indication of the time-step. In PSFC 1.0.2 and 1.1.2 we have made it non-editable and named it “time-step”, so that it is more understandable to the user. Page numbers should be added to the manual. Response: According to the reviewer’s comment, we have added page numbers to the manual. Suggestions Using a two-color scheme (or maybe an easily customizable one) would improve the visualization. Right now even negative values are shown in the same color as positive ones and they are indistinguishable from low positive numbers. Response: Thank you for the suggestion. We have added a two-color scheme in PSFC 1.0.2, with an option to also specify the middle color and signal value. Additionally, the user may choose the range of edge widths. In PSFC 1.1.2 the minimum and maximum values may also be modified by the user. An option to visualize the values for nodes and edges might be helpful. Response: The initial values of nodes and edges can be visualized by flow visualization at time-step 0. We have additionally described this in the manual. Clicking the 'Play flow' button in the flow visualization window causes a pop-up which covers part of the network visualization. Maybe this pop-up could be forced to a bottom corner of Cytoscape. The 'play flow' feature runs at a very high speed if there is only a small number of levels. Ideally either the speed should be easily adjustable (and an endless mode available) or maybe the 'Show state' button could be removed and the visualization updated directly via the slider. This would allow the user to control the speed and direction of the visualization very easily. It would also solve the above-mentioned issue of the pop-up. 
Response: According to the reviewer's suggestion, we have removed the “Play flow” button in PSFC 1.0.2 and added control buttons, so that the user is able to visualize the pathway states manually. This also removed the problem with the pop-up. Is the use of two config files (Edge Types config file and Rule config file) a requirement of Cytoscape or the parser? If not it would seem simpler to merge them and remove the redundant 'Function names'. Response: The edge types config file describes the mapping between edge types and function names, while the rule config file assigns a mathematical function to each function name. The reason these two files are separate is that multiple edge types can be treated with the same rule, and writing the same function for that rule multiple times in one file may produce errors. Thus, we think that keeping the mappings separate is preferable. The instructions for using the Figure 2 toy example of the manuscript also describe the advantage of having two separate config files. The file 'rules_presets.pdf' could be added to the manual instead of being a separate file, especially since there already is an extra pdf for the use case. Response: The “rules_presets.pdf” is intended to assist the user specifically in setting up the options for flow propagation rules. While the user may be familiar with the rest of the options in PSFC, this is intended to be a quick look-up guide for rules. Thus, we made it a separate file, to free the user from scrolling through the manual whenever they are unsure which rule to use. Additional toy models with the relevant input/output in order to test and verify 'significance calculation', handling feedback loops and edge weights might be helpful. Response: The toy network model for loop-testing has also been added to the supplementary material of the second version of the manuscript. The paper explicitly mentions 'bench biologists' as potential users of PSFC. 
I feel that the app is described sufficiently to allow a user unfamiliar with modeling and/or Cytoscape to use it successfully. The bottleneck would probably be generating the necessary input model. Maybe a short paragraph in the manual pointing to the relevant sections in the Cytoscape manual, as well as describing the creation and changing of node and edge attribute tables, would remove this obstacle. Response: We agree with the reviewer. We have referred the user to the respective pages of the Cytoscape manual in the PSFC User Manual, where appropriate."
}
]
},
{
"id": "10923",
"date": "27 Nov 2015",
"name": "Maarten Altelaar",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe authors present PSFC, a cytoscape app for the analysis of pathway signal flow to indicate pathway activity states. A user manual is presented with some example datasets.In the manual the various steps are very well described. My assessment for this paper is 'Approved', the app helps in visualization pathway flow and generates an easy interpretable network.Just some minor comments and recommendations: The flow visualization part and the Rules tab in the control panel is not completely visible. The bottom part falls outside the panel. And also the 'Min', 'Mid', 'Max' text boxes are very small, making it difficult to modify the numbers inside these boxes. When pathways are loaded from ReactomeFIPlugin, the edge annotation column contains often more than 1 entry. This causes problems in the flow calculation steps. An idea is to split this column into the separate entries and check which entry is described in the edgeType config file. When the pathway is loaded from a plugin inside Cytoscape, like Reactome, the network file is not saved yet. This caused problems with writing of the results output file. It would be better if the network file could be automatically saved at this step. Some interactions could represent a complex, instead of an 'inhibition' or 'activation' signal, in this case there is not really a rule that would describe this situation, besides maybe source == target, but unfortunately I get an error when I use the '=' sign to describe an edge rule.",
"responses": [
{
"c_id": "2600",
"date": "03 Apr 2017",
"name": "Lilit Nersisyan",
"role": "Author Response",
"response": "Response to the reviewers We would like to thank the reviewer for their comments and recommendations. We have addressed those in the version 2 of the manuscript and in version 1.1.2 of PSFC. Here, the paragraphs containing responses to the reviewer’s comments are indicated with “Response:” in the beginning. The flow visualization part and the Rules tab in the control panel is not completely visible. The bottom part falls outside the panel. And also the 'Min', 'Mid', 'Max' text boxes are very small, making it difficult to modify the numbers inside these boxes. Response: We have changed the layout of PSFC 1.1.2 accordingly. It is still possible that the user experiences distortions depending on the screen resolution. We would like to receive comments from users in such cases to make appropriate changes in further versions of the app. When pathways are loaded from ReactomeFIPlugin, the edge annotation column contains often more than 1 entry. This causes problems in the flow calculation steps. An idea is to split this column into the separate entries and check which entry is described in the edgeType config file. Response: We understand the concern brought by the reviewers. However, while this feature would be useful in the particular case when pathways are loaded from the ReactomeFIPlugin, in other cases this would cause unnecessary complications. Therefore, we have left this split operation to the user: they may create an additional column containing the proper edge annotation entries. When the pathway is loaded from a plugin inside Cytoscape, like Reactome, the network file is not saved yet. This caused problems with writing of the results output file. It would be better if the network file could be automatically saved at this step. Response: In PSFC the user should specify which network to perform calculations on. There may be many networks loaded into the Cytoscape session, while the user would desire to perform PSF on only one network. 
Therefore, we find that automatically saving the networks should not be a feature of PSFC. In case of issues, we advise the user to send us the details of the errors so that we can perform appropriate fixes. Some interactions could represent a complex, instead of an 'inhibition' or 'activation' signal; in this case there is not really a rule that would describe this situation, besides maybe source == target, but unfortunately I get an error when I use the '=' sign to describe an edge rule. Response: We thank the reviewer for this comment. The case of “source == target” can be handled by simply putting “source” in the edge rule. This will lead to source signals being transferred to the targets as is. The cases of complex nodes are handled with a new feature of PSFC 1.1.2 – operator nodes. In addition to the rules assigned to edges, each specific node can also be assigned a function. E.g. if a node is a complex node, it can take the ‘min’ function, which means that the minimum signal of all the incoming nodes will be assigned to the target. The functions ‘min’, ‘max’, ‘mean’ and ‘prod’ are supported. The details are available in the user manual."
}
]
}
] | 1
|
https://f1000research.com/articles/4-480
|
https://f1000research.com/articles/6-406/v1
|
31 Mar 17
|
{
"type": "Research Article",
"title": "Factors associated with awareness about syphilis and gonorrhoea among women in Bangladesh",
"authors": [
"Mosharaf Hossain",
"Rafiqul Islam",
"Aziza Sultana Rosy Sarkar",
"Rafiqul Islam",
"Aziza Sultana Rosy Sarkar"
],
"abstract": "Background: Currently, syphilis and gonorrhoea among women is a topic great concernin Bangladesh. To date, little is known in the existing literature regarding its prevalence, and the current level of syphilis and gonorrhoea awareness among women with regard to prevention is inadequate. This research aims to identify factors associated with awareness of syphilis and gonorrhoea among women in Bangladesh. Methods: Data were collected from women by the Bangladesh Demographic and Health Survey (BDHS) 2011 as a cross-sectional study. The seven divisions surveyed were Dhaka, Rajshahi, Rangpur, Chittagong, Barisal, Khulna and Sylhet. The number of women in the seven divisions totalled 17,842. The chi-squared test and a logistic regression model were used to determine the social-demographic factors associated with awareness about syphilis and gonorrhoea among women in Bangladesh. Results: The rate of awareness about syphilis and gonorrhoea among women in Bangladesh was 13.3% and 15.7%, respectively. The chi-squared test and logistic analysis demonstrated that there is a significant association between the awareness of syphilis and gonorrhoea with the respondents’ age, location of the respondents’ house, educational level of the respondent, socioeconomic status, geographic region, and respondents that listened to the radio and watched TV. Conclusions: There is an essential need to expand the learning and teaching program in Bangladesh regarding syphilis and gonorrhoea, mainly among younger women (<25 years) in all topographical and rural areas. Advertising drives and mass broadcasting programs can be used to increase knowledge within societies, particularly among women. In addition, the low awareness of syphilis and gonorrhoea indicates that prevention interventions are required among women.",
"keywords": [
"Syphilis",
"Gonorrhoea",
"Knowledge",
"Awareness",
"Women."
],
"content": "Introduction\n\nGenerally, women are in greater danger of contracting sexually transmitted infections compared with men1. Treponema pallidum is a microaerophilic spirochete that causes syphilis, a chronic systemic venereal illness with various characteristics, which is also characterised by latent periods and flare-ups or incidents of energetic virus1. Gonorrhoea is a general venereal illness caused by the bacterium Neisseria gonorrhoea. Symptoms include painful urination and pain around the urethra. Virtually any mucous membrane can be infected2,3. Previous research has shown that the rate of awareness of gonorrhoea was 4%, while that for syphilis was 5% among 1,550 women in Bangladesh4, while further research showed that rate of awareness for syphilis and gonorrhoea was 0.9% and 0.5%, respectively4. The World Health Organization reported an incidence of 340 million individuals with gonorrhoea and syphilis among 15–49 year olds, the majority of whom resided in Asia5. In developed and developing countries, such as Bangladesh, gonorrhoea and syphilis area major health and economic problem5. Per day more than 1 million individuals obtain a sexually transmitted infection, and per year, a projected 500 million individuals contract one of four sexually transmitted infections, including gonorrhoea and syphilis6.\n\nGeographic region, place of residence, respondent’s age, education, listening to the radio and watching television have a sufficient association with knowledge concerning sexually transmitted diseases, in general, among women in Bangladesh7. Syphilis and gonorrhoea are harmful to the health of women and infants8–10. Gonorrhoea is caused by pelvic inflammatory diseases, which can lead to sterility, ectopic gravidity, and long-lasting pelvic pain11–15,16. Additional, investigation specified that syphilis and gonorrhoea combined can be co-factors for HIV infection16–18,19. 
In Bangladesh, previous studies have identified the sero-prevalence of sexually transmitted infections and reproductive tract infections in the general population20. However, a nationwide study concerning the rate of awareness of syphilis and gonorrhoea, specifically, among women in Bangladesh is lacking. Consequently, the goal of this study is to identify the associated factors concerning knowledge about these diseases among women in Bangladesh.\n\n\nMethods\n\nThis cross-sectional study used data collected in the Bangladesh Demographic and Health Survey (BDHS) 2011, which includes data collected from women. Dhaka, Rajshahi, Rangpur, Chittagong, Khulna, Barisal and Sylhet are seven administrative divisions in Bangladesh. Each division is subdivided into zilas (administrative areas), and each zila into upazilas (sub-administrative areas). Each urban area in an upazila is divided into wards, and into mohallas (villages) within a ward. Each rural area in an upazila is divided into union parishads (UP; local administrative areas) and mouzas (villages) within a UP. These divisions allow the country as a whole to be easily separated into rural and urban areas21.\n\nThe BDHS survey was conducted by a two-stage stratified sample of households. Initially, a total of 600 areas were selected, with 207 clusters in urban areas and 393 in rural areas. A complete household listing operation was then carried out in all of the selected areas to provide a sampling frame for the second-stage selection of households. In the second stage of sampling, a systematic sample of 30 households on average was selected per area to provide statistically reliable estimates of key demographic and health variables for the country as a whole, for urban and rural areas separately, and for each of the seven divisions. 
A total of 18,222 ever-married women aged 12–49 were identified in these households, and 17,842 were interviewed, yielding a response rate of 98%21.\n\nSPSS v21 was used to conduct the statistical analysis. χ2 tests were used to assess the association between awareness about syphilis and gonorrhoea and the respondent’s age, place of residence, education, socioeconomic grade, geographic region, and whether the respondents listened to the radio and watched TV. A p-value of <0.05 was considered significant, and 95% confidence intervals (CI) were calculated. To identify the factors predictive of awareness about syphilis and gonorrhoea among the socio-demographic variables (Table 1), a logistic regression analysis was conducted. The dependent variable used in the model was a binary variable: Y=1 if the woman has awareness about syphilis and gonorrhoea, and Y=0 otherwise. Respondent’s age, place of residence, education, socio-economic grade, geographic region, and whether the respondent listens to the radio and watches TV were used as predictive variables (Table 1).\n\n*Based on BDHS, 2011; https://dhsprogram.com/pubs/pdf/fr265/fr265.pdf\n\n\nResults\n\nTable 2 presents the association between awareness about syphilis and gonorrhoea and the selected socio-demographic variables of women in Bangladesh. The rate of awareness about syphilis and gonorrhoea among women in Bangladesh was 13.3% and 15.7%, respectively. Women who were <25 years, 25–35 years and 36–49 years had awareness rates of 9.0%, 14.0% and 16.6% for syphilis, respectively, and 11.8%, 16.6% and 18.5% for gonorrhoea, respectively. Among all the women, 10.6% and 13.3% in rural areas and 18.2% and 20.4% in urban areas had awareness about syphilis and gonorrhoea, respectively. Only 18.9% and 22.8% of women that were educated at a secondary or higher level had awareness about syphilis and gonorrhoea, respectively, and 18.1% and 21.0% of rich women had awareness about syphilis and gonorrhoea, respectively. 
The women in the Barisal division had the highest awareness about syphilis and gonorrhoea (20.5% and 25.3%, respectively) of all the geographic regions (12.8% and 14.1%, Chittagong; 14.6% and 16.9%, Dhaka; 13.9% and 16.8%, Khulna; 10.9% and 13.1%, Rajshahi; 9.7% and 12.6%, Rangpur; 11.2% and 11.4%, Sylhet). Of the women who listened to the radio, only 17.5% and 20.2% knew about syphilis and gonorrhoea, respectively, and of those who watched TV, only 17.8% and 20.4% had awareness.\n\n*p<0.05 level of significance\n\nFrom Table 3, women aged 25–35 years and 36–49 years were, respectively, 1.91 and 3.01 times more aware of syphilis, and 1.77 and 2.63 times more aware of gonorrhoea, compared to women aged <25 years. Women that lived in rural areas had 0.72 and 0.82 times less awareness about syphilis and gonorrhoea, respectively, than women living in urban areas in Bangladesh. Education was shown to be an important factor for awareness about syphilis and gonorrhoea among women: women who had completed primary and secondary or higher education were, respectively, 1.56 and 3.41 times more aware of syphilis, and 1.59 and 3.72 times more aware of gonorrhoea, than women who had no education. The level of awareness about syphilis and gonorrhoea increased with the level of women’s education. Middle-class and rich women were, respectively, 1.17 and 1.23 times more aware of syphilis, and 1.06 and 1.22 times more aware of gonorrhoea, than poor women. In addition, women living in the Dhaka, Khulna, Chittagong, Rajshahi, Rangpur and Sylhet divisions had less awareness about syphilis and gonorrhoea than women living in the Barisal division. Women who listened to the radio and watched TV were, respectively, 1.17 and 1.01 times more aware of syphilis, and 1.13 and 1.11 times more aware of gonorrhoea, compared to women who did not listen to the radio or watch TV in Bangladesh.\n\n*p<0.05 level of significance\n\n\nDiscussion\n\nAwareness about syphilis and gonorrhoea leads to the promotion of health care among women in Bangladesh. 
The present study was designed to identify the awareness about syphilis and gonorrhoea among women. In this study, the rate of awareness about syphilis and gonorrhoea among women in Bangladesh is 13.3% and 15.7%, respectively. Previous studies in Bangladesh showed that the rate of awareness about syphilis and gonorrhoea was between 4–5.7% and 5–6.3%, respectively4,16. The Bangladesh government should give urgent attention to increasing awareness of syphilis and gonorrhoea, since these diseases can lead to ectopic pregnancy, low birth weight, pelvic inflammatory disease and infertility, which are increasing day by day16. Older women have a higher level of awareness about syphilis and gonorrhoea, since they have acquired knowledge related to sexuality and reproduction22. In this study, middle-aged and older women have better awareness about syphilis and gonorrhoea compared to younger women (<25 years). The traditional social system and health services overlook younger women in Bangladesh. The level of women’s education is significantly associated with awareness about syphilis and gonorrhoea. Education makes an important contribution to awareness, and it showed a statistically significant association with awareness in the current study (p<0.001). A higher level of education provides women with various opportunities, such as health care practices and knowledge of reproductive health. This is supported by the positive effect of education on the development of awareness about syphilis and gonorrhoea seen in previous studies23,24. In this study, women that live in urban areas have more awareness about syphilis and gonorrhoea, as do women in the Barisal (urban) area. Urban areas are exposed more to mass media and education programs compared with rural areas. Mass media is an important channel, as music, newspapers, songs and advertising can communicate awareness about syphilis and gonorrhoea. 
The major sources of information about syphilis and gonorrhoea for women are the radio and TV, and in the present study women that listened to the radio and watched TV were more likely to know about the two STIs. This is similar to the findings of Khan and Goel in their research: the level of awareness increased with age and literacy, which shows policymakers that educational intervention programs may be effective23,25. One of the limitations of this research is that the data were self-reported, and few studies have examined syphilis and gonorrhoea in Bangladesh. Therefore, Bangladesh needs more research about these diseases.\n\n\nConclusions\n\nKnowledge about infectious diseases, especially syphilis and gonorrhoea, in Bangladesh has been an important theme in population-based studies. Educating women is an important step in increasing knowledge and awareness about syphilis and gonorrhoea. Highly effective sexual health education should be included in textbooks and infectious disease prevention programmes, which will achieve positive health outcomes among the rural poor women of Bangladesh. At present, the level of awareness about syphilis and gonorrhoea indicates that women in several regions (the Rajshahi, Rangpur, Sylhet and Chittagong divisions) are at greater risk. Rural school-based educational programmes are needed to increase awareness about syphilis and gonorrhoea. Moreover, mass media (broadcasting and television) play a large role in increasing awareness about infectious diseases, such as syphilis and gonorrhoea. 
Therefore, Bangladeshi government policy should focus on increasing educational programmes at the public level about syphilis and gonorrhoea through the use of radio, television, the Internet, newspapers and textbooks.\n\n\nEthical approval\n\nEthical approval for this study was not applicable, since ethical approval for the collection of data was previously obtained for the BDHS.\n\n\nData availability\n\nThe data from BDHS 2011 are free to access (https://dhsprogram.com/data/dataset/Bangladesh_Standard-DHS_2011.cfm?flag=0); however, before downloading data, users must register as DHS data users. Dataset access is only granted for legitimate research purposes (https://dhsprogram.com/data/new-user-registration.cfm).",
"appendix": "Author contributions\n\n\n\nMH and ASRS participated in the design of the study and performed the statistical analysis. MRI conceived the study, and participated in its design and coordination and helped to draft the manuscript. All authors read and approved the final manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nAcknowledgments\n\nThe authors wish to acknowledge the BDHS, NIPORT, MEASURE DHS and ICF for data collection. The authors are particularly grateful for the professional work undertaken by BDHS, without which this work would not have been possible.\n\n\nReferences\n\nFarah A, Rahman MH, Rahman O, et al.: Socio demographic study of gonorrhoea and syphilis in two medical college hospital and two private chamber in Bangladesh. Med Today. 2013; 25(1): 18–20. Publisher Full Text\n\nCox DL, Chang P, McDowall AW, et al.: The outer membrane, not a coat of host proteins, limits antigenicity of virulent Treponema pallidum. Infect Immun. 1992; 60(3): 1076–83. PubMed Abstract | Free Full Text\n\nWolff K: Fitzpatrick’s Dermatology in General Medicine. Seventh edition. Mc Graw Hill Medical. 2008; 2: l1993–6. Reference Source\n\nKhan MA, Rahman M, Khanam PA, et al.: Awareness of sexually transmitted disease among women and service providers in rural Bangladesh. Int J STD AIDS. 1997; 8(11): 688–96. PubMed Abstract | Publisher Full Text\n\nWorld Health Organization: Global prevalence and incidence of selected curable sexually transmitted infections: overviews and estimates. Geneva, Switzerland, 2001. Reference Source\n\nWHO: Fact sheet: Sexually transmitted infections (STIs). World Health Organization. Geneva, Switzerland, 2013. Reference Source\n\nHossain M, Mani KK, Sidik SM, et al.: Knowledge and awareness about STDs among women in Bangladesh. BMC Public Health. 2014; 14: 775. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nCotch MF, Pastorek JG 2nd, Nugent RP, et al.: Trichomonas vaginalis associated with low birth weight and preterm delivery. The Vaginal Infections and Prematurity Study Group. Sex Transm Dis. 1997; 24(6): 353–60. PubMed Abstract | Publisher Full Text\n\nGoldenberg RL, Thom E, Moawad AH, et al.: The preterm prediction study: fetal fibronectin, bacterial vaginosis, and peripartum infection. NICHD Maternal Fetal Medicine Units Network. Obstet Gynecol. 1996; 87(5 Pt 1): 656–60. PubMed Abstract | Publisher Full Text\n\nNewton ER, Piper J, Peairs W: Bacterial vaginosis and intraamniotic infection. Am J Obstet Gynecol. 1997; 176(3): 672–7. PubMed Abstract | Publisher Full Text\n\nWeström L: Effect of acute pelvic inflammatory disease on fertility. Am J Obstet Gynecol. 1975; 121(5): 707–13. PubMed Abstract | Publisher Full Text\n\nHolmes KK, Eschenbach DA, Knapp JS: Salpingitis: overview of etiology and epidemiology. Am J Obstet Gynecol. 1980; 138(7 Pt 2): 893–900. PubMed Abstract | Publisher Full Text\n\nBrunham RC, Binns B, Guijon F, et al.: Etiology and outcome of acute pelvic inflammatory disease. J Infect Dis. 1988; 158(3): 510–7. PubMed Abstract | Publisher Full Text\n\nCates W Jr, Rolfs RT Jr, Aral SO: Sexually transmitted diseases, pelvic inflammatory disease and infertility: an epidemiologic update. Epidemiol Rev. 1990; 12: 199–220. PubMed Abstract\n\nWorld Health Organization: Tubal infertility: serologic relationship to past chlamydial and gonococcal infection. World Health Organization Task Force on the Prevention and Management of Infertility. Sex Transm Dis. 1995; 22: 71–7. PubMed Abstract\n\nGibney L, Macaluso M, Kirk K, et al.: Prevalence of infectious diseases in Bangladeshi women living adjacent to a truck stand: HIV/STD/hepatitis/genital tract infections. Sex Transm Inf. 2001; 77(5): 344–350. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWasserheit JN: Epidemiological synergy. 
Interrelationships between human immunodeficiency virus infection and other sexually transmitted diseases. Sex Transm Dis. 1992; 19(2): 61–77. PubMed Abstract | Publisher Full Text\n\nLaga M, Nzila N, Goeman J: The interrelationship of sexually transmitted diseases and HIV infection: implications for the control of both epidemics in Africa. AIDS. 1991; 5(Suppl 1): S55–63. PubMed Abstract\n\nCohen CR, Duerr A, Pruithithada N, et al.: Bacterial vaginosis and HIV seroprevalence among female commercial sex workers in Chiang Mai, Thailand. AIDS. 1995; 9(9): 1093–7. PubMed Abstract\n\nGani MS, Chowdhury AM, Nyström L: Urban–rural and socioeconomic variations in lifetime prevalence of symptoms of sexually transmitted infections among Bangladeshi adolescents. Asia Pacific Family Medicine. 2014; 13: 7. Publisher Full Text\n\nNational Institute of Population Research and Training (NIPORT): Bangladesh Demographic and Health Survey 2011. Mitra and Associates, and ICF International, 2013. Dhaka, Bangladesh and Calverton, Maryland, USA: NIPORT, Mitra and Associates, and ICF International. Dhaka, Bangladesh, 2013. Reference Source\n\nKhan MA: Knowledge on AIDS among female adolescents in Bangladesh: evidence from the Bangladesh demographic and health survey data. J Health Popul Nutr. 2002; 20(2): 130–137. PubMed Abstract\n\nJahan M: Women workers in Bangladesh garments industry: a study of the work environment. Int J Soc Sci Tomorrow. 2012; 1(3): 1–5.\n\nMondal NI, Hossain M, Rahman M: Knowledge and awareness about HIV/AIDS among garments workers in Gazipur District Bangladesh. Soc Sci. 2008; 3(7): 528–530. Reference Source\n\nGoel NP, Pandey MC: Awareness on AIDS & drug among college students in colleges of Meghalaya. Int Educ. 1997; 12(1): 1–6."
}
|
[
{
"id": "25690",
"date": "11 Sep 2017",
"name": "Tasnuva Wahed",
"expertise": [
"Reviewer Expertise Sexual and Reproductive Health",
"Cholera and Oral Cholera Vaccine"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis article provides updated information on awareness level of Bangladeshi women on two important sexually transmitted diseases, syphilis and gonorrhea including associated factors with the awareness level. It has been prepared based on national survey called Bangladesh Demographic and Health Survey (BDHS) 2011. Therefore, data used in this article is valid and widely acceptable. However, I have few suggestions and some clarity for better readability and understanding of the readers.\n\n1. Abstract:\nThe authors mentioned, “To date, little is known in the existing literature regarding its prevalence, and the current level of syphilis and gonorrhea awareness among women with regard to prevention is inadequate.” It is not clear whether there is lacking of information on prevalence of syphilis and gonorrhea or prevalence of awareness level on syphilis and gonorrhea. The authors did not measure the prevalence of syphilis and gonorrhea in this study. Please, clarify or revise this sentence.\n\n2. 
Introduction\nThe author should include some information on why Bangladeshi women are at risk of syphilis and gonorrhoea.\n\nLast sentence of first paragraph in Introduction section: “Per day more than 1 million individuals obtain a sexually transmitted infection, and per year, a projected 500 million individuals contract one of four sexually transmitted infections, including gonorrhoea and syphilis”- Are these infected individuals from Bangladesh or from the world population? Please also use a reference.\n3. Methods:\nSample design (Page 3):\n- I would prefer to use “Study design and study site” instead of sample design.\n- Did authors collect primary data using cross-sectional study design or secondary data review or analysis was applied, please clarify?\n\nSampling procedure:\n- What is a cluster, or how have these clusters been defined or created?\n- If description of this sampling procedure is published in a BDHS report, it can be used here as a reference.\n\nIf this is not secondary data analysis or it involves primary data collection, description of field visits is required.\n\nDescription of questionnaire is required.\n\nData analysis: The definition of “awareness about syphilis and gonorrhea” is required. The authors may include a specific or list of questions used to define or identity “awareness about syphilis and gonorrhea” as an example.\n\nTable 1: Did the authors mean the first category “1≤25” as “12-24” years? How many under 18 children were in this group? I would suggest to make two groups by stratifying first category as 12-17 as adolescent and 18-24 as youth. There should be a significant awareness difference between adolescents and youth groups.\n4. 
Results:\nTable 2: It should indicate total 'n' for Syphilis and total 'n' for Gonorrhoea in the Heading as it is not cited in the Method section.\n\nTable 3:\n- Place of residence: It has been interpreted as “Women that lived in rural areas had 0.72 and 0.82 times less awareness about syphilis and gonorrhoea, respectively, than women living in urban areas in Bangladesh (Column: Right, Paragraph:2, Line:4)”- I would suggest to check this interpretation by a statistician.\n- Geographic region: The author should include a justification in the Method’s data analysis paragraph why they used Barisal as a reference category. In my opinion, Dhaka can be a reference category as it is capital city of Bangladesh.\n\n5. Discussion:\nThe authors justified with possible reasons about their findings.\n\nIn first paragraph, the authors described the level of awareness separately at now and in the past. I would suggest to show this information in one sentence as, “Over one and a half decades (from 1997-2001 to 2011), the awareness on syphilis and gonorrhoea has been slightly/poorly/unsatisfactory increased from 4–5.7% to 13.3% and 5–6.3% to 15.7% respectively (ref).”\n\n6. Grammatical or Typo-errors:\nThe whole manuscripts should be checked by an English editor as a few typo-errors have been observed. Such as:\nAbstract: “Currently, syphilis and gonorrhoea among women is a topic great concern in Bangladesh. 
To date, little is known in the existing literature regarding its prevalence, and the current level of syphilis and gonorrhea awareness among women with regard to prevention is inadequate (Page 1).” The authors may want to write: Currently, syphilis and gonorrhoea among women is a topic of great concern in Bangladesh.\n\nMethods: “This cross-sectional study used data collected in the Bangladesh Demographic and Health Survey (BDHS) 2011, which includes data collected from women (Page-3).” The use of word ‘collected’ twice makes the sentence unclear.\n\nDiscussion: “A higher level education provides women with various opportunities, such as practice of health scare and knowledge on reproductive health (Page-6).” The author may write care instead of scare.\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nIs the study design appropriate and is the work technically sound? Partly\n\nAre sufficient details of methods and analysis provided to allow replication by others? Partly\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nPartly\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
},
{
"id": "25479",
"date": "18 Sep 2017",
"name": "David H. Martin",
"expertise": [
"Reviewer Expertise Sexually transmitted infections"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nI agree for the most part with the points made by the other reviewer of this paper. Here I will not repeat any of the points previously made but I have several to add which I think the authors and readers should consider.\nIntroduction 1st paragraph. One of the most common symptoms of gonorrhea in males, urethral discharge, was not mentioned.\n2nd paragraph. The following statement is incorrect: “Gonorrhea is caused by pelvic inflammatory diseases,…” I think the authors meant to say “Gonorrhea causes pelvic inflammatory disease which can lead to sterility…” This is a typographical error which significantly changed the meaning of a sentence. There are other less significant such errors throughout the paper and, as recommend by the other reviewer, the paper would benefit from proofreading to detect and correct these.\nBackground information on the national prevalence rates of syphilis and gonorrhea would be helpful in understanding the results of the study. Are such data available? If so they should be provided in the introduction.\nResults It is striking that all the independent variables tested where independently associated with level of knowledge about syphilis and gonorrhea despite the fact there must be significant confounding of the analysis i.e., for example, socioeconomic status is always strongly correlated with educational level. 
This correlation is so strong that it is somewhat surprising that both variables were independently associated with knowledge levels. The large sample size probably accounts for this, and it is the major strength of the paper.\nDiscussion: It was striking that the Barisal region had significantly higher knowledge levels than the other districts. From what I could learn on the internet, this is a relatively rural area which, based on the results of the comparison of rural vs. urban area knowledge levels, should have had lower knowledge levels. Dhaka is the largest city in Bangladesh, yet knowledge levels in the Dhaka region were significantly lower than in the Barisal region. It would be important for the authors to help the reader understand this unexpected finding in the Discussion section of the paper. If there are better sexual health education programs in Barisal, perhaps these could be adapted in other parts of the country.\nThe statement concerning the importance of mass media, that songs can communicate awareness of syphilis and gonorrhea, struck me. Is this really true in Bangladesh? If so, it would be novel, and it would be interesting to know more about the songs used for sexually transmitted infection (STI) education.\nConclusions: Four geographic regions were singled out as parts of Bangladesh with the lowest knowledge of syphilis and gonorrhea. However, it does not appear that there is much difference between these four regions and the Dhaka and Khulna regions. Only the Barisal region seems to have better knowledge of these two STIs than any of the other regions.\nThe main message of the paper is clear as stated in the concluding sentence. Much greater knowledge of STIs is needed throughout all of Bangladesh regardless of socioeconomic status and educational level. This will be a daunting task given the relatively low levels of access to mass media in the country.\n\nIs the work clearly and accurately presented and does it cite the current literature? 
Partly\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nI cannot comment. A qualified statistician is required.\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": []
},
{
"id": "25480",
"date": "25 Sep 2017",
"name": "Jeanne Marrazzo",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThere is so little information on awareness of common STIs in countries like Bangladesh that it is great to see that the authors included syphilis and gonorrhea in their assessment here. The study is strengthened by the nature of the survey which appears to be a nationally representative household sample (albeit from 2011, but that is OK given how little data we have from this region on this topic). My major question is about the conduct of the interviews themselves, especially given that I suspect a lack of privacy might have been one challenge, especially for women who might not have had much autonomy in some households (this may be presumptuous of me, so I would welcome the authors' provision of details). How were the women approached? Was a standard interview format used? Were the interviewers male? Was there any effort to match interviewers by sex? These are important considerations if we are to judge the results as representative and significant, and we need to know more about the methodology in general.\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nIs the study design appropriate and is the work technically sound? Partly\n\nAre sufficient details of methods and analysis provided to allow replication by others? No\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nPartly\n\nAre all the source data underlying the results available to ensure full reproducibility? 
No\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-406
|
https://f1000research.com/articles/6-404/v1
|
31 Mar 17
|
{
"type": "Research Note",
"title": "A scoring system for the evaluation of the mutated Crb1/rd8-derived retinal lesions in C57BL/6N mice",
"authors": [
"Danilo Concas",
"H.L. Cater",
"S.E. Wells",
"Danilo Concas",
"S.E. Wells"
],
"abstract": "As part of the International Mouse Phenotyping Consortium (IMPC) programme, the MRC Harwell is conducting a large eye morphology phenotyping screen on genetically modified mice compared to the baseline phenotype observed in the background strain of C57BL/6NTac. The C57BL/6NTac strain is known to carry a spontaneous mutation in the Crb1 gene that causes retinal degeneration characterized by the presence of white spots (flecks) in the fundus. These flecks potentially represent a confounding factor, masking similar retinal phenotype abnormalities that may be detected in mutants. Therefore we investigated the frequency, position and extent of the flecks in a large population of C57BL/6NTac mice to provide the basis for evaluating the presence of flecks in mutant mice with the same genetic background. We found that in our facility males were more severely affected than females and that in both males and females the most common localisation of the flecks was in the inferior hemicycle of the fundus.",
"keywords": [
"mouse phenotyping",
"Crb1/rd8 mutation",
"retina degeneration"
],
"content": "Introduction\n\nRetinal degeneration in mice occurs in many forms, many of which can be attributed to mutations in specific genes. Some of the reported types of retinal degeneration display a similar phenotype, characterised by the presence in the fundus of white spots of different shapes and sizes1–3. One of the causative mutations for retinal degeneration in the mouse is the spontaneous single nucleotide deletion rd8 in the Crb1 gene, situated on chromosome 11,4. It has been previously reported that the C57BL/6N strain, derived from the unaffected C57BL/6J strain, often presents typical retinal white spots (flecks) caused by the Crb1/rd8 mutation4. These have been described as dysplastic lesions affecting the retinal region between the inner and the outer nuclear layer and are mainly localised in the inferior part of the retina5,6. The observed phenotype is considered a possible confounding factor that could mask a phenotype with a similar appearance but a different causative gene mutation (Figure 1). This is of particular importance considering that the C57BL/6N line is a widely used commercial line and is the background strain used for the generation of gene-targeted mice in several mouse mutagenesis/phenotyping programmes, including the International Knockout Mouse Consortium (IKMC) and the International Mouse Phenotyping Consortium (IMPC).\n\nThe flecks appear in the superior hemicycle of the fundus because the image is inverted by the ophthalmoscope.\n\nOver the last 5 years of phenotyping mice through the IMPC pipeline at MRC Harwell, we have observed the presence of fundus flecks in both the knockout lines and in the C57BL/6NTac wild type mice. The number of affected individuals for each knockout line generated has been variable, as has the number of flecks present in each individual. 
As a result of such variability, the probability that the flecking in the mutant line is a phenotype attributable to the gene mutations rather than the background strain effect becomes questionable. To correctly interpret similar phenotypes in the knockout lines and exclude the contribution of Crb1/rd8-related flecks, we have created a scoring system to allow us to fully categorise the lesions present in the C57BL/6NTac mice in a systematic manner in order to provide a comprehensive background strain reference. The flecks scoring system takes into account the position of the flecks in the superior and inferior retinal hemicycle, as the retina is not uniformly affected by the phenotype5. As an innovative approach, we also scored the number of flecks in each hemicycle as a measure of the phenotypic penetrance. In addition, in order to determine any sexual dimorphism, we applied our scoring system to both males and females.\n\n\nMethods\n\n194 C57BL/6NTac males and 200 females were screened at 15 weeks of age. Animals were housed in IVC cages from birth under 12-hour-on/12-hour-off cyclic lighting, at controlled temperature (21 ± 2°C) and humidity (55 ± 10%) conditions. The mice had free access to water (25 p.p.m. chlorine) and were fed ad libitum on a commercial diet (SDS Rat and Mouse No.3 Breeding diet RM3). All procedures and animal studies were carried out in accordance with the Animals (Scientific Procedures) Act 1986 (UK, SI 2012/3039) and with the NC3Rs’ ARRIVE guidelines. All animal work reported in this article has been optimised to minimise the animals’ suffering and unnecessary procedures.\n\nFor the fundus examination an Omega 180 ophthalmoscope (Heine Ltd, USA) and a Superfield NC lens (Volk Optical Inc., USA) were used. Each eye pupil was dilated using a drop of 1% w/v Minims Tropicamide (Bausch & Lomb Inc., USA) and the observation was performed after 2 minutes. 
Images of the fundus were acquired by the use of a topical endoscopy fundus imaging (TEFI) camera.\n\nThe examinations were conducted by trained technicians on both eyes, and the flecks in individual eyes were evaluated according to an in-house scoring system (Figure 2), taking into account their position in the fundus with respect to the optic nerve head (superior or inferior) and their number (with respect to the retinal surface covered by the flecks) as a measure of the severity grade. The combination of position and severity grade therefore formed a scoring category for each eye.\n\nThe retinal fundus was divided into two hemicycles: inferior and superior. In each hemicycle, the percentage of the surface covered by flecks determines the severity grade, in bands of 25% per level. The combination of the position, superior (S) or inferior (I), and the severity grade (from 0 to 4) represents the flecks score.\n\nAll observational data were recorded on a Microsoft Office Excel spreadsheet, and counts and percentage calculations were performed. Where different flecking scores were obtained for the left and right eye of the same animal, the eye with the most severe grade was used for the percentage calculations.\n\n\nResults\n\nAs shown in Table 1, the total percentage of affected males was higher than that of affected females (14.4% of males and 5% of females). Further categorising the flecks according to our scoring system, we observed that the males were still the most affected in the score classes ranging from I1 to I3 (Figure 3), with a symmetrical distribution of the frequencies centred on the I2 class (25–50% of inferior retina surface) in both sexes (8.2% of males and 3.0% of females). In the sample, there were no males affected in the class I4 (75 to 100% of inferior retina surface), whilst only one female (0.5% of the total) presented that severity grade. 
We mentioned above that the presence of this kind of fleck has already been associated with the inferior hemicycle of the fundus by other authors, a fact supported by our data that show just one male in the S3 (50–75% of superior retina surface) class of flecking.\n\nFor each flecks class, the percentage relative to the total number of animals in each sex group has been calculated.\n\nThe chart shows the percentage distribution of the flecks in the retinal fundus of male (black columns) and female (white columns) C57BL/6NTac mice. The horizontal categories represent the flecks class as previously explained in Figure 2.\n\n\nConclusions\n\nWith this study we make available both the observational data on the retinal flecks in C57BL/6NTac mice determined by the use of our scoring system, and the scoring system itself. Our findings, using a large population of wild type mice, provide a reference baseline that could significantly contribute to the further evaluation of Crb1 mutation-based eye morphology phenotypes. In addition to the flecks distribution data, the scoring system used represents a reliable quantitative method to evaluate the degree of flecking of an affected mouse retina and to make the comparison process between two or more strains (or treatment groups) more accurate and manageable.\n\n\nData availability\n\nDataset 1: Flecks scores raw data. A spreadsheet with the raw data related to the manual scoring of flecks made by the trained technicians, according to our in-house scoring system. The spreadsheet contains one column related to the animal ID and one column with the score class for males and females. The score class represents the combination of the position, superior (S) or inferior (I), and the severity grade (from 0 to 4) as described in Figure 2.\n\nDOI: 10.5256/f1000research.11252.d1564057",
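The position-plus-severity scheme described in the Methods can be sketched in code. This is a minimal illustration only, not the authors' software (scoring was performed manually by trained technicians and recorded in Excel); the function names and the handling of the exact 25% band boundaries are our assumptions.

```python
def severity_grade(coverage_pct: float) -> int:
    """Map the percentage of a hemicycle covered by flecks to a 0-4 grade.

    Grade 0 = no flecks; grades 1-4 correspond to 25% bands
    (e.g. grade 2 = 25-50% of the hemicycle surface covered).
    Treating an exact boundary as the higher band is an assumption.
    """
    if coverage_pct <= 0:
        return 0
    return min(4, int(coverage_pct // 25) + 1)


def fleck_score(superior_pct: float, inferior_pct: float) -> str:
    """Combine position (S/I) and severity grade into a score class, e.g. 'I2'.

    When both hemicycles are affected, the more severe one is reported here
    (an assumption; the paper records one class per eye).
    """
    s = severity_grade(superior_pct)
    i = severity_grade(inferior_pct)
    if s == 0 and i == 0:
        return "0"  # unaffected eye
    return f"S{s}" if s > i else f"I{i}"


def animal_score(left_eye: str, right_eye: str) -> str:
    """Per the Methods, when left and right eyes differ, the eye with the
    most severe grade is used; the grade is the trailing digit of the class."""
    return max(left_eye, right_eye, key=lambda c: int(c[-1]))
```

For example, an eye with 40% of the inferior hemicycle covered and a clear superior hemicycle would score `fleck_score(0, 40) == "I2"`, the modal class reported for both sexes.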
"appendix": "Author contributions\n\n\n\nDC: Score system design, data analysis, article writing, charts and preparation of figures. HC: Score system design, article review. SW: Article review\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe research described in this manuscript was funded by the National Institutes for Health (U54HG006348) and by the Medical Research Council Strategic Award (53650).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgments\n\nWe wish to thank Sharon Clementson-Mobbs, Russell Joynson and Clare Norris (MRC Harwell Institute, Mary Lyon Centre) for their precious contribution with fundus examination. We wish also to thank Dr Debora Bogani (MRC Harwell Institute), for her generous contribution to the manuscript reviewing process.\n\n\nReferences\n\nChang B, Hawes NL, Hurd RE, et al.: Retinal degeneration mutants in the mouse. Vision Res. 2002; 42(4): 517–525. PubMed Abstract | Publisher Full Text\n\nHawes NL, Chang B, Hageman GS, et al.: Retinal degeneration 6 (rd6): a new mouse model for human retinitis punctata albescens. Invest Ophthalmol Vis Sci. 2000; 41(10): 3149–3157. PubMed Abstract\n\nChen J, Nathans J: Genetic ablation of cone photoreceptors eliminates retinal folds in the retinal degeneration 7 (rd7) mouse. Invest Ophthalmol Vis Sci. 2007; 48(6): 2799–2805. PubMed Abstract | Publisher Full Text\n\nMattapallil MJ, Wawrousek EF, Chan CC, et al.: The Rd8 mutation of the Crb1 gene is present in vendor lines of C57BL/6N mice and embryonic stem cells, and confounds ocular induced mutant phenotypes. Invest Ophthalmol Vis Sci. 2012; 53(6): 2921–2927. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAleman TS, Cideciyan AV, Aguirre GK, et al.: Human CRB1-associated retinal degeneration: comparison with the rd8 Crb1-mutant mouse model. Invest Ophthalmol Vis Sci. 
2011; 52(9): 6898–6910. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMehalow AK, Kameya S, Smith RS, et al.: CRB1 is essential for external limiting membrane integrity and photoreceptor morphogenesis in the mammalian retina. Hum Mol Genet. 2003; 12(17): 2179–2189. PubMed Abstract | Publisher Full Text\n\nConcas D, Cater H, Wells S: Dataset 1 in: A scoring system for the evaluation of the mutated Crb1/rd8-derived retinal lesions in C57BL/6N mice. F1000Research. 2017. Data Source"
}
|
[
{
"id": "23403",
"date": "12 Jun 2017",
"name": "Cheryl Mae Craft",
"expertise": [
"Reviewer Expertise My expertise is molecular neurobiology and genetics of blindness using mouse and rat models to decipher and understand the phototransduction cascade. I was the first to molecularly identify the members of the arrestin superfamily and other key signal transduction proteins in the retina by creating genetically engineered knockout mice for the cone arrestin and to address the phototransduction shutoff in cones. We identified and characterized a serious degeneration with Crb1 on the knockout Grk1 because of a C57Bl/6N background (Pak JS",
"Lee EJ",
"Craft CM.The retinal phenotype of Grk1-/- is compromised by a Crb1 rd8 mutation.Mol Vis. 2015 Nov 30",
"21:1281-94.PMID:26664249"
],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe report is interesting in describing the Crb1 /Rd 8 phenotype; however, the genetic analysis is still essential to use and to verify the status of the mutation since other genetic defects can lead to a similar retinal phenotype on different mouse background strains.\nLimited references were included.\n\nBecause Crb1/Rd8 is a recessively inherited gene, it can significantly play a role and the defective Crb1 protein can interact with other retinal proteins that lead to degeneration and go undetected because of its regional effect.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate? Yes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": []
},
{
"id": "24563",
"date": "29 Aug 2017",
"name": "Michel J. Roux",
"expertise": [
"Reviewer Expertise Retinal physiopathology",
"notably in diseases which main symptoms are not visual",
"as Duchenne Muscular Dystrophy or Down Syndrome",
"using immunohistology",
"microscopy",
"electrophysiology and retinal imaging (TEFI",
"OCT). In parallel",
"I have been supervising visual phenotyping at the Mouse Clinic Institute",
"notably for the Eumodic and IMPC program",
"thus facing the common problem of C57Bl/6N users described and quantified here."
],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe present report describes the distribution of rd8 lesions found in commercial C57Bl/6NTac mice. While the presence of such lesions has been known for a while (Mattapallil et al. 2012, the uncited Simon et al. Genome Biology 20131), it is to my knowledge the first such quantitative description, with as well the first report of the higher prevalence of lesions in males compared to females.\nI would suggest to replace Figure 1 with a composite fundus image indicating the extent of the retinal field which is examined (an approximate angle should be indicated), as the retina is rarely observed up to the ciliary margin, especially in a first-line phenotyping as the IMPC pipeline. As some animals may have only peripheral lesions, they may or may not be taken into account depending on the retinal field examined.\nAs animals are essentially affected in the inferior hemicycle, I think Figure 2 is misleading. The panels should rather be 0, I1/S0, I2/S1, I3/S2 and I4/S4. 
It would also be better to include fundus images that illustrate the various grades of scoring.\nThe difference between males and females is large, and considering the number of animals, should be significant, but it would be better to use statistics.\nThe flecks scores raw data are provided, but it would be useful to provide a comprehensive set of images illustrating the full spectrum of lesions.\nWere the lesions randomly distributed in cohorts, or were some cohorts more affected than others?\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Partly\n\nIf applicable, is the statistical analysis and its interpretation appropriate? Partly\n\nAre all the source data underlying the results available to ensure full reproducibility? No\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-404
|
https://f1000research.com/articles/5-2785/v1
|
29 Nov 16
|
{
"type": "Correspondence",
"title": "Matching target dose to target organ",
"authors": [
"Desmond I. Bannon",
"Marc A. Williams",
"Marc A. Williams"
],
"abstract": "In vitro assays have become a mainstay of modern approaches to toxicology with the promise of replacing or reducing the number of in vivo tests required to establish benchmark doses, as well as increasing mechanistic understanding. However, matching target dose to target organ is an often overlooked aspect of in vitro assays, and the calibration of in vitro exposure against in vivo benchmark doses is often ignored, inadvertently or otherwise. An example of this was recently published in Environmental Health Perspectives by Wagner et al., where neural stems cells were used to model the molecular toxicity of lead. On closer examination of the in vitro work, the doses used in media reflected in vivo lead doses that would be at the highest end of lead toxicity, perhaps even lethal. Here we discuss the doses used and suggest more realistic doses for future work with stem cells or other neuronal cell lines.",
"keywords": [
"In vitro",
"lead",
"in vivo",
"dose",
"stem cells."
],
"content": "\n\nA recent article by Wagner et al. reported the involvement of the anti-oxidant Nrf2 transcription factor signaling pathway in the toxicity of lead using neural stem cells in an in vitro model of neuronal differentiation1. While this work was completed in a similar way to other studies involving in vitro lead exposure, the work avoids a critical, often neglected issue of what constitutes a relevant physiological dose in vitro. The assumption that the selected dose of 1 µM (or 20.7 ug/dL) for neuronal stem cell exposure was “4 times the CDC levels of concern (LOC) for blood lead (5 ug/dL) and is within the range of exposed populations” requires further examination. Since the in vitro exposure was completed in media (the equivalent of plasma or serum) and not in whole blood, the assumption that the in vitro lead level would be equivalent to that found in blood of lead-exposed humans is somewhat inaccurate. Lead in serum (or plasma) represents only a fraction (~1%) of the level found in whole blood2,3, with the major fraction of lead bound inside erythrocytes4. For arguments sake, if the proportion of lead used in this study was 1% of that in whole blood, the equivalent blood lead value would be 2073 ug/dL, a level over 400 times the CDC LOC, and one that would be acutely toxic and perhaps lethal.\n\nAnother study, which was cited by Wagner et al.1, showed that measurable effects in stem cells in vitro could occur at doses as low as 0.4 µM; this dose would represent a blood lead level of 800 µg/dL by our calculations5. In a study by Chan et al., the lowest dose of 1 µM lead used in a study of newborn rat neuronal stem cells would represent 1000 µg/L in serum and a massive systemic blood lead level of about 10,000 µg/dL6. 
Other studies examining the toxicity of lead in cell cultures have also failed to adequately match the in vitro doses7–10 with those found in vivo, by taking account of the well documented difference between plasma and whole blood lead values. More importantly, with measurable effects on differentiation only beginning at 10 µM for Chan et al.6, could these data suggest the alternative interpretation that neuronal stem cells in vivo are more resistant to toxic insult by lead than our current understanding would have us believe – at least in the short term?\n\nWhat is clear is that at current blood lead levels in the US population, serum or plasma levels will represent a very low fraction of those values, and in vitro work could more realistically model chronic neurological effects in humans if target doses were better matched to the doses found at target sites. Thus, the model proposed in this and other work, while presenting novel effects, may be more appropriate for high acute exposures. To ensure that doses used in in vitro assays are complementary to a target in vivo blood lead level of 20 µg/dL, exposure to cells in vitro should more accurately correspond to 1% of the blood lead value, or a dose of 0.2 µg/dL (0.01 µM). 
At the current CDC 5 µg/dL LOC for children, the in vitro dose would become 0.05 µg/dL (0.002 µM), a dose that would present difficulties to laboratories that cannot eliminate background levels from residual lead on glassware and other sources of possible contamination or confounding of the reported data.\n\nIn the study by Wagner et al.1, much of this may have been considered by the authors, and key assumptions may have been made; however, the question still remains whether the upregulation of genes in the Nrf2-mediated anti-oxidative stress pathway would have been observed if a more physiologically relevant dose of 0.2 µg/dL (0.01 µM) in the media (i.e., representing a blood lead level of 20 µg/dL) had been used.\n\n\nDisclaimer\n\nThe views expressed in this article are those of the author(s) and do not necessarily reflect the official policy of the Department of Defense, Department of the Army, U.S. Army Medical Department or the U.S. Government.",
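The plasma-to-whole-blood extrapolation used throughout the correspondence is simple unit arithmetic. As a minimal sketch (not the authors' code), assuming the ~1% plasma/whole-blood lead ratio cited above and a molar mass of lead of ~207.2 g/mol:

```python
# Sketch of the dose extrapolation described in the correspondence.
# Assumptions: plasma lead is ~1% of whole-blood lead, and 1 uM Pb = 207.2 ug/L.
PB_UG_PER_UMOL = 207.2   # micrograms of lead per micromole
PLASMA_FRACTION = 0.01   # plasma lead as a fraction of whole-blood lead

def in_vitro_um_to_blood_ug_dl(dose_um):
    """Equivalent whole-blood lead (ug/dL) for an in vitro medium dose (uM)."""
    plasma_ug_dl = dose_um * PB_UG_PER_UMOL / 10  # uM -> ug/L -> ug/dL
    return plasma_ug_dl / PLASMA_FRACTION

def blood_ug_dl_to_in_vitro_um(blood_ug_dl):
    """Inverse: in vitro dose (uM) matching a target whole-blood level (ug/dL)."""
    plasma_ug_dl = blood_ug_dl * PLASMA_FRACTION
    return plasma_ug_dl * 10 / PB_UG_PER_UMOL

print(round(in_vitro_um_to_blood_ug_dl(1.0)))      # 2072 ug/dL for a 1 uM dose
print(round(blood_ug_dl_to_in_vitro_um(20.0), 3))  # 0.01 uM for 20 ug/dL blood
```

Under these assumptions the 1 µM dose used by Wagner et al. maps to roughly 2,070 µg/dL whole blood, matching the figure in the text to within rounding, and a 20 µg/dL target blood level maps to ~0.01 µM in vitro.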
"appendix": "Author contributions\n\n\n\nDB conceptualized the article and analyzed the original critiqued article reported herein. MW provided technical writing support and analysis of the original critiqued article reported herein.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe authors confirm that no grant(s) were involved in supporting this work.\n\n\nReferences\n\nWagner PJ, Park HR, Wang Z, et al.: In Vitro Effects of Lead on Gene Expression in Neural Stem Cells and Associations between Upregulated Genes and Cognitive Scores in Children. Environ Health Perspect. 2016. [Epub ahead of print]. PubMed Abstract | Publisher Full Text\n\nManton WI, Cook JD: High accuracy (stable isotope dilution) measurements of lead in serum and cerebrospinal fluid. Br J Ind Med. 1984; 41(3): 313–319. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSmith D, Hernandez-Avila M, Tellez-Rojo MM, et al.: The relationship between lead in plasma and whole blood in women. Environ Health Perspect. 2002; 110(3): 263–268. PubMed Abstract | Free Full Text\n\nSimons TJ: Passive transport and binding of lead by human red blood cells. J Physiol. 1986; 378(1): 267–286. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSenut MC, Sen A, Cingolani P, et al.: Lead exposure disrupts global DNA methylation in human embryonic stem cells and alters their neuronal differentiation. Toxicol Sci. 2014; 139(1): 142–161. PubMed Abstract | Free Full Text\n\nChan YH, Gao M, Wu W: Are newborn rat-derived neural stem cells more sensitive to lead neurotoxicity? Neural Regen Res. 2013; 8(7): 581–592. PubMed Abstract | Publisher Full Text | Free Full Text\n\nEngstrom A, Wang H, Xia Z: Lead decreases cell survival, proliferation, and neuronal differentiation of primary cultured adult neural precursor cells through activation of the JNK and p38 MAP kinases. Toxicol In Vitro. 2015; 29(5): 1146–1155. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nQasmian Lemraski M, Soodi M, Taha Fakhr M, et al.: Study of lead-induced neurotoxicity in neural cells differentiated from adipose tissue-derived stem cells. Toxicol Mech Methods. 2015; 25(2): 128–135. PubMed Abstract | Publisher Full Text\n\nJia Q, Ha X, Yang Z, et al.: Oxidative stress: a possible mechanism for lead-induced apoptosis and nephrotoxicity. Toxicol Mech Methods. 2012; 22(9): 705–710. PubMed Abstract | Publisher Full Text\n\nKermani S, Karbalaie K, Madani SH, et al.: Effect of lead on proliferation and neural differentiation of mouse bone marrow-mesenchymal stem cells. Toxicol In Vitro. 2008; 22(4): 995–1001. PubMed Abstract | Publisher Full Text"
}
|
[
{
"id": "19455",
"date": "18 Jan 2017",
"name": "Mir Ahamed Hossain",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nIn vitro assays have become a mainstay of modern approaches to toxicology, with high promise for understanding the underlying mechanisms of toxicity. The results reported by Wagner et al. (2016) in the August 26 issue of Environmental Health Perspectives were obtained using neural stem cells to model the toxicity of lead. The results support the notion that lead treatment of cells leads to upregulation of vascular gene expression (JBC 275:27874-27882, 2000). While this work presents interesting effects, this reviewer’s opinion is in agreement with the correspondence (critiqued article) authors Bannon and Williams that it may be more appropriate for high acute exposures, particularly in the case of neural stem/progenitor cells, which lack many of the characteristic features of mature neurons.\n\nIt is also likely that neural stem cells (NSCs) could be more resistant to toxic insult by lead - at least in the short term. Thus the in vitro work could more realistically model chronic neurological effects if doses were better matched with the doses at the target site, as supported by the fact that serum or plasma levels represent a very low fraction of the total blood lead levels. Thus the concentrations of lead used in this study, which elicit upregulation of genes in the Nrf2-mediated anti-oxidative stress pathway, appear to be in the low micromolar range, which is much higher than the in vitro dose equivalent of the current CDC level of concern (5 µg/dL) for children. 
Thus the concentrations used in the study do not reflect likely environmental lead exposure; rather, they are concentrations likely to be cytotoxic, particularly in the case of NSCs. This is clearly a near-impossible issue to address empirically, but some information along these lines (gene expression in the Nrf2-mediated anti-oxidative stress pathway at a more physiologically relevant dose in the media of in vitro NSC cultures) would be helpful for the reader, as suggested by Bannon and Williams in the critiqued article. It will also be interesting to see whether neurons differentiated from lead-exposed NSCs express neuron-specific features or exhibit mature neuronal function.",
"responses": [
{
"c_id": "2524",
"date": "30 Mar 2017",
"name": "Mark A Williams",
"role": "Author Response F1000Research Advisory Board Member",
"response": "Reviewer 1. We thank reviewer #1 for helpful comments on our article. We address some specific aspects below. The reviewer agreed with our principal argument, but goes on to state that the Wagner et al “results support the notion that lead treatment of cells leads to upregulation of vascular gene expression”, citing an in vitro microarray study using astrocytes (Hossain et al, 2002, ref 9 above), when in fact two of the three VEGF transcripts listed in Supplemental Table 1 of Wagner et al were downregulated by lead, with only one (VEGFA, downregulated 0.8-fold) being statistically significant. Therefore the cited publication by Hossain et al is contradicted by the Wagner et al data for the VEGF gene. The fact that the Hossain et al study used 10 µM lead acetate to dose astrocytes in vitro further supports our main point – that most lead concentrations in vitro would reflect highly lethal lead concentrations in vivo if the difference between lead in whole blood (red blood cells) and plasma were taken into account. Hossain et al did cite Audesirk (Audesirk G, et al. In Vitro Cell Dev Biol. 1989 Dec;25(12):1121-8) as supporting evidence for the use of 10 µM lead as a dosing solution for astrocytes; Audesirk measured free lead (Pb2+) in the nanomolar range in the presence of full experimental media dosed with micromolar lead acetate, using an ion selective electrode. However, Audesirk’s work in snail and chick neurons did not examine the potential lethality of the in vitro working doses to the whole organism, taking account of plasma/whole blood differences."
}
]
},
{
"id": "19115",
"date": "26 Jan 2017",
"name": "Donald Smith",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThis commentary is well written, very well justified, and timely. While there are countless published papers on the myriad effects of lead in biological systems, the consideration of dose extrapolation from in vitro to in vivo studies and their relationships to the human condition often goes unappreciated. Indeed, since toxicology is driven by the dose of the poison, establishing environmental or occupational relevance of the dose is absolutely key to the relevance of the findings. This commentary points this out in a concise and evidence-driven fashion, and is worthy of publication.\n\nBelow are a few minor comments to consider.\n\nPg. 2, 1st para: For arguments sake, …\nComment: A caveat here might be that it is known that the proportion of whole blood lead in plasma increases with increasing blood lead, so it is likely that the blood lead level that would produce a 1 µM plasma lead would be lower than 2,073 µg/dL, but this does not detract from the point the authors are making, which is a good and important one.\n\nPg. 2, 3rd para: Thus, the model proposed in this and other work…\n\nComment: It is not clear whose work 'this work' is referring to - Chan et al?\n\nPg. 
2, 3rd para: To ensure that doses used in in vitro assays are complementary to a target in vivo blood lead level of 20 μg/dL…\nComment: This suggestion by the authors is reasonable, assuming that plasma lead reflects extracellular fluid lead, though it might also be worth looking at the relationship between blood lead and CSF lead levels (in the literature) to see if it follows an approx. 1% relationship as does plasma, to further substantiate this suggestion.\n\nPg. 2, 3rd para: …eliminate background levels from residual lead on glassware and other sources of possible contamination or confounding of the reported data…\nComment: This too raises an important point, in that the vast majority of studies do not make sufficient effort to reduce background lead levels in control cultures, so it is quite possible that here, and in those other studies, the control cultures, even with modestly elevated background lead levels, will also be affected, requiring higher exposure doses to demonstrate a difference or 'effect' in the lead-exposed treatments. It is good that the authors pointed this out.",
"responses": [
{
"c_id": "2525",
"date": "30 Mar 2017",
"name": "Mark A Williams",
"role": "Author Response F1000Research Advisory Board Member",
"response": "Reviewer 2. We thank reviewer #2 for knowledgeable and helpful comments on our article. Here are our responses to specific comments. Comment 1. This point is well made – we agree that the proportion of lead in plasma would increase as blood lead increases, so that the equivalent plasma lead at blood lead values greater than 100 µg/dL could be upwards of 2%. As it was, we selected a 1% plasma/blood ratio as the blood lead under question was 20 µg/dL, but of course there is some inbuilt error in our calculations at high doses. Nonetheless, our extrapolated exposure scenario is meant to demonstrate that the assumptions on which many in vitro studies rest with respect to their relationship to in vivo blood lead values are often violated; the reviewer also acknowledges our efforts to point this out. We have added more text to acknowledge this non-linear relationship between whole blood lead and plasma lead at increasing doses. Comment 2. This sentence has been restructured to indicate that we are referring to the Wagner et al study, as well as other studies that have made similar assumptions. Comment 3. We agree that cerebrospinal fluid measures would further corroborate our assumptions. The work by Manton et al (cited in our article) showed that cerebrospinal fluid levels were about 50% of serum levels, though it should be pointed out that this work was carried out in only one subject. We have added more text to acknowledge this fact. Comment 4. We agree with the further elaboration of this sentence and have added additional text to incorporate the details of the comment."
}
]
}
] | 1
|
https://f1000research.com/articles/5-2785
|
https://f1000research.com/articles/6-399/v1
|
30 Mar 17
|
{
"type": "Research Article",
"title": "Early fruiting in Synsepalum dulcificum (Schumach. & Thonn.) Daniell juveniles induced by water and inorganic nutrient management",
"authors": [
"Dèdéou Apocalypse Tchokponhoué",
"Sognigbé N'Danikou",
"Iago Hale",
"Allen Van Deynze",
"Enoch Gbènato Achigan-Dako",
"Sognigbé N'Danikou",
"Iago Hale",
"Allen Van Deynze",
"Enoch Gbènato Achigan-Dako"
],
"abstract": "Background. The miracle plant, Synsepalum dulcificum (Schumach. & Thonn.) Daniell, is a native African orphan crop species that has recently received increased attention due to its promise as a sweetener and source of antioxidants in both the food and pharmaceutical industries. However, a major obstacle to the species’ widespread utilization is its relatively slow growth rate and prolonged juvenile period. Methods. In this study, we tested twelve treatments made up of various watering regimes and exogenous nutrient application (nitrogen, phosphorus and potassium, at varying dosages) on the relative survival, growth, and reproductive development of 15-month-old S. dulcificum juveniles. Results. While the plants survived under most tested growing conditions, nitrogen application at doses higher than 1.5 g [seedling]-1 was found to be highly detrimental, reducing survival to 0%. The treatment was found to affect all growth traits, and juveniles that received a combination of nitrogen, phosphorus, and potassium (each at a rate of 1.5 g [seedling]-1), in addition to daily watering, exhibited the most vegetative growth. The simple daily provision of adequate water was found to greatly accelerate the transition to reproductive maturity in the species (from >36 months to an average of 23 months), whereas nutrient application affected the length of the reproductive phase within a season, as well as the fruiting intensity. Conclusions. This study highlights the beneficial effect of water supply and fertilization on both vegetative and reproductive growth in S. dulcificum. Water supply appeared to be the most important factor unlocking flowering in the species, while the combination of nitrogen, phosphorus and potassium (each at a dose of 1.5 g) consistently exhibited the highest performance for all growth and yield traits. These findings will help intensify S. dulcificum’s breeding and horticultural development.",
"keywords": [
"Mineral fertilization",
"juvenility phase",
"precocity",
"environmental induction",
"growth",
"flowering",
"miracle berry"
],
"content": "Introduction\n\nThe miracle plant, Synsepalum dulcificum (Schumach. & Thonn.) Daniell (Sapotaceae), is a perennial shrub originating from West Africa (Inglett & May, 1968) and is the only known natural source of miraculin, a glycoprotein with remarkable edulcoration properties (Lim, 2013). In West Africa, the sweetening activity of the fruit is valued in drink-making, whereas the leaves, roots, and bark of the species are used in traditional treatments of diabetes, enuresis, kidney ailments, hyperthermia, coughing, and stomach afflictions (Burkill, 2000; Oumorou et al., 2010). The fruit of the species (miracle berry) is a rich source of vitamin C, leucine, flavonols, and anthocyanin (Du et al., 2014; Njoku et al., 2015); and its modern utilizations include many applications in cosmetics, food, and pharmaceuticals (Achigan-Dako et al., 2015). With its many unique properties, some writers have suggested that miracle berry would currently have a much larger market in the USA, and therefore globally, if it had not been misclassified in the 1970s as a food additive instead of a sweetener (http://www.gayot.com/Lifestyle/Health/Benefits/Miracle-Fruit; http://www.theweek.co.uk/politics/27131/sweet-and-sour-tale-miracle-berry). Recently, additional scientific evidence has been published on the ability of the species to substitute for sugar, particularly in sour beverages (Rodrigues et al., 2016).\n\nDespite the nutritional, economic, and medicinal promise of the species, S. dulcificum remains a neglected crop that is not widely cultivated. In addition, according to Adomou (2005), the species is in decline and is also suspected to exhibit recalcitrant seed storage behavior (Chen et al., 2012). One of the major constraints to economic cultivation of miracle berry is the very slow growth rate and the prolonged juvenile phase of the plant. 
According to Joyner (2006), the species' seedlings reach a maximum size of 60 cm at four years old, and fructification occurs only after three to four years; however, information regarding the growing conditions of the seedlings in that study was lacking. In Benin, where the plants are also reported to exhibit a relatively slow growth rate and to be late maturing, the species is almost wholly neglected. When encountered in its natural habitat (open field), the species exhibits relatively poor fitness in the face of weed competition, as well as anthropogenic and animal disturbances (Houeto, 2015).\n\nAn important step toward the systematic improvement of S. dulcificum would be to accelerate the transition to reproductive maturity, thus shortening generation times. According to Wilkie et al. (2008), there are three possible ways to induce flowering in horticultural trees, thereby reducing the length of the juvenile phase, or increasing precocity: environmental induction, autonomous induction, and the use of growth regulators. A plant’s ability to respond favorably to any of these flowering induction techniques greatly depends on its origin. While tropical and subtropical species tend to respond better to environmental stimuli (e.g. mango, Mangifera indica L.; lychee, Litchi chinensis Sonn.), those from temperate regions exhibit autonomous floral induction (e.g. apple, Malus domestica Borkh.; sweet cherry, Prunus avium L.) (Wilkie et al., 2008). Given that S. dulcificum is a tropical species, we hypothesize that an accelerated transition to reproductive maturity can be triggered through proper environmental manipulation. Additionally, in woody angiosperms, cold treatment, nutrient supply, photoperiod, and water stress were found to be the main environmental stimuli that could induce flowering (Meilan, 1997).\n\nOne important factor limiting plant growth is nitrogen and phosphorus deficiency (King et al., 2008; Poothong & Reed, 2014). 
Nutrient status has been reported to affect gene activity and protein synthesis in plant species (e.g. Japanese red pine, Pinus densiflora Sieb. & Zucc.) (Nakaji et al., 2001). For instance, a high C/N ratio was reported to favor flowering in fruit trees (Hanke et al., 2007). Fertilization management thus appears to be a promising means of promoting plant growth and early flowering in horticultural species; and yet, different plant species tend to react to nutrient supply in unpredictable ways. For example, while phosphorus fertilization was found to be beneficial for loblolly pine (Pinus taeda L.) growth, nitrogen fertilization of the same species was rather detrimental (Faustino et al., 2013). In another study, phosphorus fertigation was shown to be harmful to the fan flower (Scaevola aemula R. Br.) when applied at a rate exceeding 43.5 g.ml-1 (Zhang et al., 2004). In many other species, such as marula, Sclerocarya birrea (Hochst.) and wild loquat, Uapaca kirkiana (Muell.Arg.), the benefit of fertilizer application remains elusive (Akinnifesi et al., 2008). Similarly, water availability is considered to be one of the three most important factors controlling a plant's transition to flowering (Bernier et al., 1993), in addition to affecting the phenological rhythm of tropical species; and yet plant response to water stress (excess/deficiency) also tends to be species-specific. While water deficiency was found to promote flowering in Citrus spp. (Davenport, 2003), it reduced vegetative growth in Mangifera indica L. (Pavel & De Villiers, 2004).\n\nTo the best of our knowledge, the response of S. dulcificum to fertilization and regular water supply has never been documented. Furthermore, detailed phenological data, especially in juveniles, are not available despite their importance to pioneering breeding programs. Understanding how nutrient and water supply affect fruiting in S. 
dulcificum juveniles is critical to the development of this promising species.\n\nIn this study, we analyzed the growth, flowering, and fruiting response of S. dulcificum to water and mineral fertilizer treatments with the objective of reducing the species' natural (in reference to stands evolving in natural habitat) production cycle, while significantly enhancing overall growth and fruit yield.\n\n\nMethods\n\nThe experiment was carried out from December 2013 to April 2016 in the municipality of Abomey-Calavi (southern Benin), at the experimental site of the Faculty of Agronomic Sciences, University of Abomey-Calavi (06°25’00.8”N, 002°20’24.5”E), and in a neighboring open field (06°27’00”N, 002°21’00”E) to simulate natural rain fed conditions (no irrigation or exogenous nutrient application). Abomey-Calavi is located in the Guinean phytogeographical region of Benin, largely characterized by a ferralitic soil type (Röhrig, 2008). During the experimental timeframe, the mean annual rainfall was 1,329 mm and the mean monthly temperature was 24°C.\n\nIn December 2013, mature, ripe and fresh fruits of S. dulcificum were collected from a single tree located in the district of Toffo (6°92’N; 2°27’E), where the soil is ferralitic, the mean annual rainfall is around 1,000 mm, and the mean annual temperature varies from 27°C to 30°C. Fruits collected were processed and sown at ambient temperature (25–27°C) in black polystyrene nursery bags (0.75 l) filled with sand to produce seedlings that were monitored in the nursery until they reached 13 months old. At that time, seedlings of a similar size were transplanted either into pots at the University of Abomey-Calavi site or directly into the soil in the open field and monitored for two months before being used in the watering and fertilization experiment. 
There was only one seedling per pot, and each pot had a volume of 15 l.\n\nThe experiment was made up of twelve treatments (Table 1), out of which the absolute control (Cont: rain fed seedlings with no nutrient supply) was established in the soil at the open site and the other 11 treatments were established in pots (to control the amount of water supply and its efficiency) filled with soil collected at 0–10 cm depth on the site of University of Abomey-Calavi. Each seedling in pots received two liters of water daily. Nutrients were applied to each pot (seedling) separately; the nitrogen was applied as urea (46% N), the phosphorus as simple superphosphate (46% P2O5) and the potassium as potassium sulfate (48% K2O). Fertilizers were applied using the sub-surface method at 8 cm beneath the soil and at a frequency of one application every two months. The first application occurred in March 2015. Physico-chemical characteristics of the experimental medium in pots were as follows: pH (KCl) = 5.48, pH (H2O) = 5.88, silt = 25.75%, clay = 12.27%, sand = 61.98%, organic carbon = 1.03%, N = 0.06%, Mg = 2.37 meq/100g, Ca = 0.63 meq/100g, P = 2.08 meq/100g, and assimilable P = 23.06 ppm. The experiment followed a completely randomized design, and each treatment was made up of a cohort of 10 seedlings of the same age (15 months). We used this sample size because S. dulcificum is a recalcitrant perennial, and obtaining progeny individuals of similar age and size was challenging.\n\nMeasuring growth parameters. Before treatment application, initial stem collar diameter, plant height, number of branches, and number of leaves were measured for all seedlings (Table 2) to ensure that seedlings were of similar size. At the end of the experiment (April 2016), the same traits were also measured to evaluate the increments.\n\nValues are means ± SE (n = 10 seedlings).\n\nns= Not significant at 5%.\n\nLeaf area was measured following the method of Cornelissen et al. (2003). 
The most mature and fully sun-exposed leaf was harvested from each seedling. Each harvested leaf was photocopied onto paper, and the paper was cut out following the outline of the leaf and weighed. The weight of the cut-out paper was multiplied by the known area/weight ratio of the paper to obtain the leaf area. Growth was assessed based on the increment recorded for each vegetative growth parameter between the onset and the end of the experiment.\n\nTracking flowering phases. From the first day of treatment application to the end of the experiment, we monitored each seedling's development daily. Within the so-called generative phase, starting with budding and ending with fruit ripening, we distinguished seven main events (budding, flowering, flower bloom, fructification onset, fruit physiological maturity, ripening onset, and full ripening) demarcating six distinct phases (S1: budding to flowering, S2: flowering to flower bloom, S3: flower bloom to fructification onset, S4: fructification onset to physiological maturity, S5: physiological maturity to fruit ripening onset, and S6: fruit ripening onset to full ripening; see Figure 1). The occurrence date of each event was recorded and the total number of buds, flowers, and fruits per seedling was counted. The number of buds and the number of flowers were monitored until the tenth month (to avoid flower drop) of the experiment (December 2015) and only the fruiting was monitored to the end of the experiment (April 2016).\n\n(A) Budding; (B) Flowering; (C) Flower bloom; (D) Fructification onset; (E) Physiological maturing; (F) Fruit ripening onset; (G) and (H) Fruit full ripening. S1 A→B; S2 B→C; S3 C→D; S4 D→E; S5 E→F; S6 F→G, H.\n\nPrior to analysis, we explored the datasets, and outliers were identified using the boxplot approach (Crawley, 2007). These outliers, contained in Datasets 3 and 4 (Tchokponhoué et al., 2017c; Tchokponhoué et al., 2017d), were removed from further vegetative growth analysis. 
Following this approach, outliers are considered as values more than 1.5 times the interquartile range above the third quartile or below the first quartile. To test the effects of treatments on seedling survival, we performed a survival analysis. To analyze stem collar diameter, height, and leaf area variation in response to treatments, we performed analyses of variance followed by Tukey post hoc tests for means separation. We employed orthogonal contrasts to dissect the effect of daily watering, as well as to analyze trends in growth response to progressive doses of nutrients when significant effects were observed. To analyze how the treatments affected the proportion of plants bearing buds, flowers, and fruits, we used prop.test. The number of branches, the number of leaves, the length of each generative phase, the number of buds, the number of flowers and the number of fruits were analyzed using a generalized linear model (glm) with Poisson error structure (or quasi-Poisson error structure to account for over-dispersion) where necessary. Apart from survival analysis, other statistical analyses were only performed for treatments that had at least two surviving seedlings at the end of the experiment. Also, since not all seedlings considered in the vegetative growth analysis reached reproductive stages (e.g. budding, flowering), there is a discrepancy in the number of seedlings between the vegetative and reproductive growth datasets. Analyses were performed using the “agricolae”, “car”, “gvlma”, “multcomp” and “survival” packages in R version 2.15.3 (R Development Core Team, 2013), and results are presented as means ± standard errors (SE).\n\n\nResults\n\nAt the end of the experiment, the survival rate in the juveniles was highly affected by the treatment (P < 0.001), with the lowest survival rates observed in nitrogen-based treatments (Table 3). For this specific nutrient type (N), the higher the dose, the lower the survival and the more abrupt the survival decline. 
For instance, while the average time to death in juveniles that received 1.5 g nitrogen each was 12.00 ± 0.5 weeks, times to death in juveniles that received 3.0 g and 4.5 g nitrogen were 4.22 ± 0.3 weeks and 3.50 ± 0.3 weeks, respectively (Figure 2).\n\nMeans with different letters within a column denote significant differences. ***= Significant at 1‰\n\nCont = rain fed, no exogenous nutrients; W = Daily watering, no exogenous nutrients; N1.5 = Daily watering + 1.5 g N [seedling]-1; N3 = Daily watering +3 g N [seedling]-1; N4.5 = Daily watering + 4.5 g N [seedling]-1; P1.5 = Daily watering + 1.5 g P [seedling]-1; P3 = Daily watering + 3 g P [seedling]-1; P4.5 = Daily watering + 4.5 g P [seedling]-1; K1.5 = Daily watering +1.5 g K [seedling]-1; K3 = Daily watering + 3 g K [seedling]-1; K4.5 = Daily watering + 4.5 g K [seedling]-1; NPK = Daily watering + 1.5 g N + 1.5 g P + 1.5 g K [seedling]-1.\n\nThe survival data indicated a survival rate less than 20% in treatments N3 and N4.5; consequently they were discarded from subsequent analyses.\n\nStem collar diameter, plant height, and branching. The increment in the seedlings stem collar diameter was highly affected by treatment (Figure 3A). The daily watered juveniles performed better than the rain fed ones (P < 0.001). The extent of the stem collar diameter growth also greatly differed among nutrient types. For instance, the average increment in juveniles fertilized with NPK (10.36 ± 0.96 mm) was nearly twofold higher than that in juveniles fertilized with nitrogen only (4.73 ± 1.31 mm). The stem collar diameter growth with phosphorus was as good as potassium (P = 0.52), but higher than N (P = 0.007), and lower than with NPK (P = 0.04). We observed a highly significant effect of treatment on plant height (Figure 3B). Contrast analysis indicated that combined N, P and K application increased plant height better than single nutrient application (P = 0.01). 
Plants also responded better to phosphorus or potassium supply than to nitrogen (P = 0.002). Meanwhile, rain fed seedlings grew taller than daily watered plants receiving a single nutrient (P < 0.01).\n\n(A) Stem collar diameter; (B) Height; (C) Branching; (D) Leaf production and (E) Leaf area. Values are means ± SE (n = 8 – 10 seedlings). Means with different letters denote significant differences at P < 0.05, ANOVA, Tukey Test. Cont = rain fed, no exogenous nutrients; W = Daily watering, no exogenous nutrients; N1.5 = Daily watering + 1.5 g N [seedling]-1; N3 = Daily watering +3 g N [seedling]-1; N4.5 = Daily watering + 4.5 g N [seedling]-1; P1.5 = Daily watering + 1.5 g P [seedling]-1; P3 = Daily watering + 3 g P [seedling]-1; P4.5 = Daily watering + 4.5 g P [seedling]-1; K1.5 = Daily watering +1.5 g K [seedling]-1; K3 = Daily watering + 3 g K [seedling]-1; K4.5 = Daily watering + 4.5 g K [seedling]-1; NPK = Daily watering + 1.5 g N + 1.5 g P + 1.5 g K [seedling]-1.\n\nThe branching intensity also varied greatly among treatments (Figure 3C). The average branch gain in rain fed seedlings was 3.75 ± 0.53, whereas the set of daily watered juveniles gained on average nearly double (7.33 ± 1.35; P < 0.001). The effect of nutrient supply on seedling branching was also significant (P < 0.001), with plants fertilized with NPK gaining on average 12.33 ± 1.8 branches against 6.74 ± 1.25 for plants fertilized with a single nutrient.\n\nIncrease in leaf number and size. The variation in leaf production based on treatment is presented in Figure 3D. The differences in the increment of the number of leaves due to water supply and to exogenous nutrient application were all highly significant (P < 0.001). Grouped together, daily watered juveniles produced on average fourfold more leaves than rain fed juveniles. 
Regarding the fertilizer type, daily watered juveniles fertilized with NPK gained on average 925 ± 154 leaves, 2.51 times the average leaf gain of juveniles that were watered daily but received no exogenous nutrients (W). Furthermore, NPK particularly improved leaf production compared with single-nutrient application (P < 0.001). Likewise, treatment significantly affected leaf size: daily watered juveniles presented a larger leaf area (1539.06 ± 55.46 mm²) than rain-fed juveniles (695.37 ± 86.87 mm²), and leaf area in juveniles fertilized with NPK was greater than in juveniles fertilized with a single nutrient (Figure 3E). However, the juveniles responded better when P or K was supplied than when N was supplied.\n\nBudding and flowering. The proportion of budding juveniles was significantly affected by the treatment and ranged from 0 to 100% (Table 4). The contrast analysis on the average time to budding revealed a significant effect of treatment (P = 0.02; Figure 4A). Though the shortest times to budding, 190 ± 5.92 days and 201 ± 24.51 days, were recorded in daily watered unfertilized juveniles and in daily watered, NPK-fertilized juveniles, respectively, the highest number of buds was observed in juveniles fertilized with NPK (Table 5). After 10 months, NPK-fertilized seedlings produced a significantly greater number of buds than unfertilized plants (sixfold; P = 0.05).\n\nMeans with different letters within a column denote significant differences. *** = Significant at 1‰.\n\nValues are means ± SE (n = 3 – 10 seedlings).\n\n$ = assessed at the tenth month of the experiment; € = assessed at the end of the experiment (thirteenth month).\n\nMeans with different letters within a column denote significant differences. * = Significant at 5%.\n\nThe proportion of flowering juveniles was also highly affected by the treatment (Table 4). 
The highest flowering percentages were observed in NPK-fertilized juveniles and those fertilized with potassium at 4.5 g [seedling]-1 (100%). The time to flowering (Figure 4B) was shorter for NPK-fertilized juveniles (P = 0.004), which flowered after 242.0 ± 21.97 days compared with 299.65 ± 7.41 days for single-nutrient-fertilized juveniles. Within the potassium-based treatments, the effect of application dose was significant (P = 0.01), and the time to flowering decreased as the potassium dose increased, with a quadratic relationship between the two variables (P = 0.02). The regression equation reads: Time to flowering = 300.52 + 49.32 × (potassium dose) − 19.51 × (potassium dose)².\n\n(A) Time to budding; (B) Time to flowering; (C) Time to fruiting and (D) Total fruit production. Values are means ± SE (n = 5 – 10 seedlings). Means/Values with different letters denote significant differences. Generalized linear model, Tukey Test. Treatment abbreviations as defined above.\n\nFructification. The proportion of fruiting juveniles ranged from 0% in rain-fed juveniles to 100% in NPK-fertilized plants and was highly affected by the treatment (Table 4). Likewise, the time to fruiting in S. dulcificum juveniles significantly differed among treatments (P = 0.004) and varied from 286 ± 9.33 days to 377 ± 5.43 days (Figure 4C). The earliest fruiting individuals included NPK-fertilized plants. Here also, the time to fruiting was affected by the potassium dose (P = 0.02). 
We also observed a significant quadratic relationship between the time to fruiting and the potassium application dose (P = 0.03). The equation reads: Time to fruiting = 355.48 + 39.18 × (potassium dose) − 16.99 × (potassium dose)².\n\nFurthermore, the highest cumulative fruit number per treatment (Figure 4D) and the highest average fruit number per plant (Table 5) were observed in NPK-fertilized juveniles. For instance, NPK-fertilized juveniles produced twice as many fruits as those that received a single nutrient (N, P or K) and three times as many as juveniles that received no nutrients (Table 5). Fruit mass differed significantly among treatments (P = 0.01) and ranged from 1.08 ± 0.17 g (in juveniles fertilized with 1.5 g phosphorus) to 1.47 ± 0.04 g (in juveniles fertilized with 3 g phosphorus).\n\n* = Significant at 5%; ** = Significant at 1%; *** = Significant at 1‰.\n\nThe lengths of the various phenophases observed during the reproductive growth of S. dulcificum are presented in Figure 5. The effects of treatment on the times from budding to flowering (S1), from flower bloom to fructification onset (S3), and from fructification onset to physiological maturity (S4) were very significant (P < 0.01), highly significant (P < 0.001), and significant (P < 0.05), respectively. The shortest S1 was observed in juveniles fertilized with 1.5 g phosphorus (32.33 ± 6.97 days), whereas the longest was recorded in daily watered unfertilized juveniles (87.00 ± 12.52 days). NPK-fertilized juveniles started fruiting rapidly (within 16.66 ± 3.32 days) once their flowers bloomed. 
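The two quadratic dose–response fits reported above can be checked by direct evaluation. The sketch below simply plugs the published coefficients into each equation at the tested potassium doses; it is illustrative only (the study's analyses were run in R, and extrapolating beyond the 1.5–4.5 g range tested is not meaningful):

```python
# Published quadratic fits for potassium dose K (g per seedling), as reported above:
#   time to flowering = 300.52 + 49.32*K - 19.51*K**2
#   time to fruiting  = 355.48 + 39.18*K - 16.99*K**2

def time_to_flowering(k):
    """Fitted time to flowering (days) at potassium dose k (g per seedling)."""
    return 300.52 + 49.32 * k - 19.51 * k ** 2

def time_to_fruiting(k):
    """Fitted time to fruiting (days) at potassium dose k (g per seedling)."""
    return 355.48 + 39.18 * k - 16.99 * k ** 2

# Evaluate at the three doses actually tested in the experiment.
for k in (1.5, 3.0, 4.5):
    print(k, round(time_to_flowering(k), 1), round(time_to_fruiting(k), 1))
```

Because the quadratic coefficients are negative, each fitted parabola peaks just below the lowest tested dose (at K = 49.32/(2 × 19.51) ≈ 1.26 g for flowering and ≈ 1.15 g for fruiting), so over the tested 1.5–4.5 g range both fitted times fall as the dose rises, consistent with the reported trend.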
The longest time from fructification onset to physiological maturity (S4) was recorded in daily watered unfertilized juveniles (W) (28.66 ± 3.52 days).\n\n(S1) Time from budding to flowering; (S2) Time from flowering to flower bloom; (S3) Time from flower bloom to fructification onset; (S4) Time from fructification onset to physiological maturity; (S5) Time from physiological maturity to fruit ripening onset; (S6) Time from fruit ripening onset to full ripening. ns = not significant; * = Significant at 5%; ** = Significant at 1%; *** = Significant at 1‰. Treatment abbreviations as defined above.\n\nThe correlation matrix overall indicated positive and highly significant correlations between growth traits, the strongest being between stem collar diameter and number of leaves (Table 6). Correlations between fruit production and growth traits were also all positive, and strongest with leaf production. The regression equation for fruit production in juveniles reads: ln(number of fruits) = −4.51 + 1.15 ln(number of leaves).\n\n\nDiscussion\n\nIn S. dulcificum juveniles, applying the appropriate fertilizer at a suitable dose is critical to avoid detrimental effects. The present study showed that while seedlings supplied with phosphorus or potassium maintained high survival, nitrogen fertilization decreased the survival rate, with mortality increasing as the dose increased. 
Similar negative effects of a larger nitrogen supply on survival were also reported in Trifolium medium L. (Chmelíková & Hejcman, 2014) and in Eucalyptus pauciflora Sieber ex Sprengel (Atwell et al., 2009). Likewise, in Betula pubescens Ehrh., Larix sibirica Ledeb., and Picea sitchensis (Bong.) Carr., seedlings fertilized with nitrogen at 3.7 g [seedling]-1 had lower survival than those fertilized with 1.2 g [seedling]-1 (Oskarsson et al., 2006). Therefore, for 15-month-old juveniles of S. dulcificum, the nitrogen dose should be limited to 1.5 g [seedling]-1 to encourage further growth and development.\n\nJuvenility represents a crucial stage for the survival, functional and productive traits of plant species (Trubat et al., 2010), and improving the performance of plant species at this stage through fertilization is desirable. Though the beneficial effect of fertilization on juveniles of tree species has been questioned (Akinnifesi et al., 2008; Ebert et al., 2002), our results revealed that in S. dulcificum all vegetative growth traits responded positively to water supply and fertilization. We observed two main morphotypes in juveniles of S. dulcificum in response to treatments. The first, ‘thin’ morphotype was observed exclusively in the field, where rain-fed juveniles grew mainly in height as an adaptive strategy to cope with weed competition for light, gaining few branches and leaves. In contrast, water and/or nutrient supply induced a ‘well-branched’ morphotype, characterized by a large stem collar diameter, a high number of leaves and branches, and a dense crown. NPK application to 15-month-old seedlings improved vegetative growth. 
For instance, by the end of the experiment, initial stem collar diameter and leaf number had increased 1.6-fold and 18-fold, respectively, in 15-month-old juveniles watered and supplied with NPK, whereas in control juveniles (no watering or fertilization) they had increased by only 1.36-fold and 6.41-fold, respectively. This performance of NPK-fertilized seedlings highlights the additive effect of the three nutrients (N, P and K) (Chang, 2003).\n\nAt 28 months old, after 13 months of fertilization, juveniles were 47 cm tall, a gain of 23.2 cm. The existing literature reports a height of 50–60 cm at four years old (Joyner, 2006). Even under a fertilization regime, S. dulcificum height growth did not improve dramatically, particularly compared with other tropical fruit species such as Vitex doniana Sweet, whose nursery seedlings reached 75 cm before one year of age (N’Danikou et al., 2015). However, the effect of NPK on vegetative growth was reflected in increased branch and leaf numbers, an interesting prerequisite for further investigation of the species’ response to increased doses of the N, P and K combination.\n\nMore importantly, our findings provide the first evidence of the beneficial effect of water supply and fertilization on S. dulcificum flowering and fructification. Only daily watered juveniles entered the generative phase. No buds or flowers were observed in juveniles growing under natural conditions, i.e. rain-fed juveniles. This suggests that water supply is the key determinant of S. dulcificum juveniles’ entry into the reproductive phase. This finding is in line with Bernier et al. (1993), who indicated that any environmental factor that changes regularly (e.g. photoperiod, temperature, water availability) can control plant development towards flowering. 
While perennial species generally exhibit a long juvenile phase (Hanke et al., 2007) that can reach five years (e.g. Olea europaea L., Malus domestica Borkh.) (Santos-Antunes et al., 2005; Zimmerman, 1972), in S. dulcificum this phase (ending with budding) can be shortened from > 36 months to 21 months by simple daily water provision. Our results also revealed that when a suitable fertilization scheme was combined with daily watering, first flowering occurred in S. dulcificum at an average age of 23 months (less than two years) and at 16 months in early-flowering individuals. This highlights the importance of nutrient balance to the development of fruit tree species. First fruiting occurred at an average age of 24 months (20 months for extra-early individuals). This represents major progress for the species’ reproduction, as previous reports indicated that S. dulcificum bears fruit only after 3 to 4 years (Joyner, 2006). Although water supply was crucial for S. dulcificum to initiate the generative phase, our findings also suggest that nutrient supply is of paramount importance for the species’ productivity. This is illustrated by fruit production, which was fivefold higher in juveniles receiving NPK in addition to daily watering than in juveniles receiving daily watering alone.\n\nOur findings also expand the current knowledge on the phenology and reproductive biology of S. dulcificum. In juveniles of S. dulcificum, budding, once started, is continuous provided water is available. Flowering occurred one to three months after budding. In the first production round, flower production proceeded from within the crown outward. This same “centrifugal” flowering pattern was also reported in Acer platanoides L. (Tal, 2011). Flower bloom occurred five to seven days after flowering and was always observed during the hot hours of the day (from 11 a.m. to 4 p.m.). 
In this study, we observed that flowers fully exposed to sun bloomed more quickly than those hidden in the crown. This was particularly evident in NPK-fertilized seedlings, and we suspect that flower bloom time in S. dulcificum is light-dependent. This suspicion could even extend to the length of the whole reproductive stage, since Xingway & Abdullah (2016) reported that four-year-old juveniles kept under shelter took 200 days from budding to fruiting, whereas in this study sun-exposed juveniles fruited within 100–160 days of budding. The growth stage also played a key role in the length of S. dulcificum phenophases. In adult trees, the timeframe from flowering to fruiting was estimated at seven days (Oumorou et al., 2010), while in juveniles we observed that flowering to fruiting lasted 46 to 57 days.\n\nAs a sweetener and a source of secondary metabolites, S. dulcificum has great potential as a future crop for reducing the prevalence of diabetes, high blood pressure, and other nutrition-related diseases. The species has suffered from a lack of interest and is rarely included in breeding programmes; strategies to develop cultivars remain obscure, and agronomic practices to improve production and seed management require greater mobilization of resources. Our study is the first of its kind to report the effects of water and nutrient management on flowering and fruiting in S. dulcificum. When a suitable nutrient supply is combined with regular watering, the time to fructification in S. dulcificum can be reduced to half its natural duration.\n\nInorganic fertilization significantly improved S. dulcificum growth; however, the most efficient fertilizer formulation is yet to be determined. Moreover, the use and effects of organic fertilization on the species’ growth and fruit production should be explored. A major reason for the renewed interest in S. dulcificum is its high content of secondary metabolites. 
In our study, the effect of fertilization on metabolite content was not assessed; future studies should shed light on this effect, as well as on metabolite production across ecological gradients.\n\nTo date, only limited knowledge is available on the genetic variation in S. dulcificum and the distribution of genotypes across Africa. S. dulcificum is reported to be native to West Africa and thrives in Ghana, Benin, Togo, and Nigeria. Assessment of genetic diversity and definition of heterotic groups, together with a region-wide collection of germplasm, are necessary to gather ecotypes and cultivars, broaden the range of diversity, and enable the development of breeding populations.\n\nS. dulcificum is a shrub that naturally matures after three to four years. Although regular watering and nutrient supply can accelerate fruit production, it will be useful to identify secondary traits related to yield in order to increase predictive accuracy and support efficient breeding plans (e.g. time management, selection of high-yielding populations). In this regard, leaf production represents an interesting secondary trait to consider in correlative selection of high-yielding genotypes: in our study, leaf production was positively correlated with fruit production. To increase the accuracy of the selection programme, the use of quantitative trait loci might be an option. So far, there are no data on genes involved in leaf and fruit production; sequencing the species’ genome could enable rapid identification of such genes and other useful ones, strengthening cultivar development and the economic return of the species.\n\nHeat and drought stresses are yet to be assessed in S. dulcificum. Empirical observations by the first and last authors revealed that shaded seedlings were more vigorous than sun-exposed ones. Understanding how various genotypes of S. 
dulcificum respond to environmental stresses will shed light on which cultivars are appropriate for which locations and will aid adaptation to climate change. In addition, juveniles that received only rainfall survived as well as those regularly watered. Such a response opens the way to investigating the species’ adaptation potential in drier environments and the side-effects of such adaptation on cultivar selection.\n\nThe phenology data presented in this study remain incomplete, since they do not cover a whole year. A follow-up experiment will be necessary to provide a wider view of the phenological timeframe, including analysis of fructification frequency, the periods of peak flowering and fructification, and their variation across dry and rainy seasons.\n\n\nConclusions\n\nThis study has highlighted the beneficial effect of water supply and fertilization on both vegetative and reproductive growth in S. dulcificum. Water supply appeared to be the most important factor unlocking flowering in the species, while nutrient supply was crucial for accelerating entry into the reproductive phase and enhancing fruit production. Throughout the experiment, the combination of nitrogen, phosphorus and potassium at 1.5 g each consistently exhibited the highest performance for all growth and yield traits. These findings represent crucial progress towards breeding the species and scaling up its production.\n\n\nData availability\n\nDataset 1. Initial growth parameters at the fertilization experiment onset. D0 = Initial diameter, H0 = Initial height, L0 = Initial number of leaves, and B0 = Initial branching. This dataset was used to prepare Table 2. doi: 10.5256/f1000research.11091.d155614 (Tchokponhoué et al., 2017a)\n\nDataset 2. Survival data. This dataset was used to prepare Figure 2 and Table 3 and to perform related analysis. 
“Status” refers to whether the seedling died (1) or was still alive at the end of the experiment (0), and “Time” refers to the number of weeks after which the seedling died (for dead seedlings) or the last week at which a surviving seedling was observed (for seedlings still alive at the end of the experiment). doi: 10.5256/f1000research.11091.d155615 (Tchokponhoué et al., 2017b)\n\nDataset 3. Growth parameters (increment) at the end of the experiment for vegetative growth. This dataset was used to prepare Figures 3A–D and to perform related analysis. doi: 10.5256/f1000research.11091.d155616 (Tchokponhoué et al., 2017c)\n\nDataset 4. Growth parameters at the end of the experiment for leaf area. This dataset was used to prepare Figure 3E and to perform related analysis. doi: 10.5256/f1000research.11091.d155626 (Tchokponhoué et al., 2017d)\n\nDataset 5. Reproductive performance (time to budding). This dataset was used to prepare Figure 4A and to perform related analysis. doi: 10.5256/f1000research.11091.d155627 (Tchokponhoué et al., 2017e)\n\nDataset 6. Reproductive performance (time to flowering). This dataset was used to prepare Figure 4B and to perform related analysis. doi: 10.5256/f1000research.11091.d155628 (Tchokponhoué et al., 2017f)\n\nDataset 7. Reproductive performance (time to fruiting). This dataset was used to prepare Figure 4C and to perform related analysis. doi: 10.5256/f1000research.11091.d155629 (Tchokponhoué et al., 2017g)\n\nDataset 8. Cumulative fruiting. This dataset was used to prepare Figure 4D and to perform related analysis. doi: 10.5256/f1000research.11091.d155630 (Tchokponhoué et al., 2017h)\n\nDataset 9. Budding intensity. This dataset was used to prepare Table 5 and to perform related analysis. doi: 10.5256/f1000research.11091.d155631 (Tchokponhoué et al., 2017i)\n\nDataset 10. Fruiting intensity and correlation between growth parameters and fruiting. 
This dataset was used to prepare Table 5 and to generate Table 6 (correlation matrix), and to perform related analysis. doi: 10.5256/f1000research.11091.d155632 (Tchokponhoué et al., 2017j)\n\nDataset 11. Phenophase length. This dataset was used to generate Figure 5 and to perform related analysis. doi: 10.5256/f1000research.11091.d155633 (Tchokponhoué et al., 2017k)",
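The Status/Time coding described above for Dataset 2 is standard right-censored survival data. As a minimal illustration of how such coding yields a survival curve, here is a Python sketch of the Kaplan–Meier product-limit estimator; the cohort below is hypothetical example data, not the study's observations (the original survival analyses were run in R):

```python
# Kaplan-Meier sketch for right-censored data coded as in Dataset 2:
# event = 1 if the seedling died, 0 if still alive at the end of the experiment;
# time  = weeks to death (dead) or to last observation (censored).

def kaplan_meier(observations):
    """Return [(time, survival probability)] at each observed death time."""
    n_at_risk = len(observations)
    survival = 1.0
    curve = []
    for t in sorted({t for t, _ in observations}):
        deaths = sum(1 for time, event in observations if time == t and event == 1)
        if deaths:
            survival *= 1 - deaths / n_at_risk
            curve.append((t, survival))
        # everyone observed at time t (dead or censored) leaves the risk set
        n_at_risk -= sum(1 for time, _ in observations if time == t)
    return curve

# Hypothetical cohort of 10 seedlings: deaths at weeks 3, 4, 4 and 12;
# the remaining six censored (still alive) at week 52.
obs = [(3, 1), (4, 1), (4, 1), (12, 1)] + [(52, 0)] * 6
print(kaplan_meier(obs))
```

Each death multiplies the running survival probability by (1 − deaths/at-risk), while censored seedlings contribute to the risk set up to their last observation without pulling the curve down.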
"appendix": "Author contributions\n\n\n\nDAT, SN and EAD conceived the study and designed the experiments. DAT carried out the experiment, collected and analyzed data. IH reviewed the data analysis. IH, AVD and EAD gave conceptual advice. All authors contributed to preparing the manuscript. All authors were involved in the revision of the draft manuscript and have agreed on the final content.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis study was fully funded by New Alliance Trust to DAT (grant number RGNAT03/14).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgments\n\nWe are indebted to the following colleagues for their invaluable help during the field work: Carlos Houdégbé, Chaldia Agossou, Lys Aglinglo, Soulemane Nouroudine, Alcade Segnon, Olga Sogbohossou and Jacob Houéto.\n\n\nReferences\n\nAchigan-Dako EG, Tchokponhoué DA, N’Danikou S, et al.: Current knowledge and breeding perspectives for the miracle plant Synsepalum dulcificum (Schum. et Thonn.) Daniell. Genet Resour Crop Evol. 2015; 62(3): 465–476. Publisher Full Text\n\nAdomou AC: Vegetation patterns and environmental gradients in Benin: Implication for biogeography and conservation. Wageningen University, Netherlands. 2005. Reference Source\n\nAkinnifesi F, Mhango J, Sileshi G, et al.: Early growth and survival of three miombo woodland indigenous fruit tree species under fertilizer, manure and dry-season irrigation in southern Malawi. Forest Ecol Manag. 2008; 255(3–4): 546–557. Publisher Full Text\n\nAtwell BJ, Henery ML, Ball MC: Does soil nitrogen influence growth, water transport and survival of snow gum (Eucalyptus pauciflora Sieber ex Sprengel.) under CO2 enrichment? Plant Cell Environ. 2009; 32(5): 553–566. PubMed Abstract | Publisher Full Text\n\nBernier G, Havelange A, Houssa C, et al.: Physiological Signals That Induce Flowering. Plant Cell. 
1993; 5(10): 1147–1155. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBurkill HM: The useful plants of west tropical Africa. Families S-Z: Cryptogams Addenda, Royal Botanical Gardens. 2000; 5. Reference Source\n\nChang SX: Seedling sweetgum (Liquidambar styraciflua L.) half-sib family response to N and P fertilization: growth, leaf area, net photosynthesis and nutrient uptake. Forest Ecol Manag. 2003; 173(1–3): 281–291. Publisher Full Text\n\nChen XW, Abdullah TL, Abdullah NA, et al.: Rooting responses of miracle fruit (Synsepalum dulcificum) softwood cuttings as affected by indole butyric acid. Am J Agric Biol Sci. 2012; 7(4): 442–446. Publisher Full Text\n\nChmelíková L, Hejcman M: Effect of nitrogen, phosphorus and potassium availability on emergence, nodulation and growth of Trifolium medium L. in alkaline soil. Plant Biol (Stuttg). 2014; 16(4): 717–725. PubMed Abstract | Publisher Full Text\n\nCornelissen J, Lavorel S, Garnier E, et al.: A handbook of protocols for standardised and easy measurement of plant functional traits worldwide. Austr J Bot. 2003; 51(4): 335–380. Publisher Full Text\n\nCrawley MJ: The R Book. Imperial College London at Silwood Park, UK. 2007. Publisher Full Text\n\nDavenport TL: Management of flowering in three tropical and subtropical fruit tree species. HortScience. 2003; 37(8): 1331–1337. Reference Source\n\nDu L, Shen Y, Zhang X, et al.: Antioxidant-rich phytochemicals in miracle berry (Synsepalum dulcificum) and antioxidant activity of its extracts. Food Chem. 2014; 153: 279–284. PubMed Abstract | Publisher Full Text\n\nEbert G, Eberle J, Ali-Dinar H, et al.: Ameliorating effects of Ca(NO3)2 on growth, mineral uptake and photosynthesis of NaCl-stressed guava seedlings (Psidium guajava L.). Sci Hortic. 2002; 93(2): 125–135. Publisher Full Text\n\nFaustino LI, Bulfe NM, Pinazo MA, et al.: Dry weight partitioning and hydraulic traits in young Pinus taeda trees fertilized with nitrogen and phosphorus in a subtropical area. 
Tree Physiol. 2013; 33(3): 241–251. PubMed Abstract | Publisher Full Text\n\nHanke MV, Flachowsky H, Peil A, et al.: No flower no fruit – Genetic potentials to trigger flowering in fruit trees. G3. 2007; 1(1): 1–12. Reference Source\n\nHoueto J: Strategies d'exploitation des pieds reliques et vergers de Synsepalum dulcificum (Schumach & Thonn.) Daniell dans la commune de Toffo. University of Abomey-Calavi, Abomey-Calavi, Benin. 2015.\n\nInglett GE, May JF: Tropical plants with unusual taste properties. Econ Bot. 1968; 22(4): 326–331. Publisher Full Text\n\nJoyner G: The miracle fruit. Quandong magazine of the West Australian Nut and Tree Crop Association. Subiaco, West Australia. 2006; 15.\n\nKing NT, Seiler JR, Fox TR, et al.: Post-fertilization physiology and growth performance of loblolly pine clones. Tree Physiol. 2008; 28(5): 703–711. PubMed Abstract\n\nLim TK: Synsepalum dulcificum. Edible medicinal and non-medicinal plants. Springer Netherlands. 2013; 146–150. Publisher Full Text\n\nMeilan R: Floral induction in woody angiosperms. New Forest. 1997; 14(3): 179–202. Publisher Full Text\n\nN’Danikou S, Achigan-Dako EG, Tchokponhoué DA, et al.: Improving seedling production for Vitex doniana. Seed Sci Technol. 2015; 43(1): 10–19. Publisher Full Text\n\nNakaji T, Fukami M, Dokiya Y, et al.: Effects of high nitrogen load on growth, photosynthesis and nutrient status of Cryptomeria japonica and Pinus densiflora seedlings. Trees. 2001; 15(8): 453–461. Reference Source\n\nNjoku NE, Ubbaonu CN, Alagbaoso SO, et al.: Amino acid profile and oxidizable vitamin content of Synsepalum dulcificum berry (miracle fruit) pulp. Food Sci Nutr. 2015; 3(3): 252–256. PubMed Abstract | Publisher Full Text | Free Full Text\n\nOskarsson H, Sigurgeirsson A, Raulund-Rasmussen K: Survival, growth, and nutrition of tree seedlings fertilized at planting on Andisol soils in Iceland: Six-year results. Forest Ecol Manag. 2006; 229(1–3): 88–97. 
Publisher Full Text\n\nOumorou M, Dah-Dovonon J, Aboh BA, et al.: Contribution à la conservation de Synsepalum dulcificum: régénération et importance socio-économique dans le département de l'Ouémé (Bénin). Annal Sci Agron. 2010; 14(1): 101–120.\n\nPavel EW, De Villiers AJ: Responses of mango trees to reduced irrigation regimes. Acta Hortic. 2004; 645: 63–68. Publisher Full Text\n\nPoothong S, Reed BM: Modeling the effects of mineral nutrition for improving growth and development of micropropagated red raspberries. Sci Hortic. 2014; 165: 132–141. Publisher Full Text\n\nR Development Core Team: R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria; 2013. Reference Source\n\nRodrigues JF, Andrade RD, Bastos SC, et al.: Miracle fruit: An alternative sugar substitute in sour beverages. Appetite. 2016; 107: 645–653. PubMed Abstract | Publisher Full Text\n\nRöhrig J: Evaluation of agricultural land resources in Benin by regionalisation of the marginality index using satellite data. PhD Thesis, University of Bonn. 2008. Reference Source\n\nSantos-Antunes F, León L, de la Rosa R, et al.: The length of the juvenile period in olive as influenced by vigor of the seedlings and the precocity of the parents. HortScience. 2005; 40(5): 1213–1215. Reference Source\n\nTal O: Flowering phenological pattern in crowns of four temperate deciduous tree species and its reproductive implications. Plant Biol (Stuttg). 2011; 13(Suppl 1): 62–70. PubMed Abstract | Publisher Full Text\n\nTchokponhoué DA, N'Danikou S, Hale I, et al.: Dataset 1 in: Early fruiting in Synsepalum dulcificum (Schumach. & Thonn.) Daniell juveniles induced by water and inorganic nutrient management. F1000Research. 2017a. Data Source\n\nTchokponhoué DA, N'Danikou S, Hale I, et al.: Dataset 2 in: Early fruiting in Synsepalum dulcificum (Schumach. & Thonn.) Daniell juveniles induced by water and inorganic nutrient management. F1000Research. 2017b. 
Data Source\n\nTchokponhoué DA, N'Danikou S, Hale I, et al.: Dataset 3 in: Early fruiting in Synsepalum dulcificum (Schumach. & Thonn.) Daniell juveniles induced by water and inorganic nutrient management. F1000Research. 2017c. Data Source\n\nTchokponhoué DA, N'Danikou S, Hale I, et al.: Dataset 4 in: Early fruiting in Synsepalum dulcificum (Schumach. & Thonn.) Daniell juveniles induced by water and inorganic nutrient management. F1000Research. 2017d. Data Source\n\nTchokponhoué DA, N'Danikou S, Hale I, et al.: Dataset 5 in: Early fruiting in Synsepalum dulcificum (Schumach. & Thonn.) Daniell juveniles induced by water and inorganic nutrient management. F1000Research. 2017e. Data Source\n\nTchokponhoué DA, N'Danikou S, Hale I, et al.: Dataset 6 in: Early fruiting in Synsepalum dulcificum (Schumach. & Thonn.) Daniell juveniles induced by water and inorganic nutrient management. F1000Research. 2017f. Data Source\n\nTchokponhoué DA, N'Danikou S, Hale I, et al.: Dataset 7 in: Early fruiting in Synsepalum dulcificum (Schumach. & Thonn.) Daniell juveniles induced by water and inorganic nutrient management. F1000Research. 2017g. Data Source\n\nTchokponhoué DA, N'Danikou S, Hale I, et al.: Dataset 8 in: Early fruiting in Synsepalum dulcificum (Schumach. & Thonn.) Daniell juveniles induced by water and inorganic nutrient management. F1000Research. 2017h. Data Source\n\nTchokponhoué DA, N'Danikou S, Hale I, et al.: Dataset 9 in: Early fruiting in Synsepalum dulcificum (Schumach. & Thonn.) Daniell juveniles induced by water and inorganic nutrient management. F1000Research. 2017i. Data Source\n\nTchokponhoué DA, N'Danikou S, Hale I, et al.: Dataset 10 in: Early fruiting in Synsepalum dulcificum (Schumach. & Thonn.) Daniell juveniles induced by water and inorganic nutrient management. F1000Research. 2017j. Data Source\n\nTchokponhoué DA, N'Danikou S, Hale I, et al.: Dataset 11 in: Early fruiting in Synsepalum dulcificum (Schumach. & Thonn.) 
Daniell juveniles induced by water and inorganic nutrient management. F1000Research. 2017k. Data Source\n\nTrubat R, Cortina J, Vilagrosa A: Nursery fertilization affects seedling traits but not field performance in Quercus suber L. J Arid Environ. 2010; 74(4): 491–497. Publisher Full Text\n\nWilkie JD, Sedgley M, Olesen T: Regulation of floral initiation in horticultural trees. J Exp Bot. 2008; 59(12): 3215–3228. PubMed Abstract | Publisher Full Text\n\nXingway C, Abdullah TL: Flower ontogenesis and fruit development of Synsepalum dulcificum. HortScience. 2016; 51(6): 697–702.\n\nZhang D, Moran RE, Stack LB: Effect of phosphorus fertilization on growth and flowering of Scaevola aemula R. Br. 'New Wonder'. HortScience. 2004; 39(7): 1728–1731. Reference Source\n\nZimmerman RH: Juvenility and flowering in woody plants: a review. HortScience. 1972; 7(5): 447–455. Reference Source"
}
|
[
{
"id": "21888",
"date": "26 Apr 2017",
"name": "Emil Luca",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nIt was a great and pleasant surprise for me to read such a well-documented paper. The results obtained by the authors are revealing the intensive research conducted in the almost 3 years of experiments, and those results are properly highlighted in the content of the present paper.\nThe documentation was also very detailed and thoroughly done, which demonstrates the authors' involvement in the chosen topic, as well as their dedication to it.\nThrough their research, the authors managed to shorten the juvenile phase (ending with budding) at Synsepalum dulcificum from more than 36 months to 21 months with simple daily water provision, while when suitable fertilization scheme was combined to daily watering, first flowering occurred at an average age of 23 months and at 16 months old for early flowering individuals. In the article it is also stated that the first fruiting occurred at the average age of 24 months. These achievements shorten almost to half the vegetation period for Synsepalum dulcificum, which means that this miraculous plant has the chances to be bred and produced on a large scale very soon.\nMy hope is that the authors will continue their research, for in the near future we would be able to find a way to produce it in other parts of the world.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? 
Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nI cannot comment. A qualified statistician is required.\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
},
{
"id": "23248",
"date": "05 Jun 2017",
"name": "Nur Ashikin Psyquay Abdullah",
"expertise": [
"Reviewer Expertise Botany",
"in vitro physiology",
"molecular systematics",
"agronomy"
],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nWhen the title presented was as “induced by water and inorganic nutrient”, I was expecting different watering different regimes. It comes to no surprise that the control will have least significant effects on the plant growth since it received no fertilizers. If the authors would have applied the same fertilizing regimes on the rainfall fed, then they would have stronger arguments that water did influence the onset of flowering. The same opinion is applied to the treatments in the glasshouse. Obviously, no watering on fertilized plants will gives devastating effects on the seedlings. However, if they would had expand the watering regimes to different volumes such as 1,2 and 3 L for example, giving some stress induced conditions to the seedling then perhaps they could gives a strong conclusion that watering was the main effects in inducing flowering. Their arguments fall towards more on the watering rather than the fertilizing as this comprise the main treatments. The results presented with comment “Finding providing evidence of the beneficial effect of water supply and fertilizing (for the first time)” for me is hardly surprising or new. Many plants that are put into cultivation must go through cultivation studies to determine the optimum agronomic practices, and basically water and nutrients are beneficial to obtained optimum yield. 
What I would like the author to stress on, since few studies are put into the cultivation of miracle fruit is the emphasis on the best fertilizing regimes for its growth. For me, by chance the 2L water did induced flowering as compared to the rain fed plants. In my opinion, the rain should had been measured. In the future, when this plant is plant in the field, how will they expect to provide 2L water or are this recommended for indoor pot plant?\n\nNevertheless, this paper is recommendable since we need as much baseline data for the cultivation of this underutilized plant.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Partly\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-399
|
https://f1000research.com/articles/6-283/v1
|
17 Mar 17
|
{
"type": "Review",
"title": "Associative memory cells: Formation, function and perspective",
"authors": [
"Jin-Hui Wang",
"Shan Cui",
"Shan Cui"
],
"abstract": "Associative learning and memory are common activities in life, and their cellular infrastructures constitute the basis of cognitive processes. Although neuronal plasticity emerges after memory formation, basic units and their working principles for the storage and retrieval of associated signals remain to be revealed. Current reports indicate that associative memory cells, through their mutual synapse innervations among the co-activated sensory cortices, are recruited to fulfill the integration, storage and retrieval of multiple associated signals, and serve associative thinking and logical reasoning. In this review, we aim to summarize associative memory cells in their formation, features and functional impacts.",
"keywords": [
"Associative memory cell (AMC)",
"synapse",
"neuron",
"learning and cognition"
],
"content": "\n\nAssociative learning is a common approach for information acquisition, and associative memory is essential for logical reasoning, associative thinking, comparison and computation1–4. Each object possesses several characteristics that can be detected by different sensory modalities. An apple is detected by the olfactory system for its perfume, the visual system for its shape and color, the taste system for its sweetness, the auditory system for its name, and so on. In initial associative learning, how do the sensory cortices integrate these cross-modal signals for us to describe an object and fulfill their associative memory? i.e., how does the brain jointly store multiple signals and distinguishably retrieve them? In fact, such signals can be retrieved reciprocally, i.e., one signal induces the recall of its associated signals, or the other way around. The mutual synapse innervations among the co-activated sensory cortices and the associative memory cells in these areas are presumably recruited during associative learning1,3,5–7.\n\nIn the studies of cellular and molecular mechanisms underlying associative learning and memory, animal models in fear conditioning, eyelid-blinking conditioning and operant conditioning are used8–11. A psychological view suggests that a conditioned signal induces the prediction of an unconditioned forthcoming signal as the basis of conditioned reflexes; however, the cellular mechanisms remain unclear4. Activity-dependent neural plasticity, such as long-term potentiation12 and depression13, is presumably involved. Whether these types of neural plasticity are correlated with these associated signals remains to be examined. In addition, perceptual memory presumably resides in the cell assembles formed by the strengthening of neuronal connections due to their correlated activities during information acquisition14. 
However, the nature of these memory cells is largely unknown.\n\nA few points are worth considering in the use of these conditioning models. Associative memory appears as one signal evoking the recall of its associated signals, or the other way around. After fear conditioning or eye-blink conditioning is established, whether air-puffing of the cornea or electrical shock to the feet induces the recall of the sound signal remains unknown. Moreover, the electrical shock may activate entire sensory cortices and even the whole brain through the spread of electrical currents to all the sensory systems in the body, so that the association is not region-specific in the brain. In addition, the cerebellum may not be a primary region for the joint storage of corneal air-puff and sound signals.\n\n\nA comprehensive model of associative memory\n\nCurrent reports show a reciprocal form of cross-modal reflexes in mice, i.e., paired odor and whisker stimulations lead to odorant-induced whisker motion and whisker-induced olfaction responses5,6. This mutual retrieval of associated signals can be used to explain associative memory. After two signals are learnt associatively, one signal induces the recall of its associated signals, with the presentation of their respective behaviors, or the other way around, so that individuals are able to fulfill logical reasoning and associative thinking in forward and backward manners. For instance, people looking at an orange recall its sour-sweet taste and salivate, and people tasting something sour-sweet may recall an orange or orange-like juice.\n\nWhere is the location to integrate the associated signals? 
Since inhibiting the function of sensory cortices blocks the reciprocal cross-modal reflex1,3,15, the primary area for associative memory is likely located in the sensory cortices, where mutual synapse innervations and associative memory cells are recruited after associative learning1,3,7.\n\n\nThe cellular mechanism underlying associative memory\n\nThe association of sensory signals leads to their associated storage and retrieval, so that each signal is able to induce the recall of the other signals. The hypothesised cellular substrate is that co-activation of the sensory cortices, by pairing their input signals, recruits mutual synapse innervations among these cortices for integrating the associated signals, and recruits associative memory cells for encoding these signals in the sensory cortices5,6.\n\nAfter odorant-induced whisker motion and whisker-induced olfaction responses are established in mice, their barrel and piriform cortical neurons are recruited to encode a new signal alongside an innate signal. Barrel cortical neurons encode new odor and innate whisker signals. Piriform cortical neurons are able to encode new whisker and innate odor signals. Moreover, barrel cortical neurons receive new synapse innervation from the piriform cortex alongside an innate one from the thalamus. In addition, piriform cortical neurons receive new synapse innervations from the barrel cortex alongside innate ones from the olfactory bulb. That is, barrel and piriform cortical neurons are mutually innervated through their axons and synapse outputs1,3,5,6. The neurons that encode both new and innate signals based on their mutual synapse innervations are termed associative memory cells. The neurons that encode only one of the signals are called new memory cells or innate memory cells. Associative memory cells include glutamatergic neurons, GABAergic neurons and astrocytes1,3,7. miRNA-mediated epigenetic processes also appear to be involved7,15. 
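The recruitment scheme described above can be caricatured in a few lines of code. This is a purely illustrative sketch, not the authors' experimental model: the unit counts, drive values, threshold and learning rate are invented for the demonstration, and only one direction of the mutual innervation (piriform to barrel) is modelled for brevity.

```python
# Toy sketch of cross-modal associative memory cell recruitment:
# co-activation of two sensory cortices during paired stimulation
# strengthens cross-cortical synapses, after which one signal alone
# reactivates the neurons that encode its partner signal.
# All numbers below are invented assumptions for this illustration.

THRESHOLD = 0.5   # firing threshold (assumption)
LEARN_RATE = 0.1  # Hebbian increment per pairing (assumption)
N = 6             # units per "cortex" (assumption)

# Innate afferent drive: whisker input to barrel units, odor input
# to piriform units (values invented for the demo).
whisker_drive = [0.9, 0.1, 0.8, 0.2, 0.7, 0.1]
odor_drive    = [0.2, 0.9, 0.1, 0.8, 0.1, 0.9]

def active(drive):
    """Binary activation: a unit fires if its drive exceeds threshold."""
    return [1.0 if d > THRESHOLD else 0.0 for d in drive]

# Cross-cortical weights, initially zero: no mutual innervation yet.
w_barrel_from_piriform = [[0.0] * N for _ in range(N)]

# Paired training ("activity together, connection together"):
# co-active barrel/piriform unit pairs gain a cross-cortical synapse.
for _ in range(20):
    b, p = active(whisker_drive), active(odor_drive)
    for i in range(N):
        for j in range(N):
            w_barrel_from_piriform[i][j] += LEARN_RATE * b[i] * p[j]

# Retrieval: odor alone now drives above threshold exactly those barrel
# units that were co-active during pairing (odor-induced whisker motion).
p = active(odor_drive)
recall = [sum(w_barrel_from_piriform[i][j] * p[j] for j in range(N))
          for i in range(N)]
print(active(recall))  # barrel units 0, 2 and 4 fire
```

A symmetric weight matrix from barrel to piriform would give the reverse reflex (whisker-induced olfaction) in the same way, matching the reciprocal retrieval described in the text.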
Associative memory cells and mutual synapse innervations among sensory cortices constitute the cellular substrates of memory for specific associated signals. Notably, associative memory cells are able to store more than two signals15,16. For instance, paired whisker, odor and tail stimulations lead to odorant-induced and tail-induced whisker motions alongside whisker-induced whisker motion. The neurons in these sensory cortices are recruited to encode these three signals through mutual cortical innervations15.\n\nMemory for associated signals is primarily fulfilled by associative memory cells in sensory cortices. These associative memory cells recruited from sensory cortical neurons possess the following characteristics (Figure 1). They encode associated signals, including their innate signals and new signals. They receive new synapse innervations from the co-activated sensory cortices besides their innate sensory input, for the integration and storage of associated innate and new signals. Their axons project to brain areas that control behavior, cognition and emotion to initiate memory presentations. Their recruitment is influenced by epigenetically regulated genes and proteins that are related to memory. In the integration, storage and retrieval of these associated signals, the working principles of associative memory cells are based on their reception of innate and new synapse inputs for signal integration, their ability to convert synaptic analogue signals into digital spikes for encoding the associated signals, and their capacity to output spikes that drive behavior-, cognition- and emotion-related brain areas. Therefore, the synapse inputs onto associative memory cells determine the specificity of memory contents. The number, activity level and plasticity of associative memory cells, as well as the connection and activity strengths of their input and output partners, determine the strength and persistence of information storage and memory presentation. 
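The working principles just listed (summation of innate and newly innervated synapse inputs, then conversion of the summed analogue signal into a digital spike at threshold) can be illustrated with a minimal sketch; the threshold and drive values below are invented for the illustration and do not come from the studies cited here.

```python
# Toy illustration of the analogue-to-digital working principle:
# a neuron sums its innate afferent drive and the drive arriving via
# newly innervated cross-modal synapses, and emits a spike only when
# the sum reaches threshold. All values are assumptions for the demo.

THRESHOLD = 1.0  # spike threshold (assumption)

def spikes(innate_drive, cross_modal_drive):
    """Convert summed synaptic (analogue) input into a digital spike."""
    return innate_drive + cross_modal_drive >= THRESHOLD

# Before associative learning there is no cross-modal synapse, so a
# subthreshold innate drive alone cannot make the neuron fire:
print(spikes(innate_drive=0.6, cross_modal_drive=0.0))  # False

# After paired training, the new cross-modal synapse adds drive and
# pushes the same neuron past threshold, e.g., odor input reaching a
# barrel cortical neuron via new piriform-to-barrel synapses:
print(spikes(innate_drive=0.6, cross_modal_drive=0.5))  # True
```

In this picture, the new synapse input is what makes the previously subthreshold cell fire, which is how the text explains a cue in one modality triggering a response in another.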
With associative memory cells in the sensory cortices1,3,7,16, their axon-innervated downstream brain cells are able to encode these associated signals17–20. Stimulation of any of these areas in the neural circuits from sensory cortices to behavior- and emotion-controlling brain nuclei can induce memory presentation21–27.\n\nAssociative memory cells that encode new sensory signals alongside innate signals are called primary associative memory cells (pAMC). They receive synapse innervations from co-activated sensory cortices alongside innate sensory inputs. Their axons project to the brain areas related to cognition (prefrontal cortex and hippocampus), emotion (amygdala and nucleus accumbens) and behavior (motor cortex). The downstream neurons can be mutually innervated to encode dual associative signals (secondary associative memory cells, sAMC) during active thinking. sAMC also include those neurons that receive convergent inputs from different groups of pAMC. The mutual innervations can be induced among the different groups of pAMC by the association of their corresponding intramodal signals. The different groups of associative memory cells and their inputs/outputs are presented by different intensities of colors, such as light blue vs dark blue and orange vs red.\n\n\nThe impact of associative memory cells on physiology and pathology\n\nOne signal induces the recall of its associated signals and the expression of their respective behaviors, or the other way around, such that individuals are able to fulfill logical reasoning and associative thinking in forward and backward manners. Each of the co-activated sensory cortices encodes the associated innate signal and the newly learnt signal. Each of the associated signals is stored in multiple sensory cortices, and this distributed storage prevents memory loss3. The storage of multiple signals in an associative memory cell strengthens the efficiency of memory retrieval. 
In addition, the storage of multiple signals in a cortical area, and the recall of one signal triggered by multiple signals, enable individuals to strengthen their abilities in memory retrieval and well-organized cognition.\n\nThere are two forms of neuronal plasticity: the recruitment of associative memory cells, and structural and functional changes in neurons/synapses. The recruitment of associative memory cells driven by new synapse innervations serves to store the specifically acquired signals, which differs from plasticity at existing synapses, such as potentiation and depression, whose specificity for newly acquired signals remains to be proved7. Structural and functional plasticity in the subcellular compartments of associative memory cells determines whether they sensitively integrate associated signals, precisely encode memorized signals and efficiently trigger the neurons in their downstream brain areas for memory presentation7.\n\nA working diagram of associative memory cells in cross-modal memory and reflexes is provided in Figure 1. In addition to the innate input, the activity strength of associative memory cells in a given sensory cortex is facilitated by the newly innervated synapses from other co-activated sensory cortices. For instance, piriform cortical neurons receive synapse innervations from the barrel cortex after the pairing of whisker and odor signals. On the basis of odor signals from the olfactory afferent pathway, the activity of synapse inputs from barrel cortical neurons upon whisker stimulation will drive these piriform cortical neurons toward the threshold for firing spikes, and their spikes activate the downstream neurons for olfactory responses, such that a whisker-induced olfactory response is formed. On the other hand, barrel cortical neurons receive synapse innervations from the piriform cortex after the pairing of whisker and odor signals. 
On the basis of whisker signals from the whisker afferent pathway, the activity of synapse inputs from piriform cortical neurons upon odor stimulation will drive these barrel cortical neurons toward the threshold for firing spikes. Their spikes activate the motor cortical neurons for whisker motion, such that odorant-induced whisker motion is formed. In the meantime, the activation of their downstream cognition- and emotion-related brain regions will lead to accompanying responses in emotion and cognition7.\n\nIn terms of memory enhancement, weakness and loss, the changes in associative memory cells from the sensory cortices to cognition-, behavior- and emotion-related brain regions are critical. For instance, if the innervations from multi-signal inputs to associative memory cells, and their upregulation, persist in sensory cortices, memory traces will be maintained over a lifetime. Decayed plasticity in behavior-related cortices, due to lack of use, may lead to an inability to present memory, i.e., memory weakness or loss15. In fact, although certain learned signals cannot be intrinsically and spontaneously recalled, these stored signals can be retrieved from the sensory cortex by a signal similar or identical to them. On the other hand, if signal integration at associative memory cells is unusually strong, the hyperactivity of associative memory cells may lead to seizure activity in motor cortices for epilepsy, in sensory cortices for hallucination and in cognitive cortices for delusion7.\n\n\nPlasticity at associative memory cells\n\nIn glutamatergic neurons, the excitatory synaptic inputs, intrinsic properties and axon outputs are upregulated. In GABAergic neurons, the excitatory inputs are upregulated, whereas the intrinsic properties and axon outputs are downregulated. 
These factors may coordinately facilitate the driving force from new synapse innervations to recruit glutamatergic and GABAergic neurons as associative memory cells, and promote their functional state for storing signals7. For instance, in neurons with newly formed synapse innervations, the increased excitatory inputs and decreased inhibitory inputs can raise their active state to a higher level for receiving and storing new information, i.e., the recruitment of associative memory cells1,3,7. The increased number and function of excitatory synapse inputs can strengthen the encoding capacity and precision of associative memory cells for information storage and retrieval28–30. If the excitatory associative memory cells are overly active, they may activate neighboring inhibitory neurons, which prevent the associative memory cells from hyperactivity via recurrent negative feedback28,31–33.\n\n\nPerspectives on associative memory cells\n\nIn addition to the recruitment of associative memory cells by the association of exogenous signals, associative memory cells may be recruited by the association of endogenous signals in the brain during cognition, such as associative thinking and logical reasoning (Figure 1). In active thinking, the association of previously stored associative signals in sensory cortices may lead the recruited associative memory cells (i.e., primary associative memory cells) to make mutual synapse innervations among themselves and to convergently innervate downstream neurons. In their downstream brain areas, the neurons begin to encode dual associative signals and are recruited as new associative memory cells (i.e., secondary associative memory cells). The contents of associative thinking and logical reasoning are thereby memorized. In subsequent cognitive activity, the secondary associative memory cells can be activated for mixed associative memory presentation, high-level cognition and even inspiration. 
Thus, the more associative thinking there is, the greater the integration of associative memory cells and the more inspiration there is3. Moreover, secondary associative memory cells may be recruited by giving exogenous associated signals to pair two forms of associative memory. In terms of the connections among different groups of associative memory cells, patterns of sequential links and a commonly shared group may be present for signal integration.\n\nIn addition to associative learning through cross-modal sensory systems to induce cross-modal associative memory, associative learning can be intramodal, e.g., associated images to the visual system, associated odors to the olfactory system, associated words to the auditory system, associated somatosensory signals to the somatosensory system, and so on, inducing intramodal associative memory. The associated signals from a given sensory input to its sensory cortex may initiate two sets of neurons that encode each of these associated signals to form mutual innervation through axons and synapses, so that associative memory cells in a single modality of the sensory cortex are recruited. In this given sensory cortex, associative memory cells are formed to memorize intramodal signals with different features, strengths and locations of input signals (Figure 1). With associative memory cells in intramodal sensory cortices, intramodal memory for associated signals is formed, e.g., image one induces the recall of image two, odor one the recall of odor two, and word one the recall of word two, or the other way around. It is noteworthy that there is a time delay among intramodal signals, such that the persistence of activity in the two sets of neurons in this given sensory cortex controls whether their co-activation overlaps to recruit intramodal associative memory cells. 
The different proportions, activity strengths and connections of these neurons are responsible for the storage and retrieval of intramodal signals with different features34.\n\nDuring associative thinking and logical reasoning, we can usually tell that images are from previous sights, words from previous reading or listening, tastes from previous eating, and so on. These phenomena indicate that the recalled signals originate from associative memory cells in sensory cortices, and/or that the secondary associative memory cells in cognitive cortices send synapse innervations back to the primary associative memory cells. Their interactions allow associative thinking and logical reasoning to include the sensory origins. In addition, images, odors, tastes and events are represented by word-based language during associative thinking and logical reasoning. In initial learning, these sensations/events and their word descriptions are associated, such that associative memory cells encoding these sensations/events and word descriptions have been recruited. Once sensations and behaviors are recalled in sequential playback, their word descriptions in these associative memory cells are initiated to substitute for the complicated images and events, speeding up these cognitive processes. However, if words and sensations/events are associated improperly, the correction of these associations is difficult because of the presence of the recruited synapse innervations, associative memory cells and their circuits.\n\nThe formation of associative memory cells, in terms of number and distributed areas, is affected by the excitatory state of the brain. Greater excitatory strength over more areas recruits more associative memory cells, i.e., activity together, connection together. 
When the brain is highly excited in many areas, such as during euphoria, extreme fear and strong stimulation, more associative memory cells are recruited in these areas through their mutual innervations, so that impressive memory and spontaneous recall of these experiences are generated in an individual’s life7. In this regard, it is difficult to remove these formed synapse innervations and recruited associative memory cells for the relief of fear memory. Alternative approaches may be the avoidance of fear stimulation and the induction of happiness to rebalance these two states toward the weakening of fear memory, since lack of use may drive the neural circuits related to fear memory to become functionally silent. In the brains of individuals with a history of substance abuse or addiction, associative memory cells are formed in large numbers and found across extensive areas, so that relapses can occur throughout the individual’s lifetime1. Strategies for these individuals may include the avoidance of environmental cues associated with substance abuse to silence these associative memory cells and circuits, as well as the establishment of alternative happiness to modify these silenced associative memory cells.\n\nHow different groups of associative memory cells work together during dreaming is proposed below. Dreams are usually associated with high activity in the electroencephalogram and other behaviors, such as rapid eye movement, muscle twitches and active respiration/heart beat, indicating high activity in the cerebrum. In the meantime, associative memory cells are presumably activated, especially those for images and events intensively activated and frequently thought about in the daytime, such that more or less similar images and events are played back. In other words, associative thinking and logical reasoning based on associative memory cells can be fulfilled in an unawake condition. 
The incompletely identical images and events in dreams may be caused by the differential integration of associative memory cells when the brain is not fully awake, compared to the awake condition7.\n\nIn terms of the molecular mechanisms underlying the recruitment of associative memory cells, epigenetically mediated processes are presumably involved6,7,15, since their formation is triggered by external inputs from sensory organs and intrinsic activation by endogenous synapse inputs. The downstream molecules and signaling pathways of these epigenetic events that regulate synapse formation and neuron/synapse activities presumably contribute to the recruitment of associative memory cells, which remains to be tested.\n\n\nConclusions\n\nIn summary, based on studies of associative memory cells, we have constructed a working map of the brain for the integration, storage and retrieval of associated signals, as well as the subsequent cognitive processes. Associative memory cells and their activity-dependent activation play important roles in associative memory and cognition. The perspectives in this review are expected to be useful for future studies aiming to construct comprehensive brain-working atlases.",
"appendix": "Author contributions\n\n\n\nJHW contributed the original idea and wrote the paper, and SC drew the diagram. All authors approve the final version of the manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis study is funded by National Basic Research Program (2013CB531304, 2016YFC1307100) and Natural Science Foundation China (81671071, 81471123) to JHW.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nReferences\n\nGao Z, Chen L, Fan R, et al.: Associations of Unilateral Whisker and Olfactory Signals Induce Synapse Formation and Memory Cell Recruitment in Bilateral Barrel Cortices: Cellular Mechanism for Unilateral Training Toward Bilateral Memory. Front Cell Neurosci. 2016; 10(285): 285. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKandel ER, Pittenger C: The past, the future and the biology of memory storage. Philos Trans R Soc Lond B Biol Sci. 1999; 354(1392): 2027–52. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWang D, Zhao J, Gao Z, et al.: Neurons in the barrel cortex turn into processing whisker and odor signals: a cellular mechanism for the storage and retrieval of associative signals. Front Cell Neurosci. 2015; 9(320): 320. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWasserman EA, Miller RR: What's elementary about associative learning? Annu Rev Psychol. 1997; 48: 573–607. PubMed Abstract | Publisher Full Text\n\nWang JH, Chen N, Gao Z, et al.: Upregulation of glutamatergic receptor-channels is associated with cross-modal reflexes encoded in barrel cortex and piriform cortex. Biophys J. 2014; 106(2 supplement 1): 191a. Publisher Full Text\n\nWang JH, Wang D, Gao Z, et al.: Both Glutamatergic and Gabaergic Neurons are Recruited to be Associative Memory Cells. Biophys J. 2016; 110(3 supplement 1): 481a. 
Publisher Full Text\n\nYan F, Gao Z, Chen P, et al.: Coordinated Plasticity between Barrel Cortical Glutamatergic and GABAergic Neurons during Associative Memory. Neural Plast. 2016; 2016:1–20. 5648390. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDityatev AE, Bolshakov VY: Amygdala, long-term potentiation, and fear conditioning. Neuroscientist. 2005; 11(1): 75–88. PubMed Abstract | Publisher Full Text\n\nMaren S: Pavlovian fear conditioning as a behavioral assay for hippocampus and amygdala function: cautions and caveats. Eur J Neurosci. 2008; 28(8): 1661–6. PubMed Abstract | Publisher Full Text\n\nStaddon JE, Cerutti DT: Operant conditioning. Annu Rev Psychol. 2003; 54: 115–44. Publisher Full Text\n\nTheios J, Brelsford JW Jr: A Markov model for classical conditioning: Application to eye-blink conditioning in rabbits. Psychol Rev. 1966; 73(5): 393–408. PubMed Abstract | Publisher Full Text\n\nBliss TV, Lomo T: Long-lasting potentiation of synaptic transmission in the dentate area of the anaesthetized rabbit following stimulation of the perforant path. J Physiol. 1973; 232(2): 331–356. PubMed Abstract | Publisher Full Text | Free Full Text\n\nStanton PK, Sejnowski TJ: Associative long-term depression in the hippocampus induced by hebbian covariance. Nature. 1989; 339(6221): 215–8. PubMed Abstract | Publisher Full Text\n\nHebb DO: The organization of behavior, a neuropsychological theory. New York, NY: Wiley. 1949. Reference Source\n\nWang JH, Feng J, Lu W: Associative memory cells are recruited to encode triple sensory signals via synapse formation. Biophys J. 2017; 112(3 Supplement 1): 1443–444a. Publisher Full Text\n\nVincis R, Fontanini A: Associative learning changes cross-modal representations in the gustatory cortex. eLife. 2016; 5: pii: e16420. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCai DJ, Aharoni D, Shuman T, et al.: A shared neural ensemble links distinct contextual memories encoded close in time. Nature. 
2016; 534(7605): 115–8. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNaya Y, Yoshida M, Miyashita Y: Forward processing of long-term associative memory in monkey inferotemporal cortex. J Neurosci. 2003; 23(7): 2861–71. PubMed Abstract\n\nTakehara-Nishiuchi K, McNaughton BL: Spontaneous changes of neocortical code for associative memory during consolidation. Science. 2008; 322(5903): 960–3. PubMed Abstract | Publisher Full Text\n\nViskontas IV: Advances in memory research: single-neuron recordings from the human medial temporal lobe aid our understanding of declarative memory. Curr Opin Neurol. 2008; 21(6): 662–8. PubMed Abstract | Publisher Full Text\n\nEhrlich I, Humeau Y, Grenier F, et al.: Amygdala inhibitory circuits and the control of fear memory. Neuron. 2009; 62(6): 757–771. PubMed Abstract | Publisher Full Text\n\nLi H, Penzo MA, Taniguchi H, et al.: Experience-dependent modification of a central amygdala fear circuit. Nat Neurosci. 2013; 16(3): 332–9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLiu X, Ramirez S, Puryear CB, et al.: Optogenetic stimulation of a hippocampal engram activates fear memory recall. Nature. 2012; 484(7394): 381–5. PubMed Abstract | Publisher Full Text | Free Full Text\n\nOtis JM, Namboodiri VM, Matan AM, et al.: Prefrontal cortex output circuits guide reward seeking through divergent cue encoding. Nature. 2017; 543(7643): 103–107. PubMed Abstract | Publisher Full Text\n\nPape HC, Pare D: Plastic synaptic networks of the amygdala for the acquisition, expression, and extinction of conditioned fear. Physiol Rev. 2010; 90(2): 419–463. PubMed Abstract | Publisher Full Text | Free Full Text\n\nXu W, Südhof TC: A neural circuit for memory specificity and generalization. Science. 2013; 339(6125): 1290–5. PubMed Abstract | Publisher Full Text | Free Full Text\n\nYokose J, Okubo-Suzuki R, Nomoto M, et al.: Overlapping memory trace indispensable for linking, but not recalling, individual memories. Science. 
2017; 355(6323): 398–403. PubMed Abstract | Publisher Full Text\n\nWang JH, Wei J, Chen X, et al.: Gain and fidelity of transmission patterns at cortical excitatory unitary synapses improve spike encoding. J Cell Sci. 2008; 121(Pt 17): 2951–2960. PubMed Abstract | Publisher Full Text\n\nYu J, Qian H, Chen N, et al.: Quantal glutamate release is essential for reliable neuronal encodings in cerebral networks. PLoS One. 2011; 6(9): e25219. PubMed Abstract | Publisher Full Text | Free Full Text\n\nYu J, Qian H, Wang JH: Upregulation of transmitter release probability improves a conversion of synaptic analogue signals into neuronal digital spikes. Mol Brain. 2012; 5(1): 26. PubMed Abstract | Publisher Full Text | Free Full Text\n\nChen N, Chen X, Yu J, et al.: Afterhyperpolarization improves spike programming through lowering threshold potentials and refractory periods mediated by voltage-gated sodium channels. Biochem Biophys Res Commun. 2006; 346(3): 938–945. PubMed Abstract | Publisher Full Text\n\nChen N, Chen X, Wang JH: Homeostasis established by coordination of subcellular compartment plasticity improves spike encoding. J Cell Sci. 2008; 121(Pt 17): 2961–2971. PubMed Abstract | Publisher Full Text\n\nGrienberger C, Milstein AD, Bittner KC, et al.: Inhibitory suppression of heterogeneously tuned excitation enhances spatial coding in CA1 place cells. Nat Neurosci. 2017; 20(3): 417–426. PubMed Abstract | Publisher Full Text\n\nZhao J, Wang D, Wang JH: Barrel cortical neurons and astrocytes coordinately respond to an increased whisker stimulus frequency. Mol Brain. 2012; 5: 12. PubMed Abstract | Publisher Full Text | Free Full Text"
}
|
[
{
"id": "21274",
"date": "27 Mar 2017",
"name": "Ping Zheng",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nAssociative learning and memory is a very interesting topic. Jin-Hui Wang et al. summarize associative memory cells in their formation, features and functional impacts. The review is well finished. I have the following suggestions:\n\n1. P1: The definition for associative learning: ”Associative learning is a common approach for information acquisition” is a bit too general. I suggest that using “Associative learning is the learning of associations between events” to replace it. 2. Give a definition for “associative memory cells” and provide evidence to support “associative memory cells”. 3. P2: Since “In the studies of cellular and molecular mechanisms underlying associative learning and memory, animal models in fear conditioning, eyelid-blinking conditioning and operant conditioning are used”, the following description for associative memory cells should be advanced using these models. The present description for associative memory cells is a bit too general.",
"responses": []
},
{
"id": "21064",
"date": "27 Mar 2017",
"name": "Jian-Guo Chen",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis review summarizes current studies about cellular mechanisms underlying associative memory, in which associative memory cells are recruited for the integration, storage and retrieval of cross-modal signals. The contents in this review are and advanced concept. To improve this review, I would suggest that authors include the following information.\n1) The physiological and pathological impacts are caused by the quantity of associative memory cells. 2) How many kinds of associative memory cells are present in the cerebral cortices?",
"responses": []
},
{
"id": "21289",
"date": "27 Mar 2017",
"name": "Hongxin Dong",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nIn this review Dr. Jin-Hui Wang and Shan Cui proposed a model named “associative memory cells (AMCs)” that could be used to explain associative memory formation and functioning at the neuronal circuitry level. This novel cellular model was first established in Dr. Wang’s laboratory. Based on their serial studies, they proposed the hypothesis that AMCs encode both new and innate signals based on their mutual synapse innervation in the sensory cortical area. Furthermore, they suggested that the primary associative memory cells innervate the downstream neurons in associated brain areas, and the downstream neurons encode dual associative signals and serve as the secondary associative memory cells (sAMCs). The formation and functioning of the AMCs are very interesting and important as they may help to reveal novel mechanisms for learning, memory and cognition.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-283
|
https://f1000research.com/articles/5-222/v1
|
26 Feb 16
|
{
"type": "Method Article",
"title": "How blockchain-timestamped protocols could improve the trustworthiness of medical science",
"authors": [
"Greg Irving",
"John Holden",
"John Holden"
],
"abstract": "Trust in scientific research is diminished by evidence that data are being manipulated. Outcome switching, data dredging and selective publication are some of the problems that undermine the integrity of published research. Here we report a proof-of-concept study using a ‘blockchain’ as a low cost, independently verifiable method that could be widely and readily used to audit and confirm the reliability of scientific studies.",
"keywords": [
"clinical trials",
"blockchain",
"data",
"bitcoin"
],
"content": "\n\nTrust in scientific research is diminished by evidence that data are being manipulated1. Outcome switching, data dredging and selective publication are some of the problems that undermine the integrity of published research. The declaration of Helsinki states that every clinical trial must be registered in a publicly accessible database before recruitment of the first subject2. Yet despite the creation of numerous trial registries problems such as differences between pre-specified and reported outcomes persist3–5. If readers doubt the trustworthiness of scientific research then it is largely valueless to them and those they influence. Here we propose using a ‘blockchain’ as a low cost, independently verifiable method that could be widely and readily used to audit and confirm the reliability of scientific studies.\n\nA blockchain is a distributed, permanent, timestamped public ledger of transactions. In doing so it provides a method for establishing the existence of a document at a particular time that can be independently verified by any interested party. When someone wishes to add to it, participants in the network – all of whom have copies of the existing blockchain – run algorithms to evaluate and verify the proposed action. Once the majority of ‘nodes’ confirm that a transaction is valid i.e. matches the blockchain history then the new transaction will be approved and added to the chain. Once a block of data is recorded on a blockchain ledger it is extremely difficult to change or remove it as doing so would require changing the record on many thousands computers worldwide. This prevents tampering or future revision of a submitted timestamped record. 
Such distributed version control has been increasingly used in fields such as software development, engineering and genetics, but to date has not been applied to the reporting of clinical studies.\n\n\nMethods\n\nIn this proof-of-concept study we used publicly available documentation from a recently reported randomized control trial6,7. A copy of the clinicaltrials.gov study protocol was prepared based on its pre-specified endpoints and planned analyses, and saved as an unformatted text file6 (Dataset 1). The document’s SHA256 digest was then calculated by entering the text of the trial protocol into an SHA256 calculator (Xorbin©). This was then converted into a bitcoin private key and corresponding public key using a bitcoin wallet. To do this a new account was created in Strongcoin©8 and the SHA256 digest used as the account password (private key). From this Strongcoin© automatically generated a corresponding Advanced Encryption Standard 256 bit public key. An arbitrary amount of bitcoin was then sent to a corresponding bitcoin address. To verify the existence of the document, a second researcher was sent the originally prepared unformatted document. An SHA256 digest was created as previously described and a corresponding private key and public key generated. The exact replication of the public key (1AHjCz2oEUTH8js4S8vViC8NKph4zCACXH) was then used to prove the document’s existence in the blockchain using blockchain.info©9. The protocol document was then edited to reflect any changes to pre-specified outcomes as reported by the COMPare group3. This was used to create a further SHA256 digest and corresponding public and private keys3.\n\n\nResults\n\nIncorporating a transaction into the blockchain using a public and private key generated from the SHA256 digest of the trial protocol provided a timestamped record that the protocol was at least as old as the transaction generated. The transaction took under five minutes to complete. 
The process was effectively free, as the nominal bitcoin transaction could be retrieved. Researchers were able to search for the transaction on the blockchain, confirm the date when the transaction occurred and verify the authenticity of the original protocol by generating identical public and private keys. Any changes made to the original document generated different public and private keys, indicating that the protocol had been altered. This included assessment of the edited protocol reflecting pre-specified outcomes not reported, or non-pre-specified outcomes now reported, in the final paper.\n\n\nDiscussion\n\nFraud or carelessness in scientific methods erodes confidence in medicine as a whole, which is essential to the performance of its function1. The method described here provides an immutable record of the existence, integrity and ownership of a specific trial protocol. It is a simple and cheap way of allowing a third party to audit and externally validate outcomes and analyses specified a priori against the findings reported a posteriori. The method prevents researchers from changing endpoints or analyses after seeing their study results without reporting such changes. Transaction codes could be recorded in scientific papers, reference databases or trial registries to facilitate external verification. Making undetected changes to pre-specified text in a document, or trying to bury a protocol in a trial registry, would simply not be possible. Attempts to fraudulently prepare multiple protocols in advance would be technically possible, but would require a considerable amount of advance planning and would leave behind a publicly available trail of evidence that could not be destroyed.\n\nThe blockchain offers a number of advantages over trial registries or publishing protocols. Firstly, the blockchain would not be confined to the validation of clinical trials. 
The approach could be used for a whole range of observational and experimental studies where registries do not currently exist. Secondly, the blockchain provides a real-time, timestamped record of a protocol. Such precision is important given persistent problems with protocol registration after trial initiation10. Thirdly, with over 30,000 trials currently published annually and rising, manual outcome verification is simply not possible11.\n\n\nConclusion\n\nThe method we have described allows anyone to verify the exact wording and existence of a protocol at a given point in time. It has the potential to support automated, extremely robust verification of pre-specified and reported outcomes. This should increase trust in reported data and the conclusions that are drawn from them, and diminish suspicion.\n\n\nData availability\n\nF1000Research: Dataset 1. Unformatted text file, 10.5256/f1000research.8114.d11459612",
"appendix": "Author contributions\n\n\n\nGI conceived the study. GI designed the experiments. GI and JH carried out the research. GI prepared the first draft of the manuscript. All authors were involved in the revision of the draft manuscript and have agreed to the final content.’\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nHouse of Commons: Science and Technology Committee. Third Report. 2016. Reference Source\n\nWMA Declaration of Helsinki - Ethical Principles for Medical Research Involving Human Subjects. 2016. Reference Source\n\nCOMPare - Full results. 2016. Reference Source\n\nSlade E, Drysdale H, Goldacre B, et al.: Discrepancies Between Prespecified and Reported Outcomes. Ann Intern Med. 2015. PubMed Abstract | Publisher Full Text\n\nGoldacre B: How to get all trials reported: audit, better data, and individual accountability. PLoS Med. 2015; 12(4): e1001821. PubMed Abstract | Publisher Full Text | Free Full Text\n\nThe CArdiovasCulAr Diabetes & Ethanol (CASCADE) Trial. Tabular View - ClinicalTrials.gov. 2016. Reference Source\n\nGepner Y, Golan R, Harman-Boehm I, et al.: Effects of Initiating Moderate Alcohol Intake on Cardiometabolic Risk in Adults With Type 2 Diabetes: A 2-Year Randomized, Controlled Trial. Ann Intern Med. 2015; 163(8): 569–79. PubMed Abstract | Publisher Full Text\n\nStrongcoin. 2016. Reference Source\n\nBlockchain info. 2016. Reference Source\n\nAnand V, Scales DC, Parshuram CS, et al.: Registration and design alterations of clinical trials in critical care: a cross-sectional observational study. Intensive Care Med. 2014; 40(5): 700–22. PubMed Abstract | Publisher Full Text\n\nMedline trend. 2016. Reference Source\n\nIrving G, Holden J: Dataset 1 in: How blockchain-timestamped protocols could improve the trustworthiness of medical science. F1000Research. 2016. Data Source"
}
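The digest step at the heart of the Methods above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration using only the standard library: the example protocol text is invented, and deriving the actual secp256k1 bitcoin keypair from the digest (which the authors delegated to the Strongcoin wallet) is omitted.

```python
import hashlib

def protocol_digest(text: str) -> str:
    """Return the hex SHA-256 digest of a protocol document's text.

    In the workflow above this digest served as the wallet password
    (private-key seed); deriving the actual secp256k1 bitcoin keypair
    is left to a wallet or bitcoin library and omitted here.
    """
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

# Invented example protocol text (not from the CASCADE trial).
original = "Primary outcome: HbA1c at 24 months. Analysis: intention to treat."
edited = "Primary outcome: HbA1c at 12 months. Analysis: intention to treat."

d1 = protocol_digest(original)
d2 = protocol_digest(edited)

# An identical copy reproduces the digest exactly, so a second
# researcher can regenerate the same keys and locate the transaction.
assert d1 == protocol_digest(original)

# Any edit to the pre-specified outcomes yields a different digest,
# hence different derived keys: the alteration is detectable.
assert d1 != d2
```

This is why the second researcher in the study could confirm the protocol simply by reproducing the public key: the digest, and everything derived from it, is a deterministic function of the exact document text.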
|
[
{
"id": "12891",
"date": "29 Mar 2016",
"name": "Amy I Price",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe title is informative and appropriate. The abstract is well done and provides considerable detail in an elegant way that focuses on an original innovation for data security.The research article is a proof of concept study that explains the model and the rationale for why it is needed and how it will be fit for purpose.Blockchain improves and expands the role for trial registries or publishing protocols. The approach could be used for RCTs and a whole range of observational and experimental studies where registries are needed but do not currently exist. A blockchain provides a real-time time-stamped record of any study protocol.Security for data and time stamps that are secure and tamper resistant are a welcome addition for clinical trials databases as is one secure shared location for all trials registry entries. This needs to be flexible enough to register change easily and efficiently. The authors supply real data and it is feasible to accomplish this however for professionals with little time to spare the outside interface will need to be simplified and steps minimized to retain users. Somewhat like GOOGLE search on a white page. Only typing a word from one link is required and the search does all the background algorithm loading to accomplish the task. I am sure this will be the next step in the project.This present research can be replicated by those with sufficient IT skills and it fulfills a significant gap in research. 
Social media is full of information on security breaches, data fraud and altered protocols, this would be one way to make registering a valid protocol secure and to reduce concerns about trials transparency as research needs to be registered and reported.The conclusions are justified and balanced.",
"responses": []
},
{
"id": "13757",
"date": "11 May 2016",
"name": "Luís Pinho-Costa",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis concept paper describes the potential use of blockchain technology in scientific publishing as a way to establish a timestamped record of study protocols.The paper presents a logical structure and the individual parts form a coherent whole. The language is clear and objective, and the arguments relevant.The title is elucidative and enticing. The abstract is presented in a synthetic and meaningful way.The methods are ingenious and relevant to the formulated aims. Sufficient details is provided, allowing for replication of the experiment. Yet, a more clear delineation of the methodological aspects could be useful for readers not accustomed with the technical standards and tools used by the authors.The conclusions are supported by the findings. Logical implications are drawn by the authors. Timestamped blockchain technology, as proposed by the authors, could revolutionize scientific publishing.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-222
|
https://f1000research.com/articles/6-20/v1
|
09 Jan 17
|
{
"type": "Research Note",
"title": "Engaging high school students in systems biology through an e-internship program",
"authors": [
"Wim E. Crusio",
"Cynthia Rubino",
"Anna Delprato",
"Wim E. Crusio",
"Cynthia Rubino"
],
"abstract": "In this article, we describe the design and implementation of an e-internship program that BioScience Project offers high school students over the summer. Project topics are in the areas of behavioral neuroscience and brain disorders. All research, teaching, and communication is done online using open access databases and webtools, a learning management system, and Google apps. Students conduct all aspects of a research project from formulating a question to collecting and analyzing the data, to presenting their results in the form of a scientific poster. Results from a pilot study indicate that students are capable of comprehending and successfully completing such a project, and benefit both intellectually and professionally from participating in the e-internship program.",
"keywords": [
"e-internship",
"personalized learning",
"neurogenomics",
"systems biology",
"K12",
"stem"
],
"content": "Introduction\n\nNeurogenomics is the study of the systems, networks, and gene interactions that underlie neural processes. Increased functional information from diverse sources available in open access databases, along with specific tools for analysis, enables the integration of these data to gain unique insights (Overall et al., 2015). BioScience Project (www.bioscienceproject.org) offers high school students the opportunity to work as summer interns on research projects in the area of behavioral neuroscience and brain disorders, which includes the analysis of gene expression data using systems and network biology methods (see Schughart & Williams, 2017). This is a voluntary internship available to students regardless of academic performance or institution. Students participate in the program to gain hands on experience and acquire new skills. The projects involve learning how to formulate and test hypotheses, data-mine biological and neuroscience-specific databases, statistical analysis, and data representation and visualization. Students need only a computer with an Internet connection to participate. The projects are flexible, allowing students to work from home on their own schedule. All communication is done via the Internet with an online learning management system (Moodle, https://moodle.com/), Google apps (https://gsuite.google.com/), and video conferencing. At the end of the internship, students communicate their work in a poster, which can be used to leverage their college applications and/or detail their experience to prospective employers. Students also receive certificates of completion. 
Several strengths of the e-internship program are worth noting: (1) Students are highly interested in topics related to behavioral neuroscience and brain disease; (2) It has been shown to be an effective model for introducing early stage students to advanced topics and research methods in neuroscience; (3) Students receive the otherwise-limited opportunity to participate in authentic research projects and work directly with professional scientists; (4) The internship program is scalable, enabling many students to participate; (5) Project results are freely accessible to the scientific community on BioScience Project’s website (www.bioscienceproject.org).\n\n\nInternship implementation\n\nRecruiting students is mainly done by contacting high school science departments through email and providing information about our organization and the internship opportunity. We include a recruitment poster (Supplementary File 1) and ask that the information be passed along to their students. We launched a two-year pilot project that included both private and public institutions around the Boston (MA, USA) area. Schools were selected randomly. Several students from schools not contacted by us learned about the internship program through word of mouth or an Internet search.\n\nThe internship program runs for 6 to 8 weeks in July and August. Students may begin sooner if they like. The time commitment varies for each student, but is in the range of 10–15 hours per week. Students proceed at their own pace and can work alone or in a group. There are no deadlines, except to finish projects before the new school year begins. Project completion requires that students proceed through all of the modules and make a scientific-style poster of their work, which includes introduction, methods, results, and discussion sections. Students are able to choose their topic of study or can select from subjects suggested by us. Project-specific materials are provided throughout the internship. 
These include relevant literature for background information from science magazines (Scientific American and The Scientist), as well as links to news updates from sources such as EurekAlert! (https://www.eurekalert.org/), BBC Science (http://www.bbc.co.uk/science), Neuroscience News (http://neurosciencenews.com/) and YouTube (https://www.youtube.com/).\n\nStudents are provided with as much mentoring as they need to complete the internship. Mentoring is requested and scheduled by email. All instruction and mentoring is provided by the project director, Dr. Anna Delprato. As the internship program grows, additional scientists will be recruited to assist with teaching. Students are not tested and there are no grades assigned. Teaching and communication is done through an online learning platform (Moodle; https://moodle.com/), one on one video conferencing (Skype or Google Hangouts), email, document sharing (Google Docs), and a Google group, which enables students to receive notices and communicate with one another. Google apps are also used for data handling (Google Sheets) and presentation (Google Slides). Students may also use Microsoft Office’s Excel and PowerPoint software for the same purpose.\n\nAll of the databases and analysis tools are open access. The core set of databases and web tools used in the internship are: The Allen Brain Atlas (gene expression data based on donor brains and correlation analysis; http://brain-map.org/), Venny (Venn diagram generator; http://bioinfogp.cnb.csic.es/tools/venny/), DAVID (Database for Annotation, Visualization and Integrated Discovery; functional annotation, pathway information, and clustering; https://david.ncifcrf.gov/; Huang et al., 2009), PythonAnywhere (statistics, graphing; https://www.pythonanywhere.com) and STRING (network analysis; http://string-db.org/; Szklarczyk et al., 2015). 
A more detailed description of these is provided in the following sections.\n\nThe Allen Brain Atlas combines genomic data with neuroanatomy through the generation of gene expression maps obtained from Affymetrix data (Hawrylycz et al., 2012). The Allen Brain human database contains gene expression data for 6 donor brains. This human database is queried using the differential search function, which enables a search to identify gene expression enrichment in one brain region as compared to another. For example, learning and memory are typically associated with the hippocampus, so in this case the differential search function is used to find genes that have enhanced expression in the hippocampus relative to other regions of the brain. Details on the usage of the differential search function can be found at the Allen Brain site (http://help.brain-map.org/display/humanbrain/Microarray+Data#MicroarrayData-GeneSearch). Students are taught how to interpret Affymetrix heatmaps, evaluate gene expression data (fold difference values, error, and threshold cutoff), and use spreadsheet editing, sorting, and graphing functions for the organization and analysis of large datasets.\n\nThe cleaned gene sets are then compared by the students to detect common and distinct elements using an online program (Venny) that evaluates lists and generates a Venn diagram as a visual representation. The genes that are common among all donors are then analyzed in DAVID for functional annotation, clustering, and pathway information. 
Genes that are associated with project relevant themes, such as behavior, nervous system development, and/or specific diseases, are used to build interaction networks, which consist of protein-protein interactions that are supported by multiple lines of evidence, such as experimental, text mining, and co-expression in the STRING database.\n\nThe interaction networks are used to identify potential gene candidates that may be involved in the same behavioral process or disease, and are also used to identify network substructures, such as hubs and motifs, which indicate important and possibly functionally related entities. Functional classification is assessed using DAVID to identify interactions that are relevant to the project topic. For an extended analysis, students can use the most pertinent genes extracted from the networks to identify additional candidates that have similar spatial expression profiles in the brain tissue of interest. The correlation analysis is done in the Allen Brain database using the correlation search function (http://help.brain-map.org/display/humanbrain/Microarray+Data#MicroarrayData-CorrelativeSearch).\n\nFinally, a statistical analysis of the gene expression data is performed by the students with Python, using an online Python server, PythonAnywhere, which enables students to run Python scripts from their browser. Students are provided with a general script and are required to modify this for their own datasets. The script returns general statistics, such as standard error, mean, minimum and maximum, variance, and distribution profiles.\n\nThe starting point for all projects involves the identification of brain regions associated with a behavioral process or brain disease, which is based primarily on functional magnetic resonance imaging (fMRI) data. Students find this information through an Internet search with our assistance. 
Gene expression patterns are then analyzed to identify those genes that are preferentially expressed in these brain areas across all donor brains. For the genes identified in this way, clustering algorithms and gene ontology annotation are used to identify those entries that are directly related to the subject of interest. These genes are then used as hooks to build interaction networks in order to pull out additional functionally relevant genes.\n\n\nInternship outcomes\n\nThe internship program has run for two years, since 2015. In the first year, five students participated; in the second year, ten. Student project topics included addiction, learning and memory, Alzheimer’s disease, creativity, and bipolar disorder, among others. Student posters can be viewed at the BioScience Project website (http://www.bioscienceproject.org/student-posters). This year a student also coauthored a published research article with our group, which reports on the identification of genetic factors associated with morphine addiction (Crusio et al., 2016). Upon completion of the internship, students answered survey questions pertaining to the internship content, instruction, and overall experience. The student responses to the survey questions are presented in Table 1 and Table 2. Comments and suggestions provided by the students can be viewed in Supplementary File 2. 
Based on the student feedback in 2015, the internship instruction was revised for clarity using step-by-step annotated screenshots together with one-on-one tutorials via video conferencing for each database and method.\n\n“It was a bit confusing in the very beginning, but after a bit of experience it became very easy to use.”\n\n“It varied from step to step, sometimes it was clear and sometimes I was a little confused.”\n\n“The steps were clearly presented and we had a lot of help through the ones we didn’t understand, but the overarching goal/conceptual understanding of the project was a bit confusing during the steps.”\n\n\nConclusions\n\nThis e-internship program has been shown to be a useful way of introducing early-stage students to advanced topics and research methods in systems biology, which supplements their high school science curriculum and provides them with an opportunity to gain hands-on experience. Students are interested in collaborating with scientists on research projects in neuroscience-related topics and they gain both intellectually and professionally from participating in the summer e-internship program. Given the flexibility in both time and procedure, this e-internship program can easily be extended to include more students.",
"appendix": "Author contributions\n\n\n\nAD designed and implemented the project. WEC contributed expertise in neuroscience and CR contributed expertise in educational technology and outreach. AD and WEC wrote the manuscript. All authors read and approved the final manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nSupplementary material\n\nSupplementary File 1: Internship recruitment poster.\n\nClick here to access the data.\n\nSupplementary File 2: Student feedback.\n\nClick here to access the data.\n\n\nReferences\n\nCrusio WE, Dhawan E, Chesler EJ, et al.: Analysis of morphine responses in mice reveals a QTL on Chromosome 7 [version 2; referees: 2 approved]. F1000Res. 2016; 5: 2156. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHawrylycz MJ, Lein ES, Guillozet-Bongaarts AL, et al.: An anatomically comprehensive atlas of the adult human brain transcriptome. Nature. 2012; 489(7416): 391–399. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHuang da W, Sherman BT, Lempicki RA, et al.: Systematic and integrative analysis of large gene lists using DAVID bioinformatics resources. Nat Protoc. 2009; 4(1): 44–57. PubMed Abstract | Publisher Full Text\n\nOverall RW, Williams RW, Heimel JA: Collaborative mining of public data resources in neuroinformatics. Front Neurosci. 2015; 9: 90. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSchughart K, Williams RW (eds): Systems Genetics: Methods and Protocols. Humana Press, New York, 2017; 1488: 609. Publisher Full Text\n\nSzklarczyk D, Franceschini A, Wyder S, et al.: STRING v10: protein-protein interaction networks, integrated over the tree of life. Nucleic Acids Res. 2015; 43(Database issue): D447–452. PubMed Abstract | Publisher Full Text | Free Full Text"
}
|
[
{
"id": "19850",
"date": "08 Feb 2017",
"name": "Christine F. Hohmann",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe paper by Crusio, Rubino and Delprato, appropriately titled: “Engaging high school students in systems biology through an e-internship program”, provides a description of a research education program at the high school level. The authors have developed an online research training program that uses public access genomics and neuroscience databases to train students in behavioral neurogenetics. Students exercise a substantial amount of autonomy (“are able to choose their own projects”, “proceed at their own pace”). Students were mentored through the process of identifying their projects, conducting their research and preparing a capstone poster presentation by the co-author, Dr. Anna Delprato. According to the website for the program, all students who participated in the project successfully submitted their capstone posters, which appear to be of a very sophisticated nature. One student even managed to co-author a research paper with the first author of this paper. Unfortunately, the viewing option online (at least for this reviewer and her mac computer; maybe I need tech help??) made it impossible to read the details of the posters and after download, the image quality was insufficient, when enlarged, to see much. 
Survey data indicate that the participants were, with few exceptions in year 1, very satisfied with this learning opportunity which had two iterations so far.\n\nThis online training/mentoring model offers a very exciting possibility for (global) distance learning. It is currently based on a very small student sample (5 in year one and 10 in year 2) from just a couple of high schools located in New England and on limited assessment. It would have been helpful to know the demographics of the student population involved as well as the graduation rates at this school and how many graduates typically attend college. It would also be helpful to know how many hours, on average, Dr. Delprato spent with each student/student group over the course of the summer. I hope that, as the authors continue their model, they will follow their participants’ future educational and career decisions to assess the impact of the training experience. There is a lot of potential in this model to be implemented within the context of course based research, at the college level as well as for integration into federally funded existing training programs in the US. I would like to encourage the authors to prepare a publication on the specifics of their curriculum in the near future, if they are interested in having others adopt this model and implement it in different settings.\nIn conclusion, although this is a very preliminary and descriptive account of a summer research training experience, its novelty merits publication even at this early stage. The paper provides sufficient detail to engender ideas for others to attempt to replicate the model, although a more detailed description of the curriculum, and follow up analysis with a larger data set, should be encouraged.",
"responses": [
{
"c_id": "2504",
"date": "21 Feb 2017",
"name": "Anna Delprato",
"role": "Author Response",
"response": "Thank you for reviewing our article. We have changed the file format of the students posters to pdf. This has improved the resolution for online viewing."
}
]
},
{
"id": "19553",
"date": "08 Feb 2017",
"name": "Byron C. Jones",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThe description of the internship program is clear and convincing. One minor issue is with Table 2 -- the ** is missing from the body of the table.\nAre there plans to obtain financial support for the students?",
"responses": [
{
"c_id": "2503",
"date": "21 Feb 2017",
"name": "Anna Delprato",
"role": "Author Response",
"response": "Thank you for reviewing our article. We are seeking funding for student support and program expansion."
}
]
},
{
"id": "19554",
"date": "20 Feb 2017",
"name": "Jennifer A. Ufnar",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThe article by Anna Delprato describes an interesting way to engage students in an online research program in the neurosciences. This reviewer is interested in the program and the potential impacts and uses of the program for K-12 students. This reviewer, however, has several concerns that need to be addressed before the paper is indexed.\n\nThe authors stipulate in the abstract that the students in the program were \"capable of comprehending ...such a project\". The authors cannot make that claim with the evidence presented in the manuscript. Self-reported surveys were the only form of data presented, and this type of survey does not show student learning or comprehension. The reader also does not know if this comprehension is applied to the ability to perform research, the content learned, or learning how to communicate the information (or a combination of all three). This is a significant concern and must be addressed before publication.\n\nThere were only six references cited for the entire manuscript. There are many papers in the literature that discuss programs similar in nature to this. It would behoove the authors to include more background research to support their claims.\n\nThe authors need to include more information about the project itself (i.e. a discussion of the modules used for teaching, the mentoring, etc.). The reader is left wondering if the students are actually doing self-initiated projects, or if they are being led by the modules. 
It was helpful to read about the databases that the students use, but the reader does not gain any knowledge about the actual e-internship program through which the students progress (or are those the modules? It is unclear.). In other words, how were they mentored? Is the mentoring effective for student growth? How are the modules designed? Is there any research to show that the module design is effective? Do they present and defend their research in a scientific forum (even online)? It would also help to put the Project Design section first (before the database discussion).\n\nThe evaluation of the program needs to be more robust. Self-reported surveys are a good starting point, but they are only formative assessments. You need to include some summative assessments for the program and more qualitative assessments to determine the efficacy of the program for student learning. Also, it sounds as though the students are spending much less time on the project as opposed to an onsite laboratory model, so the reader is left wondering if the program is really effective. Also, do you have demographics of the participating students?\n\nThere are a couple of grammatical errors (mostly words that need to be omitted and comma usage errors).\n\nIn summary, this manuscript presents an interesting program, but the authors need to address the issues of a clear evaluation plan to determine the efficacy of the program, and provide enough discussion of the program so that readers understand exactly what the students gain.",
"responses": [
{
"c_id": "2587",
"date": "29 Mar 2017",
"name": "Anna Delprato",
"role": "Author Response",
"response": "We thank the reviewer for the insightful comments and suggestions. Below we respond to each point of the review. We have submitted an updated version of the article that incorporates the requested changes and provides additional information about the internship program.\n\nReviewer’s comment 1) The authors stipulate in the abstract that the students in the program were \"capable of comprehending ...such a project\". The authors cannot make that claim with the evidence presented in the manuscript. Self-reported surveys were the only form of data presented, and this type of survey does not show student learning or comprehension. The reader also does not know if this comprehension is applied to the ability to perform research, the content learned, or learning how to communicate the information (or a combination of all three). This is a significant concern and must be addressed before publication.\n\nAuthor’s response We have removed the statement that students comprehended the internship project from the abstract as this was not formally evaluated in the pilot study.\n\nReviewer’s comment 2) There were only six references cited for the entire manuscript. There are many papers in the literature that discuss programs similar in nature to this. It would behoove the authors to include more background research to support their claims.\n\nAuthor’s response We have provided additional references as requested. If the reviewer is aware of other relevant references, we will include them in the article.\n\nReviewer’s comment 3) The authors need to include more information about the project itself (i.e. a discussion of the modules used for teaching, the mentoring, etc.). The reader is left wondering if the students are doing actually self-initiated projects, or if they are being led by the modules. 
It was helpful to read about the databases that the students use, but the reader does not gain any knowledge about the actual e-internship program through which the students progress (or are those the modules? It is unclear.) How are the modules designed? Is there any research to show that the module design is effective?\n\nAuthor’s response The modules are based on a protocol that was designed in our laboratory for profiling gene expression data to identify genes of interest associated with a behavior or brain disorder. The protocol is broken down into steps so as not to overwhelm the students with too much information at once. Modules which consist of detailed instruction are built around each step of the protocol. The purpose of the module is to provide students with a detailed reference in addition to the mentoring sessions so that they can work in the databases and with the data independently. The effectiveness of the modules has not been formally evaluated. We would like to also provide the students with screen casting videos for improved instruction. We hope to make the videos available to students this season.\n\nReviewer’s comment 4) In other words, how were they mentored? Is the mentoring effective for student growth?\n\nAuthor’s response Student mentoring primarily occurs through video conferencing and includes a walk-through of each database using screen sharing, project/topic discussion, and troubleshooting. Mentoring sessions occur weekly and last anywhere from 30 min - 90 min. Students also email us with questions. The effectiveness of the mentoring sessions on student growth has not been assessed formally but we believe that mentoring is necessary for the students to perform their research and complete their projects.\n\nReviewer’s comment 5) Do they present and defend their research in a scientific forum (even online)? 
Author’s response Students have not presented and/or defended their research in a scientific forum, but this is a great idea and we will add it to this year’s program.\n\nReviewer’s comment 6) It would also help to put the Project Design section first (before the database discussion).\n\nAuthor’s response Done\n\nReviewer’s comment 7) The evaluation of the program needs to be more robust. Self-reported surveys are a good starting point, but they are only formative assessments. You need to include some summative assessments for the program and more qualitative assessments to determine the efficacy of the program for student learning. Also, it sounds as though the students are spending much less time on the project as opposed to an onsite laboratory model, so the reader is left wondering if the program is really effective.\n\nAuthor’s response The internship described in this study constitutes a dry lab experience. All of the research and analysis is done on a computer. Performing this type of work for 6 to 8 hours per day is physically and mentally taxing and can result in errors. We recommend that students spend about 10 to 15 hours per week working on their projects. The nature of bioinformatics-based research is very different from a classical wet lab experience which entails physical manipulations at the bench. We believe that both types of research internships are valuable to students but program efficacy cannot be evaluated based solely on time spent given the differences between the two approaches.\n\nReviewer’s comment 8) Also, do you have demographics of the participating students?\n\nAuthor’s response We ask students to provide us with their resume in order to get an idea of their science background but we do not collect demographic information.\n\nReviewer’s comment 9) There are a couple of grammatical errors (mostly words that need to be omitted and comma usage errors). 
Author’s response Corrected\n\nReviewer’s comment 10) In summary, this manuscript presents an interesting program, but the authors need to address the issues of a clear evaluation plan to determine the efficacy of the program, and provide enough discussion of the program so that readers understand exactly what the students gain."
}
]
}
] | 1
|
https://f1000research.com/articles/6-20
|
https://f1000research.com/articles/6-372/v1
|
28 Mar 17
|
{
"type": "Software Tool Article",
"title": "Expresso: A database and web server for exploring the interaction of transcription factors and their target genes in Arabidopsis thaliana using ChIP-Seq peak data",
"authors": [
"Delasa Aghamirzaie",
"Karthik Raja Velmurugan",
"Shuchi Wu",
"Doaa Altarawy",
"Lenwood S. Heath",
"Ruth Grene"
],
"abstract": "Motivation: The increasing availability of chromatin immunoprecipitation sequencing (ChIP-Seq) data enables us to learn more about the action of transcription factors in the regulation of gene expression. Even though in vivo transcriptional regulation often involves the concerted action of more than one transcription factor, the format of each individual ChIP-Seq dataset usually represents the action of a single transcription factor. Therefore, a relational database in which available ChIP-Seq datasets are curated is essential. Results: We present Expresso (database and webserver) as a tool for the collection and integration of available Arabidopsis ChIP-Seq peak data, which in turn can be linked to a user’s gene expression data. Known target genes of transcription factors were identified by motif analysis of publicly available GEO ChIP-Seq data sets. Expresso currently provides three services: 1) Identification of target genes of a given transcription factor; 2) Identification of transcription factors that regulate a gene of interest; 3) Computation of correlation between the gene expression of transcription factors and their target genes. Availability: Expresso is freely available at http://bioinformatics.cs.vt.edu/expresso/",
"keywords": [
"ChIP-Seq",
"transcription factor",
"gene regulation",
"transcriptional regulation"
],
"content": "Introduction\n\nChromatin immunoprecipitation (ChIP) is a method to investigate DNA-binding sites of DNA-binding proteins, such as transcription factors (TFs) (Valouev et al., 2008). ChIP can provide genome-wide information of in vivo protein-DNA interactions (Kaufmann et al., 2010). Therefore, it has become an important tool to assay TF-associated gene regulations (Kaufmann et al., 2010; Park, 2009; Valouev et al., 2008). In a typical ChIP experiment, first the DNA-binding protein of interest is cross-linked to its binding sites. Then the chromatin is sheared, randomly, into short fragments and the protein-DNA complexes are purified by immunoprecipitation using a specific antibody against the DNA-binding protein of interest. Finally, genome-wide profiling of protein binding sites is produced by either genome-tiling arrays (ChIP-ChIP) or next-generation sequencing technologies (ChIP-Seq) (Kaufmann et al., 2010; Valouev et al., 2008). Compared to ChIP-ChIP, ChIP-Seq provides high-resolution data with a better signal-noise ratio. ChIP-seq also requires less initial material and is more cost-effective (Ho et al., 2011; Kaufmann et al., 2010; Valouev et al., 2008). Therefore, ChIP-Seq has displaced ChIP-ChIP rapidly and is currently the most widely used technology for studying the action of transcription factors (Park, 2009; Valouev et al., 2008).\n\nIn contrast to the biomedical field, the use of ChIP-Seq in plant biology is limited (Kaufmann et al., 2010). For example, the GEO database (https://www.ncbi.nlm.nih.gov/gds) currently contains 8,486 ChIP-Seq human datasets (as of October 2016), but has only 200 Arabidopsis datasets. The delay in the use of ChIP-Seq technology in plant research may be due to the specific properties of plant tissue, such as the presence of the cell wall and abundant secondary metabolites that affect the quality of protein-DNA complex extraction (Kaufmann et al., 2010). 
However, with the improvement of ChIP-Seq protocols and the reduction of next-generation sequencing costs, an increasing number of plant scientists are choosing ChIP-Seq to study the function of transcription factors in detail.\n\nChIP datasets currently available for Arabidopsis are isolated and fragmentary, and they lack a uniform format. Thus a major gap exists between the capabilities of in vivo methods such as ChIP-Seq and the goal of understanding the complexities of transcriptional regulation. We report on the curation of the Expresso database to collect and integrate Arabidopsis ChIP-Seq data (available as peaks), which in turn can be linked to user-provided Arabidopsis gene expression data. Expresso compiles 20 groups of selected Arabidopsis ChIP-Seq peak datasets downloaded from NCBI GEO or from the supplemental data of the corresponding paper. All collected ChIP-Seq peak datasets were re-analyzed by the Expresso processing pipeline to create coherent and unified results that bridge the gap among multiple ChIP-Seq studies, and to provide consistent access to TFs, target genes and DNA-binding motifs. In summary, instead of going through separate ChIP-Seq datasets, Expresso provides a more rapid and integrated method for the systematic study of the action of plant transcription factors.\n\n\nMethods\n\nThe Expresso computational analysis pipeline comprises preprocessing the peak loci reported in each reference dataset, finding conserved motifs using the MEME suite (Bailey et al., 2009), identifying potential target genes for each transcription factor, and finally storing the target genes and motifs linked to TFs in the database. Data formatting primarily involves the extraction of the peak locus, peak summit and DNA sequences in fasta format from the Arabidopsis thaliana genome. 
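As a rough illustration of the unified peak format described above, the restructuring step might be sketched in Python as follows (a hypothetical sketch, not Expresso's actual code; the field names, the example gene coordinates, and the summit-as-midpoint simplification are our assumptions):

```python
# Hypothetical sketch of restructuring a ChIP-Seq peak record into the
# unified format described above: peak ID, chromosome number, peak start
# and end positions, and genes within 1 kbp of the peak summit.
# Field names and example values are assumptions for illustration only.

def genes_near_summit(summit, genes, window=1000):
    """Return gene IDs whose start position lies within `window` bp of the summit."""
    return [gene_id for gene_id, start in genes if abs(start - summit) <= window]

def unify_peak(peak_id, chrom, start, end, genes):
    summit = (start + end) // 2  # simplification: take the summit as the peak midpoint
    return {
        "peak_id": peak_id,
        "chromosome": chrom,
        "start": start,
        "end": end,
        "nearby_genes": genes_near_summit(summit, genes),
    }

# Made-up example: two genes, only one within 1 kbp of the summit.
genes = [("AT1G01010", 3500), ("AT1G01020", 9000)]
record = unify_peak("peak_1", "Chr1", 3000, 3600, genes)
print(record["nearby_genes"])
```

Each dataset-specific input format would need its own parser feeding records into a function like this before upload to the database.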
Of the 50 datasets, almost all were found to be in distinct formats, and only 20 had the peak information available either on GEO or in the supplemental material of the corresponding published manuscript. We restructured the downloaded data into a uniform format by extracting a specific set of information: peak ID, chromosome number, peak start and end positions, and genes within 1 kbp of the peak summit. All the code for preprocessing the input data is available on the Expresso GitHub page under “preprocessing”.\n\nCandidate target gene finding using motif search: Given the chromosome number and peak start and end positions, the corresponding genomic sequence was extracted and trimmed, and then subjected to a motif search using the MEME suite tool (http://meme-suite.org/), with the following parameters: -nmotifs 20 -minw 5 -maxw 30 -dna. While the distribution of the length of the untrimmed peak sequences of each dataset varied widely, the reported peak summit windows usually extended 200 to 500 bases upstream and downstream from the middle of the summit (Bailey et al., 2009; Immink et al., 2012; Valouev et al., 2008). For a few datasets, the summit length was not provided in the article, so the largest summit length found, 500 bases, was used. Motif width was set to the length of the reported motif (if any). Otherwise, motif width was set to 5 to 30 bp, and significant motifs (E-value < 0.05), together with the candidate target genes possessing those motifs, were uploaded to the database. Hence, to be eligible for upload to the database a gene should have the following properties: i) it should be among the target genes provided by a ChIP-Seq experiment, or lie within 1 kbp of the peak summit; and ii) it should have a significantly enriched motif in its peak binding site. Moreover, the presence of the motif found by MEME was validated against the reported motif in the reference paper. 
If the reported motif was not found using the MEME search tool on the peak sequences, the resulting motifs were not uploaded to the database.\n\n\nResults\n\nExpresso provides a user-friendly environment that facilitates exploring different transcription factors and target genes through motif analysis. ChIP-Seq experiments in Expresso are available under the “Experiments” tab. Expresso currently provides three services for identifying: 1) the target genes of a given transcription factor, 2) the transcription factors that regulate genes of interest, and 3) the correlation of gene expression between transcription factors and their target genes.\n\nIdentifying candidate target genes for a transcription factor (see “Transcription Factors” on the Expresso website: http://bioinformatics.cs.vt.edu/expresso/?q=node/3): Users can select a transcription factor from the list of available transcription factors to view potential target genes. Since the target genes for each transcription factor have been compiled from the peak and motif data, users can change the cut-off for the motif E-value. The default E-value is set to 0.05. A short functional description (along with a link to TAIR10) and the GEO id for the reference ChIP-Seq experiment are provided for each potential target gene. For example, searching for target genes of the TOC1 transcription factor results in 298 genes that have at least one significantly enriched motif in at least one peak located close to their transcription start site.\n\nIdentifying potential transcription factors regulating a target gene (see “Genes” on the Expresso website: http://bioinformatics.cs.vt.edu/expresso/?q=node/4): Users can enter one or more genes, and Expresso finds all the transcription factors that might regulate each gene, together with the binding motif for that TF. 
For example, the SGP2 (AT3G21700) gene is potentially transcriptionally regulated by PIF3 and KAN1.\n\nExploring gene expression data: Users can upload gene expression data, and Expresso finds the gene and transcription factor pairs present in the Expresso database and performs Pearson correlation analysis on their corresponding expression data. Upon submission of the gene expression data, a task ID is assigned to the job. Users need to keep the task ID to retrieve the results or check the status of their job. If they provide an email address, they will be notified when the results are ready. To demonstrate the application of correlation analysis to finding potential TF-target gene pairs, an RNA-Seq dataset (Segaran, 2007) has been added to Expresso as a demo (see “Gene Expression” on the Expresso website: http://bioinformatics.cs.vt.edu/expresso/?q=node/5). 100 genes (including some transcription factors) were selected randomly from this dataset, which has expression values for genes from different Arabidopsis tissues: leaves, seeds, roots and flowers. 54 genes were found to be target genes of transcription factors in Expresso, and 33% of the uploaded genes were found to be target genes of multiple transcription factors. The correlation of gene expression between a transcription factor and its target genes can be used to infer their relationship. For example, three out of four target genes of PIF3 show a high correlation with the expression of PIF3, although one gene was found to have a strong negative correlation (R = -0.92). The fact that their expression patterns are correlated with PIF3 suggests that PIF3 plays a dominant role in regulating these three target genes. 
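The Pearson correlation between a transcription factor's expression profile and a target gene's profile across tissues can be sketched as follows (a minimal illustration with made-up expression values; this is not Expresso's actual implementation):

```python
# Hypothetical sketch of the Pearson correlation computed between a
# transcription factor's expression profile and a target gene's profile
# across tissues. The expression values below are made up for illustration.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length profiles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

tf_expr = [1.0, 2.0, 3.0, 4.0]      # e.g. leaf, seed, root, flower
target_expr = [2.0, 4.1, 5.9, 8.0]  # a strongly co-expressed target
r = pearson(tf_expr, target_expr)
print(round(r, 3))
```

A value of r close to +1 or -1 would flag the pair as a candidate regulatory relationship, as in the PIF3 example above.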
However, AT3G21700 was found to have a low correlation with PIF3, which suggests that there might be other transcription factors that compete with PIF3 in the regulation of AT3G21700.\n\n\nConclusions\n\nChIP-Seq is a powerful technology that aids in the study of the action of transcription factors, predicting a given transcription factor's target genes and corresponding conserved binding motifs (Ho et al., 2011; Kaufmann et al., 2010; Park, 2009; Valouev et al., 2008). The Expresso database is curated to integrate several available ChIP-Seq datasets. Expresso provides easy access to: 1) potential targets of a given transcription factor and their possible binding sites; 2) candidate transcription factors regulating several genes of interest; and 3) correlation analysis of TF and target gene pairs using the user’s input gene expression data. Taken together, Expresso provides easy access to several ChIP-Seq experiments, making it easier to study transcriptional regulation in the context of interactions among several transcription factors.\n\n\nSoftware and data availability\n\nExpresso is freely available online: http://bioinformatics.cs.vt.edu/expresso/\n\nSource code available at: https://github.com/doaa-altarawy/Expresso/tree/2.0.0\n\nArchived source code as at the time of publication: doi, 10.5281/zenodo.399501 (Altarawy, 2017).\n\nLicense: MIT\n\nAll datasets were publicly available and were downloaded from GEO DataSets. The list of ChIP-Seq datasets available in Expresso can be found in the ‘Experiments’ section on Expresso. The list of transcription factors and target genes can be downloaded in text format.",
"appendix": "Author contributions\n\n\n\nD. Aghamirzaie contributed to the data analysis, web development, and biological validation of the results. KRV contributed to the data analysis section. SV was involved in the biological validation of the results. D. Altarawy was involved the web development and maintenance of the Expresso website. RG and LH conceived the study. All authors were involved in the revision of the draft manuscript and have agreed to the final content.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis project was supported by National Science Foundation [NSF-MCB-1052145 and NSF-ABI-1062472].\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nThe authors are grateful to the GBCB (Genetics, Bioinformatics, and Computational Biology) program, especially Dr. David Bevan, for providing the opportunity to work on this project.\n\n\nReferences\n\nAltarawy D: doaa-altarawy/Expresso: Expresso Ver 2.0 [Data set]. Zenodo. 2017. Data Source\n\nBailey TL, Boden M, Buske FA, et al.: MEME SUITE: tools for motif discovery and searching. Nucleic Acids Res. 2009; 37(Web Server issue): W202–W208. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHo JW, Bishop E, Karchenko PV, et al.: ChIP-chip versus ChIP-seq: lessons for experimental design and data analysis. BMC Genomics. 2011; 12(1): 134. PubMed Abstract | Publisher Full Text | Free Full Text\n\nImmink RG, Posé D, Ferrario S, et al.: Characterization of SOC1’s central role in flowering by the identification of its upstream and downstream regulators. Plant Physiol. 2012; 160(1): 433–449. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKaufmann K, Muiño JM, Østerås M, et al.: Chromatin immunoprecipitation (ChIP) of plant transcription factors followed by sequencing (ChIP-SEQ) or hybridization to whole genome arrays (ChIP-CHIP). Nat Protoc. 
2010; 5(3): 457–472. PubMed Abstract | Publisher Full Text\n\nPark PJ: ChIP-seq: advantages and challenges of a maturing technology. Nat Rev Genet. 2009; 10(10): 669–680. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSegaran T: Programming collective intelligence: building smart web 2.0 applications. O'Reilly Media, Inc. 2007. Reference Source\n\nValouev A, Johnson DS, Sundquist A, et al.: Genome-wide analysis of transcription factor binding sites based on ChIP-Seq data. Nat Methods. 2008; 5(9): 829–834. PubMed Abstract | Publisher Full Text | Free Full Text"
}
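One Expresso feature described in the Conclusions above is the correlation analysis of TF–target gene pairs against user-supplied expression data. A minimal sketch of that kind of check, assuming a plain Pearson correlation over FPKM values (the `pearson` helper and the example numbers are hypothetical illustrations, not taken from Expresso's source code):

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical FPKM values for a TF and one candidate target gene
# across four tissues (mirroring the demo dataset format the reviewers describe).
tf_expr = [5.1, 12.3, 8.7, 2.0]
target_expr = [10.4, 25.0, 17.9, 4.1]

r = pearson(tf_expr, target_expr)
print(f"TF-target correlation: {r:.3f}")
```

A strong positive or negative coefficient supports, but does not prove, a regulatory relationship; conversely, a low correlation, as reported for AT3G21700 and PIF3, may indicate that other transcription factors dominate the target's regulation.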
|
[
{
"id": "21317",
"date": "18 Apr 2017",
"name": "Nicholas J. Provart",
"expertise": [
"Reviewer Expertise Cyberinfrastructure",
"plant bioinformatics",
"data visualization"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nIn principle the Expresso database will be useful to plant researchers. I would like to see a couple of things: what about mention of other databases like AGRIS at OSU and Cistome/ePlant at the BAR? Do these capture the kinds of interactions the authors are describing? Another is HRGRN (http://plantgrn.noble.org/hrgrn/). What are the disadvantages/limitations of these vis a vis Expresso?\nAnother thing that would be a “nice-to-have” would be to include the Ecker Lab’s recent, extensive DAP-seq data set, which the authors (https://www.ncbi.nlm.nih.gov/pubmed/27203113) show to be quite concordant with existing ChIP-seq data. These data are more extensive than the fairly limited number of ChIP-seq data sets that Aghamirzaie et al. have collated.\nI tried out the software, which worked as promised. The functionality was somewhat basic. It would be quite easy to use table.js or similar on the “Genes” search output page, or on the “Gene Expression” output page to be able to sort the table of favourite genes with their targets as a user expects to be able to do, or to sort by Pearson correlation. It might be nice to let users know how to download the “Genes” search results and load them into Cytoscape in a tutorial section. 
I was unable to download a file of TFs binding to my favourite genes (URL http://bioinformatics.cs.vt.edu/expresso/Expresso_Codes/getResFile_Genes.php) – the page returned an error of “Unable to select database”.\n“Run Demo” did not work on the “Gene Expression” page, or at least I thought it didn’t until I realized I had to scroll down to see the results, which appeared...but off the bottom of my screen…a little Javascript autoscroll to that section would be helpful after the calculation has finished.\n\nTypos/grammar In general: it’s ChIP-chip, not ChIP-ChIP (the first ChIP is for Chromatin Immuno-Precipitation, the second “chip” refers to microarray)\nBe consistent: either ChIP-seq or ChIP-Seq (we see ChIP-seq, ChIP-Seq, and ChIP Seq in the paper)\nCandidate target gene finding section: “the corresponding genomic sequences were extracted and trimmed” (“sequences” should be plural as multiple genomic sequences are analyzed, no?)\nCandidate target gene finding section: “Otherwise, motif width was set to be between 5 to 30 bp” (“…to 5 to 30” is awkward)\nBottom of page 3: (“along with a link to TAIR10”) – TAIR10 refers to the 10th genome build. I’d say rather “along with a link to TAIR”. It might be nice to add a link to the Araport record for a given gene too.\nTop of page 4: “…motifs in at least one peak located close to…” (missing “in”?)\nMidway down page 4: “…the results are complete.” (instead of “get ready”)\n\nIs the rationale for developing the new software tool clearly explained? Yes\n\nIs the description of the software tool technically sound? Yes\n\nAre sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? Yes\n\nIs sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? 
Yes\n\nAre the conclusions about the tool and its performance adequately supported by the findings presented in the article? Yes",
"responses": []
},
{
"id": "21320",
"date": "18 Apr 2017",
"name": "Asa Ben-Hur",
"expertise": [
"Reviewer Expertise Bioinformatics",
"machine learning",
"analysis of high throughput sequencing data"
],
"suggestion": "Approved",
"report": "Approved\n\nThe authors have created a useful resource that provides unified access to a large number of ChIP-seq experiments in arabidopsis. The database has functionality that would be useful for exploring TF binding. Each function of the database has example data that allows users to try it out easily, and the pipeline is available through github.\nThe following should be addressed:\nThe major issue with the paper is that there is an existing similar resource called ChIPBase (see citations below1,2). The authors should cite it and compare their database with it, as it's not obvious what Expresso is adding to what it provides.\n\nA figure that summarizes your data analysis pipeline would be beneficial (I saw such a figure on the Expresso website).\n\nIn the section on motif finding: did you focus on promoter regions for the peaks, and if so, how were those defined? The motifs generated by MEME were compared against those in the corresponding papers, and no motif was added if MEME did not detect a motif. How often did that happen? MEME occasionally misses motifs, and other tools could have possibly found those motifs.\n\nIn the expression section, please suggest how the user should measure expression to provide good results. For that matter, please provide information on how expression of the TFs is quantified.\n\nMinor comments:\nThere are some grammar issues that need to be fixed - see below.\nIn the introduction you write that \"ChIP can provide genome-wide information...\". 
That is true when performed as ChIP-ChIP or ChIP-seq.\n\nTF-associated gene regulations --> regulation\n\n\"ChIP-Seq in plant biology is limited\": I think you meant that it hasn't been as widely used as in mammalian systems.\n\nI did not buy your explanation of the delay in adoption of ChIP-seq in plant research. Plant research tends to be a few steps behind, and furthermore, many more people study human than arabidopsis.\n\n\"All the codes for preprocessing\" --> all the code for preprocessing\n\n\"results in 298 genes that have at least one significantly enriched motifs at least one peak located close to their transcription start site.\" something unclear here - \"one significantly enriched motifs at least one peak\" - should there be an \"or\" or \"and\" enriched motif AND at least one peak? And the word motif should be singular, and refer to a motif hit/occurrence.\n\ntargets genes --> target genes\n\n\"can be downloaded in the text format\" --> in text format\n\nIs the rationale for developing the new software tool clearly explained? Yes\n\nIs the description of the software tool technically sound? Yes\n\nAre sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? Partly\n\nIs sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? Yes\n\nAre the conclusions about the tool and its performance adequately supported by the findings presented in the article? Yes",
"responses": []
},
{
"id": "22402",
"date": "02 May 2017",
"name": "Sakiko Okumoto",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nIn this manuscript, the authors created a web-based interface which hosts ChIP-seq data from 20 published experiments, and allows the users to 1) access the compiled lists of TF targets that met the criteria set by the authors, 2) identify the transcription factor(s) that regulate his/her gene of interest, and 3) perform co-expression analyses with user-provided RNAseq data.\n\nAlthough there are other web-based services that allow at least part of what is described above, the authors argue that this is the first platform that provides a uniform format for 20 genes. I would like to agree with the authors about the value of such a format.\n\nOf the above three functions, the first two are fairly straightforward and seem to function as expected. However, I feel that the description found in the manuscript about the co-expression analysis did not contain enough information.\nThe authors provide a list of 100 genes with their FPKM values in 4 different tissues as a demo. The manuscript describes the set as “100 genes (including some transcription factors) were selected randomly from this dataset”. When comparing this list of 100 with the list of TFs in this database however, I see that 16 out of 20 TFs in the database are included in the list of 100 genes. This seems more than “some” to me – please describe specifically. When I run the demo, none of the 4 that are not in the list of 100 are found to be co-expressed with any of the genes. 
I think that would make sense because I don’t see how one can deduce co-expression between a given gene and a TF if the TF is not expressed in the data set. If this is indeed a requirement, I would like to see that stated in the manuscript.\nAlso, typically how many tissues/times would need to be in the dataset? (When I remove one of the columns in the demo set I don’t get any hits, probably due to the reduced statistical power of the data set.) It would be beneficial for the readers to know the approximate number of experiments needed for a correlation analysis.\nIn general, I would really appreciate if the authors could explain how co-expression analysis works – does it first perform co-expression analyses within the genes in the uploaded dataset, identify the TFs in the data base, then select the ones that have the consensus motif? A lay-friendly flow chart would be much appreciated.\nAlso, it would be nice if the algorithm identified the motif and the distance from the ATG in the co-expressed genes.\nMinor points include:\nOn the “Experiment” tab – for each TF, would you please include the link to the original publication? One can trace back using GEO NCBI, but it would be easier for the users if the publication is included as an additional column.\n\n“To demonstrate the application of correlation analysis on finding potential TF-target gene pairs, a RNA-Seq dataset (Segaran, 2007) has been added to Expresso as a demo” I am fairly certain that the reference provided here is wrong.\n\nMethods “a peak locus peak” a peak locus?\n\nIs the rationale for developing the new software tool clearly explained? Yes\n\nIs the description of the software tool technically sound? Partly\n\nAre sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? Partly\n\nIs sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? 
Partly\n\nAre the conclusions about the tool and its performance adequately supported by the findings presented in the article? Partly",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-372
|
https://f1000research.com/articles/6-133/v1
|
13 Feb 17
|
{
"type": "Research Article",
"title": "Neuraxial opioids as analgesia in labour and postoperative after caesarean section and hysterectomy: A questionnaire survey in Sweden",
"authors": [
"Anette Hein",
"Caroline Gillis-Haegerstrand",
"Jan G. Jakobsson"
],
"abstract": "Background: Neuraxial opioids improve labour analgesia and analgesia after caesarean section (CS) and hysterectomy. Undesirable side effects and difficulties in arranging postoperative monitoring might influence the use of these opioids. The aim of the present survey was to assess the use of intrathecal and epidural morphine in gynaecology and obstetrics in Sweden. Methods: A questionnaire was sent to 47 anaesthesiologists at obstetric units in Sweden concerning the use and postoperative monitoring of morphine, sufentanil and fentanyl in spinal/epidural anaesthesia. Results: A total of 32 units responded, representing 83% of annual CS in Sweden. In CS spinal anaesthesia, 20/32 units use intrathecal morphine, the most common dose of which was 100 μg (17/21). Intrathecal fentanyl (10–20 μg) was used by 21 units and sufentanil (2.5–10 μg) by 9/32 of the responding units. In CS epidural anaesthesia, epidural fentanyl (50–100 μg) or sufentanil (5–25 μg) was commonly used (25/32), and 12/32 clinics used epidural morphine, the majority of units using a 2 mg dose. Intrathecal morphine for hysterectomy was used by 20/30 units, with 200 μg as the most common dose (9/32). Postoperative monitoring was organized in adherence to the National Guidelines; the patient is in postoperative care or an obstetrical ward for 2–6 hours and then up to 12 hours in an ordinary surgical ward. Risk of respiratory depression/difficulty in monitoring was a reason for not using intrathecal opioids. Conclusions: Neuraxial morphine is used widely in Sweden in CS and hysterectomy, but is still restricted in some units because of the concern for respiratory depression and difficulties in monitoring.",
"keywords": [
"intrathecal morphine",
"labour pain",
"postoperative pain",
"Caesarean Section",
"hysterectomy",
"sufentanil",
"fentanyl",
"epidural morphine"
],
"content": "Introduction\n\nIntrathecal and epidural morphine improve postoperative analgesia after caesarean section (CS) and hysterectomy, as well as intrathecal labour analgesia1–3. In 1981, the Swedish Society of Anaesthetists conducted a nationwide survey of experience with intrathecal and extradural opiates4. They found intrathecal morphine was administered to only 90–150 patients and ventilatory depression requiring treatment with naloxone was needed in six of these patients. Since then, the use of intrathecal and epidural morphine has expanded, with a decrease in the doses used5. Still, the use of intrathecal and epidural morphine in these patient categories varies, and the general use in Sweden at present is unknown. Spinal morphine may have undesirable side effects, such as nausea and vomiting, with the most feared side effect being respiratory depression, which is why extended postoperative monitoring is required6. Extended postoperative monitoring demands personnel resources and is sometimes difficult to arrange, which may influence the use of spinal morphine.\n\nThe aim of the present questionnaire survey was to assess the use of intrathecal and epidural opioids in obstetric and gynaecological patients, factors that limit or hold back their use, and monitoring routines implemented for patients treated with opioids.\n\n\nMethods\n\nThis study was conducted in accordance with the principles outlined in the Declaration of Helsinki. Ethical committee approval was not sought for the present study, since the survey concerns only clinical practice and routines and not patient data, in accordance with Swedish ethical board guidelines. Permission to carry out the research was obtained from the head of department of each hospital.\n\nA questionnaire survey was sent to anaesthesiologists in charge of Swedish obstetric anaesthesia. 
In all, 47 obstetric units were identified by the Swedish Medical Birth Register from the National Board of Health and Welfare (Sweden).\n\nWe identified the anaesthesiologist in charge of the obstetric and gynaecological anaesthesia for each unit by address lists used by the Swedish Association of Obstetric Anaesthesia and Intensive Care, and if these were not available the hospital was phoned to get hold of the anaesthesiologist in charge.\n\nThe survey consisted of 26 questions sent by mail to the anaesthesiologists identified in December 2014, and this was repeated in April 2015 to those clinics that had not answered the first questionnaire. The second time the same questionnaire was sent by email and by post, including a return envelope. The email included a letter to the anaesthesiologist and the survey in two versions. The first version was a Word document that could be filled in on the computer, saved and returned by email to a special email address for the study purpose only. The second version was a PDF file that could be printed, filled in by hand and sent by post to our hospital address.\n\nThe questionnaire (Supplementary File 1 and Supplementary File 2) was designed to assess how commonly intrathecal and epidural morphine, fentanyl and sufentanil are used as adjuncts to local anaesthetics in the perioperative care of CS and hysterectomy, as well as the routine use of intrathecal morphine for labour analgesia. The anaesthesiologists were asked to approximate the numbers of CS and hysterectomies performed in spinal and epidural anaesthesia, respectively, and specify the numbers of patients administered opioids, including neuraxial morphine, for the operations performed. We also asked for doses when spinal/epidural opioids are used. When spinal/epidural morphine was marked “not used”, we asked for the reasons behind withholding it. Both multiple choice and written answers were collected. 
Questions about the organisation of monitoring after neuraxial morphine administration, and known serious adverse events that had occurred in their units, were also included.\n\nWe calculated the size of the different units using the annual total number of CS performed in each unit, in order to put our findings, with regard to the routine use of opioids, into perspective. The annual numbers were collected from the Swedish Medical Birth Register of the National Board of Health and Welfare (Sweden).\n\nData are presented as number and percentage, and range as applicable. No formal statistical tests were used.\n\n\nResults\n\nIn total, 32 of the 47 units mailed returned the questionnaire (68% response rate). Units with a large number of births and CS were more willing to respond.\n\nThe routine use of intrathecal opioids in CS is shown in Table 1. All responding units use at least one opioid as adjunct to local anaesthesia in CS.\n\nIT, intrathecal; EDA, epidural; Mo, morphine; F, fentanyl; S, sufentanil.\n\nIntrathecal morphine. A total of 20 out of 32 units reported the use of intrathecal morphine as routine in CS spinal analgesia. Three units that did not use intrathecal morphine for CS patients commented that they intended to start using intrathecal morphine within the next year.\n\nFor CS the most common intrathecal morphine dose is 100 μg, which is used in 17/20 units, and three units use 125 μg. All, except for two units, use morphine in combination with either fentanyl or sufentanil. One of these units reported that they plan to start the addition of fentanyl.\n\nIntrathecal fentanyl. Addition of fentanyl in intrathecal CS anaesthesia is used in 21/32 units; five units use fentanyl, but no morphine. In total, 18 units use a 10–12.5 µg dose and three units use a 15–20 µg dose.\n\nIntrathecal sufentanil. The addition of sufentanil in intrathecal CS anaesthesia was less common (9/32 units); seven of these units use sufentanil as the sole opioid added. 
Four units use a 2.5 µg dose, four units use a 5 µg dose and one unit uses a 5–10 µg dose of sufentanil.\n\nThus, 16 units use the combination morphine/fentanyl, two use the combination morphine/sufentanil, two solely use morphine, five solely use fentanyl and seven solely use sufentanil.\n\nEpidural morphine. Addition of epidural morphine to CS performed in epidural anaesthesia was less common than intrathecal morphine in spinal anaesthesia: 12/32 units add morphine to the epidural local anaesthesia. The majority, 10 units, administer a 2 mg dose of morphine, one unit 1 mg and one unit a 4 mg dose of morphine.\n\nEpidural fentanyl. Ten units add fentanyl (50–100 µg) in the epidural for CS anaesthesia.\n\nEpidural sufentanil. In total, 15 units use sufentanil as the fast-acting opioid in epidural anaesthesia for CS, where 11 use doses up to 10 µg, three administer doses up to 25 µg, and one uses a 25–50 µg dose.\n\nLabour analgesia. None of the units use morphine in spinal labour analgesia.\n\nIntrathecal morphine. Spinal anaesthesia with morphine added is used for hysterectomy in 20 of the 32 answering units. The most common dose is 200 µg, administered by nine clinics; five clinics use 120–140 µg, five use 100 µg and one unit administers 80 µg.\n\nIn other types of gynaecological operations, such as perineorrhaphies, robotic surgery for malignancy and other gynaecological abdominal surgery, seven of 32 clinics use intrathecal morphine. One of 32 clinics used epidural morphine, for malignant gynaecological surgery.\n\nPostoperative monitoring. Postoperative monitoring is generally organised with the initial 2–6 hours in the postoperative ward and the following hours, up to 12 hours, in the regular surgical ward, according to the guidelines of SFAI – the Swedish Association of Anaesthesia and Intensive Care. 
In the case of CS, the initial 2–6 hours of postoperative monitoring take place in either the postoperative ward or the obstetrical ward (routinely in eight hospitals and occasionally in some others), and the following hours, up to 12 hours, in the regular ward.\n\nSeven of the eleven units that chose not to use spinal/epidural morphine describe risk of respiratory depression and difficulty in monitoring as the main reasons for withholding its use.\n\n\nDiscussion\n\nWe found that all units used opioid supplementation and two thirds of responding units of Swedish obstetric anaesthesia used spinal/epidural morphine as adjunct to local anaesthesia for perioperative care of CS and hysterectomies. Spinal use was rather uniform, with 100 µg of intrathecal morphine for CS and up to 200 µg for hysterectomies, combined with a low dose of fentanyl or sufentanil. Epidural morphine was used to a lesser extent. Opioids, usually sufentanil, were also commonly added to labour analgesia, but no unit uses morphine as an adjunct in labour analgesia. The common reason for withholding spinal/epidural opioids was the risk of respiratory depression, and thus the demands for post-procedural monitoring.\n\nThere were some obvious differences in practice in the different units. Winther et al. likewise found inconsistencies in clinical guidelines for obstetric anaesthesia for CS7. The most effective method for providing pain relief during labour is epidural analgesia, and access to effective epidural analgesia in Sweden is good8. In countries with limited resources, single-shot spinal anaesthesia may be a feasible option. Combinations of low-dose morphine, fentanyl and bupivacaine, or morphine, sufentanil and bupivacaine, have been suggested to achieve effective analgesia with a prolonged effect9. 
However, we were previously unable to find a major advantage of the addition of morphine, when comparing 0, 50 or 100 μg morphine added to 1.25 mg bupivacaine and 5 μg sufentanil during established labour, as this did not show a significantly increased duration of analgesia10. Morphine is not a preferred drug for labour epidural analgesia, and questions about the addition of opioids other than morphine to labour epidurals were not included in this survey. Adding sufentanil or fentanyl is common practice, and may be seen more or less as the gold standard11,12. Sufentanil is the most common opioid added in labour anaesthesia in Sweden, as reported at a national obstetric anaesthesia meeting from a survey in 2009 (not published).\n\nSpinal anaesthesia is the preferred analgesia in CS when a working epidural is not in place for conversion and top-up. Long-acting spinal opioids as a component of CS spinal anaesthesia have been proven superior to their systemic counterparts for post-caesarean analgesia, making them a commonly used part of multimodal analgesic regimens13. In Sweden, the long-acting opioid most commonly used in CS is morphine, and we found that 63% of units used intrathecal morphine as routine in CS spinal analgesia. Since larger units with higher numbers of CS more commonly use intrathecal morphine, the impact of the routine is even more pronounced. The units using intrathecal morphine as routine adjunct to local anaesthesia in CS covered an estimated 73% of all CS annually performed in responding units. All responding units used intrathecal morphine doses of 100–125 µg. Palmer et al. found in a dose-response study that morphine at 100 µg added to hyperbaric bupivacaine (12.75 mg) provided analgesia comparable to that provided by doses as high as 500 µg1. The optimal effective dose for CS is not well defined. The balance between analgesia and side effects must be taken into account. 
The occurrence of pruritus has been found to be dose-related1,6,14. For nausea and vomiting the relationship is not as clear, but nausea and vomiting are seen more frequently in higher-dose than in lower-dose groups14.\n\nIntrathecal morphine is suggested to provide better pain control after CS than opioid-free epidural analgesia15. Intrathecal morphine has also been shown to be superior regarding postoperative pain relief, as compared to abdominal wall block (TAP)13. The more lipophilic diamorphine was the most commonly used opioid in a survey conducted in the UK in 20085. Diamorphine has a more rapid onset of action than morphine because of a high lipophilicity (octanol–water coefficient = 280)16. Diamorphine is metabolised to morphine with long-acting activity; however, diamorphine is not registered/available as a pharmaceutical drug in Sweden16. Fentanyl and sufentanil are the available lipid-soluble opioids used, which have a quick onset for improved perioperative spinal and epidural analgesia17. A combination of one lipid-soluble opioid and long-acting water-soluble morphine added to hyperbaric bupivacaine was found to be commonly used in Sweden. It is an attractive combination, providing a quick onset with improved perioperative quality and long-acting postoperative analgesia16. Nevertheless, the present study found that twelve units use solely fast-acting opioid adjuncts to local anaesthesia, and two units add morphine to local anaesthetics with no addition of fentanyl or sufentanil, although one of these stated that they plan to start adding fentanyl shortly.\n\nWhen a functional labour epidural is in place, it is recommended to convert it to full epidural anaesthesia if an emergency CS becomes necessary18. Twelve of the 32 units add morphine to the epidural local anaesthesia; the most common dose is 2 mg. Singh et al. 
found epidural morphine at 1.5 mg provided non-inferior post-caesarean analgesia and caused fewer adverse effects compared with 3 mg epidural morphine19. Still, the majority of responding units chose not to add morphine in epidural analgesia for CS postoperative analgesia. The addition of sufentanil (15 units) or fentanyl (9 units) was more common, together covering 74% of CS performed annually in responding clinics. Malhotra et al. found no support for adding 75 µg fentanyl in top-up epidural for CS regarding either time to onset or quality of analgesia in women already receiving epidural fentanyl during labour20. Yet, the addition of epidural opioids has been proven to enhance postoperative analgesia after CS with earlier onset21,22.\n\nThe intrathecal or epidural addition of morphine as an adjunct for intra- as well as postoperative pain relief is also regarded by many as the ‘gold standard’, due to its positive analgesic interaction with local anaesthetics and prolonged duration of action6. Spinal anaesthesia with added morphine was used in 63% of responding units for hysterectomy perioperative analgesia. The most common dose used was 200 µg, which is well in line with the optimal dose found in a study of three doses (100 µg, 200 µg and 300 µg) versus placebo in abdominal hysterectomy2. Intrathecal morphine use in hysterectomy varied regarding type of surgical technique: five units chose to use intrathecal morphine in abdominal, vaginal and laparoscopic hysterectomies, while seven units use intrathecal morphine in abdominal and vaginal, but not in laparoscopic, hysterectomy.\n\nAll responding units had guidelines regarding postoperative monitoring after neuraxial opioids. 
The routine of dividing the postoperative monitoring between a more intensively monitored postoperative ward for the first 2–6 hours and a regular surgical ward for the following hours, up to 12 hours, according to the guidelines of SFAI, was common, and was found to be already routine in a European survey in 1996 by Rawal et al23. In eight units, CS were initially monitored for 2–6 hours in the obstetrical ward. Two units using intrathecal morphine for CS monitored postoperatively only in a regular maternity ward for the whole period. Seven of the eleven units that chose not to use spinal/epidural morphine describe a risk of respiratory depression and difficulty in postoperative monitoring as the main reason for withholding its use. In these units, which do not use morphine because of the risk of respiratory depression and difficulties in postoperative monitoring, approximately 15% of the reported CS are performed.\n\nRespiratory depression is the most serious side effect associated with neuraxial morphine; however, its occurrence is rare6. In 2010, the subject “All patients receiving neuraxial morphine should be monitored with continuous pulse oximetry” was debated during Controversies in Obstetric Anesthesia24. It was commented that a requirement for pulse oximetry might decrease the use of neuraxial morphine for post-caesarean analgesia, risking more pain, greater parenteral morphine consumption and, in turn, an increased risk of respiratory depression6. In our institution we have some 15 years’ experience of approximately 2000 CS performed annually, the majority under spinal anaesthesia using a combination of fentanyl (10 µg), morphine (100 µg) and hyperbaric bupivacaine (10–12 mg). 
Our routine is initial monitoring for 2–3 postoperative hours in a postoperative ward, and the following hours, up to 12 hours, in the regular maternity ward with hourly midwife supervision of level of consciousness and, in the case of a sedated (sleeping) patient, monitoring of respiratory rate. In the case of an unstable patient, morbid obesity or known sleep apnoea, the patient stays in the postoperative department for a longer time, determined on an individual basis. In line with responding units in this survey, we have no known experience of severe respiratory depression.\n\nThe return rate of the questionnaire was 32/47 units (68%), a figure in line with previous questionnaire surveys of anaesthesia practice in obstetrics25,26. Taking into account the size of the different units, the responding units cover approximately 82% of the 111,364 annual deliveries recorded in the Swedish Medical Birth Register (2013), and 83% of CS are performed in responding units. All units performing more than 400 annual CS returned the questionnaire. Our main interest was to determine the routine use of morphine in CS, since we learned from anaesthesia meetings with colleagues that this routine, regarded as the “gold standard”, was not in use in several units, due to fear of respiratory depression6. We included questions regarding spinal/epidural addition of fentanyl and sufentanil as well, but did not ask in detail about monitoring after these adjuvants when morphine was not included. We asked for the numbers of CS and hysterectomies performed with morphine for each unit. However, those numbers could not be extracted from the questionnaires because too few answers were given. 
Once the majority of anaesthesia units report to the Swedish PeriOperative Register (SPOR), there will hopefully be better opportunities in the coming years to answer questions about common routines and outcomes.\n\n\nConclusion\n\nThe use of neuraxial opioids is widespread in Sweden; however, regimens vary somewhat: some units choose to use only lipophilic opioids with a rapid onset, and a few use only long-acting water-soluble morphine. A majority, however, use a combination together with local anaesthesia. Still, in some units the use of morphine is restricted because of concern about respiratory depression and difficulties in monitoring.\n\n\nData availability\n\nDataset 1: Raw data for IT and EDA opioid survey. IT, intrathecal; EDA, epidural; MO, morphine. doi: 10.5256/f1000research.10705.d15141827",
"appendix": "Author contributions\n\n\n\nAll authors have contributed equal to the preparation of the paper, preparation and study design, AH has done most of collection of data, all authors under the lead of AH/JGJ have contributed to the writing and preparation of the manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis study has been supported by the Department of Anaesthesia & Intensive Care, no external funds were received.\n\n\nSupplementary material\n\nSupplementary File 1: Questionnaire in Swedish.\n\nClick here to access the data.\n\nSupplementary File 2: Questionnaire in English.\n\nClick here to access the data.\n\n\nReferences\n\nPalmer CM, Emerson S, Volgoropolous D, et al.: Dose-response relationship of intrathecal morphine for postcesarean analgesia. Anesthesiology. 1999; 90(2): 437–44. PubMed Abstract | Publisher Full Text\n\nHein A, Rösblad P, Gillis-Haegerstrand C, et al.: Low dose intrathecal morphine effects on post-hysterectomy pain: a randomized placebo-controlled study. Acta Anaesthesiol Scand. 2012; 56(1): 102–9. PubMed Abstract | Publisher Full Text\n\nYeh HM, Chen LK, Shyu MK, et al.: The addition of morphine prolongs fentanyl-bupivacaine spinal analgesia for the relief of labor pain. Anesth Analg. 2001; 92(3): 665–8. PubMed Abstract | Publisher Full Text\n\nGustafsson L, Schildt B, Jacobsen K: Adverse effects of extradural and intrathecal opiates: report of a nationwide survey in Sweden. 1982. Br J Anaesth. 1998; 81(1): 86–93. discussion 85. PubMed Abstract\n\nGiovannelli M, Bedforth N, Aitkenhead A: Survey of intrathecal opioid usage in the UK. Eur J Anaesthesiol. 2008; 25(2): 118–22. PubMed Abstract | Publisher Full Text\n\nSultan P, Gutierrez MC, Carvalho B: Neuraxial morphine and respiratory depression: finding the right balance. Drugs. 2011; 71(14): 1807–1819. 
PubMed Abstract | Publisher Full Text\n\nWinther LP, Mitchell AU, Møller AM: Inconsistencies in clinical guidelines for obstetric anaesthesia for Caesarean section: a comparison of the Danish, English, American, and German guidelines with regard to developmental quality and guideline content. Acta Anaesthesiol Scand. 2013; 57(2): 141–9. PubMed Abstract | Publisher Full Text\n\nAnim-Somuah M, Smyth RM, Jones L: Epidural versus non-epidural or no analgesia in labour. Cochrane Database Syst Rev. 2011; (12): CD000331. PubMed Abstract | Publisher Full Text\n\nAl-Kazwini H, Sandven I, Dahl Leiv V, et al.: Prolonging the duration of single-shot intrathecal labour analgesia with morphine: A systematic review. Scand J Pain. 2016; 13: 36–42. Publisher Full Text\n\nHein A, Rösblad P, Norman M, et al.: Addition of low-dose morphine to intrathecal bupivacaine/sufentanil labour analgesia: A randomised controlled study. Int J Obstet Anesth. 2010; 19(4): 384–9. PubMed Abstract | Publisher Full Text\n\nLi B, Wang H, Gao C: Bupivacaine in combination with fentanyl or sufentanil in epidural/intrathecal analgesia for labor: a meta-analysis. J Clin Pharmacol. 2015; 55(5): 584–91. PubMed Abstract | Publisher Full Text\n\nWong C: Epidural and spinal analgesia/Anesthesia for Labor and Vaginal delivery. In Chestnut D, Wong C, Tsen L, Ngan Kee W, Beilin Y, Mhyre J. Chestnut's obstetric Anesthesia Principles and Practice. Fifth edition. Elsevier Saunders. 2014; 457–517. Reference Source\n\nAbdallah FW, Halpern SH, Margarido CB: Transversus abdominis plane block for postoperative analgesia after Caesarean delivery performed under spinal anaesthesia? A systematic review and meta-analysis. Br J Anaesth. 2012; 109(5): 679–87. PubMed Abstract | Publisher Full Text\n\nSultan P, Halpern SH, Pushpanathan E, et al.: The Effect of Intrathecal Morphine Dose on Outcomes After Elective Cesarean Delivery: A Meta-Analysis. Anesth Analg. 2016; 123(1): 154–64. 
PubMed Abstract | Publisher Full Text\n\nSuzuki H, Kamiya Y, Fujiwara T, et al.: Intrathecal morphine versus epidural ropivacaine infusion for analgesia after Cesarean section: a retrospective study. JA Clinical Reports. 2015; 1: 3. Publisher Full Text\n\nCarvalho B, Butwick A: Postoperative analgesia: Epidural and spinal techniques. In: Chestnut D, Wong C, Tsen L, Ngan Kee W, Beilin Y, Mhyre J. Chestnut's obstetric Anesthesia Principles and Practice. Fifth edition. Elsevier Saunders, 2014; 621–661. Reference Source\n\nDahlgren G, Hultstrand C, Jakobsson J, et al.: Intrathecal sufentanil, fentanyl, or placebo added to bupivacaine for cesarean section. Anesth Analg. 1997; 85(6): 1288–93. PubMed Abstract | Publisher Full Text\n\nLevy DM: Emergency Caesarean section: best practice. Anaesthesia. 2006; 61(8): 786–791. PubMed Abstract | Publisher Full Text\n\nSingh SI, Rehou S, Marmai KL, et al.: The efficacy of 2 doses of epidural morphine for postcesarean delivery analgesia: a randomized noninferiority trial. Anesth Analg. 2013; 117(3): 677–85. PubMed Abstract | Publisher Full Text\n\nMalhotra S, Yentis SM: Extending low-dose epidural analgesia in labour for emergency Caesarean section - a comparison of levobupivacaine with or without fentanyl. Anaesthesia. 2007; 62(7): 667–671. PubMed Abstract | Publisher Full Text\n\nCohen S, Amar D, Pantuck CB, et al.: Postcesarean delivery epidural patient-controlled analgesia. Fentanyl or sufentanil? Anesthesiology. 1993; 78(3): 486–491. PubMed Abstract\n\nVora KS, Shah VR, Patel B, et al.: Postoperative analgesia with epidural opioids after cesarean section: Comparison of sufentanil, morphine and sufentanil-morphine combination. J Anaesthesiol Clin Pharmacol. 2012; 28(4): 491–495. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRawal N, Allvin R: Epidural and intrathecal opioids for postoperative pain management in Europe--a 17-nation questionnaire study of selected hospitals. Euro Pain Study Group on Acute Pain. 
Acta Anaesthesiol Scand. 1996; 40(9): 1119–1126. PubMed Abstract | Publisher Full Text\n\nD’Angelo R: All parturients receiving neuraxial morphine should be monitored with continuous pulse oximetry. Int J Obstet Anesth. 2010; 19(2): 202–4. PubMed Abstract | Publisher Full Text\n\nKinsella SM, Walton B, Sashidharan R, et al.: Category-1 caesarean section: a survey of anaesthetic and peri-operative management in the UK. Anaesthesia. 2010; 65(4): 362–8. PubMed Abstract | Publisher Full Text\n\nGardner IC, Kinsella SM: Obstetric epidural test doses: a survey of UK practice. Int J Obstet Anesth. 2005; 14(2): 96–103. PubMed Abstract | Publisher Full Text\n\nHein A, Gillis-Haegerstrand C, Jakobsson JG: Dataset 1 in: Neuraxial opioids as analgesia in labour and postoperative after caesarean section and hysterectomy: A questionnaire survey in Sweden. F1000Research. 2017. Data Source"
}
|
[
{
"id": "20179",
"date": "27 Feb 2017",
"name": "Jakob Walldén",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe present study explores the use of opioids in obstetric and gynaecological surgery in Sweden. The authors have done a nice survey among Swedish hospitals covering a majority of the procedures performed. The study is properly designed, the results reported correctly with adequate clinically relevant conclusions. However, some revisions are needed in the manuscript.\nThroughout the manuscript please revise the language according to tenses and report the results in “past time”. Please use a consequent numbering format with numbers and letters.\n\nConsider sending the manuscript for language proof reading to improve the quality.\n\nAbstract:\nChange “…47 anaesthesiologists at ob. units…” to “… all anaesthetic obstetric unit..”\n\nIntroduction: Line 5: 90-150 of how many??\n\nMethods: “..26 questions sent by mail…” Please specify postal mail “…questionnaire was produced to reflect….” Designed or Formulated might be better than Produced “…no formal statistical tests WERE used.” Please use past form.\n\nResults Regarding the responding clinics, was it 32 units that responded with postal mail? Or total response rate? Consider deleting the word “mailed”. The results here are not in accordance with the results in the abstract where you state that the units represented 83% of annual CS. Better formulation may be: “… 32 of 47 (68%) units responded to the questionnaires representing 83 % of annual CS in Sweden...”\n\nPlease provide proof that larger units were more willing to respond. 
Significant?\n\nPlease be consistent with the dosing: currently both µg and microgram are used.\n\nSection epidural morphine:\n\nTable 1: The table is difficult to view in the current layout.\n\nSuggestion -------\n\nNumber of units Response rate 32/47\n\nIntrathecal morphine 20 Standard dose 100 µg\n\n17 Standard dose 125 µg\n\n3 Combined with fentanyl\n\n16 Combined with sufentanil\n\n2\n\nIntrathecal Fentanyl\n\n21 Standard dose 10-14 µg\n\n18 Etc… -------------\n\nDiscussion\n\nPlease add “…all responding units used….” to the first sentence.\n\n“…a survey in 2009 (not published)”. “Data on file” might be a better formulation than “not published”.\n\n“The addition of sufentanil (15 units) or fentanyl (9 units…)” Please reformulate so that it is clear that you mean the number of OB/Gyn units and not units of the drugs.\n\nLast paragraph of the Discussion: Move the results to the first part of the results section.\n\nPlease also include the limitations of the study in the discussion. Were there any pitfalls? Can the results be generalised outside Sweden? The standard care for the units is reported, but is there variability within the units that has to be discussed?\n\nConsider reducing the length of the discussion.\n\nReferences: There is a mixed format in the references, with special attention to the page numbering. Please adhere to the Index Medicus/Medline format. Check the page numbers of ref 23.",
"responses": []
},
{
"id": "20178",
"date": "08 Mar 2017",
"name": "Wojciech Weigl",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe study of Hein et al. presents results of a nation-wide survey on the use of neuraxial opioids in obstetric and gynecology patients in Sweden. I found especially interesting these parts that were devoted to cesarean section (CS). Multimodal analgesia became a widely used approach to post-CS pain management and the use of intrathecal opioids is one of the important components in this approach. Among intrathecal opioids, low-dose morphine appear to be the most widely recommended1. However, this hydrophilic opioid has a high frequency of adverse effects2 such us nausea and vomiting, pruritus, and potentially serious late respiratory depression3. That’s why the question of the current routines related to this topic in Sweden is important in daily life clinical practice.\nI think that several things could be corrected to improve quality of the paper.\nGeneral comments:\nAuthors should choose tense in which they present and discuss findings. Personally, I don’t like to present tense because, as stated below it can be sometimes misleading. Anyway, consistency is needed. English language could also be improved. There are some awkward expressions such as ’fast opioid’ which is not really medical term.\nSpecific comments:\nTitle: The title could be more clear and in better English.\nAbstract: I recommend to maintain consistency regarding the aim of the study, whether it is the use of neuraxial opioids or just morphine. 
‘the patient is in postoperative care or an obstetrical ward over 2-6 hours and up-to 12 hours in an ordinary surgical ward.’ I would add ‘is monitored in …’\nIntroduction: ‘Intrathecal and epidural morphine improve postoperative analgesia after caesarean section (CS) and hysterectomy and intrathecal labour analgesia’. Did the authors mean during intrathecal labour analgesia? This is at least what Yeh et al. had in mind during their study.\nMethods: ‘A questionnaire survey was sent to anaesthesiologists in charge of Swedish obstetric anaesthesia’ I think this sounds a bit strange. Is it not simpler just to write that the questionnaire was sent to the anaesthesiologists in charge of obstetric anaesthesia units in hospitals in Sweden? Furthermore, the information about the anesthesiologists is repeated 3 times. What is the ‘Swedish Association of Obstetric Anaesthesia and Intensive Care’? ‘The questionnaire (Supplementary File 1 and Supplementary File 2) was produced to reflect how common the use of intrathecal and epidural morphine, fentanyl and sufentanil as adjunct to local anaesthetics for perioperative care of CS and hysterectomy is, and also the routine use of intrathecal morphine for labour analgesia.’ I think this sentence could be corrected in the underlined parts, as it sounds awkward. ‘specify the numbers of patients administered with opioids, including neuraxial morphine, for the operations performed’ It is not clear whether the authors were interested in intrathecal morphine alone, in other intrathecal opioids or, what would be even more interesting, in the combinations of opioids used. This is the same issue as with the aim of the study.\n\nResults: ‘Epidural sufentanil. In total, 15 units use sufentanil as the fast opioid in the epidural anaesthesia for CS anaesthesia’ Maybe ‘opioid with rapid onset of action’. 50 mcg of epidural sufentanil is quite a large dose. I think it should be commented on. 
It is quite surprising that for hysterectomy and gynecological abdominal surgery opioids are used intrathecally and not with epidural anesthesia. Can the authors comment on that? ‘Postoperative monitoring is generally organized within the initial 2–6 hours in the postoperative ward and the following hours, up to 12 hours, in the regular surgical ward, according to the guidelines of SFAI – Swedish Association of Anaesthesia and Intensive Care.’ Did you mean that this is the general routine in Sweden, or was this a result of the survey? Using the present tense in the results section is a bit misleading. Also, ‘in the regular surgical ward, according to the guidelines of SFAI – Swedish Association of Anaesthesia and Intensive Care.’ I would change into: … according to the guidelines of the Swedish Association of Anaesthesia and Intensive Care (SFAI).\n\nPostoperative monitoring: The authors write about 9 units of 32 in the case of CS; what happened to patients in the remaining 20 units where morphine was used? ‘Seven of the eleven units that chose not to use spinal/epidural morphine’ Or was it 12/32 units that did not use morphine, as in Table 1?\nDiscussion: I think the authors should avoid expressions such as ‘Swedish obstetric anaesthesia’ because no such thing exists. Use instead ‘obstetric anaesthesia units in Sweden’. ‘Opioids were also commonly added to labour analgesia as sufentanil,’ please change for clarity into: ‘Opioids such as sufentanil were also commonly added to labour analgesia… ‘ ‘there are good opportunities to get a beneficial epidural in Sweden’ What did you mean by ‘beneficial’? The citation should be placed earlier, as the Cochrane review does not say much about how these issues occur in Sweden. ‘Sufentanil is the most common opioid added in labour anaesthesia in Sweden, which was reported in a national obstetric anaesthesia meeting from a survey in 2009 (not published).’ Oral presentations that were not published should not be mentioned or cited in a good quality paper. 
‘Spinal anaesthesia is the preferred analgesia in CS when a working epidural is not in place for conversion and top-up.’ What do you mean by ‘conversion’? ‘Yet currently, the present study found that twelve units use solely fast opioid adjuncts to local anaesthesia and two of the units add morphine to local anaesthetics with no addition of fentanyl or sufentanil, but one of the units stated they plan to start adding fentanyl shortly.’ The last part of the sentence could be omitted; the authors should filter out unimportant information. On the other hand, it could be discussed that even though intrathecal lipophilic opioids in CS are not as effective as intrathecal morphine, they have still proved to be beneficial during the period of highest analgesic demand after cesarean section4. ‘Yet, the addition of epidural opioids has been proven to enhance postoperative analgesia after CS with earlier onset’ I do not understand this sentence. Withdrawing from the use of intrathecal morphine could be discussed in the context of other large studies that report the usage of intrathecal opioids in obstetrics5-7.\nLimitations: Usually, at the end of the manuscript, the authors should state the limitations of the study. Some aspects that could be discussed are: The postoperative monitoring routines are described only from a time perspective. There is nothing about what was actually monitored and how often. The quality of the information obtained from the question regarding complications is very low.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-133
|
https://f1000research.com/articles/6-320/v1
|
24 Mar 17
|
{
"type": "Research Note",
"title": "Hermaphroditism in the white spot grouper Epinephelus coeruleopunctatus (Pisces: Serranidae) harvested from Padang City waters, Indonesia",
"authors": [
"Usman Bulanin",
"Masrizal Masrizal",
"Zainal A. Muchlisin",
"Masrizal Masrizal",
"Zainal A. Muchlisin"
],
"abstract": "The objective of the present study was to determine the length (mm) for sex transformation of hermaphroditism in white spot grouper Epinephelus coeruleopunctatus as a basis for developing breeding technology. Fish sampling was carried out between April and October 2013 in Padang City waters, Indonesia. A total of 56 white spot groupers were recorded during the study; of these 22 were male, 28 female and 6 samples were not recognized regarding sex preference. Sex differentiation was detected at a length of 183 mm, and at this size the fish are female. Sex transformation to male begun to occur at 302 mm total length.",
"keywords": [
"Epinephelus coeruleopunctatus",
"Reproduction",
"Gonad",
"Grouper Fish Growth"
],
"content": "Introduction\n\nGroupers (family Serranidae) belong to 109 species and 11 genus1,2. Groupers are commercial marine fishes that have been harvested intensively from the wild, resulting in decreasing the population worldwide3,4. The white spot grouper, Epinephelus coeruleopunctatus, is one of the most popular groupers and has a high economic value among groupers in Asia-Pacific regions5,6. However, this species is rare and difficult to catch. According to local fishermen of Padang City, Indonesia, the population of E. coeruleopunctatus has been declining sharply over the last two decades7. According to Teixeira et al.8 and Mariskha and Abdulgani9 the decreasing fish population is caused by overfishing, habitat perturbation10 and unfriendly fishing practices11. The International Union for Conservation of Nature12 reports this species on the Red List as a threatened species.\n\nCulturing of white spot grouper has been initiated in Indonesia; however, the fry (juveniles) are strongly dependent from the wild supply13. Therefore, it is very crucial to develop breeding technology of the white spot grouper. One of the problems in the development of breeding technology is hermaphroditism sex development, which is observed in this species14. Therefore, it is difficult to determine the sex differentiation between male and female. Hermaphroditism has also been reported in several other groupers, such as E. tauvina15, E. aeneus16, E. rivulatus17, E. striatus18, and Plectropomus laevis19. Hence, this paper reports on the size (length and body weight) of sex transformation in white spot grouper. This information is crucial to plan a better management strategy of fishery resources20 and to develop breeding technology for the white spot grouper.\n\n\nMethods\n\nAll procedures involving animals were conducted in compliance with Bung Hatta University Research and Ethics Guidelines, Section on Animal Care and Use in Research. 
Fish were caught from Padang City waters, at GPS coordinates 0° 54’ 55.34” S, 100° 10’ 15.49” E (Figure 1), between April and October 2013. The fish were caught using hooks and hand lines at depths of 30–50 m. Fishing operations were carried out from 06:00 to 16:00. The sampled fish were anesthetized with MS222, prepared by dissolving 4 g of MS222 in 5 L of tap water21, and then transported to the Laboratory of Fisheries Resources of Bung Hatta University for further analysis. In the laboratory, the fish samples were measured for total length (mm) and body weight (g). The abdomen was dissected and the gonad was removed carefully, cleaned using tissue paper and then weighed to the nearest 0.01 g using a digital balance (ACIS AD300; error 0.01 g). Sex differentiation was examined in the gonads microscopically (100× magnification) and determined based on Muchlisin et al.22. The data were analyzed descriptively.\n\n\nResults\n\nA total of 56 fish were recorded during the study; the sex of 50 fish could be determined from the gonads, of which 22 were males and 28 were females. The sex of 6 samples could not be determined because they were still at an early stage of gonadal development. The sex ratio was approximately 2:3 (male:female). The total length of the male fish ranged from 302–537 mm, while that of the females ranged from 183–537 mm. The body weight ranged between 374–2107 g and 85–373 g for male and female fish, respectively. The total length of fish with undetermined sex ranged from 125–242 mm, with body weights of 85–373 g (Table 1 and Table 2).\n\nThe study showed that the first sex differentiation of E. coeruleopunctatus occurred at a size above 183 mm; fish of this size were recognized as female and no male fish were detected in this size group. The size at first sex differentiation is species dependent; for example, E. 
bleekeri occurs at 170 mm23 and Plectropomus laevis at 280 mm19.\n\nThe results revealed that the female white spot grouper began to transform to male at 302 mm in length, indicating protogynous hermaphroditism. However, the size at which all fish have transformed to males is unknown, since no fish larger than 537 mm were sampled. Nevertheless, the existing data show that the proportion of male fish increased as total length increased; hence, we suspect that all fish have changed sex to male at sizes above 600 mm. For comparison, Renones et al.24 reported that the female dusky grouper E. marginatus begins to transform from female to male at a size of 680 mm, and all fish were detected as male at a size of 800 mm. In addition, Tan and Tan25 reported that E. tauvina begins to transform from female to male at a size of 650 mm, while at a size of 700 mm all fish are recognized as male. According to Burhanuddin and Fami26, the occurrence of sex transformation in hermaphroditic fish is species dependent and strongly influenced by environmental factors.\n\n\nConclusions\n\nThe white spot grouper Epinephelus coeruleopunctatus is a protogynous hermaphrodite. Sex differentiation was detected at a total length of 183 mm, and at this size the fish are female. Sex transformation began to occur at 302 mm total length.\n\n\nData availability\n\nDataset 1: The total length, body weight and sexes of the 56 individual fish sampled. doi: 10.5256/f1000research.11090.d15511927",
"appendix": "Author contributions\n\n\n\nUB was responsible for developing research proposal and study design and approved the final draft of the paper. MM was responsible for sample collection and processing, and data analysis. ZAM is responsible for manuscript preparation and proofreading of the draft.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis study was supported by the Ministry of Research, Technology and Higher Education of the Republic of Indonesia through the Fundamental Research Scheme (contract number, 014/SP/HATTA-1/LPPM/II/2013).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nThe authors thank the Ministry of Research, Technology and Higher Education for providing the financial support to this study. Appreciation goes to our colleagues who helped the authors during field sampling and laboratory analysis.\n\n\nReferences\n\nAllen G: Marine Fishes of South East Asia. Periplus Editions (HK) Ltd. Western Australian Museum, 2000. Reference Source\n\nPeristiwady T: Seafood economical important in Indonesia. Indonesian Institute of Sciences, Jakarta, 2006.\n\nPears RJ: Comparative demography and assemblage structure of serranid fishes: implication for conservation and fisheries management.2005. Reference Source\n\nRhodes K, Russell B, Kulbicki M, et al.: Epinephelus coeruleopunctatus, Whitespotted Grouper. The IUCN Red List of Threatened Species. ISSN 2307-8235(on line). 2008; 9. Reference Source\n\nLee C, Sadovy Y: A taste for live fish: Hong Kong's live reef fish market. Naga ICLARM. 2006; 21(2): 38–42. Reference Source\n\nRhodes KL, Tupper MH: A preliminary marked-based survey of the Pohnpei, Micronesia, grouper (Serranidae: Epinephelinae) fishery reveals unsustainable fishing practices.2007; 26(2): 335–344. 
Publisher Full Text\n\nBulanin U, Masrizal M, Muchlisin ZA: Length-weight relationships and condition factors of the whitespotted grouper Epinephelus coeruleopunctatus Bloch, 1790 in the coastal waters of Padang City, Indonesia. Aceh Journal of Animal Science. 2017; 2(1): 23–27. Reference Source\n\nTeixeira SF, Ferreira BP, Padovan IP: Aspects of fishing and reproduction of the black grouper, Mycteroperca bonaci (Poey, 1860) (Serranidae: Epinephelinae) in Northeastern Brazil. Neotrop Ichthyol. 2004; 2(1): 19–30. Publisher Full Text\n\nMariskha PR, Abdulgani N: Aspects of reproduction of the sixbar grouper, Epinephelus sexfaciatus, in the waters of Glondonggede Tuban. ITS Journal of Science and Arts. 2012; 1(1): 27–31.\n\nBulanin U: Potensi dan penyebaran ikan kerapu, Epinephelus miliaris, di perairan laut Kota Padang [Potential and distribution of the grouper Epinephelus miliaris in the marine waters of Padang City]. Jurnal Mangrove dan Pesisir. 2010; 1(1): 39–41.\n\nMuchlisin ZA, Fadli N, Rudi E, et al.: Estimation of production trend of the depik, Rasbora tawarensis (Teleostei, Cyprinidae), in Lake Laut Tawar, Indonesia. AACL Bioflux. 2011; 4(5): 590–597. Reference Source\n\nIUCN: IUCN Red List of Threatened Species. Version 2009.2. Downloaded on 15 February 2010, 2009. Reference Source\n\nBulanin U, Masrizal: Reproductive biology aspects and gonad maturation of the white spot grouper, Epinephelus coeruleopunctatus. Research Report of Bung Hatta University, Padang, Indonesia: 2013.\n\nZhou L, Gui J: Molecular mechanisms underlying sex change in hermaphroditic groupers. Fish Physiol Biochem. 2010; 36(2): 181–193. PubMed Abstract | Publisher Full Text\n\nTan SM, Tan KS: Biology of tropical grouper Epinephelus tauvina (Forskal): Preliminary study on hermaphroditism in E. tauvina. Singapore Journal Pri Ind. 1974; 2(2): 123–133.\n\nHussain S, DeMonbrison D, Hanin Y, et al.: Domestication of White Groupers, Epinephelus aeneus, 1. Growth and reproduction. Aquaculture. 1997; 156(3–4): 305–316. 
Publisher Full Text\n\nMackie M: Reproductive biology of the halfmoon grouper, Epinephelus rivulatus, at Ningaloo Reef, Western Australia. Environ Biol Fishes. 2000; 57(4): 363–376. Publisher Full Text\n\nSadovy Y, Colin PL: Sexual development and sexuality in the Nassau grouper. J Fish Biol. 1995; 46(6): 961–976. Publisher Full Text\n\nSlamet B, Suwirya K, Supii A, et al.: Some aspects of the reproductive biology of the grouper fish Plectropomus laevis. Proceedings of Innovation and Technology Aquaculture Forum. 2010; 352–357.\n\nMuchlisin ZA, Musman M, Siti-Azizah MN: Spawning seasons of Rasbora tawarensis (Pisces: Cyprinidae) in Lake Laut Tawar, Aceh Province, Indonesia. Reprod Biol Endocrinol. 2010; 8: 49. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMuchlisin ZA, Hashim R, Chong AS: Preliminary study on the cryopreservation of tropical bagrid catfish (Mystus nemurus) spermatozoa; the effect of extender and cryoprotectant on the motility after short-term storage. Theriogenology. 2004; 62(1–2): 25–34. PubMed Abstract | Publisher Full Text\n\nMuchlisin ZA, Musman M, Siti-Azizah MN: Spawning seasons of Rasbora tawarensis (Pisces: Cyprinidae) in Lake Laut Tawar, Aceh Province, Indonesia. Reprod Biol Endocrinol. 2010; 8: 49. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBulanin U, Syandri H: Potency and biology aspects of the duskytail grouper, Epinephelus bleekeri, caught in the Padang City waters. Final report of the Fundamental Research Scheme. Faculty of Fisheries and Marine Sciences, Bung Hatta University, Padang, Indonesia. 2010; 65.\n\nRenones O, Grau A, Mas X, et al.: Reproductive pattern of an exploited dusky grouper Epinephelus marginatus (Lowe 1834) (Pisces: Serranidae) population in the western Mediterranean. Scientia Marina. 2010; 74(3): 523–537.\n\nTan SM, Tan KS: Biology of tropical grouper Epinephelus tauvina (Forskal): Preliminary study on hermaphroditism in E. tauvina. Singapore Journal Pri Ind. 
1974; 2(2): 123–133.\n\nBurhanudin F: Reproduction of angelfish (Pomacanthus annularis Bloch) in the coastal water Cilamaya, Karawang, West Java. Indonesian Institute of Sciences, Jakarta: 1996.\n\nBulanin U, Masrizal M, Muchlisin ZA: Dataset 1 in: Hermaphroditism in the white spot grouper Epinephelus coeruleopunctatus (Pisces: Serranidae) harvested from Padang City waters, Indonesia. F1000Research. 2017. Data Source"
}
|
[
{
"id": "21264",
"date": "29 Mar 2017",
"name": "Ambok Bolong Abol-Munafi",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\n1. Title\n\nThe title is suitable. Only the word 'harvested' should be deleted.\n2. Abstract\n\nSuggestion: A total of 56 white spot groupers were sampled; of these 22 were male, 28 were female and 6 were undifferentiated.\n3. Introduction\n\nAccepted but the English Language needs to be revised.\n4. Methods\n\ni)\n\nPlease provide the instrument used to measure the total length and body weight.\n\nii) Method for gonad measurement can be removed since the data is not tabled in this article.\n\niii) Must have explanation on gonad structure/conditions of functional male and functional female based on microscopic observations that were used for sex determination. I cannot find the explanation in Muchlisin et al. (2010)1. Please elaborate on \"The data were analyzed descriptively\" or the statement is not relevant in this article.\n\niv) Please explain on what basis the length class and weight class were decided. Why the differences are not equal. Please recalculate.\n\n5. Results\n\ni) The first sentence need to be rephrased. ii) Regarding my comment on what basis the length class was decided, the article suggested that the female differentiated at 183mm. If you refer to your Dataset 1, the smallest female is 22.3cm (or 223 mm). Based on the length class, the article suggested that the sex change from female to male occurred at 302 mm. Your Dataset 1 showed that the smallest male caught was 35.0cm (or 350 mm). Please elaborate your data.\n6. 
Conclusions - The sample size is too small and the duration of the study is too short to support the conclusion that the female differentiated at 183 mm and sex change to male occurred at 302 mm.\n7. References - Is No. 27 considered one of the references?\nGeneral remark:\n- The English language must be revised. Standard terminology should be used, e.g. sex change instead of sex transform; differentiated instead of recognized.",
"responses": []
},
{
"id": "21782",
"date": "11 Apr 2017",
"name": "Bradley J. Pusey",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nTitle and abstract both appropriate. Article content – design, methods and analysis all appropriate Conclusions are sensible and balanced. The authors might consider expressing maturation in terms of the length at which 50% of the sample are one sex or another, in addition as say minimum length. Perhaps more needs to be said of the fact that not all fish in the very largest size class had changed sex into males.",
"responses": []
},
{
"id": "21903",
"date": "18 Apr 2017",
"name": "Rudy Agung Nugroho",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nReview report for article entitle “Hermaphroditism in the white spot grouper Epinephelus coeruleopunctatus (Pisces: Serranidae) harvested from Padang City waters, Indonesia”. Overall, this article has displayed originality in the work and the outcome of this work adds benefit to the area of the research. This article is presented well with cohesiveness. However, editorial suggestions should be addressed by the author. Below, the basis for suggestion followed by some special editorial concern by section.\n\nTitle: The title is short, informative and well constructed. Abstract: Suggestion: Information regarding on how sex differentiation was performed should be written. Introduction: The introduction has provided quality relevant information particularly with regard to hermaphroditism. Suggestion: The introduction required to include relevant information specific to sex differentiation by using gonad identification followed by relevant references. Methods: a) Please provide specific instrument that used to measure the total length and body weight. b) Gonad measurement was performed but there is no gonad weight data in the result section. c) Microscopic observations of the gonad is not clearly describe. d) \"The data were analyzed descriptively\" the statement is not clear which data referring to in this article. Please be specific Results: a) Length and weight frequency distribution class in the table 1 and 2 is not clearly defined. 
b) The data set (number of samples) is too small. c) It is stated that the occurrence of sex transformation in hermaphroditic fish is species dependent and strongly influenced by environmental factors. The author should include an environmental report for the study area; this would give another perspective on the study. Conclusion: Concluding that the female differentiated at 183 mm and sex change to male occurred at 302 mm is too “early”, because it is based only on length and weight. It would be better to include histological, endocrinological or even molecular studies in this article. References: Suggestion: delete ref #27",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-320
|
https://f1000research.com/articles/6-319/v1
|
24 Mar 17
|
{
"type": "Software Tool Article",
"title": "Extending TCGA queries to automatically identify analogous genomic data from dbGaP",
"authors": [
"Erin K. Wagner",
"Satyajeet Raje",
"Liz Amos",
"Jessica Kurata",
"Abhijit S. Badve",
"Yingquan Li",
"Ben Busby",
"Erin K. Wagner",
"Satyajeet Raje",
"Liz Amos",
"Jessica Kurata",
"Abhijit S. Badve",
"Yingquan Li"
],
"abstract": "Data sharing is critical to advance genomic research by reducing the demand to collect new data by reusing and combining existing data and by promoting reproducible research. The Cancer Genome Atlas (TCGA) is a popular resource for individual-level genotype-phenotype cancer related data. The Database of Genotypes and Phenotypes (dbGaP) contains many datasets similar to those in TCGA. We have created a software pipeline that will allow researchers to discover relevant genomic data from dbGaP, based on matching TCGA metadata. The resulting research provides an easy to use tool to connect these two data sources.",
"keywords": [
"dbGaP",
"TCGA",
"SRA",
"cancer",
"database",
"genome",
"The Cancer Genome Atlas",
"GDC"
],
"content": "Introduction\n\nMany large funding organizations, including the National Institutes of Health (NIH), encourage researchers to make their data available in public databases. Policies like the NIH’s Genomic Data Sharing policy (https://gds.nih.gov/03policy2.html) and other incentives around data sharing have promoted the development of several public data repositories. However, in spite of the availability of data, it can still be challenging to harness the power of these public databases, and researchers are faced with a variety of barriers in accessing shared data (van Schaik et al., 2014).\n\nA major obstacle to data discovery is the disconnectedness of various data sharing resources. Automated tools that can connect these databases and reduce the time that researchers spend on data discovery are critically needed (Dudley & Butte, 2008; Ruau et al., 2011). Such tools will promote reproducibility, increase the efficiency of research, and aid in solving the problem of small sample sizes. These issues are especially relevant to genomic data, which is typically expensive to gather.\n\nHere, we focus on connecting two popular genomic data repositories, the Database of Phenotypes and Genotypes (dbGaP) (Tryka et al., 2014) and The Cancer Genome Atlas (TCGA), data hosted by the Genomic Data Commons (GDC; https://gdc.cancer.gov/). These two popular data sharing resources both house genomic datasets related to cancer, but despite containing similar data, these repositories have no direct connection to allow researchers to link them together. In the case of these two repositories the only way to find projects with analogous metadata is to manually search each repository. The key contribution of this work is a tool that acts as an interface between the GDC and dbGaP, which allows researchers to discover dbGaP datasets with similar metadata to a TCGA dataset of interest.\n\n\nMethods\n\nGDC. 
The GDC (https://gdc.cancer.gov/) is a highly curated resource for datasets from cancer related genomic studies from the National Cancer Institute (NCI). Its primary function is to provide a centralized repository for accessibility to data from large-scale NCI programs, such as TCGA and its pediatric equivalent, Therapeutically Applicable Research to Generate Effective Treatments. As of September 2016, GDC held over 260K sequence files with different genomic data-types (whole genome, RNA, etc.) from over 14K patients.\n\ndbGaP. The National Center for Biotechnology Information (NCBI) dbGaP (https://www.ncbi.nlm.nih.gov/gap) is the largest collection of genomic data. It is not limited to cancer data or human data. While the metadata fields are fixed, the entries in these fields are not curated, unlike in the GDC. This is a challenge for harmonizing the metadata across the two datasets. The NCBI Sequence Read Archive (SRA) (https://www.ncbi.nlm.nih.gov/sra) is a collection of sequence data associated with the studies in dbGaP.\n\nAs the tool was developed as part of a hackathon, we used a development methodology similar to the Rapid Application Development model suitable for prototype development (Kerr & Hunter, 1994). This subsection is organized as steps within this methodology.\n\nDefining the scope. We first identified the end users of our tool to be molecular and computational biologists and bioinformaticians with limited programming experience. Thus, the tools had to be easy to set up and execute. Next, we identified the use-cases as follows:\n\nThe tool should take TCGA study identifiers or study-level metadata values from the GDC and identify dbGaP studies with analogous data.\n\nThe tool should subsequently provide the capability of fetching the sequence level genomic data directly for these studies from the NCBI SRA data repository.\n\nThis gave us the necessary modules that needed to be developed.\n\nMapping the metadata. 
We first extracted the required metadata by parsing the raw XML data and also scraping the website data from both TCGA (GDC) and dbGaP. This metadata is stored as mapping tables in CSV format. Based on the extracted metadata, we developed two mapping dictionaries to translate between 1) disease terms and 2) genomic data-types, as defined separately within dbGaP and the GDC.\n\nAccomplishing this mapping was challenging, as the allowable values for these fields are strictly controlled in the GDC, but completely user-defined in dbGaP. We designed a rule-based mapper to generate an initial map between search values from each repository, then manually curated these mappings to refine and rank mapped terms. These mappings are stored and used during the execution of our tool.\n\nDeveloping the required modules. Both the TCGA data (through GDC; https://gdc.cancer.gov/developers/gdc-application-programming-interface-api) and dbGaP (through NCBI Eutils; https://eutils.ncbi.nlm.nih.gov/entrez/eutils/) provide APIs to access their respective data that allow metadata transfer in the XML or JSON formats. An API, or Application Programming Interface, provides an interface to data and services that other programs can directly use.\n\nThe SRA toolkit is a software tool that allows researchers to obtain the sequence data (with appropriate access rights) from the SRA database. The search can be narrowed by various parameters, including the genomic region and type of sequence (e.g. mRNA and whole genome shotgun).\n\nWe used Python (version 2.7; https://www.python.org/) for the development of our tool. We wanted to keep the tool as platform agnostic as possible. 
As the SRA toolkit is Unix-based, only the final part of the implementation pipeline, as discussed subsequently, is a shell script (not directly compatible with Windows environments).\n\n\nResults\n\nWe developed an easy-to-use tool that can be used to find additional data from dbGaP (and SRA) by expanding TCGA queries automatically. The first part of the pipeline allows researchers to query either repository by TCGA Project ID, File ID, Case ID, disease type, or experimental strategy via a metadata mapping dictionary. It returns not only a list of TCGA IDs, but also a list of related dbGaP study IDs. For dbGaP studies with NCBI SRA data, the second part of the pipeline will return the .sam files that contain reads aligned to a genomic region of interest to be used with the SRA Toolkit. Our tool is divided into three modules as illustrated in Figure 1. Below, each module is discussed in detail.\n\nThis component of the pipeline queries the GDC in multiple ways, including a direct ID search for projects, cases, samples, or files, or a custom search by the cancer type or experimental methods. Currently, the scope of custom search is limited to the available terms in the GDC data portal (Table 1). The module fetches the metadata using the GDC API and extracts the metadata terms related to the specified ID (i.e. the cancer type and experiment method). It then translates these terms to corresponding dbGaP search terms and returns the relevant dbGaP study IDs using the NCBI Eutils API. While executing the pipeline, the XML/JSON outputs of the APIs are processed in-memory behind the scenes. Thus, the end-users are not exposed to the API directly.\n\nThe mapping between the Disease and Primary Site can be found in our GitHub repository.\n\nFor custom searches, this module returns results from both the GDC and dbGaP simultaneously. Thus, this module also provides consolidated search capability over the TCGA and dbGaP data. 
The output from this module includes two files:\n\na list of the TCGA cases for the given project or search criteria, and\n\na list of dbGaP studies (with links) that are analogous to the input query.\n\nThe second component of the pipeline takes the list of dbGaP study IDs and returns the list of sequence read run (SRR) files from the NCBI SRA from the dbGaP studies, when available. The users can specify the genomic region of interest as an additional parameter.\n\nThe final part of the pipeline takes a list of SRRs and uses the SRA-toolkit to return sequencing level genomic data for a genomic region of interest directly from the NCBI SRA data repository. This module assumes the required authorization has been granted prior to accessing the sequencing data.\n\n\nConclusion\n\nTo our knowledge, this is the first easy-to-use tool for harmonizing TCGA and dbGaP study metadata for the purpose of data discovery and consolidated querying. We would like to continue to work with the cancer biology community to develop this interface tool. Future improvements include extending our search capabilities to include other metadata, the option to query multiple genomic regions simultaneously, and a user-friendly GUI. Feature requests or contributions of code can be made on our GitHub site, which will be monitored for such activity.\n\n\nSoftware availability\n\nLatest source code: https://github.com/NCBI-Hackathons/TCGA_dbGaP.\n\nArchived source code as at the time of publication: DOI: 10.5281/zenodo.160551 (Kurata, 2016) (https://zenodo.org/record/160551#.WE7Lz9WLTcs)\n\nLicense: CC0 1.0 Universal",
"appendix": "Author contributions\n\n\n\nBB, LA and SR conceived the idea. All authors participated in the background research, design and implementation of the software tool. JK, EW and SR developed the variable mappings across dbGaP and TCGA. JK, AB, YL, EW and SR were primarily involved in implementation of the software components. EW and LA prepared the first draft of the manuscript. SR contributed to subsequent drafts.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nFunding was provided by the NIH intramural research program at the NLM. JK was supported in part by the Frances Berger Foundation Fellowship and her lab was supported in part by Ramesh Kesanupalli family and Beckman Research Institute Funds.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nWe would like to acknowledge Lisa Federer, NIH Library Writing Center, for manuscript editing assistance. The work described here was done as part of the NCBI Hackathon (August 2016). This research was supported in part by an appointment to the National Library of Medicine (NLM) Research Participation Program administered by ORISE through an interagency agreement between the U.S Dept. of Energy and the NLM and supported in part by the Intramural Research Program of the U.S. National Institutes of Health, NLM.\n\n\nReferences\n\nDudley J, Butte AJ: Enabling integrative genomic analysis of high-impact human diseases through text mining. Pac Symp Biocomput. 2008; 580–591. PubMed Abstract | Free Full Text\n\nKerr J, Hunter R: Inside RAD: How to Build Fully Functional Computer Systems in 90 Days or Less. McGraw-Hill Inc, New York, NY USA, 1994.\n\nKurata J, Badve A, Raje S, et al.: NCBI-Hackathons/TCGA_dbGaP: TCGA_dbGaP_v1.0 2016 [Data set]. Zenodo. 2016. 
Data Source\n\nRuau D, Mbagwu M, Dudley JT, et al.: Comparison of automated and human assignment of MeSH terms on publicly-available molecular datasets. J Biomed Inform. 2011; 44(Suppl 1): S39–43. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTryka KA, Hao L, Sturcke A, et al.: NCBI’s Database of Genotypes and Phenotypes: dbGaP. Nucleic Acids Res. 2014; 42(Database issue): D975–9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nVan Schaik TA, Kovalevskaya NV, Protopapas E, et al.: The need to redefine genomic data sharing: A focus on data accessibility. Appl Transl Genomics. 2014; 3(4): 100–104. PubMed Abstract | Publisher Full Text | Free Full Text"
}
|
[
{
"id": "21260",
"date": "28 Apr 2017",
"name": "Yussanne Ma",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe authors present a clear description of a tool that is simple in concept but will be of use to the cancer genomics community. They identified a clear need for researchers to be able to easily identify and download datasets from GDC and dbGAP sample attributes and have designed a solution that is conceptually sound. Most helpfully, they have taken care of not only the manual search but the mapping of the non-homogeneous metadata fields to save users time.\nMy detailed comments are listed below:\n1. (Major) After installing the code and running the first command, we received this error: $ bin/python bin/fetch_dbGaP_with_TCGA.py id -i TCGA-BRCA -s project -l low Traceback (most recent call last):\n\nFile \"bin/fetch_dbGaP_with_TCGA.py\", line 332, in\n\nsys.exit(main())\n\nFile \"bin/fetch_dbGaP_with_TCGA.py\", line 322, in main\n\n+outDict[outStringKeys[returnType][2]]+\",\"+\"\\\\\".join(outDict[outStringKe +ys[ returnType][3]])+\",\"+outDict[outStringKeys[returnType][4]]+\"\\n\" TypeError: coercing to Unicode: need string or buffer, list found\n\nAs the authors are targeting users with minimal coding experience, it is important that error messages are far more informative. From the above it's completely unclear without looking into the code itself whether this is due to the input being in an incorrect format or if there is an actual problem with the code itself. Much more user testing and error handling is needed.\n2. 
(Major) In general, more user documentation is needed. The instructions are brief and again, do not account for the case of everything not working on the first try. Input examples and example commands should also be provided for fetch_dbGaP_with_TCGA.py. Notes on how to interpret the output would also be helpful, perhaps by annotating the output file examples provided, which are a good inclusion.\n3. (Minor) Currently the only way to see the re-mapping of metadata is to look in the code itself on github. Could a txt file of the field mappings be provided? This would be helpful for users to understand the assumptions being made with the metadata vocabulary.\n4. (Minor) In the results section, could the authors summarize the results of the testing they did to ensure that the results being returned are correct? At minimum, it is important that users are assured of the completeness of the search. This could, for example, be demonstrated by searching on a TCGA disease type in GDC and comparing the number of results with the TCGA cohort size. Specificity and accuracy of the search results should also be demonstrated, perhaps by showing and summarizing the results of some example searches in both sites.\n5. (Minor) GDC currently hosts not only TCGA, but also TARGET data (which is mentioned in the manuscript), and will soon be hosting other datasets. Is this tool limited to the TCGA datasets in GDC? As far as I know from our submissions to the GDC, the other datasets will have the same controlled metadata fields so the functionality should extend naturally to all of the data hosted at the GDC and it would greatly increase the utility if this were the case.\n\nI think this tool will be beneficial to the cancer genomics community and will facilitate and encourage users to mine the rich NGS datasets that have been made available in the past decade. 
It would be good if the authors are able to provide a revision of the code that works and make some improvements to the user documentation, so that these benefits can be realized.\n\nIs the rationale for developing the new software tool clearly explained? Yes\n\nIs the description of the software tool technically sound? Yes\n\nAre sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? No\n\nIs sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? Partly\n\nAre the conclusions about the tool and its performance adequately supported by the findings presented in the article? Partly",
"responses": []
},
{
"id": "23935",
"date": "10 Jul 2017",
"name": "Konstantinos Krampis",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe software is adequately explained and is a useful tool for a specialized task, fitting the format and section of the F1000. It is great work given that this was all completed during the hackathon. However I would suggest some polishing of the readme on the Github. This will still not make it any easier to use for not expert users, but for this purpose it would be ideal if the authors can add a short software readme as supplementary to the manuscript. The citation I have provided with this review points to a paper for the BioDocklets software that includes a manual that the authors could use as example. Other than that this is a great article that provides a very useful tool integrating key aspects of two important databases for the community.\n\nIs the rationale for developing the new software tool clearly explained? Yes\n\nIs the description of the software tool technically sound? Yes\n\nAre sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? Yes\n\nIs sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? Yes\n\nAre the conclusions about the tool and its performance adequately supported by the findings presented in the article? Yes",
"responses": []
},
{
"id": "23937",
"date": "28 Jul 2017",
"name": "Tsung-Jung Wu",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe software can fulfill the requirement of authors designated task. Since the original design of this software is not for general usage, it might be difficult for a general user to access. However, with this tools' help, a cancer genomics researcher can download database from both GDC and dbGAP sample info easily. The most important part of this software is manually curated terms. This approach can assure more accurate search outcome and save user's valuable time. By introducing Disease Ontology to the mapping between the Disease and Primary Site step, it might be able to help more accurate curation and cancer type determination. Articles provide here are about Disease Ontology and Disease Ontology Cancer Slim. The tool is designed for cancer genomics community and with these terminologies and ID. This tool will be able to further expand its usage and application.\nSome more documentations of this software and outcome interpretations will be helpful.\n\nIs the rationale for developing the new software tool clearly explained? Yes\n\nIs the description of the software tool technically sound? Yes\n\nAre sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? Partly\n\nIs sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? 
Partly\n\nAre the conclusions about the tool and its performance adequately supported by the findings presented in the article? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-319
|
https://f1000research.com/articles/6-42/v1
|
13 Jan 17
|
{
"type": "Method Article",
"title": "An ELISA DYRK1A non-radioactive assay suitable for the characterization of inhibitors",
"authors": [
"Yong Liu",
"Tatyana Adayev",
"Yu-Wen Hwang",
"Yong Liu",
"Tatyana Adayev"
],
"abstract": "The DYRK1A (dual specificity tyrosine phosphorylation-regulated kinase 1A) gene encodes a proline-directed Ser/Thr kinase. Elevated expression and/or altered distribution of the kinase have been implicated in the neurological impairments associated with Down syndrome (DS) and Alzheimer’s disease (AD). Consequently, DYRK1A inhibition has been of significant interest as a potential strategy for therapeutic intervention of DS and AD. Many classes of novel inhibitors have been described in the past decade. Although non-radioactive methods for analyzing DYRK1A inhibition have been developed, methods employing radioactive tracers are still commonly used for quantitative characterization of DYRK1A inhibitors. Here, we present a non-radioactive ELISA assay based on the detection of DYRK1A-phosphorylated dynamin 1a fragment using a phosphorylation site-specific antibody. The assay was verified by the use of two well-characterized DYRK1A inhibitors, epigallocatechin gallate (EGCG) and harmine. The IC50s for EGCG and harmine determined by the ELISA method were found to be comparable to those previously measured by radioactive tracing methods. Furthermore, we determined the mode of inhibition for EGCG and harmine by a modification of the ELISA assay. This assay confirms the mode of inhibition of EGCG (non-ATP-competitive) and harmine (ATP-competitive), as previously determined. We conclude that the ELISA platform demonstrated here is a viable alternative to the traditional radioactive tracer assays for analyzing DYRK1A inhibitors.",
"keywords": [
"non-radioactive kinase assay",
"EGCG",
"harmine",
"inhibitor screening"
],
"content": "Introduction\n\nThe human DYRK1A gene1 is mapped to a region of chromosome 21 implicated in Down syndrome (DS)2. DS, the most common chromosomal abnormality associated with birth defects and developmental disabilities, is caused by full or partial trisomy of chromosome 213. Almost all DS cases inevitably lead to the development of Alzheimer’s disease (AD)-type pathology4. Transgenic mice carrying an extra copy of DYRK1A have been shown to exhibit symptoms similar to DS, including brain abnormalities, neurodevelopmental delay, and memory impairments5–7. DYRK1A in the brain displays a distinct structure-specific distribution pattern8, and it interacts with an array of factors involved in neuronal development, proliferation, and differentiation9. The level of DYRK1A is elevated in a gene dosage-dependent manner in DS, suggesting that the protein not only plays an important role in regulating normal brain functions, but also in the etiology of DS8,10.\n\nDYRK1A has been linked to neurofibrillary degeneration and β-amyloidosis of AD11. DYRK1A was shown to phosphorylate microtubule-associated protein tau at T212 to prime tau for subsequent phosphorylation by GSK-3β at S30812,13. This inhibits tau’s ability to stimulate microtubule assembly and promotes self-aggregation, like abnormally hyperphosphorylated tau in AD brain13. DYRK1A was also found to phosphorylate amyloid precursor protein (APP) at T66814 and prenesilin-1 at T35415, which correlated with increased cleavage of APP by β and γ secretases15,16 respectively, and leads to the formation of neurotoxic β-amyloid peptides (Aβ). Moreover, Aβ is shown to be involved in a positive feedback loop for promoting DYRK1A expression, which may further accelerate production of Aβ17.\n\nThe collected evidence suggests that DYRK1A is a potential drug target for the treatment of DS and AD. To this end, many classes of DYRK1A inhibitors, both natural and synthetic, have been tested18–20. 
The potency of such inhibitors has mostly been analyzed using radioactive tracer methods despite the availability of non-radioactive assays21,22. It may be that these methods typically require multiple steps, which is undesirable for screening. Here, we describe a simple ELISA assay for DYRK1A inhibition using dynamin 1a and its phosphorylation site antibody for detection23,24. The ELISA assay has been verified using two known DYRK1A inhibitors and found to be consistent with radioactive tracer methods.\n\n\nMethods\n\nEpigallocatechin gallate (EGCG; #70935) and harmine (#10010324) were obtained from Cayman Chemical. Para-nitrophenyl phosphate (PNPP) tablets and diethanolamine substrate buffer were purchased from Thermo-Fisher Scientific. EGCG and harmine were initially prepared as 50 mM stock in 100% DMSO. Working solutions of EGCG and harmine (0.01 μM – 3.2 μM) were prepared from stocks in 2% DMSO by serial dilution. Dynamin 1a pS857-specific mouse mAb 3D3 (RRID: AB_2631263) was prepared as described24. The antibody was partially purified from ascites using Bakerbond ABx resins (#7269-02) before use. Anti-dynamin mAb Hudy-1 (RRID: AB_309677)25 was obtained from EMD Millipore. Alkaline phosphatase (AP) conjugated goat anti-mouse IgG secondary antibody (#115-055-146) was purchased from Jackson ImmunoResearch Laboratories, Inc.\n\n6xHis tagged rat truncated DYRK1A containing residues 1-497 (HT-497) was used for all assays. This truncation preserves activity of DYRK1A26,27. A bacterial HT-497 expression vector was constructed as follows. The truncated DYRK1A gene was first obtained by PCR from the GST-DYRK1A vector23 using a pair of primers for producing the DYRK1A fragment with a Cla I site plus a 6XHis tag (cctatcgatgcatcatcatcatcatcaccatacaggaggagagacttc) at the start codon and an in-frame termination at codon 498 plus a Xho I site (ggactcgagtcaagggctggtggacacactgtt), respectively, at 5’ and 3’ ends. 
PCR was performed in 50 μl mixture containing 10 ng template, 0.2 μg of each primer, 0.2 mM dNTPs, and PfuUltra (Agilent Technologies), as recommended by the supplier. Amplification was conducted with 20 cycles of the following steps: 94°C 30 sec, 72°C 90 sec, and 62°C 30 sec. Custom primers were purchased from Integrated DNA Technologies. The resulting amplicon was then cloned into a modified T7 promoter-driven vector pND128 via the Cla I and Xho I sites, as described29. Proline rich domain (PRD, residues 746–864) of dynamin 1a was also prepared as N-terminal tagged 6xHis fusion protein (HT-PRD) exactly as described above. The PRD fragment was first produced from the semi-synthetic dynamin 1a gene24 by PCR using a pair of primers, (aggatcgatgcatcatcatcatcatcataacacgaccaccgtcagcacg) and (aggctcgagtcataggtcaaaaggtggtcg) for subsequent cloning into expression vector pND1, like HT-497.\n\nBoth HT-497 and HT-PRD were expressed and purified using TALON metal affinity resin (Clontech Laboratories) under native conditions as described29. Proteins were quantified by Bradford method30 and stored at -80°C until use.\n\nSubstrate, HT-PRD, was diluted in dilution buffer (25 mM Tris-HCl, pH 7.4 and 100 mM NaCl) to a concentration of 2 ng/μl (or higher as in Figure 1 and Figure 2) and used to coat a 96-well plate (BD Falcon #353072) with 100 μl per well (200 ng/well unless otherwise indicated) at 4°C overnight. Unbound materials were washed away with dilution buffer and wells were blocked with 150 μl blocking buffer (2% BSA, 1X PBS, and 0.25% Tween 20) at room temperature for 60 min. After blocking, wells were washed extensively with dilution buffer before subjecting to phosphorylation. DYRK1A phosphorylation was performed in wells with 100 μl reaction mix containing 25 mM HEPES, pH 7.4, 100 mM NaCl, 5 mM MgCl2, 100 μM ATP (Sigma-Aldrich Chemicals), inhibitor if needed, and 5 ng HT-497 (unless otherwise indicated). 
Reactions were initiated by adding HT-497 and continued for 30 min (unless otherwise indicated) at 30°C. For time course experiments, reactions were terminated by the addition of 20 mM EDTA at the indicated time points. A set of inhibition experiments typically consists of a no-inhibitor control plus a series of eight inhibitor concentrations (0.001 μM - 3.2 μM final). Each point was run in duplicate, with DMSO present in all assays at 0.2% final concentration. DMSO, up to 2%, does not affect the potency of EGCG or harmine. HT-PRD phosphorylation was subsequently determined by the sandwich antibody staining protocol, first with 100 μl mAb (60 min at room temperature) and then with 100 μl AP-linked anti-mouse secondary antibody (60 min at room temperature), followed by a colorimetric reaction with 100 μl PNPP solution. The extent of the AP reaction was monitored at λ=405 nm. For Hudy-1 staining, wells were coated, blocked, and then stained with the antibody (1:3000 dilution) for colorimetric detection as described above.\n\nWells were incubated with the indicated amounts of HT-PRD (0, 25, 50, 100, 200, 400, and 800 ng/well) at 4°C overnight and the level of coated protein was then detected with anti-dynamin mAb Hudy-1 by following the sandwich ELISA protocol, as described in Methods (n = 4 for each data point).\n\nDilution factors for both mAb 3D3 and the secondary antibody were pre-determined for each batch of antibody to ensure that neither antibody was limiting in the assay. The antibody stock to be titered was serially diluted (from 1000- to 256,000-fold) and each dilution was used together with a non-limiting concentration of the other antibody to assess the level of HT-PRD phosphorylated under standard ELISA reaction conditions without inhibitor (see Results and Discussion). OD405 readings were normalized to the 1000-fold dilution and plotted against the dilution of the antibody being tested. Dilutions within the normalized OD405 plateau can be used for the assay. 
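The plateau criterion for choosing antibody dilutions can be sketched numerically. The following Python fragment is illustrative only: the OD405 values and the 5% plateau tolerance are our assumptions (the study selects plateau dilutions by inspection of the titration plot).

```python
# Hypothetical OD405 readings for a 2-fold antibody dilution series,
# from 1:1,000 up to 1:256,000 (values invented for illustration).
dilutions = [1000 * 2 ** i for i in range(9)]   # 1,000 ... 256,000
od405 = [1.52, 1.50, 1.49, 1.45, 1.30, 1.05, 0.70, 0.40, 0.22]

# Normalize readings to the 1,000-fold dilution, as described in the text.
norm = [od / od405[0] for od in od405]

# Treat readings within 5% of the 1,000-fold value as the plateau
# (the 5% tolerance is an assumption, not from the paper).
plateau = [d for d, n in zip(dilutions, norm) if n >= 0.95]
print(plateau)  # prints [1000, 2000, 4000, 8000]
```

Only dilutions in `plateau` would be considered non-limiting and usable for the assay.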
We routinely use 1:3000 dilutions of the ABx-purified 3D3 stock (~1.5 mg/ml) and 1:2000 dilutions of the commercial secondary antibody for the assay.\n\nData transformation, calculation, plotting, curve fitting, and IC50 calculation were performed in KaleidaGraph (http://www.synergy.com/wordpress_650164087; Mac version 4.1). Data were corrected for background (readings from wells with only PNPP) before subsequent manipulations. To determine IC50, the residual DYRK1A activity was first calculated as the ratio to the no-inhibitor control in that set. The resulting residual activity was then plotted against the corresponding inhibitor concentrations on a semi-log graph, and the plot was fit to the sigmoidal equation y = a + (b - a)/(1 + (x/c)^d) for IC50 calculation.\n\nThe standard ELISA protocol was modified to run under conditions allowing a constant inhibitor concentration to compete against varying ATP concentrations in inhibiting DYRK1A. Briefly, a set of competition experiments comprised four DYRK1A assays in the presence of different ATP concentrations (100, 200, 400, or 800 μM) with a single fixed concentration of the inhibitor to be tested. An identical set, except without inhibitor, was performed in parallel (no-inhibitor controls). The inhibitor concentration used was roughly twice the IC50 of the inhibitor. All other procedures of the assay were unchanged. Residual kinase activity with the inhibitor at each ATP concentration was first calculated as a percentage of the corresponding no-inhibitor control. The residual kinase activity was subsequently converted to inhibition potency as 1 minus the residual activity. 
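The residual-activity calculation and four-parameter sigmoidal fit described above can be sketched outside KaleidaGraph. The following Python example is a minimal illustration with synthetic data: the concentration series mirrors the assay range, but the activity values, the use of SciPy's curve_fit, and the starting parameters are our assumptions, not part of the published protocol.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(x, a, b, c, d):
    # y = a + (b - a)/(1 + (x/c)**d): a = bottom, b = top,
    # c = IC50, d = slope factor -- the equation used for fitting.
    return a + (b - a) / (1.0 + (x / c) ** d)

# Inhibitor concentrations (μM) spanning the range used in the assays.
conc = np.array([0.001, 0.0032, 0.01, 0.032, 0.1, 0.32, 1.0, 3.2])

# Synthetic "residual activity" values (ratio to the no-inhibitor
# control), generated from a known IC50 of 0.2 μM plus small noise.
rng = np.random.default_rng(1)
activity = sigmoid(conc, 0.02, 1.0, 0.2, 1.2) + rng.normal(0, 0.01, conc.size)

# Fit the semi-log inhibition profile and read IC50 off parameter c.
popt, _ = curve_fit(sigmoid, conc, activity, p0=[0.0, 1.0, 0.1, 1.0])
ic50 = popt[2]
```

With clean data the fit recovers the underlying IC50 to within a few percent; in practice duplicates and background correction precede this step, as described above.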
The value for each ATP concentration was then normalized to the inhibition potency at 100 μM ATP and plotted.\n\n\nResults and discussion\n\nWe chose to follow the ELISA-based protocol31,32 in developing our assay, immobilizing the substrate and then performing kinase phosphorylation in the wells, as this format offers the advantage of a simple, proven design versus other non-radioactive approaches33. Like many non-radioactive approaches33, our assay relies on a phospho-specific antibody to differentiate phosphorylated from un-phosphorylated substrates. The antibody used in the assay, mAb 3D3, was raised against DYRK1A-phosphorylated Dynatide 3, a peptide derived from the DYRK1A phosphorylation site of dynamin 1a at S85724. 3D3 has been shown to recognize only pS857-dynamin 1a in rat brain extracts upon extensive phosphorylation24.\n\nDynatide 3, which is used routinely as a substrate to measure DYRK1A activity by the radioisotope/filter binding method27,34,35, was first tested as a substrate. Unfortunately, phosphorylation of coated Dynatide 3 with DYRK1A failed to produce any signal, presumably because the peptide did not coat the wells. To circumvent this problem, we used a 6X histidine-tagged PRD of dynamin 1a (HT-PRD) as a DYRK1A substrate. This fragment coats wells in a concentration-dependent manner and the amount of immobilized protein, as revealed by mAb Hudy-1 staining, is proportional to input protein up to 200 ng/well of HT-PRD (15 pmole/well) (Figure 1).\n\nWe then examined whether immobilized HT-PRD is accessible to DYRK1A. Wells coated with varying amounts of HT-PRD were subjected to exhaustive phosphorylation in situ with excess DYRK1A (HT-497) (see Methods) for 60 min and probed with excess (non-limiting) mAb 3D3 and secondary antibodies (see below in Figure 5). Phosphorylated immobilized HT-PRD was recognized by 3D3. 
The 3D3 signal initially rose in response to increasing input of HT-PRD (Figure 2, filled circles) and then leveled off, closely resembling the substrate-coating response (Figure 1). As controls, uncoated wells phosphorylated by HT-497 (Figure 1) and coated HT-PRD processed without HT-497 produced no detectable signals (Figure 2, empty circles). These results indicate that immobilized HT-PRD is phosphorylatable by DYRK1A and that the output of the assay requires DYRK1A phosphorylation.\n\nIf a system is to be useful in determining inhibitor potency quantitatively, the output of the system must depend solely, and linearly, on DYRK1A activity. We used a fixed amount of coated HT-PRD (200 ng/well) to identify the proper conditions. The system response to changes in HT-497 was first examined (Figure 3). Our ELISA system produces sufficient signal to be readily distinguished from the noise of the no-kinase control with as little as ~1 ng HT-497 (~17 fmole) after phosphorylation at 30°C for 30 min. The output (the equivalent of a reaction rate) increased with enzyme concentration, but progressively less than in proportion to the enzyme (Figure 3). This is a typical enzyme concentration-dependent reaction profile when the substrate becomes the limiting factor36. Time-course experiments were subsequently conducted with 5 ng HT-497, the highest enzyme concentration producing a near-linear enzyme-dependent response. The output was found to be linear with reaction times up to about 75 min (Figure 4). Therefore, we used the following standard conditions [200 ng of substrate, 5 ng HT-497 (0.82 nM), 100 μM ATP, and a 30 min kinase reaction at 30°C] for all subsequent experiments.\n\nWells were coated with the indicated amounts of HT-PRD (0, 25, 50, 100, 200, 400, and 800 ng/well) and then subjected to extensive DYRK1A phosphorylation in situ by incubation with 80 ng of HT-497 at 30°C for 60 min. 
The level of S857 phosphorylation was then detected with 3D3 following the sandwich ELISA protocol, as described in Methods (n = 4 for each data point). Filled circles (●), with kinase; empty circles (○), without kinase.\n\nWells were coated with 200 ng/well HT-PRD and then subjected to DYRK1A phosphorylation with varying amounts of HT-497 (1.25, 2.5, 5, 10, 20, 40, and 80 ng) at 30°C for 30 min. The level of S857 phosphorylation was then detected with 3D3 as described in Methods (n = 6 for each data point).\n\nWells were coated with 200 ng/well of HT-PRD and then subjected to DYRK1A phosphorylation with 5 ng HT-497 at 30°C. The reactions were terminated at the indicated time points (0, 5, 10, 20, 30, 45, 60, 75, and 90 min) by the addition of 20 mM EDTA. The level of S857 phosphorylation was then detected with 3D3 as described (n = 3 for each data point).\n\nWells were coated with 200 ng/well HT-PRD and then subjected to phosphorylation with 5 ng HT-497 under the standard reaction conditions. The 3D3 stock to be tested was serially diluted (from 1000 to 256,000x) and used to probe the phosphorylated wells, followed by secondary antibody as described. Normalized OD405 was calculated (see Methods) and used for plotting (n = 9 for each data point).\n\nTo support accurate measurement of IC50, the amounts of antibody, both 3D3 and secondary antibody, must not be limiting. Otherwise, immunostaining will most likely under-report the actual phosphorylation level at lower concentrations of inhibitor, which could skew the IC50 calculation. Therefore, each batch of antibody was titered to determine the maximal dilution that can be used. As shown for the titering of 3D3, when the antibody is limiting (provided that a non-limiting concentration of secondary antibody is used), the readout will increase upon addition of 3D3 until reaching a plateau that indicates saturation (Figure 5). Only dilutions that produce readout in the plateau (non-limiting) region should be used for the assay (Figure 5). 
Dilution factors for the secondary antibody were similarly determined (Supplementary Figure 1).\n\nWe subsequently tested the system by examining two well-characterized inhibitors, EGCG and harmine27,35. A typical inhibition profile obtained by the ELISA method for EGCG (Figure 6) and harmine (Supplementary Figure 2) follows a sigmoidal function. IC50s for EGCG and harmine determined by the ELISA method were 0.215 ± 0.024 μM and 0.107 ± 0.018 μM, respectively. These values are comparable to those obtained earlier by us and others with different substrates and protocols, including the radioisotope/filter binding assay, generally regarded as the gold standard for kinase inhibition assays (Table 1)22,37–39. The results obtained from this ELISA assay appear to be as reproducible as any given enzymatic assay. These results confirm that our ELISA platform is a valid system for quantitative characterization of DYRK1A inhibitors.\n\nA. EGCG inhibition assays were performed in the presence of serially diluted EGCG (0.001 – 3.2 μM) under the standard reaction conditions. DYRK1A activity at any given EGCG concentration was calculated as a ratio to the activity of the no-inhibitor control and plotted on the Y-axis versus EGCG concentration. IC50 was calculated from the plot after curve fitting, as described in Methods (n = 6 for each data point).\n\n*DYRK1A used for the assay:\n\n1: GST-DYRK1A (residues 1-499)26\n\n2: GST-DYRK1A (residues 1-497)27\n\n3: HT-DYRK1A (residues 1-497) (this study)\n\n#Reported IC50 was the average of three independent sets of duplicate assays (n = 6).\n\nWe then modified the ELISA protocol to run the assays with a single concentration of inhibitor and varying ATP concentrations, to determine whether the inhibitor in question competes with ATP. This allows the efficacy of inhibition to be evaluated as the ATP concentration changes. ATP is expected to influence the potency of competitive inhibitors, but not that of non-competitive inhibitors. 
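The competition analysis described in Methods (residual activity, then inhibition potency, then normalization to 100 μM ATP) can be sketched as follows; the OD readings in this Python fragment are hypothetical values invented to mimic an ATP-competitive response, not data from the study.

```python
# Background-corrected OD405 readings at four ATP concentrations,
# with and without a fixed inhibitor dose (values are invented to
# mimic an ATP-competitive inhibitor).
atp        = [100, 200, 400, 800]        # μM
od_control = [1.20, 1.32, 1.41, 1.47]    # no-inhibitor controls
od_inhib   = [0.54, 0.73, 0.92, 1.10]    # with inhibitor

potency = []
for ctrl, inh in zip(od_control, od_inhib):
    residual = inh / ctrl        # fraction of control activity
    potency.append(1.0 - residual)   # inhibition potency

# Normalize each potency to the value at 100 μM ATP, as in Figure 7.
norm = [p / potency[0] for p in potency]

# A profile that declines with increasing ATP indicates ATP
# competition; a flat profile indicates a non-ATP-competitive mode.
```

Running this on real Figure 7 data would reproduce the declining harmine curve and the flat EGCG curve described below.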
As shown in Figure 7, harmine loses potency against DYRK1A when ATP is increased from 100 to 800 μM, indicating an ATP-competitive mode, while EGCG potency remains essentially unchanged (non-ATP-competitive). The inhibitory modes for harmine and EGCG revealed by the ELISA assay are the same as previously reported by the radioisotope/filter binding method27,35. This further validates the ELISA assay.\n\nFor each inhibitor (epigallocatechin gallate (EGCG) and harmine), assays were conducted with a single concentration of inhibitor at four different ATP concentrations (100, 200, 400, 800 μM) and quantified as described in Methods. The inhibitory potency at the three other ATP concentrations was calculated relative to that at 100 μM ATP and plotted against ATP concentration. Inhibitor concentrations used in the assays were 0.4 μM for EGCG (●) and 0.2 μM for harmine (▪) (n = 6 for each data point).\n\nAs noted, non-radioactive DYRK1A assays have been described21,22. These assays employ similar solution-phase DYRK1A reactions in the first stage, followed by different approaches for measuring the phosphorylated products. One used a phospho-specific antibody to capture products for subsequent immuno/colorimetric detection21, while the other used a fluorescein-tagged substrate and analyzed products by high performance liquid chromatography/fluorescence detection22. These methods have been optimized for the sensitivity needed to measure cellular DYRK1A activity. We do not know whether our ELISA method, at the current stage, affords that level of sensitivity. Nevertheless, as we have demonstrated, our ELISA assay provides sufficient sensitivity for analyzing inhibitor activity with recombinant DYRK1A. Furthermore, because it follows the standard ELISA protocol, our assay is straightforward to perform, with the entire process carried out in a single well. 
The tools and equipment for adapting this plate-based assay for high-throughput automation are widely available, and if necessary, the assay can be refined to further improve the sensitivity. We believe that our assay offers a simple, rapid, and reliable non-radioactive method suitable for replacing the radioactive tracer assays in quantifying and screening DYRK1A inhibitors.\n\n\nData availability\n\nDataset 1: Raw data for Figures 1–7 and Supplementary Figures 1 and 2 are supplied in KaleidaGraph format. 10.5256/f1000research.10582.d14878742\n\nZipped file, containing the following:\n\nData for Figure 1. Coating ELISA plate with HT-PRD. Data (OD405) for 0 – 800 ng of coated HT-PRD per well are shown. Background measurements for all Figures (1–7 and Supplementary Figures 1 and 2) were obtained using wells with PNPP only (no coating, no phosphorylation, and no antibodies), which were performed in parallel with each experimental replicate/triplicate for background correction. The data shown in the files for all Figures have been corrected using the averaged background from each set.\n\nData for Figure 2. Phosphorylation of coated HT-PRD by DYRK1A. Data (OD405) for 0 – 800 ng coated HT-PRD per well are shown.\n\nData for Figure 3. DYRK1A concentration-dependent phosphorylation of coated HT-PRD. Data (OD405) for phosphorylation with 0 – 80 ng HT-497 are shown.\n\nData for Figure 4. Time-course phosphorylation of coated HT-PRD by DYRK1A. Time-course phosphorylation data (OD405) for 0–90 min incubation times are shown.\n\nData for Figure 5. 3D3 dilution factor determination. Data (OD405) for 3D3 dilutions 1:1,000 – 256,000 are shown.\n\nData for Figure 6. Epigallocatechin gallate (EGCG) inhibition profile. Data (OD405) for EGCG 0 – 3.2 μM are shown.\n\nData for Figure 7. ATP competition assay. Data (OD405) for ATP 100 – 800 μM are shown.\n\nData for Supplementary Figure 1. Secondary antibody dilution factor determination. 
Data (OD405) for secondary antibody dilutions 1:1,000 – 256,000 are shown.\n\nData for Supplementary Figure 2. Harmine inhibition profile. Data (OD405) for harmine 0 – 3.2 μM are shown.",
"appendix": "Author contributions\n\n\n\nYong Liu carried out experiments to determine the optimal conditions for running the ELISA assay. He also performed ATP competition measurements.\n\nTatyana Adayev cloned and expressed the enzyme and the substrate. She demonstrated that mAb 3D3 could stain phosphorylated dynamin in ELISA, which was the prerequisite for the development of the current assay.\n\nYu-Wen Hwang conceived and designed the ELISA approach. He also carried out the assays for measuring IC50. The manuscript was written by this author.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis work was supported by the New York State Office for People with Developmental Disabilities, the parent agency of the New York State Institute for Basic Research in Developmental Disabilities. No extramural funds were used to support this research.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nWe thank Dr. Kevin Hwang for critical reading of the manuscript.\n\n\nSupplementary materials\n\nSupplementary Figure 1. Secondary antibody dilution factor determination. Phosphorylated HT-PRD in ELISA wells was prepared and probed with 3D3 (1:3000 dilution) followed by serially diluted secondary antibody (from 1000- to 256,000-fold), as described for Figure 5. As for 3D3 titering, normalized OD405 was calculated for plotting (n = 6 for each data point).\n\nSupplementary Figure 2. Harmine inhibition profile. Harmine inhibition was conducted and analyzed exactly as described in Figure 6 for EGCG (n = 6 for each data point).\n\n\nReferences\n\nBecker W, Sippl W: Activation, regulation, and inhibition of DYRK1A. FEBS J. 2011; 278(2): 246–256. 
PubMed Abstract | Publisher Full Text\n\nSong WJ, Sternberg LR, Kasten-Sportès C, et al.: Isolation of human and murine homologues of the Drosophila minibrain gene: human homologue maps to 21q22.2 in the Down syndrome \"critical region\". Genomics. 1996; 38(3): 331–339. PubMed Abstract | Publisher Full Text\n\nRahmani Z, Blouin JL, Creau-Goldberg N, et al.: Critical role of the D21S55 region on chromosome 21 in the pathogenesis of Down syndrome. Proc Natl Acad Sci U S A. 1989; 86(15): 5958–5962. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWegiel J, Wisniewski HM, Dziewiatkowski J, et al.: Differential susceptibility to neurofibrillary pathology among patients with Down syndrome. Dementia. 1996; 7(3): 135–141. PubMed Abstract | Publisher Full Text\n\nSmith DJ, Stevens ME, Sudanagunta SP, et al.: Functional screening of 2 Mb of human chromosome 21q22.2 in transgenic mice implicates minibrain in learning defects associated with Down syndrome. Nat Genet. 1997; 16(1): 28–36. PubMed Abstract | Publisher Full Text\n\nBranchi I, Bichler Z, Minghetti L, et al.: Transgenic mouse in vivo library of human Down syndrome critical region 1: association between DYRK1A overexpression, brain development abnormalities, and cell cycle protein alteration. J Neuropathol Exp Neurol. 2004; 63(5): 429–440. PubMed Abstract | Publisher Full Text\n\nAhn KJ, Jeong HK, Choi HS, et al.: DYRK1A BAC transgenic mice show altered synaptic plasticity with learning and memory defects. Neurobiol Dis. 2006; 22(3): 463–472. PubMed Abstract | Publisher Full Text\n\nWegiel J, Kuchna I, Nowicki K, et al.: Cell type- and brain structure-specific patterns of distribution of minibrain kinase in human brain. Brain Res. 2004; 1010(1–2): 69–80. PubMed Abstract | Publisher Full Text\n\nTejedor FJ, Hämmerle B: MNB/DYRK1A as a multiple regulator of neuronal development. FEBS J. 2011; 278(2): 223–235. 
PubMed Abstract | Publisher Full Text\n\nDowjat WK, Adayev T, Kuchna I, et al.: Trisomy-driven overexpression of DYRK1A kinase in the brain of subjects with Down syndrome. Neurosci Lett. 2007; 413(1): 77–81. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWegiel J, Gong CX, Hwang YW: The role of DYRK1A in neurodegenerative diseases. FEBS J. 2011; 278(2): 236–245. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWoods YL, Cohen P, Becker W, et al.: The kinase DYRK phosphorylates protein-synthesis initiation factor eIF2Bepsilon at Ser539 and the microtubule-associated protein tau at Thr212: potential role for DYRK as a glycogen synthase kinase 3-priming kinase. Biochem J. 2001; 355(Pt 3): 609–615. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLiu F, Liang Z, Wegiel J, et al.: Overexpression of Dyrk1A contributes to neurofibrillary degeneration in Down syndrome. FASEB J. 2008; 22(9): 3224–3233. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRyoo SR, Cho HJ, Lee HW, et al.: Dual-specificity tyrosine(Y)-phosphorylation regulated kinase 1A-mediated phosphorylation of amyloid precursor protein: evidence for a functional link between Down syndrome and Alzheimer's disease. J Neurochem. 2008; 104(5): 1333–44. PubMed Abstract | Publisher Full Text\n\nRyu YS, Park SY, Jung MS, et al.: Dyrk1A-mediated phosphorylation of Presenilin 1: a functional link between Down syndrome and Alzheimer's disease. J Neurochem. 2010; 115(3): 574–584. PubMed Abstract | Publisher Full Text\n\nLee MS, Kao SC, Lemere CA, et al.: APP processing is regulated by cytoplasmic phosphorylation. J Cell Biol. 2003; 163(1): 83–95. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKimura R, Kamino K, Yamamoto M, et al.: The DYRK1A gene, encoded in chromosome 21 Down syndrome critical region, bridges between beta-amyloid production and tau phosphorylation in Alzheimer disease. Hum Mol Genet. 2007; 16(1): 15–23. 
PubMed Abstract | Publisher Full Text\n\nBecker W, Soppa U, Tejedor FJ: DYRK1A: a potential drug target for multiple Down syndrome neuropathologies. CNS Neurol Disord Drug Targets. 2014; 13(1): 26–33. PubMed Abstract | Publisher Full Text\n\nDuchon A, Herault Y: DYRK1A, a Dosage-Sensitive Gene Involved in Neurodevelopmental Disorders, Is a Target for Drug Development in Down Syndrome. Front Behav Neurosci. 2016; 10: 104. PubMed Abstract | Publisher Full Text | Free Full Text\n\nStotani S, Giordanetto F, Medda F: DYRK1A inhibition as potential treatment for Alzheimer's disease. Future Med Chem. 2016; 8(6): 681–696. PubMed Abstract | Publisher Full Text\n\nLilienthal E, Kolanowski K, Becker W: Development of a sensitive non-radioactive protein kinase assay and its application for detecting DYRK activity in Xenopus laevis oocytes. BMC Biochem. 2010; 11: 20. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBui LC, Tabouy L, Busi F, et al.: A high-performance liquid chromatography assay for Dyrk1a, a Down syndrome-associated kinase. Anal Biochem. 2014; 449: 172–178. PubMed Abstract | Publisher Full Text\n\nChen-Hwang MC, Chen HR, Elzinga M, et al.: Dynamin is a minibrain kinase/dual specificity Yak1-related kinase 1A substrate. J Biol Chem. 2002; 277(20): 17597–17604. PubMed Abstract | Publisher Full Text\n\nHuang Y, Chen-Hwang MC, Dolios G, et al.: Mnb/Dyrk1A phosphorylation regulates the interaction of dynamin 1 with SH3 domain-containing proteins. Biochemistry. 2004; 43(31): 10173–10185. PubMed Abstract | Publisher Full Text\n\nWarnock DE, Terlecky LJ, Schmid SL: Dynamin GTPase is stimulated by crosslinking through the C-terminal proline-rich domain. EMBO J. 1995; 14(7): 1322–1328. PubMed Abstract | Free Full Text\n\nHimpel S, Panzer P, Eirmbter K, et al.: Identification of the autophosphorylation sites and characterization of their effects in the protein kinase DYRK1A. Biochem J. 2001; 359(Pt 3): 497–505. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nAdayev T, Chen-Hwang MC, Murakami N, et al.: Kinetic properties of a MNB/DYRK1A mutant suitable for the elucidation of biochemical pathways. Biochemistry. 2006; 45(39): 12011–12019. PubMed Abstract | Publisher Full Text\n\nCunningham PR, Weitzmann CJ, Nurse K, et al.: Site-specific mutation of the conserved m62Am62A residues of E. coli 16S ribosomal RNA. Effects on ribosome function and activity of the KsgA methyltransferase. Biochim Biophys Acta. 1990; 1050(1–3): 18–26. PubMed Abstract\n\nHwang YW, Zhong JM, Poullet P, et al.: Inhibition of SDC25 C-domain-induced guanine-nucleotide exchange by guanine ring binding domain mutants of v-H-ras. J Biol Chem. 1993; 268(33): 24692–24698. PubMed Abstract\n\nBradford MM: A rapid and sensitive method for the quantitation of microgram quantities of protein utilizing the principle of protein-dye binding. Anal Biochem. 1976; 72(1–2): 248–254. PubMed Abstract | Publisher Full Text\n\nYano T, Taura C, Shibata M, et al.: A monoclonal antibody to the phosphorylated form of glial fibrillary acidic protein: application to a non-radioactive method for measuring protein kinase activities. Biochem Biophys Res Commun. 1991; 175(3): 1144–1151. PubMed Abstract | Publisher Full Text\n\nFarley K, Mett H, McGlynn E, et al.: Development of solid-phase enzyme-linked immunosorbent assays for the determination of epidermal growth factor receptor and pp60c-src tyrosine protein kinase activity. Anal Biochem. 1992; 203(1): 151–157. PubMed Abstract | Publisher Full Text\n\nWang Y, Ma H: Protein kinase profiling assays: a technology review. Drug Discov Today Technol. 2015; 18: 1–8. PubMed Abstract | Publisher Full Text\n\nAdayev T, Chen-Hwang MC, Murakami N, et al.: Dual-specificity tyrosine phosphorylation-regulated kinase 1A does not require tyrosine phosphorylation for activity in vitro. Biochemistry. 2007; 46(25): 7614–7624. 
PubMed Abstract | Publisher Full Text\n\nAdayev T, Wegiel J, Hwang YW: Harmine is an ATP-competitive inhibitor for dual-specificity tyrosine phosphorylation-regulated kinase 1A (Dyrk1A). Arch Biochem Biophys. 2011; 507(2): 212–218. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSegel IH: Enzyme Kinetics, Behavior and Analysis of Rapid Equilibrium and Steady-State Enzyme Systems. John Wiley & Sons, Inc. 1975. Reference Source\n\nBain J, McLauchlan H, Elliott M, et al.: The specificities of protein kinase inhibitors: an update. Biochem J. 2003; 371(Pt 1): 199–204. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBain J, Plater L, Elliott M, et al.: The selectivity of protein kinase inhibitors: a further update. Biochem J. 2007; 408(3): 297–315. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGöckler N, Jofre G, Papadopoulos C, et al.: Harmine specifically inhibits protein kinase DYRK1A and interferes with neurite formation. FEBS J. 2009; 276(21): 6324–6337. PubMed Abstract | Publisher Full Text\n\nWoods YL, Rena G, Morrice N, et al.: The kinase DYRK1A phosphorylates the transcription factor FKHR at Ser329 in vitro, a novel in vivo phosphorylation site. Biochem J. 2001; 355(Pt 3): 597–607. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHimpel S, Tegge W, Frank R, et al.: Specificity determinants of substrate recognition by the protein kinase DYRK1A. J Biol Chem. 2000; 275(4): 2431–2438. PubMed Abstract | Publisher Full Text\n\nLiu Y, Adayev T, Hwang YW: Dataset 1 in: An ELISA DYRK1A non-radioactive assay suitable for the characterization of inhibitors. F1000Research. 2017. Data Source"
}
|
[
{
"id": "19325",
"date": "18 Jan 2017",
"name": "Walter Becker",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nLiu and coworkers have developed an ELISA-based assay for the protein kinase DYRK1A and show that their assay compares well with the traditional radioactive assays in the analysis of DYRK1A inhibitors. As correctly pointed out by the authors, a previous ELISA-based DYRK1A assay developed in my own lab had been optimized for sensitive detection of endogenous DYRK kinases, while the new ELISA assay is more straightforward and provides a simple and rapid method for analyzing inhibitor activity with recombinant DYRK1A. The inhibition curves with the DYRK1A inhibitors EGCG and harmine support the conclusion “that the ELISA platform demonstrated here is a viable alternative to the traditional radioactive tracer assays for analyzing DYRK1A inhibitors”.\n\nThis methods article clearly represents a substantial modification of an existing procedure. The study is well designed, the methods and the analysis of the results are appropriately described and the conclusions are justified on the basis of the results. In summary, this is a sound and useful study that will be valuable for other researchers in the field.\nRecommendations\nIn the 2nd paragraph of the introduction, presenilin is misspelled.\n\nI appreciate the identification of the antibodies used in the method section by their accession numbers in the antibody registry. Direct links to this registry would be helpful for the reader.\n\nFor the convenience of the reader, I suggest including the harmine inhibition curve (supplemental Fig. 
2) in the main text as a second panel in Fig. 6.\n\nThe final statement of the manuscript suggests the use of their assay for the screening of DYRK1A inhibitors. It may be worth validating this application by determining the Z’–factor of the assay (according to Zhang et al. 19991).\n\nThe availability of plasmids and antibodies should be indicated. Are the plasmids available from the authors or from Addgene? Will the authors make the hybridoma clone for the pSer857 3D3 antibody available or can the antibody be commercially purchased?\n\nIt must be stated in the figure legends whether the error bars show SD or SEM. I suggest showing SD, which provides the reader with a measure of the experimental error.\n\nThe raw data for the figures are provided in the KaleidaGraph format and were not accessible to me. Is it possible to submit them as Excel or PDF?\n\nThe authors may consider including the term \"kinase\" in the title to enhance the visibility of their article to readers not aware of DYRK1A.",
"responses": [
{
"c_id": "2582",
"date": "24 Mar 2017",
"name": "Yu-Wen Hwang",
"role": "Author Response",
"response": "We thank Dr. Becker for comments and suggestions. We have revised the article and addressed most of the questions. Here is the summary of changes. The typo is corrected. Direct link to antibody registry is added. Supplementary Figure 2 for harmine inhibition is now incorporated in the main text as Figure 6B. The Z’-factor for the ELISA assay has been estimated. The results are shown in the Supplementary Table. We have deposited the vectors (pHT-497 and pHT-PRD) and antibody 3D3 with Addgene and the Developmental Studies Hybridoma Bank of the University of Iowa, respectively. The error bars are SEM. This is now indicated in the revised article. The raw data in Excel format have been submitted. The term “kinase” is added to the title."
}
]
},
{
"id": "19700",
"date": "26 Jan 2017",
"name": "Stefan Knapp",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nI approve the article by Liu et al.\nDYRK1A has developed into an interesting pharmacological target. The assay described in this paper offers an interesting alternative to previously published assays.\nFor assay validation, however, the authors made a poor choice. Epigallocatechin gallate (EGCG) has been identified as a DYRK1 inhibitor, but this compound is highly reactive and not a meaningful inhibitor for assay validation (even though there might be clinical benefits to using this natural product due to the broad spectrum of published possible cellular activities). Recently, the diverse problems of using promiscuous, reactive and chemically unstable inhibitors (PAINS) have been highlighted in the literature. Certainly EGCG should not be used as a control compound in any assay.\nHowever, the data are well presented (the antibody source should be described) and a good addition to the kinase assay repertoire that can be used for studying DYRK1A inhibitors.",
"responses": [
{
"c_id": "2581",
"date": "24 Mar 2017",
"name": "Yu-Wen Hwang",
"role": "Author Response",
"response": "We thank Dr. Knapp for the comments. We agree with Dr. Knapp that EGCG is known to be unstable and is reactive toward many cellular targets. Clearly, EGCG is not a suitable reagent for validating biological assays. However, in a defined in vitro system consisting of only DYRK1A, substrate, ATP, and buffer, like our ELISA-based assay, EGCG is a well-behaved small molecule. Its property is predictable and can be reliably measured under such conditions as has been done by us and others. In addition, EGCG inhibits DYRK1A through a non-ATP-competitive mechanism. Pairing EGCG with ATP-competitive inhibitors such as harmine provides different prospects for assessing the ELISA assay. The antibody used in the study, 3D3, has been deposited with the Developmental Studies Hybridoma Bank of the University of Iowa."
}
]
}
] | 1
|
https://f1000research.com/articles/6-42
|
https://f1000research.com/articles/6-310/v1
|
23 Mar 17
|
{
"type": "Case Report",
"title": "Case Report: Behçet’s disease accompanied with vitiligo",
"authors": [
"Ragıp Ertaş",
"Kemal Özyurt",
"Atıl Avcı",
"Sule Ketenci Ertas",
"Mustafa Atasoy",
"Kemal Özyurt",
"Atıl Avcı",
"Sule Ketenci Ertas",
"Mustafa Atasoy"
],
"abstract": "Recently, a few case reports and clinical studies have been published that explore the association of Behçet’s Disease (BD) and vitiligo, with conflicting results. Genetic and immunological properties of BD and presence of autoantibodies support autoimmunity, but clinical features suggest autoinflammatory diseases. BD is thought to be a cornerstone between autoimmune and autoinflammatory diseases. On the other hand, vitiligo has been accepted as an autoimmune disease with associations of other autoimmune disorders and there is a possible role of autoimmunity in pathogenesis of the disease. Significant advances have been made understanding the pathogenesis and genetics of BD. However, it is worth presenting rare clinical variants for improving the clinical understanding of BD. Herein, we are presenting a case with diagnosis of both Behçet’s disease and vitiligo in same patient, which is a rare occurrence. Discussion and demonstrating the association of these two diseases may give rise to understanding similar and different aspects of autoimmunity and autoinflammatory pathogenesis of both diseases.",
"keywords": [
"Behçet’s Disease",
"Vitiligo",
"Autoimmunity",
"Autoinflammatory",
"Depigmentation",
"Erythema nodosum",
"Thrombophlebitis",
"Arthritis"
],
"content": "Introduction\n\nBehçet's disease (BD) is a systemic disease with an unknown origin characterized by recurrent oral ulcers, mucocutaneus disorders and ocular findings. BD may be life-threatening, affecting the central nervous system, large vessels and the gastrointestinal tract1. Numerous studies have investigated the etiopathogenesis of BD over a long period, but the etiology and mechanisms of pathogenesis have not yet been fully explained2.\n\nVitiligo is a chronic depigmenting disorder representing white patches in the skin or hair extinct of functional melanocytes3. Autoimmunity has been implicated in the pathogenesis of the disease, and associations with autoimmune diseases have been demonstrated4.\n\nHere, we present a unique case of BD and vitiligo in the same patient. This is a very rare condition and gives the opportunity to understand similar and different aspects of autoimmunity and autoinflammatory pathogenesis of both diseases by observing clinical and laboratory findings.\n\n\nCase report\n\nA 24-year-old woman was admitted to the Clinic of Dermatology at the Kayseri Training and Research Hospital. The patient complained of swelling and pain in her legs for two weeks. Medical history of the patient included monthly relapsing oral aphthous ulcers for three years, and one attack of thrombophlebitis and arthritis previously. She had received treatment in various clinics and times for relapsing oral aphthous ulcers, including colchicum tablets, mouthwashes, corticosteroid and antibiotic creams. For thrombophlebitis and arthritis she was hospitalized and given therapy. The patient had vitiligo for 14 years. Her relatives had neither BD nor vitiligo.\n\nA physical examination revealed erythema nodosum-like eruptions on the patient’s legs, and white, depigmented patches on the patient’s bilateral lateral malleolus, wrists, eyelids, knees, fingers and an oral aphthous ulcer on the lower lip mucosa (Figure 1–Figure 4). 
An ophthalmological examination resulted in normal findings, even though the patient had pain in her eyes. A pathergy test was negative. Laboratory examination showed: hemoglobin, 10.8 g/dL (reference range, 12–16 g/dL); platelet count, 285 × 10^3/uL (130–400 × 10^3/uL); white cell count, 6.35 × 10^3/uL (4.6–10.2 × 10^3/uL); serum folic acid, 4.84 ng/ml (3.1–17.54 ng/ml); serum ferritin, 8.5 ng/ml (110–305 ng/ml); vitamin B12, 217 pg/ml (126–505 pg/ml); serum iron, 28 ug/dL (60–180 ug/dL); serum total iron binding capacity, 345 ug/dL (155–355 ug/dL); C-reactive protein, 5.11 mg/L (0–5 mg/L); erythrocyte sedimentation rate, 22 mm/h (0–20 mm/h); rheumatoid factor, 10.2 IU/ml (0–15 IU/ml); serum antistreptolysin-O titer, 174 IU/ml (0–200 IU/ml); free T3, 3.68 pg/ml (2.5–3.9 pg/ml); free T4, 0.75 ng/dl (0.54–1.24 ng/dl); thyroid stimulating hormone, 1.56 mIU/L (0.4–5.6 mIU/L); antithyroglobulin antibody test, <2.2 IU/ml (0–4 IU/ml); antithyroid peroxidase antibody test, 0.6 IU/ml (0–9 IU/ml).\n\nA diagnosis of BD was made according to the International Criteria for Behçet’s Disease (ICBD)5, and vitiligo was diagnosed based on the physical examination. Diagnosis of BD according to the ICBD was based only on clinical features, not on any laboratory finding. In the ICBD, ocular lesions, oral aphthosis and genital aphthosis are each assigned 2 points, while skin lesions, central nervous system involvement and vascular manifestations are assigned 1 point each. A positive pathergy test is assigned 1 point. A patient scoring 4 or more points is classified as having BD. Our patient had 4 points: 2 for oral aphthosis, 1 for erythema nodosum (skin lesions) and 1 for thrombophlebitis (vascular manifestations). Additionally, the laboratory results mentioned above showed an iron deficiency anemia.\n\n\nFollow-up and outcomes\n\nThe patient was hospitalized and treated in our dermatology clinic for 10 days. She was given systemic corticosteroids and wet dressings for the erythema nodosum-like eruptions on her legs. These lesions improved and she was discharged at the end of 10 days. 
She did not live within the borders of our province and was advised to follow up at a local dermatology clinic.\n\n\nDiscussion\n\nClinical and immunological understanding of the disease suggests that BD is a cornerstone between autoimmune and autoinflammatory disease. Clinical features and male predominance suggest an autoinflammatory disease; however, the shared class I MHC association and the presence of autoantibodies in patients support autoimmunity2. Clinical characteristics and symptoms are the main factors for diagnosing BD, but a specific diagnostic feature or laboratory method is not yet available. The clinical features of patients in countries with a high prevalence of BD may help to clarify the pathogenesis of BD1. Here we present a case of BD accompanied by vitiligo. Vitiligo is a common skin disorder, and various factors participate in its etiopathogenesis, which involves autoimmune melanocyte destruction. Autoimmune thyroid diseases and pernicious anemia are frequently associated with vitiligo3,4. Recently, a few case reports and clinical studies have been published that examine the association of BD with vitiligo, with conflicting results. Oran et al. showed that the frequency of vitiligo was not increased among patients with BD6, while two different reports mentioned the coexistence of vitiligo and BD7,8. In addition, Guney et al. reported that vitiligo occurred during interferon therapy in a patient with BD9.\n\nVogt–Koyanagi–Harada (VKH) syndrome is an inflammatory disorder characterized by bilateral panuveitis, and is frequently associated with poliosis, vitiligo, alopecia, and central nervous system and auditory symptoms10. VKH syndrome is not often mistaken for BD. However, VKH syndrome has similar properties to BD; the etiology of both diseases remains unknown, and an autoimmune response has been presumed to be implicated in their pathogenesis. Hu et al. 
reported that the TT genotype of rs7574865 in the STAT4 gene may be a susceptibility factor for VKH syndrome in a Chinese Han population, and that the GG genotype of this SNP may confer susceptibility in male BD patients11. Our patient had only vitiligo and no other symptoms of VKH syndrome.\n\nThese case reports and studies prompt consideration of the association of BD and vitiligo. In our case, vitiligo had been present for 14 years before the diagnosis of BD. Antithyroid autoantibodies are not included in the diagnostic criteria for BD, but provide evidence of autoimmunity; these were negative in our patient. We do not know whether a unique genetic predisposition or some environmental or infectious factor caused this status. Interestingly, Karincaoglu et al. reported the incidental coexistence of BD and vitiligo, and also koebnerization of the genital ulceration of BD7. However, in their case, the patient had vitiligo patches not only in the scar area of the genital region, but also on other body surfaces.\n\nVitiligo may be only one symptom of a bigger picture, as in VKH syndrome12. A different disease may have the features of both BD and vitiligo. Indeed, all these implications are speculative, and new studies and cases are needed. We present a case of BD accompanied by vitiligo, a rare clinical variant of BD, which may help to improve the clinical understanding of BD.\n\n\nConsent\n\nWritten informed consent was obtained from the patient for the publication of the manuscript.",
"appendix": "Author contributions\n\n\n\nRE: wrote the manuscript; KO, AA and MA: Helped manage the patient’s diagnosis and therapy, and prepared the manuscript; SKE: patient’s consultant from the Department of Rheumatology.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nOzyurt K, Colgecen E, Baykan H: Does familial occurrence or family history of recurrent oral ulcers influence clinical characteristics of Behçet's disease? Acta Dermatovenerol Croat. 2013; 21(3): 168–73. PubMed Abstract\n\nPineton de Chambrun M, Wechsler B, Geri G: New insights into the pathogenesis of Behçet's disease. Autoimmun Rev. 2012; 11(10): 687–98. PubMed Abstract | Publisher Full Text\n\nKovacs SO: Vitiligo. J Am Acad Dermatol. 1998; 38(5 Pt 1): 647–66; quiz 667–8. Review. PubMed Abstract | Publisher Full Text\n\nKemp EH, Waterman EA, Weetman AP: Autoimmune aspects of vitiligo. Autoimmunity. 2001; 34(1): 65–77. PubMed Abstract | Publisher Full Text\n\nInternational Team for the Revision of the International Criteria for Behçet's Disease (ITR-ICBD), The International Criteria for Behçet’s Disease (ICBD): a collaborative study of 27 countries on the sensitivity and specificity of the new criteria. J Eur Acad Dermatol Venereol. 2014; 28(3): 338–347. PubMed Abstract | Publisher Full Text\n\nOran M, Hatemi G, Tasli L, et al.: Behçet's syndrome is not associated with vitiligo. Clin Exp Rheumatol. 2008; 26(4 Suppl 50): S107–9. PubMed Abstract\n\nBorlu M, Cölgeçen E, Evereklioglu C: Behçet's disease and vitiligo in two brothers: coincidence or association? Clin Exp Dermatol. 2009; 34(8): e653–5. PubMed Abstract | Publisher Full Text\n\nKarincaoglu Y, Kalayci B, Tepe B: Vitiligo koebnerized by behçet disease genital ulceration. Am J Clin Dermatol. 2009; 10(4): 268–70. 
PubMed Abstract | Publisher Full Text\n\nGuney E, Akcali G, Akcay BI, et al.: Vitiligo in a Patient Treated with Interferon Alpha-2a for Behçet's Disease. Case Rep Med. 2012; 2012: 387140. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNorose K, Yano A: Melanoma specific Th1 cytotoxic T lymphocyte lines in Vogt-Koyanagi-Harada disease. Br J Ophthalmol. 1996; 80(11): 1002–8. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHu K, Yang P, Jiang Z, Hou S, et al.: STAT4 polymorphism in a Chinese Han population with Vogt-Koyanagi-Harada syndrome and Behçet's disease. Hum Immunol. 2010; 71(7): 723–6. PubMed Abstract | Publisher Full Text\n\nAktas E, Ertas R: Vitiligo'nun Tanı ve Ayırıcı Tanısı. Turkiye Klinikleri Journal of Dermatology Special Topics. 2009; 2: 23–6. Reference Source"
}
|
[
{
"id": "21204",
"date": "18 Apr 2017",
"name": "Zafer Türkoğlu",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe case report that is being reported has been carried out well with no flaws in the design or methodology.\n\nThe case report was reported correctly, with acknowledgement of the existing body of work.\n\nThe work provides sufficient details for it to be useful for other practitioners.\n\nIt is suitable for indexing.",
"responses": []
},
{
"id": "22530",
"date": "05 May 2017",
"name": "Rasheedunnisa Begum",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nAuthors should mention the onset of vitiligo and Behçet’s disease (BD) in clinical history. Moreover, it would be interesting if authors can mention the prevalence of vitiligo and BD for better correlation of the present study.\n\nVitiligo is a disorder without any gender biasness, as the subject recruited for the present study is a female it would add impact to the present study if authors should discuss the gender biasness for BD and discuss the same.\n\nAuthors should clearly state if depigmentation is due to vitiligo as there are many other depigmentation disorder. Was there any confirmation under wood’s lamp for the same?\n\nIn the discussion section authors have mentioned association of vitiligo with other autoimmune disorders, for the same they should cite recent reports with higher sample size (for e.g. 
autoimmunity in onset and progression of vitiligo where atypical autoimmune disorder Thyroid has been discussed and many more).\n\nAuthors should also mention the extent of depigmentation, type and the activity of vitiligo in the patient as per standard classification guidelines.\n\nThe findings of laboratory examination along with the normal range can be represented in a tabular form to make it clearer.\n\nAuthors should also mention whether Koebner phenomenon was observed in the patient or not and discuss the same.\n\nAuthors should also mention whether approval from respective ethics committee was obtained for publishing the case study.\n\nIs the background of the case’s history and progression described in sufficient detail? Yes\n\nAre enough details provided of any physical examination and diagnostic tests, treatment given and outcomes? Yes\n\nIs sufficient discussion included of the importance of the findings and their relevance to future understanding of disease processes, diagnosis or treatment? Partly\n\nIs the case presented with sufficient detail to be useful for other practitioners? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-310
|
https://f1000research.com/articles/6-309/v1
|
23 Mar 17
|
{
"type": "Opinion Article",
"title": "Towards a systems approach for chronic diseases, based on health state modeling",
"authors": [
"Michael Rebhan"
],
"abstract": "Rising pressure from chronic diseases means that we need to learn how to deal with challenges at a different level, including the use of systems approaches that better connect across fragments, such as disciplines, stakeholders, institutions, and technologies. By learning from progress in leading areas of health innovation (including oncology and AIDS), as well as complementary indications (Alzheimer’s disease), I try to extract the most enabling innovation paradigms, and discuss their extension to additional areas of application within a systems approach. To facilitate such work, a Precision, P4 or Systems Medicine platform is proposed, which is centered on the representation of health states that enable the definition of time in the vision to provide the right intervention for the right patient at the right time and dose. Modeling of such health states should allow iterative optimization, as longitudinal human data accumulate. This platform is designed to facilitate the discovery of links between opportunities related to a) the modernization of diagnosis, including the increased use of omics profiling, b) patient-centric approaches enabled by technology convergence, including digital health and connected devices, c) increasing understanding of the pathobiological, clinical and health economic aspects of disease progression stages, d) design of new interventions, including therapies as well as preventive measures, including sequential intervention approaches. Probabilistic Markov models of health states, e.g. those used for health economic analysis, are discussed as a simple starting point for the platform. A path towards extension into other indications, data types and uses is discussed, with a focus on regenerative medicine and relevant pathobiology.",
"keywords": [
"chronic diseases",
"systems approach",
"Precision Medicine",
"computational modeling",
"disease progression",
"Markov health state models",
"Regenerative Medicine",
"Open Science."
],
"content": "Rising pressure from chronic diseases\n\nOne of the main challenges our healthcare and biomedical research and development systems are facing, in the age of digitalization and aging populations, is a rising burden from chronic conditions. This burden has a multitude of effects not only on the Quality of Life (QoL) and well-being of the patients and their immediate social networks (e.g. family members), but it also triggers increasing discussion about sustainability problems in health-related systems, including the economics of healthcare systems. Medical conditions are defined as being ‘chronic’ when they last 12 months or more, result in functional limitations (which tend to reduce QoL) and/or the need for ongoing medical care (i.e. healthcare resource utilization). Costs associated with chronic conditions are on the rise in many countries, and have been identified as a main driver of medical cost explosion, leading into the economic sustainability discussion. By now, they cause the majority of all healthcare costs in developed countries, with fast-rising prevalence in some emerging countries as well, as their societies increasingly imitate developed countries, including lifestyle, economy and burden from chronic diseases.\n\nFor example, in the US, a country that is among the most advanced in terms of this development, 31.5% of the population in 2010 was affected not only by a single, but multiple chronic conditions (MCC), binding more than 70% of all healthcare spending (not considering other costs, outside healthcare budgets, such as social care) (Gerteis et al., 2014). 
Chronic diseases overall, including in patients with a single chronic condition, account for a vast majority (86%) of healthcare spending in the US (Gerteis et al., 2014), leading to intensive discussion on how long society can afford to pay for rising healthcare budgets (Callahan, 2013), which are based on economic models that are largely disconnected from outcomes achieved (EFPIA, 2015). In terms of indications, metabolic (e.g. diabetes), cardiovascular (e.g. heart disease), respiratory (e.g. COPD and asthma), autoimmune (e.g. rheumatoid arthritis), and neurological conditions (e.g. Alzheimer’s and Parkinson’s disease) are typically among the most commonly observed, depending on the country and population (Callahan, 2013; Gerteis et al., 2014; Nugent, 2008; and Kvedar et al., 2016; see also the Global Burden of Disease study below).\n\nThis increase in chronic diseases in both developed and emerging countries (Nugent, 2008) represents a challenge that forces us to go back to the drawing board, in terms of the health-related systems we have created, to increase their ability to cope with these growing challenges. As a recent article that explains the need for such a fundamental redesign puts it: we face a “critical turning point, requiring not only improved health care systems but also a new model of medicine at its foundation” (Callahan, 2013). Similar statements can be found in the discourse of other disciplines involved in health innovation, including biomedical research and its translation (Butler, 2008; Cooksey, 2006; Lazebnik, 2002; Munos, 2010; Munos, 2016; Poste, 2011). At the same time, due to medical progress in specific areas, some of the diseases that were almost impossible to survive a while ago now turn into new types of chronic conditions, e.g. AIDS (where personalized combination therapies have enabled impressive improvements of patient outcomes in a relatively short time, see below). 
Such new chronic conditions created by medical progress also require sustained care and resources over many years, further increasing chronic disease burden. This trend of medical innovation creating new chronic conditions is likely to continue. “It is now possible, and not uncommon, for someone to have cancer pushed into remission at 65, to persist with well-managed heart disease at 75, and then to acquire Alzheimer’s at 85” (Callahan, 2013). Therefore, the rise in life expectancy that follows increasing development according to the Western model of modernization of the last 2–3 centuries is accompanied by more time spent in a managed chronic condition. This, in turn, leads to a lively debate on the need to push innovation for ‘healthy aging’, considering not only how long we live, but also the QoL of those added years. In that context, what can we learn about ‘healthy aging’, in the absence of a heavy burden from chronic diseases, in populations that do better than average?\n\n\nIslands of healthy aging\n\nComparisons between different human populations (e.g. in different geographies, or between subpopulations that live in the same geographic area) can reveal interesting patterns related to this debate. Studies of human populations that enjoy both a long and healthy life compared to others in their proximity (i.e. “islands of healthy aging”), so far have revealed that there are candidate contributing factors for healthy aging at many levels, including genetics, various aspects of lifestyle, environmental context, sociology and culture, and of course economic factors. 
However, it is important to be cautious about accepting simplified conclusions from such studies, as they suffer from the same fundamental problems as other types of studies in complex human populations, including the risk of unintentionally comparing apples and oranges (which can reveal the wrong factors as being significant), as well as the temptation of jumping from correlations to statements on causation (as often happens in the mass media, which adds to widespread confusion on the topic).\n\nIn the case of the so-called ‘Blue Zone’ populations of central Sardinia (Pes et al., 2013), a Mediterranean island with pleasant climatic conditions, various studies aim to identify significant differences between the healthy aging ‘Blue Zone’ populations, which are known to be among the longest-living populations in the Western world, and other Sardinian populations that have a close-to-average life expectancy and health profile during aging. Note that the ‘Blue Zone’ populations in the center of the island are known to have been slower in adopting a modern lifestyle, compared to the people in more accessible coastal areas (a pattern that can be observed in similar landscapes, where the accessibility of geographies influences the speed of modernization). A statistical analysis of factors that clearly distinguish the two Sardinian populations from each other (i.e. Blue Zone populations from the others) revealed occupational aspects (with communities rich in shepherds being healthier than those with more farmers and fishermen), landscape (mountainous terrain being a healthier environment compared to coastal lowlands), and dietary factors (with barley production associated with healthy aging) as significant. 
A possible conclusion from such an analysis of correlations could be that healthy aging populations are more likely to be found in areas with many shepherds, who used to spend much time roaming sparsely populated, mountainous areas, and less likely in areas with the more intensive spurts of activity typical of farmers and fishermen. Other conclusions may be valid as well, and it can be difficult to choose among the alternative conclusions to inform action.\n\nBased on our current knowledge about the characteristics of healthy aging populations, and risk factors for increased burden from chronic diseases (e.g. from the Framingham and similar longitudinal observational cohort studies; Mahmood et al., 2014), initiatives aimed at reducing chronic disease burden in public health have tried to develop solutions that work in an efficient manner at population level, including educational, political, regulatory and medical initiatives. One of the most visible exemplars of successful paradigms in public health is the reduction of the burden associated with smoking and second-hand smoke, highlighting the power of coordinated, interdisciplinary collaboration towards a higher-level health goal. In that context, though, it is important to point out that much of the evidence we have is, as stated above, only correlative in nature, and that its efficient reduction to practice in terms of the best (combination and/or sequence of) interventions in different populations and settings is anything but trivial (see also Carter, 2015, and the discussion on AIDS and platform applications below). For example, let’s assume that many studies confirm that a shepherd-like lifestyle in mountains with a mild climate, regular siesta and associated diet is indeed the one that gives us the healthiest aging experience: how do we extend such a ‘successful lifestyle paradigm’ into another setting that is less peaceful and traditional, e.g. 
a busy, modern, urban environment with its strong selective pressures on lifestyle and culture?\n\n\nSystems approaches\n\nWith all the (somewhat fragmented) knowledge we have accumulated, and made increasingly accessible with digitalization, I propose that it is a good time to learn how to “put the pieces of the puzzle together”, by learning how to best link and extend the most successful paradigms. Learning, in this case, means understanding the most powerful combinations of paradigms, where a paradigm can and where it cannot be applied (its ‘domain of validity’), and what adjustments to its implementation are needed to fit a particular situation. Several examples are provided below, e.g. the lessons learned from the modernization of diagnosis in oncology and AIDS, combined with innovation on more personalized (combinatorial) interventions. To achieve this, we need to learn how to better connect relevant ‘pieces of knowledge’ and stakeholders, across disciplines, institutions and other real-life barriers, towards increased speed and effectiveness of distributed learning, at systems and community level. This should put future generations into a better position for managing not only problems related to sustainability in health, which our generation is still struggling with, but also problems in other (connected) areas that pose similar challenges.\n\nHowever, a reality check of our status quo suggests that this is a type of challenge that we, at our current stage of human cultural evolution, have so far shown only limited evidence of being able to cope with. 
Several thousand years after a series of cultural transitions from small communities of hunter-gatherers (with a more limited control over their environment) into increasingly large and complex, globally connected societies (with more widespread effects in our environment, including the most remote corners of our planet), the question poses itself: what is the next stage in our cultural evolution, as a species? Will it actually be possible to overcome obstacles on the path towards multi-stakeholder co-design of healthier and more sustainable systems, and how long will it take?\n\nIn the life sciences, including fields related to medicine and biology, we can find many good initiatives that point in this direction, but also a widespread disbelief among leaders in those disciplines that we will be able to fundamentally change things, because of a belief in “things that never change” (which translates into the implicit belief that we have reached the end of human cultural evolution, in terms of our ability to manage certain types of complexity, as a human population) and because of the special characteristics (complexity) of living systems compared to engineered systems (e.g. see Lazebnik, 2002). 
Good introductions into those important discourses, in the above context, are provided by\n\nAltman, 2012 (linking the molecular and clinical worlds; role of systems medicine)\n\nAuffray et al., 2016 (focus on European initiatives, and the need to connect those)\n\nBarker, 2011 (sustainability of healthcare systems, with US and UK focus)\n\nButler, 2008 (the ‘valley of death’ problem, translating innovation to impact)\n\nCallahan, 2013 (sustainability of medicine, healthy aging, chronic diseases)\n\nCarter, 2015 (healthy aging, medical philosophy, and public health policies)\n\nGoodwin, 1999 (evolution of science, from control to participation)\n\nKoelsch et al., 2013 (economic sustainability of personalized health model in Oncology)\n\nLazebnik, 2002 (blind spots in biomedical research, lack of common language)\n\nMathews & Pronovost, 2011 (need for better systems integration in medicine)\n\nMunos, 2010 (from a non-sharing, competitive culture to open science in biomedical R&D)\n\nMunos, 2016 (innovation crisis in pharma R&D, and economic sustainability of the industry)\n\nPoste, 2011 (problems related to biomarkers and diagnostics)\n\nPowell, 2004 (systems approaches in biology, key problems and some trends)\n\nPritchard et al., 2017 (translating PM into regular clinical care, key challenges, adoption)\n\nThe ability to make progress here requires an increased capability for understanding how different aspects in relevant subsystems influence each other dynamically. Recently, we can observe early signs of a transition to a ‘new health innovation ecosystem’ with changes in many subsystems, based on changing roles (e.g. of patients, physicians and pharmacists), processes, habits, and underlying economic models (Barker, 2011; Beckmann & Lew, 2016; Koelsch et al., 2013; Munos, 2010), as a first symptom of efforts to increase systemic sustainability, as well as the effects of technology advances (see below). 
The types of challenges we need to tackle include the need for a discourse on important trade-offs that require careful balancing of the perspectives of multiple stakeholders. For example, as we develop therapeutic solutions for increasingly smaller but more molecularly defined populations in oncology, based on diagnostic modernization (see below), this results in tension between high prices for targeted therapies and the enormous investment required to develop new targeted therapies in a highly regulated and cost-intensive industry (Koelsch et al., 2013; Kostic & Phillips, 2015). Another fundamental trade-off situation with many consequences for a variety of stakeholders can be found when considering the resources currently involved in the last 5–10 years of life, including elderly, health, social and other care (e.g. by family members). Is a move towards more robots taking care of our elderly the only solution we can imagine, since there will not be enough people around to provide more human versions of care? Resolving such and other interconnected, multi-stakeholder challenges with trade-off tensions at their core will be an important set of problems to address; see Figure 1. In this context, a renewed and more widespread interest in systems approaches as a tool for managing such complexity is on the cards. For a brief overview of potentially relevant fields, concepts and tools related to systems approaches, see Box 1.\n\n(Figure 1 caption: The design is based on the ambition that all stakeholders should benefit from the development of this digital center. RWE = real world evidence.)\n\n\n\nIn this article, I define systems approaches as efforts aimed at ‘connecting the pieces of the puzzle’, i.e. at understanding a set of connected parts or subsystems (system components) that influence each other, with an emphasis on the interactions between those parts, and how they contribute to system-level properties. 
System-level properties include emergence, which means that the system displays behaviors that depend on how the system components interact with each other, and robustness, a property that captures the ability of a system to deal with changes in its environment (e.g. living systems have evolved a collection of complementary system motifs that enhance their ability to cope flexibly with changes in food supply). Systems approaches can build on knowledge and tools from a range of fields, such as systems science, complexity theory, computational modeling of complex natural systems (e.g. in ecology and economics), nonlinear systems theory, self-organizing systems, chaos theory, cybernetics, whole systems thinking, general systems theory, and game theory. Introductory texts on some of the most relevant fields, their key concepts and tools, can be found in Goodwin (1999); Hammond (2005); Lazebnik (2002); Powell (2004); Sterling (2003); and Bousquet et al. (2011).

For most of the 19th and 20th centuries, our mindset was preoccupied with certain ideas of ‘development’ and ‘civilization’, with mostly negative views of other lifestyles found in ‘less developed’ areas, and a belief in a core role for new technologies in enabling even more development towards an even better civilization. As such development, spreading globally, led to increasing awareness of the “other side of the coin”, i.e. negative consequences for human and non-human species, this fueled excitement about finding better ways to understand complex systems that involve living species, e.g. how effects related to human development (e.g. pollution, changed environments, increasing density of human populations, waste) affect the health of ecosystems (e.g. lakes undergoing eutrophication based on system shifts, with deadly consequences for the species that used to inhabit that biosphere; Yang et al., 2008).
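The regime shifts mentioned above (e.g. a lake tipping into eutrophication) can be illustrated with a deliberately minimal toy model. The equation and all parameter values below are illustrative assumptions in the spirit of classic lake-nutrient models with alternative stable states, not values taken from the cited studies: nutrient loading `a` enters, losses are proportional to the state `x`, and a saturating recycling term creates two basins of attraction.

```python
def loading_model(x, a, b=0.6, q=8):
    """dx/dt for a toy lake-nutrient system with alternative stable states.
    a: external nutrient loading; b: loss rate; the sigmoidal term stands in
    for internal nutrient recycling (illustrative parameters only)."""
    return a - b * x + x**q / (1.0 + x**q)

def simulate(x0, a, steps=20000, dt=0.01):
    """Simple forward-Euler integration until the state settles."""
    x = x0
    for _ in range(steps):
        x += dt * loading_model(x, a)
    return x

clear = simulate(0.2, a=0.1)         # low loading: settles in the clear-water state
turbid = simulate(clear, a=0.8)      # loading pushed past the tipping point
after_cut = simulate(turbid, a=0.1)  # loading reduced again: the turbid state persists
```

The point of the sketch is hysteresis: once the system has shifted, restoring the original loading (`after_cut`) does not restore the original state, which is one reason regime shifts in such systems are hard to reverse.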
Over time, such ‘ecosystem’-related fields developed the capability to understand recurring principles within that complexity, including the role of connectivity between individual system components (Sterling, 2002). Interestingly, this revealed common patterns found in many complex systems, adding further fuel to the interest in systems approaches as a tool for managing complexity.

The ability to perform experiments, and the use of increasing computational power to develop better in silico models at the systems level, played an important role in this process. However, an early attempt to apply developments in those areas to the molecular networks involved in disease, in the form of ‘systems biology’ (Lazebnik, 2002; Powell, 2004; Spivey, 2004), was slowed by a few fundamental challenges. The effort and time needed to advance our understanding of all relevant system components and their interactions in human health, in sufficient detail to determine the best intervention (i.e. ‘target’) for promoting a transition to a particular health state, is immense, and there is doubt whether that vision can be achieved even with technology advances (e.g. omics technologies that can monitor the state of thousands of molecules in living systems; see below). As a consequence of this ‘cooling down’ on omics-enabled systems biology, many academic researchers in the field have shifted, over the years, to studying simpler systems that are more remote from human complexity, e.g. (populations of) easier-to-study single-cell organisms with simpler genomes and behaviors, while applied research and medical innovation in industry largely focuses on other paradigms for generating starting points for innovation pipelines, e.g. the screening of biological systems that model selected aspects of disease (to find starting points for new therapies).
Note that institutes designed around a long-term investment in systems biology approaches, such as Lee Hood’s Institute for Systems Biology in Seattle, have made considerable contributions to the continued discourse on the need for systems approaches in health innovation, and to the development of guiding principles for P4 Medicine (e.g. Bousquet et al., 2011; Hood et al., 2014; and below). However, this discourse is by now substantially different from the ambitions of the systems biology wave of about 10 years ago, when the community was getting excited about a new ability to ‘know all the parts’ and put the picture together on their interactions.

At the same time, there is increasing recognition that ‘biomarkers’ may become important ‘anchors’ in those complex networks, due to the ability to study their links with medical, economic and other non-biological data related to chronic diseases, including connections between diagnosis and intervention (see below). Scientific discussions related to this shift towards biomarkers (Burns et al., 2013; Poste, 2011) are one of the origins of the proposed platform. Biomarkers may also serve as useful scientific bridges between key stakeholders (Figure 1), e.g. between different intellectual property/innovation domains such as the open science community (whose efforts add information on the role of biomarkers and biomarker-based health state models) vs. proprietary therapeutic assets in pharmaceutical R&D pipelines (e.g. where the mechanism-of-action biology of those assets connects with such biomarkers, and health state biology). Similar issues may occur at interfaces between patient/consumer-centric solutions (e.g. through digital health) and those deployed in hospitals (i.e. for healthcare providers, HCPs), see Figure 1, with their different intellectual property/innovation domains.
Biomarkers, as they contribute to the development of an interdisciplinary understanding of health states across stakeholders, are therefore an important focus of the proposed systems approach.

Of particular interest, from a systems point of view, will be knowledge related to the ability of different kinds of systems to cope with external changes (i.e. system robustness), including pressures outside the normal range of what the system typically encounters (short time scales), or what it encountered during its evolution (longer time scales). In a time of complex interactions between changes in various fields related to chronic diseases, we need to understand more about what makes systems robust despite change, and how the forces that drive change, and their effects, are connected. Kuhn’s thoughts on recurring, cyclic patterns in the history of science, which he called ‘paradigm shifts’ (Kaiser, 2012), including the accumulation of ‘anomalies’ that are inconsistent with the dominant paradigm(s), may be helpful here. More widespread adoption of tools related to systems approaches, outside the existing, rather small group of experts, in areas where theory and practice collide for better learning, will be an enabling development for the proposed platform. In that context, it is important to develop a modeling-based learning process in the public domain, on a neutral platform that involves many stakeholders.


Relationship with Precision, P4 and Systems Medicine

As different aspects of an emerging consensus on how to develop more sustainable health-related systems are discussed in the literature and other media, a variety of terms capturing key elements of the transition are, due to the early stage of the discussion, used with inconsistent meanings, adding to overall confusion.
The terms that try to capture the ambitions of a ‘new health innovation ecosystem’ range from ‘Precision Medicine’ and ‘Personalized Health’ to ‘P4 Medicine’ (P4 because of the four principles starting with ‘p’: predictive, personalized, preventive, participatory) and ‘Systems Medicine’. For a recent overview of this discourse, see Auffray et al., 2016; Bousquet et al., 2011; Flores et al., 2013; Hawgood et al., 2015; Hood et al., 2014; Hood & Price, 2014; Kodrič et al., 2016; Kostic & Phillips, 2016; Scholz, 2015; Wang et al., 2015; and Wilckens, 2016. A comparison with the guiding principles of evidence-based medicine is provided by Beckmann & Lew, 2016. Going forward, I will use the simplified abbreviation ‘PM’, as it captures at least some of the more commonly used terms (i.e. Precision/Personalized/P4 Medicine) in a simple abbreviation, assuming that systems approaches are an important tool on the path to the development of sustainable PM-based systems. The proposed platform is designed in a way that can accommodate the early stage of the emerging consensus in PM, and facilitate its maturation.


Aims of this article

In this article, I aim to contribute to this discourse by 1) discussing potentially reusable, successful paradigms from selected areas of medical innovation, and 2) deriving from them guiding principles for designing a platform that enables multi-stakeholder initiatives, centered on a theory of health states. In terms of interdisciplinary interfaces, the focus is on connections between medicine, biology and economics, initially with applications related to regenerative medicine.
Iterative optimization of the proposed reference health state models would be fueled by linking opportunities related to a) the modernization of diagnosis, b) the ability to capture health state profiles using omics, c) patient-centric approaches based on technology convergence, d) an increasing understanding of the pathobiology, clinical meaning and health-economic aspects of disease progression stages, and e) the design of new interventions, including therapies as well as preventive measures.


Successful paradigms from leading areas of health innovation

Looking across different areas of medicine, we can notice interesting differences, e.g. in sharing culture, commonly applied tools, mindsets and approaches, which affect the translation of advances in knowledge into improved patient outcomes, as well as the generation of new advances that fuel further progress. Here, I would like to highlight a number of successful paradigms with an impact on patient outcomes, and their potential relevance to the above discourse, even outside the problem areas in which they were originally developed.

In discussions of the “valley of death” challenge in health innovation (Butler, 2008; Wehling, 2009), which concerns the problem of translating scientific and technical advances into impact at the level of patient outcomes, beyond time-limited clinical studies, oncology is often mentioned as an area of medicine in which there has been relatively good progress in terms of such translation into regular practice. In this medical specialty, many advances in our increasing scientific understanding of the molecular basis of disease, and of patient heterogeneity, have been translated into solutions that benefit patients with specific tumor profiles. Looking across different areas of oncology, the most successful paradigms that evolved effectively couple the modernization of diagnostics (i.e.
the ability to determine the tumor subtype based on its biological profile) with the use of targeted therapies (designed for a specific tumor type, or a set of tumor types, with a characteristic biological profile). This personalized health paradigm emphasizes understanding patient heterogeneity at the level of biological profiles: because it was possible to link diagnostic capability at the level of tumor-derived DNA with its interpretation in terms of the biology that drives the growth and survival of that type of tumor, impressive improvements in patient outcomes resulted for many tumor types (where both tools converge). However, this paradigm has also raised economic sustainability concerns, as tension increases between stakeholders who a) invest in the development of solutions based on this paradigm, and b) those who need to pay for the healthcare of tumor patient populations, which are increasingly segmented, with many segments associated with relatively high costs (see above, and Koelsch et al., 2013).

In this renowned area of medicine, many innovations based on this paradigm have advanced quite far in the innovation translation pipeline, leading to practical solutions for global deployment, reimbursement in different healthcare systems, education of healthcare providers, and integration into regular care processes.
Considering the effort required for such a level of system-wide change in the real world, those successes are indeed quite impressive, keeping in mind, however, that many areas of medical need remain a considerable challenge in oncology, including the phenomenon of tumor recurrence despite the short- or mid-term effects of targeted therapies.

If we consider extending this paradigm to other areas of medicine, we need to take into account that tumors have many characteristics that are fundamentally different from many common chronic diseases, complicating the application of exact copies of the approach in non-oncology areas, apart from some exceptions, such as diseases with a strong genetic component (which tend to be rare). Therefore, we need to learn how to consider the particular characteristics of a disease, at both the diagnostic and therapeutic level, as we extend the oncology paradigm of personalized health to other indications. This challenge has so far been hard to crack, triggering a ‘lessons learned’ discourse that can be quite healthy in the context of a possible adoption of the proposed platform. Understanding the very slow progression of many common chronic diseases, from different angles, is part of the scientific challenge, as outlined below.

Another area of medicine that has witnessed much progress in terms of developing a modern approach to the convergence of innovation in diagnosis and therapy is AIDS. Being an infectious disease makes it quite different from oncology and most chronic diseases, although aspects of the oncology paradigm of targeted therapy and personalization have been re-used here as well.
Once the AIDS epidemic was recognized as a major health challenge, relatively fast progress in understanding the key characteristics of viral populations, and the biology of their interactions with host (defense) biology, enabled the development of highly personalized combination therapy approaches, depending on the DNA-level composition of the viral population in a specific patient at a particular point in time (Lengauer & Sing, 2006; Lengauer et al., 2014). As in oncology, much of the progress in this area of medicine was catalyzed by technical progress. For example, easier access to relevant omics technology (see below) enables faster, easier and better diagnosis of the state of virus populations, as a basis for therapy personalization. Campaigns for collaborative multi-stakeholder, interdisciplinary solutions for battling this infectious disease have also played an important role in contributing to the relatively fast impact on patient outcomes, although challenges remain, e.g. related to the high costs of many years of combination therapy close to the ‘cutting edge’ of molecular medicine. Interestingly, both areas (oncology and AIDS) have increasingly moved away from the use of single therapies towards a more sophisticated, cutting-edge combination therapy approach that involves the early recognition of disease recurrence. As a consequence, I propose a connected set of successful paradigms from oncology and AIDS as pillars of the platform described below, recognizing that much needs to be learned about how to apply those paradigms to chronic diseases with a more limited genetic contribution.
In that context, a key question will be to find out where the most meaningful, interpretable and actionable diagnostic signals are, to guide our choice of interventions based on the patient profile at a particular stage in disease progression.

Our ability to study, measure and understand complex biological systems has increased with many new tools and methods, although that does not mean that it is easy to put the many different pieces of the puzzle together, in our minds or in computational models. It is certainly more complex than ‘fixing a radio’ (Lazebnik, 2002), although that author’s thoughtful points about unresolved issues in the biomedical research approach, including the lack of a formal language that helps communities connect the pieces in such systems, were indeed very helpful and influential. Enabling technologies in this area include the maturation of our ability to capture the states of biological systems at a more comprehensive level, using genome-wide technologies (or simply ‘omics’). Such omics technologies now exist for many different levels of biological systems, including DNA, its variants, and RNA-, protein- and metabolite-level system dynamics (i.e. genomics, transcriptomics, proteomics, and metabolomics; Spivey, 2004). Depending on the sample we take and how we process it, omics technologies can generate very rich datasets about the ‘expression state’ of thousands of molecules in the systems represented by those samples. However, there are many complex problems in data generation and interpretation as well.
Inferring overall ‘health states’ (see below) from such measurements is possible, but non-trivial and, at present, still resource intensive (Chen et al., 2012).

Around the year 2000, at about the same time as the hype around the sequencing of the human genome and its potential to revolutionize medicine, there was also much excitement about the promise of such omics technologies (Spivey, 2004), leading to thousands of publications with datasets based on human and non-human samples (e.g. from species that are commonly used as preclinical models of human disease). However, most of those datasets represent ‘snapshots’ in time, with unclear positioning in terms of disease progression state, exact cellular composition, and other ‘metadata’ that would help with interpretation and comparison. Now that the first wave of excitement has given way to a second wave that aims to build on lessons learned from the first omics wave, there is increasing awareness of the importance of understanding disease progression, beyond ‘snapshots’ with limited ‘annotation’. This trend is likely to be enabling for the proposed approach, as it helps to connect ‘health states’ in time, with biology, at a comprehensive level. Looking back at how we handled the omics waves could also be tremendously helpful in designing guiding principles for handling technology hypes in general.

Technical advances in a variety of areas, from mobile technology and the widespread use of smartphones, to health-related sensors, machine learning and the digitalization of healthcare, are increasingly producing ‘real world’ impact in chronic diseases, based on convergence between different technology fields, beyond exciting prototypes (Dobkin & Dorsch, 2011; Kvedar et al., 2016). While the more difficult-to-change and highly regulated healthcare and health innovation sectors are expected to develop more slowly than less-regulated industries, e.g.
those that can improve products quickly based on consumer-centric feedback loops, there are emerging paradigms with reusability potential. Patient-facing solutions with interfaces for other stakeholders, including healthcare providers, are one of the fastest-moving areas here.

For example, in respiratory diseases, such solutions have connected improved therapy (e.g. new COPD and asthma drugs) with ‘real world’ data on patient outcomes, collected using mobile technology around ‘smart inhaler’ devices for those drugs, alongside patient-centric views on smartphones and the involvement of healthcare providers or clinical trial teams (Bender et al., 2017; Clift, 2016; Perez, 2015). This smart inhaler paradigm for designing “beyond the pill” solutions appears to provide value to multiple stakeholders: a) patients get better feedback on how they are doing with the inhaler-based therapy, including the aim of preventing stressful exacerbations; b) healthcare providers have more data to optimize care pathways; c) the developers of the relevant drugs get more information on ‘real life’ settings and problems, enabling faster learning; and d) device developers get better feedback on how to optimize their devices in terms of usability, functionality and other health impacts, and on how to connect the engineered systems with other components. Note that the ability to generate such value close to patients’ homes, outside classic healthcare settings (e.g.
hospitals), is a factor driving excitement in the digital health sector, which has identified the management of chronic diseases as a key challenge and opportunity (for a more comprehensive overview, see Kvedar et al., 2016).

In the context of the proposed platform (below), this and similar patient-centric paradigms fill an important void in the current healthcare and health innovation landscape, as they a) add low-cost solutions closer to patients, in their natural environments, minimizing travel to clinics; b) have the potential to contribute diagnostic signals; and c) improve the ability to connect system components across stakeholders, enabling a more data-driven approach to system-level learning.

Improvements related to the direct and indirect effects of chronic disease morbidity (and improvements in terms of healthy aging) at the population level need to be monitored in a reliable manner, to enable learning based on the impact achieved by different types of candidate solutions. The better we can measure impact, the more efficient our learning process. This needs to be based on a trusted methodology that works in a variety of settings, in different countries, to allow fair comparisons. Recent advances in this area include the “global burden of disease” (GBD) methodology developed by the IHME (Institute for Health Metrics and Evaluation), and the framework developed by ICHOM, which establishes a first version of a system for capturing relevant impact. In terms of biomedical innovation, and the key role of clinical studies in validating specific hypotheses in human populations, we can now capture a diversity of patient outcomes, including QoL (quality of life; Guyatt et al., 1993; Norman et al., 2003) and health-related functions such as mobility (an emerging area enabled by sensors that record different kinds of movement patterns, e.g. accelerometry).
At the economic sustainability level, measures such as the QALY (quality-adjusted life year) have added the much-debated ability to differentiate between years of life extension with high and low QoL when judging the value provided by innovation (Shiroiwa et al., 2010). As a consequence, we now have a basic arsenal of tools to monitor the short- and long-term results of the solutions we develop, at different levels: in clinical trials, in regular clinical care, and outside clinical settings. With this, we have an improved ability to evaluate the impact of system-wide solutions that better connect the fragments, e.g. across the successful paradigms described in this article. However, this does not mean that the current toolbox for measuring outcomes across different settings is perfect and needs no further optimization. It is a very complex topic that will certainly require more innovation and adjustment down the road. At the same time, we can start to pragmatically use what we already have. In that context, multi-morbidity, in a landscape of increasing chronic disease burden, is one of the areas that may benefit from increased attention, with regard to both the capture of impact and the combination of several paradigms in relevant solutions.

An intensive, open debate on the best approach for a particular problem, and open access to the data that can help to select among alternative approaches, are features associated with good science.
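As an aside, the QALY weighting mentioned above reduces to simple arithmetic: each period of life is multiplied by a QoL utility weight between 0 (a state judged equivalent to death) and 1 (full health), and the products are summed. A minimal sketch, with hypothetical numbers purely for illustration (not taken from Shiroiwa et al., 2010):

```python
def qalys(years_with_qol):
    """Sum of (duration in years x QoL utility weight); a weight of 1.0
    means full health, 0.0 a state judged equivalent to death."""
    return sum(years * weight for years, weight in years_with_qol)

# Hypothetical comparison of two interventions, each extending life by 6 years:
# A: 6 years at a utility weight of 0.9.
# B: 2 years at 0.9, then 4 years at 0.3 (low QoL).
qaly_a = qalys([(6, 0.9)])            # 5.4 QALYs
qaly_b = qalys([(2, 0.9), (4, 0.3)])  # 1.8 + 1.2 = 3.0 QALYs
```

This is what makes the measure debated: both interventions add six years, but the QALY framework values them very differently, and cost-effectiveness is then often expressed as cost per QALY gained.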
Since the successful paradigm of open source software in informatics has infected an increasing number of areas related to health innovation, including bioinformatics, the screening of chemical libraries, and the generation of tools for research on new kinds of drug targets, an intense debate has developed on extending those fragmented experiences in particular areas of science into a more comprehensive, interdisciplinary ‘open science’ approach, one that includes innovative models for enabling a faster translation of research results to patients (Laverty et al., 2015; Low et al., 2016; Munos, 2010). Indications in which innovative approaches at a similar level of complexity were tested ‘end-to-end’ have added further fuel to the debate, with the malaria field taking a prominent position among those translational pioneers (Wells et al., 2016). Relevant aspects include a) the openness of raw data, code and algorithms (avoiding ‘black box’ solutions), which applies to computational as well as experimental protocols; b) ‘reproducible research’ for enhanced transparency and reproducibility between research groups; and c) the sharing of data, insights, knowledge and tools, based on initiatives that provide some structure to data sharing (e.g. Dataverse). Bernard Munos (e.g. Munos, 2010) has proposed that the increasing adoption of such approaches is linked with sustainability in health innovation, considering biomedical complexity. A recent conference in Oxford (“Drug discovery: creating a new ecosystem”, 2–3 June 2016) gathered a variety of pioneers in this area.
However, while there are many experiences in this growing global community that can help in selecting a particular open science model for a given medical problem, we are still in the earlier stages of that learning process, in terms of tackling the rather difficult ‘valley of death’ problem of translation towards patients.

The digitalization of healthcare, as well as technology convergence in non-healthcare areas, is resulting in an increasingly diverse and fragmented landscape of data related to different aspects of health, from hospital records, to claims for reimbursement, to fitness device data, and data produced by patient-centric solutions for chronic diseases as outlined above (Strategy& report, 2015). Such ‘real world evidence’ is often contrasted with data generated in controlled clinical studies, such as randomized clinical trials that test specific medical hypotheses. Important differences exist, for example, in our ability to draw conclusions from either type of data, based on solid statistics, methodology and theory. In that context, it can be helpful to develop improved capabilities for dealing with real world evidence in a way that is consistent with ‘good science’ principles, in collaboration with disciplines that have a history in this area, e.g. epidemiology and HEOR (health economics and outcomes research). The example of the “Global Burden of Disease” study of the IHME (Lozano et al., 2012), which succeeded in integrating thousands of different real world evidence data sources into an over-arching model that enables a number of analyses, could be helpful in that context.

Considering the important role of economics in the healthcare/health innovation sustainability discussion (with an assumption of limited resources), we can observe that the classic economic model of reimbursement for healthcare actions, based on a “fee-for-service” concept, is gradually being replaced by a potentially more sustainable “value-based care” model (EFPIA, 2016).
This model is based on the following principles: (i) coordinating all the elements of the care continuum around patients; (ii) a shared commitment of all healthcare system players to the outcomes that matter to patients; (iii) generating and tracking data on those outcomes; (iv) benchmarking performance transparently to inform management decisions; and (v) paying for outcomes rather than for inputs and processes. However, this is a complex structural change and therefore unlikely to be an easy transition, so it may take decades before the transformation reaches all aspects of health-related systems globally (with progress being tracked by programs such as the Pharmaceutical Outcomes Research & Policy Program at the University of Washington). In the meantime, pioneering institutions around the world are moving from pilots to organizational change, to become leaders in this important transition, increasing their fitness for the future at an earlier time point, when changes are still easier to manage and resource. One of the exciting opportunities in this area could be an improved ability to align incentives across stakeholders, based on the above “value-based care” principles, as a basis for more collaborative solution development.


Initial focus indications

Innovation related to Alzheimer’s disease, the neurological disease taken as an exemplar below, is interesting in the context of the proposed systems approach, and platform design, for several reasons:

It is a common chronic disease with a considerable burden on multiple stakeholders, including patients, families, healthcare resources, social care, elderly care, and other areas of society. This burden tends to rise in societies with longer life expectancies (Braak & Del Tredici, 2015), and is therefore linked with aging populations.

It features the typical slow and ‘silent’ disease progression of many chronic diseases (i.e. in the sporadic forms of the disease), as tissue damage accumulates over decades in the brain (e.g.
as described by Braak & Del Tredici, 2015, at the level of pathohistology), with a patient-specific speed of progression. There are known characteristics of subpopulations of patients with faster progression, e.g. based on genetic predisposition (presenilin families with early onset, APOE4 carriers with medium onset, others with late onset).

Diagnostic complexity: the field has experience with a very diverse collection of diagnostic tools, including those related to the detection of cognitive decline, molecular biomarkers, imaging biomarkers, and histopathology. This could facilitate the iterative optimization of health state models (as proposed below).

Culture of the field: many years of disappointing results from clinical trials have resulted in a healthy ‘lessons learned’ discourse across disciplines and stakeholders, and the field has a history of using computational modeling in the context of biomarkers for disease progression (Haas et al., 2016).

An interesting side effect of this history of disappointing or failed clinical trials is that it forces the biomedical research community to reconsider its approach and collaborative paradigms for the sake of patients, prompting a healthy discussion on advancing the biomedical research community culture, with possible effects in other indications as well, where reusable learnings are generated. For example, this field has suffered from a strong bias towards a small set of hypotheses and paradigms, based on particular types of evidence, as a basis for designing and developing therapeutic solutions. The excitement around those hypotheses has resulted in lackluster discussion of findings that did not fit this ‘group think’, including alternative hypotheses and solutions. Compared to the discourse in this indication in the 1990s, we can now notice an increasing readiness to learn from this experience.

This neurological disease is a typical chronic disease in the pathobiology sense discussed below, i.e.
there is slowly accumulating tissue damage outpacing regenerative mechanisms, which results in a progressive decline of tissue functions, which then shows up as increasingly severe clinical symptoms, and patient outcomes are impacted over time. The need to better recognize early stages of the disease (Braak & Del Tredici, 2015), together with innovation in terms of interventions that target the pathobiology of exactly those stages, is now widely seen as the most promising approach for enabling translational progress in the field. This is likely to lead to sophisticated methods for combining diagnostic signals across many system levels, including the cognitive, molecular, imaging and other aspects described above, requiring a platform for improving public reference versions of the relevant computational models.

At the same time, there has been good progress in areas related to digital health, e.g. in the early detection of possible cognitive decline using speech patterns (Morris & Lundell, 2003), which may add a cheap and easy-to-deploy screening method to the ‘early stage’ diagnostics innovation. The Alzheimer’s exemplar may also help in exploring connections between the various aspects of the proposed systems approach and platform. Below, I will propose a road map for developing such a platform, including starting points derived from Alzheimer’s disease.

Increasing readiness to design innovative clinical studies is also visible in this indication, related to particular subpopulations and health states, e.g. the APOE4 subpopulation, which carries a higher risk of fast progression towards more severe health states, compared to sporadic cases without such genetic risk factors (Mahley et al., 2009). This may help to close important gaps in the data landscape that prevent progress.


Tissue and organ regeneration principles
We know, based on knowledge accumulated in regenerative biology and medicine, that a) many animals have an amazing ability to regenerate tissues, organs and limbs after injury or other damage, and therefore to recover in terms of function (i.e. health); and b) there is considerable evolutionary conservation of the underlying biology between humans and non-human vertebrate animals with high regenerative capacity. Discoveries in this area over the past decades have nurtured the hope that a deeper understanding of the biomolecular systems involved in tissue, organ and limb regeneration will lead to improved therapeutic and diagnostic solutions for areas of medical need in which improved regeneration could contribute to better outcomes. In many common chronic diseases we can observe, during disease progression in the patient over time, a slowly progressing imbalance between accumulating tissue damage and the regenerative mechanisms activated in that tissue as a response to this damage. I will illustrate this principle using several examples below and, in the spirit of the proposed ‘systems approach’ to chronic diseases, highlight connected aspects of biology, medicine and economics that may enable the development of better solutions.

Chronic liver disease. The liver is a very important organ, contributing to overall health with its many functions related to homeostasis (the ability to keep our physiological parameters within a healthy range). Depending on our lifestyle and other factors, such as our genetic profile, liver tissue can be increasingly damaged by different insults, including high levels of alcohol consumption and an unbalanced (Western) diet (leading to steatohepatitis). This leads to reduced liver function, with an impact on the body’s ability to maintain homeostasis, and therefore health.
On the other hand, the liver is also known for its regenerative capacity, a notion further reinforced by more recent observations of liver regeneration after the application of antiviral therapies (Zois et al., 2008). In the early stages of such slowly (and silently) accumulating organ damage, the liver may still be able to cope rather well with the repeated insults and maintain most of its important functions, and therefore overall health. Over time, however, the accumulating damage outweighs the ability to regenerate and maintain organ function, shifting the system towards an unhealthier balance between damage and regeneration. As liver damage increases and liver function decreases, first clinical symptoms may appear that are often ignored, for a variety of reasons. With a diagnosis of ‘late stage liver disease with advanced liver cirrhosis and portal hypertension’ by a relevant medical specialist (i.e. a hepatologist), a comprehensive reaction of the healthcare system (i.e. a care pathway) with diagnostic and therapeutic aspects is triggered, based on medical guidelines and the current understanding of disease progression-related risks. While some patients’ lives can be extended through liver transplantation, the number of patients who die from liver disease on the transplantation waiting list, waiting for such relief, is unfortunately rising. This adds fuel to the discussion on the need to develop new solutions for this growing medical problem.
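The damage/regeneration balance described above can be illustrated with a toy simulation. This is a minimal sketch, not a calibrated model: the insult, regeneration and erosion rates are arbitrary assumptions chosen only to show the two regimes, i.e. a system that settles at a tolerable damage level while regenerative capacity holds, versus runaway damage once that capacity erodes.

```python
def simulate_liver_damage(insult=1.0, regen_rate=0.2, erosion=0.0, years=40):
    """Toy model: each year a fixed insult adds damage, and regeneration
    removes a fraction of the accumulated damage. Optionally, regenerative
    capacity itself erodes in proportion to current damage."""
    damage, trajectory = 0.0, []
    for _ in range(years):
        damage += insult                      # repeated insults (e.g. alcohol, diet)
        damage -= regen_rate * damage         # regenerative response
        regen_rate = max(0.0, regen_rate - erosion * damage)  # capacity erosion
        trajectory.append(damage)
    return trajectory

# With intact regeneration the system settles near insult/regen_rate:
stable = simulate_liver_damage()              # plateaus around 4.0
# With eroding regenerative capacity, damage accelerates over time:
progressive = simulate_liver_damage(erosion=0.005)
```

The key qualitative point is the shift described in the text: once accumulating damage starts to degrade the regenerative response itself, the balance tips and the trajectory leaves the stable plateau.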
What we know so far about the different stages of chronic liver disease in such patients can be summarized in the following simplified disease progression model, which includes a heterogeneous mix of causal factors involved in creating the liver tissue damage:

[Figure: simplified disease progression model of chronic liver disease]

Here, each health state is a stage in disease progression characterized by a combination of features that can affect diagnosis, assigning a particular patient to an earlier or later stage in the disease progression (with many consequences for the clinical management or further diagnostic monitoring of the patient). Based on current knowledge it seems that, despite the heterogeneity of causal factors involved in creating the relevant tissue damage, there is a clear response pattern of this organ, with limited variation. In other words, once the balance between regeneration and damage has passed a certain level, a rather fixed pattern of progression towards later stages is observed. Some variation is noticed, however, between patients in the time spent between such stages, i.e. we can distinguish ‘fast’ or ‘slow’ progression relative to average times between stages. As such chronic liver disease progression stages have multiple links with the overall health of the patient, in the context of the proposed health state modeling framework they can help to define health states, considering that the same patient may also display other co-morbidities, including other chronic diseases with their own progression stages.

In a similar manner, other chronic diseases result in slowly accumulating tissue damage over the years, until homeostasis and tissue function are affected to a degree that makes it very difficult to get the patient back to a healthy state with a high QoL. The liver disease example may therefore help us define relevant paradigms for understanding health states that consider co-morbidity.
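The distinction between ‘fast’ and ‘slow’ progressors relative to average times between stages can be made concrete with a few lines of code. This is a hypothetical illustration: the patient identifiers and dwell times are invented, and real staging would use richer longitudinal data.

```python
from statistics import median

# Hypothetical observed times (in years) that individual patients spent
# in one progression stage before transitioning to the next stage.
years_in_stage = {
    "patient_A": 3.0,
    "patient_B": 9.5,
    "patient_C": 6.0,
    "patient_D": 2.5,
}

def classify_progression(times):
    """Label each patient 'fast' or 'slow' relative to the cohort median
    dwell time in this stage (shorter dwell time = faster progression)."""
    m = median(times.values())
    return {p: ("fast" if t < m else "slow") for p, t in times.items()}

labels = classify_progression(years_in_stage)
# patient_A and patient_D progress faster than the cohort median
```

In the proposed framework, such labels would be one input among many for refining transition probabilities per subpopulation, rather than a diagnostic in themselves.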
Unfortunately, such accumulating tissue damage is currently not easy to detect in earlier stages in real world settings, which require a high standard of patient comfort, low cost, ease of use and risk mitigation. For example, repeated invasive sampling of biopsies for assessing the condition of liver tissue during progression is usually not practiced, due to the clinical risks associated with taking biopsies. We therefore need to develop new solutions that will increase our ability to recognize not only health states and progression stages that have a clear clinical profile, based on currently available tools in the healthcare system, but also help us understand and recognize the health states that lie between very healthy states and the easier-to-recognize advanced stages of chronic diseases. This is a general principle and a grand challenge that no single discipline or stakeholder can fully address on their own.

With this context, what are some of the more interesting interdisciplinary connections involving biology, medicine and economics in developing such new solutions?

The NAFLD to NASH transition: while many patients have NAFLD, i.e. fat accumulation in the liver that is not likely to be caused by excess consumption of alcohol, but rather by diet and other lifestyle factors (with interesting links to obesity and metabolic syndrome), only a subset of those patients will transition into the more serious NASH state in liver disease progression (Calzadilla Bertot & Adams, 2016). In the NASH stage there is more serious tissue damage, possibly due to an overshooting reaction of the immune system that is linked with the regenerative response to liver tissue damage. The transition from NAFLD to NASH is loaded with many questions, including the biology of this transition, the best ways to recognize it as early as possible in patients (e.g.
Hannah & Harrison, 2016), and multiple economic consequences, such as the most efficient use of healthcare resources. While much progress has been made in understanding those aspects, a more collaborative approach across stakeholders is gathering momentum, including projects managed under the IMI umbrella (a framework for multi-stakeholder collaborations related to health, with public-private partnership at its core).

Effect of co-morbidities: in patients with chronic liver disease, including earlier stages, what are the most relevant co-morbidities that will influence outcomes? While it is easy to see how factors with a known effect on the regeneration/damage balance in the liver, e.g. exposure to substances with liver toxicity, will be relevant, epidemiological studies of ‘real world data’ collected on such patients may reveal additional factors that have no known liver tissue relevance. For example, such patients may regularly take drugs (e.g. metformin) that are related to co-morbidities (e.g. diabetes) and interact in a complex manner with the liver disease progression system (e.g. liver functions related to glucose, He et al., 2014). Statistically sound observations in such studies could then lead to investigations into the biology of those drugs in the context of liver disease, with potential effects on clinical management, outcomes and economics that would warrant interdisciplinary collaboration at such interfaces.

Refining the disease progression model: while it is encouraging to know that there is a highly defined pattern of disease progression in liver diseases, despite the diversity of factors involved in causing tissue damage, there may be patient heterogeneity that is currently under-appreciated.
For example, the variability observed in terms of slow or fast progression between stages, including the risk of developing NASH from NAFLD, indicates that we need to learn more about patient heterogeneity and the risk of progression, at the biology/medicine interface.

Skin ulcers. The above discussion of chronic liver disease implies a fundamental challenge shared by other chronic diseases: we are dealing with slow dynamics in the recognizable transitions between distinct health states. This means that our iterative learning cycle of designing a study, implementing it, sharing the results and designing the next study, combined with the time needed to observe sufficient change between health states, leads to rather long studies that stretch over many years. Fast learning based on short iterative cycles is therefore difficult, except for problems that can be addressed on shorter timescales. Together with an overall tendency towards short-term approaches, including funding, this means that progress on understanding the above problems, including health state refinement, will be hard to accelerate.

With this in mind, let us explore the possibility of finding complementary medical problems that relate to a) a similar interdisciplinary complexity of chronic diseases and b) learning related to health states, c) which would allow us to develop a fast-learning, collaborative network on top of short iterative study cycles and d) a systems approach that facilitates such interdisciplinary exchange. Our example of such a medical problem is again related to the principle of the fateful balance between slowly accumulating tissue damage and regeneration, and to scientific questions in the field of regenerative medicine. When our skin tissues are healthy and have a regenerative capacity within the normal range, we have all experienced how superficially visible wounds usually heal within a few weeks or even days.
Once we look a bit closer at this area of tissue regeneration, we notice variation in the speed and quality of healing, depending on a variety of factors, such as wound size, shape, depth, the use of dressings to promote healing and prevent infection, infection management and so on. In addition, we may have heard about bad outcomes related to wound infection that led to amputations. Based on that common experience, most of us are not used to thinking of skin wounds as a major medical challenge in the context of the chronic disease challenge. However, if we look even closer, we find that there are many patients with one or more chronic diseases who have considerable problems due to disturbed healing, with surprisingly harsh outcomes linked to how their wounds were managed (Hunt et al., 2011; Park et al., 2013). But what is the link between the chronic disease challenge and this medical problem? And how does it relate to our discussion of systems approaches?

Diabetes complications: patients with diabetes, in later stages of disease progression, when slowly accumulating tissue damage has reached an advanced stage, may have to deal with a variety of clinical complications affecting organs such as the eyes, kidneys and feet. Complications of the foot typically present clinically, to a well-trained expert such as a specialized wound nurse or physician, as a ‘diabetic foot ulcer’, a type of non-healing, chronic skin wound (Driver et al., 2010; Lim et al., 2017). Unfortunately, many patients carry such dangerous wounds for too long, and therefore have to face unfavorable outcomes, when it is too late to manage the problem with currently available tools.
Considering the progress we have made in terms of care coordination and clinical innovation in diabetes, this is one of the remaining problems related to diabetic complications.

Other chronic diseases that affect skin regeneration: to further complicate matters, other chronic diseases also have an effect on tissue regeneration in the skin after wounding, including venous disease (leading to ‘venous leg ulcers’) (Margolis et al., 2002). Proper regeneration of the skin with a full restoration of tissue function (i.e. avoiding a scar with reduced function) requires many cells to do the right thing at the right time in the right context. Once damage has occurred, a wave of signals goes through the tissue and triggers that complex and dynamic regenerative response by many cells, including resident cells that go through all kinds of changes, as well as invading cells from the immune system that arrive on the scene. As a result, a detailed molecular understanding of such skin regeneration is rather difficult to obtain, complicating efforts to develop new solutions based on that knowledge.

Towards systems approaches: I mentioned that many chronic diseases, such as liver disease, are difficult to study in terms of disease progression because of the long timespans involved, which slow down the data-driven, iterative learning cycle. With regard to skin regeneration problems in the context of diabetes and other chronic diseases, the situation is a bit different, because a) changes related to outcomes can be measured in weeks and months, rather than years; b) the fluid produced by open wounds enables omics-type profiling close to the biology of tissue regeneration vs. damage; and c) the most affected tissue is relatively accessible, and easy to monitor, compared to tissues located further inside the body. The combination of those aspects could allow a fast-learning systems approach that combines the paradigms described above.
This could then allow connections to be made between:

Biology of ‘wound states’, e.g. improving our understanding of the balance between tissue regeneration and damage, and how to shift it towards more regeneration, with potential benefits in other chronic diseases

Clinical profile of ‘wound states’, e.g. when to intervene to prevent bad outcomes, and how best to integrate new diagnostics into care processes

Economic profile of ‘wound states’, e.g. how to achieve good patient outcomes and economic efficiency, considering the secondary and tertiary effects of bad-outcome wounds even outside the utilization of healthcare resources

Building a capability for fast learning based on short iterative cycles that enable data-driven approaches, including machine learning and expert-based learning, in the context of an interdisciplinary collaborative network, therefore seems an attractive opportunity in this area of medical need.

Proposed platform

I have mentioned the need to a) modernize diagnostics, extending the paradigms developed in leading areas of medical innovation, such as oncology and AIDS; b) connect better across system components that cross disciplines, e.g. medicine, biology and the economics of health, for instance in the context of more patient-centric connected health solutions; and c) measure the impact of new PM solutions at that level, in a way that reflects the most relevant outcomes, enabling feedback loops that facilitate faster learning at the systems level. But how can we best develop an enabling platform for such systems approaches based on those paradigms, one that enables community-based learning, using open science principles, as well as feedback loops that involve real world evidence?
Let us start with the center of the proposed platform, the health states, and their computational modeling across medicine, biology and economics.

Based on the knowledge we have accumulated on disease progression in Alzheimer’s disease and chronic liver disease, I propose to aggregate the medical, biological and economic knowledge across these two indications in a way that allows the extraction of computational and theoretical platform components, as described below (with an eye on later reusability). In a next step, we would test the extension of such a two-indication platform to additional indications (including wound states reflecting skin regeneration), before exploring the development of an even more comprehensive platform that captures all frequent chronic diseases, their progression stages, and their complexity at the level of multi-morbidity for a particular patient. While such a comprehensive platform could have a variety of applications, its use in the design of sequential combinations of interventions, where timing depends on the recognition of a particular health state, is emphasized. The platform would build on:

A modernization of diagnosis extending (and adapting) paradigms from oncology and AIDS, in parallel with innovation on interventions that builds on diagnostic innovation

The ability of omics profiling technologies to capture aspects of health states at the genomic level

Promoting the design of patient-centric connected health solutions based on technology convergence that provide value to multiple stakeholders, similar to the ‘smart inhaler’ paradigm

Building on our increased ability to measure outcomes, morbidity and health at the population level, to understand the impact of innovation

Enabling faster community-based learning through a culture of sharing based on open science, FAIR and reproducible research principles

Developing solid methodology for dealing with messy ‘real world’ data, as a complement to more controlled clinical trial data, to understand
the patient journey

Facilitating business model innovation that aligns incentives across stakeholders, supporting the economic success of more balanced, sustainable approaches

Extending systems approaches capability through education and community building; see Figure 1 for scope and ambition (health state models may turn out to be only one component in the digital center between stakeholders, but are nonetheless a pragmatic focus for learning how to better manage complexity)

If we want to build an initial platform that captures current knowledge in both Alzheimer’s disease and chronic liver diseases, how could we get started, based on what exists already? In both indications, we have a relatively good understanding of disease progression states from a medical, biological and economic point of view. This ranges from earlier stages of disease (when symptoms tend to be mild, with minimal impact on QoL and healthcare resource usage) to more advanced stages (when symptoms are more severe, with a more dramatic effect on QoL and healthcare resources). At the disease progression level, we can use the following starting points: a) the Alzheimer’s disease progression theory of Braak & Del Tredici (2015); b) the review by Pellicoro et al. (2014), summarizing chronic liver disease progression. At the computational level, we can use the following starting points:

Modeling health states. In Alzheimer’s disease, there is already a rich history of computational modeling of distinct health states in the context of disease progression and disease severity, as reviewed by Green (2007). In the words of the author, “Markov models may be particularly useful when a decision problem involves clinical changes, across discrete health states, that are ongoing over time”, with such models “representing the course of a disease in terms of mutually exclusive ‘health states’ and the transitions among them”.
For example, a 6-month cycle has been used to calculate transition probabilities between states, using clinical trial and epidemiological data. Such computational models are used to assess the value provided by medical interventions, e.g. those that result in a delay of progression towards severe health states. More recent updates on the usage of such models, including additional applications, are provided by Green & Zhang (2016). Based on the small number of health states modeled so far, integration of additional data could lead to a population of alternative models with different numbers of health states, initially for Alzheimer’s disease only. It may also be necessary to extend the simplistic Markov modeling approach, e.g. considering progress made in projects like the 100K cohort (Hood & Price, 2014) or the Google Baseline study (Piller, 2015). Once an initial version of the computational platform is developed, it could be extended to chronic liver diseases.

Semantic framework. IMI, a renowned multi-stakeholder platform for the development of pre-competitive assets with long-term effects related to medicine, has developed starting points in this area. This includes the Aetionomy project, which enables the representation of complex networks of information, including cause-and-effect relationships, based on the BEL language and the Semantic Web data format RDF.
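The Markov health-state modeling described above can be sketched in a few lines. This is a minimal illustration of the mechanics only: the three health states and the per-cycle (e.g. 6-month) transition probabilities are hypothetical numbers, not estimates from any trial or cohort.

```python
# Hypothetical per-cycle transition probabilities between mutually
# exclusive health states (each row sums to 1); 'severe' is absorbing.
STATES = ["mild", "moderate", "severe"]
TRANSITIONS = {
    "mild":     {"mild": 0.85, "moderate": 0.12, "severe": 0.03},
    "moderate": {"mild": 0.00, "moderate": 0.80, "severe": 0.20},
    "severe":   {"mild": 0.00, "moderate": 0.00, "severe": 1.00},
}

def propagate(cohort, cycles):
    """Propagate a cohort distribution over health states through a
    number of cycles (e.g. 6-month periods)."""
    for _ in range(cycles):
        nxt = {s: 0.0 for s in STATES}
        for src, share in cohort.items():
            for dst, p in TRANSITIONS[src].items():
                nxt[dst] += share * p
        cohort = nxt
    return cohort

# Start with everyone in the 'mild' state and run 10 cycles (5 years):
dist = propagate({"mild": 1.0, "moderate": 0.0, "severe": 0.0}, 10)
```

An intervention that delays progression would be represented as a second transition matrix with smaller off-diagonal probabilities; the value assessment then compares time spent in less severe states (with the associated costs and QoL weights) between the two matrices.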
Braak and Del Tredici’s theory on the connections between pathology and clinical symptoms in Alzheimer’s disease (Braak & Del Tredici, 2015) can be generalized within the platform, in areas linked to the above health state models, to capture a slowly progressive tissue damage pathobiology, with regenerative biology context, affecting the dynamics of progression.

The stage 1 platform would therefore capture disease progression in two diseases, using Markov models from Alzheimer’s disease as a starting point, with knowledge from liver diseases, a very different indication with different characteristics, enabling the reusability-centric design of the health state modeling and semantic framework. Although it would be desirable to then add new knowledge from both indications to the platform, their slowly progressing nature will limit the speed of such iterative optimization. To add faster-learning extensions that benefit the platform development effort, additional areas of medicine will therefore provide a natural focus for stage 2.

As discussed, skin regeneration as observed in non-healing wounds (skin ulcers) is a promising candidate for the stage 2 medical focus areas. Based on existing knowledge of proteins found in wound fluid, a fluid similar to but also distinct from blood, panels of candidate protein-level biomarkers could be designed that capture ‘wound states’. Wound outcomes linked with such wound states and various ‘standard-of-care’ interventions could be tracked using nurse-centric digital health tools that describe progress towards wound closure and healing (i.e. wound outcomes). Such tools could also capture features important for the early detection of potential complications, and facilitate the involvement of relevant experts based on wound states. Smart dressings for wounds that can benefit from negative pressure therapy are under development within the Horizon 2020 program in Europe, potentially complementing the protein-based monitoring of wound states.
In addition, new technologies are available for non-invasive assessment of skin architecture, based on confocal microscopy (Lange-Asschenfeldt et al., 2012). When connected into a comprehensive solution for tracking wounds in wound-nurse-type settings, such a connected health solution could not only generate valuable real world evidence on a diversity of ulcers, but also advance the ability to recognize and model health states as discussed above. For example, would the platform enable comparisons between innovation efforts in different countries and healthcare settings in a way that is more difficult at present? Testing the models’ ability to capture patient journeys in different populations, countries and healthcare settings could help to refine them in a way that generates a second generation of reference models with an improved ability to capture ‘real world’ variation, where it matters. The more data become accessible for comparison and optimization, including new data types that extend our knowledge of the biology of different states, as well as diagnostic advances that change the health state recognition part of the models themselves, the more useful such health state models will become with each iteration of improvement through data-based learning. Connecting them early on with tools that facilitate such iterative optimization will be crucial, including algorithms for the integration of multimodal diagnostic data related to such health states. Special attention should be given to ensuring that different patient journey clusters are represented, if differences between those clusters can affect the definition or interpretation of health states, and the transitions between them.

Additional indications. Beyond skin regeneration, additional areas of medicine that are currently not yet identified may become a focus in stage 2, if they present an opportunity to add faster-learning cycles to the health state learning process within the platform.
Candidate datasets include longitudinal observational human studies that generate data aimed at learning health states and state transitions, including those in early stages of disease, such as the 100K project (Hood & Price, 2014) or the Google Baseline study (Piller, 2015). Other reusable aspects of the platform may also require comparisons beyond those three initial indications, e.g. to design a reusable semantic framework around the health state modeling effort.

Once stage 2 is mature enough, the possibility of extension towards multi-morbidity in the chronic disease progression space may present itself. Patterns of multi-morbidity frequently observed in epidemiological data could provide starting points for interdisciplinary focus, in terms of ‘real life’ health states composed of several chronic disease aspects. Limited resources in elderly care settings could provide an economic aspect, linked to an existing system of cross-indication progression monitoring, including tools for QoL and risk indicators in those settings.

At that stage, we may be able to approach a more generic and more patient-centric theory of chronic disease progression and health states, across indications. Convergence among the developments discussed above would facilitate its maturation. In particular, diagnostic innovation in slowly progressing chronic diseases would improve our ability to accurately diagnose different stages and variants of disease, including an improved understanding of earlier stages of disease that are characterized by a clinically ‘silent’ progression of tissue damage that increasingly outpaces regenerative, damage control or repair mechanisms.
In reference models that capture average disease progression in defined populations, transitions between the states in those models would be calculated using a variety of data, from different diseases and different types of diagnostic evidence, to capture the characteristics of that population.

Once we have enough information about the most common health states and their reliable recognition, this new capability can be connected with machine learning capabilities that help to refine such models in various settings, based on an initial reference model and some starting conditions that, over time, increasingly fit the features of that particular setting (e.g. the educational and expectation profile of the involved participants, including healthcare providers and patients, as well as established processes, habits and culture). Optimization would be achieved through feedback loops based on measured outcomes, including patient outcomes, as well as healthcare utilization and other economic aspects, at the population level. Reference clusters of patient journeys could be refined, e.g. by adjusting the weights given to particular features in the clustering, filtering for the most predictive features, and including new features not covered in the reference model. Such adaptations should then find their way back into the next generation of reference models as well, so that they become easier to adapt to different settings in the next round of optimization.

Applications of the platform

In later phases of stage 2, and in stage 3, various applications of the platform can be envisioned. The examples below are meant as illustrations, to encourage participation.

When single interventions are not sufficient to achieve the desired outcomes, combinations of interventions, either delivered at the same time or in a particular sequence over time (that considers an increasingly refined health state recognition capability), will be an interesting option to consider in stage 3.
This could mean that we will be increasingly able to iteratively approach a near-optimal personalized solution for a particular patient, at a particular time in their disease progression, extending the paradigms learned in oncology and AIDS to more indications, and into longitudinal data space. In principle, such value need not be limited to therapeutic interventions linked with health states; it could accommodate preventative interventions and even monitoring actions. For example, intervention ‘IN1’, designed for health state ‘HS1’, would lead to a subsequent health state ‘HS2’, which triggers intervention ‘IN2’, and so on. After ‘HS1’ there may be a branching point that, in some patients, leads to another state, ‘HS3’, which does not match well with intervention ‘IN2’, but requires monitoring that, once state ‘HS4’ is reached, triggers intervention ‘IN3’. Ideally, the diagnostic recognition of states HS1-4 would be achieved with a single, reusable diagnostic procedure that is able to differentiate those health states based on a non-invasive, low-risk approach (Figure 2).

Health states (HS1-4), which match state definitions in probabilistic Markov models, are connected with interventions (IN1-3), defining the time aspect of the PM vision (“the right intervention for the right patient, at the right time”). Each health state would carry annotation in terms of pathobiology, health economics and clinical picture.

Based on those reference models, user-friendly, patient-facing solutions could be developed that compare the individual’s profile, at a particular time, with the most relevant reference model. If the individual’s profile indicates faster-than-average progression along one or more disease progression paths, a number of options could be explored, with tools that help monitor the effect on progression at the level of health states.
For example, the beneficial effects of lifestyle changes or therapeutic interventions on that profile could encourage continuation of, and compliance with, relevant guidance. Gamification-related approaches could be useful in exploring aspects related to user engagement, drawing on expertise in user-centric design.

Similar usage may apply to clinical studies, if they span follow-up periods that contain health state transitions. In addition, an increased ability to recognize distinct health states could generate hypotheses for new study tools linked with well-established measures and outcomes.

Coordination of care, including the actions of different healthcare providers, as well as social/elderly care, is a very complex topic. The proposed platform could facilitate such efforts by simplifying the recognition of health states that require actions by specific components in the system, at a particular time, in order to then follow the effects of those actions on health state transitions more closely. For example, it could facilitate the early recognition of worsening condition, complications and other signals that require attention, and their collaborative interdisciplinary management.

While (health)care processes are often quite stable over time, once the team and process landscape is up and running, there are periods when such process landscapes come under discussion, to optimize particular outcomes within economic constraints, e.g. for certain patient clusters that show high costs but below-average outcomes. Teams with a history of ‘care pathway redesign’ could start to engage with the proposed platform as early as the late phases of stage 2. If it is possible to improve the recognition of clinically and economically relevant health states in such settings, care pathway redesign projects would likely be able to extract value from such insights, as they try to determine the best time and mode to intervene in specific types of patient journeys.
It could also help them explore a large range of options for care pathway changes, e.g. in a visual form, based on connected health state diagrams, that supports time-efficient discussion and consensus formation in complex, interdisciplinary groups. Conversely, such collaboration would give care teams early influence on the design of the proposed platform, which could become increasingly impactful over time as health state recognition (via diagnostic tools) and modeling add value to care pathway redesign projects. Such collaborations would therefore benefit both sides: the developers of the proposed platform as well as the teams involved in care pathway redesign. This includes the educational aspects required for such development, with the welcome side effect that the platform community gets anchored in ‘real life’ settings as early as possible. Those focused on the disease biology aspects of health states would likewise benefit from the improved anchoring of their efforts in ‘real life’ care settings.\n\nInstitutions involved in device development could benefit from the platform in areas related to chronic disease progression, e.g. in projects aimed at developing smarter, more connected devices that contribute to multi-stakeholder value (similar to ‘smart inhalers’, see above). 
For devices linked with therapy application, the platform could help to manage projects, considering challenges related to the different timelines and cultures in therapy and device development, by allowing collaboration without excessive dependency between projects.\n\nApplications of the platform, related to disease biology, include:\n\nEnhanced ability to understand translatability of preclinical models, at the level of health state biology\n\nImproved ability to couple the development of novel therapies with biomarkers related to health states\n\nGap analysis at portfolio level, using health states to aggregate project information\n\nBy closely linking the effort on the development and optimization of health state models with initiatives focused on the representation of semantic aspects of relevant data, the following applications and value can be envisioned:\n\nIncreasing adoption of semantic technologies, for the use of data in models\n\nFeedback on inconsistencies that help develop the semantic frameworks\n\nFurther development of guiding principles (see below)\n\nRecent progress made in relevant multi-stakeholder communities, such as FORCE11, towards consensus on guiding principles in related areas includes:\n\nFAIR Guiding Principles, to facilitate data and metadata re-use (Wilkinson et al., 2016)\n\nIncreasing use of semantic web technology for many different types of biomedical data, e.g. RDF versions of EBI resources include diverse objects, from computational models to biosamples, chemicals and gene products (Jupp et al., 2014)\n\nIncreased attention to the importance of capturing reusable metadata, close to data generation, in many institutions. 
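To make the ‘reusable metadata, close to data generation’ point concrete, the sketch below shows a minimal FAIR-style metadata record. It is purely illustrative: the field names loosely mirror the four FAIR facets (Findable, Accessible, Interoperable, Reusable) but do not follow any formal metadata standard, and the identifier, license and vocabulary terms are invented for the example.

```python
import json

# A hypothetical metadata record captured at data-generation time.
# Field names loosely mirror the FAIR facets; they are illustrative only.
record = {
    "identifier": "doi:10.9999/example-dataset",    # Findable: unique, resolvable ID
    "access": {"protocol": "https", "open": True},  # Accessible: clear retrieval route
    "vocabulary": {                                 # Interoperable: shared terms
        "health_state": "HS2",                      # links back to a state model
        "assay_term": "example-ontology-id",
    },
    "license": "CC-BY-4.0",                         # Reusable: explicit license
    "provenance": {"instrument": "assay-X", "date": "2017-01-01"},
}

def fair_facets_present(rec):
    """Rough check that all four FAIR facets are at least represented."""
    return all(k in rec for k in ("identifier", "access", "vocabulary", "license"))

print(json.dumps(record, indent=2))
print("FAIR facets present:", fair_facets_present(record))
```

Capturing such a record alongside the data, and validating it automatically, is one practical way to feed consistent inputs into the health state models discussed here.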
While we are in the early stages of connecting across such efforts, converging on a consensus on how to apply FAIR principles will be a key challenge in the coming 5–10 years.\n\nA further development that fuels such efforts is the growing awareness that ‘reproducible research’ principles must be implemented more consistently (Walthemath & Wolkenhauer, 2016), to restore trust in the results of biomedical research.\n\n\nA few obstacles to keep in mind\n\nIt is not for lack of motivation, understanding or interest that systems approaches similar to the one discussed in this article have not developed towards real-world impact, as measured by their contribution to tangible value for multiple stakeholders and to sustainability/health at the systems level. Many obstacles have prevented, or at least slowed down, progress in this area, including the following factors, which deserve at least a brief discussion:\n\nProject management practice tends to reduce complexity, prevent scope creep and limit the set of stakeholders involved in decision making, in order to manage risks to agreed deliverables and to stakeholder support. Such risk management also means that project leaders are forced to work with what exists, and often need to focus on value for particular stakeholders at the expense of others.\n\nA tendency to get caught up in technology hypes and other innovation fashions, which often shift funding, attention and culture in a ‘gold rush’ pattern, followed by a ‘valley of tears’ after the hype, in which models fall apart, predictions turn out to be wrong, credibility is lost, frustration about unexpected complexity spreads, and new wonder drugs fail (Lazebnik, 2002). As Lazebnik put it, “this stage can be summarized by the paradox that the more facts we learn the less we understand the process we study”. 
If unmanaged, this very human tendency results in an inability to resolve problems at the systems level discussed in this article.\n\nAttitudes against theory development in science, against the role of mathematics, and against computational modeling as a tool further complicate connections with some stakeholders. Such attitudes strongly depend on disciplinary background, highlighting the role of academic education and training in this phenomenon. Life science disciplines, such as biology and medicine, are well known for a widespread disregard of those aspects, leading to unnecessary tensions with potential contributors from disciplines with a stronger emphasis on those areas. An example is the history of omics technologies, where such attitudes and fixed mindsets from the days of a more reductionist “one postdoc, one gene” approach resulted in much waste of research resources due to a lack of experimental design, statistical analysis skills and theoretical background (Micheel et al., 2012). The way we approach problems, and hypes in particular, is at the root of the inability to advance in this area, as highlighted by Lazebnik (2002).",
"appendix": "Author contributions\n\n\n\nMR prepared the manuscript, with input on specific topics from experts listed in Acknowledgements.\n\n\nCompeting interests\n\n\n\nThe author is an employee of the research organization of a pharmaceutical company (Novartis Pharma AG, Basel, Switzerland). The author acknowledges a bias towards OpenScience principles, as outlined above, which may not reflect the mainstream mindset in his industry.\n\n\nGrant information\n\nMy employer (Novartis Institutes for Biomedical Research) funded the work on this text.\n\n\nAcknowledgements\n\nThis text is the result of manifold interactions with the works of others in terms of their publications (not all of them could be listed in References), but also much face-to-face debate on various aspects of the described systems approach. In particular, I would like to acknowledge the contributions from discussions with Dr. Bernard Munos (InnoThink, Indianapolis, USA), Dr. Federico Tortelli (Novartis, Basel), Thomas Brenzikofer (baselarea.swiss, Basel), Dr. Ming Wong (licensed physician, Boston, USA), Dr. Edith Schallmeiner (Roche, Basel), Dr. Thomas Hach (Novartis, Basel), Prof. Torsten Schwede (University of Basel), Dr. Jonas Dorn (Novartis, Basel), Dr. Alex Zhavoronkov (in silico medicine, Baltimore, USA), Dr. Kah-Tong Seow (consultant, Frankfurt, Germany), Marco d’Angelantonio (himsa, Brussels, Belgium), Dr. Li Tang (University of Basel), Sascha Kress (Huawei, Zurich), Dr. Stefan Scherer (Novartis, New York), Dr. Evert Luesink (Novartis, Basel), Dr. Florian Nigsch (Novartis, Basel), Dr. Yasuto Tanaka (University of Kobe, Japan), Dr. Vickie Driver (Novartis, Boston, USA), Prof. Keith Harding (University of Cardiff, UK), Dr. Tewis Bouwmeester (Novartis, Basel), Dr. David Gyurko (Kantonspital Aarau), Prof. Christoph R. Meier (University of Basel), Prof. Hinrich Rahmann (University of Hohenheim, Stuttgart, Germany), Prof. Heinz Breer (University of Hohenheim, Stuttgart, Germany), Dr. 
Hans Widmer (Novartis, Basel), Prof. Russ Altman (Stanford University, USA), Dr. Peter Groenen (Actelion, Basel), Dr. Frank Kumli (Ernst & Young, Basel), Dr. John Lamb (GNF, San Diego), Prof. Michael Krauthammer (Yale University, Connecticut) and Prof. Niko Beerenwinkel (ETH, Basel), and the mentorship I received as a young scientist, including an understanding of ‘good science’ values, from my academic supervisors (Prof. Harald Rösner, at the University of Hohenheim in Stuttgart, and Prof. Doron Lancet at the Weizmann Institute of Science, in Israel). In a way, this text presents an attempt to find some kind of synthesis between many of those discussions, which were disconnected in so many ways, in time and space.\n\n\nSupplementary material\n\nSupplementary File 1: Glossary of terms.\n\n\nReferences\n\nAltman RB: Translational bioinformatics: linking the molecular world to the clinical world. Clin Pharmacol Ther. 2012; 91(6): 994–1000. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAuffray C, Balling R, Barroso I, et al.: Making sense of big data in health research: Towards an EU action plan. Genome Med. 2016; 8(1): 71. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBarker R: 2030 – The future of medicine: avoiding a medical meltdown. Oxford University Press. 2011. Reference Source\n\nBeckmann JS, Lew D: Reconciling evidence-based medicine and precision medicine in the era of big data: challenges and opportunities. Genome Med. 2016; 8(1): 134–145. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBender BG, Chrystyn H, Vrijen B: Smart Pharmaceuticals. In: Health 4.0: How virtualization and big data are revolutionizing healthcare. Edited by Thuemmler C & Bai C. ISBN: 978-3-319-47617-9. Springer Int Publish, 2017; 61–90. Publisher Full Text\n\nBousquet J, Anto JM, Sterk PJ, et al.: Systems medicine and integrated care to combat chronic noncommunicable diseases. Genome Med. 2011; 3(7): 43. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nBraak H, Del Tredici K: The preclinical phase of the pathological process underlying sporadic Alzheimer’s disease. Brain. 2015; 138(Pt 10): 2814–33. PubMed Abstract | Publisher Full Text\n\nBurns LC, Orsini L, L'italien G: Value-based assessment of pharmacodiagnostic testing from early stage development to real-world use. Value Health. 2013; 16(6 Suppl): S16–19. PubMed Abstract | Publisher Full Text\n\nButler D: Translational research: crossing the valley of death. Nature. 2008; 453(7197): 840–842. PubMed Abstract | Publisher Full Text\n\nCallahan D: Medical progress and global chronic disease: the need for a new model. The Brown Journal of World Affairs. 2013. Reference Source\n\nCalzadilla Bertot L, Adams LA: The Natural Course of Non-Alcoholic Fatty Liver Disease. Int J Mol Sci. 2016; 17(5): pii: E774. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCarter ED: Making the Blue Zones: Neoliberalism and nudges in public health promotion. Soc Sci Med. 2015; 133: 374–82. PubMed Abstract | Publisher Full Text\n\nChen R, Mias GI, Li-Pook-Than J, et al.: Personal omics profiling reveals dynamic molecular and medical phenotypes. Cell. 2012; 148(6): 1293–1307. PubMed Abstract | Publisher Full Text | Free Full Text\n\nClift J: Connected asthma: how technology will transform care. Asthma UK Report. 2016. Reference Source\n\nCooksey D: A review of UK health research funding. HM Treasury, London; 2006. Reference Source\n\nDobkin BH, Dorsch A: The promise of mHealth: daily activity monitoring and outcome assessments by wearable sensors. Neurorehabil Neural Repair. 2011; 25(9): 788–98. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDriver VR, Fabbi M, Lavery LA, et al.: The costs of diabetic foot: the economic case for the limb salvage team. J Am Podiatr Med Assoc. 2010; 100(5): 335–41. PubMed Abstract\n\nEFPIA: Healthier future: the case for outcomes-based, sustainable healthcare. 
European Federation of Pharmaceutical Industries and Associations. 2016. Reference Source\n\nFlores M, Glusman G, Brogaard K, et al.: P4 medicine: how systems medicine will transform the healthcare sector and society. Per Med. 2013; 10(6): 565–76. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGerteis J, Izrael D, Deitz D, et al.: Multiple chronic conditions chartbook. AHRQ Publications No. Q14-0038. Agency for Healthcare Research and Quality. 2014. Reference Source\n\nGoodwin B: From control to participation, via a science of qualities. ReVision, 1999; 21: 2–10. Reference Source\n\nGreen C: Modelling disease progression in Alzheimer’s disease: a review of modelling methods used for cost-effectiveness analysis. Pharmacoeconomics. 2007; 25(9): 735–50. PubMed Abstract | Publisher Full Text\n\nGreen C, Zhang S: Predicting the progression of Alzheimer’s disease dementia: A multidomain health policy model. Alzheimers Dement. 2016; 12(7): 776–85. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGuyatt GH, Feeny DH, Patrick DL: Measuring health-related quality of life. Ann Intern Med. 1993; 118(8): 622–9. PubMed Abstract | Publisher Full Text\n\nHaas M, Stephenson D, Romero K, et al.: Big data to smart data in Alzheimer’s disease: Real-world examples of advanced modeling and simulation. Alzheimers Dement. 2016; 12(9): 1022–30. PubMed Abstract | Publisher Full Text\n\nHammond D: Philosophical and ethical foundations of systems thinking. tripleC. 2005; 3(2): 20–27. Reference Source\n\nHawgood S, Hook-Barnard IG, O'Brien TC, et al.: Precision Medicine: Beyond the inflection point. Sci Transl Med. 2015; 7(300): 300ps17. PubMed Abstract | Publisher Full Text\n\nHannah WN Jr, Harrison SA: Noninvasive imaging methods to determine severity of nonalcoholic fatty liver disease and nonalcoholic steatohepatitis. Hepatology. 2016; 64(6): 2234–43. PubMed Abstract | Publisher Full Text\n\nHe L, Meng S, Germain-Lee EL, et al.: Potential biomarker of metformin action. 
J Endocrinol. 2014; 221(3): 363–9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHood L, et al.: 2014–2015 Scientific Strategic Plan. Institute for Systems Biology, Seattle. 2014. Reference Source\n\nHood L, Price ND: Promoting wellness and demystifying disease: the 100K project. Clinical Omics. 2014; 1(3): 20–23. Publisher Full Text\n\nHunt NA, Liu GT, Lavery LA: The economics of limb salvage in diabetes. Plast Reconstr Surg. 2011; 127(Suppl 1): 289S–295S. PubMed Abstract | Publisher Full Text\n\nJupp S, Malone J, Bolleman J, et al.: The EBI RDF platform: linked open data for the life sciences. Bioinformatics. 2014; 30(9): 1338–9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKodrič K, Čamernik K, Černe D, et al.: P4 medicine and osteoporosis: a systematic review. Wien Klin Wochenschr. 2016; 128(Suppl 7): 480–491. PubMed Abstract | Publisher Full Text\n\nKoelsch C, Przewrocka J, Keeling P: Towards a balanced value business model for personalized medicine: an outlook. Pharmacogenomics. 2013; 14(1): 89–102. PubMed Abstract | Publisher Full Text\n\nKostic A, Phillips R: Precision Medicine as a new paradigm in drug development. Journal of Precision Medicine. 2016. Reference Source\n\nKvedar JC, Fogel AL, Elenko E, et al.: Digital medicine’s march on chronic disease. Nature Biotech. 2016; 34(3): 239–246. PubMed Abstract | Publisher Full Text\n\nLange-Asschenfeldt S, Bob A, Terhorst D, et al.: Applicability of confocal laser scanning microscopy for evaluation and monitoring of cutaneous wound healing. J Biomed Opt. 2012; 17(7): 076016. PubMed Abstract | Publisher Full Text\n\nLaverty H, Orrling KM, Giordanetto F, et al.: The European lead factory – an experiment in collaborative drug discovery. J Med Dev Sci. 2015; 1(1): 20–33. Reference Source\n\nLazebnik Y: Can a biologist fix a radio?--Or, what I learned while studying apoptosis. Cancer Cell. 2002; 2(3): 179–82. 
PubMed Abstract | Publisher Full Text\n\nLengauer T, Pfeifer N, Kaiser R: Personalized HIV therapy to control drug resistance. Drug Discov Today Technol. 2014; 11: 57–64. PubMed Abstract | Publisher Full Text\n\nLengauer T, Sing T: Bioinformatics-assisted anti-HIV therapy. Nat Rev Microbiol. 2006; 4(10): 790–97. PubMed Abstract | Publisher Full Text\n\nLim JZ, Ng NS, Thomas C: Prevention and treatment of diabetic foot ulcers. J R Soc Med. 2017; 110(3): 104–109. PubMed Abstract | Publisher Full Text\n\nLow E, Bountra C, Lee WH: Accelerating target discovery using pre-competitive open science-patients need faster innovation more than anyone else. Ecancermedicalscience. 2016; 10: ed57. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLozano R, Naghavi M, Foreman K, et al.: Global and regional mortality from 235 causes of death for 20 age groups in 1990 and 2010: a systematic analysis for the Global Burden of Disease Study 2010. Lancet. 2012; 380(9859): 2095–128. PubMed Abstract | Publisher Full Text\n\nMahley RW, Weisgraber KH, Huang Y: Apolipoprotein E: structure determines function, from atherosclerosis to Alzheimer’s disease to AIDS. J Lipid Res. 2009; 50(Suppl): S183–8. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMahmood SS, Levy D, Vasan RS, et al.: The Framingham Heart Study and the epidemiology of cardiovascular disease: a historical perspective. Lancet. 2014; 383(9921): 999–1008. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMargolis DJ, Bilker W, Santanna J, et al.: Venous leg ulcer: incidence and prevalence in the elderly. J Am Acad Dermatol. 2002; 46(3): 381–6. PubMed Abstract | Publisher Full Text\n\nMathews SC, Pronovost PJ: The need for systems integration in health care. JAMA. 2011; 305(9): 934–935. PubMed Abstract | Publisher Full Text\n\nMicheel CM, Nass SJ, Omenn GS: Evolution of translational omics: lessons learned and the path forward. 
Chapter 2: Omics-Based Clinical Discovery: Science, Technology, and Applications. Committee on the Review of Omics-Based Tests for Predicting Patient Outcomes in Clinical Trials; Board on Health Care Services; Board on Health Sciences Policy; Institute of Medicine. 2012. Reference Source\n\nMorris M, Lundell J: Ubiquitous computing for cognitive decline: findings from Intel’s proactive health research. Intel Research. 2003. Reference Source\n\nMunos B: Can Open-Source Drug R&D Repower Pharmaceutical Innovation? Clin Pharmacol Ther. 2010; 87(5): 534–6. PubMed Abstract | Publisher Full Text\n\nMunos B: A new look at the most innovative pharma companies, and whether they are sustainable (Forbes Innovation Chatroom). 2016. Reference Source\n\nNorman GR, Sloan JA, Wyrwich KW: Interpretation of changes in health-related quality of life: the remarkable universality of half a standard deviation. Med Care. 2003; 41(5): 582–92. PubMed Abstract | Publisher Full Text\n\nNugent R: Chronic diseases in developing countries: health and economic burdens. Ann NY Acad Sci. 2008; 1136: 70–79. PubMed Abstract | Publisher Full Text\n\nPark NJ, Allen L, Driver VR: Updating on understanding and managing chronic wounds. Dermatol Ther. 2013; 26(3): 236–56. PubMed Abstract | Publisher Full Text\n\nPellicoro A, Ramachandran P, Iredale JP, et al.: Liver fibrosis and repair: immune regulation of wound healing in a solid organ. Nat Rev Immunol. 2014; 14(3): 181–94. PubMed Abstract | Publisher Full Text\n\nPerez C: Smart inhalers and the future of respiratory health management. RT Magazine. 2015. Reference Source\n\nPes GM, Tolu F, Poulain M, et al.: Lifestyle and nutrition related to male longevity in Sardinia: an ecological study. Nutr Metab Cardiovasc Dis. 2013; 23(3): 212–9. PubMed Abstract | Publisher Full Text\n\nPiller C: Google’s next big idea: mining health data to prevent disease. STAT News. 2015. Reference Source\n\nPoste G: Bring on the biomarkers. Nature. 2011; 469(7329): 156–7. 
PubMed Abstract | Publisher Full Text\n\nPowell K: All systems go. J Cell Biol. 2004; 165(3): 299–303. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPritchard DE, Moeckel F, Susan Villa M, et al.: Strategies for integrating personalized medicine into healthcare practice. Personalized Medicine. 2017; 14(2): 141–52. Publisher Full Text\n\nScholz N: Personalised medicine: the right treatment for the right person at the right time. European Parliament Research Service. 2015; PE 569.009. Reference Source\n\nShiroiwa T, Sung YK, Fukuda T, et al.: International survey on willingness-to-pay (WTP) for one additional QALY gained: what is the threshold of cost effectiveness? Health Econ. 2010; 19(4): 422–37. PubMed Abstract | Publisher Full Text\n\nSpivey A: Systems biology: the big picture. Environ Health Perspect. 2004; 112(16): A938–43. PubMed Abstract | Free Full Text\n\nSterling S: Whole systems thinking as a basis for paradigm change in education: explorations in the context of sustainability. PhD thesis at the University of Bath, UK, 2003. Reference Source\n\nStrategy & report: Revitalizing pharmaceutical R&D: The value of real world evidence. 2015. Reference Source\n\nWalthemath D, Wolkenhauer O: How Modeling Standards, Software, and Initiatives Support Reproducibility in Systems Biology and Systems Medicine. IEEE Trans Biomed Eng. 2016; 63(10): 1999–2006. PubMed Abstract | Publisher Full Text\n\nWang Y, Xue H, Liu S: Applications of systems science in biomedical research regarding obesity and noncommunicable chronic diseases: opportunities, promise, and challenges. Adv Nutr. 2015; 6(1): 88–95. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWehling M: Assessing the translatability of drug projects: what needs to be scored to predict success? Nature Rev Drug Disc. 2009; 8(7): 541–546. PubMed Abstract | Publisher Full Text\n\nWells TN, Willis P, Burrows JN, et al.: Open data in drug discovery and development: lessons from malaria. 
Nat Rev Drug Disc. 2016; 15(10): 661–662. PubMed Abstract | Publisher Full Text\n\nWilckens T: Machine learning guided precision trials for major chronic debilitating diseases. HBR Forum. 2016. Reference Source\n\nWilkinson MD, Dumontier M, Aalbersberg IJ, et al.: The FAIR Guiding Principles for scientific data management and stewardship. Sci Data. 2016; 3: 160018. PubMed Abstract | Publisher Full Text | Free Full Text\n\nYang X, Wu X, Hao HL, et al.: Mechanisms and assessment of water eutrophication. J Zhejiang Univ Sci B. 2008; 9(3): 197–209. PubMed Abstract | Publisher Full Text | Free Full Text\n\nZois CD, Baltayiannis GH, Karayiannis P, et al.: Systematic review: hepatic fibrosis - regression with therapy. Aliment Pharmacol Ther. 2008; 28(10): 1175–87. PubMed Abstract | Publisher Full Text"
}
|
[
{
"id": "21975",
"date": "24 Apr 2017",
"name": "Michel Goldman",
"expertise": [
"Strategies for therapeutic innovation"
],
"suggestion": "Approved",
"report": "Approved\n\nExcellent article that will be inspirational for shaping the future of medicine and healthcare. The systems approach should indeed revolutionise the approach of chronic disorders. The authors rightly integrate the importance of patient-centricity and digital health to translate precision medicine into standard of care. The F1000Research Open Science is a very appropriate vehicle for dissemination of this innovative vision.\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Yes\n\nAre arguments sufficiently supported by evidence from the published literature? Yes\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Yes",
"responses": []
},
{
"id": "22492",
"date": "04 May 2017",
"name": "Rob Hooft van Huijsduijnen",
"expertise": [
"Drug discovery",
"molecular biology",
"genomics",
"infectious diseases",
"neurodegenerative diseases"
],
"suggestion": "Approved",
"report": "Approved\n\nThe author proposes to better exploit the vast and varied datasets that are being generated around -specifically- chronic diseases to break these down in a set of 'health states' that can be used to guide improved prevention and treatment. Implementing such a view ('platform') requires the breakdown of barriers to Open Access and (paraphrasing) a revolution in health economics.\n\nThese ideas are familiar to many who have spent time in the Pharma industry and sometimes lead to heated discussions. This reviewer, for one, subscribes to the majority of the views that are expressed here.\n\nThe only real criticism I have is that the piece attempts to cover a lot of ground, and that the ideas could be presented in a more easily digestible way. Some very complex issues are lightly touched upon (the added Glossary is quite helpful). There are few specific, actionable recommendations- the way I read this is to prepare the mind-set for how better to tackle the great medical challenges of the 21st Century. In itself that is OK if the message is clear.\n\nSome of the language is awkward- I would not focus on boring 'health states', for, in analogy with Tolstoy, 'all happy families are alike'. I guess what is actually meant here is 'disease states', and I would make that 'disease stages' to convey the dynamics of healing and worsening disease.\n\nOverall, I had severe difficulties reading the title and abstract to make out what the article is about (dawned upon me much later). 
I initially half-expected this to be about holistic or anthroposophical medicine, or equivalent nonsense. In terms of informing and capturing the reader's attention I feel these sections can (and must) be much improved. The Journal Nature typically throws in 'Boxes' where complex concepts are explained. If the author is to reach a broad readership that includes academics this might be a good idea.\n\nThe section on a changing medical landscape towards chronic diseases is interesting but the author does not specifically say why Pharma is inept to deal with these. The casual reader might ask here- \"so what?\" It is not clear how the 'Islands of healthy aging' fits in; I guess it is to say that genes and environment play important roles, and this analysis promotes the article's system approaches but then we are told that \"Other conclusions may be valid as well, and it can be difficult to choose among the alternative conclusions, to inform action.\"- suggesting the problem is in fact intractable.\n\nExamples are given where modern systems biology has made good progress with improved diagnostics and interventions: oncology and AIDS. Fair enough, but these are diseases where the 'enemy' is crystal-clear. These disease states can almost be reduced simply to the metastatic tumour mass resp. virus load- it is not obvious the elegant solutions in this area are transferrable to 'regenerative medicine', the focus of the article. At some point it is said that \"..biology of their [AIDS] interactions with host (defense) biology, has enabled the development of highly personalized combination therapy approaches\". I disagree- almost all HIV drugs I know simply target viral enzymes or HIV's binding to CCR5. I just fail to see how these successes illustrate how we can tackle Alzheimer's, diabetes and similar diseases. 
The article would really gain from better suggesting how the proposed approach could help us in the right direction here.\n\nWhat I like however is the author's suggestion to be comprehensive in evaluating datasets that may affect the dynamics in disease stages. Also, his suggestion for better, global monitoring and standardized data collection merits follow-up. In this context the NHS could be cited, which is in a position where it can, and actually does such things. Also, I agree with the insight that we must let go of our ambition to fully understand systems. Some people understand how computers work, and some computers play better chess than any human being- likewise we will eventually understand the nuts and bolts of life- but remain unable to predict how humans will behave.\n\nI am a bit surprised that biomarkers are mainly presented as helping to understand 'disease states'. For me the main asset is that they can accelerate the evaluation of experimental drugs in patient (proxy outcomes). I am not too familiar with P4 and 'personalized medicine' for chronic diseases. I think the goal for now is to identify any treatment for, again, e.g., Alzheimer's for any category of patients- in my time we called that 'patient stratification'- but renaming elusive targets may provide some relief to frustration…\n\nAnother topic that could easily be expanded into a doorstopper is combinatorial interventions. This reviewer fully agrees that the concept of treating complex diseases with a single molecule is outrageous. However, evaluating multiple combinations in a factorial clinical trial design is just impossible, to mention just one problem.\n\nThe other topic that is (too) lightly touched upon is the disconnect between health care investment, outcomes and payback (health economics). I haven't seen much “value-based care”, with country-by-country (differential) pricing and too many clinical trials where NCEs are tested against placebos rather than the best existing care. 
Pharma (and Biotech) companies still behave as if they are just competing against each other, zero-sum-wise, spending more on marketing than on R&D in the process. As this article implies, they should rather be playing the game where they try to guess the cards that Mother Nature holds- and this requires cooperation. It is refreshing (and not wholly surprising) that such a proposal emerges from Novartis, one of the more enlightened Pharma Companies (no commercial interests with this reviewer).\n\nMany of the points raised above are intended to improve the important messages in this article and are to be seen as constructive criticism.\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Yes\n\nAre arguments sufficiently supported by evidence from the published literature? Yes\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Yes",
"responses": [
{
"c_id": "2703",
"date": "12 May 2017",
"name": "Michael Rebhan",
"role": "Author Response",
"response": "The referee has provided a very interesting, constructive, in-depth review of this article, identifying several aspects of the text that warrant the generation of a 2nd version of the paper with the aim of reduced ambiguity for specific aspects. As author my intention is to work with several co-authors who have relevant expertise on those improvements in the next months, so the community has a more useful reference text for a discussion of those aspects. In the meantime, let me comment on some of his points right away, to further stimulate discussion:\n\n1. Implementing the platform requires the breakdown of barriers to Open Access, and a revolution in health economics: Agreed, this is a tough challenge, for a variety of reasons - but what is the alternative we have as a society struggling with rapidly rising chronic disease burden, and healthcare costs? While it is clear that the current health economics in real life is quite different from the approach described in the paper, many pioneers increasingly agree that value-based models are an important part of a more sustainable approach we need to develop, and that fragmented discipline-specific innovation efforts will not get us there. Implementing it however is hard work, extremely complex, and likely an iterative learning process. It is not developing as fast as many of us would hope, on the ground, in many areas; but there is progress, e.g. look at the work of BD4BO in IMI, which helps to create an improved, practical foundation for such efforts. It is clear that we need to develop a more sustainable system, and that it is difficult to get fast traction in the real world, to a point where many of us may just give up once they know the true complexity of the challenge. But what is the alternative? Just keep talented innovators focused on easier-to-handle fragments with short-term effect that do not connect at system sustainability level? 
The vision of the paper may seem overly ambitious and idealistic to many short-term focused pragmatists, but I wonder if a bit more focus on long-term oriented, collaborative systems approaches such as what is proposed in this paper would create a healthier balance in our health innovation ecosystem. Think of it as a portfolio approach to short- and long-term aspects, with the right balance between both. See http://www.efpia.eu/topics/innovation/outcomes\n\n2. There are few specific, actionable recommendations, it's more about preparing the mindset for learning how to better tackle the great medical challenges of the 21st century: I do not agree with this statement. Did the referee notice the proposed roadmap for developing a health state modeling platform, towards the end of the paper (stages 1-3)? As described there, it could be implemented with limited resources, in a few years, with some refinement along the way as the community builds, and data are collected on what works best. On the other hand, preparing mindsets to focus more on long-term sustainability may be of value nonetheless?\n\n3. Oncology and AIDS as examples: Partly agree. The referee comments that it is not obvious how the elegant solutions in those indications, where the root of the problem is more defined and easier to capture (e.g. at pathobiological level), can be transferred to chronic diseases and regenerative medicine. I agree that we cannot simply take the exact approaches that were used there, copy and paste them, and then apply them 'as is' in the indications described in the paper. As we accumulate more data on health states, learn how to represent and optimize such states, and build models for different uses, we need to learn which aspects and variants of those paradigms can be applied to which problems, and where the limitations of a particular paradigm lie in terms of its application. There is no silver bullet that solves all problems, we all know that. 
Simply getting everyone's DNA and looking for signals there is not likely to be enough for chronic diseases with complex temporal change patterns, even if we do a lot of it in very large populations. Considering the proposed focus on the aggregation of longitudinal human data that can be simplified as health state models, we need to first find out where the most relevant signals are that we should focus on.\n\n4. Why is Pharma ill-equipped to deal with the changing medical landscape of chronic diseases? This is an interesting question, which may best be addressed in a separate, follow-up paper, as it is quite complex. The intention of this paper is NOT to focus on a Pharma perspective. Instead, the intention is to zoom out and look at the problem from a more neutral but comprehensive perspective which is more likely to be relevant for different stakeholders, as described in Fig. 1. While the author cannot claim neutrality based on affiliation, a serious effort was made to avoid a Pharma bias in the text, and provide a more balanced perspective. The proposed platform for health state modeling can only work as an open crystallization point in the community if it achieves a healthy balance of interests (Fig. 1), but this certainly requires a transition from an atmosphere of blaming each other for failures at systems level, to true collaboration around more constructive and sustainable approaches. There are important fundamental problems that are hard to tackle, some of them mentioned by this referee, which require a new constructive culture that transcends institutions, interest groups and mindsets. Nobody said it will be easy. I think there are plenty of people in Pharma and other places who are ready to engage, not only in Novartis. Again, what is the alternative for society? Business as usual?\n\n5. Biomarkers, health vs. 
disease states and patient stratification: The referee has many interesting points here, as the paper offers a somewhat unorthodox view of the role of biomarkers, in the context of patient stratification and disease progression. In the text, 'biomarkers' are discussed in terms of their relevance for the proposed health state modeling platform, as this is the focus of the text. Of course they will have additional uses and meaning, in other contexts. On the other hand, health states could also be defined in a way that not only relies on data related to biomarkers in a narrow sense, but also on other objectively measurable signals that are usually not considered biomarkers. Once the health states have sufficient detail in terms of the most relevant disease biology states, linked with a clinical and economic profile, they could be seen as models that provide actionable interpretations and visualizations for combinations of biomarkers combined with other signals. The preference for the term 'health states' over 'disease states' comes from the idea that we still do not know enough about transitions between health and disease, and the earlier stages of many diseases. However, therapeutic or preventative interventions in those earlier stages are likely to be more cost-effective and provide better patient value than later interventions. The way we currently look at disease and medicine is often focused on later stages with a strong phenotype that is visible in a classical clinical setting, with a higher hurdle for interventions to make a difference, and less emphasis on screening and preventative approaches. Again, this takes us back to economic reality in health, which may need a bit of adjustment, see above."
}
]
},
{
"id": "21819",
"date": "30 May 2017",
"name": "Jacques S. Beckmann",
"expertise": [
"Reviewer Expertise genetics",
"precision medicine"
],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThere is a general excitement about the perspectives of improved medical care and therapies afforded by the recent emergence of cost-effective high-throughput disruptive technologies, allowing the generation and computational analyses of massive amounts of clinically-relevant data and their transformation into useful information. This hype is evidenced by the abounding scientific literature on these subjects. The current manuscript summarizes the field approaching it from the perspective of pharma industries and focusing on chronic diseases.\nI read it more as representing as a first step, promoting a dialogue between the interested stakeholders, triggering reactions in order to eventually catalyse or crystalize the convergence of synergistic approaches. The manuscript lists numerous elements of discussion and consideration. This list is obviously incomplete, each reader may have his own favourite points to add. But this is also the purpose of this article (to which I could add my pet projects or thoughts).\nIndeed, in many aspects, this opening call to “put all the pieces of the puzzle together” may suffer from being yet incomplete, partially superficial or presenting an overly simplified interpretation; it may elicit disagreements on particular points. 
Yet, these limitations essentially reflect the uncertainties through which this field is currently navigating.\n\nAs such, the manuscript warrants publication in F1000, a forum where such discussions are encouraged.\n\nSpecific comments:\nI like the wink to the tension between civilization, lifestyles and new technologies, or what is referred to as “the other side of the coin”. Future discussions might also consider the potential risk of a further dramatic narrowing of the cultural, ethnic, species and environmental diversity, the consequences of which we don’t fathom fully as yet (although there have been numerous historical precedents, also illustrated in Y Harari's book).\nI would encourage further discussions on the dire necessity for the implementation of standardized, consistent nomenclatures and ontologies allowing, as suggested, cross-disease interactive channels. This is a real challenge as clinical data are much more complex and heterogeneous than lab data. To this we must add additional obstacles when considering patient-centred involvement: these concern, among others, language (and cultural) barriers, the abundance of non-clinically approved monitoring devices and the heterogeneity in patients' verbal descriptions of their status.\n\nMinor item: Correct ADIS to AIDS (under Modernization of diagnosis and personalized therapy)\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Yes\n\nAre arguments sufficiently supported by evidence from the published literature? Yes\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Yes",
"responses": []
},
{
"id": "21818",
"date": "13 Jun 2017",
"name": "Charles Auffray",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis excellent paper proposes to learn from progresses in leading areas of medicine innovation (AIDS, oncology, Alzheimer's disease) to extend them to other chronic diseases using system approaches. Representing health states with the aim of designing the right intervention for the right patient at the right time and dose should allow reiterative approaches in longitudinal studies. The paper focus on modernization of diagnosis (omics etc), technologies/digital health and biological/clinical (biomarkers) and and economic aspects of the disease.\nA comment that can be done to the paper, only partially covered by other reviewers, is that it tries to cover a too large subject, which makes it difficult to read; while avoiding, in our opinion, some crucial points.\nIt is also of note that the readability of the paper could be greatly improved by focusing on the key messages; the author tends to drift off topic and his lyrical musings, although interesting, add a lot of weight to an already large paper.\nWe would omit/reduce the sections on Open science culture and Business model innovation, Semantic web.\nWe would also reduce the section on systems approaches, that are already well covered in the manuscripts cited in the text.\nBlue zones, islands of longevity: the author provocatively states how it is possible to translate the information that being a shepherd who eats barley and live in the mountain to the general population. 
Genetics of centenarians is a very important research field. We now know that specific mutations associated with IGF-1 are enriched only in centenarians. We can also learn a lot in terms of food habits and nutraceuticals from extremely long-lived healthy people. Diet, moderate intake of animal proteins. Exercise. Prevention.\n\nFamily ties: Emma Morano Martinuzzi, the longest-lived person on the planet, recently passed away aged 117.5 years. Emma had something that ALL long-lived healthy people share: an extensive family social network of support. A true army of offspring, grandchildren, cousins etc. Emma was never alone. In modern Western societies old people tend to become isolated and die alone in their homes.\n\nLoneliness is notoriously associated with lowered immune defenses, depression and chronic diseases. It is all linked.\n\nAn important section of the paper should be dedicated to governmental/social policies not to leave our elderly to age and die alone (robots are not a solution), which happens less in the Eastern world or in more archaic societies.\n\nMedical knowledge, conceptual frameworks, technological advances and semantic categorization of stakeholder interactions are useless exercises without addressing the elephant in the room: the sentimental and civil value we give to our elderly.\n\nMinor comment: on page 5, the author links the development of human civilization with the eutrophication of waters. We think that linking eutrophication with human activity should not be automatic. Eutrophication can be the product of a natural process, such as unusually high temperatures, modification of the hydrogeology of the ecosystem, or the natural erosion of rich soils causing a sudden rise in the availability of certain nutrients...\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? Partly\n\nAre all factual statements correct and adequately supported by citations? 
Yes\n\nAre arguments sufficiently supported by evidence from the published literature? Yes\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Partly",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-309
|
https://f1000research.com/articles/5-2825/v1
|
06 Dec 16
|
{
"type": "Antibody Validation Article",
"title": "Validation of commercially available sphingosine kinase 2 antibodies for use in immunoblotting, immunoprecipitation and immunofluorescence",
"authors": [
"Heidi A. Neubauer",
"Stuart M. Pitson"
],
"abstract": "Sphingosine kinase 2 (SK2) is a ubiquitously expressed lipid kinase that has important, albeit complex and poorly understood, roles in regulating cell survival and cell death. In addition to being able to promote cell cycle arrest and apoptosis under certain conditions, it has recently been shown that SK2 can promote neoplastic transformation and tumorigenesis in vivo. Therefore, well validated and reliable tools are required to study and better understand the true functions of SK2. Here, we compare two commercially available SK2 antibodies: a rabbit polyclonal antibody from Proteintech that recognizes amino acids 266-618 of human SK2a, and a rabbit polyclonal antibody from ECM Biosciences that recognizes amino acids 36-52 of human SK2a. We examine the performance of these antibodies for use in immunoblotting, immunoprecipitation and immunofluorescence staining of endogenous SK2, using human HEK293 and HeLa cell lines, as well as mouse embryonic fibroblasts (MEFs). Furthermore, we assess the specificity of these antibodies to the target protein through the use of siRNA-mediated SK2 knockdown and SK2 knockout (Sphk2-/-) MEFs. Our results demonstrate that the Proteintech anti-SK2 antibody reproducibly displayed superior sensitivity and selectivity towards SK2 in immunoblot analyses, while the ECM Biosciences anti-SK2 antibody was reproducibly superior for SK2 immunoprecipitation and detection by immunofluorescence staining. Notably, both antibodies produced non-specific bands and staining in the MEFs, which was not observed with the human cell lines. Therefore, we conclude that the Proteintech SK2 antibody is a valuable reagent for use in immunoblot analyses, and the ECM Biosciences SK2 antibody is a useful tool for SK2 immunoprecipitation and immunofluorescence staining, at least in the human cell lines employed in this study.",
"keywords": [
"Sphingosine kinase 2",
"antibody validation",
"immunoblotting",
"immunoprecipitation",
"immunofluorescence"
],
"content": "Introduction\n\nSphingolipids are an important family of cellular molecules that form critical structural components of cell membranes, as well as performing numerous signaling functions1. Of the many enzymes responsible for the biosynthesis and metabolism of sphingolipids, the sphingosine kinases (SKs) are of particular interest to study as they catalyze the formation of sphingosine 1-phosphate (S1P), and in doing so can promote cell survival, proliferation, migration and angiogenesis2. Both sphingosine kinases, SK1 and SK2, have been shown to be upregulated in various human cancers and both have documented roles in mediating oncogenesis3,4. However, where SK1 and its roles in cancer development are relatively well characterized, SK2 remains somewhat enigmatic as, in addition to the pro-cancer functions it shares with SK1, SK2 can also facilitate cell cycle arrest and cell death5,6.\n\nSK2 is ubiquitously expressed in all cells and tissues, but is expressed most highly in the liver, kidney and brain7. At the mitochondria, SK2-generated S1P has been proposed to facilitate the activation of Bak and subsequent mitochondrial membrane permeabilisation and cytochrome c release5. Notably, SK2 can also function as an epigenetic regulator, where S1P produced by nuclear-localized SK2 can inhibit the activity of histone deacetylases 1/2 resulting in increased transcription of specific genes, such as cyclin-dependent kinase inhibitor p21 and transcriptional regulator c-fos8. As SK1 does not appear to localize as prominently to these internal organelles, it is believed that the subcellular localization of SK2 is critical for the additional functions it performs. 
However, the mechanisms regulating the localization and functions of SK2, allowing it to switch between pro-apoptotic and pro-survival under certain conditions, remain poorly understood.\n\nIn order to study SK2 and better characterize its roles in normal cells as well as in cancer, reliable and properly validated tools are required. Antibody-based methods, such as immunoblotting (IB), immunoprecipitation (IP) and immunofluorescence (IF), are particularly useful as tools to examine and visualize important aspects of SK2 biology, like subcellular localization, expression and interaction with regulatory proteins. A number of groups in the field have taken to generating their own in-house SK2-specific polyclonal antibodies9,10, but to our knowledge there has been no systematic validation of commercially available SK2 antibodies. Here, we compare two commercially available SK2 antibodies, and validate the suitability of their use in IB, IP and IF using various human and mouse cell lines. We have examined a rabbit polyclonal SK2 antibody from Proteintech, which is raised against amino acids 266–618 of recombinant human SK2a, and a rabbit polyclonal SK2 antibody from ECM Biosciences, which is raised against a synthetic peptide corresponding to amino acids 36–52 of human SK2a. The Proteintech SK2 antibody has been previously utilized in one publication for IB11, and the ECM Biosciences SK2 antibody has been used in multiple publications for IB12–15 and for IF16.\n\n\nMaterials and methods\n\nThe following SK2 antibodies were assessed: rabbit polyclonal anti-SK2 (ECM Biosciences; anti-Sphingosine Kinase 2 (N-terminal region); #SP4621, lot #1) and rabbit polyclonal anti-SK2 (Proteintech Group, Inc; anti-SPHK2; #17096-1-AP, lot #00010361). The ECM Biosciences SK2 antibody was raised against a synthetic peptide coupled to keyhole limpet hemocyanin (KLH), corresponding to amino acids 36–52 of human SK2a, and was affinity purified with the SK2 peptide (without KLH). 
It is reported by the manufacturer to have cross-reactivity with rat and mouse SK2 [human, mouse and rat SK2 share 100% sequence identity in this region (determined using the align tool and protein sequences from www.uniprot.org)], and has been assessed by the manufacturer for use in IB and enzyme-linked immunosorbent assay (ELISA). The Proteintech SK2 antibody was raised against truncated recombinant GST-tagged human SK2a (amino acid residues 266–618, generated in Escherichia coli using the pGEX-4T plasmid). The SK2-targeting antibodies were then affinity purified using 6xHis-tagged antigen protein (to remove GST-specific antibodies) and then again with the immunising GST-tagged antigen protein. It is reported to have cross-reactivity with rat and mouse SK2 [80.2% sequence identity between human and mouse SK2, and 80.2% sequence identity between human and rat SK2 in this region (determined using the align tool and protein sequences from www.uniprot.org)], and according to the manufacturer can be employed for IB, ELISA, IP and immunohistochemistry. Mouse anti-α-tubulin (DM1A; Abcam; #ab7291) is a mouse monoclonal antibody, which was used as a loading control for IB analyses, at a dilution of 1:5,000. All antibody details, including information for secondary antibodies used, are provided in Table 1.\n\nHuman embryonic kidney (HEK) 293 cells (CellBank Australia; #85120602) and HeLa human cervical cancer cells (ATCC; #CCL-2) were cultured in Dulbecco’s modified Eagle’s medium (DMEM; Gibco, Thermo Fisher Scientific Inc.), containing 10% heat-inactivated fetal bovine serum (FBS; Bovagen), 1 mM HEPES, penicillin (1.2 mg/ml) and streptomycin (1.6 mg/ml). Cells were grown at 37°C with 5% CO2 in a humidified incubator. Primary mouse embryonic fibroblasts (MEFs) were generated from both wildtype (WT) C57/Bl6 and Sphk2-/- C57/Bl617 mouse embryos at 14.5 days post coitum. 
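The cross-reactivity figures above rest on simple percent-identity calculations over the immunogen regions, as reported by the UniProt align tool. As an illustrative sketch only (the two fragments below are made-up toy peptides, not the actual SK2 epitope sequences, which would be taken from UniProt), percent identity over an aligned region can be computed as:

```python
def percent_identity(a, b):
    """Percent identity between two aligned, equal-length sequences.

    A position counts as a match only when both residues are identical
    and neither is a gap ('-').
    """
    if len(a) != len(b):
        raise ValueError("sequences must be aligned to equal length")
    matches = sum(1 for x, y in zip(a, b) if x == y and x != '-')
    return 100.0 * matches / len(a)

# Hypothetical 10-residue fragments for illustration only.
human_fragment = "MSLVADLISP"
mouse_fragment = "MSLVTDLISP"
print(percent_identity(human_fragment, mouse_fragment))  # 90.0
```

Note that for real comparisons the sequences must first be aligned (e.g. by the UniProt align tool), since insertions and deletions shift the residue numbering.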
The fibroblasts were isolated and cultured as described above, except they were maintained at 37°C in a humidified atmosphere with 10% CO2.\n\nsiRNA-mediated knockdown of SK2 was performed using human SPHK2 siGENOME SMARTpool siRNA (Dharmacon), which targets the following sequences: CCACUGCCCUCACCUGUCU, GCUCCUCCAUGGCGAGUUU, GAGACGGGCUGCUCCAUGA, CAAGGCAGCUCUACACUCA. Cells were seeded and grown to a cell density of approximately 50%, and were then transfected with 30 nM (final concentration) of either human SK2 siRNA or siGENOME non-targeting siRNA control pool (Dharmacon), using Lipofectamine RNAiMAX (Life Technologies), as per the manufacturer’s protocol. Cells were incubated with the siRNA complexes at 37°C for 48 h.\n\nSpecific details for all reagents used can be found in Table 2. Cells were pelleted by centrifugation (400 × g, 5 min, 4°C) and washed in cold phosphate buffered saline (PBS). Cell pellets were resuspended in extraction buffer [EB; 50 mM Tris/HCl buffer (pH 7.4) containing 150 mM NaCl, 10% glycerol, 1 mM EDTA, 0.05% Triton X-100, 2 mM Na3VO4, 10 mM NaF, 10 mM β-glycerophosphate, 1 mM dithiothreitol (DTT) and protease inhibitor cocktail (Roche)], and lysed by bath sonication (4× 30 sec on/off). Lysates were clarified (17,000 × g, 15 min, 4°C) and equal amounts of protein [as determined by a Bradford protein assay (Bio-Rad Laboratories)] were mixed with 5× Laemmli sample buffer, boiled at 100°C for 5 min, and separated by SDS-PAGE on a Criterion™ XT Bis-Tris 4–12% gradient gel (Bio-Rad Laboratories). Proteins were then transferred to nitrocellulose membrane (Pall Life Sciences) at 400 mA for 1 h. Membranes were blocked with 5% skim milk in PBS containing 0.1% Triton X-100 (PBS-T) for 1 h at room temperature, with gentle rocking. Membranes were probed with rabbit anti-SK2 antibodies diluted in Signal Boost primary antibody diluent at 1:1,000 (ECM Biosciences: 1 µg/ml; Proteintech: 687 ng/ml) overnight at 4°C with gentle rocking. 
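The stated 1:1,000 dilutions and working concentrations imply stock concentrations of about 1 mg/ml (ECM Biosciences) and 0.687 mg/ml (Proteintech); these stocks are back-calculated assumptions, not manufacturer figures. A minimal sketch of the dilution arithmetic:

```python
def working_concentration_ug_per_ml(stock_ug_per_ml, dilution_factor):
    """Antibody concentration after a 1:N dilution of the stock."""
    return stock_ug_per_ml / dilution_factor

def stock_volume_ul(stock_ug_per_ml, target_ug):
    """Volume of stock (in microlitres) delivering target_ug of antibody."""
    return target_ug / stock_ug_per_ml * 1000.0

# Back-calculated stock concentrations (assumptions, see note above).
ecm_stock = 1000.0   # ug/ml; a 1:1,000 dilution gives the stated 1 ug/ml
ptn_stock = 687.0    # ug/ml; a 1:1,000 dilution gives the stated 687 ng/ml

print(working_concentration_ug_per_ml(ecm_stock, 1000))  # 1.0
print(working_concentration_ug_per_ml(ptn_stock, 1000))  # 0.687
print(stock_volume_ul(ecm_stock, 4.0))                   # 4.0
```

The same assumed stocks are consistent with the immunoprecipitation dilutions quoted below: 4 µg of the ECM Biosciences antibody at 1:75 in a 300 µl reaction corresponds to 4 µl of a 1 mg/ml stock.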
Alternatively, membranes were probed with mouse anti-α-tubulin antibody diluted in 5% skim milk in PBS-T at 1:5,000 (200 ng/ml) for 1 h at room temperature, with gentle rocking. Following primary antibody incubation, membranes were washed 3 × 5 min in 5% skim milk in PBS-T at room temperature with gentle agitation. Membranes were probed with goat anti-rabbit horseradish peroxidase (HRP) secondary antibody diluted in Signal Boost secondary antibody diluent at 1:10,000 (40 ng/ml), or goat anti-mouse HRP secondary antibody diluted in 5% skim milk in PBS-T at 1:10,000 (40 ng/ml), for 1 h at room temperature, with gentle rocking. Membranes were washed 3 × 5 min in 5% skim milk in PBS-T at room temperature with gentle agitation, and proteins were visualized using enhanced chemiluminescence (ECL) on a LAS-4000 luminescence image analyser (Fujifilm). Exposure times are indicated in the figure legends for each blot.\n\nHEK293 cell lysates were prepared as for immunoblotting, with the exclusion of DTT in the extraction buffer (EB–DTT). Protein concentration was determined from clarified lysates, and 800 µg total protein was transferred to fresh tubes and made up to 300 µl in EB–DTT. In total, 20 µl of diluted lysate was removed and mixed with 5× Laemmli sample buffer for immunoblot analysis. Immunoprecipitation was performed using the µMacs magnetic system (Miltenyi Biotec; see Table 3 for reagent details). Rabbit anti-SK2 antibodies (4 µg; ECM Biosciences 1:75 or Proteintech 1:52) or rabbit IgG isotype control antibody (4 µg), as well as 50 µl each of Protein A and G µBeads were added to the lysate, mixed gently and incubated on ice for 30 min. µMacs columns were placed onto a magnetic stand, equilibrated with 200 µl EB–DTT, and lysate/antibody/bead complexes were run through the columns. Columns were washed four times with 200 µl EB–DTT, and once with 100 µl low salt wash buffer, before the addition of 20 µl hot 1× Laemmli sample buffer for 5 min. 
Immunoprecipitates were then eluted with 50 µl hot 1× Laemmli sample buffer and collected in fresh tubes. Samples were boiled and 25 µl was analysed by SDS-PAGE and immunoblotting as described above.\n\nCell lines were seeded onto coverslips coated with poly-L-lysine (Sigma-Aldrich) in a 12-well plate (HEK293: 1.5 ×105 cells/well; HeLa: 7.5 ×104 cells/well; MEF: 3 ×104 cells/well). Cells were allowed to bed down overnight at 37°C in DMEM with 10% FBS. The following day (for HEK293 and HeLa cells), cells were treated with siRNA as described above, and incubated for 48 h. The following immunofluorescence staining protocol was then performed, with all steps carried out at room temperature (see Table 4 for reagent details). Cells were washed once in PBS, fixed in 4% paraformaldehyde for 10 min, washed three times in PBS-T, and permeabilized for 10 min in PBS-T. Cells were then blocked in 3% bovine serum albumin (BSA) in PBS-T for 30 min, and incubated for 1 h with anti-SK2 antibodies (4 µg/ml; ECM Biosciences 1:250 and Proteintech 1:172) diluted in 3% BSA/PBS-T. Cells were washed five times in PBS-T, and then incubated with goat anti-rabbit AlexaFluor 488 secondary antibody (1:500) for 1 h. After washing five times in PBS-T, cell nuclei were stained with DAPI (0.2 µg/ml) for 5 min. Cells were then washed twice in PBS, coverslips were partially dried and mounted onto slides using fluorescence mounting medium (Dako), and then left to set overnight. Fluorescence microscopy and imaging were performed using a Carl Zeiss LSM 700 confocal microscope, with Zen 2011 (Black Edition) version 8.1.5.484 software. All microscope settings, including gains, were kept constant for each cell line, allowing direct comparison between antibodies.\n\nVarious controls were used in these studies. Immunoprecipitations were performed with IgG isotype control antibody to control for any non-specific binding of proteins to the antibodies. 
Primary antibodies were also omitted from the immunofluorescence protocol to control for background fluorescence of the secondary antibody alone. siRNA-mediated SK2 knockdown as well as Sphk2-/- MEFs were utilized to verify the specificity of the SK2 antibodies to their target.\n\n\nResults\n\nBoth SK2 antibodies examined in the present study are reported by their respective manufacturers to be able to detect endogenous SK2 by IB. To determine the selectivity of the anti-SK2 antibodies, we performed IB analyses using two human cell lines (HEK293 and HeLa) that had been treated with either scrambled control or SK2-directed siRNA, as well as WT and Sphk2-/- MEFs. The Proteintech anti-SK2 antibody detected a single prominent band at the correct molecular weight for SK2 (~65 kDa), which was decreased or absent in the knockdown and knockout lines (Figure 1A; Dataset 118). Some faint non-specific bands were also detected in both the WT and Sphk2-/- MEF lysates by this antibody, which were not observed in the human cell lines. The ECM Biosciences anti-SK2 antibody did not appear to be very sensitive towards SK2, as no band was detected at the expected size in the HeLa lysates, and only very faint bands were present in the HEK293 and MEF lysates that were reduced or absent in the knockdown or knockout lines (Figure 1B). Furthermore, numerous prominent non-specific bands were present in all lysates, particularly in the MEF lines, indicating a lack of selectivity of this antibody towards SK2. Therefore, the Proteintech anti-SK2 antibody appears to be superior for use in IB, demonstrating both selectivity and sensitivity in the detection of endogenous SK2, particularly in the human cell lines tested.\n\nImmunoblot analyses of lysates from HEK293 and HeLa cells treated with scrambled control siRNA (si-Neg) or SK2 siRNA (si-SK2), and lysates from wildtype (WT) or Sphk2-/- MEFs. An equal amount (40 µg) of total protein from each sample was run in duplicate. 
After transferring to nitrocellulose and blocking, the membrane was separated and duplicate samples were probed with either (A) Proteintech rabbit anti-SK2 antibody or (B) ECM Biosciences rabbit anti-SK2 antibody. SK2 membranes were imaged using a 4 min exposure. The expected band size for SK2 is ∼65 kDa. Membranes were re-probed with mouse anti-α-tubulin antibody as a loading control (2 min exposure), which was detected at 55 kDa as expected. Consistent results were observed from 2-3 (HEK293 and MEF) or 3-4 (HeLa) independent experiments for each antibody.\n\nWe also examined whether either of the commercial anti-SK2 antibodies could immunoprecipitate SK2 from cell lysates. The Proteintech anti-SK2 antibody is suggested by the manufacturer to be useful for IP, whereas to our knowledge the ECM Biosciences anti-SK2 antibody has not been previously tested for use in this application. Initially, using lysates from HEK293 cells, we found that the Proteintech anti-SK2 antibody was sometimes able to IP a band at the correct size for SK2 (Figure 2A); however, this was not consistent between experimental repeats, and other proteins not present in the IgG isotype control were also immunoprecipitated to a varying extent by this antibody (Dataset 219).\n\nSK2 was immunoprecipitated from HEK293 cell lysate using either (A) Proteintech rabbit anti-SK2 antibody or (B) ECM Biosciences rabbit anti-SK2 antibody. Normal rabbit IgG antibody was used as an isotype control. Immunoprecipitates (and 40 µg lysate input) were subjected to immunoblot analyses and probed with (A) Proteintech rabbit anti-SK2 antibody or (B) ECM Biosciences rabbit anti-SK2 antibody. Membranes were imaged using a 4 min exposure. Images are representative of three independent experiments for each antibody. 
(C) SK2 was immunoprecipitated from HEK293 cell lysates (of equal protein) that had been treated with scrambled control siRNA (si-Neg) or SK2 siRNA (si-SK2), using ECM Biosciences rabbit anti-SK2 antibody. Immunoprecipitates were subjected to immunoblot analyses and probed with ECM Biosciences rabbit anti-SK2 antibody. Membrane was imaged using a 4 min exposure. Image is representative of three independent experiments. IgG h/c = IgG heavy chain.\n\nConversely, the ECM Biosciences anti-SK2 antibody was able to consistently and cleanly IP a protein of the same size as SK2 from cell lysates, with almost no non-specific bands observed (Figure 2B; Dataset 219). The protein immunoprecipitated by the ECM Biosciences antibody was considerably enriched from the cell lysate and was strongly detectable by this antibody, which was unable to detect SK2 in the lysate input sample, consistent with Figure 1B. To determine if this band was in fact SK2, the ECM Biosciences anti-SK2 antibody was then used to immunoprecipitate SK2 from HEK293 lysates treated with either scrambled control or SK2-directed siRNA. SK2 knockdown consistently resulted in reduced intensity of the band enriched by this antibody (Figure 2C; Dataset 219), confirming that the ECM Biosciences anti-SK2 antibody can selectively IP endogenous SK2.\n\nFinally, we examined whether these commercially available SK2 antibodies could selectively detect SK2 by IF. Neither antibody has been reported to be tested for use in IF by their respective manufacturers; however, the Proteintech SK2 antibody is recommended for immunohistochemistry. Using IF staining methods routinely performed in our laboratory, we compared the two anti-SK2 antibodies using HeLa, HEK293 and MEF cell lines. 
The Proteintech anti-SK2 antibody produced minimal staining in all cell lines tested (Figure 3A–C), and consequently there were no observable differences between the control cells and those with SK2 knockdown (in the human cell lines) or SK2 knockout (in the Sphk2-/- MEF line).\n\n(A) HeLa or (B) HEK293 cells were treated with scrambled control siRNA (si-Neg) or SK2 siRNA (si-SK2), and endogenous SK2 (green) was visualised by immunofluorescence staining and confocal microscopy, using Proteintech rabbit anti-SK2 antibody or ECM Biosciences rabbit anti-SK2 antibody. (C) Wildtype (WT) or Sphk2-/- MEFs were seeded, and endogenous SK2 (green) was visualised by immunofluorescence staining and confocal microscopy, using Proteintech rabbit anti-SK2 antibody or ECM Biosciences rabbit anti-SK2 antibody. Nuclei were stained with DAPI (blue). For each cell line, background staining was examined by staining cells (si-Neg or WT cells) with secondary antibody and DAPI only, and collecting images using both 488nm and 405nm lasers (SK2 + DAPI). Images were taken at 40× magnification; scale bars = 10 µm. Images shown are representative of more than 100 cells from each experiment, and these results were consistent over three independent experiments for each cell line.\n\nThe ECM Biosciences anti-SK2 antibody did result in consistently observable staining in HeLa and HEK293 cells, which was substantially reduced upon knockdown of SK2 (Figure 3A and B; Dataset 320). Hence, in these cells the ECM Biosciences antibody was able to selectively detect SK2 by IF. Interestingly, in HeLa cells SK2 detected by the ECM Biosciences antibody was predominantly nuclear with some peri-nuclear/cytoplasmic localization, whereas in HEK293 cells SK2 was cytoplasmic and excluded from the nucleus, which is consistent with previous reports9. 
However, the ECM Biosciences anti-SK2 antibody produced very strong peri-nuclear staining/puncta in both the WT and Sphk2-/- MEF lines (Figure 3C), suggesting that this staining was not specific for SK2 and represents non-specific binding to other proteins in this cell type. Increased non-specific binding of both SK2 antibodies to other proteins in the MEF lines was also observed by IB, so this cell type may not be suitable for use with these antibodies. It remains to be determined whether the same level of non-specificity is also observed in other mouse cell lines and tissues.\n\n\nConclusion\n\nIn our experience, commercially available antibodies raised against the SKs are often neither very sensitive nor very selective. A number of groups have generated their own SK-specific antibodies; however, many published studies have reported the use of different commercial SK2 antibodies, sometimes without proper controls or validation of selectivity. Hence, we have compared two commercially available SK2 antibodies and evaluated their selectivity towards SK2 in multiple applications using siRNA-mediated SK2 knockdown or Sphk2-/- MEF lines.\n\nWe found that the SK2 antibody from Proteintech was able to consistently detect a prominent band at the correct molecular weight by IB, and this band was verified to be SK2 by knockdown and knockout analyses, confirming the specificity of this antibody. The Proteintech antibody also resulted in virtually no non-specific detection of any other proteins in the HEK293 and HeLa lysates, but some additional faint bands were present in the MEF lines. This antibody has been tested by IB on various mouse tissue lysates by the manufacturer and many of these also gave rise to non-specific bands, so this will need to be considered and further validation may be required if this antibody is intended for use with mouse cells or tissues. 
Occasionally more than one band was detected in the human cell lines by the Proteintech SK2 antibody, but these bands also seemed to be reduced by SK2 knockdown. There are two characterized human SK2 isoforms21, so these bands may represent different SK2 variants and/or post-translationally modified forms of SK2.\n\nIn contrast, the present results revealed that the sensitivity of the ECM Biosciences antibody towards SK2 by IB was poor, with a faint band detected only in the HEK293 and MEF lines that was not present in the knockdown/knockout lysates. Furthermore, the ECM Biosciences SK2 antibody produced many intense non-specific bands in all cell lines tested, demonstrating poor selectivity. This antibody has been used for IB analyses in multiple publications12–15, suggesting that it may be more suitable for other cell/tissue systems or conditions not tested here. However, in agreement with our findings, the IB analysis performed by the manufacturer also showed various prominent non-specific bands in HeLa lysates. Therefore, at least in our hands, the ECM Biosciences SK2 antibody was not ideal for this application.\n\nHowever, the ECM Biosciences anti-SK2 antibody was superior for the IP of endogenous SK2, as it was able to cleanly and substantially enrich the protein from lysates and was confirmed by SK2-specific knockdown to be selective for SK2 in this application. This antibody will therefore be a useful tool to study SK2 function and regulation, as it can be applied to other applications requiring IP, such as chromatin-IP (ChIP) and rapid immunoprecipitation mass spectrometry of endogenous protein (RIME). 
In the present study, the Proteintech anti-SK2 antibody was inconsistent in its ability to IP protein at the correct size for SK2, and other bands of equal intensity were sometimes present.\n\nSimilarly, we found that the ECM Biosciences anti-SK2 antibody was able to selectively detect endogenous SK2 by IF staining in two human cell lines, HeLa and HEK293 cells. Furthermore, the observed localization of SK2 in these two cell lines was consistent with previous reports9. The selectivity of this antibody was validated by knockdown of SK2 in these cell lines, where most of the staining was reduced. A very small level of staining was still visible after SK2 siRNA treatment, possibly owing to the inherently incomplete nature of siRNA-mediated knockdown. However, we were unable to corroborate these data with SK2 knockout using the MEF lines, as considerable non-specific staining was present in this cell type, as was found for IB. Using identical methods, minimal staining was observed with the Proteintech anti-SK2 antibody for IF, and therefore the selectivity of this antibody towards SK2 in this application could not be properly examined.\n\nDuring this study, methods routinely used in our laboratory were employed, and where applicable, recommendations from the manufacturers for antibody dilutions and concentrations were followed. It is possible that further optimization for these antibodies may allow them to perform better in the applications where they were deemed not optimal. However, as our main aim was to directly compare the performance of these two antibodies, and given that at least one of the antibodies performed well for each application using our standard methods, further optimization was not performed.\n\nOverall, based on the data from this study we would recommend the use of the Proteintech SK2 antibody for IB, as it demonstrated selectivity and sensitivity towards endogenous SK2 in the human cell lines tested. 
Furthermore, we recommend the ECM Biosciences SK2 antibody for IP of endogenous SK2 and for visualizing SK2 by IF methods. However, both antibodies detected non-specific proteins by IB and IF in the mouse fibroblasts used, and hence further validation will be required to determine if this is the case for other mouse cells or tissues.\n\n\nData availability\n\nDataset 1: Raw images of all experimental replicates for Figure 1, immunoblotting experiments. This dataset includes uncropped blots for all experimental replicates that are represented in Figure 1. Treatments and immunoblot methods were performed as outlined in Figure 1. Blots were probed with Proteintech rabbit polyclonal anti-SK2 antibody (A–D) or ECM Biosciences rabbit polyclonal anti-SK2 antibody (E–H). Anti-α-tubulin antibody was used as a loading control. O/E SK2 = lysate from cells overexpressing SK2, used as a positive control to validate the correct size of SK2. Asterisks denote other protein bands that were probed using other antibodies not relevant to this study, prior to anti-α-tubulin.\n\nDOI: 10.5256/f1000research.10336.d14541618\n\nDataset 2: Raw images of all experimental replicates for Figure 2, immunoprecipitation experiments. This dataset includes uncropped blots for all experimental replicates that are represented in Figure 2. SK2 immunoprecipitation from HEK293 cell lysate, and subsequent immunoblotting, were performed using either (A–C) Proteintech rabbit anti-SK2 antibody or (D–F) ECM Biosciences rabbit anti-SK2 antibody. (G–I) SK2 immunoprecipitation from HEK293 cell lysates (of equal protein) treated with scrambled control siRNA (si-Neg) or SK2 siRNA (si-SK2), and subsequent immunoblotting, were performed using ECM Biosciences rabbit anti-SK2 antibody.\n\nDOI: 10.5256/f1000research.10336.d14541719\n\nDataset 3: Raw images of additional experimental replicates for Figure 3, immunofluorescence experiments. 
This dataset includes additional images from experimental replicates that demonstrate reproducibility of the images presented in Figure 3. Treatments and immunofluorescence staining methods were performed as outlined in Figure 3. Images were taken at 40× magnification; scale bars = 10 µm.\n\nDOI: 10.5256/f1000research.10336.d14541820",
"appendix": "Author contributions\n\n\n\nSP conceived the study. HN designed and carried out the experiments, and prepared the first draft of the manuscript. Both authors were involved in the revision of the draft manuscript and have agreed to the final content.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis work was funded by an Australian Postgraduate Award, Royal Adelaide Hospital Dawes Scholarship and the University of South Australia (HN), and a National Health and Medical Research Council of Australia Project Grant (#626936) and Senior Research Fellowship (#1042589), and the Fay Fuller Foundation (SP).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nWe would like to thank Dr Briony Gliddon for generating the primary mouse embryonic fibroblasts used in this study.\n\n\nReferences\n\nPitson SM: Regulation of sphingosine kinase and sphingolipid signaling. Trends Biochem Sci. 2011; 36(2): 97–107. PubMed Abstract | Publisher Full Text\n\nPyne NJ, Pyne S: Sphingosine 1-phosphate and cancer. Nat Rev Cancer. 2010; 10(7): 489–503. PubMed Abstract | Publisher Full Text\n\nXia P, Gamble JR, Wang L, et al.: An oncogenic role of sphingosine kinase. Curr Biol. 2000; 10(23): 1527–1530. PubMed Abstract | Publisher Full Text\n\nNeubauer HA, Pham DH, Zebol JR, et al.: An oncogenic role for sphingosine kinase 2. Oncotarget. 2016. PubMed Abstract | Publisher Full Text\n\nChipuk JE, McStay GP, Bharti A, et al.: Sphingolipid metabolism cooperates with BAK and BAX to promote the mitochondrial pathway of apoptosis. Cell. 2012; 148(5): 988–1000. PubMed Abstract | Publisher Full Text | Free Full Text\n\nOkada T, Ding G, Sonoda H, et al.: Involvement of N-terminal-extended form of sphingosine kinase 2 in serum-dependent regulation of cell proliferation and apoptosis. J Biol Chem. 2005; 280(43): 36318–36325. 
PubMed Abstract | Publisher Full Text\n\nLiu H, Sugiura M, Nava VE, et al.: Molecular cloning and functional characterization of a novel mammalian sphingosine kinase type 2 isoform. J Biol Chem. 2000; 275(26): 19513–19520. PubMed Abstract | Publisher Full Text\n\nHait NC, Allegood J, Maceyka M, et al.: Regulation of histone acetylation in the nucleus by sphingosine-1-phosphate. Science. 2009; 325(5945): 1254–1257. PubMed Abstract | Publisher Full Text | Free Full Text\n\nIgarashi N, Okada T, Hayashi S, et al.: Sphingosine kinase 2 is a nuclear protein and inhibits DNA synthesis. J Biol Chem. 2003; 278(47): 46832–46839. PubMed Abstract | Publisher Full Text\n\nHait NC, Sarkar S, Le Stunff H, et al.: Role of sphingosine kinase 2 in cell migration toward epidermal growth factor. J Biol Chem. 2005; 280(33): 29462–29469. PubMed Abstract | Publisher Full Text\n\nLiu X, Ren K, Suo R, et al.: ApoA-I induces S1P release from endothelial cells through ABCA1 and SR-BI in a positive feedback manner. J Physiol Biochem. 2016; 72(4): 657–667. PubMed Abstract | Publisher Full Text\n\nBruno G, Cencetti F, Pertici I, et al.: CTGF/CCN2 exerts profibrotic action in myoblasts via the up-regulation of sphingosine kinase-1/S1P3 signaling axis: Implications in the action mechanism of TGFβ. Biochim Biophys Acta. 2015; 1851(2): 194–202. PubMed Abstract | Publisher Full Text\n\nWallington-Beddoe CT, Powell JA, Tong D, et al.: Sphingosine kinase 2 promotes acute lymphoblastic leukemia by enhancing MYC expression. Cancer Res. 2014; 74(10): 2803–2815. PubMed Abstract | Publisher Full Text\n\nLiu W, Ning J, Li C, et al.: Overexpression of Sphk2 is associated with gefitinib resistance in non-small cell lung cancer. Tumour Biol. 2016; 37(5): 6331–6336. PubMed Abstract | Publisher Full Text\n\nSun E, Zhang W, Wang L, et al.: Down-regulation of Sphk2 suppresses bladder cancer progression. Tumour Biol. 2016; 37(1): 473–478. 
PubMed Abstract | Publisher Full Text\n\nReid SP, Tritsch SR, Kota K, et al.: Sphingosine kinase 2 is a chikungunya virus host factor co-localized with the viral replication complex. Emerg Microbes Infect. 2015; 4(10): e61. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMizugishi K, Yamashita T, Olivera A, et al.: Essential role for sphingosine kinases in neural and vascular development. Mol Cell Biol. 2005; 25(24): 11113–11121. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNeubauer H, Pitson S: Dataset 1 In: Validation of commercially available sphingosine kinase 2 antibodies for use in immunoblotting, immunoprecipitation and immunofluorescence. F1000Research. 2016. Data Source\n\nNeubauer H, Pitson S: Dataset 2 In: Validation of commercially available sphingosine kinase 2 antibodies for use in immunoblotting, immunoprecipitation and immunofluorescence. F1000Research. 2016. Data Source\n\nNeubauer H, Pitson S: Dataset 3 In: Validation of commercially available sphingosine kinase 2 antibodies for use in immunoblotting, immunoprecipitation and immunofluorescence. F1000Research. 2016. Data Source\n\nNeubauer HA, Pitson SM: Roles, regulation and inhibitors of sphingosine kinase 2. FEBS J. 2013; 280(21): 5317–5336. PubMed Abstract | Publisher Full Text"
}
|
[
{
"id": "18381",
"date": "21 Dec 2016",
"name": "Maria Laura Allende",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThe article by Neubauer and Pitson compares two commercially available antibodies against sphingosine kinase 2 (SK2). They tested the antibodies for their performance in western blotting, immunoprecipitation and immunofluorescence using two human cell lines as well as MEFs. They concluded that the Proteintech antibody has better performance for immunoblotting while the one from ECM Biosciences works with higher sensitivity in immunoprecipitation and immunofluorescence. Both antibodies are more specific with the human cells. The results suggest that these antibodies can be used to track the level of expression of SK2. The report has detailed information on reagents utilized for each protocol as well as the experimental procedures, and it is a starting point for establishing specific optimal conditions for other investigators. The article is well written. The experiments were designed with adequate controls. The results are presented correctly and support the conclusions.",
"responses": [
{
"c_id": "2573",
"date": "23 Mar 2017",
"name": "Heidi Neubauer",
"role": "Author Response",
"response": "Your positive comments and perfect understanding of the findings of this report are highly appreciated."
}
]
},
{
"id": "18459",
"date": "23 Dec 2016",
"name": "Dagmar Meyer Zu Heringdorf",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nNeubauer and Pitson have compared here two commercially available antibodies to sphingosine kinase-2 (SK2) with regard to their performance in immunoblotting, immunoprecipitation and immunofluorescence staining. As the authors point out, much less is still known about SK2 compared to SK1, and the roles of SK2 in cell death, survival, and cancer remain in large part enigmatic. Furthermore, the availability of good antibodies is certainly an issue in this area. These facts underline the importance of the authors’ undertaking.\nNeubauer and Pitson present a thorough and convincing report. They selected two rabbit polyclonal antibodies raised against different regions of human SK2 with reported or predicted cross-reactivity with mouse and rat SK2. The antibodies were tested in HeLa and HEK293 cell lysates with and without siRNA knockdown of SK2, and mouse embryonic fibroblasts (MEFs) from wild type and SphK2-/- mice. The methods are reported clearly and in exceptional detail, which is of great value per se. The results show that the SK2 antibody from Proteintech performed well in immunoblotting, while the ECM antibody hardly detected SK2 but stained multiple nonspecific bands in the human and mouse cell lysates. On the other hand, the SK2 antibody from ECM, but not the Proteintech antibody, precipitated a band with the molecular weight of SK2 and caused fluorescence staining of HeLa and HEK293 cells that was sensitive to SK2 siRNA. 
In the MEFs, finally, neither of the two antibodies was suitable for immunofluorescence, because their staining pattern was insensitive to SK2 siRNA. The authors thus provide valuable data which can help other researchers to establish suitable protocols in their own cellular systems.\nThere are only minor points which deserve a discussion. Firstly, looking at Figure 1, the SK2 bands in the Western blot appear quite faint, and the reader wonders what the immunoblots would look like had they been exposed more strongly. However, a view into the deposited data sets suggests that compared to overexpressed SK2, endogenous SK2 is probably expressed at very low levels in HeLa and HEK293 cells. Is this indeed the case? Secondly, the Proteintech antibody, recognizing amino acids 266-618 of human SK2a, should also recognize other isoforms of SK2 with different N-terminal lengths. Why is there only one band? Is there only one isoform of SK2 expressed in HeLa and HEK293 cells? Thirdly, a low expression of SK2 might also hamper the generation of higher quality immunofluorescence images. The ECM antibody stained nuclei and cytosolic structures in HeLa cells, for which a nuclear localization of SK2 has been reported, while it caused merely cytosolic staining in HEK293 cells, similar to what others have observed with GFP-SK2. While these observations provide confidence in the specificity of the antibody, the weak staining, preventing higher resolution images, raises some doubts that a localization of endogenous SK2 at subcellular structures such as mitochondria could be detected by this antibody.",
"responses": [
{
"c_id": "2574",
"date": "23 Mar 2017",
"name": "Heidi Neubauer",
"role": "Author Response",
"response": "Your careful review and suggestions for improving the report are highly appreciated. In response to your comments we have included some discussion on the very low expression of SK2 protein in many cell lines, which impacts on immunoblotting and immunofluorescence detection. We have also included some comments on our findings that the SK2a (SK2-S) splice isoform appears to be the main SK2 protein present in HeLa and HEK293 cells, despite the previous report that suggested the main SK2 mRNA in some human cell lines, including HeLa cells, encodes the SK2b (SK2-L) splice isoform."
}
]
}
] | 1
|
https://f1000research.com/articles/5-2825
|
https://f1000research.com/articles/5-2517/v1
|
13 Oct 16
|
{
"type": "Clinical Practice Article",
"title": "Massive ear keloids: Natural history, evaluation of risk factors and recommendation for preventive measures – A retrospective case series",
"authors": [
"Michael Tirgan"
],
"abstract": "Keloid disorder (KD) is an inherited wound healing ailment, frequently seen among Africans/African Americans and Asians. The genetics of this disorder continue to be obscure and poorly understood. Clinical manifestation of KD is quite variable and very diverse, spanning from individuals with one or very few small keloidal lesions, to those with numerous and very large lesions covering a large portion of their skin. Ears are common locations for the development of keloids. Ear piercing is by far the leading triggering factor for ear keloid formation in genetically predisposed individuals. Although there are numerous publications about ear and earlobe keloids, there is a void in the medical literature about massive ear keloids. This paper focuses on the natural history of massive ear keloids, the risk factors that lead to the formation of these life-changing and debilitating tumors, and recommendations for prevention.",
"keywords": [
"Ear Keloid",
"Cryotherapy"
],
"content": "Introduction\n\nPatients with keloid disorder (KD) carry a genetic abnormality that predisposes them to the disorder1. Although no convincing genetic abnormalities have been linked to KD, clinical observation suggests that the genetic predisposition to KD has a broad spectrum2. Individuals who suffer from a mild form of the disorder typically develop one or few slow-growing keloidal lesions, whereas individuals with the severe form of the disorder often develop several large keloids. In addition to the genetics, other factors also play important roles in the clinical presentation of KD. Most importantly, there must exist an injury to the skin that would trigger an abnormal wound healing response that leads to the formation of keloid lesions2. Figure 1 depicts a young African American male who developed an earlobe keloid following the piercing of his right ear. In addition, he also sustained several sharp and deep injuries to his neck, left shoulder and left arm. All wounded areas subsequently transformed into linear keloids. Therefore, it is safe to conclude that had he not pierced his ear or sustained other injuries, he would not have developed any of these keloids and would have remained completely asymptomatic. Thus, simple clinical observations of this one patient teach us that certain individuals harbor the KD genetic abnormality yet remain asymptomatic only because they have not pierced their ears or sustained a serious injury to their skin.\n\nNotice that each wounded area of skin has transformed into a keloidal lesion.\n\nAnother important fact about KD, which is well exemplified in this case, is that adjacent and even distant skin are also affected by the keloidal process; thus, the wounding of normal-appearing skin will inevitably lead to the formation of new keloid lesions.\n\nIn addition to genetics and skin injuries, the third important factor in the clinical presentation of KD is the age of the individual. 
The peak age of onset of KD occurs during puberty; however, certain types of skin injuries only occur later in life. For instance, the typical age of those undergoing cardiac bypass surgery or facelift surgery is in the 6th and 7th decades of life. As such, certain KD carriers will remain asymptomatic until they undergo their first surgery and end up with chest-wall or peri-auricular keloids1,3. Race, gender, passage of time and therapeutic interventions are other important factors that play their own unique role in the clinical presentation of this disorder. The wide spectrum of these factors contributes to the highly variable phenotype of KD. The clinical presentation of KD is to some extent race and gender dependent. Large and tumoral keloids, including massive ear lesions, are more often encountered among Africans, African Americans and individuals with black skin2.\n\nFocusing our attention on the ears, it is common knowledge that keloid lesions grow over time. With medical interventions, some KD lesions respond well to the treatments, but some lesions fail to respond, or even get worse and grow much larger. By far, the most important factor in the development of all primary keloidal lesions is the initial wounding injury of the skin. However, the surgical removal of ear keloids that is commonly performed by ENT specialists, plastic surgeons and general dermatologists, defies this very basic principle of keloid formation. The extent of the injury to the surrounding skin when an ear keloid is surgically removed is obviously several fold greater than the primary injury sustained from the piercing procedure. This iatrogenic injury will undoubtedly trigger a keloidal wound healing response that is not only more intense than the one triggered by the original piercing event but also much greater in magnitude and distribution. 
Studies have indicated that almost all ear keloids and almost all other keloid lesions will relapse after surgery; hence, the need for adjuvant treatment has been emphasized by almost every author who has published on this topic. Adjuvant treatments in the form of post-operative steroid injections4, pressure devices5 or even radiation therapy6 are often incorporated in management of ear keloids in order to counteract the fully expected keloid recurrence after surgery. However, despite the meticulous use of all available adjuvant treatments, a large number of patients will suffer from recurrent ear keloids and undergo second, third or fourth surgeries. Unfortunately, the ear keloids will continue to relapse in many instances. At some point, the surgeon, the patient, or even both will abandon therapeutic interventions.\n\nThis article focuses on these unfortunate cases; instances of recurrent large, semi-massive, and massive ear keloids among mostly young patients who ultimately accept the reality that surgery and/or adjuvant radiation therapy cannot treat their keloids, thereby resigning themselves to living with huge tumoral keloids hanging from their ears, an unwanted and unpleasant outcome that impacts every aspect of their daily lives.\n\n\nMaterials and methods\n\nThis is a retrospective analysis of 283 consecutive patients with ear keloids who were seen by the author in his keloid specialty medical practice. Patients with post-otoplasty ear keloids, and those with post-facelift peri-auricular keloids were not included in this study. The underlying research project for this retrospective study was determined by the Western IRB to meet the conditions for exemption under 45 CFR 46.101(b)(4). Consent is not required for studies that are determined to be exempt under 45 CFR 46.101(b)(4).\n\nKeloids were assessed visually and categorized according to their size into four separate groups. 
Other than the author’s recently published keloid staging system7, there are no other previously described methodologies that would allow for more precise grouping of the ear keloids.\n\nTable 1 summarizes characteristics of the patients within each group.\n\n1- Massive ear keloids: the size of the keloid mass is greater than the surface area of the corresponding ear. Thirteen patients (4.5%) met this criterion. Three patients were Caucasians, and 10 were African Americans. Four patients (three females and one male) had bilateral massive ear keloids. Figure 2 depicts several patients in this category.\n\n2- Semi-massive ear keloids: the size of the keloid mass is at least 50% of the surface area of the corresponding ear. Eighteen patients (6.3%) met this criterion. Two patients were Caucasians, and sixteen were African Americans. Figure 3 depicts several patients in this category.\n\n3- Large ear keloids: the size of the keloid mass is greater than the size of the corresponding earlobe. In total, 181 patients met this criterion. Forty-nine patients were Caucasians or Asians, and 132 patients were African Americans. Figure 4 depicts several patients in this category.\n\n4- Small ear keloids: the size of the keloid mass is less than the size of the corresponding earlobe. Seventy-one patients met this criterion. Twenty-eight patients were Caucasians or Asians, and 43 patients were African Americans. 
Figure 5 depicts several patients in this category.\n\nYellow radiation signs identify patients who have previously received adjuvant radiation therapy after removal of their ear keloids.\n\nTable 2 shows the stage classification of solitary ear keloids according to the author’s new Keloid Staging System6.\n\n\nResults\n\nAlthough this study is limited by its size, and patients were drawn from only one medical practice that does not offer surgery for treatment of keloids, several interesting factors stand out as risk factors for the development of large, semi-massive and massive ear keloids. More females were noted in each study group. However, this gender imbalance may simply be related to the fact that more women pierce their ears.\n\n□ African/African American race was noted to be a major potential risk factor in all four groups, most importantly among those with massive and semi-massive ear keloids, with only five Caucasians/Asians among the 31 patients in both these groups.\n\n□ Prior keloid removal surgery was the most important risk factor among all patients with massive and semi-massive ear keloids. Without exception, all these patients had undergone between one and seven prior keloid removal surgeries.\n\n□ Prior keloid removal surgery was the most important risk factor among patients with large ear keloids. One hundred thirty-one patients (73%) had a history of prior keloid removal surgery.\n\nThe patients’ history of prior keloid removal surgery is summarized in Table 3.\n\n\nDiscussion\n\nSurgery is a commonly practiced therapeutic intervention for removal of ear keloids. Based on the findings of this study, the author proposes the following designations for keloid lesions.\n\nA primary ear keloid is a keloid that has not been previously treated with surgery. Keloid lesions can form in any part of the ear; however, the location of the keloid solely depends on the site of the prior injury or ear piercing. 
All primary ear keloids start as a small skin lesion and grow over time. The longer a keloid is present, the larger it will become. Figure 5 depicts several examples of primary keloids in various stages of development.\n\nA secondary ear keloid is a new keloid that forms at the site of surgery for the removal of a primary keloid. Figure 2, Figure 3, and Figure 4 depict numerous cases of secondary ear keloids.\n\nIt is undisputable that the extent of the injury from the surgical removal of a primary ear keloid is significantly greater than the injury sustained from ear piercing. It is also logical to conclude that the extent of skin injury has a direct and linear relationship with the size and mass of keloid lesions. These two simple facts explain why keloid removal surgery can trigger the development of larger keloids. While cognizant of the fact that there are patients whose keloids do not recur after surgery, we must acknowledge the deleterious effects of surgery and the nightmare imposed on patients who end up developing large, semi-massive or massive ear keloids. The psychological stress and anxiety that is imposed on a young person by having to live with a worsened ear keloid is very real and life changing8.\n\nIndiscriminate and repeated surgical attempts to remove ear keloids are also associated with disfigurement of the ear. By attempting to remove the entire keloid, surgeons remove part of the earlobe or perform a wedge resection and remove some of the ear cartilage and soft tissue adjacent to the keloid. Even if this approach does not lead to the recurrence of the keloid, which it often does, it will result in the loss of normal ear anatomy and a poor aesthetic outcome. Figure 6 depicts several examples of such poor outcomes. 
A very common shortcoming of several publications on the surgical treatment of ear keloids2–4 is the lack of reporting of aesthetic outcomes.\n\nNotice the disfiguration of normal ear anatomy and the loss of ear tissue from prior surgeries.\n\nThe recently advocated approach of surgery in combination with adjuvant radiation therapy6, although it may yield a lower keloid recurrence rate, exposes all patients to the potentially grave adverse effects of radiation therapy. Those of us who take on the task of treating keloid patients, often teenagers and young adults, need to be very cognizant of the risks associated with the treatments that we offer to our patients. Although surgery provides a quick-fix solution to an ear keloid, exposing children and young adults to a procedure that carries even a 1% risk of causing a massive or semi-massive ear keloid is unacceptable, let alone the 10.8% risk observed among the 283 consecutive cases presented here. It is unfortunate that data on the incidence of massive or semi-massive ear keloids have never been published, but a rate of greater than 10% among the author’s patients is very disturbing and resonates like a loud siren calling for more careful analysis of outcome data from all surgical interventions.\n\nFurthermore, the carcinogenic risk of radiation therapy is real and should not be underestimated. Exposing teenagers and young adults to such a treatment, even with a small long-term risk, is simply unacceptable. We need to bear in mind that we are not treating elderly cancer patients with radiation; we are treating teenagers and young adults. No matter how well the ear tissue is isolated and shielded, many thousands of hematopoietic stem cells that circulate in the capillaries and venules of the ear tissue will be exposed to ionizing irradiation. 
The author doubts that even one radiation therapist would be willing to expose his or her own ear tissue, or that of his or her child, to the radiation that is so casually offered to many young adults with KD.\n\nMoreover, the real rate of keloid non-recurrence after adjuvant radiation therapy remains unknown. Most studies report their outcomes after a short interval of a few months to two years. A recent comprehensive review of adjuvant radiation therapy9 for treatment of keloid lesions screened 207 publications, many of which were excluded for not describing a minimum follow-up. The authors limited their study to 33 articles, with only 10 studies providing the incidence of recurrence. The mean time to recurrence was 14.8 ± 6.7 months with a range of 2–36 months post-treatment. The true long-term recurrence rate of keloids after adjuvant radiation therapy remains unknown. The author is currently treating a patient with a massive left ear keloid who had her first recurrence 13 years after receiving adjuvant radiation therapy. Figure 2 depicts several cases of massive ear keloids in patients who had previously received adjuvant radiation therapy after surgical removal of their ear keloids.\n\nThe successful treatment of human diseases relies on a thorough understanding of the underlying processes that lead to the development of particular illnesses. The basic principle of treating keloidal lesions is the destruction of the abnormal tissue with a method that will not trigger the underlying keloidal wound healing response. Surgical removal of keloids will indeed trigger this pathological wound healing response and can result in the development of a much larger ear keloid. Figure 8 depicts the vicious cycle of surgery that can result in the formation of semi-massive and massive ear keloids; a cycle that all 31 patients in this study, and all those shown in Figure 2 and Figure 3, have been through. 
The development of all secondary keloids can be effectively prevented if we simply stop performing surgery on keloid patients altogether. In the author’s opinion, supported by his own experience, the paradigm-shifting treatment approach is a move to utilize contact cryotherapy for treatment of all primary ear keloids. Furthermore, the author believes that proper application of cryotherapy can effectively remove all primary ear keloids and prevent the development of all secondary keloids. Results with high therapeutic success rates have previously been reported by others10–13. Figure 7 depicts several examples of durable results achieved by the author for patients with primary ear keloids.\n\nNotice the very minimal scarring at the site of cryotherapy. Most of these patients have enjoyed very durable and persistent results.\n\nCryotherapy should be delivered properly and repeated as many times as needed. Liquid nitrogen is best applied to the keloid tissue using a properly sized hand-held applicator, such as a large cotton swab. The process should be repeated until the entire mass of the keloid is frozen to the level of normal ear tissue. Within a few hours, the treated tissue becomes edematous and swollen and often forms a blister, which may burst and ooze serous fluid for several days. During this time, the treated keloid should be attended to as an open wound; thus, it is best covered with gauze and a loose dressing. Over the next several days, the treated tissue will become dehydrated and form a black-colored scab, which will remain in place for a few weeks. The scab will then gradually slough off. This process takes two to three weeks for very small keloids and up to six to eight weeks for larger keloids. In the author’s experience, upon recovery from the first treatment, most keloids show a 30–60% reduction in mass. Cryotherapy should be repeated in the same fashion, often every four to eight weeks, until the keloid mass is totally destroyed. 
Depending on the size of the primary keloid, this process takes four to eight months and results in total removal of the primary keloid in almost every patient. Pressure devices or intra-lesional steroids should be used in all patients who continue to have a keloid remnant within their ear tissue.\n\nPain control is critical during the application of cryotherapy as well as during the first 24 hours after treatment. Inadequate pain control will result in a lack of compliance and a poor treatment outcome. All patients should be educated about the process and prescribed proper pain control medications.\n\nFurthermore, there is no need to perforate the body of the keloid with a very large-bore metallic cannula and run liquid nitrogen through its core14. There are no data and no evidence to support the superiority of this invasive technique over standard, non-invasive contact cryotherapy.\n\nAlthough we can successfully debulk and remove almost all primary ear keloids with cryotherapy, there is a clear need for follow-up in all patients in order to detect and treat early recurrences. Repeat cryotherapy, intra-lesional steroids and/or intra-lesional chemotherapy should be considered in treating keloid recurrences.\n\n\nConclusions\n\nThe goal of treatment for keloid lesions, and ear keloids in particular, should focus not only on removal of the keloid tissue but, most importantly, on two other very important principles:\n\n1. Prevention of damage to the ear tissue\n\n2. Prevention of the recurrence of the keloid\n\nPerforming surgery to remove primary ear keloids is inherently contrary to both of the above principles. Surgery, by its nature, induces new injury to the skin, and as shown in Figure 6, the surgical removal of a primary keloid frequently results in the loss of surrounding normal ear tissue. The loss of normal ear tissue, even in the absence of future keloid recurrence, will often result in an unacceptable aesthetic outcome. 
The worsening of ear keloids after surgical excision is caused by the triggering of the same dysregulated wound healing response, this time by a new dermal injury that is more extensive than the injury from the ear piercing itself.\n\nTopical contact cryotherapy should be the primary mode of treatment for all primary and secondary ear keloids. This approach will prevent the development of incurable secondary, large, semi-massive and massive keloids and eliminate the need for hazardous adjuvant radiation therapy.\n\n\nData availability\n\nAll raw data relevant to the study are provided in the tables above.",
"appendix": "Competing interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nMarneros AG, Norris JE, Olsen BR, et al.: Clinical genetics of familial keloids. Arch Dermatol. 2001; 137(11): 1429–34. PubMed Abstract | Publisher Full Text\n\nPark TH, Park JH, Tirgan MH, et al.: Clinical implications of single- versus multiple-site keloid disorder: a retrospective study in an Asian population. Ann Plast Surg. 2015; 74(2): 248–51. PubMed Abstract | Publisher Full Text\n\nTirgan MH, Shutty CM, Park TH: Nine-month-old patient with bilateral earlobe keloids. Pediatrics. 2013; 131(1): e313–7. PubMed Abstract | Publisher Full Text\n\nAl Aradi IK, Alawadhi SA, Alkhawaja FA, et al.: Earlobe keloids: a pilot study of the efficacy of keloidectomy with core fillet flap and adjuvant intralesional corticosteroids. Dermatol Surg. 2013; 39(10): 1514–9. PubMed Abstract | Publisher Full Text\n\nTanaydin V, Beugels J, Piatkowski A, et al.: Efficacy of custom-made pressure clips for ear keloid treatment after surgical excision. J Plast Reconstr Aesthet Surg. 2016; 69(1): 115–21. PubMed Abstract | Publisher Full Text\n\nvan Leeuwen MC, Stokmans SC, Bulstra AE, et al.: Surgical Excision with Adjuvant Irradiation for Treatment of Keloid Scars: A Systematic Review. Plast Reconstr Surg Glob Open. 2015; 3(7): e440. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTirgan MH: Neck keloids: evaluation of risk factors and recommendation for keloid staging system [version 2; referees: 1 approved, 1 approved with reservations]. F1000Research. 2016; 5: 1528. Publisher Full Text\n\nBijlard E, Kouwenberg CA, Timman R, et al.: Burden of Keloid Disease: A Cross-sectional Health-related Quality of Life Assessment. Acta Derm Venereol. 2016. 
PubMed Abstract | Publisher Full Text\n\nvan Leeuwen MC, Stokmans SC, Bulstra AE, et al.: Surgical Excision with Adjuvant Irradiation for Treatment of Keloid Scars: A Systematic Review. Plast Reconstr Surg Glob Open. 2015; 3(7): e440. PubMed Abstract | Publisher Full Text | Free Full Text\n\nZouboulis CC, Orfanos CE: [Cryosurgical treatment of hypertrophic scars and keloids]. Hautarzt. 1990; 41(12): 683–8. PubMed Abstract\n\nBarara M, Mendiratta V, Chander R: Cryotherapy in treatment of keloids: evaluation of factors affecting treatment outcome. J Cutan Aesthet Surg. 2012; 5(3): 185–9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRusciani L, Paradisi A, Alfano C, et al.: Cryotherapy in the treatment of keloids. J Drugs Dermatol. 2006; 5(7): 591–5. PubMed Abstract\n\nFikrle T, Pizinger K: Cryosurgery in the treatment of earlobe keloids: report of seven cases. Dermatol Surg. 2005; 31(12): 1728–31. PubMed Abstract | Publisher Full Text\n\nvan Leeuwen MC, Bulstra AE, Ket JC, et al.: Intralesional Cryotherapy for the Treatment of Keloid Scars: Evaluating Effectiveness. Plast Reconstr Surg Glob Open. 2015; 3(6): e437. PubMed Abstract | Publisher Full Text | Free Full Text"
}
|
[
{
"id": "17447",
"date": "15 Nov 2016",
"name": "Robert Sidbury",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nOverall this is a nice review. There are certain small additions I would make (e.g. a line somewhere capturing the idea that keloids can be seen disproportionately in certain syndromes, such as Rubinstein-Taybi syndrome). This might reasonably be placed in the abstract (e.g. KD is an inherited wound healing ailment, frequently seen among Africans/African Americans, Asians, and in some genetic disorders such as Rubinstein-Taybi syndrome). A notation like this in the introduction would also make it more complete.\n\nThe aforementioned criticism pales in comparison with my overarching concern that this article has more of an agenda than just information dissemination. The agenda is that surgery is bad for ear keloids, and the author does not shy away from this opinion. My concern is not that he expresses this opinion, even advocates for it to a certain extent if he believes it to be true, but the lengths he takes this, and the way he uses his \"data\" asymmetrically to make this point, give me pause.\n\nFirst, what about the point itself? Is it valid? Is surgery bad for keloids? Yes, surgery is an injury to the skin which can itself promote keloid formation, indeed so much so that our surgeons will not operate to remove or debulk keloids in certain areas (e.g. neck, trunk). 
However, they do believe, and I have shared plenty of patients to validate, that ear keloids can be effectively removed in a sustainable, cosmetically acceptable way without necessarily (and in fact rarely) resorting to radiation therapy adjunctively. This sort of response is well-documented in the literature; they just don't happen to be references this author cites:\nTriamcinolone after surgery as effective as radiation therapy (and both can be effective) at preventing recurrence 1\n\nPressure clips after surgery can prevent recurrence (this is the technique used by my colleagues in Plastic Surgery to good effect) 2\nThere are countless other references of success using surgery with acceptable risk-benefit profiles.\nSecond, where do I believe the author goes beyond simply stating his opinion, backed up by his data, that surgery can lead to recurrence and poor outcome?\n\nFirst paragraph page 3, final sentence, \"...unfortunately, ear keloids will continue...\". This is stated as if it is ALWAYS the case and it simply is not.\n\nWhy did he not include post-otoplasty patients in the study?\n\np 8 Results: The authors don't have experience with their own surgical successes because none of these patients have that modality (\"do not offer surgery for treatment of keloids\"). Therefore the only patients they see are surgical failures.\n\np 8 under Secondary Ear Keloids: \"Cognizant of the fact that there are patients whose keloids do not recur after surgery...\" and then there is no balance in what follows. The context that the author describes speaks to the need for an appropriate and careful risk-benefit discussion, but not necessarily to not offering surgery at all when there are many reasons why it is sometimes the right choice. What if the keloid itself is life altering? What if they do not have time to return for repeated liquid nitrogen over the course of a year? With the attendant blistering and healing phases played out with each treatment? 
And the cost?\n\np 10 paragraph 2: \"1% risk of causing massive or semi-massive ear keloids is unacceptable\" all depends on patient and context.\n\nFigure 6: \"after\" shots should be paired with \"before\" shots; I can imagine at least some of these outcomes being potentially preferable to the keloid it replaced, but I can't know without seeing images.\n\np 10: Need for a paradigm, paragraph # 1. Stop all surgery, cryo for all...this just isn't practical or medically justifiable.\n\np 10 cryotherapy paragraph: This just isn't possible for all patients. Problems include time, $, quality of life during treatment, incomplete response because not all patients respond to cryo, and post-inflammatory pigment alteration secondary to cryo, especially in darker-skinned patients, which this author does not mention at all.\n\nSo, in summary, this is simply too one-sided and agenda-driven to be an appropriate publication in my opinion. Could it be modified? Sure. Present cryotherapy and other options alongside surgery and feel free to opine, but a more balanced presentation with updated references would be required for me to endorse.",
"responses": [
{
"c_id": "2296",
"date": "22 Nov 2016",
"name": "Michael Tirgan",
"role": "Author Response",
"response": "Dear Dr. Sidbury: Thank you very much for taking the time to review and comment on my publication. Peer review plays an important role in publishing research material. I truly appreciate your detailed and thorough review and each and every comment you have made. I hereby address the points you have raised. As for referencing Rubinstein-Taybi Syndrome - This manuscript is focused on providing data about Massive Ear Keloids. It is by no means a review of the disorder. As for your comment about my point being that “surgery is bad for ear keloids” – I am personally convinced that to be the case. Keloid disorder is a genetic disorder that involves much of the normal-appearing skin. It is not limited to the area of a keloid growth; therefore surgery cannot cure it. There are numerous publications that have eloquently explained this fact. This conclusion is backed by the data presented in the manuscript. It is also backed by the clinical observations of some physicians, and of parents and relatives of keloid patients, who dissuade patients from undergoing surgery. As for your comment “therefore the only patients they see are surgical failures” - this is simply not correct. Patients shown in Figure 5 are some of my patients who presented with very early stages of ear keloid and had chosen a non-surgical approach. As for the breakdown of the study cohorts, there were 283 patients in the study. 31 patients had massive or semi-massive ear keloids. Among the 181 patients with large keloids, only 73% had prior surgery and the remaining 27% never had surgery. Of the 71 patients in the “small ear keloid” group, the majority had not undergone surgery. Altogether, more than a third of all patients did not have surgery. As for your comment about this manuscript being “agenda driven” - I simply disagree with you. My conclusions are rather data driven. We - as physicians and as healers – have the ethical and moral obligation of providing our patients with the best available treatment. 
The Declaration of Geneva of the World Medical Association binds us with the words, “The health of my patient will be my first consideration,” and the International Code of Medical Ethics declares that, “A physician shall act in the patient's best interest when providing medical care.” It is the duty of the physician to promote and safeguard the health, well-being and rights of patients. The Declaration of Geneva also states that “The primary purpose of medical research involving human subjects is to understand the causes, development and effects of diseases and improve preventive, diagnostic and therapeutic interventions (methods, procedures and treatments). Even the best proven interventions must be evaluated continually through research for their safety, effectiveness, efficiency and accessibility.\" The purpose of this study was to do exactly what the Declaration of Geneva intends us to do. Data provided in the manuscript point to the shortcomings of surgery by way of causing massive and semi-massive keloids. The manuscript provides an argument as to why this may be the case. All conclusions are driven by data. As for your comments about surgery that “This sort of response is well-documented in the literature they just don't happen to be references this author cites”, I would refer to the two references provided at the conclusion of your comments. Shin et al. reported “The recurrence rate after surgical excision of an ear keloid in the triamcinolone group was estimated as 15.4 percent. The recurrence rate in the radiation therapy group was estimated as 14.0 percent.” Tanaydin et al. reported “Keloid scars did not recur in 70.5% of treated patients”. By doing the math, the recurrence rate was 29.5% among patients who used custom-made pressure clips after surgery. There is no doubt that some patients develop massive and semi-massive keloids - the ones who fail to respond to the best surgical efforts - i.e. 
the 14-15% (one in six patients) reported by Shin and the 29.5% (one in three) reported by Tanaydin. I hope to see more publications about the incidence of massive and semi-massive ear keloids. As of this date, searching PubMed does not locate even one publication about the incidence of massive ear keloids. I do advocate cryotherapy as the primary treatment of all bulky ear keloids. There are several references that lend support to the usage of cryotherapy for treatment of ear keloids. Cryotherapy, as a treatment modality for keloids, is also mentioned in every textbook of dermatology and in every overview of keloids; however, hardly any dermatologist or plastic surgeon uses it. I wonder why? I know for a fact that there are no billing codes for application of cryotherapy for treating keloids. As for “not including post-otoplasty cases” – there were only 14 patients. I have written a separate manuscript about this cohort. As for figure 6 – the point was to show the less than ideal aesthetic outcome of ear keloid surgery. As for the paradigm shift to stop surgery – I think it can be done. Although surgery is a quick fix and results in immediate removal of ear keloids, it does cause long-term harm to many patients. To this date, we do not have a methodology to identify who will be the next patient at risk of developing a massive ear keloid after surgical removal of a small primary ear keloid. In my data set, the rate is close to 11%. To know the exact risks and outcomes, we need to establish a Keloid Surgery Registry, register each and every keloid patient who undergoes surgery, and follow them for several years. In the absence of such data, or data from a well-designed randomized trial, all we can do is inform our patients of the potential risks of keloid surgery. There are many circumstances – but most importantly when surgery is performed on a patient – in which we have the duty to obtain “informed consent” and not just a “permission to operate”. 
We also have the duty to advise our patients of the specific risks of the procedure. We do know that disclosure of the general risks that are associated with any surgical procedure is not adequate. We are obligated to disclose the risk of developing massive and semi-massive keloids to each and every patient who is to undergo keloid removal surgery. We are also obligated to discuss alternative procedures, or conservative nonsurgical approaches, with our patients. Through the informed consent process, we have the ethical and moral obligation of showing images of massive and semi-massive ear keloids to our patients, and informing them that with keloid surgery there is a risk of developing such life-changing complications. As for the cost, risks, rate of skin discoloration and other aspects of cryotherapy, only a well-designed randomized study comparing surgery to contact cryotherapy will be able to answer all these valid points. I hope that there will be enough interest to support and fund such a study. Finally, I am of the view that we all should respect data, disease biology and the process of informed consent. I only wish to improve the outcomes of our patients. I hope that through collaborations with physicians like yourself, we can join forces and tackle this very hard-to-treat disorder. Thank you again for all your comments. Michael H. Tirgan, MD"
}
]
},
{
"id": "19062",
"date": "09 Jan 2017",
"name": "Amy J. McMichael",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThis is a good accounting of keloids with many cases for discussion. The placement of new data in the discussion section is a bit off-putting for the reader. I would recommend placing all the charts in the Results section rather than having them in the Methods Section. Also, there needs to be more than just the grouping of keloids as presented. Basically, this is a list of keloids. There needs to be some correlation drawn or statistics to note associations rather than just descriptive stats. This is a great start, but just needs more to really give the reader information that is useful. I recommend not putting new data in the discussion and conclusion and moving this to the results.\n\nThe analysis used on the cases should be included in the methods section. The discussion should focus on the natural history of all the things that are discussed in the results sections.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-2517
|
https://f1000research.com/articles/5-2894/v1
|
21 Dec 16
|
{
"type": "Method Article",
"title": "An open and transparent process to select ELIXIR Node Services as implemented by ELIXIR-UK",
"authors": [
"John M. Hancock",
"Alf Game",
"Chris P. Ponting",
"Carole A. Goble"
],
"abstract": "ELIXIR is the European infrastructure established specifically for the sharing and sustainability of life science data. To provide up-to-date resources and services, ELIXIR needs to undergo a continuous process of refreshing the services provided by its national Nodes. Here we present the approach taken by ELIXIR-UK to address the advice by the ELIXIR Scientific Advisory Board that Nodes need to develop “mechanisms to ensure that each Node continues to be representative of the Bioinformatics efforts within the country”. ELIXIR-UK put in place an open and transparent process to identify potential ELIXIR resources within the UK during late 2015 and early to mid-2016. Areas of strategic strength were identified and Expressions of Interest in these priority areas were requested from the UK community. A set of criteria were established, in discussion with the ELIXIR Hub, and prospective ELIXIR-UK resources were assessed by an independent committee set up by the Node for this purpose. Of 19 resources considered, 14 were judged to be immediately ready to be included in the UK ELIXIR Node’s portfolio. A further five were placed on the Node’s roadmap for future consideration for inclusion. ELIXIR-UK expects to repeat this process regularly to ensure its portfolio continues to reflect its community’s strengths.",
"keywords": [
"ELIXIR",
"ELIXIR-UK",
"e-Infrastructure",
"ESFRI"
],
"content": "Introduction\n\nELIXIR, the European infrastructure for life science data1, is made up of individual Nodes, one for each of the organisation’s constituent members (20 as of 1st November 2016: Belgium, Czech Republic, Denmark, EMBL-EBI, Estonia, Finland, France, Germany, Ireland, Israel, Italy, Luxembourg, Netherlands, Norway, Portugal, Slovenia, Spain, Sweden, Switzerland and the UK), and a coordinating hub. The individual ELIXIR Nodes provide the services and resources that support the five pillars of ELIXIR (Compute, Tools, Data, Interoperability and Training infrastructures).\n\nELIXIR nodes need to be able to evolve their contributions to ELIXIR by bringing new services and resources. ELIXIR identifies two types of service: Node-funded services, which are funded nationally and are contributed to ELIXIR from a national Node; and Commissioned Services, which are funded by ELIXIR as a whole via the ELIXIR Hub. In some ELIXIR Nodes, Node-funded services receive funds through their national Nodes; in the case of the UK’s Node, ELIXIR-UK, resource funding is through direct grant funding to resources and services from the national funders. In ELIXIR terms, these are still labelled as “Node-funded”. The process described in the present article was set up to identify Node-funded services and resources for ELIXIR-UK. ELIXIR sets high standards for the services it provides. Consequently, nodes need to take full account of these requirements when selecting and proposing their services, which are ultimately judged for suitability by the ELIXIR Scientific Advisory Board (SAB) and Board of ELIXIR (see the online ELIXIR Handbook for more detail).\n\nELIXIR-UK was established in September 2013, and as its first contribution to ELIXIR took on a thematic focus, namely of coordinating training activity. More recently, it has sought to expand its remit. 
To address the SAB’s recommendation that Nodes put in place “mechanisms to ensure that each Node continues to be representative of the Bioinformatics efforts within the country”, ELIXIR-UK developed a process to choose new services and resources to add to its existing portfolio. Its aims in developing this process were to:\n\n• Reflect national strengths and priorities in bioinformatics\n\n• Engage its national community\n\n• Build a robust, transparent and open process that its community would regard as fair and could continue to be applied to allow the Node to develop over time.\n\n\nProcess overview\n\nAs illustrated in Figure 1, the process implemented by ELIXIR-UK went through seven key phases, which are expanded on in the following sections:\n\n1. Strategic prioritization\n\n2. Identifying possible candidate resources\n\n3. Setting up appropriate structures\n\n4. Establishing assessment criteria\n\n5. Engaging the community\n\n6. Assessing Expressions of Interest\n\n7. Finalising a new portfolio\n\nNumbers in red correspond to the seven phases of the process listed under the section “Process overview”.\n\n\nStrategic prioritization\n\nThe requirement to ensure that each Node continues to be representative of the Bioinformatics efforts within the country could be seen as open-ended, and thus could ultimately lead to an ill-focussed collection of resources and services. To avoid this, ELIXIR-UK identified a set of priority areas within which to focus submissions to the process. 
These were initially identified by discussions within the Node and were refined by discussion with the Node’s funding organisations (Biotechnology and Biological Sciences Research Council [BBSRC], Medical Research Council [MRC] and Natural Environment Research Council [NERC]) and with the Scientific Development Group (SDG), which was a community body set up by the Node that is tasked with identifying new node resources (see below).\n\nAs a consequence of these discussions, Expressions of Interest (EoIs) were invited in the following priority areas, identified as being of high strategic importance within the UK:\n\n• Human clinical and health ‘omics and related areas in health informatics\n\n• Agricultural ‘omics and related data resources\n\n• Image informatics (including atlases)\n\n• Structural bioinformatics\n\n• Technical infrastructure for interoperability and training including standards\n\n\nIdentifying possible candidates\n\nELIXIR-UK aimed to reconcile two potentially conflicting drivers in developing its expansion process. Firstly, it wanted to be as open to the UK bioinformatics community as it could. This is an ongoing challenge because a) ELIXIR has incomplete brand recognition within the UK community, and b) is not well-regarded by some, being seen either as a closed club or unproductive. Secondly, the Node wanted to ensure it received Expressions of Interest from potential services and resources that were demonstrably of high value to the international life sciences community. To address these requirements, the Node approached the recruitment of potential candidates in two ways. Firstly, it publicised its “Node Expansion” process well in advance using its web site, Twitter and word-of-mouth. Secondly, it sent targeted emails to potential candidate resources. 
These were identified using a variety of inputs:\n\n• Brainstorming by members of the existing node\n\n• Setting up a specific working group - on Agriculture-related data - for an area that was not well-represented in the current node\n\n• Additional suggestions from the SDG (see next section) and funders\n\n\nThe Scientific Development Group\n\nThe key body in the Node’s expansion process was the SDG. This was set up by the Node to evaluate EoIs to join the Node against a set of published criteria (see below). This group was also involved in refining those criteria and providing suggestions of resources to be invited to provide EoIs.\n\nThe membership of this group was based on suggestions from within the Node and from its funders. The group’s composition reflected the priority areas identified for the expansion, geographic spread, and the inclusion of at least one industry and at least one overseas representative. The Chair was chosen for his experience as a senior officer of a UK funding agency and knowledge of appropriate processes for activities of this kind. For the record we note that the group did not have an appropriate gender balance (it was 100% male). This is a defect we intend to remedy in future.\n\n\nAssessment criteria\n\nOver time ELIXIR has been evolving both its classification of resources and its criteria for selecting them. During the period of the UK Node’s expansion process these definitions and criteria continued to evolve. The assessment criteria used by ELIXIR-UK were developed through internal discussion and discussion with the SDG, and were also discussed informally with the leaders of Work Package 3 of the EXCELERATE programme (Jo McEntyre and Christine Durinx), as their criteria developed in parallel. 
The final set of criteria, which were provided to applicants as an openly shared Google document, were:\n\n• Alignment with the five ELIXIR infrastructure themes (data, tools, compute, interoperability, training)\n\n• Strong complementarity to the 2014-18 ELIXIR programme\n\n• Complementarity to ELIXIR-UK strategic themes\n\n• Potential for cross-Node collaborations\n\n• Provision of comparable impact to existing ELIXIR resources from other Nodes already accepted by the ELIXIR SAB\n\n• Resource contribution to wider EU infrastructures and integration\n\n• Ability to interoperate with other ELIXIR resources\n\n• Evidence of community outreach and adoption\n\n• Leadership in data stewardship within a community\n\n• Evidence of long-term sustainability\n\nTo facilitate applicants demonstrating that their resources fulfilled these criteria, an Expression of Interest template form was provided, also via Google documents.\n\nThe criteria developed by EXCELERATE Work Package 3 have subsequently been finalised and form the basis of the ELIXIR process for selection of Core Data Resources2.\n\n\nEngaging the community\n\nAs outlined, it was important to ensure community buy-in to this process (in order to ensure that the Node was able to engage sufficient high quality resources) and at the same time it was important to be sure that community members who might be interested in participating in ELIXIR-UK were aware of what was required and the expectations that would be placed on them as ELIXIR-UK Node resources. Formal community engagement took place in two phases: a webinar, led by the Head of Node (CAG) and Node Coordinator (JMH), in February 2016 and a workshop, hosted at the Wellcome Trust building in London, in March 2016. The aim of the webinar was to introduce ELIXIR and ELIXIR-UK and the rationale behind the node expansion process. 
The aim of the workshop was to introduce and discuss the assessment criteria in detail, so that potential applicants could be clear as to what was required. The presentation given at the workshop is available via Slideshare. At this stage a deadline was set for the receipt of Expressions of Interest by the Node. It is worth noting that the deadlines for the process were tight: EoIs were requested by the end of March 2016 and the assessment meeting took place at the end of April with some iterations taking place in May. We were fortunate in being able to run such a tight schedule due to a) clear and lightweight requirements for the EoIs; b) what we believe to be clear and effective communications; and c) motivated applicants and SDG members.\n\n\nAssessment of Expressions of Interest\n\nEoIs were assessed by the SDG against the published set of criteria. To facilitate assessment of EoIs, three group members were allocated to each EoI (18 were submitted). The three members were asked to score EoIs from 1 to 4: 1 = ready for inclusion in ELIXIR (“infrastructure ready”); 2 = further discussion or clarification needed; 3 = not ready, but suitable enough to be placed on a roadmap for future inclusion; 4 = not suitable. The assessments for each EoI were introduced by one member of the group leading on to an open discussion. Representatives of the Node funders and the ELIXIR-UK executive observed the meeting to give advice on strategic alignment. EoIs were given a consensus final score using the same scale as before, with a score of 2 in this case representing the need for further clarification of issues raised by the group. Resources given a 2 rating were asked for further information, which led to their final score being revised upwards or downwards in a subsequent iteration.\n\n\nResults of the assessment\n\nThe outcomes of the assessment are summarised in Table 1. 
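The 1–4 triage described above (three assessors per EoI, a consensus score, and iteration on resources rated 2) can be sketched in code. This is a minimal illustration only: the EoI names and scores are hypothetical, and the median is used here as a stand-in for the group's open-discussion consensus.

```python
from statistics import median

# Score scale from the text:
# 1 = infrastructure ready, 2 = needs clarification,
# 3 = roadmap for future inclusion, 4 = not suitable.
def triage(scores_by_eoi):
    """Map each EoI's assessor scores to a consensus category."""
    return {eoi: int(median(scores)) for eoi, scores in scores_by_eoi.items()}

# Hypothetical example: three assessor scores per EoI.
example = {"EoI-A": [1, 1, 2], "EoI-B": [2, 2, 3], "EoI-C": [4, 3, 4]}
print(triage(example))  # → {'EoI-A': 1, 'EoI-B': 2, 'EoI-C': 4}
```

EoIs landing on 2 here would, as in the actual process, be asked for further information before a final score is assigned.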
Nine EoIs were considered to be infrastructure ready (score of 1) on the first pass of assessment, and a further five were asked for more detail on their proposal (score of 2).\n\nThe table gives numbers of proposals classified as 1 (ready for inclusion in ELIXIR (“infrastructure ready”); 2 (further discussion or clarification needed); 3 (not ready, but suitable enough to be placed on a roadmap for future inclusion); 4 (not suitable).\n\n*In this case the group were unclear whether the proposed resource could be included in the Node’s offering. This case was put forward to the ELIXIR SAB for further input; the SAB recommended it be placed in Category 1.\n\nAn iteration of discussions with resource scientists allowed questions raised by the SDG to be considered further. Where these were answered satisfactorily, resources were moved up to infrastructure-ready status. Otherwise they were put on the roadmap or, in one case, referred to the ELIXIR SAB for further comment (in this latter case, SAB guidance subsequently resulted in it being accepted as infrastructure ready).\n\nAfter ratification by the ELIXIR-UK executive and notification to the ELIXIR Hub, highly rated resources were included directly into the Node’s portfolio and were included in the Node Application presented to the ELIXIR SAB in June 2016 and the ELIXIR Board in November 2016. Others were placed on the Node’s roadmap for possible future inclusion.\n\nThe services and resources selected as ready for immediate inclusion are listed in Table 2.\n\nResources are classified by strategic themes within ELIXIR-UK.\n\n*References for the databases/tools have been added where available.\n\n\nFuture activities and conclusions\n\nWe believe that the process outlined here was open, transparent and fair. We note that the “success rate” of the process was high. No resources were rejected outright and more than 70% were promoted immediately to the Node’s portfolio. 
This does not reflect a lax process, but is likely to have had a number of contributing factors, including:\n\n• The fact that this was the first call of this kind meant that the Node could call on a number of outstanding, internationally-acknowledged resources. The resources placed on the roadmap were generally also well regarded, but usually in an early phase of their development. Our expectation is that most of these will be recognised as Node-funded resources in future.\n\n• There was a clear explanation and open presentation of the high standards expected of successful resources. Therefore, it is likely that only resources that considered they had a realistic chance of success after the webinar and workshop put their names forward. Consequently, we did not receive any truly speculative proposals.\n\nAnother aspect of the process we outline here is the short time period over which it was carried out. In particular, resources were only given four weeks to submit EoIs. A number of features of the process facilitated this: clear timelines, clear guidance as to what was required, the availability of a template for EoIs that helped proposers to compile their EoIs, and lightweight requirements for completing EoIs, which were nevertheless sufficient to allow the SDG to carry out its work effectively. Engagement at a senior level by both the Node and proposers was also important. It was also important to organise meetings, especially of the SDG, sufficiently ahead of time to allow members to both assess the EoIs and attend the meetings, either in person or remotely.\n\nTo maintain and continue to improve the Node’s alignment with UK research strengths, it plans to hold regular refresh exercises to introduce new resources into the Node. Plans for how this will be done are currently under development. To pursue this process we expect that we will need to develop community engagement in the specific priority areas, so that potential proposers are primed.",
"appendix": "Author contributions\n\n\n\nJMH developed the process, wrote the manuscript; AG chaired the Scientific Development Group; CPP contributed to the early development of the process; CAG led engagement activities and oversaw the process.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nELIXIR-UK is funded by the Biotechnology and Biological Sciences Research Council, the Medical Research Council and the Natural Environment Research Council (grant numbers BB/L005069/1 and BB/P017193/1).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nThe Scientific Development Group consisted of:\n\nAlf Game (Independent; Chair)\n\nMark Bailey (Centre for Ecology and Hydrology)\n\nWin Hide (University of Sheffield)\n\nSimon Hubbard (University of Manchester)\n\nNick Luscombe (UCL/Francis Crick Institute)\n\nSean May (University of Nottingham)\n\nAndrew Morris (University of Edinburgh)\n\nChris Rawlings (Rothamsted Research)\n\nDenis Shields (University College Dublin)\n\nWill Spooner (Eagle Genomics)\n\nMike Sternberg (Imperial College)\n\nDavid Westhead (University of Leeds)\n\nThe authors thank all participants in the ELIXIR-UK node expansion process for their commitment, enthusiasm and patience.\n\n\nReferences\n\nCrosswell LC, Thornton JM: ELIXIR: a distributed infrastructure for European biological data. Trends Biotechnol. 2012; 30(5): 241–2. PubMed Abstract | Publisher Full Text\n\nDurinx C, McEntyre J, Appel R, et al.: Identifying ELIXIR Core Data Resources [version 1; referees: 2 approved] . F1000Res. 2016; 5: pii: ELIXIR-2422. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAndersson L, Archibald AL, Bottema CD, et al.: Coordinated international action to accelerate genome-to-phenome with FAANG, the Functional Annotation of Animal Genomes project. Genome Biol. 2015; 16(1): 57. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nUrban M, Cuzick A, Rutherford K, et al.: PHI-base: a new interface and further additions for the multi-species pathogen-host interactions database. Nucleic Acids Res. 2016; pii: gkw1089. PubMed Abstract | Publisher Full Text\n\nSouthan C, Sharman JL, Benson HE, et al.: The IUPHAR/BPS Guide to PHARMACOLOGY in 2016: towards curated quantitative interactions between 1300 protein targets and 6000 ligands. Nucleic Acids Res. 2016; 44(D1): D1054–68. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMcQuilton P, Gonzalez-Beltran A, Rocca-Serra P, et al.: BioSharing: curated and crowd-sourced metadata standards, databases and data policies in the life sciences. Database (Oxford). 2016; 2016: pii: baw075. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLyne R, Sullivan J, Butano D, et al.: Cross-organism analysis using InterMine. Genesis. 2015; 53(8): 547–60. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSansone SA, Rocca-Serra P, Field D, et al.: Toward interoperable bioscience data. Nat Genet. 2012; 44(2): 121–6. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDawson NL, Lewis TE, Das S, et al.: CATH: an expanded resource to predict protein function through structure and sequence. Nucleic Acids Res. 2016; pii: gkw1098. PubMed Abstract | Publisher Full Text\n\nWaterhouse AM, Procter JB, Martin DM, et al.: Jalview Version 2--a multiple sequence alignment editor and analysis workbench. Bioinformatics. 2009; 25(9): 1189–91. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKelley LA, Mezulis S, Yates CM, et al.: The Phyre2 web portal for protein modeling, prediction and analysis. Nat Protoc. 2015; 10(6): 845–58. PubMed Abstract | Publisher Full Text"
}
|
[
{
"id": "18697",
"date": "30 Dec 2016",
"name": "Christine Durinx",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe article is clear and well written. The figure and tables support the content.\nSome suggestions, mainly to improve clarity:\nIntroduction\n- It would be helpful to mention that the ELIXIR Services can be data resources, tools, and services.\n- It is mentioned that the selection process aims to reflect national strengths and priorities in bioinformatics and engage its national community. For readers who are not familiar with the bioinformatics community in the UK, it would be helpful to include a short, general description of the local bioinformatics landscape.\nProcess overview\n- Is \"3. Setting up appropriate structures\" referring to the Scientific Development Group or does it include other structures?\n- Which are the Working Groups that are mentioned in Figure 1 (you mention the Agriculture-related data WG)?\nStrategic Prioritization\n- When you refer to \"ELIXIR-UK\", or \"discussions within the Node\", is this the same as \"ELIXIR-UK Executive\" in Figure 1 and the Executive Committee (http://www.elixir-uk.org/about-the-node)?\nAssessment criteria\n- There is a mix here between the criteria for the ELIXIR Services (brought forward by the ELIXIR Nodes through the Service Delivery Plans) and the indicators which have been developed for the ELIXIR Core Data Resources. The latter focus on databases only and therefore won't be very helpful for training (for example). 
It would be good to make this explicit to avoid any potential confusion.\nResults of the assessment\n- The list of assessment criteria is long and broad in scope. Is there any way of summarizing on which criteria the UK services are doing particularly well and on which criteria there can be improvement (or that were reasons for not including the services)?\n- Table 2: the services that are listed, seem to be UK-only. Certain are however broader collaborations (e.g. Ensembl, TeSS). Could this be made clear?\nFuture activities and conclusions\n- From the text, it seems that ELIXIR-UK is focusing on the identification of the ELIXIR UK Services. Is the UK node offering specific support (or other) to its services?",
"responses": [
{
"c_id": "2590",
"date": "27 Mar 2017",
"name": "John Hancock",
"role": "Author Response",
"response": "In response to this reviewer:\n- As also requested by another reviewer, we now mention that ELIXIR Services can be data resources, tools, and services\n- Included a brief overview of the UK bioinformatics landscape, including its funding landscape\n- Clarified the meaning of point 3: \"Setting up appropriate structures\"; in relation to working groups we only established the Agriscience working group in this round of the process although we might establish others in future\n- Clarified (as also raised by another reviewer) that internal discussions within the Node primarily involved the Node Executive\n- Clarified the relationship of the assessment criteria to training resources\n- Made a brief comment on why some resources were not accepted immediately as Node resources\n- Commented on how we treated resources that were international collaborations\n- Commented on how we propose to support Node resources in future\nWe hope these comments are helpful."
}
]
},
{
"id": "18698",
"date": "09 Jan 2017",
"name": "Alfonso Valencia",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThe report describes the process of selection of the UK-ELIXIR xxxx to be presented to the ELIXIR SAB. The report is very informative and being the first node that describes in detail their national process, it has the potential to be very useful for nodes in other countries.\nI have a number of suggestions of additional information that is mentioned in the text but not explicitly included.\n\nComposition of the Scientific Development Group (SDG)\nA figure with the time line of the process that could be integrated with the Fig. 1.\nThe template used for the EoIs.\nThe EoIs submitted by the selected resources (if possible)\n\nI also have a few other questions that may help to clarify specific aspects.\nThree training resources were finally selected but in the description of the UK-ELIXIR strategic items training is not described separately ('Technical infrastructure for interoperability and training including standards). If possible, it may help to clarify what was understood by training in the strategic items and how it is different, or not, of the technical infrastructure.\nIf it would be possible to give some additional explanation to some of the ' final set of criteria’ that may really help others. One possible way might be providing examples of answers provided by some of the applications.\nIt is not very clear how the 'ELIXIR criteria for the selection of Core Data Resources' was incorporated in the process. 
Given the importance of the ELIXIR guidelines for future similar processes this point could be quite relevant.\nFinally, even if I realise that this might be considered outside the scope of this paper, what will be really interesting is to include a short explanation of how the selected resources fit each one of the selection criteria, at least at some level of detail.",
"responses": [
{
"c_id": "2589",
"date": "27 Mar 2017",
"name": "John Hancock",
"role": "Author Response",
"response": "In response to this review:\n- We note that the composition of the Scientific Development Group is listed in the Acknowledgements\n- Revised Figure 1 to put it in the form of a timeline, as requested\n- As noted in our response to the first reviewer, we have provided the EoI template as supplementary material\n- We cannot make the EoIs themselves available as they were provided to us on a confidential basis\n- We have included a description of how training resources were dealt with in the process. As the reviewer notes, some criteria were less applicable to training resources but others remained applicable\n- We have included an additional table which summarises how the criteria were interpreted and applied by the SDG during assessment of resources\n- Expanded a little on how the 'ELIXIR criteria for the selection of Core Data Resources' were incorporated in the process although this was a relatively informal process\nWe hope the reviewer will find these changes acceptable."
}
]
},
{
"id": "18696",
"date": "23 Jan 2017",
"name": "Janet Kelso",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThis manuscript describes the process whereby the UK node of ELIXIR identifies and prioritises services for inclusion in the node.\nIt is an important contribution as it is the first time a strategy for service identification and prioritization has been documented for an ELIXIR node, and it may be able to be generalized for use by other nodes of the ELIXIR network. It would be good if the authors could comment on the extent to which this approach is generalizable to other nodes, and on whether there are aspects that are unique to the UK node.\n\nThe authors should clarify early on that the ‘services’ that ELIXIR nodes may offer may include any combination of software, data resources and training. This is implied, but may not be completely clear to readers unfamiliar with ELIXIR.\n\nThe authors focus primarily on a process to include new services to the UK ELIXIR node. It would be good to add a brief section that discusses long-term evaluation of existing services in order to assess ongoing quality and relevance, and to plan for retirement of services as required.\n\nThere are places where it is unclear who represents “the Node” eg: “These were initially identified by discussions within the Node” and “Brainstorming by members of the existing node”. 
It may be useful to explain who is responsible for the Node and decisions taken by “the Node”, and therefore for the strategic prioritization and evaluation.\n\nCould the template for EoI be made available for other nodes wishing to follow a similar process?\n\nIt would be useful to expand briefly on what “Work Package 3 of the EXCELERATE programme,” is, so that a reader unfamiliar with ELIXIR can understand the relevance.\n\nThe authors conclude that the process was transparent and fair. Has there been any community-feedback on the process? It would be interesting to know how well-accepted the process has been, and whether there are any suggestions for improvement.\n\nIt would be useful to include information about the matching of the EoIs received with the strategic priorities that were identified. Were applications received in all priority areas? Are there areas that are not yet represented? How do new services that have been included relate to those that were already existing within the node, and also to services provided by other nodes (and was this a consideration in the evaluation process?)\n\nIs there any plan to support service proposals that were assigned to the roadmap for future inclusion?",
"responses": [
{
"c_id": "2588",
"date": "27 Mar 2017",
"name": "John Hancock",
"role": "Author Response",
"response": "In response to this review we have:\n- Added a section on the extent to which this approach is generalizable to other nodes, and on whether there are aspects that are unique to the UK node\n- Clarified that the ‘services’ that ELIXIR nodes may offer may include any combination of software, data resources and training\n- Commented briefly on the long-term evaluation of existing services in order to assess ongoing quality and relevance, and to plan for retirement of services as required. This will rely on regular assessments by our Scientific Development Group and SAB\n- Clarified who is responsible for the Node and decisions taken by “the Node” - namely the Node Executive (now renamed as the Management Committee)\n- Made the template for Expressions of Interest available as supplementary material\n- Expanded briefly on the role and significance of Work Package 3 of the EXCELERATE programme\n- Added some comment on community-feedback on the process - we carried out a survey that was supportive although it did suggest two areas for improvement in future: wider advertising and better feedback to proposers\n- Included an overview of the matching of the EoIs received with the strategic priorities that were identified\n- Added a discussion of how we intend to support service proposals that were assigned to the roadmap for future inclusion"
}
]
}
] | 1
|
https://f1000research.com/articles/5-2894
|
https://f1000research.com/articles/6-194/v1
|
28 Feb 17
|
{
"type": "Research Note",
"title": "Bibliometric analysis of Oropouche research: impact on the surveillance of emerging arboviruses in Latin America",
"authors": [
"Carlos Culquichicón",
"Jaime A. Cardona-Ospina",
"Andrés M. Patiño-Barbosa",
"Alfonso J. Rodriguez-Morales"
],
"abstract": "Given the emergence and reemergence of viral diseases, particularly in Latin America, we would like to provide an analysis of the patterns of research and publication on Oropouche virus (OROV). We also discuss the implications of recent epidemics in certain areas of South America, and how more clinical and epidemiological information regarding OROV is urgently needed.",
"keywords": [
"Oropouche",
"arbovirus",
"epidemiology",
"public health",
"travelers",
"Latin America"
],
"content": "Introduction\n\nThe Oropouche virus (OROV) is an emerging arbovirus that threatens the Amazon region of Brazil, Peru, and Venezuela1. The coexistence of this pathogen with other long-term circulating arboviruses, such as dengue virus (DENV), West Nile virus (WNV), Venezuelan Equine Encephalitis (VEEV) and yellow fever virus (YFV), as well as emerging arboviruses such as chikungunya (CHIKV), Zika (ZIKV) and Mayaro virus (MAYV), may hinder clinical diagnosis and successful vector-control strategies2. Research is essential to be able to manage this complex scenario. As has been highlighted by Ballabeni and Boggio3, bibliometric analyses of publications on emerging and reemerging viral diseases are important as they may lead to insights on how the global scientific and health communities react to outbreaks. We aimed to conduct a bibliometric analysis of OROV research and the impact on the surveillance of emerging and re-emerging arboviruses in Latin America.\n\n\nMethods\n\nA bibliometric study of OROV scientific production was conducted, with a focus on Latin America. We searched three major regional and international databases (all of them in English): Science Citation Index Expanded (SCI-E), Scopus and Medline (via GoPubMed®).\n\nThe search strategy used the following key words (MeSH, Medical Subject Headings): “Oropouche” AND “Latin America”, “Oropouche” AND “Argentina”, “Oropouche” AND “Colombia”, and likewise for the rest of the Latin American countries. In addition, “OROV” was used instead of Oropouche for additional searches. All study types were included (original articles, reviews, case reports, editorials) and were categorized by year, international cooperation, city and institution, journal and authors with major contribution. 
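The country-by-country query expansion described above is mechanical and can be scripted. A minimal sketch (the country list here is deliberately partial and illustrative, not the full list used in the study):

```python
# Partial, illustrative country list; the study covered all Latin American countries.
COUNTRIES = ["Argentina", "Brazil", "Colombia", "Peru", "Venezuela"]

def build_queries(terms=("Oropouche", "OROV")):
    """Generate the boolean queries described in the Methods:
    each term AND "Latin America", plus each term AND each country."""
    queries = []
    for term in terms:
        queries.append(f'"{term}" AND "Latin America"')
        for country in COUNTRIES:
            queries.append(f'"{term}" AND "{country}"')
    return queries

print(build_queries()[0])  # → "Oropouche" AND "Latin America"
```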
Searches were done from May 30 to June 30, 2015.\n\nData were tabulated and analyzed in Excel 2007® for Windows 7® (Dataset 14), summarizing quantitative variables with means and interquartile ranges (IQRs), and qualitative variables with proportions.\n\n\nResults\n\nA total of 260 related records were retrieved in our search; of these, 97 manuscripts were recovered in Scopus (55% from Brazil, 28% from the US, and 11% from Peru); 83 articles were recovered from Medline (43% from Brazil, 18% from the US, and 6% from Peru) and 80 articles were recovered from SCI-E (61% from Brazil, 35% from the US, and 15% from Peru) (Table 1). As observed in Medline, publications on OROV never reached more than 3 articles per year (Figure 1). Analyzing this database, it can be observed that Brazil has the most productive and cooperative research groups in Latin America (Figure 1).\n\nFor Scopus, the annual average number of articles published up to 2014 was 5 (IQR: 1–17) (Figure 2). In June 2015, only two articles had been published that year. Nevertheless, after 1996, although not uniform, there was an increasing trend in the number of articles published on OROV per year, reaching 9 in 2011 (Figure 2). In Scopus, 19 countries contributed to the publication of at least 1 paper during the study period (Figure 3). For SCI-E, the annual average number of articles published up to 2014 was 6.2 (IQR: 1–20), with 16 countries contributing to the publication of at least 1 paper during the study period (Figure 4).\n\nData taken from Scopus.\n\nData taken from Scopus.\n\nData taken from SCI-E.\n\n“Universidade de Sao Paulo” in Sao Paulo, Brazil, was the institution with the most prolific research contribution, and “Figueiredo, L.T.M” was the author with the longest record in Oropouche research, with 12 articles (Figure 1 and Figure 2). 
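The country-level H-indexes quoted in the next paragraph follow the standard definition: the largest h such that h publications each have at least h citations. A minimal sketch of the computation, with hypothetical citation counts:

```python
def h_index(citations):
    """Return the h-index of a list of per-paper citation counts:
    the largest h such that h papers each have at least h citations."""
    h = 0
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # → 4 (four papers with >= 4 citations each)
```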
The greatest H-indexes for Oropouche issues came from Brazil (H-index=12, 431 citations), the United States of America (H-index=10, 339 citations), Peru (H-index=9, 234 citations), United Kingdom (H-index=6, 144 citations), Canada (H-index=5, 155 citations) and Trinidad and Tobago (H-index=4, 92 citations).\n\n\nDiscussion\n\nOROV outbreaks increase when the rainy season starts (January to June) in endemic areas, where the population density of Culicoides paraensis is high1. In fact, the OROV dispersion routes and its genetic diversity5 impacted the growth of scientific publications on this topic, as well as international collaboration. On the 2nd of May 2016, the Ministry of Health of Peru reported 57 cases of OROV fever6. Most cases originated in towns located in the northern part of the Cusco Region, which is situated in the Amazon rainforest. 79% of cases were detected in January, with only 7% and 14% of the cases being identified in February and March, respectively. There were no fatalities and all patients recovered following symptomatic treatment. In February 2016, a field mission to the Madre de Dios Region conducted jointly by the Ministry of Health of Peru and PAHO/WHO revealed a mixed outbreak of dengue (DENV-2) and OROV. Although Madre de Dios had already experienced an outbreak of OROV fever in 1994, the outbreak ongoing at the time of the mission in February was of a higher magnitude, with 120 confirmed cases6. Cases have also been reported in other nearby countries such as Panama, Trinidad and Tobago and Brazil, and very recently in Venezuela (2016)1,7. This highlights the potential for expansion of OROV and other related reassortant viruses to other countries in the region, such as Colombia, Venezuela and Ecuador, amongst others in South and Central America.\n\nDespite this epidemiological situation, research on OROV is far below the level of research on other emerging arboviruses in Latin America such as CHIKV (6,344 articles recovered) or ZIKV8,9. 
This lack of published studies does not allow evidence-based decision-making on public health policies. More clinical and epidemiological information regarding OROV is urgently needed, especially in highly vulnerable areas where other arboviruses (CHIKV, ZIKV, DENV) are circulating and vector and climate conditions are suitable for transmission10–13. Research on OROV deserves more incentives among institutions, so that specific laboratory tests can be designed and more knowledge on this emerging arbovirus can be gathered properly2,10–13. Currently, differential diagnosis of these arboviruses (CHIKV, DENV, ZIKV, MAYV) poses a significant challenge10, especially in the scenario of co-circulation and/or syndemics with emerging and circulating arboviruses, or even in the scenario of co-infections10–13.\n\nIn conclusion, Brazil is leading the initiative on OROV research. Beyond this, international research networks should be expanded to gain a full understanding of this arboviral disease and explore its potential expansion and impact. To do this, the epidemic dispersion, transmission cycle, molecular epidemiology, pathogenesis, and clinical features of OROV need to be studied.\n\n\nData availability\n\nDataset 1: Raw data obtained from bibliographical databases (Medline, Scopus and SCI-E).\n\nDOI: 10.5256/f1000research.10936.d1529494",
"appendix": "Author contributions\n\n\n\nStudy design: AJRM. Data collection: CC, JACO, AMPB. Data analysis: AJRM, JACO. All authors were involved in the writing and read the final version submitted.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis study was partially funded by the Universidad Tecnologica de Pereira (UTP), Pereira, Risaralda, Colombia. Presentation of this study at the IV Latin American Congress of Travel Medicine (SLAMVI), Buenos Aires, Argentina, October 6–7, 2016, was funded by CTO Colombia, UTP and the Latin American Society for Travel Medicine (SLAMVI).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgments\n\nThis study was previously selected for oral presentation and presented in part at the IV Latin American Congress of Travel Medicine (SLAMVI) in Buenos Aires, Argentina, 6th–7th October, 2016.\n\n\nReferences\n\nda Rosa JF, de Souza WM, de Paula Pinheiro F, et al.: Oropouche Virus: Clinical, Epidemiological, and Molecular Aspects of a Neglected Orthobunyavirus. Am J Trop Med Hyg. 2017; pii: 16-0672. PubMed Abstract | Publisher Full Text\n\nRodríguez-Morales AJ, Paniz-Mondolfi AE, Villamil-Gómez WE, et al.: Mayaro, Oropouche and Venezuelan Equine Encephalitis viruses: Following in the footsteps of Zika? Travel Med Infect Dis. 2016; pii: S1477-8939(16)30167-3. PubMed Abstract | Publisher Full Text\n\nBallabeni A, Boggio A: Publications in PubMed on Ebola and the 2014 outbreak [version 2; referees: 2 approved]. Version 2. F1000Res. 2015; 4: 68. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCulquichicón C, Cardona-Ospina JA, Patiño-Barbosa AM, et al.: Dataset 1 in: Bibliometric analysis of Oropouche research: impact on the surveillance of emerging arboviruses in Latin America. F1000Research. 2017. 
Data Source\n\nVasconcelos HB, Nunes MR, Casseb LM, et al.: Molecular Epidemiology of Oropouche Virus, Brazil. Emerg Infect Dis. 2011; 17(5): 800–6. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWHO: Oropouche virus disease – Peru. Disease Outbreak News. 2016; Date accessed: October 1, 2016. Reference Source\n\nNavarro JC, Giambalvo D, Hernandez R, et al.: Isolation of Madre de Dios Virus (Orthobunyavirus; Bunyaviridae), an Oropouche Virus Species Reassortant, from a Monkey in Venezuela. Am J Trop Med Hyg. 2016; 95(2): 328–38. PubMed Abstract | Publisher Full Text | Free Full Text\n\nVera-Polania F, Muñoz-Urbano M, Bañol-Giraldo AM, et al.: Bibliometric assessment of scientific production of literature on chikungunya. J Infect Public Health. 2015; 8(4): 386–8. PubMed Abstract | Publisher Full Text\n\nMartinez-Pulgarin DF, Acevedo-Mendoza WF, Cardona-Ospina JA, et al.: A bibliometric analysis of global Zika research. Travel Med Infect Dis. 2016; 14(1): 55–7. PubMed Abstract | Publisher Full Text\n\nPaniz-Mondolfi AE, Rodriguez-Morales AJ, Blohm G, et al.: ChikDenMaZika Syndrome: the challenge of diagnosing arboviral infections in the midst of concurrent epidemics. Ann Clin Microbiol Antimicrob. 2016; 15(1): 42. PubMed Abstract | Publisher Full Text | Free Full Text\n\nVillamil-Gómez WE, Rodríguez-Morales AJ, Uribe-García AM, et al.: Zika, dengue, and chikungunya co-infection in a pregnant woman from Colombia. Int J Infect Dis. 2016; 51: 135–138. PubMed Abstract | Publisher Full Text\n\nPaniz-Mondolfi AE, Villamil-Gómez WE, Rodríguez-Morales AJ: Usutu virus infection in Latin America: A new emerging threat. Travel Med Infect Dis. 2016; 14(6): 641–643. PubMed Abstract | Publisher Full Text\n\nRodriguez-Morales AJ, Villamil-Gómez WE, Franco-Paredes C: The arboviral burden of disease caused by co-circulation and co-infection of dengue, chikungunya and Zika in the Americas. Travel Med Infect Dis. 2016; 14(3): 177–9. PubMed Abstract | Publisher Full Text"
}
|
[
{
"id": "20613",
"date": "02 Mar 2017",
"name": "Kateryna Kon",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThe study is devoted to the bibliometric analysis of research published on the emerging pathogen Oropouche virus. The title of the article is totally appropriate, and the abstract provides an adequate summary of the article. There is a comprehensive explanation of the study design, methods and analysis used in the study. The conclusions are well balanced and justified on the basis of the results. All information sufficient for replication of the results described in the study is provided.",
"responses": [
{
"c_id": "2528",
"date": "03 Mar 2017",
"name": "Alfonso Rodriguez-Morales",
"role": "Author Response",
"response": "Dear Dr. Kon, Thank you very much for your comments regarding our Research Note on the bibliometrics of Oropouche virus."
}
]
},
{
"id": "20599",
"date": "15 Mar 2017",
"name": "Luis Cuauhtémoc Haro-García",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThe title of the manuscript seems to me appropriate, and the summary is clear enough as to the purpose of the study. The design is adequate and sufficiently explained. The discussion chapter is balanced in such a way that it will be able to draw the attention of the scientific community to this emerging arbovirus in the region of Latin America.\nFurther Comments:\n- If some of the contributions from non-Latin American countries (the United States, Canada, Australia, Norway, Thailand and others) were carried out in a Latin American country or in their own country, please state the reason for including them in the bibliometric analysis.\n- Although the bibliometric analysis that was performed apparently fulfills the objective of the study (albeit including results from non-Latin American countries), it would have been interesting to know whether the scientific production of nations in other regions of the world shares the concern that the emergence and re-emergence of this arbovirus should be taken into account more.\n- It would be appropriate for the authors to discuss whether Brazil is probably the main contributor on this issue because its budget for science is one of the highest in Latin America, and not because it suffers from this problem in a greater proportion, or whether there are other reasons.\n- It would have been desirable to consider some regional bibliographic databases such as LILACS and IMBIOMED, and not just English-language ones, for a better bibliometric analysis, or to indicate why they were not taken into account.\n\nI have read this submission and believe that I have the level of expertise to indicate that the study is of a sufficient scientific standard",
"responses": [
{
"c_id": "2563",
"date": "16 Mar 2017",
"name": "Alfonso Rodriguez-Morales",
"role": "Author Response",
"response": "Dear Dr. Haro-García, Thank you for your comments. The current bibliometric analysis was not restricted to Latin American countries; although this arbovirus emerged in the region, there is interest from international groups outside Latin America in cooperating on OROV research. Moreover, Latin America can be the source of imported cases in North America and Europe, as has been happening not just with arboviral diseases but especially with Chagas disease. Previous bibliometric analyses of such situations have likewise found research groups outside Latin America contributing to these diseases (Delgado-Osorio N, Vera-Polanía F, López-Isaza AF, Martínez-Pulgarin DF, Murillo-Abadia J, Muñoz-Urbano M, Cardona-Ospina JA, Bello R, Lagos-Grisales GJ, Villegas S, Rodríguez-Morales AJ. Bibliometric assessment of the contributions of literature on Chagas disease in Latin America and the Caribbean. Recent Pat Antiinfect Drug Discov 2014 Sep-Dec; 9(3):202-208). Countries such as the USA, Canada and Spain, among others, would therefore be concerned about the potential impact of the spread of this arbovirus outside Latin America. Certainly, we agree that Brazil is probably the main contributor on this issue because its budget for science is one of the highest in Latin America, and not necessarily because it suffers from this problem in a greater proportion. Nevertheless, OROV infection is a differential diagnosis in the Amazonas area of Brazil. Finally, regarding LILACS and IMBIOMED, the number of articles found in them is very limited: only 35 in LILACS and none in IMBIOMED. All these comments will be considered in the revised version (version 2) of this manuscript, to be uploaded very soon."
}
]
}
] | 1
|
https://f1000research.com/articles/6-194
|
https://f1000research.com/articles/6-289/v1
|
17 Mar 17
|
{
"type": "Opinion Article",
"title": "Reframing the science and policy of nicotine, illegal drugs and alcohol – conclusions of the ALICE RAP Project",
"authors": [
"Peter Anderson",
"Virginia Berridge",
"Patricia Conrod",
"Robert Dudley",
"Matilda Hellman",
"Dirk Lachenmeier",
"Anne Lingford-Hughes",
"David Miller",
"Jürgen Rehm",
"Robin Room",
"Laura Schmidt",
"Roger Sullivan",
"Tamyko Ysa",
"Antoni Gual"
],
"abstract": "In 2013, illegal drug use was responsible for 1.8% of years of life lost in the European Union, alcohol was responsible for 8.2% and tobacco for 18.2%, imposing economic burdens in excess of 2.5% of GDP. No single European country has optimal governance structures for reducing the harm done by nicotine, illegal drugs and alcohol, and existing ones are poorly designed, fragmented, and sometimes cause harm. Reporting the main science and policy conclusions of a transdisciplinary five-year analysis of the place of addictions in Europe, researchers from 67 scientific institutions addressed these problems by reframing an understanding of addictions. A new paradigm needs to account for evolutionary evidence which suggests that humans are biologically predisposed to seek out drugs, and that, today, individuals face availability of high drug doses, consequently increasing the risk of harm. New definitions need to acknowledge that the defining element of addictive drugs is ‘heavy use over time’, a concept that could replace the diagnostic artefact captured by the clinical term ‘substance use disorder’, thus opening the door for new substances to be considered such as sugar. Tools of quantitative risk assessment that recognize drugs as toxins could be further deployed to assess regulatory approaches to reducing harm. Re-designed governance of drugs requires embedding policy within a comprehensive societal well-being frame that encompasses a range of domains of well-being, including quality of life, material living conditions and sustainability over time; such a frame adds arguments to the inappropriateness of policies that criminalize individuals for using drugs and that continue to categorize certain drugs as illegal. 
A health footprint, modelled on the carbon footprint, and using quantitative measures such as years of life lost due to death or disability, could serve as the accountability tool that apportions responsibility for who and what causes drug-related harm.",
"keywords": [
"nicotine",
"illegal drugs",
"alcohol",
"evolutionary biology",
"governance",
"margins of exposure",
"well-being",
"health footprint"
],
"content": "Introduction\n\nA consortium of 67 scientific institutions from 24 European countries and beyond, covering over thirty scientific disciplines ranging from anthropology to toxicology, responded to an invitation by the European Commission to study the place of addictions in contemporary European society. The resulting five-year endeavour, the Addictions and Lifestyles in Contemporary Europe - Reframing Addictions Project (ALICE RAP, www.alicerap.eu), went beyond this. It reframed our understanding of addictions and formulated a blueprint for re-designing the governance of addictions. This paper summarizes the project’s conclusions, pointing to new understandings of the science and policy of nicotine, illegal drugs and alcohol, hereafter collectively referred to as ‘drugs’1–6. Although this paper does not cover process addictions (e.g., gambling3), much of what is said applies to addictions beyond drugs.\n\nThe paper starts by discussing why we need to re-think addictions. It contrasts two powerful pieces of evidence: the harm done by drugs, versus the poorly structured existing governance approaches designed to manage such harm. The paper continues by considering three bases for re-thinking the addiction concept in ways that could lead to improved strategies across different jurisdictions: recognition that there is a biological predisposition for people to seek out and ingest drugs; that heavy use over time becomes a replacement concept and descriptor for the term substance use disorder; and that quantitative risk assessment can be used to standardize harm across different drugs, based on drug potency and exposure. 
The paper finishes by proposing two approaches that could strengthen addictions governance: embedding governance within a well-being frame, and adopting an accountability system—a health footprint that apportions responsibility for who and what causes drug-related harm.\n\n\nWhy do we need to re-think addictions?\n\nThe need to re-think addictions is exemplified by the extent of harm caused by the drugs themselves, and by the fact that no single country, at least in Europe, has fully overcome poorly managed and fragmented governance structures.\n\nA standard way to document and describe the interference that drugs have on human biology and functioning is to use years of life lost to premature mortality (YLL) and disability adjusted life years (DALYs). DALYs are a measure of health that sum up YLL and years of life lost due to disability and detriments in functioning (YLD). In 2013, illegal drug use was responsible for 1.8% of YLL in the European Union (EU), alcohol was responsible for 8.2% and tobacco for 18.2% (Table 1), imposing economic burdens in excess of 2.5% of GDP7.\n\nSource: own calculations based on IHME Global burden of diseases, injuries and risk factors study (http://www.healthdata.org/gbd).\n\nYLL: Years of life lost due to premature mortality\n\nDALYs: Disability adjusted life years\n\nSource data available in Dataset 1102.\n\nThe data in Table 1 represent harm to the drug user. However, drug use also harms the health of others. For instance, operating machinery under the influence of illegal drugs can cause injury to others8,9. Although decreasing globally, second-hand smoking was estimated to kill more than 330 thousand people worldwide in 2013, and caused about 7% of the burden of disease in DALYs attributable to tobacco smoking10. 
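The DALY bookkeeping described above is a simple sum, which can be sketched as follows (the numbers in the example are made up for illustration, not figures from the paper):

```python
def dalys(yll: float, yld: float) -> float:
    """Disability-adjusted life years: years of life lost to premature
    mortality (YLL) plus years of life lost to disability and reduced
    functioning (YLD)."""
    return yll + yld

# Illustrative numbers only: a condition causing 2.0 million YLL and
# 0.5 million YLD accounts for 2.5 million DALYs.
print(dalys(2.0e6, 0.5e6))  # -> 2500000.0
```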
The extent of harm to others caused by alcohol consumption is estimated to be proportionally even larger, mainly due to traffic accidents, violence, including homicide, and foetal alcohol spectrum disorders11.\n\nGovernance is defined as the processes and structures of public policy decision making and management that engage people across the boundaries of public agencies, levels of government, and public, private and civic spheres to carry out a public purpose that cannot be accomplished by any one sector alone12. The involvement of multiple stakeholders in governance is not without risk. The exclusive use of top-down bureaucratic approaches cannot maximize societal benefits when dealing with ‘wicked problems’ that are highly resistant to resolution13 (for definition of wicked problems, see ‘The New Governance of Addictive Substances and Behaviours by Anderson et al6). An analysis of 28 European countries found that no single country had a comprehensive policy for all drugs (including nicotine, illegal drugs and alcohol) within a broad societal well-being approach. For more detail, see ‘Governance of Addictions: European Public Policies’, by Albareda A et al1.\n\nThere are at least three reasons for ineffective and poorly integrated governance. Firstly, the same harm done by drugs is defined and understood in different ways in different countries and state systems14–16. Seen from a trans-national comparative perspective, there is a lack of a common understanding of appropriate policies, and responses are often constrained by approaches that are tied to assumptions that are not evidence-based4. Ways of thinking about the harm done by drugs vary enormously, with considerable heterogeneity between different drugs, and between international, national and local levels of governance. 
For detail, see ‘Concepts of Addictive Substances and Behaviours across Time and Place, by Hellman et al4.\n\nSecondly, a multitude of commercial, political and public stakeholders are active in addictions governance on national and international levels. In any given society, stakeholders that have power, means and influence are likely to achieve an advantageous influential position. The concepts of addiction are also shaped by popular constructs promulgated by the mass media and customs in the general population. Stakeholder positions and perceptions of drug problems also vary over time and by area4, implying that sustainable approaches must be interwoven into societal and governance structures.\n\nThirdly, corporate power17, through multiple channels of influence, can hinder evidence-based policy decisions5. Corporate strategies often include attempts to influence civil society, science and the media, as part of a wider aim to manage and, if possible, capture institutions that set policy. Transparency is insufficient given that the multiplicity of channels with corporate power is poorly acknowledged and understood by policy makers. Therefore, the rules in place to ensure level playing fields for discussions and equitable decision-making across all factors are inadequate6.\n\n\nReframing addictions\n\nThe consensus reached under ALICE RAP was that there are at least three ways to reframe addictions that could lead to improved strategies across different jurisdictions. 
These include:\n\n1) Recognition that humans have a biological predisposition for seeking out and ingesting drugs;\n\n2) Recognition that ‘heavy use over time’ should replace the concept and term ‘substance use disorder’;\n\n3) Recognition that a quantitative risk assessment accounting for drug potency and drug exposure, can standardize measures of harm across different drugs.\n\nThe idea that human exposure to drugs did not occur until late in human evolution—thus leaving our species inexperienced—is often posited as one of the reasons that these substances cause so much harm18. However, multidisciplinary scientific evidence suggests otherwise. Many substances consumed today are not evolutionary novelties18,19. In the story of terrestrial life over the last 400 million years or so, one ongoing theme has been the “battle” between plants and the animals that eat them. Of the many defence mechanisms in existence, plants produce numerous chemicals, including tetrahydrocannabinol, cocaine, nicotine, and opiates, all of which are potent neurotoxins that deter consumption of plant tissue by animals18. From an evolutionary perspective, we thus find natural selection for compounds that discourage consumption of the plant via punishment of potential consumers. By contrast, there has been no natural selection for expression of psychoactive compounds which encourage consumption (i.e., via consumer reward), which has also been predicted by neurobiological and behavioural psychology theories of reward and reinforcement for contemporary drugs20.\n\nCounterbalancing the development of plant neurotoxins, plant-eating animals have evolved to counter-exploit plants’ production of drugs, for instance by exploiting the anti-parasitic properties of some of them18. Many species of invertebrates and vertebrates engage in pharmacophagy, the deliberate consumption and sequestration of plant toxins, to dissuade parasites and predators. 
In a human context, present-day examples of pharmacophagy may be seen with Congo basin hunter-gatherers, amongst whom the quantity of cannabis21 and nicotine22 consumed is titrated against intestinal worm burden - the higher the intake, the lower the worm burden. In individuals treated with the anti-worm drug albendazole, the number of nicotine-containing cigarettes smoked is reduced22.\n\nAlthough parasite-host co-evolution is recognized as a potent selective force in nature, other, subtler evolutionary dynamics may affect human and animal interactions with plant-based drugs, including that they may buffer against nutritional and energetic constraints on signalling in the central nervous system23. Ethnographic research reveals that many indigenous groups classify “drugs” as food, rather than psychoactive entities, and that they are perceived as having food-like effects, most notably for increasing tolerance for fatigue, hunger and thermal stress in nutritionally-constrained environments23. The causes of these perceived effects have not been a research question, but there are clues that the “food” classification may be literal rather than allegorical. Common plant toxins not only mimic mammalian neurotransmitters but are also synthesized from the same nutritionally-constrained amino acid precursors, such as tyrosine and tryptophan. In harsh environments, toxic plants could function as a “famine food” providing essential dietary building blocks, or may function as a direct substitute for nutritionally-constrained endogenous neurotransmitters. 
There is some evidence to support this hypothesis in animal research; for example, wood rats in cold environments reduce thermoregulatory costs by modulating body temperature with plant toxins consumed from the juniper plant24.\n\nIn the case of ethanol, its presence within ripe fruit suggests low-level but chronic dietary exposure for all fruit-eating animals, with volatilized alcohols potentially serving in olfactory localization of nutritional resources (i.e., animals can use the smell of alcohol to locate food over long distances)19. Molecular evolutionary studies indicate that an oral digestive enzyme capable of rapidly metabolizing ethanol was modified in human ancestors near the time that they began extensively using the forest floor, about 10 million years ago25; humans have retained the fast-acting enzyme to this day. By contrast, the same alcohol dehydrogenase in our more ancient and mostly tree-dwelling ancestors did not oxidize ethanol as efficiently. This evolutionary switch suggests that exposure to dietary sources of ethanol became more common as hominids adapted to bipedal life on the ground. Ripe fruits accumulating on the forest floor could provide substantially more ethanol cues and result in greater caloric gain relative to fruits ripening within the forest canopy, and our contemporary patterns of alcohol consumption and excessive use may accordingly reflect millions of years of natural dietary exposure19.\n\nThis evolutionary evidence does not imply that humans also evolved to specifically consume nicotine, for example, or that nicotine use is beneficial in the modern world. What is novel in the modern world is the high degree of availability, and high concentration of psychoactive agents and routes of consumption that promote intoxication. 
What is different with alcohol in the modern world is novel availability through fermentative technology, enabling humans to consume it as a beverage, devoid of food bulk, with higher ethanol content, and artificially higher salience than that which characterizes fruit fermenting in the wild. The evolutionary evidence has two implications: firstly, policies that prohibit the use of drugs are likely to fail because people have a biological predisposition to seek out chemicals with varying nutritional and pharmacological properties; and secondly, in present-day society, drug delivery systems have been developed that are beyond what is reflected in the natural environment, particularly with respect to levels of potency, availability and taste, which could be argued as being the more central drivers of harm. Potency is largely determined by producer organisations operating in markets, which, from the perspective of overall societal well-being, are inadequately managed26. Better regulation of potency can become a major opportunity for additional policy interventions - for example with alcohol, see ‘Evidence of reducing ethanol content in beverages to reduce harmful use of alcohol’ by Rehm et al.27.\n\nTo better understand the interference of drugs in human biology and functioning, the consensus reached in ALICE RAP was that the concept and term ‘heavy use over time’ should be proposed as the replacement for ‘substance use disorder’. In medical settings and indeed often in academic and lay settings, heavy users of drugs are commonly dichotomized into those with a ‘substance use disorder’ or not. ‘Substance use disorder’ is a clinical construct that is often used as a shorthand to identify individuals who might benefit from advice or treatment. But as a condition in itself, it is a medical artefact which occurs in all grades of severity, with no natural distinction between ‘health’ and ‘disease’28,29.\n\nThis is illustrated with alcohol. 
The associated chronic organ damage (e.g., liver cirrhosis, cancers) exponentially increases in risk as alcohol consumption accumulates over time30,31. Unmanaged heavy drinking is associated with subsequent heavy drinking, often culminating in brain damage32, itself a consequence of heavy drinking33,34 but also a driver of future behaviour.\n\nAlcohol consumption itself is close to log-normally distributed in drinking populations, skewed towards heavy drinking35. There is no natural cut-off point above which \"alcohol use disorder\" definitively exists and below which it does not. “Alcohol use disorder” is clinically defined as a score on a checklist of symptoms, and there is a smooth exponential relationship between levels of alcohol consumption and the score on the checklist29,36. Heavy drinking is a cause of the items on the checklist, including compulsion to drink more, which can also be a consequence of brain damage, itself caused by heavy drinking. Thus, “alcohol use disorder” is a diagnostic artefact. No more is needed to consider what is called “alcohol use disorder” other than heavy use over time28,29.\n\nFor alcohol (and other drugs as well), this approach does not imply that heavy use over time is the only cause of harm. There are other factors that drive heavy alcohol use and harm3, acting independently or in interaction, at molecular and cellular levels (e.g., alcohol dehydrogenase polymorphisms37), individual levels (e.g., income38 and personality39) and environmental levels (e.g., stigma).\n\nThere is an ongoing discussion as to whether or not sugar is an ‘addictive’ substance that should be captured in the same category as drugs26. Framing the problem as one of heavy use over time provides insight into this debate. As with alcohol and high blood pressure40, chronic disease risk associated with plasma glucose levels has a continuous exponential relationship with sugar consumption41. 
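The skewed, roughly log-normal shape of consumption distributions referred to above can be illustrated with a quick simulation (the mu and sigma parameters are arbitrary, chosen purely for illustration, not estimates from the paper):

```python
import random
import statistics

random.seed(0)
# Simulated per-capita consumption drawn from a log-normal distribution,
# the approximate shape of drinking distributions in populations.
sample = [random.lognormvariate(2.0, 0.8) for _ in range(100_000)]

mean = statistics.mean(sample)
median = statistics.median(sample)
# Right skew: a minority of heavy consumers pulls the mean above the median,
# with no natural cut-off separating "disordered" from other consumption.
print(mean > median)  # -> True
```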
The distribution of blood glucose levels is close to log-normally distributed in populations and skewed towards high consumption levels42. There is no natural cut-off point above which diabetes (or any other disease manifestation) linked to sugar definitively exists and below which it does not. Similar to the alcohol model where heavy use of alcohol over time leads to further heavy use of alcohol from the resulting brain damage, heavy use of sugar over time damages hippocampal function43, which leads to further heavy use of sugar over time44. Thus, in the ‘heavy use over time’ frame, sugar can be placed in the same category as alcohol and other drugs, and managed with similar governance approaches that promote public health.\n\nA core way to document the interference of drugs in human biology and functioning is to use quantitative risk assessment (QRA). QRA is a method applied in regulatory toxicology, for example, to evaluate water contaminants, and before safety approvals for food additives or pesticides. QRA has not been widely applied to drugs. Previous approaches for ranking harm have mostly been based on expert judgements45,46 which have been criticized as being arbitrary and biased47.\n\nThe advantage of QRA is that it provides a formal scientific method to rank the harm-potential of drugs, making optimum use of available data48. There are several approaches for QRA available, with Margin of Exposure (MOE) suggested by WHO49 as being most suitable for prioritizing risk management. In the alcohol field, MOE has been applied to evaluate the liver cirrhosis risk of ethanol, which is the single most important chronic disease condition attributable to alcohol globally50. MOE results have replicated those behind existing guidelines for low-risk drinking51. In a detailed study of the components in alcoholic beverages, ethanol was confirmed as the compound with highest risk52. 
In a detailed comparison between ethanol and non-metabolically produced acetaldehyde contained in beverages, it was also judged that the risk of ethanol comprises more than 99% of the total risk53. It can be concluded that the risk of alcoholic beverages can be evaluated by looking at the effects of ethanol alone. The situation is less clear for tobacco, for which some industry MOE studies find toxicants other than nicotine54,55. An MOE analysis of electronic cigarette liquids indicated that nicotine is the compound posing the highest risk56.\n\nMOEs are calculated as the ratio of a toxic dose of the drug (usually the benchmark dose BMDL10, the lowest dose which is 95% certain to cause no more than a 10% negative outcome incidence) to the dose consumed either individually or on a population scale47. The higher the MOE, the lower the level of risk, with low risk not implying safety. An MOE of 100 means that the drug is being consumed at one hundredth of the benchmark dose; an MOE of 1 means that the drug is being consumed at this toxic dose. The MOE for drugs can be calculated taking into account a range of hazard outcomes in health and other well-being domains, as long as suitable dose-response data are available (which is not the case for most drugs and many well-being indicators). Therefore, analyses to date are primarily restricted to lethal outcomes based on animal studies. Results for European adults are summarized in Figure 1. The low MOE for alcohol (and thus high risk) is due to the high levels of consumption by European adults. The MOE results are consistent with the consensus of expert rankings in which cannabis is ranked with lower risk and alcohol with higher risk than current policies assume45,46. 
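The MOE arithmetic described above reduces to a single ratio; a minimal sketch (the example doses are invented for illustration and are not taken from the ALICE RAP analyses):

```python
def margin_of_exposure(bmdl10_mg_per_kg: float, exposure_mg_per_kg: float) -> float:
    """Margin of Exposure: ratio of the benchmark dose (BMDL10) to the dose
    actually consumed. Higher MOE = lower risk; an MOE of 1 means consumption
    at the toxic benchmark dose itself. Low risk does not imply safety."""
    if exposure_mg_per_kg <= 0:
        raise ValueError("exposure must be positive")
    return bmdl10_mg_per_kg / exposure_mg_per_kg

# Illustrative numbers only: a drug with a BMDL10 of 500 mg/kg body weight,
# consumed at 5 mg/kg per day, yields an MOE of 100, i.e. intake at one
# hundredth of the benchmark dose.
print(margin_of_exposure(500, 5))  # -> 100.0
```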
The MOE is inherent to the drug itself; it does not account for the harms that arise from drug delivery systems, for example, smoked tobacco, or from secondary effects such as unclean syringes used for heroin intake.\n\nSource: Lachenmeier & Rehm (2015)47.\n\nOf course, MOE, as presented here, focuses on the physical body of the adult user as the locus of harm. It does not take into account the sex and age of the user, or harm to individuals other than the user or at collective levels, which are a primary source of social differentiation between drugs. It also focuses on mortality, rather than intoxication in the moment. Differences between the intoxicating power of substances in the moment, and in the behavioural consequences of taking them, are primary reasons why, for example, societies have treated alcohol differently to tobacco. Nevertheless, we believe that MOE should be applied at the current stage even when the underlying toxicological data are incomplete, to provide a better alignment of prioritization of policy to the drugs associated with higher risks, which in this case are nicotine, cocaine, heroin and alcohol.\n\n\nTowards better governance\n\nWe have described three harmonizing approaches to reframe our understanding of addictions: biological predisposition to seek out psychoactive substances; heavy use over time as a fruitful characterisation; and quantitative risk assessment. Here, we propose two underlying pillars for a re-design of the governance of drug controls: embedding drugs governance within a comprehensive model of societal well-being; and creating a health footprint which, modelled on the carbon footprint, promotes accountability by identifying who causes what harm to whom from drugs.\n\nWe propose that societal well-being should be our overarching frame for a more integrated governance and monitoring of drug control policies. 
Societal well-being, as captured by the OECD57, includes quality of life (health, education and skills, social connections, civic engagement, and personal security), material conditions (income, employment and housing) and sustainability over time (see Figure 2). Gross domestic product (GDP) is included as a separate domain, recognizing that, while economic well-being is an important component of societal well-being, GDP has significant limitations. GDP excludes, for example, non-market household activity such as parenting, and activities such as conservation of natural resources. GDP also includes activities that do not contribute to well-being, such as pollution and crime; these are termed 'regrettables' and are depicted within GDP but outside well-being. The use of, and harm done by, drugs are affected by and affect all well-being dimensions58.

Source: OECD (2011), How's Life?: Measuring Well-being, OECD Publishing, Paris. DOI: http://dx.doi.org/10.1787/9789264121164-en57.

Well-being analyses have found that, whilst some illegal drug policies may reduce health harms, they often come with adverse side effects, including criminalization, social stigma and social exclusion, all of which exacerbate health harms59. Humans are hard-wired to be social animals60, with social networks strongly influencing tobacco use61 and alcohol intake62. Punitive drug policies bring about the opposite: social exclusion through stigma and social isolation63–65. Engagement with illegal drugs conveys especially strong social meanings and can lead to stigmatization of marginalized heavy users, as opposed to the supposedly more responsible mainstream users66. This can lead to punitive societal responses. Meanwhile, exclusion from the mainstream may allow harms to continue unchecked. If a user is caught using drugs in a country with "zero tolerance" of illegal drugs, the ensuing criminal sanctions will impede civic engagement and any improvements in quality of life and material living conditions.
For more detail, see 'Well-being as a frame for understanding addictive substances' by Stoll & Anderson58. Changes in life expectancy in Mexico illustrate the negative consequences of criminalization67. After six decades of gains in life expectancy in Mexico, the trend stagnated after 2000 for both men and women, and for men was reversed after 200568. This was largely due to an unprecedented rise in homicide rates, mostly as a result of drug policies promoting 'gang wars' and conflicts between gangs, the police and the army69.

A well-being frame calls for whole-of-society approaches that progressively legalize illegal drugs to reduce violence and personal insecurity, while focusing on substances as drivers of harm6,70. It balances the complex factors affecting drug use and related harm through continuous, proactive monitoring of policy effects, with regulations embedded in international coordination. Such approaches avoid criminalization where possible and distribute the costs of addressing the problem equitably across society. Governance strategies should manage nicotine, illegal drugs and alcohol as a whole to avoid overlaps, contradictions, gaps and inequalities1. The concern should be focused on harms, both to the user and to others, including family and friends, communities and society as a whole. The structures to support the strategies should be coordinated and multi-sectoral, involving high-level coordination of health, social welfare, and justice agencies in the context of international treaties, and, importantly, equitable across the lifespan and between genders and cultural groups. To increase the pace of policy change, regional and local public policies can create policy communities and networks within a common international framework.

Managing 'wicked problems' requires clear rules of private sector engagement in policy making, particularly when private interests go against societal well-being6.
An evolved governance system must include measures to avoid industry co-optation, through transparency and checks and balances. Private sector stakeholders should operate within established rules.

The ongoing monitoring of outcomes within a well-being framework would promote accountability. Modelled on the carbon footprint, we propose a health footprint as the accountability tool. Footprints were developed in the ecological field as a measure of human demand on ecosystems71, and include water footprints72 and carbon footprints that apportion greenhouse gas emissions to certain activities, products and populations73. The central reason for estimating a carbon footprint is to help reduce the risk of climate change by enabling targeted and effective reductions in greenhouse gas emissions74.

The health footprint can be considered a measure of the total risk-factor-attributable disability-adjusted life years (DALYs)75 of a defined population, sector or action within a spatial boundary (e.g., a jurisdiction) or temporal boundary (e.g., one year). It can be calculated using the standard risk-factor-related years of life lost (YLL) and DALY methodologies of the Global Burden of Disease Study10 and of the World Health Organization75. Health footprints are a starting point: to be accountable, we ultimately need to understand what drives the health footprint (Figure 3).

Above the health footprint in Figure 3 are the structural drivers of harm, which directly influence the size of the health footprint. Biological attributes and functions include, for example, the biological pre-disposition to seek out and use drugs. Genetic variants include, for example, those that affect the function of alcohol dehydrogenase, influencing consumption levels and harm8,76. Changes in global population size and structure can increase absolute numbers of drug-related DALYs, even though rates per person can decrease over the same time7.
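As a minimal sketch of the attributable-burden arithmetic behind such a footprint, the standard population attributable fraction (PAF) from comparative risk assessment can be applied to a total DALY count. All prevalences, relative risks and DALY totals below are hypothetical, not figures from the Global Burden of Disease Study.

```python
# Sketch of a risk-factor-attributable health footprint using the standard
# population attributable fraction (PAF). All input numbers are hypothetical.

def paf(prevalences, relative_risks):
    """PAF = sum(p_i * (RR_i - 1)) / (sum(p_i * (RR_i - 1)) + 1)
    over exposure categories i with prevalence p_i and relative risk RR_i."""
    excess = sum(p * (rr - 1.0) for p, rr in zip(prevalences, relative_risks))
    return excess / (excess + 1.0)

# Hypothetical jurisdiction: 20% moderate users (RR 1.2), 5% heavy users (RR 3.0).
fraction = paf([0.20, 0.05], [1.2, 3.0])

total_dalys = 1_000_000              # hypothetical total burden within the boundary (one year)
footprint = fraction * total_dalys   # DALYs attributable to the risk factor

print(f"PAF = {fraction:.3f}, health footprint = {footprint:,.0f} DALYs")
```

The spatial and temporal boundary of the footprint enters through the choice of population (the prevalences) and the DALY total for that jurisdiction and year.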
As sociodemographic status improves in lower income countries, so do drug-related DALYs10; yet, for the same amount of drug use, people with lower incomes suffer more drug-related DALYs than people with higher incomes77.

Above the structural drivers are the circumstantial drivers, those that can change. Related to drug potency and exposure, a target MOE of no less than 10 for all drugs has been argued for6. Policies could achieve such a result either by reducing drug exposure or by reducing the potency of the drug. Technological developments have led to electronic nicotine delivery systems (widely known as e-cigarettes) as widespread alternatives to smoked tobacco, with current best estimates showing e-cigarettes to be considerably less harmful to health than smoked cigarettes78–80. It may be that, once e-cigarettes are heavily produced and marketed by the tobacco industry, society will see cigarette-like levels of sustained heavy use of nicotine. However, the harm from e-cigarettes should stay low, provided they are properly regulated in terms of their components, including nicotine. Social influences and attitudes drive harm through stigma, social exclusion and social marginalization; these are often side-effects of drug policies, which can bring more harm than drug use itself81,82.

Policies that reduce exposure to drugs are essentially those that limit availability by increasing price and reducing physical availability59,83,84. The absence of such evidence-based policies is an important driver of harm. Limits to availability bring a range of co-benefits, to educational achievement and productivity for example, but they can also bring adverse effects - for example, the well-documented violence, corruption and loss of public income associated with some existing 'illegal' drug policies58,85. Individual choices and behaviour that drive harm from drug use are determined by the environment in which those choices and behaviours operate86.
Banning commercial communications, increasing price and reducing availability are all incentives that impact individual behaviour. Research and development can be promoted to reduce the potency of existing drugs87 and of their drug delivery packages27,56,78.

Unfortunately, there remain enormous gaps between the supply of and demand for evidence-based prevention, advice and treatment programmes88–92. Called for by United Nations Sustainable Development Goal 3.593, their supply can bring many co-benefits to society, including reduced social costs and increased productivity94. The harm driven by these gaps is due in large part to insufficient resources and insufficient implementation of effective evidence-based prevention and treatment programmes95. Currently, these programmes represent less than 1% of all costs incurred to society by drugs96. Similar to the medicines agencies (such as the European Medicines Agency) that assess and approve drugs, prevention agencies could be created95. Compounding the gap between supply and demand is the fact that considerable marginalization and stigmatization often happen on the path to treatment, and are then further exacerbated by the treatment itself82. The use of pharmacotherapy as an adjunct may be further limited by ideological stances, poorly implemented guidelines, lack of appropriate medication and, even where the drug is available, a perceived lack of effect97.

The private sector is a core driver of harm through commercial communications, which include all actions undertaken by producers of drugs to persuade consumers to buy and consume more98. There are international models encouraging better control of commercial communications in the public health interest, the most notable being the Framework Convention on Tobacco Control83. In addition to commercial communications, the private sector drives harm through shaping drug policies, leading to more drug-related deaths5.
Governance structures thus need the capability and expertise to supervise industry moves that shape drug-related legislation and regulations, including regulating and restricting political lobbying. One of the difficulties here is that politically driven change in difficult areas, such as drug policies, is highly dependent on collective decisions99 and influenced by what has been termed specular interaction100, in which a politician's actions may be determined less by their own convictions and more by their evaluation of the beliefs of their rivals and friends.

The health footprint is the accountability system for who and what causes drug-related harm. Jurisdictional entities can be ranked according to their overall health footprint, in order to identify the countries that contribute most to drug-attributable ill-health and premature death, and where the most health gain could be achieved at country level. Jurisdictional footprints could include 'policy-attributable health footprints', which estimate the difference in health footprint between current policy and ideal health policy. This would address the question: 'what would be the improvement in the health footprint compared to present policies, were the country to implement strengthened or new policies?' Conversely, the health footprint can provide accountability when such evidence-based policy is not implemented correctly.

A range of sectors are involved in nicotine- and alcohol-related risk factors. These include producer and retail organizations such as large supermarket chains, and service provider companies such as the advertising and marketing industries. There is considerable overlap between sectors, and estimates will need to determine appropriate boundaries for health footprint calculations. Companies could report their health footprints and choose to commit to reducing them by a specified amount over a five- to ten-year time frame.
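The ranking and reduction-commitment ideas above can be sketched as follows. The jurisdiction names, DALY figures and reduction schedule are invented for illustration; they are not estimates from this paper.

```python
# Hypothetical accountability table: rank jurisdictions by their drug-related
# health footprint and project a committed reduction over a reporting period.
# All names and DALY figures are invented for illustration.

footprints = {               # attributable DALYs in the last reporting year
    "Country X": 420_000,
    "Country Y": 1_150_000,
    "Country Z": 87_000,
}

# Rank largest footprint first: where the most health gain could be achieved.
for name, dalys in sorted(footprints.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {dalys:,} DALYs")

def target_after_commitment(current_dalys: float, annual_reduction: float, years: int) -> float:
    """Footprint target after an annual fractional reduction sustained for `years`."""
    return current_dalys * (1.0 - annual_reduction) ** years

# e.g. a committed 5% annual reduction over a five-year time frame
target = target_after_commitment(footprints["Country Y"], 0.05, 5)
print(f"Country Y five-year target: {target:,.0f} DALYs")
```

The same table could be kept per company or sector instead of per jurisdiction, with the annual report comparing the realized footprint against the committed target.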
Direct examples of producer action could include switching from higher to lower alcohol concentration products27, and switching from smoked tobacco cigarettes to e-cigarettes80.


Conclusions

The points stated above underscore the need to redesign the governance of drugs, in Europe and globally. Margin of exposure estimates for four drugs (nicotine, cocaine, heroin and alcohol) indicate exceedingly high risk and thus call for determined action. Drugs are responsible for a high proportion of years of life lost in the European Union: in 2013, tobacco accounted for 18.2% of life years lost, illegal drugs for 1.8%, and alcohol for 8.2%. There are many side effects of existing policies, such as stigma, social exclusion, lack of personal security, civil unrest and homicide58.

Under the auspices of ALICE RAP, a large, multidisciplinary team of addiction scientists put forward a range of arguments for moving progressively towards regulated legalization of certain illegal drugs, proposing a well-being frame that calls for whole-of-society approaches and continuously monitors and accounts for adverse side effects of drug policy. Humans have a biological pre-disposition to seek out a range of drugs, so prohibitionist policies are likely to run into difficulty - and they have. Legalization does not imply that drug governance is left to market forces alone - the experience of nicotine and alcohol demonstrates that this is not possible. Instead, drug governance requires comprehensive regulation, with adequate and transparent rules of the game for stakeholder involvement, and appropriate international regulatory frameworks. With a health footprint, it can be documented who causes what harm from nicotine, illegal drugs and alcohol in the public and private sectors.
Public bodies and private companies should be required to publish their health footprints on an annual basis, and to indicate their plans for reducing them.

The consensus that ALICE RAP reached will not come without push-back. Without input from evolutionary theory, neurobiology will continue to maintain that human drug use is initiated and sustained by reward and reinforcement at both biological and behavioural levels, compounded by the mistaken view that the human encounter with drugs is a relatively new evolutionary experience and that human vulnerability to drugs spans moral, behavioural, and biological dimensions. Disease classification systems are based not only on measurement, but also on qualification for, and thus payment for, treatment. The concept of heavy use over time does not prevent the use of qualification definitions for treatment: threshold consumption levels determining treatment can be defined as levels above which advice and treatment have been shown to reduce the development or progression of end-organ damage. Extending margin of exposure analyses to a range of outcomes beyond mortality will address the concern of relying on one metric for drug policy - its strength is that it allows standard comparison across drugs and indicates options for changing both dose and exposure.

Whilst measuring societal well-being as a whole has gained support, the implications for drug policy that favour regulated legalization will meet resistance from those who favour prohibition, particularly as prohibition is based more on a moral than an evidence-based standpoint, as has been the case with alcohol101. The footprint implies responsibility, which is often difficult for both public and private sectors to accept, in particular for producer companies whose vested interests might be challenged.

What we propose in this paper are large adjustments to our understanding of addictions and to what needs to be done to effectively reduce the widespread harms done by drugs.
We hope that what we have written might start a process towards better drug policy for the good of the public.


Data availability

Dataset 1: Source data underlying the results presented in Table 1. The data were based on the IHME Global Burden of Diseases, Injuries and Risk Factors Study (http://www.healthdata.org/gbd).

DOI: 10.5256/f1000research.10860.d154573102
"appendix": "Author contributions\n\n\n\nPA, VB, PC, RD, MH, DWL, AL-H, DM, JR, RR, LS, RS, TY and AG drafted sections of the text and read the final manuscript, for which consensus was agreed. PA coordinated the drafting and edited the text.\n\n\nCompeting interests\n\n\n\nPA and AG coordinated the ALICE RAP project. VB, PC, MH, DWL, AL-H, DM, JR, RR, LS, and TY undertook various aspects of research for the ALICE RAP project. PA reports receipt of fees for public health comment to AB InBev’s goals to reduce the harmful use of alcohol, outside the submitted work. PC reports having served as a technical advisor to ABInBev Global Health Foundation, outside the submitted work. AG reports grants and personal fees from Lundbeck, grants and personal fees from D&A Pharma, personal fees from AbbVie, outside the submitted work. AL-H reports grants and personal fees from Lundbeck, outside the submitted work. JR reports grants, personal and other fees from Lundbeck, outside the submitted work. All other authors report no conflicts of interest. The views expressed here reflect only the authors’ and the European Union is not liable for any use that may be made of the information contained therein. No funds were used to prepare the paper.\n\n\nGrant information\n\nThe research leading to the basis of this paper has received funding from the European Commission's Seventh Framework Programme (FP7) 2007–2013, under Grant Agreement n° 266813 - Addictions and Lifestyle in Contemporary Europe – Reframing Addictions Project (ALICE RAP – www.alicerap.eu).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nReferences\n\nYsa T, Colom J, Albareda A, et al.: Governance of Addictions: European Public Policies. Oxford: Oxford University Press, 2014. Reference Source\n\nAnderson P, Rehm J, Room R: The Impact of Addictive Substances and Behaviours on Individual and Societal Well-Being. 
Oxford, Oxford University Press, 2015. Reference Source

Gell L, Bühringer G, McLeod J, et al.: What Determines Harm from Addictive Substances and Behaviours? Oxford: Oxford University Press, 2016. Reference Source

Hellman M, Berridge V, Duke K, et al.: Concepts of Addictive Substances and Behaviours across Time and Place. Oxford: Oxford University Press, 2016. Reference Source

Miller D, Harkins C, Schlögl M, et al.: Impact of Market Forces on Addictive Substances and Behaviours: The web of influence of addictive industries. Oxford: Oxford University Press, in press, 2017. Reference Source

Anderson P, Braddick F, Conrod PJ, et al.: The New Governance of Addictive Substances and Behaviours. Oxford: Oxford University Press, in press, 2017. Reference Source

Shield KD, Rehm J: The effects of addictive substances and addictive behaviours on physical and mental health. In: Anderson P, Rehm J & Room R, eds. The Impact of Addictive Substances and Behaviours on Individual and Societal Well-Being. Oxford, Oxford University Press, 2015. Publisher Full Text

World Health Organization: Global status report on road safety 2015. Geneva: World Health Organization, 2015. Reference Source

Degenhardt L, Hall W: Extent of illicit drug use and dependence, and their contribution to the global burden of disease. Lancet. 2012; 379(9810): 55–70. Publisher Full Text

GBD 2013 DALYs and HALE Collaborators: Global, regional and national disability-adjusted life years (DALYs) for 306 diseases and injuries and healthy life expectancy (HALE) for 188 countries, 1990–2013: quantifying the epidemiological transition. Lancet. 2015; 386(10009): 2145–2191. Publisher Full Text

Gell L, Ally A, Buykx P, et al.: Alcohol's harm to others. 2015; accessed 1 August 2016. Reference Source

Emerson K, Nabatchi T, Balogh S: An integrative framework for collaborative governance. J Public Adm Res Theory. 2012; 22(1): 1–29.
Publisher Full Text

Roberts N: Wicked problems and network approaches to resolution. Int Public Manage Rev. 2000; 1(1): 1–19. Reference Source

Hellman M, Room R: What's the story on addiction? Popular myths in the USA and Finland. Critical Public Health. 2015; 25(5): 582–598. Publisher Full Text

Hellman M, Majamäki M, Rolando S, et al.: What causes addiction problems? Environmental, biological and constitutional explanations in press portrayals from four European welfare societies. Subst Use Misuse. 2015; 50(4): 419–438. PubMed Abstract | Publisher Full Text

Egerer M, Hellman M, Rolando S, et al.: General practitioners' position on problematic gambling in three European welfare states. In: Hellman M, Berridge V, Duke K, Mold A, eds. Concepts of Addictive Substances and Behaviours across Time and Place. Oxford: Oxford University Press, 2016; 169–192. Publisher Full Text

Solana J, Saz-Carranza A: The Global Context: How Politics, Investment, and Institutions Impact European Businesses. Barcelona: ESADE, 2016; accessed 1 October 2016. Reference Source

Sullivan RJ, Hagen EH: Passive vulnerability or active agency? An evolutionarily ecological perspective of human drug use. In: Anderson P, Rehm J & Room R, eds. The Impact of Addictive Substances and Behaviours on Individual and Societal Well-Being. Oxford, Oxford University Press, 2015. Publisher Full Text

Dudley R: The Drunken Monkey: Why We Drink and Abuse Alcohol. Berkeley: University of California Press, 2014. Reference Source

Sullivan RJ, Hagen EH, Hammerstein P: Revealing the paradox of drug reward in human evolution. Proc Biol Sci. 2008; 275(1640): 1231–1241. PubMed Abstract | Publisher Full Text | Free Full Text

Roulette CJ, Kazanji M, Breurec S, et al.: High prevalence of cannabis use among Aka foragers of the Congo Basin and its possible relationship to helminthiasis. Am J Hum Biol. 2016; 28(1): 5–15. PubMed Abstract | Publisher Full Text

Roulette CJ, Mann H, Kemp B, et al.: Tobacco use vs.
helminths in Congo basin hunter-gatherers: self-medication in humans? Evol Hum Behav. 2014; 35(5): 397–407. Publisher Full Text

Sullivan RJ, Hagen EH: Psychotropic substance-seeking: evolutionary pathology or adaptation? Addiction. 2002; 97(4): 389–400. PubMed Abstract | Publisher Full Text

Forbey JS, Harvey A, Huffman MA, et al.: Exploitation of secondary metabolites by animals: A response to homeostatic challenges. Integr Comp Biol. 2009; 49(3): 314–328. PubMed Abstract | Publisher Full Text

Carrigan MA, Uryasev O, Frye CB, et al.: Hominids adapted to metabolize ethanol long before human-directed fermentation. Proc Natl Acad Sci U S A. 2015; 112(2): 458–463. PubMed Abstract | Publisher Full Text | Free Full Text

Schmidt LA: What are addictive substances and behaviours and how far do they extend? In: Anderson P, Rehm J, Room R, eds. The Impact of Addictive Substances and Behaviours on Individual and Societal Well-Being. Oxford University Press, 2015. Publisher Full Text

Rehm J, Lachenmeier DW, Jané Llopis E, et al.: Evidence of reducing ethanol content in beverages to reduce harmful use of alcohol. Lancet Gastroenterol Hepatol. 2016; 1(1): 78–83. Publisher Full Text

Rehm J, Marmet S, Anderson P, et al.: Defining substance use disorders: do we really need more than heavy use? Alcohol Alcohol. 2013; 48(6): 633–640. PubMed Abstract | Publisher Full Text

Rehm J, Anderson P, Gual A, et al.: The tangible common denominator of substance use disorders: a reply to commentaries to Rehm et al. (2013a). Alcohol Alcohol. 2014; 49(1): 118–122. PubMed Abstract | Publisher Full Text

Shield KD, Parry C, Rehm J: Chronic diseases and conditions related to alcohol use. Alcohol Res. 2013; 35(2): 155–173. PubMed Abstract | Free Full Text

Rehm J, Roerecke M: Reduction of drinking in problem drinkers and all-cause mortality. Alcohol Alcohol. 2013; 48(4): 509–513.
PubMed Abstract | Publisher Full Text

Rando K, Hong KI, Bhagwagar Z, et al.: Association of frontal and posterior cortical gray matter volume with time to alcohol relapse: a prospective study. Am J Psychiatry. 2011; 168(2): 183–192. PubMed Abstract | Publisher Full Text | Free Full Text

Paul CA, Au R, Fredman L, et al.: Association of alcohol consumption with brain volume in the Framingham Study. Arch Neurol. 2008; 65(10): 1363–1367. PubMed Abstract | Publisher Full Text | Free Full Text

Ding J, Eigenbrodt ML, Mosley TH Jr, et al.: Alcohol intake and cerebral abnormalities on magnetic resonance imaging in a community-based population of middle-aged adults: the Atherosclerosis Risk in Communities (ARIC) study. Stroke. 2004; 35(1): 16–21. PubMed Abstract | Publisher Full Text

Kehoe T, Gmel G, Shield KD, et al.: Determining the best population-level alcohol consumption model and its impact on estimates of alcohol-attributable harms. Popul Health Metr. 2012; 10(1): 6. PubMed Abstract | Publisher Full Text | Free Full Text

Rubinsky AD, Dawson DA, Williams EC, et al.: AUDIT-C scores as a scaled marker of mean daily drinking, alcohol use disorder severity, and probability of alcohol dependence in a U.S. general population sample of drinkers. Alcohol Clin Exp Res. 2013; 37(8): 1380–1390. PubMed Abstract | Publisher Full Text

Peng Y, Shi H, Qi XB, et al.: The ADH1B Arg47His polymorphism in east Asian populations and expansion of rice domestication in history. BMC Evol Biol. 2010; 10: 15. PubMed Abstract | Publisher Full Text | Free Full Text

Hosseinpoor AR, Parker LA, Tursan d'Espaignet E, et al.: Socioeconomic inequality in smoking in low-income and middle-income countries: results from the World Health Survey. PLoS One. 2012; 7(8): e42843. PubMed Abstract | Publisher Full Text | Free Full Text

Conrod PJ, Nikolaou K: Annual Research Review: On the developmental neuropsychology of substance use disorders. J Child Psychol Psychiatry.
2016; 57(3): 371–394. PubMed Abstract | Publisher Full Text

National Heart, Lung, and Blood Institute: The Seventh Report of the Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure (JNC 7). 2004; accessed 1 August 2016. Reference Source

Vistisen D, Colagiuri S, Borch-Johnsen K, et al.: Bimodal distribution of glucose is not universally useful for diagnosing diabetes. Diabetes Care. 2009; 32(3): 397–403. PubMed Abstract | Publisher Full Text | Free Full Text

The Emerging Risk Factors Collaboration, Sarwar N, Gao P, et al.: Diabetes mellitus, fasting blood glucose concentration, and risk of vascular disease: a collaborative meta-analysis of 102 prospective studies. Lancet. 2010; 375(9733): 2215–2222. PubMed Abstract | Publisher Full Text | Free Full Text

Jacka FN, Cherbuin N, Anstey KJ, et al.: Western diet is associated with a smaller hippocampus: a longitudinal investigation. BMC Med. 2015; 13: 215. PubMed Abstract | Publisher Full Text | Free Full Text

Hargrave SL, Jones S, Davidson TL: The Outward Spiral: A vicious cycle model of obesity and cognitive dysfunction. Curr Opin Behav Sci. 2016; 9: 40–46. PubMed Abstract | Publisher Full Text | Free Full Text

Nutt D, King LA, Saulsbury W, et al.: Development of a rational scale to assess the harm of drugs of potential misuse. Lancet. 2007; 369(9566): 1047–1053. PubMed Abstract | Publisher Full Text

van Amsterdam J, Opperhuizen A, Koeter M, et al.: Ranking the harm of alcohol, tobacco and illicit drugs for the individual and the population. Eur Addict Res. 2010; 16(4): 202–207. PubMed Abstract | Publisher Full Text

Lachenmeier DW, Rehm J: Comparative risk assessment of alcohol, tobacco, cannabis and other illicit drugs using the margin of exposure approach. Sci Rep. 2015; 5: 8126. PubMed Abstract | Publisher Full Text | Free Full Text

Hertz-Picciotto I: Epidemiology and quantitative risk assessment: a bridge from science to policy.
Am J Public Health. 1995; 85(4): 484–491. PubMed Abstract | Publisher Full Text | Free Full Text

WHO IPCS: Environmental Health Criteria 239. Principles for modelling dose–response for the risk assessment of chemicals. WHO, Geneva, Switzerland, 2009. Reference Source

Rehm J, Shield KD: Global alcohol-attributable deaths from cancer, liver cirrhosis, and injury in 2010. Alcohol Res. 2013; 35(2): 174–183. PubMed Abstract | Free Full Text

Lachenmeier DW, Kanteres F, Rehm J: Epidemiology-based risk assessment using the benchmark dose/margin of exposure approach: the example of ethanol and liver cirrhosis. Int J Epidemiol. 2011; 40(1): 210–218. PubMed Abstract | Publisher Full Text

Pflaum T, Hausler T, Baumung C, et al.: Carcinogenic compounds in alcoholic beverages: an update. Arch Toxicol. 2016; 90(10): 2349–2367. PubMed Abstract | Publisher Full Text

Lachenmeier DW, Gill JS, Chick J, et al.: The total margin of exposure of ethanol and acetaldehyde for heavy drinkers consuming cider or vodka. Food Chem Toxicol. 2015; 83: 210–214. PubMed Abstract | Publisher Full Text

Cunningham FH, Fiebelkorn S, Johnson M, et al.: A novel application of the Margin of Exposure approach: segregation of tobacco smoke toxicants. Food Chem Toxicol. 2011; 49(11): 2921–2933. PubMed Abstract | Publisher Full Text

Xie J, Marano KM, Wilson CL, et al.: A probabilistic risk assessment approach used to prioritize chemical constituents in mainstream smoke of cigarettes sold in China. Regul Toxicol Pharmacol. 2012; 62(2): 355–362. PubMed Abstract | Publisher Full Text

Hahn J, Monakhova YB, Hengen J, et al.: Electronic cigarettes: overview of chemical composition and exposure estimation. Tob Induc Dis. 2014; 12(1): 23. PubMed Abstract | Publisher Full Text | Free Full Text

OECD: How's Life? 2015. Paris: OECD, 2015; accessed 1 October 2016. Reference Source

Stoll L, Anderson P: Well-being as a framework for understanding addictive substances.
In: Anderson P, Rehm J & Room R, eds. The Impact of Addictive Substances and Behaviours on Individual and Societal Well-Being. Oxford, Oxford University Press, 2015. Publisher Full Text

Babor T, Caulkins J, Edwards E, et al.: Drug Policy and the Public Good. Oxford and London, Oxford University Press, 2010. Publisher Full Text

Christakis NA, Fowler JH: Friendship and natural selection. Proc Natl Acad Sci U S A. 2014; 111(Suppl 3): 10796–10801. PubMed Abstract | Publisher Full Text | Free Full Text

Christakis NA, Fowler JH: The collective dynamics of smoking in a large social network. N Engl J Med. 2008; 358(21): 2249–2258. PubMed Abstract | Publisher Full Text | Free Full Text

Rosenquist JN, Murabito J, Fowler JH, et al.: The spread of alcohol consumption behavior in a large social network. Ann Intern Med. 2010; 152(7): 426–433, W141. PubMed Abstract | Publisher Full Text | Free Full Text

Kurzban R, Leary MR: Evolutionary origins of stigmatization: the functions of social exclusion. Psychol Bull. 2001; 127(2): 187–208. PubMed Abstract | Publisher Full Text

Oaten M, Stevenson RJ, Case TI: Disease avoidance as a functional basis for stigmatization. Philos Trans R Soc Lond B Biol Sci. 2011; 366(1583): 3433–3452. PubMed Abstract | Publisher Full Text | Free Full Text

Hawkley LC, Capitanio JP: Perceived social isolation, evolutionary fitness and health outcomes: a lifespan approach. Philos Trans R Soc Lond B Biol Sci. 2015; 370(1669): 20140114. PubMed Abstract | Publisher Full Text | Free Full Text

Room R: Addiction and personal responsibility as solutions to the contradictions of neoliberal consumerism. Crit Public Health. 2011; 21(2): 141–151. Publisher Full Text

Rehm J, Anderson P, Fischer B, et al.: Policy implications of marked reversals of population life expectancy caused by substance use. BMC Med. 2016; 14: 42.
PubMed Abstract | Publisher Full Text | Free Full Text

Aburto JM, Beltrán-Sánchez H, García-Guerrero VM, et al.: Homicides in Mexico reversed life expectancy gains for men and slowed them for women, 2000–10. Health Aff (Millwood). 2016; 35(1): 88–95. PubMed Abstract | Publisher Full Text

Gamlin J: Violence and homicide in Mexico: a global health issue. Lancet. 2015; 385(9968): 605–606. PubMed Abstract | Publisher Full Text

Werb D, Rowell G, Guyatt G, et al.: Effect of drug law enforcement on drug market violence: a systematic review. Int J Drug Policy. 2011; 22(2): 87–94. PubMed Abstract | Publisher Full Text

Rees WE: Ecological footprints and appropriated carrying capacity: what urban economics leaves out. Environ Urban. 1992; 4(2): 121–130. Publisher Full Text

Hoekstra AY: The Water Footprint of Modern Consumer Society. London, Routledge, 2013. Reference Source

Wright LA, Kemp S, Williams I: Carbon footprinting: towards a universally accepted definition. Carbon Manage. 2011; 2(1): 61–72. Publisher Full Text

Williams I, Kemp S, Coello J, et al.: A beginner's guide to carbon footprinting. Carbon Manage. 2012; 3(1): 55–67. Publisher Full Text

Ezzati M, Lopez A, Rodgers A, et al.: Comparative quantification of health risks. Global and regional burden of disease attributable to selected major risk factors. Geneva, Switzerland, World Health Organization, 2004. Reference Source

Holmes MV, Dale CE, Zuccolo L, et al.: Association between alcohol and cardiovascular disease: Mendelian randomisation analysis based on individual participant data. BMJ. 2014; 349: g4164. PubMed Abstract | Publisher Full Text | Free Full Text

Room R, Sankaran S, Schmidt LA, et al.: Addictive substances and socioeconomic development. In: Anderson P, Rehm J & Room R, eds. The Impact of Addictive Substances and Behaviours on Individual and Societal Well-Being. Oxford, Oxford University Press, 2015.
Publisher Full Text\n\nMcNeill A, Brose LS, Calder R, et al.: E-cigarettes: an evidence update. London: Public Health England, 2015; accessed 1 October 2016. Reference Source\n\nBrose LS, Brown J, Hitchman SC, et al.: Perceived relative harm of electronic cigarettes over time and impact on subsequent use. A survey with 1-year and 2-year follow-ups. Drug Alcohol Depend. 2015; 157: 106–11. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTobacco Advisory Group of the Royal College of Physicians: Nicotine without smoke—tobacco harm reduction. Royal College of Physicians, 2016. Reference Source\n\nSchmidt LA, Mäkelä P, Rehm J, et al.: Alcohol: equity and social determinants. In: Blas E & Sivasankara Kurup A, eds., Equity, Social Determinants and Public Health Programmes. Geneva: World Health Organization, 2010; 11–29. Reference Source\n\nMoskalewicz J, Klingemann JI: Addictive substances and behaviours and social justice. In Anderson P, Rehm J, Room R, Eds. The impact of addictive substances and behaviours on individual and societal well-being. Oxford: Oxford University Press, 2015. Publisher Full Text\n\nBettcher D, da Costa e Silva VL: Tobacco or Health. In Leppo K, et al. eds. Health in All Policies. Helsinki, Ministry of Social Affairs and Health, 2013. Reference Source\n\nAnderson P, Casswell S, Parry C, et al.: Alcohol. In Leppo K, et al. eds. Health in All Policies. Helsinki, Ministry of Social Affairs and Health, 2013. Reference Source\n\nKleiman MAR, Caulkins JP, Jacobson T, et al.: Violence and drug control policy. In: Donnelly PD & Ward CL, eds. Oxford Textbook of Violence Prevention. Oxford: Oxford University Press, 2014. Publisher Full Text\n\nAnderson P, Harrison O, Cooper C, et al.: Incentives for health. J Health Commun. 2011; 16(Suppl 2): 107–133. PubMed Abstract | Publisher Full Text\n\nKupferschmidt K: The dangerous professor. Science. 2014; 343(6170): 478–481. 
PubMed Abstract | Publisher Full Text\n\nConrod P, Brotherhood A, Sumnall H, et al.: Drug and Alcohol Policy for European Youth: Current evidence and recommendations for integrated policies and research strategies. In: Anderson P, Rehm J, Room R, (Eds.). Impact of addictive substances and behaviours on individual and societal well-being. Oxford: Oxford University Press, 2015. Publisher Full Text\n\nGrant BF, Goldstein RB, Saha TD, et al.: Epidemiology of DSM-5 Alcohol Use Disorder: Results From the National Epidemiologic Survey on Alcohol and Related Conditions III. JAMA Psychiatry. 2015; 72(8): 757–66. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGrant BF, Saha TD, Ruan WJ, et al.: Epidemiology of DSM-5 Drug Use Disorder: Results From the National Epidemiologic Survey on Alcohol and Related Conditions-III. JAMA Psychiatry. 2016; 73(1): 39–47. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRehm J, Allamani A, Elekes Z, et al.: Alcohol dependence and treatment utilization in Europe - a representative cross-sectional study in primary care. BMC Fam Pract. 2015; 16: 90. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRehm J, Shield K, Rehm M, et al.: Alcohol consumption, alcohol dependence, and attributable burden of disease in Europe: Potential gains from effective interventions for alcohol dependence. Toronto, ON: Centre for Addiction and Mental Health, 2012. Publisher Full Text\n\nIAEG-SDGs: Report of the Inter-Agency and Expert Group on the Sustainable Development Goal Indicators. 791 UHC Economic and Social Council, 2016. Reference Source\n\nOECD: Tackling Harmful Alcohol Use. Paris, OECD Publishing, 2015. Reference Source\n\nFaggiano F, Allara E, Giannotta F, et al.: Europe needs a central, transparent, and evidence-based approval process for behavioural prevention interventions. PLoS Med. 2014; 11(10): e1001740. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nRehm J, Gnam W, Popova S, et al.: The costs of alcohol, illegal drugs, and tobacco in Canada, 2002. J Stud Alcohol Drugs. 2007; 68(6): 886–895. PubMed Abstract | Publisher Full Text\n\nLingford-Hughes AR, Welch S, Peters L, et al.: BAP updated guidelines: evidence-based guidelines for the pharmacological management of substance abuse, harmful use, addiction and comorbidity: recommendations from BAP. J Psychopharmacol. 2012; 26(7): 899–952. PubMed Abstract | Publisher Full Text\n\nNational Cancer Institute (NCI): The Role of the Media in Promoting and Reducing Tobacco Use. Davis RM, Gilpin EA, Loken B, Viswanath K & Wakefield MA (Eds.) NCI Tobacco Control Monograph Series No. 19. Bethesda, MD: U.S. Department of Health and Human Services, National Institutes of Health, National Cancer Institute. NIH Pub. No. 07-6242, 2008. Reference Source\n\nGranovetter M: Threshold models of collective behaviour. Am J Sociol. 1978; 83(6): 1420–43. Reference Source\n\nCochet Y: Green eschatology. In: Hamilton C, Bonneuil C & Gemenne F, eds. The Anthropocene and the Global Environmental Crisis. London: Routledge, 2015. Reference Source\n\nMcGirr L: The war on alcohol. New York: WW Norton & Company, 2016. Reference Source\n\nAnderson P, Berridge V, Conrod P, et al.: Dataset 1 in: Reframing the science and policy of nicotine, illegal drugs and alcohol – conclusions of the ALICE RAP Project. F1000Research. 2017. Data Source"
}
|
[
{
"id": "21185",
"date": "03 Apr 2017",
"name": "Richard L. Bell",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe present manuscript by Anderson et al. \"Reframing the science and policy of nicotine, illegal drugs and alcohol....\" is a well-written, succinct compilation of the findings and suggestions obtained from the Addictions and Lifestyle in Contemporary Europe--Reframing Addictions Project (ALICE-RAP). The authors, and colleagues of associated publications, astutely highlight the need for a systematic lexicon for addiction science and policy. This lexicon is needed not only internationally but also intranationally at all levels of the private and public sector. Hence the need to destigmatize addiction and recognize that it is a natural phenomenon requiring treatment, and not criminalization, in some individuals. This will require a \"reframing of addiction\" in order to facilitate the treatment of addiction. While progress has been made in recognizing that addiction is a medical condition, that progress has not been matched by efforts to destigmatize addiction. As long as addiction is not recognized as a natural phenomenon that isn't isolated to one substance, but includes multiple licit and illicit substances (and possibly behaviors/process addictions), the public and policy makers will continue to have a mindset that addiction represents a \"wicked problem\". Thus, addiction will continue to be criminalized with most funds targeting addiction policy being slated towards the criminal justice system rather than social and clinical medicine to treat the phenomenon. 
The authors put forth the hypothesis that the cause of the lack of consistent addiction policy, not only internationally but also intranationally, is the absence of a standardized measure of social, medical, economic, and civic damage resulting from addiction to different licit and illicit substances. Disability Adjusted Life Years (DALYs) is a recommended way to address the health impact disparity observed across different classes of substance with abuse potential. DALYs can be used to determine Margin of Exposure (MOE) as a Quantitative Risk Assessment (QRA), which can be standardized across \"substances of abuse\". When this is done, as seen in Figure 1, it is clear that the licit substances ethanol and nicotine have a significantly greater deleterious \"health footprint\" compared to most illicit drugs. Yet, global addiction policy is not consistent in recognizing, nor addressing, this disparity. Finally, the authors recognize and remind the reader that addiction policy is influenced by social, political and market place suppositions that are not evidence-based. Moreover, the nontransparent lobbying of policy makers by multiple, and diverse, stakeholders will impede the \"reframing of addiction\" needed for there to be consistent, equitable and humane policy both internationally and intranationally.\n\nAs far as particular manuscript content goes, the authors discuss the relatively novel concept that plant neurotoxins, which many drugs of abuse mimic or contain, are evolutionarily conserved in the plant kingdom. Thus, these toxins dissuade animals from ingesting the plant. Contrarily, plant chemicals that promote ingestion, in and of themselves, through reinforcing and/or rewarding effects are evolutionarily \"weeded out\". This point is receiving greater support through the recognition that immune signaling, centrally and peripherally, plays an important role in the neurobiology of addiction.",
"responses": []
},
{
"id": "21515",
"date": "03 Apr 2017",
"name": "Freya Vander Laenen",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThe opinion article on reframing and rethinking the science and policy of nicotine, illegal drugs and alcohol is an interesting article that challenges some of the current ‘frames’ used to analyze addictive substances and the governance of these substances. The surplus value of the article lies in the insights provided from a truly interdisciplinary approach, including disciplines beyond the ‘usual suspects’ of health economics, criminology and psychology. It will stimulate discussion of governance approaches that promote public health.\nOverall, this is a well-written article addressing the important topic of reframing the science and policy of nicotine, illegal drugs and alcohol. The main messages of the paper are clearly described and sufficiently accentuated.\n\nThere is one important opinion that is included in the article that should be substantiated more extensively though. On p. 7 (final paragraph) and again in the overall conclusion, the authors plead to ‘legalize’ (p. 7) illegal drugs and they plead for approaches ‘that avoid criminalization’; in the conclusion, again they plead for a ‘regulated legalization’ (p. 9), and for a ‘legalization’ (p. 10). First, the concepts used are not synonymous, as they are different legal concepts with a different meaning/ with different implications. 
Avoiding criminalization is not the same thing as legalization (avoiding criminalization does not require legalization, but can be reached through depenalization or by making use of the expediency principle on the prosecution level to settle drug offences); neither are ‘regulated legalization’ and ‘legalization’ synonymous (neither alcohol nor tobacco is legalized, for that matter). I would advise the authors to reconsider the terms used and to expand on the implications of the option they suggest. Second, and linked to this, the transition from the paragraph on the consequences of criminalization (in Mexico) to the plea for legalization on p. 7 is too abrupt.\n\nNext, there are some minor questions that arise at some paragraphs in the article we would like the authors to elaborate on.\nOn page 3, section ‘Harm done by drugs’, the authors refer to the use of DALYs as a standard way to quantify the harm caused by drugs. The authors propose to use DALYs as a measure for the health footprint. I agree with this since the use of DALYs makes comparisons of the burden of drugs across substances and/or countries possible. It is well known that the (mis)use of drugs results in an increased risk of a number of conditions (somatic diseases, mental disorders, injuries). Relative risks (together with prevalence data) can serve as input to estimate substance-attributable fractions (SAFs) which can be used to quantify the economic burden of drug (mis)use. So, please elaborate a bit on this in the section ‘Harm done by drugs’.\nOn p. 5, the authors state that the heavy use over time of sugar can be placed in the same category as alcohol and other drugs. Do the authors imply that heavy use over time of sugar should be governed to the same extent as, e.g., the heavy use of heroin (and that heavy use should be the main element in policy decisions)? Or do the authors plead for a differentiated substance policy between different types of substances? 
Does this mean that the focus should be on rewarding healthy lifestyle behavior? In addition, policy initiatives to reduce the use of sugar should be integrated with other lifestyle-related interventions such as the promotion of more physical activity and healthy eating (not restricted to only reducing the use of sugar). Please, add some comments on this.\nOn p. 6, the authors state that suitable dose-response data have to be available and they continue to state ‘which is not the case for most drugs and many well-being indicators’. Could the authors add what the main reasons are why these data are missing and how this lack of data could be overcome?\nOn p. 6, the authors only briefly touch upon the intoxicating power of substances in the moment and upon the behavioral consequences of taking them. One might argue that taking one of these elements is suitable for prioritizing risk management as well? Could the authors thus more clearly argue why they suggest using MOE and not, e.g., intoxicating power?\nOn page 9, section ‘Policies and measures’, ‘Banning commercial communications’, ‘increasing price’, and ‘reducing availability’ are incentives that impact individual behavior. Generally, these incentives could be considered as more or less ‘restrictive incentives’ that impact individual behavior. On the other hand, incentives can also be considered as ‘rewards’. What do the authors think about e.g. the use of financial incentives to reward ‘healthy behavior’? How could/should these be incorporated into an integrated governance approach?\nSome of the concepts in figure 3 are not explicitly discussed in the article, or at least not in a logical/sequential order (e.g. regulating the private sector and research and development are; resource allocation and incentivizing individual behavior are not). Could the authors briefly discuss each of the elements in this interesting figure? 
Could the authors please elaborate a bit more on the advantages (added value) and limitations of the conceptual model?",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-289
|
https://f1000research.com/articles/6-288/v1
|
17 Mar 17
|
{
"type": "Opinion Article",
"title": "Resistance mechanisms to drug therapy in breast cancer and other solid tumors: An opinion",
"authors": [
"Fedor V. Moiseenko",
"Nikita Volkov",
"Alexey Bogdanov",
"Michael Dubina",
"Vladimir Moiseyenko"
],
"abstract": "Cancer is an important contributor to mortality worldwide. Breast cancer is the most common solid tumor in women. Despite numerous drug combinations and regimens, all patients with advanced breast cancer, similarly to other solid tumors, inevitably develop resistance to treatment. Identified mechanisms of resistance could be classified into intra- and extracellular mechanisms. Intracellular mechanisms include drug metabolism and efflux, target modulations and damage restoration. Extracellular mechanisms might be attributed to the crosstalk between tumor cells and environmental factors. However, current knowledge concerning resistance mechanisms cannot completely explain the phenomenon of multi-drug resistance, which occurs in the vast majority of patients treated with chemotherapy. In this opinion article, we investigate the role of these factors in the development of drug-resistance.",
"keywords": [
"cancer",
"breast cancer",
"chemotherapy",
"resistance"
],
"content": "\n\nBreast cancer is one of the most frequent cancers among solid tumors in women. Drug therapy is an important part of primary treatment for loco-regional breast cancer, and is a cornerstone of treatment in advanced disease1. In contrast to the significant efficacy of first line chemotherapy, in subsequent lines a vast majority of patients inevitably develop drug resistance2. Currently, knowledge concerning resistance to cytotoxic antineoplastic agents is based primarily on separate mechanisms that underlie tolerance to single agents3–10. This approach, though experimentally verified, is not able to explain resistance to multiple agents, which is not dependent on the mechanism of the drug's anticancer action, and is either present at the beginning of treatment or formed during subsequent lines of therapy in all patients. Consequently, other universal complex mechanisms that allow tumor cells to escape inhibition by antineoplastic agents should exist but have yet to be revealed.\n\nFor localized stages, and particularly for breast cancer, elimination of tumor cells can be achieved by surgical excision or radical radiotherapy. Efficacy of these approaches does not depend on the heterogeneity of the tumor. Theoretically, administration of antineoplastic agents that interact with particular, sometimes not identified, mechanisms of tumor pathogenesis should also cause the death of all tumor cells, which would be equal to a cure. Due to various resistance mechanisms, described in detail below, drug therapy by itself rarely cures cancer, even in the case of such chemosensitive tumors as breast cancer. Malignant cells that survive primary treatment continue to evolve with appearance or overgrowth of a resistant clone population, which leads to progression and, inevitably, death of the patient. 
In these circumstances, identification of transforming mechanisms of resistance obtained by tumor cells might help to define the optimal character, intensity and/or longevity of primary and consecutive treatment, which might achieve maximal eradication of tumor cells. This eradication by itself should decrease the clonal variability and influence the evolutionary potential of the tumor11.\n\nThis paradigm is particularly important for hematologic malignancies. All clones are present in the bloodstream and/or bone marrow. Therefore, monitoring of the residual tumor burden has become possible with the introduction of new, highly sensitive molecular diagnostics, including direct sequencing, allele specific RT-PCR and digital PCR. For hematologic malignancies, it is essential to achieve a complete molecular response, which has been correlated with the longest time to disease progression. For example, in chronic myeloid leukemia complete cytogenetic and molecular response during the first three months of treatment is correlated with maximal survival and the longest disease free interval12.\n\nUnfortunately, in contrast to hematologic malignancies, in solid tumors such as breast cancer markers fitting the so-called “liquid biopsy” paradigm (i.e. circulating tumor cells and DNA) cannot always be found in biofluids, even in progressing advanced conditions. This peculiarity necessitates the acquisition of histological, or at least cytological, samples from the primary tumor or metastatic site. As a good example, we can mention monitoring activating mutations in the biofluids of patients with non-small cell lung cancer. The identification of driver molecular alterations is currently possible, with a very high sensitivity13,14. 
However, even the most advanced technologies allow the correct identification of mutations in bioliquids in 6–7 out of 10 patients (for example, direct sequencing: 16.7–77.8%; PCR with enrichment: 4.7–49.3%; cobas ROCHE: 12.1%)15,16.\n\nFurthermore, even achieving a complete clinical and radiologic response in tumors of a solid nature does not mean elimination of all tumor cells, as has been shown for preoperative treatment of rectal cancer or colorectal cancer metastases17,18. The same situation was also shown for perioperative treatment of breast cancer, where even one cell with epithelial markers found in bone marrow determines a significantly worse long-term outcome and risk of disease recurrence19.\n\nDespite a large number of identified mechanisms that may underlie resistance to conventional cytotoxic and targeted drugs, none of them can fully explain multidrug resistance, which is inevitably acquired by all patients with advanced breast cancer and other tumors. Among other examples of acquired resistance, one can mention the decrease in the longevity of second line treatment in comparison with first line8. This decrease might be caused by the genetic heterogeneity that is a characteristic feature of all malignancies. This conception is illuminated in the GERCOR trial, where patients with inoperable colorectal cancer were randomized to two treatment groups. In group one, patients received FOLFOX as a first line and FOLFIRI as second and vice versa in group two. As a result no difference in overall survival was observed (21.5 versus 20.6 months; p = 0.99), but what is important is that no difference was observed in the progression free survival of first line (8.5 versus 8.0 months; p = 0.26) or second line (14.2 versus 10.9 months; p = 0.64) chemotherapy20. Small differences were noticed in progression free survival of second line (4.2 versus 2.5 months, p = 0.003). 
Still, the duration of the effect derived from first line was much longer than that of second line. This observation might be interpreted as indicating that, irrespective of the initial regimen, the tumor mass at progression is composed of a clone with multidrug resistant features. This clone might have appeared during therapy or might have existed in a small proportion prior to the start of treatment. The latter can be illustrated with an example from NSCLC, where the T790M mutation that defines resistance to first generation TKI can be found in primary samples or may appear during TKI therapy14,21.\n\nFurthermore, we can speculate that the appearance of a resistant clone or its presence at the initial tumor development might be probabilistic. To illustrate this idea, we can mention the N9741 trial, in which out of 1508 patients with inoperable colorectal cancer, complete radiologic response was seen in 62 patients. During consecutive follow up, 10/62 patients did not have disease progression and might be considered cured of metastatic disease22. Thus, in combination with several circumstances, primary clones can be eradicated by primary chemotherapy and thus are not involved in the development of new resistant subclones.\n\nResearch aimed at defining mechanisms of resistance is usually based on the determination of genotypic and/or phenotypic features that drive resistant clones, and a myriad of methods have been used, among them molecular, chemical and physical analysis. However, the most important avenue for resistance research might be the model by which resistance is created. There are two main directions to model the resistance to therapeutic agents: first, in vitro modeling of the interaction between tumor cells and an active antineoplastic agent; and second, in vivo experimental systems, such as laboratory animals.\n\nIn vitro methods are historically the first type used. 
Interestingly, these methods were already significant for antibiotic therapies before it became necessary to use them for oncological purposes. Isolation and cultivation of a pathogenic microorganism beyond the host organism is used to define its sensitivity to antimicrobial agents, and to describe phenotypes and molecular profiles, which are of essential importance for clinical decisions on treatment and for the development of newer agents.\n\nThis approach has been used for research in all malignant tumors. Immortalized cell lines and primary cell cultures have been successfully used to screen hundreds of compounds for antineoplastic activity and the definition of the mechanism of action of several therapeutic agents23,24. Unfortunately, despite numerous programs of investigation into resistance mechanisms in cell lines exposed to various doses and schedules of chemotherapeutic agents, a significant change in the understanding of these mechanisms has not occurred.\n\nFirstly, unlike bacteria and other microorganisms, whose population in one host organism rarely comprises more than one strain and whose evolution of resistance to antibiotics takes place across several host organisms, the evolution of malignant tumors is limited to the life of one host organism and is driven by the diversity of clones and genome instability. For this reason, isolation of a cell line or primary cell culture can hardly model the representative heterogeneous tumor cell population, as it is inevitably accompanied by tumor cell dedifferentiation and loss of phenotypical heterogeneity. This observation might not limit in vitro drug testing programs, but significantly restricts resistance research potential.\n\nSecondly, tumor cell cultures in vitro are usually deprived of microenvironment communication, which in some situations might be an essential mechanism for resistance generation and maintenance. 
Thirdly, tumor cell cultures are characterized by homogenous habitat conditions, for example there are no differences in the distance to supplying blood vessels, which does not allow modeling exposure to different drug concentrations at one time25.\n\nNevertheless, programs conducted on cell cultures allow the determination of several mechanisms that might underlie resistance, or at least compromise the efficacy of various agents. Amidst them, one can mention various mechanisms, including mediating drug efflux (increased expression of ATP-binding cassette transporters, including P-glycoprotein, multidrug-resistance-associated protein 1 and breast cancer resistance protein3,26,27), increasing the expression of metabolic enzymes deactivating cytotoxic drugs (CYP2C9*2), and modulating targets for cytotoxic drugs (increased expression of the beta-III-isoform of tubulin4, increased expression of Tau6, decreased expression of Top-II-alpha28,29). Unfortunately, patterns revealed once are rarely verified in consecutive series with the same conditions but different cell lines. Also, mechanisms identified as primary in one series appear to be secondary or even nonsignificant in others27. As an example, we can mention an experiment where the efficacy of paclitaxel was compromised by different resistance mechanisms on one cell line exposed to different schedules of the drug29,30. Interestingly, this appeared to be true also for targeted agents: NSCLC cells with EGFR activating mutations, depending on the exposure dose of gefitinib, developed either T790M- or MET-mediated resistance.\n\nIn conclusion, we suggest that the mechanism of multidrug resistance that inevitably develops during drug therapy of breast cancer, and other tumors of solid origin, has not yet been revealed. In our opinion, the mechanism of resistance is most likely not directly related to drug metabolism or its target in the tumor cell.",
"appendix": "Author contributions\n\n\n\nAll authors conceptualized the study, collected data and performed analysis. All authors were involved in the writing and revision of the draft manuscript and have agreed to the final content.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis work was supported by The Ministry of Education and Science of Russian Federation [RFMEFI60414X0070].\n\nI confirm that the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nReferences\n\nChabner BA, Roberts TG Jr: Timeline: Chemotherapy and the war on cancer. Nat Rev Cancer. 2005; 5(1): 65–72. PubMed Abstract | Publisher Full Text\n\nGonzalez-Angulo AM, Morales-Vasquez F, Hortobagyi GN: Overview of resistance to systemic therapy in patients with breast cancer. Adv Exp Med Biol. 2007; 608: 1–22. PubMed Abstract | Publisher Full Text\n\nChen YN, Mickley LA, Schwartz AM, et al.: Characterization of adriamycin-resistant human breast cancer cells which display overexpression of a novel resistance-related membrane protein. J Biol Chem. 1990; 265(17): 10073–10080. PubMed Abstract\n\nMurray S, Briasoulis E, Linardou H, et al.: Taxane resistance in breast cancer: mechanisms, predictive biomarkers and circumvention strategies. Cancer Treat Rev. 2012; 38(7): 890–903. PubMed Abstract | Publisher Full Text\n\nTommasi S, Mangia A, Lacalamita R, et al.: Cytoskeleton and paclitaxel sensitivity in breast cancer: the role of beta-tubulins. Int J Cancer. 2007; 120(10): 2078–2085. PubMed Abstract | Publisher Full Text\n\nWagner P, Wang B, Clark E, et al.: Microtubule Associated Protein (MAP)-Tau: a novel mediator of paclitaxel sensitivity in vitro and in vivo. Cell Cycle. 2005; 4(9): 1149–1152. 
Li WJ, Zhong SL, Wu YJ, et al.: Systematic expression analysis of genes related to multidrug-resistance in isogenic docetaxel- and adriamycin-resistant breast cancer cell lines. Mol Biol Rep. 2013; 40(11): 6143–6150.\n\nSharifi S, Barar J, Hejazi MS, et al.: Roles of the Bcl-2/Bax ratio, caspase-8 and 9 in resistance of breast cancer cells to paclitaxel. Asian Pac J Cancer Prev. 2014; 15(20): 8617–8622.\n\nBaselga J, Zambetti M, Llombart-Cussac A, et al.: Phase II genomics study of ixabepilone as neoadjuvant treatment for breast cancer. J Clin Oncol. 2009; 27(4): 526–534.\n\nKutuk O, Letai A: Alteration of the mitochondrial apoptotic pathway is key to acquired paclitaxel resistance and can be reversed by ABT-737. Cancer Res. 2008; 68(19): 7985–7994.\n\nGillies RJ, Verduzco D, Gatenby RA: Evolutionary dynamics of carcinogenesis and why targeted therapy does not work. Nat Rev Cancer. 2012; 12(7): 487–493.\n\nDruker BJ, Guilhot F, O’Brien SG, et al.: Five-year follow-up of patients receiving imatinib for chronic myeloid leukemia. N Engl J Med. 2006; 355(23): 2408–2417.\n\nChen K, Zhou F, Shen W, et al.: Novel Mutations on EGFR Leu792 Potentially Correlate to Acquired Resistance to Osimertinib in Advanced NSCLC. J Thorac Oncol. 2017; pii: S1556-0864(17)30010-2.\n\nThress KS, Paweletz CP, Felip E, et al.: Acquired EGFR C797S mutation mediates resistance to AZD9291 in non-small cell lung cancer harboring EGFR T790M. Nat Med. 2015; 21(6): 560–562.\n\nLuo J, Shen L, Zheng D: Diagnostic value of circulating free DNA for the detection of EGFR mutation status in NSCLC: a systematic review and meta-analysis. Sci Rep. 2014; 4: 6269.\n\nLevy B, Hu ZI, Cordova KN, et al.: Clinical Utility of Liquid Diagnostic Platforms in Non-Small Cell Lung Cancer. Oncologist. 2016; 21(9): 1121–1130.\n\nNair RM, Siegel EM, Chen DT, et al.: Long-term results of transanal excision after neoadjuvant chemoradiation for T2 and T3 adenocarcinomas of the rectum. J Gastrointest Surg. 2008; 12(10): 1797–805; discussion 1805–6.\n\nEgger ME, Cannon RM, Metzger TL, et al.: Assessment of chemotherapy response in colorectal liver metastases in patients undergoing hepatic resection and the correlation to pathologic residual viable tumor. J Am Coll Surg. 2013; 216(4): 845–56; discussion 856–7.\n\nBraun S, Kentenich C, Janni W, et al.: Lack of effect of adjuvant chemotherapy on the elimination of single dormant tumor cells in bone marrow of high-risk breast cancer patients. J Clin Oncol. 2000; 18(1): 80–86.\n\nTournigand C, Andre T, Achille E, et al.: FOLFIRI followed by FOLFOX6 or the reverse sequence in advanced colorectal cancer: a randomized GERCOR study. J Clin Oncol. 2004; 22(2): 229–237.\n\nKosaka T, Yatabe Y, Endoh H, et al.: Analysis of epidermal growth factor receptor gene mutation in patients with non-small cell lung cancer and acquired resistance to gefitinib. Clin Cancer Res. 2006; 12(19): 5764–5769.\n\nGoldberg RM: N9741: a phase III study comparing irinotecan to oxaliplatin-containing regimens in advanced colorectal cancer. Clin Colorectal Cancer. 2002; 2(2): 81.\n\nWilding JL, Bodmer WF: Cancer cell lines for drug discovery and development. Cancer Res. 2014; 74(9): 2377–2384.\n\nCree IA, Glaysher S, Harvey AL: Efficacy of anti-cancer agents in cell lines versus human primary tumour tissue. Curr Opin Pharmacol. 2010; 10(4): 375–379.\n\nYeung DT, Parker WT, Branford S: Molecular methods in diagnosis and monitoring of haematological malignancies. Pathology. 2011; 43(6): 566–579.\n\nNoguchi K, Katayama K, Sugimoto Y: Human ABC transporter ABCG2/BCRP expression in chemoresistance: basic and clinical perspectives for molecular cancer therapeutics. Pharmgenomics Pers Med. 2014; 7: 53–64.\n\nRedmond KM, Wilson TR, Johnston PG, et al.: Resistance mechanisms to cancer chemotherapy. Front Biosci. 2008; 13(13): 5138–5154.\n\nRaguz S, Adams C, Masrour N, et al.: Loss of O⁶-methylguanine-DNA methyltransferase confers collateral sensitivity to carmustine in topoisomerase II-mediated doxorubicin resistant triple negative breast cancer cells. Biochem Pharmacol. 2013; 85(2): 186–196.\n\nLi WJ, Zhong SL, Wu YJ, et al.: Systematic expression analysis of genes related to multidrug-resistance in isogenic docetaxel- and adriamycin-resistant breast cancer cell lines. Mol Biol Rep. 2013; 40(11): 6143–6150.\n\nNg CK, Weigelt B, A’Hern R, et al.: Predictive performance of microarray gene signatures: impact of tumor heterogeneity and multiple mechanisms of drug resistance. Cancer Res. 2014; 74(11): 2946–2961."
}
|
[
{
"id": "21105",
"date": "24 Mar 2017",
"name": "Alexey Tryakin",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nIn this well-written mini-review the authors discuss important questions concerning mechanisms of tumor resistance which result to low concordance between in vitro and in vivo models.\nMinor comment: Authors have made a mistake by citing GERGOR trial data (FOLFOX vs. FOLFIRI). They write: \"As a result no difference in overall survival was observed (21.5 versus 20.6 months; p = 0.99), but what is important is that no difference was observed in the progression free survival of first line (8.5 versus 8.0 months; p = 0.26) or second line (14.2 versus 10.9 months; p = 0.64) chemotherapy20. Small differences were noticed in progression free survival of second line (4.2 versus 2.5 months, p = 0.003). \"\nPFS in second line was 4.2 vs. 2.5 months. However 14.2 vs 10.9 months was a second PFS (from the days 1 of 1-st line to the progression on 2-nd line). I suggest to omit data which I underlined.",
"responses": []
},
{
"id": "23679",
"date": "27 Jun 2017",
"name": "Phei Er Saw",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe article is supposed to focus on the possible resistance mechanism to drug therapy in breast cancer. Yet, most literature review and examples cited are based on colorectal cancer, and none of the breast cancer trials are mentioned. The article is not articulated in a way to guide the readers point-to-point and rather scarce in consummating all the points systematically.\n\nThe author also did not clearly point out what are the current problems and resistance mechanism in breast cancer, what are the current approaches to overcome these problems and what is the possible outlook in overcoming drug resistance in breast cancer.\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? Partly\n\nAre all factual statements correct and adequately supported by citations? Yes\n\nAre arguments sufficiently supported by evidence from the published literature? Yes\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Partly",
"responses": []
},
{
"id": "24106",
"date": "10 Jul 2017",
"name": "Anna Herman-Antosiewicz",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nRecurrence and multidrug resistance is a major problem in cancer therapy. The authors discuss this important issue and it becomes clear that the first line treatment should be designed in such a way to eradicate all cancer cells, having in mind that at a time of diagnosis they are heterogenous and some of them are already drug resistant (in an otherwise naïve population).\nThe authors highlight that numerous mechanisms responsible for resistance to therapy have been identified, mainly thanks to in vitro experiments. They also mention disadvantages of such an approach, including lack of original microenvironment which is unstable in its nature. That’s why, probably, none of the known mechanisms fully explain multidrug resistance. However, there is experimental evidence that microenvironment conditions during tumor development (pH or oxygen level changes) might drive genetic and phenotypic changes in cancer cells leading to their more aggressive character and multidrug resistance (for example, Taylor et al. (2015)1 or Verduzco et al. (2015)2. In my opinion, this aspect should be mentioned by the authors, as the tumor microenvironment might be a good target for an adjuvant treatment, also to prevent the recurrence of the more aggressive tumors.\nMinor comments:\nGERCOR trial results are incorrectly presented Misspelling: p1, line 11- should be: action (not cation)\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? 
Yes\n\nAre all factual statements correct and adequately supported by citations? Partly\n\nAre arguments sufficiently supported by evidence from the published literature? Partly\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-288
|
https://f1000research.com/articles/6-284/v1
|
17 Mar 17
|
{
"type": "Opinion Article",
"title": "Prima facie reasons to question enclosed intellectual property regimes and favor open-source regimes for germplasm",
"authors": [
"Madeleine-Thérèse Halpert",
"M. Jahi Chappell",
"Madeleine-Thérèse Halpert"
],
"abstract": "In principle, intellectual property protections (IPPs) promote and protect important but costly investment in research and development. However, the empirical reality of IPPs has often gone without critical evaluation, and the potential of alternative approaches to lend equal or greater support for useful innovation is rarely considered. In this paper, we review the mounting evidence that the global intellectual property regime (IPR) for germplasm has been neither necessary nor sufficient to generate socially beneficial improvements in crop plants and maintain agrobiodiversity. Instead, based on our analysis, the dominant global IPR appears to have contributed to consolidation in the seed industry while failing to genuinely engage with the potential of alternatives to support social goods such as food security, adaptability, and resilience. The dominant IPR also constrains collaborative and cumulative plant breeding processes that are built upon the work of countless farmers past and present. Given the likely limits of current IPR, we propose that social goods in agriculture may be better supported by alternative approaches, warranting a rapid move away from the dominant single-dimensional focus on encouraging innovation through ensuring monopoly profits to IPP holders.",
"keywords": [
"Agroecology",
"agrobiodiversity",
"germplasm",
"innovation systems",
"intellectual property",
"plant breeding",
"seed systems"
],
"content": "Introduction\n\nGiven the challenges of sustainably providing food security for the present and future human population, it is often asserted that large-scale, technology-intensive agricultural innovation is necessary now, more than ever (Beddington, 2010; Monsanto, 2015). Indeed, there seems to be near consensus, from corporations to social movements, that “business as usual is not an option” (IAASTD, 2009; Joubert, 2016; Unilever, 2016). This sense of urgency is embraced by many agrifood corporations, who often put forward their products and services as key contributions to help society innovate into a better future. Leaving aside the flaws in this framing of the challenges facing us (see e.g., Lappé & Collins, 2015), large multi-national agricultural corporations are, in a certain sense, uniquely placed to lead in this innovation process: they increasingly dominate all aspects of the food system, including seeds (Howard, 2016). Further, within the area of agricultural inputs, these corporations have been able to assure their continued prominence through the dominant intellectual property regime (IPR), particularly patents on seeds. Whether the approaches to IPR embraced by such actors is to the (public) good in the face of today’s large-scale problems is the topic of this paper.\n\nThe underlying claim made (especially, but not solely) by large agricultural corporations in support of intellectual property protections (IPP) is that “locking up” innovations behind patents is a necessary mechanism to ensure continued innovation. The ex post (“after-the-fact”) inefficiency that occurs when IPPs prevent other innovators from building on new technologies is widely recognized, but is considered part of a “profitable bargain for society” (Moschini, 2010). 
The argument goes that in an area that involves high research costs, the net social good of innovations spurred by the potential monopoly protection of patents is greater than what is lost due to patents’ restrictions.\n\nThis logic is widespread and broadly accepted, and recent decades have seen an increase in the importance of IPPs in shaping seed systems around the world, particularly in the U.S. (Kloppenburg, 2004; Luby & Goldman, 2016). Given the challenges of providing both food security and environmental sustainability in agriculture, agricultural corporations argue that innovation—and therefore patents—will only become more important.\n\nBut what if patents pose more of an impediment than an aid to addressing current challenges? The actual balance of costs and benefits realized from intellectual property typically goes unquestioned. So it is possible, prima facie, that patents and similar elements of dominant global IP systems are unnecessary, and perhaps even inimical, to the development of socially-beneficial innovations in agriculture. Furthermore, alternative approaches may be equally or better able to support innovation through mechanisms that decrease or eliminate the ex post inefficiency (Cimoli et al., 2014). This paper explores these ideas, specifically with reference to germplasm, based on an analysis of existing and theoretical dynamics in agriculture and innovation.\n\n\nIntellectual property protections for plants in the U.S. and in international agreements\n\nSince the 1960s, a jumble of international governing organizations have attempted to regulate IPPs for plant genetic resources. During this time, significant pressure by individual governments and international institutions has been exerted to adopt what could be called a “global IPR”, which has been “developed principally in the Western legal context… principally a U.S. 
utilitarian approach” (Forsyth, 2016; see also Henry & Stiglitz, 2010).\n\nThe “utilitarian” innovation system of the U.S. is based on an approach whereby ideas and materials are putatively owned by individual entities, excluding all others from using such “intellectual property” without permission. Within this approach, three primary forms of IPP governing plants and plant genetic resources have been developed over the past century: plant patents; Plant Variety Protection certificates; and utility patents.\n\nThe 1930 Plant Patent Act allowed breeders to patent plant varieties that reproduce asexually (i.e., without seeds), protecting putative owners of IP while sidestepping controversies around seed saving practices (Heald & Chapman, 2011). Starting from 1970, sexually reproduced plants were also “protected” through Plant Variety Protection (PVP), providing that the varieties could be determined to be novel, distinct, and uniform. PVP certificates included two important exceptions: a breeders’ exception allowing the use of protected varieties for non-commercial research and the development of varieties not essentially derived from the protected variety; and a farmers’ exception allowing seed saving for personal use (Heald & Chapman, 2011; Pardey et al., 2013). The third form of IPP in the U.S. has its origin in the 1980 U.S. Supreme Court case Diamond vs. Chakrabarty. This case (and subsequent rulings) asserted that utility patents were applicable to plant varieties, and even genetic sequences in certain cases (Van Dooren, 2008). Unlike PVPs, the extension of utility patenting to plant and genetic materials involved no exceptions for seed saving, research, or other breeding activities. Certain forms of “dual protection” are also possible, combining different kinds of patents, or a PVP certificate and a utility patent (Pardey et al., 2013). 
Additionally, large commercial breeders have made use of contracts, laws protecting trade secrets, intra-industry-regulation and enforcement (“private ordering”) to exclude others from accessing their innovations (Butruille et al., 2015), and to reinforce formal modes of IPP. In some cases this has made IPP restrictions significantly more severe (Elkin-Koren, 2005; Kloppenburg, 2014).\n\nMany policies at the international level have paralleled the U.S.’s trajectory. For example, the International Union for the Protection of New Varieties of Plants (UPOV) establishes requirements that varieties to be protected are novel, distinct, uniform, and stable. While it had included farmers’ and breeders’ exceptions similar to that of PVPs, the most recent version of the agreement made these exceptions optional (Salazar et al., 2007; Van Dooren, 2008). Meanwhile, the World Trade Organization requires all member nations to have some form of IPP for plants through the Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS). Under TRIPS, countries technically have the option to develop their own IPP systems, which could potentially support alternatives to the dominant global IPR (Kloppenburg, 2010). However, in practice most countries simply adopt restrictions descending from the U.S./Western approach.\n\nAt times, however, international agreements have included approaches differing from this pattern. Some agreements have attempted to regulate plant genetic resources as common heritage, to address the uneven flow of germplasm from “developing” countries to “developed” ones, and to account for the crucial historical and continuing contributions of countless farmers to plant breeding (Aoki, 2009). While these attempts are useful for suggesting alternatives to protecting and supporting innovation, they have so far had relatively little impact, and have been significantly constrained by the dominant global IPR. 
For example, the International Treaty on Plant Genetic Resources for Food and Agriculture (ITPGR) established a system in which recipients of germplasm from international seed banks cannot patent any of the seeds they receive from these banks (Aoki, 2009). However, patenting germplasm or genetic materials that are subsequently derived from such multilateral system seeds or particular genes or DNA is allowed under the treaty (ibid.). This means that there is still the opportunity for patenting entities to benefit from common heritage while refusing to share subsequent benefits (Vogel et al., 2011).\n\n\nDraining the pool of knowledge: Enclosing more than giving back\n\nWhile the necessity and effectiveness of IPPs, and patents in particular, are often assumed, some researchers have begun to examine the quality of these claims, especially in the context of germplasm. The pertinent question is essentially whether IPPs may limit future innovation more than they contribute to it. And patents, in particular, have been heavily scrutinized from this point of view. By granting exclusive rights of ownership to the patent holder, they are one of the most restrictive approaches to IPP, essentially allowing the owner to set the price for others to use their IP at infinity (Stiglitz, 2014). Thus,\n\n“What seem to be more important [than strong IPRs] are the ‘opportunities,’ the potential for discoveries, related to the pool of knowledge to be exploited…\n\n…Patents inevitably enclose what would otherwise have been in the public domain. 
In doing so, not only do they impede the efficient use of knowledge, but because knowledge itself is the most important input into the production of further knowledge (innovations), they may even impede the flow of innovations,” (Stiglitz, 2014).\n\nUnder Stiglitz’s models using “plausible conditions,” the incentives to innovation provided by patents encourage innovation initially, but ultimately do not sufficiently replace what they remove, stifling further innovation. This effect, combined with a possible underuse of existing innovations resulting from IPPs, is a dynamic that has been identified in biomedical research as the “anti-commons,” by Heller & Eisenberg (1998). In an anti-commons, “more intellectual property rights may lead paradoxically to fewer useful products for improving human health.” While Heller and Eisenberg clarify that they are not speaking about the “routine underuse” of innovations that occurs under patents, their description of an anti-commons appears to fit the results of Stiglitz’s model as well.\n\nGiven that agroecosystems span a wide range of cultural and environmental conditions and affect a wide variety of needs and impacts (from food security to their effects on wild biodiversity to climate change impacts), it would seem appropriate to have an innovation system that encourages greater accessibility to knowledge for a diversity of approaches and actors. Yet the current IPR appears to have contributed to the neglect of certain approaches to agriculture and plant breeding and contributed to a dearth of plant breeding PhDs (Goodman, 2002; Vanloqueren & Baret, 2009). Further, Vanloqueren and Baret point out that practices like agroforestry that provide a number of public goods (e.g. 
sustainable livelihoods, resilience, and environmental quality) are based on system-level practices that are not patentable, and generate benefits over a long time period, two characteristics that strongly limit the pertinence of IPPs for boosting innovation. Such dynamics should be an object of concern; robust innovation systems ought to grant appropriate consideration to the full scope of ideas that could be socially useful (van den Hove et al., 2012).\n\nTo give a brief practical example of the dynamics at hand, one study of genetic diversity in pearl millet cultivars in India demonstrated that farmer social and management practices helped to maintain diversity and variation among local landraces, which “possess superior nutritional quality as well as higher fodder yield under severe conditions,” (vom Brocke et al., 2003). This is consistent with numerous studies showing that maintenance of agrobiodiversity can significantly contribute to sustainable farmer livelihoods, resilience, and adaptability (Chappell et al., 2013). But despite these useful properties, the diversity within landraces means they are generally not distinct, uniform, and stable, as would be required for protection under global IPR (Salazar et al., 2007). Further, the restrictions of typical IPPs mean farmers would not be able to legally treat protected varieties “as raw material for direct use and further improvement [which] is still the norm in many parts of the world” (ibid). Between this and the requirements of uniformity, distinctiveness, and stability, the dominant global IPR often exerts pressure to decrease diversity, and thus limit the usefulness and adaptability of our future seed supply—taking more out of the collective “pool of knowledge” than the IPPs put back.\n\nThese dynamics represent the dangers of enclosing knowledge, but we have not yet covered the evidence as to how much IPPs do incentivize further innovation, that is to say, what IPPs give back to the pool of knowledge. 
On the one hand, drawing from case studies in the Pacific Islands region, Forsyth & Farran (2013) observed that “a Western IP system deflects attention from the need to support the organisations actually generating agricultural innovation in the region” (where breeding funding comes primarily from the public sector and NGOs who are not seeking patent-based returns on investment). Further, the dominant Western IPR may undermine traditions of benefit sharing and “undermine regional initiatives to promote food security through the sharing of plant genetic resources” (Forsyth & Farran, 2013). In the cases they examined (and by analogy, they argue, many other “less developed countries”), prioritization of food security has generally been associated with supporting diversity, autonomy, and protection of farmers’ access to seeds within alternative and traditional networks, while approaches focusing on global IPR have often accompanied a trade-oriented mentality that does not truly address the needs and particularities of local communities (Forsyth & Farran, 2013; see similar conclusions based on research in other “less developed countries” in Chappell et al., 2013; McKeon, 2015).\n\nHowever, in contrast to areas where the public sector dominates funding for plant breeding, IPPs should theoretically be responsible for significant outputs of research systems where plant breeding research is largely funded by private companies, as it is in the U.S. Heald & Chapman (2011) examined this hypothesis in one of the most extensive empirical analyses of IPPs to date. The authors studied the relationships between diversity, PVPs, patents, and commercially available varieties for apples and 42 vegetables over the period of 1903–2004. 
While a substantial number of new varieties were commercialized during this timeframe (which they took to represent innovation), only 3.8% of varieties commercially available in 2004 (excluding corn) were ever subject to patents, and only 16% of patented varieties were ever commercialized, suggesting a weak connection between IPPs and innovation in this area of breeding. In other words, most of the innovation in these plants was produced independently from IPP incentives. It should be noted that in the case of corn, patenting activity was much more prominent and patented seeds were prevalent in the market. But Heald and Chapman assessed that patents in corn may represent rent-seeking more than protection for innovation. That is, patents in corn may serve to exclude others from accessing “protected” germplasm without supporting any further innovation (Heald & Chapman, 2011).\n\nEven if this is the case, Western-style IPP might still be justified if rent-seeking owners use what they have withdrawn from the pool of knowledge to produce even higher-quality innovations than they would have otherwise produced. That is, the “monopoly rent” they extract from patents on, say, corn varieties may not spur them to innovate by creating more varieties, but it is possible that they use their profits to come up with higher quality varieties. However, in the case of plant breeding, there are reasons to doubt that extremely high-cost patented research is worthwhile in this manner, either. For example, research on seed prices has demonstrated that transgenic traits in commercialized seed are overpriced with regard to the relative research costs and yield gains provided to farmers through conventional breeding (Goodman, 2002; Moss, 2013). In this case, IPPs may simply be enabling companies to charge prices for their IP that are actually greater than the benefits they produce (Moss, 2013). 
A second, if tentative, line of evidence that patents may not be driving innovation of significantly superior varieties comes from a study by Bulte et al. (2014). Their analysis of randomized control and double-blind trials in Tanzania found that differences in yield between modern and traditional cowpea varieties were wholly due to differences in farmers’ management practices based on perceived differences in the varieties (as harvests were the same for farmers who received modern varieties and those who received traditional ones when they did not know which type of seed they got). While this was only one study, and its results cannot be extrapolated to all modern seeds (or specifically those under IPPs), a comprehensive review by Loevinsohn et al. (2013) implies that the literature evaluating agricultural innovation is rife with similar challenges to scientific validity: they screened over 20,000 studies, and came up with only 5 that met reasonable standards of rigor (e.g., that would be capable of eliminating the confounding effects found by Bulte et al.).\n\nIn short, there is a significant lack of rigorous evidence that IPPs have led to the kind of higher-quality innovation that would justify their restrictions, much less sufficient evidence to establish that IPPs are decisively “giving back” more than they are taking from the collective pool of knowledge.\n\n\nFurther considerations challenging contemporary dominant IPR\n\nBeyond the dynamics related to the “pool of knowledge,” other factors may limit the appropriateness of global IPR for plant genetic resources: the connections between agriculture, plant breeding, and a number of other public goods. For example, with regards to “essential facilities”—resources that have no substitute and are fundamentally necessary for further innovation to occur—Henry & Stiglitz (2010) argue that no broad patents, and possibly, no patents at all should be granted. 
They give the specific example of genetically modified foods (e.g., crops), and cite Harhoff et al.’s (2001) conclusion that patents in this area may not only not be necessary to innovation, but may hold back socially useful applications. Even those who strongly defend the use of IPPs for seeds point out at the same time that society at large benefits from broad access to plant genetic resources and the ability to save seeds (Scalise & Nugent, 1995). The problem with broad access, according to Scalise and Nugent, is that society’s benefits come at the cost of inventors, and therefore, society will see fewer important innovations by inventors and miss out on new technologies that will help feed everyone and improve general welfare. Beyond the challenges to this claim that we have already addressed, it is important to re-emphasize the large uncertainties present in evaluating the benefits of supposed innovations in plant breeding. That is: not only is the academic research on benefits of agricultural innovation lacking in rigorous and controlled studies, but as Stone et al. (2014) point out, farmer decisions in many cases may be dominated by social learning (emulation) and lead to a high degree of “faddism”. In their study, farmer adoption of new varieties was dominated by cues taken from what other farmers were doing, and was not meaningfully related to the qualities of each set of new seeds. This was not due to some deficiency on farmers’ part; they note that “yields and profits from any given seed are highly variable”, and “attributing… performance advantages that have not been truly isolated from [their] confounding factors” is ubiquitous and long-standing throughout agriculture, echoing Loevinsohn et al.’s findings. 
The result is that it may, in practice, be impractical and unlikely that farmers will be able to decisively identify the benefits of innovations as quickly as private actors produce them, especially without the legal right to save and select seeds themselves, creating an information asymmetry benefitting IPP holders at the cost of farmers. One might term it a problem of a “market for persimmons,” where farmers cannot quickly or easily distinguish the benefits of one seed versus another. (Various types of persimmon may appear similar, but need to be treated differently; cf. Akerlof (1970)).\n\nPlant genetic resources are also tied to other public goods, such as biodiversity. For instance, agrobiodiversity affects the conservation of wild biodiversity (Chappell et al., 2013), and the loss of the former can negatively affect the latter. Therefore, if IPPs were leading to a loss of agrobiodiversity, that would be another argument against the dominant approach. Unfortunately, the degree of agrobiodiversity loss (or increase) over the past decades is difficult to measure and highly contended (Montenegro de Wit, 2015). However, the loss of small independent seed companies is more straightforward to measure, and would imply some levels of lost diversity given that larger, consolidated seed firms will have higher incentives to breed for a small number of “elite” lines with “‘broad adaptability’ – the capacity of a plant to produce a high average yield over a wide range of growing environments and years… [while] varieties yielding well in one zone but less in another are quickly eliminated” (Desclaux et al., 2012). With regards to seed company consolidation and IPPs, not only have the applications for plant and utility patents and PVPs risen steeply since the 1980s and 90s, but the percentage of patents and PVPs held by the top applicants has also increased (Pardey et al., 2013). 
Farmers have increased their reliance on purchased rather than saved seed over the same time period as consolidation has increased dramatically (Howard, 2016; Marco & Rausser, 2008). In 2007, four companies (Monsanto, DuPont, Syngenta and Groupe Limagrain) controlled more than half of the global proprietary seed market (ETC Group, 2008). And all of them except Limagrain are currently part of prospective mergers or buy-outs that would further increase consolidation throughout the agricultural input chain (Purdy, 2016). Howard (2016) in fact states that “the seed industry is… nearing domination by just two firms”.\n\nMany have expressed concerns that such extreme concentration in this field has drawbacks for both food consumers and producers, depressing innovation and effectively allowing the industry to operate as an oligopolistic trust (Howard, 2016; Moss, 2013; Moss & Taylor, 2014). Such a high level of concentration also creates a paradox of collaboration. That is, cross-licensing traits theoretically makes IPP less exclusive. However, since the small number of large seed companies often only cross license with each other, this form of “collaboration” may threaten potential for innovation by reducing competition within this group and reinforcing the concentration of power in the industry (Howard, 2016; Moss, 2013). Indeed, in the case of germplasm, unlike other public goods, a depletion in current and future availability can be more related to a lack of use than to overuse (Montenegro de Wit, 2015). In this way, the feedback between industry consolidation and the use restrictions represented by IPPs in germplasm should be worrisome.\n\n\nMassively parallel computing? Peasant seed innovations vs. high technology centralization\n\nAs we have presented, contemporary global IPR has been tied to increased consolidation in germplasm research and commercialization, effectively centralizing a huge amount of resources for germplasm “innovation”. 
The centralizing tendencies of global IPR and the various forms of decentralized breeding have different potential strengths. Centralization often offers greater precision in measuring results, but typically does so by eliminating environmental variance from its considerations, encouraging homogenization and limiting the actual applicability of its innovations to the large diversity of agricultural systems (Desclaux et al., 2012; Howard, 2016). In contrast, decentralized selection may enable greater farmer participation and allow breeding efforts to provide closer matches between innovations and diverse local conditions. Several lines of research have suggested that this may be the case (Aistara, 2011; Desclaux et al., 2012). Decentralized in situ breeding further potentially allows crops to be exposed to their wild relatives, and thereby may incorporate diverse genetic material into the breeding pool (Jarvis & Hodgkin, 1999). While global IPR may in theory be compatible with this kind of in situ and decentralized effort, in practice the very basis for these dynamics is restricted under dominant IPPs.\n\nAnother line of reasoning to prefer decentralized approaches reiterates Scott’s (1998) classic observations on how centralizing forces often necessarily seek to reduce or marginalize vital complexities that otherwise allow local communities to function, innovate, or thrive. The idea that improved innovation and problem-solving may come from allowing greater access and freedom to use genetic resources for many decentralized actors is also analogous to the rationales and results of Massively Parallel Computing (e.g., Barney, 2016) and crowdsourcing (e.g., Brabham, 2008). Although these systems are not perfectly analogous, a baseline idea may still apply: that many different centers of experimentation and knowledge may solve problems more quickly than concentrating resources among a smaller number of specialists. 
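The contrast between selecting one line for “broad adaptability” and letting each locality keep whatever performs best there can be sketched in a toy model. All yields below are randomly generated, hypothetical numbers — the point is the structure of the comparison, not the values:

```python
# Toy comparison: centralized selection for "broad adaptability" vs.
# decentralized, per-environment selection. Hypothetical yields only.
import random

random.seed(0)
n_envs, n_varieties = 20, 50
# yields[v][e]: hypothetical yield of variety v in environment e
yields = [[random.uniform(0, 10) for _ in range(n_envs)]
          for _ in range(n_varieties)]

# Centralized: one variety chosen for the best *average* yield everywhere,
# mirroring selection for "broad adaptability".
best_avg = max(yields, key=lambda row: sum(row) / n_envs)
central_total = sum(best_avg)

# Decentralized: each environment keeps whichever variety does best locally.
local_total = sum(max(yields[v][e] for v in range(n_varieties))
                  for e in range(n_envs))

# By construction, local selection can never do worse in this toy model:
# the per-environment maximum is at least the fixed variety's local yield.
assert local_total >= central_total
print(round(central_total, 1), round(local_total, 1))
```

The toy model deliberately omits what makes the real question hard — costs of running many selection programs, correlated environments, and measurement noise — but it makes explicit why a single “broadly adapted” line is a lower bound on what locally matched selection could in principle achieve.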
And alternative approaches, like Participatory Plant Breeding, offer the potential to combine the strengths of decentralization and expert knowledge (Desclaux et al., 2012).\n\nFurther, decentralized approaches may be better equipped to deal with the complex relationship between the diversity of cultural practices, crop biodiversity, and the diversity of localized and alternative IPPs (Forsyth, 2016). The stability and distinctiveness requirements of current IPPs, as well as their emphasis on genetic factors above all else, can pose obstacles to management strategies that produce valuable and adaptable, but “unstable”, cultivars. IPPs may also contribute to systematic failures in acknowledging, recognizing, and respecting the importance of cultural and social practices around seeds, including the meaningful patterns of association between cultural diversity and biodiversity generated by localized factors (Desclaux et al., 2012; Montenegro de Wit, 2015).\n\nPractically speaking, decentralized selection can occur in both formal and informal settings. If done well, formal participatory approaches can balance centralized organization and decentralized selection, and can be more efficient (considering response to selection, adoption by farmers, and cost-benefit ratio) than conventional plant breeding programs (Ceccarelli, 2015). Informal farmer breeding systems take a variety of forms, including highly collaborative community-scale efforts. For example, some informal community breeding projects have involved a small number of farmers interested in performing earlier-stage breeding work, with many farmers later selecting from advanced lines and providing the land necessary to grow them (Salazar et al., 2007).\n\nBodies of theory on networks and innovation also suggest possible disadvantages to centralized systems (and thus, to the current global IPR and concomitant increases in consolidation). 
Insofar as centralized systems represent increased connectivity between potential “nodes” of innovation or adaptation, they may resemble the connected populations and networks studied in economics and ecology. Findings in both of these fields point to increased risks of systemic failure and large-scale collapse at high levels of connectivity (Erola et al., 2012; He & Deem, 2010; Noble et al., 2015; Sensoy et al., 2013). Further, the significant power inequalities represented by centralized, consolidated systems may be directly inimical to innovation, if recent research is to be believed (Farrell & Shalizi, 2015; Page & Vandermeer, 2013).\n\nThe challenges and potential threats posed by the centralizing nature and unequal power relations present in many projects featuring public funding or international collaboration bear continued scrutiny. Minimizing or even eliminating the current global IPR may be necessary to allow for the most productive forms of decentralization, which might be better suited to develop effective localized solutions in this realm. At the same time, even a radical revamping of global IPR may be insufficient to challenge centralizing and anti-egalitarian practices and structures, many of which operate on the assumption that centralization will result in desirable and worthwhile efficiency (Brooks, 2011).\n\n\nAlternative innovation systems\n\nNumerous proposals exist for alternative innovation systems for plant breeding that may respond to some of the drawbacks of contemporary global IPR. Proposals range from those that modestly strengthen the public sector within the current regime to those that involve a more fundamental dismantling of current IPPs for plant genetic resources. A few suggestions are outlined and discussed here.\n\nSome authors have argued that IPPs, even with recommended improvements to their implementation, should be thought of as only one piece of a larger innovation system (Henry & Stiglitz, 2010). 
However, one might step back even further and question, as others have done, the value of emphasizing the idea of “innovation” itself (Russell & Vinsel, 2016; Van den Hove et al., 2012). In considering innovation to be the end, we risk overshadowing the importance of maintenance and building work that is not considered inventive, and we might also lose sight of the actual ends that the innovation process is meant to achieve. Thus, we may even consider innovation to be one part of a larger germplasm management system, rather than the point of it, and increase recognition of the importance of conservation and seed-saving activities that might not be considered “innovative”.\n\nEven within the “innovation” frame, Henry & Stiglitz (2010) advocate strengthening mechanisms to reduce the ex post inefficiency created by the knowledge enclosures of dominant IPPs. For example, a liability approach (where anyone can access a given previous innovation for a fixed cost) and patents that are not “winner-take-all” could enable greater follow-on innovation. Incentivizing research with prizes for achieving specific goals, and increasing the amount of research provided through universities, could similarly help keep the outcomes of high-cost breeding research publicly available (Henry & Stiglitz, 2010). These authors also explore the complex factors that have motivated people to contribute to highly successful open source software projects, suggesting that a similar structure could be effective in other areas, such as plant breeding.\n\nHowever, given the properties of plant germplasm, the possible advantages of a decentralized approach, and the fact that it can be considered a “commons” in many ways (Luby & Goldman, 2016), one place to look for ideas on how to maintain or enhance it and its public use is the voluminous work on common property resource (CPR) management. 
The diversity of existing and traditional approaches to governing breeding and plant genetic resources in fact bears some similarity to the diversity of formal and informal approaches to governing CPRs, which were notably examined by Ostrom (1990). With reference to sustainably managing CPRs, Ostrom noted that “the centralizers and the privatizers” often advocated oversimplified solutions based on idealized notions of their own effectiveness. The models justifying their authority (e.g., the “tragedy of the commons”) often relied on what Ostrom called “extreme assumptions”, which could not be properly applied to smaller-scale CPRs. In contrast, as the official website of the Nobel Prize summarized the work for which she was awarded a Nobel, “in many, but not all, cases, allowing users to develop their own rules to regulate the use of common property results in the most efficient solution for managing those resources” (Nobel Media AB, 2014). Whether this general finding applies specifically to the seed/germplasm commons is an open question. But its possible applicability does, prima facie, further imply a shifting of the burden of proof onto those claiming overall benefits from global IPR.\n\nTo this point, one specific alternative approach to the dominant IPR has been developing in the form of the Open Source Seed Initiative (OSSI). OSSI has attempted to create the foundation for a robust, protected commons for plant genetic resources. As OSSI has pursued an approach focused on personal commitments made by the communities breeding and using OSSI-pledged materials, it has embraced a “moral economy” as opposed to a formal and definitively legally enforceable regime (CSA, 2014). OSSI is not currently attempting to coordinate with governments, and in this way the organization has perhaps had more freedom to directly contest the dominant IPR than those operating within that context. 
And while a “moral economy” approach may seem outdated from the perspective of the nearly-hegemonic Western-style global IPR, the literature on CPRs shows that “informal” (i.e., non-state) institutions can be a powerful tool for managing a commons. Thus OSSI’s approach has come to focus on the “OSSI Pledge”: an agreement that can be printed on seed packets wherein, by opening the packet, the opener agrees not to restrict others’ use of the germplasm or its derivatives, and to reprint the agreement on the seed packets of all future derivatives of the seeds as well (Open Source Seed Initiative, 2016). The protection of derivatives as unpatentable is a critical distinction between this approach and regulations, such as the ITPGR, that do not prevent such privatization and future enclosure of parts of the commons. The open source pledge also decentralizes the very act of participating in the protected commons (or of opting out of intellectual property): agreement and responsibility are relocated to the level of the individuals transferring the seed packet, with enforcement occurring through community norms and the building of relationships of trust.\n\nKloppenburg (2010) and CSA (2014) discuss the potential for open-source approaches, like the OSSI Pledge, to create more inclusive plant breeding communities, and also to democratize the use of the tools of plant breeding, such as genomic and transgenic techniques. 
At the same time, given the concerns about OSSI expressed by some farmers and researchers involved in the food sovereignty and agroecology movements, as well as by Native American communities (Breen, 2014), it is apparent that there is still work to be done to foster networks and relationships that fully serve all those who are managing biodiversity and agroecosystems, and that may truly serve as an alternative to global IPR.\n\nIt also may be possible for an open-source protected commons to coexist with IPP for germplasm, as some plant breeders participating in OSSI have considered releasing certain varieties through the open source system and protecting others under IP (Miller, 2014). However, a thorough exploration of the implications of a dual commons-IP system is necessary to determine whether it would actually be able to provide the potential benefits of both types of systems. At first glance, it seems that doing so would effectively create two separate breeding pools, which may be undesirable. It also seems that it would be difficult to allow the patenting of traits or specific sequences when such material may also exist in protected-commons varieties, and there is the distinct possibility that breeders of certain high-quality materials may choose the dominant IPR over a protected commons. Thus, while it may be theoretically feasible for the two to co-exist, a successful protected commons for plant genetic resources might eventually necessitate the end of the current system of IPPs. A significant possibility, and one for which we have tried to show there is at least a prima facie argument, is that this might not be a bad thing for innovation, or for society in general.\n\n\nConclusions\n\nAt this point, many questions remain around what a more effective germplasm management system might look like. 
Some researchers have identified the need for future research to apply scenario and modeling methods to the study of seed networks (Pautasso et al., 2012). Perhaps these methods could also be used to project some of the outcomes of the potential changes mentioned above, though as we have pointed out, at least some established economic models have already implied that a more open-source/protected-commons IP system may actually help innovation.\n\nThat said, one lingering issue is whether any single IPR can effectively support innovation across all contexts, particularly across so-called “developed” and “developing” countries (Forsyth & Farran, 2013; Stiglitz, 2014). Similarly, there is room to debate how responsibilities for managing plant genetic resources might be organized with regard to various organizations and scales of government. As Merson (2000) has pointed out, tension may arise when efforts to offer protection for genetic resources in developing countries focus on implementing sovereignty over resources at the national level, while management of plant breeding and biodiversity may actually be occurring at smaller community scales.\n\nAlthough the food sovereignty movement is concerned with many different issues within contemporary agrifood systems, certain challenges to seed sovereignty could potentially be made moot if a protected commons approach to plant genetic resources were to become the norm. Yet even then, questions would remain with regard to how entities at various scales might bear and exercise responsibilities to support plant breeding and conservation. Most likely, innovation systems that can effectively support a more decentralized, diverse mix of approaches to developing localized plant varieties will need to be compatible with a wide range of governance structures as well. 
In other words, different geographies, ecosystems, and histories may mean that plant genetic resources are best managed by organizations of different sizes and types from one community to the next, organizations that acknowledge the many different kinds of existing intellectual property relations and (still-evolving) traditions (Forsyth, 2016).\n\nAs global IPR is currently only becoming more entrenched, the most important step may not be to settle on which new approach might be best, but rather to call into question the appropriateness of a global IPR for plant genetic materials. Given the analysis we have presented, the possibility that the dominant system may well be failing to live up to its purpose, and indeed may be militating against its supposedly desired effect, must be seriously considered.\n\nThe failings of this system should not surprise us, although the vast majority of those working within the pressures of this regime are likely doing their best to develop socially useful plant varieties within current norms. However, the logic of global IPR assumes that the viability of plant breeding depends first and foremost on its ability to generate profit; such an attitude may be sadly unsurprising to those familiar with the steady history of subordinating food security to other goals (McKeon, 2015). This observation in fact aligns with the pattern wherein societies that see plant breeding as a predominantly economic and trade asset tend to implement IPPs, while those prioritizing breeding’s value to livelihoods and food security favor more open access systems (Forsyth & Farran, 2013). Or as Henry & Stiglitz (2010) have said, “the presumption that profit-maximizing behavior is socially optimal is not always right” (p. 238). 
However, OSSI Executive Director Claire Luby has asked, “In a globalized system where multinational companies are the major drivers, do societies still get a choice?” If it were agreed that alternatives should be more seriously pursued, it would also be incumbent on those who truly wish to support socially beneficial innovation to agitate within their societies and to their governments to make sure such choices are possible—particularly those of us living in geopolitically powerful countries (like the U.S.) that have strongly influenced global IPR.\n\nIn part, what’s needed now is to reassess what counts as success in a management system for plant breeding. For example, we may instead choose to focus on its ability to support a wide range of actors and activities for the stable management of biodiversity on many scales. We may consider that a working plant breeding system will necessarily facilitate widespread access, exchange, and use of seeds, support decentralized efforts for local adaptation, and justly recognize the work of farmer-breeders today and over millennia. We must therefore begin from these goals, and then ask what regulations or structures might encourage the dedication of the resources necessary to support them, rather than assuming the status quo is the best system. Although any large-scale transition towards alternatives poses serious challenges, this is true of any attempt to fulfil the grand challenges of sustainable, food secure, and resilient agrifood systems. As many in the movements for food sovereignty and agroecology have proposed, working to improve participation, autonomy, and political agency, and especially to redress power imbalances, will be indispensable in any attempt to prioritize these values and foster a more just and sustainable world (Chappell et al., 2013; Farrell & Shalizi, 2015; Perfecto et al., 2009).",
"appendix": "Author contributions\n\n\n\nEach author contributed equally to the final product. MJC conceived the paper. Research, writing, and revision were carried out by MH and MJC.\n\n\nCompeting interests\n\n\n\nMJC sits on the Board of Directors of the Open Source Seed Initiative (OSSI). At the time of the conception and initial drafting of this paper, MJC and MH both worked for the Institute for Agriculture and Trade Policy (IATP). IATP helped found OSSI and is a partner organization. The work here reflects the analysis performed by MH and MJC, and does not necessarily reflect the views of OSSI or IATP.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nAcknowledgments\n\nWe gratefully acknowledge help from a number of sources, including support for MH’s participation from the Institute for Agriculture and Trade Policy and Chatham University; and feedback on earlier drafts from C. Luby, J. Loos, J. Taylor and F. Shaw.\n\n\nReferences\n\nAistara G: Seeds of kin, kin of seeds: The commodification of organic seeds and social relations in Costa Rica and Latvia. Ethnography. 2011; 12(4): 490–517.\n\nAkerlof GA: The market for “lemons”: Quality uncertainty and the market mechanism. Q J Econ. 1970; 84(3): 488–500.\n\nAoki K: “Free seeds, not free beer”: participatory plant breeding, open source seeds, and acknowledging user innovation in agriculture. Fordham Law Rev. 2009; 77(5): 2275.\n\nBarney B: Introduction to Parallel Computing. Livermore, CA: Lawrence Livermore National Laboratory, 2016.\n\nBeddington J: Food security: contributions from science to a new and greener revolution. Philos Trans R Soc Lond B Biol Sci. 2010; 365(1537): 61–71.\n\nBrabham DC: Crowdsourcing as a Model for Problem Solving: An Introduction and Cases. Convergence. 2008; 14(1): 75–90. 
Breen SD: Saving seeds: The Svalbard Global Seed Vault, Native American seed savers, and problems of property. J Agric Food Syst Community Dev. 2014; 5(2): 39.\n\nBrooks S: Is international agricultural research a global public good? The case of rice biofortification. J Peasant Stud. 2011; 38(1): 67–80.\n\nBulte E, Beekman G, Di Falco S, et al.: Behavioral Responses and the Impact of New Agricultural Technologies: Evidence from a Double-blind Field Experiment in Tanzania. Am J Agr Econ. 2014; 96(3): 813–830.\n\nButruille DV, Birru FH, Boerboom ML, et al.: Maize Breeding in the United States: Views from Within Monsanto. In J. Janick (Ed.), Plant Breeding Reviews. John Wiley & Sons, Inc. 2015; 39: 199–282.\n\nCeccarelli S: Efficiency of plant breeding. Crop Sci. 2015; 55(1): 87–97.\n\nCentre for Sustainable Agriculture (CSA): Building open source seed: Agriculture and Biodiversity Community 2014. Secunderabad, India: Centre for Sustainable Agriculture, 2014.\n\nChappell MJ, Wittman H, Bacon CM, et al.: Food sovereignty: an alternative paradigm for poverty reduction and biodiversity conservation in Latin America [version 1; referees: 2 approved]. F1000Res. 2013; 2: 235.\n\nCimoli M, Dosi G, Maskus KE, et al.: The role of intellectual property rights in developing countries: Some conclusions. In M. Cimoli, G. Dosi, K. E. Maskus, R. L. Okediji, and J. H. Reichman (Eds.), Intellectual property rights: Legal and economic challenges for development. Oxford University Press: Oxford. 2014; 503–513.\n\nDesclaux D, Ceccarelli S, Navazio J, et al.: Centralized or decentralized breeding: the potential of participatory approaches for low-input and organic agriculture. Organic Crop Breeding. Wiley-Blackwell: Hoboken. 2012; 99–123. 
Elkin-Koren N: What contracts cannot do: The limits of private ordering in facilitating a creative commons. Fordham Law Review. 2005; 74(2): 375–422.\n\nErola P, Díaz-Guilera A, Gómez S, et al.: Modeling international crisis synchronization in the world trade web. Networks & Heterogeneous Media. 2012; 7(3): 385–397.\n\nETC Group: Who Owns Nature? Corporate Power and the Final Frontier in the Commodification of Life. Ottawa: ETC Group, 2008.\n\nFarrell H, Shalizi CR: Pursuing Cognitive Democracy. In D. Allen & J. Light (Eds.), From Voice to Influence: Understanding citizenship in a digital age. Chicago: The University of Chicago Press. 2015.\n\nForsyth M, Farran S: Intellectual Property and Food Security in Least Developed Countries. Third World Q. 2013; 34(3): 516–533.\n\nForsyth M: Making the case for a pluralistic approach to intellectual property regulation in developing countries. Queen Mary J Intell Proper. 2016; 6(1): 3–26.\n\nGoodman MM: New sources of germplasm: Lines, transgenes, and breeders. In J. M. Martinez R., F. Rincon S., G. Martinez G. (Eds.), Memoria Congresso Nacional do Fitogenetica. Saltillo, COAH, Mexico: Universidad Autónoma Agraria Antonio Narro. 2002; 28–41.\n\nHarhoff D, Régibeau P, Rockett K: Some simple economics of GM food. Econ Policy. 2001; 16(33): 264–299.\n\nHe J, Deem MW: Structure and response in the world trade network. Phys Rev Lett. 2010; 105(19): 198701.\n\nHeald P, Chapman S: Veggie Tales: Pernicious Myths About Patents, Innovation, And Crop Diversity In The Twentieth Century. Illinois Public Law Research Paper No. 11-03. 2011.\n\nHeller MA, Eisenberg RS: Can patents deter innovation? The anticommons in biomedical research. Science. 1998; 280(5364): 698–701. 
Henry C, Stiglitz JE: Intellectual Property, Dissemination of Innovation and Sustainable Development. Glob Policy. 2010; 1(3): 237–251.\n\nHoward P: Concentration and power in the food system: Who controls what we eat? London: Bloomsbury, 2016.\n\nInternational Assessment of Agricultural Knowledge Science and Technology for Development (IAASTD): Agriculture at a crossroads: International assessment of agricultural knowledge, science and technology for development. Washington, D.C.: Island Press, 2009.\n\nJarvis D, Hodgkin T: Wild relatives and crop cultivars: detecting natural introgression and farmer selection of new genetic combinations in agroecosystems. Mol Ecol. 1999; 8(s1): 159–173.\n\nJoubert P: Business as usual is dead: how businesses are transcending boundaries to fight climate change. OlamGroup.com. 2016.\n\nKloppenburg J: First the seed: The political economy of plant biotechnology. (2nd ed.). Madison, WI: University of Wisconsin Press, 2004.\n\nKloppenburg J: Impeding Dispossession, Enabling Repossession: Biological Open Source and the Recovery of Seed Sovereignty. Journal of Agrarian Change. 2010; (3): 367–388.\n\nKloppenburg J: Re-purposing the master's tools: the open source seed initiative and the struggle for seed sovereignty. J Peasant Stud. 2014; 41(6): 1225–1246.\n\nLappé FM, Collins J: World hunger: Ten myths. New York: Grove Press/Food First Books, 2015.\n\nLoevinsohn M, Sumberg J, Diagne A, et al.: Under what circumstances and conditions does adoption of technology result in increased agricultural productivity? A Systematic Review prepared for the Department for International Development. Brighton, UK: IDS, 2013.\n\nLuby CH, Goldman IL: Freeing Crop Genetics through the Open Source Seed Initiative. PLoS Biol. 
2016; 14(4): e1002441.\n\nMarco AC, Rausser GC: The Role of Patent Rights in Mergers: Consolidation in Plant Biotechnology. Am J Agric Econ. 2008; 90(1): 133–151.\n\nMcKeon N: Food Security Governance: Empowering Communities, Regulating Corporations. Routledge, 2015.\n\nMerson J: Bio-prospecting or bio-piracy: intellectual property rights and biodiversity in a colonial and postcolonial context. Osiris. 2000; 15: 282–296.\n\nMiller N: Novel open source seed pledge aims to keep new vegetable and grain varieties free for all. University of Wisconsin News. 2014.\n\nMonsanto: Why does agriculture need to be improved? Monsanto.com. 2002–2015.\n\nMontenegro de Wit M: Are we losing diversity? Navigating ecological, political, and epistemic dimensions of agrobiodiversity conservation. Agric Human Values. 2015; 33(3): 625–640.\n\nMoschini G: Competition issues in the seed industry and the role of intellectual property. Choices: The Magazine of Food, Farm, and Resources Issues. 2010; 25(2).\n\nMoss DL: Competition, intellectual property rights, and transgenic seed. S D Law Rev. 2013; 58: 543.\n\nMoss DL, Taylor CR: Short Ends of the Stick: The Plight of Growers and Consumers in Concentrated Agricultural Supply Chains. Wis L Rev. 2014; 2014(2): 337–368.\n\nNobel Media AB: The Prize in Economics 2009 - Speed Read. NobelPrize.org. 2014.\n\nNoble AE, Machta J, Hastings A: Emergent long-range synchronization of oscillating ecological populations without external forcing described by Ising universality. Nat Commun. 2015; 6: 6664.\n\nOpen Source Seed Initiative: About. OSSeeds.org. 2016. 
Ostrom E: Governing the commons: The Evolution of Institutions for Collective Action. New York: Cambridge University Press. 1990.\n\nPage SE, Vandermeer J: Inequality and innovativeness. Econ Bull. 2013; 33(1): A59.\n\nPardey P, Koo B, Drew J, et al.: The evolving landscape of plant varietal rights in the United States, 1930–2008. Nat Biotechnol. 2013; 31(1): 25–29.\n\nPautasso M, Aistara G, Barnaud A, et al.: Seed exchange networks for agrobiodiversity conservation. A review. Agron Sustain Dev. 2012; 33(1): 151–175.\n\nPerfecto I, Vandermeer JH, Wright AL: Nature's matrix: Linking agriculture, conservation and food sovereignty. London: Earthscan. 2009.\n\nPurdy C: Six companies are about to merge into the biggest farm-business oligopoly in history. Quartz.com. 2016.\n\nRussell A, Vinsel L: Hail the maintainers. Aeon. 2016.\n\nSalazar R, Louwaars NP, Visser B: Protecting Farmers’ New Varieties: New Approaches to Rights on Collective Innovations in Plant Genetic Resources. World Dev. 2007; 35(9): 1515–1528.\n\nScalise DG, Nugent D: International intellectual property protections for living matter: biotechnology, multinational conventions and the exception for agriculture. Case West Reserve J Int Law. 1995; 27(1): 83.\n\nScott JC: Seeing like a state: How certain schemes to improve the human condition have failed. Yale University Press. 1998.\n\nSensoy A, Yuksel S, Erturk M: Analysis of cross-correlations between financial markets after the 2008 crisis. Physica A: Statistical Mechanics and its Applications. 2013; 392(20): 5027–5045.\n\nStiglitz JE: Intellectual property rights, the pool of knowledge, and innovation. NBER Working Paper No. 20014. 2014. 
Stone GD, Flachs A, Diepenbrock C: Rhythms of the herd: Long term dynamics in seed choice by Indian farmers. Technol Soc. 2014; 36: 26–38.\n\nUnilever: The Unilever Sustainable Living Plan: Our strategy for sustainable business. Unilever.com. 2016.\n\nvan den Hove S, McGlade J, Mottet P, et al.: The Innovation Union: a perfect means to confused ends? Environ Sci Policy. 2012; 16: 73–80.\n\nvan Dooren T: Inventing seed: The nature(s) of intellectual property in plants. Environ Plann D. 2008; 26(4): 676–697.\n\nVanloqueren G, Baret PV: How agricultural research systems shape a technological regime that develops genetic engineering but locks out agroecological innovations. Res Policy. 2009; 38(6): 971–983.\n\nVogel JH, Álvarez-Berríos N, Quiñones-Vilches N, et al.: The economics of information, studiously ignored in the Nagoya Protocol on Access to Genetic Resources and Benefit Sharing. Law, Environment and Development Journal. 2011; 7(1): 52.\n\nvom Brocke K, Christinck A, Weltzien RE, et al.: Farmers' seed systems and management practices determine pearl millet genetic diversity patterns in semiarid regions of India. Crop Sci. 2003; 43(5): 1680–1689."
}
|
[
{
"id": "21078",
"date": "10 Apr 2017",
"name": "Sheryl D. Breen",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nHalpert and Chappell provide a knowledgeable, well-written overview of global intellectual property protections and plant genetic resources and present a set of arguments for moving toward an alternative open-source approach. As they point out in their introduction, the widespread assumption is that monopoly patents on germplasm can pass a cost-benefit analysis test by creating paths for future innovation. The authors question the truth of this assumption, however, and ask whether alternative approaches such as the open-source framework also can support agricultural innovation while decreasing the ex post inefficiency of the intellectual property regime.\n\nIntellectual property protections for plants in the U.S. and in international agreements: This section describes the evolution of a global intellectual property regime, primarily following the utilitarian approach of the United States and its development of plant patents, Plant Variety Protection (PVP) certificates, and utility patents. The authors provide a brief, useful summary of each stage in this development within U.S. law as well as the parallel international moves toward the International Union for the Protection of New Varieties of Plants (UPOV) and the World Trade Organization’s requirements in the Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS). 
While alternatives have been possible, the authors write, they have remained limited within the global intellectual property rights regime.\n\nDraining the pool of knowledge: Enclosing more than giving back: The authors ask whether intellectual property protections in fact erect barriers against future innovation, contrary to the widespread assumption, and thus decrease rather than augment our collective knowledge of agriculture and plant breeding. Using both theoretical arguments and empirical examples, Halpert and Chappell describe the way that plant patents promote the “anti-commons,” suppressing potentially valuable avenues of research. They present evidence that raises significant doubts about the validity of claims that plant patents lead to more or better commercial varieties and conclude that intellectual property protections for germplasm have not demonstrated their ability to promote needed innovation, much less net contributions to collective knowledge.\n\nFurther considerations challenging contemporary dominant IPR: This section provides further evidence that the global intellectual property regime may harm public goods, including biodiversity. The authors point out that research on agricultural innovation so far has been unsatisfying and that farmers’ decision-making on varieties does not provide reliable evidence of seed performance. In contrast, the seed industry has rapidly consolidated and is dominated by a small number of corporations.\n\nMassively parallel computing? Peasant seed innovations vs. high technology centralization: The authors compare the costs and benefits of centralized systems for plant breeding innovation in the global intellectual property regime and decentralized selection as practiced by peasant farmers. Unlike the centralized system, peasant farmers’ in situ practices promote integration with local soil and climate conditions and allow exposure to wild relatives and resulting increases in biodiversity. 
Drawing from theories of Massively Parallel Computing and crowdsourcing, the authors suggest that decentralized problem-solving is more efficient and can take both formal and informal approaches. In contrast, the centralized intellectual property regime is less able to recognize and respond to localized cultural and social practices, is prone to power differences that stifle innovation, and may be at higher risk of large-scale failure.\n\nAlternative innovation systems: So far, the authors have challenged the belief that the global intellectual property regime is necessary and efficient in terms of plant breeding innovation. In this section, they lay out possible alternative systems, including revisions within the plant patent system, the use of common property resource management, a protected commons envisioned by the Open Source Seed Initiative, the establishment of a dual commons-intellectual property system, and – most radically – the elimination of intellectual property protections for plant genetic resources. These alternatives are not explored at length in this section, but the authors present a significant portfolio of possibilities that, in various ways, question the foundational assumptions beneath the intellectual property system and call for innovation in new directions.\n\nConclusions: The final section brings the authors to two concluding questions: 1) Can a single, centralized system of germplasm management encompass the wide range of traditions, practices, and relationships surrounding plant genetic resources? 2) How do we define “success” in a plant breeding management system? In other words, what are our goals? The authors suggest that multiple approaches are more promising than a unified global system and that the goal of profit maximization is limited and, when monopolistic, harmful.\n\nOverall, this article follows well-recognized standards of argumentation and includes credible, authoritative sources in its review of the literature. 
Two minor corrections are necessary, both in the conclusion:\nPage 8, left column, second and third line of final paragraph: either “with” or “across” needs deletion; Page 8, right column, third paragraph: Quotation from Claire Luby is not cited.",
"responses": []
},
{
"id": "21368",
"date": "19 Apr 2017",
"name": "William F. Tracy",
"expertise": [
"Reviewer Expertise Plant breeding"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis is an opinion article on issues surrounding intellectual property regimes. It is well written and easy to read. As an opinion piece it cites previous publications and does not generate new data or knowledge. The main thesis of the article is that intellectual property regimes in cultivated plants are \"neither necessary nor sufficient to generate socially beneficial improvements in crop plants and maintain agrobiodiversity\". The authors go further and dispute the benefits of IP in spurring innovation and greater gains in plant improvement, a conclusion many would dispute. I am quite open to this argument and the authors do a good job citing literature to support their position. The authors also discuss the negative effects of IP on 'collaborative and cumulative plant breeding', a conclusion with which, I think, most observers would agree. They also blame the use of IP for the decline of pubic plant breeding at universities, NGOs, and International Centers. While IP may have a role in this decline, neoliberal economic policies have been the main culprit. In fact we can locate the rise of IPP in plant breeding on those economic policies. 
Despite this quibble over the decline in public plant breeding, I tend to agree with, or at least am open to, many of the authors' arguments.\n\nMy main concern with the article is that while they did an extensive literature review of papers that support their point of view, the review of the extensive body of opposing literature is almost non-existent. The few papers they mentioned are often from companies, which some readers might dismiss out of hand. Many readers, especially those antagonistic to the authors' thesis, will dismiss the current article because it does not address any of the numerous opposing publications and the data they include. I suggest the authors start by reviewing Smith et al., 20161 and the literature contained therein. This is a review that comes to a diametrically opposed conclusion from the current paper. It has an extensive citation list supporting their conclusions. I will state here that I do not agree with many of the conclusions and extrapolations in Smith et al.1 but it and many other articles opposing the conclusions of the current piece are out there and they need to be addressed by the authors for this article to be taken seriously.\n\nFinally, in the authors' promotion of open source proposals, they recognize the concerns of food and seed sovereignty movements, but they do not address the chilling effect of the derivative clause on the widespread use of germplasm developed with public funds. To justify public support of breeding programs the germplasm needs to be made available as widely as possible with freedom to operate. Many public and private breeders will not use open source material because of restrictions on ownership of derivatives.\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? No\n\nAre all factual statements correct and adequately supported by citations? No\n\nAre arguments sufficiently supported by evidence from the published literature? 
Yes\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Yes",
"responses": []
},
{
"id": "22291",
"date": "27 Apr 2017",
"name": "Sue Farran",
"expertise": [
"Reviewer Expertise Legal pluralism",
"Pacific legal studies",
"intellectual property",
"human rights"
],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis is an interesting article which makes a number of robust criticisms of the application of IPR to germplasm and linked to this consequences of food security. It is very wide ranging and treads a sometimes difficult and not always successful path between the global and the local. Indeed while the current global IPR regime is clearly unsatisfactory, and this is a case well made, it is not always clear if the solution also lies at a global or more nuanced local and diffuse level.\n\nFor clarity given the potential international readership it should be made clear what aspects and information are USA located and what not.\n\nMore use might have been made of literature and examples from Africa and India in respect of food security, IPR and seeds/germplasm. Nevertheless, this article makes an important contribution to the debate on the stranglehold of the current global regime and usefully identifies a number of areas needing further research.\n\nA further strength of the article, which is introduced but not yet fully explored, is drawing from other disciplines and theoretical models to challenge the claimed benefits of the current regime.\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Yes\n\nAre arguments sufficiently supported by evidence from the published literature? 
Partly\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Yes",
"responses": []
},
{
"id": "21085",
"date": "12 Jul 2017",
"name": "Guntra A. Aistara",
"expertise": [
"Reviewer Expertise Environmental anthropology",
"organic agriculture",
"agrobiodiversity",
"intellectual property rights on seeds"
],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nIn this opinion essay, the co-authors Halpert and Chappell question whether conventional intellectual property protections (IPPs) for germplasm contribute to or limit socially beneficial innovation in the face of today’s large-scale agricultural challenges of simultaneously achieving food security and environmental sustainability. They present a clear argument, supporting their opinion with relevant literature.\nThe essay is excellent and the conclusions are balanced and justified based on the evidence and arguments presented. Nevertheless, the topics covered raise a few further observations on the history of these debates that continue to create obstacles for alternative IP regimes, the exploration of which would help contextualize the current paper and hoped for solutions. These revolve around the definition of the “common heritage of mankind;” the separation of breeders from farmers; and the breeders’ and farmers’ exceptions present in UPOV-related regimes as opposed to patents. These debates have undergirded discussions about access to and sharing of benefits from agrobiodiversity, and may continue to pose obstacles for the types of alternatives suggested by the authors, necessary for fostering a more “just and sustainable world” (p. 9).\n\nThe authors summarize the US 1930 Plant Patent Act (PPA), the Plant Variety Protection (PVP) certificates in force since the 1970s, and the 1980 Diamond vs. 
Chakrabarty US Supreme Court case, which allowed utility patents to be used for plant varieties. These IPPs may also be combined in “dual protection” systems, or reinforced and spread through international treaties such as the Union for Plant Variety Protection (UPOV) and the World Trade Organization Trade Related Intellectual Property Rights (TRIPS) agreement. One exception to more restrictive IPPs is the FAO International Treaty on Plant Genetic Resources (ITPGR), which limits the opportunity to obtain patents on genetic materials received from international gene banks, though it does not prevent the patenting of genetic materials subsequently derived from those materials.\nHalpert and Chappell review critiques and evidence that IPPs may be taking more from the knowledge pool than they are giving back, in effect creating an “anti-commons” (citing Heller & Eisenberg 1998). They argue that IPPs should encourage further research to create more diversity, and cite studies showing that sometimes the opposite is true, therefore indicating a weak, or even inverse relationship between IPPs and innovation in plant breeding. The authors refer to the inability of farmers to quickly distinguish the benefits of one seed versus another, and emphasize how difficult it is to evaluate the benefits of the supposed innovation in plant breeding, a fact that in the end can benefit IPP holders at the expense of farmers. The authors then explain how the fact that a few companies now control the global seed industry negatively affects genetic diversity and is facilitated by the IPPs.\n\nFinally, Halpert and Chappell analyze the centralizing tendencies of IPPs and compare them to decentralized breeding schemes, showing evidence of the benefits of decentralized in situ management strategies. The authors introduce alternative innovation systems, such as the Open Source Seed Initiative (OSSI). 
OSSI functions through a “pledge” printed on seed packets, whereby users agree not to restrict anyone else’s use of the germplasm or future derivatives of it. While this alternative of creating a “protected commons” offers individuals a way to opt out of dominant intellectual property regimes, the authors submit that an exploration of the implications of the coexistence of open source and dominant IPPs will be necessary in order to evaluate the long-term effects on plant breeding and access to seeds.\nThe authors conclude that even if there are still a lot of questions to be answered about alternative open source/protected commons intellectual property regimes, it is necessary to question the assumption that dominant IP regimes are the only option that encourage innovation, and may in fact deter it, proposing that we may need to rethink what we mean by success in plant breeding.\n\nCommon heritage versus common concern The authors frame their inquiry in terms of “socially beneficial innovations,” but this raises a question about who defines the public good and how? The authors note that “some agreements have attempted to regulate plant genetic resources as common heritage” (p. 4), citing the ITPGR as an example. It is important to note, however, that even the ITPGR is not framed as “common heritage”, and it was exactly debates over whether or not plant genetic resources are a part of the “common heritage” of humankind that began the slow slide from a global commons approach to the dominant IPP approach. The concept of “common heritage” does not have one agreed definition, but was nevertheless diluted to an unspecified “heritage of mankind” in the ITPGR and a “common concern of humankind” in the CBD, because there has been no agreement on this issue since negotiations began on the FAO International Undertaking on Plant Genetic Resources in the 1980s (Brush 20071, Murillo 20082). 
This history of disagreement will also make a ‘return’ to a protected commons approach more difficult.\nThis is also significant because if the heritage of genetic resources is not seen as “common,” it is necessary to separate out groups who are the true “heirs” of these resources, and thus considered more suited to act on behalf of the “common concern of humankind” than others. These groups will thus be granted privileged access to the resources, or rights to further develop them. Although the ITPGR “recognizes the enormous contribution farmers have made to the ongoing development of the world’s wealth of plant genetic resources,” the farmers’ rights protections in the treaty have never had the same legal standing as IPPs, which enforce breeders’ rights to develop and protect varieties. This has created a separation of farmers from breeders, and established breeders in effect as the rightful heirs and developers of genetic resources.\nFarmers versus breeders The authors rightly point out that due to the broad range of cultural and environmental factors that affect plant breeding and growing practices, “it would seem appropriate to have an innovation system that encourages greater accessibility to knowledge for a diversity of approaches and actors” (p. 4). This would presume, however, that all of these different knowledge systems are equally valued, which has not been the case at least since the beginning of the dominant IP regime development, which has consciously enforced a separation between farmers and breeders and their knowledge systems.\nAll new plant varieties protected under IPPs have been created using genetic materials that are a co-product of natural selection and farmer selection. Van Dooren (2008)3 provides an insightful analysis of how foregrounding the “invention” of seeds or varieties by breeders, over the intricate webs of farmer and non-human interactions that have come before them, and on which they depend, results in fetishism of the seed. 
He observes that this creates a division between breeders as true “inventors” who are representative of “culture,” versus farmers who remain trapped as a part of “nature.” This also prioritizes the investments of individuals or institutions in the present and future over historical ones (Aistara 20124).\nThis separation of farmers from breeders forces a separation of scientific knowledge from traditional knowledge and creates scientific experts who are seen as better able to manage important global concerns. The UPOV system was originally differentiated from patents, in that discoveries of natural genetic mutation or cross-pollination, often made by farmers, could also be protected, which was not possible under patent law. Subsequently the UPOV treaty was also changed to require “discovery and development” rather than just discovery (UPOV 20025), narrowing the interpretation of breeding to fit laboratory settings more readily than farmers (Aistara 20124). This separation becomes especially important once different “exceptions” are applied to the different groups.\nExceptions versus exclusions As the authors note, “Unlike PVPs, the extension of utility patenting to plant and genetic materials involved no exceptions for seed saving, research, or other breeding activities” (p. 3). Indeed one of the main differences between patents and the UPOV regime and PVPs is that UPOV includes both a breeders’ exception and a farmers’ exception. There are fundamental inequalities between these exceptions, however. While the breeders’ exception allows breeders public access to protected germplasm for research purposes and the development of new varieties, the farmers’ exception, since the 1991 revision of UPOV, allows farmers to replant varieties for consumption purposes only. With the increasing narrowing of the farmers’ exception, similarities between UPOV and patents increase. 
Even in cases where the farmers’ exception does apply, it is sometimes so narrowly economically defined that it excludes income-generating farmers (Aistara 20124). More importantly, the farmers’ exception does not allow a space for farmer breeding using protected varieties. What is specifically allowed by exception for some (breeders) becomes a criminal act for others (farmers) (ibid).\n\nThe different functions of the exceptions are indicative of how dominant IPPs define innovation. The breeders’ exception is meant to allow for innovation, while the farmers’ exception is only meant to allow for consumption. If farmers are seen as consumers, rather than also potential breeders, they are effectively excluded from the possibility of innovation, despite the fact that it was farmer innovation over generations that created the diversity of varieties upon which breeders now depend. By separating the act of farming from breeding, it also separates the intangible versus tangible aspects of knowledge that surround farmer breeding, and that have been passed down with seeds over generations (Aistara forthcoming6). Losing this knowledge by constricting the space for farmer breeding practices may hinder future innovation.\n\nThe future-oriented idea of innovation also excludes many of the reasons why farmers may have innovated, and thus bred diversity, in the first place. There are benefits and uses from farmer-bred varieties that reach much beyond yield, and these other values and uses of seeds are also excluded from innovation in the future, because farmers do still tend to cross their own varieties with protected varieties, as noted by the authors (citing Salazar 2007). The farmers’ exception thus functions more as exclusion. Furthermore, farmer breeding is tightly interwoven with farmer seed exchange practices and social networks, which may have been what gave the diversity of seeds meaning in the first place, and may thus be the inspiration for innovation (Aistara 20117). 
Cutting social networks of exchange through the imposition of IPPs (Strathern 19968, Aistara forthcoming6) may thus also cut off future innovations.\n\nWhile patents have been mainly used in the US, UPOV has dominated in Europe and is spreading throughout many developing countries as a requirement of free trade agreements. There has recently been a push to move towards plant patents for conventional breeding techniques in Europe as well, which would eliminate the breeders’ exception. Ironically, the European Seed Association, a breeders’ trade association, opposed patents for conventional breeding techniques or based on “purely biological processes” (as opposed to biotechnology) for precisely the same reason that farmer-breeders have opposed their own exclusion from the breeders’ exception:\n\n“The breeders’ exemption is the cornerstone of a system that successfully balances the protection of individual intellectual property with the common interest of society to introduce innovation broadly and quickly by allowing free access for further research and breeding. This decision has the potential to not only restrict this free access to quite a number of products, but also to generally discourage breeding efforts in areas covered by such patents in the future.”\n\nThe ESA 2012 Position on Intellectual Property Protection for Plant-Related Inventions in Europe even goes on to claim that the breeders’ exception “can be regarded as a kind of ‘open source’ system and has always been relied upon by breeders for further improvement on each other’s varieties and boosted innovation in plant breeding.” This reinforces the fact that the question is really about who is included or excluded from the definition of breeder and thus has the exclusive rights to derive benefits from improving plant genetic resources that have already been improved upon for over 10,000 years. 
Dominant IPPs still try to create “privately public access” for some to derive exclusive benefits from such germplasm while excluding others (Aistara 20124), while true open source systems like the OSSI try to both broaden that space of access, and prevent certain groups from deriving unfair benefits at the expense of others from open source germplasm in the future.\n\nOpen Futures? As the authors indicate, many questions remain unresolved both about how open source or other alternative innovation systems might work in practice, and how they will co-exist with dominant IPPs. For example, rather than simply “building relationships of trust” (p. 8), how well would the OSSI pledge, still a hybrid between formal and informal systems, work in systems where farmers may lack skills to read and write or be reluctant to enter even into semi-formal agreements? How would the OSSI pledge work for acknowledging innovation at the community rather than individual level, for example in areas such as the Potato Park in Peru, and how does it intersect with current legal norms (Martinez Rizo, forthcoming9)?\n\nHalpert and Chappell suggest that rather than starting from the existing system, we should start from a set of goals and structure the system to accommodate it. Indeed in the meantime the first step would be to reform dominant IP regimes, including exceptions in both patents and UPOV/PVP systems. But rather than maintaining unequal exceptions for particular types of actors, exceptions should be made available for particular types of activities or purposes, such as for the maintenance of diversity, for research and development, for food security purposes, or even for small-scale business development purposes. This would already be a step towards maintaining a more open playing field and ending the segregation of actors and knowledge systems. 
Each such purpose will require different norms than the current requirements of novelty, distinctness, uniformity and stability, as well as options for recognizing community as well as individual contributions to breeding. OSSI and other open source seed initiatives would then be well placed to further broaden the spectrum of alternative approaches to facilitate both diversity and justice in seed politics.\n\nIs the topic of the opinion article discussed accurately in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Yes\n\nAre arguments sufficiently supported by evidence from the published literature? Yes\n\nAre the conclusions drawn balanced and justified on the basis of the presented arguments? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-284
|
https://f1000research.com/articles/5-2811/v1
|
02 Dec 16
|
{
"type": "Research Note",
"title": "Low-cost, rapidly-developed, 3D printed in vitro corpus callosum model for mucopolysaccharidosis type I",
"authors": [
"Anthony Tabet",
"Matthew Gardner",
"Sebastian Swanson",
"Sydney Crump",
"Austin McMeekin",
"Diana Gong",
"Rebecca Tabet",
"Benjamin Hacker",
"Igor Nestrasil",
"Matthew Gardner",
"Sebastian Swanson",
"Sydney Crump",
"Austin McMeekin",
"Diana Gong",
"Rebecca Tabet",
"Benjamin Hacker"
],
"abstract": "The rising prevalence of high throughput screening and the general inability of (1) two dimensional (2D) cell culture and (2) in vitro release studies to predict in vivo neurobiological and pharmacokinetic responses in humans has led to greater interest in more realistic three dimensional (3D) benchtop platforms. Advantages of 3D human cell culture over its 2D analogue, or even animal models, include taking the effects of microgeometry and long-range topological features into consideration. In the era of personalized medicine, it has become increasingly valuable to screen candidate molecules and synergistic therapeutics at a patient-specific level, in particular for diseases that manifest in highly variable ways. The lack of established standards and the relatively arbitrary choice of probing conditions has limited in vitro drug release to a largely qualitative assessment as opposed to a predictive, quantitative measure of pharmacokinetics and pharmacodynamics in tissue. Here we report the methods used in the rapid, low-cost development of a 3D model of a mucopolysaccharidosis type I patient’s corpus callosum, which may be used for cell culture and drug release. The CAD model is developed from in vivo brain MRI tracing of the corpus callosum using open-source software, printed with poly (lactic-acid) on a Makerbot Replicator 5X, UV-sterilized, and coated with poly (lysine) for cellular adhesion. Adaptations of material and 3D printer for expanded applications are also discussed.",
"keywords": [
"3D printing",
"neurodegenerative disease",
"cell culture",
"in vitro release",
"mucopolysaccharidosis",
"corpus callosum"
],
"content": "Introduction\n\nMucopolysaccharidosis (MPS) is a spectrum of inheritable conditions involving the accumulation of glycosaminoglycans (GAGs) following disruption of key lysosomal enzymes, which in turn leads to complications on a cellular, tissue, and organ level1. In MPS type I (MPS I), which is characterized by a deficiency of the enzyme α-L-iduronidase, brain MRI scans reveal thinning of white matter and lesions within the periventricular area and especially the corpus callosum (CC)2. The CC is the largest white matter structure in the brain, with more than 300 million axonal projections, and it interconnects the left and right hemispheres3. MPS I leads to patient-specific, irregular white matter density and geometry in the CC. Current treatment for MPS include enzyme replacement therapy (ERT) and hematopoietic stem cell transplantation (HSCT). ERT has been shown to ameliorate MPS symptoms, yet does not prevent disease progression4, owing in part to poor bioavailability. HSCT has been shown to improve cognitive development. Donors, however, can be hard to find unless umbilical cord blood is available; the procedure also has significant health risks5,6. As such, more research into potential targets and drug delivery excipients, which can provide tunable release kinetics, is needed to develop a library of promising treatment options. Additionally, owing to the highly patient-specific deterioration of cerebral white matter, patient-specific identification of synergistic drug combinations and optimal drug release kinetics can enable a more personalized medicine approach to treat MPS in the future.\n\nTraditional methods of screening use two-dimensional (2D) cell culture to study biochemical pathways and targets in cells. Yet, 2D designs of traditional cell cultures fail to account for complex cell-cell and/or cell-matrix interactions. 
There has been a growing literature demonstrating the importance of three-dimensional (3D) environments in expressing phenotypes, genes, and proteins at levels found in vivo and not otherwise seen in 2D models7–9. 2D in vitro drug release studies of promising therapeutic targets are generally limited to providing qualitative insight into in vivo release behavior. Seemingly arbitrary choices in probing conditions, such as material volume, material surface area, supernatant volume, and rotator conditions, prevent quantitatively rigorous conclusions about mass transfer, pharmacokinetic, and pharmacodynamic properties from being made from benchtop measurements. These in tandem demonstrate a pressing need for the use of 3D disease models as a more representative in vitro system. Here, we describe an inexpensive and fast method of developing such patient-specific 3D models.\n\n\nMethods\n\nThe 3D brain MRI scans of a 20-year-old male subject with MPS I and an age-matched healthy male control were manually traced to obtain a 3D structure of the corpus callosum (CC). The 3D model was printed on a Makerbot Replicator 5X, sterilized (Figure 1), and could be used for cell culture or in vitro release studies. The de-identified MRI scans were obtained as Digital Imaging and Communications in Medicine (DICOM) files (Dataset 111). The CC was traced on the mid-sagittal slice and five adjacent slices in each hemisphere using open source InVesalius 3 (http://www.cti.gov.br/invesalius/, RRID: SCR_014693). Alternatively, OsiriX 8.0.1 software (http://www.osirix-viewer.com/, RRID: SCR_013618) may also be used. The software was then used to render the scans into a single .STL file (Dataset 212). The 3D model of the CC was loaded into MakerBot Desktop v. 3.6.0.78 (https://www.makerbot.com/download-desktop/) and printed on a MakerBot Replicator 5X with poly(lactic acid) at a resolution of 0.2 mm, maintaining life-size dimensions. 
Stratasys post-processing fluid was optionally used to remove any support material. The 3D printed structures were rinsed with a 70% ethanol/water solution and UV-sterilized overnight. The prints were then coated with polylysine (Sigma) for cellular adhesion, by dipping them upside down in a 0.5 mg/mL poly-L-lysine solution for at least 10 minutes. Only the top surface was dipped (Figure 1d), as this was the area of interest where the drug delivery materials would be loaded, but for other applications discussed in the next section, the entire structure can be dipped into ~50 mL of the poly-L-lysine solution for complete cell adhesion on the top and bottom.\n\n(a–b) T1-weighted brain MRI with resolution of 1×1×1 mm, midsagittal slice, arrow pointing at corpus callosum in (a) healthy control and (b) MPS I subject. (c) CAD image of MPS I corpus callosum taken at five adjacent slices in each hemisphere. (d) 3D printed MPS I corpus callosum with poly(lactic acid) on a Makerbot Replicator 5X. (e) UV sterilizing the print overnight.\n\nFor cell culture, this object would be used in a sterile flask, or alternatively, a larger sterilized container. For use as a drug delivery platform, the object could either be cultured with cells as previously discussed and kept in cell culture media, or used without cells in PBS. A parenteral drug delivery system, such as shear-thinning hydrogels or hydrophobic polymer melts, can be injected with a syringe on top of the 3D model, until the amount of drug loaded is comparable to translational doses, or until the material completely coats the 3D model. When a cell-coated surface is used, the drug will be released into the cell media. Conversely, when a cell-free model is used, the drug release kinetics are monitored in PBS, which is less prone to interference. 
Care must be taken to ensure that the release kinetic probing molecule’s absorbance or emission spectra are not greatly interfered with by cell culture media.\n\n\nDiscussion\n\nThis technique uses, but is not limited to, poly(lactic acid) (PLA), a readily available filament for desktop 3D printers, such as the Makerbot Replicator 5X. PLA has been widely shown to be biocompatible13. Applications of this platform include studying in vitro drug release of injectable drug depots for delivery of therapeutics, e.g. proteins for enzymatic deficiency disorders, such as MPS, and hydrophobic small molecules for brain cancer. In vitro drug release profiles can vary substantially depending on the geometry of the container used. A 15 mL conical tube provides a different area for mass transfer than a 10 cm culture dish or 5 mL glass vial. This 3D modeling platform can potentially offer a more realistic and more standard geometry for monitoring drug release. Additionally, many therapeutic approaches to treat brain cancer and other diseases rely on injecting or implanting material that maintains a high interfacial concentration to improve drug bioavailability and efficacy, such as gold nanoparticle radiosensitizers for radiotherapy14,15. The drug depot material can be tested in vitro on this platform to determine the proper interfacial concentration given the to-scale surface area of the tissue, and to monitor the duration for which this concentration can be maintained.\n\nFigure 1 (a–b) demonstrates the thinning of the CC in MPS I (b) compared to that of a healthy brain (a). Given the uniqueness of each MPS patient’s brain pathology, density, and geometry, the ability to test the therapeutic window, effectiveness, and optimal drug loading concentration into an injectable drug depot for each specific patient is highly useful. 
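To make the geometry point concrete, the container-dependence of observed release can be illustrated with a toy Higuchi-type model, in which cumulative release scales with the exposed interfacial area and the square root of time. This is only a sketch; the rate constant, areas, and dose below are hypothetical placeholders, not measured values from this work:

```python
import math

def cumulative_release_mg(k, area_cm2, t_hours, dose_mg):
    """Toy Higuchi-type model: Q(t) = k * A * sqrt(t), capped at the loaded dose.
    All parameter values used here are hypothetical."""
    return min(k * area_cm2 * math.sqrt(t_hours), dose_mg)

# Same depot and time point, two container geometries with different exposed areas:
dish_release = cumulative_release_mg(k=0.5, area_cm2=78.5, t_hours=4, dose_mg=100)  # 10 cm dish
tube_release = cumulative_release_mg(k=0.5, area_cm2=1.7, t_hours=4, dose_mg=100)   # 15 mL conical tube
```

Under these assumed parameters the same material appears to release far faster in the larger-area geometry, which is why probing release against a to-scale printed surface can give a more representative read-out than an arbitrary labware geometry.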
A team of high school and undergraduate students was able to render the CAD file (Figure 1c) and 3D print it on common desktop 3D printers at a low cost (Figure 1d), suggesting that this platform may be scaled more readily than expensive 3D in vitro platforms. This material’s modulus is approximately 3 GPa, several orders of magnitude larger than that of native tissue. In order to create a 3D cell culture platform which enables cell migration and proliferation within the tissue, a 3D bioprinting approach must be used16,17. In conclusion, this method’s robustness, ease, and low cost make it adaptable for use in a wide variety of applications in drug delivery, drug discovery, tissue engineering, and stem cell biology.\n\n\nData availability\n\nDataset 1: DICOM files for the de-identified MRI scans of the corpus callosum of a MPS I subject, doi: 10.5256/f1000research.9861.d14432711\n\nDataset 2: Resulting CAD file from InVesalius 3 software (.STL), used to render the DICOM files in Dataset 1, doi: 10.5256/f1000research.9861.d14432812\n\n\nEthics\n\nThe study protocol involving the brain MRI acquisition was approved by the University of Minnesota IRB committee. Written, informed consent to publish results from MPS patients and healthy volunteers was obtained.",
"appendix": "Author contributions\n\n\n\nAT, SS, and IN developed the experimental outline. AT, MG, SS, SC, AM, DG, BH, and RT developed the model. AT, SS, AM, DG, and IN wrote the first draft of the paper. All authors revised the manuscript and have agreed to the final content.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe MRIs were provided from the projected funded by Lysosomal Disease Network (RDCRN; grant number NIH U54NS065768). Parts of this work were funded by a CoCreate Community Research Grant (#16354).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nThe authors thank Nicholas Powley, Mac Cameron, and Heather Fong for encouragement in pursuing this project.\n\n\nReferences\n\nMuenzer J: Overview of the mucopolysaccharidoses. Rheumatology (Oxford). 2011; 50(Suppl 5): v4–12. PubMed Abstract | Publisher Full Text\n\nZafeirioi DI, Batzlos SP: Brain and spinal MR imaging findings in mucopolysaccharidoses: a review. AJNR Am J Neuroradiol. 2013; 34(1): 5–13. PubMed Abstract | Publisher Full Text\n\nHofer S, Frahm J: Topography of the human corpus callosum revisited--Comprehensive fiber tractography using diffusion tensor magnetic resonance imaging. Neuroimage. 2006; 32(3): 989–994. PubMed Abstract | Publisher Full Text\n\nKakkis ED, Muenzer J, Tiller GE, et al.: Enzyme-replacement therapy in mucopolysaccharidosis I. N Engl J Med. 2001; 344(3): 182–188. PubMed Abstract | Publisher Full Text\n\nPeters C, Shapiro EG, Anderson J, et al.: Hurler syndrome: II. Outcome of HLA-genotypically identical sibling and HLA-haploidentical related donor bone marrow transplantation in fifty-four children. The Storage Disease Collaborative Study Group. Blood. 1998; 91(7): 2601–2608. 
PubMed Abstract\n\nTanaka A, Okuyama T, Suzuki Y, et al.: Long-term efficacy of hematopoietic stem cell transplantation on brain involvement in patients with mucopolysaccharidosis type II: a nationwide survey in Japan. Mol Genet Metab. 2012; 107(3): 513–520. PubMed Abstract | Publisher Full Text\n\nWeaver VM, Petersen OW, Wang F, et al.: Reversion of the malignant phenotype of human breast cells in three-dimensional culture and in vivo by integrin blocking antibodies. J Cell Biol. 1997; 137(1): 231–245. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWeaver VM, Lelièvre S, Lakins JN, et al.: beta4 integrin-dependent formation of polarized three-dimensional architecture confers resistance to apoptosis in normal and malignant mammary epithelium. Cancer Cell. 2002; 2(3): 205–216. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDebnath J, Mills KR, Collins NL, et al.: The role of apoptosis in creating and maintaining luminal space within normal and oncogene-expressing mammary acini. Cell. 2002; 111(1): 29–40. PubMed Abstract | Publisher Full Text\n\nMazia D, Schatten G, Sale W: Adhesion of cells to surfaces coated with polylysine. Applications to electron microscopy. J Cell Biol. 1975; 66(1): 198–200. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTabet A, Gardner M, Swanson S, et al.: Dataset 1 in: Low-cost, rapidly-developed, 3D printed in vitro corpus callosum model for mucopolysaccharidosis type I. F1000Research. 2016. Data Source\n\nTabet A, Gardner M, Swanson S, et al.: Dataset 2 in: Low-cost, rapidly-developed, 3D printed in vitro corpus callosum model for mucopolysaccharidosis type I. F1000Research. 2016. Data Source\n\nShive MS, Anderson JM: Biodegradation and biocompatibility of PLA and PLGA microspheres. Adv Drug Deliv Rev. 1997; 28(1): 5–24. PubMed Abstract | Publisher Full Text\n\nSetua S, Ouberai M, Piccirillo SG, et al.: Cisplatin-tethered gold nanospheres for multimodal chemo-radiotherapy of glioblastoma. Nanoscale. 
2014; 6(18): 10865–10873. PubMed Abstract | Publisher Full Text\n\nJoh DY, Sun L, Stangl M, et al.: Selective targeting of brain tumors with gold nanoparticle-induced radiosensitization. PLoS One. 2013; 8(4): e62425. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDubbin K, Hori Y, Lewis KK, et al.: Dual-Stage Crosslinking of a Gel-Phase Bioink Improves Cell Viability and Homogeneity for 3D Bioprinting. Adv Healthc Mater. 2016; 5(9): 2488–2492. PubMed Abstract | Publisher Full Text\n\nHinton TJ, Jallerat Q, Palchesko RN, et al.: Three-dimensional printing of complex biological structures by freeform reversible embedding of suspended hydrogels. Sci Adv. 2015; 1(9): e1500758. PubMed Abstract | Publisher Full Text | Free Full Text"
}
|
[
{
"id": "18886",
"date": "19 Jan 2017",
"name": "Dustin Sprouse",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nIndeed our society and health care system is moving toward more personalized medicine and care. Currently this is expensive and there is a great need for high throughput screening, lower costs, and 3D human cell culture to verify new drug discovery.\nThe authors sum this up well in the title and introduction and the brief follows this idea throughout and provides an example with the rare genetic disorder of mucopolysaccharidosis.\nThe paper flows nicely, but it was the methods section that had a lack of details and data and leaves one with more questions than revelations. The data is available in two linked files; however, one would first need the correct software to open them. Thus, I recommend more images/video to convey what the scientists accomplished and show the readers the potential results and outcomes.\nThe discussion sums up some important ideas that need to be considered when developing a methodology such as this one. These methods will need to be independently verified and show promising potential prior to a new mass platform.\n\nIn conclusion, this paper puts forth the ideas of a low cost approach to 3D printing scaffolds for drug release profiles, hopefully leading to faster turnarounds and lower costs of personalized medicine, drug discovery, tissue engineering, and stem cell biology.",
"responses": []
},
{
"id": "20282",
"date": "06 Mar 2017",
"name": "Siddharth Chanpuriya",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe authors motivate the problem well by summarizing the drawbacks of 2D cell culture in vitro studies to predict in vivo responses in humans.\n3D human cell cultures offers several advantages; here, the authors highlight how, in this era of personalized medicine, 3D platforms can be tailor-made to offer a more realistic approximation of in vivo studies when examining drug release kinetics. Specifically, a 3D printed model of a mucopolysaccharidosis type I patient's corpus callosum is developed from an MRI trace.\nThe report is presented clearly and the authors demonstrate how to rapidly develop the aforementioned 3D model and prepare it for drug delivery studies. However, the paper lacks any studies that demonstrably show the advantage of the 3D model. Future work that experimentally establish the effectiveness of 3D models synthesized using the reported method would be useful.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-2811
|
https://f1000research.com/articles/5-2293/v1
|
12 Sep 16
|
{
"type": "Research Article",
"title": "Evidence synthesis and decision modelling to support complex decisions: stockpiling neuraminidase inhibitors for pandemic influenza usage",
"authors": [
"Samuel I. Watson",
"Yen-Fu Chen",
"Jonathan S. Nguyen-Van-Tam",
"Puja R. Myles",
"Sudhir Venkatesan",
"Maria Zambon",
"Olalekan Uthman",
"Peter J. Chilton",
"Richard J. Lilford",
"Yen-Fu Chen",
"Jonathan S. Nguyen-Van-Tam",
"Puja R. Myles",
"Sudhir Venkatesan",
"Maria Zambon",
"Olalekan Uthman",
"Peter J. Chilton",
"Richard J. Lilford"
],
"abstract": "Objectives: The stockpiling of neuraminidase inhibitor (NAI) antivirals as a defence against pandemic influenza is a significant public health policy decision that must be made despite a lack of conclusive evidence from randomised controlled trials regarding the effectiveness of NAIs on important clinical end points such as mortality. The objective of this study was to determine whether NAIs should be stockpiled for treatment of pandemic influenza on the basis of current evidence. Methods: A decision model for stockpiling was designed. Data on previous pandemic influenza epidemiology was combined with data on the effectiveness of NAIs in reducing mortality obtained from a recent individual participant meta-analysis using observational data. Evidence synthesis techniques and a bias modelling method for observational data were used to incorporate the evidence into the model. The stockpiling decision was modelled for adults (≥16 years old) and the United Kingdom was used as an example. The main outcome was the expected net benefits of stockpiling in monetary terms. Health benefits were estimated from deaths averted through stockpiling. Results: After adjusting for biases in the estimated effectiveness of NAIs, the expected net benefit of stockpiling in the baseline analysis was £444 million, assuming a willingness to pay of £20,000/QALY ($31,000/QALY). The decision would therefore be to stockpile NAIs. There was a greater probability that the stockpile would not be utilised than utilised. However, the rare but catastrophic losses from a severe pandemic justified the decision to stockpile. Conclusions: Taking into account the available epidemiological data and evidence of effectiveness of NAIs in reducing mortality, including potential biases, a decision maker should stockpile anti-influenza medication in keeping with the postulated decision rule.",
"keywords": [
"Pandemic influenza",
"evidence synthesis",
"bias modelling",
"neuraminidase inhibitors",
"stockpiling"
],
"content": "Introduction\n\nLike many other potentially catastrophic events for which governments need to prepare, influenza pandemics are rare. Although the risk is considered to be 3–4% per annum1, the public health consequences are widely recognised to be potentially severe2. The epidemiology of only a small number of influenza pandemics has been well studied and evidence for the effectiveness of remedial influenza treatments in a pandemic scenario is scant. Yet, governments around the world still have to decide whether or not to stockpile anti-influenza medication like neuraminidase inhibitor (NAI) antivirals, such as oseltamivir (Tamiflu®) and zanamivir (Relenza®), as a defence against pandemic influenza.\n\nThe stockpiling of NAIs has been a controversial issue. Firstly, stockpiling may be seen to be a waste of large amounts of public money if the pandemic fails to materialise or if it is mild. In the United Kingdom, the previous Chief Medical Officer was criticised for spending £560 million on medicine that went largely unused in the 2009–10 pandemic3. However, taking a default position of not stockpiling, or making the decision on the basis of intuition alone, is not justifiable given the rare but potentially catastrophic losses associated with pandemic influenza and the large cost of stockpiling.\n\nSecondly, there has been a lack of conclusive evidence on the effectiveness of NAIs. Recent meta-analyses of randomised controlled trials (RCT) of seasonal influenza cases demonstrated reductions in rates of hospitalization, lower respiratory complications, and a decreased time to symptom alleviation but were unable to confirm or refute an effect of NAIs on more important clinical end points such as mortality4,5. A caveat of these studies, which were required for licensure of drug in healthy adults, is that they were not powered to determine low frequency but critical end points such as mortality in a largely healthy adult population. 
A further meta-analysis of observational data from pandemic influenza did find evidence of a reduction in the risk of mortality when NAIs were given to patients hospitalised with influenza6. Some authors have criticised it for being subject to a large degree of bias and rejected it as a suitable form of evidence with which to formulate policy decisions7,8, though others argue that this evidence strongly supports the use of NAI treatment for influenza in hospitalised patients9.\n\nEvidence that has a bearing on death rates is not confined to measurement of mortality alone – there are other sources of relevant evidence. Clinical trials show that NAIs have beneficial effects on a number of outcomes as described above4–6. The treatment has a plausible rationale and it works in vitro and in animal models for this zoonosis10. An arguably extreme position is to assume that these observations contain no information regarding effectiveness in preventing the rarer, but more severe outcomes, such as death. However, people who take to heart Bradford Hill’s list of factors that should affect the interpretation of data (Box 1), would reject such a completely non-theoretical stance. An observed reduction in the risk of mortality is consistent with the aforementioned evidence. There is thus a compelling case for extrapolation from various forms of evidence in order to examine the investment decision facing decision makers.\n\nPrevious studies have estimated how cost-effective NAI stockpiling would be under a range of different pandemic influenza scenarios11–16. Stockpiling is generally estimated to be cost-effective. However, these studies took observational evidence of effectiveness, often from seasonal influenza studies, at face value and did not model potential biases that may have led to overestimation of benefits. Moreover, they only examined a limited number of specific future scenarios. 
The results of such cost-effectiveness models hinge on the available evidence of effectiveness, and the implications of new evidence may not be immediately clear to decision makers. We have therefore taken a different approach.\n\nThe number of deaths from an influenza pandemic can be calculated simply from a number of relevant variables such as the size of the population, the clinical attack rate, and the case fatality ratio. The effectiveness of NAIs in terms of relative risks can then be used to estimate the potential number of deaths averted through their use. A simple model can provide a useful framework to synthesise the available evidence while also remaining clear and transparent to decision makers. There is a large degree of uncertainty regarding the variables in the model, due to factors such as random mutations in the influenza virus, individual behaviour, and distribution of NAIs; nevertheless, appropriate distributions can be specified for each variable and the uncertainty propagated through the model to estimate the distribution of possible numbers of deaths and resulting QALYs under the stockpiling and no stockpiling options. The model presented here exemplifies an approach to decision making under the types of uncertainty described above using a simple, transparent model to assist decision makers and to help inform the stockpiling decision.\n\n\nMethods\n\nThe methods used in this study are founded in normative decision theory17,18, which considers what decisions we ought to take, and Bayesian statistics. We used a well-established technique based on expected utility theory17,18 to model the binary decision to stockpile or not to stockpile NAIs. Within this framework, the decision simplifies to a question of whether the expected net benefits of the stockpiling decision are positive19.\n\nThe net benefit associated with stockpiling was set as the value of the deaths averted minus the costs of stockpiling. 
If the expected net benefit of stockpiling is positive then the decision would be to stockpile, and if it is negative, not to stockpile.\n\nThe value of the deaths averted was modelled as:\n\nPop × Prob × CAR × CFR × Hospital × Treated × (1 – θ) × QALY × λ\n\nFirstly, the number of pandemic influenza deaths was calculated by multiplying the number of adults in the UK (Pop) by: the probability of there being a pandemic within the stockpile shelf-life (Prob), the clinical attack rate (CAR), and the case fatality ratio (CFR). We further multiply by the probability a pandemic influenza death occurred in hospital (Hospital), and the probability one of these patients receives NAIs (Treated). The number of deaths averted by NAI treatment in this population of NAI-treated adults was given by the relative risk reduction in mortality associated with NAI treatment (1 – θ). Finally, the value of these deaths averted was calculated by multiplying by the quality adjusted life years (QALY) associated with each pandemic influenza mortality (QALY), and the societal willingness to pay per QALY (λ). This model is further explicated in Figure 1.\n\nWe considered reductions in mortality among symptomatic adults resulting from stockpiling, but did not take into account possible additional effects on complications such as pneumonia or that community use might reduce complications, hospitalisation, or mortality. Only adults were considered on the grounds that NAI effectiveness4,6 is less certain in children and to determine if the decision to stockpile could be justified on the basis of any benefit among adults alone.\n\nDecision modelling is founded in the Bayesian paradigm, which was used to evaluate the stockpiling decision for a future pandemic with unknown epidemiological variables and unknown effectiveness of NAI. A sub-model was specified for each epidemiological variable in the decision model. 
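As a rough numerical illustration (not the authors' MCMC estimation, which integrates over full posterior distributions), the expression above can be sketched in Python using the posterior means and base-case values reported elsewhere in the paper; plugging in point estimates gives a ballpark figure of the same order as the reported £444 million expected net benefit:

```python
# Point-estimate sketch of the decision model:
# Pop x Prob x CAR x CFR x Hospital x Treated x (1 - theta) x QALY x lambda
def net_benefit_of_stockpiling(pop, prob, car, cfr, hospital, treated,
                               rel_risk, qaly, wtp, stockpile_cost):
    deaths = pop * prob * car * cfr                          # expected pandemic deaths
    averted = deaths * hospital * treated * (1 - rel_risk)   # deaths averted by NAIs
    return averted * qaly * wtp - stockpile_cost             # net benefit in GBP

# Point values are the posterior means / base-case figures reported in the paper.
nb = net_benefit_of_stockpiling(
    pop=50.5e6, prob=0.385, car=0.238, cfr=0.007, hospital=0.919,
    treated=1.0, rel_risk=0.89, qaly=15.2, wtp=20_000, stockpile_cost=560e6)
```

Because the net benefit is a non-linear function of uncertain inputs, this point calculation is no substitute for the expectation over the joint posterior; it only illustrates the mechanics of the model.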
Data from previous pandemics were assumed to be observations from an underlying common distribution, the parameters of which were estimated using these data as described in the following section. The decision was then evaluated over posterior predictive distributions for the epidemiological parameters. We used a bias corrected effectiveness estimate for the effectiveness of NAIs as described below. The model was estimated using Markov Chain Monte Carlo (MCMC) with 10,000 iterations using R 3.2.3 and Stan 2.11.0. This method obviates the need to conduct separate probabilistic sensitivity analyses since the posterior distribution of the net benefits represents the uncertainty about future influenza pandemics and NAI effectiveness. The expected net benefits represent the gains or losses from stockpiling, on average, given the different distributions for the different parameters. Convergence of the MCMC chains was assessed by visual inspection of autocorrelation, running mean, and trace plots in R.\n\nThe data and statistical code are provided with the paper.\n\nThe data used to estimate the parameters in the model were obtained from documents compiled to assess pandemic influenza and thus represent the decision maker’s prior knowledge1. The shelf-life of oseltamivir, the principal drug comprising the vast majority of the NAI stockpile, is ten years20.\n\nThe clinical attack rate and case fatality ratios from previous pandemics were assumed to be observations from beta distributions. Improper non-informative priors with a lower limit of zero were assigned to the parameters of these distributions, which were then updated with the data from the previous pandemics. We excluded the observation of a clinical attack rate of 60% in the 1889–92 Asiatic flu pandemic as the UK government’s worst case scenario is a clinical attack rate of 50%. 
The probability that a pandemic occurs in the shelf life of the stockpile was similarly estimated from the data with each decade between 1900 and 2010 as a binary observation equal to one if a pandemic occurred in that decade and zero otherwise. These binary observations were assumed to be observations from a Bernoulli distribution.\n\nNo RCT evidence for the effectiveness of NAIs in reducing the risk of mortality in pandemic influenza was available. Too few deaths were observed in RCTs of seasonal influenza4. We based our effectiveness estimate on a recently published pooled meta-analysis of observational, patient-level data from hospitalised pandemic influenza virus patients6. We converted the odds ratios (OR) for mortality associated with NAIs (irrespective of time from onset) provided in the paper into relative risks (RR): RR = OR/(1 – p + (p × OR)), where p is the baseline risk of mortality (approximately 10%)21. The study was based on hospitalised patients; in order to apply the observed relative risk from hospitalised patients to the general population considered here, we made two conservative assumptions. First, we assumed that there would be no difference between the patients that would be hospitalised and those that would remain in the community in the no-stockpile and stockpile scenarios. This is conservative because community treatment will be given earlier, on average, in the course of the disease if it can be administered in the community and there is evidence that the earlier the treatment is given, the better4–6. Secondly, we assumed that only deaths occurring in hospital in the non-stockpile scenario would be averted under the counterfactual stockpile scenario. A study of mortality in the A/H1N1 2009 pandemic in England found that 92% of deaths (125 of 136 cases studied) occurred in hospital22. Assuming that none of the remaining 8% of deaths, which occurred outside hospital, would be averted under the counterfactual is as conservative as possible. 
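The OR-to-RR conversion above is straightforward to verify. A minimal sketch in Python, using the OR of 0.81 and the baseline risk of approximately 10% quoted in the paper:

```python
def or_to_rr(odds_ratio, baseline_risk):
    """Convert an odds ratio to a relative risk: RR = OR / (1 - p + p * OR)."""
    return odds_ratio / (1 - baseline_risk + baseline_risk * odds_ratio)

# OR of 0.81 with baseline mortality risk p = 0.10, as given in the paper.
rr = or_to_rr(0.81, 0.10)  # ~0.83, matching the observed relative risk in the Results
```

This conversion matters because odds ratios overstate risk reductions when the baseline event rate is non-trivial, as it is for in-hospital influenza mortality.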
The logic of our approach is laid out in Figure 1.\n\nIn addition to these conservative assumptions regarding the application of in-hospital relative risk reductions to a community population, we also took into account the observational nature of the hospital based evidence itself. A number of authors have raised this issue in connection with the study used here7,8, although others dispute the strength of these criticisms9. We used a method previously published elsewhere to model bias23. Five reviewers (SIW, RJL, YFC, OU, and PJC) who were not associated with the observational data study independently completed a bias questionnaire and provided their beliefs about both additive and proportional bias present in the study across a range of domains. The median values for the mean and standard error of the bias across reviewers were used to ‘correct’ the observational evidence23. The method for bias modelling used here was originally intended for individual studies so that they could be adjusted prior to an evidence synthesis23. This method has been applied here since the study in question is an individual patient pooled meta-analysis, analysed using a similar method to that which any single study would use, except that the data originate from multiple locations and are of varying quality. The reviewers considered this an additional source of uncertainty when evaluating the quality and potential for bias.\n\nThe distribution for the average age associated with an influenza death in previous pandemics was assumed to be drawn from a scaled Beta distribution with an upper limit of 81.5, which is the UK life expectancy at birth. The parameters of this distribution were then estimated from data; the average ages of influenza deaths from prior pandemics were 27 (1918), 65 (1957), 62 (1968), and 45 (2009)22,24,25; no data were available from the 1889–92 pandemic. 
To estimate QALYs lost due to an influenza death, the remaining life expectancy was calculated by subtracting the average age at death from the UK life expectancy at birth (i.e. 81.5 years)21. These years were weighted by the average QALY weight for a person aged over 45 of 0.822, and then discounted at the rate of 3.5% per annum as recommended by the National Institute for Health and Care Excellence (NICE)26.\n\nWe also estimated the probability a pandemic influenza death occurred in hospital using data on 2009 pandemic influenza deaths22. We further considered a number of scenarios for the distribution of NAIs and the proportion of symptomatic pandemic influenza cases that would receive the drug. Our base case was 100%; however, we also considered the decisions that would be made in the range of 0% to 100% in a deterministic sensitivity analysis – the value of the deaths averted was multiplied by a number between zero and one. The cost of stockpiling was assumed fixed at £560 million ($860m, €750m) and was based on the figures quoted in the above mentioned Select Committee hearings3. We considered the adult population of the UK, which was 50.5 million in 201527. The willingness to pay per QALY was selected as £20,000/QALY ($31,000/QALY) for the base case analysis, the lower end of the range (£20,000–£30,000/QALY; $31,000–$45,000/QALY) specified by NICE as being cost-effective26. We examined the decision that would be made under a range of willingness to pay per QALY values of £5,000/QALY ($7,500/QALY) to £30,000/QALY ($45,000/QALY).\n\n\nResults\n\nTable 1 shows the posterior mean and 95% credible intervals for the parameters in the model. Using data from previous influenza pandemics, mean values (95% credible intervals) were as follows: clinical attack rate 23.8% (5.2%, 50.6%), case fatality ratio 0.7% (0.0%, 3.0%), and probability of experiencing a pandemic within a decade 38.5% (15.3%, 64.9%). 
The expected value for the mean QALY losses associated with influenza mortality was 15.2 (5.7, 20.9). The proportion of pandemic influenza deaths that occurred in hospitalised patients was 91.9% (86.9%, 95.8%).\n\nCAR = clinical attack rate; CFR = case fatality ratio. Probabilities expressed as %.\n\naAssumed to be fixed.\n\nbSee Appendix A for derivation.\n\ncRelative risks converted from odds ratios (0.81, 95% CI: 0.70, 0.93) using a baseline risk of mortality of 10%16.\n\nThe observed relative risk was 0.83 (95% confidence interval: 0.71, 0.94) and the bias corrected relative risk was estimated as 0.89 (0.71, 1.07). The principal sources of bias identified by the reviewers were selection bias, due to a lack of randomisation, the possibility that studies with a positive finding may have been more likely to volunteer their data for the meta-analysis, and attrition bias. Not all reviewers were in agreement about the overall effects of bias, but the median response was that there was an overestimation of treatment benefit.\n\nTable 2 shows the results from various scenarios considered. The expected net benefit of stockpiling in the baseline analysis was £444 million ($668 million). The decision would therefore be to stockpile NAIs. Figure 2 shows the posterior distribution of net benefits. The mean number of deaths averted was 3,218. There was a 77% probability that the benefits were negative, implying that no pandemic occurred, an insufficiently large pandemic occurred, or NAIs were not effective enough to justify the stockpile. The median net benefit was -£560 million in each case, as in the majority of scenarios no pandemic occurred and there was only the net cost of the stockpile. Nevertheless, the mean estimated net benefit was positive, which was caused by the very large number of deaths, many of which may be prevented by stockpiling, in the unlikely event of a severe pandemic. 
This can be seen in the long tail on the left of the distribution in Figure 2.\n\nThe decision is to stockpile if the expected net benefit is greater than zero and not to stockpile otherwise. The willingness to pay per QALY is £20,000/QALY in all scenarios.\n\nThe x-axis has been truncated at £4.5b.\n\nFigure 3 shows the decision under a range of values for the percentage of hospitalised, symptomatic adults who would receive NAIs and the willingness to pay per QALY threshold. If 100% of hospitalised, symptomatic adults with influenza received NAIs then the decision would be to stockpile as long as our threshold willingness to pay per QALY was greater than £11,116/QALY. When only 50% of hospitalised, symptomatic adults receive NAIs this threshold increases to £22,232/QALY, which would still be considered cost-effective in the range considered by NICE. The minimum percentage of hospitalised, symptomatic adults with influenza that would need to receive NAIs for the decision to be to stockpile at a threshold willingness to pay of £20,000/QALY is 56%.\n\nThe green shaded region shows where the decision would be to stockpile. Three decision thresholds are shown by the dashed (100% of hospitalised, symptomatic adults receive NAIs), dotted (50% of hospitalised, symptomatic adults receive NAIs), and dash-dotted (£20,000/QALY willingness to pay) lines, along with the corresponding values for willingness to pay or required proportion of hospitalised, symptomatic adults receiving NAIs.\n\n\nDiscussion\n\nThis study has found that the available evidence suggests that stockpiling NAIs for pandemic influenza is rational under a range of assumptions. Many of these assumptions are conservative, such as no reduction in adverse clinical outcomes other than mortality, no benefit in patients who would not have been hospitalised had there been no stockpile, and no effect in children. 
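The decision rule just stated can be sketched directly: stockpile if and only if the expected net benefit is positive. The point-estimate arithmetic below, using the posterior means reported above, only approximates the £444 million figure (which was computed by averaging over the full posterior distributions), but it shows why the willingness-to-pay threshold in Figure 3 scales inversely with treatment coverage: halving coverage doubles the break-even threshold (£11,116/QALY to £22,232/QALY), and the minimum coverage at £20,000/QALY is 11,116/20,000 ≈ 56%. Function and variable names are illustrative:

```python
STOCKPILE_COST = 560e6  # £560 million, as stated in the Methods

def expected_net_benefit(deaths_averted, qalys_per_death, wtp_per_qaly, coverage=1.0):
    """Expected net benefit of stockpiling (sketch).

    The decision is to stockpile iff this is greater than zero.
    `coverage` scales the benefit by the proportion of patients treated,
    as in the deterministic sensitivity analysis.
    """
    return coverage * deaths_averted * qalys_per_death * wtp_per_qaly - STOCKPILE_COST

# Benefit is linear in coverage * WTP, so the break-even WTP threshold
# is inversely proportional to coverage: £11,116/QALY at 100% coverage
# becomes £22,232/QALY at 50% coverage.
```

At the posterior means (3,218 deaths averted, 15.2 QALYs per death, £20,000/QALY) this gives a positive net benefit, consistent with the decision to stockpile.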
However, this decision required at least 56% of the influenza patients who would have died without a stockpile to receive NAIs if the threshold willingness to pay was £20,000/QALY. In the 2009 pandemic, 64% of hospitalised patients received NAIs6, and in the United Kingdom specifically this proportion was 75%28, suggesting that 56% is achievable and, therefore, that stockpiling is supported by the available evidence.\n\nThis paper is predicated on the purchase of a stockpile large enough to treat a large proportion of the population (80% in the UK) in the community and in hospital with NAIs. This may well be the correct strategy if new evidence emerges that community-based treatment reduces either complications, hospitalisations or mortality. Further research will be required; indeed, the Bayesian decision analysis used here can be extended to consider how much to stockpile rather than simply whether to stockpile. However, if the evidence base were to remain limited to mortality reductions in hospitalised patients, or if the societal willingness to pay per QALY was low, as it may be in many resource-poor settings, a ‘hospital-treatment only’ policy might be considered. This would reduce the cost of the stockpile significantly. For example, in the 2009 pandemic only 0.5% of symptomatic cases were hospitalised29; these patients would require far fewer doses than the 1.16 million courses (at a minimum) of NAIs dispensed in the 2009 pandemic30. For a population of 50.5 million adults with a CAR of 25%, a hospitalisation probability of 0.5% would lead to only approximately 60,000 admissions. The evidence also suggests that more timely treatment with NAIs (within two days of symptom onset) is more effective than treatment at any point6, which suggests that the effectiveness of NAIs could be more favourable than modelled under the stockpiling policy. 
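The admissions arithmetic in the passage above can be checked directly from the stated inputs (variable names are mine):

```python
adults = 50.5e6   # UK adult population, 2015
car = 0.25        # clinical attack rate assumed in the example
p_hosp = 0.005    # probability of hospitalisation among symptomatic cases

# Expected hospital admissions under a 'hospital-treatment only' policy
admissions = adults * car * p_hosp  # 63,125, i.e. roughly 60,000 as stated
```

This is two orders of magnitude below the 1.16 million community courses dispensed in 2009, which is the basis for the claim that a hospital-only stockpile would be far cheaper.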
In all cases the decision would remain to stockpile NAIs.\n\nOur conclusions are in line with the decision that would be made on the basis of cost-effectiveness evidence from previous studies11–16. However, our study does not take observational evidence at face value, but ‘downgrades’ it, thereby yielding a reduced estimate of effectiveness and wider credible limits. We have calculated the distribution of possible deaths from pandemic influenza using a relatively simple mathematical model and then ‘averaged’ over the distributions of the variables rather than examining cost-effectiveness on a scenario-by-scenario basis. This approach is intuitively simple and aims to provide correct inferences using a simple logical framework for the synthesis of the commonly available evidence in order to assist decision makers with a complex decision. The model allows the logical basis of the decision to be ‘reverse engineered’, allowing the decision to be critiqued within the framework established by the model.\n\nObtaining an estimate for the bias in any particular study, or consolidated group of studies, is clearly an uncertain undertaking. There is an evidence base on bias arising from meta-regressions or other analyses comparing the results of imperfect studies to those of a ‘gold standard’. A recent Cochrane review comparing treatment effects reported in observational studies with those from RCTs found that, “on average, there is little evidence for significant effect estimate differences between observational studies and RCTs...”31 It is not surprising, given the considerable uncertainties surrounding the meta-analysis cited here, that the differences between the reported effects and our bias-corrected effect resemble the differences in empirical studies comparing observational studies and RCTs31–33.\n\nWe acknowledge weaknesses in our study. The only outcome considered in the analyses was mortality. 
Adverse events caused by NAIs may also generate increased costs and hence reduced benefit. For example, a review of clinical trial evidence of NAIs found an increased risk of nausea and vomiting associated with treatment4. The authors also reported a possible increase in the risk of psychiatric adverse events. However, this only reached statistical significance in exploratory analyses including a supra-licence dose and off-treatment periods. A more recent meta-analysis of individual-level patient data from clinical trials, focusing on the licensed dose only, found no such effects, but the number of events was small5. Neuraminidase inhibitors may also have protective effects against some adverse events such as cardiac events, and may reduce the risk of influenza-associated pneumonia and hospitalisation4,5. The benefit of treatment is unlikely to be grossly over-estimated and is likely to be under-estimated given our conservative assumptions. We have also not considered potential effects on children or from reductions in complications, hospitalisations or mortality that might be associated with community-based treatment; nor have we considered wider societal effects, such as productivity gains, reduced community transmission, and the value placed on a stockpile for a potentially risk-averse population, all of which may increase the benefits of stockpiling.\n\nWe have assumed independence between the clinical attack rate and case fatality ratio, as well as other variables; however, there is some evidence to suggest that they could be correlated34. Nevertheless, the data are admittedly scant, and we expect this to be a neutral assumption. 
Of course, if they are positively correlated then our conclusions become more conservative.\n\nOur model examines the decision in the abstract and does not concern itself with externalities such as the possibility that availability of the drug will affect attitudes and hinder the effort to contain the spread of the disease, or that resistance to antivirals may develop. Nor have we considered the sensitivity of clinical diagnosis of influenza in identifying true positives or the costs and logistics of establishing a distribution process for the NAIs. The propensity to consult is also an important factor that may affect the proportion of true positives, which in turn may have a bearing on the use of a stockpile if used on a “first come, first served” basis. Further research is required to optimize distribution and behaviour during a pandemic to ensure the cost-effectiveness of the stockpile.\n\n\nConclusions\n\nTaking into account the existing evidence on pandemic influenza and the effectiveness of NAIs, the decision should be to stockpile, provided a utilitarian decision-making framework is used, minimising expected losses and hence maximising expected benefits.\n\n\nData availability\n\nThe data used to estimate the parameters in the model were obtained from documents compiled to assess pandemic influenza and thus represent the decision maker’s prior knowledge1.\n\nF1000Research: Dataset 1. Raw data of 'stockpiling neuraminidase inhibitors for pandemic influenza usage', 10.5256/f1000research.9414.d13265341.",
"appendix": "Author contributions\n\n\n\nSIW, RJL, and YFC conceived the study and developed the methodology; JSN-V-T, PRM, PJC, MZ, and SV contributed to the parameterisation of the model and provided background to pandemic influenza; SIW, RJL, YFC, OU, and PJC independently reviewed the observational evidence and adjusted effectiveness estimates for bias; SIW and RJL prepared the first draft of the paper; this and subsequent drafts were reviewed and revised by all authors.\n\n\nCompeting interests\n\n\n\nJSN-V-T, PRM, and SV have co-authored the Muthuri et al. (2014) study which was supported by an unrestricted educational grant from F. Hoffman La Roche. PRM and JSN-V-T were MUGAS Review Board members that reviewed the oseltamivir data (both from randomised controlled trials and observational studies including data from the 2009/10 pandemic) and agreed on evidence gaps and a statistical analysis plan that would address these gaps. JSN-V-T is Chair of NERVTAG (New and Emerging Respiratory Virus Threat Advisory Group).\n\n\nGrant information\n\nSIW, RJL, YFC, and PJC are part-funded/supported by the National Institute for Health Research (NIHR) Collaborations for Leadership in Applied Health Research and Care West Midlands. This paper presents independent research and the views expressed are those of the author(s) and not necessarily those of the NHS, the NIHR or the Department of Health.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nThis paper presents independent research funded by the National Institute for Health Research (NIHR) Collaboration for Leadership in Applied Health Research and Care West Midlands. 
The views expressed are those of the author(s) and not necessarily those of the NHS, the NIHR or the Department of Health.\n\n\nAppendices\n\nClick here to access the data.\n\n\nReferences\n\nDepartment of Health: Scientific Summary of Pandemic Influenza & its Mitigation. London; 2011. Reference Source\n\nCabinet Office: National Risk Register of Civil Emergencies. London; 2015. Reference Source\n\nHouse of Commons Committee of Public Accounts: Access to clinical trial information and the stockpiling of Tamiflu. London; 2013. Reference Source\n\nJefferson T, Jones MA, Doshi P, et al.: Neuraminidase inhibitors for preventing and treating influenza in healthy adults and children. In: Jefferson T, editor. Cochrane Database Syst Rev. Chichester, UK: John Wiley & Sons, Ltd; 2014; 4: CD001265. PubMed Abstract | Publisher Full Text\n\nDobson J, Whitley RJ, Pocock S, et al.: Oseltamivir treatment for influenza in adults: a meta-analysis of randomised controlled trials. Lancet. Elsevier Ltd; 2015; 385(9979): 1729–37. PubMed Abstract | Publisher Full Text\n\nMuthuri SG, Venkatesan S, Myles PR, et al.: Effectiveness of neuraminidase inhibitors in reducing mortality in patients admitted to hospital with influenza A H1N1pdm09 virus infection: a meta-analysis of individual participant data. Lancet Respir Med. 2014; 2(5): 395–404. PubMed Abstract | Publisher Full Text\n\nKmietowicz Z: Study claiming Tamiflu saved lives was based on “flawed” analysis. BMJ. 2014; 348: g2228. PubMed Abstract | Publisher Full Text\n\nWolkewitz M, Schumacher M: Statistical and methodological concerns about the beneficial effect of neuraminidase inhibitors on mortality. Lancet Respir Med. 2014; 2(7): e8–9. PubMed Abstract | Publisher Full Text\n\nThe Academy of Medical Sciences: Use of Neuraminidase Inhibitors in Influenza. London; 2015. Reference Source\n\nWard P, Small I, Smith J, et al.: Oseltamivir (Tamiflu®) and its potential for use in the event of an influenza pandemic. J Antimicrob Chemother. 
2005; 55(Suppl 1): i5–i21. PubMed Abstract | Publisher Full Text\n\nBalicer RD, Huerta M, Davidovitch N, et al.: Cost-benefit of stockpiling drugs for influenza pandemic. Emerg Infect Dis. 2005; 11(8): 1280–2. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSiddiqui MR, Edmunds WJ: Cost-effectiveness of antiviral stockpiling and near-patient testing for potential influenza pandemic. Emerg Infect Dis. 2008; 14(2): 267–74. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKhazeni N, Hutton DW, Garber AM, et al.: Effectiveness and cost-effectiveness of expanded antiviral prophylaxis and adjuvanted vaccination strategies for an influenza A (H5N1) pandemic. Ann Intern Med. 2009; 151(12): 840. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCarrasco LR, Lee VJ, Chen MI, et al.: Strategies for antiviral stockpiling for future influenza pandemics: a global epidemic-economic perspective. J R Soc Interface. 2011; 8(62): 1307–13. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLugnér AK, Mylius SD, Wallinga J: Dynamic versus static models in cost-effectiveness analyses of anti-viral drug therapy to mitigate an influenza pandemic. Health Econ. 2010; 19(5): 518–31. PubMed Abstract | Publisher Full Text\n\nLee VJ, Phua KH, Chen MI, et al.: Economics of neuraminidase inhibitor stockpiling for pandemic influenza, Singapore. Emerg Infect Dis. 2006; 12(1): 95–102. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPress SJ: Subjective and Objective Bayesian Statistics. 2nd Edition. Wiley; 2002. Reference Source\n\nBerger JO: Statistical Decision Theory and Bayesian Analysis. 3rd Edition. Springer; 1993. Reference Source\n\nClaxton K: The irrelevance of inference: a decision-making approach to the stochastic evaluation of health care technologies. J Health Econ. 1999; 18(3): 341–64. PubMed Abstract | Publisher Full Text\n\nElectronic Medicines Compendium: Tamiflu 75mg hard capsule. [cited 2015 Aug 10]. 
Reference Source\n\nGrant RL: Converting an odds ratio to a range of plausible relative risks for better communication of research findings. BMJ. 2014; 348: f7450. PubMed Abstract | Publisher Full Text\n\nDonaldson LJ, Rutter PD, Ellis BM, et al.: Mortality from pandemic A/H1N1 2009 influenza in England: public health surveillance study. BMJ. 2009; 339: b5213. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTurner RM, Spiegelhalter DJ, Smith GC, et al.: Bias modelling in evidence synthesis. J R Stat Soc Ser A Stat Soc. 2009; 172(1): 21–47. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDávila J, Chowell G, Borja-Aburto VH, et al.: Substantial Morbidity and Mortality Associated with Pandemic A/H1N1 Influenza in Mexico, Winter 2013–2014: Gradual Age Shift and Severity. PLoS Curr. 2014; 6: pii: ecurrents.outbreaks.a855a92f19db1d90ca955f5e908d6631. PubMed Abstract | Publisher Full Text | Free Full Text\n\nViboud C, Miller M, Olson D, et al.: Preliminary Estimates of Mortality and Years of Life Lost Associated with the 2009 A/H1N1 Pandemic in the US and Comparison with Past Influenza Seasons. PLoS Curr. 2010; 2: RRN1153. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNational Institute for Health and Care Excellence: Guide to the methods of technology appraisal 2013. London; 2013. Reference Source\n\nOffice for National Statistics: Population Estimates for UK, England and Wales, Scotland and Northern Ireland, Mid-2014. 2015. Reference Source\n\nNguyen-Van-Tam JS, Openshaw PJ, Hashim A, et al.: Risk factors for hospitalisation and poor outcome with pandemic A/H1N1 influenza: United Kingdom first wave (May–September 2009). Thorax. 2010; 65(7): 645–51. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPresanis AM, Pebody RG, Paterson BJ, et al.: Changes in severity of 2009 pandemic A/H1N1 influenza in England: a Bayesian evidence synthesis. BMJ. 2011; 343: d5408. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nNguyen-Van-Tam JS, Nicholson KG: Neuraminidase inhibitors were widely used in the UK during the 2009 influenza A(H1N1) pandemic. J Clin Virol. 2011; 50(2): 183. PubMed Abstract | Publisher Full Text\n\nAnglemyer A, Horvath HT, Bero L: Healthcare outcomes assessed with observational study designs compared with those assessed in randomized trials. Bero L, editor. Cochrane database Syst Rev. Chichester, UK: John Wiley & Sons, Ltd; 2014; 4(4): MR000034. PubMed Abstract | Publisher Full Text\n\nBenson K, Hartz AJ: A comparison of observational studies and randomized, controlled trials. N Engl J Med. 2000; 342(25): 1878–86. PubMed Abstract | Publisher Full Text\n\nIoannidis JP, Haidich AB, Pappa M: Comparison of evidence of treatment effects in randomized and nonrandomized studies. JAMA. 2001; 286(7): 821–30. PubMed Abstract | Publisher Full Text\n\nLee EC, Viboud C, Simonsen L, et al.: Detecting signals of seasonal influenza severity through age dynamics. BMC Infect Dis. 2015; 15: 587. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHsu J, Santesso N, Mustafa R, et al.: Antivirals for treatment of influenza: a systematic review and meta-analysis of observational studies. Ann Intern Med. 2012; 156(7): 512–24. PubMed Abstract | Publisher Full Text\n\nFiore AE, Fry A, Shay D, et al.: Antiviral agents for the treatment and chemoprophylaxis of influenza --- recommendations of the Advisory Committee on Immunization Practices (ACIP). MMWR Recomm Reports. 2011; 60(1): 1–24. PubMed Abstract\n\nEuropean Medicines Agency: Tamiflu: EPAR - Scientific discussion. 2005. [cited 2015 Sep 7]. Reference Source\n\nHodson EM, Craig JC, Strippoli GF, et al.: Antiviral medications for preventing cytomegalovirus disease in solid organ transplant recipients. Cochrane Database Syst Rev. 2008; (2): CD003774. 
PubMed Abstract | Publisher Full Text\n\nOffice for National Statistics: Historic and Projected Mortality Data from the Period and Cohort Life Tables, 2012-based, UK, 1981–2062. 2013. Reference Source\n\nNatCen Social Research: Health Survey for England. Report No.: SN: 7649. Reference Source\n\nWatson SI, Chen YF, Nguyen-Van-Tam JS, et al.: Dataset 1 in: Evidence synthesis and decision modelling to support complex decisions: Stockpiling neuraminidase inhibitors for pandemic influenza usage. F1000Research. 2016. Data Source"
}
|
[
{
"id": "17973",
"date": "18 Jan 2017",
"name": "Pasi M. Penttinen",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis is a well-designed, carefully executed and documented study that provides important insights into the cost-effectiveness of national stockpiles of neuraminidase inhibitors to be used during influenza pandemics.\n\nThe analysis relies on a number of key assumptions, such as the effectiveness of NAI antivirals against mortality due to influenza, the probability of a pandemic occurring during the shelf life of the stockpile and the proportion of pandemic influenza deaths occurring in hospital. Many of these assumptions are based on a limited or controversial evidence base; however, the authors acknowledge and address most of these limitations.\nThe assumption that most pandemic deaths occur in hospitals is based on the observation during the 2009 pandemic in the UK; however, in many countries, already during severe influenza A(H3N2) epidemics, and during many previous pandemics, the majority of deaths are likely to occur in the community, outside of hospitals. It is confusing that the authors compare the costs of a population-wide (80%) stockpile with the estimated benefits on hospital mortality only. Although this is discussed in the second paragraph of the discussion, it would be helpful to see an analysis or results taking also into account outpatient and community mortality.\nIt is likely that such an analysis would be useful for countries other than the UK. 
Please briefly discuss the limitations of this approach and these assumptions when replicating the study in other settings (such as differences in societal willingness to pay per QALY).\nIn Box 1, the two columns are not aligned when viewing as a pop-up on MS Internet Explorer.\nIn Figure 1, the references to the UK and the national pandemic flu service are not helpful and distract from the more general main message of this figure.",
"responses": [
{
"c_id": "2546",
"date": "16 Mar 2017",
"name": "Sam Watson",
"role": "Author Response",
"response": "We thank the reviewer for their comments and detail our responses below, point by point. The referee's text is in italics. This is a well-designed, carefully executed and documented study, that provides important insights into the cost-effectiveness of national stockpiles of neuraminidase inhibitors to be used during influenza pandemics. The analysis is relying on a number of key assumptions, such as the effectiveness of NAI antivirals against mortality due to influenza, the probability of a pandemic occurring during the shelf life of the stockpile and the proportion of pandemic influenza deaths occurring in hospital. Many of these assumptions are based on a limited or controversial evidence base, however the authors acknowledge and address most of these limitations. The assumption that most pandemic deaths occur in hospitals, is based on the observation during the 2009 pandemic in the UK, however in many countries, already during severe influenza A(H3N2) epidemics, and during many previous pandemics, the majority of deaths are likely to occur in the community, outside of hospitals. It is confusing that the authors compare the costs of a population wide (80%) stockpile with the estimated benefits on hospital mortality only. Although this is discussed in the second paragraph of discussion, it would be helpful to see an analysis or results taking also into account outpatient and community mortality. We did not consider non-hospital mortality as there were no data on the effectiveness of NAIs outside of the hospital setting, where there may be differences in compliance and other factors, when the study was conducted. We note that the way we have set up the analysis is to try to be as conservative as possible: the highest stated costs with a justifiable patient pool. On this basis we note that if a decision to stockpile is supported under our assumptions then it will certainly be supported if there is any benefit outside of the hospital. 
Recently published analyses outside of the hospital setting suggest a potential benefit (https://doi.org/10.1093/cid/cix127); however, we opt to remain conservative in our analyses. It is likely that such an analysis would be useful for other countries than UK. Please discuss briefly the limitations of this approach and these assumptions, when replicating the study in other settings (such as differences in societal willingness to pay per QALY). We have amended the discussion to reflect this. In Box 1. the two columns are not aligned when viewing as a pop-up on MS Internet Explorer. This is an issue for the journal. In Figure 1. the references to UK, and the national pandemic flu service are not helpful and distract from the more general main message of this figure. We have removed that box from the figure."
}
]
},
{
"id": "19365",
"date": "27 Jan 2017",
"name": "Joel Kelso",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThis article reports on a probabilistic cost-effectiveness analysis of stockpiling neuraminidase inhibitor antiviral drugs to mitigate against pandemic influenza deaths.\n\nThe study is methodologically sound. The decision-theoretic approach which selects the optimal course of action based on the utility of each outcome and the probability of each outcome conditional on each decision is appropriate. In this case the actions are whether or not to stockpile NAIs for use in a pandemic, and the outcome is the cost of the antiviral stockpile and the expected number of deaths during the shelf-life period of the antivirals.\nThe model of the expected number of pandemic deaths is structurally sound and uses various appropriate data sources to quantify uncertainties present in all the parameters.\nThe conclusion that stockpiling NAIs is cost-effective for a sufficiently high willingness-to-pay cost per QALY follows from the model and the data used.\nHowever, I think that more attention needs to be drawn to the dependency of this result on the crucial antiviral effectiveness parameter. The methodology where all parameters are treated probabilistically in a uniform way is excellent; however additional one- or two-way sensitivity analyses are still valuable for providing insight into the effect of the most important parameters. 
The authors have done this for the proportion of hospitalised cases receiving antivirals; it seems appropriate to also do this for NAI effectiveness, given the ongoing debate on the subject.\nI have divided further comments into three sections: major essential revisions, commentary with discretionary additions, and minor technical clarifications needed.\nEssential Revision ------------------\nAs stated above, the reader would be well served with an additional figure similar to Figure 3, but plotting QALY threshold against mean NAI effectiveness. In my shallow experimentation with the author's model, it looks like at 20,000 GBP / QALY, NAIs cease being cost effective at around 0.94 effectiveness (relative risk, compared to 0.89).\n\nCommentary ----------\nThe suggestions made below I think might improve the article; however, I think the authors can best judge whether the additional effort and added complexity would be worthwhile or would be too diverting.\nI commend the authors for including the model code in the Appendix; I managed to run this code with relatively little effort.\nThe methodology of using expert opinion to mitigate potential bias in the studies estimating NAI effectiveness is a practical measure that is probably worthwhile. Some additional detail on the process would be appreciated. For example: how were assessors selected? How much time did the reviewers take in their bias estimates?\n\nIn the discussion it could be noted that in a future pandemic with a large CAR or CFR, the proportion of severe cases receiving hospital care and the level of care are likely to be lower, simply due to hospitals being overwhelmed. 
The estimates of proportion of deaths occurring in hospital are from the 2009 pandemic which was very mild.\n\nIf NAIs have any effect in preventing further transmission, e.g. if they shorten the period of viral shedding, then mass administration of antivirals may reduce the overall attack rate and consequent mortality even if NAIs are not effective for mortality reduction of severe cases. As the study's model does not capture this, this is another way in which the study is conservative.\n\nMinor Technical Revisions -------------------------\nThe CAR and CFR parameters used in the model are for a pandemic without NAI usage. Given that NAIs were used in the 2009 pandemic, should the CAR and CFR estimates for 2009 be included alongside those of previous pandemics? If the 2009 CAR and CFR estimates are, for example, based on global data where NAI usage might be negligible that would be OK; but if they are based primarily on UK or US data they should possibly be excluded.\n\nIn the Appendix page 4 there is a citation [20] that isn't given in a reference list.\n\nIn the last sentence of the 3rd paragraph, the RR derived based on the OR and 10% mortality is stated as 0.89. This is the same as the bias-corrected RR given in the next paragraph. Is this intentional? Or should it be the RR value based on the OR and 10% mortality (but without bias correction), in which case it should be 0.825 (from the formula).\n\nThe R / BUGS code in the Appendix worked almost without alteration. I found that I had to:\nInstall BUGS (OpenBUGS). Hoist the npv function to the top. Remove the codaPkg=TRUE setting to obtain a result object. (also the \"obs\" and \"qaly\" values appear to be dead code)\nIf F1000 allows additional appendix files this could be supplied as an additional plain ASCII file, to avoid scraping the text from the PDF and correcting resulting formatting.\n\nI can't find the support for the n_hosp data value of 136. 
The tot_hosp value of 125 appears in the Donaldson BMJ paper. That paper gives 138 for the total number of confirmed deaths due to pandemic influenza.",
"responses": [
{
"c_id": "2547",
"date": "16 Mar 2017",
"name": "Sam Watson",
"role": "Author Response",
"response": "We thank the reviewer for their comments and detail our responses below, point by point. The referee's text is in italics. Essential Revision ------------------ As stated above, the reader would be well served with an additional figure similar to Figure 3, but plotting QALY threshold against mean NAI effectiveness. In my shallow experimentation with the author's model, it looks like at 20,000 GBP / QALY, NAIs cease being cost effective at around 0.94 effectiveness (relative risk, compared to 0.89). We have replaced figure 3 to incorporate these considerations and additional commentary. Commentary ---------- The suggestions made below I think might improve the article however I think the authors can best judge whether the additional effort and added complexity would be worthwhile or would be too diverting. I commend the authors for including the model code in the Appendix, I managed to run this code with relatively little effort. The methodology of using expert opinion to mitigate potential bias in the studies estimating NAI effectiveness is a practical measure that is probably worthwhile. Some additional detail on the process would be appreciated. For example: how were assessors selected? How much time did the reviewers take in their bias estimates? We have added additional description in the Methods section although we also refer the referee to the cited article. In the discussion it could be noted that in a future pandemic with a large CAR or CFR, the proportion of severe cases receiving hospital care and the level of care are likely to be lower, simply due to hospitals being overwhelmed. The estimates of proportion of deaths occurring in hospital are from the 2009 pandemic which was very mild. 
If NAIs have any effect in preventing further transmission, e.g. if they shorten the period of viral shedding, then mass administration of antivirals may reduce the overall attack rate and consequent mortality even if NAIs are not effective for mortality reduction of severe cases. As the study's model does not capture this, this is another way in which the study is conservative. We will add these points to the discussion. Minor Technical Revisions ------------------------- The CAR and CFR parameters used in the model are for a pandemic without NAI usage. Given that NAIs were used in the 2009 pandemic, should the CAR and CFR estimates for 2009 be included alongside those of previous pandemics? If the 2009 CAR and CFR estimates are for example based on global data where NAI usage might be negligible that would be OK; but if they are based primarily on UK or US data they should possibly be excluded. We would argue that the 2009 observed CAR and CFR are relevant data points to infer the distribution of possible CAR and CFR values. It is possible that mass NAI distribution may alter the parameters of these distributions; however, without further information it is not possible to model this. Excluding the 2009 pandemic would bias our estimates, and given the small amount of data, this data point provides a relatively large amount of information. We therefore opt to use all available data. In the Appendix page 4 there is a citation [20] that isn't given in a reference list. This has been amended. In the last sentence of the 3rd paragraph, the RR derived based on the OR and 10% mortality is stated as 0.89. This is the same as the bias-corrected RR given in the next paragraph. Is this intentional? Or should it be the RR value based on the OR and 10% mortality (but without bias correction), in which case it should be 0.825 (from the formula). We believe the referee may be in error, as it is 0.83 in the third paragraph of Appendix B. 
However, we have resubmitted the revised appendix to ensure the correct version is available. The R / BUGS code in the Appendix worked almost without alteration. I found that I had to: install BUGS (OpenBUGS); hoist the npv function to the top; and remove the codaPkg=TRUE setting to obtain a result object. (Also, the \"obs\" and \"qaly\" values appear to be dead code.) If F1000 allows additional appendix files this could be supplied as an additional plain ASCII file, to avoid scraping the text from the PDF and correcting the resulting formatting. We have also provided a file to run the model in Stan. The models were initially run in WinBUGS before ‘upgrading’ to Stan. We have noted this in the Appendix but opt to provide both pieces of code for users of either program. I can't find the support for the n_hosp data value of 136. The tot_hosp value of 125 appears in the Donaldson BMJ paper. That paper gives 138 for the total number of confirmed deaths due to pandemic influenza. This typo has been amended."
}
]
}
] | 1
|
https://f1000research.com/articles/5-2293
|
https://f1000research.com/articles/6-126/v1
|
10 Feb 17
|
{
"type": "Research Article",
"title": "Monitoring disease activity and severity in lupus",
"authors": [
"Abidullah Khan",
"Iqbal Haider",
"Maimoona Ayub",
"Salman Khan",
"Iqbal Haider",
"Maimoona Ayub",
"Salman Khan"
],
"abstract": "Background: Systemic lupus erythematosus (SLE) is a relatively uncommon disease of young females in Pakistan. Usually, it has a relapsing-remitting course with variable severity and disease activity. Amongst the different clinical and laboratory parameters used to monitor disease activity in lupus, mean platelet volume (MPV) is a novel biomarker. Although MPV has been studied in other rheumatological conditions like rheumatoid arthritis, its role in adult SLE needs to be defined, especially in Pakistan. Methods: The aim of this study was to evaluate the role of MPV as a biomarker of disease activity in SLE. This study included 25 patients with active SLE, and another 25 participants with stable, inactive lupus. MPV was measured in each group and compared using SPSS version 16. MPV was also correlated with SLE disease activity index (SLEDAI) and erythrocyte sedimentation rate (ESR). Independent sample t-test and Pearson’s correlation tests were applied. Sensitivity and specificity of MPV were checked through ROC analysis. Results: The MPV of patients with active SLE (n=25, mean [M]=7.12, SD=1.01) was numerically lower than those in the inactive-SLE group (n=25, M= 10.12, SD=0.97), and this was statistically significant (P<0.001). MPV had an inverse relationship with both ESR (r=-0.93, P<0.001) and SLEDAI (r= -0.94, P<0.001). However, there was a strong positive correlation between ESR and SLEDAI (r=0.95, P<0.001). For MPV, a cutoff value of less than 8.5fl had a sensitivity of 92% and a specificity of 100% (P< 0.001). Conclusions: Higher disease activity in SLE is associated with a correspondingly low MPV.",
"keywords": [
"systemic lupus erythematosus",
"blood platelets",
"platelets aggregation"
],
"content": "Abbreviations\n\nMPV: Mean Platelet Volume, SLEDAI: Systemic Lupus Erythematosus Disease Activity Index, SLE: Systemic Lupus Erythematosus, ESR: Erythrocyte Sedimentation Rate, CRP: C-reactive Protein, ACR: American College of Rheumatology.\n\n\nIntroduction\n\nSystemic lupus erythematosus (SLE) is a chronic autoimmune disorder that can affect any organ system of the body. It has an annual incidence of 5 per 100,000 of the general population1. There are racial and ethnic variations, with higher rates reported in Black and Hispanic peoples2. Usually, this disease with protean manifestations has a remitting relapsing course; however, it has a tendency to vary from acutely progressive to chronic indolent forms1,2.\n\nThe clinical manifestations of SLE range from constitutional symptoms, such as fever, sweats, weight loss, joint pains and skin rashes (including the classic butter fly rash), to more serious features, including the involvement of the central nervous system and kidneys. However, to make a clinical diagnosis of SLE, simultaneous or sequential presence of 4 out of a total of 11 criteria, proposed by the American College of Rheumatology (ACR), must be present3,4.\n\nConsidering the remitting relapsing nature of most cases of SLE, it is important to have a biomarker to monitor its disease activity. Although, the most effective and reliable tool to measure SLE disease activity is still open to debate, there are fortunately many validated measures, including the Systemic Lupus Activity Measure, Systemic Lupus Erythematosus Disease Activity Index (SLEDAI), Lupus Activity Index, European Consensus Lupus Activity Measurement, and British Isles Lupus Activity Group5. These tools have been found to be beneficial in day to day practice6,7.\n\nNotable issues, apart from some other technical limitations, with the aforementioned severity assessment indices are that these validated instruments are confusing, lengthy and time consuming. 
However, very recently, mean platelet volume (MPV) has been shown to be a very good and easily accessible marker of disease activity in lupus8–10. Although MPV has been studied well as a simple but reliable inflammatory biomarker in several diseases, such as rheumatoid arthritis, scleroderma, rheumatic fever, ankylosing spondylitis and even chronic obstructive pulmonary disease, there is still a relative scarcity of data on its role as a disease severity indicator in lupus11–15. Therefore, we performed the present study to find out whether MPV correlates with SLEDAI and whether it can be used as a predictor of lupus severity and activity.\n\n\nMethods\n\nThis cross-sectional study was conducted in the Department of Medicine of Khyber Teaching Hospital (KTH; Peshawar, Pakistan) between January 2015 and July 2016. Medical records, our hospital's intranet and referrals from general practitioners were the sources of recruitment. Patient information sheets, letters and direct contact by the investigators, who were directly involved in the provision of healthcare to potential subjects, were the chief methods of recruitment. This study was approved by the Ethics Review Committee of the hospital and written informed consent was obtained from every participant (approval number, KTH/2015/Med-A/86C). The patient sample was collected using a consecutive-random sampling technique. Nevertheless, as is true of cross-sectional studies, confounding and sample selection bias may be limitations to the generalization of our results.\n\nPatients of both genders in the age range of 18–70 years, with either newly diagnosed or pre-existing SLE, were included in the study. In order to avoid bias, only those patients with a normal platelet count were included. This is because MPV is influenced by the number of platelets in circulation. 
The ACR criteria for the diagnosis of SLE were used as a diagnostic tool.\n\nIndividuals who had a history of smoking, acute or chronic infectious diseases, hemoglobin >16.5 g/dl, thrombocytopenia (platelets <150,000/mm3), hypertension, angina pectoris, myocardial infarction, diabetes mellitus, hypo- or hyperthyroidism, anti-phospholipid syndrome, recurrent miscarriage, amyloidosis, thrombosis or acute or chronic renal failure were excluded from the study. Patients who had either clinical, biochemical or serological evidence of an autoimmune disorder other than SLE, such as rheumatoid arthritis, Sjogren’s syndrome or scleroderma, were also excluded from the study.\n\nThe sample size was calculated using a 5% margin of error and a 95% CI with the WHO’s formula for determination of sample size in health studies (http://www.who.int/chp/steps/resources/sampling/en/). A total of 64 patients were assessed initially. However, only 50 of them satisfied the inclusion and exclusion criteria. The 50 recruited participants were divided into two equal groups, 25 subjects each in the active-SLE and the inactive-SLE groups, as detailed below.\n\nThe division of the patients into the two groups was based on their final score using the Systemic Lupus Erythematosus Disease Activity Index-2000 (SLEDAI-2000)16. Those who scored 5 or higher were classified as active-SLE, while those with a final score of less than 5 were regarded as patients with inactive-SLE. Patients with active-SLE, fulfilling the inclusion criteria (SLEDAI-2000), were admitted to one of the five medical wards of KTH for further workup and treatment. 
However, those with stable inactive disease were recruited into the study from the Outpatient Department of KTH.\n\nA total of 5ml of venous blood was taken in an EDTA tube from every participant for the measurement of complete blood count, including hemoglobin, white blood cells, platelets, MPV, and erythrocyte sedimentation rate (ESR). All the blood samples were analyzed within one hour of sampling. The complete blood count, including all the hematological parameters, was performed using the same hematology analyzer, Medonic. The tests were performed and read by the same laboratory technician of KTH.\n\nAll the data was entered on a structured questionnaire specifically designed for this study (Supplementary File 1). Data was transferred to and analyzed using SPSS version 16. Means and standard deviations were determined for quantitative variables. An independent sample t-test was run to compare means of MPV between the two groups. ROC analysis was performed to estimate cutoff values for sensitivity and specificity of MPV. Finally, Pearson’s correlation test was used to assess any association between MPV, ESR and SLEDAI. A P value of less than 0.05 was considered significant.\n\n\nResults\n\nOf the 50 participants, 84% were female and 16% were male. There were 4 males and 21 females in each of the active- and inactive-SLE groups. Other demographic details are shown below (Figure 1). The overall mean age of all the participants was 27.94±2.52 years. The mean age of the patients in the active-SLE group (M=27.84, SD=2.06) was comparable to that of the inactive-SLE group (M=29.60, SD=2.38). The clinical features of patients with active- and inactive-SLE are shown in Table 1 and Table 2, respectively. 
In the active-SLE group, 11 (44%) patients had evidence of clinically significant proteinuria; details of the histological sub-type of lupus nephritis are given in Table 3.\n\nThe MPV of patients with active-SLE (n=25, M=7.12, SD=1.01) was numerically lower than that of the inactive-SLE group (n=25, M=10.12, SD=0.97). An independent sample t-test was run to compare the means of the two groups. The assumption of normality was tested by the Kolmogorov-Smirnov test and was found tenable (P>0.05). Moreover, similar results were obtained on skewness and kurtosis testing (skewness=0.01, kurtosis=-1.06). The assumption of homogeneity of variances was tested using Levene’s test and was found tenable (F(48)=0.23; P=0.63). The results of the independent t-test showed a statistically significant difference between the mean values of MPV of the two groups (t(48)=10.69; P<0.001; Cohen’s D=3.02). The 95% confidence interval (CI) was -3.56 to -2.44. A receiver operating characteristic (ROC) curve was used to check the specificity and sensitivity of MPV (Figure 2). The ROC curve had an area under the curve of 0.98. At a value of 8.5fl for MPV, the sensitivity and specificity were 92% and 100%, respectively (P<0.001; 95% CI -0.96 to +1.01). At a cutoff value of 8.5fl, MPV has a maximum sensitivity and specificity. Therefore, we suggest that, at an MPV value of <8.5fl, the probability of active SLE increases remarkably.\n\nThe SLEDAI scores of the two groups, active-SLE (M=16.36, SD=4.48) and inactive-SLE (M=3, SD=0.82), differed at a statistically significant level of P<0.001. The ESR was higher in patients with active SLE (M=49.52, SD=12.93) than in those with stable disease (M=13.76, SD=1.72) (P<0.001). 
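The reported comparison can be checked from the published summary statistics alone. A sketch of a pooled-variance Student t-test and Cohen's d, plus a normal-approximation check of the 8.5fl cutoff's sensitivity; this is a reconstruction from the stated means and SDs, not the authors' SPSS output:

```python
import math

# Reconstruction of the reported two-sample test from the summary statistics
# in the text (active: M=7.12, SD=1.01; inactive: M=10.12, SD=0.97; n=25 per
# group). A pooled-variance Student t-test is implied by the reported df=48.
# This is a sanity check, not the authors' SPSS output.

def pooled_t_and_cohens_d(m1, s1, n1, m2, s2, n2):
    pooled_var = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
    pooled_sd = math.sqrt(pooled_var)
    t = (m1 - m2) / (pooled_sd * math.sqrt(1 / n1 + 1 / n2))
    d = abs(m1 - m2) / pooled_sd                 # Cohen's d (pooled SD)
    return t, d

t, d = pooled_t_and_cohens_d(7.12, 1.01, 25, 10.12, 0.97, 25)
print(round(abs(t), 2), round(d, 2))   # ~10.71 and ~3.03, close to the
                                       # reported t(48)=10.69, Cohen's D=3.02
                                       # (differences reflect rounding)

# Normal-approximation check of the 8.5fl cutoff: sensitivity is the chance
# that an active-SLE patient falls below the cutoff; this gives ~91%,
# consistent with the 92% observed empirically.
phi = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))  # standard normal CDF
sensitivity_approx = phi((8.5 - 7.12) / 1.01)
print(round(sensitivity_approx, 2))
```

The close agreement suggests the reported t and effect size follow directly from the group means and SDs.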
The details of the different hematological parameters are given in Table 4.\n\nSLEDAI, Systemic Lupus Erythematosus Disease Activity Index; MPV, mean platelet volume; ESR, erythrocyte sedimentation rate; WBC, white blood cells; Hb, hemoglobin.\n\nPearson’s correlation test was run to assess any relationship between MPV, SLEDAI and ESR in the active-SLE group. The results showed a statistically significant, negative correlation of MPV with both ESR (r=-0.93, P<0.001) and SLEDAI (r=-0.94, P<0.001). Moreover, there was a strong positive correlation between ESR and SLEDAI (r=0.95, P<0.001). Hence, it can be argued that increased disease activity of SLE is associated with both a higher ESR and SLEDAI score, and a correspondingly low MPV (P<0.001).\n\n\nDiscussion\n\nSLE, which is more common in Black women, has a female to male ratio of approximately 9:111. Our study group comprised 84% females and 16% males. Furthermore, most of the participants in our study were in the third decade of life. Participants with active-SLE were younger than those with the stable form of the disease. These findings are comparable to international statistics17,18. We observed that MPV was significantly lower in patients with active lupus than in those without a flare. Similarly, we found that MPV had a tendency to be lower with a correspondingly higher ESR in individuals with actively flaring SLE and vice versa. Moreover, we observed that SLEDAI was as effective as both ESR and MPV as an indicator of disease activity in patients with SLE.\n\nGasparyan et al. concluded that high MPV correlated with a variety of diseases, like cardio- and cerebrovascular disorders, venous and arterial thrombosis and low-grade inflammatory conditions19. However, it was observed that high-intensity inflammatory disorders, such as active rheumatoid arthritis or relapses of familial Mediterranean fever, had low values of MPV, which could be reversed with anti-inflammatory medications20,22. 
Although we did not check the effect of anti-inflammatory medications, like corticosteroids, on MPV, we observed a strong inverse relationship of MPV with lupus severity and activity. Therefore, we recommend MPV as a global marker of disease activity in patients with SLE.\n\nApart from MPV, ESR has traditionally been used as a marker of disease severity in patients with inflammatory conditions, and SLE specifically23. Similarly, C-reactive protein (CRP) has been studied, but has not been found to be a marker of disease activity in lupus24. It is worth mentioning that there is usually a discordance between ESR and CRP in actively diseased SLE patients25. In our study, ESR correlated positively with SLEDAI and negatively with MPV, which is consistent with these previous results.\n\nWhy is MPV low in active SLE? The answer cannot be clearly stated. However, previous studies have shown that, in active inflammatory conditions, especially rheumatoid arthritis and SLE, large and activated platelets are consumed preferentially at the site of inflammation, leaving small platelets behind20–21. This may also explain the lower MPV values in actively flaring SLE patients in our study group.\n\nConsidering active-SLE as a state of severe inflammation, those with active disease in our study were treated with a 1g daily dose of methylprednisolone for three days, followed by a maintenance dose of 1mg/kg oral prednisolone for another 4–6 weeks. Although all participants with active lupus achieved dramatic symptomatic and clinically obvious improvement, MPV was not studied after the completion of steroid therapy. However, in other studies, where pre-treatment MPV was compared with post-treatment MPV, it was observed that, after successful treatment with anti-inflammatory medications, MPV reverted to normal21–26. 
Considering this data, we would advocate future studies focusing on comparing pre- and post-treatment MPV in patients treated with corticosteroids for an acute flare of SLE.\n\nIt is noteworthy that, although most of the studies found an inverse relationship between active-SLE and MPV in adults, a positive association was observed between MPV and disease activity in juvenile lupus erythematosus10. This finding of a positive correlation between MPV and disease activity in juvenile lupus is in sharp contrast to the results of our and similar previous studies8,9.\n\nNotably, the results of this study were in accordance with the expectations of the authors. Moreover, considering recent research studies showing a link between low MPV and disease activity in SLE patients, our study will add further evidence. However, a limitation to the conduct and results of this study was a small sample size. This is because SLE is not very common in Pakistan. Therefore, in order to highlight the actual role of MPV as a biomarker of lupus severity, we recommend that cohort studies be done in the future, both in Pakistan and abroad.\n\n\nConclusions\n\nMPV is an excellent biomarker to monitor disease activity in SLE, as higher disease activity will reduce MPV and vice versa. Moreover, MPV has an inverse relationship with both ESR and SLEDAI. At a cutoff value of less than 8.5fl, MPV has an excellent sensitivity and specificity.\n\n\nEthics approval and consent\n\nThis study was approved by the Ethics Review Committee of Khyber Teaching Hospital, Peshawar, Pakistan (approval number, KTH/2015/Med-A/86C). Informed written consent was obtained from every participant.\n\n\nData availability\n\nDataset 1: Raw data of disease severity indicators in lupus. This file contains data regarding disease severity indicators and demographics of patients with SLE. This coded data was stored on SPSS version 16. Group: 1, active-SLE; 2, inactive-SLE. Gender: 1, male; 2, female. 
SLEDAI, systemic lupus erythematosus disease activity index; MPV, mean platelet volume; ESR, erythrocyte sedimentation rate; WBC, white blood cell (thousand/mm3); Hb, hemoglobin (gm/dl); platelets, platelet count × 103. doi, 10.5256/f1000research.10763.d15106427",
"appendix": "Author contributions\n\n\n\nAK, IH, MA and SK conceived the idea and formulated the study design. All the authors contributed to the drafting of this manuscript. All the authors read the manuscript before approval and submission.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nSupplementary material\n\nSupplementary File 1: Lupus severity and disease activity questionnaire. This questionnaire was used to gather data from the participants. SLEDAI, systemic lupus erythematosus disease activity index; MPV, mean platelet volume; ESR, erythrocyte sedimentation rate; WBC, white blood cell (thousand/mm3); active SLE, SLEDAI >5 points; inactive SLE, SLEDAI <5 points.\n\nClick here to access the data.\n\n\nReferences\n\nMagro-Checa C, Zirkzee EJ, Huizinga TW, et al.: Management of Neuropsychiatric Systemic Lupus Erythematosus: Current Approaches and Future Perspectives. Drugs. 2016; 76(4): 459–483. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCrampton SP, Morawski PA, Bolland S: Linking susceptibility genes and pathogenesis mechanisms using mouse models of systemic lupus erythematosus. Dis Model Mech. 2014; 7(9): 1033–1046. PubMed Abstract | Publisher Full Text | Free Full Text\n\nArriens C, Mohan C: Systemic lupus erythematosus diagnostics in the “omics” era. Int J Clin Rheumtol. 2013; 8(6): 671–687. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBartels CM, Buhr KA, Goldberg JW, et al.: Mortality and cardiovascular burden of systemic lupus erythematosus in a US population-based cohort. J Rheumatol. 2014; 41(4): 680–687. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRai SK, Yazdany J, Fortin PR, et al.: Approaches for estimating minimal clinically important differences in systemic lupus erythematosus. Arthritis Res Ther. 2015; 17(1): 143. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nhttp://www.medscape.com/viewarticle/848817_2\n\nMikdashi J, Nived O: Measuring disease activity in adults with systemic lupus erythematosus: the challenges of administrative burden and responsiveness to patient concerns in clinical research. Arthritis Res Ther. 2015; 17(1): 183. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRupa-Matysek J, Gil L, Wojtasińska E, et al.: The relationship between mean platelet volume and thrombosis recurrence in patients diagnosed with antiphospholipid syndrome. Rheumatol Int. 2014; 34(11): 1599–1605. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSafak S, Uslu AU, Serdal K, et al.: Association between mean platelet volume levels and inflammation in SLE patients presented with arthritis. Afr Health Sci. 2014; 14(4): 919–924. PubMed Abstract | Publisher Full Text | Free Full Text\n\nYavuz S, Ece A: Mean platelet volume as an indicator of disease activity in juvenile SLE. Clin Rheumatol. 2014; 33(5): 637–41. PubMed Abstract | Publisher Full Text\n\nŞahin A, Yetişgin A, Şahin M, et al.: Can Mean Platelet Volume be a Surrogate Marker of Inflammation in Rheumatic Diseases? West Indian Med J. 2016; 65(1): 165–9. PubMed Abstract | Publisher Full Text\n\nYolbas S, Yildirim A, Gozel N, et al.: Hematological Indices May Be Useful in the Diagnosis of Systemic Lupus Erythematosus and in Determining Disease Activity in Behçet's Disease. Med Princ Pract. 2016; 25(6): 510–516. PubMed Abstract | Publisher Full Text\n\nZhang M, Li Y, Zhang J, et al.: Mean platelet volume is elevated in exacerbated and convalescent COPD patients. Clin Chim Acta. 2015; 451(Pt B): 227–31. PubMed Abstract | Publisher Full Text\n\nBalbaloglu O, Korkmaz M, Yolcu S, et al.: Evaluation of mean platelet volume (MPV) levels in patients with synovitis associated with knee osteoarthritis. Platelets. 2014; 25(2): 81–5. 
PubMed Abstract | Publisher Full Text\n\nUlasli SS, Ozyurek BA, Yilmaz EB, et al.: Mean platelet volume as an inflammatory marker in acute exacerbation of chronic obstructive pulmonary disease. Pol Arch Med Wewn. 2012; 122(6): 284–90. PubMed Abstract\n\nGladman DD, Ibañez D, Urowitz MB: Systemic lupus erythematosus disease activity index 2000. J Rheumatol. 2002; 29(2): 288–91. PubMed Abstract\n\nHelmick CG, Felson DT, Lawrence RC, et al.: Estimates of the prevalence of arthritis and other rheumatic conditions in the United States. Part I. Arthritis Rheum. 2008; 58(1): 15–25. PubMed Abstract | Publisher Full Text\n\nAmbrose N, Morgan TA, Galloway J, et al.: Differences in disease phenotype and severity in SLE across age groups. Lupus. 2016; 25(14): 1542–1550. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGasparyan AY, Ayvazyan L, Mikhailidis DP, et al.: Mean platelet volume: a link between thrombosis and inflammation? Curr Pharm Des. 2011; 17(1): 47–58. PubMed Abstract | Publisher Full Text\n\nAksu H, Ozer O, Unal H, et al.: Significance of mean platelet volume on prognosis of patients with and without aspirin resistance in settings of non-ST-segment elevated acute coronary syndromes. Blood Coagul Fibrinolysis. 2009; 20(80): 686–93. PubMed Abstract | Publisher Full Text\n\nGasparyan AY, Sandoo A, Stavropoulos-Kalinoglou A, et al.: Mean platelet volume in patients with rheumatoid arthritis: the effect of anti-TNF-α therapy. Rheumatol Int. 2010; 30(8): 1125–9. PubMed Abstract | Publisher Full Text\n\nKim DA, Kim TY: Controversies over the interpretation of changes of mean platelet volume in rheumatoid arthritis. Platelets. 2011; 22(1): 79–80. PubMed Abstract | Publisher Full Text\n\nDima A, Opris D, Jurcut C, et al.: Is there still a place for erythrocyte sedimentation rate and C-reactive protein in systemic lupus erythematosus? Lupus. 2016; 25(11): 1173–9. 
PubMed Abstract | Publisher Full Text\n\nGaitonde S, Samols D, Kushner I: C-reactive protein and systemic lupus erythematosus. Arthritis Rheum. 2008; 59(12): 1814–1820. PubMed Abstract | Publisher Full Text\n\nPisetsky DS: Anti-DNA and autoantibodies. Curr Opin Rheumatol. 2000; 12(5): 364–368. PubMed Abstract | Publisher Full Text\n\nVakili M, Ziaee V, Moradinejad MH, et al.: Changes of Platelet Indices in Juvenile Idiopathic Arthritis in Acute Phase and After Two Months Treatment. Iran J Pediatr. 2016; 26(3): e5006. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKhan A, Haider I, Ayub M, et al.: Dataset 1 in: Monitoring disease activity and severity in lupus. F1000Research. 2017. Data Source"
}
|
[
{
"id": "20146",
"date": "13 Feb 2017",
"name": "Guillermo Delgado-García",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nIn this study, Khan et al explored the relationship between SLE disease activity and mean platelet volume (MPV). After reading this article, I have the following suggestions:\nI think it would be good if the title were more specific (perhaps including such terms as “platelet size” or “platelet volume”).\n\nContrary to what is mentioned in the last paragraph of the Introduction, Rupa-Matysek et al.1 did not include SLE patients in their study.\n\nOur team also published an article on this topic2, which is not mentioned in the Introduction, even when it appeared online before that of Yolbas et al3.\n\nI would like to know if the sampling technique was consecutive or random. I'm inclined to think that consecutive (i.e., non-random) sampling was used.\n\nI'm not sure if this STEPS Sample Size Calculator was the ideal one to estimate the sample size in the present study. I would like to know what other values were used in this calculation (level of confidence measure, MOE, baseline levels of the indicators, Deff, etc). I would also like to know which outcome was used to estimate the sample size. MPV? 
If so, why was a formula to determine the appropriate sample size for detecting a difference between the means of two samples not used?\n\nOne merit of this study when comparing it with others is that all the blood samples were analyzed within less than one hour after sampling, since MPV increases over time in EDTA tubes.\n\nI think that the bar chart is not ideal to display demographic information. Perhaps one table could be just enough.\n\nI suggest the first two tables could be merged into one.\n\nWhen calculating the correlations it is important to remember that the SLEDAI is an ordinal variable4.\n\nIn the discussion, when addressing the issue of pathophysiological mechanisms that could explain the decrease in MPV, two articles are cited. However, contrary to what is mentioned, neither of these articles deals with SLE patients.\n\nIt would be worthwhile to further discuss the findings of this study by comparing them more specifically with the other studies on this same topic.\n\nThere are some grammatical errors that would be worth correcting (e.g., an unnecessary semicolon in the second paragraph of the Methods).",
"responses": [
{
"c_id": "2505",
"date": "24 Feb 2017",
"name": "Abidullah Khan",
"role": "Author Response",
"response": "We thank Guillermo Delgado-Garcia from Mexico for reading our manuscript thoroughly and for pin-pointing various discrepancies. We have now corrected the shortcomings and we believe that, the changes made in the light of recommendation of our honorable reviewer will add a lot to the science of our article. Please find below, a point-by-point response to the reviewers comments; The title of the article has been modified as suggested. \"Contrary to what is mentioned in the last paragraph of the Introduction, Rupa-Matysek et al. did not include SLE patients in their study\". This has been rephrased. Thank you for identifying this. The article published by your team is very informative and has now been referred to. The sampling technique was consecutive. The sentence has been rewritten. Sorry for the mistake in version 1. \"I'm not sure if this STEPS Sample Size Calculator was the ideal one to estimate the sample size in the present study. I would like to know what other values were used in this calculation (level of confidence measure, MOE, baseline levels of the indicators, Deff, etc). I would also like to know which outcome was used to estimate the sample size. MPV? If so, why was not used a formula to determine the appropriate sample size for detecting a difference between the means of two samples?\" We used both the methods. However, we mentioned the WHO sample size calculator only. All the missing info has now been added. Thanks for your appreciation of the merits of our study. Bar chart has now been replaced with a table as suggested. \"I suggest the first two tables could be merged into one\". Merging the two tables will make the tables look lengthy and possibly confusing. Therefore, we believe that, illustration of the clinical features of the two groups in separate tables will be better read and understood. 
\"When calculating the correlations it is important to remember that the SLEDAI is an ordinal variable.\" We understand that SLEDAI is an ordinal variable. However, we used the final SLEDAI score rather than the overall conclusion (active/inactive) while running the correlation tests. As the SLEDAI score is continuous and in our case, was normally distributed, we believe running a correlation test was tenable. \"In the discussion, when addressing the issue of pathophysiological mechanisms that could explain the decrease in MPV, two articles are cited. However, contrary to what is mentioned, neither of these articles deals with SLE patients.\" This area has been rephrased. Thank you for pointing it out. In our opinion, discussing the study further, will unnecessary lengthen the text of this article. Every effort has been made to correct the English and grammar. We thank you once more for your efforts."
}
]
},
{
"id": "20320",
"date": "17 Feb 2017",
"name": "Alina Dima",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nTitle: I don’t find the term “monitoring” appropriate as it is presented a cross-sectional study with only one MPV determination and I so think that the use of assessment or determination could be tried. Also the term MPV (as possible marker of SLE disease activity) should be included in title.\n\nAbstract: First two phrases of the abstract are too general, not directly related to this article subject. I think that for the background the idea MPV – activity – Inflammation – SLE is enough. In the method we understand that 25 patients with active vs. 25 inactive were chosen ... it should be clear noted; e.g. cross-sectional, prospective, successively inclusion and then how the two subgroups were defined. I don’t think that the details of statistic should appear in the abstract - methods, maybe a more detailed conclusion.\n\nIntroduction: As for the abstract, I find the first two paragraphs are too general, not directly related to the article topic. For the classification criteria, instead of classification criteria up-dated ACR 1997 (that should be cited properly) I would propose the use of SLICC 2012 or both criteria sets; but anyway, the diagnosis would be the same as the last criteria set is more sensitive.\n\nMethods/ Results: There are many exclusion criteria presented, maybe there could be some words in introduction or discussion about these factors influence on MPV (as it was discussed for thrombocytopenia) and reasons of exclusion. 
I understand from methods that it was a cross-sectional study with random, consecutive inclusion and then the lot was split in two groups according to SLE disease activity (assessed by SLEDAI). How was the SLEDAI cut-off of 5 points determined? This cut-off appears in previous published research, it is the mean value (as I understand all variables turned out to be parametric), or the group was split in two groups of similar patients number? And then, the bivariate correlations were realized only in the group with active SLE, so the results are applicable only in this subgroup of patients. I would propose a logistic regression for the entire lot (50 patients), with MPV, ESR (and adjusted for age and gender) as predictors for high SLEDAI. However, the degree of correlation obtained is very high and so it sustains the research conclusion.\n\nDiscussion: the phrase “SLEDAI was as effective as both ESR and MPV” should be written differently (MPV was correlated with/ is an effective marker of ...) as the research is about MPV, disease activity was assessed by SLEDAI and ESR was noted as parameter known to correlates with disease activity. In the third paragraph, I don’t find a relation between the discussion on CRP and the same paragraph last phrase. Of course, data on ESR usefulness as marker of SLE disease activity should be discussed in order to understand why SLEDAI and ESR as standard. And maybe also data on ESR and MPV in lupus or other diseases, if there are.\n\nConclusion: I think that the first phrase of conclusion should be reformulated. The term “monitor” I already discussed; “higher disease will reduce MPV” could be more specific; and “vice versa” I’m not sure that this can be concluded as the study does not discuss distinctly the relation between higher MPV with lower activity in the subgroup of inactive lupus patients (of course, it looks logical). 
I think also that the present last phrase of conclusion should be ended “ sensitivity and specificity for ...” and in active SLE, without APS ...",
"responses": [
{
"c_id": "2509",
"date": "24 Feb 2017",
"name": "Abidullah Khan",
"role": "Author Response",
"response": "We are really grateful to Alina Dima for her invaluable comments on our article. Please find below a point-wise response to the comments. Title: \"I don’t find the term “monitoring” appropriate as it is presented a cross-sectional study with only one MPV determination and I so think that the use of assessment or determination could be tried. Also the term MPV (as possible marker of SLE disease activity) should be included in title.\" Reply: This suggestion has been incorporated. Abstract: \"First two phrases of the abstract are too general, not directly related to this article subject. I think that for the background the idea MPV – activity – Inflammation – SLE is enough. In the method we understand that 25 patients with active vs. 25 inactive were chosen ... it should be clear noted; e.g. cross-sectional, prospective, successively inclusion and then how the two subgroups were defined. I don’t think that the details of statistic should appear in the abstract - methods, maybe a more detailed conclusion.\" Reply: Abstract has been re-phrased as suggested. However, we believe that, the statistics should appear in the abstract, as they give an overall idea to the researcher of what we did and what s/he may expect to find in the results? Introduction: As for the abstract, I find the first two paragraphs are too general, not directly related to the article topic. For the classification criteria, instead of classification criteria up-dated ACR 1997 (that should be cited properly) I would propose the use of SLICC 2012 or both criteria sets; but anyway, the diagnosis would be the same as the last criteria set is more sensitive. Reply: We have intentionally left the first two paragraphs as general looking. This is for the purpose of education of the young researchers and to give an impression as, how does SLE behave normally ? 
Methods/ Results: There are many exclusion criteria presented, maybe there could be some words in introduction or discussion about these factors influence on MPV (as it was discussed for thrombocytopenia) and reasons of exclusion. Reply: Sentence has now been added regarding the influence of the set exclusion criteria on MPV and the study results. I understand from methods that it was a cross-sectional study with random, consecutive inclusion and then the lot was split in two groups according to SLE disease activity (assessed by SLEDAI). How was the SLEDAI cut-off of 5 points determined? This cut-off appears in previous published research, it is the mean value (as I understand all variables turned out to be parametric), or the group was split in two groups of similar patients number? And then, the bivariate correlations were realized only in the group with active SLE, so the results are applicable only in this subgroup of patients. I would propose a logistic regression for the entire lot (50 patients), with MPV, ESR (and adjusted for age and gender) as predictors for high SLEDAI. However, the degree of correlation obtained is very high and so it sustains the research conclusion. Reply: The recommended changes have been made. Discussion: the phrase “SLEDAI was as effective as both ESR and MPV” should be written differently (MPV was correlated with/ is an effective marker of ...) as the research is about MPV, disease activity was assessed by SLEDAI and ESR was noted as parameter known to correlates with disease activity. In the third paragraph, I don’t find a relation between the discussion on CRP and the same paragraph last phrase. Of course, data on ESR usefulness as marker of SLE disease activity should be discussed in order to understand why SLEDAI and ESR as standard. And maybe also data on ESR and MPV in lupus or other diseases, if there are. Reply: Discussion has been modified at points suggested by the respectable reviewer. 
Conclusion: I think that the first phrase of conclusion should be reformulated. The term “monitor” I already discussed; “higher disease will reduce MPV” could be more specific; and “vice versa” I’m not sure that this can be concluded as the study does not discuss distinctly the relation between higher MPV with lower activity in the subgroup of inactive lupus patients (of course, it looks logical). I think also that the present last phrase of conclusion should be ended “ sensitivity and specificity for ...” and in active SLE, without APS ... Reply: The conclusion has been amended as recommended by the reviewer. Moreover, the term 'vice versa' has been removed. Thank you once more for your time and precious comments."
}
]
}
] | 1
|
https://f1000research.com/articles/6-126
|
https://f1000research.com/articles/6-278/v1
|
15 Mar 17
|
{
"type": "Research Article",
"title": "Replication of the principal component analyses of the human genome diversity panel",
"authors": [
"Thomas Charlon",
"Alessandro Di Cara",
"Sviatoslav Voloshynovskiy",
"Jérôme Wojcik",
"Alessandro Di Cara",
"Sviatoslav Voloshynovskiy",
"Jérôme Wojcik"
],
"abstract": "Background. In 2008, several principal component analyses (PCAs) applied on 660,918 single-nucleotide polymorphisms (SNPs) from 938 individuals from 51 worldwide populations of the Human Genome Diversity Panel were published by Li et al. PCAs were applied on subsets of individuals sharing a common geographic origin and showed that in several geographic regions, genome-wide variations of SNPs grouped individuals by populations in the two first principal components. In this study, we replicated the PCAs applied on two geographic subsets, first on individuals from Europe and second on individuals from the Middle East & North Africa. Methods. Quality control, feature selection, and PCA were applied on each geographic subset. The results were displayed on the two first principal components and compared to the original figures. Results. The replicated figures were found to match closely to the original figures. Conclusions. Therefore, the main results were replicated and can be independently reproduced by using publicly available data, source code, and computing environment.",
"keywords": [
"Bioinformatics",
"Evolutionary/Comparative Genetics",
"Genomics"
],
"content": "Introduction\n\nQuartz Bio and the Stochastic Information Processing group are involved in the PRECISESADS project (http://www.precisesads.eu/), which aims at reclassifying Systemic Autoimmune Diseases (SADs), a group of chronic inflammatory conditions characterized by the presence of unspecific autoantibodies in the serum and resulting in serious clinical consequences, based on genetic and molecular biomarkers rather than clinical criteria.\n\nIn order to use genetic similarities to deliver personalized treatments to patients affected by SADs as well as other diseases, it is important to first understand the genetic structures in healthy populations.\n\nIn 2008, Li et al.1 showed that although specific world regions have different genetic origins, all revealed population structures in principal component analyses (PCAs). Similar population structures were also observed in studies using other genome-wide variations datasets2,3.\n\nLi et al. applied PCAs on subsets of individuals from two geographic regions, Europe and the Middle East & North Africa, and displayed the results on the two first principal components in their article as Figures 2A and B, respectively, (with the latter labeled only Middle East).\n\nIn an attempt to replicate these two figures, we performed quality control, minor allele frequency filtering, tag SNP selection4, and PCAs on both regional subsets of the SNP microarray data. The PCAs were then displayed on the first two principal components.\n\nThe replicated figures were found to match closely to the original figures, and therefore confirmed a successful replication.\n\n\nMethods\n\nThe dataset consisted of two files: a zip file including the genotype data of 660,918 SNPs from 1,043 individuals with the annotations of the SNPs, and a text file composed of the annotations of 953 individuals (see Data and software availability).\n\nThe annotations of individuals were used to create two subsets of the data. 
The first contained 157 individuals from Europe and the second contained 163 individuals from the Middle East & North Africa.\n\nFor each geographic region subset of the data, we verified that no individuals had missing value rates above 3% and excluded SNPs with missing value rates above 1%. An additive genetic model was then used to encode each A/B SNP (A/A = 0, A/B = 1, B/B = 2), which converts categorical SNP values to numerics by assuming that the effect of the A/B heterozygote and B/B homozygote are proportional to the number of B alleles. SNPs with minor allele frequency below 5% were excluded to remove rare variants, which are more prone to genotyping errors. In addition, in order to decrease the required computation time and memory usage, redundant SNPs were removed by applying TagSNP4 (r2 > 0.8, window of 500,000 base pairs). The missing values were imputed by random sampling of each SNP. Then each SNP was centered and scaled to unit variance. All steps were performed using the SNPClust R package v1.0.02.\n\nFor the Europe subset, a total of 375,164 SNPs from 157 individuals were selected for analysis. This defines our Europe analysis set.\n\nFor the Middle East & North Africa subset, a total of 412,979 SNPs from 163 samples were selected for analysis. This defines our Middle East & North Africa analysis set.\n\nFor comparison, the supporting online material of Li et al. reported that individuals with missing value rates above 2.5% and SNPs with missing value rates above 5% were excluded. Table S1 of Li et al. reports that 156 individuals from Europe and 160 from the Middle East & North Africa were used and the supporting online material reports that 642,690 SNPs were used.\n\nPCAs were applied on the two analysis sets and displayed using the SNPClust R package v1.0.02. Principal component analysis (PCA) is a dimensionality reduction method, which projects SNPs by linear combination to maximize the variance on successive axes, i.e. 
principal components, while constraining the axes to be orthogonal.\n\nThe supporting online material of Li et al. reports that they first computed the Identity-by-State (IBS) matrix among the 938 individuals by using PLINK (version not provided)5 and then performed PCAs on the IBS matrix for each region separately. In this study, PCAs were applied on the analysis sets and not on IBS matrices.\n\n\nResults\n\nThe PCA of the Europe analysis set was displayed on the two first principal components (Figure 1). Individuals were grouped by population and the replicated figure matched closely with Li et al.'s Figure 2A.\n\nVisualization of the principal component analysis on 375,164 SNPs from 157 individuals from Europe. Individuals from North and South were differentiated in the first principal component and located in the lower and upper sides, respectively. Individuals from East and West were differentiated in the second and located in the right and left sides, respectively.\n\nThe explained variance was almost identical, as the replication stated 2.1% in PC1 and 1.6% in PC2, while Li et al.'s Figure 2A stated 2.4% and 1.6%, respectively.\n\nVisualization of the principal component analysis on 412,979 SNPs from 163 individuals from the Middle East & North Africa. Individuals from East and West were differentiated in the first principal component and located in the right and left sides, respectively. Individuals from North and South were differentiated in the second and located in the lower and upper sides, respectively.\n\nThe PCA of the Middle East & North Africa analysis set was displayed on the two first principal components (Figure 2). Individuals were grouped by populations and the replicated figure matched closely with Li et al.'s Figure 2B.\n\nTwo differences from Li et al.'s analysis were noted, first the Bedouin and Druze populations exhibited a larger spread on PC1 in the original figure. 
Second, one Bedouin individual was located with Mozabite individuals, which did not appear in Li et al.'s Figure 2B.\n\nThe explained variance was slightly smaller, as the replication stated 3.1% in PC1 and 2.2% in PC2, while Li et al.'s Figure 2B stated 5.0% and 2.6%, respectively.\n\n\nDiscussion\n\nThe replicated figures matched closely to the original figures, although two differences appeared when examining the Middle East & North Africa subset: the smaller spread of two populations and the presence of an outlier.\n\nTherefore, the main results were replicated and can be independently reproduced by using publicly available data, source code, and computing environment.\n\nWe successfully confirmed that although the two geographic regions studied had different genetic origins, both exhibited population structures in PCAs.\n\nUnderstanding the genetic structure of healthy populations will enable us to use genetic similarities to deliver personalized treatments to patients affected by SADs. Using this replication, the PRECISESADS project will be able to compare clusters of patients affected by SADs to clusters of healthy individuals, independently from their ancestry-driven genetic structure2.\n\n\nData and software availability\n\nAs stated in Li et al.1, the data sets are freely available online. 
Although the links that were provided are now outdated, the two data files are available from HGDP-CEPH: http://www.hagsc.org/hgdp/files.html (download link: http://www.hagsc.org/hgdp/data/hgdp.zip and http://www.cephb.fr/en/hgdp_panel.php#serie2; ftp link: ftp://ftp.cephb.fr/hgdp_v3/hgdp-ceph-unrelated.out).\n\nThe PCAs were computed and displayed using the previously published R package SNPClust v1.0.02.\n\nComputing environment in a Docker container is available from: https://hub.docker.com/r/thomaschln/reproducible-hgdp\n\nSource code required to generate this article and the definition of the corresponding computing environment, in which all required software are installed: https://github.com/ThomasChln/reproducible-hgdp\n\nArchived source code as at time of publication: doi, 10.5281/zenodo.3451376\n\nLicense: GNU General Public License version 3.0\n\n\nEthical statement\n\nThe data were previously published1 and approved by ethics committees. No samples were used and records were de-identified.",
"appendix": "Author contributions\n\n\n\nConceptualization: JW SV; Formal analysis: TC; Funding acquisition: JW; Investigation: JW ADC; Methodology: TC JW; Project administration: JW; Software: TC; Supervision: JW SV; Validation: TC JW ADC; Visualization: TC; Writing - original draft: TC; Writing - review & editing: JW ADC SV.\n\n\nCompeting interests\n\n\n\nThomas Charlon, Alessandro Di Cara, and Jérôme Wojcik are employees of Quartz Bio S.A., Switzerland. The authors declare no competing interests related to this commercial affiliation. This does not alter the authors’ adherence to F1000Research policies on sharing data and materials.\n\n\nGrant information\n\nQuartz Bio S.A. provided support in the form of salaries for Thomas Charlon, Alessandro Di Cara, and Jérôme Wojcik, but did not have any additional role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript. This work has received support from the EU/EFPIA/ Innovative Medicines Initiative Joint Undertaking PRECISESADS (grant no. 115565).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgments\n\nWe thank K. Forner for contributions on the software.\n\n\nReferences\n\nLi JZ, Absher DM, Tang H, et al.: Worldwide human relationships inferred from genome-wide patterns of variation. Science. 2008; 319(5866): 1100–1104. PubMed Abstract | Publisher Full Text\n\nCharlon T, Martínez-Bueno M, Bossini-Castillo L, et al.: Single Nucleotide Polymorphism Clustering in Systemic Autoimmune Diseases. PLoS One. 2016; 11(8): e0160270. PubMed Abstract | Publisher Full Text | Free Full Text\n\nNovembre J, Johnson T, Bryc K, et al.: Genes mirror geography within Europe. Nature. 2008; 456(7218): 98–101. PubMed Abstract | Publisher Full Text | Free Full Text\n\nStram DO: Tag SNP selection for association studies. Genet Epidemiol. 2004; 27(4): 365–374. 
PubMed Abstract | Publisher Full Text\n\nPurcell S, Neale B, Todd-Brown K, et al.: PLINK: a tool set for whole-genome association and population-based linkage analyses. Am J Hum Genet. 2007; 81(3): 559–575. PubMed Abstract | Publisher Full Text | Free Full Text\n\nThomasChln: ThomasChln/reproducible-hgdp: Review release [Data set]. Zenodo. 2017. Data Source"
}
|
[
{
"id": "21333",
"date": "28 Mar 2017",
"name": "Zoltán Kutalik",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis manuscript reports on the re-running of two PCA analyses presented in an earlier publication Li et al 2008). The authors confirm the PCA results presented in the original paper and point out two minor differences.\n\nThe analysis looks solid and carefully executed. There a few aspects that could be improved:\n\nWhat I missed a bit was the justification why only the middle Eastern and European subsets were reanalysed. Also, the authors motivate their reanalysis so that they can use these individuals as controls for their PRECISESADS study. I was expecting the authors to go slightly further: do they have control samples? Where do they map on these PCA plots? If they match the location of those from the HGDP, I agree that it is an excellent indication to go further with their study cases. I think these points would further our understanding and go beyond the partial re-analysis of a published data and reporting identical findings.\n\nWould be very helpful for the readers to see for every analysis step where did the authors use exactly the same tool as Li et al and where do they differ? If at some point different tools were used, were the parameters set to be identical? How close was the pruned subset of SNPs when analysed by them and by Li et al.?\n\nThe title and abstract reflect well the study content. The methods and results are clearly explained, the data are available and the analysis is provided in full details in a Docker container. 
Study motivation could be better explained and the conclusions in terms of consequences for their future study could be more detailed.",
"responses": []
},
{
"id": "21151",
"date": "18 Apr 2017",
"name": "Michael G. B. Blum",
"expertise": [
"Reviewer Expertise Population genetics",
"biostatistics",
"bioinformatics"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe authors replicate the ascertainment of worldwide population structure obtained by Li et al. (2008). They perform PCA to capture population structure. The PC axes closely match the ones obtained by Li et al.\n\nHowever, the authors found that some Bedouin individuals don't belong to the population they should belong to. The authors should read and cite the 2 following papers that found related results\n\nJakobsson M, Scholz SW, Scheet P et al: Genotype, haplotype and copy-number variation in worldwide human populations. Nature 2008; 451: 998-1003.1\n\nLeutenegger, A.L., Sahbatou, M., Gazal, S., Cann, H. and Génin, E., 2011. Consanguinity around the world: what do the genomic data of the HGDP-CEPH diversity panel tell us?. European Journal of Human Genetics, 19(5), pp.583-587.2\n\nAdditionally, I run the provided docker command (docker pull thomaschln/reproducible-hgdp) to reproduce the analysis but I don't find the generated results. The webpage (https://github.com/ThomasChln/reproducible-hgdp) should be improved and should include a more detailed tutorial.\n\nIs the work clearly and accurately presented and does it cite the current literature? Partly\n\nIs the study design appropriate and is the work technically sound? Partly\n\nAre sufficient details of methods and analysis provided to allow replication by others? 
Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nYes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-278
|
https://f1000research.com/articles/6-221/v1
|
06 Mar 17
|
{
"type": "Opinion Article",
"title": "Heat remains unaccounted for in thermal physiology and climate change research",
"authors": [
"Andreas D. Flouris",
"Glen P. Kenny",
"Glen P. Kenny"
],
"abstract": "In the aftermath of the Paris Agreement, there is a crucial need for scientists in both thermal physiology and climate change research to develop the integrated approaches necessary to evaluate the health, economic, technological, social, and cultural impacts of 1.5°C warming. Our aim was to explore the fidelity of remote temperature measurements for quantitatively identifying the continuous redistribution of heat within both the Earth and the human body. Not accounting for the regional distribution of warming and heat storage patterns can undermine the results of thermal physiology and climate change research. These concepts are discussed herein using two parallel examples: the so-called slowdown of the Earth’s surface temperature warming in the period 1998-2013; and the controversial results in thermal physiology, arising from relying heavily on core temperature measurements. In total, the concept of heat is of major importance for the integrity of systems, such as the Earth and human body. At present, our understanding about the interplay of key factors modulating the heat distribution on the surface of the Earth and in the human body remains incomplete. Identifying and accounting for the interconnections among these factors will be instrumental in improving the accuracy of both climate models and health guidelines.",
"keywords": [
"global warming",
"hiatus",
"temperature",
"ocean heat uptake",
"hyperthermia"
],
"content": "Introduction\n\nThe Agreement reached in Paris during December 2015, under the auspices of the United Nations Framework Convention on Climate Change, binds countries to “…pursue efforts to limit the [global] temperature increase to 1.5°C above pre-industrial levels”. The same document also invites the Intergovernmental Panel on Climate Change (IPCC) to “…provide a special report in 2018 on the impacts of global warming of 1.5°C above pre-industrial levels and related global greenhouse gas emission pathways” (full document: Paris Agreement, FCCC/CP/2015/10/Add.1, annex. UNFCCC secretariat. Retrieved 20 January 2017). In doing so, the IPCC must provide useful and robust evidence, which can be a challenge, as previously suggested (Hulme, 2016), given the limited analyses conducted thus far on the global and regional impacts of 1.5°C warming. Another challenge for the IPCC, and the scientific community, is to develop the integrated approaches necessary to evaluate the health, economic, technological, social, and cultural impacts of 1.5°C warming. As illustrated by a recent scientific conundrum (described in the following section), the concept of heat should be central to these integrated approaches. In this light, the objective of this article is to explore the fidelity of remote temperature measurements for quantitatively identifying the continuous redistribution of heat within both the Earth and the human body.\n\n\nUnaccounted heat in climate research\n\nHailed as “one of the biggest mysteries in climate science” (Tollefson, 2013), the slowdown of the Earth’s surface temperature warming in the period 1998–2013 undermined, for a number of years, the idea that human-generated greenhouse gases are the main driver and cause of climate change (Nieves et al., 2015). During this period, the existing climate models were unable (on average) to reproduce the observed atmospheric temperature trend. 
Several theories were proposed to explain this surface temperature “hiatus”. One of the first explanations put forth was that it is normal for warming rates to plateau occasionally (Trenberth, 2015). Other factors that were extensively investigated included the increased volcanic activity and the high levels of air pollution in China during the hiatus period, since atmospheric particles reflect more of the Sun’s energy back into space (Solomon et al., 2011). The ever-increasing decline in solar activity since 2000 was also considered as a possibility, since it reduces the amount of energy reaching the Earth (Ineson et al., 2015). Ultimately, in 2015, Nieves and colleagues showed that what was interpreted as a change in the planet’s warming rate was, in fact, a redistribution of heat within the ocean (Nieves et al., 2015). Specifically, Nieves et al., showed that heat moved from the surface of the Pacific Ocean to the deeper layers of the Western Pacific and Indian Oceans. As a result, the net ocean heat uptake remains dangerously high (Nieves et al., 2015), and the rate of global warming may have actually increased during the hiatus period (IPCC, 2014). Unfortunately, the findings of Nieves et al., were soon to be confirmed by the record-breaking temperatures throughout 2015 and 2016 (https://www.nasa.gov/press-release/nasa-noaa-data-show-2016-warmest-year-on-record-globally).\n\nResearch aiming to understand the 1998–2013 hiatus was significantly benefited by studies into the causes of a much longer hiatus observed from the 1950s to the 1970s. During this period, global mean surface temperature remained approximately constant despite increased burden from anthropogenic factors. Interestingly, as in the case of the 1998–2013 hiatus, the ocean heat uptake was a very important contributing factor to this longer offsetting of the warming rate (England et al., 2014). This is not surprising. 
Ocean circulation changes have a vast impact on the geographical distribution of warming and heat storage patterns, which ultimately affect the planet’s surface climate (Nieves et al., 2015). Measuring atmospheric temperature at specific regions can only indicate regional changes in heat content and – as in the case of the 1998–2013 hiatus – can lead to wrong conclusions about the planet’s warming rate.\n\n\nUnaccounted heat in thermal physiology research\n\nThe effect of ocean heat uptake on the geographical distribution of warming and the heat storage patterns on the surface of the planet closely mirror thermodynamic processes within the human body. Blood within the circulatory system is constantly redistributed, not only to meet the demands of metabolic and immune processes, but also to move heat from/to specific regions and maintain thermal homeostasis (Flouris et al., 2006; Flouris & Cheung, 2009; Kenny & Jay, 2013). These blood flow adaptations can reach dramatic proportions in the human body. For instance, during rest in thermoneutral conditions, 0.5 L/min of blood (5–10% of cardiac output) supply the skin (Lossius et al., 1993). Nevertheless, during heat stress, up to 8 L/min of blood (50–70% of cardiac output) is directed to the cutaneous circulation, mainly by restricting visceral and renal blood flow (Lossius et al., 1993). These enormous changes in blood flow have a vast effect on the regional distribution of heat and the heat storage patterns, which ultimately affect the entire body’s thermal homeostasis. Therefore, measuring temperature in specific regions of the body, such as the rectum, esophagus, or the visceral organs, can only indicate regional changes in heat content (Flouris & Cheung, 2010; Flouris & Cheung, 2011; Kenny et al., 2015; Kenny et al., (In press); Taylor et al., 2014; Webb, 1986). Nevertheless, for more than 100 years, scientists have been using such measurements to make assumptions about the thermal strain across the entire body. 
Moreover, the currently recommended criterion when addressing heat-related health risks is a single temperature measurement in the rectum (World Meteorological Organization and World Health Organization, 2015).\n\nA recent paper on occupational heat exposure (Meade et al., 2016) provides an example of the limitations inherent in single temperature measurements used to assess whole-body thermal strain. In hot environments, industries must take preventive measures to protect their workers against heat-related injury and illness. To determine what protections should be used, they may follow guidelines from bodies of knowledgeable experts. One such set of guidelines comes from the American Conference of Governmental Industrial Hygienists (ACGIH, 2007); they recommend that industries employ the Threshold Limit Values (TLV) for work in hot environments. The TLV consider both environmental conditions and work demands, with the goal of maintaining the internal body temperatures of workers within safe limits. However, it is unclear if these guidelines adequately protect workers. The aforementioned recent study applied the TLV recommendations for work-rest periods by having a group of participants perform moderate intensity work bouts in progressively hotter environments. According to the TLV, as environmental heat levels increased, the recovery periods between work bouts were lengthened (Meade et al., 2016). Once these guidelines are applied, the TLV predict that the body’s core temperature should be minimally affected. Yet, the core temperatures of the young physically active adults tested in this study rose continuously. Overall, the findings demonstrated that, under the work conditions tested, the TLV do not adequately protect workers from potentially dangerous increases in their internal temperatures. 
As mentioned above, these findings are likely ascribable to a heterogeneous distribution of heat within the body’s ‘core’ tissues (i.e., organs, muscles) (Taylor et al., 2014), which may be influenced by the profound thermoregulatory and cardiovascular alterations (the majority of heat transfer within the body occurs through convection via the blood) associated with recovery from exercise (Halliwill et al., 2014; Kenny & Journeay, 2010; Kenny & Jay, 2013). If this peripheral heat storage is transferred from the tissues to the body’s core, it presents an increased risk of heat-related illness and injury during work. Therefore, the TLV guidelines should be revised, especially given the warming climate and the increase in the frequency and intensity of extreme heat events.\n\n\nHeat parallels in the two disciplines\n\nRecognition that ocean heat uptake plays a key role in modulating human-caused global surface warming has been one of the many valuable ancillary benefits of research aiming to understand the 1998–2013 hiatus. Identifying and accounting for errors in ocean heat uptake estimations has been vital in this improved understanding (Fyfe et al., 2016) and will be instrumental in improving the accuracy of future climate predictions. For instance, climate models accounting for these recent advances suggest a transition to a positive phase of the Interdecadal Pacific Oscillation, which will increase the warming rate of global surface temperature (Hawkins et al., 2014; Thoma et al., 2015). As in climate studies, thermal physiology research employing calorimetric methods has shown that temperature measurements at a single region of the body’s core do not (on average) reflect whole-body thermal strain [(Flouris & Cheung, 2010; Kenny et al., 2013; Kenny et al., (In press); Meade et al., 2016; Stapleton et al., 2014; Stapleton et al., 2015; Webb, 1986), including reviews (Benzinger et al., 1961; Taylor et al., 2014; Webb, 1995)]. 
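The calorimetric approach referenced above estimates whole-body heat storage from the heat-balance equation rather than from temperature: S = M − W − E − (R + C) − K, where M is metabolic rate, W external work, E evaporative heat loss, R + C combined dry (radiative and convective) heat exchange, and K conduction. The sketch below is a minimal illustration with assumed values, not a reproduction of any cited protocol:

```python
# Minimal sketch of the partitional-calorimetry heat balance:
# S = M - W - E - (R + C) - K, all terms in watts.
# The numeric inputs are illustrative assumptions, not measured data.

def heat_storage_rate(metabolic, external_work, evaporative,
                      dry_exchange, conductive=0.0):
    """Rate of body heat storage in watts (positive = body gaining heat)."""
    return metabolic - external_work - evaporative - dry_exchange - conductive

# A moderate work bout in the heat: high metabolic heat production,
# evaporation limited by humidity, and some dry heat gain from hot
# surroundings (a negative dry-exchange term means heat flows inward).
s = heat_storage_rate(metabolic=500.0, external_work=50.0,
                      evaporative=300.0, dry_exchange=-20.0)

# Accumulated over a 60-minute work bout:
stored_kj = s * 3600 / 1000
print(f"Storage rate: {s:.0f} W, stored over 1 h: {stored_kj:.0f} kJ")
```

Under these assumed conditions the body banks roughly 600 kJ over an hour of work, a whole-body quantity obtained without measuring temperature at any single site.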
In addition to their limited accuracy, single-site core temperature measurements exhibit a significant time lag, which makes them less effective for scientific purposes and potentially problematic when used in health-related settings. Indeed, core temperature readings are notorious for responding with delays of 10–30 minutes (Kenny & Jay, 2013), even when the human body undergoes extreme discomfort, such as a sudden passive immersion in 12°C water (Flouris & Cheung, 2009). Interestingly, a similar time lag exists in the Earth’s warming rate phenomenon, since 80% of the heat added to the climate system is being taken up by the ocean (IPCC, 2014). As a result, even if greenhouse gas concentrations were drastically reduced, the planet would continue to warm for many decades (IPCC, 2014).\n\n\nConcluding remarks\n\nScientists in thermal physiology and climate change research are measuring temperature but, very often, they are missing the heat. Relying on indicators that provide a less accurate and/or delayed view of a natural phenomenon is dangerous. In the case of climate change research, interpreting the slowdown of atmospheric temperature rise during 1998–2013 as a change in the planet’s warming rate would have given way to lax policies in environmental monitoring. In the 15 years required to identify the key factors involved, obtain more accurate data, and reassess the situation, these policies could have produced devastating effects and pushed the planet’s climate into uncharted territory. 
In a similar way, interpreting the minimal temperature changes often observed in a single region of the body’s core as a lack of whole-body thermal strain gives way to conflicting results in scientific experiments and, even worse, ineffective guidelines for work in hot environments, as well as delayed application of treatment in occupational or clinical settings.\n\nThe concept of heat is of major importance for the integrity of systems such as the Earth (IPCC, 2014; Nieves et al., 2015) and the human body (Borden & Cutter, 2008; Flouris & Piantoni, 2015; Luber & McGeehin, 2008). At present, our understanding of the interplay of key factors modulating the heat distribution on the surface of the Earth and in the human body remains incomplete. Identifying and accounting for the interconnections among these factors will be instrumental in improving the accuracy of both climate models and health guidelines. In the aftermath of the Paris Agreement, there is a need for scientists in both thermal physiology and climate change research to embrace integrated approaches that provide comprehensive views of the natural phenomena under study, taking into account the distribution of warming and heat storage patterns. For on that may depend the future of these scientific disciplines and, possibly, the future of the planet.",
"appendix": "Author contributions\n\n\n\nBoth authors conceived the main idea of this paper. ADF prepared the first draft of the manuscript. Both authors revised the draft manuscript and have agreed to the final content.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis work has received funding from the European Union’s Horizon 2020 research and innovation programme (grant no., 668786).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nReferences\n\nACGIH: Heat stress and strain. In: Documentation of the Threshold Limit Values for Physical Agents Documentation. Cincinnati (OH): ACGIH; 2007.\n\nBenzinger TH, Pratt AW, Kitzinger C: The thermostatic control of human metabolic heat production. Proc Natl Acad Sci U S A. 1961; 47(5): 730–739. PubMed Abstract | Free Full Text\n\nBorden KA, Cutter SL: Spatial patterns of natural hazards mortality in the United States. Int J Health Geogr. 2008; 7: 64. PubMed Abstract | Publisher Full Text | Free Full Text\n\nEngland MH, McGregor S, Spence P, et al.: Recent intensification of wind-driven circulation in the pacific and the ongoing warming hiatus. Nature Clim Change. 2014; 4: 222–227. Publisher Full Text\n\nFlouris AD, Cheung SS, Fowles JR, et al.: Influence of body heat content on hand function during prolonged cold exposures. J Appl Physiol (1985). 2006; 101(3): 802–808. PubMed Abstract | Publisher Full Text\n\nFlouris AD, Cheung SS: Influence of thermal balance on cold-induced vasodilation. J Appl Physiol (1985). 2009; 106(44): 1264–1271. PubMed Abstract | Publisher Full Text\n\nFlouris AD, Cheung SS: Thermometry and calorimetry assessment of sweat response during exercise in the heat. Eur J Appl Physiol. 2010; 108(5): 905–911. PubMed Abstract | Publisher Full Text\n\nFlouris AD, Cheung SS: Thermal basis of finger blood flow adaptations during abrupt perturbations in thermal homeostasis. 
Microcirculation. 2011; 18(1): 56–62. PubMed Abstract | Publisher Full Text\n\nFlouris AD, Piantoni C: Links between thermoregulation and aging in endotherms and ectotherms. Temperature (Austin). 2015; 2(1): 73–85. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFyfe JC, Meehl GA, England MH, et al.: Making sense of the early-2000s warming slowdown. Nature Clim Change. 2016; 6: 224–228. Publisher Full Text\n\nHalliwill JR, Sieck DC, Romero SA, et al.: Blood pressure regulation X: what happens when the muscle pump is lost? Post-exercise hypotension and syncope. Eur J Appl Physiol. 2014; 114(3): 561–578. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHawkins E, Edwards T, McNeall D: Pause for thought. Nat Clim Chang. 2014; 4: 154–156. Publisher Full Text\n\nHulme M: 1.5 °C and climate research after the Paris Agreement. Nat Clim Chang. 2016; 6: 222–224. Publisher Full Text\n\nIneson S, Maycock AC, Gray LJ, et al.: Regional climate impacts of a possible future grand solar minimum. Nat Commun. 2015; 6: 7535. PubMed Abstract | Publisher Full Text | Free Full Text\n\nIPCC: Climate change 2014: Synthesis report. Contribution of Working Groups I, II and III to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. Geneva, Switzerland: IPCC. 2014. Reference Source\n\nKenny GP, Journeay WS: Human thermoregulation: Separating thermal and nonthermal effects on heat loss. Front Biosci (Landmark Ed). 2010; 15: 259–290. PubMed Abstract | Publisher Full Text\n\nKenny GP, Jay O: Thermometry, calorimetry, and mean body temperature during heat stress. Compr Physiol. 2013; 3(4): 1689–1719. PubMed Abstract | Publisher Full Text\n\nKenny GP, Stapleton JM, Yardley JE, et al.: Older adults with type 2 diabetes store more heat during exercise. Med Sci Sports Exerc. 2013; 45(10): 1906–1914. 
PubMed Abstract | Publisher Full Text\n\nKenny GP, Larose J, Wright-Beatty HE, et al.: Older firefighters are susceptible to age-related impairments in heat dissipation. Med Sci Sports Exerc. 2015; 47(6): 1281–1290. PubMed Abstract | Publisher Full Text\n\nKenny GP, Poirier MP, Metsios GS, et al.: Hyperthermia and cardiovascular strain during an extreme heat exposure in young versus older adults. Temperature. (In press); 1–10. Publisher Full Text\n\nLossius K, Eriksen M, Walløe L: Fluctuations in blood flow to acral skin in humans: Connection with heart rate and blood pressure variability. J Physiol. 1993; 460(1): 641–655. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLuber G, McGeehin M: Climate change and extreme heat events. Am J Prev Med. 2008; 35(5): 429–435. PubMed Abstract | Publisher Full Text\n\nMeade RD, Poirier MP, Flouris AD, et al.: Do the Threshold Limit Values for Work in Hot Conditions Adequately Protect Workers? Med Sci Sports Exerc. 2016; 48(6): 1187–1196. PubMed Abstract | Publisher Full Text\n\nNieves V, Willis JK, Patzert WC: GLOBAL WARMING. Recent hiatus caused by decadal shift in Indo-Pacific heating. Science. 2015; 349(6247): 532–535. PubMed Abstract | Publisher Full Text\n\nSolomon S, Daniel JS, Neely RR 3rd, et al.: The persistently variable \"background\" stratospheric aerosol layer and global climate change. Science. 2011; 333(6044): 866–870. PubMed Abstract | Publisher Full Text\n\nStapleton JM, Larose J, Simpson C, et al.: Do older adults experience greater thermal strain during heat waves? Appl Physiol Nutr Metab. 2014; 39(3): 292–298. PubMed Abstract | Publisher Full Text\n\nStapleton JM, Poirier MP, Flouris AD, et al.: Aging impairs heat loss, but when does it matter? J Appl Physiol (1985). 2015; 118(3): 299–309. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTaylor NA, Tipton MJ, Kenny GP: Considerations for the measurement of core, skin and mean body temperatures. J Therm Biol. 2014; 46: 72–101. 
PubMed Abstract | Publisher Full Text\n\nThoma M, Greatbatch RJ, Kadow C, et al.: Decadal hindcasts initialized using observed surface wind stress: Evaluation and prediction out to 2024. Geophys Res Lett. 2015; 42(15): 6454–6461. Publisher Full Text\n\nTollefson J: Climate change: The forecast for 2018 is cloudy with record heat. Nature. 2013; 499(7457): 139–141. PubMed Abstract | Publisher Full Text\n\nTrenberth KE: CLIMATE CHANGE. Has there been a hiatus? Science. 2015; 349(6249): 691–692. PubMed Abstract | Publisher Full Text\n\nWebb P: Afterdrop of body temperature during rewarming: an alternative explanation. J Appl Physiol (1985). 1986; 60(2): 385–390. PubMed Abstract\n\nWebb P: The physiology of heat regulation. Am J Physiol. 1995; 268(4 Pt 2): R838–850. PubMed Abstract\n\nWorld Meteorological Organization, World Health Organization: Heatwaves and health: Guidance on warning-system development. Geneva, Switzerland: World Meteorological Organization. 2015. Reference Source"
}
|
[
{
"id": "20753",
"date": "07 Mar 2017",
"name": "Juha Oksa",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe opinion article by Dr’s Flouris and Kenny nicely describes the similarities in the changes of heat content pattern between the Earth and the human body. The difference between these two is naturally the time frame, the response of the Earth being several years and that of the human, minutes.\nGiven that Earth is warming, extreme heat wave episodes are increasing and the study the authors are referring to (Meade et al. 20161) points to the possibility that the existing TLV guidelines may not be sufficient in protecting labor force from excessive heat strain (as the authors state). Therefore, a re-evaluation of the current TLV guidelines might be desirable.\nHowever, it would also be desirable to see previous evaluations regarding the usability and effectiveness of the current guidelines. Has it been successful in preventing excessive heat strain? If not, what has been the causes? This might increase the justification for re-evaluation of the current TLV guidelines.",
"responses": [
{
"c_id": "2551",
"date": "15 Mar 2017",
"name": "Andreas Flouris",
"role": "Author Response",
"response": "We wish to thank you for reviewing the manuscript and for your constructive and helpful comments. We made appropriate changes in the paper based on your comments. The appropriate responses to all points that you raised appear below with each of your comments in underlined italics and our responses in bold font. Bold font is used to indicate revised parts of the text. The opinion article by Dr’s Flouris and Kenny nicely describes the similarities in the changes of heat content pattern between the Earth and the human body. The difference between these two is naturally the time frame, the response of the Earth being several years and that of the human, minutes. Given that Earth is warming, extreme heat wave episodes are increasing and the study the authors are referring to (Meade et al. 20161) points to the possibility that the existing TLV guidelines may not be sufficient in protecting labor force from excessive heat strain (as the authors state). Therefore, a re-evaluation of the current TLV guidelines might be desirable. Thank you for your encouraging comments. However, it would also be desirable to see previous evaluations regarding the usability and effectiveness of the current guidelines. Has it been successful in preventing excessive heat strain? If not, what has been the causes? This might increase the justification for re-evaluation of the current TLV guidelines. The history of the TLV guidelines starts in 1971 (ACGIH, 2007). However, to the best of our knowledge, the study referenced in our paper is the only one that directly assessed the influence of the work exposure limits outlined in the TLV on core temperature responses during work and the associated changes in whole-body heat content. 
To clarify this further, we added a relevant comment in the paper [2nd paragraph of section “Unaccounted heat in thermal physiology research”]: “While the history of the TLV guidelines starts in 1971 (ACGIH, 2007), to the best of our knowledge, the aforementioned recent study is the only one that directly assessed the influence of the work exposure limits outlined in the TLV on core temperature responses during work and the associated changes in whole-body heat content.”"
}
]
},
{
"id": "20720",
"date": "08 Mar 2017",
"name": "Eugene A. Kiyatkin",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nTitle: Heat remains unaccounted for in thermal physiology and climate change research\n\nComments:\n\nThis opinion piece touches on an important issue. It is critical to accurately measure the redistribution of heat within both climate and physiological systems. The authors correctly note the interconnectedness of the two: as the earth warms, heat-related health risks will rise as the dynamics of energy transfer between organism and environment change. Standards to protect workers and the environment should be updated to reflect the most accurate and helpful measurements and information as is possible.\n\nMajor comments:\n\nThe authors draw a parallel between heat distribution in human and heat distribution within climate systems throughout the piece. While I see the analogy, it might be best to reign in this comparison in some places, specifically, on page 3 under the heading “Heat parallels in the two disciplines”. While the point is well taken, the human body has many mechanisms that actively work to maintain thermo-homeostasis. The earth, on the other hand, is a passive system with no “goal temperature”, as it were. Would it be better simply to state that temperature distribution is important in both systems and that our understanding and measurements of it need to be improved in the interest of human health? 
Given that heat distribution in humans is dependent on the temperature of the environment, there is a strong argument to be made linking these two variables without drawing a comparison between an actively regulated homeostatic system and system in which heat distribution occurs through passive mechanisms. The first paragraph of “Unaccounted heat in thermal physiology research” contains this analogy as well.\n\nMinor:\n\nThe authors put forward that the findings of Nieves et al. were confirmed by increased temperatures in 2015 and 2016 (Page 2). When considering global trends, single year data is a fraught method of proof given the degree of between-year variability in temperature. By the same logic, the climate change slowdown from 1998-2013 discussed in this paper was used to justify a lack of action around climate change. Some slightly awkward phrasing persists in this article. For example, the first sentence of the second paragraph on page 2 under “Unaccounted heat in climate research” could be re-written as “Research aiming to understand the 1998-2013 hiatus significantly benefited from studies investigating the causes…”. The sentence, “This is not surprising.” Under the same heading on page 2, does not add to the paragraph and could be cut. “Unaccounted heat in thermal physiology research” could benefit from some discussion of the importance of brain temperature (Kiyaktin 2010 for review1). Individuals may experience robust changes in brain temperature during behavioral activation such as manual labor or intense exercise that can result, under conditions of diminished heat dissipation, in pathological hyperthermia2. Hyperthermia increases permeability of the blood-brain barrier, predisposing individuals to the development of vasogenic edema and damage of brain cells. 
Increased global temperature may also pose increased health risks for people using psychomotor stimulant drugs of abuse, the effects of which are greatly potentiated by increased ambient temperature. This is critical when considering issues of heat distribution in which peripheral tissues can tolerate greater changes in heat with less threat to the health and survivability of the organism, whereas increases in specific organ structures such as the brain can threaten organism health.",
"responses": [
{
"c_id": "2552",
"date": "15 Mar 2017",
"name": "Andreas Flouris",
"role": "Author Response",
"response": "We wish to thank you for reviewing the manuscript and for your constructive and helpful comments. We made appropriate changes in the paper based on your comments. The appropriate responses to all points that you raised appear below with each of your comments in underlined italics and our responses in bold font. Bold underlined font is used to indicate revised parts of the text. Comments: This opinion piece touches on an important issue. It is critical to accurately measure the redistribution of heat within both climate and physiological systems. The authors correctly note the interconnectedness of the two: as the earth warms, heat-related health risks will rise as the dynamics of energy transfer between organism and environment change. Standards to protect workers and the environment should be updated to reflect the most accurate and helpful measurements and information as is possible. Thank you for your encouraging comments. Major comments: The authors draw a parallel between heat distribution in human and heat distribution within climate systems throughout the piece. While I see the analogy, it might be best to reign in this comparison in some places, specifically, on page 3 under the heading “Heat parallels in the two disciplines”. While the point is well taken, the human body has many mechanisms that actively work to maintain thermo-homeostasis. The earth, on the other hand, is a passive system with no “goal temperature”, as it were. Would it be better simply to state that temperature distribution is important in both systems and that our understanding and measurements of it need to be improved in the interest of human health? Given that heat distribution in humans is dependent on the temperature of the environment, there is a strong argument to be made linking these two variables without drawing a comparison between an actively regulated homeostatic system and system in which heat distribution occurs through passive mechanisms. 
The first paragraph of “Unaccounted heat in thermal physiology research” contains this analogy as well. We agree with your observation. Specific comments were added in the relevant sections of the paper to improve clarity. 1st paragraph of section “Unaccounted heat in thermal physiology research”: “Although the Earth is a “passive system” with no active regulation of temperature or heat content (based on current knowledge, that is),…” 1st paragraph of section “Heat parallels in the two disciplines”: “Heat distribution (and its associated temperature variation) is important in both the Earth and the human body and our understanding and measurements of it need to be improved in the interest of human health (since human thermal homeostasis is largely dependent on environmental temperature).” Minor: The authors put forward that the findings of Nieves et al. were confirmed by increased temperatures in 2015 and 2016 (Page 2). When considering global trends, single year data is a fraught method of proof given the degree of between-year variability in temperature. By the same logic, the climate change slowdown from 1998-2013 discussed in this paper was used to justify a lack of action around climate change. To improve clarity, we revised the 1st paragraph of section “Unaccounted heat in climate research” as follows: “Specifically, Nieves et al., showed that heat moved from the surface of the Pacific Ocean to the deeper layers of the Western Pacific and Indian Oceans, a finding that was confirmed using a wealth of observational and simulated data. As a result, the net ocean heat uptake remains dangerously high (Nieves et al., 2015), and the rate of global warming may have actually increased during the hiatus period (IPCC, 2014). 
The observed record-breaking temperatures throughout 2015 and 2016 (https://www.nasa.gov/press-release/nasa-noaa-data-show-2016-warmest-year-on-record-globally) support the findings of Nieves et al.” Some slightly awkward phrasing persists in this article. For example, the first sentence of the second paragraph on page 2 under “Unaccounted heat in climate research” could be re-written as “Research aiming to understand the 1998-2013 hiatus significantly benefited from studies investigating the causes…”. The sentence, “This is not surprising.” Under the same heading on page 2, does not add to the paragraph and could be cut. As suggested, the following revisions were done: “Research aiming to understand the 1998–2013 hiatus significantly benefited from studies investigating the causes of a much longer hiatus observed from the 1950s to the 1970s.” “This is because ocean circulation…” “Unaccounted heat in thermal physiology research” could benefit from some discussion of the importance of brain temperature (Kiyaktin 2010 for review1). Individuals may experience robust changes in brain temperature during behavioral activation such as manual labor or intense exercise that can result, under conditions of diminished heat dissipation, in pathological hyperthermia2. Hyperthermia increases permeability of the blood-brain barrier, predisposing individuals to the development of vasogenic edema and damage of brain cells. Increased global temperature may also pose increased health risks for people using psychomotor stimulant drugs of abuse, the effects of which are greatly potentiated by increased ambient temperature. This is critical when considering issues of heat distribution in which peripheral tissues can tolerate greater changes in heat with less threat to the health and survivability of the organism, whereas increases in specific organ structures such as the brain can threaten organism health. 
As suggested, we added the following sentence in the 1st paragraph of the section entitled “Unaccounted heat in thermal physiology research”: “Moreover, peripheral tissues can tolerate greater changes in heat with less threat to the health and survivability of the organism, whereas increases in specific organ structures such as the brain can threaten organism health (Kiyatkin, 2010).”"
},
{
"c_id": "2564",
"date": "16 Mar 2017",
"name": "Eugene A. Kiyatkin",
"role": "Reviewer Response",
"response": "The Authors properly responded to all comments raised in our review."
}
]
}
] | 1
|
https://f1000research.com/articles/6-221
|
https://f1000research.com/articles/6-267/v1
|
14 Mar 17
|
{
"type": "Case Report",
"title": "Case Report: Sciatic nerve schwannoma - a rare cause of sciatica",
"authors": [
"Sunil Munakomi",
"Pratyush Shrestha",
"Pratyush Shrestha"
],
"abstract": "Herein we report a rare case of a sciatic nerve schwannoma causing sciatica in a 69-year-old female. Sciatic nerve schwannoma is a rare entity. It should always be considered as a possible cause of sciatica in patients that present with symptoms of sciatica with no prolapsed disc in the lumbar spine and a negative crossed straight leg raise test. Timely diagnosis and complete excision of the lesion leads to complete resolution of the symptoms of such patients.",
"keywords": [
"sciatica",
"sciatic nerve",
"schwannoma"
],
"content": "Introduction\n\nSciatic nerve schwannoma is a rare cause of sciatica1–3. However, it remains a probable diagnosis in patients that present with symptoms of sciatica with no prolapsed disc in the lumbar spine and a negative crossed straight leg raise test, suggesting the presence of a far lateral disc. Magnetic resonance imaging (MRI) along the course of the sciatic nerve is the cornerstone for coming to a correct diagnosis and thereafter implementing a right therapeutic decision1. This case report highlights the need to consider sciatic nerve schwannoma as a possible cause of a sciatica in patients that have a negative lumbar spine MRI, so that the correct therapeutic decision can be made.\n\n\nCase report\n\nA 69-year-old female from eastern Nepal presented to our outpatient clinic with a history suggestive of right sided sciatica for the last 2 years. She had been evaluated before for the same, but without any positive diagnosis. The patient denied any history of trauma or any alteration in her bladder and bowel habits, or of any symptoms which is suggestive of intermittent claudication. Upon neurological examination, the power in all the muscle groups in her lower limbs was normal - 5/5 as per the MRC Muscle scale (used with the permission of the Medical Research Council). Her ankle and the knee reflexes were normal and she had no sensory indifference in any of the dermatomes in the affected limb, as compared to the normal limb. There was no wasting of the extensor digitorum brevis muscle. Straight leg raise test and a crossed straight leg raise test were both negative. Her stance was also normal. While sitting in a squatting position, the patient complained of an exaggeration of her symptoms. We thereafter made a differential diagnosis of either a sciatic nerve tumor or a Pyriformis syndrome. 
MRI revealed the presence of a tumor alongside the sciatic nerve, near the ischial tuberosity on the right side (Figure 1). The unusual location of the lesion favored a schwannoma rather than a neurofibroma (Figure 2).\n\nThe patient was counseled for operative intervention to remedy her persistent symptoms. A subgluteal approach was taken for the surgical corridor. Intra-operatively, a 3×3 cm, well-circumscribed lesion was seen lying within the sciatic nerve. It was carefully dissected off the nerve fascicles and fully removed (Figure 3). The sciatic nerve was confirmed to be intact intra-operatively with the aid of an intra-operative nerve monitor.\n\nPostoperatively, the patient was completely free of her previous symptoms. She made a full recovery with no adverse events and was discharged on the fifth day. The histopathological report confirmed the diagnosis of a sciatic nerve schwannoma, owing to the presence of Antoni A and B areas and Verocay bodies (Figure 4). The patient returned for her follow-up visit at 1 month completely asymptomatic.\n\n\nDiscussion\n\nSciatic nerve schwannoma is a rare cause of sciatica, occurring only in 1 of every 100 cases1. It should be suspected in a patient who presents with a typical history of sciatica but with MRI scans that fail to reveal any intervertebral disc prolapse in the lumbar spine1. Other differential diagnoses include other sciatic nerve tumors, a far lateral disc, or piriformis syndrome. The main imaging modality for the diagnosis of sciatic nerve schwannoma is MRI of the affected sciatic nerve.\n\nNeurofibromas are intrinsic lesions that cause fusiform dilatation of the nerve, since the lesions are intermixed with the nerve1. On the other hand, schwannomas displace the nerve fascicles to the periphery, allowing their safe preservation following excision of the schwannoma2,4–6. 
Intra-operative nerve monitoring helps immensely to outline the course of the nerve and define the boundary of the tumor during its removal. Definitive diagnosis, however, is only possible after the histopathological studies.\n\nFor the excision of such lesions, either a transgluteal or a subgluteal approach can be taken7,8. In both approaches, the patient is placed in a prone position. The sciatic nerve invariably lies midway between the ischial tuberosity, medially, and the greater trochanter, laterally. A subgluteal approach may lead to prolonged discomfort due to retraction of the soft tissues and the gluteal muscles9. A transgluteal approach may sometimes lead to disastrous consequences due to retraction of the muscle arteries within the pelvis. However, it provides a wider surgical corridor up to the sciatic notch9.\n\nHistopathology is the mainstay for differentiating the type of tumor involved, with only an occasional need for immunohistochemical markers like S1002,5,6.\n\nRecurrence is uncommon following complete excision6. Malignant transformation of such lesions is rare1,2. A good outcome is expected following complete excision because of the benign nature of the lesion2.\n\n\nConclusion\n\nThough rare, sciatic nerve schwannoma should be included in the differential diagnosis in a patient presenting with long-standing sciatica without positive findings of a prolapsed disc in the lumbar spine. MRI of the nerve is essential for the diagnosis of the lesion. It is imperative to outline the course of the nerve and to define the boundary of the lesion to preserve the nerve fascicles. This can be facilitated with the aid of an intraoperative nerve monitor.\n\n\nConsent\n\nBoth written and verbal informed consent for publication of images and clinical data related to this case was sought and obtained from the patient.",
"appendix": "Author contributions\n\n\n\nBoth authors contributed equally in reviewing the literature, formatting the paper, and revising and editing the final version.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nRhanim A, El Zanati R, Mahfoud M, et al.: A rare cause of chronic sciatic pain: Schwannoma of the sciatic nerve. J Clin Orthop Trauma. 2013; 4(2): 89–92.\n\nBanshelkikar S, Nistane P: Intrasubstance Schwannoma of Posterior Tibial Nerve Presenting as Lumbo-Sacral Radiculopathy. J Orthop Case Rep. 2015; 5(2): 35–37.\n\nEroglu U, Bozkurt M, Ozates O, et al.: Sciatic nerve schwannoma: case report. Turk Neurosurg. 2014; 24(1): 120–2.\n\nHaspolat Y, Ozkan FU, Turkmen I, et al.: Sciatica due to Schwannoma at the Sciatic Notch. Case Rep Orthop. 2013; 2013: 510901.\n\nChikkanna JK, Gopal S, Sampath D: Mystery of Sciatica Resolved - A Rare Case Report. J Clin Diagn Res. 2016; 10(1): RD04–RD05.\n\nKumar S, Ralli M, Sharma J, et al.: Sciatic schwannoma: A rare entity. Clin Cancer Investig J. 2015; 4(6): 720–2.\n\nPatil PG, Friedman AH: Surgical exposure of the sciatic nerve in the gluteal region: anatomic and historical comparison of two approaches. Neurosurgery. 2005; 56(1 Suppl): 165–171; discussion 165–71.\n\nSocolovsky M, Garategui L, Campero A, et al.: Exposure of the sciatic nerve in the gluteal region without sectioning the gluteus maximus: an anatomical and microsurgical study. Acta Neurochir Suppl. 2011; 108: 233–240.\n\nMontano N, Novello M, D'Alessandris QG, et al.: Intrapelvic sciatic notch schwannoma: microsurgical excision using the infragluteal approach. J Neurosurg. 2013; 119(3): 751–5."
}
|
[
{
"id": "20984",
"date": "15 Mar 2017",
"name": "Ravi Dadlani",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nI congratulate the authors on an interesting case report.\nIt is a well-written report but I would like to suggest a few additional points.\n\nSince several such case reports have been published earlier, it would be interesting if the authors could add a 'review of literature': a single tabulated format with some interesting characteristics, such as the exact location of the tumor along the course of the sciatic nerve.\nIt would also be interesting to see a small table with other 'sciatica mimics'. Personally I have seen lumbosacral plexus tumors presenting with sciatica.\n\nThe article may be accepted for indexing with these minor additions.",
"responses": []
},
{
"id": "21114",
"date": "20 Mar 2017",
"name": "Guru Dutta Satyarthee",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nMicrosurgical excision of sciatic nerve schwannoma with good outcome\n\nThe authors reported an interesting case of sciatic nerve schwannoma in a 69-year-old female who had been symptomatic for two years. Magnetic resonance imaging revealed the presence of a mass lesion causing expansion of the right sciatic nerve. A provisional diagnosis of peripheral nerve sheath tumor was made. She underwent microsurgical excision using a sub-gluteal approach: intraoperative expansion of the sciatic nerve was observed, nerve fascicles were carefully separated from the mass, and the well-circumscribed lesion was excised under physiological nerve monitoring. Histopathology was suggestive of schwannoma, and her symptoms ameliorated1.\n\nSchwannoma is a benign peripheral nerve tumor of Schwann cell origin that usually presents as a slow-growing, solitary, well-circumscribed mass. Sciatic nerve involvement represents less than 1% of all schwannomas2. The symptoms of a peripheral nerve sheath schwannoma relate to alteration in the function of the nerve and the surrounding muscle and neurovascular bundles, and most commonly present as paraesthesia or pain of insidious onset that progresses slowly2-4. Pain is a much more common symptom than motor deficits. Pain due to sciatic nerve schwannoma may simulate the chronic sciatic pain produced by a prolapsed lumbar disc. Physical examination may reveal the presence of a lump along the course of the sciatic nerve, which is tender, mobile along the transverse axis but not along the course of the nerve, and typically associated with a positive Tinel sign. However, pre-operative work-up (even with magnetic resonance imaging) cannot in most cases reliably distinguish among schwannoma, neurofibroma and plexiform neurofibroma, although it aids in delineating the shape, size, location and extent of the lesion and its relation to the parent nerve and adjacent neurovascular structures and muscle. Imaging thus plays a limited role in distinguishing among peripheral nerve sheath tumors. Magnetic resonance imaging may show the presence of a fusiform mass with characteristic tapering cephalad and distal ends, the fasciculation sign and the split-fat sign3-4. The mass is eccentrically located and well-circumscribed, and shows an isointense signal on T1-weighted images and a hyperintense signal on T2-weighted images, with a hypointense peripheral rim representing the capsule3-4. The diagnosis of sciatic nerve schwannoma depends on MRI of the sciatic nerve, carried out when MRI findings of the lumbar spine are normal but the patient still complains of persistent sciatica-like pain. Treatment of this epineurium-encapsulated tumour is microsurgical excision with careful preservation of the sciatic nerve fascicles. Histopathological examination of the resected specimen confirms the definitive diagnosis1-5.\n\nKim et al. analysed 397 cases of peripheral nerve sheath tumor, of which 91% were benign and the rest malignant. A total of 251 were located in the brachial plexus region or upper limb: 141 benign lesions were brachial plexus tumors and the remaining 110 were upper-extremity benign peripheral nerve sheath tumors. In contrast, the peripheral nerve sheath tumors involving the lower limbs included 32 schwannomas and 53 neurofibromas5.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-267
|
https://f1000research.com/articles/5-2927/v1
|
28 Dec 16
|
{
"type": "Research Article",
"title": "Systematic assessment of multi-gene predictors of pan-cancer cell line sensitivity to drugs exploiting gene expression data",
"authors": [
"Linh Nguyen",
"Cuong C Dang",
"Pedro J. Ballester"
],
"abstract": "Background: Selected gene mutations are routinely used to guide the selection of cancer drugs for a given patient tumour. Large pharmacogenomic data sets were introduced to discover more of these single-gene markers of drug sensitivity. Very recently, machine learning regression has been used to investigate how well cancer cell line sensitivity to drugs is predicted depending on the type of molecular profile. The latter has revealed that gene expression data is the most predictive profile in the pan-cancer setting. However, no study to date has exploited GDSC data to systematically compare the performance of machine learning models based on multi-gene expression data against that of widely-used single-gene markers based on genomics data. Methods: Here we present this systematic comparison using Random Forest (RF) classifiers exploiting the expression levels of 13,321 genes and an average of 501 tested cell lines per drug. To account for time-dependent batch effects in IC50 measurements, we employ independent test sets generated with more recent GDSC data than that used to train the predictors and show that this is a more realistic validation than K-fold cross-validation. Results and Discussion: Across 127 GDSC drugs, our results show that the single-gene markers unveiled by the MANOVA analysis tend to achieve higher precision than these RF-based multi-gene models, at the cost of generally having a poor recall (i.e. correctly detecting only a small part of the cell lines sensitive to the drug). Regarding overall classification performance, about two thirds of the drugs are better predicted by multi-gene RF classifiers. Among the drugs with the most predictive of these models, we found pyrimethamine, sunitinib and 17-AAG. Conclusions: We now know that this type of models can predict in vitro tumour response to these drugs. These models can thus be further investigated on in vivo tumour models.",
"keywords": [
"pharmacogenomics",
"pharmacotranscriptomics",
"precision oncology",
"machine learning",
"biomarkers",
"benchmarking",
"drug response",
"bioinformatics"
],
"content": "Introduction\n\nPersonalised approaches to the diagnosis and treatment of cancer are a strong current trend, often based on the analysis of tumour DNA1. Somatic DNA mutations can affect the abundance and function of a range of gene products, including those involved in the response of the tumour to anticancer therapy2. Therefore, the genomic profile of a tumour is usually valuable for predicting its sensitivity to a certain drug3,4. Thus, a number of studies have profiled tumours using single-nucleotide variants or copy-number alterations as input features to predict which tumours will be sensitive to a given drug. In addition, transcriptomic data has also been proven to be an informative molecular profile5, as the expression levels of genes have led to the identification of cancer subtypes, prognosis prediction and drug sensitivity prediction6.\n\nHuman-derived cancer cell lines, especially immortalised cancer cell lines, play an important role in preclinical research for the discovery of genomic markers of drug sensitivity5,7–9. This type of tumour model permits experiments to be implemented quickly and at a relatively low cost10,11, unlike more patient-relevant models, such as ex vivo tumour cultures12,13 or patient-derived xenografts14,15 (in contrast to these advantages, cell lines also have well-known limitations that have to be kept in mind10). The molecular profiles of such cell lines are often used as input features for drug sensitivity prediction5,8 via the development of both single-gene markers and other models, like pharmacogenomics16–18, pharmacotranscriptomics19–21, multi-task learning16,17,22–25 and quantitative structure-activity relationship (QSAR) models26,27. Recently, several consortia have generated large pharmacogenomic data sets, which consist of both molecular and drug sensitivity profiles of several hundreds of cancer cell lines, e.g. 
Genomics of Drug Sensitivity in Cancer (GDSC)8, Cancer Cell Line Encyclopedia (CCLE)9, and the National Cancer Institute's panel of 60 cancer cell lines (NCI-60)7. Among them, the GDSC currently offers the highest number of cell lines tested per drug8.\n\nTo date, predictive models based on GDSC data have been mostly restricted to single-gene markers of drug sensitivity8 (i.e. statistically significant drug-gene associations). However, multi-gene elastic net models have also been used for a related purpose, namely estimating the importance of somatic mutations in drug sensitivity prediction8. Some researchers have also investigated the performance of multi-gene machine learning models exploiting GDSC data16. Nevertheless, we and others9,17,18 have not yet studied how well multi-gene markers compare to single-gene markers. Such an analysis is essential to understand the benefits of modelling multiple gene alterations. Very recently, machine learning models have been used to compare the predictive value of various molecular profiles in drug sensitivity modelling5, but without comparing such models to single-gene markers. An important outcome of that study was that gene expression data is the most predictive molecular profile in the pan-cancer setting. Beyond this research area, multi-variate machine learning models are also starting to be advocated for genomic-based prediction of other complex phenotypic traits28.\n\nIn practice, it is entirely possible that models based on one feature (single-gene markers) are more predictive than those based on more than one feature (multi-variate classifiers). In part, this is due to the high dimensionality of the training data (in the present study, the number of gene expression values is much higher than that of cell lines treated with the considered drug), which poses a challenge to classifiers. 
Furthermore, as cell line sensitivity to a drug depends on its molecular features, the performance of models exploiting different molecular profiles will be drug-dependent. Therefore, the key question is for which drugs are multivariate markers more predictive of cell line sensitivity than univariate markers. Very recently, this question has been finally investigated using large-scale GDSC data5, although there are several limitations in this analysis. First, this study considered LOBICO logic models with up to four features because searching for more complex models was not feasible with LOBICO5; however, a drug can have many more than four informative gene alterations. Second, machine learning models were only used to establish which molecular profiles were more informative on average across all drugs. Hence, the performances of these models were not compared against those of single-gene markers (this was only done with logic models). Third, both logic model selection and its classification performance assessment were performed using the same data folds in the adopted cross-validation procedure. Therefore, these cross-validated results represent an overoptimistic performance assessment of LOBICO models.\n\nHere we study the performance of machine learning exploiting gene expression profiles. In addition, we compare the performance of these multi-gene machine learning models to that of single-gene markers. For each drug, this analysis is conducted by selecting its best single-gene marker and its multi-gene model on a training set representing the data available at model selection time. Thereafter, we test both models in an unbiased manner using a time-stamped independent test set, i.e. data that was generated after the training data and not used for model building or selection. 
The advantages of using a time-stamped data partition instead of K-fold cross-validation are that this mimics a blind test, the same data is not used for both model selection and performance assessment (thus avoiding performance overestimation) and real-world issues like time-dependent batch effects29 are taken into account. On the other hand, since transcriptomic data has been found to be the most predictive in the pan-cancer setting5, our study focuses on the exploitation of transcriptomic data. In particular, the predictive performance of pan-cancer markers of drug sensitivity on an independent test set is most relevant to help stratify patients for basket trials30, where patients with any type of cancer are included if their tumours are predicted to be sensitive to the investigated treatment. Another reason for limiting the scope to transcriptomic-based machine learning models is that models integrating data from multiple molecular profiling technologies would be less amenable to clinical implementation, due to much higher requirements in cost, time and resources per patient. Therefore, there is a need to understand for which drugs models combining gene expression values provide better cell line sensitivity prediction than standard single-gene markers.\n\n\nMethods\n\nFrom the Genomics of Drug Sensitivity in Cancer (GDSC) ftp server (ftp://ftp.sanger.ac.uk/pub4/cancerrxgene/releases/), the following files from the first data release (release 1.0) were downloaded: gdsc_manova_input_w1.csv and gdsc_manova_output_w1.csv. There are 130 unique drugs in gdsc_manova_input_w1.csv, as camptothecin was tested twice (drug IDs 195 and 1003), and thus we only kept the instance that was more widely tested (i.e. drug ID 1003 on 430 cell lines). Hence, the data represent a panel of 130 drugs tested against 638 cancer cell lines, resulting in a total of 47,748 IC50 values (57.6% of all possible drug-cell pairs). 
In addition, we downloaded new data from release 5.0 (gdsc_manova_input_w5.csv), which is the latest release using the same experimental techniques to generate pharmacogenomic data and considering the same genes as in the first release. Release 5.0 contains 139 drugs tested on 708 cell lines comprising 79,401 IC50 values (80.7% of all possible drug-cell pairs). Hence, the majority of the new IC50 values came from previously untested drug-cell pairs formed by drugs and cell lines in common between both releases. The downloaded IC50 values are actually the natural logarithm of IC50 in µM units, so negative values came from drug responses more potent than 1µM. Each of these values was converted into its logarithm base 10 in µM units, denoted as logIC50 (e.g. logIC50=1 corresponds to IC50=10µM). In this way, differences between two drug response values are expressed as orders of magnitude in the molar scale.\n\ngdsc_manova_input_w1.csv also contains genetic mutation data for 68 cancer genes (these were selected as the most frequently mutated cancer genes8 and their mutational statuses characterise each of the 638 cell lines). For each gene-cell pair, an ‘x::y’ description is provided, where ‘x’ specifies a coding variant and ‘y’ states copy number information from SNP6.0 data. As usual8, a gene for which a mutation is not detected in a given cell line is annotated as wild-type (wt). A gene mutation is annotated if a) a protein sequence variant is detected (x ≠ {wt, na}) or b) a gene deletion/amplification is detected. The latter corresponds to a copy number (cn) range that is different from the wt value of y=0<cn<8. Furthermore, three genomic translocations (BCR_ABL, MLL_AFF1 and EWS_FLI1) were considered by the GDSC. For each of these gene fusions, cell lines are either identified as having no detected fusion or the identified fusion is stated (i.e. wt or mutated with respect to the gene fusion, respectively). 
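The natural-log to log10 conversion of IC50 values described above amounts to a single division by ln(10); a minimal sketch in Python (illustrative only, since the study's own code was written in R):

```python
import math

def ln_ic50_to_log10(ln_ic50):
    """Convert a natural-log IC50 in µM units (as distributed by the
    GDSC) to logarithm base 10, so that differences between two drug
    responses read as orders of magnitude on the molar scale."""
    return ln_ic50 / math.log(10)
```

For example, an ln(IC50) of ln(10) ≈ 2.303 maps to logIC50 = 1, i.e. IC50 = 10 µM, matching the example in the text.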
The microsatellite instability (msi) status of each cell line is also determined and provided. Further details can be found in the original publication by Garnett et al.8.\n\nGene expression data was generated using Affymetrix Human Genome U219 Array Chip and was normalized with the Robust Multi-Array Average method. The number of cell lines with gene expression data in releases 1.0 and 5.0 of the GDSC are 571 and 624, respectively. In terms of data in common, both releases contain the expression level of 13,321 genes across 624 cancer cell lines. These genes consist of 12,644 protein coding genes, 47 pseudogenes, 29 non-coding RNA genes and 601 uncharacterized genes according to the HUGO Gene Nomenclature Committee (HGNC).\n\nThere are 127 drugs in common between both releases. Three drugs are exclusively included in the first release (A-769662, Metformin and BI-D1870), whereas release 5.0 contains 12 additional drugs (TGX221, OSU-03012, LAQ824, GSK-1904529A, CCT007093, EHT 1864, BMS-708163, PF-4708671, JNJ-26854165, TW 37, CCT018159 and AG-014699).\n\nRegarding genomic features, cell lines from both releases have been profiled for 71 common gene alterations in cancer. In addition to the three translocations and msi status, the mutational statuses of 67 genes could be considered (i.e. those for the 68 selected genes in the first release except for the mutational status of the WT1 gene, which was not included in the subsequent 5.0 release). To ensure that we are using exactly the same drug-gene associations as in the GDSC study, we directly employed the associations and their p-values as downloaded from release 1.0.\n\nThere are two non-overlapping data sets per drug. The training set contains the cell lines tested with the drug and gene expression data in release 1.0 (the minimum, average and maximum numbers of cell lines across training data sets are 237, 330 and 467, respectively), along with their IC50s for the considered drug. 
The test set contains the new cell lines tested with the drug and with gene expression data in release 5.0 (the minimum, average and maximum numbers of cell lines in the test data sets are 42, 171 and 306, respectively). Thus, a total of 254 pharmacotranscriptomic data sets were assembled and analysed for this study.\n\nThe pharmacotranscriptomic data for the ith drug (Di) can be represented as follows:\n\nDi = {(xj, yj) | j = 1, …, ni}\n\nin which the sensitivity of cancer cell lines against the ith drug has been tested on ni cell lines, xj is the vector with the 13,321 gene expression values of the jth cell line and yj is its logIC50 for the drug. The data can act as a training set, cross-validation fold or test set of any of the tested drugs.\n\nFirst, a cell line sensitivity threshold is defined to distinguish between those resistant or sensitive to a given drug. For each drug, we calculated the median of all the logIC50 values from training set cell lines and fixed it as the threshold. Cell lines with logIC50 below the threshold are therefore sensitive, while those with logIC50 above the threshold are resistant to the drug of interest.\n\nUpon using the model to make predictions in a given data set, two different sets of cell lines will be obtained for each drug: those predicted to be sensitive and those predicted to be resistant. Then, using the threshold for the drug, we can assess classification performance by calculating the number of cell lines in each of these four categories: true positive (TP), true negative (TN), false positive (FP) and false negative (FN). These can be summarised by the Matthews Correlation Coefficient (MCC):\n\nMCC = (TP·TN − FP·FN) / √((TP+FP)·(TP+FN)·(TN+FP)·(TN+FN))\n\nMCC takes values from -1 to 1. 
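Stepping back to the binarisation step, the median-based sensitivity threshold described above can be sketched in a few lines of Python (illustrative only; the study's analysis was carried out in R):

```python
import statistics

def sensitivity_labels(train_logic50s, logic50s):
    """Binarise drug response: the median logIC50 of the training cell
    lines is fixed as the threshold; a cell line with logIC50 below the
    threshold is labelled sensitive (True), otherwise resistant (False)."""
    threshold = statistics.median(train_logic50s)
    return [value < threshold for value in logic50s]
```

Note that the threshold is computed from the training set only, and the same threshold is then applied to label the independent test set.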
An MCC value of 0 means that the tested model has no predictive value, an MCC lower than 0 means that the tested model predicts drug sensitivity worse than random and an MCC value equal to 1 indicates that the tested model perfectly predicts the sensitivity of the cell lines against the drug of interest.\n\nIn addition to MCC, we also investigated the precision (PR), recall (RC) and F1-score (F1) of the model for each drug to provide a more comprehensive comparison of multi-gene models to single-gene markers. Precision and recall are two measures of performance for a binary classifier, which can be calculated as follows:\n\nPR = TP / (TP + FP)\n\nRC = TP / (TP + FN)\n\nBoth metrics can take values from 0 to 1. Precision and recall equal to 0 mean that TP = 0, i.e. the model fails to identify any cell line sensitive to the drug. By contrast, PR and RC equal to 1 mean that FP and FN are equal to 0, respectively. In these cases, either the model does not predict any resistant cell line as sensitive (FP = 0) or it does not misclassify sensitive cell lines as resistant (FN = 0), respectively.\n\nThe F1-score is another measure combining PR and RC. The F1-score can be computed as:\n\nF1 = 2·PR·RC / (PR + RC)\n\nThe F1-score is at most 1 (when both PR and RC equal 1) and at least 0 (when RC = 0 regardless of the PR value, or vice versa). High F1-scores mean that both precision and recall are high for the classifier.\n\nWe downloaded gdsc_manova_output_w1.csv containing 8,701 drug-gene associations with their corresponding p-values computed by the MANOVA test. Then, we kept those associations involving the 127 common drugs, leading to a set of 8,330 drug-gene associations, of which 386 were significant (i.e. p-value smaller than an FDR 20% Benjamini-Hochberg adjusted threshold of 0.00840749). As in previous studies5,8, each statistically significant drug-gene association is regarded as a single-gene marker of in vitro drug response.\n\nThe best single-gene marker for a drug was identified as its drug-gene association with the lowest p-value. 
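The four performance metrics used in this study (MCC, PR, RC, F1) can all be computed directly from the confusion-matrix counts; a compact Python sketch (illustrative only, the study's code was in R), including the zero conventions described in the text for markers that make no positive predictions:

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews Correlation Coefficient from confusion-matrix counts;
    returns 0.0 when a margin of the confusion matrix is empty."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

def precision_recall_f1(tp, fp, fn):
    """PR = TP/(TP+FP), RC = TP/(TP+FN), F1 = 2*PR*RC/(PR+RC),
    with 0.0 returned whenever a denominator vanishes."""
    pr = tp / (tp + fp) if tp + fp else 0.0
    rc = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * pr * rc / (pr + rc) if pr + rc else 0.0
    return pr, rc, f1
```

For instance, a perfect classifier (FP = FN = 0) yields MCC = F1 = 1, while a marker with TP = FP = 0, as happens for the 27 drugs mentioned above, is assigned MCC = PR = 0.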
This constitutes a binary classifier with a single independent variable, built using training data alone and fixed at this model selection stage. These per-drug lowest p-values were not statistically significant for 15 of the 127 drugs, with the highest of these being P = 0.0354067. In these cases, we still selected them as the best available for these single-gene classifiers. Otherwise, multi-gene markers would be directly better than the single-gene approach for these drugs.\n\nAfter the model selection step, the single-gene marker for each drug is assessed on the corresponding independent test set. This form of external validation is particularly demanding, since the test data is completely separate from training data and constitutes future data from the model training perspective. For 27 drugs, none of the cell lines in the test set harbour the marker mutation and hence TP=FP=0. Therefore, no prediction is provided by these markers and thus MCC and PR are assigned a zero value.\n\nFor each of the 127 drugs, we built a Random Forest (RF) classification model31 using exactly the same pharmacological data for training as the corresponding single-gene marker. However, while single-gene markers leverage genomic data, these RF models exploit transcriptomic data instead. All the 13,321 gene expression values are used as features (RF_all). Each RF model was built using 1000 trees and the recommended value of its control parameter mtry (the square root of the number of considered features, thus mtry=115 here). All the described modelling was implemented in the R language, using Microsoft R Open (MRO) version 3.2.5.\n\n\nResults and discussion\n\nA single-gene marker is a classifier considering the mutational status of a given gene as its only independent variable (i.e. whether the gene is wild-type or mutated). 
As the gene used as a marker arises from the analysis of which drug-gene associations are statistically significant based on the training data, external validation of such markers is not carried out. In this sense, machine learning represents a different culture32, where the validity of the predictor is only demonstrated if its prediction is better than random on a test set independent of the employed training set. In this study, we use the same test set to compare the performance of both single-gene markers and multi-gene transcriptomic-based RF models.\n\nFor each drug, there were two data sets generated with non-overlapping sets of cancer cell lines. The first data set was the training set, which contains cell lines that were tested prior to release 1.0 of the GDSC data, each with its IC50 values for the drug and its gene expression profile. The second data set was the test set, including the new cell lines from release 5.0 (i.e. new data not included in the first release). The median logIC50 in µM units of all cell lines in the training set defines the sensitivity threshold for both the training set and the test set. The next step was evaluating the performance of both methods in both data sets by calculating the Matthews Correlation Coefficient (MCC), Precision (PR), Recall (RC) and F1-score (F1). The Methods section provides further details on performance evaluation.\n\nRandom Forest (RF)31 is a machine learning technique that works well on high-dimensional data33, including GDSC data16. Therefore, without making any claim about its optimality, we constructed an RF classification model on the same training data set as the single-gene marker. This permits a direct comparison of the two models. Each RF model was built using 1000 trees, with the default value of the control parameter mtry (the square root of the number of considered features). The built RF model was subsequently tested on the corresponding test set. 
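The RF configuration just described (1000 trees, mtry equal to the square root of the number of features) was fitted in R; an equivalent sketch with scikit-learn is shown below, using a small random matrix as a stand-in for the 13,321-gene GDSC expression profiles (all data here are simulated, not the study's data):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Random stand-ins for the real data: rows are cell lines, columns are
# gene expression features; labels are 1 = sensitive, 0 = resistant.
X_train = rng.normal(size=(100, 400))
y_train = rng.integers(0, 2, size=100)
X_test = rng.normal(size=(20, 400))

rf = RandomForestClassifier(
    n_estimators=1000,     # 1000 trees, as in the study
    max_features="sqrt",   # mtry = sqrt(#features); sqrt(13321) ≈ 115 in the study
    random_state=0,
)
rf.fit(X_train, y_train)
predicted_sensitive = rf.predict(X_test)  # binary call per unseen cell line
```

With random labels such a model has no real signal, which mirrors the overfitting point made below: a forest can fit its training set perfectly while only test set performance is meaningful.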
Figure 1 displays the results for the drug pyrimethamine as an example. Pyrimethamine targets dihydrofolate reductase in the DNA replication pathway34 and its strongest association is to the BRAF gene (P=0.002) leading to a moderate level of prediction in this training set (Figure 1A). The prediction of this single-gene marker on the test set (Figure 1B) is worse than random (MCC=-0.03), with its recall being particularly poor (RC=0.03) and average precision at 0.50. Unsurprisingly, RF prediction on the training set is perfect due to intense overfitting35 arising from the high dimensionality of the problem (Figure 1C). Nevertheless, it is important to note that this overfitted model achieves a substantially better test set performance than that of the best single-gene marker (compare Figures 1D and 1B, respectively).\n\n(A) The single-gene marker with the lowest p-value on the training set was the pyrimethamine-BRAF sensitising association (P=0.002)8. (B) The boxplots show the sensitivity of cell lines on the independent test set for pyrimethamine depending on whether these harbour mutations in the BRAF gene or not (WT). Using this marker, BRAF-mutant cell lines are predicted to be sensitive to this drug (i.e. below the threshold in red established with training data), but the prediction is worse than random (Matthews Correlation Coefficient (MCC)=-0.03) with its recall being particularly poor (RC=0.03) and average precision (PR)=0.50. (C) The multi-gene marker was built using Random Forests (RF) and the gene expression profile on exactly the same drug-cell pairs as the single-gene markers. (D) On the test set, the RF classifier achieves a substantially better performance than single-gene markers (MCC=0.36 vs -0.03) with PR=0.76 and RC=0.66.\n\nTo assess the proportion of cell lines predicted to be sensitive that are actually sensitive to a drug by each model, we calculated their precision (PR) on the test set. 
Figure 2 shows the comparison between test set precision of single-gene markers and that of multi-gene models across 127 drugs. The precision of each method is highly drug-dependent and 61 drugs had their best single-gene marker leading to higher precision than the corresponding multi-gene model, whereas the other 66 drugs had the multi-gene model with better precision (see Supplementary Results). In other words, the sensitivity of cancer cell lines against 66 drugs can be predicted with higher precision when exploiting multi-variate gene expression data rather than a single gene mutation. In particular, the multi-gene model provides better precision for all the drugs for which the best single-gene marker involves a relatively rare mutation (i.e. those for which no test set cell line is mutated with respect to the marker gene and thus are unable to provide any level of precision).\n\nA large variability is observed, with 66 drugs obtaining better precision with Random Forest (RF) classifiers using all transcriptomic features. Cytotoxic drugs are in red and targeted drugs are in blue.\n\nNext, we present two examples of drugs for which the test set precision generated by the multi-gene model is higher than that of the single-gene model (Figure 3). AZD628 is a b-raf inhibitor, which plays a regulatory role in the MAPK/ERK pathway36. This drug is associated with the mutations in the BRAF gene (P=3∙10-15), which codes for the b-raf kinase. In total, 50% of BRAF-mutant cell lines are sensitive to this drug, while using the RF model combining all 13,321 transcriptomic features results in 88% of cell lines predicted to be sensitive being actually sensitive to this drug. The second example is the prediction of sensitivity to sunitinib, which targets multiple receptor tyrosine kinases regulating different aspects of cell signaling37. The most strongly associated gene to sunitinib is Kinase Insert Domain Receptor (KDR) (P=0.0002). 
As no KDR mutation was found in any test cell line, the single-gene marker could not predict the sensitivity of any cell line to sunitinib (PR=0). In contrast, the multi-gene model provides a much better precision for this drug (PR=0.66). The multi-gene models of both drugs also generate a higher recall than their corresponding single-gene models, which is investigated in the following section.\n\n(A) Test set precision obtained by the AZD628-BRAF marker is moderate (PR=0.50) despite being a strong drug-gene association (P=3×10⁻¹⁵). By contrast, the multi-gene marker for AZD628 achieves a substantially higher precision (PR=0.88). (B) The sunitinib-Kinase Insert Domain Receptor (KDR) association (P=0.0002) offers no precision in the test set, since none of the test cell lines harbour mutations in the KDR gene. By contrast, the transcriptomic marker achieves a much higher precision (PR=0.75). Interestingly, both multi-gene markers achieve much better recall (RC=0.37 for AZD628 and RC=0.75 for sunitinib) than their corresponding single-gene markers (RC=0.05 and RC=0.00), which means that a substantially higher proportion of sensitive cell lines are correctly predicted as sensitive.\n\nFigure 3 shows that the test set recall is much higher for multi-gene markers than for single-gene markers of AZD628 and sunitinib. To examine whether this is a general trend, Figure 4A plots test set recall across all the drugs. There is indeed a clear trend: 119 out of 127 drugs obtain a higher proportion of correctly predicted sensitive cell lines with the multi-gene markers.\n\n(A) Transcriptomic markers achieve much higher recall than single-gene markers in 117 of the 127 drugs. (B) Similarly, multi-gene markers achieve higher F-scores in 117 of the 127 drugs. In each plot, cytotoxic drugs are in red and targeted drugs are in blue. All cytotoxic drugs have better recall and F-scores with the Random Forest (RF) transcriptomic models.\n\nFigure 4B shows the test set F-score (F1) for the same drugs.
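The F-score is the harmonic mean of precision and recall, so it is only high when both components are high. A small Python helper (the values in the usage line are illustrative only):

```python
def f1_score(pr, rc):
    """F1: harmonic mean of precision (PR) and recall (RC); defined as 0 when both are 0."""
    return 2 * pr * rc / (pr + rc) if (pr + rc) else 0.0

# A marker with decent precision but near-zero recall still scores poorly:
low_recall_f1 = f1_score(0.50, 0.03)
balanced_f1 = f1_score(0.50, 0.50)
```

This is why single-gene markers that fire on a rare mutation can show respectable precision yet a very low F1.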
High F1 values highlight markers achieving both high precision and high recall in the test set. Notably, the multi-gene classifiers lead to better recall and F1-scores for all the cytotoxic drugs. We selected two drugs with high F1 values from the multi-gene markers, BAY-61-3606 and 17-AAG, for further analysis (Figure 5).\n\n(A) Mutated Smoothened, Frizzled Class Receptor (SMO) was the most significant single-gene marker of BAY-61-3606 resistance (P=0.03) using training data. On the test set, this marker obtained no precision and no recall because the only SMO-mutant test set cell line was misclassified. By contrast, the corresponding multi-gene marker, built with the same training data, obtained a high precision (PR=0.68) and better recall (RC=0.83) on the same test data. (B) Mutated receptor tyrosine-protein kinase erbB-2 (ERBB2) is the most significant single-gene marker of 17-AAG sensitivity (P=0.008), but its test set recall is poor (RC=0.03). By contrast, the multi-gene marker achieves a much higher precision (PR=0.61) and recall (RC=0.75).\n\nFigure 5 compares the test set performance of single-gene and multi-gene models for BAY-61-3606 (Figure 5A) and 17-AAG (Figure 5B). BAY-61-3606 is an inhibitor of the spleen tyrosine kinase, which has key roles in adaptive immune receptor signalling, as well as in the regulation of cellular adhesion and vascular development38. The single-gene model generates poor precision and recall for this drug (PR = RC = 0), as the only cell line harbouring the actionable mutation was incorrectly predicted as resistant (TP = 0). By contrast, the multi-gene model achieves high performance in terms of both precision and recall (PR = 0.68 and RC = 0.83). Turning to Figure 5B, 17-AAG specifically inhibits HSP90, a protein that chaperones the folding of proteins required for tumour growth39.
The multi-gene model provides much higher PR (PR = 0.61) and RC (RC = 0.75) than its best single-gene marker (PR = 0.50 and RC = 0.03). This case exemplifies a common problem with single-gene markers: often only a small proportion of tumours harbour the actionable mutation40. This translates into very low recall, which in a clinical setting would mean that only a small proportion of the patients responsive to the drug would be treated with it, the rest being missed by the marker.\n\nAfter separately analysing the two sources of classification error via precision and recall, we analysed both types of error together in order to assess which predictors are better than random classification (i.e. MCC = 0)41.\n\nThe classification performance of both models can in principle be assessed in three ways across the considered drugs (Figure 6). Figure 6A evaluates the MCC of both predictors on the training data, which is common practice with single-gene markers. Figure 6C presents the evaluation of MCC on the non-overlapping test sets. Single-gene markers perform better on the training set than on the test set (on average, MCCtraining=0.11 vs MCCtest=0.05; Figures 6A and C), which is due at least in part to the identification of chance correlations in the training set. Unsurprisingly, multi-gene models perform much better on the training set due to intense overfitting (on average across drugs, MCCtraining=1 vs MCCtest=0.12). However, despite this overfitting, these models provide on average better test set performance than single-gene markers (MCCtest=0.12 vs MCCtest=0.05).
This is a well-known characteristic of the RF technique, which is robust to overfitting in the sense that, despite fitting the training data perfectly, it can still generalise competitively to other data sets (this behaviour has also been observed in analogous applications of RF43).\n\n(A) Performance assessment on the training data would be strongly biased towards multi-gene markers due to intense overfitting (given the high dimensionality of the training data, multi-gene markers obtain maximum Matthews Correlation Coefficient (MCC) for all drugs). (B) The performance of single-gene markers on the test set is compared to the 10-fold cross-validated performance of multi-gene markers using training data. The cross-validation is not used for model selection, as there is only one Random Forest (RF) model per drug (i.e. no RF control parameter is tuned because the recommended mtry is used, given the high dimensionality of each of the 127 classification problems). However, cross-validation results are substantially better than those from the test set with more recent GDSC data (MCC of 0.18 averaged over the drugs), which suggests time-dependent batch effects10,42. (C) Using all the comparable data released after the initial GDSC release as a time-stamped test set, 66.1% of drugs are better predicted by the transcriptomic features (this figure is 84.2% using cross-validation). This is the most realistic form of retrospective performance assessment, and it leads to the worst results on this challenging problem (MCC of 0.12 averaged over the drugs).\n\nFigure 6B shows the comparison between the performance of the single-gene markers on the test set and the 10-fold cross-validated performance of the multi-gene markers on the training set. The latter provides a more optimistic performance assessment (average MCC=0.18 and 84.2% of drugs better predicted by the multi-gene models).
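In outline, the 10-fold cross-validation referred to here partitions the training cell lines into ten disjoint folds; the model is refitted on nine folds and evaluated on the held-out fold in turn. A self-contained Python sketch of the fold generation only (the study itself used R; this index helper is generic and not taken from the authors' code):

```python
def kfold_indices(n_samples, k=10):
    """Yield (train_idx, test_idx) pairs: k disjoint test folds covering all samples."""
    # Distribute samples as evenly as possible across the k folds.
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(list(range(start, start + size)))
        start += size
    for i, test_idx in enumerate(folds):
        # Training indices are every sample outside the held-out fold.
        train_idx = [j for fold in folds[:i] + folds[i + 1:] for j in fold]
        yield train_idx, test_idx
```

Averaging the per-fold MCC over these ten splits yields the cross-validated estimate quoted above.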
This is likely due to batch effects, such as differences between batches of culture medium, which are known to affect drug sensitivity measurements10,42. As expected, testing the models on the independent test sets generates worse results than on the training set or in cross-validation.\n\n\nConclusions\n\nTo the best of our knowledge, this is the first systematic comparison of single-gene markers versus transcriptomic-based machine learning models of cell line sensitivity to drugs. This is important, as transcriptomic data has been shown to be the most predictive data type in the pan-cancer setting5. A closely related analysis was included in a very recent study5. However, that analysis is based on logic classifiers that can exploit only up to four features, rather than fully-featured machine learning classifiers. Furthermore, the performance results in that study are based on cross-validation, leading to overoptimistic performance estimates due to the batch effects we have seen here. The latter would be exacerbated if the same cross-validation is also used for model selection, as was the case in that study5. Despite these limitations, such logic classifiers are very valuable, as they can potentially explain why a particular cell line is sensitive to a drug, something that machine learning classifiers are not well suited to.\n\nAlthough single-gene markers were able to predict the sensitivity of cancer cell lines to anti-cancer drugs with generally high test set precision (Figure 2), very poor precision and very low recall were obtained for other drugs, especially those whose best marker involves a relatively rare actionable mutation. On the other hand, multi-gene classifiers obtained a much better recall, also known as sensitivity, for most of the drugs (Figure 4). This result is in line with criticism of single-gene markers, which identify only an extremely small proportion of patients who can benefit40.
In this sense, one could argue that there is a need not only for precision oncology, but for precision and recall oncology, and that multi-variate classifiers have the potential to identify all the responsive patients, not only the subset with an actionable mutation.\n\nWhile no strong single-gene markers of sensitivity were found for cytotoxic drugs8, the multi-gene machine learning models perform better than the single-gene markers for 12 of the 14 cytotoxic drugs (Figure 6C), with all cytotoxic drugs having better recall (Figure 4A). This suggests that sensitivity to cytotoxic drugs has a more strongly multi-factorial nature, which is thus better captured by multi-gene models. Although much less developed to date, personalised oncology approaches have already been suggested for cytotoxic drugs44,45.\n\nThe study of molecular markers of drug sensitivity is currently of great interest. This endeavour is not limited to improving personalised oncology; it is also important for drug development and clinical research46,47. As part of cancer diagnosis and treatment research, a vast amount of tumour molecular profiling data is typically generated48 and thus there is an urgent need for its optimal exploitation49. Here we propose a method to exploit transcriptomic data of cancer cell lines to classify them into sensitive and resistant groups. Our study has found that cancer cell sensitivity to two thirds of the studied drugs, including 12 of the 14 cytotoxic drugs, is better predicted with multi-variate transcriptomic-based RF classifiers. These models are particularly useful for those drugs whose best genomic markers are based on rare mutations. Another contribution of this study is the proposal of a more realistic performance assessment of markers, which leads to less spectacular, but more robust, results.
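The more realistic, time-stamped assessment advocated here amounts to splitting records by release date rather than at random: everything available at the initial release trains the model, and everything released afterwards tests it. A schematic Python sketch (the record fields are illustrative, not the actual GDSC file schema):

```python
def time_stamped_split(records, cutoff):
    """Partition records into a training set (released on/before the cutoff)
    and a strictly later test set, as in a retrospective time-stamped assessment."""
    train = [r for r in records if r["release"] <= cutoff]
    test = [r for r in records if r["release"] > cutoff]
    return train, test

# Hypothetical drug-cell line responses tagged with a GDSC-style release number:
records = [
    {"cell_line": "A", "release": 1.0, "sensitive": True},
    {"cell_line": "B", "release": 1.0, "sensitive": False},
    {"cell_line": "C", "release": 5.0, "sensitive": True},
]
train, test = time_stamped_split(records, cutoff=1.0)
```

Unlike random cross-validation folds, the later test set may come from different experimental batches, which is precisely what makes this assessment harder and more realistic.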
Beyond this proof-of-concept study across 127 drugs, there are several important avenues for future work, too extensive to be incorporated here. For instance, a plethora of feature selection techniques could be applied to reduce the dimensionality of the problem prior to training the classifier for a given drug. Furthermore, the predictive performance of these models could be evaluated on more data or improved by integrating other molecular profiles. Lastly, we have used a robust classification technique, RF, but many others are available and some may be more appropriate depending on the drug analysed.\n\n\nData availability\n\nThe Genomics of Drug Sensitivity in Cancer data sets used in the present study can be found at: ftp://ftp.sanger.ac.uk/pub4/cancerrxgene/releases/release-1.0/\n\nftp://ftp.sanger.ac.uk/pub4/cancerrxgene/releases/release-5.0/",
"appendix": "Author contributions\n\n\n\nP.J.B. conceived the study, designed its implementation and wrote the manuscript with the help of L.N. L.N. and C.C.D. implemented the software and carried out the numerical experiments. All authors discussed results and commented on the manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis work has been carried out thanks to the support of an A*MIDEX grant (#ANR-11-IDEX-0001-02) funded by the French Government ‘Investissements d’Avenir’ programme, and the 911 Programme PhD scholarship from Vietnam National International Development.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nSupplementary material\n\nSupplementary Results: For each analysed drug, the performances of the best MANOVA-based single-gene marker and Random Forest (RF)-based multi-gene marker on the same test set (both methods were additionally trained on the same data set) are provided. Furthermore, the 10-fold cross-validated performance of the RF-based multi-gene marker is included.\n\nClick here to access the data.\n\n\nReferences\n\nWheeler HE, Maitland ML, Dolan ME, et al.: Cancer pharmacogenomics: strategies and challenges. Nat Rev Genet. 2013; 14(1): 23–34. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMcLeod HL: Cancer pharmacogenomics: early promise, but concerted effort needed. Science. 2013; 339(6127): 1563–1566. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAzuaje F: Computational models for predicting drug responses in cancer research. Brief Bioinform. 2016; bbw065. PubMed Abstract | Publisher Full Text\n\nCovell DG: Data Mining Approaches for Genomic Biomarker Development: Applications Using Drug Screening Data from the Cancer Genome Project and the Cancer Cell Line Encyclopedia. PLoS One. 2015; 10(7): e0127433. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nIorio F, Knijnenburg TA, Vis DJ, et al.: A Landscape of Pharmacogenomic Interactions in Cancer. Cell. 2016; 166(3): 740–754. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRapin N, Bagger FO, Jendholm J, et al.: Comparing cancer vs normal gene expression profiles identifies new disease entities and common transcriptional programs in AML patients. Blood. 2014; 123(6): 894–904. PubMed Abstract | Publisher Full Text\n\nAbaan OD, Polley EC, Davis SR, et al.: The exomes of the NCI-60 panel: a genomic resource for cancer biology and systems pharmacology. Cancer Res. 2013; 73(14): 4372–82. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGarnett MJ, Edelman EJ, Heidorn SJ, et al.: Systematic identification of genomic markers of drug sensitivity in cancer cells. Nature. 2012; 483(7391): 570–575. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBarretina J, Caponigro G, Stransky N, et al.: The Cancer Cell Line Encyclopedia enables predictive modelling of anticancer drug sensitivity. Nature. 2012; 483(7391): 603–607. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWeinstein JN: Drug discovery: Cell lines battle cancer. Nature. 2012; 483(7391): 544–5. PubMed Abstract | Publisher Full Text\n\nMajumder B, Baraneedharan U, Thiyagarajan S, et al.: Predicting clinical response to anticancer drugs using an ex vivo platform that captures tumour heterogeneity. Nat Commun. 2015; 6: 6169. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPemovska T, Kontro M, Yadav B, et al.: Individualized Systems Medicine Strategy to Tailor Treatments for Patients with Chemorefractory Acute Myeloid Leukemia. Cancer Discov. 2013; 3(12): 1416–29. PubMed Abstract | Publisher Full Text\n\nAzzam D, Volmar CH, Hassan AA, et al.: A Patient-Specific Ex Vivo Screening Platform for Personalized Acute Myeloid Leukemia (AML) Therapy. Blood. 2015; 126(23): 1352. 
Reference Source\n\nHidalgo M, Amant F, Biankin AV, et al.: Patient-derived xenograft models: an emerging platform for translational cancer research. Cancer Discov. 2014; 4(9): 998–1013. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGao H, Korn JM, Ferretti S, et al.: High-throughput screening using patient-derived tumor xenografts to predict clinical trial drug response. Nat Med. 2015; 21(11): 1318–25. PubMed Abstract | Publisher Full Text\n\nMenden MP, Iorio F, Garnett M, et al.: Machine Learning Prediction of Cancer Cell Sensitivity to Drugs Based on Genomic and Chemical Properties. PLoS One. 2013; 8(4): e61318. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAmmad-ud-din M, Georgii E, Gönen M, et al.: Integrative and personalized QSAR analysis in cancer by kernelized Bayesian matrix factorization. J Chem Inf Model. 2014; 54(8): 2347–59. PubMed Abstract | Publisher Full Text\n\nCortés-Ciriano I, van Westen GJ, Bouvier G, et al.: Improved large-scale prediction of growth inhibition patterns using the NCI60 cancer cell line panel. Bioinformatics. 2016; 32(1): 85–95. PubMed Abstract | Publisher Full Text | Free Full Text\n\nRiddick G, Song H, Ahn S, et al.: Predicting in vitro drug sensitivity using Random Forests. Bioinformatics. 2011; 27(2): 220–224. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGeeleher P, Cox NJ, Huang RS: Clinical drug response can be predicted using baseline gene expression levels and in vitro drug sensitivity in cell lines. Genome Biol. 2014; 15(3): R47. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKim S, Sundaresan V, Zhou L, et al.: Integrating Domain Specific Knowledge and Network Analysis to Predict Drug Sensitivity of Cancer Cell Lines. PLoS One. 2016; 11(9): e0162173. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWang Y, Fang J, Chen S: Inferences of drug responses in cancer cells from cancer genomic features and compound chemical and therapeutic properties. Sci Rep. 
2016; 6: 32679. PubMed Abstract | Publisher Full Text | Free Full Text\n\nYuan H, Paskov I, Paskov H, et al.: Multitask learning improves prediction of cancer drug sensitivity. Sci Rep. 2016; 6: 31619. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAmmad-Ud-Din M, Khan SA, Malani D, et al.: Drug response prediction by inferring pathway-response associations with kernelized Bayesian matrix factorization. Bioinformatics. 2016; 32(17): i455–i463. PubMed Abstract | Publisher Full Text\n\nZhang N, Wang H, Fang Y, et al.: Predicting Anticancer Drug Responses Using a Dual-Layer Integrated Cell Line-Drug Network Model. PLoS Comput Biol. 2015; 11(9): e1004498. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLee AC, Shedden K, Rosania GR, et al.: Data mining the NCI60 to predict generalized cytotoxicity. J Chem Inf Model. 2008; 48(7): 1379–88. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKumar R, Chaudhary K, Singla D, et al.: Designing of promiscuous inhibitors against pancreatic cancer cell lines. Sci Rep. 2014; 4: 4668. PubMed Abstract | Publisher Full Text | Free Full Text\n\nOkser S, Pahikkala T, Airola A, et al.: Regularized machine learning in the genetic prediction of complex traits. PLoS Genet. 2014; 10(11): e1004754. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWeinstein JN, Lorenzi PL: Cancer: Discrepancies in drug sensitivity. Nature. 2013; 504(7480): 381–3. PubMed Abstract | Publisher Full Text\n\nRedig AJ, Jänne PA: Basket trials and the evolution of clinical trial design in an era of genomic medicine. J Clin Oncol. 2015; 33(9): 975–977. PubMed Abstract | Publisher Full Text\n\nBreiman L: Random Forests. Mach Learn. 2001; 45(1): 5–32. Publisher Full Text\n\nBreiman L: Statistical Modeling: The Two Cultures (with comments and a rejoinder by the author). Stat Sci. 2001; 16(3): 199–231. Publisher Full Text\n\nChen X, Ishwaran H: Random forests for genomic data analysis. Genomics. 2012; 99(6): 323–329. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nTommasino C, Gambardella L, Buoncervello M, et al.: New derivatives of the antimalarial drug Pyrimethamine in the control of melanoma tumor growth: an in vitro and in vivo study. J Exp Clin Cancer Res. 2016; 35(1): 137. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLever J, Krzywinski M, Altman N: Points of Significance: Model selection and overfitting. Nat Methods. 2016; 13: 703–704. Publisher Full Text\n\nAnderson DJ, Durieux JK, Song K, et al.: Live-cell microscopy reveals small molecule inhibitor effects on MAPK pathway dynamics. PLoS One. 2011; 6(8): e22607. PubMed Abstract | Publisher Full Text | Free Full Text\n\nShukla S, Robey RW, Bates SE, et al.: Sunitinib (Sutent, SU11248), a small-molecule receptor tyrosine kinase inhibitor, blocks function of the ATP-binding cassette (ABC) transporters P-glycoprotein (ABCB1) and ABCG2. Drug Metab Dispos. 2009; 37(2): 359–65. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPamuk ON, Tsokos GC: Spleen tyrosine kinase inhibition in the treatment of autoimmune, allergic and autoinflammatory diseases. Arthritis Res Ther. 2010; 12(6): 222. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWhitesell L, Lindquist SL: HSP90 and the chaperoning of cancer. Nat Rev Cancer. 2005; 5(10): 761–772. PubMed Abstract | Publisher Full Text\n\nHuang M, Shen A, Ding J, et al.: Molecularly targeted cancer therapy: some lessons from the past decade. Trends Pharmacol Sci. 2014; 35(1): 41–50. PubMed Abstract | Publisher Full Text\n\nLever J, Krzywinski M, Altman N: Points of Significance: Classification evaluation. Nat Methods. 2016; 13: 603–604. Publisher Full Text\n\nHaibe-Kains B, El-Hachem N, Birkbak NJ, et al.: Inconsistency in large pharmacogenomic studies. Nature. 2013; 504(7480): 389–93. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nLi H, Leung KS, Wong MH, et al.: Improving AutoDock Vina Using Random Forest: The Growing Accuracy of Binding Affinity Prediction by the Effective Exploitation of Larger Data Sets. Mol Inform. 2015; 34(2–3): 115–126. PubMed Abstract | Publisher Full Text\n\nFelip E, Martinez P: Can sensitivity to cytotoxic chemotherapy be predicted by biomarkers? Ann Oncol. 2012; 23(Suppl 10): x189–92. PubMed Abstract | Publisher Full Text\n\nEjlertsen B, Jensen MB, Nielsen KV, et al.: HER2, TOP2A, and TIMP-1 and responsiveness to adjuvant anthracycline-containing chemotherapy in high-risk breast cancer patients. J Clin Oncol. 2010; 28(6): 984–90. PubMed Abstract | Publisher Full Text\n\nde Gramont AA, Watson S, Ellis LM, et al.: Pragmatic issues in biomarker evaluation for targeted therapies in cancer. Nat Rev Clin Oncol. 2015; 12(4): 197–212. PubMed Abstract | Publisher Full Text\n\nTran B, Dancey JE, Kamel-Reid S, et al.: Cancer genomics: technology, discovery, and translation. J Clin Oncol. 2012; 30(6): 647–60. PubMed Abstract | Publisher Full Text\n\nAhmed J, Meinel T, Dunkel M, et al.: CancerResource: a comprehensive database of cancer-relevant proteins and compound interactions supported by experimental knowledge. Nucleic Acids Res. 2011; 39(Database issue): D960–D967. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBoutros PC, Margolin AA, Stuart JM, et al.: Toward better benchmarking: challenge-based methods assessment in cancer genomics. Genome Biol. 2014; 15(9): 462. PubMed Abstract | Publisher Full Text | Free Full Text"
}
|
[
{
"id": "20033",
"date": "08 Feb 2017",
"name": "Marc Poirot",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThis is an original research article reporting a machine learning approach that exploits gene expression profiles to predict pan-cancer cell line sensitivity to drugs. The title is appropriate and the abstract represents a suitable summary of the work. I would, however, suggest that the authors define the abbreviation “GDSC”. The paper is well written, the experimental design is good and the conclusions fit the data.\n\nMinor point: page 7, figure 3 legend, RC=0.37 is not in the figure: this must be corrected",
"responses": [
{
"c_id": "2544",
"date": "14 Mar 2017",
"name": "Pedro Ballester",
"role": "Author Response",
"response": "This is an original research article reporting a machine learning approach that exploits gene expression profiles to predict pan-cancer cell line sensitivity to drugs. The title is appropriate and the abstract represents a suitable summary of the work. I would, however, suggest that the authors define the abbreviation “GDSC”. The paper is well written, the experimental design is good and the conclusions fit the data. We thank the reviewer for his positive appraisal of this article. As suggested, we have now defined the abbreviation “GDSC” in the abstract too. Minor point: page 7, figure 3 legend, RC=0.37 is not in the figure: this must be corrected. Thanks for the observation. RC=0.37 was mentioned in part B of the caption, but referred to part A of the figure. As this was confusing, the caption has been rewritten to specify that RC=0.37 is the test set recall for AZD628 (part A), compared with that of sunitinib (RC=0.75) in part B."
}
]
},
{
"id": "19182",
"date": "13 Feb 2017",
"name": "Ronnie Alves",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThe authors present an empirical modeling approach, highlighting the pros and cons of using a robust machine learning strategy for pan-cancer cell line prediction based solely on gene expression profiles. Single-gene models may not provide an efficient solution on such high-dimensional data; therefore, multi-gene models have been used as a potential alternative to handle the combinatorial space (many candidate gene alterations). Even though the authors introduce the GDSC pharmacotranscriptomic data quite well, I would suggest adding more information regarding the baseline/benchmark for this classification problem. What would be an acceptable performance prediction? The evolution from single-gene to multi-gene classification could be improved along the feature engineering adopted within these classification strategies. The authors could further motivate the choice of the Random Forest (RF) technique. MANOVA has problems handling correlations among dependent variables, and the effect size of these correlations. RF provides some improvement on MANOVA's limitations; however, it might suffer from feature subsampling selection and, consequently, can overestimate the classification. The authors could take a look at this work (Impact of subsampling and pruning on random forests) by Roxane Duroux and Erwan Scornet.1\nDecision tree models are quite tight on training data. Given that the authors used the R language, there are many possibilities for tuning parameters along RF. 
Importance plots and partial plots could help expose features that are key to understanding a multi-gene model based on RF. Even though RF has a good performance, one may observe (Figure 4A and 4B) that there are some instances where MANOVA is better. The authors could shed some light on these observations. Why are those ones hard to classify for RF? Regarding the GDSC data, it is not clear, while splitting the data, whether the data is well balanced across all drugs or not. The authors did well in keeping an independent test set, and it would be interesting to share more information regarding class (127 drugs) distribution across training and test data.\nIt would be great if the authors provided the data and model, so other researchers are able to fully reproduce this study, as well as to devise other robust ensemble learning techniques that might be as good as RF.\nThis is an original work and it may indeed be the first to propose a benchmark on the estimation of the importance of somatic mutations in drug sensitivity classification.",
"responses": [
{
"c_id": "2545",
"date": "14 Mar 2017",
"name": "Pedro Ballester",
"role": "Author Response",
"response": "The authors present an empirical modeling approach, highlighting the pros and cons of using a robust machine learning strategy for pan-cancer cell line prediction based solely on gene expression profiles. Single-gene models may not provide an efficient solution on such high-dimensional data; therefore, multi-gene models have been used as a potential alternative to handle the combinatorial space (many candidate gene alterations). Even though the authors introduce the GDSC pharmacotranscriptomic data quite well, I would suggest adding more information regarding the baseline/benchmark for this classification problem. What would be an acceptable performance prediction? A drug sensitivity model with MCC>0 on the independent test set can be regarded as acceptable because the performance of classifying cell lines at random is MCC=0. In the context of this study, we are interested in those acceptable models with an MCC value higher than that provided by the best single-gene marker of the same drug (i.e. models with positive MCC in the lower triangular part of Figure 6C). This is now stated on page 8. The evolution from single-gene to multi-gene classification could be improved along the feature engineering adopted within these classification strategies. The authors could further motivate the choice of the Random Forest (RF) technique. MANOVA has problems handling correlations among dependent variables, and the effect size of these correlations. RF provides some improvement on MANOVA's limitations; however, it might suffer from feature subsampling selection and, consequently, can overestimate the classification. The authors could take a look at this work (Impact of subsampling and pruning on random forests) by Roxane Duroux and Erwan Scornet. Decision tree models are quite tight on training data. Given that the authors used the R language, there are many possibilities for tuning parameters along RF. Thanks for the suggestion. 
On page 5, we now state that RF is also robust to overfitting, as evidenced in Figure 1. In our experience, going deeper into the tuning control parameters for RF only brings marginal improvements in performance, although it could certainly be interesting from a theoretical point of view. Importance plots and partial plots could help expose features that are key to understanding a multi-gene model based on RF. We agree with the reviewer. However, we think that properly looking at the feature selection/importance question for each of the 127 drugs would require a separate study. Even though RF has a good performance, one may observe (Figure 4A and 4B) that there are some instances where MANOVA is better. The authors could shed some light on these observations. Why are those ones hard to classify for RF? This is certainly an interesting question. For example, Figure 6C shows that 33.9% of the drugs are harder to classify by a multi-variate RF model in the sense that a univariate model performs better. On page 3, we explained that the high dimensionality of the training data sets poses a challenge to classifiers and that these difficulties are drug-dependent. This is due to a number of interrelated factors. First of all, while both models look at the same data for each drug, each model employs a different set of features (genomic vs transcriptomic). Therefore, a single gene mutation might be more predictive of drug sensitivity than a model based on gene expression values in some cases. Second, only a very small subset of features might be predictive of cell line sensitivity to a given drug, which could be challenging for an RF using all the transcriptomic features. Third, the size of training and test sets varies because each drug was tested with a different number of cell lines. Consequently, class imbalances in the training and test sets also differ depending on the drug. We are now stating these factors on page 3. 
Regarding the GDSC data, it is not clear, while splitting the data, whether the data are well balanced across all drugs or not. The authors did well in keeping independent test data, and it would be interesting to share more information regarding class (127 drugs) distribution across training and test data. We completely agree with the reviewer in that it is essential to keep an independent test set to avoid overestimating performance, which happens when standard k-fold cross-validation is used for both model selection and performance assessment. Full information about the proportion of sensitive and resistant cell lines for each drug can be found in the data sets output by the released software (see below). One can see that training and/or test sets are not well balanced for some drugs and therefore more predictive RF models are likely to be obtained by using strategies to correct for class imbalances. However, the composition of training and test sets should not be altered, as these arise from a time-stamped partition and thus permit a realistic assessment of the performance that can be expected on future data sets (perhaps class-imbalanced). It would be great if the authors could provide the data and model, so other researchers are able to fully reproduce this study, as well as devise other robust ensemble learning techniques that might be as good as RF. We have now released the requested R script that was used to facilitate the construction of alternative machine learning models and their validation on the presented benchmark. This is available at http://ballester.marseille.inserm.fr/gdsc.transcriptomicDatav2.tar.gz. We hope that this release will facilitate further improvements on this class of problems. This is an original work and may indeed be the first to propose a benchmark for estimating the importance of somatic mutations in drug sensitivity classification. We thank the reviewer for his positive assessment of this study. 
The released software implements this benchmark comprising 127 binary classification problems, one per drug. As drug response data is continuous, it is also possible to use the software to benchmark regression models. Furthermore, the software outputs the results of our study and hence these can be employed as a performance baseline for comparison to the results obtained by the benchmarked models."
}
]
}
] | 1
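The MCC=0 random-classification baseline invoked in the response above can be sketched briefly (Python is used purely for illustration; the confusion-matrix counts below are hypothetical and not drawn from the GDSC data):

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient from confusion-matrix counts."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom

# A random-like classifier mislabels both classes in proportion to chance,
# so its MCC is ~0 -- the baseline against which "acceptable" models are judged.
print(mcc(tp=20, tn=20, fp=20, fn=20))            # 0.0

# An informative model on an imbalanced test set scores above that baseline.
print(round(mcc(tp=30, tn=50, fp=10, fn=10), 3))  # 0.583
```

Unlike plain accuracy, MCC stays near zero for uninformative predictions even under class imbalance, which is why it is a natural baseline for these 127 imbalanced drug-response problems.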
|
https://f1000research.com/articles/5-2927
|
https://f1000research.com/articles/6-266/v1
|
14 Mar 17
|
{
"type": "Research Note",
"title": "Achieving good adherence to inhaled corticosteroids after weighing canisters of asthmatic children",
"authors": [
"Wantida Chuenjit",
"Vorapan Engchuan",
"Araya Yuenyongviwat",
"Pasuree Sangsupawanich",
"Wantida Chuenjit",
"Vorapan Engchuan",
"Araya Yuenyongviwat"
],
"abstract": "Background: The metered-dose inhalers (MDIs) currently available for inhaled corticosteroid delivery do not offer an integrated dose counter; therefore, it is difficult to evaluate adherence of patients. The present authors developed a linear regression equation using canister weight to calculate the number of doses actuated from the MDIs. This study aimed to assess medical adherence after the integration of regular weighing of the canisters into the routine service. Methods: A cohort study was carried out between May 2013 and April 2014. Children aged less than 8 years with a diagnosis of asthma were recruited. The duration of adherence assessment was 24 weeks. Participants had a regular schedule every 8 weeks to obtain a new FLIXOTIDE® 125 inhaler. Parents were asked to collect the discarded MDI canisters, which were then weighed by a laboratory scale. The weight of each canister was replaced in the regression equation to calculate the number of doses actuated from the MDIs. Results: A total of 52 asthmatic children participated in the study. The median age was 52.7 months. At the end of 24 weeks, 44, 33, and 23 discarded MDI canisters were collected from visits 1, 2, and 3, respectively. The median percentages of adherence were 96.8%, 96.3%, and 96.3%, respectively. In 11 discarded canisters (11%), the remaining medication was more than 30% of the labeled doses. Approximately 90% of the participants had no asthma exacerbation during 24-week study period. Conclusion: High adherence rates were achieved after integration of canister weighing into the asthma care service.",
"keywords": [
"canister weight",
"adherence",
"compliance",
"inhaled corticosteroids",
"asthma",
"wheezing",
"children",
"cohort study"
],
"content": "Introduction\n\nInhaled corticosteroids (ICSs) are the standard treatment for asthmatic children. Non-adherence with prescribed ICS treatment clearly causes uncontrolled asthma1. Feedback from parents is the traditional approach to assess the adherence to the treatment regimen; however, there is an overestimation by the patients of the remaining amount of medication2. Even though integration of a dose counter into the inhaler device improves the tracking adherence to prescribed medication3, the metered-dose inhalers (MDIs) currently available for ICS delivery do not offer integrated dose counters.\n\nWeighing of the MDI canisters may be an alternative method to assess a patient's medication adherence. The present authors previously developed a linear regression equation using canister weight to calculate the number of doses actuated from the MDIs4. Weighing of the canisters was implemented into our asthma care system in March 2013. This study was designed to assess a patient's medication adherence after the integration of regular weighing of the canisters into the routine service.\n\n\nMethods\n\nA cohort study was carried out between May 2013 and April 2014. The inclusion criteria were children aged less than 8 years with a diagnosis of asthma who attended the Pediatric Allergy Clinic at Songklanagarind Hospital (Hat Yai, Songkhla, Thailand) and had exacerbation of asthma requiring hospitalization or an emergency department visit within the previous year. Patients who had a previous history of intubation or other chronic conditions were excluded.\n\nThe research protocol (REC 55-021-01-1-2) was approved by the Human Research Ethics Committee, Faculty of Medicine, Prince of Songkla University. Informed consent was obtained from the parents/guardians.\n\nFluticasone propionate was selected as the ICS therapy for the study. 
A FLIXOTIDE® 125 Inhaler (GlaxoSmithKline) is a pressurized metered-dose inhaler, which delivers 125 micrograms of fluticasone propionate per actuation. Each canister supplies 120 actuations. FLIXOTIDE® 125 Inhaler and BabyHALER® (GlaxoSmithKline), a device to help patients take inhaled medicine, were prescribed to all participants. The dosage of fluticasone propionate was one actuation twice a day. The add-on asthma therapies were provided according to the GINA guideline5. Participants needed to participate in a regular schedule every 8 weeks to obtain a new FLIXOTIDE® 125 inhaler. For patients who did not achieve adequate control or maintain the adherence rate, the inhalation technique and medication doses were revised at the time they visited the clinic. Exacerbation was defined as asthma deterioration that required treatment with systemic corticosteroids or emergency department utilization or hospitalization.\n\nThe duration of adherence assessment was 24 weeks. Each participant received three canisters of FLIXOTIDE® 125 Inhaler. Parents were asked to collect the discarded inhalers at the 8-week (visit 1), 16-week (visit 2), and 24-week (visit 3) after recruitment. The discarded MDI canisters were weighed by a laboratory scale (Sartorius Basic®). The weight of each canister was replaced in the regression equation to calculate the number of doses actuated from the MDIs. A regression equation for a fluticasone propionate MDI canister gives the number (n) of doses actuated from the MDIs:\n\nn = 276.16 – (14.62 × canister weight).4\n\nAll of the statistical analyses were conducted with R software (version 3.3.2) by the R Foundation for Statistical Computing. Adherence in each 8-week interval was calculated as the amount of medication actuated divided by the amount prescribed. Percentage of adherence was reported as median and range.\n\n\nResults\n\nA total of 52 asthmatic children participated in the study. 
The characteristics of the participants are shown in Table 1. Half of the participants were male. The median age was 52.7 months (range, 18.3–91.7) and the age at the onset of asthma was 12 months (range, 1.0–48.0). Parents were the major caregivers. In total, 32% of the participants had other allergic co-morbidities. Most of the participants had received ICS therapy for longer than 3 months. At the end of 24 weeks, 44, 33, and 23 (total 100) discarded MDI canisters were collected from visits 1, 2, and 3, respectively. The remaining median weights of the discarded canisters from visits 1, 2, and 3 were 11.172 g, 11.229 g, and 11.113 g, respectively, and the median percentages of adherence were 96.8%, 96.3%, and 96.3%, respectively. In 11 discarded canisters (11%), the remaining medication was more than 30% of the labeled doses. Approximately 90% of the participants had no asthma exacerbation during the 24-week study period (Table 2).\n\n\nDiscussion\n\nThe present study demonstrated high adherence rates with low variations between the three visits. The percentage of discarded canisters, which had more than 30% of the labeled dosage of medication remaining, reduced from 22% in our previous cross-sectional study6 to 11% in this study. Achieving good adherence in this cohort could be explained by the Hawthorne effect: the parents and participants knew that their adherence would be measured by weighing the canisters; therefore, the individuals modified or improved their adherence in response to the awareness of being observed.\n\nApproximately 90% of the participants had no asthma exacerbation throughout the study period. In our previous study, only 59% of the patients had adequate control6. Patients had access to the same educational and medication intervention, but the adherence rates were significantly different. 
Previous studies verified an association between lower adherence rates and poor asthma control7,8.\n\nAlthough weighing of canisters was less accurate than a dose counter for measuring adherence9, the present study demonstrated that a weight-remaining dose correlation could be used to determine the inhaler medication adherence in real life, and intensive monitoring of adherence was successful in achieving control.\n\nIn conclusion, our results demonstrated that high adherence rates were achieved after integration of canister weighing into the asthma care service. The present study highlighted the need to incorporate a method to monitor medical adherence in clinical practice, which may contribute to adequate asthma control.\n\n\nData availability\n\nDataset 1: Canister weight and percent of adherence. Raw data of canister weight (original and discarded weight), results of the actuated dose equation and percent of adherence. doi, 10.5256/f1000research.10710.d15097010",
"appendix": "Author contributions\n\n\n\nAll authors contributed to study design, interpretation of study findings, manuscript preparation, and approved the final manuscript. WC, VE and AY contributed to data acquisition and validation. PS was responsible for project management, data analysis and funding.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe study was funded by Faculty of Medicine, Prince of Songkla University (55-021-01-1-2).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nReferences\n\nIsmaila A, Corriveau D, Vaillancourt J, et al.: Impact of adherence to treatment with fluticasone propionate/salmeterol in asthma patients. Curr Med Res Opin. 2014; 30(7): 1417–25. PubMed Abstract | Publisher Full Text\n\nHolt S, Holt A, Weatherall M, et al.: Metered dose inhalers: a need for dose counters. Respirology. 2005; 10(1): 105–6. PubMed Abstract | Publisher Full Text\n\nJentzsch NS, Camargos PA, Colosimo EA, et al.: Monitoring adherence to beclomethasone in asthmatic children and adolescents through four different methods. Allergy. 2009; 64(10): 1458–62. PubMed Abstract | Publisher Full Text\n\nSahajarupat T, Sangsupawanich P: Relationship of canister weight to the amount of medication actuated from metered-dose inhalers. J Allergy Clin Immunol. 2012; 129(2 Supplement): AB41. Publisher Full Text\n\nGlobal Initiative for Asthma: Global Strategy for Asthma Management and Prevention. Reference Source\n\nDampanrat W, Sangsupawanich P, Yuenyongviwat A: Medication Remaining In Discarded Metered Dose Inhalers Of Asthmatic Children. J Allergy Clin Immunol. 2014; 133(2 Supplement): AB179. Publisher Full Text\n\nLasmar L, Camargos P, Champs NS, et al.: Adherence rate to inhaled corticosteroids and their impact on asthma control. Allergy. 2009; 64(5): 784–9. 
PubMed Abstract | Publisher Full Text\n\nJentzsch NS, Camargos P, Sarinho ES, et al.: Adherence rate to beclomethasone dipropionate and the level of asthma control. Respir Med. 2012; 106(3): 338–43. PubMed Abstract | Publisher Full Text\n\nBender B, Wamboldt FS, O'Connor SL, et al.: Measurement of children's asthma medication adherence by self report, mother report, canister weight, and Doser CT. Ann Allergy Asthma Immunol. 2000; 85(5): 416–21. PubMed Abstract | Publisher Full Text\n\nChuenjit W, Engchuan V, Yuenyongviwat A, et al.: Dataset 1 in: Achieving good adherence to inhaled corticosteroids after weighing canisters of asthmatic children. F1000Research. 2017. Data Source"
}
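The dose and adherence arithmetic in the Methods of the paper above can be sketched as follows (a minimal illustration, not the authors' R analysis; the 112-dose denominator assumes the prescribed two actuations per day over one 8-week, 56-day interval, and the example canister weight is hypothetical):

```python
def doses_actuated(canister_weight_g):
    """Doses actuated from a fluticasone MDI, per the paper's regression:
    n = 276.16 - (14.62 * canister weight)."""
    return 276.16 - 14.62 * canister_weight_g

def percent_adherence(canister_weight_g, days=56, doses_per_day=2):
    """Adherence over one 8-week interval: doses actuated / doses prescribed."""
    prescribed = days * doses_per_day  # 112 doses per interval, as prescribed
    return 100.0 * doses_actuated(canister_weight_g) / prescribed

# A returned canister weighing 11.5 g (hypothetical value):
print(round(doses_actuated(11.5), 1))     # 108.0
print(round(percent_adherence(11.5), 1))  # 96.5
```

Lighter canisters thus map to more doses actuated, which is why a single weighing per visit suffices to estimate each interval's adherence percentage.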
|
[
{
"id": "21967",
"date": "21 Apr 2017",
"name": "Wiparat Manuyakorn",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe article is well written. However, there is some minor comment. 1. The authors mentioned that \"The metered-dose inhalers (MDIs) currently available for inhaled corticosteroid delivery do not offer an integrated dose counter\". This statement was true at the time when this study was performed. But currently, there are some MDIs that have dose counter. So this statement needs some amendment. 2. Method: the authors mentioned that \"Participants needed to participate in a regular schedule every 8 weeks to obtain a new FLIXOTIDE® 125 inhaler.\" And the duration of treatment was 24 weeks. So it means that each participant would receive 3 inhalers. A total of 52 children was participated. So I wonder why only 100 MDI canister was evaluated. It should be 52*3=156?. This may need the clarification.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate? Yes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Yes",
"responses": []
},
{
"id": "22150",
"date": "24 Apr 2017",
"name": "Bee Wah Lee",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis study evaluated adherence to ICS prophylaxis in asthmatic children by using weight of canister as a means to determine inhaler compliance, by calculating doses used by using a regression equation based on canister weight. The authors conclude that in the absence of an inbuilt dose counter, weighing the canister was a useful means of improving adherence to daily treatment with ICS.\n\nDetailed comments:\nThere were 52 children participating in the study. However, with each follow up visit (n=3)the number of canisters measure reduced progressively (44, 33, 23). Was there loss to follow up or did these patients fail to return their canisters? Could these patients be used as their comparative group in terms of asthma control over the period of follow up? For the subjects with the 11 canisters that had remaining medication, was their asthma control affected by non compliance? It may be more appropriate to use the term ‘used’ rather than ’discarded’ canisters. Under the paragraph: adherence assessment. Some editing of sentences may improve readability:\nParents were asked to ‘return’ the ‘used inhalers at the 8 week…..\n‘The weight of the canister was replaced in the regression equation’ to “The number of remaining doses\n\nof remaining doses was extrapolated by using the weight of the used MDI canisters into the regression equation as shown:……..\n\nIs the work clearly and accurately presented and does it cite the current literature? 
Yes\n\nIs the study design appropriate and is the work technically sound? Partly\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate? Yes\n\nAre all the source data underlying the results available to ensure full reproducibility? Yes\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": []
},
{
"id": "24513",
"date": "09 Aug 2017",
"name": "Orathai Piboonpocanun",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe authors tried to use canister weight to calculate the number of doses actuated from the MDIs and to assess medical adherence to MDIs. There are some issues which need clarification\nPlease clarify why the numbers of collected MDI canisters were not the same in each visit. Please mentioned about patients who had exacerbation. Were they the same patients? Were these patients nonadherence to medication or did they have any specific problem?\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Yes\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate? Yes\n\nAre all the source data underlying the results available to ensure full reproducibility? Partly\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-266
|
https://f1000research.com/articles/6-265/v1
|
14 Mar 17
|
{
"type": "Case Report",
"title": "Case Report: Intramammary lymph node metastasis of an unknown primary, probably occult breast, undifferentiated carcinoma",
"authors": [
"Zacharoula Sidiropoulou",
"Félix Adélia",
"Isabela Gil",
"Tobias Teles",
"Claudia Santos",
"Lucilia Monteiro",
"Félix Adélia",
"Isabela Gil",
"Tobias Teles",
"Claudia Santos",
"Lucilia Monteiro"
],
"abstract": "Little is known about the clinical importance of intramammary lymph node metastasis of breast cancer, even though it is not rare. In the present paper, the authors present an unusual, rare case of an intramammary lymph node metastasis of an unknown primary, probably occult breast cancer, and its management. The patient was submitted to various staging exams and surgical procedures and a definitive diagnosis was not established. From a multidisciplinary context, it was assumed that the patient had a breast triple negative primary with axillary involvement. This decision lead to adjuvant chemo and radiotherapy. Challenging cases like the one described here, should always be managed within the multidisciplinary team context and recorded in the institution’s database.",
"keywords": [
"Breast cancer",
"occult breast cancer",
"intramammary lymph node metastasis",
"multidisciplinary approach"
],
"content": "Background\n\nIntramammary lymph node metastasis is an unknown in everyday clinical practice and very little is known about its importance.\n\n\nCase presentation\n\nA woman, 33 years old, from Goa (India) presented to our consultation for a palpable mass on the upper external quadrant of the right breast. The patient had no personal relevant history. Menarche had occurred at 15 years with regular menses of 4/26 days, G0P0, without anticonceptional pills use, and no drug or alcohol abuse. The patient’s family history showed that the mother passed away at 40 years old with metastatic (brain) breast cancer and her maternal uncle was deceased at the age of 45 from esophageal cancer.\n\n\nInvestigations\n\nThe patient had already undergone ultrasound and bilateral breast mammography that reported the ‘presence of nodular multiloculated formation at the upper external quadrant of the right breast with 3 cm of diameter, probably corresponding to inflammatory/infectious lymph node’ (Figure 1).\n\nOn clinical observation, voluminous breast with grade III ptosis and a palpable solid mass was observed. It was an irregular mass of approximately 4 cm on the upper external right breast quadrant, not adherent to the skin or to the pectoralis muscle. The patient was submitted to an ultrasound guided fine-needle aspiration biopsy (FNAB) that reported ‘fragments of lymph node with poorly differentiated neoplastic infiltration. Presence of epithelioid neoplastic cells positive for AE1/E3 and negative for CK20, CEA, vimentin, protein S100, P63, CD56, TTF-1, GCDFP-15, estrogen receptors. Conclusion: lymph node metastasis of poorly differentiated carcinoma of unknown primary origin’.\n\nThe patient underwent a magnetic resonance imaging scan in which there was detected an additional 17mm lesion (BI-RADS-5) adjacent posterior to the lymph nodal mass previously detected, which was submitted to an ultrasound second look and FNAB (Figure 2). 
In this biopsy, no neoplastic tissue was identified, and the results reported ‘mammary gland fragments with inflammatory process, no isolated epithelial cells identified after IHC with CK8/18’.\n\nConsequently, the decision of the multidisciplinary team was to perform complementary studies (upper gastroscopy, otorhinolaryngological consultation, dermatology consultation, thoracic-abdominal-pelvic tomography, and full analytics with tumor markers). All the complementary studies were negative. Therefore, the multidisciplinary team decided that the patient be proposed for lumpectomy with axillary lymphadenectomy, with a PET-CT scan positive only for the mass in the upper external quadrant of the right breast. The patient was submitted to lumpectomy on an oncoplastic pattern, followed by level II axillary dissection, and was discharged without any complication on the third post-operative day.\n\nThe anatomopathology report of the surgical specimen stated that the ‘lumpectomy specimen constituted of skin, adipose tissue and mammary tissue where there exists a nodule, well delimited, white, with posterior margin of 1mm, consisting of a lymph node agglomerate with poorly differentiated metastasis with CK7 positive and rare CD56 positive cells, focally positive for EMA’ (Figure 3). In addition, the ‘lymphadenectomy specimen [had] 15 reactive, free of metastasis, lymph nodes’.\n\nA second pathology review of the lumpectomy specimen (external to our institution) indicated that the excised nodule consisted of five lymph nodes in an agglomerate, with histology of an undifferentiated metastasis of a probable triple-negative primary tumor of mammary origin. 
Therefore, the multidisciplinary team decided to propose the patient for total mastectomy, which was performed; the anatomopathological report showed neither abnormalities nor the presence of neoplastic tissue in the remaining breast.\n\n\nTreatment\n\nIn an adjuvant setting, the patient received the TAC chemotherapy protocol (docetaxel 75 mg/m2, doxorubicin 50 mg/m2 and cyclophosphamide 500 mg/m2, every 21 days, accompanied with pegfilgrastim) and successfully completed 6 cycles. The patient later received standard thoracic and lymphatic chain radiotherapy (50 Gy in 25 fractions over 5 weeks and a boost to the tumor bed).\n\nBRCA1 and BRCA2 genetic testing was negative.\n\n\nOutcome and follow-up\n\nThe patient is currently in remission and has had an uneventful follow-up at the Medical Oncology and Senology Department at our institution. According to our protocol, the patient undergoes clinical observation every three months accompanied by a full set of laboratory analyses (tumour markers included) and annual breast imaging.\n\n\nDiscussion\n\nVery little is known about the clinical importance of intramammary lymph node metastasis of breast cancer, even though intramammary lymph nodes are not a rare site for metastasis. However, it is believed that metastasis to intramammary lymph nodes is an independent factor of poor prognosis for breast cancer patients1,2.\n\nA PubMed search covering 1900 to 2016 identified only one paper concerning metastatic intramammary lymph nodes as the primary presenting sign of occult breast cancer, which describes two cases3. The cases presented by Kouskos et al3 show some histological differences from the present one (for example, estrogen receptor positivity, axillary lymph node involvement, and the late appearance of the primary breast tumour). In our case, and up until now, we have never detected a primary breast tumor. 
As in our case, the other cases required an extensive complementary study of the patient.\n\nOur decision to treat the patient as a triple-negative breast cancer patient with axillary metastatic involvement was based on the histopathological suspicion of a breast-like primary site and the patient’s strong family history (1st-degree relative with breast cancer at <40 years of age).\n\nIn conclusion, intramammary lymph node metastasis requires a challenging workup and there is an urgent need to clarify its importance. Breast cancer patients should always undergo treatment in a multidisciplinary context. For an extremely rare event such as the one described here, good medical practice requires a broad discussion among the various specialities, which can only be achieved in the multidisciplinary setting. Decisions about treatment strategies to be offered are vast and should be patient centred.\n\n\nTake home messages\n\nIntramammary lymph node metastasis requires a challenging workup\n\nThere is an urgent need to clarify its importance\n\nBreast cancer patients should always undergo treatment in a multidisciplinary context\n\n\nConsent\n\nWritten informed consent from the patient has been obtained for the publication of this manuscript.",
"appendix": "Author contributions\n\n\n\nZS is the attending surgeon; CS, IG and TT performed the case and literature review; FA is the attending medical oncologist; LM is the responsible pathologist.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nShen J, Hunt KK, Mirza NQ, et al.: Intramammary lymph node metastases are an independent predictor of poor outcome in patients with breast carcinoma. Cancer. 2004; 101(6): 1330–7. PubMed Abstract | Publisher Full Text\n\nHogan BV, Peter MB, Shenoy H, et al.: Intramammary lymph node metastasis predicts poorer survival in breast cancer patients. Surg Oncol. 2010; 19(1): 11–16. PubMed Abstract | Publisher Full Text\n\nKouskos E, Rovere GQ, Ball S, et al.: Metastatic intramammary lymph nodes as the primary presenting sign of breast cancer. Breast. 2004; 13(5): 416–20. PubMed Abstract | Publisher Full Text"
}
|
[
{
"id": "21961",
"date": "24 Apr 2017",
"name": "Sergi Vidal-Sicart",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nIt is a well-written paper concerning an infrequent case of intramammary metastases without a known primary tumor.\nThe case description is adequate and nicely presented. We only have to add some minor requirements to the authors.\nIt could be adequate to add the MR and PET images demonstrating their findings.\n\nDid the authors consider that carboplatin could be added to the treatment. It seems that this agent offers good results in TN breast cancer.\n\nFinally, do you consider to expand the genetic study, even with a negative BCRA, due to a possibility to express other gens like PALB2?\n\nIs the background of the case’s history and progression described in sufficient detail? Yes\n\nAre enough details provided of any physical examination and diagnostic tests, treatment given and outcomes? Yes\n\nIs sufficient discussion included of the importance of the findings and their relevance to future understanding of disease processes, diagnosis or treatment? Yes\n\nIs the case presented with sufficient detail to be useful for other practitioners? Partly",
"responses": []
},
{
"id": "21962",
"date": "30 May 2017",
"name": "Ramesh Omranipour",
"expertise": [
"Reviewer Expertise Surgical oncologist"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nIt is better to added the images of MRI and PET scan of the patient.\nThe discussion is too brief, although this presentation for intramammary lymph node is rare but there are many reports in the literature about the clinical importance and prognostic value of involved intramammary lymph node.\n\nIs the background of the case’s history and progression described in sufficient detail? Yes\n\nAre enough details provided of any physical examination and diagnostic tests, treatment given and outcomes? Partly\n\nIs sufficient discussion included of the importance of the findings and their relevance to future understanding of disease processes, diagnosis or treatment? Partly\n\nIs the case presented with sufficient detail to be useful for other practitioners? Yes",
"responses": [
{
"c_id": "2762",
"date": "07 Jun 2017",
"name": "Zacharoula Sidiropoulou",
"role": "Author Response",
"response": "Dear Colleague, first of all thank you for your comments. As a reply: \"Better to add the images of MRI and PET scan of the patient\": unfortunately we are not in possession of this imaging. \"Discussion is too brief, although this presentation for intramammary lymph node is rare but there are many reports in the literature about the clinical importance and prognostic value of involved intramammary lymph node.\": hereby our intention was just to report this specific unusual case; our first submission was more extended, but afterwards we decided to limit it to the presentation and not to proceed to a literature review of intramammary lymph node involvement in known, diagnosed breast cancer. We hope our answer meets your kind and helpful comments. Once more, thank you for the review and we await your feedback."
}
]
},
{
"id": "23354",
"date": "30 Jun 2017",
"name": "Kenji Gonda",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThe authors described intramammary lymph node metastasis of an unknown primary, probably an occult breast undifferentiated carcinoma. The findings in this manuscript are interesting, and the manuscript is worthy of indexing. There are some problems that should be resolved before publishing.\n\nWhat is the TNM classification for staging of this breast cancer patient?\n\nDo you think that this ectopic breast tissue may be accessory breast cancer?\n\nYou should reveal the breast pathological diagnosis of the lumpectomy specimen, for example, papillo-tubular carcinoma or scirrhous carcinoma with mammary gland.\n\nWhat is the result of the human epidermal growth factor receptor 2 (HER-2/neu) and Ki-67 marker?\n\nYou should refer to an article by Egan, because intramammary lymph node metastases in the breast were first reported by Egan and McSweeney in 19831.\n\nAre additional ancillary studies, including immunostainings, beneficial to evaluate the site of origin (given that this tumor is CD10 focally positive)?\n\nWhat are the second line therapy options for this rare pathology, should this patient have a recurrence of breast cancer?\n\nWhat is the role of family history and its impact on therapy?\n\nIs the background of the case’s history and progression described in sufficient detail? Yes\n\nAre enough details provided of any physical examination and diagnostic tests, treatment given and outcomes? Yes\n\nIs sufficient discussion included of the importance of the findings and their relevance to future understanding of disease processes, diagnosis or treatment? No\n\nIs the case presented with sufficient detail to be useful for other practitioners? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-265
|
https://f1000research.com/articles/6-262/v1
|
14 Mar 17
|
{
"type": "Research Article",
"title": "Questions on unusual Mimivirus-like structures observed in human cells",
"authors": [
"Elena Angela Lusi",
"Dan Maloney",
"Federico Caicci",
"Paolo Guarascio"
],
"abstract": "Background: Mimiviruses or giant viruses that infect amoebas have the ability to retain the Gram stain, which is usually used to colour bacteria. There is some evidence suggesting that Mimiviruses can also infect human cells. Guided by these premises, we performed a routine Gram stain on a variety of human specimens to see if we could detect the same Gram positive blue granules that identify Mimiviruses in the amoebas. Methods: We analysed 24 different human specimens (liver, brain, kidney, lymph node and ovary) using Gram stain histochemistry, electron microscopy immunogold, high resolution mass spectrometry and protein identification. Results: We detected in the human cells Gram positive granules that were distinct from bacteria. The fine blue granules displayed the same pattern of the Gram positive granules that diagnose Mimiviruses in the cytoplasm of the amoebas. Electron microscopy confirmed the presence of human Mimiviruses-like structures and mass spectrometry identified histone H4 peptides, which had the same footprints as giant viruses. However, some differences were noted: the Mimivirus-like structures identified in the human cells were ubiquitous and manifested a distinct mammalian retroviral antigenicity. Conclusions: Our main hypotheses are that the structures could be either giant viruses having a retroviral antigenicity or ancestral cellular components having a viral origin. However, other possible alternatives have been proposed to explain the nature and function of the newly identified structures.",
"keywords": [
"Mimiviruses",
"human cell structure",
"histone H4",
"retroviral antigen",
"polydnaviruses"
],
"content": "\n\nIn this study, we describe the presence of unusual Mimiviruses-like structures in human tissues. Like Mimiviruses (~450 nm giant viruses found in the amoebas), these human structures had the ability to retain the Gram stain and mass spectrometry revealed the presence of histone peptides having the same footprints as giant viruses. However, the human giant viruses-like structures displayed a distinct and unique mammalian retroviral antigenicity.\n\nThis initial discovery in human tissues presented the conundrum of whether the structures were giant viruses with a retroviral nature or cellular components having a viral footprint. The distinction between the virus and the cells was blurred. The most difficult part to explain arose from the unique mammalian retroviral antigenicity associated to the human mimivirus-like structures.\n\nThere was only one possibility to solve the dilemma: isolate the viruses (if really present) and verify if they contained genetic material. In a subsequent paper https://f1000research.com/articles/7-1005/v1 we describe:\n\n1. Isolation of human giant viruses from human T-cell leukemia on 25% sucrose gradient.\n\n2. Electron microscopy of the purified viral pellet confirming the presence of ~400 nm giant viruses with retroviral antigens.\n\n3. Purified giant viruses retaining the Gram stain.\n\n4. Human giant viruses had reverse transcriptase activity.\n\n5. RNA extracted from the giant viral particles contained retroviral genes (VLPQG-YMDD region of reverse transcriptase gene).\n\n6. 
Molecular phylogenetic analyses showing that human Retro-Giant viruses belong to a distinct branch, missing from the current classification of retroviruses.\n\nTherefore, the Mimivirus-like structures are human giant virus with a retroviral core and oncogenic potential.\n\nWe are facing not an archetypal retrovirus, nor even an amoebas DNA mimivirus, but a human, Gram-positive, giant virus (mimivirus-like) with a viral factory and a retroviral core. A preliminary phylogenetic analysis suggests that the Retro-Giant viruses are ancestral and evolved earlier than archetypal retroviruses.\n\n\nIntroduction\n\nThere is evidence that terrestrial giant viruses can also infect mammals and a recent article published on Lancet Infectious Diseases describes the presence of giant viruses in human lymph nodes1–3. One of the chemical characteristics of giant viruses is their property to retain the Gram stain, which is usually used to colour bacteria4,5.\n\nIn fact, Mimiviruses (giant viruses) were initially mistaken for gram-positive bacteria infecting the cytoplasm of an amoeba, which was stuffed with blue Gram positive granules under an optical microscope. Only in 2003 did electron microscopy clarify that the fine blue granules present in the cytoplasm of the amoebas were actually giant viruses6.\n\nGuided by the premise that giant viruses can also infect humans, we decided to perform a routine Gram stain on different human specimens to see if we could detect the same blue granules that were detected in the amoeba when Mimiviruses were first identified.\n\nHere we demonstrate, with the use of electron microscopy, mass spectrometry and histochemistry, that human cells have anatomical areas that manifest some of the biochemical and morphological properties also found in giant viruses. These structures are ubiquitously present in a variety of human tissues, including non-pathological tissues. 
Possible alternative explanations of the findings are discussed.\n\n\nMethods\n\n3 liver specimens with haemochromatosis and non-alcoholic steatohepatitis\n\n1 liver specimen with cryptogenic cirrhosis (unexplained)\n\n7 liver specimens with chronic hepatitis B\n\n2 liver specimens with chronic hepatitis C\n\n3 liver specimens with non-specific minimal histological lesions\n\n2 liver specimens with no lesions\n\n2 liver specimens with primary biliary cirrhosis\n\n1 liver specimen from a patient with Crohn’s disease\n\n1 kidney specimen\n\n1 brain specimen\n\n1 ovary specimen\n\nThe Institutional Review Board of St Camillo Hospital of Rome approved the use of stored tissues for electron microscopy and proteomics investigations in accordance with the Helsinki Declaration of 2002 (approval number, 56/2015). Informed consent was obtained in writing from all patients prior to the tissue biopsy procedure, which encompassed processing of the clinical data and the use of tissues for investigation and research. The present study fits within the terms of the obtained consent.\n\nGram staining of human specimens was performed using a Gram Yellow Stain Kit (Artisan from Dako), following the standard protocol for paraffin specimens, according to the manufacturer’s instructions. Positive controls were formalin-fixed human tissues with bacteria. Before staining, slides were heated at 80°C for 45 minutes.\n\nElectron microscopy analysis of the human biopsies was conducted at the University of Naples Federico II, CISME Division and the University of Padua, Department of Biology. The two operators were blinded to each other's results. The samples were fixed with fixative (4% paraformaldehyde in PBS buffer solution), dehydrated and embedded in LR White Resin followed by polymerization at 58°C. Ultrathin sections (100 nm) were placed on Formvar-coated nickel grids (Maxtaform Grids; M200-Ni) and used the next day for immunogold labelling. 
For immunostaining reaction, the post-embedding immunogold method was applied. Nickel grids were immersed in 1% citraconic anhydride solution (Sigma) at 90°C for 30’. Subsequently, the sections were first treated with blocking solution (1% BSA, 0.1% Tween 20, PBS 1x), then incubated with primary mouse monoclonal antibody identifying common retroviral antigen among mammalian retrovirus (sc-65623; Santa Cruz Biotecnology; IgG1 provided at 100 µg/ml) diluted 1:50 for 1 hour at 37°C. Antibody binding was detected using a secondary goat anti-mouse IgG antibody at room temperature for 1 hour (British BioCell International; EM.GAM15EM), diluted to 1:100 and coupled to gold particles (15nm; British BioCell International). Sections were analyzed using an FEI Tecnai G2 transmission electron microscope operating at 100 kV. The images were acquired with TIA Fei software Cam 4.7SP3 (https://www.fei.com/service-support/) and collected and typeset in Corel Draw X3 (http://www.coreldraw.com/en/pages/coreldraw-x3/). Controls were performed by omitting the primary antibody, which resulted in absence of cross-reactivity.\n\nHuman samples were ground with liquid nitrogen. Six volume sample preparation buffer (9M urea, 2% ampholytes and 70 mM DTT) were added to the frozen powder, followed by three frozen/thaw cycles (liquid nitrogen −196°C/30°C). After incubation for 30 min at room temperature and centrifugation for 45 min at 15000xg the supernatant was removed and frozen in new tubes at -80°C. FFPE slices were treated with 0.5 ml Heptan for 1h at room temperature. Subsequently, 25µl methanol were added and mixed for 25min. After centrifugation (5min, 13200xg) the pellet was air dried and 100µl lysis-buffer (250 mM Tris pH 9.5; 2% SDS) were added. The sample was boiled for 2h, centrifuged (30min, 13200xg, room temperature) and the supernatant was used for SDS-PAGE.\n\nTwo dimensional gel electrophoresis (2DE) was performed according to standard 2DE techniques. 
Briefly, 50 µg of protein was applied to vertical rod gels (9M urea, 4 % acrylamide, 0.3 % PDA, 5 % glycerol, 0.06% TEMED and 2 % carrier ampholytes [pH 2–11], 0.02% APS) for isoelectric focusing at 1820 Vh in the first dimension. After focusing, the IEF gels were incubated in equilibration buffer, containing 125 mM trisphosphate (pH 6.8), 40% glycerol, 65 mM DTT, and 3% SDS for 10 minutes and subsequently frozen at -80°C. The second dimension SDS-PAGE gels (7x8x0.1cm) were prepared, containing 375 mM Tris-HCl buffer (pH 8.8), 12% acrylamide, 0.2% bisacrylamide, 0.1% SDS and 0.03% TEMED. After thawing, the equilibrated IEF gels were immediately applied to SDS-PAGE gels. Electrophoresis was performed using 150 V for 75 min until the front reached the end of the gel. After 2DE separation, the gels were stained with FireSilver (Proteome Factory; PS2001).\n\nThe 2DE gels used for comparison analysis were digitized at a resolution of 150 dpi using a PowerLook 2100XL scanner with transparency adapter.\n\nFor western blot applications, two identical gels were run. One 2DE gels was stained with FireSilver or Coomassie for preparative applications and the other gel was used for western blotting to detect the proteins by immunostaining. Blotting of 2DE gels was performed using an Immobilon-P membrane (PVDF; pore size 0.45 mm; Millipore) and a Trans-Blot SD Semi-Dry Transfer Cell (BioRad) at a constant current 5 V overnight at 4°C using a blotting buffer consisting of 25 mM Tris–HCl, 192 mM glycine, 0.1% SDS (pH 8.3) and 20% methanol. For immunodetection of proteins, membranes were washed in TBST (20 mM Tris–HCl [pH 7.5]; 154 mM NaCl, 0.1% Tween-20) and blocked in TBST containing 2% (w/v) BSA for 2 h. 
Membranes were incubated with the primary antibody (sc-65623; Santa Cruz Biotecnology; IgG1) diluted 1:1000 for 2DE blot and 1:50 for 1D-blots in TBST containing 1% (w/v) BSA overnight and then incubated with anti-mouse IgG (Fc specific–peroxidase antibody produced in goat; A0168; Sigma; diluted to 1:2000 in TBST containing 1% (w/v) BSA) for 1 h at room temperature. Finally, the bound antibody was detected by incubating with Luminol for 1s-20min (Roth). The membrane was washed in TBST (5 times for 10 min) between all incubation steps.\n\nProtein identification was performed using nano LC-ESI-MS/MS. The MS system consisted of an Agilent 1100 nanoLC system (Agilent), PicoTip electrospray emitter (New Objective) and an Orbitrap XL mass spectrometer (Thermo-Fisher). Protein spots from the membranes were in-gel digested by trypsin (Promega) (with and without citraconic anhydride treatment) and applied to nanoLC-ESI-MS/MS. Peptides were trapped and desalted on the enrichment column (Zorbax SB C18; 0.3x5 mm; Agilent) for five minutes using 2.5% acetonitrile/0.5% formic acid as eluent, then peptides were separated on a Zorbax 300 SB C18 column (75µmx150mm; Agilent) using an acetonitrile/0.1% formic acid gradient from 5 to 35% acetonitril within 40 minutes. MS/MS spectra were recorded data-dependently by the mass spectrometer, according to manufacturer's recommendations.\n\nSynthetic peptide KTVTSMDIVYALK was synthesized by solid-phase technique using a multiple peptides synthesizer (SyroII; MultiSynTech GmbH) on a pre-loaded Wang resin (Novabiochem) (100–200 mesh) with Fmoc-Nε-tert-butyloxycarbonyl-l-lysine (Novabiochem). The fluoren-9-ylmethoxycarbonyl strategy was used throughout the peptide chain assembly, utilizing O-(7-azabenzotriazol-1-yl)-N,N,N′,N′-tetramethyluronium hexafluorophosphate (HATU) as a coupling reagent. 
Cleavage of the peptides was performed by incubating the peptidyl resins with trifluoroacetic acid/H2O/triisopropylsilane (95/2.5/2.5%) for 2.5 h at 0°C. Crude peptide was purified by reverse phase HPLC on a preparative column (Prep Nova-Pak; HR C18). Molecular masses of the peptide were confirmed by mass spectroscopy on a MALDI TOF-TOF using an Applied Biosystems 4800 mass spectrometer.\n\nImmunoblot positive bands from frozen and FFPE tissues, were analyzed by mass spectrometry (nano LC-ESI-MS/MS), using a Thermo Orbitrap XL with CID fragmentation.\n\nA database search was performed first against human proteins contained in UniProtKB/TrEMBL (http://www.ebi.ac.uk/uniprot) and virus proteins contained in UniProtKB/TrEMBL separately. After that, to reduce the risk of false positive results, the search was made against a combined human and viral database within a 1% false discovery rate. The search parameters were: 20 ppm precursor error tolerance, 0.6 Da fragment error tolerance, trypsin allowing non-specific cleavage at 1 end and a maximum of 3 missed cleaves, carbamidomethylation set as a fixed ptm, acetylation(k), oxidation (M), deamidation(NQ), formylation (K, Nterm), phosphorylation (STY) set as variable modifications.\n\nThe raw files were also processed through PEAKS Studio 8.0 (Bioinformatics Solutions Inc.) de novo and PEAKS DB modules. The parent mass error tolerance was set to 3 ppm, the fragment mass error tolerance was 0.6 Da. Carbamidomethylation of cysteine was set as a fixed modification and oxidation was set as a variable modification. The enzyme rules specified were trypsin, allowing non-specific cleavage at one end maximum and a maximum of three missed cleavages per peptide. The database searched was trEMBL (version is 2016_09). 
Only human and polydnaviridae proteins were searched, 1109386 protein sequences were searched along with a decoy database containing an equal number of proteins.\n\n\nResults\n\nWe Gram stained 21 different types of human liver specimens. We initially chose the liver, since this organ is the bio-chemical processing centre and the cross road of microbial invasions of the human body.\n\nGram positive blue granules were diffusely and ubiquitously expressed in all tested human liver samples, including unaffected liver samples with no histological lesions. These blue granules were absolutely distinct from common pigments, such as lipofuscin, and different from gram positive bacilli that were used as controls. The granules had a typically fine granular aspect, similarly to the one present in the amoebas infected by Mimiviruses, as reported by the French authors6. Figure 1 (premise picture) illustrates Gram positive granules that are Mimiviruses infecting amoebas. The permission to use this image was kindly provided by Prof Bernard La Scola.\n\nThis picture illustrates Mimiviruses in the amoebas when first detected by the French authors6. Viral particles appeared as Gram-positive fine blue granules (black arrows) resembling bacterial cocci, from which the name Mimiviruses, was derived, i.e. Mimicking microbes. The blue gram positive granules in the cytoplasm of the amoeba (A and B) proved to be Mimiviruses and not bacteria when viewed using electron microscopy (C). Permission to use this picture was kindly provided by Prof La Scola Bernard.\n\nIn the human liver cells, this fine blue granularity was detected in the cytoplasm and nuclei (Figure 2).\n\nGram staining of a human liver (magnification, ×80). After the Gram stain, human liver cells displayed fine blue granules that, for didactic reasons, are enclosed in the black circles, but they can be seen scattered in the parenchyma. 
Note the similarities between the amoebal Mimiviruses appearing as blue granules (small picture frame and Figure 1) and the human blue granules. In the human cells, the gram positive granules appear as fine granules and are distinct from bacteria and other pigments, like lipofuscin, which are also present (brown colour).\n\nTo further verify the ubiquitous presence of these typical blue granules in the human cells, we also Gram stained human brain, ovary, lymph node and kidney tissues. Granules could be detected in the kidney glomeruli, but not in the renal tubules (Figure 3). The brain was intensely stained, showing a diffuse granular pattern (Figures 4A and B). The ovary did not display the gram positive granules (Figure 5). In the lymph node, Gram positive granules were absent from the germinal centres and only appeared in the paracortex (Figures 6A and B).\n\nHuman kidney Gram stain. Absence of Gram positive granules in the renal tubules (magnification, ×43.3). Gram positive granules were only present in the glomeruli, red circles (×55).\n\n(A and B) Human brain Gram stain (magnification, ×80). Gram positive granules were also present in the brain. Black arrows indicate the intracellular blue granules.\n\nAbsence of Gram positive granules in the ovary (×20).\n\n(A) Gram positive granules were absent in the germinal centres L1, L2, L3. Gram positive granules were specifically detected in the paracortex, outside the germinal centres, L4. (B) The picture with the green frame is L4 at higher magnification (×80) with blue granules in the circle.\n\nSubsequent electron microscopy (EM) analyses of the Gram positive human tissues confirmed the presence of cellular structures resembling Mimiviruses (Figure 7). This paralleled the experience of the French authors when they proved that the fine blue granules in the amoeba were actually Mimiviruses6. 
To enhance the detection resolution of the EM, we used a particular antigen retrieval solution with citraconic anhydride and heat7,8. EM analyses were conducted in two different international centres and 300 micrographs were scrutinized by operators who performed a blinded reading and were also blind to each other. The immunogold labelling assays also revealed a retroviral antigenicity associated with the structures when a mammalian anti-retroviral gag-p27 MoAb, recognizing common epitopes among several mammalian retroviruses, was tested.\n\nElectron microscopy (EM) of human liver tissues with the gram positive granules (black circle 1a) displayed Mimivirus-like structures at EM (1b). Similar gram positive blue granules in the amoeba (2a) are Mimiviruses (2b). Comparative morphological analysis revealed striking similarities between the human cellular structures (1c) and the amoebal giant Mimiviruses (2c). Comparison of the two viral factories (VF) and the giant particles (red arrows) in the larger EM micrographs in sections 1 and 2 shows similarities between the two.\n\nMass spectrometry (nano LC-ESI-MS/MS) and protein identification using PEAKS 7.5 software revealed the presence within the structures of human proteins, including conventional human histone proteins that co-existed with a histone H4 peptide KTVTSMDIVYALK. This manifested a distinct viral footprint of giant polydnaviruses that did not match any human sequence. In fact, humans and many other eukaryotes display, at the C-terminus of their histone H4 tail, the typical and extremely conserved sequence KTVTAMDVVYALK, with an I -> V replacement, and human histone H4 variants of this kind have never been described9,10.\n\nTo rule out false positive identifications when searching just with the virus database, we combined all identified proteins in the virus database and all identified proteins in the human database into one FASTA file (Supplemental File 1). 
The raw files were processed through PEAKS Studio 8.0, de novo and PEAKS DB modules.\n\nWhen analysing the biological samples, the peptide KTVTSMDIVYALK was identified confidently in two replicates at similar retention times: 23.05 minutes in replicate one and 23.37 minutes in replicate two.\n\nTo validate our results, a synthetic peptide with the same sequence as our candidate peptide KTVTSMDIVYALK was produced at the CRIBI peptide facility, University of Padua. A significant number of high intensity b and y ions matched the synthetic peptide spectrum. In particular, the b and y ion series from IVY (the part of the sequence that differs from the human protein) were prevalent in both spectra. We also performed a narrow scan in the mass range 730–740 m/z; MSMS for center mass 734.40 m/z (2nd isotope of 733.9 m/z; z=2); MSMS for center mass 744.40 m/z (2nd isotope of 734.9 m/z; z=2). The canonical human histone H4 and the IVY histone H4 variant were both present at m/z = 734.907; z=2. A summary of the proteomics assays is reported in Figure 8–Figure 12 and in Supplemental File 2.\n\nTop-bottom: Total ion chromatogram (TIC); extracted ion chromatogram (EIC) DB-hit; EIC H4 histone variant.\n\nFragmentation table at m/z = 734.907; z=2.\n\nSections 1 and 2 illustrate the fragmentation table and the spectrum (PEAKS software) of the ancestral variant of the histone H4 peptide KTVTSMDIVYALK, respectively. Section 3 is the fragmentation table of the synthetic peptide that was synthesized and used to validate the KTVTSMDIVYALK identification. Fragment ions that matched in both the biological and synthetic spectra are highlighted in colour. Red = xyz ions; blue = abc ions. The yellow region IVYALK indicates the histone H4 variant that was detected, along with the conventional histone H4, in the human cells. 
The same pattern IVY is also present in the histone H4 biology of polydnaviruses.\n\nSection 1: The identified peptides are the canonical human histone H4 isoforms. Section 2: The histone H4 peptide KTVTSMDIVYALK, indicated by the green arrow, has a unique footprint not found in any human or other eukaryotic proteins. This peptide displays the same unique sequence found in the C-terminus of the histone H4 of giant polydnaviruses (purple colour in the alignments between eukaryotic and polydnavirus histone H4 sequences).\n\nMass spectrometry identified conventional and ancestral human histone H4 variants. Direct analyses and a narrow scan in the mass range 730–740 m/z; MSMS for center mass 734.40 m/z (2nd isotope of 733.9 m/z; z=2); MSMS for center mass 744.40 m/z (2nd isotope of 734.9 m/z; z=2) confirmed the co-existence of the human and ancestral H4 isoforms (with the IVY pattern) at 734.905 m/z; z=2.\n\nThree dimensional (3D) protein models of the canonical human histone H4 protein and the histone H4 isoform having the viral footprint were generated using SWISS-MODEL (https://swissmodel.expasy.org/).\n\n\nDiscussion\n\nAlthough there are morphological and biochemical properties similar to giant viruses, the newly identified structures are possibly beyond the concept of typical viruses. The structures are ubiquitous in human tissues and are not associated with a specific disease. We are aware that being ubiquitous does not necessarily mean that these structures are not viruses, and not being infectious does not imply that they are not viruses, since viruses can also be ubiquitous and not pathogenic11–13. However, the type of the histological pattern and the mass spectrometry identification do not completely rule out that these structures could be human cellular components having a viral footprint14–16. 
Like mitochondria, which were originally bacterial cells and still retain bacterial features17–19, the human Mimivirus-like structures manifest an ancestral origin. Some of the histone variants detected within the human structures have the same universal motifs, associated with the same function, that are also used by giant polydnaviruses to manipulate their host transcription. The IVY histone pattern, which is present in these structures, tells the cells that some genes should be \"off\".\n\nThe basis for this assertion corresponds to the finding of an identical IVY pattern in giant viruses that represses host gene transcription20–22. In addition, the three-dimensional analysis of the histone H4 that displays the IVY sequence shows a closed conformation that might prevent gene transcription (Figure 13). It would be interesting to trace whether an evolutionary link exists between these human cellular structures, giant viruses and archaea. The recent finding that giant viruses can integrate into modern eukaryotic genomes has motivated the fascinating and highly provocative idea that giant viruses, along with archaea and bacteria, contributed significantly to the evolution of the first eukaryotes23–26.\n\n3D protein structure models generated by SWISS-MODEL suggest that the human Mimivirus-like structures have a role in the regulation of transcription. The histone H4 with the IVY pattern has a closed, transcriptionally inactive conformation. The canonical human histone H4 has an open conformation that is transcriptionally active.\n\n\nConclusions\n\nIn conclusion, did we find ubiquitous giant viruses suppressing human responses, or human structures containing \"something that was originally giant\" that are no longer viruses? The ancestral non-human nature of these structures is supported by the IVY histone pattern identified with mass spectrometry and by their capability to retain the Gram stain, which colours peptidoglycans. 
However, there are other alternative explanations for the structures that need to be considered as well. For example, the documented mammalian retroviral antigenicity does not entirely exclude the possibility that these structures could represent particles formed by the concurrent activity of retro-transposons.\n\nBy virtue of the development of microscopy, the ultrastructure of the cell apparatus had been established by the 1960s. Since then, new structures have been sporadically reported. The main challenge when uncovering cellular components is proteomics, which can be technically much more complex than transcriptomics, and electron microscopy is perceived by some scientists as an old-fashioned technique prone to artefacts. However, it is worth mentioning that the Golgi apparatus was discovered with the use of a rudimentary microscope in 1898, and many scientists did not believe that the Golgi apparatus was real, arguing instead that the apparent body was a visual distortion caused by staining27–30. It took almost a century to fully understand the function of the Golgi apparatus. The aim of this paper is merely to report what we have found inside the human cells and offer some hypotheses. Only time and additional experiments will clarify whether the identified structures are giant viruses having a retroviral antigenicity, cellular components having a viral ancestry, or human retrotransposon-like elements.\n\n\nData availability\n\nAll the histological samples, slides and EM grids are available to be examined; please contact the corresponding author.\n\nDataset 1: Entire project-raw mass spectrometry data of the positive protein spots (band 1–4). doi, 10.5256/f1000research.11007.d15380231.",
"appendix": "Author contributions\n\n\n\nEAL: conceived and led the research, protocols strategies, data analyses, manuscript; DM: mass spectrometry analyses, use of the PEAKS software; CF: Electron Microscopy Analyses; PG: Histochemistry, pathology reading, patient analyses and materials collection.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis work was supported by the St Vincent Health Care Group of Dublin.\n\n\nAcknowledgements\n\nWe thank Oriano Marin and his team for the synthetic peptide and the MS validation test, Grillo Rosalba for logistic support, Vittoria Balzano for technical assistance in histochemistry staining.\n\n\nSupplementary material\n\nSupplemental File 1: FASTA file containing all identified proteins in the virus database and all identified proteins in the human database.\n\nClick here to access the data.\n\nSupplemental File 2: Mass data and spectra of the identified histone H4 proteins.\n\nClick here to access the data.\n\n\nReferences\n\nAherfi S, Colson P, Audoly G, et al.: Marseillevirus in Lymphoma: a giant in the lymph node. Lancet Infect Dis. 2016; 16(10): e225–34. PubMed Abstract | Publisher Full Text\n\nColson P, Aherfi S, La Scola B, et al.: The role of giant viruses of amoebas in humans. Curr Opin Microbiol. 2016; 31: 199–208. PubMed Abstract | Publisher Full Text\n\nColson P, La Scola B, Raoult D: Giant viruses of amoebae as potential human pathogens. Intervirology. 2013; 56(6): 376–85. PubMed Abstract | Publisher Full Text\n\nRaoult D, La Scola B, Birtles R: The discovery and characterization of Mimivirus, the largest known virus and putative pneumonia agent. Clin Infect Dis. 2007; 45(1): 95–102. PubMed Abstract | Publisher Full Text\n\nBeveridge TJ: Use of the gram stain in microbiology. Biotech Histochem. 2001; 76(3): 111–8. PubMed Abstract | Publisher Full Text\n\nLa Scola B, Audic S, Robert C, et al.: A giant virus in amoebae. Science. 2003; 299(5615): 2033. 
PubMed Abstract | Publisher Full Text\n\nMoriguchi K, Mitamura Y, Iwami J, et al.: Energy filtering transmission electron microscopy immunocytochemistry and antigen retrieval of surface layer proteins from Tannerella forsythensis using microwave or autoclave heating with citraconic anhydride. Biotech Histochem. 2012; 87(8): 485–493. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLeong AS, Haffajee Z: Citraconic anhydride: a new antigen retrieval solution. Pathology. 2010; 42(1): 77–81. PubMed Abstract | Publisher Full Text\n\nTalbert PB, Henikoff S: Histone variants--ancient wrap artists of the epigenome. Nat Rev Mol Cell Biol. 2010; 11(4): 264–75. PubMed Abstract | Publisher Full Text\n\nKamakaka RT, Biggings S: Histone variants: deviants? Genes Dev. 2005; 19(3): 295–310. PubMed Abstract | Publisher Full Text\n\nRoossinck MJ: Plants, Viruses and the environment: Ecology and mutualism. Virology. 2015; 479–480: 271–77. PubMed Abstract | Publisher Full Text\n\nPollicino T, Raffa G, Squadrito G, et al.: TT virus has a ubiquitous diffusion in human body tissues: analyses of paired serum and tissue samples. J Viral Hepat. 2003; 10(2): 95–102. PubMed Abstract | Publisher Full Text\n\nMortimer PP: Orphan viruses, orphan diseases: still the raw material for virus discovery. Rev Med Virol. 2013; 23(6): 337–9. PubMed Abstract | Publisher Full Text\n\nKeck KM, Pemberton LF: Histone chaperones link histone nuclear import and chromatin assembly. Bioch Biophys Acta. 2013; 1819(3–4): 277–89. PubMed Abstract | Publisher Full Text\n\nLiu WH, Churchill ME: Histone transfer among chaperones. Biochem Soc Trans. 2012; 40(2): 357–63. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLi Q, Burgess R, Zhang Z: All roads lead to chromatin: multiple pathways for histone deposition. Bioch Biophys Acta. 2013; 1819(3–4): 238–46. PubMed Abstract | Publisher Full Text\n\nGray MW, Burger G, Lang BF: Mitochondrial evolution. Science. 1999; 283(5407): 1476–81. 
PubMed Abstract | Publisher Full Text\n\nGray MW, Burger G, Lang BF: The origin and early evolution of mitochondria. Genome Biol. 2001; 2(6): REVIEWS1018. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAndersson SG, Karlberg O, Canbäck B, et al.: On the origin of mitochondria: a genomics perspective. Philos Trans R Soc Lond B Biol Sci. 2003; 358(1429): 165–77; discussion 177–9. PubMed Abstract | Publisher Full Text | Free Full Text\n\nThomas V, Bertelli C, Collyn F, et al.: Lausannevirus, a giant amoebal virus encoding histone doublets. Environ Microbiol. 2011; 13(6): 1454–66. PubMed Abstract | Publisher Full Text\n\nGad W, Kim Y: A viral histone H4 encoded by Cotesia plutellae bracovirus inhibits haemocyte-spreading behaviour of the diamondback moth, Plutella xylostella. J Gen Virol. 2008; 89(Pt 4): 931–8. PubMed Abstract | Publisher Full Text\n\nHepat R, Song JJ, Lee D, et al.: A viral histone h4 joins to eukaryotic nucleosomes and alters host gene expression. J Virol. 2013; 87(20): 11223–30. PubMed Abstract | Publisher Full Text | Free Full Text\n\nDurzyńska J, Goździcka-Józefiak A: Viruses and cells intertwined since the dawn of evolution. Virol J. 2015; 12: 169. PubMed Abstract | Publisher Full Text | Free Full Text\n\nYamada T: Giant viruses in the environment: their origins and evolution. Curr Opin Virol. 2011; 1(1): 58–62. PubMed Abstract | Publisher Full Text\n\nMoreira D, López-García P: Evolution of viruses and cells: do we need a fourth domain of life to explain the origin of eukaryotes? Philos Trans R Soc Lond B Sci. 2015; 370(1678): 20140327. PubMed Abstract | Publisher Full Text | Free Full Text\n\nForterre P, Gaïa M: Giant viruses and the origin of modern eukaryotes. Curr Opin Microbiol. 2016; 31: 44–9. PubMed Abstract | Publisher Full Text\n\nEricsson JL: Studies on induced cellular autophagy. I. Electron microscopy of cells with in vivo labelled lysosomes. Exp Cell Res. 1969; 55(1): 95–106. 
PubMed Abstract | Publisher Full Text\n\nProfessor Camillo Golgi. Br Med J. 1926; 1(3396): 221. PubMed Abstract | Free Full Text\n\nMazzarello P, Garbarino C, Calligaro A: How Camillo Golgi became \"the Golgi\". FEBS Lett. 2009; 583(23): 3732–7. PubMed Abstract | Publisher Full Text\n\nBentivoglio M, Mazzarello P: One hundred years of the Golgi apparatus: history of a disputed cell organelle. Ital J Neurol Sci. 1998; 19(4): 241–7. PubMed Abstract | Publisher Full Text\n\nLusi EA, Maloney D, Caicci F, et al.: Dataset 1 in: Questions on unusual Mimivirus-like structures observed in human cells. F1000Research. 2017. Data Source"
}
|
[
{
"id": "20949",
"date": "25 May 2017",
"name": "Carlo Presutti",
"expertise": [
"Reviewer Expertise Molecular Biology"
],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe paper by Lusi et al. describes the identification of giant virus particles in human cells that seem to be quite ubiquitously and not related to any pathology. As long as I can judge, considering the fact that I am not a virologist nor an EM (electron microscopy) expert, the experiments seem to be clear and well executed. In particular the mass spectrometry experiments sound convincing. Authors indicate that \"Mimivirus-like structures identified in the human cells were ubiquitous and manifested a distinct mammalian retroviral antigenicity.\" However no data or experiments are shown about this interesting feature. This could be included in the paper or at least added to the discussion. The authors should also indicate why they did not go for nucleic acid identification: this could be an easier and clearer way to characterize these organisms.\n\nThe Gram-positive staining of human tissues described by the Authors is quite curious and potentially interesting, although the images presented are not so clear. Moreover, same magnification should be shown for all samples in order to better appreciate the differences among different tissues underlined by the Authors. Concerning the EM micrograph, while the virus particles inside the amoeba cell are clearly visible, is the giant particle in liver cells that indicated by the two arrowheads? If so, the similarities that can be appreciated are just the large dimensions, as they appear rather different in morphology. 
The Authors also refer to a retroviral antigenicity associated to the granules, as determined by staining with anti p27-gag. In the MM of the manuscript a western blot for this protein is also described, however I could not find any data about them. Since the antibody used was specific for the p27-gag from FeLV, was the cross-reactivity with the human retrovirus Gag protein tested? The data from mass spectrometry analysis look interesting, although it is not clear to me what kind of tissues were analysed. Albeit the data would need to be improved as suggested, the findings reported appear very intriguing and of interest for future developments. Certainly, it seems strange that no one has ever appreciated the presence of these intracellular structures before.\n\nIs the work clearly and accurately presented and does it cite the current literature? Yes\n\nIs the study design appropriate and is the work technically sound? Partly\n\nAre sufficient details of methods and analysis provided to allow replication by others? Yes\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nNot applicable\n\nAre all the source data underlying the results available to ensure full reproducibility? No source data required\n\nAre the conclusions drawn adequately supported by the results? Partly",
"responses": [
{
"c_id": "3062",
"date": "27 Sep 2017",
"name": "Elena Angela Lusi",
"role": "Author Response",
"response": "Author ResponseMy previous report of unusual Mimiviruses-like structures in human cells required additional investigations. This time I will focus on clarifying their unique retroviral nature. With a pan retroviral-PCR detection system (1) and genomic sequencing, I can be quite confident in stating that the mimivirus-like structures are human Retro-Giant-Viruses. The retroviral sequences identified showed ≥90% nucleotide identity to the HERV-W species of human endogenous retroviruses (sequence deposited in GenBank with the accession number BankIt2050100 BSeq#1 MF996371). This genetic analysis further confirms the already described EM tests where an anti FeLV p27gag MoAb specifically marked the giant viruses as well as the viral factory, (refer to the striking EM immunogold image with an anti Feline Leukemia virus protein p27gag, black dots). In fact, the pol genes of the viruses HERV-W are all related to other mammalian C-type retroviruses, such as murine leukemia virus, gibbon ape leukemia virus, and feline leukemia virus, on the basis of nucleotide and amino acid sequence homology (2-7).We are facing not an archetypal human retrovirus nor even a large human retrovirus, but a human giant virus with an ancestral mammalian retroviral core.Although sharing some morphological features with Mimiviruses, this human Retro-giant virus differ substantially from the DNA-amoebal giant viruses for its unique presence of mammalian retroviral genes (gag, pol and env).I believe that this finding will add a new dimension to the giant viruses in general and challenge our current concepts in retrovirology. The old classification and taxonomy of retroviruses might need an update and should include the Retro-giant viruses. Elena Angela Lusi M.D., Ph.D.St Vincent Health Care Group-UCD, Dublin, Ireland. Suggested References Tuke PW, Perron H, Bedin F, Beseme F, Garson JA. Development of a pan-retrovirus detection system for multiple sclerosis studies. 
Acta Neurol Scand Suppl. 1997;169:16-21. Sherr C. J., Fedele L. A., Benveniste R. E., Todaro G. J. , Interspecies antigenic determinants of the reverse transcriptases and p30 proteins of mammalian type C viruses. J Virol. 15(6), 1440-8 (1975) M. A. Morgan, T. D. Copeland, S. Oroszlan, Structural and antigenic analysis of the nucleic acid-binding proteins of bovine and feline leukemia viruses. J Virol. 46(1), 177-86 (1983) G. Geering, T. Aoki, L. J. Old, Shared viral antigen of mammalian leukaemia viruses. Nature. 226(5242),265–266 (1970) J. Davis, R. V. Gilden , S. Oroszlan, Multiple species-specific and interspecific antigenic determinants of a mammalian type C RNA virus internal protein. Immunochemistry 12(1), 67-72 (1975) Blond J-L, Besème F, Duret L, Bouton O, Bedin F, Perron H, Mandrand B, Mallet F. Molecular characterization and placental expression of HERV-W, a new human endogenous retrovirus family. J Virol. 1999;73:1175–1185. M. Wunsch, A. S. Schulz, W. Kock, R. Friedrich, G. Hunsmann, Sequence analysis of Gardner-Arnstein feline leukaemia virus envelope gene reveals common structural properties of mammalian retroviral envelope genes. EMBO J. 2(12), 2239–2246 (1983)"
}
]
},
{
"id": "21292",
"date": "12 Jun 2017",
"name": "Didier Raoult",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nAccept as it is This is a fascinating paper. This reviewer doesn’t know what it means anyway. However, Marseillevirus was reported in the blood of healthy donors that fuel the hypothesis of an asymptomatic carriage of giant viruses. Wholes sequencing of the positive samples after depletion of human genes may reveal if there is viral DNA.",
"responses": [
{
"c_id": "3061",
"date": "27 Sep 2017",
"name": "Elena Angela Lusi",
"role": "Author Response",
"response": "Author ResponseMy previous report of unusual Mimiviruses-like structures in human cells required additional investigations. This time I will focus on clarifying their unique retroviral nature. With a pan retroviral-PCR detection system (1) and genomic sequencing, I can be quite confident in stating that the mimivirus-like structures are human Retro-Giant-Viruses. The retroviral sequences identified showed ≥90% nucleotide identity to the HERV-W species of human endogenous retroviruses (sequence deposited in GenBank with the accession number BankIt2050100 BSeq#1 MF996371). This genetic analysis further confirms the already described EM tests where an anti FeLV p27gag MoAb specifically marked the giant viruses as well as the viral factory, ( refer to the striking EM immunogold image with an anti Feline Leukemia virus protein p27gag, black dots .)In fact, the pol genes of the viruses HERV-W are all related to other mammalian C-type retroviruses, such as murine leukemia virus, gibbon ape leukemia virus, and feline leukemia virus, on the basis of nucleotide and amino acid sequence homology (2-7).We are facing not an archetypal human retrovirus nor even a large human retrovirus, but a human giant virus with an ancestral mammalian retroviral core.Although sharing some morphological features with Mimiviruses, this human Retro-giant virus differ substantially from the DNA-amoebal giant viruses for its unique presence of mammalian retroviral genes (gag, pol and env).I believe that this finding will add a new dimension to the giant viruses in general and challenge our current concepts in retrovirology. The old classification and taxonomy of retroviruses might need an update and should include the Retro-giant viruses. Elena Angela Lusi M.D., Ph.D.St Vincent Health Care Group-UCD, Dublin, Ireland. Suggested References Tuke PW, Perron H, Bedin F, Beseme F, Garson JA. Development of a pan-retrovirus detection system for multiple sclerosis studies. 
Acta Neurol Scand Suppl. 1997;169:16-21. Sherr C. J., Fedele L. A., Benveniste R. E., Todaro G. J. , Interspecies antigenic determinants of the reverse transcriptases and p30 proteins of mammalian type C viruses. J Virol. 15(6), 1440-8 (1975) M. A. Morgan, T. D. Copeland, S. Oroszlan, Structural and antigenic analysis of the nucleic acid-binding proteins of bovine and feline leukemia viruses. J Virol. 46(1), 177-86 (1983) G. Geering, T. Aoki, L. J. Old, Shared viral antigen of mammalian leukaemia viruses. Nature. 226(5242),265–266 (1970) J. Davis, R. V. Gilden , S. Oroszlan, Multiple species-specific and interspecific antigenic determinants of a mammalian type C RNA virus internal protein. Immunochemistry 12(1), 67-72 (1975) Blond J-L, Besème F, Duret L, Bouton O, Bedin F, Perron H, Mandrand B, Mallet F. Molecular characterization and placental expression of HERV-W, a new human endogenous retrovirus family. J Virol. 1999;73:1175–1185. M. Wunsch, A. S. Schulz, W. Kock, R. Friedrich, G. Hunsmann, Sequence analysis of Gardner-Arnstein feline leukaemia virus envelope gene reveals common structural properties of mammalian retroviral envelope genes. EMBO J. 2(12), 2239–2246 (1983)"
}
]
}
] | 1
|
https://f1000research.com/articles/6-262
|
https://f1000research.com/articles/6-258/v1
|
13 Mar 17
|
{
"type": "Research Note",
"title": "Assessing the species composition of tropical eels (Anguillidae) in Aceh Waters, Indonesia, with DNA barcoding gene cox1.",
"authors": [
"Zainal A. Muchlisin",
"Agung Setia Batubara",
"Nur Fadli",
"Abdullah A. Muhammadar",
"Afrita Ida Utami",
"Nurul Farhana",
"Mohd Nor Siti-Azizah",
"Agung Setia Batubara",
"Nur Fadli",
"Abdullah A. Muhammadar",
"Afrita Ida Utami",
"Nurul Farhana",
"Mohd Nor Siti-Azizah"
],
"abstract": "The objective of the present study was to evaluate the species diversity of eels native to Aceh waters based on genetic data. Sampling was conducted in western coast waters of Aceh Province, Indonesia, from July to August 2016. Genomic DNA was extracted from the samples, a genomic region from the 5’ region of the cox1 gene was amplified and sequenced, and this was then used to analyse genetic variation. The genetic sequences were blasted into the NCBI database. Based on this analysis there were three valid species of eels that occurred in Aceh waters, namely Anguilla marmorata, A. bicolor bicolor, and A. bengalensis bengalensis.",
"keywords": [
"Sidat",
"Ileah",
"Anguilla bicolor",
"Anguilla marmorata",
"DNA barcoding"
],
"content": "Introduction\n\nThere are 114 species of freshwater and brackish water fish distributed around 17 sampling locations across Aceh waters1. Several of these have the potential for aquaculture, e.g. the Anguilla spp. of tropical eels, locally known as sidat or illeah in Acehnese language2–3. Based on morphological characteristics, only two species of eels have been recorded in Aceh waters, Anguilla bicolor and Anguilla marmorata1, but it is believed that the true number of species is greater because some parts of the inland waters in Aceh province have not been explored yet. According to Miller and Tsukamoto4, there are 19 species of eels that have been identified worldwide, 7 of which are found in Indonesian waters5. It is therefore very likely that new species will be found in Aceh waters.\n\nFor fisheries management it is crucial to identify these species in order to plan a better conservation strategy, since each one has unique behavioral patterns, and should be independently managed. Eels are very similar morphologically, so it is very difficult to distinguish one species from the other based on morphological characteristics only. Analysing genetic data through DNA barcoding can solve this problem6, so that the true number of eel species living in the waters of Aceh can be evaluated. The objective of the present study was to verify the taxonomic status of eels in Aceh waters by amplifying the cox1 gene and analysing the genetic data.\n\n\nMethods\n\nThe study was conducted on the western coast of Aceh Province, Indonesia, from July to November 2016. The samples were processed and analyzed in the School of Biological Sciences, Universiti Sains Malaysia. Sampling was done at night from 18.00 to 06.00 hours. Adult eels were caught using line fishing, while traps were used to catch glass eels. Eel larvae are called glass eels; they have translucent white bodies and measure about 5–10 cm. 
The length of adult eels is species-dependent, but most measure between 40–120 cm.\n\nApproximately 1 cm2 of caudal fin tissue was taken from each specimen using a sterile procedure to avoid contamination. The tissue was placed into 2.0 ml tubes containing 96% alcohol. Genomic DNA was isolated using Aqua Genomic DNA solution following the manufacturer’s protocol7–8. DNA electrophoresis was carried out on a 0.8% agarose gel at 100 V. The quality and quantity of the extracted DNA were assessed using a spectrophotometer. A region approximately 655 bp in size was amplified from the 5’ region of the mitochondrial cytochrome oxidase subunit I (cox1) gene following the protocol of Ward et al.9 with this primer pair:\n\nFishF1: 5’TCAACCAACCACAAAGACATTGGCAC3’\n\nFishR1: 5’TAGACTTCTGGGTGGCCAAAGAATCA3’\n\nAfter amplification, PCR products were run on 1.2% agarose gels at 100 V. The clearest band was selected and purified using a purification kit (PCR Clean-Up System, Promega), following the manufacturer's protocol. The purified products were run on 1.2% agarose gels at 100 V to check for bands, and only clear products were sent for sequencing to First BASE Laboratory Sdn Bhd in Kuala Lumpur, Malaysia. All obtained sequences were edited and aligned using the MEGA 6.0 program10. Multiple sequence alignments were performed on the edited sequences with Clustal W, which is integrated into the MEGA 6.0 program. The sequences were then compared against the NCBI database using BLAST to identify species. Pairwise genetic distances among sequences were estimated using the Kimura 2-parameter model, and Neighbour-Joining (NJ) was used to construct phylogenetic trees to determine genetic relationships among haplotypes.\n\n\nStatement on animal ethics\n\nAll procedures involving animals were conducted in compliance with The Syiah Kuala University Research and Ethics Guidelines, Section of Animal Care and Use in Research (Ethic Code No: 958/2015). 
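As an illustration of the distance model named above, the Kimura 2-parameter (K2P) distance corrects the observed differences between two aligned sequences using the proportions of transitions (P) and transversions (Q), d = -(1/2) ln[(1 - 2P - Q) sqrt(1 - 2Q)]. This is only a sketch of the standard formula; the function and sequences are illustrative and are not the study's data or MEGA's implementation.

```python
import math

def k2p_distance(seq1, seq2):
    # Kimura 2-parameter distance between two aligned nucleotide sequences.
    # P = proportion of transitions (A<->G, C<->T), Q = transversions.
    pairs = [(a, b) for a, b in zip(seq1.upper(), seq2.upper())
             if a in 'ACGT' and b in 'ACGT']          # skip gaps/ambiguities
    n = len(pairs)
    transitions = sum(1 for a, b in pairs
                      if a != b and {a, b} in ({'A', 'G'}, {'C', 'T'}))
    transversions = sum(1 for a, b in pairs if a != b) - transitions
    P, Q = transitions / n, transversions / n
    return -0.5 * math.log((1 - 2 * P - Q) * math.sqrt(1 - 2 * Q))
```

In practice, software such as MEGA applies this pairwise distance to every pair of aligned cox1 sequences before constructing the NJ tree.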
Please refer to Supplementary File 1 for the completed ARRIVE guidelines checklist.\n\n\nResults\n\nThe 5’ region of the cox1 gene was successfully amplified from a total of 13 glass eel samples and 31 adult eel samples (Table 1). NCBI BLAST identified two species of eel among the adult samples, the shortfin eel A. bicolor bicolor and the giant mottled eel A. marmorata. In addition, three species of eels were recognized among the glass eel samples, namely A. bicolor bicolor, A. marmorata and the Indian mottled eel A. bengalensis bengalensis. A total of 20 haplotypes, consisting of 3 haplotypes of A. bengalensis bengalensis, 1 haplotype of A. marmorata, 15 haplotypes of A. bicolor bicolor and 1 haplotype of Uroconger lepturus (out-group), were produced from the 44 samples (Table 2), with 132 variable sites and a haplotype diversity (Hd) of 0.8742. Haplotype number 4 belongs to A. marmorata and was shared by 9 samples from 4 different locations. Haplotype number 5 belongs to A. bicolor bicolor and was shared by 13 samples from 6 locations. All of the haplotype sequences have been deposited in NCBI GenBank with accession numbers KY618767 to KY618795.\n\nTherefore, the study revealed that there are three valid species of tropical eels in Aceh waters: A. bicolor, A. marmorata, and A. bengalensis, the last being a newly recorded species in Aceh waters. The study indicates that multiple species of glass eels migrate from the sea into freshwater. One interesting finding was that one sample of a conger eel (Uroconger lepturus) was detected among the tropical glass eel samples. This indicates that DNA barcoding can successfully identify species of eels in Aceh waters that cannot be identified by biometric data. 
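The haplotype diversity reported above follows Nei's estimator, Hd = n/(n-1) × (1 − Σ p_i²), where p_i is the frequency of haplotype i among the n sequences. The sketch below is illustrative only; the counts are hypothetical and are not the study's haplotype table.

```python
def haplotype_diversity(counts):
    # Nei's unbiased haplotype diversity from per-haplotype sample counts.
    n = sum(counts)                      # total number of sequences
    freqs = [c / n for c in counts]      # haplotype frequencies p_i
    return n / (n - 1) * (1 - sum(p * p for p in freqs))

# e.g. four sequences split evenly over two haplotypes
hd = haplotype_diversity([2, 2])         # ≈ 0.667
```

A sample where every sequence is a distinct haplotype gives Hd = 1.0, while a single shared haplotype gives Hd = 0.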
Genetic data have become an important tool in assessing gene flow between marine populations11, species identification12 and monitoring the resources of marine fisheries13.\n\nThe genetic divergence between A. bicolor and A. marmorata was 5.0%, between A. bicolor and A. bengalensis it was 6.7%, and between A. marmorata and A. bengalensis it was 4.0% (Table 3). The phylogenetic tree showed a close relationship between A. marmorata and A. bengalensis (Figure 1). Based on IUCN14 data, A. bengalensis bengalensis and A. bicolor bicolor are categorized as near threatened, while the status of A. marmorata is of least concern. However, based on direct sampling in Aceh waters, the shortfin eels are still abundant and most frequently caught, and are distributed over a wide range of habitats including small streams, marshes, peat swamps, estuaries and irrigation channels in paddy fields1,15. Indian mottled and giant mottled eels, on the other hand, have been very rarely caught and are generally only found in large rivers directly connected to the sea.\n\n\nConclusion\n\nIt is concluded that three species of tropical eels are found in Aceh waters, namely A. marmorata, A. bicolor bicolor, and A. bengalensis bengalensis, of which A. bengalensis bengalensis is a newly recorded species.\n\n\nData availability\n\nSequenced DNA of tropical eels from Aceh waters can be found in the NCBI GenBank repository (https://www.ncbi.nlm.nih.gov/genbank/) with accession numbers KY618767 to KY618795.",
"appendix": "Author contributions\n\n\n\nZAM is responsible for developing research proposal and study design and approved the final draft of the paper. ASB, NF, AAM, NF and AIU are responsible for sample collection, sample processing, and data analysis. MNS is responsible for manuscript sequence alignment and proofreading of the draft.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis study was supported by Syiah Kuala University through the 2016 H index scheme (Contract number: 230/UN11.2/2016).\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nThe authors thank the Rector of Syiah Kuala University for providing the financial support to this study. Appreciation goes to Mr. Bahtiar Lubis and Mufakir Sidiq for their assistance during field work.\n\n\nReferences\n\nMuchlisin ZA, Siti-Azizah MN: Diversity and distribution of freshwater fishes in Aceh Water, Northern-Sumatra, Indonesia. International Journal of Zoological Research. 2009; 5(2): 62–79. Publisher Full Text\n\nMuchlisin ZA: Potency of freshwater fishes in Aceh waters as a basis for aquaculture development program. Jurnal Iktiologi Indonesia. 2013; 13(1): 91–96. Reference Source\n\nMuchlisin ZA, Maulidin M, Muhammadar AA, et al.: Inshore migration of Tropical glass eels (Anguilla spp.) in Lambeso River, Aceh Jaya District, Aceh Province, Indonesia. Aceh Journal of Animal Science. 2016; 1(2): 58–61. Publisher Full Text\n\nMiller MJ, Tsukamoto K: An introduction to leptocephali biology and identification. Ocean Research Institute, The University of Tokyo, Tokyo, 2004. Publisher Full Text\n\nSugeha HY, Aoyama J, Tsukamoto K: Downstream migration of tropical angullid silver eels in the Poso Lake, Central Sulawesi Island, Indonesia. Jurnal Limnotek. 2006; 23(1): 18–25. 
Reference Source\n\nHebert PD, Cywinska A, Ball SL, et al.: Biological identifications through DNA barcodes. Proc Biol Sci. 2003; 270: 313–321. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMuchlisin ZA, Fadli N, Siti-Azizah MN: Genetic variation and taxonomy of Rasbora group (Cyprinidae) from Lake Laut Tawar, Indonesia. Journal of Ichthyology. 2012; 52(4): 284–290. Publisher Full Text\n\nMuchlisin ZA, Thomy Z, Fadli N, et al.: DNA barcoding of freshwater fishes from Lake Laut Tawar, Aceh Province, Indonesia. Acta Ichthyologica et Piscatoria. 2013; 43(1): 21–29. Publisher Full Text\n\nWard RD, Zemlak TS, Innes BH, et al.: DNA barcoding Australia’s fish species. Philos Trans R Soc Lond B Biol Sci. 2005; 360(1462): 1847–1857. PubMed Abstract | Publisher Full Text | Free Full Text\n\nTamura K, Stecher G, Peterson D, et al.: MEGA6: Molecular Evolutionary Genetics Analysis version 6.0. Mol Biol Evol. 2013; 30(12): 2725–2729. PubMed Abstract | Publisher Full Text | Free Full Text\n\nPalumbi SR, Cipriono F: Species identification using genetic tools: the value of nuclear and mitochondrial gene sequences in whale conservation. J Hered. 1998; 89(5): 459–464. PubMed Abstract | Publisher Full Text\n\nPrioli SM, Prioli AJ, Julio HF Jr, et al.: Identification of Astyanax altiparanae (Teleostei, Characidae) in the Iguaçu River, Brazil, based on mitochondrial DNA and RAPD markers. Genetic and Molecular Biology. 2002; 25(4): 421–430. Publisher Full Text\n\nMenezes MR, Ikeda M, Taniguchi N: Genetic variation in skipjack tuna Katsuwonus pelamis(L.) using PCR-RFLP analysis of the mitochondrial DNA D-loop region. J Fish Biol. 2006; 68(supplement A): 156–161. Publisher Full Text\n\nInternational Union for Conservation of Nature (IUCN): IUCN redlist and of threatened species. Accessed on January 17, 2017. 2004. Reference Source\n\nMuchlisin ZA, Akyun Q, Rizka S, et al.: Ichthyofauna of Tripa Peat Swamp Forest, Aceh Province, Indonesia. CheckList. 2015; 11(2): 1560. 
Publisher Full Text"
}
|
[
{
"id": "20933",
"date": "22 Mar 2017",
"name": "Salman Abdo Al-Shami",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe manuscript in hands present interesting information about species diversity of tropical eels (Anguilla spp.) in Aceh, Indonesia. Although the manuscript is nicely presented, few justifications and clarification are still required.\n\nTitle\n\nFor general readers, using the word \"waters\" may make the readers confused about what type of water bodies in which the samples collected from. For example, how about using \"coastal line\" instead of \"waters\" or just delete the word \"waters\".\n\nPlease replace the family name of eels with \"Anguilla spp.\" to be more precise.\n\nAbstract I believe adding an introductory sentence will make the research summary more meaningful. This introductory sentence will highlight the importance of the study and make a sound justification of the study objectives.\n\nShould be read \"...the present study is to evaluate...\"\n\nShould be read \"the western\"\n\n\"coastal waters\" change into \"coastal line\" or \"marine environment\".\n\nAdd semicolon after \"namely;\"\n\nAdd comma after \"Based on this analysis\"\n\nThe word \"genomic\" makes me confused. Is it mitochondrial or genomic gene? 
Please correct me if I am wrong.\n\nIntroduction\nI would suggest extending the introduction in a way that gives the readers a comprehensive background about the research based on the available literature.\n\nIt would be nice to start the introduction with an introductory paragraph to give the readers the brief understanding about the research context.\n\nMethods The procedures and tools used to collect the eel samples should be described elaborately. For example, it was stated that traps were used to catch the glass eels. It was not mentioned what type of traps? How did the researchers set the trap? For how long did they leave the traps?\nIt will be excellent if the authors provide a geographical map showing the approximate locations of sampling sites.\n\nIt would be nice to add a reference to the method of sampling the eels' tissue.\n\nPlease add a reference to \"Kimura 2 parameters\"\n\nResults The presentation of the results is adequate and no further corrections or additions are required.\nFigure 1: please add \"Anguilla spp.\" to the figure caption.",
"responses": []
},
{
"id": "21916",
"date": "18 Apr 2017",
"name": "Mudjekeewis D. Santos",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis paper has some scientific merit in providing new information about species composition of eel in Aceh waters and in using COX1 gene as a marker. As such I find it suitable for indexing after major revisions:\nAuthors need to highlight in the Introduction the existing information/status about eels in Aceh waters.\n\nIn addition, they need to relate the study on existing eel trade (domestic or export) in the area if any since this is the main threat for the said species.\n\nThe paper of Asis et al. (2014)1 would help enrich the objective of this paper.\n\nThe reference DNA sequences used in the paper/trees are not clear. Did the authors established their own reference sequences for cox1? This should be indicated or made clear.",
"responses": []
},
{
"id": "22410",
"date": "11 May 2017",
"name": "Murugaiyan Kalaiselvam",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nIn Introduction, add few more points regarding the importance of Genetic identification, and demerits of conventional identification strategies (Only one point had been given in introduction for name sake, add few more)\n\nIn addition, author will add the possible outcome after identification of eels, in what way this work serves to the research community?\n\nPrior to molecular identification, the author done the sample identification by morphometric characters? Though it’s a old procedure, it is of much importance and the results of morphometric analysis acts as an base step for identification.\n\nData on Taxonomic characters will serve as a guide for identification of the same, whereas having sequences on hands will not be useful for further reference.\n\nAuthors stated that only two species of eels have been recorded in Aceh waters and in results they recorded 3 species with molecular results? 
Thus the morphometric identification of eels should be included so that what are the distinct features of 3 eels can be clarified to the readers.\n\nGenerally for DNA barcoding analysis lateral tissue from the left side of fish will be taken in to consideration, but the authors had chosen caudal fin tissue, is there any justification for taking the caudal fin tissue, if so justify that and add proper reference for that methodology.\n\nTotally 13 glass eel and 31 adult eels were, so totally out of the 44 samples, the results inferred belongs to only 3 species, so care should be taken prior to analyzing the samples for molecular identification as it is cost effective process and wastage of chemicals.\n\nHow did author arrived the genetic divergence?\n\nMaterials needs clear cut procedures and reference alone doesn’t enough: - Genomic DNA was isolated using Aqua Genomic DNA solution following the manufacturer’s protocol7-8 - Mitochondrial Cytochrome Oxidase Subunit I (cox1) gene following the protocol from Ward et al.9\n\nDiscussion part need to be written with the comparative studies made by author authors regarding the availability of eel in study area, identification problems, and results of the present study with respect to molecular identification and highlights the importance of the obtained results.\n\nConclusion seems to be the result and what is the inference made from the study should be written precisely and accurately.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-258
|
https://f1000research.com/articles/6-256/v1
|
13 Mar 17
|
{
"type": "Case Report",
"title": "Case Report: Efficacy of propranolol in delaying the growth of hemangioblastomas in a Von Hippel Lindau patient",
"authors": [
"Ana-Belen Perona-Moratalla",
"Gemma Serrano-Heras",
"Tomas Segura",
"Gemma Serrano-Heras",
"Tomas Segura"
],
"abstract": "Von Hippel Lindau is an inherited disease which leads to tumor growth, including hemangioblastomas in the central nervous system and retina. No pharmacological treatment has demonstrated efficacy. Propranolol is a beta-blocker widely used in some neurological and cardiac diseases, and its safety is known. We present a patient diagnosed with Von Hippel Lindau disease who was treated with propranolol for worsening migraine. The patient exhibited two asymptomatic hemangioblastomas, which showed no change in size during treatment with propranolol. Our case report suggests that propranolol could be effective in delaying the growth of hemangioblastomas in the central nervous system.",
"keywords": [
"Propranolol",
"hemangioblastomas",
"Von Hippel Lindau disease",
"case report"
],
"content": "Case description\n\nA 33-year-old Caucasian female who was diagnosed with Von Hippel Lindau (VHL) disease in 2002. Her mother suffered sudden death in 2002; and a diagnosis of VHL was made at her autopsy. Therefore, the patient was studied by a neurologist, and one hemangioblastoma (HB) of 2cm in size was observed at the medulla during magnetic resonance imaging (MRI). The patient exhibited no symptoms; however, she underwent surgery in 2003 for the risk of complications due to the size of the HB. The patient’s recovery was uneventful. Since then an annual MRI of the central nervous system (CNS) has been performed.\n\nFrom 2009, a progressive tumor growth of two HBs in the medulla was observed, which was checked annually by the neurosurgeon because the patient was asymptomatic.\n\nFurthermore, the patient suffered from occasional migraine episodes since 2003. She presented with a worsening of her previous migraine, having headache attacks everyday since October 2013. After discussing the various treatment options, the patient opted for propranolol at increasing doses up to 120 mg per day starting in March of 2014. At the 3 month follow-up visit after starting propranolol, the patient reported a slight reduction in her migraines; however the dose was increased to 160 mg per day because patient still suffered more than 10 migraine episodes per month. No adverse events were observed during that period of time. At the 9 month follow-up visit, 6 months after 160mg per day of intake, she showed a significant improvement. During propranolol treatment, the patient underwent a cerebral and spinal cord MRI in October 2014, which showed no changes from the previous scan performed one year before. The patient continued to take propranolol; however, side effects appeared (orthostatic hypotension) in March 2015 (after 12 months of propranolol treatment) and necessitated a slow decrease in propranolol dosage until the treatment was stopped in July 2015. 
Subsequently, the patient’s migraine did not worsen; however, a clear growth in the medullary HBs was shown by control MRI (Figure 1) in October 2015. The patient required surgery in January 2016, due to an increase in tumor size observed on MRI. Since then the patient has remained asymptomatic.\n\n\nDiscussion\n\nVHL disease, a rare autosomal dominant disorder, is caused by the deletion or mutation of the VHL tumor suppressor gene1–3. It has been reported that the absence of functional VHL protein, which occurs in the disease, often leads to the formation of highly vascular tumors, such as hemangioblastomas (HBs)4–6. Although some antiangiogenic therapies have been tried7,8, there are currently no effective pharmacological therapies for HBs, thus surgery remains the standard procedure9. Our patient was monitored by means of an annual MRI to check the growth of the tumors. The images were reviewed by the neurosurgeon in order to determine if the hemangioblastomas were of sufficient size for safe surgery.\n\nPropranolol is a beta-blocker that is offered as first line treatment in the prophylaxis of migraine10. It is also used for the treatment of essential tremor11, hypertension and some cardiac diseases. In our case, the patient suffered worsening of her migraine and propranolol administration was indicated. Propranolol also has proven efficacy in infantile hemangioma treatment12. Furthermore, propranolol has shown an antiangiogenic effect13, and a recent publication indicates that propranolol reduces the viability of HBs cultivated in vitro14. In the light of these data and after discussing the options with the patient, she decided to continue taking propranolol; however, due to symptomatic orthostatic hypotension, the patient had to stop. Unfortunately, the HBs showed clear growth on MRI after stopping treatment.\n\nIn summary, propranolol treatment appeared to inhibit growth of the HBs after several years of steady progression, as seen in the MRI results. 
Tumor growth commenced again once the treatment with propranolol was interrupted. Our case study suggests that propranolol can delay the growth of hemangioblastomas in the CNS.\n\n\nConsent\n\nWritten informed consent for publication of the clinical details and images was obtained from the patient.",
"appendix": "Author contributions\n\n\n\nABPM and GSH wrote the paper. TSM was the physician responsible for the patient in this case report. All authors have participated in the concept and design/analysis and interpretation of data, drafting and revising the manuscript, and they have given final approval for the manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nLatif F, Tory K, Gnarra J, et al.: Identification of the von Hippel-Lindau disease tumor suppressor gene. Science. 1993; 260(5112): 1317–1320. PubMed Abstract | Publisher Full Text\n\nFriedrich CA: Genotype-phenotype correlation in von Hippel-Lindau syndrome. Hum Mol Genet. 2001; 10(7): 763–767. PubMed Abstract | Publisher Full Text\n\nNordstrom-O'Brien M, van der Luijt RB, van Rooijen E, et al.: Genetic analysis of von Hippel-Lindau disease. Hum Mutat. 2010; 31(5): 521–537. PubMed Abstract | Publisher Full Text\n\nKim WY, Kaelin WG: Role of VHL gene mutation in human cancer. J Clin Oncol. 2004; 22(24): 4991–5004. PubMed Abstract | Publisher Full Text\n\nGläsker S: Central nervous system manifestations in VHL: genetics, pathology and clinical phenotypic features. Fam Cancer. 2005; 4(1): 37–42. PubMed Abstract | Publisher Full Text\n\nMaher ER, Neumann HP, Richard S: von Hippel-Lindau disease: a clinical and scientific review. Eur J Hum Genet. 2011; 19(6): 617–623. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMadhusudan S, Deplanque G, Braybrooke JP, et al.: Antiangiogenic therapy for von Hippel-Lindau disease. JAMA. 2004; 291(8): 943–944. PubMed Abstract | Publisher Full Text\n\nJonasch E, McCutcheon IE, Waguespack SG, et al.: Pilot trial of sunitinib therapy in patients with von Hippel-Lindau disease. Ann Oncol. 2011; 22(12): 2661–2666. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nCapitanio JF, Mazza E, Motta M, et al.: Mechanisms, indications and results of salvage systemic therapy for sporadic and von Hippel-Lindau related hemangioblastomas of the central nervous system. Crit Rev Oncol Hematol. 2013; 86(1): 69–84. PubMed Abstract | Publisher Full Text\n\nLoder E, Burch R, Rizzoli P: The 2012 AHS/AAN guidelines for prevention of episodic migraine: a summary and comparison with other recent clinical practice guidelines. Headache. 2012; 52(6): 930–945. PubMed Abstract | Publisher Full Text\n\nSchneider SA, Deuschl G: The treatment of tremor. Neurotherapeutics. 2014; 11(1): 128–38. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLéauté-Labrèze C, Hoeger P, Mazereeuw-Hautier J, et al.: A randomized, controlled trial of oral propranolol in infantile hemangioma. N Engl J Med. 2015; 372(8): 735–746. PubMed Abstract | Publisher Full Text\n\nLamy S, Lachambre MP, Lord-Dufour S, et al.: Propranolol suppresses angiogenesis in vitro: inhibition of proliferation, migration, and differentiation of endothelial cells. Vascul Pharmacol. 2010; 53(5–6): 200–208. PubMed Abstract | Publisher Full Text\n\nAlbiñana V, Villar Gómez de Las Heras K, Serrano-Heras G, et al.: Propranolol reduces viability and induces apoptosis in hemangioblastoma cells from von Hippel-Lindau patients. Orphanet J Rare Dis. 2015; 10: 118. PubMed Abstract | Publisher Full Text | Free Full Text"
}
|
[
{
"id": "22185",
"date": "25 Apr 2017",
"name": "Marie Louise Mølgaard Binderup",
"expertise": [
"Reviewer Expertise Molecular and clinical genetics",
"Von Hippel-Lindau disease"
],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nPerona-Moratalla et al. describe an interesting case of a von Hippel-Lindau (vHL) patient treated with the beta-blocker Propanolol for migraines, where they have documented the progression of the patient’s CNS tumors, hemangioblastomas (HBs). The concurrent lack of tumor growth in a 15-month period of Propanolol treatment is described to indicate a possible effect of the drug on delaying vHL hemangioblastoma development.\nThe case report is well-written and deals with the important subject of identifying factors that potentially modify vHL tumor growth. The authors take a cautious approach in suggesting that Propanolol possibly reduces CNS hemangioblastoma growth. However, we feel that the paper could benefit from a broader discussion of other factors that are known to or suggested to modulate vHL tumor development.\nThe authors describe that no changes in HB progression were seen during the first year (from October 2013-October 2014) when the patient had been given Propanolol for 9 months. During the next year (from October 2014- October 2015) a clear growth in the hemangioblastoma was seen, even though the patient had been treated with Propanolol during most of the period (until July 2015, although in reduced dosages from March 2015). It is unknown when exactly in this period that the described growth spurt had taken place. 
In the discussion it is described that the patient’s HBs had shown a steady progression on MRI in the years prior to the propranolol treatment. It would be helpful for the reader to get an idea of the extent of this growth during the many years that the patient was observed. This could, for example, be done with the use of a figure showing a timeline on which the tumor sizes at selected time points are indicated, or with the use of a table showing the MRI-evaluated sizes at the annual MRI scans throughout the observation period.\nFurther, we feel that the article would benefit from mentioning that there have been several reports of the natural fluctuations in growth patterns of CNS HBs in vHL patients that show a clear tendency for periods of stagnation and periods of growth spurts1-3. The exact triggers of tumor growth have not been fully described, and it is both novel and interesting to explore the possible effects of drugs like propranolol. Other factors that have been suggested to affect the natural pattern of tumor progression in vHL could also be discussed: genotype, anatomical tumor location, certain age intervals, and possible hormonal factors (such as gender and pregnancy).\nCase description: It would be useful to the reader to learn details on the phenotype and the genotype of the patient, and the family history: Which vHL manifestations have been diagnosed in the patient, and at which age? Has the VHL gene been analysed and what has been found? (If the gene has not been analysed yet, we highly recommend that this is done, see below). What was observed at the autopsy of the mother? Are other relatives affected and/or carrying the VHL variant?\n\nGenotype: Several genotype-phenotype correlations have been described in relation to vHL. Most importantly, it has been shown that carriers of VHL variants that do not result in a functional protein product (i.e. deletions, nonsense variants, frame-shift variants etc.) 
have more severe phenotypes than patients with VHL missense variants that produce an altered, but functional, protein product1,4,5. The case would benefit from mention of the patient’s genotype as well as a brief discussion of the possible effect of the genotype on her disease progression.\n\nAnatomical location of the CNS HBs: It would be of interest to the readers to know the exact anatomical locations of the medullary hemangioblastomas (HBs) as well as the radiologically estimated sizes/volumes of the tumors and whether there was associated cyst development. This has previously been done in several other publications reporting on the progression of CNS HBs in vHL patients over time2,3. Also, an HB’s anatomical location in the CNS has been shown to be correlated with the pattern of progression and cyst development1,4,6.\n\nAge periods: More specific details of the patient’s age at the mentioned time points in the case would be useful: was the patient 33 years old when she was first diagnosed with vHL in 2002, and 45 years old in 2014 when the propranolol treatment started? This is important, as it is also known that a patient’s age influences tumor growth and cyst development1,4,6.\n\nHormonal factors: A patient’s sex has been shown to be correlated to tumor progression; men have a tendency to develop more CNS HBs and to have more aggressive CNS HB growth compared to women1,7. Also, pregnancy has been suggested to influence tumor progression7-9. Has the patient been pregnant in the observation period?\n\nIn addition, a more detailed description of the molecular background of the disease, i.e. the main cellular functions of the VHL protein in relation to tumorigenesis, would also be of interest to the reader, especially if followed by theories of how and why propranolol might affect tumor growth on a cellular level. The authors have previously published important observations in their already cited study regarding the in vitro effect of propranolol on HB cells. 
A more detailed description of these findings would be interesting to include in the case.\n\nIs the background of the case’s history and progression described in sufficient detail? Partly\n\nAre enough details provided of any physical examination and diagnostic tests, treatment given and outcomes? Partly\n\nIs sufficient discussion included of the importance of the findings and their relevance to future understanding of disease processes, diagnosis or treatment? Partly\n\nIs the case presented with sufficient detail to be useful for other practitioners? Partly",
"responses": []
},
{
"id": "22531",
"date": "11 May 2017",
"name": "Eric Jonasch",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThe authors describe a patient with VHL disease who demonstrated arrested growth of a CNS hemangioblastoma while being treated with propranolol and propose a causal relationship between propranolol therapy and hemangioblastoma growth arrest.\n\nThe authors should provide more detail on mutational subtype and family history.\n\nThe authors should provide more detail on imaging studies, in particular the size of the hemangioblastoma at specific timepoints, and overlay the time period and dose of propranolol.\n\nThe authors should discuss potential mechanisms of action of propranolol and other reports on propranolol efficacy in the context of known hemangioblastoma biology.\n\nIf the above points are addressed this case series may be useful.\n\nIs the background of the case’s history and progression described in sufficient detail? Partly\n\nAre enough details provided of any physical examination and diagnostic tests, treatment given and outcomes? Partly\n\nIs sufficient discussion included of the importance of the findings and their relevance to future understanding of disease processes, diagnosis or treatment? Partly\n\nIs the case presented with sufficient detail to be useful for other practitioners? No",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-256
|
https://f1000research.com/articles/6-253/v1
|
13 Mar 17
|
{
"type": "Research Article",
"title": "Cost-effectiveness of early versus delayed antiretroviral therapy in tuberculosis patients infected with HIV in sub-Saharan Africa",
"authors": [
"Rashidah T. Uthman",
"Olalekan A. Uthman",
"Olalekan A. Uthman"
],
"abstract": "Background: The most challenging issue physicians are facing is the appropriate timing of introducing antiretroviral therapy (ART) along with ongoing tuberculosis (TB) therapy in HIV and TB co-infected patients. This study examined the cost-effectiveness of early versus delayed ART initiation in TB patients, infected with HIV (co-infected patients) in a sub-Saharan Africa setting. Methods: A decision analytic model based on previously published and real-world evidence was applied to evaluate clinical and economic outcomes associated with early versus delayed ART in TB and HIV co-infection. Incremental cost-effectiveness ratio (ICER) was calculated with both costs and quality-adjusted life years (QALYs). Different assumptions of treatment benefits and costs were taken to address uncertainties and were tested with sensitivity analyses. Results: In base case analysis, the expected cost of giving early ART to TB patients infected with HIV was $1372, with a QALY gain of 0.68, while the cost of delayed ART was $955, with a QALY gain of 0.62. The ICER shows $6775 per QALYs, which suggests that it is not as cost-effective, since it is greater than 3 x GDP per capita ($5,086) for sub-Saharan Africa willingness to pay (WTP) threshold. At $10,000 WTP, the probability that early ART is cost effective compared to delayed ART is 0.9933. At cost-effectiveness threshold of $5086, the population expected value of perfect information becomes substantial (≈US$5 million), and is likely to exceed the cost of additional investigation. This suggests that further research will be potentially cost-effective. Conclusions: From the perspective of the health-care payer in sub-Saharan Africa, early initiation of ART in HIV and TB co-infection cannot be regarded as cost-effective based on current information. 
The analysis shows that further research will be worthwhile and potentially cost-effective in resolving uncertainty about whether or not to start ART early in HIV and TB co-infection.",
"keywords": [
"HIV",
"tuberculosis",
"early intervention",
"sub-Saharan Africa"
],
"content": "Introduction\n\nCo-infected patients with HIV and tuberculosis (TB) has been a serious concern to healthcare sectors in many countries, commonly countries with resource constrained settings (Blanc et al., 2011; Manosuthi et al., 2012; Sinha et al., 2012). The incidence population of TB globally in 2012 was reported by the World Health Organisation (WHO) to be 8.6 million, and 1.1 million of this population were HIV-infected individuals (Mfinanga et al., 2014; WHO, 2013). The most challenging issue physicians are facing is the appropriate timing of introducing antiretroviral therapy (ART) along with ongoing TB therapy in HIV and TB co-infected patients (Mfinanga et al., 2014; Sinha et al., 2012). Delaying the introduction of ART for co-infected patients, and prescribing antibiotics only to these patients, has been proven to increase the risk of reactivation and reinfection of TB among patients, as a result of the HIV infection (Daley et al., 1992; De Cock et al., 1992; Sinha et al., 2012; Wilkinson & Moore, 1996). Hence it increases the death rate among co-infected patients, compared to individuals infected only with TB (Sinha et al., 2012; Wilkinson & Moore, 1996). The combination of the two therapies (ART and antibiotics) has been reported to have a significant outcome in reducing the mortality among co-infected patients, leading to a 90% reduction of TB reinfection (Manosuthi et al., 2006; Sanguanwongse et al., 2008). However, combining the two therapies is complicated, and can lead to drug-drug interactions, severe toxicities, poor medication adherence, increase pill burden, and risk of developing immune reconstitution inflammatory syndrome (IRIS) associated with TB (Mfinanga et al., 2014). To date, the cost effectiveness of early versus delayed ART initiation in co-infected patients has not been reported. 
The aim of this study is to examine the cost effectiveness of early versus delayed ART initiation in TB patients infected with HIV (co-infected patients).\n\n\nMethods\n\nWe built an analytical decision model comparing early versus delayed ART initiation in co-infected patients, using cost-utility analysis and cost-effectiveness analysis.\n\nA decision tree model was developed in Microsoft Excel 2013 to compare the impact of early versus delayed introduction of ART in the management of tuberculosis patients infected with HIV, and their mortality rate (Figure 1). Although both conditions are chronic, and a Markov model would therefore also have been an appropriate tool (Soto, 2002), the decision tree model was used because the outcome of interest can be assessed within a short time period (12 weeks) (Halpern et al., 1998).\n\nART – Antiretroviral Therapy; OBS – Observational Studies; RCT – Randomised Controlled Trial.\n\nThe cost-effectiveness model was built from the health-care payer's perspective, where only direct programme and medical costs were included. Indirect costs incurred by patients were not considered.\n\nThe study time horizon used for the model is 12 weeks, chosen based on the treatment period of latent TB, which can be 12 weeks (Manosuthi et al., 2012; Sinha et al., 2012).\n\nThe population of interest is patients with TB who are infected with HIV and commencing anti-TB treatment for 12 weeks. The setting was sub-Saharan Africa. The data used in the model for both costs and consequences were extracted from previously published literature (summarised in Table 1; Abimbola et al., 2012; Cleary et al., 2006; Esfahani et al., 2011; Holland et al., 2009; Uthman et al., 2015). We conducted focused searches for the studies in Medline (from inception to December 2016) using the following keywords: tuberculosis, HIV, cost, and quality of life. 
ART, antiretroviral therapy; QALY, quality-adjusted life year.\n\nProbabilities: Probabilities derived from the published literature were used for each arm of the tree to determine the number of patients that will either have adverse events or no adverse events, and those that will either survive or die.\n\nUtilities: The utility values, quality-adjusted life years (QALYs), for each arm of the tree were also derived from the published literature.\n\nCosts: Direct costs were the only costs considered in the model, as the perspective was health care only. The costs include the costs of a complete treatment of TB with adverse or no adverse events (Esfahani et al., 2011), the costs of ART, and the costs of additional treatments for dying patients (Table 1). All costs were converted to US dollars ($) and inflated to 2014 prices using a USA inflation calculator (http://www.usinflationcalculator.com/).\n\nReduced mortality was the primary outcome measured, and the incremental cost-effectiveness ratio (ICER), measured in cost per QALY, was used. The results are presented on the cost-effectiveness plane (CE-plane) and cost-effectiveness acceptability curves (CEACs), along with probabilistic sensitivity analysis (PSA) to represent the uncertainty in the model output.\n\nThe following assumptions were made:\n\nTB treatment - all patients were assumed to be undergoing anti-TB treatment that takes 3 months to complete (Holland et al., 2009).\n\nTotal ART treatment - it was assumed that the total ART treatment cost for 12 weeks is half the cost of healthcare utilization of patients that died within the first 6 months of ART. The same assumption was used for the patients that survived the first 6 months of ART. 
The cost excluded the expected expenditure per ART of the included patients.\n\nMortality rate - it was assumed that the relative risk of mortality is three times higher in the adverse event group than in the non-adverse event group (Hoyo-Ulloa et al., 2011).\n\nAdditional costs of dying patients - it was assumed that the additional cost of treating dying patients was a result of the adverse event developed by the patients, which can result in death.\n\nTo handle uncertainty surrounding the model parameters and test the robustness of the model outcome, probabilistic sensitivity analysis was carried out to justify the decision on whether starting ART early or delaying treatment in TB patients infected with HIV is cost-effective. About 10,000 random draws were generated using the Microsoft Excel 2013 random generator. Cost-effectiveness acceptability curves (CEACs) can also be used to summarize the uncertainty around a cost-effectiveness analysis (Fenwick et al., 2006). The CEAC shows the probability that an intervention is cost-effective compared with the alternative intervention, across the range of threshold values per QALY accepted by decision makers (Fenwick et al., 2006).\n\nUncertainties around the cost-effectiveness estimates can also be assessed using the expected value of perfect information (EVPI) (Eckermann et al., 2010). Errors in cost-effectiveness estimates can lead to wrong decisions, in which health benefits and resources are forgone by choosing the wrong alternative (Briggs et al., 2006). The value of the forgone health benefits and resources as a result of uncertainty in the estimate can be expressed as the EVPI (Briggs et al., 2006). The EVPI is the difference between the expected net benefit with perfect information and the expected net benefit under current uncertainty. The value of the EVPI rises as the threshold increases, as a result of the increase in decision uncertainty (Briggs et al., 2006). 
The EVPI reaches its maximum when the threshold value and the expected ICER are equal; this is the point of highest decision uncertainty (Briggs et al., 2006). The population EVPI was estimated by multiplying the per-patient EVPI by the effective population, i.e. the estimated number of people with TB and HIV co-infection. According to the WHO report in 2012, there were 1.1 million cases of co-infected patients, and 320,000 deaths were recorded among this population globally (WHO, 2013: http://apps.who.int/iris/bitstream/10665/91355/1/9789241564656_eng.pdf).\n\n\nResults\n\nTable 1 summarizes the model input parameters. The results of the analysis are shown in Table 2. From the tabulated results, the expected cost of providing early ART to TB patients infected with HIV was $1372, with a QALY gain of 0.68, while the cost of delayed ART was $955, with a QALY gain of 0.62. The results demonstrate that early ART provides a higher QALY value than delayed ART, but at a higher cost. The ICER is $6775 per QALY, which suggests that early ART is not cost-effective, since this is greater than 3 × GDP per capita ($5086), the sub-Saharan Africa willingness-to-pay threshold (Evans et al., 2005; Murray et al., 2000).\n\nART, antiretroviral therapy; QALY, quality-adjusted life year.\n\nThe output of the probabilistic sensitivity analysis for 10,000 simulations is shown in Figure 2. All of the model outputs were in the northeast quadrant of the cost-effectiveness plane, suggesting that early ART is more costly and more effective than delayed ART; it is never cost saving and never has a negative impact on patient outcomes. At a threshold of $9,000, early ART had a 50% probability of being cost-effective, and if the willingness to pay for a QALY was $18,000 then early ART is likely to be at least 95% cost-effective. 
The probability that early ART was cost-effective at the WHO-CHOICE threshold (Evans et al., 2005; Murray et al., 2000) of $5086 was just 1% (Figure 3). However, if policy makers are willing to pay $10,000 per QALY, then early ART is likely to be at least 95% cost-effective.\n\nQALY, quality-adjusted life year.\n\nThe population EVPI is illustrated in Figure 4. At a cost-effectiveness threshold of $5086, the population EVPI becomes substantial (≈$5 million) and is likely to exceed the cost of additional investigation. This suggests that further research would potentially be cost-effective. At this threshold, early versus delayed ART in co-infected patients is unlikely to be cost-effective.\n\nEVPI, expected value of perfect information; QALY, quality-adjusted life year.\n\n\nDiscussion and conclusions\n\nThe decision analysis model was used to assess the cost-effectiveness of early ART in TB patients infected with HIV. According to the ICER estimates, early ART is not cost-effective from a sub-Saharan Africa health-care payer perspective, i.e. the ICER is >3 × GDP per capita ($5086) (Evans et al., 2005; Murray et al., 2000).\n\nTo the best of our knowledge, this is the first cost-effectiveness model on the optimal timing of ART in people with HIV and TB co-infection from a sub-Saharan African perspective. Our cost-effectiveness model incorporated probabilistic sensitivity analysis to simultaneously and comprehensively estimate uncertainty around model input parameters. This approach follows WHO health economists' recommendations for economic evaluation and priority setting (Baltussen et al., 2002). In addition, the decision analytical approach we used has several advantages compared with economic evaluations conducted alongside clinical trials (Ehlers et al., 2009). Evidence from multiple sources was combined, reflecting real-world evidence rather than evidence from just one trial conducted in a restricted setting. 
Such evidence can be combined and systematic sensitivity analyses performed (Ehlers et al., 2009).\n\nThe specific appropriate time to initiate ART within the early period could not be stated in the model, but it may be assumed to be within 8 weeks, as this is the time recommended by the WHO (Mfinanga et al., 2014). Only direct costs were considered in the model, in keeping with the health-care perspective. The costs, probabilities and utilities used in the model were estimated from the published literature, and probabilistic sensitivity analysis was conducted to assess the uncertainties around parameter values. The costs used appear to be general costs, which might not reflect the appropriate cost setting in sub-Saharan Africa.\n\nIn conclusion, from the perspective of the health-care payer in sub-Saharan Africa, early initiation of ART in HIV and TB co-infection cannot be regarded as cost-effective based on current information. The value of information analysis shows that further research would be worthwhile and potentially cost-effective in resolving the uncertainty about whether or not to start ART early in HIV and TB co-infection.\n\n\nData availability\n\nDataset 1: Raw data for Figure 2, Figure 3 and Figure 4 (in zipped file). doi: 10.5256/f1000research.10620.d151708 (Uthman & Uthman, 2017).",
"appendix": "Author contributions\n\n\n\nRTU and OAU were responsible for conception and design of the research. Acquisition of data was carried out by RTU and OAU. Economic modelling and statistical analysis were carried out by RTU and OAU. RTU and OAU were responsible for review, analysis and interpretation of the outcomes. RTU and OAU were responsible for development of the manuscript. RTU and OAU were responsible for critical revision of the manuscript for important intellectual content. All authors read and approved the final manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nAbimbola TO, Marston BJ, Date AA, et al.: Cost-effectiveness of tuberculosis diagnostic strategies to reduce early mortality among persons with advanced HIV infection initiating antiretroviral therapy. J Acquir Immune Defic Syndr. 2012; 60(1): e1–7. PubMed Abstract | Publisher Full Text\n\nBaltussen RM, Hutubessy RC, Evans DB, et al.: Uncertainty in cost-effectiveness analysis. Probabilistic uncertainty analysis and stochastic league tables. Int J Technol Assess Health Care. 2002; 18(1): 112–9. PubMed Abstract\n\nBlanc FX, Sok T, Laureillard D, et al.: Earlier versus later start of antiretroviral therapy in HIV-infected adults with tuberculosis. N Engl J Med. 2011; 365(16): 1471–81. PubMed Abstract | Publisher Full Text | Free Full Text\n\nBriggs A, Claxton K, Sculpher M: Decision Modelling for Health Economic Evaluation. New York: Oxford University Press; 2006. Reference Source\n\nCleary SM, McIntyre D, Boulle AM: The cost-effectiveness of antiretroviral treatment in Khayelitsha, South Africa--a primary data analysis. Cost Eff Resour Alloc. 2006; 4: 20. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nDaley CL, Small PM, Schecter GF, et al.: An outbreak of tuberculosis with accelerated progression among persons infected with the human immunodeficiency virus. An analysis using restriction-fragment-length polymorphisms. N Engl J Med. 1992; 326(4): 231–5. PubMed Abstract | Publisher Full Text\n\nDe Cock KM, Soro B, Coulibaly IM, et al.: Tuberculosis and HIV infection in sub-Saharan Africa. JAMA. 1992; 268(12): 1581–7. PubMed Abstract | Publisher Full Text\n\nEckermann S, Karnon J, Willan AR: The value of value of information: best informing research design and prioritization using current methods. Pharmacoeconomics. 2010; 28(9): 699–709. PubMed Abstract | Publisher Full Text\n\nEhlers L, Overvad K, Sørensen J, et al.: Analysis of cost effectiveness of screening Danish men aged 65 for abdominal aortic aneurysm. BMJ. 2009; 338: b2243. PubMed Abstract | Publisher Full Text | Free Full Text\n\nEsfahani K, Aspler A, Menzies D, et al.: Potential cost-effectiveness of rifampin vs. isoniazid for latent tuberculosis: implications for future clinical trials. Int J Tuberc Lung Dis. 2011; 15(10): 1340–6. PubMed Abstract | Publisher Full Text\n\nEvans DB, Edejer TT, Adam T, et al.: Methods to assess the costs and health effects of interventions for improving health in developing countries. BMJ. 2005; 331(7525): 1137–40. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFenwick E, Marshall DA, Levy AR, et al.: Using and interpreting cost-effectiveness acceptability curves: an example using data from a trial of management strategies for atrial fibrillation. BMC Health Serv Res. 2006; 6: 52. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHalpern MT, Luce BR, Brown RE, et al.: Health and economic outcomes modeling practices: a suggested framework. Value Health. 1998; 1(2): 131–47. 
PubMed Abstract | Publisher Full Text\n\nHolland DP, Sanders GD, Hamilton CD, et al.: Costs and cost-effectiveness of four treatment regimens for latent tuberculosis infection. Am J Respir Crit Care Med. 2009; 179(11): 1055–60. PubMed Abstract | Publisher Full Text | Free Full Text\n\nHoyo-ulloa I, Belaunzarán-zamudio PF, Crabtree-ramirez B, et al.: Impact of the immune reconstitution inflammatory syndrome (IRIS) on mortality and morbidity in HIV-infected patients in Mexico. Int J Infect Dis. 2011; 15(6): e408–14. PubMed Abstract | Publisher Full Text | Free Full Text\n\nManosuthi W, Chottanapand S, Thongyen S, et al.: Survival rate and risk factors of mortality among HIV/tuberculosis-coinfected patients with and without antiretroviral therapy. J Acquir Immune Defic Syndr. 2006; 43(1): 42–6. PubMed Abstract | Publisher Full Text\n\nManosuthi W, Mankatitham W, Lueangniyomkul A, et al.: Time to initiate antiretroviral therapy between 4 weeks and 12 weeks of tuberculosis treatment in HIV-infected patients: results from the TIME study. J Acquir Immune Defic Syndr. 2012; 60(4): 377–83. PubMed Abstract | Publisher Full Text\n\nMfinanga SG, Kirenga BJ, Chanda DM, et al.: Early versus delayed initiation of highly active antiretroviral therapy for HIV-positive adults with newly diagnosed pulmonary tuberculosis (TB-HAART): a prospective, international, randomised, placebo-controlled trial. Lancet Infect Dis. 2014; 14(7): 563–71. PubMed Abstract | Publisher Full Text\n\nMurray CJ, Evans DB, Acharya A, et al.: Development of WHO guidelines on generalized cost-effectiveness analysis. Health Econ. 2000; 9(3): 235–51. PubMed Abstract | Publisher Full Text\n\nSanguanwongse N, Cain KP, Suriya P, et al.: Antiretroviral therapy for HIV-infected tuberculosis patients saves lives but needs to be used more frequently in Thailand. J Acquir Immune Defic Syndr. 2008; 48(2): 181–9. 
PubMed Abstract\n\nSinha S, Shekhar RC, Singh G, et al.: Early versus delayed initiation of antiretroviral therapy for Indian HIV-Infected individuals with tuberculosis on antituberculosis treatment. BMC Infect Dis. 2012; 12: 168. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSoto J: Health economic evaluations using decision analytic modeling. Principles and practices--utilization of a checklist to their development and appraisal. Int J Technol Assess Health Care. 2002; 18(1): 94–111. PubMed Abstract\n\nUthman OA, Okwundu C, Gbenga K, et al.: Optimal Timing of Antiretroviral Therapy Initiation for HIV-Infected Adults With Newly Diagnosed Pulmonary Tuberculosis: A Systematic Review and Meta-analysis. Ann Intern Med. 2015; 163(1): 32–9. PubMed Abstract | Publisher Full Text\n\nUthman RT, Uthman OA: Dataset 1 in: Cost-effectiveness of early versus delayed antiretroviral therapy in tuberculosis patients infected with HIV in sub-Saharan Africa. F1000Research. 2017. Data Source\n\nWHO: Global Tuberculosis report. 2013. Reference Source\n\nWilkinson D, Moore DA: HIV-related tuberculosis in South Africa--clinical features and outcome. S Afr Med J. 1996; 86(1): 64–7. PubMed Abstract"
}
|
[
{
"id": "21466",
"date": "03 Apr 2017",
"name": "Simon Walusimbi",
"expertise": [],
"suggestion": "Not Approved",
"report": "Not Approved\n\nIt is unclear whether the authors realize that cost-effectiveness analysis is a tool employed by decision makers to arrive at a decision regarding resource allocation. Given that early versus delayed ART is no longer debatable, the researchers need to motivate their study in this context, i.e. decision makers have already agreed to initiate ART early in TB/HIV patients from a clinical point of view and a public health perspective, to avert new TB cases.\n\nThe study is not well motivated for economic evaluation. Issues surrounding why a cost-effectiveness study is useful are not explained.\n\nThe decision analysis tree needs to be simplified. As it is now, it is a busy figure. Secondly, the argument that the outcome of interest is short term is not true, i.e. is death from TB a short-term outcome event?\n\nWhich outcome are the researchers referring to for the 12-week period? I think, if they intend to pursue this question further, they need to employ Markov modelling as well.\n\nUnder model time horizon, the authors infer that latent TB is treated for 12 weeks. This appears to contradict WHO guidelines of 6-9 months of treatment for latent TB. It is also not clear from the outset that the authors are treating latent TB. In the beginning, I thought the study referred to TB/HIV co-infection from the disease point of view, i.e. an HIV patient with signs & symptoms of TB.\n\nLiterature for model parameters appears to be very limited. 
Authors need to utilize assumptions based on standard practice guidelines issued by, for example, the WHO. Authors need to be clear about which costs were considered.\n\nThe general approach to the study is appropriate. However, the authors should refer to this work to improve the quality of reporting: JAMA. 1996 Oct 23-30;276(16):1339-41. Recommendations for reporting cost-effectiveness analyses. Panel on Cost-Effectiveness in Health and Medicine. It is likely that the authors conclude that early versus delayed ART in TB/HIV patients is not cost-effective because of the limited time horizon, assumptions on duration of treatment, the unclear disease being treated (i.e. latent TB infection or TB disease in HIV patients) and the limited literature search. Moreover, these issues are not well discussed.\n\nThe argument that an intervention is not cost-effective based on a threshold of 3 × GDP per capita is debatable nowadays. The authors need to consult further literature on this.\n\nFinally, the discussion section is very limited and does not adequately discuss the results of the study, including the implications of the findings considering the current treatment guidelines for TB/HIV patients. This needs to be improved.",
"responses": []
},
{
"id": "21666",
"date": "26 May 2017",
"name": "Kogieleum Naidoo",
"expertise": [
"Reviewer Expertise TB-HIV Treatment"
],
"suggestion": "Not Approved",
"report": "Not Approved\n\nGeneral Comments: This study attempts to undo the WHO recommendation, and the huge body of evidence that supports early ART initiation in all HIV infected patients irrespective of CD4 count or co-infection status. The analysis presents data from a variety of sources in a conceptual framework of a decision analysis model. Whether the model incorporated the best available evidence is extremely doubtful, as the variety of sources collated is limited and the information drawn was not described in enough detail. The cost-effectiveness analysis itself is over simplified and may not be of particular use to policy- or decision-makers. The scientific merits are lacking in this particular piece of work and a lack of transparency in reporting has been noted. The extent of this study’s ability to add value to existing scientific literature is debatable. Moreover, substantial language editing is required to enable an ease of understanding by readers. There is direct repetition of information provided by the authors across multiple adjacent sections. For example, the use of “probabilities derived from published literature”. 
Numerous grammatical errors were observed, largely involving the confusion of single and plural terms – “literatures” instead of “literature”.\n\nCritique of Scientific Merit\n\nIntroduction\n\nWhile the research question and the economic importance thereof are clearly stated, the relevance of the research question is ambiguous, given that recommendations of early ART administration regardless of co-infection status, or CD4 cell count, were issued by the WHO in September 2015 (WHO, 2015). Furthermore, the question of risk vs benefit of ART timing in TB therapy has also previously been addressed. These recommendations have been ignored; hence, this study should not have been conducted on those grounds.\n\nIt should be mentioned in the introduction and abstract that the cost-effectiveness analysis is based on the synthesis and meta-analysis of other studies. This information only arises on page 4 of the article.\n\nThe alternative interventions were not described in sufficient detail to enable the reader to assess the relevance of all setting-specific interventions. More specifically, early and late ART initiation strategies were not described in time units. The authors failed to define early ART initiation and delayed ART initiation.\n\nStudy design\n\nThe use of economic evaluation, namely cost-utility analysis and cost-effectiveness analysis, has not been substantiated. 
The authors need to provide a clear justification of why they have chosen both economic evaluation methods, in the context of the research question stated.\n\nDecision model\n\nThe use of a decision analytic model is clearly articulated but perhaps incorrectly substantiated: the authors have justified the use of a decision tree model instead of a Markov model on the rationale that the outcome of interest can be assessed within a 12-week period; this is questionable.\nThe actual decision tree (Figure 1) is too cluttered, cannot be interpreted at face value, and there is no referring explanation within the text.\n\nModel perspective\n\nThe viewpoint of the analysis is not entirely clear: the authors refer to a “health-care’s payer perspective” (page 4), which could be interpreted as either the health care provider or the health care user. Their use of this perspective also needs to be substantiated within the context of the analysis.\n\nModel time horizon\n\nThe choice of time horizon is dubious, as the treatment of latent TB may vary, HIV-TB coinfected patients may be infected with either latent or active TB, and patients with resistant or extrapulmonary manifestations of TB may require lengthier treatment. The time horizon should account for the minimum and the lengthiest duration of treatment.\n\nSetting and population\n\nThe method of synthesis or meta-analysis of evidence is not described in enough detail. For example, only a brief description of the search strategy was mentioned, while the criteria for inclusion of studies in the overview were omitted (page 4).\n\nModel input parameters\n\nThe authors did not provide sufficient detail regarding the model used within the study. The actual resources costed and the quantities thereof are not mentioned.\n\nKey parameters of the model are mentioned but not discussed at length or justified. For example, why were those parameters included in the model? 
What does ART treatment cost comprise: are overheads and personnel costs included, or are these estimations limited to the cost of drugs? Did all the studies included in the cost estimation (Table 1) use similar methodology to calculate these costs? If not, this would impact the usefulness of the analysis. The country settings of the published studies used to collect evidence are not mentioned, yet should be. Even though there was no need for discounting due to the time horizon being less than a year, this should have been made explicit within the text.\n\nModel output\n\nThe time horizon of the model was too short to accurately observe the primary outcome measure of reduced mortality, and thus both the time horizon and the primary outcome measure are inappropriate.\n\nAssumptions\n\nOnce again, the assumption that TB treatment is completed within a three-month period is incomplete and inaccurate (refer to comments above). The assumption about the cost of ART treatment is not well articulated, nor is it justified or at least referenced within the text. The same can be said for the assumption of additional costs of dying patients.\n\nIt is completely unclear from the text how these assumptions (page 4) were derived or the basis of their foundation. What evidence supports these assumptions or derivations?\n\nSensitivity analyses/probabilistic sensitivity analysis\n\nNo mention was made of the actual variables chosen for the sensitivity analysis, the justification of the choice, or the ranges over which they were varied. There is no conclusion to the sensitivity analysis regarding the robustness of the results.\n\nResults\n\nIt is not clear how the estimates displayed in Table 1 generate the results in Table 2, again touching on the notion of an incomplete description of the model used. 
The results section fails to clearly differentiate between the results of the cost-utility analysis and those of the cost-effectiveness analysis.\n\nDiscussion and conclusions\n\nWhile the original research question has been answered, the presentation of results is too simplistic and has not been accompanied by appropriate qualifications and reservations. The authors’ acknowledgement of study limitations has not been clearly set out. The authors discuss items that should have been addressed in previous sections first, and which are therefore only seen by the reader at the end of the text: the recommended time of ART initiation by the WHO; the presumption of the use of general costs by the literature base which provided inputs for the model. Based on the methodological and structural issues raised above, the discussion and conclusions drawn may not carry much weight.\n\nConclusion\n\nThis study simply brought together data from a variety of sources into the conceptual framework of a decision analysis model. Whether the model incorporated the best available evidence is unclear, as the variety of sources collated was quite limited and the information drawn was not described in enough detail. The cost-effectiveness analysis itself is oversimplified and may not be of particular use to policy- or decision-makers. The scientific merits are lacking in this particular piece of work, and a lack of transparency in reporting has been noted. The extent of this study’s ability to add value to the surrounding literature base is debatable. Moreover, substantial language editing is required to enable ease of understanding by readers.\n\nIs the work clearly and accurately presented and does it cite the current literature? No\n\nIs the study design appropriate and is the work technically sound? No\n\nAre sufficient details of methods and analysis provided to allow replication by others? 
No\n\nIf applicable, is the statistical analysis and its interpretation appropriate?\nNo\n\nAre all the source data underlying the results available to ensure full reproducibility? No\n\nAre the conclusions drawn adequately supported by the results? No",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-253
|
https://f1000research.com/articles/6-245/v1
|
09 Mar 17
|
{
"type": "Review",
"title": "Recent advances in the management of dry age-related macular degeneration: A review",
"authors": [
"Francesco Bandello",
"Riccardo Sacconi",
"Lea Querques",
"Eleonora Corbelli",
"Maria Vittoria Cicinelli",
"Giuseppe Querques"
],
"abstract": "Age-related macular degeneration (AMD), the most important cause of vision loss in elderly people, is a degenerative disorder of the central retina with a multifactorial etiopathology. AMD is classified in dry AMD (d-AMD) or neovascular AMD depending on the presence of choroidal neovascularization. Currently, no therapy is approved for geographic atrophy, the late form of d-AMD, because no treatment can restore the damage of retinal pigment epithelium (RPE) or photoreceptors. For this reason, all treatment approaches in d-AMD are only likely to prevent and slow down the progression of existing atrophy. This review focuses on the management of d-AMD and especially on current data about potential targets for therapies evaluated in clinical trials. Numerous examinations are available in clinics to monitor morphological changes in the retina, RPE and choroid of d-AMD patients. Fundus autofluorescence and optical coherence tomography (OCT) are considered the most useful tools in the diagnosis and follow-up of d-AMD alterations, including the monitoring of atrophy area progression. Instead, OCT-angiography is a novel imaging tool that may add further information in patients affected by d-AMD. Several pathways, including oxidative stress, deposits of lipofuscin, chronic inflammation and choroidal blood flow insufficiency, seem to play an important role in the pathogenesis of d-AMD and represent possible targets for new therapies. A great number of treatments for d-AMD are under investigation with promising results in preliminary studies. However, only few of these drugs will enter the market, offering a therapeutic chance to patients affected by the dry form of AMD and help them to preserve a good visual acuity. Further studies with a long-term follow-up would be important to test the real safety and efficacy of drugs under investigation.",
"keywords": [
"Age-related macular degeneration",
"Anti-inflammatory agents",
"Geographic atrophy",
"dry-AMD",
"Lipofuscin",
"Neuroprotective therapy",
"Nutritional Supplements",
"Stem cell-based therapy"
],
"content": "Introduction\n\nAge-related macular degeneration (AMD) is the most important cause of vision loss in elderly people in developed countries1,2. Given that age is the primary risk factor for AMD, the prevalence and severity of this disease are likely to increase as human life expectancy increases3. The exact pathophysiological mechanisms behind AMD remain to be determined, but certainly AMD is a multifactorial pathology, in which genetic and environmental risk factors play a crucial role4. Early/intermediate stages of AMD, clinical conditions without overt functional loss, are characterized by deposition of drusen and/or retinal pigment epithelium (RPE) alterations in the macular area5. In the late stages, the disease may progress to either geographic atrophy (GA) or neovascular AMD (n-AMD). The presence of choroidal neovascularization (CNV) is the hallmark of n-AMD that distinguishes this form from non-neovascular dry AMD (d-AMD). In recent years, the introduction into clinical practice of intravitreal injections of antivascular endothelial growth factor (anti-VEGF) drugs and the development of new therapies targeting vessel maturation and remodeling have revolutionized the natural history of the disease6–8. In contrast, no approved therapy for GA is available because no treatment is able to repair damaged RPE or photoreceptors. For this reason, all treatment approaches are only likely to slow down the progression of existing atrophy.\n\nThis review focuses on the management of d-AMD and especially on current data from studies and clinical trials on drugs that have already been evaluated or are under investigation for the management of d-AMD.\n\n\nManagement of dry AMD: monitoring progression\n\nThe term “dry AMD” is commonly used to cover a range of fundus signs, ranging from drusen and pigmentary changes to patchy areas of atrophy and GA9. 
Reticular pseudodrusen represent an additional phenotype, associated with worse visual function from the early stages, and an overall higher likelihood of progression to both forms of late AMD (n-AMD and GA)10–12. All these morphological findings of the retina, RPE and choroid are monitored by fundus photography, fundus autofluorescence (FAF), optical coherence tomography (OCT), infrared reflectance (IR) and optical coherence tomography angiography (OCT-A). Fundus photography has limited value in assessing and monitoring the progression of atrophic areas. FAF is currently considered the gold standard in monitoring progression of atrophic areas; some authors suggest that FAF may also predict the rate of GA progression13,14. Structural OCT is currently extensively used in clinical practice as the standard for d-AMD diagnosis and follow-up, as it allows excellent visualization, measurement and monitoring of retinal layers, RPE, hyperreflective foci, GA areas and drusen15–18. OCT-A is a new noninvasive imaging tool able to characterize and quantify the vascular network in early, intermediate and advanced forms of d-AMD19–21. It has been demonstrated that in the early stages of the disease the choroidal layer shows dramatic alterations in its composition, with a predominance of stromal tissue over the vascular network19. Although it is still an emerging technique, OCT-A is a promising imaging device that may add further information in patients affected by d-AMD and may provide support in relating structural and functional changes.\n\n\nManagement of dry AMD: current therapeutic developments\n\nSeveral pathways have been studied and related to the pathogenesis of d-AMD, including oxidative stress, deposits of lipofuscin, chronic inflammation (including complement activation), and choroidal blood flow insufficiency22. A great number of treatments for d-AMD are under study. 
In this section, we analyze the main treatments under study by dividing therapeutic agents into six categories.\n\nIn recent years, there has been enormous interest in nutrition and its relation to health. Many researchers have demonstrated that food components are able to decrease the incidence of several diseases, including AMD. The AREDS study has shown that AREDS formula supplementation (a daily dose of 80 mg zinc oxide, 2 mg cupric oxide, 15 mg β-carotene, 500 mg vitamin C and 400 IU vitamin E) was effective in certain categories of patients affected by d-AMD, significantly reducing the risk of AMD progression23. In particular, these results were achieved in high-risk patients, using late-stage disease as the primary endpoint. Whether AREDS formula supplementation also has a beneficial effect in patients affected by the earliest stages of the disease is unknown23. These results were also not confirmed in GA patients, partly owing to the relatively small sample size of GA patients included in the study23. However, patients affected by a specific form of GA, involving the central area of the retina, were found to benefit from AREDS formula supplementation because they showed lower rates of progression to n-AMD, similarly to patients affected by a moderate stage of d-AMD24. Since β-carotene increases the incidence of lung cancer in smokers25, a follow-up study (AREDS2) evaluated the effect of β-carotene elimination from the original AREDS formula supplementation26. AREDS2 demonstrated that β-carotene elimination or lower-dose zinc did not influence the progression to late AMD26. 
However, a greater incidence of lung cancer was recorded in patients treated with AREDS formula supplementation compared with patients treated with AREDS2 formula supplementation, mostly in former smokers26.\n\nA new, active area of research is the association between vitamin supplements and genetic profile: this interest is based on evidence that a patient's genetic risk profile may influence the benefit of vitamin supplements27. The results in this specific area will be available in the coming years. However, there is general agreement that AREDS and AREDS2 supplementation have a beneficial effect, mainly through their antioxidative action, and that they will continue to play a major role in the treatment of d-AMD patients.\n\nChronic inflammation is thought to be crucial for AMD pathogenesis3,28. Currently, corticosteroids are being investigated for their antiangiogenic and anti-inflammatory effects. Iluvien (Alimera Sciences, Alpharetta, GA, USA) is a sustained-release formulation of fluocinolone acetonide, recently approved for the treatment of diabetic macular edema (DME), which could also slow the progression of GA. A total of 40 patients affected bilaterally by GA were recruited in a phase II study (NCT00695318)29. The study was completed, but the results are not yet available.\n\nThe histopathological identification of different complement complexes in patients with GA and the presence of variations in genes encoding complement proteins underpin several treatment strategies targeting the contribution of systemic complement to the pathogenesis of the disease30,31. Although different complement inhibitors are being studied to treat GA, none has been approved yet or has proven to be effective.\n\nPOT-4 (Potentia Pharmaceuticals, Louisville, KY, USA; Alcon, Hünenberg, Switzerland) is a C3 inhibitor administered by intravitreal injection, with a 6-month duration of action. 
A phase I clinical trial (NCT00473928)32 was completed without safety concerns; a phase II clinical trial will be required to establish the safety and efficacy of this drug in d-AMD.\n\nARC1905 (Zimura; Ophthotech Corp., Princeton, NJ, USA) is an aptamer targeting complement factor C5, which has completed a phase I trial (NCT00950638)33. Plans for initiating a phase II/III trial of ARC1905 are reported to be under way (http://www.ophthotech.com/product-candidates/arc1905/).\n\nAnother drug targeting C5 is Eculizumab (Soliris; Alexion Pharmaceuticals, Cheshire, CT, USA). The COMPLETE study showed no reduction of GA progression with Eculizumab, although low-luminance deficit at baseline was significantly correlated with GA progression over 6 months34,35.\n\nLampalizumab (FCFD4514S; Genentech/Roche, San Francisco, CA, USA) is a humanized monoclonal antibody targeting complement factor D in the alternative complement pathway. The Lampalizumab phase II clinical trial (NCT02288559)36 was the first study to demonstrate a positive effect in slowing growth of GA through complement inhibition. Two ongoing phase III trials, Chroma (NCT02247479)37 and Spectri (NCT02247531)38, are investigating the safety and efficacy of 10 mg Lampalizumab injections every 4 or 6 weeks vs sham injections.\n\nSirolimus (Rapamycin; MacuSight/Santen, Union City, CA, USA) is a macrolide immunosuppressive agent with antiinflammatory, antiangiogenic and antifibrotic activity. It was generally well tolerated, but no evidence of efficacy has been shown39.\n\nGlatiramer acetate (Copaxone; Teva Pharmaceuticals, Kfar-Saba, Israel) has been studied in GA for its immunomodulatory effect on T-cell differentiation. A phase I study (NCT00541333)40 demonstrated a reduction in drusen area after weekly subcutaneous Glatiramer injections over 12 weeks. A phase II/III study (NCT00466076) is underway41.\n\nIn addition, amyloid-beta may play a role in AMD progression. 
RN6G (Pfizer, New York, NY, USA) and GSK933776 (GlaxoSmithKline, Brentford, UK), humanized monoclonal antibodies targeting amyloid-beta, have been studied in phase II clinical trials (NCT01577381 and NCT01342926, respectively)42,43, but results are not available.\n\nAnother interesting area under development is neuroprotection. Two agents are under investigation: ciliary neurotrophic factor (CNTF) and Brimonidine.\n\nCNTF, a member of the IL-6 cytokine family, has been shown to protect photoreceptors in animal models44. Neurotech Pharmaceuticals (Cumberland, RI, USA) developed a well-tolerated intraocular encapsulated cell technology (ECT) which, combined with CNTF in a sustained-release platform (NT-501), releases the factor for more than one year45. A randomized, double-masked, phase II trial (NCT00447954)46 studied the 2-year outcomes of the NT-501 implant in GA patients, with promising results. In total, 51 patients were randomized and treated with high-dose or low-dose NT-501 implants or sham treatment. Zhang et al.47 showed a dose-dependent stabilization of visual acuity, defined as a loss of <15 letters on the ETDRS chart, in high-dose patients (96.3%) versus low-dose (83.3%) and sham treatment (75%) at the 12-month evaluation. The stabilization of visual acuity was associated with an increase in retinal thickness on structural OCT.\n\nBrimonidine, an α-2 adrenergic agonist frequently used in glaucoma patients, has also demonstrated a neuroprotective effect on retinal cells in animal models48,49. A multicenter, phase II, double-masked, randomized study (NCT00658619)50 evaluated the efficacy and safety of Brimonidine administered by an intravitreal biodegradable implant (Allergan, Irvine, CA, USA). The study evaluated changes in GA area and BCVA in 119 patients with bilateral GA, randomized to treatment with 200 or 400 μg of Brimonidine or sham therapy every 3 months through month 21. 
The results were inconclusive, and for this reason a second multicenter trial (NCT02087085)51 is currently ongoing. The primary outcome measure of this trial is the change in GA area from baseline to the 24-month evaluation in the 311 study eyes treated with the 400 µg Brimonidine implant or sham treatment. The estimated study completion date is March 2019.\n\nThe rationale for using visual cycle inhibitors in the treatment of GA is the documented phototoxic and proinflammatory effect of lipofuscin accumulated at the sites of RPE atrophy in patients with GA52.\n\nFenretinide (Sirion Therapeutics, Tampa, FL, USA) is a synthetic retinoid that competitively prevents the uptake of retinol by the RPE, with downregulation of the visual cycle. A phase II clinical trial (NCT00429936)53 demonstrated that 100 mg and 300 mg daily Fenretinide did not reduce the growth rate of GA, but patients seemed to tolerate it well.\n\nEmixustat (ACU-4429; Acucela, Seattle, WA, USA) is a non-retinoid visual cycle modulator that inhibits the isomerase RPE65, preventing conversion of all-trans-retinol to 11-cis-retinal in the RPE, with reduced accumulation of lipofuscin. A phase IIa trial (NCT01002950)54 showed a biological effect in GA eyes. A phase II/III study (NCT01802866)55 was completed, but results are not available.\n\nChoroidal thickness decreases with age; restoring choroidal blood flow is therefore a potential new therapeutic target in d-AMD56. The choroidal circulation plays an important role in providing nutrients to, and removing wastes from, the RPE and retinal layers57. Several vasodilators are currently under investigation in clinical trials, with the rationale that increasing choroidal blood flow may delay the progression of d-AMD.\n\nA phase 3, multicenter, controlled, randomized study (NCT00619229)58 showed that Alprostadil (UCB Pharma, Berkshire, UK) was superior to placebo in patients affected by d-AMD59. 
Patients treated with Alprostadil gained 0.94 lines of best-corrected visual acuity over placebo after 3 months, increasing to 1.51 lines at 6-month follow-up. However, further trials are strongly recommended to evaluate the long-term effects and safety of Alprostadil, and to understand the role that this drug could play in d-AMD therapy.\n\nA small pilot trial (NCT01922128)60 studied a new vasodilator called MC-1101 and demonstrated that topical administration was not only safe and well tolerated but also increased choroidal blood flow. MC-1101 has also shown anti-inflammatory and antioxidant properties. The safety and efficacy of MC-1101 will be evaluated in a randomized phase II/III trial (NCT02127463)61 that is currently ongoing and includes 60 patients affected by mild to moderate d-AMD.\n\nMoxaverine, a nonselective phosphodiesterase inhibitor, has shown contradictory results in different studies: Schmidl et al.62 reported that oral administration of Moxaverine is not effective in increasing choroidal blood flow, while Resch et al.63 and Pemp et al.64 demonstrated that intravenously administered Moxaverine increases choroidal blood flow compared with placebo. These differing results may be due to the mode of administration, but further studies are necessary to investigate the clinical efficacy of Moxaverine in patients affected by d-AMD.\n\nSildenafil (Viagra; Pfizer Inc, New York, NY, USA) is a known vasodilator, but its role in the treatment of d-AMD is not clear. Metelitsina et al.65 reported that this drug was not effective in improving foveolar choroidal blood flow in patients affected by AMD.\n\nStem cell therapy represents a promising new approach for AMD. Since evidence suggests that the RPE and photoreceptors are primarily affected in GA, their transplantation is an attractive therapeutic option66. 
Thus, human pluripotent stem cells, embryonic (hESC) or induced (iPSC), are currently being investigated in clinical trials for AMD67–69.\n\nHowever, stem-cell-based therapy requires a long-term, multidisciplinary approach. Therefore, the pros and cons of therapy must be analyzed in order to fully develop these attractive new approaches.\n\n\nConclusions\n\nGeographic atrophy, the late form of d-AMD, is a progressive disease, and no treatment is currently approved. Nevertheless, many trials are underway with the aim of finding an effective drug that prevents enlargement of the atrophy, keeps d-AMD patients from progressing to a more devastating form of the disease, and maintains good visual function. Many drugs will probably prove ineffective for AMD, and only a few will reach clinical practice.\n\nIn this review, we focused on current data about potential therapeutic targets that seem to play a crucial role in the progression of d-AMD, although the pathogenesis of the disease remains unclear. Future studies should focus on understanding all mechanisms underlying d-AMD and on developing further therapeutic approaches.\n\nSome of the drugs described here have shown potential efficacy in preliminary studies; these are probably the most likely to prove effective in d-AMD patients and to reach the market first. Nevertheless, the efficacy and safety of the drugs currently under investigation must first be demonstrated with long-term follow-up. Only then will we have therapeutics to offer our patients to help them maintain their visual acuity.",
"appendix": "Author contributions\n\n\n\nAll authors meet the following 4 criteria: substantial contributions to the conception, acquisition and interpretation of data for the work; drafting the work or revising it critically for important intellectual content; final approval of the version to be published; agreement to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.\n\n\nCompeting interests\n\n\n\nFrancesco Bandello is a consultant for: Alcon (Fort Worth, Texas, USA), Alimera Sciences (Alpharetta, Georgia, USA), Allergan Inc (Irvine, California, USA), Farmila-Thea (Clermont-Ferrand, France), Bayer Schering Pharma (Berlin, Germany), Bausch and Lomb (Rochester, New York, USA), Genentech (San Francisco, California, USA), Hoffmann-La-Roche (Basel, Switzerland), Novagali Pharma (Évry, France), Novartis (Basel, Switzerland), Sanofi-Aventis (Paris, France), Thrombogenics (Heverlee, Belgium), Zeiss (Dublin, USA).\n\nGiuseppe Querques is a consultant for: Alimera Sciences (Alpharetta, Georgia, USA), Allergan Inc (Irvine, California, USA), Heidelberg (Germany), Novartis (Basel, Switzerland), Bayer Schering Pharma (Berlin, Germany), Zeiss (Dublin, USA).\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nReferences\n\nCongdon N, O’Colmain B, Klaver CC, et al.: Causes and prevalence of visual impairment among adults in the United States. Arch Ophthalmol. 2004; 122(4): 477–485. PubMed Abstract | Publisher Full Text\n\nKlein R, Klein BE, Lee KE, et al.: Changes in visual acuity in a population over a 15-year period: the Beaver Dam Eye Study. Am J Ophthalmol. 2006; 142(4): 539–549. PubMed Abstract | Publisher Full Text\n\nGehrs KM, Jackson JR, Brown EN, et al.: Complement, age-related macular degeneration and a vision of the future. Arch Ophthalmol. 2010; 128(3): 349–358. 
PubMed Abstract | Publisher Full Text | Free Full Text\n\nLim LS, Mitchell P, Seddon JM, et al.: Age-related macular degeneration. Lancet. 2012; 379(9827): 1728–1738. PubMed Abstract | Publisher Full Text\n\nAge-Related Eye Disease Study Research Group: A randomized, placebo-controlled, clinical trial of high-dose supplementation with vitamins C and E, beta carotene, and zinc for age-related macular degeneration and vision loss: AREDS report no. 8. Arch Ophthalmol. 2001; 119(10): 1417–1436. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCouch SM, Bakri SJ: Review of combination therapies for neovascular age-related macular degeneration. Semin Ophthalmol. 2011; 26(3): 114–120. PubMed Abstract | Publisher Full Text\n\nLally DR, Gerstenblith AT, Regillo CD: Preferred therapies for neovascular age-related macular degeneration. Curr Opin Ophthalmol. 2012; 23(3): 182–188. PubMed Abstract | Publisher Full Text\n\nJo N, Mailhos C, Ju M, et al.: Inhibition of platelet-derived growth factor B signaling enhances the efficacy of anti-vascular endothelial growth factor therapy in multiple models of ocular neovascularization. Am J Pathol. 2006; 168(6): 2036–2053. PubMed Abstract | Publisher Full Text | Free Full Text\n\nFerris FL 3rd, Wilkinson CP, Bird A, et al.: Clinical classification of age-related macular degeneration. Ophthalmology. 2013; 120(4): 844–851. PubMed Abstract | Publisher Full Text\n\nArnold JJ, Sarks SH, Killingsworth MC, et al.: Reticular pseudodrusen. A risk factor in age-related maculopathy. Retina. 1995; 15(3): 183–191. PubMed Abstract | Publisher Full Text\n\nZweifel SA, Imamura Y, Spaide TC, et al.: Prevalence and significance of subretinal drusenoid deposits (reticular pseudodrusen) in age-related macular degeneration. Ophthalmology. 2010; 117(9): 1775–1781. PubMed Abstract | Publisher Full Text\n\nQuerques G, Massamba N, Srour M, et al.: Impact of reticular pseudodrusen on macular function. Retina. 2014; 34(2): 321–329. 
PubMed Abstract | Publisher Full Text\n\nHolz FG, Bellman C, Staudt S, et al.: Fundus autofluorescence and development of geographic atrophy in age-related macular degeneration. Invest Ophthalmol Vis Sci. 2001; 42(5): 1051–1056. PubMed Abstract\n\nBrader HS, Ying GS, Martin ER, et al.: New grading criteria allow for earlier detection of geographic atrophy in clinical trials. Invest Ophthalmol Vis Sci. 2011; 52(12): 9218–9225. PubMed Abstract | Publisher Full Text | Free Full Text\n\nSchuman SG, Koreishi AF, Farsiu S, et al.: Photoreceptor layer thinning over drusen in eyes with age-related macular degeneration imaged in vivo with spectral-domain optical coherence tomography. Ophthalmology. 2009; 116(3): 488–496.e2. PubMed Abstract | Publisher Full Text | Free Full Text\n\nLeuschen JN, Schuman SG, Winter KP, et al.: Spectral-domain optical coherence tomography characteristics of intermediate age-related macular degeneration. Ophthalmology. 2013; 120(1): 140–150. PubMed Abstract | Publisher Full Text | Free Full Text\n\nChristenbury JG, Folgar FA, O’Connell RV, et al.: Progression of intermediate age-related macular degeneration with proliferation and inner retinal migration of hyperreflective foci. Ophthalmology. 2013; 120(5): 1038–1045. PubMed Abstract | Publisher Full Text | Free Full Text\n\nYehoshua Z, Rosenfeld PJ, Gregori G, et al.: Progression of geographic atrophy in age-related macular degeneration imaged with spectral domain optical coherence tomography. Ophthalmology. 2011; 118(4): 679–686. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCorvi F, Souied EH, Capuano V, et al.: Choroidal structure in eyes with drusen and reticular pseudodrusen determined by binarisation of optical coherence tomographic images. Br J Ophthalmol. 2016; pii: bjophthalmol-2016-308548. 
PubMed Abstract | Publisher Full Text\n\nMoult EM, Waheed NK, Novais EA, et al.: Swept-source optical coherence tomography angiography reveals choriocapillaris alterations in eyes with nascent geographic atrophy and drusen-associated geographic atrophy. Retina. 2016; 36 Suppl 1: S2–S11. PubMed Abstract | Publisher Full Text | Free Full Text\n\nToto L, Borrelli E, Mastropasqua R, et al.: Association between outer retinal alterations and microvascular changes in intermediate stage age-related macular degeneration: an optical coherence tomography angiography study. Br J Ophthalmol. 2016; pii: bjophthalmol-2016-309160. PubMed Abstract | Publisher Full Text\n\nZarbin MA, Rosenfeld PJ: Pathway-based therapies for age-related macular degeneration: an integrated survey of emerging treatment alternatives. Retina. 2010; 30(9): 1350–1367. PubMed Abstract | Publisher Full Text\n\nAge-Related Eye Disease Study Research Group: Risk factors associated with age-related macular degeneration. A case-control study in the age-related eye disease study: Age-Related Eye Disease Study Report Number 3. Ophthalmology. 2000; 107(12): 2224–2232. PubMed Abstract | Publisher Full Text | Free Full Text\n\nClemons TE, Milton RC, Klein R, et al.: Risk factors for the incidence of Advanced Age-Related Macular Degeneration in the Age-Related Eye Disease Study (AREDS) AREDS report no. 19. Ophthalmology. 2005; 112(4): 533–539. PubMed Abstract | Publisher Full Text | Free Full Text\n\nOmenn GS, Goodman GE, Thornquist MD, et al.: Risk factors for lung cancer and for intervention effects in CARET, the Beta-Carotene and Retinol Efficacy Trial. J Natl Cancer Inst. 1996; 88(21): 1550–1559. PubMed Abstract | Publisher Full Text\n\nAge-Related Eye Disease Study 2 Research Group: Lutein + zeaxanthin and omega-3 fatty acids for age-related macular degeneration: the Age-Related Eye Disease Study 2 (AREDS2) randomized clinical trial. JAMA. 2013; 309(19): 2005–2015. 
PubMed Abstract | Publisher Full Text\n\nHo E, Beaver LM, Williams DE: Dietary factors and epigenetic regulation for prostate cancer prevention. Adv Nutr. 2011; 2(6): 497–510. PubMed Abstract | Publisher Full Text | Free Full Text\n\nAnderson DH, Mullins RF, Hageman GS, et al.: A role for local inflammation in the formation of drusen in the aging eye. Am J Ophthalmol. 2002; 134(3): 411–431. PubMed Abstract | Publisher Full Text\n\nAlimera Sciences: Fluocinolone Acetonide Intravitreal Inserts in Geographic Atrophy. In: ClinicalTrials.gov [cited 2017 Jan 6]. Reference Source\n\nHaines JL, Hauser MA, Schmidt S, et al.: Complement factor H variant increases the risk of age-related macular degeneration. Science. 2005; 308(5720): 419–421. PubMed Abstract | Publisher Full Text\n\nYates JR, Sepp T, Matharu BK, et al.: Complement C3 variant and the risk of age-related macular degeneration. N Engl J Med. 2007; 357(6): 553–561. PubMed Abstract | Publisher Full Text\n\nPotentia Pharmaceuticals, Inc: Safety of Intravitreal POT-4 Therapy for Patients With Neovascular Age-Related Macular Degeneration (AMD) (ASaP). In: ClinicalTrials.gov [cited 2017 Jan 6]. Reference Source\n\nOphthotech Corporation: A Study of ARC1905 (Anti-C5 Aptamer) in Subjects With Dry Age-related Macular Degeneration. In: ClinicalTrials.gov [cited 2017 Jan 6]. Reference Source\n\nYehoshua Z, de Amorim Garcia Filho CA, Nunes RP, et al.: Systemic complement inhibition with eculizumab for geographic atrophy in age-related macular degeneration: the COMPLETE study. Ophthalmology. 2014; 121(3): 693–701. PubMed Abstract | Publisher Full Text | Free Full Text\n\nGarcia Filho CA, Yehoshua Z, Gregori G, et al.: Change in drusen volume as a novel clinical trial endpoint for the study of complement inhibition in age-related macular degeneration. Ophthalmic Surg Lasers Imaging Retina. 2014; 45(1): 18–31. 
PubMed Abstract | Publisher Full Text\n\nGenentech, Inc: A Study of Lampalizumab Intravitreal Injections Administered Every Two Weeks or Every Four Weeks to Participants With Geographic Atrophy. In: ClinicalTrials.gov [cited 2017 Jan 24]. Reference Source\n\nRoche HL: A Study Investigating the Efficacy and Safety of Lampalizumab Intravitreal Injections in Participants With Geographic Atrophy Secondary to Age-Related Macular Degeneration (CHROMA). In: ClinicalTrials.gov [cited 2017 Jan 6]. Reference Source\n\nRoche HL: A Study Investigating the Safety and Efficacy of Lampalizumab Intravitreal Injections in Patients With Geographic Atrophy Secondary to Age-Related Macular Degeneration (SPECTRI). In: ClinicalTrials.gov [cited 2017 Jan 6]. Reference Source\n\nWong WT, Dresner S, Forooghian F, et al.: Treatment of geographic atrophy with subconjunctival sirolimus: results of a phase I/II clinical trial. Invest Ophthalmol Vis Sci. 2013; 54(4): 2941–2950. PubMed Abstract | Publisher Full Text | Free Full Text\n\nKaplan Medical Center: Weekly Vaccination With Copaxone as a Potential Therapy for Dry Age-related Macular Degeneration. In: ClinicalTrials.gov [cited 2017 Jan 24]. Reference Source\n\nKaplan Medical Center: Copaxone in Age Related Macular Degeneration. In: ClinicalTrials.gov [cited 2017 Jan 6]. Reference Source\n\nPfizer: Efficacy, Safety And Tolerability Study Of RN6G In Subjects With Geographic Atrophy Secondary to Age-related Macular Degeneration. In: ClinicalTrials.gov [cited 2017 Jan 6]. Reference Source\n\nGlaxoSmithKline: Clinical Study to Investigate Safety and Efficacy of GSK933776 in Adult Patients With Geographic Atrophy Secondary to Age-related Macular Degeneration. In: ClinicalTrials.gov [cited 2017 Jan 6]. Reference Source\n\nTao W, Wen R, Goddard MB, et al.: Encapsulated cell-based delivery of CNTF reduces photoreceptor degeneration in animal models of retinitis pigmentosa. Invest Ophthalmol Vis Sci. 2002; 43(10): 3292–3298. 
PubMed Abstract\n\nKauper K, McGovern C, Sherman S, et al.: Two-year intraocular delivery of ciliary neurotrophic factor by encapsulated cell technology implants in patients with chronic retinal degenerative diseases. Invest Ophthalmol Vis Sci. 2012; 53(12): 7484–7491. PubMed Abstract | Publisher Full Text\n\nNeurotech Pharmaceuticals: A Study of an Encapsulated Cell Technology (ECT) Implant for Patients With Atrophic Macular Degeneration. In: ClinicalTrials.gov [cited 2017 Jan 6]. Reference Source\n\nZhang K, Hopkins JJ, Heier JS, et al.: Ciliary neurotrophic factor delivered by encapsulated cell intraocular implants for treatment of geographic atrophy in age-related macular degeneration. Proc Natl Acad Sci U S A. 2011; 108(15): 6241–6245. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWoldeMussie E, Ruiz G, Wijono M, et al.: Neuroprotection of retinal ganglion cells by brimonidine in rats with laser-induced chronic ocular hypertension. Invest Ophthalmol Vis Sci. 2001; 42(12): 2849–2855. PubMed Abstract\n\nWheeler L, WoldeMussie E, Lai R: Role of alpha-2 agonists in neuroprotection. Surv Ophthalmol. 2003; 48 Suppl 1: S47–S51. PubMed Abstract | Publisher Full Text\n\nAllergan: Safety and Efficacy of Brimonidine Intravitreal Implant in Patients With Geographic Atrophy Due to Age-related Macular Degeneration (AMD). In: ClinicalTrials.gov [cited 2017 Jan 6]. Reference Source\n\nAllergan: A Safety and Efficacy Study of Brimonidine Intravitreal Implant in Geographic Atrophy Secondary to Age-related Macular Degeneration (BEACON). In: ClinicalTrials.gov [cited 2017 Jan 6]. Reference Source\n\nBrunk UT, Terman A: Lipofuscin: mechanisms of age-related accumulation and influence on cell function. Free Radic Biol Med. 2002; 33(5): 611–619. PubMed Abstract | Publisher Full Text\n\nSirion Therapeutics, Inc: Study of Fenretinide in the Treatment of Geographic Atrophy Associated With Dry Age-Related Macular Degeneration. In: ClinicalTrials.gov [cited 2017 Jan 6]. 
Reference Source\n\nAcucela Inc: Study of the Safety, Tolerability, Pharmacokinetics and Pharmacodynamics of ACU-4429 in Subjects With Geographic Atrophy. In: ClinicalTrials.gov [cited 2017 Jan 6]. Reference Source\n\nAcucela Inc: Safety and Efficacy Assessment Treatment Trials of Emixustat Hydrochloride (SEATTLE). In: ClinicalTrials.gov [cited 2017 Jan 6]. Reference Source\n\nGrunwald JE, Metelitsina TI, Dupont JC, et al.: Reduced foveolar choroidal blood flow in eyes with increasing AMD severity. Invest Ophthalmol Vis Sci. 2005; 46(3): 1033–1038. PubMed Abstract | Publisher Full Text\n\nBooij JC, Baas DC, Beisekeeva J, et al.: The dynamic nature of Bruch’s membrane. Prog Retin Eye Res. 2010; 29(1): 1–18. PubMed Abstract | Publisher Full Text\n\nUCB Pharma: Alprostadil in Maculopathy Study (AIMS). In: ClinicalTrials.gov [cited 2017 Jan 24]. Reference Source\n\nAugustin AJ, Diehm C, Grieger F, et al.: Alprostadil infusion in patients with dry age related macular degeneration: a randomized controlled clinical trial. Expert Opin Investig Drugs. 2013; 22(7): 803–812. PubMed Abstract | Publisher Full Text\n\nMacuCLEAR, Inc: Safety Study of a Topical Treatment for Dry Age Related Macular Degeneration. In: ClinicalTrials.gov [cited 2017 Jan 6]. Reference Source\n\nMacuCLEAR, Inc: Phase II/III Study of the Efficacy and Safety of MacuCLEAR MC-1101 in Treating Dry Age-Related Macular Degeneration (McCP2/3). In: ClinicalTrials.gov [cited 2017 Jan 6]. Reference Source\n\nSchmidl D, Pemp B, Lasta M, et al.: Effects of orally administered moxaverine on ocular blood flow in healthy subjects. Graefes Arch Clin Exp Ophthalmol. 2013; 251(2): 515–520. PubMed Abstract | Publisher Full Text\n\nResch H, Weigert G, Karl K, et al.: Effect of systemic moxaverine on ocular blood flow in humans. Acta Ophthalmol. 2009; 87(7): 731–735. 
PubMed Abstract | Publisher Full Text\n\nPemp B, Garhofer G, Lasta M, et al.: The effects of moxaverine on ocular blood flow in patients with age-related macular degeneration or primary open angle glaucoma and in healthy control subjects. Acta Ophthalmol. 2012; 90(2): 139–145. PubMed Abstract | Publisher Full Text\n\nMetelitsina TI, Grunwald JE, DuPont JC, et al.: Effect of Viagra on the foveolar choroidal circulation of AMD patients. Exp Eye Res. 2005; 81(2): 159–164. PubMed Abstract | Publisher Full Text\n\nBhutto I, Lutty G: Understanding age-related macular degeneration (AMD): relationships between the photoreceptor/retinal pigment epithelium/Bruch’s membrane/choriocapillaris complex. Mol Aspects Med. 2012; 33(4): 295–317. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCarr AJ, Vugler AA, Hikita ST, et al.: Protective effects of human iPS-derived retinal pigment epithelium cell transplantation in the retinal dystrophic rat. PLoS One. 2009; 4(12): e8152. PubMed Abstract | Publisher Full Text | Free Full Text\n\nCho MS, Kim SJ, Ku SY, et al.: Generation of retinal pigment epithelial cells from human embryonic stem cell-derived spherical neural masses. Stem Cell Res. 2012; 9(2): 101–109. PubMed Abstract | Publisher Full Text\n\nBuchholz DE, Hikita ST, Rowland TJ, et al.: Derivation of functional retinal pigmented epithelium from induced pluripotent stem cells. Stem Cells. 2009; 27(10): 2427–2434. PubMed Abstract | Publisher Full Text"
}
|
[
{
"id": "20863",
"date": "22 Mar 2017",
"name": "Igor Kozak",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThe authors present a concise and yet comprehensive review on current therapies and ongoing clinical trials for non-exudative (dry) age-related macular degeneration. The authors list and comment on current trials registered at government website. As such, this represents a useful update for ophthalmologists in this area. Few minor comments for the authors:\n\n1.\n\nI would suggest to expand the second paragraph dealing with associations between vitamin supplements and genetic profile under Nutritional Supplements section. While some studies suggest genotypic influence on clinical response to vitamin supplementation, there are opposing studies as well1. Nice recent review by Rowan & Taylor (2016)2 is also worth mentioning.\n2.\n\nApart from therapies as part of registered clinical trials there are few investigation running independently such as testing oral trimetazine (anti-ischemic agent with cytoprotective effects) by Institut de Recherches Internationales Servier; neuroprotective agent tandospirone by Alcon Research; oral crocetin; oral curcumin; or intravitreal LFG316 by Novartis (NCT01527500) and oral doxycycline (Oracea)(NCT01782989).\n3.\n\nFor section on stem cell-based therapy I would update literature on recent (even though not fully convincing) study by Schwartz SD et al (2016)3 or the same author in Lancet 2015.",
"responses": []
},
{
"id": "22527",
"date": "09 May 2017",
"name": "Kamron N. Khan",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThank you for asking me to review this work. This is a clearly set out and easy to read manuscript, covering all major avenues of current research in non-neovascular AMD. I only have a couple of minor comments.\nPerhaps the title could be altered to better reflect the manuscript content: “Therapeutic strategies under current investigation for dry age-related macular degeneration”?\n\nI think the manuscript could be improved by expanding the section on “stem cells”.\n\nA couple more trials may be worthy of mention (a) Oracea Phase 2/3 trial and (b) Drusen clearance with laser - Laser Intervention in Early Age-Related Macular Degeneration Study (LEAD).\n\nIs the topic of the review discussed comprehensively in the context of the current literature? Yes\n\nAre all factual statements correct and adequately supported by citations? Yes\n\nIs the review written in accessible language? Yes\n\nAre the conclusions drawn appropriate in the context of the current research literature? Yes",
"responses": []
}
] | 1
|
https://f1000research.com/articles/6-245
|
https://f1000research.com/articles/5-2725/v1
|
21 Nov 16
|
{
"type": "Research Note",
"title": "Electroantennogram response of the parasitoid, Microplitis croceipes to host-related odors: The discrepancy between relative abundance and level of antennal responses to volatile compound",
"authors": [
"Tolulope Morawo",
"Matthew Burrows",
"Henry Fadamiro"
],
"abstract": "Herbivores emit volatile organic compounds (VOCs) after feeding on plants. Parasitoids exploit these VOCs as odor cues to locate their hosts. In nature, host-related odors are emitted as blends of various compounds occurring in different proportions, and minor blend components can sometimes have profound effects on parasitoid responses. In a previous related study, we identified and quantified VOCs emitted by cotton plant-fed Heliothis virescens (Lepidoptera: Noctuidae) larvae, an herbivore host of the parasitoid Microplitis croceipes (Hymenoptera: Braconidae). In the present study, the olfactory response of female M. croceipes to synthetic versions of 15 previously identified compounds was tested in electroantennogram (EAG) bioassays. Using M. croceipes as a model species, we further asked the question: does the relative abundance of a volatile compound match the level of antennal response in parasitoids? Female M. croceipes showed varying EAG responses to test compounds, indicating different levels of bioactivity in the insect antenna. Eight compounds, including decanal, 1-octen-3-ol, 3-octanone, 2-ethylhexanol, tridecane, tetradecane, α-farnesene and bisabolene, elicited EAG responses above or equal to the 50th percentile rank of all responses. Interestingly, decanal, which represented only 1% of the total amount of odors emitted by cotton-fed hosts, elicited the highest (0.82 mV) EAG response in parasitoids. On the other hand, (E)-β-caryophyllene, the most abundant (29%) blend component, elicited a relatively low (0.17 mV) EAG response. The results suggest that EAG response to host-related volatiles in parasitoids is probably more influenced by the ecological relevance or functional role of the compound in the blend, rather than its relative abundance.",
"keywords": [
"Braconidae",
"endoparasitoid",
"Heliothis virescens",
"cotton plant"
],
"content": "Introduction\n\nInfested plants emit volatile organic compounds (VOCs) as an indirect defense against herbivore damage1,2. Similarly, herbivores emit plant-associated VOCs that can guide parasitoids to their hosts3. However, such odor cues are usually released as a blend of various compounds in nature. Consequently, differentiating useful cues from ecologically irrelevant odors can be challenging for foraging parasitoids. Therefore, it is expected that antennal sensitivity of parasitoids will vary in response to different compounds. Antenna sensitivity in insects can be measured with electroantennogram (EAG) recording. EAG measures the summed activity of olfactory receptor neurons in the antenna and forms the basis for the level of biological activity elicited by various compounds4.\n\nMicroplitis croceipes (Hymenoptera: Braconidae) is an endoparasitoid of Heliothis virescens (Lepidoptera: Noctuidae), which is an important pest of cotton plant. In a previous related study5, 15 compounds in the volatile blend emitted by cotton-fed H. virescens larvae that attracted M. croceipes were identified and quantified using gas chromatography-mass spectrometry (GC/MS). The compounds in the attractive blend occurred in varying proportions (Table 1). However, the relative abundance of a blend component does not necessarily indicate its relevance to resource location in insects6. In the present study, olfactory response of M. croceipes to synthetic versions of 15 previously identified compounds was tested in EAG bioassays. 
By comparing EAG results from the present study with GC/MS analyses from a previous study5, we demonstrate the discrepancy between the relative abundance of a volatile blend component and the level of antennal response in parasitoids.\n\nThis table was modified from Morawo and Fadamiro (doi: 10.1007/s10886-016-0779-7)5, with permission from the authors.\n\n1In order of elution during gas chromatography.\n\n2Compounds that were not tested in the present study.\n\n\nMethods and materials\n\nMicroplitis croceipes was reared on 2nd-3rd instar larvae of H. virescens and adult wasps were supplied with 10% sugar water upon emergence in our laboratory at Entomology & Plant Pathology Department, Auburn University. For more details about the rearing protocol, see Lewis and Burton7. Female parasitoids used for EAG bioassays were 2–3 days old, presumed mated (after at least 24 h of interaction with males), and inexperienced with oviposition or plant material. The general rearing conditions for all insects were 25±1 °C, 75±5 % relative humidity and 14:10 h (light:dark) photoperiod.\n\nEAG responses of M. croceipes to 15 synthetic compounds (Table 1), previously identified in the headspace of cotton-fed H. virescens larvae5, were recorded according to the method described by Ngumbi et al.8 with modifications. Two compounds, α-bergamotene (not commercially available) and an unidentified compound reported in the previous study5, were not tested in the present study. α-Pinene, β-pinene, myrcene, limonene, 2-ethylhexanol, tridecane, (E)-β-caryophyllene, α-humulene, α-farnesene and α-bisabolol with purity 95–99% were purchased from Sigma-Aldrich® (St. Louis, MO, USA). 1-Octen-3-ol, 3-octanone, decanal, tetradecane and bisabolene with purity 96–99% were purchased from Alfa Aesar® (Ward Hill, MA, USA). Test compounds were formulated in hexane at 0.1 μg/μl and delivered onto Whatman® No. 1 filter paper strips at an optimum dose of 1 µg. 
Impregnated filter papers were placed inside glass Pasteur pipettes and the stimulus was introduced as 0.2 s odor puffs. A glass capillary reference electrode filled with 0.1 M KCl was attached to the back of the wasp head, and a similar recording electrode was connected to the excised tip of the wasp antenna. The analog signal was detected through a probe and processed with a data acquisition controller (IDAC-4, Syntech, The Netherlands). Data were assessed using EAG 2000 software (Syntech, The Netherlands). EAG responses to the 15 compounds and control (hexane) were sequentially recorded for each of 15 insect replicates. Each compound was assigned positions 1 through 15 across replicates to minimize positional bias.\n\nDifferences in absolute EAG values (EAG response to compound minus response to solvent control) of synthetic compounds were analyzed using the Kruskal-Wallis test, followed by Sidak’s multiple comparison test. The relationship between EAG response and relative abundance was analyzed with the Proc Corr (correlation) procedure in SAS. All analyses were performed in SAS v9.2 (SAS Institute Inc., Cary, NC, USA) at the P=0.05 level of significance.\n\n\nResults\n\nFemale M. croceipes showed varying EAG responses to test compounds (range: 0.05–0.82 mV; Figure 1). Decanal elicited the highest EAG response (0.82 mV; χ2 = 134.13; df = 14; P<0.0001), while β-pinene elicited the lowest response (0.05 mV) in parasitoids. Decanal, tridecane, 3-octanone, 2-ethylhexanol, 1-octen-3-ol, bisabolene, tetradecane and α-farnesene elicited EAG responses ≥0.22 mV (50th percentile rank). Four of the top bioactive compounds, decanal, 3-octanone, 1-octen-3-ol and 2-ethylhexanol, were emitted in quantities ≤2.2% of the total blend (Table 1). On the other hand, (E)-β-caryophyllene, the most abundant (29.2% of total blend) component, elicited a relatively low EAG response (0.17 mV) in parasitoids (Figure 1). 
However, the negative correlation between EAG response and relative abundance of compounds was not statistically significant (r = -0.33; N = 15; P=0.23).\n\nMean absolute electroantennogram (EAG) responses (mV ± SEM; N = 15) of female Microplitis croceipes to 15 volatile compounds identified in the headspace of cotton-fed Heliothis virescens larvae5. Synthetic compounds were formulated in hexane (solvent control) and tested at an optimum dose of 1 μg. The orange line indicates the arbitrary response threshold of 0.22 mV (50th percentile rank). Bars with no letters in common are significantly different (P<0.05; Kruskal-Wallis test followed by Sidak’s multiple comparison test).\n\n\nDiscussion\n\nEAG responses of Microplitis croceipes in the present study indicated variation in biological activity elicited by test compounds at the peripheral level, and revealed a discrepancy between relative abundance and level of antennal responses in parasitoids. The high EAG response elicited by decanal in M. croceipes agrees with previous reports on olfactory responses of the parasitoids, Microplitis mediator9 and Bracon hebetor10. Furthermore, decanal is a key attractant for host-seeking M. croceipes5. Although compounds are emitted in different quantities in natural blends, minor components can have a profound effect on resource location in parasitoids6,11. Interestingly, decanal constituted only 1% of the total blend emitted by cotton-fed H. virescens5, but elicited the highest EAG response in M. croceipes, supporting the “little peaks-big effects” concept6. On the other hand, (E)-β-caryophyllene, the most abundant blend component, elicited a relatively low EAG response in parasitoids.\n\nTherefore, it is more likely that the ecological relevance of a compound, rather than its relative abundance, determines the level of olfactory response in foraging insects. 
For instance, small amounts of isothiocyanates in the volatile blend of brassica plants serve as host location cues for parasitoids of brassica herbivores12,13. More importantly, blend components act in concert to provide parasitoids with complete information14. Consequently, certain compounds function as background odors to enhance detectability (olfactory contrast) of other attractive components in a blend12,15. It is possible that (E)-β-caryophyllene serves as a background odor in the blend emitted by cotton-fed H. virescens. Finally, it should be noted that while EAG measures the level of bioactivity, behavioral bioassays are usually needed to establish the functional role of various compounds.\n\n\nData availability\n\nDataset 1. EAG responses of Microplitis croceipes to synthetic compounds and correlation with relative abundance of compounds. Electroantennogram (EAG) data shows actual EAG response readouts to different compounds for 15 insect replicates. Absolute EAG value for each compound in a replicate can be obtained by deducting the average of two controls (Control 1 and Control 2) from the actual EAG values. Correlation data shows relative abundance of 15 blend components and their corresponding mean absolute EAG values. Details of data analyses were indicated in the main text and Figure 1 legend. Raw data behind the representation shown in Figure 1 and analyses referred to in the Results section are included. DOI: 10.5256/f1000research.10104.d14344616.",
"appendix": "Author contributions\n\n\n\nTM and HF conceived the study. TM designed the experiment. TM and MB carried out the research. All authors contributed to writing and revision of the manuscript.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThe author(s) declared that no grants were involved in supporting this work.\n\n\nAcknowledgements\n\nWe thank Brandice Kopishke for rearing the insects used for this study.\n\n\nReferences\n\nDe Moraes CM, Lewis WJ, Paré PW, et al.: Herbivore-infested plants selectively attract parasitoids. Nature. 1998; 393: 570–573. Publisher Full Text\n\nTurlings TC, Wäckers F: Recruitment of predators and parasitoids by herbivore-injured plants. Adv Insect Chem Ecol. 2004; 2: 21–75. Reference Source\n\nde Rijk M, Krijn M, Jenniskens W, et al.: Flexible parasitoid behaviour overcomes constraint resulting from position of host and nonhost herbivores. Anim Behav. 2016; 113: 125–135. Publisher Full Text\n\nPark KC, Ochieng SA, Zhu JW, et al.: Odor discrimination using insect electroantennogram responses from an insect antennal array. Chem Senses. 2002; 27(4): 343–352. PubMed Abstract | Publisher Full Text\n\nMorawo T, Fadamiro H: Identification of key plant-associated volatiles emitted by Heliothis virescens larvae that attract the parasitoid, Microplitis croceipes: implications for parasitoid perception of odor blends. J Chem Ecol. 2016; 1–10. PubMed Abstract | Publisher Full Text\n\nClavijo McCormick A, Gershenzon J, Unsicker SB: Little peaks with big effects: establishing the role of minor plant volatiles in plant-insect interactions. Plant Cell Environ. 2014; 37(8): 1836–1844. PubMed Abstract | Publisher Full Text\n\nLewis WJ, Burton RL: Rearing Microplitis croceipes in the laboratory with Heliothis zea as host. J Econ Entomol. 1970; 63(2): 656–658. 
Publisher Full Text\n\nNgumbi E, Chen L, Fadamiro H: Electroantennogram (EAG) responses of Microplitis croceipes and Cotesia marginiventris and their lepidopteran hosts to a wide array of odor stimuli: correlation between EAG response and degree of host specificity? J Insect Physiol. 2010; 56(9): 1260–1268. PubMed Abstract | Publisher Full Text\n\nYu H, Zhang Y, Wyckhuys KA, et al.: Electrophysiological and behavioral responses of Microplitis mediator (Hymenoptera: Braconidae) to caterpillar-induced volatiles from cotton. Environ Entomol. 2010; 39(2): 600–609. PubMed Abstract | Publisher Full Text\n\nDweck HK, Svensson GP, Gündüz EA, et al.: Kairomonal response of the parasitoid, Bracon hebetor Say, to the male-produced sex pheromone of its host, the greater waxmoth, Galleria mellonella (L.). J Chem Ecol. 2010; 36(2): 171–178. PubMed Abstract | Publisher Full Text\n\nBeyaert I, Wäschke N, Scholz A, et al.: Relevance of resource-indicating key volatiles and habitat odour for insect orientation. Anim Behav. 2010; 79(5): 1077–1086. Publisher Full Text\n\nWajnberg É, Bernstein C, van Alphen J: Behavioral ecology of insect parasitoids: from theoretical approaches to field applications. Wiley-Blackwell, Malden, MA. 2008. Publisher Full Text\n\nNajar-Rodriguez AJ, Friedli M, Klaiber J, et al.: Aphid-deprivation from Brassica plants results in increased isothiocyanate release and parasitoid attraction. Chemoecology. 2015; 25(6): 303–311. Publisher Full Text\n\nvan Wijk M, de Bruijn PJ, Sabelis MW: Complex odor from plants under attack: herbivore’s enemies react to the whole, not its parts. PLoS One. 2011; 6(7): e21742. PubMed Abstract | Publisher Full Text | Free Full Text\n\nMumm R, Hilker M: The significance of background odour for an egg parasitoid to detect plants with host eggs. Chem Senses. 2005; 30(4): 337–343. 
PubMed Abstract | Publisher Full Text\n\nMorawo T, Burrows M, Fadamiro H: Dataset 1 in: Electroantennogram response of the parasitoid, Microplitis croceipes to host-related odors: The discrepancy between relative abundance and level of antennal responses to volatile compound. F1000Research. 2016. Data Source"
}
|
[
{
"id": "19376",
"date": "16 Jan 2017",
"name": "Amanuel Tamiru",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\ninfo_outline\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the papers academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nGeneral comments:\nThis study examined the relationship between the relative abundance of volatile organic compounds (VOCs) emitted by Heliothis virescens (Lepidoptera: Noctuidae) and the level of antennal response by the larval parasitoid, Microplitis croceipes (Hymenoptera: Braconidae). The study builds on the previous work by Morawo & Fadamiro (2016) which identified key plant-associated volatiles emitted by cotton-fed H. virescens larvae that attract the parasitoid, M. croceipes. Here, the synthetic versions of 15 previously identified plant-associated volatiles emitted by H. virescens larvae were tested in electroantennogram (EAG) bioassays. The authors conclude that the level of the parasitoid’s antennal response is not directly related to the relative abundance of the volatile components, but rather is influenced by the ecological relevance or functional role of the compound in the blend. It should be noted that identifying volatile compounds that insects detect through EAG is an initial step in understanding olfactory stimuli responsible for modulating insect behavior. Hence, behavioral studies with the identified EAG active compounds should be carried out, individually and as blends, to determine responses of parasitoids to the volatile compounds and explore their ecological or functional role. 
Though the authors intended to examine the relationship between the relative abundance of the VOCs and the level of antennal responses, the different levels of the volatile compounds used in the study and their corresponding EAG responses are not shown. Rather, only one dose (1 µg) is used for all volatile compounds tested. The study would have been very informative if different levels of the test compounds had been tested and their corresponding EAG responses recorded.\n\nSpecific comments\n\nTitle: The title is appropriate for the content of the article; however, it can be made more concise. E.g. the first part of the existing title would be adequate, i.e. ‘Electroantennogram response of the parasitoid, Microplitis croceipes to host-related odors’.\n\nIntroduction: The introduction clearly states the objective of the study. However, adequate background is missing in the area of herbivore emitted plant associated VOCs. Are these plant derived volatile compounds emitted by the herbivore itself (after feeding) or are they adsorbed into the herbivore body during the feeding process (e.g. from frass)?\n\nMethods and materials: The authors followed standard insect rearing (Lewis & Burton, 1970) and electroantennogram recording (Ngumbi et al., 2010) and data analysis procedures. However, the number of insect replicates used in the study is not clear. Was a single insect antennal preparation used to record EAG responses to the 15 compounds and control (hexane)? Was the abundance of volatile components varied based on the corresponding quantities in the natural headspace samples? The latter has not been specified in the methodology except mentioning ‘Test compounds were formulated in hexane at 0.1 μg/μl and delivered onto Whatman®No.1 filter paper strips at an optimum dose of 1 µg’.\n\nResults: The results show that female M. croceipes showed varying EAG responses to test compounds. Notably, decanal which constituted only 1% of the total blend emitted by cotton-fed H. 
virescens elicited the highest EAG response (0.82 mV), while (E)-β-caryophyllene, the most abundant component (29.2% of total blend), elicited a relatively low EAG response (0.17 mV) in the parasitoid antenna. This is possible as earlier reports also indicated that compounds with the highest EAG response may not necessarily be those emitted in the largest quantity (Tamiru et al. 20151). Given the fact that the main research question of this study is to examine the relationship between relative abundance and level of antennal responses, it would have been more informative to test different concentrations/amounts of the test compounds and their corresponding EAG response to reliably measure statistical significance of correlations between relative abundance of the compounds and level of EAG response.\n\nDiscussion: The authors discuss the ecological relevance of compounds. However, electrophysiological responses do not necessarily mean that a behavioral response will occur; rather, they elucidate potential behaviorally relevant compounds. To fully explore the significance of the results, bioassays need to be carried out with identified compounds, both individually and as a blend, to determine the kind of behavioral response the volatile compounds trigger in the parasitoid and their ecological/functional role. I suggest the authors refer to, and perhaps include in their discussion, the work by Tamiru et al. (2015)1 which demonstrated the combined use of electrophysiological and behavioral studies for a better understanding of odor-mediated behavior in insects.",
"responses": []
},
{
"id": "19925",
"date": "06 Feb 2017",
"name": "Yonggen Lou",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nThe manuscript reported the relationship between the relative abundance of volatile chemicals emitted from Heliothis virescens larvae and the antennal response by the larval parasitoid of the herbivore, Microplitis croceipes. Using electroantennogram bioassays, the authors found that the level of the parasitoid’s EAG response is not related to the relative abundance of the volatile components in the blend. Since a chemical that elicits an EAG response in an insect does not necessarily elicit a behavioral response, and a chemical eliciting a bigger EAG response does not necessarily elicit a stronger behavioral response, these experiments seem limited in scope, and the novelty and significance of this study seem limited.\n\nIntroduction: In a previous study, the authors have investigated the attractiveness of these individual volatile compounds to the parasitoid, thus it would be better to introduce these results briefly in the Introduction section.\n\nMethods: It has been well documented that an insect has different responsive ranges to different chemicals. Thus, only using one concentration of chemicals is not enough.\n\nDiscussion: Based on the previous results reported by the authors1, 8 chemicals in the blend, including decanal and (E)-β-caryophyllene, had a role in attraction of the parasitoid. The authors should give a discussion based on the above results.",
"responses": []
},
{
"id": "19230",
"date": "10 Feb 2017",
"name": "Torsten Meiners",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThe research note of Tolulope et al. reports EAG responses of the parasitic wasp Microplitis croceipes and relates the antennal responses to the relative abundance of the compounds in the odour bouquet of the larvae.\nWhen looking at the relative abundance of a compound you might need to consider all environmentally occurring contexts and compounds. Microplitis is orientating to the host habitat first, then it locates the host plant in the habitat, and then the host. That means that plant odours play an important role (see Li et al.1) and not only larval odours. In my opinion you have to rank the importance of odours according to all environmental contexts and consider the abundance of compounds in cotton or also in other relevant host (and habitat) plants. Heliothis feeds on more than 100 plant species, thus M. croceipes is confronted with the odour of these plants and with the odour of larvae having fed on these plants. Li et al.1 have performed antennal studies with M. croceipes and cotton plant compounds and found similar responses to similar compounds as in your study. Heptanal was the most stimulating tested compound while caryophyllene was less stimulating. The discrepancy you indicate in the title might be easily explained when including habitat and host plant volatiles.\nThe wasps in your study had the experience of living and hatching from larvae having fed on cotton. Thus they might have experience with the compounds you present in Table 1. 
It has been shown that M. croceipes can learn almost any compound (e.g. Olson et al.2).\nIn your discussion you point out that it might be the ecological relevance of a compound that determines the antennal response – however, Park et al.3 showed in electroantennogram studies that the antenna of M. croceipes is also responding to anthropogenic compounds with high sensitivity. Thus, the antennal response might not reflect the ecological relevance. This might be more reflected in the behavioural response, as you indicate in your discussion. And this might be fine-tuned by learning in the case of a parasitoid with a polyphagous host.\n\nMinor points:\n\nMethods: Why is 1 µg an optimal dose?\n\nData analyses: Differences …were analysed",
"responses": []
},
{
"id": "19829",
"date": "13 Feb 2017",
"name": "Feng Liu",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nGeneral comments\nThis study investigated the EAG response of the parasitoid Microplitis croceipes to the volatiles emitted from its host, the cotton plant-fed Heliothis virescens larvae. 15 compounds were tested at the dose of 1 μg on the female antennae of M. croceipes. Different levels of bioactivity of M. croceipes to these compounds were presented and a discrepancy was observed between the relative abundance and the level of antennal responses in parasitoids. In the end, the authors suggested that the ecological relevance, rather than the relative abundance, of these compounds weighed more on the EAG responses of M. croceipes. However, since there is no behavior bioassay showing that these host-released compounds are truly important in the host-seeking process, it is hard to tell the ecological relevance of those compounds (like decanal) with strong EAG responses. A more cautious conclusion would be appropriate.\nSpecific comments\nIntroduction: Please give more information about why the authors specifically stated that the Heliothis virescens larvae were cotton-fed in the lab. Will different food sources affect the compounds released from the insect bodies?\nMethod: Please justify why mass concentration was used in preparing the compounds. Different chemicals possess various molecular weights and vapor pressures. Therefore, the number of molecules delivered onto the antenna may be dramatically different. In addition, only female wasps were used in the experiments. 
Apparently females need to find hosts to lay eggs. It would be interesting to know what the males' EAG responses to these compounds are, or whether there are any related studies.\n\nResults: Since the 50th percentile of the EAG responses to blend volatiles was used as a standard for comparison, it would be better to add the EAG response to the blend volatiles in the bar figure. In addition, please specify the EAG response to the control solvent (hexane).\nDiscussion: The authors made a good start in discussing how some compounds in the blend may function as background odors to enhance olfactory contrast. There are many excellent reviews about the possible mechanisms behind this phenomenon, such as Riffell and Hildebrand (2016). The authors may discuss these mechanisms briefly.",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-2725
|
https://f1000research.com/articles/5-2422/v1
|
30 Sep 16
|
{
"type": "Method Article",
"title": "Identifying ELIXIR Core Data Resources",
"authors": [
"Christine Durinx",
"Jo McEntyre",
"Ron Appel",
"Rolf Apweiler",
"Mary Barlow",
"Niklas Blomberg",
"Chuck Cook",
"Elisabeth Gasteiger",
"Jee-Hyub Kim",
"Rodrigo Lopez",
"Nicole Redaschi",
"Heinz Stockinger",
"Daniel Teixeira",
"Alfonso Valencia"
],
"abstract": "The core mission of ELIXIR is to build a stable and sustainable infrastructure for biological information across Europe. At the heart of this are the data resources, tools and services that ELIXIR offers to the life-sciences community, providing stable and sustainable access to biological data. ELIXIR aims to ensure that these resources are available long-term and that the life-cycles of these resources are managed such that they support the scientific needs of the life-sciences, including biological research.\nELIXIR Core Data Resources are defined as a set of European data resources that are of fundamental importance to the wider life-science community and the long-term preservation of biological data. They are complete collections of generic value to life-science, are considered an authority in their field with respect to one or more characteristics, and show high levels of scientific quality and service. Thus, ELIXIR Core Data Resources are of wide applicability and usage.\nThis paper describes the structures, governance and processes that support the identification and evaluation of ELIXIR Core Data Resources. It identifies key indicators which reflect the essence of the definition of an ELIXIR Core Data Resource and support the promotion of excellence in resource development and operation. It describes the specific indicators in more detail and explains their application within ELIXIR’s sustainability strategy and science policy actions, and in capacity building, life-cycle management and technical actions.\nEstablishing the portfolio of ELIXIR Core Data Resources and ELIXIR Services is a key priority for ELIXIR and publicly marks the transition towards a cohesive infrastructure.",
"keywords": [
"ELIXIR",
"Sustainability",
"Data resources",
"Indicators",
"Capacity building",
"Infrastructure",
"Bioinformatics",
"Life sciences"
],
"content": "Introduction\n\nThe core mission of ELIXIR is to build a stable and sustainable infrastructure for biological information across Europe. At its heart are the data resources, tools and services that ELIXIR Nodes offer to the life-science community, providing stable and sustainable access to biological data.\n\nELIXIR resources vary from archives, or deposition databases, which contain research data outputs such as DNA sequences, to highly dynamic knowledge bases which aggregate, process and visualize research data, often adding layers of value through manual curation by highly qualified personnel. ELIXIR aims to ensure that these resources are available long-term and that their life-cycles are managed so that they support the scientific needs of life-sciences and biological research.\n\nOver 500 data resources exist in Europe1. Only a small fraction of these have institutional support and long-term funding commitments. The fact that the mid- and long-term survival of many crucial bioinformatics resources is not guaranteed threatens the foundations of academic and industrial life-science activities, and risks the loss of an immense wealth of biological and medical information, and the associated investments.\n\nIdentifying ways to assess the quality and impact of these crucial data resources will (a) promote excellence in resource development and operation to support capacity building through spreading best practice, and (b) provide a basis for technical and science policy actions required to support the long-term sustainability of the resources that form the backbone of bioinformatics infrastructure (Figure 1).\n\nThe proposal for establishing ELIXIR Services and ELIXIR Core Data Resources was put to the ELIXIR Scientific Advisory Board (SAB) in December 20142. 
This paper describes how to put the proposal into practice and provides guidelines for the implementation of life-cycle management.\n\nELIXIR Nodes define, through their Node applications and Service Delivery Plans or Work Programme, a set of services and data resources that are offered to the research community, the ELIXIR Services. These resources form the backbone of the life-science data infrastructure.\n\nELIXIR Core Data Resources are defined as a set of European data resources that are of fundamental importance to the wider life-science community and the long-term preservation of biological data. They provide complete collections of generic value to life-science, are considered an authority in their field with respect to one or more characteristics, and show high levels of scientific quality and service. Thus, ELIXIR Core Data Resources are of wide applicability and usage.\n\nELIXIR Core Data Resources tend to be well-known within the life-science community and are known to key stakeholders such as funders and journals. ELIXIR Core Data Resources are well maintained with a professional service delivery plan based on well-established life-cycle management processes and well-understood dependencies with related data resources. The ELIXIR Core Data Resources coexist with a broader range of databases with diverse motivations, often specialising in a particular scientific topic.\n\nThe ELIXIR Core Data Resources will form the focal point of technical and science policy actions to drive long-term sustainability. Transparent indicators for the ELIXIR Core Data Resources will also provide strategic intelligence on resource quality and impact, notably to policy makers and funders.\n\nThrough the ELIXIR Scientific Programme and ELIXIR-EXCELERATE grant, the infrastructure will deliver and enable a range of initiatives to support and strengthen the ELIXIR Services and ELIXIR Core Data Resources. 
ELIXIR Services and ELIXIR Core Data Resources will be the most widely used and outwardly visible part of ELIXIR. Establishing the portfolio of these data resources and services is the key priority for ELIXIR and publicly marks the transition towards a cohesive infrastructure. Through the establishment of the ELIXIR Services portfolio, ELIXIR also aims to support and implement best practice in resource management and bring European bioinformatics resources to the next level, building confidence among users.\n\n\nMethods\n\nThis section outlines the framework and stages for life-cycle management of the ELIXIR Services (Table 1). This framework will be implemented through the ELIXIR-EXCELERATE Node Capacity Building and Communities of Practice and Training Programme work packages, strengthening the ELIXIR infrastructure by creating a pathway to excellence.\n\nThe agreed set of indicators for the ELIXIR Core Data Resources sets quality standards that guide and inform the managers of Emerging Services in the development of their Resource towards an ‘ELIXIR Service’ status.\n\nMonitoring of usage trends and the scientific impact of the ELIXIR Services provides information to support their management, contributing to the maintenance of the ELIXIR Service status, or – where appropriate – leading a resource towards the Legacy stage.\n\nIn their report on the role of metrics in research assessment and management in the United Kingdom [3], Wilsdon et al. highlight that the term ‘metric’ may be misunderstood. For example, the number of citations received by a publication is often called a citation metric, yet it does not directly measure the impact of the underlying work.\n\nThey therefore suggest that the term ‘indicator’ is used in contexts in which there is the potential for confusion. An ‘indicator’ is defined as a measurable quantity that substitutes for something less readily measurable and is presumed to associate with it without directly measuring it. 
Citation counts could be used as indicators for the scientific impact of journal articles, even though scientific impacts can occur in ways that do not generate citations. We therefore use the term ‘indicators’ throughout.\n\nIdentification of the ELIXIR Core Data Resources involves a careful evaluation of the multiple facets of the data resources.\n\nIndicators are grouped in five categories:\n\n(1) Scientific focus and quality of science\n\n(2) Community served by the resource\n\n(3) Quality of service\n\n(4) Legal and funding infrastructure, and governance\n\n(5) Impact and translational stories\n\nWhen collecting and interpreting indicators, it is important to articulate the methods used and, where possible, standardise terminology. This facilitates the understanding of the indicators and avoids misinterpretation across different Nodes.\n\n(1) Scientific focus and quality of science\n\nThis includes the inherent scientific quality of the data and of the metadata, and its uniqueness and comprehensiveness. Also included are benchmarking against other resources, and whether the resource is an authority in its field.\n\nA differentiation should be made between archival or deposition databases that receive and archive de novo data sets and well-structured metadata deposited by scientists, and added-value databases or knowledge bases, which are based on the archival data and add substantial value through expert curation, annotation of metadata, sophisticated data processing and/or data integration. The curation effort and outputs linked to a resource are an important measure of its quality.\n\n(2) Community\n\nThis category reflects the size and the measured demand of the communities that are served by the resource: web statistics, user reach, and international use. The community that is served can be the depositors, since some resources are vital for deposition, and/or the end-users. 
The community can be identified and measured in different ways, such as access to URLs, to download servers, and through APIs, and also through the citation of data and data resources in publications.\n\nIn addition, certain resources play a foundational role to derived services and data-driven research. Their data are distributed to many other resources and/or services that rely on their existence.\n\nThe scientific context in which the resource operates should be taken into account. A resource that serves a small scientific community may not have as many users as a resource serving a broader interest, and yet it may reach 90% of the community it supports (coverage) and be crucial for the scientific work of that community.\n\n(3) Quality of service\n\nCertain service levels and reliability can be quantified with specific technical indicators such as: the uptime of the resource; response times; availability and periodic application of meaningful and automated tests; user support and related training; use of community-recognised standards; diversity of data retrieval mechanisms; and other services. Usually, this requires a quality-assurance process during service development and operation. The Accelerating the ELIXIR Training Programme and the ELIXIR Training Platform will support resources delivering training, as well as provide good-practice guidelines and systems for evaluation.\n\n(4) Legal and funding infrastructure, and governance\n\nAs stable research infrastructures, Core Data Resources can demonstrate that they have a sound legal, funding and governance structure.\n\nA viable resource has a suitable legal framework (clear terms of use, licensing, data security, ethical compliance, etc.). Open data is a critical driver for life-sciences research and therefore for ELIXIR, but the policy for data access must be considered in view of resource funding. Longevity can be gauged through institutional support, funding schemes and the duration of financial stability. 
Core Data Resources will have demonstrated transition through different funding sources. A strong governance structure includes an international, independent Scientific Advisory Board (SAB), which allows community input and provides permanent oversight.\n\n(5) Impact and translational stories\n\nImpact evaluation attempts to provide a definitive answer to the question of whether the resource is meeting its objective of fulfilling a specific need of the scientific community. The translational stories relate to the role of the resource in accelerating science and are thus a very important indicator.\n\nIn the UK, HM Treasury’s Magenta Book [4] provides guidelines for policy makers and analysts on how policies and projects should be assessed and reviewed. According to this guidance, the key characteristic of a good impact evaluation is that it recognises that most needs can be met by a range of elements, not just the project in question. To test the extent to which the Resource is responsible for meeting the need, it is necessary to estimate – usually on the basis of a statistical analysis of quantitative data – what would have happened if the Resource had not existed. This is known as the counterfactual. Establishing the counterfactual is not easy, since by definition it cannot be observed. A strong evaluation is successful in isolating the effect of the Resource from all other potential influences, thereby producing a good estimate of the counterfactual.\n\nWhen communicating the impact of ELIXIR’s resources and their role in accelerating science to funders and the public, the indicators should be relevant to the audience. This can be done by presenting them within a context that is readily understandable.\n\nA set of key indicators may be used to make a case for a Core Data Resource. 
Indicators aim to reflect the essence of the definition of an ELIXIR Core Data Resource and support the promotion of excellence in resource development and operation. Box 1 describes the indicators used in each category.\n\nELIXIR Core Data Resources are defined as a set of European data resources that are of fundamental importance to the broad life-science community and the long-term preservation of biological data.\n\nA set of key indicators may be used to make a case for a Core Data Resource. Indicators aim to reflect the essence of the definition of an ELIXIR Core Data Resource and support the promotion of excellence in resource development and operation.\n\nIndicators are grouped in five categories:\n\n(1) Scientific focus and quality of science\n\n(2) Community served by the resource\n\n(3) Quality of service\n\n(4) Legal and funding infrastructure, and governance\n\n(5) Impact and translational stories.\n\nThe indicators recognise the heterogeneous nature of biological data, and the diversity of the supporting data resources, use cases, and communities served. Indicators can be used to measure technical and/or scientific readiness of a resource compared to defined quality standards.\n\nOne of the challenges of data-intensive science is to facilitate knowledge discovery by assisting humans and machines in their discovery of, and access to, scientific data. FAIR is a set of guiding principles to make data Findable, Accessible, Interoperable, and Reusable [6].\n\nThese indicators will be used to demonstrate that ELIXIR Core Data Resources are compatible with the FAIR data principles. The Table below maps indicators to corresponding FAIR criteria.\n\nAs the context of a core resource is critical to understanding its importance, indicators alone are not sufficient. 
Qualitative evidence is needed so that the resource can be reviewed throughout its life-cycle through the expert judgment of the ELIXIR Heads of Nodes and Scientific Advisory Boards.\n\nIndicators and Related Information.\n\nAll elements in sections 1–4 require a response.\n\nQuantitative indicators are underlined.\n\na. Archives vs knowledge bases: is the resource archival (taking submissions) or a knowledge base (added-value)?\n\nb. Scope statement: describe the scientific coverage and comprehensiveness of the resource. For example, all species or a subset of species, families, outputs from a particular experimental method? What is the position of the resource with respect to other similar data resources?\n\nc. International dimension: does the resource have a global footprint? (Demonstrated through, for example, an international consortium delivering the resource, geographical diversity in sources of submissions, global literature curated, international diversity of delivery partners and/or funders)\n\nd. Staff effort: number of FTEs per year for the past 2–3 years\n\ni. Curators\n\n❑ support for submission adherence to metadata requirements? (see also 3d)\n\n❑ support for extraction of information from the scientific literature?\n\nii. Bioinformaticians\n\niii. Technical staff\n\na. Overall usage: what is the usage of the resource for the past 2–3 years?\n\ni. Access via a web browser: number of visits, unique visitors, hits, and page views\n\nii. Access via additional access methods: visits, unique visitors, hits, and downloads (includes FTP downloads and programmatic access)\n\nb. Potential usage: what is the estimated size of the global potential user community?\n\nc. Usage in research as measured through citation in the literature:\n\ni. Citation of a resource name: the number of times the resource name is mentioned in scientific articles per year (in Europe PMC)\n\nii. 
Citation of data of a resource: the number of times accession numbers from the resource are mentioned or cited in research articles (in Europe PMC)\n\niii. Key publications describing the resource list (e.g. publications in NAR Database issue) and the number of citations (in Europe PMC).\n\nd. Dependency of other resources: do other resources have a dependency on the resource described here to provide that service (i.e. what is the reach-through)?\n\na. Identifier use: does the resource provide persistent and unique identifiers?\n\nb. Data throughput: number of entries, depositions (records or bytes ingested per year), records processed, genomes assembled, etc. per year, for past 2–3 years.\n\nc. Technical performance:\n\ni. Uptime: percentage availability per month for a sample of key web pages (or similar) over the past 12 months (e.g. search results, homepage, data record pages).\n\nii. Response times of key web pages.\n\nd. Use of standards: which community-recognised standards are used for metadata and data (e.g. MIAME, JATS, INSDC features, ontologies)? Provide a link to documentation.\n\ne. Links to documentation of provenance: does the resource link to the scientific literature for provenance of facts or biological context?\n\nf. Data availability - access services and formats\n\ni. Data sharing services: list services through which data is shared (e.g. website, APIs, FTP, TripleStore)\n\nii. Data sharing formats: list formats for available data (e.g. plain text, FASTA, XML, RDF, Dublin Core, tsv, JSON)\n\ng. Customer service\n\ni. Helpdesk: does the resource run a helpdesk?\n\nii. User feedback: does the resource seek and incorporate user input into service design decisions?\n\niii. Training: does the resource undertake training?\n\na. Scientific Advisory Board: does the resource have an international, independent Scientific Advisory Board\n\nb. Open Science: does the resource have a legal framework that supports Open Science? e.g. 
open licenses or a public statement of open terms of use.\n\nc. Privacy policy: does the resource have a publicly available privacy policy in which security around personal data and cookies is described?\n\nd. Ethics policy: does the resource have an ethics policy that complies with all relevant international standards and best practices?\n\ne. Sustainable support and funding: demonstrate the past and future funding and/or other commitments that support the resource by the host institution and/or other entities.\n\na. Counterfactual: what would be the impact on the scientific community if the resource had not existed, or were to disappear and not be replaced? Is the resource globally unique? What would the impact on other dependent resources be?\n\nb. Accelerating science: how does the resource accelerate science? For example, does the resource set standards; promote reuse of data or software; promote research efficiencies; extend technical products in other areas?\n\nc. Translational data: are there ‘translational’ figures familiar to the audience that will help them grasp the core nature of the resource?\n\nDefinition of terms used to measure overall resource usage (see 2.a)\n\nVisits: a visit, or session, is a set of requests/interactions by a uniquely identified client within a specific time (typically, 30 minutes). The number of visits/sessions is a measure of website traffic.\n\nUnique Visitors: the number of visitors (unique IP addresses, unique visitors, or visitors) measures how many individuals access a website in a specified time, regardless of how often they visit. It can be determined in different ways. For example, number of: unique IP addresses, user cookies, unique IP addresses + user agent (a ‘user agent’ is the client that is used to access a web site).\n\nHits: can be used to analyse usage trends of a web resource. Hits measure the number of files downloaded when a web page is viewed. 
A web page is typically made up of a number of individual files, such as HTML documents, images, JavaScript files. When a web page is viewed, each file is requested from the server, adding to the hit-count.\n\nPage views (or pages, impressions or URLs): a request to load a single HTML file (web page) of a web site, identified by the URL in a browser. During a single visit, several different pages may be accessed.\n\nDownloads: measures the data downloaded from a resource in volume/bandwidth, often in gigabytes (GB).\n\nBox 2 presents a ‘Case Document’ template for describing a data resource using these indicators.\n\nTaking into account that ‘Not everything that can be counted counts, and not everything that counts can be counted’ (William Bruce Cameron [5]), the indicators will be used to inform a peer-review process described below.\n\nA carefully chosen set of qualitative and quantitative indicators, tailored to bioinformatics resources, will inform identification of the ELIXIR Core Data Resources. The indicators will support, but not supplant, expert judgment.\n\nELIXIR Core Data Resources should each have an international independent Scientific Advisory Board. Such boards are made up of distinguished academic and industry researchers and professionals who conduct scientific and/or technological review, ensuring quality and providing strategic advice to resource managers. Identification of ELIXIR Core Data Resources does not encroach on these governance structures. 
The establishment of Scientific Advisory Boards for Core Resources and Nodes is among the best practices that will be promoted by the Node Capacity Building and Communities of Practice.\n\nIndicators can only be useful if they are underpinned by an open, transparent and coherent collection infrastructure, so clear methods of collection and processing are needed.\n\nUsing the definition of ELIXIR Core Data Resources above, we identified a ‘seed list’ of candidate core resources (Table 2) to inform Core Data Resource indicators.\n\nIdentification of ELIXIR Core Data Resources involves a careful evaluation of the multiple facets of the data resources. This paper describes the overall approach for the selection of Core Data Resources, which will evolve over the coming months as the principles described in this paper are put into practice.\n\nIndicators used are described in Box 1. The relevant ELIXIR Node submits the completed ‘Case Document’ (Box 2) to the ELIXIR Hub.\n\nOnly data resources that are part of an ELIXIR Node Application and/or Service Delivery Plan (in the case of EMBL-EBI, the ‘Work Programme’) can be candidate ELIXIR Core Data Resources.\n\nInitial evaluation of the ELIXIR Core Data Resources takes place annually.\n\nThe ELIXIR Hub checks the Case Document for completeness and verifies whether the proposed Resource is included in the Node's Service Delivery Plan (or Work Programme). The ELIXIR Hub has an advisory role in selecting ELIXIR Core Data Resources. The Hub has no decision-making power and does not evaluate the proposals.\n\nThe ELIXIR Director informs the Heads of Nodes Committee of the candidate ELIXIR Core Data Resources. The Heads of Nodes Committee can request additional information about a candidate Resource from the relevant Head of Node.\n\nThe Heads of Nodes Committee convenes annually in person to review submitted Case Documents and determine the list of ELIXIR Core Data Resources. 
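The completeness check performed by the ELIXIR Hub lends itself to automation. The following Python sketch is purely illustrative: the section keys and field names are invented for this example and are not an ELIXIR specification. It flags required Case Document sections (Box 1 states that all elements in sections 1–4 require a response) that lack a response:

```python
# Illustrative completeness check for a Case Document.
# Section keys are hypothetical, not an ELIXIR specification.

REQUIRED_SECTIONS = [
    "scientific_focus_and_quality",   # (1)
    "community",                      # (2)
    "quality_of_service",             # (3)
    "legal_funding_governance",       # (4)
]

def missing_sections(case_document):
    """Return required sections that are absent or have no response.

    Mirrors the Box 1 rule that all elements in sections 1-4 require
    a response; section 5 (impact and translational stories) is
    narrative and not checked here.
    """
    return [section for section in REQUIRED_SECTIONS
            if not case_document.get(section)]

doc = {
    "scientific_focus_and_quality": {"type": "knowledge base"},
    "community": {"visits_per_year": 1200000},
    "quality_of_service": {"uptime_percent": 99.5},
    # section (4) deliberately omitted
}
print(missing_sections(doc))  # -> ['legal_funding_governance']
```

A real submission pipeline would go further, for example type-checking quantitative indicators or verifying that the Resource appears in the Node's Service Delivery Plan, but even this minimal gate catches incomplete Case Documents before they reach the Heads of Nodes Committee.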
The initial selection is expected to grow with time.\n\nBefore the initial selection of ELIXIR Core Data Resources is confirmed, the ELIXIR Scientific Advisory Board will review the selection process. The ELIXIR Scientific Advisory Board also reviews the portfolio of ELIXIR Core Data Resources and provides ongoing advice on the process for their identification.\n\nAs each ELIXIR Core Data Resource already has a governance structure that includes an independent, international Board, this individual review is not duplicated by the ELIXIR Advisory Board. The outcome is presented to the ELIXIR (governance) Board for information and to ensure that the process has been correctly applied.\n\nThrough the work of the Nodes, Advisory Board and the ELIXIR Hub, standardized data on indicators can also be collected and monitored.\n\nIn collaboration with the Nodes, monitoring data will be automatically collected at the ELIXIR Hub on an ongoing basis and will be regularly transmitted to the Heads of Nodes. Nodes undertake to provide the necessary data to the specification defined.\n\nELIXIR Core Data Resources may be requested to report regularly on certain indicators, and to provide updates on any major changes.\n\nThe Heads of Nodes meeting will review all ELIXIR Core Data Resources every two to three years. However, a minimum of three Heads of Nodes may request an extraordinary evaluation of an individual resource, in particular, on the basis of the monitoring data. If the review raises issues concerning an ELIXIR Core Data Resource, the Heads of Nodes Committee is responsible for determining what action should be taken.\n\n\nDiscussion\n\nELIXIR Core Data Resources form the centre of ELIXIR’s sustainability strategy. The collected key indicators for these bioinformatics resources, and more specifically the impact and translational stories, will be used to make a case to funders. 
This information will in turn help them to communicate the impact that Core Data Resources make.\n\nIn addition, the ELIXIR Core Data Resources could contribute to impact and econometric analysis of life-science data within ELIXIR, as well as events focused on communicating the value of sustainable infrastructure for open data to the European Commission and other stakeholders.\n\nCore Data Resources will act as flagships of excellence. The use of defined indicators, in particular those around user policies and procedures, will be useful as benchmarks of quality and will support capacity building within the ELIXIR Community.\n\nFor example, the ELIXIR Core Data Resources, especially the knowledge bases, can function as ‘concept authorities’ within and beyond ELIXIR, having a clear role in standardising what the community understands by a given biological concept.\n\nCertain additional indicators could be used outside of ELIXIR (e.g. uptime) to consolidate confidence across a wide range of stakeholders. This would require full transparency on how indicators are produced, so as to avoid misunderstanding or misuse.\n\nKey indicators will inform life-cycle management, identifying trends and supporting decision-making around a given resource. This is important not only for the resource teams, but also for identifying Emerging Services that may evolve into ELIXIR Services. As new resources are listed in the ELIXIR Node Service Delivery Plans, indicators and capacity building around the Core Data Resources will support Emerging Services as they mature.\n\nELIXIR Core Data Resources will be prioritised for technical actions and for training. 
ELIXIR Core Data Resources will become the primary resources for ELIXIR Cloud, storage and data distribution efforts within the ELIXIR Nodes network. These actions will be important for supporting the evolution of Emerging Services associated with Core Data Resources.\n\nELIXIR will strive to add value to all ELIXIR resources, including ELIXIR Services, by supporting interactions of the Core Data Resources with one another and with ELIXIR Services and Emerging Services for the benefit of the larger user community. Examples of this are use-case driven enhancement of the interoperability of the ELIXIR Core Data Resources with one another and with other ELIXIR Services, supporting helpdesks to scale national operations, and implementation studies to explore links to national infrastructures and data services.\n\n\nConclusion\n\nELIXIR Core Data Resources form the centre of ELIXIR’s sustainability strategy and science policy actions. The collected key indicators reflect the diversity of these bioinformatics resources, and will be used to make a case to funders. This information in turn will help them to communicate the impact that Core Data Resources make.\n\nKey indicators for Core Data Resources, in particular those around user policies and procedures, will be useful as flagships of excellence and best practice to support capacity building within the ELIXIR Community. The process may be extended to incorporate best practices on interoperability: concept naming, identifier resolution, identifier mappings and data identity provision and protection.\n\nThe key indicators will inform life-cycle management, identifying trends and supporting decision-making around a given resource. This is important not only for the teams managing the resources, but also for the identification of Emerging Services that may evolve into Core Data Resources. 
As new resources are listed on the ELIXIR Node Service Delivery Plans, the indicators and capacity building around the Core Data Resources will support the growth of Emerging Services as they mature.\n\nAs ELIXIR continues to mature, the framework for life-cycle management will be put into practice, supporting the Emerging Services, and strengthening the ELIXIR infrastructure by creating a stairway to excellence.\n\nThe use of both quantitative and qualitative indicators reflects the need to understand the context in which resources operate, providing a clear and rational basis for efforts to strengthen resources and improve capacity building. Establishing the portfolio of ELIXIR Core Data Resources and ELIXIR Services is a key priority for ELIXIR and publicly marks the transition towards a cohesive infrastructure.\n\nA ‘Case Document’ describes a (candidate) Core Data Resource and is based on the indicators introduced in Box 1.\n\nDocument owner: [Insert Name] [email address]\n\na. Archival vs knowledge base: is the resource\n\n• archival (taking submissions)\n\n• knowledge base (added-value)\n\nb. Scope statement: describe the scientific coverage and comprehensiveness of the resource. For example, all species or a subset of species, families, outputs from a particular experimental method? How is the resource positioned with respect to other similar data resources?\n\nc. International dimension: does the resource have a global footprint? (e.g. demonstrated through an international consortium delivering the resource, geographical diversity in the source of the submissions, global literature curated, international diversity of delivery partners and/or funders)\n\nd. Staff effort:\n\nCurators\n\n• support for submission adherence to metadata requirements\n\n• support for extraction of information from the scientific literature\n\nBioinformaticians\n\nTechnical staff\n\na. 
Overall usage - quantitative: what is the usage of the resource for the past 2–3 years?\n\nPlease indicate the method used to derive these indicators.\n\nAccess via a web browser (using web analytics, e.g. Google Analytics)\n\nAccess via a web browser (using log analytics)\n\nData downloads (FTP, APIs, etc.)\n\nb. Potential usage: what is the estimated size of the global potential user community?\n\nc. Usage in research as measured through citation in the literature:\n\nPlease indicate the method used to derive these indicators.\n\nKey publications describing the resource list (e.g. publications in NAR Database issue) and the number of citations (in Europe PMC):\n\nd. Dependency of other resources: do other resources depend on the resource described here to provide that service (i.e. what is the reach-through)? Please list.\n\na. Identifier use: does the resource provide persistent and unique identifiers?\n\nb. Data throughput: number of entries, depositions (records or bytes ingested per year), records processed, genomes assembled, etc. annually for past 2–3 years.\n\nc. Technical performance:\n\ni. Uptime: percentage availability per month for a sample of key web pages (or similar) over the past 12 months (e.g. search results, homepage, data record pages).\n\nii. Response times of key web pages.\n\nd. Use of standards: which community-recognised standards are used for metadata and data (e.g. MIAME, JATS, INSDC features, ontologies)? Provide a link to documentation.\n\ne. Links to documentation of provenance: does the resource link to the scientific literature for provenance of facts or biological context?\n\nf. Data availability – access services and formats:\n\ni. Data sharing services: list services through which data is shared (e.g. website, APIs, FTP, TripleStore)\n\nii. Data sharing formats: list formats data is available in (e.g. text, FASTA, XML, Dublin Core, tsv, JSON)\n\ng. Customer service:\n\ni. Helpdesk: does the resource operate a helpdesk?\n\nii. 
User feedback: does the resource seek and incorporate user input into service design decisions?\n\niii. Training: does the resource undertake training activities?\n\na. Scientific Advisory Board: does the resource have an international, independent Scientific Advisory Board?\n\nb. Open Science: does the resource have a legal framework that supports Open Science? e.g. open licenses or public statement of open terms of use.\n\nc. Privacy policy: does the resource have a publicly available privacy policy in which security around personal data and cookies is described?\n\nd. Ethics policy: does the resource have an ethics policy that complies with all relevant international standards and best practices?\n\ne. Sustainable support and funding: demonstrate the past and future funding commitments and/or other commitments that support the resource by the host institution and/or other entities.\n\na. Counterfactual: what would the impact on the scientific community be if the resource had not existed or were to disappear and not be replaced? Is the resource globally unique? What would the impact on other dependent resources be?\n\nb. Accelerating science: how does the resource accelerate science? For example, does the resource set standards; promote reuse of data or software; promote research efficiencies; extend technical products in other areas?\n\nc. Translational data: are there ‘translational’ figures that are familiar to the audience that will help them grasp the core nature of the resource?",
"appendix": "Author contributions\n\n\n\nCD and JM wrote, edited and finalised the manuscript; RA, RA, NB, and AV provided many valuable insights and intellectual contributions; JHK provided text-mining expertise regarding indicators found in the literature; MB and CC helped shape the indicators and processes from an impact assessment perspective; EG, RL, NR, HS, and DT were primarily responsible for insights around technical indicators but also made many valuable contributions to constructing the document throughout.\n\n\nCompeting interests\n\n\n\nNo competing interests were disclosed.\n\n\nGrant information\n\nThis work was done in the context of Work Package 3 of the ELIXIR-EXCELERATE project that is funded by the European Commission within the Research Infrastructures programme of Horizon 2020, grant number 676559.\n\nThe funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\n\n\nAcknowledgements\n\nWe thank all the colleagues who have contributed over many years to arriving at this position and state of clarity since the ELIXIR Preparatory Phase. You know who you are.…\n\n\nReferences\n\nThe ELIXIR Strategy for Data Resources Draft Report from Workpackage 2 The ELIXIR Preparatory Phase. Reference Source\n\nELIXIR Scientific Advisory Board paper. ELIXIRSAB/2014/4, 2014.\n\nWilsdon J, Allen L, Belfiore E, et al.: The Metric Tide: Report of the Independent Review of the Role of Metrics in Research Assessment and Management. London: Higher Education Funding Council for England (HEFCE), 2015. Publisher Full Text\n\nThe Magenta Book – Guidance for evaluation. London: HM Treasury, 2011. Reference Source\n\nCameron WB: Informal Sociology: A casual introduction to sociological thinking. New York: Random House, 1963; 13. Reference Source\n\nWilkinson MD, Dumontier M, Aalbersberg IJ, et al.: The FAIR Guiding Principles for scientific data management and stewardship. Sci Data. 2016; 3: 160018. 
PubMed Abstract | Publisher Full Text | Free Full Text"
}
|
[
{
"id": "16871",
"date": "10 Oct 2016",
"name": "Helen M. Berman",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nAlongside their report, reviewers assign a status to the article:\n\nApproved The paper is scientifically sound in its current form and only minor, if any, improvements are suggested\n\nApproved with reservations\nA number of small changes, sometimes more significant revisions are required to address specific details and improve the paper's academic merit.\n\nNot approved Fundamental flaws in the paper seriously undermine the findings and conclusions\n\nThis is a very well written article describing how ELIXIR Core Data Resources are identified and evaluated. The figures, tables, and illustrative boxes have been carefully designed and add to the clearly written text. This paper should be required reading for every panel and funding agency tasked with evaluating these resources.",
"responses": []
},
{
"id": "16722",
"date": "18 Oct 2016",
"name": "Maryann E. Martone",
"expertise": [],
"suggestion": "Approved",
"report": "Approved\n\nI understand that this article is included in the Elixir report collection and it does, in fact, read like a report. If one is very familiar with Elixir, it probably makes sense, but if one isn't, then it is a bit confusing to read.\nBut it contains valuable information that I think would be generally useful to everyone trying to develop methods for evaluating data resources. In fact, an RFI put out by the US National Institutes of Health on repository metrics just closed today. So if the authors are willing, I think that providing some modifications would make the report more readable to a general audience.\n\nIt would be nice if the first paragraph introduced Elixir a bit more and explained its structure. This could be done through either a diagram or a reference. But the Hub idea is critical to the governance of Core Data Resources proposed and it would be nice to make the structure clear.\n\nThe tense of the article is a bit unclear. Are there already approved Core Data Resources that have been evaluated by the criteria outlined? At times, it seems that way and other times, it seems like the process has not yet been implemented. In Table 2, some examples that are considered \"core\" are given. But in the text, it says that the resources in table 2 were identified as a \"seed list\" to inform Core Data Resources. So it implies that they haven't yet gone through the process. 
I think stating up front where you are in the process would make it less confusing.\n\nMethods section: The term \"indicator\" is first used in second paragraph of the first section of methods section, but is not defined until the next section. It should be defined earlier.\n\n\"Legacy stage\" is used in the 3rd paragraph. Legacy has a meaning in data-already existing-and so I think some definition is required here. It is, in fact, defined in Table 1, so a reference to the Table would be sufficient.\n\nThe indicators are listed in the main text, again in Box 1 and again in Fig 2. Seems like a bit of overkill. Also, the purpose of the indicators is defined in multiple places and it is a bit repetitive, e.g., the explanation given under the section \"Detailed description of the indicators and related methodology\" really isn't necessary.\n\nFAIR is introduced in Box 1, but not in the text at all. It seems like it should be mentioned in the introduction to indicators in the methods as it is used as a set of criteria throughout Box 1.\n\nIs there a timeline as to when the first core resources will be approved (if they haven't been already-see pt 2)",
"responses": []
}
] | 1
|
https://f1000research.com/articles/5-2422
|
https://f1000research.com/articles/5-2522/v1
|
14 Oct 16
|
{
"type": "Software Tool Article",
"title": "Simple and adaptable R implementation of WHO/ISH cardiovascular risk charts for all epidemiological subregions of the world",
"authors": [
"Dylan Collins",
"Joseph Lee",
"Niklas Bobrovitz",
"Constantinos Koshiaris",
"Alison Ward",
"Carl J. Heneghan"
],
"abstract": "The World Health Organisation and International Society of Hypertension (WHO/ISH) cardiovascular disease (CVD) risk assessment charts have been implemented in many low- and middle-income countries as part of the WHO Package of Essential Non-Communicable Disease (PEN) Interventions for Primary Health Care in Low-Resource settings. Evaluation of the WHO/ISH cardiovascular risk charts and their use is a key priority and since they only exist in paper or PDF formats, we developed a simple R implementation of the charts for all epidemiological subregions of the world. The main strengths of this implementation are that it is built in a free, open-source, coding language with simple syntax, can be modified by the user, and can be used with a standard computer.",
"keywords": [
"WHO/ISH",
"Cardiovascular Risk Charts",
"Risk Score",
"R"
],
"content": "Introduction\n\nCardiovascular disease (CVD) is the leading cause of death worldwide, including in many low- and middle-income countries (LMIC)1,2. Preventing CVD is therefore a worldwide priority and the World Health Organisation (WHO) is coordinating a global strategy for LMIC to systematically prevent CVD in primary care3.\n\nIn 2007 the WHO and the International Society of Hypertension (ISH) published the WHO/ISH CVD risk charts for all WHO epidemiological subregions of the world4. These charts are to be used as part of the WHO’s Package of Essential NCD (PEN) Interventions for Primary Health Care in Low-Resource Settings in jurisdictions that do not have their own population-derived risk assessment algorithms. While these charts are a good resource for many health systems, little is known about their validity5. Therefore, it is important that jurisdictions that implement these charts conduct operational research and attempt to validate and optimise them for their setting.\n\nTwo paper-based versions of WHO/ISH charts are available for each subregion: one that requires measured total cholesterol and one that does not. The latter was made available for use in settings with limited access to laboratory testing or where the cost of cholesterol testing is prohibitive. Both charts require information on age, gender, diabetes status, smoking status, and systolic blood pressure to stratify people into one of five risk categories of 10-year risk of a fatal or non-fatal CVD event. Further instructions for their use have been published3.\n\nThrough our experience collaborating with LMIC with the implementation of WHO PEN, we identified a common need for an open-source tool to facilitate the implementation of WHO/ISH risk charts and operational research of WHO PEN at a population level. 
We therefore developed an open source tool in R (https://www.r-project.org/), which we describe here and make available to researchers in LMIC.\n\n\nMethods\n\nWe extracted all versions of the paper-based WHO/ISH CVD risk charts by hand into a standardized Microsoft Excel template, independently and in duplicate. We used RStudio (version 0.99.489) to compare the duplicate extractions and to calculate Cohen’s kappa coefficient for inter-rater reliability, using the irr package (version 0.84). Discrepancies were reviewed by the same two extractors and resolved by referring to the original paper chart.\n\nOne author wrote the initial code for the WHO/ISH risk function in R (DC). This was reviewed and adapted by a second author experienced in the R language (CK). Two additional authors (JL, NB), new to the R language, reviewed the code to ensure the syntax was simple and comprehensible.\n\nA MatLab implementation of WHO/ISH risk charts for epidemiological subregion SEAR D had been previously reported6. We used Octave (www.gnu.org/software/octave/) version 8.3.2 to calculate the SEAR D WHO/ISH risk score for every possible combination of risk factors using the previously reported MatLab implementation, and compared the percent agreement to the risk scores generated by our R implementation.\n\n\nResults\n\nAll WHO/ISH risk charts were extracted by hand into a single comma delimited file (Dataset 1). Our function is dependent on this file. Cohen’s kappa for initial agreement between the independent extractors was 0.97, indicating excellent agreement. All remaining discrepancies were resolved by consensus.\n\nWe developed a simple function, named WHO_ISH_Risk(), that, when loaded in the R workspace, will calculate the WHO/ISH risk score for any epidemiological subregion (Dataset 2) (Figure 1). 
We intentionally used simple syntax such that users with a beginner’s level of experience with R can adapt the code as needed.\n\nThe WHO_ISH_Risk function requires seven parameters: age, gender, smoking status, diabetes status, systolic blood pressure, total cholesterol, and the appropriate WHO epidemiological subregion. These parameters and their codes are summarised in Table 1. The function format in the workspace is: WHO_ISH_Risk(age, gdr, smk, sbp, dm, chl, subregion). No default values are specified for any parameter.\n\nThe WHO_ISH_Risk() function uses the base package in R and requires no package dependencies. Once the function is loaded in the workspace, it requires access to the comma delimited file named “WHO_ISH_Scores.csv” (Dataset 1). The user needs to ensure that this file is accessible in the working directory of R before running the WHO_ISH_Risk() function. We have included a worked example of how to use the function in Dataset 2.\n\nInternally, the WHO_ISH_Risk() function creates a data frame of the risk factor values passed to it (Figure 1). It then categorises the continuous parameters age, systolic blood pressure, and total cholesterol. Age and systolic blood pressure were categorised according to WHO guidance7. Total cholesterol was categorised according to common clinical practice, rounding up from 0.5 to the nearest integer. The categorisation boundaries can be adapted by the user as needed.\n\nA unique identification code is generated corresponding to the combinations of risk factors for each individual. This code is matched to a reference code from the “WHO_ISH_Scores.csv” file, which the function automatically calls into the workspace (Dataset 1). 
Internally, the function stores the risk scores in a data frame that includes the risk factors, and ultimately returns a vector containing the risk scores.\n\nComparison with the published MatLab implementation of the SEAR D risk charts6 showed 100% agreement with our R implementation, for all possible combinations of risk factors.\n\n\nDiscussion\n\nTo our knowledge, this is the first publicly available R implementation of WHO/ISH CVD risk charts for all WHO epidemiological subregions of the world. Our implementation may be used for analysis of cardiovascular risk when electronic patient data is available. The code will automatically apply WHO/ISH risk scores to patients based on age, gender, systolic blood pressure, smoking status, diabetes status, total cholesterol, and epidemiological subregion. This code could be used, for example, during a pilot implementation of WHO PEN to audit the accuracy of risk assessment by comparing documented risk scores to actual risk scores calculated using this tool. We have provided a complete worked example in the data files. While more sophisticated implementations are possible, we intentionally sought to use simple syntax in the base package to allow for easy interpretation and use by novice R users on standard computers.\n\nAlthough we modelled the function based on WHO PEN guidance for risk assessment, we recognise that some users may wish to change the boundaries of certain risk factor parameters. While WHO PEN guidance specifies the range of systolic blood pressure values for each systolic blood pressure category, it provides no such guidance for categorising total cholesterol. Based on our opinion and clinical experience, and on a previously published implementation in MatLab6, we chose to categorise total cholesterol by rounding up at 0.5 to the next integer. These boundaries could be changed by users to adapt to local practice. 
We caution against changing the boundaries beyond recommended guidance.\n\nThe “WHO_ISH_Scores.csv” file can be adapted by the user if desired. Each row of the file represents one unique combination of risk factors. The first six columns specify the risk factor values, and the last 14 columns specify the corresponding risk category for a given subregion. These risk categories can be changed by the user, but in their current state they represent the WHO/ISH risk charts as published.\n\n\nConclusion\n\nWe created a simple R implementation of WHO/ISH CVD risk charts for all WHO epidemiological subregions of the world, requiring only the base R package. It has one dependency file from which it draws the WHO/ISH risk scores based on a combination of seven parameters: age, gender, systolic blood pressure, smoking status, diabetes status, total cholesterol, and epidemiological subregion. This tool can be used, and adapted, by policy-makers and researchers involved in the implementation and evaluation of WHO/ISH CVD risk charts.\n\n\nData and software availability\n\nF1000Research: Dataset 1. CSV file required for function (file name= “WHO_ISH_Scores.csv”), 10.5256/f1000research.9742.d1383098\n\nF1000Research: Dataset 2. Code for the WHO_ISH_Risk() function and worked example (file name = “Worked_Example.rtf”), 10.5256/f1000research.9742.d1383109",
"appendix": "Author contributions\n\n\n\nDC conceived of the idea, extracted data, and wrote the initial code and the manuscript. JL and NB extracted data and contributed to writing the manuscript. CK reviewed and adapted the code and contributed to writing the manuscript. AW and CH reviewed and contributed to writing the manuscript.\n\n\nCompeting interests\n\n\n\nDC has received payment from the WHO for consulting work. AW and CH have received expenses and grant income from the WHO for projects related to CVD and Self Care in NCDs, and direct a WHO Collaborating Centre. CH also receives funding from the National Institute for Health Research (NIHR) School of Primary Care Research. JL, CK, and NB declare no competing interests.\n\n\nGrant information\n\nThe WHO Collaborating Centre for Self Care paid for the open access publishing fees. No other funding was provided for this work.\n\n\nReferences\n\nMathers CD, Loncar D: Projections of global mortality and burden of disease from 2002 to 2030. PLoS Med. 2006; 3(11): e442. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWHO: Noncommunicable Diseases, Fact Sheet. 2015. Reference Source\n\nWHO: Package of essential noncommunicable (PEN) disease interventions for primary health care in low-resource settings. Geneva, Switzerland; 2010. Reference Source\n\nMendis S, Lindholm LH, Mancia G, et al.: World Health Organization (WHO) and International Society of Hypertension (ISH) risk prediction charts: assessment of cardiovascular risk for prevention and control of cardiovascular disease in low and middle-income countries. J Hypertens. 2007; 25(8): 1578–82. PubMed Abstract | Publisher Full Text\n\nCooney MT, Dudina A, D'Agostino R, et al.: Cardiovascular risk-estimation systems in primary prevention: do they differ? Do they make a difference? Can we see the future? Circulation. 2010; 122(3): 300–10. 
PubMed Abstract | Publisher Full Text\n\nRaghu A, Praveen D, Peiris D, et al.: Implications of Cardiovascular Disease Risk Assessment Using the WHO/ISH Risk Prediction Charts in Rural India. PLoS One. 2015; 10(8): e0133618. PubMed Abstract | Publisher Full Text | Free Full Text\n\nWHO/ISH: World Health organization/International Society of Hypertension (WHO/ISH) risk prediction charts. 2007. Reference Source\n\nCollins D, Lee J, Bobrovitz N, et al.: Dataset 1 in: Simple and adaptable R implementation of WHO/ISH cardiovascular risk charts for all epidemiological subregions of the world. F1000Research. 2016. Data Source\n\nCollins D, Lee J, Bobrovitz N, et al.: Dataset 2 in: Simple and adaptable R implementation of WHO/ISH cardiovascular risk charts for all epidemiological subregions of the world. F1000Research. 2016. Data Source"
}
|
[
{
"id": "17007",
"date": "20 Oct 2016",
"name": "Raivo Kolde",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThe paper describes a software tool for calculation of WHO/ISH cardiovascular risk scores for different epidemiological subregions of the world. This could be a useful piece of software for researchers working with cardiovascular epidemiological data and make the practices for calculating such scores more reproducible over multiple studies. However, in the present form I found the paper to be severely lacking in both implementation and substance.\nFirst, the code should not be distributed as an RTF file. R packages are a well-accepted standard for reproducibly distributing R code that also forces authors to provide some rudimentary documentation and examples. As an additional benefit, hosting the package in a repository like CRAN, Bioconductor or even Github, makes it easy to install and update the program. There is no reason not to follow this convention for this particular project; thus, the code should be converted into package format and hosted in one of the above-mentioned services to be of any use to the community.\n\nThe argument of the code in the present form being simply adjustable is not valid. One has to spend quite a bit of time understanding what is happening in the code, especially when not an R expert. For the code to fulfill its purpose it has to be rewritten in a way that adjustable parts are presented to the user as function parameters (with reasonable defaults). 
This way users have to know even less R to use the code effectively.\n\nAlso, having a figure depicting the code is uncommon to say the least and not useful in any way. It would make much more sense to include a worked out example to the text, where you would describe a common use case of your R package.\nTo enhance usefulness of the package and make it more universally applicable it would be good to see some more risk prediction models like Framingham and SCORE added to the software. This would make it easy to compare the validity of different scores over a study population.",
"responses": [
{
"c_id": "2542",
"date": "08 Mar 2017",
"name": "Dylan Collins",
"role": "Author Response",
"response": "Dear Raivo Kolde, Thank you for your insightful review and comments. In response we have made the following changes: As suggested, we created an R package which can be downloaded directly from github. We no longer advocate for the code to be adjustable and through the development of a package have made it simpler to use. We removed the figure of the code, and have added a worked example in the text. The main difference between scores like Framingham, SCORE, and QRISK2 is that they have underlying cox model equations which can be implemented in R, whereas WHO/ISH risk charts do not – hence the rationale for the development of this package. While the addition of further risk scores was outside the scope of this work, we welcome collaborators who might want to contribute in this respect. Every Best, Dylan Collins"
}
]
},
{
"id": "17006",
"date": "25 Oct 2016",
"name": "Scott A. Chamberlain",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThe authors describe some R code for helping to calculate cardiovascular risk scores. I think the code needs significant work.\n\nThe code is in an .rtf file. This is very bad software practice. The authors should at the very very least put the code in a file with .R extension.\n\nIdeally the code should be put in an R package. It is relatively straightforward to make an R package these days. See http://r-pkgs.had.co.nz/ for help. This makes it easy to add documentation (of which there is currently essentially none for the current function included in the manuscript), tests, etc. Looks like the authors have a Github repository (https://github.com/DylanRJCollins/WHO_ISH_R_Implementation) - this could be made into an R package.\n\nAnother benefit of making a package, is that R packages have a way of including datasets in them. The csv file is small enough that you can include it in the package if that's appropriate for the use case here. If the `WHO_ISH_Scores.csv` dataset is not likely to change, or not likely to change very often, they can include it in the package.\n\nThe dataset `WHO_ISH_Scores.csv` has two columns that are probably row names that should be removed. If they aren't data, remove them.\n\nThe dataset `WHO_ISH_Scores.csv`: the data is not in a tidy format that is easy to work with. There shouldn't be numeric data combined with symbols (e.g. >=40%). 
If possible, I urge authors to find a way to make these into numeric columns while retaining the same information. However, it may be that it's too difficult to separate numeric values from the greater than / less than symbols etc.\n\nR packages should be cited by reference. e.g. \"using the irr package (version 0.84)\". It is good they cite the version, but put a reference in your references. Run `citation(\"irr\")` in an R session to get a reference for it.\n\n\"We used RStudio (version 0.99.489) to compare the duplicate extractions ...\": Rstudio is just an IDE. Say that you used R, not RStudio. It's fine to cite RStudio, but also cite R. Run `citation()` in an R session to get the citation for R.\n\nOctave is an open source language, and thus the authors should share the Octave code.\n\n\"[...] we intentionally sought to use simple syntax in the base package\" - the authors' script is in fact not a package, but if they do make a package this wording is good.\n\nIn the Conclusion: \"We created a simple R implementation of WHO/ISH CVD risk charts for all WHO epidemiological subregions of the world, requiring only the base R package.\" - refer to this as \"base R\", not \"the base R package\".\n\nIf the dataset \"WHO_ISH_Scores.csv\" can be modified, thus different versions of it may be used by the user, it makes more sense to pass in the dataset as a parameter, instead of hard coding reading in the data in the function.\n\nI urge authors to include a license with their R package (to be created) - and to submit the package to CRAN so it's easy for all to install.\n\nCode comments:\nI'm not sure the authors meant to do this, but the use of single ampersand in the `ifelse` statements (e.g., `ifelse(df$age > 17 & df$age < 50, 40, df$age)`) means that they get a vector of logicals (e.g., `df$age > 17 & df$age < 50` evaluates to `TRUE FALSE FALSE FALSE FALSE`) using their example, when I think what they want is a single logical return value. 
If so, use double ampersand instead: `&&` instead of `&`.\n\nI'm pretty sure, but have not tested thoroughly, that the long series of if statements in the section \"Match the look up value with the reference value\" can be replaced with just this: `ref[[subregion]][match(df$luv, ref$refv)]`.",
"responses": [
{
"c_id": "2541",
"date": "08 Mar 2017",
"name": "Dylan Collins",
"role": "Author Response",
"response": "Dear Scott A. Chamberlain, Thank you for your thoughtful and considered feedback. We have responded in the updated version as detailed below. We have created an R package which can be downloaded from github. We provide the code from the R package as a .R file. We have included the “WHO_ISH_Scores.csv” data file in the R package. We tidied the “WHO_ISH_Scores.csv” files but have kept the risk scores as character strings to retain a literal implementation of the risk charts, as described in the updated manuscript. We have cited R packages and R as suggested. The MatLab code used in Octave has previously been published as cited in the manuscript and is freely available. We no longer suggest users to change the “WHO_ISH_Scores.csv” files. We have included a GPL-3 license for the R Package. Every Best, Dylan Collins"
}
]
},
{
"id": "17004",
"date": "28 Oct 2016",
"name": "Maria Suarez-Diez",
"expertise": [],
"suggestion": "Approved With Reservations",
"report": "Approved With Reservations\n\nThe authors present an R script and an additional datafile to calculate cardiovascular risk using the WHO/ISH risk assessment charts. Previous implementations required the use of MatLab and I believe an R implementation can be a useful addition to the field.\nThe authors intended to design a code requiring little R expertise. I think the authors have only partly succeeded in accomplishing this goal and the code still needs to be improved.\nAs mentioned by the other reviewers, RTF is far from an optimal format to distribute the code. The authors should present their code as a package hosted in some repository. Currently the RTF file contains the function definition, data loading and working example in a single view. When creating the package the authors should clearly separate the code from the documentation and provide a working example. If the authors intend this tool to be used and possibly modified by users with little R expertise then a separate file with detailed instructions should be provided.\nThe authors state that users can modify the WHO_ISH_Scores.csv file to represent other risk categories. That would require explanation of the content of the file. Although the abbreviations used, such as gdr for gender, might seem obvious, they should be explained. Also when downloading the file from F1000, the file name is changed and appears as “5ae9107XXX..._WHO_ISH_Scores.csv” so that the reading fails. 
This could be fixed by including the file as a dataset in the corresponding package.\nWhen data from multiple patients are provided (as in the example) the output is a factor. I think it would help the users match input and output values if some tabular format or some patient identifier is provided in the output.\nIf any of the input parameters are missing (NA values) the code outputs “NA” but no information regarding the missing values is provided. It might be helpful if the code were to report which value was actually missing.\nFig. 1 shows the code and contains mainly information on how some continuous variables (age, cholesterol and systolic blood pressure) are discretized. I think a more efficient way of conveying this information is by including it in Table 1.",
"responses": [
{
"c_id": "2540",
"date": "08 Mar 2017",
"name": "Dylan Collins",
"role": "Author Response",
"response": "Dear Maria Suarez-Diez, Thank you for your valuable comments. In response we have made the following changes: We created an R package including all documentation files which can be downloaded from github. We no longer suggest users to change the “WHO_ISH_Scores.csv” files and we have updated the file name hosted by F1000 to be “WHO_ISH_Scores.csv”. This file, as suggested, is internal to the package and, while we report it here, it is not necessary to download. We have added a series of warning messages to help explain outputs of “NA” and to help catch errors in the input values (e.g. out of range parameters). We have described how continuous variables are discretized in the main text. Every Best, Dylan Collins"
}
]
}
] | 1
|
https://f1000research.com/articles/5-2522
|